Peer-Reviewed

Physiological State Can Help Predict the Perceived Emotion of Music: Evidence from ECG and EDA Signals

Received: 12 August 2021     Accepted: 11 September 2021     Published: 23 September 2021
Abstract

Emotion is the soul of music: because the pursuit of emotional experience is the main motivation for music listening, emotion information is widely used in music retrieval and recommendation systems. In the field of music emotion recognition, computer scientists have developed computational models that automatically detect the perceived emotion of music, but these models ignore differences between listeners. To provide users with accurate music emotion information, this study investigated the effects of physiological features on personalized music emotion recognition (PMER) models, which automatically identify an individual's perceived emotion of music. Using machine learning methods, we modeled the relations among audio features, physiological features, and music emotions. First, computational modeling showed that physiological features extracted from electrocardiogram (ECG) and electrodermal activity (EDA) signals can predict the perceived emotion of music for some individuals. Second, we compared the performance of physiological-feature-based perception and feeling models and observed substantial individual differences. In addition, we found that the performance of the perception model and the feeling model was correlated when predicting happy, relaxed, and sad emotions. Finally, adding physiological features to the audio-based PMER model improved prediction for some individuals. Our work investigated the relationship between physiological state and the perceived emotion of music, constructed models with practical value, and provided a reference for optimizing PMER systems.
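To make the modeling setup concrete, the sketch below shows one plausible form of the pipeline the abstract describes: for a single listener, train a classifier on audio features alone, on physiological (ECG/EDA) features alone, and on the two fused. This is a minimal illustration under stated assumptions, not the authors' implementation; the feature dimensions, the SVM choice, and the data itself are synthetic placeholders (Python with scikit-learn, a toolkit the paper's reference list also includes).

    # Minimal PMER sketch (assumed setup, not the paper's exact method).
    # For one listener: predict the perceived emotion of each music excerpt
    # from audio features, ECG/EDA features, or both combined.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_excerpts = 60

    # Synthetic placeholders: in practice, audio features might come from
    # librosa (e.g., MFCC means) and physiological features from ECG/EDA
    # statistics (e.g., mean heart rate, skin-conductance response counts).
    audio_feats = rng.normal(size=(n_excerpts, 20))
    physio_feats = rng.normal(size=(n_excerpts, 6))
    labels = rng.integers(0, 2, size=n_excerpts)  # perceived emotion (binary)

    def evaluate(features, name):
        # 5-fold cross-validated accuracy of a standardized RBF-SVM.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        scores = cross_val_score(model, features, labels, cv=5)
        print(f"{name}: {scores.mean():.2f} (+/- {scores.std():.2f})")

    evaluate(audio_feats, "audio only")               # audio-based PMER model
    evaluate(physio_feats, "ECG/EDA only")            # physiological model
    evaluate(np.hstack([audio_feats, physio_feats]), "audio + ECG/EDA")

On real data, comparing the fused score against the audio-only score per listener mirrors the paper's question of whether physiological features improve prediction for some individuals.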

Published in American Journal of Life Sciences (Volume 9, Issue 5)
DOI 10.11648/j.ajls.20210905.12
Page(s) 105-119
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Music Emotion Recognition, Physiological Signal Processing, Machine Learning, Perceived Emotion

Cite This Article
  • APA Style

    Xu, L., Wang, J., Wen, X., Sun, Z., Sun, R., Xu, L., & Qian, X. (2021). Physiological state can help predict the perceived emotion of music: Evidence from ECG and EDA signals. American Journal of Life Sciences, 9(5), 105-119. https://doi.org/10.11648/j.ajls.20210905.12


    ACS Style

    Xu, L.; Wang, J.; Wen, X.; Sun, Z.; Sun, R.; Xu, L.; Qian, X. Physiological State Can Help Predict the Perceived Emotion of Music: Evidence from ECG and EDA Signals. Am. J. Life Sci. 2021, 9 (5), 105-119. doi: 10.11648/j.ajls.20210905.12


    AMA Style

    Xu L, Wang J, Wen X, Sun Z, Sun R, Xu L, Qian X. Physiological State Can Help Predict the Perceived Emotion of Music: Evidence from ECG and EDA Signals. Am J Life Sci. 2021;9(5):105-119. doi: 10.11648/j.ajls.20210905.12


  • @article{10.11648/j.ajls.20210905.12,
      author = {Liang Xu and Jie Wang and Xin Wen and Zaoyi Sun and Rui Sun and Liuchang Xu and Xiuying Qian},
      title = {Physiological State Can Help Predict the Perceived Emotion of Music: Evidence from ECG and EDA Signals},
      journal = {American Journal of Life Sciences},
      volume = {9},
      number = {5},
      pages = {105-119},
      doi = {10.11648/j.ajls.20210905.12},
      url = {https://doi.org/10.11648/j.ajls.20210905.12},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajls.20210905.12},
     year = {2021}
    }
    


  • TY  - JOUR
    T1  - Physiological State Can Help Predict the Perceived Emotion of Music: Evidence from ECG and EDA Signals
    AU  - Liang Xu
    AU  - Jie Wang
    AU  - Xin Wen
    AU  - Zaoyi Sun
    AU  - Rui Sun
    AU  - Liuchang Xu
    AU  - Xiuying Qian
    Y1  - 2021/09/23
    PY  - 2021
    N1  - https://doi.org/10.11648/j.ajls.20210905.12
    DO  - 10.11648/j.ajls.20210905.12
    T2  - American Journal of Life Sciences
    JF  - American Journal of Life Sciences
    JO  - American Journal of Life Sciences
    SP  - 105
    EP  - 119
    PB  - Science Publishing Group
    SN  - 2328-5737
    UR  - https://doi.org/10.11648/j.ajls.20210905.12
    VL  - 9
    IS  - 5
    ER  - 


Author Information
  • Liang Xu: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China

  • Jie Wang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China

  • Xin Wen: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China

  • Zaoyi Sun: College of Education, Zhejiang University of Technology, Hangzhou, China

  • Rui Sun: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China

  • Liuchang Xu: College of Mathematics and Computer Science, Zhejiang A&F University, Hangzhou, China

  • Xiuying Qian: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
