Emotion Extraction from Video Fragments using Gaze Tracking and AdaBoost Classifier
Majlesi Journal of Electrical Engineering
Article 10, Volume 13, Issue 2, September 2019, Pages 67-81 | Full Text (1.29 MB)
Article Type: Review Article
Authors
Atiyeh Yaghoubiy*1; Seyed Kamaledin Setarehdan2; Keivan Maghooli3
1 Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
2 Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
3 Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
Abstract
Human-computer communication has emerged as an important aspect of interacting with the surrounding environment. If a computer could sense a person's emotions, establishing a connection between the two would be much easier; the extraction of emotions is therefore a key topic in human-computer communication. Various biological signals are used to extract emotions, and one simple and high-precision way to acquire such data is to track the eyes and the point of gaze on the screen. In this paper, eye tracking is used to extract emotions for human-computer communication. From the data acquired from the participants and the videos, several characteristics are extracted, including focus areas, pupil diameter, statistical features, and video features. In addition, combining these features is proposed in order to improve the results. Afterwards, for each of the two distinct outputs, Arousal and Valence, a subset of features is selected separately by means of a linear combination and dimensionality reduction. Finally, to classify the Arousal and Valence axes, each ranging from 0 to 9 and divided into three equal parts, special variants of the KNN and SVM methods combined with an AdaBoost classifier are used. Numerical studies show an average extraction accuracy of 68.66% for the Arousal axis and 74.66% for the Valence axis, an overall improvement of 5.5% over previous works.
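To make the classification stage described above concrete, the following is a minimal sketch of three-bin classification of one affect axis with dimensionality reduction and AdaBoost. It is not the authors' implementation: the synthetic feature matrix, the PCA step, and scikit-learn's stock decision-stump base learner are illustrative assumptions (the paper pairs AdaBoost with special KNN and SVM variants, which this sketch does not reproduce).

```python
# Minimal sketch (NOT the authors' code): synthetic gaze-style feature vectors
# are reduced with PCA and classified into three bins per affect axis with an
# AdaBoost ensemble. Decision stumps stand in for the paper's KNN/SVM variants.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))           # placeholder gaze/pupil/video features
ratings = rng.uniform(0, 9, size=120)    # placeholder 0-9 Arousal self-ratings

# The 0-9 rating axis is split into three equal parts: [0, 3), [3, 6), [6, 9].
y = np.digitize(ratings, bins=[3.0, 6.0])

clf = make_pipeline(
    PCA(n_components=10),                                 # dimensionality reduction
    AdaBoostClassifier(n_estimators=100, random_state=0),  # boosted weak learners
)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real features in place of the random placeholders, the same pipeline would be trained once per axis (Arousal and Valence) on its separately selected feature subset.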
Keywords
Emotion Extraction; Support Vector Machine; K-Nearest Neighbor; Combined Classifiers; Biological Signals; Eye and Gaze Tracking