New Approach of Estimating Sarcasm Based on the Percentage of Happiness of Facial Expression Using Fuzzy Inference System
Detecting micro-expressions is a high priority in many settings because, despite a person's best efforts to conceal them, these expressions reveal the genuine feelings beneath the surface. This study proposes a novel approach to estimating sarcasm using a fuzzy inference system, based on analysing a person's facial expression to evaluate their degree of happiness. Five separate facial regions are distinguished, namely both eyebrows, both eyes, and the lips, and precise active distances are computed from the outline points of each region. Within the proposed fuzzy inference system, membership functions are first applied to the computed distances to obtain a representation of the individual's degree of happiness. The outputs of these membership functions are then fed into a further membership function, from which the sarcasm percentage is estimated. The proposed method is validated on facial images from the standard SMIC, SAMM, and CAS(ME)² datasets, confirming its effectiveness.
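The two-stage pipeline described above (distances → happiness membership → sarcasm percentage) can be sketched as follows. This is a minimal illustration only: the abstract does not disclose the actual membership functions or their parameters, so the triangular shapes, the normalized distance ranges, and the min-style rule aggregation used here are all assumptions, not the authors' implementation.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def happiness_degree(brow_dist, eye_dist, lip_dist):
    """Stage 1: fuzzify normalized active distances (hypothetical fuzzy sets).

    Each distance is assumed pre-normalized to [0, 1] from facial
    outline points; the breakpoints below are illustrative only.
    """
    mu_brow = tri(brow_dist, 0.2, 0.5, 0.8)
    mu_eye = tri(eye_dist, 0.1, 0.4, 0.7)
    mu_lip = tri(lip_dist, 0.3, 0.7, 1.0)
    # Mamdani-style aggregation: take the weakest antecedent.
    return min(mu_brow, mu_eye, mu_lip)

def sarcasm_percentage(happiness):
    """Stage 2: map the happiness degree to a sarcasm percentage.

    Assumption for illustration: sarcasm peaks at an intermediate
    happiness level (a forced, partial smile) and vanishes at the extremes.
    """
    return 100.0 * tri(happiness, 0.0, 0.5, 1.0)
```

For example, fully neutral (`happiness = 0.0`) or fully genuine (`happiness = 1.0`) expressions yield 0% sarcasm under this sketch, while a half-committed smile scores highest.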
Copyright (c) 2022 Journal La Multiapp
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.