
International Journal of Advanced Research in Education and Technology (IJARETY)
An International, Double-Blind Peer-Reviewed and Refereed Open Access Journal
| Approved by NSL & NISCAIR | Impact Factor: 8.152 | ESTD: 2014 |


Article

TITLE EmotiBite: An Emotion-Aware Food Recommendation System Using Multimodal Techniques
ABSTRACT Most diet applications on the market today focus on counting calories or logging what a user ate, while ignoring how the person actually feels. Yet research in psychology shows that mood directly influences the foods we crave. If software can infer a user's emotional state, it can suggest meals better suited to improving that state. This project, EmotiBite, bridges that gap: it is a custom software system that ties dietary recommendations directly to how the user feels right now. To infer mood, the system runs two AI models in parallel. A transformer-based text classifier analyses the user's typed input, while an audio model detects stress cues in voice recordings. Once the user's state is classified as sad, happy, or stressed, the system does not fall back on generic comfort food; instead, a rule-based logic engine draws on nutritional science to retrieve recipes rich in specific mood-supporting nutrients. Our experiments showed that fusing the voice and text data makes the emotion recognition highly accurate. This approach elevates a conventional recipe application into an intelligent wellness tool designed to actively support mental health.
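As a concrete illustration of the pipeline the abstract describes (parallel text and audio emotion classifiers, late fusion of their outputs, and a rule-based emotion-to-nutrient mapping), here is a minimal Python sketch. The classifier stubs, label set, fusion weight, nutrient table, and recipe catalogue are all illustrative assumptions for exposition, not the system's actual models or data.

```python
"""Minimal late-fusion sketch of the pipeline described in the abstract.

The two classifier stubs stand in for the text transformer and the audio
stress model; the labels, fusion weight, nutrient rules, and recipes are
illustrative assumptions, not values taken from the paper.
"""
from typing import Dict, List

LABELS = ("happy", "sad", "stressed")  # assumed emotion label set


def text_emotion_probs(text: str) -> Dict[str, float]:
    # Stand-in for the typed-text transformer; a real system would return
    # the model's softmax distribution over LABELS for this input.
    return {"happy": 0.10, "sad": 0.70, "stressed": 0.20}


def audio_emotion_probs(wav_path: str) -> Dict[str, float]:
    # Stand-in for the speech stress classifier run on the audio signal.
    return {"happy": 0.15, "sad": 0.55, "stressed": 0.30}


def fuse(p_text: Dict[str, float], p_audio: Dict[str, float],
         w_text: float = 0.6) -> Dict[str, float]:
    # Late fusion: a weighted average of the two per-label distributions.
    return {k: w_text * p_text[k] + (1.0 - w_text) * p_audio[k] for k in LABELS}


# Hypothetical emotion-to-nutrient rules and a toy recipe catalogue tagged
# with the nutrients each dish is rich in.
MOOD_NUTRIENTS: Dict[str, List[str]] = {
    "sad": ["omega-3", "folate"],
    "stressed": ["magnesium", "vitamin C"],
    "happy": ["complex carbohydrates"],
}
RECIPES = [
    {"name": "Grilled salmon with spinach", "nutrients": ["omega-3", "folate"]},
    {"name": "Dark-chocolate oat bowl", "nutrients": ["magnesium", "complex carbohydrates"]},
    {"name": "Citrus quinoa salad", "nutrients": ["vitamin C", "folate"]},
]


def recommend(text: str, wav_path: str) -> List[str]:
    # Fuse the two modality predictions, pick the most likely mood, and
    # return recipes containing at least one nutrient mapped to that mood.
    probs = fuse(text_emotion_probs(text), audio_emotion_probs(wav_path))
    mood = max(probs, key=probs.get)
    wanted = set(MOOD_NUTRIENTS[mood])
    return [r["name"] for r in RECIPES if wanted & set(r["nutrients"])]


if __name__ == "__main__":
    # Fused "sad" probability is 0.6*0.70 + 0.4*0.55 = 0.64, so the engine
    # recommends the omega-3- and folate-rich dishes.
    print(recommend("I had a rough day and feel low.", "note.wav"))
```

In a real deployment the two stubs would be replaced by the trained transformer and speech models, and the fixed weighted average could give way to a learned fusion layer; the late-fusion structure shown here is simply one common way to combine modality-level predictions.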
AUTHOR Vaishnavi Padmashali, Master of Computer Applications, CMR Institute of Technology, Bangalore, India
VOLUME 13
DOI 10.15680/IJARETY.2026.1302025
PDF 25_EmotiBite An Emotion Aware Food Recommendation System using Multimodal Techniques.pdf
KEYWORDS