Enhancing Human-Robot Collaboration through Multimodal Emotion and Context-Aware Object Detection
Abstract
In the evolving landscape of human-robot interaction, the ability of robots to perceive and respond to human emotions and surrounding objects is essential for effective collaboration. This study proposes an integrated framework that combines multimodal emotion detection and context-aware object recognition to enhance the intuitiveness and responsiveness of human-robot collaboration. The approach uses visual (facial expressions) and auditory (speech tone) cues for emotion detection, while simultaneously identifying and interpreting relevant objects in the environment using computer vision and contextual data. An advanced fusion algorithm synchronizes emotional states with environmental understanding, enabling robots to make adaptive decisions in real time. For instance, detecting a hazardous object while also reading a user's stress can prompt the robot to adjust its behavior, such as offering assistance, keeping a safe distance, or changing its task strategy. By integrating these technologies, robots become more situationally aware and capable of more personalized, humanlike, and efficient interactions. The research aims to show that multimodal, context-aware systems can shift human-robot collaboration from reactive automation to proactive cooperation. The findings support the deployment of intelligent robots in collaborative settings that demand emotional sensitivity and context awareness, including healthcare, manufacturing, customer service, and domestic environments.
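To make the fusion step concrete, the sketch below shows one way fused emotional and environmental signals could drive an adaptive behavior choice, as in the hazardous-object example above. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the `EmotionEstimate` and `DetectedObject` types, the confidence and distance thresholds, and the `fuse_and_decide` policy are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    """Hypothetical fused output of the facial-expression and speech-tone detectors."""
    label: str         # e.g. "stressed", "neutral"
    confidence: float  # 0..1

@dataclass
class DetectedObject:
    """Hypothetical output of the context-aware object recognizer."""
    name: str
    hazardous: bool
    distance_m: float

def fuse_and_decide(emotion: EmotionEstimate, objects: list[DetectedObject]) -> str:
    """Late-fusion policy (illustrative): combine the user's emotional state
    with environmental context to select an adaptive behavior."""
    hazard_nearby = any(o.hazardous and o.distance_m < 1.5 for o in objects)
    stressed = emotion.label == "stressed" and emotion.confidence > 0.6

    if stressed and hazard_nearby:
        return "retreat_to_safe_distance"   # hazard plus stress: prioritize safety
    if hazard_nearby:
        return "flag_hazard_and_slow_down"  # hazard alone: cautious assistance
    if stressed:
        return "switch_to_simpler_task"     # stress alone: reduce the user's load
    return "continue_collaboration"         # default cooperative behavior

# Example: a stressed user near a hazardous object
behavior = fuse_and_decide(
    EmotionEstimate("stressed", 0.82),
    [DetectedObject("soldering_iron", hazardous=True, distance_m=0.8)],
)
print(behavior)  # -> retreat_to_safe_distance
```

A rule-based policy is used here only for readability; the abstract's "advanced fusion algorithm" could equally be a learned model that maps the same emotion and object features to a behavior.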