Multi-View Fusion for Action Recognition in Child-Robot Interaction
Date
2018
Language
en
Keyword
Abstract
Answering the challenge of leveraging computer vision methods to enhance the Human-Robot Interaction (HRI) experience, this work explores methods that can expand the capabilities of an action recognition system in such tasks. A multi-view action recognition system is proposed for integration in HRI scenarios with special users, such as children, where training data is limited and many state-of-the-art techniques face difficulties. Different feature extraction approaches, encoding methods and fusion techniques are combined and tested in order to create an efficient system that recognizes children's pantomime actions. This effort culminates in integration with a robotic platform and is evaluated in an engaging Child-Robot Interaction scenario. © 2018 IEEE.
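The abstract mentions combining per-view results via fusion techniques. One common approach for multi-view recognition is late fusion, where classification scores from each camera view are averaged into a single decision. The sketch below is a minimal illustration of that general idea, not the paper's exact method; the function name and weighting scheme are assumptions.

```python
import numpy as np

def fuse_view_scores(view_scores, weights=None):
    """Late-fuse per-view classification scores by weighted averaging.

    view_scores: list of (n_classes,) arrays, one score vector per camera view.
    weights: optional per-view weights (e.g. derived from validation accuracy);
             defaults to uniform weighting.
    Returns the fused score vector and the predicted class index.
    """
    scores = np.stack(view_scores)              # shape: (n_views, n_classes)
    if weights is None:
        weights = np.ones(len(view_scores))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize weights to sum to 1
    fused = weights @ scores                    # weighted average per class
    return fused, int(np.argmax(fused))

# Two views disagree; fusion resolves toward the stronger combined evidence.
front = np.array([0.2, 0.7, 0.1])   # front camera favors class 1
side  = np.array([0.5, 0.3, 0.2])   # side camera favors class 0
fused, pred = fuse_view_scores([front, side])
# fused = [0.35, 0.5, 0.15], pred = 1
```

With uniform weights this reduces to simple score averaging; per-view weights let a more reliable viewpoint dominate the decision.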
Collections
Related items
Showing items related by title, author, creator and subject.
- “Am I Talking to a Human or a Robot?”: A Preliminary Study of Human’s Perception in Human-Humanoid Interaction and Its Effects in Cognitive and Emotional States
  Baka E., Vishwanath A., Mishra N., Vleioras G., Thalmann N.M. (2019) The current preliminary study concerns the identification of the effects human-humanoid interaction can have on human emotional states and behaviors, through a physical interaction. Thus, we have used three cases where ...
- Control of medical robotics and neurorobotic prosthetics by non invasive brain-robot interfaces via EEG and RFID technology
  Eleni, A. (2008) Brain-robot interface (BRI) has been a growing field of innovative research and development in cognitive neuroscience and brain bioimaging processing technologies. In this paper we endeavor to explore how medical robotics ...
- An Audiovisual Child Emotion Recognition System for Child-Robot Interaction Applications
  Filntisis P.P., Efthymiou N., Potamianos G., Maragos P. (2021) We present an audiovisual emotion recognition system tailored to child-robot interaction scenarios. Our proposed system is based on deep learning and the Temporal Segment Networks framework, receives input from both the ...

