Action, Gesture and Spoken Command Recognition in Human-Robot Interaction

KEC 1003
Mon, 02/22/2016 - 4:00pm

Petros Maragos
Professor, School of E.C.E., National Technical University of Athens

Abstract:
In this talk we present recent advances from our research in the EU project
MOBOT, which aims at the development of an intelligent active mobility
assistance robot. We focus on one of its main goals: to provide multimodal
sensory processing capabilities for human action recognition. Specifically, a
reliable multimodal information processing and action recognition system must
be developed that detects, analyzes, and recognizes the human user's actions
from the captured multimodal sensory signals, with a level of accuracy and
detail adequate for intelligent assistive robotics. One of the main thrusts
of this effort is the development of robust and effective computer vision
techniques for visual processing based on multiple cues, such as
spatiotemporal RGB appearance data and depth data from Kinect sensors.
Another major challenge is integrating the recognition of specific verbal and
gestural commands into the human-robot interaction context. In this
presentation we summarize advances in three tasks of this multimodal
processing system for human-robot interaction (HRI): action recognition,
gesture recognition, and spoken command recognition. Our multi-sensor spoken
command recognition system has been developed within the framework of the EU
project DIRHA. More information, related papers, and current results can be
found at http://cvsp.cs.ntua.gr and http://robotics.ntua.gr.
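
To give a rough flavor of how decisions from several modalities (visual
action/gesture cues and spoken commands) can be combined, below is a minimal,
hypothetical sketch of score-level (late) fusion in Python. It is not the
MOBOT/DIRHA system; the class labels, weights, and scores are illustrative
placeholders only.

    # Hypothetical late-fusion sketch: combine per-modality class scores
    # with a weighted sum and pick the highest-scoring label.
    import numpy as np

    # Placeholder label set (not the actual MOBOT action vocabulary).
    ACTIONS = ["stand_up", "sit_down", "walk", "wave", "stop_command"]

    def late_fusion(scores_by_modality, weights):
        """Weighted sum of per-modality class scores; returns (label, fused scores)."""
        fused = np.zeros(len(ACTIONS))
        for modality, scores in scores_by_modality.items():
            fused += weights.get(modality, 0.0) * np.asarray(scores, dtype=float)
        return ACTIONS[int(np.argmax(fused))], fused

    if __name__ == "__main__":
        # Example scores a per-modality classifier might output (illustrative only).
        example_scores = {
            "rgb":    [0.10, 0.05, 0.60, 0.20, 0.05],
            "depth":  [0.15, 0.10, 0.55, 0.15, 0.05],
            "speech": [0.05, 0.05, 0.10, 0.10, 0.70],
        }
        example_weights = {"rgb": 0.4, "depth": 0.3, "speech": 0.3}
        label, fused = late_fusion(example_scores, example_weights)
        print("Fused decision:", label)

In practice the fusion weights and per-modality models would be learned and
validated on recorded HRI data rather than fixed by hand as above.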

Bio:


URL:
http://eecs.oregonstate.edu/colloquium/action-gesture-and-spoken-command-recognition-human-robot-interaction
