First-Person Activity Recognition: Understanding Videos from One's Own Viewpoint
KEC 1007
Friday, March 6, 2015 - 9:00am to 10:00am

Michael S. Ryoo
Research Staff
NASA Jet Propulsion Laboratory

Abstract:
We are entering the era of big video data where cameras are ubiquitous. In 
particular, the amount of video from wearable cameras and robots is rising
explosively. These videos, taken from an actor's own viewpoint, are
called 'first-person videos' or 'egocentric videos'. Millions of individuals 
are already recording their lives using wearable cameras and, soon, robots in 
public places will obtain similar videos capturing their operations and 
interactions in the world. This talk presents automated methodologies to make 
sense of all this visual data by detecting important events in videos and 
generating compact summaries that describe these events. This will not only 
allow video-based robot learning and recognition, but also enable construction 
of intelligent wearable systems supporting human tasks such as medical 
operations, law enforcement, activities of daily living, and human-robot 
teaming. We discuss features and recognition algorithms necessary for 
'activity-level' understanding of such first-person videos, and describe how
they make recognition of
human-human (and human-robot) interactions possible. Approaches for early 
recognition of ongoing activities from streaming videos will be described, and 
a future scenario in which multiple wearable/robot cameras and static cameras
(e.g., surveillance cameras in smart cities) coexist will be discussed.

Speaker Bio:
Michael S. Ryoo is a member of the research staff at NASA's Jet Propulsion Laboratory. His
research interests are in computer vision and robotics, including semantic 
understanding of video data, first-person vision, and intelligent 
interaction/collaboration between humans and wearables/robots. Dr. Ryoo received
the B.S. degree in computer science from Korea Advanced Institute of Science 
and Technology (KAIST) in 2004, and the M.S. and Ph.D. degrees in computer 
engineering from the University of Texas at Austin in 2006 and 2008, 
respectively. He has authored a number of pioneering papers on human activity 
recognition, has given tutorials on activity recognition at major computer
vision conferences including CVPR 2011, AVSS 2012, and CVPR 2014, and is the
corresponding author of the activity recognition survey paper published in
ACM Computing Surveys in 2011. He organized the first ICPR contest on human
activity recognition (SDHA 2010) and the 3rd workshop on Egocentric Vision at 
CVPR 2014.

