Third Workshop on Assistive Computer Vision and Robotics
Santiago, Chile - 12 December 2015
J.M. Rehg, Georgia Institute of Technology, US

James M. Rehg (pronounced "ray") is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is co-Director of the Computational Perception Lab (CPL) and Director of the Center for Behavioral Imaging. He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995-2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, and a 2013 Method of the Year Award from the journal Nature Methods. Dr. Rehg serves on the Editorial Board of the Intl. J. of Computer Vision; he served as Program co-Chair for ACCV 2012 and General co-Chair for CVPR 2009, and will serve as Program co-Chair for CVPR 2017. He has authored more than 100 peer-reviewed scientific papers and holds 25 issued US patents. His research interests include computer vision, machine learning, pattern recognition, and robot perception. Dr. Rehg is the lead PI on an NSF Expedition to develop the science and technology of Behavioral Imaging, the measurement and analysis of social and communicative behavior using multi-modal sensing, with applications to developmental disorders such as autism. He is also the Deputy Director of the NIH Center of Excellence on Mobile Sensor Data-to-Knowledge (MD2K).

Analyzing Social Interactions through Behavioral Imaging

Beginning in infancy, individuals acquire the social and communication skills that are vital for a healthy and productive life. Children with developmental delays face great challenges in acquiring these skills, resulting in substantial lifetime risks. Children with an Autism Spectrum Disorder (ASD) represent a particularly significant risk category, due both to the increasing rate of diagnosis of ASD and to its consequences. Since the genetic basis for ASD is unclear, the diagnosis, treatment, and study of the disorder depend fundamentally on the observation of behavior.

In this talk, I will describe our research agenda in Behavioral Imaging, which targets the capture, modeling, and analysis of social and communicative behaviors between children and their caregivers and peers. We are developing computational methods and statistical models for the analysis of vision, audio, and wearable sensor data. I will present several recent findings, including a method for detecting eye contact between children and adults using wearable cameras, an approach to behavior retrieval in large video collections, and audio-video analysis of paralinguistic events in young children’s speech. I will also describe our plans for clinical applications of this technology. This is joint work with Drs. Agata Rozga and Mark Clements, and Ph.D. students Arridhana Ciptadi, Yin Li, Zhefan Ye, and Hrishikesh Rao.

Gregory Hager, Johns Hopkins University, US

Gregory D. Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University. His research interests include collaborative and vision-based robotics, time-series analysis of image data, and medical applications of image analysis and robotics. He has published over 300 articles and books in these areas. Professor Hager is also Chair of the Computing Community Consortium, a board member of the Computing Research Association, and is currently a member of the governing board of the International Federation of Robotics Research. In 2014, he was awarded a Hans Fischer Fellowship in the Institute of Advanced Study of the Technical University of Munich where he also holds an appointment in Computer Science. He is a fellow of the IEEE for his contributions to Vision-Based Robotics, and has served on the editorial boards of IEEE TRO, IEEE PAMI, and IJCV. Professor Hager received his BA in Mathematics and Computer Science Summa Cum Laude at Luther College (1983), and his MS (1986) and PhD (1988) from the University of Pennsylvania. He was a Fulbright Fellow at the University of Karlsruhe, and was on the faculty of Yale University prior to joining Johns Hopkins. He is founding CEO of Clear Guide Medical.

Creating Machines that Augment Human Capabilities

We are entering an era where people will interact with smart machines to enhance the physical aspects of their lives, just as smart mobile devices have revolutionized how we access and use information. Robots already provide surgeons with physical enhancements that improve their ability to cure disease; we are seeing the first generation of robots that collaborate with humans to enhance productivity in manufacturing; and a new generation of startups is looking at ways to enhance our day-to-day existence through a variety of augmentations enabled by recent enormous advances in perception.

In this talk, I will frame some of the broad science, technology, and commercial trends that are converging to fuel progress on perception-based human-machine collaborative systems for a variety of applications. I will describe how surgical robots can be used to observe surgeons “at work” and to define a “language of manipulation” from data, mirroring the statistical revolution in speech processing. With these models, it is possible to recognize, assess, and intelligently augment surgeons’ capabilities. Beyond surgery, new advances in perception, coupled with steadily declining costs and increasing capabilities of manipulation systems, have opened up new science and commercialization opportunities around manufacturing assistants that can be instructed “in-situ.” Finally, I will describe more recent work on how these ideas can be used to define aids to the disabled.

Sponsors: University of Catania, Consiglio Nazionale delle Ricerche, National Institute of Optics, University of Southern California, University of California, San Diego