Thirteenth International Workshop on Assistive Computer Vision and Robotics

Afternoon, 19th October 2025, Honolulu

Angela Yao, National University of Singapore, SG https://www.comp.nus.edu.sg/~ayao/

Angela is on sabbatical for 2025 and working at Meta Reality Labs in Zurich, Switzerland. She is a Dean's Chair associate professor in the School of Computing at the National University of Singapore, where she leads the Computer Vision and Machine Learning group. The group's research centers on video understanding and digital humans and is generously funded by the NRF Fellowship and grants from MoE Singapore, AI Singapore, and various industry partners. Before moving to Singapore, Angela led a group in Visual Computing at the University of Bonn, founded a startup on smart parking, and completed a PhD at ETH Zurich. In an even earlier life, she studied Engineering Science at the University of Toronto.

Visual AI Assistance: From Seeing Towards Helping

Building assistive AI systems requires bridging the gap between visual understanding and human-centered help. In this talk, I will present recent progress toward visual AI assistance, structured around three threads. First, I will examine the performance of vision–language models (VLMs), identifying key issues of grounding, consistency, and reliability that limit their use in assistive scenarios. Second, I will discuss architectural choices that enable faster, real-time response times, a critical factor for practical deployment. Finally, I will highlight our efforts in dataset curation, with a focus on intention grounding and accessibility for blind users. Together, these directions illustrate a path toward visual AI systems that are both perceptually capable and genuinely assistive.

Daekyum Kim, Korea University, KR https://mintlab.korea.ac.kr/

Daekyum Kim received his B.S. degree in Mechanical Engineering from the University of California, Los Angeles (Los Angeles, CA, USA), in 2015. He earned his Ph.D. degree in Computer Science at KAIST (Daejeon, Republic of Korea) in 2021. He was then a Postdoctoral Research Fellow at the John A. Paulson School of Engineering and Applied Sciences, Harvard University (Cambridge, MA, USA), co-affiliated with the Wyss Institute. Since September 2023, he has been an Assistant Professor with the School of Smart Mobility and the School of Mechanical Engineering, Korea University (Seoul, Republic of Korea). His research interests are in the areas of machine learning, computer vision, robotics, and digital healthcare.

Vision-based Intelligence for Assistive Wearable Robots

Hand function is essential for activities of daily living, as people rely on their hands to interact with the world around them. For individuals with spinal cord injury or other neuromuscular impairments, reduced hand function leads to significant challenges in independence and quality of life. Soft wearable hand robots have been developed to assist these individuals, taking advantage of non-rigid materials that ensure safety and comfort. However, despite promising prototypes, these systems remain limited in real-world adoption. One major barrier is the lack of robust intelligence that enables seamless operation in everyday environments. I will present recent advances from my group on developing vision-based user intention detection methods to augment human hand function with wearable robotics. By integrating computer vision and assistive robotic technologies, we aim to bridge the gap between laboratory prototypes and practical use, highlighting how intelligent perception can make wearable robots more responsive, adaptive, and ultimately more usable in daily life. These studies illustrate how assistive computer vision and robotics can work hand-in-hand to support people with motor impairments and enhance their autonomy.