First Person Vision @ IPLAB
The terms First Person Vision and Egocentric Vision refer to the study and development of Computer Vision techniques in scenarios where images and video are acquired from the user's point of view. This is generally done using wearable cameras such as Google Glass, Microsoft HoloLens and GoPro. This acquisition paradigm is in contrast with standard Third Person Vision applications, which assume that images are acquired by fixed cameras.
Visual content acquired according to the First Person Vision paradigm is inherently different from standard Third Person Vision content. Indeed, while a fixed camera observes events from a neutral point of view, egocentric visual content captures the personal visual experience of the camera wearer. While Third Person Vision content is often edited and pre-segmented (e.g., movies or collections of YouTube videos), egocentric video is generally acquired in a continuous fashion, and hence it tends to be unstructured and difficult to index. While strong assumptions can generally be made on Third Person Vision content, First Person data is inherently characterized by a continuously changing context which must be dealt with.
With its intrinsic mobility, First Person Vision poses new challenges (e.g., changing context, motion blur, unstructured content), and offers unique opportunities to develop truly intelligent systems able to assist users and augment their abilities.
The aim of this page is to present research carried out by the Image Processing LABoratory (IPLAB) in the field of First Person Vision.