On Embodied Visual Navigation
in Real Environments Through Habitat

Marco Rosano1,3
Antonino Furnari1
Luigi Gulino3
Giovanni Maria Farinella1,2
marco.rosano@unict.it
furnari@dmi.unict.it
luigi.gulino@orangedev.it
gfarinella@dmi.unict.it

1 FPV@IPLAB - Department of Mathematics and Computer Science, University of Catania, Italy
2 Cognitive Robotics and Social Sensing Laboratory, ICAR-CNR, Palermo, Italy
3 OrangeDev s.r.l., Firenze, Italy


Visual navigation models based on deep learning can learn effective policies when trained on large amounts of visual observations through reinforcement learning. Unfortunately, collecting the required experience in the real world requires the deployment of a robotic platform, which is expensive and time-consuming. To deal with this limitation, several simulation platforms have been proposed to efficiently train visual navigation policies in virtual environments. Despite the advantages they offer, simulators provide limited realism in terms of appearance and physical dynamics, leading to navigation policies that do not generalize to the real world. In this paper, we propose a tool based on the Habitat simulator which exploits real-world images of the environment, together with sensor and actuator noise models, to produce more realistic navigation episodes. We perform a range of experiments to assess the ability of such policies to generalize, using virtual and real-world images as well as observations transformed with unsupervised domain adaptation approaches. We also assess the impact of sensor and actuation noise on navigation performance and investigate whether it enables learning more robust navigation policies. We show that our tool can effectively help to train and evaluate navigation policies on real-world observations without running navigation episodes in the real world.
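As a concrete illustration, the snippet below sketches how sensor and actuation noise can be enabled in a habitat-lab PointNav configuration. It follows the noise models distributed with the Habitat Challenge 2020 setup (Gaussian RGB noise, Redwood depth noise, LoCoBot actuation noise); the exact configuration keys may vary across habitat-lab versions, so treat this as a sketch rather than our released training code.

    import habitat

    # Minimal sketch (habitat-lab ~v0.1.x): start from a standard PointNav
    # task configuration and enable sensor/actuation noise.
    config = habitat.get_config("configs/tasks/pointnav.yaml")
    config.defrost()
    # Gaussian noise on RGB observations, Redwood noise on depth.
    config.SIMULATOR.RGB_SENSOR.NOISE_MODEL = "GaussianNoiseModel"
    config.SIMULATOR.RGB_SENSOR.NOISE_MODEL_KWARGS.intensity_constant = 0.1
    config.SIMULATOR.DEPTH_SENSOR.NOISE_MODEL = "RedwoodDepthNoiseModel"
    # Noisy actuation modeled on a LoCoBot with a proportional controller.
    config.SIMULATOR.ACTION_SPACE_CONFIG = "pyrobotnoisy"
    config.SIMULATOR.NOISE_MODEL.ROBOT = "LoCoBot"
    config.SIMULATOR.NOISE_MODEL.CONTROLLER = "Proportional"
    config.SIMULATOR.NOISE_MODEL.NOISE_MULTIPLIER = 0.5
    config.freeze()

    env = habitat.Env(config=config)
    observations = env.reset()  # RGB/depth observations now carry sensor noise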


Navigation episodes demo



Source code

We have released the implementation of our Domain Adaptation approach, together with the pre-trained models. Check them out here:
[GitHub]
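For reference, the sketch below shows how a pre-trained sim-to-real image translation model, such as the CycleGAN checkpoint released in the Dataset section, could be applied to simulated observations before they are fed to a navigation policy. The checkpoint path and the TorchScript format are assumptions for illustration only; see the user guide on GitHub for the actual interface.

    import torch
    import torchvision.transforms as T
    from PIL import Image

    # Hypothetical: load a sim-to-real generator exported as a TorchScript
    # module (path and format are placeholders, not the released layout).
    generator = torch.jit.load("checkpoints/cyclegan_sim2real.pt").eval()

    preprocess = T.Compose([
        T.Resize((256, 256)),
        T.ToTensor(),
        T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),  # scale to [-1, 1]
    ])

    @torch.no_grad()
    def sim2real(frame: Image.Image) -> torch.Tensor:
        """Translate a simulated RGB frame into the real-image domain."""
        x = preprocess(frame).unsqueeze(0)   # shape: 1 x 3 x 256 x 256
        y = generator(x)                     # generator output in [-1, 1]
        return (y.clamp(-1, 1) + 1) / 2      # rescale to [0, 1] for the policy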


Dataset

We have released the 3D model and the real-world images of the proposed new environment (the OrangeDev environment). At the links below you can find the 3D model and images, the train and test navigation trajectories, the pre-trained Domain Adaptation (DA) model, and the pre-trained CycleGAN Sim2Real model.
For more information on how to use the data with the Habitat simulator, please take a look at the user guide on the GitHub project page.


[3D + Images + Trajectories + Pre-trained model for DA]
[CycleGAN Sim2Real pre-trained checkpoint]
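As a quick-start sketch (not a substitute for the user guide), the snippet below shows how the released 3D model could be loaded in habitat-sim to render observations. The scene path is a placeholder, and the API names follow habitat-sim 0.2.x (older versions use habitat_sim.SensorSpec instead of CameraSensorSpec).

    import habitat_sim

    # Backend configuration: point the simulator at the downloaded scene file.
    sim_cfg = habitat_sim.SimulatorConfiguration()
    sim_cfg.scene_id = "data/scene_datasets/orangedev/orangedev.glb"  # placeholder path

    # A single RGB camera attached to the agent.
    rgb_spec = habitat_sim.CameraSensorSpec()
    rgb_spec.uuid = "rgb"
    rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
    rgb_spec.resolution = [480, 640]

    agent_cfg = habitat_sim.agent.AgentConfiguration()
    agent_cfg.sensor_specifications = [rgb_spec]

    sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
    observations = sim.step("move_forward")  # default discrete action space
    rgb_frame = observations["rgb"]          # H x W x 4 RGBA array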


Paper and Bibtex

[Paper]

Citation
 
Rosano, M., Furnari, A., Gulino, L. and Farinella, G.M., 2020.
On Embodied Visual Navigation in Real Environments Through Habitat.
In International Conference on Pattern Recognition (ICPR).

[Bibtex]
@inproceedings{rosano2020navigation,
  title={On Embodied Visual Navigation in Real Environments Through Habitat},
  author={Rosano, Marco and Furnari, Antonino and
            Gulino, Luigi and Farinella, Giovanni Maria},
  booktitle={International Conference on Pattern Recognition (ICPR)},
  year={2020}}
                


Acknowledgements

This research is supported by OrangeDev s.r.l., by Piano della Ricerca 2016-2018 - CHANCE - Linea di Intervento 1 of DMI, University of Catania, and by MIUR AIM - Attrazione e Mobilità Internazionale Linea 1 - AIM1893589 - CUP E64118002540007.
Website template from here and here.