2 Robotics Laboratory - Department of Mathematics and Computer Science, University of Catania, Catania, Italy
3 OrangeDev s.r.l., Firenze, Italy
4 Cognitive Robotics and Social Sensing Laboratory, ICAR-CNR, Palermo, Italy
5 Next Vision s.r.l., Catania, Italy
Code of the extended journal paper: [GitHub] |
Code of the conference paper: [GitHub] |
Dataset of the extended journal paper
In the extended work we used a new set of real-world images and proposed a range of new visual navigation models that combine multiple mid-level representations, each capturing different visual properties of the scene. An illustrative sketch of the fusion idea is provided after the download links below.
The new dataset includes the updated checkpoints of the Domain Adaptation (DA) models, the checkpoints of the best-performing modality fusion models, new real-world observations, and additional updates that improve the efficiency of model training and testing.
[3D + New Images + Trajectories]
[Checkpoints of the navigation models]
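As a purely illustrative sketch of the fusion idea described above (not the exact architectures released in the checkpoints), the snippet below combines several precomputed mid-level feature vectors by simple concatenation before an action head. The class name, feature dimensions, and the late-fusion-by-concatenation scheme are assumptions made for illustration only; the actual fusion models are described in the paper and implemented in the GitHub repository.

import torch
import torch.nn as nn

class MidLevelFusionPolicy(nn.Module):
    # Illustrative late fusion of precomputed mid-level feature vectors
    # (e.g. depth, surface normals, semantics) followed by an action head.
    def __init__(self, feat_dims, hidden_dim=512, num_actions=4):
        super().__init__()
        # One small encoder per mid-level modality.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in feat_dims]
        )
        # Concatenate the encoded modalities and predict action logits.
        self.head = nn.Sequential(
            nn.Linear(hidden_dim * len(feat_dims), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, feats):
        fused = torch.cat([enc(f) for enc, f in zip(self.encoders, feats)], dim=-1)
        return self.head(fused)

# Example: fuse two hypothetical modalities with 2048- and 1024-dim features.
policy = MidLevelFusionPolicy(feat_dims=[2048, 1024])
logits = policy([torch.randn(1, 2048), torch.randn(1, 1024)])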
Dataset of the conference paper
This dataset includes the pre-trained Domain Adaptation (DA) model and the CycleGAN pre-trained model for Sim2Real translation.
[3D + Images + Trajectories + Pre-trained model for DA]
For more information on how to use the data with the Habitat Simulator, please take a look at the user guide on the GitHub project page; a minimal loading sketch is also shown below.
[CycleGAN Sim2Real pre-trained checkpoint]
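Below is a minimal, hedged sketch of how a scene from this dataset might be loaded and rendered with the Habitat Simulator (habitat-sim). The scene path, sensor resolution, and action used here are placeholders, and attribute names such as scene_id or CameraSensorSpec can differ slightly between habitat-sim versions; the user guide on the GitHub project page remains the authoritative reference.

import habitat_sim

# Configure the simulator backend with one of the released 3D scenes
# (the path below is a placeholder).
backend_cfg = habitat_sim.SimulatorConfiguration()
backend_cfg.scene_id = "data/scene_datasets/example_scene.glb"

# Attach an RGB camera to the agent.
rgb_sensor = habitat_sim.CameraSensorSpec()
rgb_sensor.uuid = "rgb"
rgb_sensor.sensor_type = habitat_sim.SensorType.COLOR
rgb_sensor.resolution = [480, 640]

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_sensor]

sim = habitat_sim.Simulator(habitat_sim.Configuration(backend_cfg, [agent_cfg]))

# Step the agent with a default discrete action and read the RGB frame.
observations = sim.step("move_forward")
rgb_frame = observations["rgb"]

sim.close()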
Journal paper
@article{rosano2022multirepr,
  title={Image-based Navigation in Real-World Environments via Multiple Mid-level Representations: Fusion Models, Benchmark and Efficient Evaluation},
  author={Marco Rosano and Antonino Furnari and Luigi Gulino and Corrado Santoro and Giovanni Maria Farinella},
  journal={Autonomous Robots},
  year={2023}
}

Conference paper
@inproceedings{rosano2020navigation,
  title={On Embodied Visual Navigation in Real Environments Through Habitat},
  author={Rosano, Marco and Furnari, Antonino and Gulino, Luigi and Farinella, Giovanni Maria},
  booktitle={International Conference on Pattern Recognition (ICPR)},
  year={2020}
}
Acknowledgements
Website template from here and here.