New paper accepted 📢
Our 44-page survey with 385 references is now available on @openreviewnet.
We invite comments, suggestions, and corrections for 30 days.
Contribute to our survey: instructions in this thread 🧵
Major contributions will be acknowledged ✨
❷ Click on the “Comment” button
❸ Insert a title and comment
❹ Click on “Submit”
💬 We envisage the future through character-based stories and review 12 tasks: localisation, 3D scene understanding, anticipation, recognition, gaze, social understanding, full-body pose estimation, hand and hand-object interactions, person re-ID, privacy, summarisation, and VQA.
A collaboration with @BristolUni, Università di Catania, and @PoliTO: @chiaraplizzari, @GGoletto, @anfurnari, @Sid__Bansal, F. Ragusa, @GMFarinella, @tommasi_tatiana, and Dima Damen.
Also available on arXiv: https://arxiv.org/abs/2308.07123