Visual RSSI fingerprinting for radio-based indoor localization

Giuseppe Puglisi, Daniele Di Mauro, Luigi Gulino, Antonino Furnari, Giovanni Maria Farinella


The problem of localizing objects from RSSI signals has been tackled with both geometric and machine learning based methods. Machine learning based solutions have the advantage of coping better with noise, but they require many radio signal observations associated with the correct position in the target space. This data collection and labeling process is not trivial: it typically requires building a dense grid of observations, which can be resource-intensive. To overcome this issue, we propose a pipeline that uses an autonomous robot to collect RSSI-image pairs and Structure from Motion to associate 2D positions with the RSSI values based on the inferred position of each image. As we show in the paper, this method allows large quantities of data to be acquired inexpensively. Using the collected data, we experiment with machine learning models based on RNNs and propose an optimized model composed of a set of LSTMs, each specializing on the RSSI observations coming from a different antenna. The proposed method shows promising results, outperforming several baselines and suggesting that the proposed pipeline for collecting and automatically labeling observations is useful in real scenarios. Furthermore, to aid research in this area, we publicly release the collected dataset, comprising 57158 RSSI observations paired with RGB images.



We create a pipeline that goes through the following steps: 1) Visual RSSI Fingerprinting, in which we collect RSSI values and associate them with visual observations in the form of RGB images. 2) Structure from Motion, which is used to associate a 3D pose with each image, and hence with each RSSI value. 3) Projection of the 3D poses onto the 2D floor plan and export of the associated RSSI values for training machine learning algorithms for localization via radio signals.
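The projection step (3) can be sketched as follows. This is a minimal illustration, not the exact implementation: it assumes the SfM reconstruction has already been aligned so that the floor lies in the x-y plane, and that the floor plan only requires a translation (`origin`) and a scale factor; the function and parameter names are our own.

```python
import numpy as np

def project_poses_to_floorplan(poses_3d, scale, origin):
    """Map 3D camera poses (from SfM) to 2D floor-plan coordinates.

    Assumes the reconstruction is aligned with the floor in the x-y
    plane: the height (z) axis is discarded and the remaining
    coordinates are translated and scaled to floor-plan units.
    """
    poses_3d = np.asarray(poses_3d, dtype=float)
    xy = poses_3d[:, :2]                         # drop the z coordinate
    return (xy - np.asarray(origin, dtype=float)) * scale

# Example: three camera poses mapped with a 10 units-per-metre scale.
poses = [[1.0, 2.0, 1.5], [3.0, 4.0, 1.5], [5.0, 6.0, 1.4]]
xy = project_poses_to_floorplan(poses, scale=10.0, origin=(0.0, 0.0))
```

Each projected 2D position can then be paired with the RSSI vector of the corresponding image, yielding automatically labeled training samples.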

Method



We hence propose a neural network architecture that exploits the temporal nature of the data and the different contribution of each antenna. Specifically, we design an architecture composed of 5 LSTMs, one per antenna, which process in parallel the features related to the different antennas. At each training step, every LSTM takes as input a sequence containing the RSSI signals of the last 20 seconds measured with respect to its corresponding antenna. The 128-dimensional hidden vectors of the different LSTMs are then concatenated into a single vector and fed to a Multi Layer Perceptron (MLP) made of 4 fully connected layers to regress the final 2D pose.
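A PyTorch sketch of this architecture is given below. The number of antennas (5), the 128-dimensional hidden vectors, and the 4-layer MLP regressing a 2D pose follow the description above; the intermediate MLP widths and the sequence length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiAntennaLSTM(nn.Module):
    """One LSTM per antenna; the final hidden states are concatenated
    and fed to a 4-layer MLP that regresses the 2D position."""

    def __init__(self, n_antennas=5, hidden=128):
        super().__init__()
        # One independent LSTM specializing on each antenna's RSSI stream.
        self.lstms = nn.ModuleList(
            nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            for _ in range(n_antennas)
        )
        # 4 fully connected layers; hidden widths here are assumptions.
        self.mlp = nn.Sequential(
            nn.Linear(n_antennas * hidden, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),                    # (x, y) on the floor plan
        )

    def forward(self, x):
        # x: (batch, seq_len, n_antennas) RSSI sequences.
        feats = [
            lstm(x[:, :, i : i + 1])[1][0][-1]   # last hidden state
            for i, lstm in enumerate(self.lstms)
        ]
        return self.mlp(torch.cat(feats, dim=1))

# A batch of 4 sequences of 20 RSSI samples from 5 antennas.
out = MultiAntennaLSTM()(torch.randn(4, 20, 5))
```

Processing each antenna with its own LSTM lets every recurrent branch specialize on the signal statistics of a single access point before the MLP fuses them.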

Dataset

Click here

Paper

G. Puglisi, D. Di Mauro, L. Gulino, A. Furnari, G. M. Farinella, Visual RSSI fingerprinting for radio-based indoor localization. International Conference on Signal Processing and Multimedia Applications, 2022

@inproceedings{puglisi2022sigmap,
  title = {Visual RSSI Fingerprinting for Radio-Based Indoor Localization},
  author = {G. Puglisi and D. Di Mauro and L. Gulino and A. Furnari and G. M. Farinella},
  year = {2022},
  booktitle = {International Conference on Signal Processing and Multimedia Applications (SIGMAP)}
}


Acknowledgement

This research is supported by the project MEGABIT - PIAno di inCEntivi per la RIcerca di Ateneo 2020/2022 (PIACERI) – linea di intervento 2, DMI - University of Catania.

People