Evaluation of Egocentric Action Recognition

Antonino Furnari, Sebastiano Battiato, Giovanni Maria Farinella


Egocentric action analysis methods often assume that input videos are trimmed and hence focus on action classification rather than recognition. Consequently, the adopted evaluation schemes are often unable to assess properties of the desired action video segmentation output that are meaningful in real scenarios, such as oversegmentation and the precision of boundary localization. To overcome the limits of current evaluation methodologies, we propose a set of measures aimed at quantitatively and qualitatively assessing the performance of egocentric action recognition methods. To make current action classification methods more readily exploitable in the recognition scenario, we also investigate how frame-wise predictions can be turned into action-based temporal video segmentations. Experiments on both synthetic and real data show that the proposed measures can improve evaluation and help drive the design of egocentric action recognition methods.
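
As an illustration of how frame-wise predictions can be turned into a segment-based output, the sketch below groups consecutive frames that share the same predicted label into segments of the form [start_frame, end_frame, label, score], scoring each segment with the mean confidence of its frames. The function frames_to_segments is ours: this is only a minimal baseline conversion, not necessarily the strategy investigated in the paper.

import numpy as np

def frames_to_segments(frame_labels, frame_scores):
    # Group consecutive frames with the same predicted label into temporal
    # segments [start_frame, end_frame, label, score]; the segment score is
    # the mean confidence of the frames it covers.
    segments = []
    start = 0
    for t in range(1, len(frame_labels) + 1):
        # Close the current segment when the label changes or the video ends.
        if t == len(frame_labels) or frame_labels[t] != frame_labels[start]:
            segments.append([start, t - 1, int(frame_labels[start]),
                             float(np.mean(frame_scores[start:t]))])
            start = t
    return np.array(segments)

# Ten frame-wise predictions over three classes.
labels = np.array([2, 2, 2, 1, 1, 1, 1, 0, 0, 2])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.9, 0.8, 0.7, 0.5, 0.6, 0.4])
print(frames_to_segments(labels, scores))
# Four segments: [0, 2, 2, 0.8], [3, 6, 1, 0.75], [7, 8, 0, 0.55], [9, 9, 2, 0.4]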



(a) Action classification versus (b) action recognition. In the classification scenario, input videos are trimmed and the output of the classification process is an action label. Action recognition methods take the whole untrimmed video as input and return a semantically segmented video as output.

Paper

[EPIC 2017] A. Furnari, S. Battiato, G. M. Farinella. How Shall we Evaluate Egocentric Action Recognition? International Workshop on Egocentric Perception, Interaction and Computing (EPIC) in conjunction with ICCV 2017. Download Paper

@inproceedings{furnari2017how,
    author    = "Furnari, Antonino and Battiato, Sebastiano and Farinella, Giovanni Maria",
    title     = "How Shall we Evaluate Egocentric Action Recognition?",
    booktitle = "International Workshop on Egocentric Perception, Interaction and Computing (EPIC) in conjunction with ICCV",
    year      = "2017"
}


Code

We provide Python code implementing the evaluation measures defined in the paper.
To get started, place the .py file (download below) in your Python path. Then, in an IPython shell, type the following to read the documentation of the evaluate function:
from egoeval import evaluate
evaluate?
Example usage:
from egoeval import evaluate
import numpy as np

# One array per video; each ground-truth row is [start_frame, end_frame, class_label].
gt_segments = [np.array([[112, 220, 3], [250, 330, 1], [450, 620, 2], [660, 700, 2]]),
               np.array([[60, 130, 3], [150, 230, 0], [250, 300, 1]])]

# Predicted rows carry a fourth column with the confidence score of the segment.
pred_segments = [np.array([[110, 250, 2, 0.8], [252, 260, 2, 0.2], [300, 360, 1, 0.6], [400, 600, 2, 0.2]]),
                 np.array([[70, 130, 0, 1], [155, 231, 0, 0.8]])]

out = evaluate(gt_segments, pred_segments)
print("mAUMOTAP", out['MOTAP']['mean_score'])  # -> ~0.18
Download Code

People