Hand-labelled eye movements for a subset of the Hollywood2 data set

In this repository we provide hand-labelled ground-truth eye movement annotations for a subset of the large Hollywood2 data set [1].
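
A minimal loading sketch is given below, assuming the ground-truth recordings are stored as ARFF files following the sp_tool conventions [2]; the file name and the `EYE_MOVEMENT_TYPE` attribute name used here are assumptions for illustration, not guarantees about this repository's layout.

```python
# Sketch: inspect one hand-labelled recording (assumed to be an ARFF file).
from scipy.io import arff
import numpy as np

# Hypothetical file name; substitute an actual file from the repository.
data, meta = arff.loadarff("ground_truth/example_recording.arff")
print(meta)  # shows which per-sample attributes the file actually contains

# Assumed name of the per-sample eye movement label attribute.
label_column = "EYE_MOVEMENT_TYPE"
if label_column in meta.names():
    labels = np.array([v.decode() if isinstance(v, bytes) else v
                       for v in data[label_column]])
    # Count how many gaze samples fall into each labelled class.
    values, counts = np.unique(labels, return_counts=True)
    print(dict(zip(values, counts)))
```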

ALGORITHM EVALUATION

Below we provide preliminary evaluation results for several popular eye movement classification algorithms. The reported sample-level and event-level F1 scores for fixations, saccades, and smooth pursuit (SP) are computed with the quality evaluation functionality of the sp_tool [2]; a brief sketch of the sample-level F1 computation is given after the table.

| Model | Average F1 | Sample F1 (Fixation) | Sample F1 (Saccade) | Sample F1 (SP) | Event F1 (Fixation) | Event F1 (Saccade) | Event F1 (SP) |
|---|---|---|---|---|---|---|---|
| 1D CNN-BLSTM: speed + direction | 0.787 | 0.872 | 0.827 | 0.680 | 0.808 | 0.946 | 0.588 |
| sp_tool smoothed | 0.755 | 0.853 | 0.816 | 0.617 | 0.820 | 0.905 | 0.516 |
| REMoDNaV [3] | 0.748 | 0.779 | 0.755 | 0.622 | 0.784 | 0.931 | 0.615 |
| sp_tool [2] | 0.703 | 0.819 | 0.815 | 0.616 | 0.587 | 0.900 | 0.483 |
| Dorr et al. (2010) [4] | 0.685 | 0.832 | 0.796 | 0.373 | 0.821 | 0.884 | 0.403 |
| Larsson et al. (2015) [5] | 0.647 | 0.796 | 0.803 | 0.317 | 0.807 | 0.886 | 0.274 |
| Berg et al. (2009) [6] | 0.601 | 0.824 | 0.729 | 0.137 | 0.845 | 0.826 | 0.243 |
| I-VMP | 0.564 | 0.726 | 0.688 | 0.564 | 0.503 | 0.563 | 0.338 |
| I-KF | 0.523 | 0.816 | 0.770 | 0.000 | 0.748 | 0.803 | 0.000 |
| I-VDT | 0.504 | 0.813 | 0.700 | 0.136 | 0.557 | 0.559 | 0.263 |
| I-HMM | 0.480 | 0.811 | 0.720 | 0.000 | 0.646 | 0.700 | 0.000 |
| I-DT | 0.473 | 0.803 | 0.486 | 0.000 | 0.744 | 0.802 | 0.000 |
| I-VT | 0.432 | 0.810 | 0.705 | 0.000 | 0.520 | 0.555 | 0.000 |
| I-VVT | 0.390 | 0.751 | 0.705 | 0.247 | 0.061 | 0.555 | 0.023 |
| I-MST | 0.385 | 0.793 | 0.349 | 0.000 | 0.590 | 0.576 | 0.000 |
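
Sample-level F1 treats every gaze sample as one labelled instance and computes a per-class F1 over those labels, while event-level F1 scores entire episodes rather than individual samples. The sketch below only illustrates the sample-level computation with invented toy label sequences; the numbers in the table come from the sp_tool's own evaluation code, not from this snippet.

```python
# Sketch: per-class sample-level F1 over per-sample eye movement labels.
from sklearn.metrics import f1_score

# Toy label sequences, one label per gaze sample (invented for illustration).
ground_truth = ["FIX", "FIX", "SACCADE", "SP", "SP", "SP", "FIX"]
detected     = ["FIX", "FIX", "SACCADE", "SP", "FIX", "SP", "FIX"]

for label in ("FIX", "SACCADE", "SP"):
    # Restricting `labels` to a single class yields that class's F1 score.
    score = f1_score(ground_truth, detected, labels=[label], average="macro")
    print(f"sample-level F1 ({label}): {score:.3f}")
```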

REFERENCES

[1] Mathe, S., & Sminchisescu, C. (2012, October). Dynamic eye movement datasets and learnt saliency models for visual action recognition. In European Conference on Computer Vision (pp. 842-856). Springer, Berlin, Heidelberg.

[2] Startsev, M., Agtzidis, I., & Dorr, M. (2019). Characterising and automatically detecting smooth pursuit in a large-scale ground-truth data set of dynamic natural scenes. Journal of Vision.

[3] Dar, A. H., Wagner, A. S., & Hanke, M. (2019). REMoDNaV: Robust Eye Movement Detection for Natural Viewing. BioRxiv, 619254.

[4] Dorr, M., Martinetz, T., Gegenfurtner, K. R., & Barth, E. (2010). Variability of eye movements when viewing dynamic natural scenes. Journal of Vision, 10(10), 28-28.

[5] Larsson, L., Nyström, M., Andersson, R., & Stridh, M. (2015). Detection of fixations and smooth pursuit movements in high-speed eye-tracking data. Biomedical Signal Processing and Control, 18, 145-152.

[6] Berg, D. J., Boehnke, S. E., Marino, R. A., Munoz, D. P., & Itti, L. (2009). Free viewing of dynamic stimuli by humans and monkeys. Journal of Vision, 9(5), 19-19.