Variational saliency maps for dynamic image sequences

Author(s)
Aniello Raffaele Patrone, Christian Valuch, Ulrich Ansorge, Otmar Scherzer
Abstract

Saliency maps are an important tool for modeling and predicting human eye movements, but their application to dynamic image sequences, such as videos, is still limited. In this work we propose a variational approach for determining saliency maps in dynamic image sequences. The main idea is to merge information from static saliency maps, computed on every single frame of a video (Itti & Koch, 2001), with the optical flow (Horn & Schunck, 1981; Weickert & Schnörr, 2011) representing motion between successive video frames. Incorporating motion information into saliency maps is not new, but our variational approach offers a new solution to the problem, one that is also conceptually compatible with successive stages of visual processing in the human brain. We present the basic concept and compare our modeling results to classical methods of saliency and optical flow computation. In addition, we present preliminary eye-tracking results from an experiment in which 24 participants viewed 80 real-world dynamic scenes. Our data suggest that our algorithm allows feasible and computationally cheap modeling of human attention in videos.
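The abstract does not give the authors' variational energy functional, so as a rough illustration of the general idea only — blending a per-frame static saliency cue with a frame-to-frame motion cue — one might sketch the combination as a weighted sum. The saliency and motion functions below are toy stand-ins (local contrast instead of the Itti & Koch model, temporal differencing instead of Horn–Schunck optical flow), and the blending weight `alpha` is an assumed parameter, not taken from the paper.

```python
import numpy as np

def static_saliency(frame):
    """Toy static saliency: deviation from the mean intensity, normalized.
    Stand-in for the Itti & Koch model, which this sketch does not implement."""
    s = np.abs(frame - frame.mean())
    return s / (s.max() + 1e-8)

def flow_magnitude(prev, curr):
    """Toy motion cue: absolute temporal difference between frames, normalized.
    A real pipeline would use Horn-Schunck optical flow here."""
    m = np.abs(curr.astype(float) - prev.astype(float))
    return m / (m.max() + 1e-8)

def dynamic_saliency(prev, curr, alpha=0.5):
    """Blend static saliency with the motion cue; alpha is an assumed weight."""
    return alpha * static_saliency(curr) + (1 - alpha) * flow_magnitude(prev, curr)

# Two synthetic 8x8 frames: a bright block that shifts one pixel to the right.
prev = np.zeros((8, 8)); prev[2:4, 2:4] = 1.0
curr = np.zeros((8, 8)); curr[2:4, 3:5] = 1.0

sal = dynamic_saliency(prev, curr)
print(sal.shape)  # (8, 8); values lie in [0, 1]
```

In this toy setup the newly covered pixels (bright *and* moving) score highest, which is the qualitative behavior a motion-augmented saliency map should show; the paper's actual method replaces the weighted sum with a variational formulation.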

Organisation(s)
Forschungsverbund Kognitionswissenschaft, Institut für Psychologie der Kognition, Emotion und Methoden
Pages
224
Publication date
2015
ÖFOS 2012
101028 Mathematical modelling
Link to portal
https://ucrisportal.univie.ac.at/de/publications/f8893028-dc77-4134-aa20-781580afadcb