Philippine Journal of Science
150 (2): 515-525, April 2021
ISSN 0031-7683
Date Received: 28 Jul 2020

Investigating Visual Attention-based
Traffic Accident Detection Model

Caitlienne Diane C. Juan1, Jaira Rose A. Bat-og1,
Kimberly K. Wan1, and Macario O. Cordel II2*

1College of Computer Studies, De La Salle University
Manila City, Metro Manila 0922 Philippines
2Data Science Institute, De La Salle University
Manila City, Metro Manila 0922 Philippines

*Corresponding author: macario.cordel@dlsu.edu.ph

ABSTRACT

Spotting abnormal or anomalous events using street and road cameras relies heavily on human observers, who are subject to fatigue, distraction, and limits on simultaneous attention. Several anomalous event detection systems based on complex computer vision algorithms and deep learning architectures have been proposed. However, these systems are objective-agnostic, resulting in high false-negative rates in task-driven abnormal event detection. A straightforward solution is to use visual attention models; however, these are based on low-level features integrated with object detectors and scene context rather than on the observers' objects of gaze. In this paper, we explore a task-driven, visual attention-based traffic accident detection system. We first examine human fixations under free-viewing and task-driven goals using TaskFix, our proposed dataset, the first task-driven fixation dataset of traffic incidents from different road cameras. We then used TaskFix to fine-tune a visual attention model, in this work called TaskNet. We evaluated the proposed fine-tuned model with quantitative and qualitative tests and compared it with other visual attention prediction architectures. The results indicate the potential of visual attention models for abnormal event detection. The dataset is available here: https://bit.ly/TaskFixDataset