Using human observer eye movements in automatic image classifiers

Title: Using human observer eye movements in automatic image classifiers
Author: Jaimes, Alejandro; Pelz, Jeff; Grabowski, Timothy; Babcock, Jason; Chang, Shih-Fu
Abstract: We explore the way in which people look at images of different semantic categories (e.g., handshake, landscape), and directly relate those results to computational approaches for automatic image classification. Our hypothesis is that the eye movements of human observers differ for images of different semantic categories, and that this information can be effectively used in automatic content-based classifiers. First, we present eye tracking experiments that show the variations in eye movements (i.e., fixations and saccades) across different individuals for images of five different categories: handshakes (two people shaking hands), crowd (cluttered scenes with many people), landscapes (nature scenes without people), main object in uncluttered background (e.g., an airplane flying), and miscellaneous (people and still lifes). The eye tracking results suggest that similar viewing patterns occur when different subjects view different images in the same semantic category. Using these results, we examine how empirical data obtained from eye tracking experiments across different semantic categories can be integrated with existing computational frameworks, or used to construct new ones. In particular, we examine the Visual Apprentice, a system in which image classifiers are learned (using machine learning) from user input as the user defines a multiple-level object definition hierarchy based on an object and its parts (scene, object, object-part, perceptual area, region), and labels examples for specific classes (e.g., handshake). The resulting classifiers are applied to automatically classify new images (e.g., as handshake/non-handshake). Although many eye tracking experiments have been performed, to our knowledge, this is the first study that specifically compares eye movements across categories, and that links category-specific eye tracking results to automatic image classification techniques.
Description: This article may also be accessed on the publisher's website at: http://spiedl.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PSISDG004299000001000373000001&idtype=cvips&gifs=yes Copyright 2001 Society of Photo-Optical Instrumentation Engineers. This paper was published in the Proceedings of SPIE: Human Vision and Electronic Imaging VI, SPIE vol. 4299, and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
Record URI: http://hdl.handle.net/1850/3023
Date: 2001-06
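
The abstract's central claim, that fixation and saccade statistics differ by semantic category and can feed a content-based classifier, can be illustrated with a minimal sketch. The feature set, the toy viewing data, and the RandomForestClassifier below are assumptions chosen for illustration only; the paper's actual system is the Visual Apprentice, whose object definition hierarchy and learning machinery are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fixation_features(fixations):
    # Summarize one viewing session as a small feature vector.
    # fixations: list of (x, y, duration_ms) tuples in temporal order.
    fx = np.asarray(fixations, dtype=float)
    xy, dur = fx[:, :2], fx[:, 2]
    # Saccade amplitudes: distances between consecutive fixations.
    amps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return np.array([
        len(fx),                            # fixation count
        dur.mean(),                         # mean fixation duration
        amps.mean() if amps.size else 0.0,  # mean saccade amplitude
        xy.std(axis=0).mean(),              # spatial spread of fixations
    ])

# Toy data standing in for real eye-tracking recordings: "handshake"
# viewings with tightly clustered central fixations vs. "landscape"
# viewings with long horizontal scanning saccades.
viewings = [
    ([(300, 240, 310), (320, 250, 280), (310, 235, 295)], "handshake"),
    ([(305, 245, 300), (315, 255, 270), (300, 240, 285)], "handshake"),
    ([(60, 200, 180), (280, 210, 170), (520, 205, 190)], "landscape"),
    ([(80, 190, 175), (300, 200, 165), (540, 195, 185)], "landscape"),
]
X = np.stack([fixation_features(f) for f, _ in viewings])
y = [label for _, label in viewings]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_viewing = [(70, 205, 170), (290, 215, 175), (530, 200, 180)]
print(clf.predict(fixation_features(new_viewing).reshape(1, -1)))
# -> ['landscape']

Session-level summary statistics are the simplest way to encode viewing patterns; the paper goes further, relating where observers fixate to the levels of the Visual Apprentice hierarchy (scene, object, object-part, perceptual area, region) rather than training directly on such aggregate features.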

Files in this item

JPelzConfProc06-2001.pdf (671.1 KB, PDF)
