Where to Look: Real-time Neuromorphic Dynamic Visual Saliency
Organisms use selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, such as intensity, color, and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies showing that objects predict eye fixations better than features. In this report we present a neurally inspired algorithm for object-based, bottom-up attention. The model rivals the performance of state-of-the-art, non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention.
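To make the core idea concrete, the sketch below illustrates object-based saliency in Python. This is a minimal approximation, not the speaker's proto-object algorithm: here "proto-objects" are simply connected components of a center-surround contrast map, and each component competes for attention as a whole rather than pixel by pixel. The function name and parameters are hypothetical.

```python
# Minimal, illustrative sketch of object-based saliency (NOT the
# talk's proto-object model): proto-objects are approximated by
# connected components of a center-surround contrast map, and each
# component receives a single saliency value, so whole regions, not
# individual pixels, compete for attention.
import numpy as np
from scipy import ndimage

def proto_object_saliency(image, contrast_sigma=8.0, threshold=0.1):
    """Return a saliency map in [0, 1] where whole segments compete."""
    gray = image.astype(float)
    if gray.ndim == 3:                           # collapse RGB to intensity
        gray = gray.mean(axis=2)
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-9)

    # Coarse figure-ground step: center-surround contrast via a
    # difference of Gaussians at two spatial scales.
    center = ndimage.gaussian_filter(gray, contrast_sigma)
    surround = ndimage.gaussian_filter(gray, 4 * contrast_sigma)
    contrast = np.abs(center - surround)

    # Group contiguous high-contrast pixels into candidate "objects".
    labels, n_objects = ndimage.label(contrast > threshold)

    # Each candidate object inherits one saliency value: its mean
    # contrast relative to the global mean contrast.
    saliency = np.zeros_like(gray)
    global_mean = contrast.mean()
    for k in range(1, n_objects + 1):
        mask = labels == k
        saliency[mask] = contrast[mask].mean() - global_mean
    saliency = np.clip(saliency, 0.0, None)
    return saliency / (saliency.max() + 1e-9)
```

The design point this sketch captures is the abstract's central contrast with feature-based models: saliency is assigned per segment after a grouping step, so a uniform but well-segregated region can out-compete a locally high-contrast pixel. The actual model's grouping mechanisms are biologically grounded and considerably richer.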
Ralph Etienne-Cummings, an IEEE and Kavli Frontiers Fellow, received his B.Sc. in physics in 1988 from Lincoln University, Pennsylvania. He completed his M.S.E.E. and Ph.D. in electrical engineering at the University of Pennsylvania in December 1991 and 1994, respectively. Currently, Dr. Etienne-Cummings is a professor of electrical and computer engineering, and computer science, at Johns Hopkins University (JHU), where he chairs the Electrical and Computer Engineering Department. He was the founding Director of the Institute of Neuromorphic Engineering (INE) and currently serves as Treasurer of the INE.