Tsotsos Lab
Menu
  • News
  • People
    • Current Members
    • Lab Alumni
  • Active Research Topics
    • Active Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Complexity
      • Spatial Cognition
      • Visual Search
    • Cognitive Architectures
      • Attention Control
      • Autonomous Vehicles
      • Cognitive Programs
      • Complexity
      • Development
      • Eye Movements
      • Learning by Composition and Exploration
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Computational Neuroscience
      • Attention Control
      • Colour
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Vision Architecture
    • Computer Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Biomedical Applications
      • Colour
      • Complexity
      • Motion
      • Navigation
      • Saliency
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Transformers
      • Vision Architecture
      • Visual Search
    • Human Vision and Visual Behaviour
      • Attention Control
      • Colour
      • Complexity
      • Development
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Visual Attention
      • Attention Control
      • Autonomous Vehicles
      • Complexity
      • Development
      • Eye Movements
      • Saliency
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
    • Visually Guided Robotics
      • Active Recognition
      • Autonomous Vehicles
      • Navigation
      • Visual Search
  • Publications
    • Publications
    • Software
    • Datasets
  • Open Positions
  • Contact

Calden presented “Psychophysical Evaluation of Saliency Algorithms” at VSS 2016


By toli95 | June 14, 2016 | Category Uncategorized

Authors: Calden Wloka, Sang-Ah Yoo, Rakesh Sengupta, Toni Kunic, and John K. Tsotsos
Title: Psychophysical Evaluation of Saliency Algorithms
Abstract: A great deal of effort has been spent evaluating the performance of saliency algorithms at predicting human fixations in natural images. However, many other aspects of human visual attention have received relatively little focus in the saliency literature but have been richly characterized by psychophysical investigations. Bruce et al. [2] have recommended the development of an axiomatic set of model constraints grounded in this body of psychophysical knowledge. We aim to provide a step towards this goal by linking human visual search response time to saliency algorithm output. Duncan and Humphreys [3] theorized that subject response time in visual search tasks is correlated with similarity between search items, with search time increasing both as targets become more similar to distractors and as distractor heterogeneity increases. This result fits well with the widely held notion in the saliency model literature that saliency is largely driven by stimulus uniqueness, but it has not been explicitly tested against the performance of saliency algorithms. To do so systematically, we need a well-characterized human performance curve for a given set of visual search stimuli.

Wolfe [4] provides a list of features which can, given sufficient target-distractor differences, elicit efficient search for singleton targets. These features therefore provide a strong candidate set upon which to test saliency algorithm performance. Arun [1] produced a well-characterized performance curve for the first such feature, orientation, by testing humans over a broad range of target-distractor orientation differences, from 7 to 60 degrees. Here we replicate Arun’s experiment, showing that saliency algorithm performance falls into three broad categories: those which cannot consistently find the target, those which consistently find the target but whose performance does not vary with target-distractor difference, and those which follow a human-like performance curve. We can use these results to guide future saliency model development.
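
As an illustration of the evaluation protocol described above, the sketch below (not the authors’ code) renders singleton orientation-search displays over a range of target-distractor orientation differences, runs a saliency model on each display, and records how often the saliency peak lands on the target. The toy_saliency orientation-contrast function, the display parameters, and all names here are hypothetical stand-ins; in the study itself, published saliency algorithms take the place of that stand-in.

# Illustrative sketch only -- not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter


def draw_bar(canvas, cy, cx, angle_deg, length=15):
    """Draw a thin oriented bar centred at (cy, cx)."""
    t = np.linspace(-length / 2.0, length / 2.0, 2 * length)
    a = np.deg2rad(angle_deg)
    ys = np.clip(np.round(cy + t * np.sin(a)).astype(int), 0, canvas.shape[0] - 1)
    xs = np.clip(np.round(cx + t * np.cos(a)).astype(int), 0, canvas.shape[1] - 1)
    canvas[ys, xs] = 1.0


def make_display(delta_deg, grid=6, cell=40, distractor_deg=90.0, rng=None):
    """Singleton search array: one target bar rotated delta_deg away from the distractors."""
    rng = rng if rng is not None else np.random.default_rng()
    img = np.zeros((grid * cell, grid * cell))
    target_idx = int(rng.integers(grid * grid))
    for i in range(grid * grid):
        cy = (i // grid) * cell + cell // 2
        cx = (i % grid) * cell + cell // 2
        angle = distractor_deg + delta_deg if i == target_idx else distractor_deg
        draw_bar(img, cy, cx, angle)
    ty = (target_idx // grid) * cell + cell // 2
    tx = (target_idx % grid) * cell + cell // 2
    return img, (ty, tx)


def toy_saliency(img, sigma=8):
    """Stand-in orientation-contrast saliency: gradient energy that is misaligned
    with the display's dominant orientation (doubled-angle representation)."""
    gy, gx = np.gradient(gaussian_filter(img, 1.0))
    mag = np.hypot(gy, gx)
    theta = np.arctan2(gy, gx)
    c, s = mag * np.cos(2 * theta), mag * np.sin(2 * theta)
    ref = np.array([c.sum(), s.sum()])
    ref /= np.linalg.norm(ref) + 1e-9
    # Energy component orthogonal to the dominant orientation peaks at the singleton.
    deviation = np.abs(c * ref[1] - s * ref[0])
    return gaussian_filter(deviation, sigma)


def hit_rate(saliency_fn, delta_deg, trials=20, radius=20):
    """Fraction of displays on which the saliency maximum falls within `radius` pixels of the target."""
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(trials):
        img, (ty, tx) = make_display(delta_deg, rng=rng)
        sal = saliency_fn(img)
        py, px = np.unravel_index(int(np.argmax(sal)), sal.shape)
        hits += np.hypot(py - ty, px - tx) <= radius
    return hits / trials


if __name__ == "__main__":
    # Sweep roughly the 7-60 degree range of target-distractor differences and
    # print the model's target-localisation curve, analogous to a human performance curve.
    for delta in (7, 15, 30, 45, 60):
        print(f"delta = {delta:2d} deg   hit rate = {hit_rate(toy_saliency, delta):.2f}")

A curve that rises with the orientation difference would place a model in the third category named above; a flat curve at ceiling or floor would place it in the second or first, respectively.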

References
[1] S. P. Arun. Turning visual search time on its head. Vision Research, 74:86–92, 2012.
[2] Neil D. B. Bruce, Calden Wloka, Nick Frosst, Shan Rahman, and John K. Tsotsos. On computational modeling of visual saliency: Examining what’s right, and what’s left. Vision Research, 116:95–112, 2015.
[3] John Duncan and Glyn W. Humphreys. Visual search and stimulus similarity. Psychological Review, 96(3):433, 1989.
[4] Jeremy M. Wolfe. Visual search. In Harold Pashler, editor, Attention. Psychology Press, 1998.


Recent News


  • Lab members at the VSS conference
  • Congrats to Iuliia Kotseruba on winning the Best Student Paper Award at IV 2024!
  • Lab members at NCRN24
  • Markus Solbach presents “Visuospatial Hypothesize-and-Test Strategies Yield High Accuracy without Training; Their Efficiency Improves with Practice” at RAW 2023
  • Current and former lab members at the VSS conference

University Links

  • Centre for Vision Research
  • Department of Electrical Engineering and Computer Science
  • Lassonde School of Engineering
  • York University
  • Centre for Innovation in Computing at Lassonde
  • Tsotsos Lab on Social Media

    Copyright © 2015 Tsotsos Lab
