The Laboratory for Active and Attentive Vision (LAAV) has its roots in the original Computer Vision Laboratory at the University of Toronto founded by John Tsotsos in 1980. In those years it was part of the Artificial Intelligence Group in the Department of Computer Science. There, Tsotsos also founded the Technical Report Series: Research in Biological and Computational Vision (1984 – 1996). In January 2000, Tsotsos moved to York University to take up the Directorship of the Centre for Vision Research and a portion of that lab followed him. The history of the current lab thus goes back to 1980 and includes a significant number of students, post-docs and publications from the pre-York era.

At York, the Laboratory for Active and Attentive Vision is situated within the Department of Computer Science & Engineering. It is also one of over 35 labs in the much larger Centre for Vision Research. The lab has grown steadily over its history and is now bursting at the seams in rooms 3001 A, B and 3054 of the Lassonde building. With a rich set of international collaborators and well-equipped infrastructure, the lab is an exciting focus for interdisciplinary research on human and primate visual attention and on active vision for robotics. Research is ongoing within four themes: Refinements and Expansions of the Selective Tuning Model (ST) for Visual Attention; Human Experimental Investigations on the Relationship of ST to Biological Vision; Visually-Guided Robotics; and Applications to Aids for the Physically Disabled.

Our Research

  • Visual Attention

    Little agreement exists on the definition, role and mechanisms of visual attention. As elsewhere in neuroscience, computational modelling has an important role to play by bridging the gap between different investigative methods.

  • Robotics

    Active vision allows for feedback between vision and motor commands, enabling a system some control over the subsequent images it gathers. This can have a major impact on the difficulty and design of a vision algorithm. We study how a mobile visual platform perceives, explores, and interacts with its surroundings, with the goal of developing robust visual systems capable of handling unconstrained environments.

  • Computational Neuroscience

    Computational modelling plays an important role in neuroscience: it is uniquely placed to bridge the gaps between current investigative methods and to answer questions that lie beyond the reach of any single one.

  • Computational Vision

    Perception is a form of cognition: it is not sufficient to detect sensory stimuli; one must also understand and interpret them. We develop computational algorithms that solve visual tasks, as well as computational models that further our understanding of visual processing.
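
As a concrete illustration of the selection dynamic that attention models address, here is a minimal winner-take-all loop with inhibition of return. This is a generic textbook sketch, not the Selective Tuning model itself; the function name and the toy saliency map are invented for illustration:

```python
import numpy as np

def winner_take_all(saliency, n_fixations=3, inhibit_radius=2):
    """Pick the n most salient locations in turn, suppressing each
    winner's neighbourhood before the next pick (inhibition of return)."""
    s = saliency.astype(float).copy()
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((int(y), int(x)))
        # Suppress a square neighbourhood around the winner so the
        # next iteration selects a different region.
        s[max(0, y - inhibit_radius):y + inhibit_radius + 1,
          max(0, x - inhibit_radius):x + inhibit_radius + 1] = -np.inf
    return fixations

# Toy saliency map with two peaks.
sal = np.zeros((8, 8))
sal[1, 1] = 0.9
sal[6, 6] = 0.7
print(winner_take_all(sal, n_fixations=2))  # [(1, 1), (6, 6)]
```

Without the inhibition step, every iteration would re-select the same global maximum; suppressing the winner's neighbourhood is what produces a sequence of distinct fixations.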
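
The perception-action feedback that active vision provides can be sketched as a toy fixation loop: sense at the current camera pose, select the most interesting target, and repoint the camera, so each motor command shapes the next image. Everything here is a hypothetical stand-in (a seeded random "scene" in place of real image capture), not the lab's actual system:

```python
import random

def perceive(pan, tilt):
    """Hypothetical stand-in for capturing and analysing a frame at the
    current camera pose; yields (interest, target_pan, target_tilt) triples."""
    rng = random.Random(hash((pan, tilt)))  # deterministic toy "scene"
    return [(rng.uniform(0, 1), rng.uniform(-30, 30), rng.uniform(-20, 20))
            for _ in range(5)]

def active_fixation_loop(steps=4):
    """Sense -> select the most interesting target -> move the camera there."""
    pan, tilt = 0.0, 0.0
    trajectory = [(pan, tilt)]
    for _ in range(steps):
        candidates = perceive(pan, tilt)   # sense at the current pose
        _, pan, tilt = max(candidates)     # select the highest-interest target
        trajectory.append((pan, tilt))     # act: repoint the camera
    return trajectory

print(active_fixation_loop(steps=3))
```

The key property is the loop itself: because `perceive` depends on the current pose, the system partially controls the images it will see next, which is exactly the leverage an active-vision algorithm can exploit.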

Recent News

  • JKT65 Celebration!

    John Tsotsos recently turned 65! To honour John’s many contributions in the visual sciences, a Spoken Festschrift and Celebratory Symposium was held on Saturday, May 12th, 2018 at York University in Toronto, Canada. There were talks from distinguished researchers in areas in which John has contributed, as well as talks by his academic children.   […]

    By tech | August 1, 2018

    Read More

  • The Elephant in the Room

    A demonstration of interesting failures of state-of-the-art object detectors, by Amir Rosenfeld.

    By tech | July 5, 2018

    Read More

  • Iuliia presents Active Fixation Control to Predict Saccade Sequences at CVPR 2018

    Iuliia presented “Active Fixation Control to Predict Saccade Sequences” by Calden Wloka, Iuliia Kotseruba, and John K. Tsotsos at CVPR 2018. An open-access version of the paper is available, and the code is on GitHub.

    By tech | July 3, 2018

    Read More

  • Totally-Looks-Like at CVPR 2018

    Markus Solbach and Amir Rosenfeld presented “Totally-Looks-Like: How Humans Compare, Compared to Machines” at the CVPR 2018 workshop “Mutual Benefits of Cognitive and Computer Vision”. A project page is available.

    By tech | July 3, 2018

    Read More