The Laboratory for Active and Attentive Vision (LAAV) has its roots in the original Computer Vision Laboratory at the University of Toronto, founded by John Tsotsos in 1980. In those years it was part of the Artificial Intelligence Group in the Department of Computer Science. There, Tsotsos also founded the Technical Report Series: Research in Biological and Computational Vision (1984–1996). In January 2000, Tsotsos moved to York University to take up the Directorship of the Centre for Vision Research, and a portion of that lab followed him. The history of the current lab thus goes back to 1980 and includes a significant number of students, post-docs and publications from the pre-York era.

At York, the Laboratory for Active and Attentive Vision is situated within the Department of Computer Science & Engineering. It is also one of over 35 labs in the much larger Centre for Vision Research (cvr.yorku.ca). The lab has grown steadily over its history and is now bursting at the seams of rooms 3001A, 3001B and 3054 in the Lassonde Building. With a rich set of international collaborators and well-equipped infrastructure, the lab is an exciting hub for interdisciplinary research on human and primate visual attention and on active vision for robotics. Research is ongoing within three themes: Refinements and Expansions of the Selective Tuning Model (ST) for Visual Attention; Human Experimental Investigations on the Relationship of ST to Biological Vision; and Visually-Guided Robotics with Application to Aids for the Physically Disabled.


Our Research

  • Visual Attention

    Little agreement exists on the definition, role and mechanisms of visual attention. As elsewhere in neuroscience, computational modelling has an important role to play here by bridging the gap between different investigative methods; a minimal sketch of one common selection mechanism appears after this list.

  • Robotics

    Active vision allows for feedback between vision and motor commands, giving a system some control over the subsequent images it gathers. This can have a major impact on the difficulty and design of a vision algorithm; a toy feedback loop illustrating the idea is sketched after this list. We study how a mobile visual platform perceives, explores, and interacts with its surroundings, with the goal of developing robust visual systems capable of handling unconstrained environments.

  • Computational Neuroscience

    Computational modelling has an important role to play in neuroscience as the only technique that can bridge the gap between current investigative methods and provide answers to questions that lie beyond their individual reach.

  • Computational Vision

    Perception is a form of cognition: it is not sufficient to detect sensory stimuli; one must also understand and interpret them. We develop computational algorithms that solve visual tasks, as well as computational models that further our understanding of visual processing.
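
Where the Visual Attention theme above refers to mechanisms of selection, the following is a minimal, self-contained sketch of one textbook mechanism: a saliency map with winner-take-all selection and inhibition of return. It illustrates the kind of component such models formalize; it is not the Selective Tuning model itself, and every name in it is ours.

    import numpy as np

    def winner_take_all(saliency, inhibition_radius=2):
        """Select the most salient location, then suppress its
        neighbourhood (inhibition of return) so that the next call
        attends somewhere else."""
        # Index of the global maximum of the saliency map.
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        # Zero out a square window around the winner (in place).
        saliency[max(0, y - inhibition_radius):y + inhibition_radius + 1,
                 max(0, x - inhibition_radius):x + inhibition_radius + 1] = 0.0
        return (y, x)

    # Toy saliency map; in practice it would be computed from an image.
    rng = np.random.default_rng(0)
    saliency = rng.random((8, 8))
    print([winner_take_all(saliency) for _ in range(3)])  # successive fixations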
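
The Robotics theme's feedback loop can likewise be made concrete. Below is a deliberately toy, one-dimensional active-vision loop of our own design (a simulated pan axis and target, not the lab's actual platform): the motor command issued at each step determines what the camera sees next.

    def sense(camera_angle, target_angle, field_of_view=30.0):
        """Return the target's offset from the image centre (degrees),
        or None if the target falls outside the field of view."""
        offset = target_angle - camera_angle
        return offset if abs(offset) <= field_of_view / 2 else None

    def active_track(target_angle, camera_angle=0.0, gain=0.5, steps=20):
        for step in range(steps):
            offset = sense(camera_angle, target_angle)
            if offset is None:
                camera_angle += 10.0           # saccade-like search when the target is lost
            else:
                camera_angle += gain * offset  # feedback: rotate to centre the target
            print(f"step {step:2d}: camera at {camera_angle:6.2f} deg")
        return camera_angle

    active_track(target_angle=47.0)

Because sensing and action are coupled, the perception problem changes with each command; this is the control over subsequent images that the description above refers to.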


Recent News

  • Integrating Stereo Vision with a CNN Tracker for a Person-Following Robot

    Abstract: In this paper we introduce a stereo vision based CNN tracker for a person following robot. The tracker is able to track a human in real time using an online convolutional neural network. Our approach enables the robot to follow a target under challenging situations like occlusions, appearance changes, pose changes, crouching, illumination […]

    May 30, 2017

  • Tsotsos Lab at VSS 2017

    The Interaction of Target-Distractor Similarity and Visual Search Efficiency for Basic Features

    Authors: Calden Wloka, Sang-Ah Yoo, Rakesh Sengupta, and John K. Tsotsos

    Abstract: Visual search efficiency is commonly measured by the relationship between subject response time (RT) and display set size. Basic features are visual features for which a singleton target can be found […]

    May 28, 2017

  • Person Following Robot using Selected Online Ada-Boosting with a Stereo Camera

    Abstract: Person-following behaviour is an important task for social robots. To let a robot follow a person, we have to track the target in real time without critical failures. There are many situations in which the robot may lose tracking in a dynamic environment, e.g. occlusion, illumination, pose changes, etc. Often, people use a complex […]

    February 12, 2017