The Laboratory for Active and Attentive Vision (LAAV) has its roots in the original Computer Vision Laboratory at the University of Toronto, founded by John Tsotsos in 1980. In those years it was part of the Artificial Intelligence Group in the Department of Computer Science. There, Tsotsos also founded the Technical Report Series: Research in Biological and Computational Vision (1984–1996). In January 2000, Tsotsos moved to York University to take up the Directorship of the Centre for Vision Research, and a portion of that lab followed him. The history of the current lab thus goes back to 1980 and includes a significant number of students, post-docs, and publications from the pre-York era.

At York, the Laboratory for Active and Attentive Vision is situated within the Department of Computer Science & Engineering. It is also one of over 35 labs in the much larger Centre for Vision Research (cvr.yorku.ca). The lab has grown steadily over its history and is now bursting at the seams in rooms 3001A, 3001B, and 3054 of the Lassonde building. With a rich set of international collaborators and well-equipped infrastructure, the lab is an exciting hub for interdisciplinary research on human and primate visual attention and on active vision for robotics. Research is ongoing within several themes: refinements and expansions of the Selective Tuning (ST) model of visual attention; human experimental investigations of the relationship of ST to biological vision; and visually guided robotics with applications to aids for the physically disabled.


Our Research

  • Visual Attention

    Little agreement exists on the definition, role and mechanisms of visual attention. As elsewhere in neuroscience, computational modelling has an important role to play by bridging the gap between different investigative methods.

  • Robotics

    Active vision allows for feedback between vision and motor commands, giving a system some control over the subsequent images it gathers. This can have a major impact on the difficulty and design of a vision algorithm. We study how a mobile visual platform perceives, explores, and interacts with its surroundings, with the goal of developing robust visual systems capable of handling unconstrained environments.

  • Computational Neuroscience

    Computational modeling has an important role to play in neuroscience: it is the only technique that can bridge the gap between current investigative methods and provide answers to questions beyond their individual reach.

  • Computational Vision

    Perception is a form of cognition; it is not sufficient to detect sensory stimuli, one must also understand and interpret them. We develop computational algorithms that solve visual tasks, as well as computational models that further our understanding of visual processing.


Recent News

  • Tsotsos Lab at the Lassonde Open House

    On April 28th our lab welcomed a number of high school students interested in joining the Lassonde School of Engineering at the Lassonde Open House. The EECS department organized lab tours where students had an opportunity to see exciting research happening here first-hand, as well as hear about undergraduate opportunities at the School. Markus Solbach, […]

    By tech | May 2, 2018


  • Scene Classification in Indoor Environments for Robots using Word Embeddings

    Abstract: Scene classification has been addressed with numerous techniques in the computer vision literature. However, with the increasing size of datasets in the field, it has become difficult to achieve high accuracy in the context of robotics. We overcome this problem and obtain good results through our approach, in which we propose to address […]

    By tech | April 1, 2018


  • Totally-Looks-Like: How Humans Compare, Compared to Machines

    Paper (arXiv): Totally-Looks-Like: How Humans Compare, Compared to Machines. Project website. Perceptual judgment of image similarity by humans relies on rich internal representations ranging from low-level features to high-level concepts, scene properties, and even cultural associations. Existing methods and datasets attempting to explain perceived similarity use stimuli which arguably do not cover the […]

    By tech | March 8, 2018
