Tsotsos Lab
Menu
  • News
  • People
    • Current Members
    • Lab Alumni
  • Active Research Topics
    • Active Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Complexity
      • Spatial Cognition
      • Visual Search
    • Cognitive Architectures
      • Attention Control
      • Autonomous Vehicles
      • Cognitive Programs
      • Complexity
      • Development
      • Eye Movements
      • Learning by Composition and Exploration
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Computational Neuroscience
      • Attention Control
      • Colour
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Vision Architecture
    • Computer Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Biomedical Applications
      • Colour
      • Complexity
      • Motion
      • Navigation
      • Saliency
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Transformers
      • Vision Architecture
      • Visual Search
    • Human Vision and Visual Behaviour
      • Attention Control
      • Colour
      • Complexity
      • Development
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Visual Attention
      • Attention Control
      • Autonomous Vehicles
      • Complexity
      • Development
      • Eye Movements
      • Saliency
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
    • Visually Guided Robotics
      • Active Recognition
      • Autonomous Vehicles
      • Navigation
      • Visual Search
  • Publications
    • Publications
    • Software
    • Datasets
  • Open Positions
  • Contact

Blog


Paria Mehrani presents “Border Ownership Assignment based on Dorsal and Horizontal Modulations” at VSS 2018

By tech | June 25, 2018 | Category Presentations

Venue: VSS 2018, Florida
Paper: Border Ownership Assignment based on Dorsal and Horizontal Modulations
Poster: https://osf.io/p4ygh/
Abstract: The face-vase illusion introduced by Rubin (Rubin, 1915) demonstrates how one can switch back and forth between different interpretations by assigning borders to either side of contours in an image. Border ownership assignment is an important step in perception of […]

Read More

Tsotsos Lab at the Lassonde Open House

By tech | May 2, 2018 | Category Uncategorized

On April 28th our lab welcomed a number of high school students interested in joining the Lassonde School of Engineering at the Lassonde Open House. The EECS department organized lab tours where students had an opportunity to see exciting research happening here first-hand, as well as hear about undergraduate opportunities at the School. Markus Solbach, […]

Read More

Presenting “Random Polyhedral Scenes: An Image Generator for Active Vision System Experiments”

By tech | April 6, 2018 | Category Announcements

We present a Polyhedral Scene Generator system that creates a random scene based on a few user parameters, renders the scene from random viewpoints, and creates a dataset containing the renderings and corresponding annotation files. We think that this generator will help to understand how a program could parse a scene if it had […]
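
The excerpt describes the pipeline only at a high level. Purely as an illustration of what such a generate-render-annotate pipeline can look like, and not the lab's actual Polyhedral Scene Generator (whose scene parameters, renderer, and annotation format are not given in this excerpt), here is a minimal Python sketch; the shape vocabulary, viewpoint sampling, and JSON annotation schema are all assumptions:

# Hypothetical sketch of a random-scene dataset pipeline, for illustration only.
# The parameters, viewpoint sampling, and annotation schema are assumptions.
import json
import math
import random
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Polyhedron:
    shape: str           # e.g. "cube", "prism", "pyramid" (assumed vocabulary)
    position: tuple      # (x, y, z) in scene coordinates
    scale: float
    rotation_deg: float  # rotation about the vertical axis

def random_scene(num_objects, rng):
    """Create a random scene from a couple of user parameters (object count, RNG)."""
    shapes = ["cube", "prism", "pyramid"]
    return [
        Polyhedron(
            shape=rng.choice(shapes),
            position=(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0.0, 0.5)),
            scale=rng.uniform(0.1, 0.4),
            rotation_deg=rng.uniform(0.0, 360.0),
        )
        for _ in range(num_objects)
    ]

def random_viewpoint(rng, radius=3.0):
    """Sample a camera position on a hemisphere around the scene, looking at the origin."""
    azimuth = rng.uniform(0.0, 2.0 * math.pi)
    elevation = rng.uniform(0.1, math.pi / 2.0)
    return {
        "x": radius * math.cos(elevation) * math.cos(azimuth),
        "y": radius * math.cos(elevation) * math.sin(azimuth),
        "z": radius * math.sin(elevation),
        "look_at": [0.0, 0.0, 0.0],
    }

def build_dataset(out_dir, num_scenes, views_per_scene, seed=0):
    """Write one JSON annotation file per scene; a real system would also render one image per viewpoint."""
    rng = random.Random(seed)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(num_scenes):
        scene = random_scene(rng.randint(2, 6), rng)
        annotation = {
            "scene_id": i,
            "objects": [asdict(p) for p in scene],
            "viewpoints": [random_viewpoint(rng) for _ in range(views_per_scene)],
            # a renderer would save the corresponding images here
        }
        (out / f"scene_{i:04d}.json").write_text(json.dumps(annotation, indent=2))

if __name__ == "__main__":
    build_dataset("polyhedral_dataset", num_scenes=3, views_per_scene=5)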

Read More

Scene Classification in Indoor Environments for Robots using Word Embeddings

By tech | April 1, 2018 | Category Uncategorized

Abstract: Scene classification has been addressed with numerous techniques in the computer vision literature. However, with the increasing size of datasets in the field, it has become difficult to achieve high accuracy in the context of robotics. We overcome this problem and obtain good results. In our approach, we propose to address […]

Read More

Totally-Looks-Like: How Humans Compare, Compared to Machines

By tech | March 8, 2018 | Category Uncategorized

Paper (arXiv): Totally-Looks-Like: How Humans Compare, Compared to Machines
Project website.
Perceptual judgment of image similarity by humans relies on rich internal representations ranging from low-level features to high-level concepts, scene properties, and even cultural associations. Existing methods and datasets attempting to explain perceived similarity use stimuli which arguably do not cover the […]

Read More

Publications – 2018

By tech | March 6, 2018 | Category Publications

A. Rasouli, I. Kotseruba, and J. K. Tsotsos, “It’s Not All About Size: On the Role of Data Properties in Pedestrian Detection,” In Proc. European Conference on Computer Vision (ECCV) Workshop, 2018, pp. 210-225.
A. Rasouli, I. Kotseruba, and J. K. Tsotsos, “Towards Social Autonomous Vehicles: Understanding Pedestrian-Driver Interactions,” In Proc. International Conference on Intelligent Transportation Systems […]

Read More

Localization Among Humans

By tech | February 10, 2018 | Category Uncategorized

Abstract: Indoor localization is a primary task for social robots. We are particularly interested in how to solve this problem for a mobile robot using primarily vision sensors. This work examines a critical issue related to generalizing approaches for static environments to dynamic ones: (i) it considers how to deal with dynamic users in the […]

Read More

Markus presents “Vision-Based Fallen Person Detection for the Elderly” at ICCV 2017

By tech | October 13, 2017 | Category Presentations

Venue: ICCV 2017 Workshop: ACVR, Venice, 2017
Paper: Vision-Based Fallen Person Detection for the Elderly
Project link: https://github.com/TsotsosLab/fallen-person-detector
Abstract: Falls are serious and costly for elderly people. The Centers for Disease Control and Prevention of the US reports that millions of older people, 65 and older, fall at least once each year. Serious injuries such […]

Read More

Paria Mehrani presents “A Hierarchical Model for Border Ownership” at CVR 2017

By tech | July 20, 2017 | Category Presentations

Venue: CVR 2017, York University
Abstract: Experiments on the visual cortex show the existence of border ownership (BOS) neurons in V1 and V2. The responses of these neurons depend not only on the orientation of borders, but also on which side of the border the figure lies. Neurophysiological studies show that BOS cell responses depend on […]

Read More

Congratulations to Bao Chen and Raghavendar Sahdev on the 2017 CIPPRS/ACTIRF Best Robotics Paper Award

By tech | June 2, 2017 | Category Uncategorized

Bao Chen, Raghavendar Sahdev, and John Tsotsos were co-recipients of the 2017 CIPPRS/ACTIRF Best Robotics Paper Award at the Conference on Computer and Robot Vision (CRV). Their paper describes efforts to develop a person-following robot using stereo cameras. More details about the paper are available here.

Read More


Recent News


  • Lab members at the VSS conference
  • Publications – 2025
  • Congrats to Iuliia Kotseruba on winning the Best Student Paper Award at IV 2024!
  • Lab members at NCRN24
  • Markus Solbach presents “Visuospatial Hypothesize-and-Test Strategies Yield High Accuracy without Training; Their Efficiency Improves with Practice” at RAW 2023

University Links

  • Centre for Vision Research
  • Department of Electrical Engineering and Computer Science
  • Lassonde School of Engineering
  • York University
  • Centre for Innovation in Computing at Lassonde
  • Tsotsos Lab on Social Media

    Copyright © 2015 Tsotsos Lab
