Tsotsos Lab
Menu
  • News
  • People
    • Current Members
    • Lab Alumni
  • Active Research Topics
    • Active Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Complexity
      • Spatial Cognition
      • Visual Search
    • Cognitive Architectures
      • Attention Control
      • Autonomous Vehicles
      • Cognitive Programs
      • Complexity
      • Development
      • Eye Movements
      • Learning by Composition and Exploration
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Computational Neuroscience
      • Attention Control
      • Colour
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Vision Architecture
    • Computer Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Biomedical Applications
      • Colour
      • Complexity
      • Motion
      • Navigation
      • Saliency
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Transformers
      • Vision Architecture
      • Visual Search
    • Human Vision and Visual Behaviour
      • Attention Control
      • Colour
      • Complexity
      • Development
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Visual Attention
      • Attention Control
      • Autonomous Vehicles
      • Complexity
      • Development
      • Eye Movements
      • Saliency
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
    • Visually Guided Robotics
      • Active Recognition
      • Autonomous Vehicles
      • Navigation
      • Visual Search
  • Publications
    • Publications
    • Software
    • Datasets
  • Open Positions
  • Contact

Blog


Integrating Stereo Vision with a CNN Tracker for a Person-Following Robot

By tech | May 30, 2017 | Category Uncategorized

Abstract: In this paper we introduce a stereo-vision-based CNN tracker for a person-following robot. The tracker is able to track a human in real time using an online convolutional neural network. Our approach enables the robot to follow a target under challenging conditions such as occlusions, appearance changes, pose changes, crouching, illumination changes […]
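As a rough illustration of the general idea only (not the paper's implementation), the sketch below combines a 2-D tracker bounding box with stereo depth to produce simple velocity commands for a person-following robot; the function, gains, and limits are hypothetical placeholders.

    import numpy as np

    def follow_step(bbox, depth_map, image_width, target_distance=1.5):
        """Map a tracked bounding box and a stereo depth map to (linear, angular) velocity."""
        x, y, w, h = bbox
        cx = x + w / 2.0                                     # horizontal centre of the person
        roi = depth_map[int(y):int(y + h), int(x):int(x + w)]
        distance = float(np.median(roi[np.isfinite(roi)]))   # median depth is robust to background pixels
        # Proportional control: keep the person centred and at target_distance metres.
        angular = -0.002 * (cx - image_width / 2.0)
        linear = 0.5 * (distance - target_distance)
        return float(np.clip(linear, -0.5, 0.8)), float(np.clip(angular, -1.0, 1.0))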

Read More

Tsotsos Lab at VSS 2017

By tech | May 28, 2017 | Category Uncategorized

The Interaction of Target-Distractor Similarity and Visual Search Efficiency for Basic Features. Authors: Calden Wloka, Sang-Ah Yoo, Rakesh Sengupta, and John K. Tsotsos. Abstract: Visual search efficiency is commonly measured by the relationship between subject response time (RT) and display set size. Basic features are visual features for which a singleton target can be found […]
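The abstract refers to the standard efficiency measure: the slope of response time against display set size. A minimal sketch of how that slope (in ms per item) is estimated with a least-squares fit is shown below; the numbers are made up for illustration.

    import numpy as np

    set_sizes = np.array([4, 8, 12, 16])             # number of items in the display
    mean_rt_ms = np.array([520., 545., 570., 600.])  # hypothetical mean response times

    slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)
    print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
    # Shallow slopes are conventionally read as efficient (pop-out) search;
    # steeper slopes indicate less efficient, more serial-like search.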

Read More

Markus presents “Visuospatial Functionality for Active Observers: The Same-Different Task” at NCFRN 2017

By tech | May 2, 2017 | Category Presentations

Visuospatial Functionality for Active Observers: The Same-Different Task. Venue: NSERC Canadian Field Robotics Network, Ottawa, 2017. Abstract: Understanding vision in computational terms brings us closer to understanding how the brain works, and transferring this knowledge to build machines whose visual systems approach human ability would be extremely useful in a great number of applications […]

Read More

Publications – 2017

By tech | March 6, 2017 | Category Publications

A. Rasouli, I. Kotseruba, and J. K. Tsotsos, “Agreeing to cross: How drivers and pedestrians communicate,” In Proc. Intelligent Vehicles Symposium (IV), 2017, pp. 264–269. A. Rasouli, I. Kotseruba, and J. K. Tsotsos, “Are they going to cross? a benchmark dataset and baseline for pedestrian crosswalk behavior,” In Proc. International Conference on Computer Vision (ICCV) […]

Read More

Person Following Robot using Selected Online Ada-Boosting using a Stereo Camera

By tech | February 12, 2017 | Category Uncategorized

Abstract: Person-following behaviour is an important task for social robots. To let a robot follow a person, we have to track the target in real time without critical failures. There are many situations in which the robot can lose the track in a dynamic environment, e.g. occlusion, illumination changes, pose changes, etc. Often, people use a complex […]
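As a hedged illustration of the online-boosting idea behind such trackers (not the paper's actual algorithm), the sketch below keeps running error statistics for a pool of weak classifiers and scores candidate patches with the currently best-performing subset; the classes, features, and thresholds are placeholders.

    import numpy as np

    class WeakLearner:
        def __init__(self, feature_idx, threshold):
            self.feature_idx, self.threshold = feature_idx, threshold
            self.correct, self.total = 1.0, 2.0         # smoothed running counts

        def predict(self, features):
            return 1 if features[self.feature_idx] > self.threshold else -1

        def update(self, features, label):              # label: +1 target, -1 background
            self.total += 1.0
            if self.predict(features) == label:
                self.correct += 1.0

        @property
        def error(self):
            return 1.0 - self.correct / self.total

    def strong_score(pool, features, k=10):
        """Score a patch with the k currently most accurate weak learners."""
        best = sorted(pool, key=lambda w: w.error)[:k]
        alphas = [np.log((1.0 - w.error) / max(w.error, 1e-6)) for w in best]
        return sum(a * w.predict(features) for a, w in zip(alphas, best))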

Read More

A cluster of conspicuity representations for eye fixation selection.

By tech | January 23, 2017 | Category Uncategorized

J.K. Tsotsos, Y. Kotseruba, and C. Wloka (2016) A cluster of conspicuity representations for eye fixation selection. Society for Neuroscience (SfN). Abstract: A computational explanation of how visual attention, interpretation of visual stimuli, and eye movements combine to produce visual behavior seems elusive. Here, we focus on one component: how selection is accomplished for […]

Read More

New Dataset: Joint Attention in Autonomous Driving (JAAD)

By tech | January 3, 2017 | Category Uncategorized

JAAD is a new dataset (by I. Kotseruba, A. Rasouli, and J.K. Tsotsos) for studying joint attention in the context of autonomous driving. It contains an annotated collection of short video clips representing scenes typical of everyday urban driving in various weather conditions. The JAAD dataset contains 346 high-resolution video clips (most are 5-10 sec long) extracted from […]
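A minimal sketch of stepping through such clips with OpenCV is given below; the directory layout (a folder of .mp4 files) is an assumption for illustration, not the dataset's documented structure.

    import glob
    import cv2

    for clip_path in sorted(glob.glob("JAAD/clips/*.mp4")):   # illustrative path
        cap = cv2.VideoCapture(clip_path)
        n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        fps = cap.get(cv2.CAP_PROP_FPS)
        print(f"{clip_path}: {n_frames} frames at {fps:.0f} fps "
              f"(~{n_frames / max(fps, 1.0):.1f} s)")
        cap.release()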

Read More

Indoor Place Recognition System for Localization of Mobile Robots

By tech | December 27, 2016 | Category Uncategorized

The dataset contains 17 different places collected using 2 different robots, the virtualMe and the Pioneer. (Raghavender Sahdev, John K. Tsotsos.) More details at: http://www.raghavendersahdev.com/place-recognition.html
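As an illustration only (not the authors' method), a simple nearest-neighbour baseline one might run on such a place-recognition dataset is sketched below: each image is represented by a colour histogram and a query is labelled by its closest training image. Paths and labels are placeholders.

    import cv2
    import numpy as np

    def colour_histogram(path, bins=32):
        img = cv2.imread(path)
        hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3, [0, 256] * 3).flatten()
        return hist / (hist.sum() + 1e-9)               # normalise to compare across image sizes

    def recognize_place(query_path, train_paths, train_labels):
        q = colour_histogram(query_path)
        dists = [np.linalg.norm(q - colour_histogram(p)) for p in train_paths]
        return train_labels[int(np.argmin(dists))]      # label of the nearest training image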

Read More

Beyond slots and resources: an integrative approach to visual working memory.

By tech | December 23, 2016 | Category Uncategorized

R. Sengupta, J.K. Tsotsos, S-A. Yoo, C. Wloka, and T. Kunic (2016) Beyond slots and resources: an integrative approach to visual working memory. Society for Neuroscience (SfN). Abstract: In order to perform everyday visual tasks we store temporary sensory information in visual working memory (VWM). In spite of considerable neurophysiological and psychophysical data, the actual […]

Read More

Calden and Nick participate in NeuroTechTO Debate

By tech | June 20, 2016 | Category Uncategorized

Title: Creating Intelligence: A NeuroTechTO Debate. Date: June 23rd, 2016. Panel: Francis Jeanson, Nick Frosst, Calden Wloka, and Luca Pisterzi. Calden Wloka, along with lab alumnus Nick Frosst (now at Google Canada), Francis Jeanson (Ontario Brain Institute), and Luca Pisterzi (Canadian Partnership Against Cancer), took part in a panel discussion on the role of AI […]

Read More


Recent News


  • Lab members at the VSS conference
  • Publications – 2025
  • Congrats to Iuliia Kotseruba on winning the Best Student Paper Award at IV 2024!
  • Lab members at NCRN24
  • Markus Solbach presents “Visuospatial Hypothesize-and-Test Strategies Yield High Accuracy without Training; Their Efficiency Improves with Practice” at RAW 2023

University Links

  • Centre for Vision Research
  • Department of Electrical Engineering and Computer Science
  • Lassonde School of Engineering
  • York University
  • Centre for Innovation in Computing at Lassonde
  • Tsotsos Lab on Social Media
