Tsotsos Lab

Towards an Intelligent Driver for an Autonomous Car


There are a number of active investigations into various aspects of autonomous driving, briefly summarized below:

Pedestrian behavior understanding

The goal of this project is to observe and understand pedestrian actions at the time of crossing and to identify the factors that influence how pedestrians make crossing decisions. We intend to incorporate these factors into predictive models in order to improve the prediction of pedestrian behavior.

Pedestrian intention estimation

The objective of this project is to develop methods to predict the underlying intention of pedestrians on the road. Understanding intention helps distinguish pedestrians who will potentially cross the street from those who will not, e.g. those waiting for a bus. To achieve this objective, we want to establish a baseline by asking human participants to observe pedestrians under various conditions and tell us what the pedestrians' intentions were. We want to use this information to train an intention estimation model and examine how it can improve the prediction of pedestrians' trajectories and actions.

Pedestrian crossing action and trajectory prediction for autonomous vehicles

Different approaches using recurrent neural networks within encoder-decoder ensembles are being designed, implemented and evaluated to predict the future crossing/not-crossing actions of pedestrians, as well as their future trajectories in both 2D and 3D, using a monocular camera onboard a vehicle and features such as appearance, pedestrian location and ego-vehicle dynamics. The main goal is to devise a system capable of inferring the future behaviours and locations of pedestrians to improve the safety of current advanced driver assistance systems and autonomous vehicles.

This work is in collaboration with Dr. David Fernández Llorca, University of Alcalá, Alcalá de Henares (Madrid), Spain.
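
As a rough illustration of this kind of model (not the lab's actual implementation; module names, input features and dimensions are assumptions), a recurrent encoder-decoder for joint crossing prediction and trajectory forecasting might be sketched in PyTorch as follows:

```python
# Minimal sketch: a GRU encoder reads a short history of pedestrian bounding
# boxes plus ego-vehicle speed; a GRU decoder rolls out future boxes, and a
# separate head scores crossing / not crossing. Illustrative only.
import torch
import torch.nn as nn


class CrossingTrajectoryPredictor(nn.Module):
    def __init__(self, obs_dim=5, hidden=128, pred_len=15):
        super().__init__()
        self.pred_len = pred_len
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.decoder = nn.GRUCell(4, hidden)     # feeds back predicted boxes
        self.traj_head = nn.Linear(hidden, 4)    # future box (x, y, w, h)
        self.action_head = nn.Linear(hidden, 1)  # crossing / not-crossing logit

    def forward(self, obs):
        # obs: (batch, obs_len, 5) = past boxes (x, y, w, h) + ego-vehicle speed
        _, h = self.encoder(obs)
        h = h.squeeze(0)                         # (batch, hidden)
        crossing_logit = self.action_head(h)
        box = obs[:, -1, :4]                     # start from last observed box
        future = []
        for _ in range(self.pred_len):
            h = self.decoder(box, h)
            box = self.traj_head(h)
            future.append(box)
        return crossing_logit, torch.stack(future, dim=1)


# Example: 0.5 s of observations at 30 fps, predicting the next 15 frames.
model = CrossingTrajectoryPredictor()
obs = torch.randn(8, 15, 5)
logit, trajectory = model(obs)
print(logit.shape, trajectory.shape)  # torch.Size([8, 1]) torch.Size([8, 15, 4])
```

In practice, richer inputs such as CNN appearance features would be fused into the encoder state, and ensembles of such encoder-decoders would be combined.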

Lane change prediction for autonomous vehicles

This project addresses lane change prediction for surrounding vehicles using appearance, local context and optical flow features. Images from a frontal-view camera onboard a vehicle are used as the main source of information. Several two-stream CNN-based architectures are being implemented and evaluated. The final goal is to devise a system able to infer future lane changes of surrounding vehicles for use in autonomous driving.

This work is in collaboration with Dr. David Fernández Llorca, University of Alcalá, Alcalá de Henares (Madrid), Spain.
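
For illustration only (the backbone, number of stacked flow frames and three-class output are assumptions, not the architectures under evaluation), a two-stream network of this kind could be sketched in PyTorch as:

```python
# Minimal sketch: one stream encodes an appearance crop of a surrounding
# vehicle, the other encodes stacked optical-flow maps of the same region;
# fused features predict {left change, right change, no change}.
import torch
import torch.nn as nn
from torchvision import models


class TwoStreamLaneChangeNet(nn.Module):
    def __init__(self, flow_frames=5, num_classes=3):
        super().__init__()
        self.appearance = models.resnet18(weights=None)
        self.appearance.fc = nn.Identity()       # 512-d appearance features
        self.motion = models.resnet18(weights=None)
        # optical-flow input: 2 channels (dx, dy) per stacked frame
        self.motion.conv1 = nn.Conv2d(2 * flow_frames, 64, kernel_size=7,
                                      stride=2, padding=3, bias=False)
        self.motion.fc = nn.Identity()            # 512-d motion features
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, rgb_crop, flow_stack):
        feats = torch.cat([self.appearance(rgb_crop),
                           self.motion(flow_stack)], dim=1)
        return self.classifier(feats)


model = TwoStreamLaneChangeNet()
rgb = torch.randn(4, 3, 224, 224)    # vehicle crop with local context
flow = torch.randn(4, 10, 224, 224)  # 5 stacked (dx, dy) flow maps
print(model(rgb, flow).shape)        # torch.Size([4, 3])
```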

Task-based attention for driving

This project will explore explicit modelling of the driver's attention, such as task-based saliency, as well as various implicit attention mechanisms common in the deep learning literature. The goal is to develop an approach that can localize and prioritize objects (or object parts) relevant for driving and potentially lead to performance improvements on visual tasks involved in driving, such as road user action prediction and object detection. Even though we do not aim for a biologically realistic solution, we plan to collect human behavioural data (in-lab and in-vehicle) that can be used for training and evaluating the algorithms.
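
As a minimal sketch of what an implicit, task-modulated attention mechanism could look like (the layer, task vocabulary and dimensions are illustrative assumptions, not the approach under development):

```python
# Minimal sketch: a learned task embedding (e.g. "follow lane", "turn left")
# reweights convolutional features so that task-relevant image regions receive
# higher weight before downstream prediction. Illustrative only.
import torch
import torch.nn as nn


class TaskAttention(nn.Module):
    def __init__(self, channels=256, num_tasks=4):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, channels)
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats, task_id):
        # feats: (batch, C, H, W) backbone features; task_id: (batch,) task index
        task = self.task_embed(task_id)[:, :, None, None]  # (batch, C, 1, 1)
        attn = torch.sigmoid(self.score(feats * task))     # (batch, 1, H, W)
        return feats * attn, attn                          # attended features + map


layer = TaskAttention()
feats = torch.randn(2, 256, 28, 28)
task_id = torch.tensor([0, 2])         # e.g. 0 = follow lane, 2 = turn left
attended, attn_map = layer(feats, task_id)
print(attended.shape, attn_map.shape)  # (2, 256, 28, 28) (2, 1, 28, 28)
```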

