Tsotsos Lab
Menu
  • News
  • People
    • Current Members
    • Lab Alumni
  • Active Research Topics
    • Active Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Complexity
      • Spatial Cognition
      • Visual Search
    • Cognitive Architectures
      • Attention Control
      • Autonomous Vehicles
      • Cognitive Programs
      • Complexity
      • Development
      • Eye Movements
      • Learning by Composition and Exploration
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Computational Neuroscience
      • Attention Control
      • Colour
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Vision Architecture
    • Computer Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Biomedical Applications
      • Colour
      • Complexity
      • Motion
      • Navigation
      • Saliency
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Vision Architecture
      • Visual Search
    • Human Vision and Visual Behaviour
      • Attention Control
      • Colour
      • Complexity
      • Development
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Visual Attention
      • Attention Control
      • Autonomous Vehicles
      • Complexity
      • Development
      • Eye Movements
      • Saliency
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
    • Visually Guided Robotics
      • Active Recognition
      • Autonomous Vehicles
      • Navigation
      • Visual Search
  • Publications
    • Publications
    • Software
    • Datasets
  • Open Positions
  • Contact

Person Following Robot using Selected Online Ada-Boosting with a Stereo Camera


By tech | February 12, 2017 | Category Uncategorized

Abstract: Person following is an important behaviour for social robots. To let a robot follow a person, the target must be tracked in real time without critical failures. There are many situations in a dynamic environment where the robot can lose the target, e.g. occlusion, illumination changes, and pose changes. A common response is to use a complex tracking algorithm to improve robustness; the trade-off is that such approaches may not run in real time on modern robots. In this paper, we propose a modified tracking algorithm that does. We also build a challenging dataset for the person-following task that covers situations such as squatting, partial and complete occlusion of the target, people wearing similar clothes, the person facing towards or away from the robot, and normal walking.
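The online Ada-Boosting idea behind such a tracker can be sketched as follows. This is a minimal illustration under assumed details, not the authors' implementation: class names, the threshold-based weak learners, and the pool/selection sizes are all hypothetical. Image patches labelled as target (+1) or background (-1) update a pool of weak classifiers online, and the lowest-error weak classifiers are selected into the strong classifier that scores candidate patches in the next frame.

```python
import math
import random

class WeakClassifier:
    """Thresholds one feature; its error rate is estimated online."""
    def __init__(self, feature_idx, threshold):
        self.feature_idx = feature_idx
        self.threshold = threshold
        self.correct = 1.0  # smoothed counts to avoid zero division
        self.wrong = 1.0

    def predict(self, features):
        return 1 if features[self.feature_idx] > self.threshold else -1

    def update(self, features, label):
        # Online update of the running error estimate.
        if self.predict(features) == label:
            self.correct += 1
        else:
            self.wrong += 1

    @property
    def error(self):
        return self.wrong / (self.correct + self.wrong)

class OnlineBoostingTracker:
    """Pool of weak classifiers; the best few are selected each frame."""
    def __init__(self, num_features, pool_size=50, num_selected=10, seed=0):
        rng = random.Random(seed)
        self.pool = [WeakClassifier(rng.randrange(num_features), rng.random())
                     for _ in range(pool_size)]
        self.num_selected = num_selected

    def update(self, features, label):
        # label: +1 for a patch on the target, -1 for background.
        for weak in self.pool:
            weak.update(features, label)

    def score(self, features):
        # Strong classifier: weighted vote of the lowest-error weak classifiers.
        selected = sorted(self.pool, key=lambda w: w.error)[:self.num_selected]
        total = 0.0
        for weak in selected:
            alpha = 0.5 * math.log((1 - weak.error) / max(weak.error, 1e-9))
            total += alpha * weak.predict(features)
        return total
```

In a real tracker the features would come from the image (e.g. Haar-like responses inside candidate boxes), and the highest-scoring candidate box becomes the new target location.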

Download links: dataset, paper, presentation.

This paper won the Best Robotics Paper Award at the AI/GI/CRV conference (14th Conference on Computer and Robot Vision, Edmonton, May 16-19, 2017).

If you use the dataset in your research please cite the following paper:

Bao Xin Chen, Raghavender Sahdev and John K. Tsotsos, "Person Following Robot using Selected Online Ada-Boosting with a Stereo Camera," in the 14th Conference on Computer and Robot Vision (CRV), Edmonton, Alberta, May 16-19, 2017.

Robot following a person under different conditions (varying poses, motions, illumination conditions) in different environments:

Appearance changes and same-clothes occlusions. The robot follows the correct person even when another person wearing the same clothes occludes the target. The sequence also shows appearance changes: the target removes his jacket and, after some time, puts it back on. The robot follows the correct target the whole time.

In a university hallway. The robot follows the person in a university hallway. The target is occluded by another person, picks up a bag, later removes the bag, and continues to walk.

Multiple crossings. A challenging sequence built to show the robot following the person through many crossings with partial and complete occlusions.

From a hallway into an elevator. The robot follows the person from a hallway into an elevator. The target is occluded by another person multiple times.

Lecture hall. The robot follows a person from a corridor into a lecture hall; the person sits on a chair and gets up, and the target is also partially and completely occluded by another person.

Hallway. The robot follows a person in a university hallway; the target picks up a bag, then removes it, and finally walks in a crouched (duck-walk) posture.
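Because the method uses a stereo camera, one way depth can help in the scenes above (occlusions, people in similar clothes) is to gate candidate detections by depth continuity: with a pinhole stereo rig, depth follows from disparity as Z = f·B/d, and a candidate whose depth jumps too far from the last known target depth can be rejected. The sketch below is hypothetical; the function names, parameters, and the 0.5 m threshold are illustrative and not taken from the paper.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: depth Z = f * B / d (focal length in pixels,
    baseline in metres, disparity in pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def gate_candidates(candidates, target_depth_m, max_jump_m=0.5):
    """Keep only candidate detections whose depth stays close to the
    last known target depth; distant distractors are rejected."""
    return [c for c in candidates
            if abs(c["depth"] - target_depth_m) <= max_jump_m]
```

For example, with a 500-pixel focal length and a 10 cm baseline, a 100-pixel disparity puts a candidate at 0.5 m; a person standing a metre behind the target would then be filtered out before the appearance model is even consulted.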

8 thoughts on “Person Following Robot using Selected Online Ada-Boosting with a Stereo Camera”

  • Paul McElroy says:
    April 24, 2017 at 12:53 am

    Do you know when the paper for this will be available? Also, will you publish any code? Thank you!

    Reply
    • Raghavender Sahdev says:
      April 25, 2017 at 10:31 pm

      Thank you for your interest in our work Paul! The paper was recently accepted at the 14th Conference on Computer and Robot Vision (May 16-19, 2017 in Edmonton), and the paper will be available immediately after the conference. We will keep you posted regarding the code!

      Reply
      • Paul McElroy says:
        May 3, 2017 at 2:25 pm

        Thank you! I look forward to it!

        Reply
  • Paul McElroy says:
    May 22, 2017 at 2:22 pm

    Is this paper available now? Thank you!

    Reply
    • Raghavender Sahdev says:
      May 25, 2017 at 10:06 pm

      Hi Paul yes the paper is available for download, here is the link: http://www.raghavendersahdev.com/uploads/3/9/6/2/39623741/person_following_robot_crv-2017.pdf
      please cite it if you use it 🙂
      Let us know if you have any further questions regarding the paper.

      Reply
  • Eric says:
    September 8, 2017 at 1:23 am

    Hello, will the code be available? Thank you.

    Reply
    • Raghavender Sahdev says:
      September 9, 2017 at 3:46 pm

      Hi Eric, thank you for your interest in our work! We do not currently have a release timeline, but we will let you know by early October.

      Reply
      • Eric says:
        October 11, 2017 at 10:44 am

        Hello, have you decided to make the code public or not? Thank you!

        Reply



    Copyright © 2015 Tsotsos Lab
