Tsotsos Lab

Datasets


Joint Attention in Autonomous Driving (JAAD)

JAAD is a new dataset (by I. Kotseruba, A. Rasouli, and J. K. Tsotsos) for studying joint attention in the context of autonomous driving. It contains an annotated collection of short video clips representing scenes typical of everyday urban driving in various weather conditions.

The JAAD dataset contains 346 high-resolution video clips (most 5–10 seconds long) extracted from approximately 240 hours of driving footage filmed at several locations in North America and Eastern Europe.

It is available at: http://data.nvision2.eecs.yorku.ca/JAAD_dataset/


Place Recognition Dataset

A dataset of images of 17 indoor places, captured by two robots (virtualMe and Pioneer) under different lighting conditions (day and night) and used for the place recognition experiments in the following paper:

R. Sahdev and J. K. Tsotsos, “Indoor Place Recognition for Localization of Mobile Robots,” in 13th International Conference on Computer and Robot Vision, Victoria, BC, June 1–3, 2016.

We respectfully ask that if you use the dataset, you cite the above paper as its source. DOWNLOAD


Sensor Parameters Dataset

A dataset of images that were captured under variable sensor shutter speeds and gain values. The dataset was compiled and used as part of the following paper:

A. Andreopoulos and J. K. Tsotsos, “On Sensor Bias in Experimental Methods for Comparing Interest Point, Saliency and Recognition Algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2011, in press).

We respectfully ask that if you use the dataset, you cite the above paper as its source. DOWNLOAD


Cardiac MRI Dataset

Alexander Andreopoulos and John K. Tsotsos, “Efficient and Generalizable Statistical Models of Shape and Appearance for Analysis of Cardiac MRI,” Medical Image Analysis, Volume 12, Issue 3, June 2008, Pages 335–357. PDF


Use is free of charge; we respectfully ask that if you use this dataset, you cite the above paper as its source.

The authors would like to acknowledge Dr. Paul Babyn, Radiologist-in-Chief, and Dr. Shi-Joon Yoo, Cardiac Radiologist, of the Hospital for Sick Children, Toronto, for the data sets and their assistance with this research project.

Disclaimer: The dataset is provided for research purposes only; no warranties are provided and no liabilities are assumed by York University or the researchers involved in the production of the dataset.

Downloads

  • Cardiac MR images acquired from 33 subjects. Each subject’s sequence consists of 20 frames and 8–15 slices along the long axis, for a total of 7980 images. The sequence corresponding to each subject x is in a distinct .mat (MATLAB) file named sol_yxzt_patx.mat. These are the raw, unprocessed images, which were originally stored as 16-bit DICOM images. DOWNLOAD
  • Segmentations of the above sequences. We have manually segmented each of the 7980 images in which both the endocardium and epicardium of the left ventricle were visible, for a total of 5011 segmented MR images and 10022 contours. The segmentation corresponding to each subject x is in a distinct .mat (MATLAB) file named manual_seg_32points_patx.mat. Each contour is described by 32 points given in pixel coordinates (see the loading sketch after this list). DOWNLOAD
  • Two small MATLAB functions for visualizing the segmentations on their corresponding images. Please see the included README file for examples of their use. DOWNLOAD
  • Metadata containing the pixel spacing (mm per pixel) and the spacing between slices along the long axis (mm per slice) of each subject’s sequence, as well as each subject’s age and diagnosis. DOWNLOAD
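
As a quick orientation to the file layout, the MATLAB sketch below loads one subject’s images and manual segmentations and overlays a 32-point contour on a single slice. The variable names inside the .mat files and the array ordering used here are assumptions for illustration only; inspect the files (e.g. with whos('-file', ...)) and rely on the included visualization functions and README for the authoritative interface.

    % Hypothetical loading/visualization sketch for subject 1.
    % Variable names and array ordering are assumptions; verify with
    % whos('-file', 'sol_yxzt_pat1.mat') and the provided README.
    imgStruct = load('sol_yxzt_pat1.mat');            % raw image sequence
    segStruct = load('manual_seg_32points_pat1.mat'); % 32-point contours

    imgFields = fieldnames(imgStruct);
    segFields = fieldnames(segStruct);
    imgs = imgStruct.(imgFields{1});   % assumed y-by-x-by-slice-by-frame array
    segs = segStruct.(segFields{1});   % assumed contour points per slice/frame

    % Display the first slice of the first frame and overlay one contour.
    imshow(imgs(:, :, 1, 1), []);
    hold on;
    pts = squeeze(segs(:, :, 1, 1));   % assumed 32-by-2 [x y] pixel coordinates
    plot(pts(:, 1), pts(:, 2), 'r-', 'LineWidth', 1);
    hold off;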

Fixation Data and Code

Fixation data and code are available here: AIM.zip. The code is written in MATLAB and includes a variety of learned ICA bases. Note that the code expects a relatively low-resolution image, as the receptive fields are small; for a larger, high-resolution image you may wish to try some larger receptive fields. If you have any questions about the code, feel free to ask.

To use it within MATLAB, you should be able to simply do something along the lines of info = AIM('21.jpg', 0.5); with the parameter being a rescaling factor. It is also possible to vary a number of parameters both on the command line and within the code itself, so feel free to experiment. There are also some comments and notes specific to psychophysics examples within one of the included files. Note that all of these bases should result in better performance than that based on the *very* small 7×7 filters used in the original NIPS paper.

The eye-tracking data may be found at eyetrackingdata.zip. This includes binary maps for each of the images, indicating which pixel locations were fixated, in addition to the raw data. Correspondence is best addressed to Neil.Bruce[at]sophia.inria.fr
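
For concreteness, the following MATLAB sketch shows the call described above and one way to display the result. That the returned info is a 2-D map suitable for imagesc is an assumption here; check the comments in the included code for the actual output format.

    % Minimal usage sketch for the AIM code described above.
    % The second argument is a rescaling factor applied to the input image;
    % smaller values downsample more (the learned receptive fields are small).
    info = AIM('21.jpg', 0.5);

    % Display the result. Treating 'info' as a 2-D map is an assumption;
    % see the comments in the AIM code for the actual output format.
    figure;
    imagesc(info);
    axis image;
    axis off;
    colormap(gray);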


Facial Gestures Dataset

This webpage contains a dataset of images of facial gestures taken by a camera mounted on a wheelchair. The dataset was first compiled by Gregory Fine and used as part of his Master of Science thesis:

Examining the feasibility of face gesture detection using a wheelchair-mounted camera.

Use is free of charge; we respectfully ask that if you use this dataset, you cite the above thesis as its source.

Disclaimer: The dataset is provided for research purposes only; no warranties are provided and no liabilities are assumed by York University or the researchers involved in the production of the dataset.

Downloads:

  • Facial gestures images acquired from 10 subjects. Each subject’s sequence consists of 10 gestures and 100 images for each gesture, for a total of 9140 images. DOWNLOAD
  • Images used to train the AAM algorithm to detect the eyes and mouth. The set contains images along with ground-truth contours of the eyes and mouth. DOWNLOAD
  • Images of facial gestures used to test the false-positive rate of the algorithm. The set contains 440 images of facial gestures produced by 5 subjects. DOWNLOAD
