Tsotsos Lab
Menu
  • News
  • People
    • Current Members
    • Lab Alumni
  • Active Research Topics
    • Active Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Complexity
      • Spatial Cognition
      • Visual Search
    • Cognitive Architectures
      • Attention Control
      • Autonomous Vehicles
      • Cognitive Programs
      • Complexity
      • Development
      • Eye Movements
      • Learning by Composition and Exploration
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Computational Neuroscience
      • Attention Control
      • Colour
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Vision Architecture
    • Computer Vision
      • Active Recognition
      • Autonomous Vehicles
      • Binocular Heads
      • Biomedical Applications
      • Colour
      • Complexity
      • Motion
      • Navigation
      • Saliency
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Transformers
      • Vision Architecture
      • Visual Search
    • Human Vision and Visual Behaviour
      • Attention Control
      • Colour
      • Complexity
      • Development
      • Eye Movements
      • Motion
      • Selective Tuning
      • Shape
      • Spatial Cognition
      • Vision Architecture
      • Visual Working Memory
    • Visual Attention
      • Attention Control
      • Autonomous Vehicles
      • Complexity
      • Development
      • Eye Movements
      • Saliency
      • Selective Tuning
      • Spatial Cognition
      • Vision Architecture
    • Visually Guided Robotics
      • Active Recognition
      • Autonomous Vehicles
      • Navigation
      • Visual Search
  • Publications
    • Publications
    • Software
    • Datasets
  • Open Positions
  • Contact

Paria Mehrani presents “Border Ownership Assignment based on Dorsal and Horizontal Modulations” at VSS 2018


By tech | June 25, 2018 | Category: Presentations

Venue: VSS 2018, Florida

Paper: Border Ownership Assignment based on Dorsal and Horizontal Modulations

Poster: https://osf.io/p4ygh/

Abstract:

The face-vase illusion introduced by Rubin (Rubin, 1915) demonstrates how one can switch back and forth between different interpretations of an image by assigning its borders to either side of the contours. Border ownership assignment is an important step in the perception of form. Zhou et al. (Zhou, Friedman, & von der Heydt, 2000) suggested that certain neurons in the visual cortex encode border ownership, showing that the responses of these neurons depend not only on the local features present in their classical receptive fields but also on contextual information. Various models (Layton, Mingolla, & Yazdanbakhsh, 2012; Tschechne & Neumann, 2014) proposed employing feedback modulation for border ownership neurons, since neurons higher in the ventral stream have larger receptive fields and hence can provide the required contextual information. Zhaoping (Zhaoping, 2005), however, suggested that lateral connections could provide this context. Yet the time course of border ownership neurons does not support feedback from higher layers in the ventral stream, and horizontal connections cannot be the only source of contextual information (Zhang & von der Heydt, 2010). In this study, we propose a model that provides global information to border ownership neurons by incorporating modulatory signals from MT in the dorsal stream as well as horizontal connections. MT neurons are sensitive to spatiotemporal variations at coarser scales and have relatively large receptive fields. Moreover, they are computationally fast and fit well within the time course of border ownership computation (Schmolesky et al., 1998). Our simulation experiments show that our model border ownership neurons, like their biological counterparts, respond differently to a figure on either side of the border. Moreover, the difference in responses becomes smaller as the figure size increases, and the responses are invariant to outlined versus solid figures.
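The size effect described in the abstract — a side preference that weakens as the figure grows — can be illustrated with a toy sketch. The code below is an illustrative approximation and not the authors' model: it stands in for the fast, coarse-scale dorsal (MT-like) signal with a difference-of-Gaussians figure map, and multiplicatively gain-modulates a fixed local edge response sampled on each cell's preferred side. All function names, offsets, and filter scales here are hypothetical choices.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in plain NumPy (zero-padded borders)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    out = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")
    return out

def context_signal(figure_mask, sigma_c=4.0, sigma_s=12.0):
    """Coarse center-surround (difference-of-Gaussians) figure map,
    a stand-in for large-receptive-field, coarse-scale dorsal input."""
    m = figure_mask.astype(float)
    return gaussian_blur(m, sigma_c) - gaussian_blur(m, sigma_s)

def bo_response_difference(figure_size, grid=128, offset=3):
    """Two toy border-ownership cells at the same vertical border, one
    preferring figure-left, one figure-right; returns their response difference."""
    mask = np.zeros((grid, grid))
    xb, yc = grid // 2, grid // 2
    half = figure_size // 2
    mask[yc - half: yc + half, xb - figure_size: xb] = 1.0  # figure LEFT of border
    ctx = context_signal(mask)
    local_edge = 1.0  # identical local contrast for both cells
    # Modulatory gain: context sampled a few pixels into the preferred side.
    r_prefers_left = local_edge * (1.0 + max(ctx[yc, xb - offset], 0.0))
    r_prefers_right = local_edge * (1.0 + max(ctx[yc, xb + offset], 0.0))
    return r_prefers_left - r_prefers_right

# With the figure on the left, the figure-left cell wins, and its advantage
# shrinks for larger figures (the coarse context flattens in the interior).
d_small = bo_response_difference(16)
d_large = bo_response_difference(64)
print(d_small, d_large)
```

Because the center-surround context map is nearly flat inside a large uniform figure, both cells see similar context there, which is one simple way a fixed-scale contextual signal reproduces the shrinking size effect.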


Recent News


  • Lab members at the VSS conference
  • Congrats to Iuliia Kotseruba on winning the Best Student Paper Award at IV 2024!
  • Lab members at NCRN24
  • Markus Solbach presents “Visuospatial Hypothesize-and-Test Strategies Yield High Accuracy without Training; Their Efficiency Improves with Practice” at RAW 2023
  • Current and former lab members at the VSS conference

University Links

  • Centre for Vision Research
  • Department of Electrical Engineering and Computer Science
  • Lassonde School of Engineering
  • York University
  • Centre for Innovation in Computing at Lassonde
  • Tsotsos Lab on Social Media

    Copyright © 2015 Tsotsos Lab

    Theme created by PWT. Powered by WordPress.org