Transformers
On the role of attention in Vision Transformer Models
Vision transformers (ViTs) have become a dominant architecture in many AI models tackling vision problems, outperforming convolutional neural networks (CNNs) in tasks such as image classification and segmentation. What sets these models apart from CNNs is the introduction of attention modules, which were motivated by analogy to the effects of attention in humans. Some studies have reported that ViTs are more human-like than CNNs, closing the gap between machine and human vision. While most studies assessed human-like performance in ViTs in an end-to-end manner, for example by comparing object recognition errors, direct probing of the attention modules to disentangle their role in this improvement is often neglected. If attention modules make ViTs more human-like, do they also produce the known effects of visual attention in humans? To answer this question, we probed attention modules in ViTs and studied two attentional effects on figure border ownership assignment: facilitatory attentional modulation and the inhibition of distractor influence. Because ViTs are feedforward architectures, attention in these models is bottom-up. We therefore inspected the effect of bottom-up attention by comparing pre- and post-attention responses to a single shape in the image. In this case, one expects attention to fall on the single salient shape, leading to an increase in the modulation index; instead, we observed a reduction. To simulate selective attention in ViTs, we infused a top-down signal that restricted the computation of attention scores to either preferred or non-preferred stimuli. Unlike in humans, where attending to a non-preferred stimulus suppresses neural responses, ViTs exhibited an enhancement, nearly matching the response to the preferred stimulus. Together, our results highlight substantial gaps between attention in ViTs and humans, suggesting that despite human-like object recognition errors, vision transformers lack attentional mechanisms with effects similar to those observed in humans.
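To make the two probes concrete, here is a minimal, illustrative sketch, not the study's actual code: a single-head self-attention step whose scores can optionally be restricted by a top-down token mask (a stand-in for the infused top-down signal), together with a contrast-style modulation index comparing pre- and post-attention responses. The function names, the masking scheme, and the exact index definition are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code) of the two probes described above:
# (1) a modulation index comparing pre- and post-attention responses, and
# (2) a top-down mask that restricts attention scores to a chosen subset of
# tokens, emulating selective attention. All names here are illustrative.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v, token_mask=None):
    """Single-head self-attention; token_mask (bool, [n]) optionally
    restricts which key tokens may receive attention."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    if token_mask is not None:
        # Assumption: "restricting the computation of attention scores" is
        # modeled by masking out scores for tokens outside the attended
        # subset before the softmax.
        scores = scores.masked_fill(~token_mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

def modulation_index(pre, post, eps=1e-8):
    """Contrast-style index in [-1, 1]: positive values indicate a
    facilitated post-attention response relative to pre-attention."""
    pre_m, post_m = pre.abs().mean(), post.abs().mean()
    return ((post_m - pre_m) / (post_m + pre_m + eps)).item()

torch.manual_seed(0)
n, d = 16, 8                       # tokens (e.g., image patches), embedding dim
x = torch.randn(n, d)              # pre-attention token responses
w_q, w_k, w_v = (torch.randn(d, d) / d ** 0.5 for _ in range(3))

post = self_attention(x, w_q, w_k, w_v)
print("bottom-up modulation index:", modulation_index(x, post))

# Top-down probe: attend only to the first 4 tokens (a "preferred" stimulus).
mask = torch.zeros(n, dtype=torch.bool)
mask[:4] = True
post_masked = self_attention(x, w_q, w_k, w_v, token_mask=mask)
print("masked modulation index:  ", modulation_index(x, post_masked))
```

With random weights the printed values only show how such an index is computed; the reduction and enhancement effects reported above come from probing trained ViTs.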
Self-attention in vision transformers performs perceptual grouping, not attention
A considerable number of recent studies in computer vision involve deep neural architectures called vision transformers. Visual processing in these models incorporates modules that are claimed to implement attention mechanisms. Despite a growing body of work that attempts to understand the role of these mechanisms in vision transformers, their effect is largely unknown. Here, we asked whether the attention mechanisms in vision transformers exhibit effects similar to those known from human visual attention. To answer this question, we revisited the attention formulation in these models and found that, despite the name, these models computationally perform a special class of relaxation labeling with similarity grouping effects. Additionally, whereas modern experimental findings reveal that human visual attention involves both feedforward and feedback mechanisms, the purely feedforward architecture of vision transformers suggests that attention in these models cannot have the same effects as those known in humans. To quantify these observations, we evaluated grouping performance in a family of vision transformers. Our results suggest that self-attention modules group figures in the stimuli based on the similarity of visual features such as color. In addition, in a singleton detection experiment, an instance of salient object detection, we studied whether these models exhibit effects similar to those of the feedforward visual salience mechanisms thought to be utilized in human visual attention. We found that, in general, the transformer-based attention modules assign more salience either to distractors or to the ground, the opposite of both human and computational salience. Together, our study suggests that the mechanisms in vision transformers perform perceptual organization based on feature similarity, not attention.
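As an illustration of this grouping interpretation, the following minimal sketch (again not the paper's code) shows how, with identity query/key/value projections, a single self-attention update pulls tokens toward the mean of their feature cluster: tokens drawn from two synthetic "color" groups attend almost exclusively within their own group. The cluster construction and all numbers are assumptions for illustration.

```python
# Illustrative sketch of self-attention as similarity grouping: tokens with
# similar features attend mostly within their own cluster, so one update
# collapses each token toward its group mean rather than selecting a target.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 4
# Two feature clusters standing in for, e.g., red and blue image patches.
red = torch.tensor([3.0, 0.0, 0.0, 0.0]) + 0.1 * torch.randn(8, d)
blue = torch.tensor([0.0, 0.0, 3.0, 0.0]) + 0.1 * torch.randn(8, d)
x = torch.cat([red, blue])                  # 16 tokens in two groups

# With identity Q/K/V projections, attention weights reduce to a softmax
# over scaled feature similarities, which makes the grouping explicit.
attn = F.softmax(x @ x.transpose(-2, -1) / d ** 0.5, dim=-1)
within = attn[:8, :8].sum(dim=-1).mean()
print(f"attention mass a red token puts on red tokens: {within:.3f}")  # ~0.99

out = attn @ x                              # one self-attention update
print(f"within-group spread before: {red.std(dim=0).mean():.3f}, "
      f"after: {out[:8].std(dim=0).mean():.3f}")  # tokens collapse toward the group mean
```

The block structure of the attention matrix, and the shrinking within-group spread, are the similarity-grouping signature discussed above; nothing in this update singles out one token the way selective attention would.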
Publications
Mehrani, P., & Tsotsos, J. K. (2023). Self-attention in vision transformers performs perceptual grouping, not attention. Frontiers in Computer Science, 5. https://doi.org/10.3389/fcomp.2023.1178450