AI Reveals Hidden Brain Mechanisms of Covert Attention & Emergent Neuron Types (2026)

AI reveals surprising brain mechanisms behind covert attention and even uncovers new neuron types, a discovery that could reshape how we think about attention in both brains and machines. It also raises a pointed question: is an artificial network truly mirroring biology, or just mimicking its outputs? That question sits at the heart of a new study from UC Santa Barbara researchers Sudhanshu Srivastava, Miguel Eckstein, and William Wang. Using convolutional neural networks (CNNs), they explored how covert attention operates when we shift focus within a visual scene without moving our eyes, as when driving or scanning a room for social cues. Their work uncovers underlying mechanisms of covert attention and identifies emergent neuron types, which they validated against real mouse brain data.

A long-standing idea in the field is that attention functions like a spotlight or zoom lens, directing processing resources to a chosen region of the visual field. Modern computational and neurobiological models often include an attention mechanism that enhances signal quality or reduces noise at the attended location. In the lab, scientists study covert attention with cueing paradigms: a cue (a flash or an arrow) appears before or alongside a briefly shown target, and researchers measure how much faster and more accurately people detect the target when the cue matches its location. The cue is believed to bias the brain’s attention system toward that region, altering how visual information is processed.
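The logic of such a cueing experiment can be sketched in a few lines of code. This is a toy simulation under assumed numbers (an 80% valid cue, a ~300 ms baseline reaction time, and a ~30 ms benefit for valid cues are illustrative choices, not values from the study):

```python
import random

def run_posner_trial(rng, valid_prob=0.8):
    """Simulate one trial of a cueing experiment with a toy observer
    who responds faster when the cue matches the target's location."""
    cue = rng.choice(["left", "right"])
    valid = rng.random() < valid_prob  # does the cue match the target?
    target = cue if valid else ("right" if cue == "left" else "left")
    # Toy assumption: a valid cue shaves ~30 ms off a ~300 ms baseline RT.
    rt_ms = rng.gauss(270.0 if valid else 300.0, 20.0)
    return valid, target, rt_ms

def cueing_effect(n_trials=10_000, seed=0):
    """Mean invalid-cue RT minus mean valid-cue RT (positive = benefit)."""
    rng = random.Random(seed)
    valid_rts, invalid_rts = [], []
    for _ in range(n_trials):
        valid, _target, rt = run_posner_trial(rng)
        (valid_rts if valid else invalid_rts).append(rt)
    return sum(invalid_rts) / len(invalid_rts) - sum(valid_rts) / len(valid_rts)
```

With these toy numbers, the simulated cueing effect comes out near 30 ms; the valid-minus-invalid difference is exactly the behavioral signature such experiments quantify.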

Despite how natural covert attention feels to humans, for a long time this behavior was thought to be exclusive to primates and tied to the parietal lobes—and even linked to consciousness. More recently, researchers have observed similar attentional behaviors in species with simpler brains, including archer fish, mice, and bees. This broader view prompted the team to ask whether covert attention could be an emergent property arising from the coordinated activity of many neurons, rather than the product of a dedicated attention module.

Mapping the brain’s information processing to reveal how attention is optimized is exceptionally challenging. The human brain contains billions of neurons and is highly dynamic and variable. Imaging techniques can’t currently resolve activity at the level of single neurons across the entire brain. The study therefore turns to artificial intelligence as a proxy: by modeling a simplified brain and training the model on tasks the human brain performs, researchers can inspect the internal units of the AI to infer how real neural networks might be organized for such tasks.

In 2024, Srivastava, Eckstein, and Wang demonstrated that CNNs housing between 200,000 and 1 million artificial neurons could exhibit hallmarks of human covert attention in various target-detection tasks, even though the networks did not include a built-in attention-orienting mechanism. This finding suggested that covert attention might emerge spontaneously in systems trained to detect targets as efficiently as possible.

So what within the CNN enables emergent covert attention? The new paper digs into the network’s inner workings, asking whether the emergent neuronal mechanisms in CNNs can account for the behavioral signatures of covert attention. Senior author Eckstein explains that the team treated the CNN not as a black box but as a miniature brain. While recording single neurons at whole-brain scale in a real animal is currently impractical, the CNN allows researchers to examine every unit. By analyzing 1.8 million artificial neurons (180,000 units in each of 10 trained CNNs) performing a Posner cueing task, they looked for units that resemble biological neurons involved in attention.
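The screening idea can be illustrated with a hypothetical sketch: for each unit, compare its activity on cue-present versus cue-absent trials and bucket it by the sign of the modulation. The labels, threshold, and function names here are illustrative assumptions, not the paper’s actual analysis:

```python
from statistics import mean

def classify_cue_response(act_cue_present, act_cue_absent, eps=1e-3):
    """Label a unit by the sign of its mean cue-driven modulation.
    A negative modulation matches the 'cue inhibitory' pattern."""
    mod = mean(act_cue_present) - mean(act_cue_absent)
    if mod > eps:
        return "cue excitatory"
    if mod < -eps:
        return "cue inhibitory"
    return "cue insensitive"

def screen_units(cue_present, cue_absent):
    """Screen paired per-unit activity lists and count each response type."""
    labels = [classify_cue_response(p, a) for p, a in zip(cue_present, cue_absent)]
    return {lab: labels.count(lab) for lab in set(labels)}
```

Run over every unit in a trained network, a census like this is what lets researchers spot rare response types, such as units that are suppressed, rather than excited, by the cue.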

Among the discoveries were CNN units that behaved similarly to neurons observed in primate and mouse brains, even though those networks lacked explicit attention mechanisms. Importantly, the team found several previously unreported CNN neuron types, including a surprising “cue inhibitory” type that reduces activity in response to a cue. Yet perhaps the most striking find is a “location opponent” neuron: an excitatory unit that boosts activity at the target’s location when the cue and target coincide, while suppressing activity at other locations. This push-pull dynamic, where attention both amplifies relevant signals and dampens distractions, is a novel insight for covert attention research and mirrors opponent processes seen in other sensory domains (for example, color-opponent cells that react to red versus green).
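The push-pull dynamic described above can be made concrete with a toy response rule for a hypothetical “location opponent” unit. The numbers, signature, and locations here are illustrative assumptions, not the paper’s model:

```python
def location_opponent_response(cue_loc, target_loc, unit_loc,
                               baseline=1.0, gain=1.0):
    """Toy 'location opponent' unit: excitation when cue and target
    coincide at the unit's preferred location, suppression when the
    target lands elsewhere (the push-pull pattern)."""
    if target_loc == unit_loc and cue_loc == unit_loc:
        return baseline + gain        # push: amplify the cued target
    if target_loc != unit_loc:
        return baseline - 0.5 * gain  # pull: dampen other locations
    return baseline                   # target at the unit's location, uncued
```

In this sketch, a unit preferring the left location fires above baseline when both cue and target appear on the left, and below baseline when the target appears on the right, exactly the simultaneous amplify-and-suppress behavior the opponency idea describes.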

Eckstein emphasizes that most attention studies focus on excitation—neurons increasing their firing in response to attention. The discovery of inhibitory responses highlights a broader mechanism that can shape neural activity in subtle, yet powerful ways.

To validate these CNN-derived insights, the researchers compared them with neural data from mice performing cueing tasks. They found that the so-called location-opponent neurons and other newly identified neuron types also exist in the mouse superior colliculus, a midbrain structure involved in orienting responses, along with cue-inhibitory and location-summation neurons. This cross-species correspondence suggests the emergent mechanisms may reflect general principles of how attention operates across brains.

Notably, one CNN neuron type, which combines cue-based opponency with excitatory summation of the target across both locations, had no counterpart in the mouse data, hinting that artificial networks may reveal neural strategies that biology has yet to implement, or that biology constrains.

How far these findings extend to humans remains an open question. The work represents an early but meaningful step toward understanding how covert attention can emerge from networked neural activity and how CNNs can predict neural types with properties not previously documented. It challenges the view that attention is confined to specialized brain modules and points instead to a more distributed, emergent organization.

As Srivastava put it, the work fundamentally changes how we think about attention. The team continues to explore the interface between human and machine intelligence through UCSB’s Mind & Machine Intelligence Initiative, seeking to bridge AI advances with the study of the mind.

Source:
Srivastava, S., et al. (2025). Emergent neuronal mechanisms mediating covert attention in convolutional neural networks. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.2411909122.

Author: Fredrick Kertzmann
