Shared neural representation of events and objects
Leo Fernandino
A lay summary of the study "A Common Representational Code for Event and Object Concepts in the Brain" by Tong, J.-Q., Binder, J. R., Conant, L. L., Mazurchuk, S., Anderson, A. J., & Fernandino, L. (2025). The Journal of Neuroscience, 45(41), e2166242025. https://doi.org/10.1523/JNEUROSCI.2166-24.2025
Neuroscientists have long known that injury to specific parts of the brain can selectively disrupt one’s knowledge of certain kinds of things while leaving other domains relatively unaffected. For example, a person with damage to the posterior superior temporal sulcus (indicated in red in the figure below) may struggle to talk—or even think—about common actions and events while having little difficulty talking about concrete objects. Conversely, damage to the inferior temporal gyrus (blue in the figure) may result in the opposite pattern. The knowledge deficit can be much more severe for concrete entities of a particular kind, such as animals or tools, than for others, again depending on the precise location of the lesion. Such category-specific semantic impairments, as they are known, have led many researchers to believe that the brain contains distinct knowledge systems for different domains of experience, each supported by its own dedicated network of regions. Our study provides direct evidence against this view by showing that concepts from different categories are jointly represented within the same brain regions, according to a shared representational code.
We used functional MRI (fMRI) to examine how concepts from different categories are encoded in patterns of brain activity. Our primary question was whether any regions of the cerebral cortex represent information exclusively about concrete objects (e.g., “spoon,” “horse,” “truck”) or exclusively about events (e.g., “storm,” “lecture,” “wedding”). We also asked whether concepts from different semantic categories (animals, tools, vehicles, social events, natural events, etc.) are encoded in terms of a shared set of semantic features or in terms of distinct, non-overlapping representational codes.
In the MRI scanner, neurologically healthy participants were presented with 320 words—half naming concrete objects, half naming events—and asked to think about the meaning of each word as it appeared on the screen. Meanwhile, the unique pattern of brain activity corresponding to each word was recorded with fMRI. By controlling for the words’ visual appearance as well as their orthographic and phonological features, we were able to isolate activation patterns specifically related to their individual meanings, which were then used to evaluate our hypotheses.
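To make the "controlling for" step concrete, here is a minimal sketch of nuisance regression on toy data. The array sizes, feature counts, and variable names are hypothetical; this illustrates the general technique, not the study's actual pipeline.

```python
import numpy as np

# Toy stand-ins: per-word activation estimates (320 words x voxels) and a
# nuisance design matrix of surface properties (e.g., word length, letter
# features, number of phonemes). All sizes here are hypothetical.
rng = np.random.default_rng(0)
n_words, n_voxels, n_nuisance = 320, 5000, 12
activations = rng.standard_normal((n_words, n_voxels))
nuisance = rng.standard_normal((n_words, n_nuisance))

# Regress the surface-form features out of every voxel's response profile;
# the residuals keep only variance not explained by how the words look
# or sound, which is the part analyzed for meaning.
X = np.column_stack([np.ones(n_words), nuisance])       # add an intercept
beta, *_ = np.linalg.lstsq(X, activations, rcond=None)  # least-squares fit
residuals = activations - X @ beta                      # "cleaned" patterns
```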
Two different analysis techniques, representational similarity analysis (RSA) and encoding model analysis, generated convergent results. In the RSA approach, the degree of similarity between two concepts, according to judgments from human participants, was determined for all possible pairs of event concepts and, separately, for all possible pairs of object concepts. If the pairwise similarity values for a set of concepts correlate with the degree of similarity among their corresponding activation patterns in a given region of the brain—and the correlation is substantially stronger than what would be expected by chance—we can conclude that the region in question contributes to the neural representation of those concepts. In each of the cortical areas examined, including those previously claimed to be dedicated to objects or to events, we found significant RSA correlations for both kinds of concepts.
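For the technically inclined, the sketch below illustrates the core RSA computation on toy data: build a neural dissimilarity matrix, compare it to judged similarity, and estimate the chance level with a permutation test. The matrix sizes, the correlation-distance metric, and the permutation count are illustrative assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy stand-ins: activation patterns for 160 concepts in one brain region,
# and a symmetric matrix of pairwise similarity judgments for those concepts.
rng = np.random.default_rng(1)
n_concepts = 160
patterns = rng.standard_normal((n_concepts, 400))   # concepts x voxels
judged = rng.random((n_concepts, n_concepts))
judged = (judged + judged.T) / 2                    # make it symmetric

# Neural representational dissimilarity: correlation distance between the
# activation patterns of every pair of concepts (a condensed vector).
neural_rdm = pdist(patterns, metric="correlation")

# Matching vector of behavioral dissimilarities (1 - similarity),
# taken from the upper triangle of the judgment matrix.
iu = np.triu_indices(n_concepts, k=1)
behavioral_rdm = 1 - judged[iu]

# RSA statistic: rank correlation between the two dissimilarity vectors.
rho, _ = spearmanr(neural_rdm, behavioral_rdm)

# Permutation test: shuffling concept labels breaks the correspondence,
# giving an empirical estimate of what "chance" looks like.
null = []
for _ in range(1000):
    perm = rng.permutation(n_concepts)
    r, _ = spearmanr(neural_rdm, 1 - judged[perm][:, perm][iu])
    null.append(r)
p_value = (np.sum(np.array(null) >= rho) + 1) / (len(null) + 1)
print(f"RSA rho = {rho:.3f}, permutation p = {p_value:.3f}")
```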
In the encoding model approach, a model consisting of a set of hypothesized component features was “trained” to fit a subset of the neuroimaging data—that is, the amplitude of each component was scaled to provide the best overall match—and the trained model was then used to predict a different subset of the data. Specifically, the model was trained to fit the brain activity patterns produced by 80 different concepts and then used to predict the activity patterns produced by a different subset of concepts. That is, once the model was adjusted to fit the training data, it was used to decode the word identities in the test data. The accuracy of the predictions was evaluated by comparing them to the actual activation data for those concepts. This procedure was repeated many times, each time using different subsets of training and test data, and the results were averaged together. We found that the same set of 65 component features could decode activations for event concepts even when trained exclusively on activations for objects, and vice versa, indicating that these two conceptual categories are encoded in brain activity according to a shared code. We also found that this feature-based model could decode concepts belonging to a given subcategory (e.g., animals) even when trained exclusively on data from a different subcategory (e.g., tools).
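The sketch below captures this train-and-decode logic with ridge regression on synthetic data. The shared weight matrix mimics a common representational code across categories; the voxel count, noise level, ridge penalty, and rank-based accuracy metric are illustrative assumptions, while the 65 features echo the model dimensionality described above.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-ins: every concept is described by 65 semantic features, and a
# single feature-to-voxel weight matrix generates the activity for BOTH
# categories -- the "shared code" the analysis is designed to detect.
rng = np.random.default_rng(2)
n_train, n_test, n_feat, n_vox = 80, 80, 65, 400
shared_tuning = rng.standard_normal((n_feat, n_vox))
object_feats = rng.standard_normal((n_train, n_feat))   # training category
event_feats = rng.standard_normal((n_test, n_feat))     # held-out category
object_brain = object_feats @ shared_tuning + 0.5 * rng.standard_normal((n_train, n_vox))
event_brain = event_feats @ shared_tuning + 0.5 * rng.standard_normal((n_test, n_vox))

# "Training" = estimating, for each voxel, how strongly it responds to each
# feature (its tuning profile), here via ridge regression on object data.
model = Ridge(alpha=1.0).fit(object_feats, object_brain)

# Predict the pattern each held-out event concept should evoke, then decode
# identity by matching predicted patterns to the observed ones.
predicted = model.predict(event_feats)

def rank_accuracy(pred, obs):
    """Mean normalized rank of the correct item (1 = perfect, 0.5 = chance)."""
    n = len(pred)
    sims = np.corrcoef(pred, obs)[:n, n:]       # predicted x observed
    ranks = sims.argsort(axis=1).argsort(axis=1)
    return np.diag(ranks).mean() / (n - 1)

print(f"cross-category decoding accuracy: {rank_accuracy(predicted, event_brain):.2f}")
```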
These results were found in the brain regions most strongly associated with object concepts (green-teal in the figure below) as well as in the regions most strongly associated with events (yellow). The study confirmed that these regions are not equally important for the two kinds of concepts—some areas encode more information about events than about objects, and vice versa—but found that these differences simply reflect the degree to which a given region responds to each of the underlying features (i.e., its tuning profile), rather than reflecting categorical representation.
Unlike most previous studies that decoded linguistic meaning from brain activity, ours relied on a representational model grounded in interpretable semantic features that correspond to known principles of brain organization. In addition to accurately decoding individual concepts from patterns of neural activity, the model also predicted—reasonably well—the whole-brain activation map obtained when contrasting event words with object words; that is, it captured which regions are, on average, more strongly activated for one category than for the other.
These results support the theory that concepts consist of flexible combinations of high-level sensory-motor and affective features distributed throughout the cerebral cortex. According to this theory, retrieving a concept from memory consists in simultaneously re-activating its component sensory-motor and emotional features in a coordinated fashion, resulting in an integrated simulation of perceptual experience that can be used to make inferences and achieve goals.
A deeper understanding of how concepts are encoded in the brain could lead to transformative real-world applications, including brain–computer interface (BCI) devices that assist individuals with aphasia—a neurological condition that severely impairs spoken and written communication. Although people with aphasia typically know which ideas they want to express, their brains can no longer translate those ideas into words because the neural systems responsible for this operation have been permanently damaged. A BCI capable of decoding the concepts a person intends to communicate directly from their brain activity and converting them into synthesized speech could restore their ability to communicate.
TLDR:
The study indicates that, although some individuals with brain injury develop category-related deficits in their conceptual knowledge, the brain uses the same basic code to represent knowledge across all categories.
The results conflict with the idea that our knowledge of the world is based on domain-specific systems and stored in category-specific brain regions.
The model of concept representation supported by the study may provide the theoretical foundation for brain–computer interface devices designed to help individuals with aphasia.
The study was co-authored by Jia-Qing Tong, Jeffrey Binder, Lisa Conant, Stephen Mazurchuk, Andrew Anderson, and Leonardo Fernandino, with generous support from the Department of Neurology of the Medical College of Wisconsin, the National Institutes of Health, and the Advancing a Healthier Wisconsin Endowment. We thank Jed Mathis, Sidney Schoenrock, and Joseph Heffernan for their indispensable assistance with regulatory requirements, participant recruitment, and data collection, curation, and preprocessing.