
Our minds may process language like chatbots, study reveals

Zero-shot encoding and decoding analysis. Credit: Nature Communications (2024). DOI: 10.1038/s41467-024-46631-y

A recent study has found fascinating similarities in how the human brain and artificial intelligence models process language. The research, published in Nature Communications, suggests that the brain, like AI systems such as GPT-2, may use a continuous, context-sensitive embedding space to derive meaning from language, a finding that could reshape our understanding of how the brain processes language.

The study was led by Dr. Ariel Goldstein from the Department of Cognitive and Brain Sciences and the Business School at the Hebrew University of Jerusalem, in close collaboration with Google Research in Israel and the New York University School of Medicine.
Unlike traditional language models based on fixed rules, deep language models like GPT-2 employ neural networks to create “embedding spaces”—high-dimensional vector representations that capture relationships between words in various contexts. This approach allows these models to interpret the same word differently based on surrounding text, offering a more nuanced understanding. Dr. Goldstein’s team sought to explore whether the brain might employ similar methods in its processing of language.
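To make that idea concrete, here is a minimal sketch, not code from the study, of how a contextual model such as GPT-2 assigns the same word different vectors in different sentences. It assumes the open-source Hugging Face transformers and PyTorch libraries; the helper embedding_for and the example sentences are purely illustrative.

import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def embedding_for(sentence, word):
    # Return the hidden-state vector at the position of `word`
    # (assumes the word survives as a single sub-token, which holds here).
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # shape: (n_tokens, 768)
    pieces = [tokenizer.decode([i]).strip() for i in enc["input_ids"][0].tolist()]
    return hidden[pieces.index(word)]

v1 = embedding_for("The river bank was muddy after the rain.", "bank")
v2 = embedding_for("She deposited the check at the bank.", "bank")
similarity = torch.nn.functional.cosine_similarity(v1, v2, dim=0).item()
print(f"cosine similarity between the two 'bank' vectors: {similarity:.2f}")

Because the vector for "bank" depends on the surrounding words, the printed similarity falls below 1; that context sensitivity is exactly what the researchers compared against neural activity.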
To investigate, the researchers recorded neural activity in the inferior frontal gyrus—a region known for language processing—of participants as they listened to a 30-minute podcast. By mapping each word to a “brain embedding” in this area, they found that these brain-based embeddings displayed geometric patterns similar to the contextual embedding spaces of deep language models.
Remarkably, this shared geometry enabled the researchers to predict brain responses to previously unencountered words, an approach called zero-shot inference. This implies that the brain may rely on contextual relationships rather than fixed word meanings, reflecting the adaptive nature of deep learning systems.
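The zero-shot logic can be illustrated with a toy encoding analysis. The sketch below uses synthetic data and scikit-learn's Ridge regression rather than the study's recordings or pipeline: a linear map from embedding space to simulated electrode responses is fit on one set of words and then asked to predict responses to words it has never seen.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 500, 768, 50

# Pretend contextual embeddings (in the study these come from GPT-2) and
# neural responses that are a noisy linear function of them.
X = rng.normal(size=(n_words, emb_dim))
true_map = rng.normal(size=(emb_dim, n_electrodes))
Y = X @ true_map + 0.5 * rng.normal(size=(n_words, n_electrodes))

# Hold out the last 100 words: they never appear during fitting ("zero-shot").
X_train, X_test = X[:400], X[400:]
Y_train, Y_test = Y[:400], Y[400:]

model = Ridge(alpha=10.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Score: correlation between predicted and observed response at each electrode.
corrs = [np.corrcoef(Y_pred[:, e], Y_test[:, e])[0, 1] for e in range(n_electrodes)]
print(f"mean prediction correlation on held-out words: {np.mean(corrs):.2f}")

When the geometry of the embedding space matches the geometry of the responses, predictions for held-out words remain well above chance, which is analogous to the prediction of brain responses to previously unencountered words described above.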

“Our findings suggest a shift from symbolic, rule-based representations in the brain to a continuous, context-driven system,” explains Dr. Goldstein. “We observed that contextual embeddings, akin to those in deep language models, align more closely with neural activity than static representations, advancing our understanding of the brain’s language processing.”
This study indicates that the brain dynamically updates its representation of language based on context rather than depending solely on memorized word forms, challenging traditional psycholinguistic theories that emphasized rule-based processing. Dr. Goldstein’s work aligns with recent advancements in artificial intelligence, hinting at the potential for AI-inspired models to deepen our understanding of the neural basis of language comprehension.
The team plans to expand this research with larger samples and more detailed neural recordings to validate and extend these findings. By drawing connections between artificial intelligence and brain function, this work could shape the future of both neuroscience and language-processing technology, opening doors to innovations in AI that better reflect human cognition.

More information:
Ariel Goldstein et al, Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns, Nature Communications (2024). DOI: 10.1038/s41467-024-46631-y

Provided by
Hebrew University of Jerusalem

Citation:
Our minds may process language like chatbots, study reveals (2024, November 18)
retrieved 20 November 2024
from https://medicalxpress.com/news/2024-11-minds-language-chatbots-reveals.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
