AI Demonstrates Remarkable Ability to Form Memories Similar to Humans

21 December 2023

An interdisciplinary team has discovered that AI models, particularly the Transformer, utilize memory in a way similar to the hippocampus of the human brain. This discovery suggests that applying neuroscience principles, such as the gating behavior of the NMDA receptor, to AI could enhance its memory capabilities, advancing the field and offering insights into human brain function. Photo credit: SciTechDaily.com

Scientists have identified parallels between AI memory consolidation processes and those of the human brain, particularly in the hippocampus. This presents opportunities for developments in AI and a deeper comprehension of human memory mechanisms.

A research team, comprising investigators from both the Center for Cognition and Sociality and the Data Science Group within the Institute for Basic Science (IBS), has identified a striking resemblance between how AI models and the human brain, specifically the hippocampus, process memory. This novel perspective sheds light on memory consolidation, the process that converts short-term memories into long-term ones, in AI systems.

As we strive to develop Artificial General Intelligence (AGI), spearheaded by leading entities like OpenAI and Google DeepMind, understanding and replicating human-like intelligence has become a significant research focus. At the heart of these advancements is the Transformer model, which the team investigated closely.

Figure 1. (a) Diagram of ion channel activity in post-synaptic neurons. AMPA receptors are involved in activating post-synaptic neurons, whereas NMDA receptors are blocked by magnesium ions (Mg²⁺) but drive synaptic plasticity through calcium ion (Ca²⁺) influx once the post-synaptic neuron is sufficiently stimulated. (b) Flowchart illustrating the computational process within the Transformer AI model. Information is processed in sequential stages, such as feed-forward layers, layer normalization, and self-attention layers. The graph depicting the current-voltage relationship of the NMDA receptors is strikingly similar to the nonlinearity of the feed-forward layer. The input-output graph shows how the nonlinearity of the NMDA receptors changes with magnesium concentration (α). Image credit: Institute for Basic Science.
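The resemblance the caption describes can be sketched numerically. The NMDA current-voltage curve is roughly the membrane potential multiplied by a sigmoidal "unblocked" fraction, which is the same functional form as the SiLU/Swish activation used in many Transformer feed-forward layers (GELU has a similar shape). The sketch below is illustrative only; the constants follow a common textbook parameterization and are not taken from the study.

```python
import numpy as np

def silu(x):
    # SiLU/Swish: "input times a sigmoid of the input", a common
    # feed-forward nonlinearity in Transformers (GELU is similar).
    return x / (1.0 + np.exp(-x))

def nmda_iv(v_mv, mg_mm=1.0):
    # NMDA-style current-voltage curve: driving force (v) times the
    # sigmoidal fraction of channels left unblocked by Mg2+. Constants
    # follow a common Jahr-Stevens-style parameterization and are
    # illustrative, not taken from the study.
    unblocked = 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))
    return v_mv * unblocked

# Both curves are strongly suppressed for negative inputs and grow
# near-linearly for positive ones -- the resemblance the figure notes.
print(silu(np.array([-4.0, 0.0, 4.0])))       # ~[-0.07, 0.0, 3.93]
print(nmda_iv(np.array([-40.0, 0.0, 40.0])))  # ~[-9.2, 0.0, 39.1]
```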

Understanding how these models comprehend and retain information is crucial to building robust AI systems. The team applied human brain learning principles, focusing on memory consolidation via the NMDA receptor in the hippocampus, to AI models.

The NMDA receptor functions like a smart door in the brain that promotes learning and memory formation. When the brain chemical glutamate is present, the nerve cell undergoes excitation; meanwhile, a magnesium ion acts as a gatekeeper blocking the door. Only when this ionic gatekeeper steps aside are substances permitted to enter the cell. This mechanism enables the brain to create and store memories, with the magnesium ion gatekeeper playing a unique role.
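To make the gatekeeper intuition concrete, here is a minimal sketch of the voltage-dependent magnesium block, using a Jahr-Stevens-style parameterization that is widely cited in neuroscience; the constants are illustrative and not taken from the study.

```python
import numpy as np

def unblocked_fraction(v_mv, mg_mm=1.0):
    # Fraction of NMDA receptor channels not blocked by the Mg2+
    # "gatekeeper" at membrane potential v_mv (in millivolts), using a
    # Jahr-Stevens-style voltage dependence with illustrative constants.
    return 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))

# Near rest (-70 mV) the door is almost shut; on strong depolarization
# (0 mV) it is mostly open, letting Ca2+ in to drive plasticity.
print(unblocked_fraction(-70.0))           # ~0.04
print(unblocked_fraction(0.0))             # ~0.78

# Raising magnesium tightens the gate at every voltage.
print(unblocked_fraction(0.0, mg_mm=4.0))  # ~0.47
```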

The team's intriguing discovery revealed that the Transformer model appears to employ a gatekeeping mechanism akin to the brain's NMDA receptor. This prompted the researchers to explore whether the Transformer's memory consolidation could be regulated by a mechanism comparable to the NMDA receptor's gating process.

In animal brains, low magnesium levels are known to impair memory function. The scientists found that mimicking the NMDA receptor improved the Transformer's long-term memory. Much as changing magnesium concentration in the brain influences memory strength, adjusting the Transformer's parameters to mirror the NMDA receptor's gating action enhanced the AI model's memory. This pivotal finding indicates that established neuroscience knowledge can help explain how AI models learn.
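The article does not give the researchers' actual formulation, but a minimal sketch of the idea might look like the following: a Transformer feed-forward block whose activation carries a magnesium-like parameter α, so that tuning α adjusts how strongly weak inputs are gated out. All names and constants here are illustrative assumptions, not the study's code.

```python
import numpy as np

def nmda_activation(x, alpha=1.0, beta=1.0):
    # Hypothetical NMDA-inspired nonlinearity: alpha plays the role of
    # magnesium concentration. Larger alpha suppresses small and
    # negative inputs more strongly (a tighter "gate"), while the
    # response stays near-linear for large positive inputs. With
    # alpha = beta = 1 this reduces to the standard SiLU activation.
    return x / (1.0 + alpha * np.exp(-beta * x))

def feed_forward(x, w1, b1, w2, b2, alpha=1.0):
    # Position-wise feed-forward block of a Transformer, with the usual
    # activation swapped for the NMDA-inspired gate above.
    hidden = nmda_activation(x @ w1 + b1, alpha=alpha)
    return hidden @ w2 + b2

# Toy usage: one token embedding of width 4, hidden width 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))
w1, b1 = 0.5 * rng.standard_normal((4, 8)), np.zeros(8)
w2, b2 = 0.5 * rng.standard_normal((8, 4)), np.zeros(4)

# Sweeping alpha mimics varying magnesium levels in the analogy.
for alpha in (0.5, 1.0, 2.0):
    print(alpha, feed_forward(x, w1, b1, w2, b2, alpha=alpha))
```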

C. Justin LEE, a neuroscientist and director at the institute, said, "This research constitutes a critical stride forward in advancing both AI and neuroscience. It enables us to probe deeper into the brain's operational rules and to build more sophisticated AI systems based on these findings."

CHA Meeyoung, a data scientist at KAIST and member of the team, stated, "The human brain is exceptional in its ability to operate with minimal energy, unlike large AI models that demand immense resources. Our findings pave the way for low-cost, high-performance AI systems that learn and retain information as humans do."

What sets this study apart is its effort to incorporate brain-inspired nonlinearity into an AI construct, a significant step toward simulating human-like memory consolidation. The convergence of human cognitive mechanisms and AI design not only holds promise for creating low-cost, high-performance AI systems but also offers valuable insights into the workings of the brain through AI models.

