İkbari
Sunday, 15 February 2026

New Devices May Overcome AI's Persistent Memory Wall


United States - Ekhbary News Agency


In a significant leap forward for artificial intelligence, scientists have developed a new memory technology that could shatter the long-standing 'memory wall' hindering AI's progress. The conventional challenge lies in the immense time and energy consumed by transferring data between processors and memory, even for highly optimized AI models. This bottleneck limits the speed and learning capabilities of current AI systems.
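
To see why data movement, rather than arithmetic, dominates, consider a rough back-of-envelope sketch. All of the figures below (model size, bandwidth, throughput) are assumed, illustrative numbers, not measurements from the UCSD work:

```python
# Back-of-envelope illustration of the "memory wall": for a large AI model,
# streaming the weights between off-chip memory and the processor can take
# far longer than the arithmetic performed on them.

weights = 7e9                 # model parameters (assumed 7-billion-parameter model)
bytes_per_weight = 2          # 16-bit weights
bandwidth = 100e9             # memory bandwidth in bytes/s (assumed 100 GB/s)
compute_rate = 50e12          # processor throughput in FLOP/s (assumed 50 TFLOP/s)

transfer_s = weights * bytes_per_weight / bandwidth
compute_s = 2 * weights / compute_rate   # ~2 FLOPs per weight for one pass

print(f"transfer: {transfer_s*1e3:.1f} ms, compute: {compute_s*1e3:.1f} ms")
```

Under these assumptions the transfer takes hundreds of times longer than the computation, which is precisely the imbalance in-memory computing aims to remove.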

The innovation stems from a novel approach to Resistive Random-Access Memory (RRAM), a type of non-volatile memory that stores data by altering its electrical resistance. The researchers at the University of California, San Diego (UCSD), have engineered a new form of RRAM, termed 'bulk RRAM,' designed to perform key computations for neural networks – such as matrix multiplication and summation – directly within the memory cells. This 'in-memory computing' paradigm aims to eliminate the inefficient data shuttling between separate processing and memory units.
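
The matrix operation a memory array can perform in place is easy to sketch in software. In the usual crossbar picture, weights are stored as cell conductances, inputs are applied as voltages, and each column's output current is a weighted sum by Ohm's law and Kirchhoff's current law. The values below are illustrative, not device parameters from the paper:

```python
import numpy as np

# Sketch of the analog matrix-vector multiply a memory crossbar performs:
# column currents I = G.T @ V, where G holds the stored conductances
# (the network weights) and V holds the input voltages (the activations).

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances: 4 rows x 3 columns
V = rng.uniform(0.0, 0.5, size=4)        # input voltages on the 4 rows

I = G.T @ V   # each column current is a dot product, computed "in memory"

# The same result computed conventionally, element by element:
I_ref = np.array([sum(G[r, c] * V[r] for r in range(4)) for c in range(3)])
assert np.allclose(I, I_ref)
```

The point of the hardware version is that the multiply-and-sum happens where the weights already live, so no weight ever crosses a memory bus.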

Dr. Duygu Kuzum, an electrical engineer at UCSD and lead researcher on the project, explained that the core innovation involves a fundamental redesign of how RRAM operates. Traditional RRAM relies on forming low-resistance 'filaments' within a higher-resistance dielectric material. This process often requires high voltages incompatible with standard CMOS technology, complicating integration into processors, and the filament formation itself is inherently noisy and random. Such instability is detrimental to AI, where even slight variations in computational weights can lead to drastically different outcomes.

Furthermore, the inherent instability of filament-based RRAM, and its need for isolation via selector transistors, make complex 3D stacking challenging. These limitations have historically made conventional RRAM unsuitable for the demanding parallel matrix operations crucial to modern neural networks. The UCSD team therefore chose to move away from filaments entirely.

Instead, their 'bulk RRAM' devices operate by switching the resistance state of an entire layer, from high to low and back again, with a single voltage pulse. This approach sidesteps the need for high-voltage filament formation and eliminates the geometry-limiting selector transistor, paving the way for more compact and integrated designs. While bulk RRAM concepts are not entirely new, the UCSD group achieved significant breakthroughs in miniaturization and 3D circuit fabrication with this technology.
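
A toy software model makes the contrast concrete. The class below is an assumed simplification for illustration, not the UCSD device physics: the whole layer toggles between two resistance states on a single pulse, with no forming step and no selector transistor in this idealized picture:

```python
# Toy model of a "bulk" RRAM cell: one voltage pulse switches the entire
# layer between a high- and a low-resistance state. The resistance values
# and pulse polarity convention here are assumptions for illustration.

class BulkRRAMCell:
    R_HIGH = 1e6   # ohms, assumed high-resistance state
    R_LOW = 1e3    # ohms, assumed low-resistance state

    def __init__(self):
        # Starts in the high-resistance state; no high-voltage forming step.
        self.resistance = self.R_HIGH

    def pulse(self, voltage):
        """A positive pulse sets low resistance; a negative pulse resets."""
        self.resistance = self.R_LOW if voltage > 0 else self.R_HIGH

cell = BulkRRAMCell()
cell.pulse(+1.0)
assert cell.resistance == BulkRRAMCell.R_LOW
cell.pulse(-1.0)
assert cell.resistance == BulkRRAMCell.R_HIGH
```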

The researchers successfully scaled down their RRAM devices to the nanoscale, with individual cells measuring just 40 nanometers across. Crucially, they demonstrated the ability to stack these bulk RRAM cells into an impressive eight layers. This high-density, 3D-stacked architecture is a major advancement in memory technology.
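
The article's own figures give a sense of the density this stacking implies. Treating the footprint as a bare 40 nm x 40 nm square is an assumption that ignores wiring and cell spacing, so the result is an idealized upper bound, not a reported chip density:

```python
# Rough density arithmetic from the figures in the article:
# 40 nm cells, eight stacked layers, 6 bits per cell.

cell_nm = 40
layers = 8
bits_per_cell = 6

footprint_nm2 = cell_nm ** 2                  # 1600 nm^2 per cell footprint
bits_per_footprint = layers * bits_per_cell   # bits stacked over one footprint
bits_per_um2 = bits_per_footprint / footprint_nm2 * 1e6

print(f"{bits_per_footprint} bits per {footprint_nm2} nm^2 footprint")
print(f"~{bits_per_um2:.0f} bits per square micrometre (idealized)")
```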

A key advantage of this new design is its capacity for multi-level data storage. Each cell in the eight-layer stack can represent 64 distinct resistance values (equivalent to 6 bits of data) using a single voltage pulse. This level of precision and density is exceptionally difficult to achieve with traditional, filament-based RRAM, whose inherent noise makes such fine-grained control unreliable. The ability to store more information per cell and perform computations locally is vital for the complex calculations AI demands.
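
How a trained weight might map onto one of those 64 levels can be sketched in a few lines. The linear mapping and the weight range below are assumptions for illustration; the article does not specify the encoding the UCSD team uses:

```python
import numpy as np

LEVELS = 64   # 64 distinct resistance values per cell = 6 bits of data

def quantize(w, w_min=-1.0, w_max=1.0):
    """Map a weight in [w_min, w_max] to an integer level in [0, 63]."""
    frac = (np.clip(w, w_min, w_max) - w_min) / (w_max - w_min)
    return int(round(frac * (LEVELS - 1)))

def dequantize(level, w_min=-1.0, w_max=1.0):
    """Recover the approximate weight a stored level represents."""
    return w_min + level / (LEVELS - 1) * (w_max - w_min)

w = 0.37
level = quantize(w)
print(level, round(dequantize(level), 3))   # worst-case error is half a step
```

With 64 levels over a range of 2, the step size is 2/63, so the worst-case round-trip error is about 0.016; the noise of filament-based RRAM would swamp steps that fine, which is why multi-level storage is so hard there.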

At the IEEE International Electron Devices Meeting (IEDM) in December, the team presented findings showcasing a continually learning neural network running on their novel RRAM structure. This demonstration of in-memory computing for continuous learning marks a pivotal moment, suggesting a path toward more powerful, energy-efficient, and scalable AI systems. The potential applications are vast, ranging from advanced robotics and autonomous vehicles to sophisticated data analysis and real-time AI processing.

This breakthrough in bulk RRAM technology offers a tangible solution to the 'memory wall' problem. By enabling computation within memory, it promises to dramatically reduce power consumption, accelerate data processing speeds, and facilitate the development of more complex and capable AI models. As research progresses, these 3D-stacked bulk RRAM devices are poised to play a crucial role in defining the future of computing and artificial intelligence.

Keywords: # AI # artificial intelligence # memory wall # RRAM # bulk RRAM # in-memory computing # 3D stacking # computer memory # semiconductor # data processing # UCSD