Breakthrough CRAM technology ditches von Neumann model, makes AI 1,000x more energy efficient

Futurology: The global demand for AI computing has data centers consuming electricity like frat houses chug beer. But researchers at the University of Minnesota may have a solution to AI's growing thirst for power: a radical new device that promises vastly superior energy efficiency.

The researchers have designed a new "computational random-access memory" (CRAM) prototype chip that could cut the energy needs of AI applications by a mind-boggling factor of 1,000 or more compared with current methods. In one simulation, the CRAM technology showed energy savings of 2,500 times.

Traditional computing relies on the decades-old von Neumann architecture of separate processor and memory units, which requires constantly moving data back and forth in an energy-intensive process. The Minnesota team’s CRAM completely upends that model by performing computations directly within the memory itself using spintronic devices called magnetic tunnel junctions (MTJs).
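To see why that matters, consider a toy model: in a conventional design every operand travels from memory to the processor and the result travels back, while an in-memory design performs the logic where the data already sits. The sketch below is purely illustrative; the transfer counter and Python's bitwise AND stand in for bus traffic and MTJ-based logic, not for anything in the actual chip.

```python
# Toy model contrasting von Neumann data movement with CRAM-style
# in-memory logic. The transfer counter and the bitwise AND are
# illustrative stand-ins for bus traffic and MTJ-based gates.

memory = [0b1010, 0b0110, 0, 0]  # a tiny "memory array"
transfers = 0

def von_neumann_and(a_addr, b_addr, out_addr):
    """Move both operands to a separate processor, then write back."""
    global transfers
    a = memory[a_addr]; transfers += 1         # read operand A over the bus
    b = memory[b_addr]; transfers += 1         # read operand B over the bus
    result = a & b                             # compute in the distant ALU
    memory[out_addr] = result; transfers += 1  # write the result back

def cram_and(a_addr, b_addr, out_addr):
    """Compute directly inside the array: no operand ever leaves memory."""
    memory[out_addr] = memory[a_addr] & memory[b_addr]  # in-place logic

von_neumann_and(0, 1, 2)
print(f"von Neumann: result={memory[2]:04b}, bus transfers={transfers}")

transfers = 0
cram_and(0, 1, 3)
print(f"CRAM-style:  result={memory[3]:04b}, bus transfers={transfers}")
```

In the toy model, the von Neumann path racks up three bus transfers per operation while the in-memory path racks up none; in real silicon, those transfers are where much of the energy goes.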

Rather than relying on electrical charges to store data, spintronic devices leverage the spin of electrons, offering a more efficient substitute for traditional transistor-based chips.

“As an extremely energy-efficient digital-based in-memory computing substrate, CRAM is very flexible in that computation can be performed in any location in the memory array. Accordingly, we can reconfigure CRAM to best match the performance needs of a diverse set of AI algorithms,” said Ulya Karpuzcu, a co-author of the paper published in npj Unconventional Computing. Karpuzcu added that it is more energy-efficient than the traditional building blocks of today’s AI systems.
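A rough way to picture that flexibility is a memory array that can apply whichever logic gate a given workload needs, at any pair of rows. The gate set and array below are hypothetical stand-ins for the paper's MTJ-based logic, chosen only to illustrate the "compute anywhere, reconfigure per algorithm" idea.

```python
# Sketch of a reconfigurable in-memory gate set: the same array can
# perform different logic operations at arbitrary locations. Gates and
# data are illustrative, not the actual MTJ operations from the paper.

from operator import and_, or_, xor

memory = [0b1100, 0b1010, 0b0110, 0, 0, 0]

GATES = {"AND": and_, "OR": or_, "XOR": xor}  # reconfigurable gate set

def cram_op(gate, a_addr, b_addr, out_addr):
    """Apply the selected gate in place, at any location in the array."""
    memory[out_addr] = GATES[gate](memory[a_addr], memory[b_addr])

# Different algorithms map different gates onto different rows:
cram_op("AND", 0, 1, 3)   # e.g. one step of a binarized multiply
cram_op("XOR", 1, 2, 4)   # e.g. one step of an add or comparison
cram_op("OR",  0, 2, 5)

for addr in (3, 4, 5):
    print(f"row {addr}: {memory[addr]:04b}")
```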

By eliminating those power-hungry data transfers between logic and memory, CRAM technologies like this prototype could be critical for making AI vastly more energy efficient at a time when its energy needs are exploding.
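A back-of-envelope calculation shows how that kind of headline ratio can arise. The per-bit energy figures below are hypothetical round numbers, not measurements from the CRAM paper; the point is only that moving a word across a chip typically costs far more than operating on it in place, so the ratio is dominated by whatever transfer cost you assume.

```python
# Back-of-envelope energy comparison. The per-bit energies are
# assumed round numbers for illustration, not data from the paper.

WORD_BITS = 32
E_TRANSFER_PJ_PER_BIT = 20.0     # assumed cost to move one bit off-array
E_IN_MEMORY_PJ_PER_BIT = 0.02    # assumed cost of in-array logic per bit

ops = 1_000_000  # one million 32-bit operations

# von Neumann style: two operand reads plus one result write per op
von_neumann_pj = ops * 3 * WORD_BITS * E_TRANSFER_PJ_PER_BIT
# CRAM style: the operation happens where the data already lives
cram_pj = ops * WORD_BITS * E_IN_MEMORY_PJ_PER_BIT

print(f"von Neumann: {von_neumann_pj / 1e6:.1f} microjoules")
print(f"CRAM-style:  {cram_pj / 1e6:.3f} microjoules")
print(f"ratio: {von_neumann_pj / cram_pj:.0f}x")
```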

The International Energy Agency forecast in March that global electricity consumption for AI training and applications could more than double, from 460 terawatt-hours in 2022 to over 1,000 terawatt-hours by 2026 – nearly as much as all of Japan uses.

The researchers stated in a press release that the foundations of this breakthrough were over 20 years in the making, going back to pioneering work by engineering professor Jian-Ping Wang on using MTJ nanodevices for computing.

Wang admitted their initial proposals to ditch the von Neumann model were “considered crazy” two decades ago. But the Minnesota team persisted, building on Wang’s patented MTJ research that enabled magnetic RAM (MRAM) now used in smartwatches and other embedded systems.

Of course, as with any breakthrough of this sort, the researchers still need to tackle challenges around scalability, manufacturing, and integration with existing silicon. They’re already planning demo collaborations with semiconductor industry leaders to help make CRAM a commercial reality.



