Hungry for AI? New supercomputer contains 16 dinner-plate-size chips

The Cerebras Andromeda, a 13.5 million core AI supercomputer.

On Monday, Cerebras Systems unveiled its 13.5 million core Andromeda AI supercomputer for deep learning, Reuters reports. According to Cerebras, Andromeda delivers more than 1 exaflop (1 quintillion operations per second) of AI computing power at 16-bit half precision.

Andromeda is itself a cluster of 16 Cerebras CS-2 computers linked together. Each CS-2 contains one Wafer Scale Engine chip (commonly known as "WSE-2"), which is currently the largest silicon chip ever made, at roughly 8.5 inches square and packed with 2.6 trillion transistors arranged into 850,000 cores.

Cerebras built Andromeda at a data center in Santa Clara, California, for $35 million. It is tuned for applications like large language models and has already been in use for academic and commercial work. "Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX," writes Cerebras in a press release.

The Cerebras WSE-2 chip is roughly 8.5 inches square and packs 2.6 trillion transistors.


The term "near-perfect scaling" means that as Cerebras adds more CS-2 computer units to Andromeda, training time on neural networks is reduced in "near perfect proportion," according to Cerebras. Typically, when scaling up a deep-learning model by adding more compute power with GPU-based systems, one might see diminishing returns as hardware costs rise. Further, Cerebras claims that its supercomputer can perform work that GPU-based systems can't:
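To make the scaling claim concrete, the sketch below computes scaling efficiency (observed speedup divided by ideal linear speedup) for a hypothetical training run; the timing numbers are invented for illustration and are not Cerebras benchmark data:

```python
# Sketch of the "near-perfect scaling" idea: training time should drop
# almost in proportion to the number of machines added, so the observed
# speedup stays close to the ideal linear speedup.

def scaling_efficiency(base_time: float, time_on_n: float, n_units: int) -> float:
    """Fraction of ideal linear speedup achieved with n_units machines."""
    speedup = base_time / time_on_n  # observed speedup over a single unit
    return speedup / n_units         # 1.0 means perfect linear scaling

# Hypothetical example: one unit takes 160 hours; 16 units take 10.5 hours.
eff = scaling_efficiency(160.0, 10.5, 16)
print(f"{eff:.0%}")  # ~95% of ideal linear scaling
```

With GPU clusters, efficiency by this measure often falls as nodes are added (communication and memory-bandwidth overheads grow); Cerebras's claim is that Andromeda keeps it near 1.0.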

GPU-impossible work was demonstrated by one of Andromeda's first users, who achieved near-perfect scaling on GPT-J at 2.5 billion and 25 billion parameters with long sequence lengths (MSL of 10,240). The users tried to do the same work on Polaris, a 2,000 Nvidia A100 cluster, and the GPUs were unable to do the work because of GPU memory and memory bandwidth limitations.

Whether these claims hold up to external scrutiny remains to be seen, but in an era where companies often train deep-learning models on increasingly large clusters of Nvidia GPUs, Cerebras appears to be offering an alternative approach.

How does Andromeda stack up against other supercomputers? Currently, the world's fastest, Frontier, resides at Oak Ridge National Labs and can perform at 1.103 exaflops at 64-bit double precision. That computer cost $600 million to build.

Access to Andromeda is available now for remote use by multiple users. It is already being used by commercial writing assistant JasperAI, Argonne National Laboratory, and the University of Cambridge for research.