TX-GAIA is tailor-made for crunching through deep neural network operations.
TX-GAIA is housed inside of a new EcoPOD, manufactured by Hewlett Packard Enterprise, at the site of the Lincoln Laboratory Supercomputing Center in Holyoke, Massachusetts. Photo: Glen Cooper.

The new TX-GAIA (Green AI Accelerator) computing system at Lincoln Laboratory's Supercomputing Center (LLSC) has been ranked as the most powerful artificial intelligence (AI) supercomputer at any university in the world. The ranking comes from TOP500, which publishes a list of the top supercomputers in various categories twice a year. The system, built by Hewlett Packard Enterprise, combines traditional high-performance computing hardware (nearly 900 Intel processors) with hardware optimized for AI applications (900 Nvidia graphics processing unit, or GPU, accelerators).

"We are thrilled by the opportunity to enable researchers across Lincoln and MIT to achieve incredible scientific and engineering breakthroughs," said Jeremy Kepner, a Lincoln Laboratory Fellow who heads the LLSC. "TX-GAIA will play a large role in supporting AI, physical simulation, and data analysis across all Laboratory missions."

TOP500 rankings are based on the LINPACK benchmark, which measures a system's floating-point computing power, or how fast a computer solves a dense system of linear equations. TX-GAIA's TOP500 benchmark performance is 3.9 quadrillion floating-point operations per second, or petaflops (though since the ranking was announced in June 2019, Hewlett Packard Enterprise has updated the system's benchmark to 4.725 petaflops). The June TOP500 benchmark performance places the system at #1 in the Northeast, #20 in the United States, and #51 in the world for supercomputing power. The system's peak performance is more than 6 petaflops.
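As a rough illustration of how a LINPACK-style figure is derived, the sketch below times a dense linear solve in NumPy and divides a standard operation count for that solve by the elapsed time. The matrix size and the use of NumPy here are illustrative assumptions, not the tuned HPL code run at far larger scale for TOP500 submissions.

```python
# Illustrative LINPACK-style measurement: solve a dense linear system and
# report floating-point operations per second. (Not the official HPL benchmark.)
import time
import numpy as np

n = 4096                                   # assumed matrix size; HPL runs are much larger
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard operation count for a dense solve
print(f"~{flops / elapsed / 1e9:.1f} gigaflops on this one solve")
```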

But more notably, TX-GAIA has a peak performance of 100 AI petaflops, which makes it #1 for AI flops at any university in the world. An AI flop is a measure of how fast a computer can perform deep neural network (DNN) operations. DNNs are a class of AI algorithms that learn to recognize patterns in huge amounts of data. This ability has given rise to "AI miracles," as Kepner put it, in speech recognition and computer vision; the technology is what allows Amazon's Alexa to understand questions and self-driving cars to recognize objects in their surroundings. As these DNNs grow more complex, it takes longer for them to process the massive datasets they learn from. TX-GAIA's Nvidia GPU accelerators are specially designed for performing these DNN operations quickly.
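To give a rough sense of what counts as a DNN operation, the sketch below tallies the multiply-add operations in a single dense layer, the kind of matrix arithmetic GPU accelerators are built to do quickly. The layer and batch sizes are illustrative assumptions, not a description of any workload run on TX-GAIA.

```python
# Rough operation count for one dense (fully connected) DNN layer.
# Each output element requires one multiply and one add per input element.
import numpy as np

batch, n_in, n_out = 256, 1024, 1024        # assumed sizes for illustration
x = np.random.rand(batch, n_in).astype(np.float32)
W = np.random.rand(n_in, n_out).astype(np.float32)

y = x @ W                                    # the core operation of a dense layer
ops = 2 * batch * n_in * n_out               # multiply-adds counted as two flops each
print(f"{ops / 1e9:.2f} billion operations per forward pass of this layer")
```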

TX-GAIA is housed in a new modular data center, called an EcoPOD, at the LLSC's green, hydroelectrically powered site in Holyoke, Massachusetts. It joins the ranks of other powerful systems at the LLSC, such as TX-E1, which supports collaborations with the MIT campus and other institutions, and TX-Green, which is currently ranked #490 on the TOP500 list.

The TX-GAIA system comprises nearly 900 Intel processors and 900 Nvidia GPU accelerators.

Kepner says that the system's integration into the LLSC will be completely transparent to users when it comes online this fall. "The only thing users should see is that many of their computations will be dramatically faster," he says.

Among its AI applications, TX-GAIA will be tapped for training machine learning algorithms, including those that use DNNs. It will more quickly crunch through terabytes of data — for example, hundreds of thousands of images or years' worth of speech samples — to teach these algorithms to figure out solutions on their own. The system's compute power will also expedite simulations and data analysis. These capabilities will support projects across the Laboratory's R&D areas, such as improving weather forecasting, accelerating medical data analysis, building autonomous systems, designing synthetic DNA, and developing new materials and devices.

TX-GAIA, which is also ranked the #1 system in the Department of Defense, will support the recently announced MIT–Air Force AI Accelerator. The partnership will combine the expertise and resources of MIT, including those at the LLSC, and the Air Force to conduct fundamental research directed at enabling rapid prototyping, scaling, and application of AI algorithms and systems.