Nvidia’s Volta GPUs helped the US build the world’s two fastest supercomputers

Photo credit: Lawrence Livermore National Laboratory (Sierra supercomputer).

Nvidia isn't just making waves with its ray tracing hardware; it's making a splash in the supercomputing space as well. The company's Tesla V100 GPUs now power the world's two fastest supercomputers as ranked on the Top500 list, with both machines belonging to the United States.

The US had already claimed the top spot with the Department of Energy's Summit at Oak Ridge National Laboratory. That system is even faster now, delivering 143.5 petaflops of Linpack performance.

As with Summit, the DoE's Sierra supercomputer at Lawrence Livermore National Laboratory added more than 20 petaflops to its Linpack total, bringing it up to 94.6 petaflops. That was enough to move up from third place and steal the number two spot from China, though China still has more systems on the Top500 list overall (227 to the US's 109).

And if that's not enough, Nvidia's own SaturnV isn't even submitted to the Top500 list, but based on its core specs it should place in the top five. And that's assuming Nvidia hasn't added nodes to the supercomputer. Our GPU editor Jarred thinks there's a real chance Nvidia could stealthily own the top spot with its in-house project.

Nvidia founder and CEO Jensen Huang used the achievement to once again declare Moore's Law dead, a not-so-veiled shot at Intel.

"This is a breakout year for Nvidia in the world of supercomputing," Huang said. "With the end of Moore’s law, a new HPC market has emerged, fueled by new AI and machine learning workloads. These rely as never before on our high-performance, highly efficient GPU platform to provide the power required to address the most challenging problems in science and society."

Nvidia has reason to be excited about the direction supercomputers have gone. In the past year, it has seen a 48 percent increase in the number of supercomputers using its GPU accelerators—it now powers 127 of the top 500 systems. Narrowing the field to the "greenest" supercomputers, Nvidia's GPUs are found in 22 of the top 25 systems on the Green500 list.

"At least some of the company’s momentum can be attributed to the fact the V100 can accelerate both traditional 64-bit HPC simulations, as well as machine learning/deep learning (ML/DL) algorithms—the latter using the lower precision capabilities of the GPU’s Tensor Cores. While this might have seemed like an odd pairing a year or two ago, it turns out more and more HPC users are inserting ML/DL into their workflows. Once again, Nvidia was ahead of the curve, much to the chagrin of its competition – AMD, Intel, et al," Top500 noted.

This doesn't mean much for gaming, at least not directly. However, it's not without some crossover. Nvidia has been porting its AI and machine learning efforts to the consumer side, with its Turing GPU architecture featuring Tensor Cores as well. We also have to wonder how many Turing GPUs have shipped to non-gaming sectors, potentially including ML/DL supercomputers, which might explain the somewhat limited availability of RTX 2080 Ti cards.

Paul Lilly

Paul has been playing PC games and raking his knuckles on computer hardware since the Commodore 64. He does not have any tattoos, but thinks it would be cool to get one that reads LOAD"*",8,1. In his off time, he rides motorcycles and wrestles alligators (only one of those is true).