NVLink

NVLink is a wire-based, serial, multi-lane, near-range communications link developed by Nvidia. Unlike PCI Express, a device can expose multiple NVLinks, and devices communicate over a mesh network rather than through a central hub.

Principle

NVLink is a wire-based communications protocol for near-range semiconductor communications developed by Nvidia. It can be used for data and control code transfers in processor systems, both between CPUs and GPUs and solely between GPUs. NVLink specifies point-to-point connections with data rates of 20 and 25 Gbit/s (v1.0/v2.0) per data lane per direction. Total data rates in real-world systems are 160 and 300 GByte/s (v1.0/v2.0), summed over input and output data streams.[1] NVLink products introduced to date focus on the high-performance application space. NVLink, first announced in March 2014, uses a proprietary high-speed signaling interconnect (NVHS) developed by Nvidia.[2]
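
As a back-of-the-envelope check of these figures (a minimal Python sketch using the per-lane rates given here together with the lane and sub-link counts from the table below; encoding and protocol overheads are ignored):

    # Headline NVLink data rates: per-lane rate (Gbit/s) x lanes per direction
    # x sub-links x 2 directions, converted to GByte/s (divide by 8).
    v1 = 20 * 8 * 4 * 2 / 8   # NVLink 1.0 (P100): 160.0 GByte/s
    v2 = 25 * 8 * 6 * 2 / 8   # NVLink 2.0 (V100): 300.0 GByte/s
    print(v1, v2)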

Performance

The following table compares relevant bus parameters for real-world semiconductors that all offer NVLink as one of their options:

Semiconductor | Interconnect Transmission Technology | Rate (per lane) | Lanes per Sub-Link (out + in) | Sub-Link Data Rate (per data direction) | Sub-Link Count | Total Data Rate (out + in) | Total Lanes (out + in) | Total Data Rate (out + in, sum)
Nvidia P100[3] | PCIe 3.0 | 8 GT/s | 16 + 16 | 128 Gbit/s = 16 GByte/s | 1 | 16 + 16 GByte/s[4] | 32 | 32 GByte/s
IBM Power9[5] | PCIe 4.0 | 16 GT/s | 16 + 16 | 256 Gbit/s = 32 GByte/s | 3 | 96 + 96 GByte/s | 96 | 192 GByte/s
Nvidia P100 | NVLink 1.0 | 20 GT/s | 8 + 8 | 160 Gbit/s = 20 GByte/s | 4 | 80 + 80 GByte/s | 64 | 160 GByte/s
IBM Power8+ | NVLink 1.0 | 20 GT/s | 8 + 8 | 160 Gbit/s = 20 GByte/s | 4 | 80 + 80 GByte/s | 64 | 160 GByte/s
Nvidia V100 | NVLink 2.0 | 25 GT/s | 8 + 8 | 200 Gbit/s = 25 GByte/s | 6[6] | 150 + 150 GByte/s | 96 | 300 GByte/s
IBM Power9[7] | NVLink 2.0 (BlueLink ports) | 25 GT/s | 8 + 8 | 200 Gbit/s = 25 GByte/s | 6 | 150 + 150 GByte/s | 96 | 300 GByte/s

Note: the data rate columns are rounded approximations derived directly from the transmission rate; see the real-world performance discussion below.

  • Sample value; NVLink sub-link bundling should be possible.
  • Sample value; other fractions for the PCIe lane usage should be possible.
  • A single PCIe lane transfers data over a differential pair.
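
The derived columns follow mechanically from the per-lane rate, the lane count and the sub-link count. A minimal sketch that reproduces them for a few of the rows (illustrative Python, not part of any NVLink or PCIe tooling; it uses the table's approximation of counting 1 GT/s as 1 Gbit/s per lane and ignores all overheads):

    # Reproduce the derived columns of the comparison table for selected rows.
    rows = [
        # (system, Gbit/s per lane, lanes per direction, sub-link count)
        ("Nvidia P100, PCIe 3.0",    8, 16, 1),
        ("IBM Power9, PCIe 4.0",    16, 16, 3),
        ("Nvidia P100, NVLink 1.0", 20,  8, 4),
        ("Nvidia V100, NVLink 2.0", 25,  8, 6),
    ]
    for name, rate, lanes, count in rows:
        sublink = rate * lanes / 8        # GByte/s per sub-link, per direction
        per_dir = sublink * count         # GByte/s per direction, all sub-links
        print(f"{name}: {sublink:.0f} GByte/s per sub-link, "
              f"{per_dir:.0f} + {per_dir:.0f} GByte/s, {2 * per_dir:.0f} GByte/s total")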

Real-world performance is obtained by applying various encapsulation overheads as well as the actual link usage rate (a worked sketch follows below). These overheads come from several sources:

  • 128b/130b line code
  • Link control characters
  • Transaction header
  • Buffering capabilities (depends on device)
  • DMA usage on the computer side (depends on other software, usually negligible in benchmarks)

These overheads usually reduce the data rate to between 90 and 95% of the raw transfer rate. NVLink benchmarks show an achievable transfer rate of about 35.3 GB/s (host to device) for a 40 GB/s (two sub-links upstream) NVLink connection to a P100 GPU in a system driven by IBM Power8 CPUs.[8]
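
A minimal sketch of how such an effective rate can be estimated (illustrative Python; the 128b/130b factor comes from the list above, while the combined protocol/usage factor is an assumed placeholder chosen only to show how the measured 35.3 GB/s relates to the 40 GB/s raw rate):

    # Estimate effective throughput from a raw link rate (rough sketch).
    raw_gb_s = 2 * 20.0            # two NVLink 1.0 sub-links, one direction
    line_code = 128 / 130          # 128b/130b line code efficiency (~98.5 %)
    protocol_and_usage = 0.90      # assumed factor for headers, link control,
                                   # buffering and DMA usage
    effective = raw_gb_s * line_code * protocol_and_usage
    print(f"estimated: {effective:.1f} GB/s")                        # ~35.4 GB/s
    print(f"measured:  35.3 GB/s ({35.3 / raw_gb_s:.0%} of raw)")    # 88% of raw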

On 5 April 2016, Nvidia announced that NVLink would be implemented in the Pascal-microarchitecture-based GP100 GPU, as used, for example, in Nvidia Tesla P100 products.[9] With the introduction of the DGX-1 high-performance computer it became possible to have up to eight P100 modules in a single rack system connected to up to two host CPUs. The carrier board (...) allows for a dedicated board for routing the NVLink connections: each P100 requires 800 pins, 400 for PCIe + power, and another 400 for the NVLinks, adding up to nearly 1600 board traces for NVLinks alone (...).[10] Each CPU is directly connected to four P100s via PCIe, and each P100 has one NVLink to each of the three other P100s in its CPU group plus one more NVLink to one P100 in the other CPU group. Each NVLink (link interface) offers a bidirectional 20 GB/s up and 20 GB/s down, with four links per GP100 GPU, for an aggregate bandwidth of 80 GB/s up and another 80 GB/s down.[11] NVLink supports routing, so that in the DGX-1 design every P100 can reach four of the other seven P100s directly and the remaining three with one additional hop. According to depictions in Nvidia's blog-based publications from 2014, NVLink allows bundling of individual links for increased point-to-point performance, so that, for example, a design with two P100s and all links established between the two units would allow the full NVLink bandwidth of 80 GB/s between them.[12]
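
A minimal sketch of the reachability argument (illustrative Python; the GPU numbering and the pairing of GPU i with GPU i+4 across the two CPU groups are assumptions chosen only to match the description above, not Nvidia's documented port mapping):

    from collections import deque

    # Hypothetical DGX-1 labelling: GPUs 0-3 in CPU group A, GPUs 4-7 in group B.
    # Each P100 links to the three others in its group plus its counterpart i+4.
    links = set()
    for group in (range(0, 4), range(4, 8)):
        links |= {frozenset((a, b)) for a in group for b in group if a < b}
    links |= {frozenset((i, i + 4)) for i in range(4)}

    def hops(src, dst):
        # Breadth-first search over the NVLink graph.
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == dst:
                return dist
            for nxt in range(8):
                if frozenset((node, nxt)) in links and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))

    dists = [hops(0, gpu) for gpu in range(1, 8)]
    print(dists.count(1), "direct peers,", dists.count(2), "peers one hop away")  # 4, 3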

At GTC 2017 Nvidia presented its Volta generation of GPUs and announced the integration of a revised NVLink version 2.0, allowing total I/O data rates of 300 GB/s for a single chip. Nvidia also opened pre-orders, with delivery promised for Q3 2017, for the DGX-1 and DGX Station high-performance computers, which are equipped with V100 GPU modules and realize NVLink 2.0 either in a networked fashion (two groups of four V100 modules with inter-group connectivity) or in a fully interconnected fashion (one group of four V100 modules). In 2017-2018 IBM and Nvidia delivered two supercomputers for the US Department of Energy named "Summit" and "Sierra",[13] which combine IBM's POWER9 family of CPUs and Nvidia's Volta architecture, using NVLink 2.0 for the CPU-GPU and GPU-GPU interconnects and InfiniBand EDR for the system interconnects.[14]
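
The fully interconnected four-GPU variant can be checked against the per-GPU link budget (illustrative Python; splitting the six links into two bundled links per peer is an assumption based on the bundling capability mentioned above, not a documented DGX Station wiring):

    # Link budget for one fully interconnected group of four V100s (sketch).
    links_per_gpu = 6        # NVLink 2.0 sub-links per V100
    peers_per_gpu = 3        # full mesh of four GPUs
    per_link_gb_s = 25       # GByte/s per sub-link and direction

    links_per_pair = links_per_gpu // peers_per_gpu      # 2, if bundled evenly
    pair_bw = links_per_pair * per_link_gb_s             # 50 GByte/s per direction
    print(f"{links_per_pair} bundled links per GPU pair, "
          f"{pair_bw} + {pair_bw} GByte/s per pair")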

References

  1. "What Is NVLink?". Nvidia. November 14, 2014.
  2. "Nvidia NVLINK 2.0 arrives in IBM servers next year". Jon Worrel. fudzilla.com. August 24, 2016.
  3. "All aboard the PCIe bus for Nvidia's Tesla P100 supercomputer grunt". Chris Williams. theregister.co.uk. June 20, 2016.
  4. "NVLink Takes GPU Acceleration To The Next Level". Timothy Prickett Morgan. nextplatform.com. May 4, 2016.
  5. "POWER9 Webinar presentation by IBM for Power Systems VUG". Jeff Stuecheli. January 26, 2017.
  6. GV100 block diagram in "GTC17: NVIDIA präsentiert die nächste GPU-Architektur Volta - Tesla V100 mit 5.120 Shadereinheiten und 16 GB HBM2". Andreas Schilling. hardwareluxx.de. May 10, 2017.
  7. "NVIDIA Volta GV100 GPU Chip For Summit Supercomputer Twice as Fast as Pascal P100 – Speculated To Hit 9.5 TFLOPs FP64 Compute". Hassan Mujtaba. wccftech.com. December 20, 2016.
  8. "Comparing NVLink vs PCI-E with NVIDIA Tesla P100 GPUs on OpenPOWER Servers". Eliot Eshelman. microway.com. January 26, 2017.
  9. "Inside Pascal: NVIDIA's Newest Computing Platform". Nvidia. April 5, 2016.
  10. Anandtech.com.
  11. "NVIDIA Unveils the DGX-1 HPC Server: 8 Teslas, 3U, Q2 2016". anandtech.com. April 2016.
  12. "How NVLink Will Enable Faster, Easier Multi-GPU Computing". Mark Harris. November 14, 2014.
  13. "Whitepaper: Summit and Sierra Supercomputers" (PDF). November 1, 2014.
  14. "Nvidia Volta, IBM POWER9 Land Contracts For New US Government Supercomputers". AnandTech. November 17, 2014.