Session
Device Memory TCP
Speakers
Mina Almasry
Willem de Bruijn
Eric Dumazet
Kaiyuan Zhang
Label
Nuts and Bolts
Session Type
Talk
Contents
Description
- TL;DR:
Device memory TCP (devmem TCP) is a proposal for transferring data to and/or from device memory efficiently, without bouncing the data to a host memory buffer.
- Problem:
A large number of data transfers have device memory as the source and/or destination. Accelerators have drastically increased the volume of such transfers. Some examples include:
- ML accelerators transferring large amounts of training data from storage into GPU/TPU memory. In some cases, ML training setup time can be as long as 50% of TPU compute time, so improving data transfer throughput and efficiency can help improve GPU/TPU utilization.
- Distributed training, where ML accelerators, such as GPUs on different hosts, exchange data with one another.
- Distributed raw block storage applications that transfer large amounts of data to and from remote SSDs; much of this data does not require host processing.
Today, the majority of Device-to-Device data transfers over the network are implemented as the following sequence of low-level operations: a Device-to-Host copy, a Host-to-Host network transfer, and a Host-to-Device copy.
This implementation is suboptimal, especially for bulk data transfers, and can put significant strain on system resources such as host memory bandwidth and PCIe bandwidth. One important reason behind the current state is the kernel's lack of semantics for expressing device-to-network transfers.
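As a concrete illustration of that path, the sketch below shows a receive into GPU memory using today's bounce-buffer pattern with the CUDA runtime. It is illustrative only: socket setup and error handling are omitted, and the 1 MiB staging size is an arbitrary assumption.

/*
 * Illustrative sketch of today's bounce-buffer receive path:
 * network -> host staging buffer -> device memory.
 * Socket setup and error handling are omitted; the 1 MiB chunk
 * size is an arbitrary example.
 */
#include <stdlib.h>
#include <sys/socket.h>
#include <cuda_runtime.h>

#define CHUNK (1 << 20)

void recv_to_gpu(int sock, void *gpu_dst, size_t total)
{
	char *host_buf = malloc(CHUNK);
	size_t done = 0;

	while (done < total) {
		/* Host-to-Host network transfer into a host bounce buffer. */
		ssize_t n = recv(sock, host_buf, CHUNK, 0);
		if (n <= 0)
			break;

		/* Host-to-Device copy over PCIe, through the root complex. */
		cudaMemcpy((char *)gpu_dst + done, host_buf, n,
			   cudaMemcpyHostToDevice);
		done += n;
	}
	free(host_buf);
}

Every payload byte crosses host memory twice (once on receive, once on the copy to the device), which is exactly the memory- and PCIe-bandwidth cost the proposal below aims to remove.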
- Proposal:
In this patch series we attempt to optimize this use case by implementing socket APIs that enable the user to:
- send device memory across the network directly, and
- receive incoming network packets directly into device memory.
Packet payloads go directly from the NIC to device memory for receive and from device memory to NIC for transmit. Packet headers go to/from host memory and are processed by the TCP/IP stack normally. The NIC must support header split to achieve this.
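For concreteness, here is a minimal sketch of what the receive side of such an API can look like, loosely modeled on the cmsg-based interface in the RFC patch series. The names used (MSG_SOCK_DEVMEM, SCM_DEVMEM_DMABUF, SO_DEVMEM_DONTNEED, struct dmabuf_cmsg, struct dmabuf_token) are taken from the proposed uapi and may change as the series evolves; a dma-buf backed by device memory is assumed to have been allocated and bound to an RX queue beforehand.

/*
 * Hedged sketch of the proposed devmem TCP receive path.
 * Assumes a device-memory dma-buf is already bound to one of the
 * NIC's RX queues and that the proposed uapi definitions
 * (MSG_SOCK_DEVMEM, SCM_DEVMEM_DMABUF, SO_DEVMEM_DONTNEED,
 * struct dmabuf_cmsg, struct dmabuf_token) are available in the
 * kernel headers being built against.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/uio.h>   /* struct dmabuf_cmsg, struct dmabuf_token (proposed uapi) */

static void recv_devmem(int sock)
{
	char scratch[64];   /* placeholder iovec; payload itself stays in device memory */
	char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 32];
	struct iovec iov = { .iov_base = scratch, .iov_len = sizeof(scratch) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};

	/* Ask for payload to be delivered as references into the bound dma-buf. */
	if (recvmsg(sock, &msg, MSG_SOCK_DEVMEM) < 0) {
		perror("recvmsg");
		return;
	}

	for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level != SOL_SOCKET || cm->cmsg_type != SCM_DEVMEM_DMABUF)
			continue;

		struct dmabuf_cmsg *dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);

		/*
		 * frag_offset/frag_size locate this payload fragment inside the
		 * bound dma-buf; hand that range to the accelerator (e.g. launch
		 * a kernel on it) without copying it through host memory.
		 */
		printf("payload fragment: offset %llu, %u bytes\n",
		       (unsigned long long)dc->frag_offset, dc->frag_size);

		/* When the application is done with the fragment, return it. */
		struct dmabuf_token tok = {
			.token_start = dc->frag_token,
			.token_count = 1,
		};
		setsockopt(sock, SOL_SOCKET, SO_DEVMEM_DONTNEED, &tok, sizeof(tok));
	}
}

Because only headers are split into host memory, the application never touches payload bytes on the CPU: it receives offsets into the bound dma-buf, hands them to the accelerator, and returns each fragment's token to the kernel when it is done with it.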
- Advantages:
- Alleviate host memory bandwidth pressure, compared to existing network-transfer + device-copy semantics.
- Alleviate PCIe bandwidth pressure, by limiting data transfers to the lowest level of the PCIe tree, compared to the traditional path, which sends data through the root complex.
With this proposal we are able to reach ~96.6% of line rate with data sent and received directly from/to device memory.