Netdev 1.1 - Talks http://www.netdevconf.org/1.1/tags/talks en Talk: "Speeding up the Linux TCP/IP stack with a fast packet I/O framework" (Michio Honda) http://www.netdevconf.org/1.1/talk-speeding-linux-tcpip-stack-fast-packet-io-framework-michio-honda <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>The Linux network stack has provided one of the most innovative TCP implementations, adopting a large number of protocol extensions and optimizations. However, we’ve known for a while that it does not perform well for transaction workloads, which involve a lot of small packets and a large number of concurrent TCP connections.</p> <p>We do not resort to OS-bypass TCP/IP stacks, because none of them implements the rich feature set of Linux TCP, including DCTCP and Fast Open, to name a few. In this talk, we show that we can improve Linux TCP/IP performance by integrating the netmap framework.</p> </div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Thu, 28 Jan 2016 10:17:58 +0000 admin 75 at http://www.netdevconf.org/1.1 Talk: "Virtual switch HW acceleration" (Rony Efraim) http://www.netdevconf.org/1.1/talk-virtual-switch-hw-acceleration-rony-efraim <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>Software-based switching consumes CPU resources that can instead be offloaded to modern network adapters.</p> <p>In this talk we propose switch acceleration where functionality is implemented in software and,
when possible and sensible, offloaded to HW using the switchdev framework. Our approach can be implemented using the existing functionality of most modern NICs, which already support packet classification, multiple send and receive rings, traffic shapers and L2/L3/L4 overlay network encapsulation/decapsulation.</p> <p>The proposed framework performs HW classification of packets and associates an action with each classification rule (for example, through 12-tuple classification). The following are the initial proposed actions:</p> <ul> <li>1. Mark a packet - use the HW-based classification to tag the packet using the skbedit TC action.</li> <li>2. Send and receive ring mapping - use dedicated HW rings per VM/MAC/other.</li> <li>3. QoS (scheduling, shaping, metering, rate limiting ...).</li> <li>4. Overlay network encapsulation/decapsulation - insert and strip in HW for non-SR-IOV VMs (VXLAN, NVGRE, MPLS, QinQ ...).</li> <li>5. Drop (e.g., accelerating a SW firewall implementation).</li> <li>6. Count (packets, bytes ...).</li> </ul> <p>In this talk we will discuss how the proposed framework for HW acceleration is transparently mapped into the TC subsystem Filter&Action framework. 
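To make the rule-and-action model above concrete, here is a minimal software sketch (all names are illustrative, not the proposed kernel API): a flow table keyed on a match tuple, where each rule carries one of the proposed actions (mark, ring steering, drop, count) and maintains hit counters.

```python
# Illustrative sketch of the classification/action model described above:
# a flow table keyed on a packet tuple, one action per classification rule.
# Names are hypothetical; this is not the proposed kernel API.

from dataclasses import dataclass

@dataclass
class Rule:
    action: str           # "mark", "ring", "drop", or "count"
    arg: int = 0          # mark value or ring index
    packets: int = 0      # hit counters ("count" semantics)
    bytes: int = 0

class FlowTable:
    def __init__(self):
        self.rules = {}   # match tuple -> Rule

    def add(self, match, rule):
        self.rules[match] = rule

    def classify(self, match, length):
        rule = self.rules.get(match)
        if rule is None:
            return ("pass", None)      # no rule: normal software path
        rule.packets += 1
        rule.bytes += length
        if rule.action == "drop":
            return ("drop", None)
        if rule.action == "mark":
            return ("mark", rule.arg)  # cf. tagging via the skbedit action
        if rule.action == "ring":
            return ("ring", rule.arg)  # dedicated HW ring per VM/MAC
        return ("pass", None)          # "count"-only rule

table = FlowTable()
# A 2-tuple match is used here for brevity; the talk proposes up to 12-tuple keys.
table.add(("10.0.0.1", 80), Rule("mark", arg=7))
table.add(("10.0.0.2", 22), Rule("drop"))

print(table.classify(("10.0.0.1", 80), 1500))  # ('mark', 7)
print(table.classify(("10.0.0.2", 22), 60))    # ('drop', None)
print(table.classify(("10.0.0.3", 443), 60))   # ('pass', None)
```

In the proposed framework the table would live in NIC hardware and the actions would be expressed as TC filters/actions; the sketch only shows the lookup-then-act shape of the design.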
Additionally, we will suggest virtual switch control and data plane interfaces for enabling the acceleration framework.</p></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Tue, 26 Jan 2016 12:53:08 +0000 admin 72 at http://www.netdevconf.org/1.1 Talk: "Challenges in Testing - How OpenSourceRouting tests Quagga" (Martin Winter) http://www.netdevconf.org/1.1/talk-challenges-testing-how-opensourcerouting-tests-quagga-martin-winter <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>The talk gives an overview of how NetDEF/OpenSourceRouting tests the Quagga project and discusses some of the challenges. In the talk we’ll go into the details of how we (as OpenSourceRouting) test Quagga and the challenges we face with a multiplatform tool that supports many different OS variations and CPU architectures, and with a community of volunteers and commercial users. 
The goal of the talk is to give some inspiration to other projects on how to approach this and to start a discussion.</p> </div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Sun, 17 Jan 2016 11:12:38 +0000 admin 66 at http://www.netdevconf.org/1.1 Talk: "HW High-Availability and Link Aggregation for Ethernet switch and NIC RDMA using Linux bonding/team" (Or Gerlitz, Tzahi Oved) http://www.netdevconf.org/1.1/talk-hw-high-availability-and-link-aggregation-ethernet-switch-and-nic-rdma-using-linux-bondingteam <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>The Linux networking stack supports High-Availability (HA) and Link Aggregation (LAG) through the bonding/teaming drivers, both of which set a software netdevice on top of two or more netdevs.</p> <p>Those HA devices act as "upper" devices over "lower" devices. 
The core networking stack uses a notifier mechanism to announce setup/tear-down of such relations.</p> <p>We show how to take advantage of standard bonding/team and their associated notifiers to reflect HA/LAG into HW and achieve enhanced functionality.</p> <p>We present four use cases dealing with RDMA, SR-IOV Virtual Functions (VFs) and a physical switch.</p> <p>In the 1st RDMA case, the RDMA stack presents a RoCE (RDMA-over-Ethernet) device with one port, backed by two bonded Ethernet NICs; the HW is set up so that RDMA connections established over this device (which are offloaded from the networking stack) are subject to HA and LAG.</p> <p>In the SR-IOV case, the PF host net-devices are bonded while the VF sees a HW device with one port. The HW setup done by the PF driver causes the overall VF traffic (both conventional TCP/IP and offloaded RDMA) to be subject to HA and LAG.</p> <p>In the physical switch case, the creation of a LAG above the port netdevices is propagated to the device driver using network notifiers.</p> <p>The device driver can either program the device to create the hardware LAG, or forbid the operation in case hardware resources are exceeded or because it lacks support for certain LAG parameters.</p> <p>The creation of further upper devices on top of the LAG is propagated to the lower port netdevices in the same way as if the upper device were created directly on top of them.</p> <p>In the 2nd RDMA case, we propose an architecture for an OS-bypass Ethernet and RDMA bonding driver as a new kernel module for aggregating IB device network interfaces.</p> <p>An IB device (struct ib_device) exposes the verbs programming API, which allows OS bypass for raw Ethernet networking and RDMA operations.</p> <p>The driver will provide a method for aggregating multiple IB device interfaces into a single logical bonded interface. 
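As a rough sketch of the HA behavior these use cases mirror into hardware (illustrative code only, not the kernel bonding driver): an active-backup aggregate reacts to carrier-change notifications by failing traffic over to a surviving lower device.

```python
# Illustrative active-backup failover, in the spirit of the bonding/team
# behavior described above. Names are hypothetical; the real logic lives
# in the kernel bonding driver and is mirrored into HW via notifiers.

class ActiveBackupBond:
    def __init__(self, slaves):
        self.carrier = {s: True for s in slaves}  # lower device -> link up?
        self.active = slaves[0]                   # current active lower device

    def link_event(self, slave, up):
        # Analogous to a netdev notifier firing on a carrier change.
        self.carrier[slave] = up
        if not up and slave == self.active:
            self._failover()

    def _failover(self):
        # Pick the first lower device that still has carrier.
        for slave, up in self.carrier.items():
            if up:
                self.active = slave
                return
        self.active = None   # no usable lower device left

bond = ActiveBackupBond(["eth0", "eth1"])
print(bond.active)                  # eth0
bond.link_event("eth0", up=False)   # active slave loses carrier
print(bond.active)                  # eth1
```

The point of the talk is that the same notifier events driving this selection in software can also be consumed by a device driver to program equivalent HA/LAG behavior into the NIC or switch ASIC.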
This aggregation will allow existing verbs applications to use a single logical device transparently and enjoy networking HA, load balancing and NUMA locality.</p> <p>The IB bonding driver works similarly to, and in conjunction with, the standard Linux bonding/team drivers, with the latter continuing to support standard network aggregation.</p> <p>In the talk we will present the architecture of the planned driver along with several configurations and supported offloads, as well as articulate various aggregation modes including Active-Active, Active-Passive, resource allocation according to device affinity, and SR-IOV bonding configuration.</p></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Sat, 16 Jan 2016 12:51:16 +0000 admin 65 at http://www.netdevconf.org/1.1 Talk: "nftables switchdev support" (Pablo Neira Ayuso) http://www.netdevconf.org/1.1/talk-nftables-switchdev-support-pablo-neira-ayuso <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>This talk covers the design and implementation of the nftables switchdev support. The goal is to introduce the audience to the new in-kernel infrastructure that represents rulesets through a generic abstract syntax tree which drivers can easily transform into their hardware-specific representation.</p> <p>Whenever a netdevice comes with ACL offload capabilities and switchdev support, nftables transparently offloads the ACL configuration to the hardware. The expressiveness is restricted to the hardware capabilities. 
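As a toy illustration of the abstract-syntax-tree idea described above (invented names, not the actual in-kernel representation): a rule is a list of expressions that a backend walks, either emitting its own hardware-specific operations or refusing rules that exceed its capabilities, which then stay in the software path.

```python
# Toy model of a ruleset held as a generic expression tree that a backend
# (driver) translates into its own representation, refusing rules beyond
# its capabilities. Purely illustrative; not the nftables internals.

class Expr:
    pass

class Match(Expr):
    def __init__(self, field_name, value):
        self.field, self.value = field_name, value

class Verdict(Expr):
    def __init__(self, verdict):
        self.verdict = verdict

def offload(rule, hw_fields):
    """Translate a rule (list of Exprs) into pseudo-HW ops, or return None
    if any expression falls outside the hardware's capabilities."""
    ops = []
    for expr in rule:
        if isinstance(expr, Match):
            if expr.field not in hw_fields:
                return None            # fall back to the software path
            ops.append(("MATCH", expr.field, expr.value))
        elif isinstance(expr, Verdict):
            ops.append(("VERDICT", expr.verdict))
    return ops

hw_fields = {"saddr", "daddr", "dport"}      # what this "hardware" can match on
rule_ok  = [Match("dport", 80), Verdict("drop")]
rule_bad = [Match("ct_state", "established"), Verdict("accept")]

print(offload(rule_ok, hw_fields))   # [('MATCH', 'dport', 80), ('VERDICT', 'drop')]
print(offload(rule_bad, hw_fields))  # None
```

The appeal of the design is exactly this split: one generic tree on the frontend, with each driver free to lower it into whatever its ACL hardware actually supports.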
The frontend Netlink API remains the same as in pure software mode, in order to hide all the complexity and ensure easy extensibility in the long run.</p> <p>This implementation relies on the rocker switch prototype, and it should open the door to other possible clients already available in the networking tree.</p> <p>This infrastructure can also potentially be used to provide just-in-time (jit) compilation from the kernel backend in a way that avoids exposing this internal representation to userspace.</p> </div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Fri, 15 Jan 2016 11:00:21 +0000 admin 64 at http://www.netdevconf.org/1.1 Talk: "On getting tc classifier fully programmable with cls_bpf" (Daniel Borkmann) http://www.netdevconf.org/1.1/talk-getting-tc-classifier-fully-programmable-clsbpf-daniel-borkmann <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>In this talk/paper, we provide a technical deep-dive into the eBPF architecture, comparing it to the classic BPF framework and showing how tc's (traffic control) packet classification in the kernel makes use of it.</p> <p>The talk will discuss features recently upstreamed to the kernel and iproute2, and walk through some examples of how classifiers/actions can be programmed in restricted C and loaded into the kernel on the ingress/egress side with the help of llvm and tc. 
It'll also cover the topic of sharing eBPF maps and working with eBPF tail calls.</p> </div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Thu, 14 Jan 2016 09:57:04 +0000 admin 63 at http://www.netdevconf.org/1.1 Talk: "Flow-based tunneling for SR-IOV using switchdev API" (Ilya Lesokhin, Haggai Eran, Or Gerlitz) http://www.netdevconf.org/1.1/talk-flow-based-tunneling-sr-iov-using-switchdev-api-ilya-lesokhin-haggai-eran-or-gerlitz <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>SR-IOV devices present improved performance for network virtualization, but today they limit the ability of the hypervisor to manage the network. For instance, UDP and IP tunnels that are commonly used in the cloud are not supported today with SR-IOV. Flow-based approaches like Open vSwitch and TC are common in managing virtual machine traffic. Neither technology is supported by today's SR-IOV Linux driver model, which only allows programming MAC- or MAC+VLAN-based forwarding for virtual function traffic.</p> <p>We present a design that preserves SR-IOV performance while maintaining flow-based management for both non-tunneled and VXLAN-tunneled flows, and uses the switchdev framework to program the SR-IOV eSwitch. 
Our prototype uses hardware offloads for most traffic, and a software fallback for traffic we cannot offload.</p> <p>We expose a representor netdev for each port in the SR-IOV eSwitch, one per virtual function and another for the uplink, to enable the management of these ports by the kernel and also to send and receive packets through the software fallback path. Our implementation currently uses Open vSwitch for managing flows. It should be possible to extend it to other management schemes such as TC. A flow's match and actions are reflected to the underlying device using extended switchdev APIs. For tunneling we also propagate information about the tunnel FDB, the kernel routing table and the neighbor table.</p> </div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Tue, 12 Jan 2016 10:48:30 +0000 admin 61 at http://www.netdevconf.org/1.1 Talk: "Scaling the Number of Network Interfaces on Linux" (David Ahern, Nikolay Aleksandrov, Roopa Prabhu) http://www.netdevconf.org/1.1/talk-scaling-number-network-interfaces-linux-david-ahern-nikolay-aleksandrov-roopa-prabhu <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>Linux is a popular OS for network switches, routers, hypervisors and other devices in the data center today. These deployments use an increasing number of network interfaces, both physical and logical, pushing the scaling and performance boundaries of the implementation.</p> <p>This paper examines problems with increasing the number of network interfaces on Linux. 
We will mostly look at deployments and configurations on network switches, though the content discussed applies to all Linux deployments.</p> <p>We plan to cover:</p> <ul> <li>Large-scale network interface deployment scenarios.</li> <li>Performance data/numbers.</li> <li>Problem areas in the kernel.</li> <li>Possible solutions.</li> <li>Possible solutions to scale netlink notifications and dumps.</li> <li>Managing network interfaces at scale in user space.</li> </ul></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Tue, 05 Jan 2016 09:41:29 +0000 admin 57 at http://www.netdevconf.org/1.1 Talk: "Zebra 2.0 and Lagopus: newly-designed routing stack on high-performance packet forwarder" (Yoshihiro Nakajima, Kunihiro Ishiguro, Masaru Oki, Hirokazu Takahashi) http://www.netdevconf.org/1.1/talk-zebra-20-and-lagopus-newly-designed-routing-stack-high-performance-packet-forwarder-yoshihiro <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>Zebra 2.0 is a new version of the open-source networking software, implemented from scratch. It is planned to support BGP/OSPF/LDP/RSVP-TE and to work with Lagopus as a fast packet forwarder with OpenFlow support.</p> <p>This new version of Zebra adopts a new architecture, a mixture of a thread model and a task-completion model, to achieve maximum scalability on multi-core CPUs. Zebra has a separate, independent configuration manager that supports commit/rollback and validation functionality. 
The configuration manager understands the YANG-based configuration model, so a new configuration written in YANG can easily be added.</p> <p>Lagopus is an SDN/OpenFlow software switch designed to achieve high-performance packet processing and highly scalable flow handling, leveraging multi-core CPUs and DPDK on commodity servers. Lagopus supports match/action-based packet forwarding and processing as well as encapsulation/decapsulation operations for MPLS, VXLAN, and NSH. The interworking mechanism between the userspace dataplane of Lagopus and the in-kernel network stack allows easy integration with Zebra 2.0 as well as with existing routing software.</p> <p>A live demo of Zebra 2.0 as the networking software and Lagopus as the fast packet forwarder will be presented.</p> </div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Mon, 04 Jan 2016 09:39:28 +0000 admin 55 at http://www.netdevconf.org/1.1 Talk: "Reducing Latency in Linux Wireless Network Drivers" (Tim Shepard) http://www.netdevconf.org/1.1/talk-reducing-latency-linux-wireless-network-drivers-tim-shepard <div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><strong>Description</strong>: <p>Qdiscs such as fq_codel can be used to reduce latency in the Linux network queues. 
But a network device driver that has (perhaps good) reason to pull packets out of the Linux qdisc before it is actually time to transmit them can reintroduce queuing latency (in some cases much more than we would like) and result in head-of-line blocking for traffic for which the qdisc configuration is supposed to provide prioritized low-latency service.</p> <p>Good reasons for a network device driver to pull packets early out of the qdisc include: (1) it needs to schedule transmissions to different destinations subject to constraints that vary per destination, (2) it needs to aggregate packets together for transmission in lower-layer bundles (to better amortize per-transmission overhead and achieve good throughput), and (3) it needs to make sure it has chained enough packets up for device DMA ahead of time so that it doesn't risk leaving the channel idle while there are more packets queued for transmission. Wireless network device drivers have all three of these, and another wrinkle: the transmission rates vary.</p> <p>Byte Queue Limits (BQL) already provides a good auto-tuning solution for device drivers which only have reason (3) and whose transmission rates are not varying. Many wired network device drivers have already been enhanced with BQL to automatically figure out just the right amount of bytes to commit to the DMA queues to achieve high throughput without adding unnecessary latency. But adapting Byte Queue Limits for wireless network device drivers is not so straightforward. Besides (1) through (3) above and the extra wrinkle of varying transmission rates, wireless devices (in hardware, below the device driver) can also take varying lengths of time per transmitted packet at a given rate because of hardware retransmissions and unpredictable channel access times (e.g. on a busy channel).</p> <p>We review some work towards solving this problem (e.g. 
mac80211 intermediate software queues), explain what is still missing, and discuss our work underway to develop a BQL-like solution appropriate for mac80211 wireless drivers and demonstrate its operation.</p></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Tags: </h3><ul class="links"><li class="taxonomy-term-reference-0" rel="dc:subject"><a href="/netdev/tags/talks" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Talks</a></li></ul></div> Sat, 02 Jan 2016 10:56:01 +0000 admin 53 at http://www.netdevconf.org/1.1
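The BQL mechanism discussed above can be caricatured in a few lines of Python (a deliberate simplification of the kernel's dynamic queue limits algorithm; the constants and update rules here are invented): cap the bytes committed to the device, grow the cap when the device went idle while packets were still waiting, and shrink it when there is headroom.

```python
# Simplified sketch of Byte Queue Limits auto-tuning as discussed above.
# The real algorithm lives in the kernel's dynamic queue limits code;
# constants and update rules here are invented for illustration.

class ByteQueueLimit:
    def __init__(self, limit=4096, step=1024, floor=1024):
        self.limit = limit       # max bytes allowed in flight to the device
        self.inflight = 0
        self.step, self.floor = step, floor

    def can_queue(self, nbytes):
        return self.inflight + nbytes <= self.limit

    def queued(self, nbytes):
        self.inflight += nbytes

    def completed(self, nbytes, pending):
        """Called on TX completion; `pending` means packets were still
        waiting in the qdisc that the limit refused to admit."""
        self.inflight -= nbytes
        if self.inflight == 0 and pending:
            # Device went idle with work available: limit was too small.
            self.limit += self.step
        elif not pending and self.limit > self.floor:
            # Plenty of headroom: shrink the limit to keep latency low.
            self.limit = max(self.floor, self.limit - self.step)

bql = ByteQueueLimit()
bql.queued(4096)
bql.completed(4096, pending=True)    # starved the device -> limit grows
print(bql.limit)                     # 5120
bql.queued(1500)
bql.completed(1500, pending=False)   # headroom -> limit shrinks
print(bql.limit)                     # 4096
```

The talk's point is that this simple feedback loop assumes a fixed cost per byte; with varying wireless rates, retransmissions, and per-destination scheduling, the "right" limit is a moving target, which is why a BQL-like mechanism for mac80211 needs more than this.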