Blog post written by Shyam Iyer and Ahmad Ali.
The present Linux DCB stack is the result of various vendors independently pushing code into the Linux kernel. This code is aligned with each vendor's hardware strategy and was submitted to separate subsystems under different maintainers. The maintainers were not necessarily coordinating these changes, and the parallel, uncoordinated development efforts have produced the differences in user-space interfaces and management tools that we see today. There are pros and cons to each implementation. Nevertheless, there is a need to standardize management interfaces in the Linux DCB stack so that system administrators have to learn only a common set of tools and interfaces across vendors for basic tasks.
One vendor camp requires a netdev interface to push or retrieve configurations and statistics. These vendors rely on Open-LLDP based open source tools to provide the LLDP engine. Initially, this approach was used for software FCoE initiators only, and no integration was done with Open-iSCSI based tools for software iSCSI initiators. Over time this was corrected by allowing DCB configuration for iSCSI through Open-LLDP/dcbtool and by supporting iSCSI App TLVs in these tools. Additionally, this approach has been integrated with system-wide QoS solutions such as cgroups; the specific tool that achieves this is cgdcbxd.
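To make the cgroup integration concrete, the sketch below shows, in C and under stated assumptions, the kind of net_prio mapping that cgdcbxd maintains automatically: it writes an interface-to-priority entry into a cgroup's net_prio.ifpriomap file so that sockets owned by processes in that cgroup are tagged with the 802.1p priority negotiated for, say, iSCSI. The cgroup path /sys/fs/cgroup/net_prio/iscsi, the interface name eth0, and priority 4 are illustrative assumptions, not values cgdcbxd is guaranteed to use.

/* Minimal sketch of the cgroup hook that cgdcbxd automates: map an
 * interface to an 802.1p priority inside a net_prio cgroup, so traffic
 * from processes in that cgroup gets the priority advertised in a DCBX
 * App TLV.  Paths and values below are examples only.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        /* Hypothetical cgroup created for iSCSI traffic. */
        const char *map = "/sys/fs/cgroup/net_prio/iscsi/net_prio.ifpriomap";
        const char *ifname = "eth0";   /* example interface        */
        int prio = 4;                  /* example 802.1p priority  */

        FILE *f = fopen(map, "w");
        if (!f) {
                perror("fopen");
                return EXIT_FAILURE;
        }
        /* The kernel expects "<ifname> <priority>", one mapping per write. */
        fprintf(f, "%s %d\n", ifname, prio);
        fclose(f);
        return EXIT_SUCCESS;
}

In a deployment this write would be driven by the App TLV that the switch advertises, which is exactly the bookkeeping cgdcbxd performs on the administrator's behalf.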
The other vendor camp requires vendor-specific tools to manage DCB on Linux. These are mainly the converged network adapter (CNA) vendors and traditional Fibre Channel HBA vendors. In fairness, the original customers for such adapters wanted the same look and feel as a Fibre Channel HBA when migrating to Ethernet-based FCoE HBAs. However, convergence also created different user-kernel interfaces for the converged functions in the same adapter. For instance, a typical CNA vendor provides a SCSI HBA interface for its FCoE and iSCSI functions and a netdev interface for its NIC function. This means that Open-LLDP based DCB management through a netdev interface is not sufficient for CNAs: the DCB queue statistics observed on the NIC interface (netdev) may not correlate with the statistics of the CNA's iSCSI or FCoE functions.
The netdev camp contends that DCB is a layer-2 Ethernet function and that a netdev interface best describes the management index for probing or configuring an Ethernet-based adapter. In reality, though, Ethernet is just a medium for transporting Fibre Channel, iSCSI, or any other storage or interconnect protocol. In a similar tectonic shift in the industry, the Open-iSCSI tools have already started integrating control and management of iSCSI offload HBAs alongside software iSCSI initiators. This has made iSCSI configuration and management simpler and easier to use, and users no longer have to learn new vendor-specific tools. Linux DCB management could benefit in the same way from a common user-space tool that performs basic DCB configuration, coordinates with switch settings, and provides basic statistics.
For NIC-based solutions that lack an LLDP engine in the firmware, Open-LLDP provides that engine in the host OS. Any NIC that provides DCB primitives in hardware can use Open-LLDP's LLDP state machine to become a fully capable DCB adapter. This has helped vendors jumpstart efforts to offer DCB-capable adapters and has been standardized as a common interface by many traditional NIC vendors. Additionally, running the LLDP engine in Linux user space architecturally allows KVM-based virtualization solutions to query DCB configurations and match virtual machine QoS SLAs to the hardware-based DCB QoS configuration.
The DCB management tools in the host should be common to both NIC-only DCB adapters and CNAs. One approach is to retain the DCB stack in the OS but lessen its netdev dependency. A DCB device class could be a netdev device or a SCSI device and should support the same, or a similar, API to query and set DCB configurations. This architecture envisions a core DCB kernel library with callbacks into both NIC drivers and HBA drivers. The core DCB kernel library should be manageable through dcbnl sockets via Open-LLDP, thus standardizing the kernel ABI for managing DCB across all adapters. DCB should also integrate seamlessly with the requirements of a virtual machine's QoS SLAs and provide a hardware-based lossless fabric and other QoS features, such as per-VM bandwidth guarantees. This can be achieved only if the management interfaces are common and open across all vendors.

In conclusion, standardizing on Open-LLDP, dcbtool, and other host-based tools for both NICs and CNAs will make Linux DCB much simpler for system administrators to manage. DCB as a technology has many desirable features for network, storage, and virtual server administrators.
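As a concrete illustration of the dcbnl socket ABI mentioned above, the following sketch queries the DCB enable state of an adapter over netlink (RTM_GETDCB with DCB_CMD_GSTATE). It is a minimal example under stated assumptions, not a reference implementation: the interface name defaults to eth0, error handling is kept to a minimum, and a CNA-class device would only be reachable this way once a core DCB library exposed it through the same ABI.

/* Minimal dcbnl query: ask the kernel whether DCB is enabled on a netdev.
 * Uses RTM_GETDCB / DCB_CMD_GSTATE over an NETLINK_ROUTE socket.
 * Build: gcc -o dcb_gstate dcb_gstate.c ; run: ./dcb_gstate [ifname]
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/dcbnl.h>

struct dcb_req {
        struct nlmsghdr nlh;
        struct dcbmsg   dcb;
        char            attrbuf[256];
};

int main(int argc, char **argv)
{
        const char *ifname = (argc > 1) ? argv[1] : "eth0";  /* example */
        struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
        struct dcb_req req;
        struct rtattr *rta;
        char resp[1024];
        int sock, len;

        sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
        if (sock < 0) {
                perror("socket");
                return 1;
        }

        /* Netlink header + dcbmsg payload for a GSTATE request. */
        memset(&req, 0, sizeof(req));
        req.nlh.nlmsg_len   = NLMSG_LENGTH(sizeof(struct dcbmsg));
        req.nlh.nlmsg_type  = RTM_GETDCB;
        req.nlh.nlmsg_flags = NLM_F_REQUEST;
        req.dcb.dcb_family  = AF_UNSPEC;
        req.dcb.cmd         = DCB_CMD_GSTATE;

        /* DCB_ATTR_IFNAME tells the kernel which device we are asking about. */
        rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.nlh.nlmsg_len));
        rta->rta_type = DCB_ATTR_IFNAME;
        rta->rta_len  = RTA_LENGTH(strlen(ifname) + 1);
        strcpy(RTA_DATA(rta), ifname);
        req.nlh.nlmsg_len = NLMSG_ALIGN(req.nlh.nlmsg_len) + RTA_ALIGN(rta->rta_len);

        if (sendto(sock, &req, req.nlh.nlmsg_len, 0,
                   (struct sockaddr *)&kernel, sizeof(kernel)) < 0) {
                perror("sendto");
                return 1;
        }

        len = recv(sock, resp, sizeof(resp), 0);
        if (len > 0) {
                struct nlmsghdr *nlh = (struct nlmsghdr *)resp;
                if (nlh->nlmsg_type == RTM_GETDCB) {
                        /* Walk the reply attributes, looking for DCB_ATTR_STATE. */
                        struct dcbmsg *d = NLMSG_DATA(nlh);
                        struct rtattr *a =
                            (struct rtattr *)((char *)d + NLMSG_ALIGN(sizeof(*d)));
                        int alen = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*d));
                        for (; RTA_OK(a, alen); a = RTA_NEXT(a, alen))
                                if (a->rta_type == DCB_ATTR_STATE)
                                        printf("%s: DCB state %u\n", ifname,
                                               *(unsigned char *)RTA_DATA(a));
                } else {
                        fprintf(stderr, "%s: no DCB reply (driver may lack dcbnl support)\n",
                                ifname);
                }
        }
        close(sock);
        return 0;
}

This is the same socket interface that Open-LLDP and dcbtool speak today for netdev devices; the proposal above is simply that a core DCB kernel library answer these messages for HBA-class devices as well, so one tool covers both.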
