Welcome to the foundation of Linux networking! In this first installment, we dive deep into the physical and link layers - the bedrock upon which all network communication in Linux is built. Understanding these layers is crucial because every packet that flows through your system, whether it’s a simple ping or high-frequency trading data, starts its journey here.
The Big Picture
Before we get into the specifics, let’s see where the physical and link layers fit in the Linux networking stack:
Hardware: Network interface cards (NICs) handle the physical transmission
Link Layer (our focus): Ethernet frames, device drivers, packet buffers
Network Layer: IP processing and routing (covered in Part 2)
Transport Layer: TCP/UDP protocols (covered in Part 3)
Application Layer: Sockets and system calls (covered in Part 4)
In this post, we’ll explore three fundamental components that make it all work: network device abstraction, the sk_buff structure, and Ethernet driver implementation.
Network Device Abstraction
Linux treats network interfaces as abstract devices through the struct net_device, providing a uniform interface regardless of whether you’re dealing with Ethernet, WiFi, or even virtual interfaces.
The net_device Structure
At the heart of every network interface is struct net_device, defined in include/linux/netdevice.h. A driver typically brings an interface to life in four steps: allocate the device, install its operations table, set the hardware address, and register with the kernel:

```c
/* 1. Allocate the device, including room for driver-private data */
struct net_device *dev = alloc_etherdev(sizeof(struct my_adapter));
if (!dev)
	return -ENOMEM;

/* 2. Install the driver's operations table */
dev->netdev_ops = &my_netdev_ops;

/* 3. Set hardware address */
memcpy(dev->dev_addr, hardware_mac_address, ETH_ALEN);

/* 4. Register with kernel */
int err = register_netdev(dev);
if (err) {
	free_netdev(dev);
	return err;
}
```

Here my_adapter and my_netdev_ops stand in for the driver’s own private state structure and its struct net_device_ops table.
The Heart of Networking: sk_buff
The sk_buff (socket buffer) is arguably the most important data structure in Linux networking. Every network packet, from the moment it arrives at the hardware until it reaches an application, is represented as an sk_buff.
sk_buff Architecture
The sk_buff is designed for efficiency, with carefully arranged fields to optimize memory access patterns:
```c
struct sk_buff {
	union {
		struct {
			struct sk_buff    *next;  // Next buffer in list
			struct sk_buff    *prev;  // Previous buffer in list
			struct net_device *dev;   // Associated network device
		};
		struct rb_node rbnode;        // For RB-tree storage (TCP)
	};
	/* ... many more fields ... */
};
```
```c
dma_error:
	dev_err(&adapter->pdev->dev, "TX DMA map failed\n");

	/* Unmap any mapped buffers */
	buffer_info->dma = 0;
	if (count)
		count--;

	while (count--) {
		if (i == 0)
			i += tx_ring->count;
		i--;
		buffer_info = &tx_ring->buffer_info[i];
		e1000_unmap_and_free_tx_resource(tx_ring, buffer_info);
	}

	return 0;
}
```
Performance Considerations
Multi-Queue Support
Modern NICs support multiple transmit and receive queues for improved performance:
```c
	for (i = 0; i < adapter->num_tx_queues; i++) {
		err = e1000_setup_tx_resources(adapter->tx_ring[i]);
		if (err) {
			e_err("Allocation for TX Queue %u failed\n", i);
			for (i--; i >= 0; i--)
				e1000_free_tx_resources(adapter->tx_ring[i]);
			break;
		}
	}

	return err;
}
```
```c
/* CPU affinity for interrupt handling */
static void e1000_setup_msix(struct e1000_adapter *adapter)
{
	struct net_device *netdev = adapter->netdev;
	int vector = 0;

	/* Set up RX interrupts */
	for (i = 0; i < adapter->num_rx_queues; i++) {
		struct e1000_ring *rx_ring = adapter->rx_ring[i];

		rx_ring->ims_val = E1000_IMS_RXT0;
		adapter->msix_entries[vector].entry = vector;
		adapter->msix_entries[vector].vector = 0;
		vector++;
	}
}
```
Cache Optimization
Drivers optimize for CPU cache efficiency:
```c
/* Prefetch next descriptors */
static void e1000_rx_desc_prefetch(struct e1000_ring *rx_ring, int cleaned_count)
{
	unsigned int i = rx_ring->next_to_use;

	if (cleaned_count >= E1000_RX_BUFFER_WRITE) {
		/* Prefetch next cache line of descriptors */
		prefetch(&rx_ring->desc[i]);
	}
}
```
```c
/* Align sk_buff data for optimal cache usage */
skb = netdev_alloc_skb_ip_align(netdev, length);
```
Debugging and Monitoring
Interface Statistics
Monitor interface health through statistics:
```bash
# View interface statistics
cat /proc/net/dev
ip -s link show eth0
ethtool -S eth0

# Monitor packet drops
netstat -i
dropwatch -l kas
```
You now have a solid foundation in Linux networking’s physical and link layers. You understand:
How network devices are abstracted and managed in the kernel
The sk_buff structure that carries every packet through the system
How Ethernet drivers efficiently move packets between hardware and kernel
Performance considerations and optimization techniques
In Part 2, we’ll build on this foundation to explore the network layer, where we’ll see how IP packets are processed, how routing decisions are made, and how the netfilter framework enables packet filtering and modification.
The journey from hardware to application is complex, but understanding these fundamental building blocks gives you the tools to debug performance issues, optimize network applications, and truly understand how Linux networking works under the hood.