Linux* Base Driver for 10 Gigabit Intel® Network Connection

NOTES:  This release includes the ixgbe Linux* Base Driver for the Intel® 10 Gigabit Family of Adapters. All 82599 and 82598-based 10 Gigabit network connections require the ixgbe driver. All other 10 Gigabit network connections require the ixgb driver.  The ixgb driver can be downloaded from: http://sourceforge.net/projects/e1000. First identify your adapter.  Then follow the appropriate steps for building, installing, and configuring the specified driver.

Using the ixgbe base driver

Important Note

Overview

Building and Installation

Command Line Parameters

Additional Configurations

Performance Tuning

Known Issues/Troubleshooting 


Important Note

Warning: The ixgbe driver compiles by default with the LRO (Large Receive Offload) feature enabled. This option offers the lowest CPU utilization for receives, but is completely incompatible with *routing/ip forwarding* and *bridging*. If enabling ip forwarding or bridging is a requirement, it is necessary to disable LRO using the compile-time options noted in the LRO section later in this document. Failing to disable LRO while using ip forwarding or bridging can result in low throughput or even a kernel panic.

Overview

The Linux* base driver supports 2.6.x kernels and includes support for Linux-supported systems, including Itanium(R) 2, x86_64, i686, and PPC.

These drivers are only supported as a loadable module at this time. Intel is not supplying patches against the kernel source to allow for static linking of the driver. A version of the driver may already be included by your distribution and/or the kernel.org kernel. For questions related to hardware requirements, refer to the documentation supplied with your Intel adapter. All hardware requirements listed apply to use with Linux.

The following features are now available in supported kernels:

Channel Bonding documentation can be found in the Linux kernel source: /Documentation/networking/bonding.txt

The driver information previously displayed in the /proc file system is not supported in this release.  Alternatively, you can use ethtool (version 1.6 or later), lspci, and ifconfig to obtain the same information.  Instructions on updating ethtool can be found in the section Additional Configurations later in this document.
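For example, ethtool can report the driver bound to an interface and lspci can list the installed adapters (ethX is a placeholder for your interface name):

     ethtool -i ethX
     lspci | grep -i ethernet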

The driver in this release is compatible with 82598 and 82599-based Intel Network Connections.

For more information on how to identify your adapter, go to the Adapter & Driver ID Guide at:

http://support.intel.com/support/go/network/adapter/proidguide.htm

For the latest Intel network drivers for Linux, refer to the following website. In the search field, enter your adapter name or type, or use the networking link on the left to search for your adapter:

http://support.intel.com/support/go/network/adapter/home.htm

SFP+ Devices with Pluggable Optics

82599-BASED ADAPTERS

NOTES:
  • If your 82599-based Intel® Network Adapter came with Intel optics or is an Intel® Ethernet Server Adapter X520-2, then it only supports Intel optics and/or the direct attach cables listed below.
  • When 82599-based SFP+ devices are connected back to back, they should be set to the same Speed setting via Ethtool. Results may vary if you mix speed settings.
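For example, to pin both ends of a back-to-back link to 10 Gbps, a command along these lines can be run on each system (ethX is a placeholder; whether a given speed/autoneg combination is accepted depends on the installed media):

     ethtool -s ethX speed 10000 autoneg off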

Supplier   Type                                 Part Numbers
SR Modules
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)    AFBR-703SDZ-IN2
Intel      DUAL RATE 1G/10G SFP+ SR (bailed)    FTLX1471D3BCL
LR Modules
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)    FTLX1471D3BCV-IT
Intel      DUAL RATE 1G/10G SFP+ LR (bailed)    AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.

Supplier   Type                                 Part Numbers
Finisar    SFP+ SR bailed, 10g single rate      FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate      AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate      FTLX1471D3BCL

Finisar    DUAL RATE 1G/10G SFP+ SR (No Bail)   FTLX8571D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ SR (No Bail)   AFBR-703SDZ-IN1
Finisar    DUAL RATE 1G/10G SFP+ LR (No Bail)   FTLX1471D3QCV-IT
Avago      DUAL RATE 1G/10G SFP+ LR (No Bail)   AFCT-701SDZ-IN1

Molex      1m - Twin-ax cable                   74752-1101
Molex      3m - Twin-ax cable                   74752-2301
Molex      5m - Twin-ax cable                   74752-3501
Molex      10m - Twin-ax cable                  74752-9004
Tyco       1m - Twin-ax cable                   2032237-2
Tyco       3m - Twin-ax cable                   2032237-4
Tyco       5m - Twin-ax cable                   2032237-6
Tyco       10m - Twin-ax cable                  1-2032237-1

82598-BASED ADAPTERS

NOTES:
  • Intel® Network Adapters that support removable optical modules only support their original module type (i.e., the Intel® 10 Gigabit SR Dual Port Express Module only supports SR optical modules). If you plug in a different type of module, the driver will not load.

  • Hot Swapping/hot plugging optical modules is not supported.

  • Only single speed, 10 gigabit modules are supported.

  • LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module types are not supported. Please see your system documentation for details.

The following is a list of SFP+ modules and direct attach cables that have received some testing. Not all modules are applicable to all devices.

Supplier   Type                                 Part Numbers
Finisar    SFP+ SR bailed, 10g single rate      FTLX8571D3BCL
Avago      SFP+ SR bailed, 10g single rate      AFBR-700SDZ
Finisar    SFP+ LR bailed, 10g single rate      FTLX1471D3BCL
Molex      1m - Twin-ax cable                   74752-1101
Molex      3m - Twin-ax cable                   74752-2301
Molex      5m - Twin-ax cable                   74752-3501
Molex      10m - Twin-ax cable                  74752-9004
Tyco       1m - Twin-ax cable                   2032237-2
Tyco       3m - Twin-ax cable                   2032237-4
Tyco       5m - Twin-ax cable                   2032237-6
Tyco       10m - Twin-ax cable                  1-2032237-1

THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF ANY THIRD PARTY’S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE ABOVE SPECIFICATIONS. THERE MAY BE OTHER MANUFACTURERS OR SUPPLIERS, PRODUCING OR SUPPLYING OPTIC MODULES AND CABLES WITH SIMILAR OR MATCHING DESCRIPTIONS. CUSTOMERS MUST USE THEIR OWN DISCRETION AND DILIGENCE TO PURCHASE OPTIC MODULES AND CABLES FROM ANY THIRD PARTY OF THEIR CHOICE. CUSTOMERS ARE SOLELY RESPONSIBLE FOR ASSESSING THE SUITABILITY OF THE PRODUCT AND/OR DEVICES AND FOR THE SELECTION OF THE VENDOR FOR PURCHASING ANY PRODUCT. THE OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE NOT WARRANTED OR SUPPORTED BY INTEL. INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF SUCH THIRD PARTY PRODUCTS OR SELECTION OF VENDOR BY CUSTOMERS.


Building and Installation

To build a binary RPM* package of this driver, run 'rpmbuild -tb ixgbe.tar.gz'.

NOTE: For the build to work properly, the currently running kernel MUST match the version and configuration of the installed kernel sources. If you have just recompiled the kernel, reboot the system before building.

RPM functionality has only been tested in Red Hat distributions.

To manually build this driver:

  1. Move the base driver tar file to the directory of your choice. For example, use '/home/username/ixgbe' or '/usr/local/src/ixgbe'.

  2. Untar/unzip the archive:

    tar zxf ixgbe-x.x.x.tar.gz

  3. Change to the driver src directory:

    cd ixgbe-x.x.x/src/

  4. Compile the driver module:

    make install

    The binary will be installed as:

    /lib/modules/[KERNEL_VERSION]/kernel/drivers/net/ixgbe/ixgbe.[k]o

    The install location listed above is the default location. This may differ for various Linux distributions.

  5. Load the module:

    For kernel 2.6.x, use the modprobe command -

          modprobe ixgbe <parameter>=<value>

    Note that for 2.6 kernels the insmod command can be used if the full path to the driver module is specified. For example:

         insmod /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgbe/ixgbe.ko

    With 2.6-based kernels, also make sure that any older ixgbe driver is removed from the kernel before loading the new module:

    rmmod ixgbe; modprobe ixgbe

  6. Assign an IP address to the interface by entering the following, where x is the interface number:

    ifconfig ethx <IP_address> netmask <netmask>

  7. Verify that the interface works. Enter the following, where <IP_address> is the IP address of another machine on the same subnet as the interface that is being tested:

    ping <IP_address>
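For example, assuming the adapter enumerates as eth1 and that 192.168.1.20 is another host on the same subnet (both values are placeholders), steps 5 through 7 might look like:

     modprobe ixgbe
     ifconfig eth1 192.168.1.10 netmask 255.255.255.0 up
     ping 192.168.1.20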

To build ixgbe driver with DCA

This example assumes the ioatdma and ixgbe sources are in /usr/src

  1. Unpack the ioatdma source, build and install

        cd /usr/src
        tar zxf ioatdma-<ioat version>.tar.gz
        cd ioatdma-<ioat version>
        make
        make install

  2. Unpack the ixgbe driver, build it with DCA support, and install

        cd /usr/src
        tar zxf ixgbe-<ixgbe version>.tar.gz
        cd ixgbe-<ixgbe-version>/src
        make install CFLAGS_EXTRA="-DCONFIG_DCA -I/path/to/ioatdma-<ioat-version>/include"
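To confirm afterwards that DCA is active, one option is to load the modules and check the kernel log (the exact message text varies by driver and kernel version):

        modprobe ioatdma
        modprobe ixgbe DCA=1
        dmesg | grep -i dca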


Command Line Parameters

If the driver is built as a module, the following optional parameters are used by entering them on the command line with the modprobe command using this syntax:

modprobe ixgbe [<option>=<VAL1>,<VAL2>,...]

For example:

modprobe ixgbe InterruptThrottleRate=16000,16000

The default value for each parameter is generally the recommended setting, unless otherwise noted.
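To make parameters persistent across reboots, most distributions accept an options line for the module in /etc/modprobe.conf or a file under /etc/modprobe.d/ (the exact file is distribution dependent), for example:

     options ixgbe InterruptThrottleRate=16000,16000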

RSS - Receive Side Scaling (or multiple queues for receives)
Valid Range: 0-16
Default Value: 1

0 = disables RSS
1 = enables RSS and sets the descriptor queue count to 16 or the number of online CPUs, whichever is less.
2-16 = enables RSS with 2-16 queues

RSS also affects the number of transmit queues allocated on 2.6.23 and newer kernels with CONFIG_NETDEVICES_MULTIQUEUE set in the kernel .config file. CONFIG_NETDEVICES_MULTIQUEUE only exists from kernel 2.6.23 to 2.6.26. Other options enable multiqueue in 2.6.27 and newer kernels.
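For example, to limit each port of a dual-port adapter to four receive queues (one comma-separated value per port):

     modprobe ixgbe RSS=4,4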

MQ - Multi Queue
Valid Range: 0, 1
Default Value: 1

0 = Disables Multiple Queue support
1 = Enables Multiple Queue support (a prerequisite for RSS)
DCA - Direct Cache Access
Valid Range: 0, 1
Default Value: 1 (when IXGBE_DCA is enabled)

0 = Disables DCA support in the driver
1 = Enables DCA support in the driver

See the instructions above for enabling DCA. If the driver is enabled for DCA, this parameter allows load-time control of the feature.

RxBufferMode
Valid Range: 0-2
Default Value: 2

0 = Driver will use a single buffer for Rx packets.
1 = Driver will use packet split mode for Rx. The packet header is received in the header buffer and the payload in the data buffer.
2 = Optimal mode. Driver will use single buffer mode for non-Jumbo configurations and packet split mode for Jumbo configurations.
InterruptType
Valid Range: 0-2 (0 = Legacy Int, 1 = MSI, 2 = MSI-X)
Default Value: 2

The interrupt type parameter allows load-time control over the type of interrupt registered for by the driver. MSI-X is required for multiple queue support, and some kernels and combinations of kernel .config options will force a lower level of interrupt support. 'cat /proc/interrupts' will show different values for each type of interrupt.
InterruptThrottleRate
Valid Range: 956-488281 (0 = off, 1 = dynamic)
Default Value: 8000

Interrupt Throttle Rate (interrupts/sec). The ITR parameter controls how many interrupts each interrupt vector can generate per second. On MQ/RSS enabled kernels with MSI-X interrupts, this means that each RX vector can generate (by default) 8000 interrupts per second and each TX vector can generate (by default) 4000 interrupts per second. Increasing ITR lowers latency at the cost of increased CPU utilization, though it may help throughput in some circumstances.

1 = Dynamic mode attempts to moderate interrupts per vector while maintaining very low latency. This can sometimes cause extra CPU utilization. If you plan to deploy ixgbe in a latency-sensitive environment, consider this parameter.

0 = Turns off any interrupt moderation and may improve small packet latency, but is generally not suitable for bulk throughput traffic due to the increased CPU utilization of the higher interrupt rate.

LLI (Low Latency Interrupts)

LLI allows for immediate generation of an interrupt upon processing receive packets that match certain criteria as set by the parameters described below. LLI parameters are not enabled when Legacy interrupts are used. You must be using MSI or MSI-X (see cat /proc/interrupts) to use LLI successfully.
LLIPort
Valid Range: 0-65535
Default Value: 0 (disabled)

LLI is configured with the LLIPort command-line parameter, which specifies which TCP port should generate Low Latency Interrupts.

For example, using LLIPort=80 would cause the hardware to generate an immediate interrupt upon receipt of any packet sent to TCP port 80 on the local machine.

CAUTION: Enabling LLI can result in an excessive number of interrupts/second that may cause problems with the system and in some cases may cause a kernel panic.
LLIPush
Valid Range: 0-1
Default Value: 0 (disabled)

LLIPush can be enabled or disabled (default). It is most effective in an environment with many small transactions.

NOTE: Enabling LLIPush may allow a denial of service attack.

LLISize
Valid Range: 0-1500
Default Value: 0 (disabled)

LLISize causes an immediate interrupt if the board receives a packet smaller than the specified size.

LLIEType
Valid Range: 0-x8fff
Default Value: 0 (disabled)

Low Latency Interrupt Ethernet Protocol Type.

LLIVLANP
Valid Range: 0-7
Default Value: 0 (disabled)

Low Latency Interrupt on VLAN priority threshold.
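As an illustration only (TCP port 5001 is a placeholder), the driver could be loaded with MSI-X interrupts and low latency interrupts for a specific port, then /proc/interrupts inspected to confirm that MSI-X vectors are in use:

     modprobe ixgbe InterruptType=2 LLIPort=5001
     cat /proc/interrupts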
Flow Control

Flow Control is enabled by default. To disable flow control when the link partner is flow control capable, use ethtool:

ethtool -A eth? autoneg off rx off tx off

For 82598 backplane cards entering 1 gig mode, the flow control default behavior is changed to off. Flow control in 1 gig mode on these devices can lead to Tx hangs.
Intel(R) Ethernet Flow Director

Supports advanced filters that direct receive packets by their flows to different queues. Enables tight control on routing a flow in the platform. Matches flows and CPU cores for flow affinity. Supports multiple parameters for flexible flow classification and load balancing.

Flow Director is enabled only if the kernel is multiple TX queue capable.

An included script (set_irq_affinity.sh) automates setting the IRQ to CPU affinity.

You can verify that the driver is using Flow Director by looking at the counters in ethtool: fdir_miss and fdir_match.

The following three parameters impact Flow Director:

FdirMode
Valid Range: 0-2 (0 = off, 1 = ATR, 2 = Perfect filter mode)
Default Value: 1 (ATR)

Flow Director filtering modes.

FdirPballoc
Valid Range: 0-2 (0 = 64k, 1 = 128k, 2 = 256k)
Default Value: 0 (64k)

Flow Director allocated packet buffer size.

AtrSampleRate
Valid Range: 1-100
Default Value: 20

Software ATR Tx packet sample rate. For example, when set to 20, every 20th packet is examined to see whether it will create a new flow.
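A quick way to watch the fdir_match and fdir_miss counters mentioned above is to dump the interface statistics with ethtool (ethX is a placeholder):

     ethtool -S ethX | grep fdir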

Additional Configurations

Configuring the Driver on Different Distributions

Configuring a network driver to load properly when the system is started is distribution dependent. Typically, the configuration process involves adding an alias line to /etc/modules.conf or /etc/modprobe.conf, as well as editing other system startup scripts and/or configuration files. Many popular Linux distributions ship with tools to make these changes for you. To learn the proper way to configure a network device for your system, refer to your distribution documentation. If during this process you are asked for the driver or module name, the name for the Linux Base Driver for the 10 Gigabit Family of Adapters is ixgbe.

Viewing Link Messages

Link messages will not be displayed to the console if the distribution is restricting system messages. In order to see network driver link messages on your console, set the dmesg level to eight by entering the following:

     dmesg -n 8

NOTE: This setting is not saved across reboots.

Jumbo Frames

The driver supports Jumbo Frames for all adapters. Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500. The maximum value for the MTU is 16110. Use the ifconfig command to increase the MTU size. For example, enter the following, where x is the interface number:

ifconfig ethx mtu 9000 up

The maximum MTU setting for Jumbo Frames is 16110. This value coincides with the maximum Jumbo Frames size of 16128. This driver will attempt to
use multiple page sized buffers to receive each jumbo packet. This should help to avoid buffer starvation issues when allocating receive packets.

Ethtool

The driver utilizes the ethtool interface for driver configuration and diagnostics, as well as displaying statistical information.  Ethtool version 3.0 or later is required for this functionality.

The latest release of ethtool can be found at: http://sourceforge.net/projects/gkernel
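Typical usage includes querying link settings, dumping per-interface statistics, and checking offload settings (ethX is a placeholder):

     ethtool ethX
     ethtool -S ethX
     ethtool -k ethX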

NAPI

NAPI (Rx polling mode) is supported in the ixgbe driver. NAPI is enabled or disabled based on the configuration of the kernel. To override the default, use the following compile-time flags.

You can tell if NAPI is enabled in the driver by looking at the version number of the driver. It will contain the string -NAPI if NAPI is enabled.

To enable NAPI, compile the driver module, passing in a configuration option:

     make CFLAGS_EXTRA=-DIXGBE_NAPI install

NOTE: This will not do anything if NAPI is disabled in the kernel.

To disable NAPI, compile the driver module, passing in a configuration option:

     make CFLAGS_EXTRA=-DIXGBE_NO_NAPI install

See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.

LRO

Large Receive Offload (LRO) is a technique for increasing inbound throughput of high-bandwidth network connections by reducing CPU overhead. It works by aggregating multiple incoming packets from a single stream into a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed. LRO combines multiple Ethernet frames into a single receive in the stack, thereby potentially decreasing CPU utilization for receives.

IXGBE_NO_LRO is a compile-time flag. The user can enable it at compile time to remove support for LRO from the driver. The flag is used by adding CFLAGS_EXTRA="-DIXGBE_NO_LRO" to the make command when the driver is being compiled.

make CFLAGS_EXTRA="-DIXGBE_NO_LRO" install

You can verify that the driver is using LRO by looking at these counters in Ethtool:

lro_flushed - the total number of receives using LRO.

lro_aggregated - counts the total number of Ethernet packets that were combined.

NOTE: IPv6 and UDP are not supported by LRO.

HW RSC

82599-based adapters support hardware-based receive side coalescing (RSC), which can merge multiple frames from the same IPv4 TCP/IP flow into a single structure that can span one or more descriptors. It works similarly to the software LRO technique. By default HW RSC is enabled, and SW LRO cannot be used for 82599-based adapters unless HW RSC is disabled.

IXGBE_NO_HW_RSC is a compile-time flag. The user can enable it at compile time to remove support for HW RSC from the driver. The flag is used by adding CFLAGS_EXTRA="-DIXGBE_NO_HW_RSC" to the make command when the driver is being compiled.

make CFLAGS_EXTRA="-DIXGBE_NO_HW_RSC" install

You can verify that the driver is using HW RSC by looking at the counter in Ethtool:

hw_rsc_count - counts the total number of Ethernet packets that were combined.

rx_dropped_backlog

When in a non-NAPI (or Interrupt) mode, this counter indicates that the stack is dropping packets. There is an adjustable parameter in the stack that allows you to adjust the size of this backlog. We recommend increasing netdev_max_backlog if the counter goes up.

# sysctl -a |grep netdev_max_backlog

net.core.netdev_max_backlog = 1000

# sysctl -e net.core.netdev_max_backlog=10000

net.core.netdev_max_backlog = 10000
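To make the larger backlog persistent across reboots, the setting can also be added to /etc/sysctl.conf and applied with 'sysctl -p' (the value 10000 above is only an example figure):

     net.core.netdev_max_backlog = 10000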

Flow Control

Flow control is enabled by default. If you want to disable a flow control capable link partner, use Ethtool:

    ethtool -A eth? autoneg off rx off tx off

FCoE

This release of the ixgbe driver contains new code to enable users to use Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) functionality that is supported by the 82598-based hardware. This code has no default effect on the regular driver operation, and configuring DCB and FCoE is outside the scope of this driver README. Refer to http://www.open-fcoe.org/ for FCoE project information and contact e1000-eedc@lists.sourceforge.net for DCB information.


Performance Tuning

An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf


Known Issues/Troubleshooting

NOTE: After installing the driver, if your Intel Network Connection is not working, verify that you have installed the correct driver.

MSI-X Issues with Kernels between 2.6.19 - 2.6.21 (inclusive)

Kernel panics and instability may be observed on any MSI-X hardware if you use irqbalance with kernels between 2.6.19 and 2.6.21. If such problems are encountered, you may disable the irqbalance daemon or upgrade to a newer kernel.

Driver Compilation

When trying to compile the driver by running make install, the following error may occur:  "Linux kernel source not configured - missing version.h"

To solve this issue, create the version.h file by going to the Linux source tree and entering:

make include/linux/version.h

Do Not Use LRO When Routing or Bridging Packets

Due to a known general compatibility issue with LRO and routing, do not use LRO when routing or bridging packets.

LRO and iSCSI Incompatibility

LRO is incompatible with iSCSI target or initiator traffic. A panic may occur when iSCSI traffic is received through the ixgbe driver with LRO enabled. To work around this, the driver should be built and installed with:

# make CFLAGS_EXTRA=-DIXGBE_NO_LRO install

Performance Degradation with Jumbo Frames

Degradation in throughput performance may be observed in some Jumbo frames environments. If this is observed, increasing the application's socket buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help. See the specific application manual and /usr/src/linux*/Documentation/networking/ip-sysctl.txt for more details.

Multiple Interfaces on Same Ethernet Broadcast Network

Due to the ARP behavior on Linux, it is not possible to have one system on two IP networks in the same Ethernet broadcast domain (non-partitioned switch) behave as expected. All Ethernet interfaces will respond to IP traffic for any IP address assigned to the system. This results in unbalanced receive traffic.

If you have multiple interfaces in a server, either turn on ARP filtering by entering:

     echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter

(this only works if your kernel's version is higher than 2.4.5), or install the interfaces in separate broadcast domains (either in different switches or in a switch partitioned to VLANs).

UDP Stress Test Dropped Packet Issue

Under a small-packet UDP stress test with the 10GbE driver, the Linux system may drop UDP packets because the socket buffers are full. You may want to change the driver's Flow Control variables to the minimum value for controlling packet reception.

Or you can increase the kernel's default buffer sizes for UDP by changing the values in

/proc/sys/net/core/rmem_default and rmem_max
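For example, the following raises both values to 16 MB until the next reboot (the size is illustrative, not a recommendation):

     sysctl -w net.core.rmem_default=16777216
     sysctl -w net.core.rmem_max=16777216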

Unplugging network cable while ethtool -p is running

In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging the network cable while ethtool -p is running will cause the system to become unresponsive to keyboard commands, except for control-alt-delete. Restarting the system appears to be the only remedy.

Cisco Catalyst 4948-10GE port resets may cause switch to shut down ports

82598-based hardware can re-establish link quickly. When connected to some switches, rapid resets within the driver may cause the switch port to become isolated due to "link flap". This is typically indicated by a yellow instead of a green link light. Several operations may cause this problem, such as repeatedly running ethtool commands that cause a reset.

A potential workaround is to use the Cisco IOS command "no errdisable detect cause all" from the Global Configuration prompt which enables the switch to keep the interfaces up, regardless of errors.

Installing Red Hat Enterprise Linux 4.7, 5.1, or 5.2 with an Intel(R) 10 Gigabit AT Server Adapter may cause kernel panic

A known issue may cause a kernel panic or hang after installing an 82598AT-based Intel(R) 10 Gigabit AT Server Adapter in a Red Hat Enterprise Linux 4.7, 5.1, or 5.2 system. The ixgbe driver for both the install kernel and the runtime kernel can create this panic if the 82598AT adapter is installed. Red Hat may release a security update containing a fix that you can download using RHN (Red Hat Network). Intel recommends that you install the ixgbe-1.3.31.5 driver or newer BEFORE installing the hardware.

Rx Page Allocation Errors

"Page allocation failure. order:0" errors may occur under stress with kernels 2.6.25 and above. This is caused by the way the Linux kernel reports this stressed condition.

DCB: Generic segmentation offload on causes bandwidth allocation issues

In order for DCB to work correctly, GSO (Generic Segmentation Offload, aka software TSO) must be disabled using ethtool. By default, since the hardware supports TSO (hardware offload of segmentation), GSO will not be running. The GSO state can be queried with ethtool using ethtool -k ethX.
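If GSO is reported as on, it can be disabled with ethtool (ethX is a placeholder):

     ethtool -K ethX gso off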

When using 82598-based network connections, the ixgbe driver supports only 16 queues on a platform with more than 16 cores

Due to known hardware limitations, RSS can only filter in a maximum of 16 receive queues.

82599-based network connections support up to 64 queues.

Disable GRO when routing/bridging

Due to a known kernel issue, GRO must be turned off when routing/bridging. GRO can be turned off via ethtool.

ethtool -K ethX gro off

where ethX is the ethernet interface you're trying to modify.

Lower than expected performance on dual port 10GbE devices

Some PCI-E x8 slots are actually configured as x4 slots. These slots have insufficient bandwidth for full 10GbE line rate with dual port 10GbE devices. The driver can detect this situation and will write the following message in the system log: "PCI-Express bandwidth available for this card is not sufficient for optimal performance. For optimal performance a x8 PCI-Express slot is required."

If this error occurs, moving your adapter to a true x8 slot will resolve the issue.
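The negotiated link width can also be checked directly with lspci; a Width of x4 in the LnkSta line confirms the limitation (substitute the bus address reported by 'lspci | grep -i ethernet'; root privileges may be required):

     lspci -vvv -s <bus:device.function> | grep -i width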


Support

For general information, go to the Intel support website at:

    www.intel.com/support/

If an issue is identified with the released source code on the supported kernel with a supported adapter, email the specific information related to the issue to linux.nics@intel.com.


Last modified on 10/20/09