Copyright(c) 2013 - 2020 Intel Corporation

This release includes the native i40en VMware ESXi Driver for Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family

Driver version: 1.10.6

Supported ESXi release: 6.5
Compatible ESXi version: 6.7

=================================================================================

Contents
--------

- Important Notes
- Supported Features
- New Features
- New Hardware Supported
- Physical Hardware Configuration Maximums
- Bug Fixes
- Known Issues and Workarounds
- Command Line Parameters
- Previously Released Versions

=================================================================================

Important Notes:
----------------

- Recovery Mode
   A device will enter recovery mode if its NVM becomes corrupted.
   If a device enters recovery mode because of an interrupted NVM update, you should attempt to finish the update.
   If the device is in recovery mode because of a corrupted NVM, use the nvmupdate utility to reset
   the NVM back to factory defaults.
   Wake on LAN will be disabled during recovery mode for X722 adapters.

   NOTE: You must power cycle your system after using Recovery Mode to completely reset the firmware and hardware.

- Backplane devices
   Backplane devices operate in auto mode only; speed settings cannot be manually overridden.

- VLAN Tag Stripping Control for VF drivers
   The VLAN Tag Stripping Control feature is enabled by default but can be disabled by the VF driver.
   On a Linux VM with the i40evf SR-IOV device (VF) driver, use the following command to control the feature:
   ethtool --offload <IF> rxvlan on/off

   NOTE: Disabling VLAN Tag Stripping is only applicable to Virtual Guest Tagging (VGT) configurations.
   NOTE: VLAN Tag Stripping Control feature is currently not available on Windows VF drivers.
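
   As a minimal sketch, the guest-side state can be checked by parsing `ethtool -k` output before
   toggling the offload; the sample output and interface name eth0 below are illustrative:

```shell
# Sketch: determine whether VLAN tag stripping (rx-vlan-offload) is on.
# "sample" stands in for real `ethtool -k eth0` output on a Linux VM.
sample='rx-vlan-offload: on
tx-vlan-offload: on'
rxvlan=$(printf '%s\n' "$sample" | awk -F': ' '/^rx-vlan-offload/ {print $2}')
echo "rx-vlan-offload is $rxvlan"
# To disable stripping on the live VM (VGT configurations only):
#   ethtool --offload eth0 rxvlan off
```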

- Malicious Driver Detection (MDD)
   The Malicious Driver Detection feature protects the NIC from malformed packets or any other hostile actions
   that drivers operating with the NIC may perform (accidentally or deliberately).
   A Virtual Function (VF) should be assigned as an SR-IOV Passthrough Adapter to a Virtual Machine (VM).
   Please refer to available VMware vSphere documentation about device/hardware assignment to the VM
   using PCI Passthrough (also known as DirectPath IO) vs SR-IOV Passthrough Adapter.
   Assigning a VF to a VM as a PCI Passthrough device and updating some network settings
   (such as MTU size) may result in the following:
     - driver reporting an MDD event in the kernel log,
     - the driver resetting the port,
     - the network connection incurring some packet loss while the port resets.
   When a Malicious Driver event is detected, the driver reacts in one of two ways:
     - if the source of the MDD event was the i40en driver (Physical Function [PF] driver), the hardware is reset;
     - if the source of the MDD event was a Virtual Machine's SR-IOV driver (Virtual Function [VF] driver),
       the suspected VF is disabled after the fourth such event and the malicious VM's SR-IOV adapter becomes unavailable.
       To bring it back, a VM reboot or VF driver reload is required.

- LLDP Agent
   Link Layer Discovery Protocol (LLDP) is supported on Intel X710 and XL710 adapters with FW 6.0 and later,
   as well as X722 adapters with FW 3.10 and later.
   Set the LLDP driver load parameter to allow or disallow forwarding of LLDP frames to the network stack:

     LLDP agent is enabled in firmware by default (Default FW setting)
     Set LLDP=0 to disable LLDP agent in firmware
     Set LLDP=1 to enable LLDP agent in firmware
      Setting LLDP to anything other than 0 or 1 falls back to the default setting (LLDP enabled in firmware)
      The LLDP agent is always enabled in firmware when MFP (Multi-Function Port, i.e. NPAR) is enabled,
     regardless of the driver parameter LLDP setting.

   When the LLDP agent is enabled in firmware, the ESXi OS will not receive LLDP frames and Link Layer
   Discovery Protocol information will not be available on the physical adapter inside ESXi.

   Please note that the LLDP driver module parameter is an array of values. Each value represents the LLDP agent
   setting for one physical port.
   Please refer to "Command Line Parameters" section for suggestions on how to set driver module parameters.
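
   As a sketch of the array form, the following builds a per-port LLDP value string for a hypothetical
   4-port adapter (the port count is an assumption) and shows the esxcli command that would apply it:

```shell
# Sketch (assumption: 4 physical ports): build an LLDP value string that
# disables the in-firmware LLDP agent (0) on every port.
nports=4
lldp=$(for i in $(seq 1 "$nports"); do printf '0,'; done)
lldp=${lldp%,}                 # strip the trailing comma
echo "LLDP=$lldp"
# Apply on the ESXi host (not run here; -a leaves other parameters unchanged):
#   esxcli system module parameters set -m i40en -a -p "LLDP=$lldp"
```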

- Flat NVM images on ESXi 6.0/6.5
   ESXi 6.0/6.5 in UEFI boot mode does not support X722 device with flat NVM image.
   To use NVM flat images on the X722 device, change BIOS boot mode to Legacy Mode or use ESXi 6.7.

- Trusted Virtual Function
   Setting a Virtual Function (VF) to be trusted using the Intel extended esxcli tool (intnetcli) allows the VF to
   request unicast/multicast promiscuous mode. Additionally, a trusted mode VF can request more MAC addresses and VLANs,
   subject to hardware limitations only.
   You must set the VF to the desired mode again after every VM or host reboot, since the ESXi kernel may
   assign a different VF to the VM after the reboot.
   NOTE: Using this feature may impact performance.
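
   Because the assigned VF can change across reboots, re-applying trust is typically scripted. A minimal
   sketch, assuming the intnetcli plug-in is installed and using illustrative names (vmnic0, VF 1):

```shell
# Sketch: compose the command that re-applies trusted mode for one VF.
# vmnic0 and VF index 1 are illustrative; adjust for your host.
vmnic=vmnic0
vf=1
trust_cmd="esxcli intnet sriovnic vf -n $vmnic -v $vf -t on"
echo "$trust_cmd"
# Run the printed command on the ESXi host after each VM power-on.
```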


Supported Features:
-------------------

- Rx, Tx, TSO checksum offload
- Netqueue (VMDQ)
- VxLAN Offload
- Geneve Offload
- Hardware VLAN filtering
- Rx Hardware VLAN stripping
- Tx Hardware VLAN inserting
- Interrupt moderation
- SR-IOV (supports four queues per VF, VF MTU, and VF VLAN)
        Valid range for max_vfs
        1-32 (4 port devices)
        1-64 (2 port devices)
        1-128 (1 port devices)
- Link Auto-negotiation
- Flow Control
- Management APIs for CIM Provider, OCSD/OCBB
- Firmware Recovery Mode
- VLAN Tag Stripping Control for VF drivers
- Trusted Virtual Function
- VF support for 2.5G and 5G link speeds
- PHY power off during link down
- Persistent VF Trusted mode across VM reboots
- Wake on LAN (WoL) support
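
For the SR-IOV max_vfs ranges listed above, the module parameter takes one value per physical port.
A minimal sketch, assuming a hypothetical 4-port device with 8 VFs per port:

```shell
# Sketch (assumptions: 4 ports, 8 VFs each; valid range for a 4-port
# device is 1-32 per the list above).
nports=4
per_port=8
max_vfs=$(for i in $(seq 1 "$nports"); do printf '%s,' "$per_port"; done)
max_vfs=${max_vfs%,}           # strip the trailing comma
echo "max_vfs=$max_vfs"
# Apply on the ESXi host (not run here):
#   esxcli system module parameters set -m i40en -p "max_vfs=$max_vfs"
```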


New Features:
-------------

- Support for VF drivers working in polling mode


New Hardware Supported:
-----------------------

- Added support for new devices for specific OEMs


Physical Hardware Configuration Maximums:
-----------------------------------------

40Gb Ethernet Ports (Intel) = 4
25Gb Ethernet Ports (Intel) = 4
10Gb Ethernet Ports (Intel) = 16


Bug Fixes:
----------

- PCI device addresses are now reported by the intnet CLI tool


Known Issues and Workarounds:
-----------------------------

- VF adapter cannot receive any packets after VM reboot.
  The probability of occurrence increases with the overall number of VFs and the number of VM reboots.
   Workaround: Power the VMs with VFs off and on instead of rebooting them.
- ARP broadcast storm when a virtual appliance or a virtual machine acts as an Ethernet bridge between multiple vSwitches.
   Workaround: use the following command to turn off VMDQ Tx loopback path on vmnics which are linked by the bridge:
   esxcli intnet misc vmdqlb -e 0 -n vmnicX
   The esxcli intnet plug-in is available at the following link: https://downloadcenter.intel.com/download/28479
- Intermittent packet drops when two management interfaces are defined
   Workaround: Switch off LLDP agent in the firmware
- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
   Workaround: Please look at the VMware Knowledge Base 2057874
- Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
   Workaround: Please look at the VMware Knowledge Base 2147604
- Cannot set maximum values for VMDQ and SR-IOV VFs on a port at the same time
   Workaround: Reduce the VMDQ or max_vfs value for the port
- In MFP adapter mode multicast traffic does not work on emulated adapters when a VM with an SR-IOV VF adapter is powered on
   Workaround: Do not mix SR-IOV and emulated traffic in MFP mode
- Setting Geneve options length larger than 124 bytes causes VLAN-tagged Geneve traffic to drop
   Workaround: Don't set Geneve options length to more than 124 bytes or don't assign a VLAN to Geneve tunnel
- In RHEL 7.2 an IPv6 connection persists between VF adapters after changing port group VLAN mode from trunk (VGT) to port VLAN (VST)
   Workaround: Upgrade to RHEL 7.3 or newer. This is a Linux kernel bug that causes packets to arrive at the wrong virtual interface.
- Disabling VFs due to MDD events caused by configuring VF adapters as 'PCI Device' instead of 'SR-IOV Passthru Device'
   Workaround: Configure VMs with 'SR-IOV Passthru Device'
- Switching port (vmnic) of management uplink may lead to connectivity issues
   Workaround: Switch the port of management uplink back to the original one


Command Line Parameters:
------------------------

ethtool is not supported for the native driver.
Please use esxcli, vsish, or esxcfg-* to set or get the driver information, for example:

- Get the driver supported module parameters
  esxcli system module parameters list -m i40en

- Set a driver module parameter (clearing other parameter settings)
  esxcli system module parameters set -m i40en -p LLDP=0

- Set a driver module parameter (other parameter settings left unchanged)
  esxcli system module parameters set -m i40en -a -p LLDP=0

- Get the driver info
  esxcli network nic get -n vmnic1

- Get uplink stats
  esxcli network nic stats -n vmnic1

- Get the private stats
  vsish -e get /net/pNics/vmnic1/stats

The extended esxcli tool allows users to set a VF as trusted/untrusted, enable/disable MAC address spoof-checking, etc.
The tool is available at the following link: https://downloadcenter.intel.com/download/28479
Example commands:

- Set VF 1 as trusted
   esxcli intnet sriovnic vf -n vmnic0 -v 1 -t on

- Set VF 1 as untrusted
   esxcli intnet sriovnic vf -n vmnic0 -v 1 -t off

- Enable VF spoof-check for VF 1
   esxcli intnet sriovnic vf -v 1 -n vmnic0 -s on

- Disable VF spoof-check for VF 1
   esxcli intnet sriovnic vf -v 1 -n vmnic0 -s off

- Get the current settings for VF 1
   esxcli intnet sriovnic vf get -n vmnic0 -v 1

- Turn on VMDQ Tx loopback path on vmnic0
   esxcli intnet misc vmdqlb -e 1 -n vmnic0

- Turn off VMDQ Tx loopback path on vmnic0
   esxcli intnet misc vmdqlb -e 0 -n vmnic0

- Set Auto-FEC mode on vmnic0
   esxcli intnet fec set -m Auto-FEC -n vmnic0

- Get FEC status on vmnic0
   esxcli intnet fec get -n vmnic0

- Enable link privileges on vmnic0
   esxcli intnet admin link set -p enable -n vmnic0

- Disable link privileges on vmnic0
   esxcli intnet admin link set -p disable -n vmnic0

- Get link privileges of vmnic0
   esxcli intnet admin link get -n vmnic0
=================================================================================

Previously Released Versions:
-----------------------------

- Driver Version: 1.9.5
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0, 6.5 and 6.7
   New Features Supported:
      - Added VF support for 2.5G and 5G link speeds
      - Added PHY power off feature during link down
      - VF can stay Trusted persistently between VM reboots
   Bug Fixes:
      - Fix for losing VLAN stripping configuration
      - Fixed extended esxcli tool interworking when displaying FEC information
      - Corrected log output when setting VF promiscuous mode
      - Fixed incorrect output of EEE parameters status
      - Fixed false positive MDD reporting
      - Fixed meaningless status reported by intnetcli tool when subcommand is not supported
      - Fixed link flapping on some 25G cards
      - Fixed false positive error reporting when no cables were connected to the SFP+ modules in some cards
      - Fixed connection loss when reloading driver during reset loop
- Driver Version: 1.8.6
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0, 6.5 and 6.7
   New Features Supported:
      - Added ability to select 2.5G and 5G speeds on specific devices.
      - Added support for Energy-Efficient Ethernet (EEE) on specific X710 devices.
      - Added Wake on LAN (WoL) support.
      - Added support for setting and displaying forward error correction (FEC) mode for 25G links.
   New Hardware Supported:
      - Added new devices support for specific OEMs
   Bug Fixes:
      - Fixed incorrect alignment of the TX ring size value provided by the user.
      - Fixed beacon probing network failure detection.
      - Fixed RX throughput is not matching the statistics of RX queues.
      - Fixed dynamic interrupt throttling calculation. This might improve total performance, especially for a large number
        of small packets traffic type.
      - Fixed incorrect handling of link speed changes.
      - Fixed system halt when a link event comes in during driver initialization.
      - Allow VF to only enable/disable Rx/Tx queues selectively.
      - Fixed intermittent errors in resetting corrupted NVM back to factory defaults.
      - Fixed PSOD when rebooting a host with an adapter in NVM recovery mode.
      - Fixed incorrect calculation of TX descriptors. This may improve performance of transferring large UDP
        packets (more than 8 KB) or heavy TCP traffic when TSO is enabled.
      - Fix for intermittently dropped packets when Geneve offload is enabled.
      - Fixed incorrect rxMissErrors value for uplink stats.


- Driver Version: 1.7.17
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0, 6.5 and 6.7
   New Features Supported:
      - Trusted Virtual Function.
      - Implemented feature to make the firmware update process more fault tolerant.
   New Hardware Supported:
      - Added new devices support for specific OEMs.
   Bug Fixes:
      - Fixed an issue where Wake on LAN does not allow the driver to exit from recovery mode for X722 adapters.
      - Fixed enabling and disabling LLDP on X722 adapters.


- Driver Version: 1.7.11
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0, 6.5 and 6.7
   New Features Supported:
      - Introduced support for firmware recovery mode
   New Hardware Supported:
      - Added new devices support for specific OEMs
   Bug Fixes:
      - Fixed Malicious Driver Detection (MDD) event handling. Previous drivers detected MDD events but did not properly reset
        the adapter. The PF driver also now properly disables an offending VF after it detects 4 MDD events on the same VF.
      - Fixed an issue where SR-IOV was unable to be enabled via Web Client when i40en driver failed to load all PFs.
      - Fixed a PSOD when booting a Supermicro X710DAi with X722 adapters.
      - Fixed link not being detected while toggling Promiscuous Mode on a VF interface which could lead to VM instability and spontaneous rebooting.


- Driver Version: 1.7.5
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0, 6.5 and 6.7
   New Features Supported:
      - Introduced support for firmware recovery mode
   New Hardware Supported:
      - Added new devices support for specific OEMs
   Bug Fixes:
      - Reduced driver's memory footprint
      - Prevent VxLAN port reprogramming failures after changing VxLAN port more than 16 times
      - Prevent dropped packets during link speed change
      - Don't show a link down message on SFP+ module removal
      - Fix for dropped emulated adapter traffic between MFP mode master and slave partitions
      - Fix for the NIC down procedure hanging when heavy traffic is running.
      - Fixed intermittent link flap after running NVM Update
      - Fixed multicast traffic not being received on emulated adapters when a VM with an SR-IOV VF adapter is powered on
      - Fix for SR-IOV VF adapters hanging when PF is brought down
      - Fixed issue with enabling 128 VFs on single port adapters
      - Show correct cable types for AUI, MII and 1000BaseT-Optical link types
      - Fix intermittent PSOD during NVM Update
      - Fix for MDD event and TX hang caused by TSO_MSS option smaller than 64 bytes
      - Fixed issue where adapter could end up in a reset loop after a TX hang event
      - Show an error message when trying to set invalid pause frame parameters
      - Fix for 'Failed to add Geneve cloud filter' message when running heavy Geneve traffic
      - Fix for VF driver hang when GOS requested VF promiscuous mode
      - Fix for intermittent packet loss when link is brought down and up
   Known Issues:
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
         Workaround: Please look at the VMware Knowledge Base 2057874
      - Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
         Workaround: Please look at the VMware Knowledge Base 2147604
      - Cannot set maximum values for VMDQ and SR-IOV VFs on a port at the same time
         Workaround: Reduce the VMDQ or max_vfs value for the port
      - Unable to unload the driver when a VM with a VF adapter is powered on
         Workaround: Shut down all VMs with VF adapters and try unloading the driver again.
      - SR-IOV settings not taking effect in vSphere vServer Web Client when a FVL mezzanine / daughterboard adapter is present
         Workaround: Configure SR-IOV manually using the max_vfs module parameter or remove the mezzanine / daughterboard adapter.
      - Setting Geneve options length larger than 124 bytes causes VLAN-tagged Geneve traffic to drop
         Workaround: Don't set Geneve options length to more than 124 bytes or don't assign a VLAN to Geneve tunnel
      - In RHEL 7.2 an IPv6 connection persists between VF adapters after changing port group VLAN mode from trunk (VGT) to port VLAN (VST)
         Workaround: Upgrade to RHEL 7.3 or newer. This is a Linux kernel bug that causes packets to arrive at the wrong virtual interface.
      - X722 adapter causes PSOD on Supermicro X10DAi
         Workaround: None


- Driver Version: 1.5.8
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0 and 6.5
   Compatible ESXi version: 6.7
   New Features Supported:
   Bug Fixes:
      - Fix duplicated packets under heavy traffic when VMkernel adapter's MAC address is the same as PF's MAC address
      - NIC occasionally stops working right after updating the firmware
   Known Issues:
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
         Workaround: Please look at the VMware Knowledge Base 2057874
      - Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
         Workaround:
         - ESXi 6.5 and 6.7: Please look at the VMware Knowledge Base 2147604


- Driver Version: 1.5.6
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0 and 6.5
   Compatible ESXi version: 6.7
   New Features Supported:
      - Log an error message if an SFP+ module does not meet the thermal requirements
      - Add LLDP driver load param to allow or disallow LLDP frames forwarded to the network stack
        This feature is supported only on Intel X710 and XL710 adapters with FW 6.0.x and later
   Bug Fixes:
      - ESXi crashes when NPAR-EP is enabled with 2 or more devices
      - Fix incorrect PHY type, 0x20, detection for XXV710 adapter
      - Fix VF guest VLAN tagging issue for Windows GOS
      - VF link status is still up after pulling the cable and the PF is down
      - Fix Windows GOS VF connectivity issues
      - VF traffic does not resume after PF reset
      - Unable to set auto negotiation when physical link is removed on X710 10GBASE-T adapter
      - Disabling uplink during heavy traffic causes a network hang
      - Possible TX queue hang during heavy VMDq traffic
      - No connectivity between NPAR master / slave ports from the same PF
      - The driver does not report pause frame statistics
   Known Issues:
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
         Workaround: Please look at the VMware Knowledge Base 2057874
      - Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
         Workaround:
         - ESXi 6.5: Please look at the VMware Knowledge Base 2147604


- Driver Version: 1.4.3
   Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
   Supported ESXi releases: 6.0 and 6.5
   Compatible ESXi version: 6.7
   New Features Supported:
      - None
   Bug Fixes:
      - Fix Link speed changing
      - There is no traffic when vmknic and VF are configured using the same PF port
      - Unable to set Pause Parameters
      - Duplicate Packets across queues when SR-IOV is enabled
      - SFP+ module swap link down
   Known Issues:
      - ESXi crashes when NPAR-EP is enabled
        Workaround: Use only one i40en adapter in the system when NPAR-EP is enabled
      - Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
        Workaround: Please look at the VMware Knowledge Base 2057874
      - Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
        Workaround:
         - ESXi 6.5: Please look at the VMware Knowledge Base 2147604


- Driver Version: 1.2.1
   Hardware Supported: Intel(R) Ethernet Controllers X710 and XL710 family
   Supported ESXi release: 6.5
   Compatible ESXi version: 6.7
   Features Supported:
      - Rx, Tx, TSO checksum offload
      - Netqueue (VMDQ)
      - VxLAN Offload
      - Hardware VLAN filtering
      - Rx Hardware VLAN stripping
      - Tx Hardware VLAN inserting
      - Interrupt moderation
   Bug Fixes:
      - None
   Known Issues:
      - None
