VMwoes: Purple Screen of Death E1000PollRxRing

VMwoes Purple Screen of Death: a VMware ESXi 5.5 host experiences a purple diagnostic screen referencing E1000PollRxRing and E1000DevRx

VMware Purple Screen of Death

VMware ESXi 5.5.0 [Releasebuild-1331020 x86_64] #PF Exception 14 in world 264638:vmm1:AGB-Dub IP 0x418039010c57 addr 0x0
PTEs:0x0;
cr0=0x80050031 cr2=0x0 cr3=0xa5a4f3000 cr4=0x42668
frame=0x4123a6f9cf30 ip=0x418039010c57 err=9 rflags=0x10206
rax=0x0 rbx=0x51 rcx=0x18
rdx=0x2 rbp=0x4123a6f9d3d0 rsi=0x1
rdi=0x4108a8348d40 r8=0x1 r9=0x1
r10=0x41122413a080 r11=0x4 r12=0x41001651cef4
r13=0x1 r14=0x4123a6f9d2e0 r15=0x4123a6f9d334
*PCPU24:264638/vmm1:AGB-DubalLive
PCPU 0: UVVVVUVVVVVVVVVVVVVVVVVVVVVVVVS
Code start: 0x418038e00000 VMK uptime: 60:02:35:05.115
0x4123a6f9d3d0:[0x418039010c57]E1000PollRxRing@vmkernel#nover+0xb73 stack: 0x8
0x4123a6f9d440:[0x418039013bb5]E1000DevRx@vmkernel#nover+0x3a9 stack: 0x4123a6f9d658
0x4123a6f9d4e0:[0x418038f92164]IOChain_Resume@vmkernel#nover+0x174 stack: 0x0
0x4123a6f9d530:[0x418038f79e22]PortOutput@vmkernel#nover+0x136 stack: 0x4108ff01f780
0x4123a6f9d590:[0x41803952ff58]EtherswitchForwardLeafPortsQuick@#+0x4c stack: 0x183c21
0x4123a6f9d7b0:[0x418039530f51]EtherswitchPortDispatch@#+0xe25 stack: 0x418000000015
0x4123a6f9d820:[0x418038f7a7d2]Port_InputResume@vmkernel#nover+0x192 stack: 0x412fc57f4a80
0x4123a6f9d870:[0x418038f7ba39]Port_Input_Committed@vmkernel#nover+0x25 stack: 0x0
0x4123a6f9d8e0:[0x41803901763a]E1000DevAsyncTx@vmkernel#nover+0x112 stack: 0x4123a6f9da60
0x4123a6f9d950:[0x418038fadd70]NetWorldletPerVMCB@vmkernel#nover+0x218 stack: 0x410800000000
0x4123a6f9dab8:[0x418038eeae77]WorldletProcessQueue@vmkernel#nover+0xcf stack: 0x0
0x4123a6f9daf0:[0x418038eeb93c]WorldletBHHandler@vmkernel#nover+0x54 stack: 0x0
0x4123a6f9db80:[0x418038e2e94f]BH_DrainAndDisableInterrupts@vmkernel#nover+0xf3 stack: 0x2ff889001
0x4123a6f9dbc0:[0x418038e63e03]IDT_IntrHandler@vmkernel#nover+0x1af stack: 0x4123a6f9dce8
0x4123a6f9dbd0:[0x418038ef1064]gate_entry@vmkernel#nover+0x64 stack: 0x0
0x4123a6f9dce8:[0x4180391a32d3]Power_HaltPCPU@vmkernel#nover+0x237 stack: 0x418086e64100
0x4123a6f9dd58:[0x41803904e859]CpuSchedIdleLoopInt@vmkernel#nover+0x4bd stack: 0x4123a6f9dec8
0x4123a6f9deb8:[0x418039054938]CpuSchedDispatch@vmkernel#nover+0x1630 stack: 0x4123a6f9df20
0x4123a6f9df28:[0x418039055c65]CpuSchedHalt@vmkernel#nover+0x245 stack: 0xffffffff00000001
0x4123a6f9df98:[0x4180390561cb]CpuSched_VcpuHalt@vmkernel#nover+0x197 stack: 0x410000008000
0x4123a6f9dfe8:[0x418038ecde30]VMMVMKCall_Call@vmkernel#nover+0x48c stack: 0x0
0x418038ecd484:[0xfffffffffc223baa]vmk_symbol_NFSVolume_GetLocalPath@com.vmware.nfsmod#1.0.0.0+0
base fs=0x0 gs=0x418046000000 Kgs=0x0
Coredump to disk. Slot 1 of 1.
VASpace (00/12) DiskDump: Partial Dump: Out of space o=0x63ff800 l=0x1000
Finalized dump header (12/12) FileDump: Successful.
Debugger waiting(world 264638) -- no port for remote debugger. "Escape" for local debugger.

Apparently this is a known issue with the particular release of the VMware ESXi 5.5 hypervisor we use on just one of our host servers. It has since been patched, but we went with the workaround instead, as we didn't have a huge number of virtual machines to modify.

The workaround is to replace each virtual machine's E1000 network adapters with VMXNET3 adapters.
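The post doesn't say how the swap was done; as a rough sketch, it can be scripted with VMware PowerCLI. Everything here is illustrative: the VM name filter is a placeholder, you must already be connected to vCenter with Connect-VIServer, and the VMs should be powered off first (the type change presents a new virtual NIC to the guest, so static IP settings may need to be reapplied and VMware Tools must supply the vmxnet3 driver).

```powershell
# Hedged PowerCLI sketch -- assumes an existing Connect-VIServer session
# and that the affected VMs are already powered off.
Get-VM |
    Get-NetworkAdapter |
    Where-Object { $_.Type -eq 'e1000' } |
    Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
```

After powering the VMs back on, it's worth confirming inside each guest that the new vmxnet3 adapter picked up the correct network configuration.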

There is further information on Running-system.com regarding this bug: a Purple Screen of Death caused by E1000 adapters and RSS (Receive Side Scaling).
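Since the crash is tied to RSS on E1000 adapters, a stop-gap that has been reported for this issue is disabling RSS inside the affected Windows guests rather than swapping adapters. A hedged example, run from an elevated prompt in the guest (verify against your guest OS version before relying on it):

```
:: Disable Receive Side Scaling globally in a Windows guest (stop-gap only)
netsh int tcp set global rss=disabled
```

Replacing the E1000 adapters with VMXNET3, as above, remains the cleaner long-term fix.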