On Tue, 20 Jan 2015, Tantilov, Emil S wrote:
> >What should I try next?
> 
> Try the current stable (3.18.3 as of this writing). If you can still 
> reproduce it - please file a bug at e1000.sf.net and include steps to 
> repro and your kernel config file.

Bug posted as https://sourceforge.net/p/e1000/bugs/450/ but also detailed 
here for the mailing list...

With 3.18.3 I had this crash:

[49356.792102] ------------[ cut here ]------------ 
[49356.792185] kernel BUG at net/core/skbuff.c:2019! 
[49356.792260] invalid opcode: 0000 [#1] SMP 
[49356.792336] Modules linked in: w83627hf_wdt ip_vs_wlc ip_vs_wlib ip_vs 
libcrc32c nf_conntrack bonding e1000e e1000
[49356.793074]  [<ffffffff813c0cc8>] netif_receive_skb_internal+0x28/0x90
[49356.793074]  [<ffffffff813c0de4>] napi_gro_complete+0xa4/0xe0
[49356.793074]  [<ffffffff813c0e85>] napi_gro_flush+0x65/0x90
[49356.793074]  [<ffffffff8131bf94>] ixgbe_poll+0x474/0x7c0
[49356.793074]  [<ffffffff813c0fdb>] net_rx_action+0xfb/0x1a0
[49356.793074]  [<ffffffff8105461b>] __do_softirq+0xdb/0x1f0
[49356.793074]  [<ffffffff8105493d>] irq_exit+0x9d/0xb0
[49356.793074]  [<ffffffff810043a7>] do_IRQ+0x57/0xf0
[49356.793074]  [<ffffffff81526f6a>] common_interrupt+0x6a/0x6a
[49356.793074]  <EOI>
[49356.793074]  [<ffffffff8100b6b6>] ? default_idle+0x6/0x10
[49356.793074]  [<ffffffff8100bf1a>] arch_cpu_idle+0xa/0x10
[49356.793074]  [<ffffffff81081a12>] cpu_startup_entry+0x262/0x290
[49356.793074]  [<ffffffff810a01b3>] ? clockevents_register_device+0xe3/0x140
[49356.793074]  [<ffffffff8102ec0f>] start_secondary+0x13f/0x150
[49356.793074] Code: 44 8b 4d b0 48 8b 45 b8 e9 40 fe ff ff be d2 07 00 00 48 c7 c7 2f 0d 74 81 44 89 5d b8 e8 bd 1b ca ff 44 8b 4d b8 e9 14 ff ff ff <0f> 0b 66 90 55 48 89 e5 48 83 ec 10 4c 8d 45 f0 48 c7 45 f0 f0
[49356.793074] RIP  [<ffffffff813afa7c>] __skb_checksum+0x28c/0x290
[49356.793074]  RSP <ffff88082fcc37e8>
[49356.798627] ---[ end trace c0598b5bc30231bf ]---
[49356.798752] Kernel panic - not syncing: Fatal exception in interrupt
[49356.798892] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 
0xffffffff80000000-0xffffffff9fffffff)
[49356.799092] Rebooting in 10 seconds..

__skb_checksum+0x28c/0x290 (skbuff.c line 2019):

        skb_walk_frags(skb, frag_iter) {
                int end;

                WARN_ON(start > offset + len);

                end = start + frag_iter->len;
                if ((copy = end - offset) > 0) {
                        __wsum csum2;
                        if (copy > len)
                                copy = len;
                        csum2 = __skb_checksum(frag_iter, offset - start,
                                               copy, 0, ops);
                        csum = ops->combine(csum, csum2, pos, copy);
                        if ((len -= copy) == 0)
                                return csum;
                        offset += copy;
                        pos    += copy;
                }
                start = end;
        }
>>>>    BUG_ON(len);

I put my config up at:

  https://www.caputo.com/foss/config_3.18.3_20150121.txt

This server is a router with a HotLava Systems Tambora 64G6 (part 
#6ST2830A2), a PCIe 2.0 (5 GT/s) x8, 6-port, Intel 82599ES-based NIC.

Four of the 10G ports are bonded and trunked.  Packets are received on 
one VLAN and forwarded to another VLAN on the same bond (bond1).  Total 
utilization is under 5 Gbps.
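
For reference, that topology is roughly the following (interface names, bond mode, and VLAN IDs here are illustrative assumptions, not taken from the affected machine):

```shell
# Illustrative only -- names/mode/IDs are hypothetical.
# Bond four 10G ports into bond1:
ip link add bond1 type bond mode 802.3ad
for i in eth0 eth1 eth2 eth3; do
    ip link set "$i" down
    ip link set "$i" master bond1
done
ip link set bond1 up

# Two VLANs on the same bond; the router forwards between them:
ip link add link bond1 name bond1.100 type vlan id 100
ip link add link bond1 name bond1.200 type vlan id 200
ip link set bond1.100 up
ip link set bond1.200 up
sysctl -w net.ipv4.ip_forward=1
```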

Thanks,
Chris

_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
