What about the HW info (lspci -vvv for the NIC, ethtool -i eth1, and the relevant /proc/interrupts lines)?  We need that before anything can be done to look into this.

BTW, this looks like interrupts are not being delivered to the driver: in the log quoted below the driver keeps posting descriptors (TDT/next_to_use have advanced to 1), but nothing is ever cleaned (TDH/next_to_clean stay at 0) and time_stamp never changes between dumps.
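
For illustration only, here is a minimal userspace sketch of the kind of check that produces the "Detected Tx Unit Hang" report quoted below. It is not the actual e1000 code and all the names are made up for the example; the point is the pattern: work is outstanding, nothing has completed, and the oldest buffer's timestamp has gone stale relative to jiffies, which is exactly what you get when the Tx completion interrupt never arrives.

/*
 * Hypothetical, simplified illustration (NOT the real e1000 source) of the
 * sort of Tx-hang check behind the "Detected Tx Unit Hang" messages below:
 * software queued a descriptor (TDT/next_to_use moved to 1), hardware never
 * advanced TDH, and no clean-up ever ran, so the oldest buffer's timestamp
 * goes stale.  All names here are invented for the example.
 */
#include <stdbool.h>
#include <stdio.h>

struct tx_state {
    unsigned int tdh;           /* hardware head: last descriptor fetched */
    unsigned int tdt;           /* software tail: last descriptor posted  */
    unsigned int next_to_use;   /* next slot software will fill           */
    unsigned int next_to_clean; /* next slot software expects completed   */
    unsigned long time_stamp;   /* tick count when oldest buffer was sent */
};

/*
 * Report a hang when work is still outstanding but nothing has completed
 * within 'timeout' ticks -- exactly what happens when the Tx completion
 * interrupt is never delivered.
 */
static bool tx_hung(const struct tx_state *tx, unsigned long now,
                    unsigned long timeout)
{
    bool work_pending = tx->next_to_use != tx->next_to_clean;
    bool stale        = (now - tx->time_stamp) > timeout;

    return work_pending && stale;
}

int main(void)
{
    /* Values taken from the first dump in the log quoted below. */
    struct tx_state tx = {
        .tdh = 0, .tdt = 1,
        .next_to_use = 1, .next_to_clean = 0,
        .time_stamp = 0xffffd3e0,
    };
    unsigned long jiffies = 0xffffd4db; /* from the same dump */
    unsigned long timeout = 100;        /* arbitrary threshold for the example */

    if (tx_hung(&tx, jiffies, timeout))
        printf("Detected Tx Unit Hang: TDH=<%u> TDT=<%u>\n", tx.tdh, tx.tdt);
    return 0;
}

Assuming that pattern is what is happening under VMware's emulation, the real question is why the interrupt stops arriving after the disconnect/reconnect, which is why the interrupt type in use in the guest (MSI vs. legacy INTx) matters for reproducing this.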


Cheers,
John
-----------------------------------------------------------
"...that your people will judge you on what you can build, not what you 
destroy.", B. Obama, 2009 
 

>-----Original Message-----
>From: Stephen Hemminger [mailto:shemmin...@vyatta.com] 
>Sent: Monday, March 02, 2009 5:28 PM
>To: Kirsher, Jeffrey T
>Cc: e1000-devel@lists.sourceforge.net; Stig Thormodsrud
>Subject: Re: [E1000-devel] e1000 driver and vmware
>
>On Mon, 2 Mar 2009 16:48:28 -0800
>Jeff Kirsher <jeffrey.t.kirs...@intel.com> wrote:
>
>> On Sun, Mar 1, 2009 at 8:15 PM, Stephen Hemminger <shemmin...@vyatta.com> wrote:
>> > We are seeing problems with the e1000 driver when running over VMware's emulated
>> > e1000 layer. These only started happening with 2.6.28.
>> >
>> >> If I disconnect an interface using VMware's Disconnect, the link appears
>> >> to have come up. If I then try to ping via the interface, I see
>> >> these messages:
>> >> >
>> >> > [  264.820281] ------------[ cut here ]------------
>> >> > [  249.727468] e1000: eth1: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
>> >> > [  255.821378] e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>> >> > [  255.821388]   Tx Queue             <0>
>> >> > [  255.821390]   TDH                  <0>
>> >> > [  255.821392]   TDT                  <1>
>> >> > [  255.821393]   next_to_use          <1>
>> >> > [  255.821395]   next_to_clean        <0>
>> >> > [  255.821401] buffer_info[next_to_clean]
>> >> > [  255.821403]   time_stamp           <ffffd3e0>
>> >> > [  255.821405]   next_to_watch        <0>
>> >> > [  255.821406]   jiffies              <ffffd4db>
>> >> > [  255.821408]   next_to_watch.status <0>
>> >> > [  257.822189] e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>> >> > [  257.822195]   Tx Queue             <0>
>> >> > [  257.822196]   TDH                  <0>
>> >> > [  257.822198]   TDT                  <1>
>> >> > [  257.822200]   next_to_use          <1>
>> >> > [  257.822201]   next_to_clean        <0>
>> >> > [  257.822203] buffer_info[next_to_clean]
>> >> > [  257.822204]   time_stamp           <ffffd3e0>
>> >> > [  257.822206]   next_to_watch        <0>
>> >> > [  257.822207]   jiffies              <ffffd6cf>
>> >> > [  257.822209]   next_to_watch.status <0>
>> >> > [  259.821038] e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>> >> > [  259.821043]   Tx Queue             <0>
>> >> > [  259.821045]   TDH                  <0>
>> >> > [  259.821046]   TDT                  <1>
>> >> > [  259.821048]   next_to_use          <1>
>> >> > [  259.821049]   next_to_clean        <0>
>> >> > [  259.821051] buffer_info[next_to_clean]
>> >> > [  259.821052]   time_stamp           <ffffd3e0>
>> >> > [  259.821054]   next_to_watch        <0>
>> >> > [  259.821055]   jiffies              <ffffd8c3>
>> >> > [  259.821057]   next_to_watch.status <0>
>> >> > [  261.821904] e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>> >> > [  261.821908]   Tx Queue             <0>
>> >> > [  261.821910]   TDH                  <0>
>> >> > [  261.821912]   TDT                  <1>
>> >> > [  261.821913]   next_to_use          <1>
>> >> > [  261.821915]   next_to_clean        <0>
>> >> > [  261.821916] buffer_info[next_to_clean]
>> >> > [  261.821918]   time_stamp           <ffffd3e0>
>> >> > [  261.821920]   next_to_watch        <0>
>> >> > [  261.821921]   jiffies              <ffffdab7>
>> >> > [  261.821923]   next_to_watch.status <0>
>> >> > [  263.820721] e1000: eth1: e1000_clean_tx_irq: Detected Tx Unit Hang
>> >> > [  263.820726]   Tx Queue             <0>
>> >> > [  263.820728]   TDH                  <0>
>> >> > [  263.820730]   TDT                  <1>
>> >> > [  263.820731]   next_to_use          <1>
>> >> > [  263.820733]   next_to_clean        <0>
>> >> > [  263.820734] buffer_info[next_to_clean]
>> >> > [  263.820736]   time_stamp           <ffffd3e0>
>> >> > [  263.820738]   next_to_watch        <0>
>> >> > [  263.820739]   jiffies              <ffffdcab>
>> >> > [  263.820741]   next_to_watch.status <0>
>> >> > [  264.820281] ------------[ cut here ]------------
>> >
>> >
>> 
>> Stephen,
>> 
>> What version of VMware are you using?  Also, can you provide the
>> hardware info and the type of interrupts you are using?  We will try to
>> reproduce the issue here with the information you provide.
>> 
>
>From Stig:
>On Linux I've seen it with VMware Server 1.0.3.  On Windows I've seen it
>with VMware Server 1.0.5.
>