Re: [Users] Stack trace caused by FreeBSD client

2014-02-24 Thread Ronen Hod

On 02/23/2014 10:13 PM, Nir Soffer wrote:

- Original Message -

From: Johan Kooijman m...@johankooijman.com
To: users users@ovirt.org
Sent: Sunday, February 23, 2014 8:22:41 PM
Subject: [Users] Stack trace caused by FreeBSD client

Hi all,

Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with
virtio drivers, both disk and net.

The VM works fine, but when I connect over SSH to the VM, I see this stack
trace in messages on the node:

This warning may be interesting to qemu/kvm/kernel developers, ccing Ronen.


Probably nobody has bothered to productize FreeBSD.
You can try using e1000 instead of virtio.

Ronen.
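
Switching the guest NIC model, as suggested above, happens at the libvirt layer underneath oVirt. A hedged sketch of what the change looks like in the domain XML (the bridge name `ovirtmgmt` is a placeholder; in oVirt you would normally change the interface type from the VM's network interface properties in the web admin UI rather than editing XML by hand):

```xml
<!-- Hypothetical libvirt domain XML fragment: guest NIC switched
     from virtio to e1000. Only the <model> element changes. -->
<interface type='bridge'>
  <source bridge='ovirtmgmt'/>
  <model type='e1000'/>   <!-- was: <model type='virtio'/> -->
</interface>
```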




[snip: full stack trace and host details, quoted verbatim from the original message below]

___
Users mailing list
Users@ovirt.org
http

Re: [Users] Stack trace caused by FreeBSD client

2014-02-24 Thread Nir Soffer
- Original Message -
 From: Ronen Hod r...@redhat.com
 To: Nir Soffer nsof...@redhat.com, Johan Kooijman 
 m...@johankooijman.com
 Cc: users users@ovirt.org
 Sent: Monday, February 24, 2014 5:27:06 PM
 Subject: Re: [Users] Stack trace caused by FreeBSD client
 
 On 02/23/2014 10:13 PM, Nir Soffer wrote:
  - Original Message -
  From: Johan Kooijman m...@johankooijman.com
  To: users users@ovirt.org
  Sent: Sunday, February 23, 2014 8:22:41 PM
  Subject: [Users] Stack trace caused by FreeBSD client
 
  Interesting thing I found out this afternoon. I have a FreeBSD 10 guest
  with
  virtio drivers, both disk and net.
 
  The VM works fine, but when I connect over SSH to the VM, I see this stack
  trace in messages on the node:
  This warning may be interesting to qemu/kvm/kernel developers, ccing Ronen.
 
 Probably nobody has bothered to productize FreeBSD.
 You can try using e1000 instead of virtio.

You may find this useful:
http://www.linux-kvm.org/page/Guest_Support_Status#FreeBSD

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Stack trace caused by FreeBSD client

2014-02-23 Thread Johan Kooijman
Hi all,

Interesting thing I found out this afternoon. I have a FreeBSD 10 guest
with virtio drivers, both disk and net.

The VM works fine, but when I connect over SSH to the VM, I see this stack
trace in messages on the node:

Feb 23 19:19:42 hv3 kernel: [ cut here ]
Feb 23 19:19:42 hv3 kernel: WARNING: at net/core/dev.c:1907
skb_warn_bad_offload+0xc2/0xf0() (Tainted: GW  ---   )
Feb 23 19:19:42 hv3 kernel: Hardware name: X9DR3-F
Feb 23 19:19:42 hv3 kernel: igb: caps=(0x12114bb3, 0x0) len=5686
data_len=5620 ip_summed=0
Feb 23 19:19:42 hv3 kernel: Modules linked in: ebt_arp nfs lockd fscache
auth_rpcgss nfs_acl sunrpc bonding 8021q garp ebtable_nat ebtables bridge
stp llc xt_physdev ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_multiport
iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6
xt_state nf_conntrack ip6table_filter ip6_tables ipv6 dm_round_robin
dm_multipath vhost_net macvtap macvlan tun kvm_intel kvm iTCO_wdt
iTCO_vendor_support sg ixgbe mdio sb_edac edac_core lpc_ich mfd_core
i2c_i801 ioatdma igb dca i2c_algo_bit i2c_core ptp pps_core ext4 jbd2
mbcache sd_mod crc_t10dif 3w_sas ahci isci libsas scsi_transport_sas
dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Feb 23 19:19:42 hv3 kernel: Pid: 15280, comm: vhost-15276 Tainted: G
 W  ---2.6.32-431.5.1.el6.x86_64 #1
Feb 23 19:19:42 hv3 kernel: Call Trace:
Feb 23 19:19:42 hv3 kernel: IRQ  [81071e27] ?
warn_slowpath_common+0x87/0xc0
Feb 23 19:19:42 hv3 kernel: [81071f16] ?
warn_slowpath_fmt+0x46/0x50
Feb 23 19:19:42 hv3 kernel: [a016c862] ?
igb_get_drvinfo+0x82/0xe0 [igb]
Feb 23 19:19:42 hv3 kernel: [8145b1d2] ?
skb_warn_bad_offload+0xc2/0xf0
Feb 23 19:19:42 hv3 kernel: [814602c1] ?
__skb_gso_segment+0x71/0xc0
Feb 23 19:19:42 hv3 kernel: [81460323] ? skb_gso_segment+0x13/0x20
Feb 23 19:19:42 hv3 kernel: [814603cb] ?
dev_hard_start_xmit+0x9b/0x480
Feb 23 19:19:42 hv3 kernel: [8147bf5a] ?
sch_direct_xmit+0x15a/0x1c0
Feb 23 19:19:42 hv3 kernel: [81460a58] ?
dev_queue_xmit+0x228/0x320
Feb 23 19:19:42 hv3 kernel: [a035a898] ?
br_dev_queue_push_xmit+0x88/0xc0 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035a928] ?
br_forward_finish+0x58/0x60 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035a9da] ? __br_forward+0xaa/0xd0
[bridge]
Feb 23 19:19:42 hv3 kernel: [814897b6] ? nf_hook_slow+0x76/0x120
Feb 23 19:19:42 hv3 kernel: [a035aa5d] ? br_forward+0x5d/0x70
[bridge]
Feb 23 19:19:42 hv3 kernel: [a035ba6b] ?
br_handle_frame_finish+0x17b/0x2a0 [bridge]
Feb 23 19:19:42 hv3 kernel: [a035bd3a] ?
br_handle_frame+0x1aa/0x250 [bridge]
Feb 23 19:19:42 hv3 kernel: [8145b7c9] ?
__netif_receive_skb+0x529/0x750
Feb 23 19:19:42 hv3 kernel: [8145ba8a] ?
process_backlog+0x9a/0x100
Feb 23 19:19:42 hv3 kernel: [81460d43] ? net_rx_action+0x103/0x2f0
Feb 23 19:19:42 hv3 kernel: [8107a8e1] ? __do_softirq+0xc1/0x1e0
Feb 23 19:19:42 hv3 kernel: [8100c30c] ? call_softirq+0x1c/0x30
Feb 23 19:19:42 hv3 kernel: EOI  [8100fa75] ?
do_softirq+0x65/0xa0
Feb 23 19:19:42 hv3 kernel: [814611c8] ? netif_rx_ni+0x28/0x30
Feb 23 19:19:42 hv3 kernel: [a01a0749] ? tun_sendmsg+0x229/0x4ec
[tun]
Feb 23 19:19:42 hv3 kernel: [a027bcf5] ? handle_tx+0x275/0x5e0
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a027c095] ? handle_tx_kick+0x15/0x20
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a027955c] ? vhost_worker+0xbc/0x140
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [a02794a0] ? vhost_worker+0x0/0x140
[vhost_net]
Feb 23 19:19:42 hv3 kernel: [8109aee6] ? kthread+0x96/0xa0
Feb 23 19:19:42 hv3 kernel: [8100c20a] ? child_rip+0xa/0x20
Feb 23 19:19:42 hv3 kernel: [8109ae50] ? kthread+0x0/0xa0
Feb 23 19:19:42 hv3 kernel: [8100c200] ? child_rip+0x0/0x20
Feb 23 19:19:42 hv3 kernel: ---[ end trace e93142595d6ecfc7 ]---

This is 100% reproducible, every time. The login itself works just fine.
Some more info:

[root@hv3 ~]# uname -a
Linux hv3.ovirt.gs.cloud.lan 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12
00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@hv3 ~]# rpm -qa | grep vdsm
vdsm-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
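
[Editorial note: `skb_warn_bad_offload` typically fires when a GSO/checksum-offloaded packet from the guest reaches a device that cannot handle it. A common mitigation for this class of warning, not confirmed in this thread, is to disable segmentation offloads on the interfaces in the forwarding path. Interface names below (`eth0`, `vnet0`) are placeholders; substitute the host's actual NIC and the guest's tap device:]

```shell
# Inspect current offload settings on the physical NIC
ethtool -k eth0

# Disable GSO/TSO on the physical NIC carrying the bridge traffic
ethtool -K eth0 gso off tso off

# And/or on the guest's tap device attached to the bridge
ethtool -K vnet0 gso off tso off
```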

-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman

E m...@johankooijman.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Stack trace caused by FreeBSD client

2014-02-23 Thread Nir Soffer
- Original Message -
 From: Johan Kooijman m...@johankooijman.com
 To: users users@ovirt.org
 Sent: Sunday, February 23, 2014 8:22:41 PM
 Subject: [Users] Stack trace caused by FreeBSD client
 
 Hi all,
 
 Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with
 virtio drivers, both disk and net.
 
 The VM works fine, but when I connect over SSH to the VM, I see this stack
 trace in messages on the node:

This warning may be interesting to qemu/kvm/kernel developers, ccing Ronen.

 
 [snip: full stack trace and host details, identical to the original message above]
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users