Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-31 Thread Matt Bailey
Performance issues with networking and veth devices can often be linked
to the implementation of hardware acceleration in the kernel drivers for
various NICs.  I found this through a lot of toggling of the acceleration
tunables with ethtool. I'm sure there's a deeper issue in the drivers, but
it was a trial-and-error discovery on my part.
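
If you want to track down which offload is responsible, bisecting with
ethtool is straightforward (the exact set of flags it reports depends on
your driver and kernel, so treat this as a sketch):

# show the current offload settings on the bridge and the underlying NIC
/usr/sbin/ethtool -k br0
/usr/sbin/ethtool -k eth0

# toggle them one at a time, re-running the transfer test after each
/usr/sbin/ethtool -K br0 sg off    # scatter-gather
/usr/sbin/ethtool -K br0 tso off   # TCP segmentation offload
/usr/sbin/ethtool -K br0 gso off   # generic segmentation offload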

--
Matt Bailey
303.871.4923
Senior Software Specialist
University of Denver, UTS
http://du.edu
http://system42.net



On Fri, May 28, 2010 at 3:12 AM, Daniel Lezcano daniel.lezc...@free.fr wrote:
 On 05/28/2010 02:58 AM, Toby Corkindale wrote:

 On 28/05/10 05:55, Matt Bailey wrote:

 /usr/sbin/ethtool -K br0 sg off
 /usr/sbin/ethtool -K br0 tso off

 Might fix your problem, YMMV; this worked for me.

 Bam! Problem fixed.
 All I needed was the 'sg' option - tso wasn't enabled anyway.

 Now getting a healthy 15-16 mbyte/sec.

 Great !

 Thanks for that..

 Is this a bug in a driver somewhere that I should report, or just something
 one always needs to be aware of with LXC? (and thus should go in a FAQ)

 The truth is, this is the first time I've seen this problem solved by this
 trick. I suppose it has something to do with the capabilities of your NIC
 and the bridge inheriting them. Dunno ...

 Matt,

 How did you find that? Is it a problem spotted with other virtualization
 solutions (Xen, VMware, QEMU, OpenVZ, ...)? Do you have a pointer describing
 the problem/solution? Then we can add a FAQ entry with a good
 description/diagnosis of the problem.

 Thanks
  -- Daniel



--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-28 Thread Daniel Lezcano
On 05/28/2010 02:58 AM, Toby Corkindale wrote:
 On 28/05/10 05:55, Matt Bailey wrote:
 /usr/sbin/ethtool -K br0 sg off
 /usr/sbin/ethtool -K br0 tso off

 Might fix your problem, YMMV; this worked for me.

 Bam! Problem fixed.
 All I needed was the 'sg' option - tso wasn't enabled anyway.

 Now getting a healthy 15-16 mbyte/sec.

Great !

 Thanks for that..

 Is this a bug in a driver somewhere that I should report, or just something
 one always needs to be aware of with LXC? (and thus should go in a FAQ)

The truth is, this is the first time I've seen this problem solved by this
trick. I suppose it has something to do with the capabilities of your NIC
and the bridge inheriting them. Dunno ...
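
An easy way to check that would be to compare the offload flags the bridge
reports with those of the physical NIC behind it (assuming the physical
interface on the host is eth0):

/usr/sbin/ethtool -k br0
/usr/sbin/ethtool -k eth0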

Matt,

How did you find that? Is it a problem spotted with other virtualization
solutions (Xen, VMware, QEMU, OpenVZ, ...)? Do you have a pointer
describing the problem/solution? Then we can add a FAQ entry with a good
description/diagnosis of the problem.

Thanks
   -- Daniel


--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread atp
Hi,

Please send the output of:

ifconfig br0    (from the host)
ifconfig eth0   (from the container)

along with the version of lxc you're using. Do you have anything special
in /etc/sysctl.conf?

On a completely blank container with no tuning, I get this with scp:

host -> container   squashfs.img  100%  639MB  33.6MB/s   00:19
container -> host   squashfs.img  100%  639MB  29.0MB/s   00:22

Both tests were run from inside the container. The limiting resource here
is CPU for the encryption.

I'm on kernel 2.6.34/fc12 for this. 

Andy

On Thu, 2010-05-27 at 17:51 +1000, Toby Corkindale wrote:

 Hi,
 I have several LXC containers, running on Ubuntu Lucid's current kernel, 
 2.6.32-22.
 
 Network performance from the containers to the LAN is fine.
 However copying files (scp/rsync/whatever) between the containers and 
 their host is extremely slow - a steady 32 kb/sec.
 
 I am using bridged networking, with an mtu of 1500 on the bridge and all 
 the related devices (virtual and real).
 I have tried setting 'setfd' to 0 on br0.
 
 Config includes:
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.ipv4 = 0.0.0.0
 lxc.network.name = eth0
 
 
 I couldn't spot anything quite like this in the mailing list archives,
 although I did see this from February:
 http://www.mail-archive.com/lxc-users@lists.sourceforge.net/msg00071.html
 
 but it didn't seem to be resolved.
 
 
 Any thoughts?
 Thanks,
 Toby
 
 --
 
 ___
 Lxc-users mailing list
 Lxc-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/lxc-users

Andrew Phillips
Head of Systems

www.lmax.com 

Office: +44 203 1922509
Mobile: +44 (0)7595 242 900

LMAX | Level 2, Yellow Building | 1 Nicholas Road | London | W11 4AN




--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Toby Corkindale
On 27/05/10 18:06, atp wrote:
As requested:


 ifconfig br0 from the host

br0   Link encap:Ethernet  HWaddr 00:1e:37:4d:8c:d8
   inet addr:192.168.1.206  Bcast:192.168.1.255  Mask:255.255.255.0
   inet6 addr: fe80::21e:37ff:fe4d:8cd8/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:3867723 errors:0 dropped:0 overruns:0 frame:0
   TX packets:1849343 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:0
   RX bytes:3451303555 (3.4 GB)  TX bytes:382610461 (382.6 MB)


 ifconfig eth0 from the container

eth0  Link encap:Ethernet  HWaddr 36:d1:4f:d9:51:59
   inet addr:192.168.1.88  Bcast:192.168.1.255  Mask:255.255.255.0
   inet6 addr: fe80::34d1:4fff:fed9:5159/64 Scope:Link
   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   RX packets:1416 errors:0 dropped:0 overruns:0 frame:0
   TX packets:495 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:1000
   RX bytes:1020033 (1.0 MB)  TX bytes:37512 (37.5 KB)



 and the version of lxc you're using.

It's close to the git head, master branch.
Last commit was 0093bb8ced5784468daf8e66783e6be3782e8fea on May 18th.
(The version that originally shipped with ubuntu was giving me errors 
about not being able to pivot_root)

 Do you have anything special with
 the /etc/sysctl.conf?

I think these came with the system; are they likely to be problematic?

net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.tcp_syncookies=1
vm.mmap_min_addr = 65536
fs.inotify.max_user_watches = 524288
kernel.shmmax = 38821888



 On a completely blank container with no tuning, I get with scp;

 host -> container  squashfs.img 100% 639MB 33.6MB/s 00:19
 container -> host  squashfs.img 100% 639MB 29.0MB/s 00:22

 Both tests inside the container. The limiting resource here is cpu for the
 encryption.

Mm, yeah, I'd be waiting all week to copy an equivalently sized file
like that. Although if I copy it to another host on the network, then
back again, it's all fine :/

 I'm on kernel 2.6.34/fc12 for this.

I'm on 2.6.32-22/ubuntu 10.04


thanks,
Toby

--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Daniel Lezcano
On 05/27/2010 10:21 AM, Toby Corkindale wrote:
 On 27/05/10 18:06, atp wrote:
 As requested:



 ifconfig br0 from the host
  
 br0   Link encap:Ethernet  HWaddr 00:1e:37:4d:8c:d8
 inet addr:192.168.1.206  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::21e:37ff:fe4d:8cd8/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:3867723 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1849343 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:3451303555 (3.4 GB)  TX bytes:382610461 (382.6 MB)


Can you give the routes of the host please ?

 ifconfig eth0 from the container
  
 eth0  Link encap:Ethernet  HWaddr 36:d1:4f:d9:51:59
 inet addr:192.168.1.88  Bcast:192.168.1.255  Mask:255.255.255.0
 inet6 addr: fe80::34d1:4fff:fed9:5159/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:1416 errors:0 dropped:0 overruns:0 frame:0
 TX packets:495 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:1020033 (1.0 MB)  TX bytes:37512 (37.5 KB)




 and the version of lxc you're using.
  
 It's close to the git head, master branch.
 Last commit was 0093bb8ced5784468daf8e66783e6be3782e8fea on May 18th.
 (The version that originally shipped with ubuntu was giving me errors
 about not being able to pivot_root)


 Do you have anything special with
 the /etc/sysctl.conf?
  
 I think these came with the system, are they likely to be problematic?

 net.ipv4.conf.default.rp_filter=1
 net.ipv4.conf.all.rp_filter=1
 net.ipv4.tcp_syncookies=1
 vm.mmap_min_addr = 65536
 fs.inotify.max_user_watches = 524288
 kernel.shmmax = 38821888




 On a completely blank container with no tuning, I get with scp;

 host -> container  squashfs.img 100% 639MB 33.6MB/s 00:19
 container -> host  squashfs.img 100% 639MB 29.0MB/s 00:22

 Both tests inside the container. The limiting resource here is cpu for the
 encryption.
  
 mm, yeah, I'd be waiting all week to copy an equivalently sized file
 like that. Although if i copy it to another host on the network, then
 back again, it's all fine :/


 I'm on kernel 2.6.34/fc12 for this.
  
 I'm on 2.6.32-22/ubuntu 10.04


 thanks,
 Toby

 --

 ___
 Lxc-users mailing list
 Lxc-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/lxc-users




--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread atp
Toby
 
Just FYI in case you were unaware - it seems one of your MXs is black
holed.

I tried to email you direct, but messagelabs said;
toby.corkind...@strategicdata.com.au:
74.125.148.10 does not like recipient.
Remote host said: 554 5.7.1 Service unavailable; Client host
[74.125.149.113] blocked using sbl-xbl.spamhaus.org;
http://www.spamhaus.org/query/bl?ip=74.125.149.113

reply below;
---
A quick look - it all looks reasonably normal. txqueuelen for br0 is 0,
and should probably be 1000 or more.
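
You can bump it on the fly while testing; either of these should work
(standard commands, though I haven't tried them on your exact setup):

ifconfig br0 txqueuelen 1000
ip link set dev br0 txqueuelen 1000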

The number of packets on eth0 seems very small - 1416 RX and 495 TX.

I take it you've checked for things like ARP poisoning - two of the
containers having the same MAC address?

What does the flow of packets look like with tcpdump from the container?
Are you seeing resets, packet duplicates, retransmissions?
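
Something along these lines, run inside the container, should show it
(the host IP is taken from your earlier ifconfig output - adjust as
needed):

tcpdump -ni eth0 host 192.168.1.206 and tcp
# resets only:
tcpdump -ni eth0 'host 192.168.1.206 and tcp[tcpflags] & tcp-rst != 0'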

Andy



--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Matt Bailey
/usr/sbin/ethtool -K br0 sg off
/usr/sbin/ethtool -K br0 tso off

Might fix your problem, YMMV; this worked for me.
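
If it does help and you want the setting to survive reboots, post-up
lines in the br0 stanza of /etc/network/interfaces should do it (assuming
the bridge is configured there - untested sketch):

    post-up /usr/sbin/ethtool -K br0 sg off
    post-up /usr/sbin/ethtool -K br0 tso off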

--
Matt Bailey
303.871.4923
Senior Software Specialist
University of Denver, UTS
http://du.edu
http://system42.net



On Thu, May 27, 2010 at 4:48 AM, atp andrew.phill...@lmax.com wrote:
 Toby

 Just FYI in case you were unaware - it seems one of your MXs is black
 holed.

 I tried to email you direct, but messagelabs said;
 toby.corkind...@strategicdata.com.au:
 74.125.148.10 does not like recipient.
 Remote host said: 554 5.7.1 Service unavailable; Client host
 [74.125.149.113] blocked using sbl-xbl.spamhaus.org;
 http://www.spamhaus.org/query/bl?ip=74.125.149.113

 reply below;
 ---
 A quick look, all looks reasonably normal, txqueuelen for br0 is 0, and
 probably should be more than 1000.

 The number of packets on eth0 seems very small - 1416 and 495. rx/tx

 I take it you've checked for things like arp poisoning - two of the
 containers having the same mac address?

 What does the flow of packets with tcpdump look like from the container?
 Are you seeing resets, packet duplicates, retransmissions?

 Andy



 --

 ___
 Lxc-users mailing list
 Lxc-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/lxc-users


--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Toby Corkindale
On 27/05/10 19:52, Daniel Lezcano wrote:
 On 05/27/2010 10:21 AM, Toby Corkindale wrote:
 On 27/05/10 18:06, atp wrote:
 As requested:


 ifconfig br0 from the host
 br0 Link encap:Ethernet HWaddr 00:1e:37:4d:8c:d8
 inet addr:192.168.1.206 Bcast:192.168.1.255 Mask:255.255.255.0
 inet6 addr: fe80::21e:37ff:fe4d:8cd8/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 RX packets:3867723 errors:0 dropped:0 overruns:0 frame:0
 TX packets:1849343 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:3451303555 (3.4 GB) TX bytes:382610461 (382.6 MB)

 Can you give the routes of the host please ?

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 br0
0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 br0


I have some KVM (via libvirt) virtual machines running on this host as
well, hence the virbr0.

--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Toby Corkindale
On 28/05/10 05:55, Matt Bailey wrote:
 /usr/sbin/ethtool -K br0 sg off
 /usr/sbin/ethtool -K br0 tso off

 Might fix your problem, YMMV; this worked for me.

Bam! Problem fixed.
All I needed was the 'sg' option - tso wasn't enabled anyway.

Now getting a healthy 15-16 mbyte/sec.


Thanks for that..

Is this a bug in a driver somewhere that I should report, or just something
one always needs to be aware of with LXC? (and thus should go in a FAQ)

thanks!
Toby

--

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users