Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-31 Thread Matt Bailey
Performance issues with networking over veth devices can often be
linked to the hardware-offload features implemented by the kernel
drivers for various NICs.  I found this by toggling the offload
tunables with ethtool until the problem went away. I'm sure there's a
deeper issue in the drivers themselves, but on my part it was purely a
trial-and-error discovery.
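
For anyone who wants to repeat the experiment, here is a minimal sketch of
that trial-and-error loop (br0 and the list of offloads are assumptions;
adjust for your bridge/NIC and ethtool version):

/usr/sbin/ethtool -k br0            # list which offload features are currently on
for feature in sg tso gso gro; do   # sg = scatter-gather, tso/gso = segmentation offload, gro = receive offload
    /usr/sbin/ethtool -K br0 $feature off
    # ... re-run your throughput test here; re-enable the feature if it made no difference:
    # /usr/sbin/ethtool -K br0 $feature on
done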

--
Matt Bailey
303.871.4923
Senior Software Specialist
University of Denver, UTS
http://du.edu
http://system42.net



On Fri, May 28, 2010 at 3:12 AM, Daniel Lezcano daniel.lezc...@free.fr wrote:
 On 05/28/2010 02:58 AM, Toby Corkindale wrote:

 On 28/05/10 05:55, Matt Bailey wrote:

 /usr/sbin/ethtool -K br0 sg off   # turn off scatter-gather
 /usr/sbin/ethtool -K br0 tso off  # turn off TCP segmentation offload

 Might fix your problem, YMMV; this worked for me.

 Bam! Problem fixed.
 All I needed was the 'sg' option - tso wasn't enabled anyway.

 Now getting a healthy 15-16 MB/sec.
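
 A quick way to reproduce that before/after measurement, assuming
 traditional netcat is available on both ends (the host IP and port below
 are placeholders):

 nc -l -p 5001 > /dev/null                            # on the host: listen and discard
 dd if=/dev/zero bs=1M count=100 | nc 10.0.0.1 5001   # in the container: GNU dd reports throughput on completion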

 Great!

 Thanks for that..

 Is this a bug in a driver somewhere that I should report, or just something
 one always needs to be aware of with LXC? (and thus should go in a FAQ)

 The truth is, this is the first time I've seen this problem solved by this
 trick. I suppose it has something to do with the capabilities of your NIC
 and the bridge inheriting them. Dunno ...
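
 One way to check that supposition is to compare the offload flags on the
 physical NIC with those on the bridge; if the bridge inherits them, the two
 listings should largely match (eth0/br0 are assumed device names):

 /usr/sbin/ethtool -k eth0 > nic.txt
 /usr/sbin/ethtool -k br0 > bridge.txt
 diff nic.txt bridge.txt      # differences show which features did not carry over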

 Matt,

 how did you find that? Have you seen the same problem with other
 virtualization solutions (xen, vmware, qemu, openvz, ...)? Do you have a
 pointer describing the problem/solution, so we can add a FAQ entry with a
 good description / diagnosis of the problem?

 Thanks
  -- Daniel





Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread Matt Bailey
/usr/sbin/ethtool -K br0 sg off
/usr/sbin/ethtool -K br0 tso off

Might fix your problem, YMMV; this worked for me.
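
If it helps, note that ethtool settings don't survive a reboot. One way to
make them stick on a Debian/Ubuntu-style system is a post-up hook, assuming
br0 is already defined in /etc/network/interfaces:

# /etc/network/interfaces (fragment; the br0 stanza shown is an assumption)
iface br0 inet dhcp
    post-up /usr/sbin/ethtool -K br0 sg off
    post-up /usr/sbin/ethtool -K br0 tso off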

--
Matt Bailey
303.871.4923
Senior Software Specialist
University of Denver, UTS
http://du.edu
http://system42.net



On Thu, May 27, 2010 at 4:48 AM, atp andrew.phill...@lmax.com wrote:
 Toby

 Just FYI in case you were unaware - it seems one of your MXs is black
 holed.

 I tried to email you directly, but messagelabs said:
 toby.corkind...@strategicdata.com.au:
 74.125.148.10 does not like recipient.
 Remote host said: 554 5.7.1 Service unavailable; Client host
 [74.125.149.113] blocked using sbl-xbl.spamhaus.org;
 http://www.spamhaus.org/query/bl?ip=74.125.149.113

 reply below:
 ---
 From a quick look, everything seems reasonably normal, except that
 txqueuelen for br0 is 0 and probably should be higher - the usual default
 for Ethernet devices is 1000.
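
 To check the current value and raise it (br0 and the value 1000 are
 assumptions):

 ip link show br0                  # the current qlen is printed on the first line
 ip link set br0 txqueuelen 1000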

 The number of packets on eth0 seems very small - 1416 rx and 495 tx.

 I take it you've checked for things like ARP poisoning - two of the
 containers having the same MAC address?
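
 A quick duplicate-MAC check across the bridge ports, assuming brctl from
 bridge-utils is installed:

 brctl showmacs br0 | awk '{print $2}' | sort | uniq -d   # prints any MAC address that appears twice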

 What does the flow of packets from the container look like in tcpdump?
 Are you seeing resets, duplicate packets, or retransmissions?
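
 For instance, run inside the container while a slow transfer is in
 progress (the interface name and port are assumptions):

 tcpdump -ni eth0 'tcp[tcpflags] & tcp-rst != 0'   # show only TCP resets
 tcpdump -ni eth0 port 5001                        # or watch the whole flow for dups/retransmits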

 Andy





Re: [Lxc-users] lxc-start: Device or resource busy - could not unmount old rootfs

2010-04-14 Thread Matt Bailey
Those are also the instructions I followed.  The only difference I can
see is that I have a separate volume for /var on my host.
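
One thing worth checking when the old rootfs won't unmount is whether a
mount, such as that separate /var volume, is still held underneath it. A
hedged sketch (the --make-rprivate step is only an assumption about the
cause):

grep ' /var ' /proc/self/mountinfo   # does the /var mount carry a shared:N propagation flag?
mount --make-rprivate /              # stop mount events propagating into the container's namespace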

--
Matt Bailey
303.871.4923
Senior Software Specialist
University of Denver, UTS
http://du.edu
http://system42.net



On Wed, Apr 14, 2010 at 10:03 AM, Serge E. Hallyn se...@us.ibm.com wrote:
 Quoting Pier Fumagalli (p...@betaversion.org):
 Sorry for not pointing it out earlier.

 Yes, my problem is upon startup of the *first* container on Lucid, and it
 works when rolling back to 0.6.3.

 Hrmph.  I just installed a lucid container on a lucid kvm partition,
 basically following the directions at

 http://blog.bodhizazen.net/linux/lxc-configure-ubuntu-lucid-containers/

 and can't reproduce this.  Can you look at the config in there and see
 what you do differently?

 -serge




[Lxc-users] lxc-start: Device or resource busy - could not unmount old rootfs

2010-04-13 Thread Matt Bailey
 - failed to set rootfs for 'test'
lxc-start: failed to set rootfs for 'test'
  lxc-start 1271189821.573 ERROR    lxc_start - failed to setup the container
lxc-start: failed to setup the container
  lxc-start 1271189821.573 NOTICE   lxc_start - '/sbin/init' started with pid '2229'
  lxc-start 1271189821.573 DEBUG    lxc_utils - closing fd '1'
  lxc-start 1271189821.573 DEBUG    lxc_utils - closing fd '0'
  lxc-start 1271189821.573 DEBUG    lxc_utils - closed all inherited file descriptors
  lxc-start 1271189821.634 DEBUG    lxc_start - child exited
  lxc-start 1271189821.634 INFO     lxc_error - child 2229 ended on error (255)
  lxc-start 1271189821.634 DEBUG    lxc_cgroup - using cgroup mounted at '/cgroup'
  lxc-start 1271189821.714 DEBUG    lxc_cgroup - '/cgroup/test' unlinked


Thanks,
--
Matt Bailey
