Re: [Lxc-users] Can we run Ubuntu template on RHEL6?
On Wed, Mar 7, 2012 at 3:16 PM, Allen Elliott wrote:
> Hi:
> I created an Ubuntu 10.10 template and ran it on RHEL 6 (with kernel 2.6.32). I couldn't start it until I recompiled a 3.0 kernel.
>
> It seems fine most of the time, except for connections: I can't connect to the guest OS from the host with ssh, and I also can't connect to the guest OS from other machines with PuTTY, WinSCP or VNC. (I set up a network bridge and can ping the Ubuntu guest from outside, so the network is OK.) It seems the guest OS itself refuses the connections.
>
> As the guest OS shares the same kernel with the host, is this a kernel problem? Is it correct to run an Ubuntu container on RHEL 6?

What are your guest logs saying?

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] RH and clones 6.2, LXC, SElinux and multiple DEVPTS instances
I can say that it's true now :) I ran multiple tests, and I was led to erroneous conclusions by having the "newinstances" flag set for devpts on the host. You're right: you need to remove the devpts entry from the guest to make it work correctly.

Thanks again,
Olivier

On Tue, Mar 6, 2012 at 11:06 AM, Iliyan Stoyanov wrote:
> Hi Mauras,
>
> Do you by any chance have an fstab file in your container's /etc directory that is also trying to mount a devpts filesystem? I had this issue a week ago with some of my SL6.2 containers on a Fedora 16 host. After removing everything /dev/pts related from the fstab in the containers' /etc directory, everything magically worked.
>
> BR,
> --ilf
>
> [...]
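The fix agreed on above — removing the devpts entry from the guest's fstab — can be sketched as follows. The rootfs path is hypothetical, and a sample fstab stands in for the real file so the edit can be shown end to end:

```shell
# Sketch of the fix above: remove (here, comment out) any devpts entry from a
# container's /etc/fstab so the guest no longer mounts its own /dev/pts over
# the instance set up by lxc.
fstab=$(mktemp)   # stand-in for e.g. /srv/lxc/guest/rootfs/etc/fstab
cat > "$fstab" <<'EOF'
proc     /proc      proc    defaults        0 0
devpts   /dev/pts   devpts  gid=5,mode=620  0 0
tmpfs    /dev/shm   tmpfs   defaults        0 0
EOF

# Comment out the devpts line rather than deleting it, keeping a .bak backup.
sed -i.bak '/devpts/s/^/#/' "$fstab"
cat "$fstab"
```

Commenting the line out (instead of deleting it) makes the change easy to revert if the container is ever run outside lxc.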
Re: [Lxc-users] RH and clones 6.2, LXC, SElinux and multiple DEVPTS instances
On Tue, Mar 6, 2012 at 1:19 PM, Mauras Olivier wrote:
> [...]
Re: [Lxc-users] RH and clones 6.2, LXC, SElinux and multiple DEVPTS instances
On Tue, Mar 6, 2012 at 12:13 PM, Ramez Hanna wrote:
> [...]
>
> see my patch regarding f16 and my lxc-start-fedora script should give
> you an idea
>
> --
> BR
> RH
> http://informatiq.org

Hi,
Re: [Lxc-users] RH and clones 6.2, LXC, SElinux and multiple DEVPTS instances
On Tue, Mar 6, 2012 at 11:12 AM, Ramez Hanna wrote:
> [...]
>
> see my patch regarding f16 and my lxc-start-fedora script should give
> you an idea
>
> --
> BR
> RH
> http://informatiq.org

Hi,

Thanks for your reply. I actually looked at your patch, but I don't think it's relevant to my problem, as I don't start any getty in the container at all. I may be missing something; if so, please enlighten me.

Regards,
Olivier
Re: [Lxc-users] RH and clones 6.2, LXC, SElinux and multiple DEVPTS instances
Just tried it, and I get the same problem: "PTY allocation request failed on channel 0"

Cheers,
Olivier

On Tue, Mar 6, 2012 at 11:06 AM, Iliyan Stoyanov wrote:
> Hi Mauras,
>
> Do you by any chance have an fstab file in your container's /etc directory that is also trying to mount a devpts filesystem? I had this issue a week ago with some of my SL6.2 containers on a Fedora 16 host. After removing everything /dev/pts related from the fstab in the containers' /etc directory, everything magically worked.
>
> BR,
> --ilf
>
> [...]
[Lxc-users] RH and clones 6.2, LXC, SElinux and multiple DEVPTS instances
Hello,

I've finally migrated my setup from SMACK over to SELinux to isolate my containers (thanks to the folks on #selinux@freenode) on a Scientific Linux 6.2 host. (I may share my policy with some details if any of you are interested.) So far so good; after loads of hits and misses, almost everything works correctly.

The one thing that doesn't is multiple devpts instances. It seems that when the "lxc.pts" option is specified in the container config, ssh stops working: /dev/pts is correctly mounted _but_ still shows pts devices from the host. There are no specific SELinux AVC denials, and ssh rejects the shell connection with the kind of errors seen when /dev/pts is not correctly mounted:

sshd[552]: error: ssh_selinux_setup_pty: security_compute_relabel: No such file or directory
sshd[556]: error: ioctl(TIOCSCTTY): Operation not permitted
sshd[556]: error: open /dev/tty failed - could not set controlling tty: No such device or address

As you may guess, /dev/tty is present and /dev/pts is correctly mounted, since I can do: ssh root@container "ls -la /dev/pts". Only assigning the pts device for the shell fails...

Have any of you also hit this problem? Did you find a solution?

Regards,
Olivier

PS: Using lxc 0.7.5
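For context, the option in question lives in the container's config file. A minimal illustrative fragment (the path and values are assumptions, not from the thread): a non-zero lxc.pts requests a private devpts instance for the container, with the value giving the maximum number of ptys.

```
# /var/lib/lxc/guest/config  (illustrative path)
lxc.utsname = guest
lxc.tty = 4
# non-zero value requests a private devpts instance, capped at 1024 ptys
lxc.pts = 1024
```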
Re: [Lxc-users] [MySQL] Weird performances problem between containers on a same host
OK, so to get things working quickly, I added some network interfaces to the host and moved the MySQL container to a second bridge using eth2. No more speed problem... It only happens with containers sharing the same device.

On Thu, Aug 11, 2011 at 10:10 AM, Mauras Olivier wrote:
> Also, bandwidth between containers is very bad... scp'ing a file doesn't get over 2.1 MB/s (on a gigabit interface) and drops over time.
>
> [...]
Re: [Lxc-users] [MySQL] Weird performances problem between containers on a same host
Also, bandwidth between containers is very bad... scp'ing a file doesn't get over 2.1 MB/s (on a gigabit interface) and drops over time.

On Thu, Aug 11, 2011 at 9:32 AM, Mauras Olivier wrote:
> So here are my results. Of the 55 packets transmitted for the mysql request:
> 25 correct checksums
> 28 incorrect
>
> Disabling TSO and GSO doesn't help much; I got the exact same result.
>
> As for the macvlan bridge mode, how can I check whether it's available or not? Would things appear to work even if it's not supported?
> --- Misc ---
> Veth pair device: enabled
> Macvlan: enabled
> Vlan: enabled
> File capabilities: enabled
>
> Thanks,
> Olivier
>
> [...]
Re: [Lxc-users] [MySQL] Weird performances problem between containers on a same host
So here are my results. Of the 55 packets transmitted for the mysql request:
25 correct checksums
28 incorrect

Disabling TSO and GSO doesn't help much; I got the exact same result.

As for the macvlan bridge mode, how can I check whether it's available or not? Would things appear to work even if it's not supported?
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Thanks,
Olivier

On Wed, Aug 10, 2011 at 6:25 PM, Daniel Lezcano wrote:
> On 08/10/2011 05:54 PM, Daniel Lezcano wrote:
> > [...]
> >
> > Hmm, thanks for the detailed explanation.
> >
> > Can you check with tcpdump if there are problems with the packet checksums?
> > And try to disable TSO and GSO on eth1 if they are available?
>
> Oh, and another question. AFAIK, the bridge mode is available since the 2.6.33 kernel. If we try to enable bridge mode on a macvlan while it is not supported, no error is reported. So I don't know if the RH kernel backported the bridge mode into their .32 kernel.
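For anyone reproducing the setup under discussion, here is a sketch of the container's network section. The interface (eth1) and macvlan bridge mode come from the thread; the address is illustrative. As Daniel notes, lxc reports no error if the kernel's macvlan lacks bridge mode, so an older kernel can silently fall back to a mode where containers on the same device cannot talk to each other efficiently.

```
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth1
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.101/24   # illustrative address
```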
[Lxc-users] [MySQL] Weird performances problem between containers on a same host
Hello,

I have several containers running on a host (~10). One of them is running a MySQL database. Several of the others are running PHP code under Apache that fetches data from the database.

The host is using eth0, while my containers are on a bridge using eth1, configured in macvlan bridge mode. The host runs SL6 with the 2.6.32 RH kernel; the host is a VMware ESX virtual machine, for that matter. Ping latency between containers averages 0.050 ms.

What I noticed is that one webapp was getting very slow... After investigating, the only thing I could find is that requests from containers are _slower_ than from any other hosts.

See below:

container1 ~ # time (echo "select * from testsuites;" | mysql -h container_mysql -u nmp -pxxx testlink)
id details
42
(... cut, only 25 entries anyway)

real *0m0.875s*

Time varies between 0.8 and 1.2 s.

From the host, or another VM on the same network, with the exact same request:

real *0m0.022s*

So that particular app, which can loop over 19 requests, sometimes takes up to 20 s to display a page instead of ~0.5 s from another host...

Are there some sysctls/settings to tweak, or is it just not workable to make requests between containers on the same host??

Thanks,
Olivier
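A single `time` run like the one above can be noisy; averaging wall-clock time over several runs gives a steadier comparison between a container and another host. A minimal sketch, with `true` standing in for the actual mysql pipeline from the thread:

```shell
# Average the wall-clock time of a command over several runs.
# GNU date's %N gives nanoseconds. Replace 'true' with the real query, e.g.
#   echo "select * from testsuites;" | mysql -h container_mysql -u nmp -pxxx testlink
runs=10
total=0
for i in $(seq "$runs"); do
    start=$(date +%s%N)
    true                      # stand-in for the command under test
    end=$(date +%s%N)
    total=$(( total + end - start ))
done
avg_ms=$(( total / runs / 1000000 ))
echo "average over $runs runs: ${avg_ms} ms"
```

Run once from a container and once from an outside host; a consistent gap like the 0.875 s vs 0.022 s above points at the network path rather than the database.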
Re: [Lxc-users] Mitigating LXC Container Evasion?
Hi Andre,

You're right, it won't work out of the box; sorry, I forgot the network part.

echo 0.0.0.0/0 @ > /smack/netlabel

This will resolve the problem. Smack supports NetLabel/CIPSO, but honestly I don't need it, so I leave full access on this side. You'll definitely want to check the documentation if you need to fine-tune network access.

Cheers,
Olivier

On Wed, Aug 3, 2011 at 7:36 PM, Andre Nathan wrote:
> Hi Olivier
>
> On Tue, 2011-08-02 at 12:13 +0200, Mauras Olivier wrote:
> > Here's a practical example:
> > # smack_label.py -w -r /srv/lxc/lxc1 lxc1
> > # echo "lxc1" > /proc/self/attr/current
> > # lxc-start -n lxc1
> > # echo "_" > /proc/self/attr/current
>
> Does networking inside the containers work for you with this setup?
>
> Thanks,
> Andre
Re: [Lxc-users] Mitigating LXC Container Evasion?
Hello Andre,

All labels are set from the host, so it shouldn't matter whether a directory is bind-mounted or not.

The setup is actually pretty straightforward:
- You apply the desired label recursively on the container rootdir - see my python script to ease the process here: https://svn.coredumb.net/filedetails.php?repname=Coredumb&path=%2Fscripts%2Ftrunk%2Fpython%2Fsmack_label.py
- You change your current label to the desired one
- You start the container
- You change your current label back

Here's a practical example:

# smack_label.py -w -r /srv/lxc/lxc1 lxc1
# echo "lxc1" > /proc/self/attr/current
# lxc-start -n lxc1
# echo "_" > /proc/self/attr/current

You now have a container with all its files and processes labelled "lxc1". It's now up to you to set the accesses you need.

Note: "_", or "floor", is the default label. From the Smack documentation: a read or execute access requested on an object labelled "_" is permitted. This is the default behaviour and can of course be overridden.

Taking the example from my previous mail: I tried to mount sysfs in the container and it got refused, because mounting it read-only is impossible. In the message on the host:

type=1400 audit(1312278692.783:33840): lsm=SMACK fn=smack_sb_mount action=denied subject="curse" object="_" requested=w pid=19215 comm="mount" path="/sys" dev=sysfs ino=1

You can see here that a subject labelled "curse" tried to access sysfs, labelled "_", in write mode and got explicitly refused. You could change this behaviour by issuing the following command:

echo "curse _ rwx" > /smack/load

As you can guess, this is not what you want to do, because it would let your container write to the host ;)

To summarize: by default, simply setting a different label on your containers - without any complex configuration at all - ensures that a root inside a container can have only minimal impact, or no impact, on the host.
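The recursive labeling step (what smack_label.py automates) could look roughly like this - a hypothetical sketch, not the actual script, assuming a SMACK-enabled kernel and CAP_MAC_ADMIN to write the security.SMACK64 extended attribute:

```python
import os

SMACK_XATTR = "security.SMACK64"

def paths_under(root):
    """Yield root and every path below it."""
    yield root
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            yield os.path.join(dirpath, name)

def smack_label(root, label):
    """Recursively stamp a SMACK label on a container rootfs.

    Needs CAP_MAC_ADMIN and a kernel built with SMACK.
    """
    for path in paths_under(root):
        # follow_symlinks=False so links inside the rootfs are labeled
        # themselves rather than their (possibly external) targets.
        os.setxattr(path, SMACK_XATTR, label.encode(), follow_symlinks=False)
```

Usage would be `smack_label("/srv/lxc/lxc1", "lxc1")`, matching the `smack_label.py -w -r /srv/lxc/lxc1 lxc1` invocation above.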
The "smack setup" only consists of setting up the rules you need to secure your containers and the data inside them. All Smack documentation is available in the kernel sources directory.

Hope this helps and that I've made myself clear enough,
Olivier

On Mon, Aug 1, 2011 at 2:27 PM, Andre Nathan wrote:
> Hi Olivier
>
> On Sun, 2011-07-31 at 16:42 +0200, Mauras Olivier wrote:
> > Furthermore the system has SMACK enabled - Simplified Mandatory Access
> > Control - a label-based MAC.
> > Each LXC container has its files and processes labeled differently -
> > labels which can't write to the host system's default label, so basically
> > root in a container can't do anything harmful on the host system.
> > The same can be achieved _less easily_ with SELinux - look at the IBM papers.
>
> Would you mind sharing your SMACK setup?
>
> Also, do you know how this applies to bind-mounted directories? Can I
> label a container's files when they are read-only bind-mounted from the
> host?
>
> Thanks,
> Andre
Re: [Lxc-users] Mitigating LXC Container Evasion?
Hello Matthew,

Here's an example in one of my containers:

root@nasty:~# ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:13 init [3]
   44 ?        Ss     0:02 /usr/sbin/syslogd
  141 ?        Ss     0:00 /usr/sbin/sshd
  144 ?        S      0:01 /usr/sbin/crond -l6
  149 ?        Ss     0:25 /usr/sbin/httpd -k start
 2215 ?        S      0:14 /usr/sbin/httpd -k start
 7820 ?        S      0:36 /usr/sbin/httpd -k start
 8663 ?        S      0:00 /usr/sbin/httpd -k start
10159 ?        Ss     0:00 sshd: root@pts/18
10161 pts/18   Ss     0:00 -bash
10175 pts/18   R+     0:00 ps ax
26928 ?        S      0:05 /usr/sbin/httpd -k start
26936 ?        S      0:05 /usr/sbin/httpd -k start
26937 ?        S      0:05 /usr/sbin/httpd -k start
26938 ?        S      0:05 /usr/sbin/httpd -k start
26939 ?        S      0:05 /usr/sbin/httpd -k start
28054 ?        S      1:41 /usr/sbin/httpd -k start
29670 ?        S      0:15 /usr/sbin/httpd -k start
root@nasty:~# whoami
root
root@nasty:~# mount -t sysfs sysfs /sys
mount: block device sysfs is write-protected, mounting read-only
mount: cannot mount block device sysfs read-only
root@nasty:~# touch /test
root@nasty:~# rm /test
root@nasty:~# cat /sys/kernel/uevent_helper
root@nasty:~# echo "test" > /sys/kernel/uevent_helper
-bash: /sys/kernel/uevent_helper: Permission denied

Here are the capabilities dropped on the container:

lxc.cap.drop = sys_module mknod
lxc.cap.drop = mac_override kill sys_time
lxc.cap.drop = setfcap setpcap sys_boot

Furthermore the system has SMACK enabled - Simplified Mandatory Access Control - a label-based MAC. Each LXC container has its files and processes labeled differently - labels which can't write to the host system's default label, so basically root in a container can't do anything harmful on the host system. The same can be achieved _less easily_ with SELinux - look at the IBM papers.

Hope this helps,
Olivier

On Sun, Jul 31, 2011 at 3:10 AM, Matthew Franz wrote:
> Had seen some previous discussions before, but are there any ways to
> mitigate this design vulnerability?
>
> http://blog.bofh.it/debian/id_413
>
> Are there any workarounds?
>
> Thanks,
>
> - mdf
>
> --
> Matthew Franz
> mdfr...@gmail.com
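A quick way to double-check which capabilities actually got dropped, from inside a container, is to read the bounding set out of /proc. This is a sketch; the bit numbers come from linux/capability.h (CAP_SYS_MODULE is 16, CAP_MKNOD is 27), matching two of the lxc.cap.drop entries above.

```python
def cap_bounding_set():
    """Return the capability bounding set of this process as an int mask."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("CapBnd:"):
                return int(line.split()[1], 16)
    raise RuntimeError("CapBnd not found in /proc/self/status")

# Bit numbers from linux/capability.h.
CAP_SYS_MODULE, CAP_MKNOD = 16, 27

if __name__ == "__main__":
    bnd = cap_bounding_set()
    for name, bit in [("sys_module", CAP_SYS_MODULE), ("mknod", CAP_MKNOD)]:
        print(name, "dropped" if not bnd & (1 << bit) else "present")
```

Run inside the container configured above, both should report "dropped"; on an unrestricted host shell they would normally be present.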
Re: [Lxc-users] LXC on ESXi (help)
I tried this way too, but there are two blocking problems with it - at least for me:
- You can't use this feature on 2.6.32 kernels
- You have to reboot to add a new interface to set up a new container - yeah, say you want to add an 11th container ;)

Olivier

On Tue, May 17, 2011 at 5:36 PM, Ulli Horlacher <frams...@rus.uni-stuttgart.de> wrote:
> On Tue 2011-05-17 (17:18), David Touzeau wrote:
>
> > the host is a Virtual Machine stored on ESXi 4.0
> >
> > The container can ping the host, the host can ping the container.
> > The issue is the other computers on the network: they cannot ping the
> > container and the container cannot ping the network.
>
> I have had the same problems.
>
> My solution is: "lxc.network.type = phys"
>
> Every container has its own (pseudo-)physical ethernet interface, which
> is in fact an ESX virtual interface, but Linux (LXC) sees a real ethernet
> interface, therefore: lxc.network.type = phys
>
> I have created 10 more ethernet interfaces via vSphere. This costs
> virtually nothing :-)
>
> root@zoo:/lxc# fpg network *cfg
>
> bunny.cfg:
> lxc.network.type = phys
> lxc.network.link = eth4
> lxc.network.name = eth4
> lxc.network.flags = up
> lxc.network.mtu = 1500
> lxc.network.ipv4 = 129.69.8.7/24
>
> flupp.cfg:
> lxc.network.type = phys
> lxc.network.link = eth1
> lxc.network.name = eth1
> lxc.network.flags = up
> lxc.network.mtu = 1500
> lxc.network.ipv4 = 129.69.1.219/24
>
> vmtest1.cfg:
> lxc.network.type = phys
> lxc.network.link = eth2
> lxc.network.name = eth2
> lxc.network.flags = up
> lxc.network.mtu = 1500
> lxc.network.ipv4 = 129.69.1.42/24
>
>
> --
> Ullrich Horlacher          Server- und Arbeitsplatzsysteme
> Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
> Universitaet Stuttgart     Tel: ++49-711-685-65868
> Allmandring 30             Fax: ++49-711-682357
> 70550 Stuttgart (Germany)  WWW: http://www.rus.uni-stuttgart.de/
Re: [Lxc-users] LXC on ESXi (help)
Hello David,

As you can see, you only force the MAC address _inside_ the container; on the host, the MAC for the veth is "out of bounds" for ESX, and it doesn't seem to like that - at least that's my guess, because I have not been able to make it work correctly with this configuration.

First thing to check is that your ESX vswitch has promiscuous mode enabled - it's disabled by default. Next thing is to use a macvlan configuration for your containers. Here's a network config example I use successfully in my containers:

lxc.utsname = lxc1
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = 00:50:56:3f:ff:00  # High enough MAC to not overlap with ESX assignments - 00 to FF gives quite a good number of guests :)
lxc.network.ipv4 = 0.0.0.0              # I set the network inside the guest, for minimal guest modifications

I find it a bit painful to have to configure another macvlan interface on the host just to be able to communicate with the guests, so I'm assigning two interfaces to the host - the advantage of virtualization ;) - eth0 stays for the host network, and I set up a bridge over eth1, called br1, which is used for the containers.

I've had very good network performance since I set things up this way, and it completely fixed the stability problems I had with veth.

Tell me if you need some more details.

Cheers,
Olivier

On Tue, May 17, 2011 at 5:18 PM, David Touzeau wrote:
> Dear
>
> According to the last discussion, I have tried to change the MAC address to:
> 00:50:56:XX:YY:ZZ
> Thread was here:
> http://sourceforge.net/mailarchive/message.php?msg_id=27400968
>
> Using container veth+bridge
>
> the host is a Virtual Machine stored on ESXi 4.0
>
> The container can ping the host, the host can ping the container.
> The issue is the other computers on the network: they cannot ping the
> container and the container cannot ping the network.
>
> Has anybody else encountered this issue?
>
> Here is the ifconfig of the host:
>
> br5       Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
>           inet adr:192.168.1.64  Bcast:192.168.1.255  Masque:255.255.255.0
>           adr inet6: fe80::20c:29ff:fead:40a7/64 Scope:Lien
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:607044 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:12087 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 lg file transmission:0
>           RX bytes:54131332 (51.6 MiB)  TX bytes:6350221 (6.0 MiB)
>
> eth1      Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
>           adr inet6: fe80::20c:29ff:fead:40a7/64 Scope:Lien
>           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>           RX packets:611474 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:13813 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 lg file transmission:1000
>           RX bytes:63127550 (60.2 MiB)  TX bytes:6638350 (6.3 MiB)
>           Interruption:18 Adresse de base:0x2000
>
> vethZS6zKh Link encap:Ethernet  HWaddr 5E:AE:96:7C:4B:D7
>           adr inet6: fe80::5cae:96ff:fe7c:4bd7/64 Scope:Lien
>           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
>           RX packets:56 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:3875 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 lg file transmission:1000
>           RX bytes:3756 (3.6 KiB)  TX bytes:437097 (426.8 KiB)
>
> Container settings:
>
> lxc.tty = 4
> lxc.pts = 1024
> lxc.network.type = veth
> lxc.network.link = br5
> lxc.network.ipv4 = 192.168.1.72
> lxc.network.hwaddr = 00:50:56:a5:af:30
> lxc.network.name = eth0
> lxc.network.flags = up
> lxc.cgroup.memory.limit_in_bytes = 128M
> lxc.cgroup.memory.memsw.limit_in_bytes = 512M
> lxc.cgroup.cpu.shares = 1024
> lxc.cgroup.cpuset.cpus = 0
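The "high MAC in the VMware OUI" trick from this thread boils down to handing each container a distinct suffix under a fixed 00:50:56 prefix. A trivial, hypothetical helper (the 00:50:56:3f:ff prefix is just the example value used in the macvlan config above, not anything ESX mandates):

```python
def container_mac(index, prefix="00:50:56:3f:ff"):
    """MAC in the VMware OUI (00:50:56) for container number `index`.

    Keeping the first five octets fixed and high avoids colliding with
    the MACs that ESX auto-assigns to the VM's own vNICs.
    """
    if not 0 <= index <= 0xFF:
        raise ValueError("only 256 addresses available under one prefix")
    return "%s:%02x" % (prefix, index)
```

So container number 0 gets 00:50:56:3f:ff:00 (the lxc.network.hwaddr value shown above), container 1 gets 00:50:56:3f:ff:01, and so on, up to 256 guests per prefix.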
Re: [Lxc-users] Fwd: Container inside an ESX VM
On Wed, Apr 27, 2011 at 11:59 AM, Mauras Olivier wrote: > > > On Tue, Apr 26, 2011 at 6:03 PM, Mauras Olivier > wrote: > >> >> >> On Sat, Apr 23, 2011 at 12:40 PM, Mauras Olivier > > wrote: >> >>> Hi Geordy, >>> >>> Thanks for your reply. The first one is actually already set here. I >>> asked ESX folks to create me my own vswitch with promisc mode enabled. >>> I saw the second one coming, but didn't think that could make >>> something... There's also a setting like "mac.verify" that can be set to >>> false directly from the .vmx file to allow you to use another MAC than >>> 00:50:56:xx for your VM. >>> I'll try to force a high MAC in the 00:50:56 subset for my containers and >>> see what happens. >>> >>> >>> I'll let you know, >>> >>> Olivier >>> >>> >>> On Sat, Apr 23, 2011 at 9:12 AM, Geordy Korte wrote: >>> >>>> On Sun, Apr 17, 2011 at 8:39 AM, Geordy Korte wrote: >>>> >>>>> Thought about it some more and i think it might be an advanced esx >>>>> feature that restricts this. Basically a couple of adv features block >>>>> spoofing and mac changes on a vhost. I will try to find the specific >>>>> command >>>>> you need to run on an esx host tomorrow, or maybee someone can google it. >>>>> I >>>>> am 100% sure that it's not a bug in either esx or lxc and no modifications >>>>> are needed on the lxc side. >>>>> >>>>> >>>> Hi, >>>> >>>> Sorry for the delay, kids birthday and my new job has not left me with >>>> much time. Anyways I did some digging and founds some stuff that might >>>> help. >>>> >>>> The first one is in the properties of the vswitch that is >>>> interconnecting the lxc host to the network. Edit the properties and in the >>>> Security Tab make sure that promiscus mode, Mac changes and forged macs are >>>> set to accept. Basically the vswitch will allow all mac's coming from the >>>> lxc and not block them. >>>> >>>> The second tip is more of a maybee... ESX 3.x basically would allow to >>>> you to change the mac of the Vhost to whatever you wanted. 
In ESX 4.0, VMware rewrote the code and will
>>>> only allow you to specify a MAC if it is in the VMware OUI range. To make
>>>> sure that ESX does not cut the communication, try to set the MACs of your
>>>> LXC containers to: 00:50:56:XX:YY:ZZ
>>>>
>>>> I hope this helps a little. Give it a shot and let me know how it works
>>>> out.
>>>>
>>>> Geordy
>>>
>>
>> Hello,
>>
>> Good news here!! Forcing the container MAC to 00:50:56:xx:xx:xx makes it
>> work flawlessly! Two containers running at the same time without the need
>> to restart the network, and no kernel panic. So far so good!!
>> Problem solved for me; I will be able to deploy some more containers now.
>>
>> Thanks for your help.
>>
>> Olivier
>>
> And actually it's not quite right after all... I still have random
> container freezes, with sometimes "eth0: received packet with own address
> as source address" in my dmesg.
> The container can't access the network for 30 s, then gets back randomly;
> I can't find the reason for this :(
>
> I still have KPs with multiple containers up and running; I have to check
> the dump.
>
> If anyone has any idea about these network glitches...
>
> Thanks,
> Olivier

Hello,

Just a quick notice to say that I have resolved some of my problems by upgrading the kernel. I can now have containers running on a physical interface which mak
Re: [Lxc-users] Fwd: Container inside an ESX VM
On Tue, Apr 26, 2011 at 6:03 PM, Mauras Olivier wrote: > > > On Sat, Apr 23, 2011 at 12:40 PM, Mauras Olivier > wrote: > >> Hi Geordy, >> >> Thanks for your reply. The first one is actually already set here. I asked >> ESX folks to create me my own vswitch with promisc mode enabled. >> I saw the second one coming, but didn't think that could make something... >> There's also a setting like "mac.verify" that can be set to false directly >> from the .vmx file to allow you to use another MAC than 00:50:56:xx for >> your VM. >> I'll try to force a high MAC in the 00:50:56 subset for my containers and >> see what happens. >> >> >> I'll let you know, >> >> Olivier >> >> >> On Sat, Apr 23, 2011 at 9:12 AM, Geordy Korte wrote: >> >>> On Sun, Apr 17, 2011 at 8:39 AM, Geordy Korte wrote: >>> >>>> Thought about it some more and i think it might be an advanced esx >>>> feature that restricts this. Basically a couple of adv features block >>>> spoofing and mac changes on a vhost. I will try to find the specific >>>> command >>>> you need to run on an esx host tomorrow, or maybee someone can google it. I >>>> am 100% sure that it's not a bug in either esx or lxc and no modifications >>>> are needed on the lxc side. >>>> >>>> >>> Hi, >>> >>> Sorry for the delay, kids birthday and my new job has not left me with >>> much time. Anyways I did some digging and founds some stuff that might help. >>> >>> The first one is in the properties of the vswitch that is interconnecting >>> the lxc host to the network. Edit the properties and in the Security Tab >>> make sure that promiscus mode, Mac changes and forged macs are set to >>> accept. Basically the vswitch will allow all mac's coming from the lxc and >>> not block them. >>> >>> The second tip is more of a maybee... ESX 3.x basically would allow to >>> you to change the mac of the Vhost to whatever you wanted. In ESX 4.0 Vmware >>> rewrote the code and would allow you to specify a mac only if it was in the >>> vmware OUI range. 
To make sure that ESX does not cut the communication, try
>>> to set the MACs of your LXC containers to: 00:50:56:XX:YY:ZZ
>>>
>>> I hope this helps a little. Give it a shot and let me know how it works
>>> out.
>>>
>>> Geordy
>>>
>
> Hello,
>
> Good news here!! Forcing the container MAC to 00:50:56:xx:xx:xx makes it
> work flawlessly! Two containers running at the same time without the need
> to restart the network, and no kernel panic. So far so good!!
> Problem solved for me; I will be able to deploy some more containers now.
>
> Thanks for your help.
>
> Olivier
>

And actually it's not quite right after all... I still have random container freezes, with sometimes "eth0: received packet with own address as source address" in my dmesg. The container can't access the network for 30 s, then gets back randomly; I can't find the reason for this :(

I still have KPs (kernel panics) with multiple containers up and running; I have to check the dump.

If anyone has any idea about these network glitches...

Thanks,
Olivier
Re: [Lxc-users] Fwd: Container inside an ESX VM
On Sat, Apr 23, 2011 at 12:40 PM, Mauras Olivier wrote: > Hi Geordy, > > Thanks for your reply. The first one is actually already set here. I asked > ESX folks to create me my own vswitch with promisc mode enabled. > I saw the second one coming, but didn't think that could make something... > There's also a setting like "mac.verify" that can be set to false directly > from the .vmx file to allow you to use another MAC than 00:50:56:xx for > your VM. > I'll try to force a high MAC in the 00:50:56 subset for my containers and > see what happens. > > > I'll let you know, > > Olivier > > > On Sat, Apr 23, 2011 at 9:12 AM, Geordy Korte wrote: > >> On Sun, Apr 17, 2011 at 8:39 AM, Geordy Korte wrote: >> >>> Thought about it some more and i think it might be an advanced esx >>> feature that restricts this. Basically a couple of adv features block >>> spoofing and mac changes on a vhost. I will try to find the specific command >>> you need to run on an esx host tomorrow, or maybee someone can google it. I >>> am 100% sure that it's not a bug in either esx or lxc and no modifications >>> are needed on the lxc side. >>> >>> >> Hi, >> >> Sorry for the delay, kids birthday and my new job has not left me with >> much time. Anyways I did some digging and founds some stuff that might help. >> >> The first one is in the properties of the vswitch that is interconnecting >> the lxc host to the network. Edit the properties and in the Security Tab >> make sure that promiscus mode, Mac changes and forged macs are set to >> accept. Basically the vswitch will allow all mac's coming from the lxc and >> not block them. >> >> The second tip is more of a maybee... ESX 3.x basically would allow to >> you to change the mac of the Vhost to whatever you wanted. In ESX 4.0 Vmware >> rewrote the code and would allow you to specify a mac only if it was in the >> vmware OUI range. 
To make sure that ESX does not cut the communication, try
>> to set the MACs of your LXC containers to: 00:50:56:XX:YY:ZZ
>>
>> I hope this helps a little. Give it a shot and let me know how it works
>> out.
>>
>> Geordy
>>

Hello,

Good news here!! Forcing the container MAC to 00:50:56:xx:xx:xx makes it work flawlessly! Two containers running at the same time without the need to restart the network, and no kernel panic. So far so good!!
Problem solved for me; I will be able to deploy some more containers now.

Thanks for your help.

Olivier
Re: [Lxc-users] Fwd: Container inside an ESX VM
Hi Geordy,

Thanks for your reply. The first one is actually already set here: I asked the ESX folks to create my own vswitch with promiscuous mode enabled.
I saw the second one coming, but didn't think it could matter... There's also a setting, "mac.verify", that can be set to false directly in the .vmx file to allow you to use a MAC other than 00:50:56:xx for your VM.
I'll try to force a high MAC in the 00:50:56 range for my containers and see what happens.

I'll let you know,

Olivier

On Sat, Apr 23, 2011 at 9:12 AM, Geordy Korte wrote:
> On Sun, Apr 17, 2011 at 8:39 AM, Geordy Korte wrote:
>
>> Thought about it some more, and I think it might be an advanced ESX
>> feature that restricts this. Basically a couple of advanced features block
>> spoofing and MAC changes on a vhost. I will try to find the specific
>> command you need to run on an ESX host tomorrow, or maybe someone can
>> google it. I am 100% sure that it's not a bug in either ESX or LXC and no
>> modifications are needed on the LXC side.
>
> Hi,
>
> Sorry for the delay - a kid's birthday and my new job have not left me with
> much time. Anyway, I did some digging and found some stuff that might help.
>
> The first one is in the properties of the vswitch that is interconnecting
> the LXC host to the network. Edit the properties, and in the Security tab
> make sure that promiscuous mode, MAC changes and forged MACs are set to
> accept. Basically the vswitch will then allow all MACs coming from the LXC
> host and not block them.
>
> The second tip is more of a maybe... ESX 3.x would basically allow you to
> change the MAC of the vhost to whatever you wanted. In ESX 4.0, VMware
> rewrote the code and will only allow you to specify a MAC if it is in the
> VMware OUI range. To make sure that ESX does not cut the communication, try
> to set the MACs of your LXC containers to: 00:50:56:XX:YY:ZZ
>
> I hope this helps a little. Give it a shot and let me know how it works
> out.
> > Geordy > > > -- > Fulfilling the Lean Software Promise > Lean software platforms are now widely adopted and the benefits have been > demonstrated beyond question. Learn why your peers are replacing JEE > containers with lightweight application servers - and what you can gain > from the move. http://p.sf.net/sfu/vmware-sfemails > ___ > Lxc-users mailing list > Lxc-users@lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/lxc-users > > -- Fulfilling the Lean Software Promise Lean software platforms are now widely adopted and the benefits have been demonstrated beyond question. Learn why your peers are replacing JEE containers with lightweight application servers - and what you can gain from the move. http://p.sf.net/sfu/vmware-sfemails___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] Fwd: Container inside an ESX VM
So, some more testing today. Here's what happens: when I have one container up with my host-network-restart trick, everything's fine - I can download gigabytes of data without problem. Starting a second one, and redoing the network trick to have network in this one too, everything looks OK. About 5 minutes later, the interface gets shut down, and kernel panic...

That's all for today :D

On Mon, Apr 18, 2011 at 11:47 AM, Mauras Olivier wrote:
> Thanks, help is really appreciated.
>
>
> Cheers,
> Olivier
>
>
> On Sun, Apr 17, 2011 at 8:39 AM, Geordy Korte wrote:
>
>> Hi,
>>
>> Thought about it some more, and I think it might be an advanced ESX
>> feature that restricts this. Basically a couple of advanced features block
>> spoofing and MAC changes on a vhost. I will try to find the specific
>> command you need to run on an ESX host tomorrow, or maybe someone can
>> google it. I am 100% sure that it's not a bug in either ESX or LXC and no
>> modifications are needed on the LXC side.
>>
>> Mvg
>>
>> Geordy Korte
>> (Sent via iPhone, so shorter than normal)
>>
>>
>> On 16 apr. 2011, at 23:51, Ulli Horlacher wrote:
>>
>> > On Sat 2011-04-16 (22:24), Geordy Korte wrote:
>> >
>> >> Due to the architecture of ESX, it will only permit one MAC per vswitch
>> >> port. If it allowed more, security would be compromised. The solution
>> >> would be to have each LXC container bound to a vNIC.
>> >
>> > I have had the same problem with LXC on ESX last week. I also thought
>> > using separate vNICs for each container would be a solution, but LXC has
>> > a bug: it does not give the interface its original name back to the
>> > host; e.g. eth1 becomes dev3 and you cannot rename it back.
>> >
>> > The result of this bug is: you can start a container only once, then you
>> > have to reboot the host. This is a complete show stopper.
Re: [Lxc-users] Fwd: Container inside an ESX VM
Thanks, help is really appreciated.

Cheers,
Olivier

On Sun, Apr 17, 2011 at 8:39 AM, Geordy Korte wrote:
> Hi,
>
> Thought about it some more and I think it might be an advanced ESX feature
> that restricts this. Basically, a couple of advanced features block spoofing
> and MAC changes on a vhost. I will try to find the specific command you need
> to run on an ESX host tomorrow, or maybe someone can google it. I am 100%
> sure that it's not a bug in either ESX or LXC and no modifications are
> needed on the LXC side.
>
> Mvg
>
> Geordy Korte
> (Sent via iPhone, so shorter than normal)
>
> On 16 apr. 2011, at 23:51, Ulli Horlacher wrote:
>
>> On Sat 2011-04-16 (22:24), Geordy Korte wrote:
>>
>>> Due to the architecture of ESX, it will only permit one MAC per vswitch
>>> port. If it allowed more, security would be compromised. The solution
>>> would be to have each LXC container bound to a vnic.
>>
>> I had the same problem with LXC on ESX last week. I also thought using
>> separate vnics for each container would be a solution, but LXC has a bug:
>> it does not give the interface back to the host under its original name;
>> e.g., eth1 becomes dev3 and you cannot rename it back.
>>
>> The result of this bug is that you can start a container only once; after
>> that you have to reboot the host. This is a complete show stopper.
>>
>> --
>> Ullrich Horlacher          Server- und Arbeitsplatzsysteme
>> Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
>> Universitaet Stuttgart     Tel: ++49-711-685-65868
>> Allmandring 30             Fax: ++49-711-682357
>> 70550 Stuttgart (Germany)  WWW: http://www.rus.uni-stuttgart.de/

--
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] Fwd: Container inside an ESX VM
On Sat, Apr 16, 2011 at 3:45 PM, Serge Hallyn wrote:
> > As you see in this example, before issuing the network restart, my veth
> > MAC was already higher than the eth0 MAC, but the guest didn't have a
> > working network connection.
>
> Thanks for the info.
>
> > After restarting the network on the host while the guest is still
> > running, as you can see my MACs haven't changed a bit, but now the
> > network inside the guest is working correctly.
> >
> > Hope this helps to better understand the problem.
>
> Heh, but not to understand the cause of the problem :)
>
> I will see if I can reproduce this problem next week with VMware
> Workstation.
>
> -serge

I'll check next week whether it's a problem with interface load balancing on the ESX vswitch side...
[Lxc-users] Fwd: Container inside an ESX VM
Missed the list in copy, sorry.

On Fri, Apr 15, 2011 at 3:20 PM, Serge Hallyn wrote:
> Quoting Mauras Olivier (oliver.mau...@gmail.com):
> > Hello,
> >
> > I've been struggling for two days now with some completely weird network
> > behaviours. My host is a virtual machine hosted on an ESX farm. I planned
> > to deploy several containers on it to achieve various tasks.
> >
> > The host is running Scientific Linux 6 with the default kernel (2.6.32),
> > and my container is an Oracle Linux 6. I discovered that I had to change
> > the ESX vswitch settings to allow promiscuous mode in order to make the
> > host bridge behave correctly, but it still gives me weird results.
> > Most of the time after having started the container, the network inside
> > the container is erratic. I can ping or ssh from the host to the
> > container, but nothing gets out of the container, or into the container
> > from the LAN. While the container is still running, if I issue a network
> > restart on the host, the container starts behaving correctly and the
> > network works again as expected. The problem is that it's not reliable
> > at all. If I stop/restart the container several times, it starts losing
> > network again, which I can only get back by issuing the network restart
> > on the host...
>
> Just a thought, advised by previous libvirt troubles.
>
> Can you look at the MAC addresses on the VMware guest? Check that the
> eth0 on the VMware guest (i.e. the container host) is always lower than
> that of the veths in the guests.
>
> -serge

Hi Serge,

Thanks for your reply. This is one thing I found while searching for a solution, and after verifying, everything seems OK. By the way, I did decide to force a high MAC for the containers, so now they have fe:ff:xx MACs. Is there a way to force the veth MAC on the host side?

What I noted is that in any case, when I issue the network restart on the host and get the network back in the container, there is no change in the MACs.
The veth keeps the same MAC as long as I don't restart the container. Here's an example:

[root@host 16:08:39 ~]# ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:ae:00:41 brd ff:ff:ff:ff:ff:ff
6: br0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:50:56:ae:00:41 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global br0
17: veth56nN7a: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether a2:8a:f2:ec:f1:1b brd ff:ff:ff:ff:ff:ff
[root@host 16:08:42 ~]# ssh guest
root@guest's password:
[root@guest ~]# curl -I www.coredumb.net
curl: (7) couldn't connect to host
[root@guest ~]# logout
Connection to guest closed.
[root@host 16:11:22 ~]# /etc/init.d/network restart
Shutting down interface br0:       [ OK ]
Shutting down interface eth0:      [ OK ]
Shutting down loopback interface:  [ OK ]
Bringing up loopback interface:    [ OK ]
Bringing up interface eth0:        [ OK ]
Bringing up interface br0:         [ OK ]
[root@host 16:11:58 ~]# ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:ae:00:41 brd ff:ff:ff:ff:ff:ff
6: br0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:50:56:ae:00:41 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 brd 192.168.1.255 scope global br0
17: veth56nN7a: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether a2:8a:f2:ec:f1:1b brd ff:ff:ff:ff:ff:ff
[root@host 16:12:01 ~]# ssh guest
root@guest's password:
[root@guest ~]# curl -I www.coredumb.net
HTTP/1.1 200 OK
Date: Fri, 15 Apr 2011 14:13:04 GMT
Server: Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8m DAV/2 PHP/5.2.10 SVN/1.6.4
X-Powered-By: PHP/5.2.10
Expires: Tue, 01 Jan 2002 00:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Type: text/html; charset=ISO-8859-1
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: PHPSESSID=giplm9rrr0nmb1fo03f6u39435; path=/

As you see in this example, before issuing the network restart, my veth MAC was already higher than the eth0 MAC, but the guest didn't have a working network connection. After restarting the network on the host while the guest was still running, as you can see, my MACs haven't changed a bit, but now the network inside the guest is working correctly.

Hope this helps to better understand the problem.

Cheers,
Olivier
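As an aside, the MAC-ordering check Serge suggested can be scripted. A Linux bridge of that era adopts the numerically lowest MAC among its enslaved ports, so a veth that sorts below eth0's MAC changes the bridge's own MAC when the container starts. A minimal sketch, using the MACs from the transcript above (`mac_lt` is a hypothetical helper, not an LXC tool):

```shell
#!/bin/sh
# mac_lt A B -> succeeds if MAC address A sorts numerically below MAC B
mac_lt() {
    # normalize: strip colons, lowercase hex digits
    a=$(printf '%s' "$1" | tr -d ':' | tr 'A-F' 'a-f')
    b=$(printf '%s' "$2" | tr -d ':' | tr 'A-F' 'a-f')
    # POSIX-portable lexicographic compare via sort
    [ "$a" != "$b" ] && [ "$(printf '%s\n%s\n' "$a" "$b" | sort | head -n 1)" = "$a" ]
}

# the MACs from the transcript: eth0 vs the container's veth
if mac_lt 00:50:56:ae:00:41 a2:8a:f2:ec:f1:1b; then
    echo "eth0 MAC is lower: bridge keeps the NIC's MAC"
else
    echo "a veth MAC is lower: bridge MAC would change"
fi
```

Since the fixed-length hex strings compare lexicographically the same way they compare numerically, no arithmetic is needed.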
[Lxc-users] Container inside an ESX VM
Hello,

I've been struggling for two days now with some completely weird network behaviours. My host is a virtual machine hosted on an ESX farm. I planned to deploy several containers on it to achieve various tasks.

The host is running Scientific Linux 6 with the default kernel (2.6.32), and my container is an Oracle Linux 6. I discovered that I had to change the ESX vswitch settings to allow promiscuous mode in order to make the host bridge behave correctly, but it still gives me weird results. Most of the time after having started the container, the network inside the container is erratic. I can ping or ssh from the host to the container, but nothing gets out of the container, or into the container from the LAN. While the container is still running, if I issue a network restart on the host, the container starts behaving correctly and the network works again as expected. The problem is that it's not reliable at all. If I stop/restart the container several times, it starts losing network again, which I can only get back by issuing the network restart on the host...

Here's my container configuration:

lxc.utsname = ct-011
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.ipv4 = 0.0.0.0
lxc.mount = /etc/lxc/ct-01.fstab
lxc.rootfs = /srv/lxc/ct-01/
lxc.cap.drop = sys_module mknod
lxc.cap.drop = mac_override sys_time
lxc.cap.drop = setfcap setpcap sys_boot

I set the network from inside the container to avoid having to modify too much of the container init - I also tried setting the IP from the LXC config, and it gave me the same result. My bridge is set with forward delay 0 and STP on, as having it disabled doesn't work at all.
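For what it's worth, both ends of the veth pair can be pinned from the LXC config of that era, which makes the host side predictable for debugging; a sketch assuming the lxc.conf(5) option names `lxc.network.hwaddr` (container-side MAC) and `lxc.network.veth.pair` (host-side interface name) - the example MAC and name are made up:

```
# sketch only: pin the container-side MAC high (fe:ff:...) so it never
# sorts below the host NIC's MAC and steals the bridge's MAC
lxc.network.hwaddr = fe:ff:aa:bb:cc:01
# name the host-side veth deterministically instead of the random vethXXXXXX
lxc.network.veth.pair = veth-ct-01
```

This doesn't force the host-side veth MAC itself, but a fixed name makes it easy to inspect or change the MAC with `ip link` from a hook or init script.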
I don't have many errors that could lead me to a solution. Here's a snippet of my dmesg after restarting the network twice on the host:

e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
br0: starting userspace STP failed, starting kernel STP
br0: topology change detected, propagating
br0: port 1(eth0) entering forwarding state
device vethAuDQzn entered promiscuous mode
br0: topology change detected, propagating
br0: port 2(vethAuDQzn) entering forwarding state
br0: port 2(vethAuDQzn) entering disabled state
br0: port 1(eth0) entering disabled state
br0: port 1(eth0) entering disabled state
e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
br0: topology change detected, propagating
br0: port 1(eth0) entering forwarding state
br0: topology change detected, propagating
br0: port 2(vethAuDQzn) entering forwarding state

I'm starting to despair here, and I hope one of you has an idea of what is needed to make this work correctly.

Regards,
Olivier

PS: Sorry if this mail gets duplicated; it doesn't appear to be correctly sent.
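On a RHEL-family host like Scientific Linux 6, the forward-delay-0 / STP-on bridge described above is usually persisted in an ifcfg file; a sketch with the address taken from the transcript earlier in the thread (path and values assumed - verify against your own setup):

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (assumed path)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0
DELAY=0   # forward delay 0
STP=on    # spanning tree on, as described above
```

The physical NIC's ifcfg-eth0 would then carry `BRIDGE=br0` instead of an IP address.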