[Lxc-users] lxc.cgroup.memory.limit_in_bytes has no effect

2011-05-17 Thread Ulli Horlacher

Memory limitation does not work for me:

root@vms2:/lxc# uname -a
Linux vms2 2.6.32-31-server #61-Ubuntu SMP Fri Apr 8 19:44:42 UTC 2011 x86_64 
GNU/Linux

root@vms2:/lxc# grep CONFIG_CGROUP_MEM_RES_CTLR /boot/config-2.6.32-31-server
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y

root@vms2:/lxc# grep limit_in_bytes /lxc/flupp.cfg
lxc.cgroup.memory.limit_in_bytes = 536870912

root@vms2:/lxc# lxc-version 
lxc version: 0.7.4.1

root@vms2:/lxc# lxc-start -d -n flupp -f /lxc/flupp.cfg
root@vms2:/lxc# lxc-console -n flupp

Type Ctrl+a q to exit the console

root@flupp:~# ls -l /tmp/1GB.tmp
-rw-r--r-- 1 root root 1073741824 2011-05-17 06:06 /tmp/1GB.tmp

root@flupp:~# clp
Command Line Perl with readline support, @ARGV and Specials. Type ? for help.
(perl):: undef $/; open F,'/tmp/1GB.tmp' or die; $_=<F>; print length

1073741824


Why can a container process allocate more than 1 GB of memory if there is
a 512 MB limit?


-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/



Re: [Lxc-users] lxc.cgroup.memory.limit_in_bytes has no effect

2011-05-17 Thread Daniel Lezcano
On 05/17/2011 08:34 AM, Ulli Horlacher wrote:
 Memory limitation does not work for me:

 root@vms2:/lxc# uname -a
 Linux vms2 2.6.32-31-server #61-Ubuntu SMP Fri Apr 8 19:44:42 UTC 2011 x86_64 
 GNU/Linux

 root@vms2:/lxc# grep CONFIG_CGROUP_MEM_RES_CTLR /boot/config-2.6.32-31-server
 CONFIG_CGROUP_MEM_RES_CTLR=y
 CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y

 root@vms2:/lxc# grep limit_in_bytes /lxc/flupp.cfg
 lxc.cgroup.memory.limit_in_bytes = 536870912

 root@vms2:/lxc# lxc-version
 lxc version: 0.7.4.1

 root@vms2:/lxc# lxc-start -d -n flupp -f /lxc/flupp.cfg
 root@vms2:/lxc# lxc-console -n flupp

 Type Ctrl+a q to exit the console

 root@flupp:~# ls -l /tmp/1GB.tmp
 -rw-r--r-- 1 root root 1073741824 2011-05-17 06:06 /tmp/1GB.tmp

 root@flupp:~# clp
 Command Line Perl with readline support, @ARGV and Specials. Type ? for 
 help.
 (perl):: undef $/; open F,'/tmp/1GB.tmp' or die; $_=<F>; print length

 1073741824


 Why can a container process allocate more than 1 GB of memory if there is
 a 512 MB limit?

I don't know exactly what your perl program does, but I suggest you try
with a simple C program:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/poll.h>

int main(int argc, char *argv[])
{
	char *addr;

	/* allocate and immediately populate 512 MB of anonymous memory */
	addr = mmap(NULL, 512 * 1024 * 1024, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return -1;
	}

	/* block forever so the memory stays allocated */
	poll(0, 0, -1);
	return 0;
}
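
(To try it, assuming gcc is available inside the container: compile it with
something like gcc -o memtest memtest.c - the file name is just an example -
run it, and watch the container's memory and swap usage from the host.)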

When a process reaches the memory limit, the container will 
begin to swap. This is not really what we want, as it can impact the 
performance of the other containers with continuous disk I/O. So the 
solution would be to prevent the container from swapping, or to play 
with the swappiness (not tried myself).

In order to disable swap, you have to set 
memory.memsw.limit_in_bytes = memory.limit_in_bytes.
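
For example, a minimal sketch of what that could look like in the container
config, reusing the 512 MB limit from above (the values are just an example):

lxc.cgroup.memory.limit_in_bytes = 536870912
lxc.cgroup.memory.memsw.limit_in_bytes = 536870912

You can check the active limits from the host with
cat /cgroup/<container>/memory.limit_in_bytes and
cat /cgroup/<container>/memory.memsw.limit_in_bytes
(the cgroup mount point may differ on your system).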



Re: [Lxc-users] lxc.cgroup.memory.limit_in_bytes has no effect

2011-05-17 Thread David Serrano
On Tue, May 17, 2011 at 09:10, Daniel Lezcano daniel.lezc...@free.fr wrote:
 On 05/17/2011 08:34 AM, Ulli Horlacher wrote:

 root@flupp:~# clp
 Command Line Perl with readline support, @ARGV and Specials. Type ? for 
 help.
 (perl):: undef $/; open F,'/tmp/1GB.tmp' or die; $_=<F>; print length

 1073741824

 I don't know exactly what your perl program does

It reads the whole file into a variable and then prints the length of
that variable, which shows that the file has actually been read into
memory.


 When a process reaches the memory limit, the container will
 begin to swap.

Yes, that's what I saw in a quick test.


--
David Serrano



[Lxc-users] memory.usage_in_bytes value

2011-05-17 Thread David Touzeau
Dear


I have a Debian system running in a container.

/cgroup/vps-1/memory.usage_in_bytes displays

10784768

10784768 -> about 10 MB of memory used.

I suspect that this value does not reflect a running Debian system.

Is that true?

best regards




Re: [Lxc-users] Running LXC on a pxelinux machine

2011-05-17 Thread Gus Power
Unfortunately I haven't managed to get any further :(

I can still ping the LXC containers from other hosts on the network, and
they can ping each other, but I cannot ping them from the pxelinux host
machine.

Comparing the network config between the pxelinux host and a
non-pxelinux host, I can see that the pxelinux host has an IP associated
with eth0 while the non-pxelinux host associates the IP with br0. I've
made various attempts to reassign the IP address on the pxelinux host to
br0, but to no avail (the attempts result in hanging the machine).

Any more pointers would be a great help!

Gus.


On 04/05/11 13:23, Gus Power wrote:
 Hi Guido,
 
  Why is STP disabled?
 
 Good question! Info below:
 
 route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
 127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
 0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
 
 brctl showstp br0
 br0
  bridge id              8000.00183704c188
  designated root        8000.00183704c188
  root port                 0                 path cost                  0
  max age                  19.99              bridge max age            19.99
  hello time                1.99              bridge hello time          1.99
  forward delay             0.00              bridge forward delay       0.00
  ageing time             299.95
  hello timer               0.38              tcn timer                  0.00
  topology change timer     0.00              gc timer                  33.56
  flags


 eth0 (1)
  port id                8001                 state             forwarding
  designated root        8000.00183704c188    path cost                  4
  designated bridge      8000.00183704c188    message age timer       0.00
  designated port        8001                 forward delay timer     0.00
  designated cost           0                 hold timer              0.00
  flags

 vethNFweOZ (2)
  port id                8002                 state             forwarding
  designated root        8000.00183704c188    path cost                  2
  designated bridge      8000.00183704c188    message age timer       0.00
  designated port        8002                 forward delay timer     0.00
  designated cost           0                 hold timer              0.00
  flags

 vethNeCrkd (4)
  port id                8004                 state             forwarding
  designated root        8000.00183704c188    path cost                  2
  designated bridge      8000.00183704c188    message age timer       0.00
  designated port        8004                 forward delay timer     0.00
  designated cost           0                 hold timer              0.00
  flags

 vethU0zyYA (3)
  port id                8003                 state             forwarding
  designated root        8000.00183704c188    path cost                  2
  designated bridge      8000.00183704c188    message age timer       0.00
  designated port        8003                 forward delay timer     0.00
  designated cost           0                 hold timer              0.00
  flags
 
 brctl showmacs br0
 port no  mac addr           is local?  ageing timer
   1      00:00:48:0e:9a:16  no                75.22
   1      00:16:01:df:a7:36  no                33.51
   1      00:18:37:04:c0:36  no                 3.40
   1      00:18:37:04:c1:15  no                56.37
   1      00:18:37:04:c1:80  no                45.16
   1      00:18:37:04:c1:88  yes                0.00
   1      00:18:37:04:c1:a0  no                43.84
   1      00:18:37:04:c1:c5  no                19.96
   1      00:1d:73:4c:13:e8  no                45.23
   1      00:1e:c9:59:a4:83  no                 3.39
   1      00:1f:28:dc:ba:80  no                19.52
   1      00:1f:c6:bf:07:4d  no                 5.10
   1      00:23:6c:84:ce:57  no                33.66
   1      08:00:27:dc:f1:ca  no                33.66
   1      20:cf:30:4e:1a:fd  no                73.35
   1      20:cf:30:5a:c9:e7  no                42.16
   1      2a:68:44:23:5b:3d  no                34.18
   4      7a:0c:74:86:f6:f4  yes                0.00
   3      92:61:42:84:ec:5a  yes                0.00
   2      96:73:0c:d0:71:f5  yes                0.00
   1      a2:f7:44:bf:9e:25  no                67.64
 
 
 G
 
 On 04/05/11 09:24, Jäkel, Guido wrote:
 Dear Gus,

 brctl show
 bridge name  bridge id          STP enabled  interfaces
 br0          8000.00183704c188  no           eth0
                                              vethNFweOZ
  

Re: [Lxc-users] lxc.cgroup.memory.limit_in_bytes has no effect

2011-05-17 Thread Ulli Horlacher
On Tue 2011-05-17 (09:10), Daniel Lezcano wrote:

  Why can a container process allocate more than 1 GB of memory if there is
  a 512 MB limit?
 
 When a process reaches the memory limit, the container will 
 begin to swap. This is not really what we want, as 

Oh... no!


 In order to disable swap, you have to set 
 memory.memsw.limit_in_bytes = memory.limit_in_bytes.

Thanks! This does the trick!

-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/



Re: [Lxc-users] memory.usage_in_bytes value

2011-05-17 Thread Daniel Lezcano
On 05/17/2011 12:11 PM, David Touzeau wrote:
 Dear


 I have a Debian system running in a container.

 /cgroup/vps-1/memory.usage_in_bytes displays

 10784768

 10784768 -> about 10 MB of memory used.

 I suspect that this value does not reflect a running Debian system.

 Is that true?
Assuming you are referring to the virtual memory used by the container ...
The cgroup memory controller acts at the physical memory level. What you 
assign is the physical memory for your container.
For example, if you have 2GB of memory on your host, you can assign 512MB 
to your container and 1.5GB will be available for the rest of your system. 
But the processes will still have 4GB (on a 32-bit host) of virtual 
address space.

So what you see is the physical memory used by the container.
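
A minimal sketch (untested here) that illustrates the difference: reserving
virtual memory does not raise memory.usage_in_bytes; only the pages the
process actually touches are accounted as physical memory by the cgroup:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t size = 1024UL * 1024 * 1024;	/* 1 GB of virtual memory */
	char *addr;

	addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* memory.usage_in_bytes should be nearly unchanged here */
	getchar();

	memset(addr, 1, size);	/* touch every page */

	/* now the cgroup accounts roughly 1 GB of physical memory
	   (or starts to reclaim/swap if that exceeds the limit) */
	getchar();
	return 0;
}

Run it inside the container and read memory.usage_in_bytes from the host at
each pause.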



Re: [Lxc-users] LXC on ESXi (help)

2011-05-17 Thread Ulli Horlacher
On Tue 2011-05-17 (17:18), David Touzeau wrote:

 the host is a Virtual Machine stored on ESXi 4.0
 
 The container can ping the host, the host can ping the container.
 The issue is with other computers on the network: they cannot ping the
 container and the container cannot ping the network.

I have had the same problems.

My solution is: lxc.network.type = phys

Every container has its own (pseudo) physical ethernet interface, which
is in fact an ESX virtual interface, but Linux (LXC) sees a real ethernet
interface, therefore: lxc.network.type = phys

I have created 10 more ethernet interfaces via vSphere. This costs
virtually nothing :-)

root@zoo:/lxc# fpg network *cfg

bunny.cfg:
lxc.network.type = phys
lxc.network.link  = eth4
lxc.network.name  = eth4
lxc.network.flags = up
lxc.network.mtu = 1500
lxc.network.ipv4 = 129.69.8.7/24

flupp.cfg:
lxc.network.type = phys
lxc.network.link = eth1
lxc.network.name = eth1
lxc.network.flags = up
lxc.network.mtu = 1500
lxc.network.ipv4 = 129.69.1.219/24


vmtest1.cfg:
lxc.network.type = phys
lxc.network.link = eth2
lxc.network.name = eth2
lxc.network.flags = up
lxc.network.mtu = 1500
lxc.network.ipv4 = 129.69.1.42/24



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/



Re: [Lxc-users] LXC on ESXi (help)

2011-05-17 Thread Mauras Olivier
Hello David,

As you can see, you only force the MAC address _inside_ the container; on the
host, the MAC for the veth is out of bounds for ESX, and it doesn't seem to
like that. At least that's my guess, because I have not been able to make it
work correctly with this configuration.

The first thing to check is that your ESX vswitch has promiscuous mode
enabled - it's disabled by default.
The next thing is to use a macvlan configuration for your containers.

Here's a network config example I use successfully in my containers:

lxc.utsname = lxc1
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = br1
lxc.network.name = eth0
lxc.network.mtu = 1500
lxc.network.hwaddr = 00:50:56:3f:ff:00  # High enough MAC to not overlap with
                                        # ESX assignments - from 00 to FF gives
                                        # quite a good number of guests :)
lxc.network.ipv4 = 0.0.0.0              # I set the network inside the guest
                                        # for minimal guest modifications


I find it a bit painful to have to configure another macvlan interface on the
host to be able to communicate with the guests, so I'm assigning 2 interfaces
to the hosts - the advantage of virtualization ;) - eth0 stays for the host
network, and I set up a bridge over eth1, called br1, which is used for the
containers.

I've achieved very good network performance since I set things up this way,
and it has completely fixed the stability problems that I had with veth.
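
(For reference, a rough sketch of the extra host-side macvlan interface I
mean, using iproute2 - the interface name and address are just examples:

ip link add link eth1 name macvlan0 type macvlan mode bridge
ip addr add 192.168.1.65/24 dev macvlan0
ip link set macvlan0 up

With macvlan in bridge mode the guests can reach each other directly, but
the host can only talk to them through such an extra macvlan interface, not
through the underlying NIC itself.)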


Tell me if you need some more details.


Cheers,

Olivier



On Tue, May 17, 2011 at 5:18 PM, David Touzeau da...@touzeau.eu wrote:

 Dear

 Following the last discussion, I have tried to change the MAC address to:
 00:50:56:XX:YY:ZZ
 The thread was here:
 http://sourceforge.net/mailarchive/message.php?msg_id=27400968

 Using a veth+bridge container setup.

 the host is a Virtual Machine stored on ESXi 4.0

 The container can ping the host, the host can ping the container.
 The issue is with other computers on the network: they cannot ping the
 container and the container cannot ping the network.

 Has anybody encountered this issue?


 Here is the ifconfig of the host:

 br5   Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
        inet addr:192.168.1.64  Bcast:192.168.1.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:607044 errors:0 dropped:0 overruns:0 frame:0
        TX packets:12087 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:54131332 (51.6 MiB)  TX bytes:6350221 (6.0 MiB)

 eth1  Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
        inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
        UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
        RX packets:611474 errors:0 dropped:0 overruns:0 frame:0
        TX packets:13813 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:63127550 (60.2 MiB)  TX bytes:6638350 (6.3 MiB)
        Interrupt:18 Base address:0x2000

 vethZS6zKh Link encap:Ethernet  HWaddr 5E:AE:96:7C:4B:D7
        inet6 addr: fe80::5cae:96ff:fe7c:4bd7/64 Scope:Link
        UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
        RX packets:56 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3875 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:3756 (3.6 KiB)  TX bytes:437097 (426.8 KiB)




 container settings:

 lxc.tty = 4
 lxc.pts = 1024
 lxc.network.type = veth
 lxc.network.link = br5
 lxc.network.ipv4 = 192.168.1.72
 lxc.network.hwaddr = 00:50:56:a5:af:30
 lxc.network.name = eth0
 lxc.network.flags = up
 lxc.cgroup.memory.limit_in_bytes = 128M
 lxc.cgroup.memory.memsw.limit_in_bytes = 512M
 lxc.cgroup.cpu.shares = 1024
 lxc.cgroup.cpuset.cpus = 0



Re: [Lxc-users] LXC on ESXi (help)

2011-05-17 Thread Mauras Olivier
I tried it this way too, but there are two blocking problems with that - at
least for me:
- You can't use this feature on 2.6.32 kernels.
- You have to reboot to add a new interface to set up a new container - say
you want to add an 11th container ;)


Olivier

On Tue, May 17, 2011 at 5:36 PM, Ulli Horlacher 
frams...@rus.uni-stuttgart.de wrote:

 On Tue 2011-05-17 (17:18), David Touzeau wrote:

  the host is a Virtual Machine stored on ESXi 4.0
 
  The container can ping the host, the host can ping the container.
  The issue is with other computers on the network: they cannot ping the
  container and the container cannot ping the network.

 I have had the same problems.

 My solution is: lxc.network.type = phys

 Every container has its own (pseudo) physical ethernet interface, which
 is in fact an ESX virtual interface, but Linux (LXC) sees a real ethernet
 interface, therefore: lxc.network.type = phys

 I have created 10 more ethernet interfaces via vSphere. This costs
 virtually nothing :-)

 root@zoo:/lxc# fpg network *cfg

 bunny.cfg:
 lxc.network.type = phys
 lxc.network.link  = eth4
 lxc.network.name  = eth4
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.8.7/24

 flupp.cfg:
 lxc.network.type = phys
 lxc.network.link = eth1
 lxc.network.name = eth1
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.1.219/24


 vmtest1.cfg:
 lxc.network.type = phys
 lxc.network.link = eth2
 lxc.network.name = eth2
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.1.42/24



 --
 Ullrich Horlacher  Server- und Arbeitsplatzsysteme
 Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
 Universitaet Stuttgart Tel:++49-711-685-65868
 Allmandring 30 Fax:++49-711-682357
 70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/





Re: [Lxc-users] LXC on ESXi (help)

2011-05-17 Thread Ulli Horlacher
On Tue 2011-05-17 (17:40), Mauras Olivier wrote:

 I tried it this way too, but there are two blocking problems with that - at
 least for me:
 - You can't use this feature on 2.6.32 kernels.

I have installed 2.6.39 without problems.


 - You have to reboot to add a new interface to set up a new container - say
 you want to add an 11th container ;)

Simply pre-provision *enough* interfaces - it neither costs anything nor
does it hurt :-)



-- 
Ullrich Horlacher  Server- und Arbeitsplatzsysteme
Rechenzentrum  E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart Tel:++49-711-685-65868
Allmandring 30 Fax:++49-711-682357
70550 Stuttgart (Germany)  WWW:http://www.rus.uni-stuttgart.de/



Re: [Lxc-users] LXC on ESXi (help)

2011-05-17 Thread Miroslav Lednicky
Hello,

you can try allowing the ethernet interface to be switched to
promiscuous mode in the ESXi host configuration.

Best regards,

Miroslav.


On 17.5.2011 17:18, David Touzeau wrote:
 Dear

 Following the last discussion, I have tried to change the MAC address to:
 00:50:56:XX:YY:ZZ
 The thread was here:
 http://sourceforge.net/mailarchive/message.php?msg_id=27400968

 Using container veth+bridge

 the host is a Virtual Machine stored on ESXi 4.0

 The container can ping the host, the host can ping the container.
 The issue is with other computers on the network: they cannot ping the
 container and the container cannot ping the network.

 Has anybody encountered this issue?


 Here is the ifconfig of the host:

 br5   Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
        inet addr:192.168.1.64  Bcast:192.168.1.255  Mask:255.255.255.0
        inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:607044 errors:0 dropped:0 overruns:0 frame:0
        TX packets:12087 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:54131332 (51.6 MiB)  TX bytes:6350221 (6.0 MiB)

 eth1  Link encap:Ethernet  HWaddr 00:0C:29:AD:40:A7
        inet6 addr: fe80::20c:29ff:fead:40a7/64 Scope:Link
        UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
        RX packets:611474 errors:0 dropped:0 overruns:0 frame:0
        TX packets:13813 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:63127550 (60.2 MiB)  TX bytes:6638350 (6.3 MiB)
        Interrupt:18 Base address:0x2000

 vethZS6zKh Link encap:Ethernet  HWaddr 5E:AE:96:7C:4B:D7
        inet6 addr: fe80::5cae:96ff:fe7c:4bd7/64 Scope:Link
        UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
        RX packets:56 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3875 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:3756 (3.6 KiB)  TX bytes:437097 (426.8 KiB)




 container settings:

 lxc.tty = 4
 lxc.pts = 1024
 lxc.network.type = veth
 lxc.network.link = br5
 lxc.network.ipv4 = 192.168.1.72
 lxc.network.hwaddr = 00:50:56:a5:af:30
 lxc.network.name = eth0
 lxc.network.flags = up
 lxc.cgroup.memory.limit_in_bytes = 128M
 lxc.cgroup.memory.memsw.limit_in_bytes = 512M
 lxc.cgroup.cpu.shares = 1024
 lxc.cgroup.cpuset.cpus = 0



Re: [Lxc-users] LXC on ESXi (help)

2011-05-17 Thread David Touzeau
On Tuesday, 17 May 2011 at 17:36 +0200, Ulli Horlacher wrote:
 On Tue 2011-05-17 (17:18), David Touzeau wrote:
 
  the host is a Virtual Machine stored on ESXi 4.0
  
  The container can ping the host, the host can ping the container.
  The issue is with other computers on the network: they cannot ping the
  container and the container cannot ping the network.
 
 I have had the same problems.
 
 My solution is: lxc.network.type = phys
 
  Every container has its own (pseudo) physical ethernet interface, which
  is in fact an ESX virtual interface, but Linux (LXC) sees a real ethernet
  interface, therefore: lxc.network.type = phys
 
  I have created 10 more ethernet interfaces via vSphere. This costs
  virtually nothing :-)
 
 root@zoo:/lxc# fpg network *cfg
 
 bunny.cfg:
 lxc.network.type = phys
 lxc.network.link  = eth4
 lxc.network.name  = eth4
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.8.7/24
 
 flupp.cfg:
 lxc.network.type = phys
 lxc.network.link = eth1
 lxc.network.name = eth1
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.1.219/24
 
 
 vmtest1.cfg:
 lxc.network.type = phys
 lxc.network.link = eth2
 lxc.network.name = eth2
 lxc.network.flags = up
 lxc.network.mtu = 1500
 lxc.network.ipv4 = 129.69.1.42/24
 
 
 


Thanks Ulli,
I'm so stupid!!!
It makes sense to add as many network cards as needed directly to the
VMware Virtual Machine and not lose time creating a bridge...



