[Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Andreas Laut
Dear list,

We are using lxc 0.8 on Debian Wheezy (the official Debian package).
We wanted to start an LXC container with more than one IP address and
ran into strange behavior.

When the container starts, some IPs are reachable and some are not. If
we shut the container down and boot it again, a different set of IPs is
reachable. There seems to be no logic behind it, and after a while all
IPs become reachable.

If we use only one IP per container, everything is fine.

Has anyone else run into this problem? All help and ideas are
appreciated.

Regards,
Andreas

--
October Webinars: Code for Performance
Free Intel webinars can help you accelerate application performance.
Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from 
the latest Intel processors and coprocessors. See abstracts and register 
http://pubads.g.doubleclick.net/gampad/clk?id=60134071iu=/4140/ostg.clktrk
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users


Re: [Lxc-users] [Spam-Wahrscheinlichkeit=45] Problem with lxc and multiple ips

2013-10-11 Thread Jäkel, Guido
Dear Andreas,

please clarify what you mean by "start a lxc with multiple IPs" and by the 
line "If we are using only one IP for LXC, all is fine": What kind of network 
setup do you use? Is it, for example, a bridge on the LXC host with veth 
interfaces in the containers? 

A guess might be that you have a MAC address clash; did you override 
lxc.network.hwaddr? 
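For illustration, pinning a fixed, unique MAC per interface in the container config would look like the sketch below (the 00:16:3e addresses are made-up example values, not taken from Andreas's setup):

```
lxc.network.type   = veth
lxc.network.link   = br0
lxc.network.hwaddr = 00:16:3e:aa:bb:01

lxc.network.type   = veth
lxc.network.link   = br0
lxc.network.hwaddr = 00:16:3e:aa:bb:02
```

Each veth section then keeps a stable MAC across container restarts, so switches and ARP caches never see the same address appear on two ports.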

Guido

-Original Message-
From: Andreas Laut [mailto:andreas.l...@spark5.de]
Sent: Friday, October 11, 2013 8:53 AM
To: lxc-users@lists.sourceforge.net
Subject: [Spam-Wahrscheinlichkeit=45][Lxc-users] Problem with lxc and 
multiple ips

Dear list,

We are using lxc 0.8 on Debian Wheezy (the official Debian package).
We wanted to start an LXC container with more than one IP address and
ran into strange behavior.

When the container starts, some IPs are reachable and some are not. If
we shut the container down and boot it again, a different set of IPs is
reachable. There seems to be no logic behind it, and after a while all
IPs become reachable.

If we use only one IP per container, everything is fine.

Has anyone else run into this problem? All help and ideas are
appreciated.



Re: [Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Andreas Laut
Ok, sorry. You're right.

We are using a bridge named br0 bound to eth0 on the LXC host. The 
containers use veth, but the problem also happens with type macvlan; 
no change at all.

We also tried setting hwaddr explicitly.
We are doing further research and hope to find a way to reproduce this 
for you.

Andreas


On 11.10.2013 09:41, Jäkel, Guido wrote:
 Dear Andreas,

 please clarify what you mean by "start a lxc with multiple IPs" and by the 
 line "If we are using only one IP for LXC, all is fine": What kind of network 
 setup do you use? Is it, for example, a bridge on the LXC host with veth 
 interfaces in the containers?

 A guess might be that you have a MAC address clash; did you override 
 lxc.network.hwaddr?

 Guido

 -Original Message-
 From: Andreas Laut [mailto:andreas.l...@spark5.de]
 Sent: Friday, October 11, 2013 8:53 AM
 To: lxc-users@lists.sourceforge.net
 Subject: [Spam-Wahrscheinlichkeit=45][Lxc-users] Problem with lxc and 
 multiple ips

 Dear list,

 We are using lxc 0.8 on Debian Wheezy (the official Debian package).
 We wanted to start an LXC container with more than one IP address and
 ran into strange behavior.

 When the container starts, some IPs are reachable and some are not. If
 we shut the container down and boot it again, a different set of IPs is
 reachable. There seems to be no logic behind it, and after a while all
 IPs become reachable.

 If we use only one IP per container, everything is fine.

 Has anyone else run into this problem? All help and ideas are
 appreciated.




Re: [Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Tamas Papp

On 10/11/2013 10:40 AM, Andreas Laut wrote:
 Ok, sorry. You're right.

 We are using a bridge named br0 bound to eth0 on the LXC host. The 
 containers use veth, but the problem also happens with type macvlan; 
 no change at all.

 We also tried setting hwaddr explicitly.
 We are doing further research and hope to find a way to reproduce this 
 for you.

Are you able to reproduce that against a recent nightly build?

tamas



Re: [Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Jäkel, Guido
Dear Andreas,

Thank you for the clarification. But now I have to ask what exactly is not 
working:

* Are you trying to use more than one container with one IP each, or one 
container with more than one IP?
* Are you using the same subnets?
* From which location can't you reach the IP? Does that station at least 
list the right MAC in its ARP table?
* Can you reach the LXC host from there? Can you reach others from the host? 
Can you reach the host or others from the containers?
* Does your host have an IP (on the bridge)? Is STP enabled, and are the 
forward delay and hello time set to appropriately low values?
* Is the host connected to a switched network? What did you observe there 
with respect to the MACs/IPs in use?
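The ARP/MAC checks above can be sketched as shell commands (brctl and ip are the standard bridge-utils/iproute2 tools; the find_dup_macs helper is a hypothetical convenience, not something from the thread):

```shell
# On the LXC host: list bridge members and STP timers.
#   brctl show br0
#   brctl showstp br0
# On the station that cannot reach a container IP: check its ARP cache.
#   ip neigh show

# Hypothetical helper: given saved `ip -o link` output, print any MAC
# address that appears more than once, i.e. a clash like the one
# suspected here.
find_dup_macs() {
  awk '{ for (i = 1; i <= NF; i++) if ($i == "link/ether") print $(i+1) }' "$@" |
    sort | uniq -d
}
```

Running the helper over a capture taken on the host would immediately confirm or rule out the duplicate-MAC theory.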


Greetings

Guido

-Original Message-
From: Andreas Laut [mailto:andreas.l...@spark5.de]
Sent: Friday, October 11, 2013 10:41 AM
To: Jäkel, Guido; 'lxc-users@lists.sourceforge.net'
Subject: Re: [Lxc-users] Problem with lxc and multiple ips

Ok, sorry. You're right.

We are using a bridge named br0 bound to eth0 on the LXC host. The
containers use veth, but the problem also happens with type macvlan;
no change at all.

We also tried setting hwaddr explicitly.
We are doing further research and hope to find a way to reproduce this
for you.

Andreas


On 11.10.2013 09:41, Jäkel, Guido wrote:
 Dear Andreas,

 please clarify what you mean by "start a lxc with multiple IPs" and by the 
 line "If we are using only one IP for LXC, all is fine": What kind of network 
 setup do you use? Is it, for example, a bridge on the LXC host with veth 
 interfaces in the containers?

 A guess might be that you have a MAC address clash; did you override 
 lxc.network.hwaddr?

 Guido

 -Original Message-
 From: Andreas Laut [mailto:andreas.l...@spark5.de]
 Sent: Friday, October 11, 2013 8:53 AM
 To: lxc-users@lists.sourceforge.net
 Subject: [Spam-Wahrscheinlichkeit=45][Lxc-users] Problem with lxc and 
 multiple ips

 Dear list,

 We are using lxc 0.8 on Debian Wheezy (the official Debian package).
 We wanted to start an LXC container with more than one IP address and
 ran into strange behavior.

 When the container starts, some IPs are reachable and some are not. If
 we shut the container down and boot it again, a different set of IPs is
 reachable. There seems to be no logic behind it, and after a while all
 IPs become reachable.

 If we use only one IP per container, everything is fine.

 Has anyone else run into this problem? All help and ideas are
 appreciated.




Re: [Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Andreas Laut

Hi,

I actually can't get the LXC nightly compiled on Debian right now; 
configure has problems with pkg-config/python3-dev (the pkg-config 
and python3-dev packages are installed) at configure line 5588.

I tried lxc 0.9 from a tarball instead and got the same problem.

Our LXC config is attached; maybe it helps.
Our LXC host bridge is configured like this:

auto br0
iface br0 inet static
bridge_ports eth4
bridge_stp off
address 10.5.255.80
netmask 255.255.0.0
gateway 10.5.255.252


Andreas

On 11.10.2013 10:45, Tamas Papp wrote:

On 10/11/2013 10:40 AM, Andreas Laut wrote:

Ok, sorry. You're right.

We are using a bridge named br0 bound to eth0 on the LXC host. The
containers use veth, but the problem also happens with type macvlan;
no change at all.

We also tried setting hwaddr explicitly.
We are doing further research and hope to find a way to reproduce this
for you.

Are you able to reproduce that against a recent nightly build?

tamas



# /var/lib/lxc/lxc-container/config

## Container
lxc.utsname = lxc-container

lxc.network.type= veth
lxc.network.flags   = up
lxc.network.link= br0
lxc.network.ipv4= 10.05.225.10/16

lxc.network.type= veth
lxc.network.flags   = up
lxc.network.link= br0
lxc.network.ipv4= 10.05.225.11/16

lxc.network.type= veth
lxc.network.flags   = up
lxc.network.link= br0
lxc.network.ipv4= 10.05.100.12/16

lxc.network.type= veth
lxc.network.flags   = up
lxc.network.link= br0
lxc.network.ipv4= 10.05.100.13/16

lxc.network.type= veth
lxc.network.flags   = up
lxc.network.link= br0
lxc.network.ipv4= 10.05.225.14/16

lxc.network.ipv4.gateway= 10.05.255.252

lxc.rootfs  = /var/lib/lxc/lxc-container/rootfs
lxc.arch= x86_64
#lxc.console= /var/log/lxc/lxc-container.console
lxc.tty = 5
lxc.pts = 1024

## Capabilities
lxc.cap.drop= mac_admin
lxc.cap.drop= mac_override
lxc.cap.drop= sys_admin
lxc.cap.drop= sys_module
lxc.cap.drop= sys_rawio
## Devices
# Allow all devices
#lxc.cgroup.devices.allow   = a
# Deny all devices
lxc.cgroup.devices.deny = a
# Allow mknod for all devices (but not using them)
lxc.cgroup.devices.allow= c *:* m
lxc.cgroup.devices.allow= b *:* m

# /dev/console
lxc.cgroup.devices.allow= c 5:1 rwm
# /dev/fuse
lxc.cgroup.devices.allow= c 10:229 rwm
# /dev/null
lxc.cgroup.devices.allow= c 1:3 rwm
# /dev/ptmx
lxc.cgroup.devices.allow= c 5:2 rwm
# /dev/pts/*
lxc.cgroup.devices.allow= c 136:* rwm
# /dev/random
lxc.cgroup.devices.allow= c 1:8 rwm
# /dev/rtc
lxc.cgroup.devices.allow= c 254:0 rwm
# /dev/tty
lxc.cgroup.devices.allow= c 5:0 rwm
# /dev/urandom
lxc.cgroup.devices.allow= c 1:9 rwm
# /dev/zero
lxc.cgroup.devices.allow= c 1:5 rwm

## Limits
#lxc.cgroup.cpu.shares  = 1024
#lxc.cgroup.cpuset.cpus = 0
#lxc.cgroup.memory.limit_in_bytes   = 4G
#lxc.cgroup.memory.memsw.limit_in_bytes = 1G
#lxc.cgroup.blkio.weight= 500

## Filesystem
lxc.mount.entry = proc 
/var/lib/lxc/lxc-container/rootfs/proc proc nodev,noexec,nosuid,ro 0 0
lxc.mount.entry = sysfs 
/var/lib/lxc/lxc-container/rootfs/sys sysfs defaults,ro 0 0

Re: [Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Andreas Laut
Sorry, I found the mistake in my LXC config myself; I need to do further 
tests.


On 11.10.2013 14:42, Andreas Laut wrote:

Hi,

I actually can't get the LXC nightly compiled on Debian right now; 
configure has problems with pkg-config/python3-dev (the pkg-config 
and python3-dev packages are installed) at configure line 5588.

I tried lxc 0.9 from a tarball instead and got the same problem.

Our LXC config is attached; maybe it helps.
Our LXC host bridge is configured like this:

auto br0
iface br0 inet static
bridge_ports eth4
bridge_stp off
address 10.5.255.80
netmask 255.255.0.0
gateway 10.5.255.252


Andreas

On 11.10.2013 10:45, Tamas Papp wrote:

On 10/11/2013 10:40 AM, Andreas Laut wrote:

Ok, sorry. You're right.

We are using a bridge named br0 bound to eth0 on the LXC host. The
containers use veth, but the problem also happens with type macvlan;
no change at all.

We also tried setting hwaddr explicitly.
We are doing further research and hope to find a way to reproduce this
for you.

Are you able to reproduce that against a recent nightly build?

tamas









Re: [Lxc-users] Problem with lxc and multiple ips

2013-10-11 Thread Tamas Papp

On 10/11/2013 02:42 PM, Andreas Laut wrote:
 Hi,

 I actually can't get the LXC nightly compiled on Debian right now;
 configure has problems with pkg-config/python3-dev (the pkg-config
 and python3-dev packages are installed) at configure line 5588.

 I tried lxc 0.9 from a tarball instead and got the same problem.

 Our LXC config is attached; maybe it helps.
 Our LXC host bridge is configured like this:

 auto br0
 iface br0 inet static
 bridge_ports eth4
 bridge_stp off
 address 10.5.255.80
 netmask 255.255.0.0
 gateway 10.5.255.252

I used to add

bridge_fd 0
bridge_maxwait 0


to the bridge config.
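Merged with the bridge stanza Andreas posted, the full /etc/network/interfaces entry would then read (an untested sketch using only values from this thread):

```
auto br0
iface br0 inet static
    bridge_ports   eth4
    bridge_stp     off
    bridge_fd      0
    bridge_maxwait 0
    address        10.5.255.80
    netmask        255.255.0.0
    gateway        10.5.255.252
```

With STP off, bridge_fd 0 removes the forwarding delay, so a freshly attached veth passes traffic immediately instead of sitting in the listening/learning states.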

tamas



Re: [Lxc-users] lxc-centos/lxc-rhel?

2013-10-11 Thread Dwight Engen
On Thu, 10 Oct 2013 21:58:58 +0200
Tamas Papp tom...@martos.bme.hu wrote:

 On 10/10/2013 08:56 PM, Dwight Engen wrote:
 Hmm, not sure what the issue could be. I would start by running ssh
 -vv against the container and seeing where it gets stuck.
 
 On the server:

[...]

 
 It shows nothing useful to me.

I agree that wasn't too helpful, but it shows there is nothing going
wrong in the key exchange / authentication.
 
 There is an strace log as well.
 This fork cycle keeps repeating:
 
[...]

Hmm, so for some reason /usr/bin/id -gn is being invoked over and over
again? Do you have something in your login scripts that might do this?
(e.g. a quick Google search brought up
http://stackoverflow.com/questions/5929552/ssh-command-execution-hangs-although-interactive-shell-functions-fine).
Not sure where sshd is without seeing the earlier part of the strace.
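A quick way to hunt for such a call is to grep the usual shell init files (the helper and the file list below are illustrative, not from Dwight's message):

```shell
# Hypothetical helper: report every line in the given init scripts
# that invokes `id -gn`, which would explain the repeated forks.
find_id_calls() {
  grep -Hn 'id -gn' "$@" 2>/dev/null
}

# Typical places to look on the container's rootfs:
#   find_id_calls /etc/profile /etc/profile.d/*.sh /etc/bashrc \
#                 /root/.bashrc /root/.bash_profile /root/.profile
```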
