On Mon, May 9, 2011 at 14:52, Serge Hallyn <[email protected]> wrote:
>
> Thanks for your response.  Before scripting it, let's try manually first:
>
>   devs=`ls /sys/class/net/veth*`
>   ip link add type veth
>   newdevs=`ls /sys/class/net/veth*`
>   # Get the intersection of $devs and $newdevs

I assume you mean "difference" instead of "intersection", since the first
execution of ls gives an empty output, and the purpose of this is obtaining
the new devices, right?

  host# ls /sys/class/net/
  eth0  eth1  lo  br0
  host# ip link add type veth
  host# ls /sys/class/net/
  eth0  eth1  lo  br0  veth0  veth1
  host# _

> # Attach $dev1 to your bridge

Assuming $dev1 is the first of the new devices:

  host# brctl addif br0 veth0
  host# _

> lxc-start -n mycontainer
> # mycontainer has no network

After this, the container sees the same interfaces as the host and it does
have connectivity to the outside:

  host# cat testimg01.conf
  lxc.tty = 4
  lxc.pivotdir = .pivot
  lxc.arch=x86
  lxc.utsname=testimg01
  lxc.console=/tmp/lxc-testimg01-console.log
  lxc.rootfs=/root/lxc/nfsroot
  lxc.mount.entry=proc /root/lxc/nfsroot/proc proc defaults 0 0
  lxc.mount.entry=sys /root/lxc/nfsroot/sys sysfs defaults 0 0
  lxc.mount.entry=devpts /root/lxc/nfsroot/dev/pts devpts defaults 0 0
  lxc.mount.entry=varlock /root/lxc/nfsroot/var/lock tmpfs defaults 0 0
  lxc.mount.entry=tmp /root/lxc/nfsroot/tmp tmpfs mode=1777 0 0
  lxc.cgroup.devices.deny = a
  lxc.cgroup.devices.allow = c 1:3 rwm
  lxc.cgroup.devices.allow = c 1:5 rwm
  lxc.cgroup.devices.allow = c 4:0 rwm
  lxc.cgroup.devices.allow = c 4:1 rwm
  lxc.cgroup.devices.allow = c 5:0 rwm
  lxc.cgroup.devices.allow = c 5:1 rwm
  lxc.cgroup.devices.allow = c 1:8 rwm
  lxc.cgroup.devices.allow = c 1:9 rwm
  lxc.cgroup.devices.allow = c 5:2 rwm
  lxc.cgroup.devices.allow = c 136:* rwm
  lxc.cgroup.devices.allow = c 254:0 rm
  host# lxc-start -f testimg01.conf -n testimg01 -l DEBUG -o /tmp/lxc-testimg01.log
  _
  container# ip link show |grep ^[0-9]
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
  3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
  4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
  6: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
  7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
  container# telnet 172.20.64.20 22   ## outside node
  Trying 172.20.64.20...
  Connected to 172.20.64.20.
  Escape character is '^]'.
  SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4
  _

> # get PID as the init pid of mycontainer
> ip link set $dev2 netns $PID

  host# pgrep init
  1
  4809
  host# ip link set veth1 netns 4809
  host# _

> # now from your mycontainer console, configure $dev2 which is now in the container
> # you can rename it to eth0 in the container as
> ip link set $dev2 name eth0

Since eth0 exists inside the container, renaming veth1 returns an error:

  container# ip link set veth1 name eth0
  RTNETLINK answers: File exists

Am I doing something wrong?

-- 
David Serrano

_______________________________________________
Lxc-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/lxc-users
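
P.S. For what it's worth, the "difference of $devs and $newdevs" step can be
done with sort/uniq. A minimal sketch, using illustrative device names
(veth2/veth3 stand in for whatever `ip link add type veth` actually created):

```shell
# Illustrative lists; in the real sequence these would come from
# `ls /sys/class/net/veth*` before and after `ip link add type veth`.
devs="veth0 veth1"
newdevs="veth0 veth1 veth2 veth3"

# Set difference: `uniq -u` keeps only lines that occur once, so any
# name present in both lists is dropped and only the new devices remain.
added=$(printf '%s\n' $devs $newdevs | sort | uniq -u)
echo "$added"    # veth2 and veth3, one per line
```

This assumes no pre-existing veth device disappeared between the two
listings; otherwise it would show up in the difference too.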
