Hi,

... news about my IPv4 container on my IPv6-only host:

2012/6/9 Fajar A. Nugraha <l...@fajar.net>

> The default containers created from templates uses veth and bridged
> networking. *If setup correctly*, that would mean the host (main system,
> as you call it) behaves pretty much similar to an L2 switch. Which
> means that there's no requirement that the host should be connected
> (IP-wise) to the guest. They only need to be connected on ethernet
> level.



Now trying to set it up correctly ;)
Sorry to bother you again, but I couldn't make it work...
Could someone help?



*My problem:*
I *can't ping my gateway* 91.121.99.254 from my container 91.121.99.167.
On the host I run tcpdump to capture the container's pings, but nothing
interesting shows up.
I also tried adding a specific host route to 91.121.99.254 inside the
container; the command succeeds, but the ping still fails.

# ping 91.121.99.254
PING 91.121.99.254 (91.121.99.254) 56(84) bytes of data.
From 91.121.99.167 icmp_seq=1 Destination Host Unreachable
From 91.121.99.167 icmp_seq=2 Destination Host Unreachable
From 91.121.99.167 icmp_seq=3 Destination Host Unreachable

# route add -host 91.121.99.254 eth0
# ping 91.121.99.254
PING 91.121.99.254 (91.121.99.254) 56(84) bytes of data.
From 91.121.99.167 icmp_seq=1 Destination Host Unreachable
From 91.121.99.167 icmp_seq=2 Destination Host Unreachable
From 91.121.99.167 icmp_seq=3 Destination Host Unreachable
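For what it's worth, my understanding (hedged, in case I'm misreading the output): when ping prints "Destination Host Unreachable" with the *sender's own* address after "From", the error was generated locally because neighbour (ARP) resolution for the target failed; a router's address there would mean a routed ICMP error instead. A tiny sketch of that distinction, using the line from my transcript above:

```shell
# Classify a ping error line: a local ARP failure reports our own address,
# a routed ICMP error reports the router's address.
line='From 91.121.99.167 icmp_seq=1 Destination Host Unreachable'
our_ip='91.121.99.167'
case "$line" in
  "From $our_ip "*) echo "local failure: ARP for the target got no reply" ;;
  *)                echo "reported by a remote router" ;;
esac
```

So the next thing I would check (needs root) is whether the container's ARP requests show up at all with `tcpdump -ni br0 arp` on the host.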



*Host configuration:* cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto br0
iface br0 inet6 static
  bridge_ports eth0
  bridge_fd 0
  address 2001:41d0:1:98a7::1
  netmask 64
  gateway 2001:41d0:1:98FF:FF:FF:FF:FF



*Container configuration:* grep network config
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.hwaddr = 00:1C:C0:17:8B:44
lxc.network.ipv4 = 91.121.99.167/24
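Side note on the config above: it sets no IPv4 gateway for the container. If the LXC version in use supports it (hedged, I haven't verified this on mine; check the lxc.conf man page), the gateway could be declared right next to the address:

```
lxc.network.ipv4 = 91.121.99.167/24
lxc.network.ipv4.gateway = 91.121.99.254
```

That would only matter for off-subnet traffic, though; the gateway itself is on the /24, so pinging it should not need any extra route.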



*Host ifconfig:*
br0       Link encap:Ethernet  HWaddr 00:1c:c0:17:8b:44
          inet6 addr: 2001:41d0:1:98a7::1/64 Scope:Global
          inet6 addr: fe80::21c:c0ff:fe17:8b44/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1429 errors:0 dropped:0 overruns:0 frame:0
          TX packets:260 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:179867 (175.6 KiB)  TX bytes:35854 (35.0 KiB)

eth0      Link encap:Ethernet  HWaddr 00:1c:c0:17:8b:44
          inet6 addr: fe80::21c:c0ff:fe17:8b44/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1750 errors:0 dropped:0 overruns:0 frame:0
          TX packets:268 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:223036 (217.8 KiB)  TX bytes:36446 (35.5 KiB)
          Interrupt:19 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:198 errors:0 dropped:0 overruns:0 frame:0
          TX packets:198 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14588 (14.2 KiB)  TX bytes:14588 (14.2 KiB)

veth8c2b2U Link encap:Ethernet  HWaddr e2:1f:1e:68:81:31
          inet6 addr: fe80::e01f:1eff:fe68:8131/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:261 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:216 (216.0 B)  TX bytes:17972 (17.5 KiB)



*Container ifconfig:*
eth0      Link encap:Ethernet  HWaddr 00:1c:c0:17:8b:44
          inet addr:91.121.99.167  Bcast:91.121.99.0  Mask:255.255.255.0
          inet6 addr: fe80::21c:c0ff:fe17:8b44/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:263 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18212 (17.7 KiB)  TX bytes:216 (216.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
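One detail that caught my eye in the container's eth0 output above: it shows Bcast:91.121.99.0, but for 91.121.99.167/24 the directed broadcast should be 91.121.99.255 (.0 is the network address). A quick sanity check of that arithmetic, assuming python3 is available (this is only my calculation, not anything LXC-specific):

```shell
# Expected broadcast address for 91.121.99.167/24
python3 -c "import ipaddress; print(ipaddress.ip_interface('91.121.99.167/24').network.broadcast_address)"
# prints 91.121.99.255
```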



*Host route -n:*
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
(empty: the host has no IPv4 routes)
*Container route -n:*
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
91.121.99.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0




Thank you for reading this far!
seb



2012/6/9 Fajar A. Nugraha <l...@fajar.net>

> On Sat, Jun 9, 2012 at 7:39 PM, Sébastien Montagne
> <sebastien.monta...@gmail.com> wrote:
> > Hi dears,
> >
> > do you think it would be easy/hard/not possible
> > to setup a container with an IPv4 address (optionnaly with an IPv6
> address
> > as well)
> > in a IPv6-only (i.e. without an IPv4 address) main system ?
>
> Should be easy.
>
> The default containers created from templates uses veth and bridged
> networking. If setup correctly, that would mean the host (main system,
> as you call it) behaves pretty much similar to an L2 switch. Which
> means that there's no requirement that the host should be connected
> (IP-wise) to the guest. They only need to be connected on ethernet
> level.
>
> --
> Fajar
>
_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
