TL;DR: This works for me. Please help me understand your issue by:
1. sharing your network configuration XMLs (see the command sketch right after this list)
2. trying a recent Ubuntu image to rule out Cirros
3. sharing your guest network config for those interfaces
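A minimal sketch of the standard virsh/ip commands that would collect all of the above (guest and network names are placeholders):
$ virsh dumpxml <yourguestname>        # guest definition, incl. the <interface> elements
$ virsh net-dumpxml <yournetworkname>  # definition of each network
$ ip addr                              # guest-side view of the interfaces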
Note: Since the issue might be arch-specific, this test was done on a Power8E machine.
Trying to isolate the QEMU net part:
Vivid:
-netdev tap,fd=23,id=hostnet0
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:bd:8f:4f,bus=pci.0,addr=0x1
-netdev tap,fd=24,id=hostnet1
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=fa:16:3e:99:c7:71,bus=pci.0,addr=0x2

Xenial:
-netdev tap,fd=25,id=hostnet0
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:e9:4a:82,bus=pci.0,addr=0x1
-netdev tap,fd=27,id=hostnet1
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=fa:16:3e:b9:49:ed,bus=pci.0,addr=0x2
OK, so both releases wire this up the same way. The TL;DR of this: id=XX in -netdev maps the tap device to network XX, and the matching -device entry defines how it is represented to the guest.
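If you want to compare the same on your side: libvirt writes the full generated command line into the guest log, so something along these lines (the log path is libvirt's default on Ubuntu; the guest name is a placeholder) extracts just the net-related bits:
$ grep -oE -- '-netdev [^ ]+|-device virtio-net[^ ]+' /var/log/libvirt/qemu/<yourguest>.log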
It would be really useful to see the definition of those networks in your case, generated via:
$ virsh net-dumpxml <yournetworkname>
$ virsh net-dumpxml <yournetworkname2>
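In case the exact network names are unclear, they can be listed first:
$ virsh net-list --all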
I'll try to reproduce anyway - all I know from the bug report is one IPv4 network and one IPv6 network.
A default would look like:
-netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:6f:13:c8,bus=pci.0,addr=0x1
I also see that you have no vhost acceleration enabled - neither recommended nor the default IMHO, but OK, I can run without it as well.
Random MAC, normal virtio PCI representation - nothing special in these bits.
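(If unsure whether vhost is even available on your host, a quick sanity check is whether the kernel module is loaded - purely informational here:)
$ lsmod | grep vhost_net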
You did not state what you do with these networks, so for now I just bridge them to the host. Once I see your network config XML this will be clearer.
I set up the following networks:
 Name            State    Autostart   Persistent
 hostnet0-ipv4   active   no          yes
 hostnet1-ipv6   active   no          yes
The definitions are basic, but follow your description:
$ cat hostnet0-ipv4.xml
<network>
  <name>hostnet0-ipv4</name>
  <bridge name='hostnet0-ipv4' stp='on' delay='0'/>
  <ip address='10.0.21.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.21.2' end='10.0.21.254'/>
    </dhcp>
  </ip>
</network>
$ cat hostnet1-ipv6.xml
<network>
  <name>hostnet1-ipv6</name>
  <bridge name='hostnet1-ipv6' stp='on' delay='0'/>
  <ip family="ipv6" address="2001:db8:ca2:2::1" prefix="64">
    <dhcp>
      <range start="2001:db8:ca2:2:1::10" end="2001:db8:ca2:2:1::ff"/>
    </dhcp>
  </ip>
</network>
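For reference, the standard virsh way to define and start networks from such XML files is:
$ virsh net-define hostnet0-ipv4.xml && virsh net-start hostnet0-ipv4
$ virsh net-define hostnet1-ipv6.xml && virsh net-start hostnet1-ipv6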
I added the following to my guest to link it up to those networks in non-vhost mode (driver 'qemu' is set explicitly, since vhost would be the default if available):
<interface type='network'>
  <source network='hostnet0-ipv4'/>
  <model type='virtio'/>
  <driver name='qemu'/>
</interface>
<interface type='network'>
  <source network='hostnet1-ipv6'/>
  <model type='virtio'/>
  <driver name='qemu'/>
</interface>
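To apply such snippets to an existing guest, editing its definition in place and adding them under <devices> works (the guest name is a placeholder):
$ virsh edit <yourguestname>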
Host view:
$ for dev in hostnet0-ipv4 hostnet0pv4-nic hostnet1-ipv6 hostnet1pv6-nic; do ip addr show dev $dev; done
101: hostnet0-ipv4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:98:33:21 brd ff:ff:ff:ff:ff:ff
    inet 10.0.21.1/24 brd 10.0.21.255 scope global hostnet0-ipv4
       valid_lft forever preferred_lft forever
102: hostnet0pv4-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master hostnet0-ipv4 state DOWN group default qlen 1000
    link/ether 52:54:00:98:33:21 brd ff:ff:ff:ff:ff:ff
103: hostnet1-ipv6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:89:77:1e brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:ca2:2::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe89:771e/64 scope link
       valid_lft forever preferred_lft forever
104: hostnet1pv6-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master hostnet1-ipv6 state DOWN group default qlen 1000
    link/ether 52:54:00:89:77:1e brd ff:ff:ff:ff:ff:ff
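Once the guest runs, which tap devices ended up enslaved to each bridge can be verified with (the vnetX names libvirt assigns will show up here):
$ ip link show master hostnet0-ipv4
$ ip link show master hostnet1-ipv6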
Then I set up the guest so that it actually configures those interfaces:
$ cat /etc/network/interfaces.d/51-extra-devs.cfg
auto enp0s7
iface enp0s7 inet dhcp
auto enp0s8
iface enp0s8 inet6 dhcp
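With classic ifupdown (as used in these releases) the new interfaces can then be brought up without a reboot:
$ sudo ifup enp0s7
$ sudo ifup enp0s8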
It would be nice if you could share your config in /etc/network/* and its subdirs, as well as the output of:
$ ip addr
$ ip link
In my case the first network got IPv4 (plus the usual link-local IPv6 address) and the second one got IPv6 only.
enp0s7    Link encap:Ethernet  HWaddr 52:54:00:2d:59:f7
          inet addr:10.0.21.184  Bcast:10.0.21.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe2d:59f7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:340 errors:0 dropped:325 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18600 (18.6 KB)  TX bytes:1304 (1.3 KB)

enp0s8    Link encap:Ethernet  HWaddr 52:54:00:28:1e:65
          inet6 addr: fe80::5054:ff:fe28:1e65/64 Scope:Link
          inet6 addr: 2001:db8:ca2:2:1::25/128 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:341 errors:0 dropped:326 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18601 (18.6 KB)  TX bytes:1166 (1.1 KB)
To map those guest interfaces back to the networks:
enp0s7 = hostnet0-ipv4
enp0s8 = hostnet1-ipv6
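That mapping can be double-checked by comparing the guest MACs against what libvirt reports per interface (the guest name is a placeholder):
$ virsh domiflist <yourguestname>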
IPv4 routes look sane:
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.122.1   0.0.0.0         UG    0      0        0 enp0s1
10.0.21.0       0.0.0.0         255.255.255.0   U     0      0        0 enp0s7
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 enp0s1
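The IPv6 routes can be inspected the same way if needed:
$ ip -6 route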
And the host is reachable:
$ ping -c 4 10.0.21.1
PING 10.0.21.1 (10.0.21.1) 56(84) bytes of data.
64 bytes from 10.0.21.1: icmp_seq=1 ttl=64 time=0.424 ms
[...]
Same for IPv6:
$ ping6 2001:db8:ca2:2::1
PING 2001:db8:ca2:2::1(2001:db8:ca2:2::1) 56 data bytes
64 bytes from 2001:db8:ca2:2::1: icmp_seq=1 ttl=64 time=0.265 ms
The same is true for host-to-guest communication.
Note: The call to QEMU is more or less the same as yours (no vhost, for example):
-netdev tap,fd=28,id=hostnet1
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:2d:59:f7,bus=pci.0,addr=0x7
-netdev tap,fd=29,id=hostnet2
-device virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:28:1e:65,bus=pci.0,addr=0x8
Summarizing:
1. IPv4 DHCP + connectivity working
2. IPv6 DHCPv6 + connectivity working
=> we need to find what is different in your setup.
Could it be related to your Cirros guest (or its config)? I haven't seen Cirros in quite a while - could you try a recent Ubuntu cloud image, e.g. one from https://cloud-images.ubuntu.com/xenial/current/ ?
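If it helps, a quick way to spin up such a test guest on Ubuntu is uvtool (assuming the uvtool-libvirt package is installed; adjust arch as needed, e.g. ppc64el in my case; the guest name is a placeholder):
$ uvt-simplestreams-libvirt sync release=xenial arch=ppc64el
$ uvt-kvm create testguest release=xenial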
** Changed in: qemu-kvm (Ubuntu)
Status: New => Incomplete
https://bugs.launchpad.net/bugs/1655161
Title:
VM boots with no connectivity when booting with dual net configuration
(ipv4 + ipv6)