Re: issue when not using acpi indices in libvirt 7.4.0 and qemu 6.0.0

2021-06-25 Thread Riccardo Ravaioli
On Thu, 24 Jun 2021 at 04:11, Laine Stump  wrote:

> [...]
>

Hi Laine,

Thank you so much for your analysis and thoughtful insights. As you noticed
straight away, there were indeed some minor differences in the two VM
definitions that I didn't spot before posting. In the end, though, they had
no bearing on the interface naming.

Anyway, we found out that the problem actually lay in qemu 6.0.0, which was
promptly patched by its maintainers after we contacted them:
https://lists.nongnu.org/archive/html/qemu-stable/2021-06/msg00058.html

Thanks again!

Riccardo


Re: issue when not using acpi indices in libvirt 7.4.0 and qemu 6.0.0

2021-06-23 Thread Riccardo Ravaioli
On Wed, 23 Jun 2021 at 18:59, Daniel P. Berrangé wrote:

> [...]
> So your config here does NOT list any ACPI indexes
>

Exactly, I don't list any ACPI indices.


> > After upgrading to libvirt 7.4.0 and qemu 6.0.0, the XML snippet above
> > yielded:
> > - ens1 for the first virtio interface => OK
> > - rename4 for the second virtio interface => **KO**
> > - ens3 for the PCI passthrough interface  => OK
>
> So from libvirt's POV, nothing should have changed upon upgrade,
> as we wouldn't be setting any ACPI indexes by default.
>
> Can you show the QEMU command line from /var/log/libvirt/qemu/$GUEST.log
> both before and after the libvirt upgrade.
>

Sure, here it is before the upgrade: https://pastebin.com/ZzKd2uRJ
And here after the upgrade: https://pastebin.com/EMu6Jgat
(there is a minor difference in the disks which shouldn't be related to
this issue)

Thanks!

Riccardo


issue when not using acpi indices in libvirt 7.4.0 and qemu 6.0.0

2021-06-23 Thread Riccardo Ravaioli
Hi everyone,

We have an issue with how network interfaces are presented in the VM with
the latest libvirt 7.4.0 and qemu 6.0.0.

Previously, we were on libvirt 7.0.0 and qemu 5.2.0, and we used increasing
virtual PCI addresses for any type of network interface (virtio, PCI
passthrough, SRIOV) in order to decide the interface order inside the VM.
For instance the following snippet yields ens1, ens2 and ens3 in a Debian
Buster VM:

  [The XML snippet was stripped by the list archive. It defined two virtio
  <interface> elements and one <hostdev> PCI-passthrough device, each with an
  explicit <address type='pci'/> on bus 0x01, in slots 0x01, 0x02 and 0x03
  respectively — the same layout visible in the guest's lspci tree further
  below.]
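
  Since the original snippet is gone, here is a minimal sketch of the kind of
  definition it contained. The MAC addresses, the source network and the
  host-side <hostdev> source address are placeholders, not the original
  values; the guest-side PCI addresses are the ones visible in the VM's lspci
  tree further below.

    <interface type='network'>
      <mac address='52:54:00:aa:bb:01'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:aa:bb:02'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0'/>
    </interface>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x03' function='0x0'/>
    </hostdev>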

After upgrading to libvirt 7.4.0 and qemu 6.0.0, the XML snippet above
yielded:
- ens1 for the first virtio interface => OK
- rename4 for the second virtio interface => **KO**
- ens3 for the PCI passthrough interface  => OK

Argh! What happened to ens2? By querying udev inside the VM, I can see that
"rename4" is the result of a conflict between the ID_NET_NAME_SLOT of the
second and third interfaces, which both appear as ID_NET_NAME_SLOT=ens3. In
theory, rename4 should have ID_NET_NAME_SLOT=ens2. What happened?

#  udevadm info -q all /sys/class/net/rename4
P: /devices/pci0000:00/0000:00:03.0/0000:01:02.0/virtio4/net/rename4
L: 0
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:01:02.0/virtio4/net/rename4
E: INTERFACE=rename4
E: IFINDEX=4
E: SUBSYSTEM=net
E: USEC_INITIALIZED=94191911
E: ID_NET_NAMING_SCHEME=v240
E: ID_NET_NAME_MAC=enx525400aabba1
E: ID_NET_NAME_PATH=enp1s2
E: ID_NET_NAME_SLOT=ens3
E: ID_BUS=pci
E: ID_VENDOR_ID=0x1af4
E: ID_MODEL_ID=0x1000
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Red Hat, Inc.
E: ID_MODEL_FROM_DATABASE=Virtio network device
E: ID_PATH=pci-0000:01:02.0
E: ID_PATH_TAG=pci-0000_01_02_0
E: ID_NET_DRIVER=virtio_net
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/rename4
E: TAGS=:systemd:

#  udevadm info -q all /sys/class/net/ens3
P: /devices/pci0000:00/0000:00:03.0/0000:01:03.0/net/ens3
L: 0
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:01:03.0/net/ens3
E: INTERFACE=ens3
E: IFINDEX=2
E: SUBSYSTEM=net
E: USEC_INITIALIZED=3600940
E: ID_NET_NAMING_SCHEME=v240
E: ID_NET_NAME_MAC=enx00900b621235
E: ID_OUI_FROM_DATABASE=LANNER ELECTRONICS, INC.
E: ID_NET_NAME_PATH=enp1s3
E: ID_NET_NAME_SLOT=ens3
E: ID_BUS=pci
E: ID_VENDOR_ID=0x8086
E: ID_MODEL_ID=0x1533
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Intel Corporation
E: ID_MODEL_FROM_DATABASE=I210 Gigabit Network Connection
E: ID_PATH=pci-0000:01:03.0
E: ID_PATH_TAG=pci-0000_01_03_0
E: ID_NET_DRIVER=igb
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/ens3
E: TAGS=:systemd:


Is there anything we can do in the XML definition of the VM to fix this?

In case it helps, this is the PCI tree from within the VM (it was the same
with libvirt 7.0.0 and qemu 5.2.0):

# lspci -tv
-[0000:00]-+-00.0  Intel Corporation 440FX - 82441FX PMC [Natoma]
           +-01.0  Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
           +-01.1  Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
           +-01.2  Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II]
           +-01.3  Intel Corporation 82371AB/EB/MB PIIX4 ACPI
           +-02.0  Cirrus Logic GD 5446
           +-03.0-[01]--+-01.0  Red Hat, Inc. Virtio network device
           |            +-02.0  Red Hat, Inc. Virtio network device
           |            \-03.0  Intel Corporation I210 Gigabit Network Connection
           +-04.0-[02]--
           +-05.0-[03]--
           +-06.0-[04]--
           +-07.0-[05]--
           +-08.0-[06]----01.0  Red Hat, Inc. Virtio block device
           +-09.0  Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode]
           +-0a.0  Red Hat, Inc. Virtio console
           +-0b.0  Red Hat, Inc. Virtio memory balloon
           \-0c.0  Red Hat, Inc. Virtio RNG


I see that a new feature in qemu and libvirt is the ability to assign ACPI
indices to network interfaces, so that they appear as *onboard* devices and
are ordered by that index rather than by their virtual PCI addresses. This
is great. In that case, interfaces show up as eno1, eno2, etc.
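
If I understand the feature correctly, the index is set per interface with
an <acpi> subelement, along these lines (just a sketch, the source network
is illustrative):

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <acpi index='1'/>
    </interface>

... and, as far as I can tell, the guest's predictable-naming policy then
prefers the onboard name (eno1) over the slot-based one.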

However, for the sake of backward compatibility, is there a way to have the
previous behaviour where interfaces were called by their PCI slot number
(ens1, ens2, etc.)?

If I move to the new naming produced by ACPI indices, my main worry is that
interface names might change across VMs running different OSes, compared to
what we had before with libvirt 7.0.0 and qemu 5.2.0.

Thanks!

Best,
Riccardo Ravaioli


how to use external snapshots with memory state

2021-01-01 Thread Riccardo Ravaioli
Hi all,

Best wishes for 2021! :)

So I've been reading and playing around with live snapshots and still
haven't figured out how to use an external memory snapshot. My goal is to
take a disk+memory snapshot of a running VM and, if possible, save it in
external files.

As far as I understand, I can run:
$ virsh snapshot-create $VM
... and that'll take an *internal* live snapshot of a given VM, consisting
of its disks and memory state, which will be stored in the qcow2 disk(s) of
the VM. In particular, the memory state will be stored in the first disk of
the VM. I can then use the full range of snapshot commands available:
revert, list, current, delete.
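
For completeness, these are the commands I mean (the snapshot name is
illustrative):

$ virsh snapshot-list $VM
$ virsh snapshot-current $VM
$ virsh snapshot-revert $VM mysnapshot
$ virsh snapshot-delete $VM mysnapshot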

Now, an external snapshot can be taken with:
$ virsh snapshot-create-as --domain $VM mysnapshot \
    --diskspec vda,file=/home/riccardo/disk_mysnapshot.qcow2,snapshot=external \
    --memspec file=/home/riccardo/mem_mysnapshot.qcow2,snapshot=external
... with as many "--diskspec" options as there are disks in the VM.

I've read the virsh manual and the libvirt API documentation, but it's not
clear to me what exactly I can do afterwards with an external snapshot, in
particular with the file containing the memory state. In articles from 7-8
years ago, people state that external memory snapshots cannot be reverted...
Is that still the case today? If so, what is a typical use for such files?
If not with libvirt, is it possible to revert to an external memory + disk
state in some other way, for instance through qemu commands?

Thanks!

Riccardo


Re: "failed to setup INTx fd: Operation not permitted" error when using PCI passthrough

2020-04-30 Thread Riccardo Ravaioli
So ultimately the problem was somewhere in the BIOS. A BIOS update fixed
the issue.

Riccardo

On Tue, 7 Apr 2020 at 18:05, Riccardo Ravaioli wrote:

> Hi,
>
> I'm on a Dell VEP 1405 running Debian 9.11, and I'm running a few tests
> with various interfaces assigned via PCI passthrough to a qemu/KVM virtual
> machine, also running Debian 9.11.
>
> I noticed that only one of the four I350 network controllers can be used
> in PCI passthrough. The available interfaces are:
>
>
> # dpdk-devbind.py --status
>
> Network devices using kernel driver
> ===================================
> 0000:02:00.0 'I350 Gigabit Network Connection 1521' if=eth2 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:02:00.1 'I350 Gigabit Network Connection 1521' if=eth3 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:02:00.2 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:02:00.3 'I350 Gigabit Network Connection 1521' if=eth1 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:04:00.0 'QCA986x/988x 802.11ac Wireless Network Adapter 003c' if= drv=ath10k_pci unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:05:00.0 'Device 15c4' if=eth7 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:05:00.1 'Device 15c4' if=eth6 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:07:00.0 'Device 15e5' if=eth5 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:07:00.1 'Device 15e5' if=eth4 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
>
> If I try PCI passthrough on 02:00.2 (eth0), it works fine. With any of the
> remaining three interfaces, libvirt fails with this error:
>
>
> # virsh create vnf.xml
> error: Failed to create domain from vnf.xml
> error: internal error: process exited while connecting to monitor:
> 2020-04-06T16:08:47.048266Z qemu-system-x86_64: -device
> vfio-pci,host=02:00.1,id=hostdev0,bus=pci.0,addr=0x5: vfio
> 0000:02:00.1: failed to setup INTx fd: Operation not permitted
>
> The contents of vnf.xml are available here: https://pastebin.com/rT3RmAi5
> This is what happened in dmesg when I tried to start the VM:
>
> [ 7305.371730] igb 0000:02:00.1: removed PHC on eth3
> [ 7307.085618] ACPI Warning: \_SB.PCI0.PEX2._PRT: Return Package has no elements (empty) (20160831/nsprepkg-130)
> [ 7307.085717] pcieport 0000:00:0b.0: can't derive routing for PCI INT B
> [ 7307.085719] vfio-pci 0000:02:00.1: PCI INT B: no GSI
> [ 7307.369611] igb 0000:02:00.1: enabling device (0400 -> 0402)
> [ 7307.369668] ACPI Warning: \_SB.PCI0.PEX2._PRT: Return Package has no elements (empty) (20160831/nsprepkg-130)
> [ 7307.369764] pcieport 0000:00:0b.0: can't derive routing for PCI INT B
> [ 7307.369766] igb 0000:02:00.1: PCI INT B: no GSI
> [ 7307.426266] igb 0000:02:00.1: added PHC on eth3
> [ 7307.426269] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connection
> [ 7307.426271] igb 0000:02:00.1: eth3: (PCIe:5.0Gb/s:Width x2) 50:9a:4c:ee:9f:b1
> [ 7307.426350] igb 0000:02:00.1: eth3: PBA No: 106300-000
> [ 7307.426352] igb 0000:02:00.1: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
>
>
> These are all the messages related to that device in dmesg before I tried
> to start the VM:
>
> # dmesg | grep 02:00.1
> [    0.185301] pci 0000:02:00.1: [8086:1521] type 00 class 0x020000
> [    0.185317] pci 0000:02:00.1: reg 0x10: [mem 0xdfd40000-0xdfd5ffff]
> [    0.185334] pci 0000:02:00.1: reg 0x18: [io  0xd040-0xd05f]
> [    0.185343] pci 0000:02:00.1: reg 0x1c: [mem 0xdfd88000-0xdfd8bfff]
> [    0.185434] pci 0000:02:00.1: PME# supported from D0 D3hot D3cold
> [    0.185464] pci 0000:02:00.1: reg 0x184: [mem 0xdeea0000-0xdeea3fff 64bit pref]
> [    0.185467] pci 0000:02:00.1: VF(n) BAR0 space: [mem 0xdeea0000-0xdeebffff 64bit pref] (contains BAR0 for 8 VFs)
> [    0.185486] pci 0000:02:00.1: reg 0x190: [mem 0xdee80000-0xdee83fff 64bit pref]
> [    0.185488] pci 0000:02:00.1: VF(n) BAR3 space: [mem 0xdee80000-0xdee9ffff 64bit pref] (contains BAR3 for 8 VFs)
> [    0.334021] DMAR: Hardware identity mapping for device 0000:02:00.1
> [    0.334463] iommu: Adding device 0000:02:00.1 to group 16
> [    0.398809] pci 0000:02:00.1: Signaling PME through PCIe PME interrupt
> [    2.588049] igb 0000:02:00.1: PCI INT B: not connected
> [    2.643900] igb 0000:02:00.1: added PHC on eth1
> [    2.643903] igb 0000:02:00.1: Intel(R) Gigabit Ethernet Network Connection
> [    2.643905] igb 0000:02:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 50:9a:4c:ee:9f:b1
> [    2.643984] igb 0000:02:00.1: eth1: PBA No: 106300-000
> [    2.

"failed to setup INTx fd: Operation not permitted" error when using PCI passthrough

2020-04-07 Thread Riccardo Ravaioli
[The beginning of this message, including the top of the host "lspci -tv"
tree, was truncated by the list archive; the recoverable part follows.]

 ...--+-00.0  Intel Corporation I350 Gigabit Network Connection
      |       +-00.1  Intel Corporation I350 Gigabit Network Connection
      |       +-00.2  Intel Corporation I350 Gigabit Network Connection
      |       \-00.3  Intel Corporation I350 Gigabit Network Connection
      +-0f.0-[04]----00.0  Qualcomm Atheros QCA986x/988x 802.11ac Wireless Network Adapter
      +-12.0  Intel Corporation DNV SMBus Contoller - Host
      +-13.0  Intel Corporation DNV SATA Controller 0
      +-15.0  Intel Corporation Device 19d0
      +-16.0-[05-06]--+-00.0  Intel Corporation Device 15c4
      |               \-00.1  Intel Corporation Device 15c4
      +-17.0-[07-08]--+-00.0  Intel Corporation Device 15e5
      |               \-00.1  Intel Corporation Device 15e5
      +-18.0  Intel Corporation Device 19d3
      +-1c.0  Intel Corporation Device 19db
      +-1f.0  Intel Corporation DNV LPC or eSPI
      +-1f.2  Intel Corporation Device 19de
      +-1f.4  Intel Corporation DNV SMBus controller
      \-1f.5  Intel Corporation DNV SPI Controller

Looking at lspci -v, there's something odd in the IRQ field of exactly the
three devices that I can't use in PCI passthrough ("IRQ -2147483648"):

# lspci -v|grep -A1 I350
02:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Flags: bus master, fast devsel, latency 0, IRQ -2147483648
--
02:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Flags: bus master, fast devsel, latency 0, IRQ -2147483648
--
02:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Flags: bus master, fast devsel, latency 0, IRQ 18
--
02:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
        Flags: bus master, fast devsel, latency 0, IRQ -2147483648


Finally, every I350 interface has its own IOMMU group in
/sys/kernel/iommu_groups/.
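
For reference, this is the kind of check I mean (plain sysfs, nothing
specific to this machine):

# for dev in /sys/bus/pci/devices/0000:02:00.?; do echo "$dev -> group $(basename $(readlink $dev/iommu_group))"; done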

The kernel I'm using in the host machine is 4.9.189 and my libvirt version
is 4.3.0.

Any thoughts on this?
Is there something I should enable in the BIOS or in the kernel to make
this work?

Thanks!

Regards,
Riccardo Ravaioli


Re: [libvirt-users] assigning PCI addresses with bus > 0x09

2019-01-03 Thread Riccardo Ravaioli
On Thu, 20 Dec 2018 at 15:39, Laine Stump  wrote:

> I think you're right. Each bus requires some amount of IO space, and I
> thought I recalled someone saying that all of the available IO space is
> exhausted after 7 or 8 buses. [...]
>

Laine,

Do you have, by any chance, a link to a page explaining this in more detail?
Thanks again! :)

Riccardo

Re: [libvirt-users] assigning PCI addresses with bus > 0x09

2018-12-20 Thread Riccardo Ravaioli
On Thu, 20 Dec 2018 at 15:39, Laine Stump  wrote:

> [...]
> I think you're right. Each bus requires some amount of IO space, and I
> thought I recalled someone saying that all of the available IO space is
> exhausted after 7 or 8 buses. This was in relation to PCIe, where each
> root port is a bus, and can potentially take up IO space, so possibly in
> that context they were talking about the buses *after* the root bus and
> pcie-pci-bridge, which would bring us back to the same number you're
> getting.
>
> For PCIe our solution was to turn off IO space usage on the
> pcie-root-ports, but you can't do that for conventional PCI buses, since
> the devices might actually need IO space to function properly and the
> standard requires that you provide it.
>

Ok, that makes sense.


> > The real question though is why you need to create so many PCI buses.
> > Each bus can do 31 devices.  Do you really need to have more than 279
> > devices in your VM ?
>
> And if you do need more than 279 devices, do they all need to be
> hot-pluggable? If not, then you can put up to 8 devices on each slot
> (functions 0 - 7).
>

True. I'll use the function field too, then.
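
i.e. something along these lines — a sketch with illustrative addresses,
where function 0x0 of the slot carries multifunction='on':

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0' multifunction='on'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x1'/>
    </interface>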

Thanks a lot!

Riccardo

Re: [libvirt-users] assigning PCI addresses with bus > 0x09

2018-12-20 Thread Riccardo Ravaioli
On Thu, 20 Dec 2018 at 15:20, Daniel P. Berrangé wrote:

> [...]
>
> I guess the hang is that you hit some limit in PCI buses.
>
> The real question though is why you need to create so many PCI buses.
> Each bus can do 31 devices.  Do you really need to have more than 279
> devices in your VM ?
>

Ok, I see. Of course I don't really need that many devices; I was just
exploring the available ranges in a PCI address :)

Riccardo

[libvirt-users] assigning PCI addresses with bus > 0x09

2018-12-20 Thread Riccardo Ravaioli
Hi,

My goal is to assign PCI addresses to a number of devices (network
interfaces, disks and PCI-passthrough devices) myself, without delegating
the generation of those values to libvirt. This should give me more control
and, above all, more predictability over the hardware configuration of a
virtual machine, and consequently over the names of the interfaces inside
it. I'm using libvirt 4.3.0 to create qemu/KVM virtual machines running
Linux (Debian Stretch).

So, for every device of the types mentioned above, I add a line of the form:
<address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>,
... with values from 0x00 to 0xff in the bus field, and from 0x00 to 0x1f in
the slot field, as described in the documentation.

Long story short, I noticed that as soon as I assign values > 0x09 to the
bus field, the serial console hangs indefinitely, in both Debian and
Ubuntu. The VM seems to start correctly and its state is "running"; in
the XML file created by libvirt, I see all the controllers from 0 up to the
largest bus value I assigned, so everything on that side seems OK.
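
For context, these are the controller entries libvirt auto-generates for the
extra buses — a sketch; the index, chassisNr and address values are
illustrative:

    <controller type='pci' index='10' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </controller>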

What am I missing here?
Thanks!

Riccardo

[libvirt-users] performance overhead with backing chains?

2018-11-15 Thread Riccardo Ravaioli
Hi,

I was wondering if there are any studies on the performance overhead of a
QEMU/KVM virtual machine when the backing chain of its disk(s) has size n,
with n > 1. In particular, it would be useful to know up to what chain
length we can expect little or no impact on a Linux virtual machine, for
instance.
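
To make "size n" concrete, this is the kind of chain I mean (file names
illustrative):

$ qemu-img create -f qcow2 -b base.qcow2 overlay1.qcow2
$ qemu-img create -f qcow2 -b overlay1.qcow2 overlay2.qcow2
$ qemu-img info --backing-chain overlay2.qcow2    # chain of size 3: overlay2 -> overlay1 -> base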

Thanks!

Regards,
Riccardo

Re: [libvirt-users] "scripts are not supported on interfaces of type vhostuser" error

2018-02-23 Thread Riccardo Ravaioli
Spot on! The script qemu-ifup was indeed the cause of the problem. It's a
dummy script that only does "exit 0", and it was apparently needed in the
past with old versions of libvirt. I see examples on the net where it is
used to configure interfaces with shell commands. I don't need it.

I see that everything works fine without the <script> line, for both
vhostuser and ethernet interfaces. Are there any side effects, or can I
safely remove that line?

Thanks again!

Riccardo

On 22 February 2018 at 20:55, Laine Stump  wrote:

> [...]
> > [quoted <interface> XML stripped by the archive; it included the <script> line]
>
> Why do you have this line?
>
> > $ virsh create vm.xml
> > error: Failed to create domain from vm.xml
> > error: unsupported configuration: scripts are not supported on
> > interfaces of type vhostuser
>
> This error message tells you exactly what is wrong. The <script> [...]

[libvirt-users] "scripts are not supported on interfaces of type vhostuser" error

2018-02-22 Thread Riccardo Ravaioli
Hi,

I'm having trouble starting a VM with vhostuser interfaces.

I have a simple configuration where a VM running Debian has 1 vhostuser
interface plugged into an OVS switch where a DPDK interface is already
plugged in.
$ ovs-vsctl show
    Bridge "switch1"
        Port "switch1"
            Interface "switch1"
                type: internal
        Port "1.switch1"
            Interface "1.switch1"
                type: dpdk
                options: {dpdk-devargs="0000:0b:00.0"}
        Port "0.switch1"
            Interface "0.vm"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/opt/oa/vhost/0.vm.sock"}


The relevant excerpt from the XML of my VM is below.

[Most of the XML excerpt was stripped by the list archive. The surviving
fragments show the qemu namespace declaration
(xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0"), the emulator
/opt/oa/bin/qemu-system-x86_64, and the vhostuser <interface> definition,
which included a <script> line.]
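
For reference, a vhostuser interface that matches the OVS side above (a
dpdkvhostuserclient port with vhost-server-path=/opt/oa/vhost/0.vm.sock)
would normally look roughly like this — a sketch, not my exact XML; the MAC
address is a placeholder, and libvirt acts as the socket server when OVS is
the client:

    <interface type='vhostuser'>
      <mac address='52:54:00:00:00:01'/>
      <source type='unix' path='/opt/oa/vhost/0.vm.sock' mode='server'/>
      <model type='virtio'/>
    </interface>

vhost-user also needs the guest memory to be shared with the vswitch (e.g.
hugepage-backed memory), which is what the libvirtd warning further down is
about.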

Now, if I try to start my VM, I get the following error and the VM is not
started at all:
$ virsh create vm.xml
error: Failed to create domain from vm.xml
error: unsupported configuration: scripts are not supported on interfaces
of type vhostuser


The logs from libvirtd.log say:
2018-02-22 09:18:24.982+0000: 2033: warning :
qemuProcessStartWarnShmem:4539 : Detected vhost-user interface without any
shared memory, the interface might not be operational
2018-02-22 09:18:24.982+0000: 2033: error : qemuBuildHostNetStr:3894 :
unsupported configuration: scripts are not supported on interfaces of type
vhostuser

The logs from qemu simply say:
2018-02-22 09:26:15.857+0000: shutting down, reason=failed

And finally, ovs-vswitchd.log:
2018-02-22T09:18:24.715Z|00328|dpdk|INFO|VHOST_CONFIG: vhost-user client:
socket created, fd: 51
2018-02-22T09:18:24.716Z|00329|netdev_dpdk|INFO|vHost User device '0.vm'
created in 'client' mode, using client socket '/opt/oa/vhost/0.vm.sock'
2018-02-22T09:18:24.718Z|00330|dpdk|WARN|VHOST_CONFIG: failed to connect to
/opt/oa/vhost/0.vm.sock: No such file or directory
2018-02-22T09:18:24.718Z|00331|dpdk|INFO|VHOST_CONFIG:
/opt/oa/vhost/0.vm.sock: reconnecting...
2018-02-22T09:18:24.718Z|00332|bridge|INFO|bridge switch1: added interface
0.vm on port 5


Am I missing something on the openvswitch side or on the libvirt side?

It looks like openvswitch can't find /opt/oa/vhost/0.vm.sock, but isn't
either openvswitch or libvirt supposed to create it?
Also, I'm not sure what to make of the error messages in libvirtd.log...

My software versions are: libvirt 3.10.0, qemu 2.10.2, openvswitch 2.8.1
and DPDK 17.11.

Thanks a lot!

Riccardo