Re: virsh dommemstat doesn't update its information

2021-03-29 Thread Michal Privoznik

On 3/29/21 4:00 PM, Lentes, Bernd wrote:

> Hi,
>
> I'm playing around a bit with my domains and the balloon driver.
> To get information about ballooning I use virsh dommemstat.
> But I only get very little information:
>
> virsh # dommemstat vm_idcc_devel
> actual 1044480
> last_update 0
> rss 1030144
>
> Neither "dommemstat --domain vm_idcc_devel --period 5 --live" nor
> "dommemstat --domain vm_idcc_devel --period 5 --current" updates or
> extends the information.
>
> In vm_idcc_devel virtio_balloon is loaded:
> idcc-devel:~ # lsmod|grep balloon
> virtio_balloon 22788  0
>
> Guest OS is SLES 10 SP4. Is that too old?


Yeah, that is ~10 years old and I believe that the virtio_balloon module
is lacking the features that enable QEMU (and subsequently libvirt) to
report more info.
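
For comparison, a sketch (not output from Bernd's guest; the values are
made up): on a guest whose virtio_balloon supports the memory statistics
interface, setting a collection period - either with "virsh dommemstat
--period 5" or with a <stats period='5'/> child of <memballoon> in the
domain XML - makes dommemstat report the full field set:

virsh # dommemstat vm_idcc_devel
actual 1044480
swap_in 0
swap_out 0
major_fault 112
minor_fault 178127
unused 645628
available 1018876
usable 912345
last_update 1617024000
rss 1030144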


Michal



virsh dommemstat doesn't update its information

2021-03-29 Thread Lentes, Bernd
Hi,

I'm playing around a bit with my domains and the balloon driver.
To get information about ballooning I use virsh dommemstat.
But I only get very little information:

virsh # dommemstat vm_idcc_devel
actual 1044480
last_update 0
rss 1030144

Neither "dommemstat --domain vm_idcc_devel --period 5 --live" nor
"dommemstat --domain vm_idcc_devel --period 5 --current" updates or
extends the information.

In vm_idcc_devel virtio_balloon is loaded:
idcc-devel:~ # lsmod|grep balloon
virtio_balloon 22788  0

Guest OS is SLES 10 SP4. Is that too old?
Host OS is SLES 12 SP5.
There are other domains in which the information is updated.
Here is the config from vm_idcc_devel:

virsh # dumpxml vm_idcc_devel

<domain type='kvm'>
  <name>vm_idcc_devel</name>
  <uuid>4993009b-42ff-45d9-b1e0-145b8c0c8f82</uuid>
  <memory>2044928</memory>
  <currentMemory>1044480</currentMemory>
  <vcpu>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    [the disk, controller, interface and other device elements were
    stripped by the archive and are not recoverable]
  </devices>
</domain>

Bernd


-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 






Re: how to check a virtual disk

2021-03-29 Thread Lentes, Bernd


- On Mar 29, 2021, at 2:09 PM, Peter Krempa pkre...@redhat.com wrote:

> On Mon, Mar 29, 2021 at 13:59:11 +0200, Lentes, Bernd wrote:
>> 
>> - On Mar 29, 2021, at 12:58 PM, Bernd Lentes
>> bernd.len...@helmholtz-muenchen.de wrote:
> 
> [...]
> 
>> 
>> > 
>> 
>> I forgot:
>> host is SLES 12 SP5, virtual domain too.
>> The image file is in raw format.
> 
> Please always attach the VM config XMLs, so that we don't have to guess
> how your disks are configured.




<domain type='kvm'>
  <name>vm_geneious</name>
  <uuid>7337ee89-1699-470f-95c4-05ee19203847</uuid>
  <memory>8192000</memory>
  <currentMemory>8192000</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    [the disk, controller, interface and other device elements were
    stripped by the archive and are not recoverable]
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
    </rng>
  </devices>
</domain>





Re: how to check a virtual disk

2021-03-29 Thread Peter Krempa
On Mon, Mar 29, 2021 at 13:59:11 +0200, Lentes, Bernd wrote:
> 
> - On Mar 29, 2021, at 12:58 PM, Bernd Lentes 
> bernd.len...@helmholtz-muenchen.de wrote:

[...]

> 
> > 
> 
> I forgot:
> host is SLES 12 SP5, virtual domain too.
> The image file is in raw format.

Please always attach the VM config XMLs, so that we don't have to guess
how your disks are configured.



Re: how to check a virtual disk

2021-03-29 Thread Lentes, Bernd

- On Mar 29, 2021, at 12:58 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

> Hi,
> 
> we have a two-node cluster with pacemaker and a SAN.
> The resources are inside virtual domains.
> The images of the virtual disks reside on the SAN.
> On one domain I have hard disk errors in my log:
> 
> 2021-03-24T21:02:28.416504+01:00 geneious kernel: [2159685.909613] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:02:46.505323+01:00 geneious kernel: [2159704.012213] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:02:55.573149+01:00 geneious kernel: [2159713.078560] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:03:23.702946+01:00 geneious kernel: [2159741.202546] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:03:30.289606+01:00 geneious kernel: [2159747.796192]
> [ cut here ]
> 2021-03-24T21:03:30.289635+01:00 geneious kernel: [2159747.796207] WARNING:
> CPU: 0 PID: 457 at ../fs/buffer.c:1108 mark_buffer_dirty+0xe8/0x100
> 2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796208] Modules
> linked in: st sr_mod cdrom lp parport_pc ppdev parport xfrm_user xfrm_algo
> binfmt_misc uinput nf_log_ipv6 xt_comment nf_log_ipv4 nf_log_common xt_LOG
> xt_limit af_packet iscsi_ibft iscsi_boot_sysfs ip6t_REJECT nf_conntrack_ipv6
> nf_defrag_ipv6 ipt_REJECT xt_pkttype xt_tcpudp iptable_filter
> ip6table_mangle nf_conntrack_netbios_ns nf_conntrack_broadcast
> nf_conntrack_ipv4 nf_defrag_ipv4 ip_tables xt_conntrack nf_conntrack
> libcrc32c ip6table_filter ip6_tables x_tables joydev virtio_net
> net_failover failover virtio_balloon i2c_piix4 qemu_fw_cfg pcspkr button
> ext4 crc16 jbd2 mbcache ata_generic hid_generic usbhid ata_piix sd_mod
> virtio_rng ahci floppy libahci serio_raw ehci_pci bochs_drm drm_kms_helper
> syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm uhci_hcd ehci_hcd
> usbcore virtio_pci
> 2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796374]
> drm_panel_orientation_quirks libata dm_mirror dm_region_hash dm_log sg
> dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
> [last unloaded: parport_pc]
> 2021-03-24T21:03:30.289643+01:00 geneious kernel: [2159747.796400] Supported:
> Yes
> 2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796405] CPU: 0 PID:
> 457 Comm: jbd2/dm-0-8 Not tainted 4.12.14-122.57-default #1 SLE12-SP5
> 2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796406] Hardware
> name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> rel-1.12.0-0-ga698c89-rebuilt.suse.com 04/01/2014
> 2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796407] task:
> 8ba32766c380 task.stack: 99954124c000
> 2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796409] RIP:
> 0010:mark_buffer_dirty+0xe8/0x100
> 2021-03-24T21:03:30.289646+01:00 geneious kernel: [2159747.796409] RSP:
> 0018:99954124fcf0 EFLAGS: 00010246
> 2021-03-24T21:03:30.289650+01:00 geneious kernel: [2159747.796413] RAX:
> 00a20828 RBX: 8ba209a58d90 RCX: 8ba3292d7958
> 2021-03-24T21:03:30.289651+01:00 geneious kernel: [2159747.796413] RDX:
> 8ba209a585b0 RSI: 8ba24270b690 RDI: 8ba3292d7958
> 2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796414] RBP:
> 8ba3292d7958 R08: 8ba209a585b0 R09: 0001
> 2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796415] R10:
> 8ba328c1c0b0 R11: 8ba287805380 R12: 8ba3292d795a
> 2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796415] R13:
>  R14: 8ba3292d7958 R15: 8ba209a58d90
> 2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796417] FS:
> () GS:8ba333c0() knlGS:
> 2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796417] CS:  0010 
> DS:
>  ES:  CR0: 80050033
> 2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796418] CR2:
> 99bff000 CR3: 000101b06000 CR4: 06f0
> 2021-03-24T21:03:30.289655+01:00 geneious kernel: [2159747.796424] Call Trace:
> 2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796470]
> __jbd2_journal_refile_buffer+0xbb/0xe0 [jbd2]
> 2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796479]
> jbd2_journal_commit_transaction+0xf1a/0x1870 [jbd2]
> 2021-03-24T21:03:30.289657+01:00 geneious kernel: [2159747.796489]  ?
> __switch_to_asm+0x41/0x70
> 2021-03-24T21:03:30.289658+01:00 geneious kernel: [2159747.796490]  ?
> __switch_to_asm+0x35/0x70
> 2021-03-24T21:03:30.289662+01:00 geneious kernel: [2159747.796493]
> kjournald2+0xbb/0x230 [jbd2]
> 2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796499]  ?
> wait_woken+0x80/0x80
> 2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796503]
> kthread+0xf6/0x130
> 2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796508]  ?
> 

how to check a virtual disk

2021-03-29 Thread Lentes, Bernd
Hi,

we have a two-node cluster with pacemaker and a SAN.
The resources are inside virtual domains.
The images of the virtual disks reside on the SAN.
On one domain I have hard disk errors in my log:

2021-03-24T21:02:28.416504+01:00 geneious kernel: [2159685.909613] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:02:46.505323+01:00 geneious kernel: [2159704.012213] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:02:55.573149+01:00 geneious kernel: [2159713.078560] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:03:23.702946+01:00 geneious kernel: [2159741.202546] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:03:30.289606+01:00 geneious kernel: [2159747.796192] 
[ cut here ]
2021-03-24T21:03:30.289635+01:00 geneious kernel: [2159747.796207] WARNING: 
CPU: 0 PID: 457 at ../fs/buffer.c:1108 mark_buffer_dirty+0xe8/0x100
2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796208] Modules
linked in: st sr_mod cdrom lp parport_pc ppdev parport xfrm_user xfrm_algo
binfmt_misc uinput nf_log_ipv6 xt_comment nf_log_ipv4 nf_log_common xt_LOG
xt_limit af_packet iscsi_ibft iscsi_boot_sysfs ip6t_REJECT nf_conntrack_ipv6
nf_defrag_ipv6 ipt_REJECT xt_pkttype xt_tcpudp iptable_filter ip6table_mangle
nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_ipv4
nf_defrag_ipv4 ip_tables xt_conntrack nf_conntrack libcrc32c ip6table_filter
ip6_tables x_tables joydev virtio_net net_failover failover virtio_balloon
i2c_piix4 qemu_fw_cfg pcspkr button ext4 crc16 jbd2 mbcache ata_generic
hid_generic usbhid ata_piix sd_mod virtio_rng ahci floppy libahci serio_raw
ehci_pci bochs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt
fb_sys_fops ttm drm uhci_hcd ehci_hcd usbcore virtio_pci
2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796374]
drm_panel_orientation_quirks libata dm_mirror dm_region_hash dm_log sg
dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
[last unloaded: parport_pc]
2021-03-24T21:03:30.289643+01:00 geneious kernel: [2159747.796400] Supported: 
Yes
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796405] CPU: 0 PID: 
457 Comm: jbd2/dm-0-8 Not tainted 4.12.14-122.57-default #1 SLE12-SP5
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796406] Hardware 
name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
rel-1.12.0-0-ga698c89-rebuilt.suse.com 04/01/2014
2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796407] task: 
8ba32766c380 task.stack: 99954124c000
2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796409] RIP: 
0010:mark_buffer_dirty+0xe8/0x100
2021-03-24T21:03:30.289646+01:00 geneious kernel: [2159747.796409] RSP: 
0018:99954124fcf0 EFLAGS: 00010246
2021-03-24T21:03:30.289650+01:00 geneious kernel: [2159747.796413] RAX: 
00a20828 RBX: 8ba209a58d90 RCX: 8ba3292d7958
2021-03-24T21:03:30.289651+01:00 geneious kernel: [2159747.796413] RDX: 
8ba209a585b0 RSI: 8ba24270b690 RDI: 8ba3292d7958
2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796414] RBP: 
8ba3292d7958 R08: 8ba209a585b0 R09: 0001
2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796415] R10: 
8ba328c1c0b0 R11: 8ba287805380 R12: 8ba3292d795a
2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796415] R13: 
 R14: 8ba3292d7958 R15: 8ba209a58d90
2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796417] FS:  
() GS:8ba333c0() knlGS:
2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796417] CS:  0010 
DS:  ES:  CR0: 80050033
2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796418] CR2: 
99bff000 CR3: 000101b06000 CR4: 06f0
2021-03-24T21:03:30.289655+01:00 geneious kernel: [2159747.796424] Call Trace:
2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796470]  
__jbd2_journal_refile_buffer+0xbb/0xe0 [jbd2]
2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796479]  
jbd2_journal_commit_transaction+0xf1a/0x1870 [jbd2]
2021-03-24T21:03:30.289657+01:00 geneious kernel: [2159747.796489]  ? 
__switch_to_asm+0x41/0x70
2021-03-24T21:03:30.289658+01:00 geneious kernel: [2159747.796490]  ? 
__switch_to_asm+0x35/0x70
2021-03-24T21:03:30.289662+01:00 geneious kernel: [2159747.796493]  
kjournald2+0xbb/0x230 [jbd2]
2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796499]  ? 
wait_woken+0x80/0x80
2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796503]  
kthread+0xf6/0x130
2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796508]  ? 
commit_timeout+0x10/0x10 [jbd2]
2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796510]  ? 
kthread_bind+0x10/0x10
2021-03-24T21:03:30.289665+01:00 geneious kernel: [2159747.796511]  
ret_from_fork+0x35/0x40

Re: Packets dropped by virtual NICs

2021-03-29 Thread Michal Privoznik

On 3/29/21 11:38 AM, Silvia Fichera wrote:

> Hi Michal,
> these are the steps:
> - start the vm with qemu
> sudo qemu-system-x86_64-spice -m 2048 -enable-kvm -smp 3 -cdrom
> /home/machine_A/ubuntu.iso -netdev
> tap,ifname=tap0,vhost=on,id=n1,vhostforce=on,queues=3,script=/etc/tap0_ifup.sh,downscript=/etc/tap0_ifdown.sh
> -device virtio-net-pci,netdev=n1,mac=9E:58:00:d2:53:03,mq=on,vectors=8
> -netdev
> tap,ifname=tap1,vhost=on,id=n2,vhostforce=on,queues=3,script=/etc/tap1_ifup.sh,downscript=/etc/tap1_ifdown.sh
> -device virtio-net-pci,netdev=n2,mac=7A:53:00:d1:59:04,mq=on,vectors=8
> -device virtio-net,netdev=network2 -netdev user,id=network2 -hda
> switch_A.img
>
> - attach the taps to the bridges in the host machine (to have the
> traffic coming from the outside traffic generator injected in the vm)
>
> - in the VM:
>   - enable ip_forward
>   - enable promiscuous mode for the interface

This sounds fishy. Why is this needed?

>   - send the "tc qdisc" command to configure the output NIC with 2
>     traffic classes (TC) assigned to 2 different queues
>   - add the iptables mangle rules to the POSTROUTING chain to assign
>     TC1 to the traffic with dport  and TC0 to the traffic with dport
>
> I'm sending 2 UDP flows, I have no loss under 3Mbps each.
> If I capture traffic with tcpdump on the ingress and on the egress nic
> (of the VM), I see a difference of 50% of packets.
>
> When I did a test on the host machine, to check if it has the same
> problem, I've reached an aggregated traffic of 90Mbps with no loss.
>
> That's why I think that there is some misconfiguration on the virtual
> NIC.


Well, that's fairly easy to test - create those two TAPs, do the setup 
you're doing and instead of spawning qemu, run iperf or spirent and tell 
it to use those TAPs. I think you'll find the problem elsewhere.
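
For instance, a rough sketch (tap name, address and rate are made up;
the taps still get attached to the same bridges as in your setup):

ip tuntap add dev tap0 mode tap
ip link set tap0 up
# attach tap0 to the bridge as before, then push the same kind of
# UDP flow through it from the generator side:
iperf3 -c 10.0.0.2 -u -b 3M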


Anyway, this problem is not libvirt related really and I guess you can 
find more elaborate replies on a list where TC developers hang out. 
Looks like they're using lartc: https://lartc.org/#mailinglist


Michal



Re: Packets dropped by virtual NICs

2021-03-29 Thread Silvia Fichera
Hi Michal,
these are the steps:
- start the vm with qemu
sudo qemu-system-x86_64-spice -m 2048 -enable-kvm -smp 3 -cdrom
/home/machine_A/ubuntu.iso -netdev
tap,ifname=tap0,vhost=on,id=n1,vhostforce=on,queues=3,script=/etc/tap0_ifup.sh,downscript=/etc/tap0_ifdown.sh
-device virtio-net-pci,netdev=n1,mac=9E:58:00:d2:53:03,mq=on,vectors=8
-netdev
tap,ifname=tap1,vhost=on,id=n2,vhostforce=on,queues=3,script=/etc/tap1_ifup.sh,downscript=/etc/tap1_ifdown.sh
-device virtio-net-pci,netdev=n2,mac=7A:53:00:d1:59:04,mq=on,vectors=8
-device virtio-net,netdev=network2 -netdev user,id=network2 -hda
switch_A.img

- attach the taps to the bridges in the host machine (to have the traffic
coming from the outside traffic generator injected in the vm)
- in the VM:
  - enable ip_forward
  - enable promiscuous mode for the interface
  - send the "tc qdisc" command to configure the output NIC with 2
    traffic classes (TC) assigned to 2 different queues
  - add the iptables mangle rules to the POSTROUTING chain to assign TC1
    to the traffic with dport  and TC0 to the traffic with dport  (a
    rough sketch of these two steps follows below)
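
As a rough sketch, those last two steps usually look something like this
(interface name, port numbers, priorities and gate times are all made up
for illustration; the real port numbers are missing above):

tc qdisc replace dev eth1 parent root handle 100 taprio \
    num_tc 2 \
    map 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \
    queues 1@0 1@1 \
    base-time 0 \
    sched-entry S 01 800000 \
    sched-entry S 02 200000 \
    clockid CLOCK_TAI

# skb priority 1 -> TC1, priority 0 -> TC0 (via "map" above)
iptables -t mangle -A POSTROUTING -p udp --dport 5001 \
    -j CLASSIFY --set-class 0:1
iptables -t mangle -A POSTROUTING -p udp --dport 5002 \
    -j CLASSIFY --set-class 0:0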
I'm sending 2 UDP flows, I have no loss under 3Mbps each.
If I capture traffic with tcpdump on the ingress and on the egress nic (of
the VM), I see a difference of 50% of packets.

When I did a test on the host machine, to check if it has the same problem,
I've reached an aggregated traffic of 90Mbps with no loss.

That's why I think that there is some misconfiguration on the virtual NIC.

Thanks
Silvia



On Mon, Mar 29, 2021 at 11:13 AM Michal Privoznik wrote:

> On 3/27/21 2:53 PM, Silvia Fichera wrote:
> > Hi all,
> > I want to use tc qdisc settings in a network composed of several qemu
> > VMs, connected through bridges and tap interfaces.
> > I generate traffic with a spirent. Everything is fine when the
> > scheduling discipline is not installed, but when I run the command to
> > set taprio queues, traffic on the VM's NIC is dropped; I can send max
> > 1 Mbps.
> > I think that there is something missing in the virtual NIC
> > configuration or setup. With ethtool I can see that queues are
> > configured. I've also noticed the BQL equals 0, which is different
> > from the physical machine (BQL=18600) where everything works
> > correctly.
> > I've read that it could be because NIC drivers do not support that
> > setting.
> >
> > Do you have any suggestions?
>
> Hey,
>
> I'm not familiar with taprio, but what's implemented in libvirt is htb
> and sfq and that works well. Are you setting the qdiscs yourself or
> modifying the libvirt-created structure?
>
> You are setting these qdiscs from the host, right?
> I know that when changing QoS settings (when libvirt changes the
> qdisc/class layout) there is a brief moment when packets are not
> transmitted from/to the guest. I suspect that the kernel is freeing up
> queues or something. But this does not look like your case, does it?
>
> Michal
>
>

-- 
Silvia Fichera


Re: Virtual Network API for QEMU

2021-03-29 Thread Michal Privoznik

On 3/27/21 1:39 PM, Radek Simko wrote:

> Hi,
> According to this support matrix
> https://libvirt.org/hvsupport.html#virNetworkDriver
> there is no support for any APIs other than hypervisor ones for qemu.
> For example virConnectNumOfNetworks is not supported.
>
> Is there any particular reason this is not supported? Has any
> development in that area been attempted in the past? Would contributions
> adding support be welcomed?


To extend Laine's reply:

Libvirt has two sets of drivers: stateful (where libvirt keeps the state
of resources like domains, networks, ...) and stateless (where libvirt
merely translates from/to the APIs exposed by the hypervisor).


QEMU is an example of a stateful driver; ESX or hyperv are examples of
stateless drivers. Stateless drivers also implement the network APIs
(again, by translating from/to the APIs exposed by the underlying
hypervisor - ESX or hyperv in this example), whereas stateful drivers
use the bridge driver. Therefore, QEMU doesn't implement any network
APIs itself.
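
To illustrate (the output shape is from a recent virsh and the network
shown is made up): the same qemu:///system connection still answers the
network APIs, they are just served by the bridge driver:

virsh -c qemu:///system net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes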



This is even more visible with split daemons (where the monolithic
libvirtd is broken into smaller daemons) - if virnetworkd is not running
then things like 'virsh net-list' return an error [*].



Is there any particular problem you're facing?

Michal


* - except not really, because these split daemons are socket activated, 
so virnetworkd is started automatically when needed.




Re: Packets dropped by virtual NICs

2021-03-29 Thread Michal Privoznik

On 3/27/21 2:53 PM, Silvia Fichera wrote:

> Hi all,
> I want to use tc qdisc settings in a network composed of several qemu
> VMs, connected through bridges and tap interfaces.
> I generate traffic with a spirent. Everything is fine when the
> scheduling discipline is not installed, but when I run the command to
> set taprio queues, traffic on the VM's NIC is dropped; I can send max
> 1 Mbps.
> I think that there is something missing in the virtual NIC
> configuration or setup. With ethtool I can see that queues are
> configured. I've also noticed the BQL equals 0, which is different
> from the physical machine (BQL=18600) where everything works correctly.
>
> I've read that it could be because NIC drivers do not support that
> setting.
>
> Do you have any suggestions?


Hey,

I'm not familiar with taprio, but what's implemented in libvirt is htb
and sfq and that works well. Are you setting the qdiscs yourself or
modifying the libvirt-created structure?
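
For reference, the htb/sfq setup libvirt builds comes from the
<bandwidth> element on an interface - a sketch (average and peak in
KiB/s, burst in KiB; the numbers are made up):

<interface type='network'>
  ...
  <bandwidth>
    <inbound average='1000' peak='5000' burst='1024'/>
    <outbound average='1000'/>
  </bandwidth>
</interface>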


You are setting these qdiscs from the host, right?
I know that when changing QoS settings (when libvirt changes the
qdisc/class layout) there is a brief moment when packets are not
transmitted from/to the guest. I suspect that the kernel is freeing up
queues or something. But this does not look like your case, does it?


Michal