Re: qemu process consumes 100% host CPU after reverting snapshot

2024-03-28 Thread Chun Feng Wu
BTW, if I download QEMU 8.0 (https://download.qemu.org/qemu-8.0.0.tar.xz), 
compile and install it on my Ubuntu 22.04, and launch the VM with the same 
command (just updating -machine to "pc"), the host CPU usage is not 
high, so it seems to be a bug in QEMU 6.


Also, I have another question: for disk iotune or throttle-group, after 
reverting a snapshot, if I log in to the VM and run fio, I/O performance 
drops a lot on both QEMU 6 and QEMU 8. Does anyone know the reason? Any 
explanation would be appreciated!
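
For what it's worth, one way to compare the before/after state is to read 
back the throttle-group limits over QMP (a sketch; the "/objects/limit0" 
path and the "limits" property name are my assumptions about the QOM 
object named limit0 in the command line below):

```
{"execute": "qmp_capabilities"}
{"execute": "qom-get",
 "arguments": {"path": "/objects/limit0", "property": "limits"}}
```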




On 2024/3/28 07:32, Chun Feng Wu wrote:



qemu process consumes 100% host CPU after reverting snapshot

2024-03-27 Thread Chun Feng Wu

Hi,

I am testing a throttle filter chain (multiple throttle-groups on a disk) 
with the following steps:
1. start guest VM (chained throttle filters applied to the disk per 
https://github.com/qemu/qemu/blob/master/docs/throttle.txt)

2. take snapshot
3. revert snapshot
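
For reference, with "-monitor stdio" as in the command below, steps 2 and 3 
can be driven from the HMP monitor like this (the snapshot name is arbitrary):

```
(qemu) savevm snap1
(qemu) loadvm snap1
```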

After step 3, I noticed the QEMU process on the host consumes 100% CPU, and 
after I log in to the guest VM, the VM responds to my commands slowly or not 
at all (it worked well before reverting).


    PID USER  PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  65455 root  20   0 9659924 891328  20132 R 100.3  5.4  29:39.93 qemu-system-x86

Does anybody know why this issue happens? Is it a bug, or am I 
misunderstanding something?


My command:

qemu-system-x86_64 \
  -name ubuntu-20.04-vm,debug-threads=on \
  -machine pc-i440fx-jammy,usb=off,dump-guest-core=off \
  -accel kvm \
  -cpu 
Broadwell-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,md-clear=on,stibp=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,tsx-ctrl=off,hle=off,rtm=off 
\

  -m 8192 \
  -overcommit mem-lock=off \
  -smp 2,sockets=1,dies=1,cores=1,threads=2 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram \
  -object memory-backend-ram,id=ram,size=8192M \
  -uuid d2d68f5d-bff0-4167-bbc3-643e3566b8fb \
  -display none \
  -nodefaults \
  -monitor stdio \
  -rtc base=utc,driftfix=slew \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -object 
'{"qom-type":"throttle-group","id":"limit0","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":200,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":200,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
    -object 
'{"qom-type":"throttle-group","id":"limit1","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":250,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":250,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
    -object 
'{"qom-type":"throttle-group","id":"limit2","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":300,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":300,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
    -object 
'{"qom-type":"throttle-group","id":"limit012","limits":{"bps-total":0,"bps-read":0,"bps-write":0,"iops-total":400,"iops-read":0,"iops-write":0,"bps-total-max":0,"bps-read-max":0,"bps-write-max":0,"iops-total-max":400,"iops-read-max":0,"iops-write-max":0,"iops-size":0,"bps-total-max-length":1,"bps-read-max-length":1,"bps-write-max-length":1,"iops-total-max-length":1,"iops-read-max-length":1,"iops-write-max-length":1}}' 
\
  -blockdev 
'{"driver":"file","filename":"/virt/images/focal-server-cloudimg-amd64.img","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"}' 
\
  -blockdev 
'{"node-name":"libvirt-4-format","read-only":false,"driver":"qcow2","file":"libvirt-4-storage","backing":null}' 
\
  -device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-4-format,id=virtio-disk0,bootindex=1 
\
  -blockdev 
'{"driver":"file","filename":"/virt/disks/vm1_disk_1.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' 
\
  -blockdev 
'{"node-name":"libvirt-3-format","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":null}' 
\
  -blockdev 
'{"driver":"throttle","node-name":"libvirt-5-filter","throttle-group":"limit0","file":"libvirt-3-format"}' 
\
  -blockdev 
'{"driver":"throttle","node-name":"libvirt-6-filter","throttle-group":"limit012","file":"libvirt-5-filter"}' 
\
  -device 
virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-6-filter,id=virtio-disk1 \
  -blockdev 
'{"driver":"file","filename":"/virt/disks/vm1_disk_2.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' 
\
  -blockdev 
'{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' 
\
  -blockdev 
'{"driver":"throttle","node-name":"libvirt-3-filter","throttle-group":"limit1","file":"libvirt-2-format"}' 
\
  -blockdev 
'{"driver":"throttle","node-name":"libvirt-4-filter","throttle-group":"limit012","file":"libvirt-3-filter"}' 
\
  -device 
virtio-blk-pci,bus=pci.0,addr=0x6,drive=libvirt-4-filter,id=virtio-disk2 \
  -blockdev 
'{"driver":"file","filename":"/virt/disks/vm1_disk_3.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' 
\
  -blockdev 
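
The rest of the command is cut off in the archive. To make the intended 
semantics of the chained filters concrete anyway, here is a small 
self-contained toy model of throttle stages in series. This is NOT QEMU's 
real algorithm (QEMU uses the leaky-bucket scheme described in 
docs/throttle.txt); it only illustrates that the sustained rate through a 
chain of throttle nodes is the smallest limit along the chain:

```python
# Toy model: each stage is a token bucket that refills at its IOPS limit
# (burst capped at one second's worth) and forwards at most its available
# tokens. NOT QEMU's leaky-bucket implementation, just an illustration.

def sustained_iops(chain, seconds=10):
    """Simulate 1-second ticks through a list of per-stage IOPS limits."""
    buckets = [0.0] * len(chain)
    done = 0.0
    for _ in range(seconds):
        demand = float("inf")  # the guest issues as much I/O as it can
        for i, rate in enumerate(chain):
            buckets[i] = min(buckets[i] + rate, rate)  # refill, cap burst
            allowed = min(demand, buckets[i])          # pass through stage
            buckets[i] -= allowed
            demand = allowed
        done += demand
    return done / seconds

# virtio-disk1 goes through limit0 (200 IOPS) then limit012 (400 IOPS):
print(sustained_iops([200, 400]))  # -> 200.0
# virtio-disk2 goes through limit1 (250 IOPS) then limit012 (400 IOPS):
print(sustained_iops([250, 400]))  # -> 250.0
```

Note that in the real configuration limit012 is shared by both throttled 
disks, so their combined rate is additionally capped at 400 IOPS, which this 
per-disk model does not capture.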

RE: Does "-object" support structured options now?

2024-03-06 Thread Chun Feng Wu
Yes, you’re right, QEMU >= 6.0.0 works well; my test failed because I ran it 
on QEMU 4.2.1.

From: Daniel P. Berrangé 
Date: Wednesday, March 6, 2024 at 22:43
To: Chun Feng Wu , qemu-devel@nongnu.org 

Subject: [EXTERNAL] Re: Does "-object" support structured options now?
On Wed, Mar 06, 2024 at 02:36:08PM +0000, Daniel P. Berrangé wrote:
> On Wed, Mar 06, 2024 at 02:33:05PM +0000, Chun Feng Wu wrote:
> > Thanks Daniel for your response!
> >
> > I tried it with the following cmd
> >
> > qemu-system-x86_64 [other options...] \
> >   -object 
> > '{"qom-type":"throttle-group","id":"limits0","limits":{"iops-total":200}}'
> >
> > And I got error:
> > qemu-system-x86_64: -object 
> > {"qom-type":"throttle-group","id":"limits0","limits":{"iops-total":200}}: 
> > Parameter 'id' is missing
> >
> > Do you know why such error happens?
>
> You have made a mistake somewhere in invoking it ?

Or perhaps you are using a much older QEMU release which lacks JSON
support? You need QEMU >= 6.0.0
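
For those stuck on a pre-6.0.0 release, docs/throttle.txt describes the 
flat (unstable, x-prefixed) syntax instead:

```
-object throttle-group,id=limits0,x-iops-total=200
```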

With regards,
Daniel
--
|: https://berrange.com       -o-  https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org        -o-  https://fstop138.berrange.com :|
|: https://entangle-photo.org -o-  https://www.instagram.com/dberrange :|


RE: Does "-object" support structured options now?

2024-03-06 Thread Chun Feng Wu
Thanks Daniel for your response!

I tried it with the following cmd

qemu-system-x86_64 [other options...] \
  -object 
'{"qom-type":"throttle-group","id":"limits0","limits":{"iops-total":200}}'

And I got error:
qemu-system-x86_64: -object 
{"qom-type":"throttle-group","id":"limits0","limits":{"iops-total":200}}: 
Parameter 'id' is missing

Do you know why such error happens?


--
Thanks and Regards,

Wu


From: Daniel P. Berrangé 
Date: Monday, March 4, 2024 at 16:06
To: Chun Feng Wu 
Cc: qemu-devel@nongnu.org 
Subject: [EXTERNAL] Re: Does "-object" support structured options now?
On Mon, Mar 04, 2024 at 06:43:19AM +0000, Chun Feng Wu wrote:
> Hi,
>
> I noticed that throttle-group can be created with “-object”; however, per the
> QEMU doc (https://github.com/qemu/qemu/blob/master/docs/throttle.txt),
> “-object” didn’t support structured options at the time:
>
> “
> A throttle-group can also be created with the -object command line
> option but at the moment there is no way to pass a 'limits' parameter
> that contains a ThrottleLimits structure. The solution is to set the
> individual values directly, like in this example:
>
>-object throttle-group,id=group0,x-iops-total=1000,x-bps-write=2097152
>
> Note however that this is not a stable API (hence the 'x-' prefixes) and
> will disappear when -object gains support for structured options and
> enables use of 'limits'.
> “
>
> Does anybody know if the latest QEMU code still lacks such
> support (structured options for -object)? If so, is there any
> plan to support it (instead of the non-stable API)?

-object supports JSON syntax these days so any QAPI structure can be
expressed no matter how complex.


With regards,
Daniel
--
|: https://berrange.com       -o-  https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org        -o-  https://fstop138.berrange.com :|
|: https://entangle-photo.org -o-  https://www.instagram.com/dberrange :|


Does "-object" support structured options now?

2024-03-03 Thread Chun Feng Wu
Hi,

I noticed that throttle-group can be created with “-object”; however, per the 
QEMU doc (https://github.com/qemu/qemu/blob/master/docs/throttle.txt), 
“-object” didn’t support structured options at the time:

“
A throttle-group can also be created with the -object command line
option but at the moment there is no way to pass a 'limits' parameter
that contains a ThrottleLimits structure. The solution is to set the
individual values directly, like in this example:

   -object throttle-group,id=group0,x-iops-total=1000,x-bps-write=2097152

Note however that this is not a stable API (hence the 'x-' prefixes) and
will disappear when -object gains support for structured options and
enables use of 'limits'.
“

Does anybody know if the latest QEMU code still lacks such 
support (structured options for -object)? If so, is there any plan to 
support it (instead of the non-stable API)?


--
Thanks and Regards,

Wu




Does "'throttle' block filter" work?

2023-10-25 Thread Chun Feng Wu
Hi,

I am trying to use the “'throttle' block filter” mentioned at 
https://github.com/qemu/qemu/blob/master/docs/throttle.txt; however, it does 
not seem to work with the following steps. Did I miss or misunderstand anything?

1. In RHEL 8.8, I created one vm

qemu-kvm -m 2048 -drive file=/virt/images/focal-server-cloudimg-amd64.img 
-drive file=/virt/disks/vm1_disk_1.qcow2 -drive 
file=/virt/disks/vm1_disk_2.qcow2 -serial stdio -vga virtio -display default 
-qmp tcp:localhost:,server,wait=off

2. telnet to the VM's QMP port

telnet localhost 
{"execute": "qmp_capabilities"}

3. add a new data disk
{"execute": "blockdev-add", "arguments": {"node-name": "disk7", "driver": 
"qcow2", "file": {"driver": "file", "filename": 
"/virt/disks/vm1_disk_7.qcow2"}}}

4. create a new throttle group
{"execute": "object-add", "arguments": {"qom-type": "throttle-group", "id": 
"limit1", "limits": {"iops-total": 1000}}}

5. add a throttle filter to disk
{"execute": "blockdev-add", "arguments": {"driver": "throttle", "node-name": 
"throttle1", "throttle-group": "limit1", "file": "disk7"}}

6. associate the disk with vm
{"execute": "device_add", "arguments": {"driver": "virtio-blk", "id": "blk7", 
"drive": "disk7"}}
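
One thing worth double-checking against docs/throttle.txt: in its chaining 
examples the guest device sits on top of the filter node, so a variant of 
step 6 that routes I/O through the filter (rather than the bare disk node) 
would look like this sketch, reusing the same ids:

```
{"execute": "device_add", "arguments": {"driver": "virtio-blk", "id": "blk7", 
"drive": "throttle1"}}
```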

7. fio test within the VM (the limit is 1000 IOPS, while fio gets 2445 on average)
fio --name=mytest --ioengine=sync --rw=randwrite --bs=4k --size=20G --numjobs=1 
--time_based --runtime=300s --filename=/dev/vda
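
As a quick sanity check on the numbers (simple arithmetic, nothing 
QEMU-specific): at the configured 1000 IOPS cap with 4 KiB requests, fio 
should see about 4000 KiB/s, whereas the run below reports 9784 KiB/s, 
i.e. roughly 2446 IOPS, so the cap is clearly not taking effect:

```python
# Expected throughput under the 1000 IOPS cap vs. what fio reported
# (values taken from the run output below).

bs_kib = 4                      # fio block size: 4 KiB
cap_iops = 1000                 # iops-total in throttle group "limit1"
observed_kibps = 9784           # BW reported by fio, in KiB/s

expected_kibps = cap_iops * bs_kib       # 4000 KiB/s if throttling applied
observed_iops = observed_kibps / bs_kib  # IOPS actually achieved

print(expected_kibps, observed_iops)     # -> 4000 2446.0
```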

mytest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=sync, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [f(1)][100.0%][eta 00m:00s]
mytest: (groupid=0, jobs=1): err= 0: pid=959: Tue Oct 24 15:11:38 2023
  write: IOPS=2446, BW=9784KiB/s (10.0MB/s)(2867MiB/300016msec); 0 zone resets
clat (usec): min=3, max=131566, avg=399.31, stdev=3128.70
 lat (usec): min=4, max=131568, avg=401.12, stdev=3128.72
clat percentiles (usec):
 |  1.00th=[5],  5.00th=[6], 10.00th=[6], 20.00th=[6],
 | 30.00th=[6], 40.00th=[7], 50.00th=[7], 60.00th=[8],
 | 70.00th=[   10], 80.00th=[   11], 90.00th=[   29], 95.00th=[   94],
 | 99.00th=[23462], 99.50th=[23987], 99.90th=[31589], 99.95th=[33817],
 | 99.99th=[43779]
   bw (  KiB/s): min=  896, max=248856, per=99.98%, avg=9781.85, 
stdev=26238.98, samples=600
   iops: min=  224, max=62214, avg=2445.43, stdev=6559.75, samples=600
  lat (usec)   : 4=0.01%, 10=76.84%, 20=11.91%, 50=3.37%, 100=3.29%
  lat (usec)   : 250=2.49%, 500=0.33%, 750=0.07%, 1000=0.04%
  lat (msec)   : 2=0.04%, 4=0.02%, 10=0.02%, 20=0.33%, 50=1.24%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu  : usr=1.32%, sys=4.32%, ctx=23971, majf=0, minf=11
  IO depths: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
 issued rwts: total=0,733855,0,0 short=0,0,0,0 dropped=0,0,0,0
 latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=9784KiB/s (10.0MB/s), 9784KiB/s-9784KiB/s (10.0MB/s-10.0MB/s), 
io=2867MiB (3006MB), run=300016-300016msec

Disk stats (read/write):
  vda: ios=0/660825, merge=0/17257, ticks=0/39116357, in_queue=38236828, 
util=98.23%