Re: [PULL 00/28] Block layer patches

2023-09-19 Thread Stefan Hajnoczi
On Tue, 19 Sept 2023 at 06:26, Kevin Wolf wrote:
> On 18.09.2023 at 20:56, Stefan Hajnoczi wrote:
> If we could fully get rid of the AioContext lock (as we originally
> stated as a goal), that would automatically solve this kind of
> deadlocks.

Grepping for "ctx locked", "context acquired", etc. does not bring up
many comments describing variables that are protected by the
AioContext lock.

However, there are at least hundreds of functions that assume they are
called with the AioContext lock held.

There are a few strategies (a before/after sketch follows the list):

Top-down
--------
Shorten AioContext lock critical sections to cover only the APIs that need
them. Then push the lock down into each API and repeat at the next lower
level until aio_context_acquire() + AIO_WAIT_WHILE() + aio_context_release()
can be replaced with AIO_WAIT_WHILE_UNLOCKED().

Bottom-up
---------
Switch AIO_WAIT_WHILE() to aio_context_release() + AIO_WAIT_WHILE_UNLOCKED() +
aio_context_acquire(). Then move the lock up into callers and repeat at the
next higher level until aio_context_acquire() + aio_context_release() cancel
each other out.

Big bang
--------
Remove aio_context_acquire/release() and fix tests until they pass.
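
As a sketch of what the conversions aim for (hypothetical code, not a real
QEMU function; start_operation() and operation_done() are made up):

#include "qemu/osdep.h"
#include "block/aio-wait.h"
#include "block/block.h"

/* Made-up helpers, for illustration only */
void start_operation(BlockDriverState *bs);
bool operation_done(BlockDriverState *bs);

/* Before: the caller wraps the whole critical section in the
 * AioContext lock and waits while holding it */
void do_something_locked(BlockDriverState *bs)
{
    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);
    start_operation(bs);
    AIO_WAIT_WHILE(ctx, !operation_done(bs));
    aio_context_release(ctx);
}

/* After (the endpoint both top-down and bottom-up aim for): the callees
 * take whatever locks they still need internally, so the caller holds
 * nothing and can wait unlocked */
void do_something_unlocked(BlockDriverState *bs)
{
    start_operation(bs);
    AIO_WAIT_WHILE_UNLOCKED(NULL, !operation_done(bs));
}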

I think top-down is safer than bottom-up, because bottom-up is more
likely to cause issues with callers that do not tolerate temporarily
dropping the lock.
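
For example, this kind of caller breaks (hypothetical, continuing the
sketch above; wait_for_completion() is made up and total_sectors is just a
stand-in for any state the lock protects):

void wait_for_completion(BlockDriverState *bs);  /* made up */

void fragile_caller(BlockDriverState *bs)
{
    AioContext *ctx = bdrv_get_aio_context(bs);
    int64_t x;

    aio_context_acquire(ctx);
    x = bs->total_sectors;          /* state assumed stable under the lock */
    wait_for_completion(bs);        /* a bottom-up conversion makes this drop
                                     * and reacquire ctx internally... */
    assert(x == bs->total_sectors); /* ...so this can now fail */
    aio_context_release(ctx);
}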

The big bang approach is only reasonable if the AioContext lock is no
longer used to protect variables (which we don't know for sure because
that requires auditing every line of code).

My concern with the top-down approach is that so much code needs to be
audited and the conversions are temporary steps (it's almost a waste
of time for maintainers to review them).

I'm tempted to go for the big bang approach but also don't want to
introduce a slew of new race conditions. :/

Stefan

Re: [PULL 00/28] Block layer patches

2023-09-19 Thread Stefan Hajnoczi
On Tue, 19 Sept 2023 at 06:26, Kevin Wolf wrote:
>
> On 18.09.2023 at 20:56, Stefan Hajnoczi wrote:
> > Hi Kevin,
> > I believe that my own commit "block-coroutine-wrapper: use
> > qemu_get_current_aio_context()" breaks this test. The failure is
> > non-deterministic (happens about 1 out of 4 runs).
> >
> > It seems the job hangs and the test times out in vm.run_job('job1', 
> > wait=5.0).
> >
> > I haven't debugged it yet but wanted to share this information to save
> > some time. Tomorrow I'll investigate further.
>
> Yes, it's relatively easily reproducible if I run the test in a loop,
> and I can't seem to reproduce it without the last patch. Should I
> unstage the full series again, or do you think that the last patch is
> really optional this time?

Please drop the last patch. I'm not aware of anything that depends on it.

> However, I'm unsure how the stack traces I'm seeing are related to your
> patch. Maybe it just made an existing bug more likely to be triggered?

I'll share my thoughts once I've looked at the crash today.

Regarding AioContext lock removal: I'll work on that and see what
still depends on the lock.

Stefan

Re: [PULL 00/28] Block layer patches

2023-09-19 Thread Kevin Wolf
On 18.09.2023 at 20:56, Stefan Hajnoczi wrote:
> Hi Kevin,
> I believe that my own commit "block-coroutine-wrapper: use
> qemu_get_current_aio_context()" breaks this test. The failure is
> non-deterministic (happens about 1 out of 4 runs).
> 
> It seems the job hangs and the test times out in vm.run_job('job1', wait=5.0).
> 
> I haven't debugged it yet but wanted to share this information to save
> some time. Tomorrow I'll investigate further.

Yes, it's relatively easily reproducible if I run the test in a loop,
and I can't seem to reproduce it without the last patch. Should I
unstage the full series again, or do you think that the last patch is
really optional this time?

However, I'm unsure how the stack traces I'm seeing are related to your
patch. Maybe it just made an existing bug more likely to be triggered?

What I'm seeing is that the reader lock is held by an iothread that is
waiting for its AioContext lock to make progress:

Thread 3 (Thread 0x7f811e9346c0 (LWP 26390) "qemu-system-x86"):
#0  0x7f81250aaf80 in __lll_lock_wait () at /lib64/libc.so.6
#1  0x7f81250b149a in pthread_mutex_lock@@GLIBC_2.2.5 () at /lib64/libc.so.6
#2  0x55b7b170967e in qemu_mutex_lock_impl (mutex=0x55b7b34e3080, file=0x55b7b199e1f7 "../util/async.c", line=728) at ../util/qemu-thread-posix.c:94
#3  0x55b7b1709953 in qemu_rec_mutex_lock_impl (mutex=0x55b7b34e3080, file=0x55b7b199e1f7 "../util/async.c", line=728) at ../util/qemu-thread-posix.c:149
#4  0x55b7b1728318 in aio_context_acquire (ctx=0x55b7b34e3020) at ../util/async.c:728
#5  0x55b7b1727c49 in co_schedule_bh_cb (opaque=0x55b7b34e3020) at ../util/async.c:565
#6  0x55b7b1726f1c in aio_bh_call (bh=0x55b7b34e2e70) at ../util/async.c:169
#7  0x55b7b17270ee in aio_bh_poll (ctx=0x55b7b34e3020) at ../util/async.c:216
#8  0x55b7b170351d in aio_poll (ctx=0x55b7b34e3020, blocking=true) at ../util/aio-posix.c:722
#9  0x55b7b1518604 in iothread_run (opaque=0x55b7b2904460) at ../iothread.c:63
#10 0x55b7b170a955 in qemu_thread_start (args=0x55b7b34e36b0) at ../util/qemu-thread-posix.c:541
#11 0x7f81250ae15d in start_thread () at /lib64/libc.so.6
#12 0x7f812512fc00 in clone3 () at /lib64/libc.so.6

On the other hand, the main thread wants to acquire the writer lock,
but it holds the AioContext lock of the iothread (it takes it in
job_prepare_locked()):

Thread 1 (Thread 0x7f811f4b7b00 (LWP 26388) "qemu-system-x86"):
#0  0x7f8125122356 in ppoll () at /lib64/libc.so.6
#1  0x55b7b172eae0 in qemu_poll_ns (fds=0x55b7b34ec910, nfds=1, timeout=-1) at ../util/qemu-timer.c:339
#2  0x55b7b1704ebd in fdmon_poll_wait (ctx=0x55b7b3269210, ready_list=0x7ffc90b05680, timeout=-1) at ../util/fdmon-poll.c:79
#3  0x55b7b1703284 in aio_poll (ctx=0x55b7b3269210, blocking=true) at ../util/aio-posix.c:670
#4  0x55b7b1567c3b in bdrv_graph_wrlock (bs=0x0) at ../block/graph-lock.c:145
#5  0x55b7b1554c1c in blk_remove_bs (blk=0x55b7b4425800) at ../block/block-backend.c:916
#6  0x55b7b1554779 in blk_delete (blk=0x55b7b4425800) at ../block/block-backend.c:497
#7  0x55b7b1554133 in blk_unref (blk=0x55b7b4425800) at ../block/block-backend.c:557
#8  0x55b7b157a149 in mirror_exit_common (job=0x55b7b4419000) at ../block/mirror.c:696
#9  0x55b7b1577015 in mirror_prepare (job=0x55b7b4419000) at ../block/mirror.c:807
#10 0x55b7b153a1a7 in job_prepare_locked (job=0x55b7b4419000) at ../job.c:988
#11 0x55b7b153a0d9 in job_txn_apply_locked (job=0x55b7b4419000, fn=0x55b7b153a110 <job_prepare_locked>) at ../job.c:191
#12 0x55b7b1538b6d in job_do_finalize_locked (job=0x55b7b4419000) at ../job.c:1011
#13 0x55b7b153a886 in job_completed_txn_success_locked (job=0x55b7b4419000) at ../job.c:1068
#14 0x55b7b1539372 in job_completed_locked (job=0x55b7b4419000) at ../job.c:1082
#15 0x55b7b153a71b in job_exit (opaque=0x55b7b4419000) at ../job.c:1103
#16 0x55b7b1726f1c in aio_bh_call (bh=0x7f8110005470) at ../util/async.c:169
#17 0x55b7b17270ee in aio_bh_poll (ctx=0x55b7b3269210) at ../util/async.c:216
#18 0x55b7b1702c05 in aio_dispatch (ctx=0x55b7b3269210) at ../util/aio-posix.c:423
#19 0x55b7b1728a14 in aio_ctx_dispatch (source=0x55b7b3269210, callback=0x0, user_data=0x0) at ../util/async.c:358
#20 0x7f8126c31c7f in g_main_dispatch (context=0x55b7b3269720) at ../glib/gmain.c:3454
#21 g_main_context_dispatch (context=0x55b7b3269720) at ../glib/gmain.c:4172
#22 0x55b7b1729c98 in glib_pollfds_poll () at ../util/main-loop.c:290
#23 0x55b7b1729572 in os_host_main_loop_wait (timeout=27462700) at ../util/main-loop.c:313
#24 0x55b7b1729452 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:592
#25 0x55b7b119a1eb in qemu_main_loop () at ../softmmu/runstate.c:772
#26 0x55b7b14c102d in qemu_default_main () at ../softmmu/main.c:37
#27 0x55b7b14c1068 in main (argc=44, argv=0x7ffc90b05d58) at ../softmmu/main.c:48
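
Schematically (summarizing the two backtraces above as a C comment; this is
not part of the gdb output):

/*
 * Thread 3 (iothread)                  Thread 1 (main loop)
 * -------------------                  --------------------
 * holds: graph reader lock             holds: the iothread's AioContext
 *        (the coroutine resumed               lock (taken around
 *        from co_schedule_bh_cb)              job_prepare_locked())
 * wants: its own AioContext lock       wants: the graph writer lock
 *        (aio_context_acquire,                (bdrv_graph_wrlock waits for
 *        frame #4)                            all readers to drain)
 *
 * Each thread waits for a lock the other holds, so neither progresses.
 */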

At first I thought we just need to look into 

Re: [PULL 00/28] Block layer patches

2023-09-18 Thread Stefan Hajnoczi
Hi Kevin,
I believe that my own commit "block-coroutine-wrapper: use
qemu_get_current_aio_context()" breaks this test. The failure is
non-deterministic (happens about 1 out of 4 runs).

It seems the job hangs and the test times out in vm.run_job('job1', wait=5.0).

I haven't debugged it yet but wanted to share this information to save
some time. Tomorrow I'll investigate further.
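
For reference, this is roughly where the test gets stuck (paraphrased
sketch, not the verbatim test; the setup is elided and only the job name
and timeout come from the failure output):

import iotests

vm = iotests.VM()
vm.launch()
# ... image creation, iothread setup and starting the commit job elided ...
vm.run_job('job1', wait=5.0)   # polls the QMP event stream; when the job
                               # hangs, the 5 s timeout expires and asyncio
                               # cancels the pending event read, which is
                               # where the CancelledError tracebacks come from
vm.shutdown()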

Stefan



Re: [PULL 00/28] Block layer patches

2023-09-18 Thread Stefan Hajnoczi
Hi Kevin,
The following CI failure looks like it is related to this pull
request. Please take a look:
https://gitlab.com/qemu-project/qemu/-/jobs/5112083994

▶ 823/840 qcow2 iothreads-commit-active FAIL
823/840 qemu:block / io-qcow2-iothreads-commit-active ERROR 6.16s exit status 1
>>> MALLOC_PERTURB_=184 PYTHON=/home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/build/pyvenv/bin/python3 /home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/build/pyvenv/bin/python3 /home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/build/../tests/qemu-iotests/check -tap -qcow2 iothreads-commit-active --source-dir /home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/tests/qemu-iotests --build-dir /home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/build/tests/qemu-iotests
― ✀ ―
stderr:
--- /home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/tests/qemu-iotests/tests/iothreads-commit-active.out
+++ /home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/build/scratch/qcow2-file-iothreads-commit-active/iothreads-commit-active.out.bad
@@ -18,6 +18,35 @@
 {"execute": "job-complete", "arguments": {"id": "job1"}}
 {"return": {}}
 {"data": {"device": "job1", "len": 131072, "offset": 131072, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_READY", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "job1", "len": 131072, "offset": 131072, "speed": 0, "type": "commit"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"execute": "job-dismiss", "arguments": {"id": "job1"}}
-{"return": {}}
+Traceback (most recent call last):
+  File "/home/gitlab-runner/builds/E8PpwMky/0/qemu-project/qemu/python/qemu/qmp/events.py", line 557, in get
+    return await self._queue.get()
+  File "/usr/lib/python3.10/asyncio/queues.py", line 159, in get
+    await getter
+asyncio.exceptions.CancelledError
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+  File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
+    return fut.result()
+asyncio.exceptions.CancelledError
+
+The above exception was the direct cause of the following exception:


[PULL 00/28] Block layer patches

2023-09-15 Thread Kevin Wolf
The following changes since commit 005ad32358f12fe9313a4a01918a55e60d4f39e5:

  Merge tag 'pull-tpm-2023-09-12-3' of https://github.com/stefanberger/qemu-tpm 
into staging (2023-09-13 13:41:57 -0400)

are available in the Git repository at:

  https://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 5d96864b73225ee61b0dad7e928f0cddf14270fc:

  block-coroutine-wrapper: use qemu_get_current_aio_context() (2023-09-15 
15:49:14 +0200)


Block layer patches

- Graph locking part 4 (node management)
- qemu-img map: report compressed data blocks
- block-backend: process I/O in the current AioContext


Andrey Drobyshev via (2):
  block: add BDRV_BLOCK_COMPRESSED flag for bdrv_block_status()
  qemu-img: map: report compressed data blocks

Kevin Wolf (21):
  block: Remove unused BlockReopenQueueEntry.perms_checked
  preallocate: Factor out preallocate_truncate_to_real_size()
  preallocate: Don't poll during permission updates
  block: Take AioContext lock for bdrv_append() more consistently
  block: Introduce bdrv_schedule_unref()
  block-coroutine-wrapper: Add no_co_wrapper_bdrv_wrlock functions
  block-coroutine-wrapper: Allow arbitrary parameter names
  block: Mark bdrv_replace_child_noperm() GRAPH_WRLOCK
  block: Mark bdrv_replace_child_tran() GRAPH_WRLOCK
  block: Mark bdrv_attach_child_common() GRAPH_WRLOCK
  block: Call transaction callbacks with lock held
  block: Mark bdrv_attach_child() GRAPH_WRLOCK
  block: Mark bdrv_parent_perms_conflict() and callers GRAPH_RDLOCK
  block: Mark bdrv_get_cumulative_perm() and callers GRAPH_RDLOCK
  block: Mark bdrv_child_perm() GRAPH_RDLOCK
  block: Mark bdrv_parent_cb_change_media() GRAPH_RDLOCK
  block: Take graph rdlock in bdrv_drop_intermediate()
  block: Take graph rdlock in bdrv_change_aio_context()
  block: Mark bdrv_root_unref_child() GRAPH_WRLOCK
  block: Mark bdrv_unref_child() GRAPH_WRLOCK
  block: Mark bdrv_add/del_child() and caller GRAPH_WRLOCK

Stefan Hajnoczi (5):
  block: remove AIOCBInfo->get_aio_context()
  test-bdrv-drain: avoid race with BH in IOThread drain test
  block-backend: process I/O in the current AioContext
  block-backend: process zoned requests in the current AioContext
  block-coroutine-wrapper: use qemu_get_current_aio_context()

 qapi/block-core.json |   6 +-
 include/block/aio.h  |   1 -
 include/block/block-common.h |   7 +
 include/block/block-global-state.h   |  32 +-
 include/block/block-io.h |   1 -
 include/block/block_int-common.h |  34 +-
 include/block/block_int-global-state.h   |  14 +-
 include/sysemu/block-backend-global-state.h  |   4 +-
 block.c  | 348 +++---
 block/blklogwrites.c |   4 +
 block/blkverify.c|   2 +
 block/block-backend.c|  64 +-
 block/copy-before-write.c|  10 +-
 block/crypto.c   |   6 +-
 block/graph-lock.c   |  26 +-
 block/io.c   |  23 +-
 block/mirror.c   |   8 +
 block/preallocate.c  | 133 ++--
 block/qcow.c |   5 +-
 block/qcow2.c|   7 +-
 block/quorum.c   |  23 +-
 block/replication.c  |   9 +
 block/snapshot.c |   2 +
 block/stream.c   |  20 +-
 block/vmdk.c |  15 +
 blockdev.c   |  23 +-
 blockjob.c   |   2 +
 hw/nvme/ctrl.c   |   7 -
 qemu-img.c   |   8 +-
 softmmu/dma-helpers.c|   8 -
 tests/unit/test-bdrv-drain.c |  31 +-
 tests/unit/test-bdrv-graph-mod.c |  20 +
 tests/unit/test-block-iothread.c |   3 +
 util/thread-pool.c   |   8 -
 scripts/block-coroutine-wrapper.py   |  24 +-
 tests/qemu-iotests/051.pc.out|   6 +-
 tests/qemu-iotests/122.out   |  84 +--
 tests/qemu-iotests/146.out   | 780 +++
 tests/qemu-iotests/154.out   | 194 +++---
 tests/qemu-iotests/179.out   | 178 +++---
 tests/qemu-iotests/209.out   |   4 +-
 tests/qemu-iotests/221.out   

Re: [PULL 00/28] Block layer patches

2023-05-10 Thread Richard Henderson

On 5/10/23 13:20, Kevin Wolf wrote:

The following changes since commit b2896c1b09878fd1c4b485b3662f8beecbe0fef4:

   Merge tag 'vfio-updates-20230509.0' of 
https://gitlab.com/alex.williamson/qemu into staging (2023-05-10 11:20:35 +0100)

are available in the Git repository at:

   https://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 58a2e3f5c37be02dac3086b81bdda9414b931edf:

   block: compile out assert_bdrv_graph_readable() by default (2023-05-10 
14:16:54 +0200)


Block layer patches

- Graph locking, part 3 (more block drivers)
- Compile out assert_bdrv_graph_readable() by default
- Add configure options for vmdk, vhdx and vpc
- Fix use after free in blockdev_mark_auto_del()
- migration: Attempt disk reactivation in more failure scenarios
- Coroutine correctness fixes


Applied, thanks.  Please update https://wiki.qemu.org/ChangeLog/8.1 as 
appropriate.


r~





[PULL 00/28] Block layer patches

2023-05-10 Thread Kevin Wolf
The following changes since commit b2896c1b09878fd1c4b485b3662f8beecbe0fef4:

  Merge tag 'vfio-updates-20230509.0' of 
https://gitlab.com/alex.williamson/qemu into staging (2023-05-10 11:20:35 +0100)

are available in the Git repository at:

  https://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 58a2e3f5c37be02dac3086b81bdda9414b931edf:

  block: compile out assert_bdrv_graph_readable() by default (2023-05-10 
14:16:54 +0200)


Block layer patches

- Graph locking, part 3 (more block drivers)
- Compile out assert_bdrv_graph_readable() by default
- Add configure options for vmdk, vhdx and vpc
- Fix use after free in blockdev_mark_auto_del()
- migration: Attempt disk reactivation in more failure scenarios
- Coroutine correctness fixes


Emanuele Giuseppe Esposito (5):
  nbd: Mark nbd_co_do_establish_connection() and callers GRAPH_RDLOCK
  block: Mark bdrv_co_get_allocated_file_size() and callers GRAPH_RDLOCK
  block: Mark bdrv_co_get_info() and callers GRAPH_RDLOCK
  block: Mark bdrv_co_debug_event() GRAPH_RDLOCK
  block: Mark BlockDriver callbacks for amend job GRAPH_RDLOCK

Eric Blake (1):
  migration: Attempt disk reactivation in more failure scenarios

Kevin Wolf (18):
  block: Fix use after free in blockdev_mark_auto_del()
  iotests/nbd-reconnect-on-open: Fix NBD socket path
  qcow2: Don't call bdrv_getlength() in coroutine_fns
  block: Consistently call bdrv_activate() outside coroutine
  block: bdrv/blk_co_unref() for calls in coroutine context
  block: Don't call no_coroutine_fns in qmp_block_resize()
  iotests: Test resizing image attached to an iothread
  test-bdrv-drain: Don't modify the graph in coroutines
  graph-lock: Add GRAPH_UNLOCKED(_PTR)
  graph-lock: Fix GRAPH_RDLOCK_GUARD*() to be reader lock
  block: .bdrv_open is non-coroutine and unlocked
  nbd: Remove nbd_co_flush() wrapper function
  vhdx: Require GRAPH_RDLOCK for accessing a node's parent list
  mirror: Require GRAPH_RDLOCK for accessing a node's parent list
  block: Mark bdrv_query_bds_stats() and callers GRAPH_RDLOCK
  block: Mark bdrv_query_block_graph_info() and callers GRAPH_RDLOCK
  block: Mark bdrv_recurse_can_replace() and callers GRAPH_RDLOCK
  block: Mark bdrv_refresh_limits() and callers GRAPH_RDLOCK

Paolo Bonzini (1):
  block: add missing coroutine_fn annotations

Stefan Hajnoczi (2):
  aio-wait: avoid AioContext lock in aio_wait_bh_oneshot()
  block: compile out assert_bdrv_graph_readable() by default

Vladimir Sementsov-Ogievskiy (1):
  block: add configure options for excluding vmdk, vhdx and vpc

 meson_options.txt  |   8 ++
 configure  |   1 +
 block/coroutines.h |   5 +-
 block/qcow2.h  |   4 +-
 include/block/aio-wait.h   |   2 +-
 include/block/block-global-state.h |  19 +++-
 include/block/block-io.h   |  23 +++--
 include/block/block_int-common.h   |  37 +++
 include/block/block_int-global-state.h |   4 +-
 include/block/graph-lock.h |  20 ++--
 include/block/qapi.h   |   7 +-
 include/sysemu/block-backend-global-state.h|   5 +-
 block.c|  25 -
 block/amend.c  |   8 +-
 block/blkverify.c  |   5 +-
 block/block-backend.c  |  10 +-
 block/crypto.c |   8 +-
 block/graph-lock.c |   3 +
 block/io.c |  12 +--
 block/mirror.c |  18 +++-
 block/nbd.c|  50 +
 block/parallels.c  |   6 +-
 block/qapi.c   |   6 +-
 block/qcow.c   |   6 +-
 block/qcow2-refcount.c |   2 +-
 block/qcow2.c  |  48 -
 block/qed.c|  24 ++---
 block/quorum.c |   4 +-
 block/raw-format.c |   2 +-
 block/vdi.c|   6 +-
 block/vhdx.c   |  15 +--
 block/vmdk.c   |  20 ++--
 block/vpc.c|   6 +-
 blockdev.c |  25 +++--
 hw/block/dataplane/virtio-blk.c|   3 +-
 

Re: [PULL 00/28] Block layer patches

2021-07-10 Thread Peter Maydell
On Fri, 9 Jul 2021 at 13:50, Kevin Wolf wrote:
>
> The following changes since commit 9db3065c62a983286d06c207f4981408cf42184d:
>
>   Merge remote-tracking branch 
> 'remotes/vivier2/tags/linux-user-for-6.1-pull-request' into staging 
> (2021-07-08 16:30:18 +0100)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to e60edf69e2f64e818466019313517a2e6d6b63f4:
>
>   block: Make blockdev-reopen stable API (2021-07-09 13:19:11 +0200)
>
> 
> Block layer patches
>
> - Make blockdev-reopen stable
> - Remove deprecated qemu-img backing file without format
> - rbd: Convert to coroutines and add write zeroes support
> - rbd: Updated MAINTAINERS
> - export/fuse: Allow other users access to the export
> - vhost-user: Fix backends without multiqueue support
> - Fix drive-backup transaction endless drained section


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/6.1
for any user-visible changes.

-- PMM



[PULL 00/28] Block layer patches

2021-07-09 Thread Kevin Wolf
The following changes since commit 9db3065c62a983286d06c207f4981408cf42184d:

  Merge remote-tracking branch 
'remotes/vivier2/tags/linux-user-for-6.1-pull-request' into staging (2021-07-08 
16:30:18 +0100)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to e60edf69e2f64e818466019313517a2e6d6b63f4:

  block: Make blockdev-reopen stable API (2021-07-09 13:19:11 +0200)


Block layer patches

- Make blockdev-reopen stable
- Remove deprecated qemu-img backing file without format
- rbd: Convert to coroutines and add write zeroes support
- rbd: Updated MAINTAINERS
- export/fuse: Allow other users access to the export
- vhost-user: Fix backends without multiqueue support
- Fix drive-backup transaction endless drained section


Alberto Garcia (4):
  block: Add bdrv_reopen_queue_free()
  block: Support multiple reopening with x-blockdev-reopen
  iotests: Test reopening multiple devices at the same time
  block: Make blockdev-reopen stable API

Eric Blake (3):
  qcow2: Prohibit backing file changes in 'qemu-img amend'
  qemu-img: Require -F with -b backing image
  qemu-img: Improve error for rebase without backing format

Heinrich Schuchardt (1):
  util/uri: do not check argument of uri_free()

Ilya Dryomov (1):
  MAINTAINERS: update block/rbd.c maintainer

Kevin Wolf (3):
  vhost-user: Fix backends without multiqueue support
  qcow2: Fix dangling pointer after reopen for 'file'
  block: Acquire AioContexts during bdrv_reopen_multiple()

Max Reitz (6):
  export/fuse: Pass default_permissions for mount
  export/fuse: Add allow-other option
  export/fuse: Give SET_ATTR_SIZE its own branch
  export/fuse: Let permissions be adjustable
  iotests/308: Test +w on read-only FUSE exports
  iotests/fuse-allow-other: Test allow-other

Or Ozeri (1):
  block/rbd: Add support for rbd image encryption

Peter Lieven (8):
  block/rbd: bump librbd requirement to luminous release
  block/rbd: store object_size in BDRVRBDState
  block/rbd: update s->image_size in qemu_rbd_getlength
  block/rbd: migrate from aio to coroutines
  block/rbd: add write zeroes support
  block/rbd: drop qemu_rbd_refresh_limits
  block/rbd: fix type of task->complete
  MAINTAINERS: add block/rbd.c reviewer

Vladimir Sementsov-Ogievskiy (1):
  blockdev: fix drive-backup transaction endless drained section

 qapi/block-core.json   | 134 +++-
 qapi/block-export.json |  33 +-
 docs/system/deprecated.rst |  32 -
 docs/system/removed-features.rst   |  31 +
 include/block/block.h  |   3 +
 block.c| 108 +--
 block/export/fuse.c| 121 +++-
 block/nfs.c|   4 +-
 block/qcow2.c  |  42 +-
 block/rbd.c| 749 +
 block/replication.c|   7 +
 block/ssh.c|   4 +-
 blockdev.c |  77 ++-
 hw/virtio/vhost-user.c |   3 +
 qemu-img.c |   9 +-
 qemu-io-cmds.c |   7 +-
 util/uri.c |  22 +-
 MAINTAINERS|   3 +-
 meson.build|   7 +-
 tests/qemu-iotests/040 |   4 +-
 tests/qemu-iotests/041 |   6 +-
 tests/qemu-iotests/061 |   3 +
 tests/qemu-iotests/061.out |   3 +-
 tests/qemu-iotests/082.out |   6 +-
 tests/qemu-iotests/114 |  18 +-
 tests/qemu-iotests/114.out |  11 +-
 tests/qemu-iotests/155 |   9 +-
 tests/qemu-iotests/165 |   4 +-
 tests/qemu-iotests/245 |  78 ++-
 tests/qemu-iotests/245.out |   4 +-
 tests/qemu-iotests/248 |   4 +-
 tests/qemu-iotests/248.out |   2 +-
 tests/qemu-iotests/296 |  11 +-
 tests/qemu-iotests/298 |   4 +-
 tests/qemu-iotests/301 |   4 +-
 tests/qemu-iotests/301.out |  16 +-
 tests/qemu-iotests/308 |  20 +-
 tests/qemu-iotests/308.out |   6 +-
 tests/qemu-iotests/common.rc   | 

Re: [Qemu-devel] [PULL 00/28] Block layer patches

2019-06-03 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20190603150233.6614-1-kw...@redhat.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Subject: [Qemu-devel] [PULL 00/28] Block layer patches
Type: series
Message-id: 20190603150233.6614-1-kw...@redhat.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

From https://github.com/patchew-project/qemu
   ad88e4252f..e2a58ff493  master -> master
From https://github.com/patchew-project/qemu
 * [new tag]   patchew/20190603150233.6614-1-kw...@redhat.com -> patchew/20190603150233.6614-1-kw...@redhat.com
Switched to a new branch 'test'
220e753b45 iotests: Fix duplicated diff output on failure
0b71e618fa block/io: bdrv_pdiscard: support int64_t bytes parameter
e7a4ac900f block/qcow2-refcount: add trace-point to qcow2_process_discards
4dad653c53 block: Remove bdrv_set_aio_context()
3ab629f6ae test-bdrv-drain: Use bdrv_try_set_aio_context()
a2c999ab92 iotests: Attach new devices to node in non-default iothread
9882a15cb7 virtio-scsi-test: Test attaching new overlay with iothreads
bad8ab68a2 block: Remove wrong bdrv_set_aio_context() calls
4b94672417 blockdev: Use bdrv_try_set_aio_context() for monitor commands
ac65c7ccfd block: Move node without parents to main AioContext
f3f5afac50 test-block-iothread: BlockBackend AioContext across root node change
5be6b76fd8 test-block-iothread: Test adding parent to iothread node
d44a5b97a1 block: Adjust AioContexts when attaching nodes
032072dbdd scsi-disk: Use qdev_prop_drive_iothread
08213d11da block: Add qdev_prop_drive_iothread property type
3143af0939 block: Add BlockBackend.ctx
0218a8120b block: Add Error to blk_set_aio_context()
8dec7e6e5d nbd-server: Call blk_set_allow_aio_context_change()
4fab052a83 test-block-iothread: Check filter node in test_propagate_mirror
bec547ff8c nvme: add Get/Set Feature Timestamp support
daedce9044 block/linux-aio: Drop unused BlockAIOCB submission method
a620ab7c2f iotests: Test cancelling a job and closing the VM
b13df9380c block/io: Delay decrementing the quiesce_counter
96436a8e4d block: avoid recursive block_status call if possible
3b06eca03a tests/perf: Test lseek influence on qcow2 block-status
24a7a53346 blockdev: fix missed target unref for drive-backup
73ec47bcf8 iotests: Test commit job start with concurrent I/O
c8758f4aab block: Drain source node in bdrv_replace_node()

=== OUTPUT BEGIN ===
1/28 Checking commit c8758f4aabfc (block: Drain source node in 
bdrv_replace_node())
2/28 Checking commit 73ec47bcf823 (iotests: Test commit job start with 
concurrent I/O)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#16: 
new file mode 100755

total: 0 errors, 1 warnings, 131 lines checked

Patch 2/28 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
3/28 Checking commit 24a7a533464e (blockdev: fix missed target unref for 
drive-backup)
4/28 Checking commit 3b06eca03aff (tests/perf: Test lseek influence on qcow2 
block-status)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#20: 
new file mode 100755

total: 0 errors, 1 warnings, 71 lines checked

Patch 4/28 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
5/28 Checking commit 96436a8e4dba (block: avoid recursive block_status call if 
possible)
6/28 Checking commit b13df9380cc9 (block/io: Delay decrementing the 
quiesce_counter)
7/28 Checking commit a620ab7c2f73 (iotests: Test cancelling a job and closing 
the VM)
8/28 Checking commit daedce9044d9 (block/linux-aio: Drop unused BlockAIOCB 
submission method)
9/28 Checking commit bec547ff8ce6 (nvme: add Get/Set Feature Timestamp support)
10/28 Checking commit 4fab052a83c2 (test-block-iothread: Check filter node in 
test_propagate_mirror)
11/28 Checking commit 8dec7e6e5db9 (nbd-server: Call 
blk_set_allow_aio_context_change())
12/28 Checking commit 0218a8120b5c (block: Add Error to blk_set_aio_context())
WARNING: Block comments use a leading /* on a separate line
#104: FILE: hw/block/dataplane/virtio-blk.c:289:
+/* Drain and try to switch bs back to the QEMU main loop. If other users

WARNING: Block comments use a trailing */ on a separate line
#105: FILE: hw/block/dataplane/virtio-blk.c:290:
+ * keep the BlockBackend in the iothread, that's ok */

total: 0 errors, 2 warnings, 259 lines checked

Patch 12/28 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
13/28 Checking commit 3143af09391d (block: Add BlockBackend.ctx)
WARNING: Block comments use a leading /* on a separate line

Re: [Qemu-devel] [PULL 00/28] Block layer patches

2019-06-03 Thread Peter Maydell
On Mon, 3 Jun 2019 at 16:05, Kevin Wolf wrote:
>
> The following changes since commit ad88e4252f09c2956b99c90de39e95bab2e8e7af:
>
>   Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-jun-1-2019' 
> into staging (2019-06-03 10:25:12 +0100)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to 9593db8ccd27800ce4a17f1d5b735b9130c541a2:
>
>   iotests: Fix duplicated diff output on failure (2019-06-03 16:33:20 +0200)
>
> 
> Block layer patches:
>
> - block: AioContext management, part 2
> - Avoid recursive block_status call (i.e. lseek() calls) if possible
> - linux-aio: Drop unused BlockAIOCB submission method
> - nvme: add Get/Set Feature Timestamp support
> - Fix crash on commit job start with active I/O on base node
> - Fix crash in bdrv_drained_end
> - Fix integer overflow in qcow2 discard
>
> 

Hi; this failed my build tests on any platform where I run
'make check':

MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}
QTEST_QEMU_BINARY=arm-softmmu/qemu-system-arm QTEST_QEMU_IMG=qemu-img
tests/qos-test -m=quick -k --tap < /dev/null | ./scripts/tap-driver.pl
--test-name="qos-test"
PASS 1 qos-test /arm/raspi2/generic-sdhci/sdhci/sdhci-tests/registers
PASS 2 qos-test /arm/sabrelite/generic-sdhci/sdhci/sdhci-tests/registers
[...]
PASS 30 qos-test
/arm/virt/virtio-mmio/virtio-bus/virtio-scsi-device/virtio-scsi/virtio-scsi-tests/hotplug
PASS 31 qos-test
/arm/virt/virtio-mmio/virtio-bus/virtio-scsi-device/virtio-scsi/virtio-scsi-tests/unaligned-write-same
qemu-system-arm: -device virtio-scsi-device,id=vs0,iothread=thread0:
ioeventfd is required for iothread
Broken pipe
/home/petmay01/linaro/qemu-for-merges/tests/libqtest.c:135:
kill_qemu() tried to terminate QEMU process but encountered exit
status 1
Aborted (core dumped)
ERROR - too few tests run (expected 37, got 31)

thanks
-- PMM



[Qemu-devel] [PULL 00/28] Block layer patches

2019-06-03 Thread Kevin Wolf
The following changes since commit ad88e4252f09c2956b99c90de39e95bab2e8e7af:

  Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-jun-1-2019' 
into staging (2019-06-03 10:25:12 +0100)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to 9593db8ccd27800ce4a17f1d5b735b9130c541a2:

  iotests: Fix duplicated diff output on failure (2019-06-03 16:33:20 +0200)


Block layer patches:

- block: AioContext management, part 2
- Avoid recursive block_status call (i.e. lseek() calls) if possible
- linux-aio: Drop unused BlockAIOCB submission method
- nvme: add Get/Set Feature Timestamp support
- Fix crash on commit job start with active I/O on base node
- Fix crash in bdrv_drained_end
- Fix integer overflow in qcow2 discard


John Snow (1):
  blockdev: fix missed target unref for drive-backup

Julia Suvorova (1):
  block/linux-aio: Drop unused BlockAIOCB submission method

Kenneth Heitke (1):
  nvme: add Get/Set Feature Timestamp support

Kevin Wolf (19):
  block: Drain source node in bdrv_replace_node()
  iotests: Test commit job start with concurrent I/O
  test-block-iothread: Check filter node in test_propagate_mirror
  nbd-server: Call blk_set_allow_aio_context_change()
  block: Add Error to blk_set_aio_context()
  block: Add BlockBackend.ctx
  block: Add qdev_prop_drive_iothread property type
  scsi-disk: Use qdev_prop_drive_iothread
  block: Adjust AioContexts when attaching nodes
  test-block-iothread: Test adding parent to iothread node
  test-block-iothread: BlockBackend AioContext across root node change
  block: Move node without parents to main AioContext
  blockdev: Use bdrv_try_set_aio_context() for monitor commands
  block: Remove wrong bdrv_set_aio_context() calls
  virtio-scsi-test: Test attaching new overlay with iothreads
  iotests: Attach new devices to node in non-default iothread
  test-bdrv-drain: Use bdrv_try_set_aio_context()
  block: Remove bdrv_set_aio_context()
  iotests: Fix duplicated diff output on failure

Max Reitz (2):
  block/io: Delay decrementing the quiesce_counter
  iotests: Test cancelling a job and closing the VM

Vladimir Sementsov-Ogievskiy (4):
  tests/perf: Test lseek influence on qcow2 block-status
  block: avoid recursive block_status call if possible
  block/qcow2-refcount: add trace-point to qcow2_process_discards
  block/io: bdrv_pdiscard: support int64_t bytes parameter

 docs/devel/multiple-iothreads.txt  |   4 +-
 block/qcow2.h  |   4 +
 hw/block/nvme.h|   2 +
 include/block/block.h  |  21 ++---
 include/block/block_int.h  |   1 +
 include/block/nvme.h   |   2 +
 include/block/raw-aio.h|   3 -
 include/hw/block/block.h   |   7 +-
 include/hw/qdev-properties.h   |   3 +
 include/hw/scsi/scsi.h |   1 +
 include/sysemu/block-backend.h |   5 +-
 tests/libqtest.h   |  11 +++
 block.c|  79 -
 block/backup.c |   3 +-
 block/block-backend.c  |  47 ++
 block/commit.c |  13 +--
 block/crypto.c |   3 +-
 block/io.c |  28 +++---
 block/linux-aio.c  |  72 +++
 block/mirror.c |   4 +-
 block/parallels.c  |   3 +-
 block/qcow.c   |   3 +-
 block/qcow2-refcount.c |  39 -
 block/qcow2.c  |  17 +++-
 block/qed.c|   3 +-
 block/sheepdog.c   |   3 +-
 block/vdi.c|   3 +-
 block/vhdx.c   |   3 +-
 block/vmdk.c   |   3 +-
 block/vpc.c|   3 +-
 blockdev.c |  61 +++--
 blockjob.c |  12 ++-
 hmp.c  |   3 +-
 hw/block/dataplane/virtio-blk.c|  12 ++-
 hw/block/dataplane/xen-block.c |   6 +-
 hw/block/fdc.c |   2 +-
 hw/block/nvme.c| 106 +-
 hw/block/xen-block.c   |   2 +-
 hw/core/qdev-properties-system.c   |  41 -
 hw/ide/qdev.c  |   2 +-
 hw/scsi/scsi-disk.c|  24 +++--
 hw/scsi/virtio-scsi.c  |  25 +++---
 

Re: [Qemu-devel] [PULL 00/28] Block layer patches

2019-03-13 Thread Peter Maydell
On Tue, 12 Mar 2019 at 17:30, Kevin Wolf wrote:
>
> The following changes since commit eda1df0345f5a1e337e30367124dcb0e802bdfde:
>
>   Merge remote-tracking branch 'remotes/armbru/tags/pull-pflash-2019-03-11' 
> into staging (2019-03-12 11:12:36 +)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to c31dfeb02a1d155bdb961edeb61a137a589c174b:
>
>   qemu-iotests: Test the x-blockdev-reopen QMP command (2019-03-12 17:58:37 
> +0100)
>
> 
> Block layer patches:
>
> - file-posix: Make auto-read-only dynamic
> - Add x-blockdev-reopen QMP command
> - Finalize block-latency-histogram QMP command
> - gluster: Build fixes for newer lib version
>

Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/4.0
for any user-visible changes.

-- PMM



[Qemu-devel] [PULL 00/28] Block layer patches

2019-03-12 Thread Kevin Wolf
The following changes since commit eda1df0345f5a1e337e30367124dcb0e802bdfde:

  Merge remote-tracking branch 'remotes/armbru/tags/pull-pflash-2019-03-11' 
into staging (2019-03-12 11:12:36 +)

are available in the Git repository at:

  git://repo.or.cz/qemu/kevin.git tags/for-upstream

for you to fetch changes up to c31dfeb02a1d155bdb961edeb61a137a589c174b:

  qemu-iotests: Test the x-blockdev-reopen QMP command (2019-03-12 17:58:37 
+0100)


Block layer patches:

- file-posix: Make auto-read-only dynamic
- Add x-blockdev-reopen QMP command
- Finalize block-latency-histogram QMP command
- gluster: Build fixes for newer lib version


Alberto Garcia (13):
  block: Allow freezing BdrvChild links
  block: Freeze the backing chain for the duration of the commit job
  block: Freeze the backing chain for the duration of the mirror job
  block: Freeze the backing chain for the duration of the stream job
  block: Add 'keep_old_opts' parameter to bdrv_reopen_queue()
  block: Handle child references in bdrv_reopen_queue()
  block: Allow omitting the 'backing' option in certain cases
  block: Allow changing the backing file on reopen
  block: Add a 'mutable_opts' field to BlockDriver
  block: Add bdrv_reset_options_allowed()
  block: Remove the AioContext parameter from bdrv_reopen_multiple()
  block: Add an 'x-blockdev-reopen' QMP command
  qemu-iotests: Test the x-blockdev-reopen QMP command

Keith Busch (1):
  nvme: fix write zeroes offset and count

Kevin Wolf (10):
  tests/virtio-blk-test: Disable auto-read-only
  qemu-iotests: commit to backing file with auto-read-only
  block: Avoid useless local_err
  block: Make permission changes in reopen less wrong
  file-posix: Fix bdrv_open_flags() for snapshot=on
  file-posix: Factor out raw_reconfigure_getfd()
  file-posix: Store BDRVRawState.reopen_state during reopen
  file-posix: Lock new fd in raw_reopen_prepare()
  file-posix: Prepare permission code for fd switching
  file-posix: Make auto-read-only dynamic

Niels de Vos (1):
  gluster: the glfs_io_cbk callback function pointer adds pre/post stat args

Prasanna Kumar Kalever (1):
  gluster: Handle changed glfs_ftruncate signature

Vladimir Sementsov-Ogievskiy (2):
  qapi: move to QOM path for x-block-latency-histogram-set
  qapi: drop x- from x-block-latency-histogram-set

 qapi/block-core.json  |  66 ++-
 configure |  42 ++
 include/block/block.h |  13 +-
 include/block/block_int.h |  14 +
 block.c   | 440 +--
 block/commit.c|  16 +
 block/file-posix.c| 254 ---
 block/gluster.c   |  10 +-
 block/mirror.c|   8 +
 block/qapi.c  |  12 +-
 block/qcow2.c |  25 ++
 block/raw-format.c|   3 +
 block/replication.c   |   7 +-
 block/stream.c|  21 +
 blockdev.c|  61 ++-
 hw/block/nvme.c   |   6 +-
 qemu-io-cmds.c|   4 +-
 tests/virtio-blk-test.c   |   2 +-
 tests/qemu-iotests/051|   7 +
 tests/qemu-iotests/051.out|   9 +
 tests/qemu-iotests/051.pc.out |   9 +
 tests/qemu-iotests/232|  31 ++
 tests/qemu-iotests/232.out|  32 +-
 tests/qemu-iotests/245| 991 ++
 tests/qemu-iotests/245.out|   5 +
 tests/qemu-iotests/group  |   1 +
 26 files changed, 1929 insertions(+), 160 deletions(-)
 create mode 100644 tests/qemu-iotests/245
 create mode 100644 tests/qemu-iotests/245.out