[Bug 1922252] Re: [feature request] webcam support

2021-05-14 Thread promeneur
I use

-device usb-host,vendorid=0x046d,productid=0x081b

But in this case the webcam belongs to the guest and the host can't use
the webcam.

I want dynamic sharing, like the mouse sharing, for example.

Thanks

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1922252

Title:
  [feature request] webcam support

Status in QEMU:
  Incomplete

Bug description:
  Please

  I am eager to get something like "-device usb-webcam" to dynamically
  share the webcam between host and guest.

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1922252/+subscriptions



Re: Best approach for supporting snapshots for QEMU's gdbstub?

2021-05-14 Thread Pavel Dovgalyuk

On 14.05.2021 19:06, Alex Bennée wrote:

Hi,

I've been playing around with QEMU's reverse debugging support which
I have working with Pavel's latest patches for supporting virtio with
record/replay. Once you get the right command line it works well enough
although currently each step backwards requires replaying the entire
execution history until you get to the right point.

QEMU can quite easily snapshot the entire VM state so I was looking to
see what the best way to integrate this would be. As far as I can tell
there are two interfaces gdb supports: bookmarks and checkpoints.

As far as I can tell, bookmarks were added as part of GDB's reverse
debugging support, but attempting to use them from the gdbstub reports:

   (gdb) bookmark
   You can't do that when your target is `remote'

so I guess supporting that would need an extension to the stub protocol?

The other option I found was checkpoints which seem to predate support
for reverse debugging. However:

   (gdb) checkpoint
   checkpoint: can't find fork function in inferior.

I couldn't tell what feature needs to be negotiated but I suspect it's
something like fork-events if the checkpoint mechanism is designed for
user space with a fork/freeze approach.

We could of course just add a custom monitor command like the
qemu.sstep= command which could be used manually. However that would be
a QEMU gdbstub specific approach.


For now you can just use 'monitor savevm sn1' in gdb.
But something like 'bookmark' seems more convenient.
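
A session then looks something like this (a sketch: 'sn1' is an arbitrary
snapshot tag, and savevm/loadvm need a snapshot-capable disk such as qcow2):

   (gdb) monitor savevm sn1
   ... continue debugging forward ...
   (gdb) monitor loadvm sn1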


The other thing would be to be more intelligent on QEMU's side and save
snapshots each time we hit an event, for example each time we hit a
given breakpoint. However I do worry that might lead to snapshots
growing quite quickly.

Any thoughts/suggestions?






Re: [RFC PATCH v2] ppc/spapr: Add support for H_SCM_PERFORMANCE_STATS hcall

2021-05-14 Thread Vaibhav Jain
Thanks for looking into this patch, David.

David Gibson  writes:

> On Thu, May 06, 2021 at 08:19:24AM +0530, Vaibhav Jain wrote:
>> Add support for H_SCM_PERFORMANCE_STATS described at [1] for
>> spapr nvdimms. This enables the guest to fetch performance stats[2],
>> like the expected life of an nvdimm ('MemLife '), and display them to
>> the user. Linux kernel support for fetching these performance stats and
>> exposing them to user-space was done via [3].
>> 
>> The hcall semantics mandate that each nvdimm performance stat is
>> uniquely identified by an 8-byte ascii string encoded as an unsigned
>> integer (e.g. 'MemLife ' == 0x4D656D4C69666520) and that its value be an
>> 8-byte unsigned integer. These performance-stats are exchanged with
>> the guest via a guest-allocated buffer called
>> 'requestAndResultBuffer', or rr-buffer for short. This buffer contains
>> a header described by 'struct papr_scm_perf_stats' followed by an array
>> of performance-stats described by 'struct papr_scm_perf_stat'. The
>> hypervisor is expected to validate the rr-buffer header and then, based
>> on the request, copy the needed performance-stats to the array of
>> 'struct papr_scm_perf_stat' following the header.
>> 
>> The patch proposes a new function h_scm_performance_stats() that
>> services the H_SCM_PERFORMANCE_STATS hcall. After verifying the
>> validity of the rr-buffer header via scm_perf_check_rr_buffer() it
>> proceeds to fill the rr-buffer with the requested performance-stats.
>> The value of each stat is retrieved from a per-stat accessor function;
>> these accessors are indexed in the array 'nvdimm_perf_stats'.
>> 
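
For reference, the rr-buffer layout described above is roughly the
following (a sketch: field names are assumed from the Linux papr_scm
counterparts, with statistic_id widened to uint64_t per the changelog
below, since the header added by the patch is not shown in full):

    struct papr_scm_perf_stat {
        uint64_t statistic_id;     /* 8-byte ascii tag, e.g. 'MemLife ' */
        uint64_t statistic_value;
    };

    struct papr_scm_perf_stats {
        uint8_t  eye_catcher[8];   /* header magic */
        uint32_t stats_version;
        uint32_t num_statistics;   /* 0 == request all supported stats */
        /* followed by num_statistics 'struct papr_scm_perf_stat' entries */
    };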
>> References:
>> [1] "Hypercall Op-codes (hcalls)"
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/powerpc/papr_hcalls.rst#n269
>> [2] Sysfs attribute documentation for papr_scm
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/testing/sysfs-bus-papr-pmem#n36
>> [3] "powerpc/papr_scm: Fetch nvdimm performance stats from PHYP"
>> https://lore.kernel.org/r/20200731064153.182203-2-vaib...@linux.ibm.com
>> 
>> Signed-off-by: Vaibhav Jain 
>> ---
>> Changelog
>> 
>> RFC-v1:
>> * Removed empty lines from code [ David ]
>> * Updated struct papr_scm_perf_stat to use uint64_t for
>>   statistic_id.
>> * Added a hard limit on max number of stats requested to 255 [ David ]
>> * Updated scm_perf_check_rr_buffer() to check for rr-buffer header
>>   size [ David ]
>> * Removed a redundant check from nvdimm_stat_getval() [ David ]
>> * Removed a redundant call to address_space_access_valid() in
>>   scm_perf_check_rr_buffer() [ David ]
>> * Instead of allocating a minimum size local buffer, allocate a max
>>   possible size local rr-buffer. [ David ]
>> * Updated nvdimm_stat_getval() to set 'val' to '0' on error. [ David ]
>> * Updated h_scm_performance_stats() to use a canned-response method
>>   for simplifying num_stats==0 case [ David ].
>> ---
>>  hw/ppc/spapr_nvdimm.c  | 230 +
>>  include/hw/ppc/spapr.h |  19 +++-
>>  2 files changed, 248 insertions(+), 1 deletion(-)
>> 
>> diff --git a/hw/ppc/spapr_nvdimm.c b/hw/ppc/spapr_nvdimm.c
>> index 252204e25f..b0c2b55a5b 100644
>> --- a/hw/ppc/spapr_nvdimm.c
>> +++ b/hw/ppc/spapr_nvdimm.c
>> @@ -35,6 +35,14 @@
>>  /* SCM device is unable to persist memory contents */
>>  #define PAPR_PMEM_UNARMED PPC_BIT(0)
>>  
>> +/* Maximum output buffer size needed to return all nvdimm_perf_stats */
>> +#define SCM_STATS_MAX_OUTPUT_BUFFER  (sizeof(struct papr_scm_perf_stats) + \
>> +  sizeof(struct papr_scm_perf_stat) * \
>> +  ARRAY_SIZE(nvdimm_perf_stats))
>> +
>> +/* Maximum number of stats that we can return back in a single stat request */
>> +#define SCM_STATS_MAX_STATS 255
>> +
>>  bool spapr_nvdimm_validate(HotplugHandler *hotplug_dev, NVDIMMDevice *nvdimm,
>> uint64_t size, Error **errp)
>>  {
>> @@ -502,6 +510,227 @@ static target_ulong h_scm_health(PowerPCCPU *cpu, SpaprMachineState *spapr,
>>  return H_SUCCESS;
>>  }
>>  
>> +static int perf_stat_noop(SpaprDrc *drc, uint64_t unused, uint64_t *val)
>> +{
>> +*val = 0;
>> +return H_SUCCESS;
>> +}
>> +
>> +static int perf_stat_memlife(SpaprDrc *drc, uint64_t unused, uint64_t *val)
>> +{
>> +/* Assume full life available of an NVDIMM right now */
>> +*val = 100;
>> +return H_SUCCESS;
>> +}
>> +
>> +/*
>> + * Holds all supported performance stats accessors. Each performance-statistic
>> + * is uniquely identified by a 8-byte ascii string for example: 'MemLife '
>> + * which indicate in percentage how much usage life of an nvdimm is remaining.
>> + * 'NoopStat' which is primarily used to test support for retriving performance
>> + * stats and also to replace unknown stats present in the rr-buffer.
>> + *
>> + */
>> +static const struct {
>> +char stat_i

[Bug 1921092] Re: gdbstub debug of multi-cluster machines is undocumented and confusing

2021-05-14 Thread Martin Schönstedt
It can be closed. The added documentation is very helpful.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921092

Title:
  gdbstub debug of multi-cluster machines is undocumented and confusing

Status in QEMU:
  Incomplete

Bug description:
  Working with Zephyr RTOS, running a multi-core sample on mps2_an521 works
  fine. Both cpus start.
  When trying to debug with options -s -S, the second core fails to boot.

  Posted with explanation also at:
  https://github.com/zephyrproject-rtos/zephyr/issues/33635

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921092/+subscriptions



Re: [PATCH v2 3/3] qapi: deprecate drive-backup

2021-05-14 Thread John Snow

On 5/6/21 5:57 AM, Kashyap Chamarthy wrote:

TODO: We also need to deprecate the drive-backup transaction action..
But union members in QAPI don't support the 'deprecated' feature. I tried
to dig a bit, but failed :/ Markus, could you please help with it? At
least with advice?


Oho, I see.

OK, I'm not Markus, but I've been getting into lots of trouble in the 
QAPI generator lately, so let me see if I can help get you started...


https://gitlab.com/jsnow/qemu/-/commits/hack-deprecate-union-branches/

Here's a quick hack that might expose that feature. I suppose we can 
discuss this with Markus and turn these into real patches if that's the 
direction we wanna head.


--js




Re: [PATCH 5/6] co-shared-resource: protect with a mutex

2021-05-14 Thread Emanuele Giuseppe Esposito



we want to get from shres here, after possible call to 
block_copy_task_shrink(), as task->bytes may be reduced.


Ah right, I missed that. So I guess if we want the caller to protect 
co-shared-resource, get_from_shres stays where it is, and put_ 
instead can still go into task_end (with a boolean enabling it).


honestly, I don't follow how it helps thread-safety


 From my understanding, the whole point here is to have no lock in 
co-shared-resource but let the caller take care of it (block-copy).


The above was just an idea on how to do it.


But how moving co_put_to_shres() make it thread-safe? Nothing in 
block-copy is thread-safe yet..


Sorry, this is my bad, I did not explain it properly. If you look closely
at the diff I sent, there are locks in a similar way to my initial
block-copy patch. So I am essentially assuming that block-copy already
has locks, and moving co_put_to_shres into block_copy_task_end has the
purpose of moving shres "into a function that has a critical section".


@@ -269,6 +270,7 @@ static void coroutine_fn block_copy_task_end(BlockCopyTask *task, int ret)
  bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->offset, task->bytes);
  }
  qemu_co_mutex_lock(&task->s->tasks_lock);
   ^^^ locks
+    co_put_to_shres(task->s->mem, task->bytes);
  task->s->in_flight_bytes -= task->bytes;
  QLIST_REMOVE(task, list);
  progress_set_remaining(task->s->progress,

unlocks here (not shown in the diff)
 }

Hopefully now it is clear. Apologies again for the confusion.

Emanuele




[Bug 1921061] Re: Corsair iCUE Install Fails, qemu VM Reboots

2021-05-14 Thread Russell Morris
** Changed in: qemu
   Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921061

Title:
  Corsair iCUE Install Fails, qemu VM Reboots

Status in QEMU:
  Confirmed

Bug description:
  Hi,

  I had this working before, but in the latest version of QEMU (built
  from master), when I try to install Corsair iCUE, and it gets to the
  driver install point => my Windows 10 VM just reboots! I would be
  happy to capture logs, but ... what logs exist for an uncontrolled
  reboot? Thinking they are lost in the reboot :-(.

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921061/+subscriptions



Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx

2021-05-14 Thread Paolo Bonzini

On 14/05/21 19:27, Roman Kagan wrote:


AFAICS your patch has basically the same effect as Vladimir's
patch "util/async: aio_co_enter(): do aio_co_schedule in general case"
(https://lore.kernel.org/qemu-devel/20210408140827.332915-4-vsement...@virtuozzo.com/).
That one was found to break e.g. aio=threads cases.  I guessed it
implicitly relied upon aio_co_enter() acquiring the aio_context but we
didn't dig further to pinpoint the exact scenario.


That one is much more intrusive, because it goes through a bottom half 
unnecessarily in the case of aio_co_wake being called from an I/O 
callback (or from another bottom half).  I'll test my patch with 
aio=threads.


Paolo




Re: [RFC PATCH 2/9] replace machine phase_check with machine_is_initialized/ready calls

2021-05-14 Thread Paolo Bonzini

On 14/05/21 15:13, Mirela Grujic wrote:

However, if you believe it should rather be just renamed I can do so.


I am just not sure it's such an advantage to replace phase_check with 
separate functions.  The rename is a constraint of QAPI, so we have to 
live with the long names.


Paolo




Re: [PATCH 5/6] co-shared-resource: protect with a mutex

2021-05-14 Thread Vladimir Sementsov-Ogievskiy via

14.05.2021 20:28, Emanuele Giuseppe Esposito wrote:



On 14/05/2021 17:30, Vladimir Sementsov-Ogievskiy wrote:

14.05.2021 17:32, Emanuele Giuseppe Esposito wrote:



On 14/05/2021 16:26, Vladimir Sementsov-Ogievskiy wrote:

14.05.2021 17:10, Emanuele Giuseppe Esposito wrote:



On 12/05/2021 17:44, Stefan Hajnoczi wrote:

On Mon, May 10, 2021 at 10:59:40AM +0200, Emanuele Giuseppe Esposito wrote:

co-shared-resource is currently not thread-safe, as also reported
in co-shared-resource.h. Add a QemuMutex because co_try_get_from_shres
can also be invoked from non-coroutine context.

Signed-off-by: Emanuele Giuseppe Esposito 
---
  util/qemu-co-shared-resource.c | 26 ++
  1 file changed, 22 insertions(+), 4 deletions(-)
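
For context, the shape of such a change is roughly the following (a
sketch, not the actual patch hunks; the SharedResource fields are those
of util/qemu-co-shared-resource.c, and the lock is the addition):

    typedef struct SharedResource {
        uint64_t total;
        uint64_t available;
        QemuMutex lock;    /* added: protects total/available */
        CoQueue queue;
    } SharedResource;

    bool co_try_get_from_shres(SharedResource *s, uint64_t n)
    {
        bool ret = false;

        /* a plain QemuMutex, since this may run outside coroutine context */
        qemu_mutex_lock(&s->lock);
        if (s->available >= n) {
            s->available -= n;
            ret = true;
        }
        qemu_mutex_unlock(&s->lock);

        return ret;
    }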


Hmm...this thread-safety change is more fine-grained than I was
expecting. If we follow this strategy basically any data structure used
by coroutines needs its own fine-grained lock (like Java's Object base
class which has its own lock).

I'm not sure I like it since callers may still need coarser grained
locks to protect their own state or synchronize access to multiple
items of data. Also, some callers may not need thread-safety.

Can the caller to be responsible for locking instead (e.g. using
CoMutex)?


Right now co-shared-resource is being used only by block-copy, so I guess 
locking it from the caller or within the API won't really matter in this case.

One possible idea on how to delegate this to the caller without adding 
additional small lock/unlock in block-copy is to move co_get_from_shres in 
block_copy_task_end, and calling it only when a boolean passed to 
block_copy_task_end is true.

Otherwise make b_c_task_end always call co_put_to_shres and then include
co_get_from_shres in block_copy_task_create, so that we always add and, in
case of error, remove from the shared resource.

Something like:

diff --git a/block/block-copy.c b/block/block-copy.c
index 3a447a7c3d..1e4914b0cb 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -233,6 +233,7 @@ static coroutine_fn BlockCopyTask *block_copy_task_create(BlockCopyState *s,
  /* region is dirty, so no existent tasks possible in it */
  assert(!find_conflicting_task(s, offset, bytes));
  QLIST_INSERT_HEAD(&s->tasks, task, list);
+    co_get_from_shres(s->mem, task->bytes);
  qemu_co_mutex_unlock(&s->tasks_lock);

  return task;
@@ -269,6 +270,7 @@ static void coroutine_fn block_copy_task_end(BlockCopyTask *task, int ret)
  bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->offset, task->bytes);
  }
  qemu_co_mutex_lock(&task->s->tasks_lock);
+    co_put_to_shres(task->s->mem, task->bytes);
  task->s->in_flight_bytes -= task->bytes;
  QLIST_REMOVE(task, list);
  progress_set_remaining(task->s->progress,
@@ -379,7 +381,6 @@ static coroutine_fn int block_copy_task_run(AioTaskPool *pool,

  aio_task_pool_wait_slot(pool);
  if (aio_task_pool_status(pool) < 0) {
-    co_put_to_shres(task->s->mem, task->bytes);
  block_copy_task_end(task, -ECANCELED);
  g_free(task);
  return -ECANCELED;
@@ -498,7 +499,6 @@ static coroutine_fn int block_copy_task_entry(AioTask *task)
  }
  qemu_mutex_unlock(&t->s->calls_lock);

-    co_put_to_shres(t->s->mem, t->bytes);
  block_copy_task_end(t, ret);

  return ret;
@@ -687,8 +687,6 @@ block_copy_dirty_clusters(BlockCopyCallState *call_state)

  trace_block_copy_process(s, task->offset);

-    co_get_from_shres(s->mem, task->bytes);


we want to get from shres here, after possible call to block_copy_task_shrink(), 
as task->bytes may be reduced.


Ah right, I missed that. So I guess if we want the caller to protect 
co-shared-resource, get_from_shres stays where it is, and put_ instead can 
still go into task_end (with a boolean enabling it).


honestly, I don't follow how it helps thread-safety


 From my understanding, the whole point here is to have no lock in 
co-shared-resource but let the caller take care of it (block-copy).

The above was just an idea on how to do it.


But how moving co_put_to_shres() make it thread-safe? Nothing in block-copy is 
thread-safe yet..


--
Best regards,
Vladimir



Re: [PATCH v2 2/3] docs/interop/bitmaps: use blockdev-backup

2021-05-14 Thread John Snow

On 5/5/21 9:58 AM, Vladimir Sementsov-Ogievskiy wrote:

We are going to deprecate drive-backup, so use modern interface here.
In examples where target image creation is shown, show blockdev-add as
well. If target creation omitted, omit blockdev-add as well.

Signed-off-by: Vladimir Sementsov-Ogievskiy 


Seems good, thanks!


(aside: I really need to push forward with the QMP cross-references work 
...)





Re: [PATCH v2 1/3] docs/block-replication: use blockdev-backup

2021-05-14 Thread John Snow

On 5/5/21 9:58 AM, Vladimir Sementsov-Ogievskiy wrote:

We are going to deprecate drive-backup, so don't mention it here.
Moreover, blockdev-backup seems more correct in the context.

Signed-off-by: Vladimir Sementsov-Ogievskiy 


Reviewed-by: John Snow 


---
  docs/block-replication.txt | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/block-replication.txt b/docs/block-replication.txt
index 108e9166a8..59eb2b33b3 100644
--- a/docs/block-replication.txt
+++ b/docs/block-replication.txt
@@ -79,7 +79,7 @@ Primary | ||  Secondary disk <- hidden-disk 5 <-
 ||| |
 ||| |
 ||'-'
-  ||   drive-backup sync=none 6
+  || blockdev-backup sync=none 6
 
 1) The disk on the primary is represented by a block device with two
 children, providing replication between a primary disk and the host that
@@ -101,7 +101,7 @@ should support bdrv_make_empty() and backing file.
 that is modified by the primary VM. It should also start as an empty disk,
 and the driver supports bdrv_make_empty() and backing file.
 
-6) The drive-backup job (sync=none) is run to allow hidden-disk to buffer
+6) The blockdev-backup job (sync=none) is run to allow hidden-disk to buffer
 any state that would otherwise be lost by the speculative write-through
 of the NBD server into the secondary disk. So before block replication,
 the primary disk and secondary disk should contain the same data.






Re: [PATCH v3 10/15] qemu_iotests: extend QMP socket timeout when using valgrind

2021-05-14 Thread John Snow

On 5/14/21 4:16 AM, Emanuele Giuseppe Esposito wrote:



On 13/05/2021 20:47, John Snow wrote:

On 4/14/21 1:03 PM, Emanuele Giuseppe Esposito wrote:

As with gdbserver, valgrind delays the test execution, so
the default QMP socket timeout times out too soon.

Signed-off-by: Emanuele Giuseppe Esposito 
---
  python/qemu/machine.py    | 2 +-
  tests/qemu-iotests/iotests.py | 4 ++--
  2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/python/qemu/machine.py b/python/qemu/machine.py
index d6142271c2..dce96e1858 100644
--- a/python/qemu/machine.py
+++ b/python/qemu/machine.py
@@ -410,7 +410,7 @@ def _launch(self) -> None:
 shell=False,
 close_fds=False)
-    if 'gdbserver' in self._wrapper:
+    if 'gdbserver' in self._wrapper or 'valgrind' in self._wrapper:


This has me suggesting that we just change __init__ to accept a
parameter that lets the caller decide what kind of timeout(s) they
find acceptable. They know more about what they're trying to run than
we do.


Certainly after launch occurs, the user is free to just grab the qmp 
object and tinker around with the timeouts, but that does not allow us 
to change the timeout(s) for accept itself.


D'oh.

(Spilled milk: It was probably a mistake to make the default launch 
behavior here have a timeout of 15 seconds. That logic likely belongs 
to the iotests implementation. The default here probably ought to 
indeed be "wait forever".)


In the here and now ... would it be acceptable to change the launch() 
method to add a timeout parameter? It's still a little awkward, 
because conceptually it's a timeout for just QMP and not for the 
actual duration of the entire launch process.


But, I guess, it's *closer* to the truth.

If you wanted to route it that way, I take back what I said about not 
wanting to pass around variables to event loop hooks.


If we defined the timeout as something that applies exclusively to the 
launching process, then it'd be appropriate to route that to the 
launch-related functions ... and subclasses would have to be adjusted 
to be made aware that they're expected to operate within those 
parameters, which is good.


Sorry for my waffling back and forth on this. Let me know what the 
actual requirements are if you figure out which timeouts you need / 
don't need and I'll give you some review priority.


Uhm.. I am getting a little bit confused on what to do too :)



SORRY, I hit send too quickly and then changed my mind. I've handed you a
giant bag of my own confusion. Very unfair of me!



So the current plan I have for _qmp_timer is:

- As Max suggested, move it in __init__ and check there for the wrapper 
contents. If we need to block forever (gdb, valgrind), we set it to 
None. Otherwise to 15 seconds. I think setting it always to None is not 
ideal, because if you are testing something that deadlocks (see my 
attempts to remove/add locks in QEMU multiqueue) and the socket is set 
to block forever, you don't know if the test is super slow or it just 
deadlocked.




I agree with your concern on rational defaults, let's focus on that briefly:

Let's have QEMUMachine default to *no timeouts* moving forward, and have 
the timeouts be *opt-in*. This keeps the Machine class somewhat pure and 
free of opinions. The separation of mechanism and policy.


Next, instead of modifying hundreds of tests to opt-in to the timeout, 
let's modify the VM class in iotests.py to opt-in to that timeout, 
restoring the current "safe" behavior of iotests.


The above items can happen in a single commit, preserving behavior in 
the bisect.


Finally, we can add a non-private property that individual tests can 
re-override to opt BACK out of the default.


Something as simple as:

vm.qmp_timeout = None

would be just fine.
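
A minimal sketch of that opt-in scheme (illustrative names, not the real
python/qemu/machine.py API):

    from typing import Optional

    class Machine:
        """Mechanism: no opinion on timeouts; block forever by default."""
        def __init__(self, binary: str,
                     qmp_timeout: Optional[float] = None) -> None:
            self.binary = binary
            self.qmp_timeout = qmp_timeout  # None == wait forever

        def _post_launch(self) -> None:
            # the QMP socket accept() would receive this timeout
            print(f"accept(timeout={self.qmp_timeout})")

    class IotestsVM(Machine):
        """Policy: iotests opt back in to the old 15-second safety net."""
        def __init__(self, binary: str) -> None:
            super().__init__(binary, qmp_timeout=15.0)

    vm = IotestsVM('qemu-system-x86_64')
    vm.qmp_timeout = None   # an individual test opts back out (gdb, valgrind)
    vm._post_launch()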

Well, one can argue that in both cases this is not the expected 
behavior, but I think having an upper bound on each QMP command 
execution would be good.


- pass _qmp_timer directly to self._qmp.accept() in _post launch, 
leaving _launch() intact. I think this makes sense because as you also 
mentioned, changing _post_launch() into taking a parameter requires 
changing also all subclasses and pass values around.




Sounds OK. If we do change the defaults back to "No Timeout" in a way 
that allows an override by an opinionated class, we'll already have the 
public property, though, so a parameter might not be needed.


(Yes, this is the THIRD time I've changed my mind in 48 hours.)


Any opinion on this is very welcome.



Brave words!

My last thought here is that I still don't like the idea of QEMUMachine 
class changing its timeout behavior based on the introspection of 
wrapper args.


It feels much more like the case that a caller who is knowingly wrapping 
it with a program that delays its execution should change its parameters 
accordingly based on what the caller knows about what they're trying to 
accomplish.

[Bug 1912934] Re: QEMU emulation of fmadds instruction on powerpc64le is buggy

2021-05-14 Thread Bruno Haible
The situation is still the same in QEMU 6.0.0.

$ powerpc64le-linux-gnu-gcc-5 test-fmadds.c -static
$ ~/inst-qemu/6.0.0/bin/qemu-ppc64le ./a.out ; echo $?
32


** Changed in: qemu
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1912934

Title:
  QEMU emulation of fmadds instruction on powerpc64le is buggy

Status in QEMU:
  New

Bug description:
  The attached program test-fmadds.c tests the fmadds instruction on
  powerpc64le.

  Result on real hardware (POWER8E processor):
  $ ./a.out ; echo $?
  0

  Result in Alpine Linux 3.13/powerpcle, emulated by QEMU 5.0.0 on Ubuntu 16.04:
  $ ./a.out ; echo $?
  32

  Result in Debian 8.6.0/ppc64el, emulated by QEMU 2.9.0 on Ubuntu 16.04:
  $ ./a.out ; echo $?
  32

  Through 'nm --dynamic qemu-system-ppc64 | grep fma' I can see that
  QEMU is NOT using the fmaf() or fma() function from the host system's
  libc; this function is working fine in glibc of the host system (see
  https://www.gnu.org/software/gnulib/manual/html_node/fmaf.html ).
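
  (test-fmadds.c is attached to the bug rather than quoted here; a minimal
  sketch of the property such a test checks, namely that fmadds behaves
  like C's fmaf(), a single-rounded float multiply-add, would be:)

   #include <math.h>
   #include <stdio.h>

   int main(void)
   {
       float a = 0x1.000002p0f, b = 0x1.000002p0f, c = -0x1.000004p0f;
       float fused = fmaf(a, b, c);  /* one rounding step */
       float prod = a * b;           /* rounded to float here... */
       float separate = prod + c;    /* ...and again here */
       printf("fused=%a separate=%a\n", fused, separate);
       return fused == separate;     /* 0 when the FMA is really fused */
   }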

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1912934/+subscriptions



[Bug 1705118] Re: qemu user mode: rt signals not implemented for sparc guests

2021-05-14 Thread Bruno Haible
The situation in version 6.0.0 is the same as in version 2.11.0: The
cases ppc, ppc64, ppc64le, s390x are fixed, but the sparc64 executable
still crashes.

** Changed in: qemu
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1705118

Title:
  qemu user mode: rt signals not implemented for sparc guests

Status in QEMU:
  New

Bug description:
  The documentation
   says that
  qemu in user mode supports POSIX signal handling.

  Catching SIGSEGV according to POSIX, however, does not work on
ppc, ppc64, ppc64le, s390x, sparc64.
  It does work, however, on
aarch64, alpha, arm, hppa, m68k, mips, mips64, sh4.

  How to reproduce:
  The attached program runs fine (exits with code 0) on
- real hardware Linux/PowerPC64 (in 32-bit and 64-bit mode),
- real hardware Linux/PowerPC64LE,
- qemu-system-s390x emulated Linux/s390x,
- real hardware Linux/SPARC64.
  $ gcc -O -Wall testsigsegv.c; ./a.out; echo $?
  0

  For ppc:
  $ powerpc-linux-gnu-gcc-5 -O -Wall -static testsigsegv.c -o testsigsegv-ppc
  $ ~/inst-qemu/2.9.0/bin/qemu-ppc testsigsegv-ppc
  $ echo $?
  3

  For ppc64:
  $ powerpc64-linux-gnu-gcc-5 -O -Wall -static testsigsegv.c -o testsigsegv-ppc64
  $ ~/inst-qemu/2.9.0/bin/qemu-ppc64 testsigsegv-ppc64
  $ echo $?
  3

  For ppc64le:
  $ powerpc64le-linux-gnu-gcc-5 -O -Wall -static testsigsegv.c -o testsigsegv-ppc64le
  $ ~/inst-qemu/2.9.0/bin/qemu-ppc64le testsigsegv-ppc64le
  $ echo $?
  3

  For s390x:
  $ s390x-linux-gnu-gcc-5 -O -Wall -static testsigsegv.c -o testsigsegv-s390x
  $ ~/inst-qemu/2.9.0/bin/qemu-s390x testsigsegv-s390x
  $ echo $?
  3
  $ s390x-linux-gnu-gcc-5 -O -Wall -static testsigsegv.c -DAVOID_LINUX_S390X_COMPAT -o testsigsegv-s390x-a
  $ ~/inst-qemu/2.9.0/bin/qemu-s390x testsigsegv-s390x-a
  $ echo $?
  0
  So, the test fails here because the Linux/s390x kernel omits the least
  significant 12 bits of the fault address in the 'si_addr' field. But
  qemu-s390x is not compatible with the Linux/s390x behaviour: it puts
  the complete fault address in the 'si_addr' field.

  For sparc64:
  $ sparc64-linux-gnu-gcc-5 -O -Wall -static testsigsegv.c -o testsigsegv-sparc64
  $ ~/inst-qemu/2.9.0/bin/qemu-sparc64 testsigsegv-sparc64
  Segmentation fault (core dumped)
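
  (testsigsegv.c is attached to the bug rather than quoted here; a minimal
  sketch of the POSIX pattern it exercises, catching SIGSEGV with
  sigaction() and recovering via siglongjmp(), would be:)

   #include <setjmp.h>
   #include <signal.h>

   static sigjmp_buf env;

   static void handler(int sig, siginfo_t *info, void *ctx)
   {
       (void)sig; (void)info; (void)ctx;
       siglongjmp(env, 1);
   }

   int main(void)
   {
       struct sigaction sa;
       sa.sa_sigaction = handler;
       sa.sa_flags = SA_SIGINFO;
       sigemptyset(&sa.sa_mask);
       if (sigaction(SIGSEGV, &sa, NULL) != 0)
           return 2;
       if (sigsetjmp(env, 1) == 0) {
           *(volatile int *)0 = 42;  /* fault on purpose */
           return 3;                 /* not reached if the handler runs */
       }
       return 0;                     /* handler caught the fault */
   }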

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1705118/+subscriptions



[Bug 1701798] Re: dynamically linked binaries crash for big-endian targets

2021-05-14 Thread Bruno Haible
The issue seems to be fixed, even without the symlink for
/usr/<target>-linux-gnu/etc/ld.so.cache.
For m68k: since version 2.10.0.
For s390x: since version 2.11.0.
For the other platforms, already since 2.9.0 (strange, this contradicts my
original report...).

** Changed in: qemu
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1701798

Title:
  dynamically linked binaries crash for big-endian targets

Status in QEMU:
  Fix Released

Bug description:
  On the targets
hppa
m68k
mips
mips64
powerpc
powerpc64
s390x
sparc64
  dynamically linked binaries crash, but statically linked binaries work.
  On the targets
aarch64
alpha
armhf
powerpc64le
sh4
  both dynamically linked and statically linked binaries work.

  How to reproduce:

  1) On Ubuntu 16.04, install the packages
  g++-5-aarch64-linux-gnu
  g++-5-alpha-linux-gnu
  g++-5-arm-linux-gnueabihf
  g++-5-hppa-linux-gnu
  g++-5-m68k-linux-gnu
  g++-5-mips-linux-gnu
  g++-5-mips64-linux-gnuabi64
  g++-5-powerpc-linux-gnu
  g++-5-powerpc64-linux-gnu
  g++-5-powerpc64le-linux-gnu
  g++-5-s390x-linux-gnu
  g++-5-sh4-linux-gnu
  g++-5-sparc64-linux-gnu

  2) Install qemu 2.9.0 from source (for m68k, use the 2.7.0-m68k
  code from https://github.com/vivier/qemu-m68k.git):
  $ ../configure --prefix=/home/bruno/inst-qemu/2.9.0 --target-list=aarch64-softmmu,alpha-softmmu,arm-softmmu,i386-softmmu,m68k-softmmu,mips-softmmu,mipsel-softmmu,mips64-softmmu,mips64el-softmmu,ppc-softmmu,ppc64-softmmu,s390x-softmmu,sh4-softmmu,sparc-softmmu,sparc64-softmmu,x86_64-softmmu,aarch64-linux-user,alpha-linux-user,arm-linux-user,hppa-linux-user,m68k-linux-user,mips-linux-user,mipsel-linux-user,mips64-linux-user,mips64el-linux-user,ppc-linux-user,ppc64-linux-user,ppc64le-linux-user,s390x-linux-user,sh4-linux-user,sparc-linux-user,sparc64-linux-user --disable-strip --disable-werror --enable-gtk --enable-vnc
  $ make
  $ make install

  3) Cross-compile the programs:

  $ aarch64-linux-gnu-gcc-5 -O hello.c -o hello.aarch64
  $ alpha-linux-gnu-gcc-5 -O hello.c -o hello.alpha
  $ arm-linux-gnueabihf-gcc-5 -O hello.c -o hello.armhf
  $ hppa-linux-gnu-gcc-5 -O hello.c -o hello.hppa
  $ m68k-linux-gnu-gcc-5 -O hello.c -o hello.m68k
  $ mips-linux-gnu-gcc-5 -O hello.c -o hello.mips
  $ mips64-linux-gnuabi64-gcc-5 -O hello.c -o hello.mips64
  $ powerpc-linux-gnu-gcc-5 -O hello.c -o hello.powerpc
  $ powerpc64-linux-gnu-gcc-5 -O hello.c -o hello.powerpc64
  $ powerpc64le-linux-gnu-gcc-5 -O hello.c -o hello.powerpc64le
  $ s390x-linux-gnu-gcc-5 -O hello.c -o hello.s390x
  $ sh4-linux-gnu-gcc-5 -O hello.c -o hello.sh4
  $ sparc64-linux-gnu-gcc-5 -O hello.c -o hello.sparc64

  4) Run the programs:

  * aarch64 works:
  $ QEMU_LD_PREFIX=/usr/aarch64-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-aarch64 hello.aarch64
  Hello world

  * alpha works:
  $ QEMU_LD_PREFIX=/usr/alpha-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-alpha hello.alpha
  Hello world

  * armhf works:
  $ QEMU_LD_PREFIX=/usr/arm-linux-gnueabihf ~/inst-qemu/2.9.0/bin/qemu-arm hello.armhf
  Hello world

  * powerpc64le works:
  $ QEMU_LD_PREFIX=/usr/powerpc64le-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-ppc64le hello.powerpc64le
  Hello world

  * sh4 works:
  $ QEMU_LD_PREFIX=/usr/sh4-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-sh4 hello.sh4
  Hello world

  * sparc64 does not work:
  $ QEMU_LD_PREFIX=/usr/sparc64-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-sparc64 hello.sparc64
  Segmentation fault (core dumped)

  When I copy the file to a machine with `uname -srm` = "Linux 4.5.0-2-sparc64 sparc64",
  it works:
  $ ./hello.sparc64
  Hello world

  When I copy the file and its execution environment /usr/sparc64-linux-gnu to the
  same machine and run the binary in a chroot environment:
  # /bin/hello.sparc64
  Hello world

  * mips does not work:
  $ QEMU_LD_PREFIX=/usr/mips-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-mips hello.mips
  qemu: uncaught target signal 11 (Segmentation fault) - core dumped

  When I copy the file to a machine with `uname -srm` = "Linux 3.16.0-4-4kc-malta mips",
  it works:
  $ ./hello.mips
  Hello world

  When I copy the file and its execution environment /usr/mips-linux-gnu to the
  same machine and run the binary in a chroot environment:
  # /bin/hello.mips
  Hello world

  * mips64 does not work:
  $ QEMU_LD_PREFIX=/usr/mips64-linux-gnuabi64 ~/inst-qemu/2.9.0/bin/qemu-mips64 hello.mips64
  qemu: uncaught target signal 11 (Segmentation fault) - core dumped

  When I copy the file to a machine with `uname -srm` = "Linux 3.16.0-4-5kc-malta mips64",
  it works:
  $ ./hello.mips64
  Hello world

  * powerpc does not work:
  $ QEMU_LD_PREFIX=/usr/powerpc-linux-gnu ~/inst-qemu/2.9.0/bin/qemu-ppc hello.powerpc
  qemu: uncaught target signal 11 (Segmentation fault) - core dumped

  When I copy the file to a machine with `uname 

Re: [PATCH 0/3] tests/acceptance: Handle tests with "cpu" tag

2021-05-14 Thread John Snow

On 4/9/21 10:53 AM, Wainer dos Santos Moschetta wrote:

Hi,

On 4/7/21 5:01 PM, Eduardo Habkost wrote:

On Tue, Mar 23, 2021 at 05:01:09PM -0400, John Snow wrote:

On 3/17/21 3:16 PM, Wainer dos Santos Moschetta wrote:

Added John and Eduardo,

On 3/9/21 3:52 PM, Cleber Rosa wrote:

On Wed, Feb 24, 2021 at 06:26:51PM -0300, Wainer dos Santos
Moschetta wrote:
Currently the acceptance tests tagged with "machine" have the "-M TYPE"
automatically added to the list of arguments of the QEMUMachine object.

In other words, that option is passed to the launched QEMU. This series
implements the same feature, but for tests marked with "cpu".


Good!


There is a caveat, however: in case the test needs additional arguments
to the CPU type, they cannot be passed via tag, because the tags parser
splits values by comma. For example, in
tests/acceptance/x86_cpu_model_versions.py, there are cases where:

    * -cpu is set to
"Cascadelake-Server,x-force-features=on,check=off,enforce=off"
    * if it was tagged like
"cpu:Cascadelake-Server,x-force-features=on,check=off,enforce=off"
  then the parser would break it into 4 tags ("cpu:Cascadelake-Server",
  "x-force-features=on", "check=off", "enforce=off")
    * resulting in "-cpu Cascadelake-Server", and the remaining
arguments are ignored.

For the example above, one should tag it as "cpu:Cascadelake-Server" (or
not tag it at all) AND self.vm.add_args('-cpu',
"Cascadelake-Server,x-force-features=on,check=off,enforce=off"),
and that results in something like:

    "qemu-system-x86_64 (...) -cpu Cascadelake-Server -cpu
Cascadelake-Server,x-force-features=on,check=off,enforce=off".


There are clearly two problems here:

1) the tag is meant to be succinct, so that it can be used by users
 selecting which tests to run.  At the same time, it's a waste
 to throw away the other information or keep it duplicated or
 inconsistent.

2) QEMUMachine doesn't keep track of command line arguments
 (add_args() makes it pretty clear what it's doing).  But, on this type
 of use case, a "set_args()" is desirable, in which case it would
 overwrite the existing arguments for a given command line option.

I like the idea of a "set_args()" on QEMUMachine as you describe above,
but it needs further discussion because I can see at least one corner
case; for example, one can set the machine type as either -machine or
-M, so which key should be searched-and-replaced (if any) on the
list of args?

Unlike your suggestion, I thought of implementing the method to deal with a
single argument at a time, as:

      def set_arg(self, arg: Union[str, list], value: str) -> None:
          """
          Set the value of an argument from the list of extra arguments to be
          given to the QEMU binary. If the argument does not exist then it is
          added to the list.

          If the ``arg`` parameter is a list then it will search and replace all
          occurrences (if any). Otherwise a new argument is added and it is
          used the first value of the ``arg`` list.
          """
          pass

Does it sound good to you?

Thanks!

Wainer

A little hokey, but I suppose that's true of our CLI interface in
general.


I'd prefer not to get into the business of building a "config" inside the
python module if we can help it right now, but if "setting" individual args
is something you truly need to do, I won't stand in the way.

Do what's least-gross.

I don't have any specific suggestions on how the API should look
like, but I'm having trouble understanding the documentation
above.

I don't know what "it will search and replace all occurrences"
means.  Occurrences of what?

I don't understand what "it is used the first value of the `arg`
list" means, either.  I understand you are going to use the first
value of the list, but you don't say what you are going to do
with it.



The documentation was indeed confusing but, please, disregard it. Based
on John's comments on this thread I decided not to introduce yet another
specialized function to the QEMUMachine class. Instead I added an "args"
property so that users will have access to QEMUMachine._args to change
it however they like. You will find that implemented in v2 of this
series:


'[PATCH v2 0/7] tests/acceptance: Handle tests with "cpu" tag'
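
(As a sketch of what that property enables; hypothetical usage, not
necessarily the exact v2 API:)

    vm.args.extend(['-cpu',
                    'Cascadelake-Server,x-force-features=on,check=off,enforce=off'])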

Thanks!

- Wainer






It would truly be very cool if we had a QEMUMachineConfig class that we 
could build out properly.


In the hypothetical perfect future world where we have a json-based 
config file format that mapped perfectly to QMP commands, we could 
create a class that represents loading and representing such a format, 
and allow callers to change values at runtime, like:


config.machine = q35

but this treads so absurdly close to what libvirt already does that I am 
hesitant to work on it without addressing some of the core 
organizational problems with our CLI first.


A frequent source of anguish is how we treat multiple or repeating 
values on

[Bug 1908515] Re: assertion failure in lsi53c810 emulator

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/305


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #305
   https://gitlab.com/qemu-project/qemu/-/issues/305

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1908515

Title:
  assertion failure in lsi53c810 emulator

Status in QEMU:
  Expired

Bug description:
  Hello,

  Using hypervisor fuzzer, hyfuzz, I found an assertion failure through
  lsi53c810 emulator.

  A malicious guest user/process could use this flaw to abort the QEMU
  process on the host, resulting in a denial of service.

  This was found in version 5.2.0 (master)

  
  qemu-system-i386: ../hw/scsi/lsi53c895a.c:624: void lsi_do_dma(LSIState *, int): Assertion `s->current' failed.
  [1]1406 abort (core dumped)  /home/cwmyung/prj/hyfuzz/src/qemu-5.2/build/i386-softmmu/qemu-system-i386 -m

  Program terminated with signal SIGABRT, Aborted.
  #0  __GI_raise (sig=sig@entry=0x6) at ../sysdeps/unix/sysv/linux/raise.c:51
  51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
  [Current thread is 1 (Thread 0x7fa9310a8700 (LWP 2076))]
  gdb-peda$ bt
  #0  0x7fa94aa98f47 in __GI_raise (sig=sig@entry=0x6) at ../sysdeps/unix/sysv/linux/raise.c:51
  #1  0x7fa94aa9a8b1 in __GI_abort () at abort.c:79
  #2  0x7fa94aa8a42a in __assert_fail_base (fmt=0x7fa94ac11a38 "%s%s%s:%u: %s%sAssertion `%s' failed.\\n%n", assertion=assertion@entry=0x562851c9eab9 "s->current", file=file@entry=0x562851c9d4f9 "../hw/scsi/lsi53c895a.c", line=line@entry=0x270, function=function@entry=0x562851c9de43 "void lsi_do_dma(LSIState *, int)") at assert.c:92
  #3  0x7fa94aa8a4a2 in __GI___assert_fail (assertion=0x562851c9eab9 "s->current", file=0x562851c9d4f9 "../hw/scsi/lsi53c895a.c", line=0x270, function=0x562851c9de43 "void lsi_do_dma(LSIState *, int)") at assert.c:101
  #4  0x5628515d9605 in lsi_do_dma (s=0x56289060, out=0x1) at ../hw/scsi/lsi53c895a.c:624
  #5  0x5628515d5317 in lsi_execute_script (s=<optimized out>) at ../hw/scsi/lsi53c895a.c:1250
  #6  0x5628515cec49 in lsi_reg_writeb (s=0x56289060, offset=0x2f, val=0x1e) at ../hw/scsi/lsi53c895a.c:2005
  #7  0x562851952798 in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at ../softmmu/memory.c:491
  #8  0x56285195258e in access_with_adjusted_size (addr=<optimized out>, value=<optimized out>, size=<optimized out>, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=<optimized out>, mr=<optimized out>, attrs=...) at ../softmmu/memory.c:552
  #9  0x56285195258e in memory_region_dispatch_write (mr=0x56289960, addr=<optimized out>, data=<optimized out>, op=<optimized out>, attrs=...) at ../softmmu/memory.c:1501
  #10 0x5628518e5305 in flatview_write_continue (fv=0x7fa92871f040, addr=0xfebf302c, attrs=..., ptr=0x7fa9310a49b8, len=0x4, addr1=0x7fa9310a3410, l=<optimized out>, mr=0x56289960) at ../softmmu/physmem.c:2759
  #11 0x5628518e6ef6 in flatview_write (fv=0x7fa92871f040, addr=0xfebf302c, attrs=..., len=0x4, buf=<optimized out>) at ../softmmu/physmem.c:2799
  #12 0x5628518e6ef6 in subpage_write (opaque=<optimized out>, addr=<optimized out>, value=<optimized out>, len=<optimized out>, attrs=...) at ../softmmu/physmem.c:2465
  #13 0x5628519529a2 in memory_region_write_with_attrs_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at ../softmmu/memory.c:511
  #14 0x5628519525e1 in access_with_adjusted_size (addr=<optimized out>, size=<optimized out>, access_size_min=<optimized out>, access_size_max=<optimized out>, mr=<optimized out>, attrs=..., value=<optimized out>, access_fn=<optimized out>) at ../softmmu/memory.c:552
  #15 0x5628519525e1 in memory_region_dispatch_write (mr=<optimized out>, addr=<optimized out>, data=<optimized out>, op=<optimized out>, attrs=...) at ../softmmu/memory.c:1508
  #16 0x562851a49228 in io_writex (iotlbentry=<optimized out>, mmu_idx=<optimized out>, val=<optimized out>, addr=<optimized out>, retaddr=<optimized out>, op=<optimized out>, env=<optimized out>) at ../accel/tcg/cputlb.c:1378
  #17 0x562851a49228 in store_helper (env=<optimized out>, addr=<optimized out>, val=<optimized out>, oi=<optimized out>, retaddr=<optimized out>, op=MO_32) at ../accel/tcg/cputlb.c:2397
  #18 0x562851a49228 in helper_le_stl_mmu (env=<optimized out>, addr=<optimized out>, val=0x2, oi=<optimized out>, retaddr=0x7fa8e44032ee) at ../accel/tcg/cputlb.c:2463
  #19 0x7fa8e44032ee in code_gen_buffer ()
  #20 0x56285191ada0 in cpu_tb_exec (cpu=0x5628547b81a0, itb=<optimized out>) at ../accel/tcg/cpu-exec.c:178
  #21 0x56285191b9eb in cpu_loop_exec_tb (tb=<optimized out>, cpu=<optimized out>, last_tb=<optimized out>, tb_exit=<optimized out>) at ../accel/tcg/cpu-exec.c:658
  #22 0x56285191b9eb in cpu_exec (cpu=0x5628547b81a0) at ../accel/tcg/cpu-exec.c:771
  #23 0x56285194ab9f in tcg_cpu_exec (cpu=<optimized out>) at ../accel/tcg/tcg-cpus.c:243
  #24 0x56285194ab9f in tcg_cpu_thread_fn (arg=0x5628547b81a0) at ../accel/tcg/tcg-cpus.c:427
  #25 0x562851c22775 in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
  #26 0x7fa94ae526db in start_thread (arg=0x7fa9310a8700) at pthread_create.c:463
  #27 0x7fa94ab7ba3f in clone () at ../sysdeps/unix/sysv/linux/x86_

[Bug 1912224] Re: qemu may freeze during drive-mirroring on fragmented FS

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/307


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #307
   https://gitlab.com/qemu-project/qemu/-/issues/307

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1912224

Title:
  qemu may freeze during drive-mirroring on fragmented FS

Status in QEMU:
  Expired

Bug description:
  
  We have odd behavior in operation where qemu freezes for many
  seconds. We started a thread about that issue here:
  https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg05623.html

  It happens at least during openstack nova snapshot (qemu blockdev-mirror)
  or live block migration (which includes a network copy of the disk).

  After further troubleshooting, it seems related to FS fragmentation on
  the host.

  reproducible at least on:
  Ubuntu 18.04.3/4.18.0-25-generic/qemu-4.0
  Ubuntu 16.04.6/5.10.6/qemu-5.2.0-rc2

  # Lets create a dedicated file system on a SSD/Nvme 60GB disk in my case:
  $sudo mkfs.ext4 /dev/sda3
  $sudo mount /dev/sda3 /mnt
  $df -h /mnt
  Filesystem  Size  Used Avail Use% Mounted on
  /dev/sda3 59G   53M   56G   1% /mnt

  #Create a fragmented disk on it using 2MB Chunks (about 30min):
  $sudo python3 create_fragged_disk.py /mnt 2
  Filling up FS by Creating chunks files in:  /mnt/chunks
  We are probably full as expected!!:  [Errno 28] No space left on device
  Creating fragged disk file:  /mnt/disk

  $ls -lhs 
  59G -rw-r--r-- 1 root root 59G Jan 15 14:08 /mnt/disk

  $ sudo e4defrag -c /mnt/disk
   Total/best extents 41971/30
   Average size per extent    1466 KB
   Fragmentation score    2
   [0-30 no problem: 31-55 a little bit fragmented: 56- needs defrag]
   This file (/mnt/disk) does not need defragmentation.
   Done.

  # the tool^^^ says it is not enough fragmented to be able to defrag.

  #Inject an image on fragmented disk
  sudo chown ubuntu /mnt/disk
  wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
  qemu-img convert -O raw  bionic-server-cloudimg-amd64.img \
   bionic-server-cloudimg-amd64.img.raw
  dd conv=notrunc iflag=fullblock if=bionic-server-cloudimg-amd64.img.raw \
  of=/mnt/disk bs=1M
  virt-customize -a /mnt/disk --root-password password:

  # logon run console activity ex: ping -i 0.3 127.0.0.1
  $qemu-system-x86_64 -m 2G -enable-kvm  -nographic \
  -chardev socket,id=test,path=/tmp/qmp-monitor,server,nowait \
  -mon chardev=test,mode=control \
  -drive file=/mnt/disk,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard \
  -device virtio-blk-pci,scsi=off,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on

  $sync
  $echo 3 | sudo tee -a /proc/sys/vm/drop_caches

  #start drive-mirror via qmp on another SSD/nvme partition
  nc -U /tmp/qmp-monitor
  {"execute":"qmp_capabilities"}
  
{"execute":"drive-mirror","arguments":{"device":"drive-virtio-disk0","target":"/home/ubuntu/mirror","sync":"full","format":"qcow2"}}
  ^^^ qemu console may start to freeze at this step.

  NOTE:
   - smaller chunk sz and bigger disk size the worst it is.
 In operation we also have issue on 400GB disk size with average 13MB/extent
   - Reproducible also on xfs

  
  Expected behavior:
  ---
  QEMU should remain steady, at most with decreased storage or mirroring
  performance, because of the fragmented FS.

  Observed behavior:
  ---
  Perf of mirroring is still quite good even on fragmented FS,
  but it breaks qemu.

  
  ##  create_fragged_disk.py
  import sys
  import os
  import tempfile
  import glob
  import errno

  MNT_DIR = sys.argv[1]
  CHUNK_SZ_MB = int(sys.argv[2])
  CHUNKS_DIR = MNT_DIR + '/chunks'
  DISK_FILE = MNT_DIR + '/disk'

  if not os.path.exists(CHUNKS_DIR):
      os.makedirs(CHUNKS_DIR)

  with open("/dev/urandom", "rb") as f_rand:
      mb_rand = f_rand.read(1024 * 1024)

  print("Filling up FS by Creating chunks files in: ", CHUNKS_DIR)
  try:
      while True:
          tp = tempfile.NamedTemporaryFile(dir=CHUNKS_DIR, delete=False)
          for x in range(CHUNK_SZ_MB):
              tp.write(mb_rand)
          os.fsync(tp)
          tp.close()
  except Exception as ex:
      print("We are probably full as expected!!: ", ex)

  chunks = glob.glob(CHUNKS_DIR + '/*')

  print("Creating fragged disk file: ", DISK_FILE)
  with open(DISK_FILE, "w+b") as f_disk:
      for chunk in chunks:
          try:
              os.unlink(chunk)
              for x in range(CHUNK_SZ_MB):
                  f_disk.write(mb_rand)
              os.

[Bug 1912107] Re: Option to constrain linux-user exec() to emulated CPU only

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/306


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #306
   https://gitlab.com/qemu-project/qemu/-/issues/306

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1912107

Title:
  Option to constrain linux-user exec() to emulated CPU only

Status in QEMU:
  Expired

Bug description:
  When trying to reproduce a bug someone reported on an actual AMD K10[1],
  I tried to directly throw `qemu-x86_64 -cpu phenom
  path/to/wrongly-labelled-instruction-set/gcc 1.c` at the problem, but
  failed to get an "illegal instruction" as expected. A quick investigation
  reveals that the error is actually caused by one of gcc's child processes,
  and that the said process is being run directly on the host. A similar
  problem happens when trying to call stuff with /usr/bin/env.

   [1]: https://github.com/Homebrew/brew/issues/1034

  Since both the host and the guest are x86_64, I deemed binfmt
  inapplicable to my case. I believe that QEMU should offer a way to
  modify exec() and other spawning syscalls so that execution remains on
  an emulated CPU in such a case. Call it an extra layer of binfmt, if
  you must.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1912107/+subscriptions



[Bug 1908513] Re: assertion failure in mptsas1068 emulator

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/304


** Changed in: qemu
   Status: Confirmed => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #304
   https://gitlab.com/qemu-project/qemu/-/issues/304

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1908513

Title:
  assertion failure in mptsas1068 emulator

Status in QEMU:
  Expired

Bug description:
  Using hypervisor fuzzer, hyfuzz, I found an assertion failure through
  mptsas1068 emulator.

  A malicious guest user/process could use this flaw to abort the QEMU
  process on the host, resulting in a denial of service.

  This was found in version 5.2.0 (master)

  
  qemu-system-i386: ../hw/scsi/mptsas.c:968: void mptsas_interrupt_status_write(MPTSASState *): Assertion `s->intr_status & MPI_HIS_DOORBELL_INTERRUPT' failed.
  [1]16951 abort (core dumped)  /home/cwmyung/prj/hyfuzz/src/qemu-5.2/build/qemu-system-i386 -m 512 -drive

  Program terminated with signal SIGABRT, Aborted.
  #0  __GI_raise (sig=sig@entry=0x6) at ../sysdeps/unix/sysv/linux/raise.c:51
  51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
  [Current thread is 1 (Thread 0x7fc7d6023700 (LWP 23475))]
  gdb-peda$ bt
  #0  0x7fc7efa13f47 in __GI_raise (sig=sig@entry=0x6) at ../sysdeps/unix/sysv/linux/raise.c:51
  #1  0x7fc7efa158b1 in __GI_abort () at abort.c:79
  #2  0x7fc7efa0542a in __assert_fail_base (fmt=0x7fc7efb8ca38 "%s%s%s:%u: %s%sAssertion `%s' failed.\\n%n", assertion=assertion@entry=0x56439214d593 "s->intr_status & MPI_HIS_DOORBELL_INTERRUPT", file=file@entry=0x56439214d4a7 "../hw/scsi/mptsas.c", line=line@entry=0x3c8, function=function@entry=0x56439214d81c "void mptsas_interrupt_status_write(MPTSASState *)") at assert.c:92
  #3  0x7fc7efa054a2 in __GI___assert_fail (assertion=0x56439214d593 "s->intr_status & MPI_HIS_DOORBELL_INTERRUPT", file=0x56439214d4a7 "../hw/scsi/mptsas.c", line=0x3c8, function=0x56439214d81c "void mptsas_interrupt_status_write(MPTSASState *)") at assert.c:101
  #4  0x564391a43963 in mptsas_interrupt_status_write (s=<optimized out>) at ../hw/scsi/mptsas.c:968
  #5  0x564391a43963 in mptsas_mmio_write (opaque=0x5643943dd5b0, addr=0x30, val=0x1800, size=<optimized out>) at ../hw/scsi/mptsas.c:1052
  #6  0x564391e08798 in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at ../softmmu/memory.c:491
  #7  0x564391e0858e in access_with_adjusted_size (addr=<optimized out>, value=<optimized out>, size=<optimized out>, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=<optimized out>, mr=<optimized out>, attrs=...) at ../softmmu/memory.c:552
  #8  0x564391e0858e in memory_region_dispatch_write (mr=0x5643943ddea0, addr=<optimized out>, data=<optimized out>, op=<optimized out>, attrs=...) at ../softmmu/memory.c:1501
  #9  0x564391eff228 in io_writex (iotlbentry=<optimized out>, mmu_idx=<optimized out>, val=<optimized out>, addr=<optimized out>, retaddr=<optimized out>, op=<optimized out>, env=<optimized out>) at ../accel/tcg/cputlb.c:1378
  #10 0x564391eff228 in store_helper (env=<optimized out>, addr=<optimized out>, val=<optimized out>, oi=<optimized out>, retaddr=<optimized out>, op=MO_32) at ../accel/tcg/cputlb.c:2397
  #11 0x564391eff228 in helper_le_stl_mmu (env=<optimized out>, addr=<optimized out>, val=0x2, oi=<optimized out>, retaddr=0x7fc78841b401) at ../accel/tcg/cputlb.c:2463
  #12 0x7fc78841b401 in code_gen_buffer ()
  #13 0x564391dd0da0 in cpu_tb_exec (cpu=0x56439363e650, itb=<optimized out>) at ../accel/tcg/cpu-exec.c:178
  #14 0x564391dd19eb in cpu_loop_exec_tb (tb=<optimized out>, cpu=<optimized out>, last_tb=<optimized out>, tb_exit=<optimized out>) at ../accel/tcg/cpu-exec.c:658
  #15 0x564391dd19eb in cpu_exec (cpu=0x56439363e650) at ../accel/tcg/cpu-exec.c:771
  #16 0x564391e00b9f in tcg_cpu_exec (cpu=<optimized out>) at ../accel/tcg/tcg-cpus.c:243
  #17 0x564391e00b9f in tcg_cpu_thread_fn (arg=0x56439363e650) at ../accel/tcg/tcg-cpus.c:427
  #18 0x5643920d8775 in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521
  #19 0x7fc7efdcd6db in start_thread (arg=0x7fc7d6023700) at pthread_create.c:463

  To reproduce this issue, please run QEMU with the following
  command line.

  
  # To enable ASan option, please set configuration with the following command
  $ ./configure --target-list=i386-softmmu --disable-werror --enable-sanitizers
  $ make

  # To reproduce this issue, please run the QEMU process with the following command line.
  $ ./qemu-system-i386 -m 512 -drive file=./hyfuzz.img,index=0,media=disk,format=raw -device mptsas1068,id=scsi -device scsi-hd,drive=SysDisk -drive id=SysDisk,if=none,file=./disk.img

  Please let me know if I can provide any further info.
  Thank you.

  - Cheolwoo, Myung (Seoul National University)

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1908513/+subscriptions



[Bug 1913969] Re: unable to migrate non shared storage when TLS is used

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/310


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #310
   https://gitlab.com/qemu-project/qemu/-/issues/310

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1913969

Title:
  unable to migrate non shared storage when TLS is used

Status in QEMU:
  Expired

Bug description:
  Operating system: Gentoo
  Architecture: x86_64
  kernel version: 5.4.72, 5.10.11
  libvirt version: at least 6.9.0, 6.10.0, 7.0.0
  Hypervisor and version: qemu 5.1.0, 5.2.0

  With software versions described above and following configurations:
  libvirt:
  key_file = "/etc/ssl/libvirt/server.lan.key"
  cert_file = "/etc/ssl/libvirt/server.lan.crt"
  ca_file = "/etc/ssl/libvirt/ca.crt"
  log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"
  qemu:
  default_tls_x509_cert_dir = "/etc/ssl/qemu"
  default_tls_x509_verify = 1
  migration with tls:
  virsh # migrate vm1 qemu+tls://server2.lan/system --persistent --undefinesource --copy-storage-all --verbose --tls
  never succeeds. Progress typically stops at high progress amounts (95%-98%),
  and network traffic drastically drops as well (from 1 gbps+ to nothing).
  domjobinfo progress also stops. Without --tls, migrations succeed without
  issues, without any other changes to hosts or configurations.

  Note: I reported this originally as libvirt bug:
  https://gitlab.com/libvirt/libvirt/-/issues/108.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1913969/+subscriptions



[Bug 1913873] Re: QEMU: net: vmxnet: integer overflow may crash guest

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/308


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #308
   https://gitlab.com/qemu-project/qemu/-/issues/308

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1913873

Title:
  QEMU: net: vmxnet: integer overflow may crash guest

Status in QEMU:
  Expired

Bug description:
  * Gaoning Pan from Zhejiang University & Ant Security Light-Year Lab reported a malloc failure issue located in vmxnet3_activate_device() of the qemu/hw/net/vmxnet3.c NIC emulator.

  * This issue is reproducible because, while activating the NIC device, vmxnet3_activate_device does not validate guest-supplied configuration values against predefined min/max limits.

  @@ -1420,6 +1420,7 @@ static void vmxnet3_activate_device(VMXNET3State *s)
   vmxnet3_setup_rx_filtering(s);
   /* Cache fields from shared memory */
   s->mtu = VMXNET3_READ_DRV_SHARED32(d, s->drv_shmem, devRead.misc.mtu);
  +assert(VMXNET3_MIN_MTU <= s->mtu && s->mtu < VMXNET3_MAX_MTU);   <= Did not check if MTU is within range
   VMW_CFPRN("MTU is %u", s->mtu);

   s->max_rx_frags =
  @@ -1473,6 +1474,9 @@ static void vmxnet3_activate_device(VMXNET3State *s)
   /* Read rings memory locations for TX queues */
   pa = VMXNET3_READ_TX_QUEUE_DESCR64(d, qdescr_pa, conf.txRingBasePA);
   size = VMXNET3_READ_TX_QUEUE_DESCR32(d, qdescr_pa, conf.txRingSize);
  +if (size > VMXNET3_TX_RING_MAX_SIZE) {   <= Did not check TX ring size
  +size = VMXNET3_TX_RING_MAX_SIZE;
  +}

   vmxnet3_ring_init(d, &s->txq_descr[i].tx_ring, pa, size,
 sizeof(struct Vmxnet3_TxDesc), false);
  @@ -1483,6 +1487,9 @@ static void vmxnet3_activate_device(VMXNET3State *s)
   /* TXC ring */
   pa = VMXNET3_READ_TX_QUEUE_DESCR64(d, qdescr_pa, conf.compRingBasePA);
   size = VMXNET3_READ_TX_QUEUE_DESCR32(d, qdescr_pa, conf.compRingSize);
  +if (size > VMXNET3_TC_RING_MAX_SIZE) {   <= Did not check TC ring size
  +size = VMXNET3_TC_RING_MAX_SIZE;
  +}
   vmxnet3_ring_init(d, &s->txq_descr[i].comp_ring, pa, size,
 sizeof(struct Vmxnet3_TxCompDesc), true);
   VMXNET3_RING_DUMP(VMW_CFPRN, "TXC", i, &s->txq_descr[i].comp_ring);
  @@ -1524,6 +1531,9 @@ static void vmxnet3_activate_device(VMXNET3State *s)
   /* RX rings */
   pa = VMXNET3_READ_RX_QUEUE_DESCR64(d, qd_pa, conf.rxRingBasePA[j]);
   size = VMXNET3_READ_RX_QUEUE_DESCR32(d, qd_pa, conf.rxRingSize[j]);
  +if (size > VMXNET3_RX_RING_MAX_SIZE) {   <= Did not check RX ring size
  +size = VMXNET3_RX_RING_MAX_SIZE;
  +}
   vmxnet3_ring_init(d, &s->rxq_descr[i].rx_ring[j], pa, size,
 sizeof(struct Vmxnet3_RxDesc), false);
   VMW_CFPRN("RX queue %d:%d: Base: %" PRIx64 ", Size: %d",
  @@ -1533,6 +1543,9 @@ static void vmxnet3_activate_device(VMXNET3State *s)
   /* RXC ring */
   pa = VMXNET3_READ_RX_QUEUE_DESCR64(d, qd_pa, conf.compRingBasePA);
   size = VMXNET3_READ_RX_QUEUE_DESCR32(d, qd_pa, conf.compRingSize);
  +if (size > VMXNET3_RC_RING_MAX_SIZE) {   <= Did not check RC ring size
  +size = VMXNET3_RC_RING_MAX_SIZE;
  +}

  This may lead to potential integer overflow OR OOB buffer access
  issues.
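
  To make the failure mode concrete, here is a quick back-of-envelope
  check of one way such an overflow can manifest (standalone Python for
  the arithmetic only; the descriptor size is an assumed, illustrative
  value, not taken from QEMU):

  # Illustrative arithmetic only, not QEMU code. A guest-controlled ring
  # size multiplied by a descriptor size can wrap a 32-bit byte count
  # before any bounds check runs.
  size = 0x40000000               # hypothetical guest-supplied ring size
  cell = 16                       # assumed bytes per descriptor
  full = size * cell              # intended byte count: 16 GiB
  wrapped = full & 0xFFFFFFFF     # what a 32-bit multiply would keep
  print(hex(full), hex(wrapped))  # 0x400000000 vs 0x0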

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1913873/+subscriptions



[Bug 1907042] Re: assert issue locates in hw/usb/core.c:727: usb_ep_get: Assertion `pid == USB_TOKEN_IN || pid == USB_TOKEN_OUT' failed

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/303


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #303
   https://gitlab.com/qemu-project/qemu/-/issues/303

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1907042

Title:
  assert issue locates in hw/usb/core.c:727: usb_ep_get: Assertion `pid
  == USB_TOKEN_IN || pid == USB_TOKEN_OUT' failed

Status in QEMU:
  Expired

Bug description:
  Hello,

  An assertion failure was found in hw/usb/core.c:727 in the latest
  version, 5.2.0.

  The reproduction environment is as follows:
  Host: ubuntu 18.04
  Guest: ubuntu 18.04

  QEMU boot command line:
  qemu-system-x86_64 -enable-kvm -boot c -m 4G -drive format=qcow2,file=./ubuntu.img -nic user,hostfwd=tcp:0.0.0.0:-:22 -device pci-ohci,id=ohci -device usb-tablet,bus=ohci.0,port=1,id=usbdev1 -trace usb\*

  Backtrace is as follows:
  #0  0x7f13fff14438 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
  #1  0x7f13fff1603a in __GI_abort () at abort.c:89
  #2  0x7f13fff0cbe7 in __assert_fail_base (fmt=, 
assertion=assertion@entry=0x55f97745ffe0 "pid == USB_TOKEN_IN || pid == 
USB_TOKEN_OUT", file=file@entry=0x55f97745f6c0 "../hw/usb/core.c", 
line=line@entry=727, function=function@entry=0x55f9774606e0 
<__PRETTY_FUNCTION__.22877> "usb_ep_get") at assert.c:92
  #3  0x7f13fff0cc92 in __GI___assert_fail (assertion=0x55f97745ffe0 "pid 
== USB_TOKEN_IN || pid == USB_TOKEN_OUT", file=0x55f97745f6c0 
"../hw/usb/core.c", line=727, function=0x55f9774606e0 
<__PRETTY_FUNCTION__.22877> "usb_ep_get") at assert.c:101
  #4  0x55f975bfc9b2 in usb_ep_get (dev=0x6230c500, pid=45, ep=1) at 
../hw/usb/core.c:727
  #5  0x55f975f945db in ohci_service_td (ohci=0x627191f0, 
ed=0x7ffcd9308410) at ../hw/usb/hcd-ohci.c:1044
  #6  0x55f975f95d5e in ohci_service_ed_list (ohci=0x627191f0, 
head=857580576, completion=0) at ../hw/usb/hcd-ohci.c:1200
  #7  0x55f975f9656d in ohci_process_lists (ohci=0x627191f0, 
completion=0) at ../hw/usb/hcd-ohci.c:1238
  #8  0x55f975f9725c in ohci_frame_boundary (opaque=0x627191f0) at 
../hw/usb/hcd-ohci.c:1281
  #9  0x55f977212494 in timerlist_run_timers (timer_list=0x60b5b060) at 
../util/qemu-timer.c:574
  #10 0x55f9772126db in qemu_clock_run_timers (type=QEMU_CLOCK_VIRTUAL) at 
../util/qemu-timer.c:588
  #11 0x55f977212fde in qemu_clock_run_all_timers () at 
../util/qemu-timer.c:670
  #12 0x55f9772d5717 in main_loop_wait (nonblocking=0) at 
../util/main-loop.c:531
  #13 0x55f97695100c in qemu_main_loop () at ../softmmu/vl.c:1677
  #14 0x55f9758f7601 in main (argc=16, argv=0x7ffcd930, 
envp=0x7ffcd9308910) at ../softmmu/main.c:50
  #15 0x7f13ffeff840 in __libc_start_main (main=0x55f9758f75b0 , 
argc=16, argv=0x7ffcd930, init=, fini=, 
rtld_fini=, stack_end=0x7ffcd9308878) at ../csu/libc-start.c:291
  #16 0x55f9758f74a9 in _start ()

  
  The poc is attached.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1907042/+subscriptions



[Bug 1913923] Re: assert issue locates in hw/net/vmxnet3.c:1793:vmxnet3_io_bar1_write: code should not be reach

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/309


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #309
   https://gitlab.com/qemu-project/qemu/-/issues/309

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1913923

Title:
  assert issue locates in hw/net/vmxnet3.c:1793:vmxnet3_io_bar1_write:
  code should not be reach

Status in QEMU:
  Expired

Bug description:
  Hello,

  I found an assertion failure in hw/net/vmxnet3.c:1793.

  This was found in the latest version, 5.2.0.

  My reproducer is as follows:

  
  cat << EOF | ./qemu-system-x86_64 \
  -device vmxnet3 \
  -display none -nodefaults -qtest stdio 
  outl 0xcf8 0x80001014
  outl 0xcfc 0xf0001000
  outl 0xcf8 0x80001018
  outl 0xcf8 0x80001004
  outw 0xcfc 0x7
  writel 0x5c000 0xbabefee1
  writel 0x5c028 0x5d000
  writel 0x5c03c 0x01010101
  writel 0x5d038 0xe000 
  writel 0xf0001038 1
  EOF

  
  Backtrace is as follows:
  #0  0x7f6f641a5f47 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:51
  #1  0x7f6f641a78b1 in __GI_abort () at abort.c:79
  #2  0x7f6f67922315 in g_assertion_message () at 
/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
  #3  0x7f6f6792237a in g_assertion_message_expr () at 
/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
  #4  0x55edcaec96af in vmxnet3_io_bar1_write (opaque=0x62804100, 
addr=56, val=1, size=4) at ../hw/net/vmxnet3.c:1793
  #5  0x55edcbd294c6 in memory_region_write_accessor (mr=0x62806b00, 
addr=56, value=0x7fffd52ba848, size=4, shift=0, mask=4294967295, attrs=...) at 
../softmmu/memory.c:491
  #6  0x55edcbd299be in access_with_adjusted_size (addr=56, 
value=0x7fffd52ba848, size=4, access_size_min=4, access_size_max=4, 
access_fn=0x55edcbd2918c , mr=0x62806b00, 
attrs=...) at ../softmmu/memory.c:552
  #7  0x55edcbd35ef2 in memory_region_dispatch_write (mr=0x62806b00, 
addr=56, data=1, op=MO_32, attrs=...) at ../softmmu/memory.c:1501
  #8  0x55edcba1e554 in flatview_write_continue (fv=0x606619a0, 
addr=4026535992, attrs=..., ptr=0x7fffd52bae80, len=4, addr1=56, l=4, 
mr=0x62806b00) at ../softmmu/physmem.c:2759
  #9  0x55edcba1e8c5 in flatview_write (fv=0x606619a0, addr=4026535992, 
attrs=..., buf=0x7fffd52bae80, len=4) at ../softmmu/physmem.c:2799
  #10 0x55edcba1f391 in address_space_write (as=0x60802620, 
addr=4026535992, attrs=..., buf=0x7fffd52bae80, len=4) at 
../softmmu/physmem.c:2891
  #11 0x55edcbaff8d3 in qtest_process_command (chr=0x55edd03ff4a0 
, words=0x6037f450) at ../softmmu/qtest.c:534
  #12 0x55edcbb04aa1 in qtest_process_inbuf (chr=0x55edd03ff4a0 
, inbuf=0x6190fd00) at ../softmmu/qtest.c:797
  #13 0x55edcbb04bcc in qtest_read (opaque=0x55edd03ff4a0 , 
buf=0x7fffd52bbe30 "outl 0xcf8 0x80001014\noutl 0xcfc 0xf0001000\noutl 0xcf8 
0x80001018\noutl 0xcf8 0x80001004\noutw 0xcfc 0x7\nwritel 0x5c000 
0xbabefee1\nwritel 0x5c028 0x5d000\nwritel 0x5c03c 0x01010101\nwritel 0x5d038 
0xe"..., size=225) at ../softmmu/qtest.c:809
  #14 0x55edcbe73742 in qemu_chr_be_write_impl (s=0x60f02110, 
buf=0x7fffd52bbe30 "outl 0xcf8 0x80001014\noutl 0xcfc 0xf0001000\noutl 0xcf8 
0x80001018\noutl 0xcf8 0x80001004\noutw 0xcfc 0x7\nwritel 0x5c000 
0xbabefee1\nwritel 0x5c028 0x5d000\nwritel 0x5c03c 0x01010101\nwritel 0x5d038 
0xe"..., len=225) at ../chardev/char.c:201
  #15 0x55edcbe73820 in qemu_chr_be_write (s=0x60f02110, 
buf=0x7fffd52bbe30 "outl 0xcf8 0x80001014\noutl 0xcfc 0xf0001000\noutl 0xcf8 
0x80001018\noutl 0xcf8 0x80001004\noutw 0xcfc 0x7\nwritel 0x5c000 
0xbabefee1\nwritel 0x5c028 0x5d000\nwritel 0x5c03c 0x01010101\nwritel 0x5d038 
0xe"..., len=225) at ../chardev/char.c:213
  #16 0x55edcbe9188e in fd_chr_read (chan=0x60802520, cond=(G_IO_IN | 
G_IO_HUP), opaque=0x60f02110) at ../chardev/char-fd.c:68
  #17 0x55edcbe2379d in qio_channel_fd_source_dispatch 
(source=0x60c25c00, callback=0x55edcbe915ac , 
user_data=0x60f02110) at ../io/channel-watch.c:84
  #18 0x7f6f678fb285 in g_main_context_dispatch () at 
/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
  #19 0x55edcc50b503 in glib_pollfds_poll () at ../util/main-loop.c:221
  #20 0x55edcc50b68b in os_host_main_loop_wait (timeout=0) at 
../util/main-loop.c:244
  #21 0x55edcc50b9a5 in main_loop_wait (nonblocking=0) at 
../util/main-loop.c:520
  #22 0x55edcbd8805b in qemu_main_loop () at ../softmmu/vl.c:1678
  #23 0x55edcab67e69 in main (argc=8, argv=0x7fffd52bd1d8, 
envp=0x7fffd52bd220) at ../softmmu/main.c:50
  #24 0x7f6f64188b97 in __libc_start_main (main=0x55edcab67e2a , 
argc=8, argv=0x7fffd52bd1d8, init=, fini=, 
rtld_fini=, stack_end=0x7fffd52b

[Bug 1918975] Re: [Feature request] Propagate interpreter to spawned processes

2021-05-14 Thread Thomas Huth
Also, is this a duplicate of
https://bugs.launchpad.net/qemu/+bug/1912107 or do you mean something
different here?

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1918975

Title:
  [Feature request] Propagate interpreter to spawned processes

Status in QEMU:
  Incomplete

Bug description:
  I want QEMU user static to propagate the interpreter to spawned
  processes, for instance by adding a recursive -R option.

  I.e. if my program is interpreted by QEMU static, then everything it
  launches should be interpreted by it, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1918975/+subscriptions



[Bug 1912780] Re: QEMU: Null Pointer Failure in fdctrl_read() in hw/block/fdc.c

2021-05-14 Thread Thomas Huth
** Changed in: qemu
   Status: New => In Progress

** Changed in: qemu
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1912780

Title:
  QEMU: Null Pointer Failure in fdctrl_read() in hw/block/fdc.c

Status in QEMU:
  In Progress

Bug description:
  [via qemu-security list]

  This is Gaoning Pan from Zhejiang University & Ant Security Light-Year Lab.
  I found a null pointer issue located in fdctrl_read() in hw/block/fdc.c.
  This flaw allows a malicious guest user or process to cause a denial of
  service condition.

  This issue was discovered in the latest Qemu-5.2.0. When using the floppy
  device, get_drv() selects a specific drive depending on fdctrl->cur_drv. But
  not all drives are initialized properly, which can leave
  fdctrl->drives[0]->blk as NULL. So when that drive is used in
  blk_pread(cur_drv->blk, fd_offset(cur_drv), fdctrl->fifo, BDRV_SECTOR_SIZE)
  at line 1918, a null pointer access triggers, hence the denial of service.
  My reproduction environment is as follows:

  Host: ubuntu 18.04
  Guest: ubuntu 18.04

  My boot command is as follows:

qemu-system-x86_64 -enable-kvm -boot c -m 2G -drive format=qcow2,file=./ubuntu.img \
 -nic user,hostfwd=tcp:0.0.0.0:-:22 -device floppy,unit=1,drive=mydrive \
 -drive id=mydrive,file=null-co://,size=2M,format=raw,if=none -display none

  ASAN output is as follows:
  =
  ==14688==ERROR: AddressSanitizer: SEGV on unknown address 0x034c (pc 
0x5636eee9bbaf bp 0x7ff2a53fdea0 sp 0x7ff2a53fde90 T3)
  ==14688==The signal is caused by a WRITE memory access.
  ==14688==Hint: address points to the zero page.
  #0 0x5636eee9bbae in blk_inc_in_flight ../block/block-backend.c:1356
  #1 0x5636eee9b766 in blk_prw ../block/block-backend.c:1328
  #2 0x5636eee9cd76 in blk_pread ../block/block-backend.c:1491
  #3 0x5636ee1adf24 in fdctrl_read_data ../hw/block/fdc.c:1918
  #4 0x5636ee1a6654 in fdctrl_read ../hw/block/fdc.c:935
  #5 0x5636eebb84c8 in portio_read ../softmmu/ioport.c:179
  #6 0x5636ee9848c5 in memory_region_read_accessor ../softmmu/memory.c:442
  #7 0x5636ee9855c2 in access_with_adjusted_size ../softmmu/memory.c:552
  #8 0x5636ee98f0b7 in memory_region_dispatch_read1 ../softmmu/memory.c:1420
  #9 0x5636ee98f311 in memory_region_dispatch_read ../softmmu/memory.c:1449
  #10 0x5636ee8ff64a in flatview_read_continue ../softmmu/physmem.c:2822
  #11 0x5636ee8ff9e5 in flatview_read ../softmmu/physmem.c:2862
  #12 0x5636ee8ffb83 in address_space_read_full ../softmmu/physmem.c:2875
  #13 0x5636ee8ffdeb in address_space_rw ../softmmu/physmem.c:2903
  #14 0x5636eea6a924 in kvm_handle_io ../accel/kvm/kvm-all.c:2285
  #15 0x5636eea6c5e3 in kvm_cpu_exec ../accel/kvm/kvm-all.c:2531
  #16 0x5636eeca492b in kvm_vcpu_thread_fn ../accel/kvm/kvm-cpus.c:49
  #17 0x5636ef1bc296 in qemu_thread_start ../util/qemu-thread-posix.c:521
  #18 0x7ff337c736da in start_thread 
(/lib/x86_64-linux-gnu/libpthread.so.0+0x76da)
  #19 0x7ff33799ca3e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x121a3e)

  AddressSanitizer can not provide additional info.
  SUMMARY: AddressSanitizer: SEGV ../block/block-backend.c:1356 in 
blk_inc_in_flight
  Thread T3 created by T0 here:
  #0 0x7ff33c580d2f in __interceptor_pthread_create 
(/usr/lib/x86_64-linux-gnu/libasan.so.4+0x37d2f)
  #1 0x5636ef1bc673 in qemu_thread_create ../util/qemu-thread-posix.c:558
  #2 0x5636eeca4ce7 in kvm_start_vcpu_thread ../accel/kvm/kvm-cpus.c:73
  #3 0x5636ee9aa965 in qemu_init_vcpu ../softmmu/cpus.c:622
  #4 0x5636ee82a9b4 in x86_cpu_realizefn ../target/i386/cpu.c:6731
  #5 0x5636eed002f4 in device_set_realized ../hw/core/qdev.c:886
  #6 0x5636eecc59bc in property_set_bool ../qom/object.c:2251
  #7 0x5636eecc0c28 in object_property_set ../qom/object.c:1398
  #8 0x5636eecb6fb9 in object_property_set_qobject ../qom/qom-qobject.c:28
  #9 0x5636eecc1175 in object_property_set_bool ../qom/object.c:1465
  #10 0x5636eecfc286 in qdev_realize ../hw/core/qdev.c:399
  #11 0x5636ee739b34 in x86_cpu_new ../hw/i386/x86.c:111
  #12 0x5636ee739d6d in x86_cpus_init ../hw/i386/x86.c:138
  #13 0x5636ee6f843e in pc_init1 ../hw/i386/pc_piix.c:159
  #14 0x5636ee6fab1e in pc_init_v5_2 ../hw/i386/pc_piix.c:438
  #15 0x5636ee1cb4a7 in machine_run_board_init ../hw/core/machine.c:1134
  #16 0x5636ee9c323d in qemu_init ../softmmu/vl.c:4369
  #17 0x5636edd92c71 in main ../softmmu/main.c:49
  #18 0x7ff33789cb96 in __libc_start_main 
(/lib/x86_64-linux-gnu/libc.so.6+0x21b96)

  ==14688==ABORTING

  Reproducer is attached.

  Best regards.
  Gaoning Pan of Zhejiang University & Ant Security Light-Year Lab

To manage notifications about this bug go to:
https://bugs.launch

Re: [PATCH] fdc: check drive block device before usage (CVE-2021-20196)

2021-05-14 Thread John Snow

On 5/14/21 3:23 PM, Thomas Huth wrote:

On 23/01/2021 11.03, P J P wrote:

From: Prasad J Pandit 

While processing ioport command in 'fdctrl_write_dor', device
controller may select a drive which is not initialised with a
block device. This may result in a NULL pointer dereference.
Add checks to avoid it.

Fixes: CVE-2021-20196
Reported-by: Gaoning Pan 
Buglink: https://bugs.launchpad.net/qemu/+bug/1912780
Signed-off-by: Prasad J Pandit 
---
  hw/block/fdc.c | 11 +--
  1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index 3636874432..13a9470d19 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -1429,7 +1429,9 @@ static void fdctrl_write_dor(FDCtrl *fdctrl, uint32_t value)
  }
  }
  /* Selected drive */
-    fdctrl->cur_drv = value & FD_DOR_SELMASK;
+    if (fdctrl->drives[value & FD_DOR_SELMASK].blk) {
+    fdctrl->cur_drv = value & FD_DOR_SELMASK;
+    }
  fdctrl->dor = value;
  }
@@ -1894,6 +1896,10 @@ static uint32_t fdctrl_read_data(FDCtrl *fdctrl)
  uint32_t pos;
  cur_drv = get_cur_drv(fdctrl);
+    if (!cur_drv->blk) {
+    FLOPPY_DPRINTF("No drive connected\n");
+    return 0;
+    }
  fdctrl->dsr &= ~FD_DSR_PWRDOWN;
  if (!(fdctrl->msr & FD_MSR_RQM) || !(fdctrl->msr & FD_MSR_DIO)) {
  FLOPPY_DPRINTF("error: controller not ready for reading\n");
@@ -2420,7 +2426,8 @@ static void fdctrl_write_data(FDCtrl *fdctrl, uint32_t value)
  if (pos == FD_SECTOR_LEN - 1 ||
  fdctrl->data_pos == fdctrl->data_len) {
  cur_drv = get_cur_drv(fdctrl);
-    if (blk_pwrite(cur_drv->blk, fd_offset(cur_drv), fdctrl->fifo,
+    if (cur_drv->blk == NULL
+    || blk_pwrite(cur_drv->blk, fd_offset(cur_drv), fdctrl->fifo,
 BDRV_SECTOR_SIZE, 0) < 0) {
  FLOPPY_DPRINTF("error writing sector %d\n",
 fd_sector(cur_drv));



Ping again!

Could anybody review / pick this up?

  Thomas



Yep. Not forgotten, despite appearances. Clearing my Python review 
backlog, then onto FDC/IDE.


In the meantime, anything anyone else happens to feel comfortable 
staging won't upset me any. I don't insist they go through my tree right 
now.





[Bug 1913510] Re: [Fuzz] qemu-system-i386 virtio-mouse: Assertion in address_space_lduw_le_cached failed

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/302


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #302
   https://gitlab.com/qemu-project/qemu/-/issues/302

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1913510

Title:
  [Fuzz] qemu-system-i386 virtio-mouse: Assertion in
  address_space_lduw_le_cached failed

Status in QEMU:
  Expired

Bug description:
  --[ Reproducer

  cat << EOF | ./build/qemu-system-i386 -machine q35,accel=qtest -nodefaults \
  -device virtio-mouse -display none -qtest stdio
  outl 0xcf8 0x8820
  outl 0xcfc 0xe0004000
  outl 0xcf8 0x8804
  outb 0xcfc 0x02
  write 0xe000400c 0x4 0x003fe62e
  write 0xe0004016 0x1 0x01
  write 0xe0004024 0x1 0x01
  write 0xe000401c 0x1 0x01
  write 0xe0007007 0x1 0x00
  write 0xe0004018 0x1 0x41
  write 0xe0007007 0x1 0x00
  EOF

  
  --[ Output

  [I 1611805425.711054] OPENED
  [R +0.040080] outl 0xcf8 0x8820
  OK
  [S +0.040117] OK
  [R +0.040136] outl 0xcfc 0xe0004000
  OK
  [S +0.040155] OK
  [R +0.040165] outl 0xcf8 0x8804
  OK
  [S +0.040172] OK
  [R +0.040184] outb 0xcfc 0x02
  OK
  [S +0.040683] OK
  [R +0.040702] write 0xe000400c 0x4 0x003fe62e
  OK
  [S +0.040735] OK
  [R +0.040743] write 0xe0004016 0x1 0x01
  OK
  [S +0.040748] OK
  [R +0.040755] write 0xe0004024 0x1 0x01
  OK
  [S +0.040760] OK
  [R +0.040767] write 0xe000401c 0x1 0x01
  OK
  [S +0.040785] OK
  [R +0.040792] write 0xe0007007 0x1 0x00
  OK
  [S +0.040810] OK
  [R +0.040817] write 0xe0004018 0x1 0x41
  OK
  [S +0.040822] OK
  [R +0.040839] write 0xe0007007 0x1 0x00
  qemu-system-i386: /home/ubuntu/qemu/include/exec/memory_ldst_cached.h.inc:54: 
uint32_t address_space_lduw_le_cached(MemoryRegionCache *, hwaddr, MemTxAttrs, 
MemTxResult *): Assertion `addr < cache->len && 2 <= cache->len - addr' failed.

  
  -- [ Original ASAN report

  qemu-fuzz-i386: /home/ubuntu/qemu/include/exec/memory_ldst_cached.h.inc:54: 
uint32_t address_space_lduw_le_cached(MemoryRegionCache *, hwaddr, MemTxAttrs, 
MemTxResult *): Assertion `addr < cache->len && 2 <= cache->len - addr' failed.
  ==3406167== ERROR: libFuzzer: deadly signal
  #0 0x5644e4ae0f21 in __sanitizer_print_stack_trace 
(/home/ubuntu/qemu/build/qemu-fuzz-i386+0x2a47f21)
  #1 0x5644e4a29fe8 in fuzzer::PrintStackTrace() 
(/home/ubuntu/qemu/build/qemu-fuzz-i386+0x2990fe8)
  #2 0x5644e4a10023 in fuzzer::Fuzzer::CrashCallback() 
(/home/ubuntu/qemu/build/qemu-fuzz-i386+0x2977023)
  #3 0x7f77e2a4b3bf  (/lib/x86_64-linux-gnu/libpthread.so.0+0x153bf)
  #4 0x7f77e285c18a in raise (/lib/x86_64-linux-gnu/libc.so.6+0x4618a)
  #5 0x7f77e283b858 in abort (/lib/x86_64-linux-gnu/libc.so.6+0x25858)
  #6 0x7f77e283b728  (/lib/x86_64-linux-gnu/libc.so.6+0x25728)
  #7 0x7f77e284cf35 in __assert_fail 
(/lib/x86_64-linux-gnu/libc.so.6+0x36f35)
  #8 0x5644e60051b2 in address_space_lduw_le_cached 
/home/ubuntu/qemu/include/exec/memory_ldst_cached.h.inc:54:5
  #9 0x5644e60051b2 in lduw_le_phys_cached 
/home/ubuntu/qemu/include/exec/memory_ldst_phys.h.inc:91:12
  #10 0x5644e60051b2 in virtio_lduw_phys_cached 
/home/ubuntu/qemu/include/hw/virtio/virtio-access.h:166:12
  #11 0x5644e5ff476d in vring_avail_ring 
/home/ubuntu/qemu/build/../hw/virtio/virtio.c:327:12
  #12 0x5644e5ff476d in vring_get_used_event 
/home/ubuntu/qemu/build/../hw/virtio/virtio.c:333:12
  #13 0x5644e5ff476d in virtio_split_should_notify 
/home/ubuntu/qemu/build/../hw/virtio/virtio.c:2473:35
  #14 0x5644e5ff476d in virtio_should_notify 
/home/ubuntu/qemu/build/../hw/virtio/virtio.c:2524:16
  #15 0x5644e5ff5556 in virtio_notify 
/home/ubuntu/qemu/build/../hw/virtio/virtio.c:2566:14
  #16 0x5644e5571d2a in virtio_input_handle_sts 
/home/ubuntu/qemu/build/../hw/input/virtio-input.c:100:5
  #17 0x5644e5ff20ec in virtio_queue_notify 
/home/ubuntu/qemu/build/../hw/virtio/virtio.c:2366:9
  #18 0x5644e60908fb in memory_region_write_accessor 
/home/ubuntu/qemu/build/../softmmu/memory.c:491:5
  #19 0x5644e6090363 in access_with_adjusted_size 
/home/ubuntu/qemu/build/../softmmu/memory.c:552:18
  #20 0x5644e608fbc0 in memory_region_dispatch_write 
/home/ubuntu/qemu/build/../softmmu/memory.c
  #21 0x5644e5b97bc6 in flatview_write_continue 
/home/ubuntu/qemu/build/../softmmu/physmem.c:2759:23
  #22 0x5644e5b8d328 in flatview_write 
/home/ubuntu/qemu/build/../softmmu/physmem.c:2799:14
  #23 0x5644e5b8d328 in address_space_write 
/home/ubuntu/qemu/build/../softmmu/physmem.c:2891:18
  #24 0x5644e6018906 in qtest_process_command 
/home/ubuntu/qemu/build/../softmmu/qtest.c:539:13
  #25 0x5644e60159df in qtest_process_inbuf 
/home/ubuntu/qemu/build/../

Re: [PATCH] fdc: check drive block device before usage (CVE-2021-20196)

2021-05-14 Thread Thomas Huth

On 23/01/2021 11.03, P J P wrote:

From: Prasad J Pandit 

While processing ioport command in 'fdctrl_write_dor', device
controller may select a drive which is not initialised with a
block device. This may result in a NULL pointer dereference.
Add checks to avoid it.

Fixes: CVE-2021-20196
Reported-by: Gaoning Pan 
Buglink: https://bugs.launchpad.net/qemu/+bug/1912780
Signed-off-by: Prasad J Pandit 
---
  hw/block/fdc.c | 11 +--
  1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index 3636874432..13a9470d19 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -1429,7 +1429,9 @@ static void fdctrl_write_dor(FDCtrl *fdctrl, uint32_t value)
  }
  }
  /* Selected drive */
-fdctrl->cur_drv = value & FD_DOR_SELMASK;
+if (fdctrl->drives[value & FD_DOR_SELMASK].blk) {
+fdctrl->cur_drv = value & FD_DOR_SELMASK;
+}
  
  fdctrl->dor = value;

  }
@@ -1894,6 +1896,10 @@ static uint32_t fdctrl_read_data(FDCtrl *fdctrl)
  uint32_t pos;
  
  cur_drv = get_cur_drv(fdctrl);

+if (!cur_drv->blk) {
+FLOPPY_DPRINTF("No drive connected\n");
+return 0;
+}
  fdctrl->dsr &= ~FD_DSR_PWRDOWN;
  if (!(fdctrl->msr & FD_MSR_RQM) || !(fdctrl->msr & FD_MSR_DIO)) {
  FLOPPY_DPRINTF("error: controller not ready for reading\n");
@@ -2420,7 +2426,8 @@ static void fdctrl_write_data(FDCtrl *fdctrl, uint32_t value)
  if (pos == FD_SECTOR_LEN - 1 ||
  fdctrl->data_pos == fdctrl->data_len) {
  cur_drv = get_cur_drv(fdctrl);
-if (blk_pwrite(cur_drv->blk, fd_offset(cur_drv), fdctrl->fifo,
+if (cur_drv->blk == NULL
+|| blk_pwrite(cur_drv->blk, fd_offset(cur_drv), fdctrl->fifo,
 BDRV_SECTOR_SIZE, 0) < 0) {
  FLOPPY_DPRINTF("error writing sector %d\n",
 fd_sector(cur_drv));



Ping again!

Could anybody review / pick this up?

 Thomas




[Bug 1921280] Re: OpenIndiana stuck in boot loop when using hvf

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

** Tags added: hvf

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921280

Title:
  OpenIndiana stuck in boot loop when using hvf

Status in QEMU:
  Incomplete

Bug description:
  I'm using QEMU version 5.2.0 on macOS, and running the "OpenIndiana
  Hipster 2020.10 Text Install DVD (64-bit x86)" ISO:

  qemu-system-x86_64 -cdrom ~/Downloads/OI-hipster-text-20201031.iso -m 2048 -accel hvf -cpu host

  It gets to "Booting...", stays there for a bit, and then restarts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921280/+subscriptions



Re: [PATCH 07/10] iotests: use subprocess.run where possible

2021-05-14 Thread John Snow

On 5/12/21 5:46 PM, John Snow wrote:

pylint 2.8.x adds warnings whenever we use Popen calls without using
'with', so it's desirable to convert synchronous calls to run()
invocations where applicable.

(Though, this trades one pylint warning for another due to a pylint bug,
which I've silenced with a pragma and a link to the bug.)

Signed-off-by: John Snow 
---
  tests/qemu-iotests/iotests.py | 19 +++
  1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 5af01828951..46deb7f4dd4 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -113,15 +113,16 @@ def qemu_tool_pipe_and_status(tool: str, args: Sequence[str],
  Run a tool and return both its output and its exit code
  """
  stderr = subprocess.STDOUT if connect_stderr else None
-subp = subprocess.Popen(args,
-stdout=subprocess.PIPE,
-stderr=stderr,
-universal_newlines=True)
-output = subp.communicate()[0]
-if subp.returncode < 0:
+res = subprocess.run(args,
+ stdout=subprocess.PIPE,
+ stderr=stderr,
+ universal_newlines=True,
+ check=False)
+output = res.stdout
+if res.returncode < 0:
  cmd = ' '.join(args)
-sys.stderr.write(f'{tool} received signal {-subp.returncode}: {cmd}\n')
-return (output, subp.returncode)
+sys.stderr.write(f'{tool} received signal {-res.returncode}: {cmd}\n')
+return (output, res.returncode)
  
  def qemu_img_pipe_and_status(*args: str) -> Tuple[str, int]:

  """
@@ -1153,6 +1154,8 @@ def _verify_virtio_scsi_pci_or_ccw() -> None:
  
  
  def supports_quorum():

+# https://github.com/PyCQA/astroid/issues/689


Oh, realizing this bug was closed, so this is something 
similar-but-different.


Bah. I'll delete this comment and change the commit message.


+# pylint: disable=unsupported-membership-test
  return 'quorum' in qemu_img_pipe('--help')
  
  def verify_quorum():
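
(As a side note for readers following along: a minimal, generic sketch of 
the two idioms pylint 2.8 distinguishes here. This is plain subprocess 
usage, not iotests code.)

  import subprocess

  # What pylint 2.8 asks for when Popen is genuinely needed: a context
  # manager, so the pipes are closed even if communicate() raises.
  with subprocess.Popen(['echo', 'hi'], stdout=subprocess.PIPE,
                        universal_newlines=True) as proc:
      output = proc.communicate()[0]

  # For simple synchronous calls, run() performs the open/communicate/
  # close dance internally, which is why the patch converts to it.
  res = subprocess.run(['echo', 'hi'], stdout=subprocess.PIPE,
                       universal_newlines=True, check=False)
  output = res.stdout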







[Bug 1921082] Re: VM crash when process broadcast MCE

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921082

Title:
  VM crash when process broadcast MCE

Status in QEMU:
  Incomplete

Bug description:
  When I run a memory SRAR test against a VM, I hit the following issue:

  My VM has 16 vCPUs. I inject one UE error into memory that is accessed
  by the VM; the host MCE is then raised, SIGBUS is sent to the VM, and
  qemu takes control. Qemu checks the broadcast attribute via
  cpu_x86_support_mca_broadcast().

  Then Qemu may inject the MCE into all vCPUs. As each vCPU is just one
  process on the host, we can't guarantee all the vCPUs will enter the
  MCE handler within the 1s sync time, and the VM may panic.

  This issue can easily be worked around by enlarging the monarch_timeout
  configuration, but the exact monarch_timeout can't easily be determined,
  as it depends on the number of vCPUs and the current system scheduling
  status.

  I am wondering why a VM needs the broadcast attribute for MCE. When qemu
  processes an MCE event from the host, won't it always be signaled for
  one vCPU? If so, why does qemu need to broadcast the MCE event to all
  vCPUs?

  Can we just deliver an LMCE to one specific vCPU and make this behavior
  the default?

  If anything is wrong, please point it out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921082/+subscriptions



[Bug 1915063] Re: Windows 10 wil not install using qemu-system-x86_64

2021-05-14 Thread Thomas Huth
The patch for QEMU that has been mentioned in comment #38 has been
merged already, so I'm marking this as Fix-Released there.

** Changed in: qemu
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1915063

Title:
  Windows 10 wil not install using qemu-system-x86_64

Status in QEMU:
  Fix Released
Status in linux package in Ubuntu:
  Confirmed
Status in linux-oem-5.10 package in Ubuntu:
  Fix Released
Status in linux-oem-5.6 package in Ubuntu:
  Confirmed
Status in qemu package in Ubuntu:
  Invalid

Bug description:
  Steps to reproduce:
  install virt-manager and ovmf if not already there
  copy the windows and virtio iso files to /var/lib/libvirt/images

  Use virt-manager from the local machine to create your VMs with the disk, CPUs and memory required
  Select customize configuration, then select OVMF(UEFI) instead of seabios
  set the first CDROM to the windows installation iso (enable in boot options)
  add a second CDROM and load it with the virtio iso
  change the spice display to VNC

  I always get a security error from windows and it fails to launch the installer (works on RHEL and Fedora).
  I tried updating the qemu version from Focal's 4.2 to Groovy's 5.0, which was of no help.
  --- 
  ProblemType: Bug
  ApportVersion: 2.20.11-0ubuntu27.14
  Architecture: amd64
  CasperMD5CheckResult: skip
  CurrentDesktop: ubuntu:GNOME
  DistributionChannelDescriptor:
   # This is the distribution channel descriptor for the OEM CDs
   # For more information see 
http://wiki.ubuntu.com/DistributionChannelDescriptor
   
canonical-oem-sutton-focal-amd64-20201030-422+pc-sutton-bachman-focal-amd64+X00
  DistroRelease: Ubuntu 20.04
  InstallationDate: Installed on 2021-01-20 (19 days ago)
  InstallationMedia: Ubuntu 20.04 "Focal" - Build amd64 LIVE Binary 
20201030-14:39
  MachineType: LENOVO 30E102Z
  NonfreeKernelModules: nvidia_modeset nvidia
  Package: linux (not installed)
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcFB: 0 EFI VGA
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-5.6.0-1042-oem 
root=UUID=389cd165-fc52-4814-b837-a1090b9c2387 ro locale=en_US quiet splash 
vt.handoff=7
  ProcVersionSignature: Ubuntu 5.6.0-1042.46-oem 5.6.19
  RelatedPackageVersions:
   linux-restricted-modules-5.6.0-1042-oem N/A
   linux-backports-modules-5.6.0-1042-oem  N/A
   linux-firmware  1.187.8
  RfKill:
   
  Tags:  focal
  Uname: Linux 5.6.0-1042-oem x86_64
  UpgradeStatus: No upgrade log present (probably fresh install)
  UserGroups: adm cdrom dip docker kvm libvirt lpadmin plugdev sambashare sudo
  _MarkForUpload: True
  dmi.bios.date: 07/29/2020
  dmi.bios.vendor: LENOVO
  dmi.bios.version: S07KT08A
  dmi.board.name: 1046
  dmi.board.vendor: LENOVO
  dmi.board.version: Not Defined
  dmi.chassis.type: 3
  dmi.chassis.vendor: LENOVO
  dmi.chassis.version: None
  dmi.modalias: 
dmi:bvnLENOVO:bvrS07KT08A:bd07/29/2020:svnLENOVO:pn30E102Z:pvrThinkStationP620:rvnLENOVO:rn1046:rvrNotDefined:cvnLENOVO:ct3:cvrNone:
  dmi.product.family: INVALID
  dmi.product.name: 30E102Z
  dmi.product.sku: LENOVO_MT_30E1_BU_Think_FM_ThinkStation P620
  dmi.product.version: ThinkStation P620
  dmi.sys.vendor: LENOVO

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1915063/+subscriptions



[Bug 1921061] Re: Corsair iCUE Install Fails, qemu VM Reboots

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921061

Title:
  Corsair iCUE Install Fails, qemu VM Reboots

Status in QEMU:
  Incomplete

Bug description:
  Hi,

  I had this working before, but with the latest version of QEMU (built
  from master), when I try to install Corsair iCUE and it gets to the
  driver install point => my Windows 10 VM just reboots! I would be
  happy to capture logs, but ... what logs exist for an uncontrolled
  reboot? Thinking they are lost in the reboot :-(.

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921061/+subscriptions



Re: [PATCH 03/10] python/machine: use subprocess.run instead of subprocess.Popen

2021-05-14 Thread John Snow

On 5/14/21 10:08 AM, Wainer dos Santos Moschetta wrote:
Now it might throw a CalledProcessError given that `check=True`. 
Shouldn't it capture the exception and (possibly) re-throw it as a 
QEMUMachineError?


I lied to you again. The existing callers all check for failure 
explicitly, so in the interest of avoiding an API change, I'm just going 
to set check=False here.


We can improve the interface separately some other time.

--js
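
(For readers skimming the thread, a minimal standalone sketch of the 
behavioral difference under discussion; plain subprocess usage, not 
QEMUMachine code.)

  import subprocess

  # check=True: a non-zero exit status raises CalledProcessError.
  try:
      subprocess.run(['false'], check=True)
  except subprocess.CalledProcessError as exc:
      print('raised:', exc.returncode)

  # check=False: the caller inspects returncode explicitly, which is what
  # the existing QEMUMachine callers already do, hence no API change.
  res = subprocess.run(['false'], check=False)
  if res.returncode != 0:
      print('checked explicitly:', res.returncode)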




[Bug 1922102] Re: Broken tap networking on macOS host

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1922102

Title:
  Broken tap networking on macOS host

Status in QEMU:
  Incomplete

Bug description:
  Building QEMU with GLib newer than 2.58.3 corrupts tap networking on macOS 
hosts.
  Tap device was provided by Tun/Tap kernel extension installed from brew:
    brew install tuntap

  Checked revisions:
    553032d (v5.2.0)
    6d40ce0 (v6.0.0-rc1)

  Host:
   MacBook Pro (Retina, 15-inch, Mid 2015)
   macOS Catalina 10.15.6 (19G2021)

  Guest:
    Linux Ubuntu 4.4.0-206-generic x86_64
    Also tested macOS Catalina 10.15.7 as a guest, the behaviour is the same.

  QEMU command line:

  qemu-system-x86_64 \
    -drive file=hdd.qcow2,if=virtio,format=qcow2 \
    -m 3G \
    -nic tap,script=tap-up.sh

  tap-up.sh:

   #!/bin/sh

   TAPDEV="$1"
   BRIDGEDEV="bridge0"

   ifconfig "$BRIDGEDEV" addm "$TAPDEV"

  Enabling/disabling Hypervisor.Framework acceleration (`-accel hvf`)
  has no effect.

  How to reproduce:
    1. Build & install GLib > 2.58.3 (tested 2.60.7)
    2. Build qemu-system-x86_64 with GLib > 2.58.3
    3. Boot any guest with tap networking enabled
    4. See that the external network is inaccessible

  Hotfix:
    1. Build & install GLib 2.58.3
    2. Build qemu-system-x86_64 with GLib 2.58.3
    3. Boot any guest with tap networking enabled
    4. See that the external network is accessible, everything is working as 
expected

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1922102/+subscriptions



[Bug 1922252] Re: [feature request] webcam support

2021-05-14 Thread Thomas Huth
Have you already tried to simply pass the host USB webcam through to the
guest? ... that's likely easier and faster than adding software
emulation...

** Changed in: qemu
   Status: New => Incomplete

** Tags added: feature-request usb

** Changed in: qemu
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1922252

Title:
  [feature request] webcam support

Status in QEMU:
  Incomplete

Bug description:
  Please

  I am impatient to get something as "-device usb-webcam" to share
  dynamically the webcam between host and guest.

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1922252/+subscriptions



[Bug 1921444] Re: Q35 doesn't support to hot add the 2nd PCIe device to KVM guest

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921444

Title:
  Q35 doesn't support to hot add the 2nd PCIe device to KVM guest

Status in QEMU:
  Incomplete

Bug description:
  KVM: https://git.kernel.org/pub/scm/virt/kvm/kvm.git  branch: next, commit: 4a98623d
  Qemu: https://git.qemu.org/git/qemu.git  branch: master, commit: 9e2e9fe3

  I created a KVM guest with the Q35 chipset and tried to hot add 2 PCIe
  devices to the guest with the qemu internal command device_add. The 1st
  device can be added successfully, but the 2nd device fails to hot add.

  If the guest chipset is the legacy i440fx, both devices can be added
  successfully.

  1. Enable VT-d in BIOS
  2. load KVM modules in Linux OS: modprobe kvm; modprobe kvm_intel
  3. Bind 2 device to vfio-pci
  echo :b1:00.0 > /sys/bus/pci/drivers/i40e/unbind
  echo "8086 1572" > /sys/bus/pci/drivers/vfio-pci/new_id 
  echo :b1:00.1 > /sys/bus/pci/drivers/i40e/unbind
  echo "8086 1572" > /sys/bus/pci/drivers/vfio-pci/new_id 

  4. create guest with Q35 chipset:
  qemu-system-x86_64 --accel kvm -m 4096 -smp 4 -drive file=/home/rhel8.2.qcow2,if=none,id=virtio-disk0 -device virtio-blk-pci,drive=virtio-disk0 -cpu host -machine q35 -device pcie-root-port,id=root1 -daemonize

  5. hot add the 1st device to the guest successfully
  in the guest qemu monitor: "device_add vfio-pci,host=b1:00.0,id=nic0,bus=root1"
  6. hot add the 2nd device to the guest
  in the guest qemu monitor: "device_add vfio-pci,host=b1:00.1,id=nic1,bus=root1"
  The 2nd device is not added in the guest, and the 1st device is removed from the guest.

  Guest partial log:
  [  110.452272] pcieport :00:04.0: pciehp: Slot(0): Attention button 
pressed
  [  110.453314] pcieport :00:04.0: pciehp: Slot(0) Powering on due to 
button press
  [  110.454156] pcieport :00:04.0: pciehp: Slot(0): Card present
  [  110.454792] pcieport :00:04.0: pciehp: Slot(0): Link Up
  [  110.580927] pci :01:00.0: [8086:1572] type 00 class 0x02
  [  110.582560] pci :01:00.0: reg 0x10: [mem 0x-0x007f 64bit 
pref]
  [  110.583453] pci :01:00.0: reg 0x1c: [mem 0x-0x7fff 64bit 
pref]
  [  110.584278] pci :01:00.0: reg 0x30: [mem 0x-0x0007 pref]
  [  110.585051] pci :01:00.0: Max Payload Size set to 128 (was 512, max 
2048)
  [  110.586621] pci :01:00.0: PME# supported from D0 D3hot D3cold
  [  110.588140] pci :01:00.0: BAR 0: no space for [mem size 0x0080 
64bit pref]
  [  110.588954] pci :01:00.0: BAR 0: failed to assign [mem size 0x0080 
64bit pref]
  [  110.589797] pci :01:00.0: BAR 6: assigned [mem 0xfe80-0xfe87 
pref]
  [  110.590703] pci :01:00.0: BAR 3: assigned [mem 0xfe00-0xfe007fff 
64bit pref]
  [  110.592085] pcieport :00:04.0: PCI bridge to [bus 01]
  [  110.592755] pcieport :00:04.0:   bridge window [io  0x1000-0x1fff]
  [  110.594403] pcieport :00:04.0:   bridge window [mem 
0xfe80-0xfe9f]
  [  110.595847] pcieport :00:04.0:   bridge window [mem 
0xfe00-0xfe1f 64bit pref]
  [  110.597867] PCI: No. 2 try to assign unassigned res
  [  110.597870] release child resource [mem 0xfe00-0xfe007fff 64bit pref]
  [  110.597871] pcieport :00:04.0: resource 15 [mem 0xfe00-0xfe1f 
64bit pref] released
  [  110.598881] pcieport :00:04.0: PCI bridge to [bus 01]
  [  110.600789] pcieport :00:04.0: BAR 15: assigned [mem 
0x18000-0x180bf 64bit pref]
  [  110.601731] pci :01:00.0: BAR 0: assigned [mem 0x18000-0x1807f 
64bit pref]
  [  110.602849]

[Bug 1921635] Re: ESP SCSI adapter not working with DOS ASPI drivers

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921635

Title:
  ESP SCSI adapter not working with DOS ASPI drivers

Status in QEMU:
  Incomplete

Bug description:
  I have been trying to install the DOS ASPI drivers for the ESP scsi
  card, both in am53c974 and dc390 modes. Neither works, but they fail in
  different ways.

  The following things appear to be problematic:

  * The am53c974 should work with the PcSCSI drivers (AMSIDA.SYS), but the ASPI driver never manages to get past initializing the card. The VM never continues.
  * The dc390 ASPI driver fares a little better. The ASPI driver loads and is semi-functional, but the drivers for the peripherals don't work.
   - ASPI.SYS (creative name) loads
   - TRMDISK.SYS fails to load when a cd-drive is attached and will crash while scanning the scsi-id where the cd drive is attached
   - TRMDISK.SYS loads without a CD drive attached but fails to read any scsi-hd devices attached. The TFDISK.EXE formatter crashes.
   - TRMCD.SYS loads, but cannot detect any CD drives.

  The various permutations:
  am53c974 hang on ASPI driver load (CD only attached):

  ~/src/qemu/build/qemu-system-i386 -m 64 -device am53c974,id=scsi0 -device scsi-cd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 -drive file=../Windows\ 98\ Second\ Edition.iso,if=none,id=drive0 -vga cirrus -fda am53c974_aspi.img -bios /home/hp/src/seabios/out/bios.bin -boot a  -trace 'scsi*' -trace 'esp*' -D log

  dc390 crash because of CDROM attachment and loading TRMDISK.SYS (only CD attached):
  ~/src/qemu/build/qemu-system-i386 -m 64 -device dc390,id=scsi0,rombar=0 -device scsi-cd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 -drive file=../Windows\ 98\ Second\ Edition.iso,if=none,id=drive0 -vga cirrus -fda dc390_all.img  -bios /home/hp/src/seabios/out/bios.bin -boot a  -trace 'scsi*' -trace 'esp*' -D log

  dc390 successful boot, but TRMDISK.SYS not working (TFDISK.EXE will crash):
  ~/src/qemu/build/qemu-system-i386 -m 64 -device dc390,id=scsi0 -device scsi-hd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0,logical_block_size=512 -drive file=small.qcow2,if=none,id=drive0 -vga cirrus -fda dc390_all.img -bios /home/hp/src/seabios/out/bios.bin -boot a  -trace 'scsi*' -trace 'esp*' -D log

  dc390 successful boot, TRMDISK.SYS not loaded, only TRMCD.SYS, CDROM not detected:
  ~/src/qemu/build/qemu-system-i386 -m 64 -device dc390,id=scsi0,rombar=0 -device scsi-cd,drive=drive0,bus=scsi0.0,channel=0,scsi-id=0,lun=0 -drive file=../Windows\ 98\ Second\ Edition.iso,if=none,id=drive0 -vga cirrus -fda dc390_cd.img  -bios /home/hp/src/seabios/out/bios.bin -boot a  -trace 'scsi*' -trace 'esp*' -D log

  All of these tests were done on
  7b9a3c9f94bcac23c534bc9f42a9e914b433b299 as well as the 'esp-next'
  branch found here: https://github.com/mcayland/qemu/tree/esp-next

  The bios file is a seabios master with all int13 support disabled.
  With it enabled even less works but I figured this would be a seabios
  bug and not a qemu one.

  The actual iso and qcow2 files used don't appear to matter. The
  'small.qcow2' is an empty drive of 100MB. I have also tried other ISOs
  in the CD drives, or even not putting any cd in the drives, with the
  same results.

  I will attach all of the above images.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921635/+subscriptions



[Bug 1921092] Re: gdbstub debug of multi-cluster machines is undocumented and confusing

2021-05-14 Thread Thomas Huth
Is there still anything to do here or could we close the ticket now?

** Changed in: qemu
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1921092

Title:
  gdbstub debug of multi-cluster machines is undocumented and confusing

Status in QEMU:
  Incomplete

Bug description:
  Working with Zephyr RTOS, running a multi-core sample on mps2_an521 works fine; both cpus start.
  When trying to debug with the options -s -S, the second core fails to boot.

  Posted with an explanation also at:
  https://github.com/zephyrproject-rtos/zephyr/issues/33635

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1921092/+subscriptions



[Bug 1918975] Re: [Feature request] Propagate interpreter to spawned processes

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire auto-
matically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days (other-
wise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Tags added: feature-request linux-user

** Changed in: qemu
   Importance: Undecided => Wishlist

** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1918975

Title:
  [Feature request] Propagate interpreter to spawned processes

Status in QEMU:
  Incomplete

Bug description:
  I want QEMU user static to propagate the interpreter to spawned
  processes, for instance by adding a recursive -R option.

  I.e. if my program is interpreted by QEMU static, then everything it
  launches should be interpreted by it, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1918975/+subscriptions



Re: [PATCH 05/10] python/machine: Disable pylint warning for open() in _pre_launch

2021-05-14 Thread John Snow

On 5/14/21 10:42 AM, Wainer dos Santos Moschetta wrote:

Hi,

On 5/12/21 6:46 PM, John Snow wrote:

Shift the open() call later so that the pylint pragma applies *only* to
that one open() call. Add a note that suggests why this is safe: the
resource is unconditionally cleaned up in _post_shutdown().



You can also put it in a pylint disable/enable block. E.g.:

     # pylint: disable=consider-using-with

     self._qemu_log_file = open(self._qemu_log_path, 'wb')

     # pylint: enable=consider-using-with

However I don't know if this is bad practice. :)



I learned a new trick!


Reviewed-by: Wainer dos Santos Moschetta 



Thanks. In this case I will probably leave this alone unless someone 
else voices a strong opinion. I figure the comment protects us against 
future oopses well enough.




_post_shutdown is called after failed launches (see launch()), and
unconditionally after every call to shutdown(), and therefore also on
__exit__.

Signed-off-by: John Snow 
---
  python/qemu/machine.py | 6 +-
  1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/python/qemu/machine.py b/python/qemu/machine.py
index c13ff9b32bf..8f86303b48f 100644
--- a/python/qemu/machine.py
+++ b/python/qemu/machine.py
@@ -308,7 +308,6 @@ def _pre_launch(self) -> None:
  self._temp_dir = tempfile.mkdtemp(prefix="qemu-machine-",
    dir=self._test_dir)
  self._qemu_log_path = os.path.join(self._temp_dir, 
self._name + ".log")

-    self._qemu_log_file = open(self._qemu_log_path, 'wb')
  if self._console_set:
  self._remove_files.append(self._console_address)
@@ -323,6 +322,11 @@ def _pre_launch(self) -> None:
  nickname=self._name
  )
+    # NOTE: Make sure any opened resources are *definitely* freed in
+    # _post_shutdown()!
+    # pylint: disable=consider-using-with
+    self._qemu_log_file = open(self._qemu_log_path, 'wb')
+
  def _post_launch(self) -> None:
  if self._qmp_connection:
  self._qmp.accept()





[Bug 1920871] Re: netperf UDP_STREAM high packet loss on QEMU tap network

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire
automatically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days
(otherwise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1920871

Title:
  netperf UDP_STREAM high packet loss on QEMU tap network

Status in QEMU:
  Incomplete

Bug description:
  Hi, I boot a guest with "-netdev
  tap,id=hn0,vhost=off,br=br0,helper=/usr/local/libexec/qemu-bridge-
  helper" network option, and using "netperf -H IP -t UDP_STREAM" to
  test guest UDP performance, I got the following output:

  Socket  Message  Elapsed  Messages
  SizeSize Time Okay Errors   Throughput
  bytes   bytessecs#  #   10^6bits/sec

  212992   65507   10.00  144710  07583.56
  212992   10.00  32  1.68

  We can see that most of the UDP packets are lost. But when I test another
host machine or use "-netdev usr,x", I get:
  Socket  Message  Elapsed  Messages
  SizeSize Time Okay Errors   Throughput
  bytes   bytessecs#  #   10^6bits/sec

  212992   65507   10.00   18351  0 961.61
  212992   10.00   18350961.56

  most of the UDP packets are received.

  And if we check the tap device QEMU used, we can see:
  ifconfig tap0
  tap0: flags=4419  mtu 1500
  inet6 fe80::ecc6:21ff:fe6f:b174  prefixlen 64  scopeid 0x20
  ether ee:c6:21:6f:b1:74  txqueuelen 1000  (Ethernet)
  RX packets 282  bytes 30097 (29.3 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 9086214  bytes 12731596673 (11.8 GiB)
  TX errors 0  dropped 16349024 overruns 0  carrier 0  collisions 0
  lots of TX packets are dropped.

  Results with other packet sizes:

  ➜  boot netperf -H 192.168.199.200 -t UDP_STREAM -- -m 1
  MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.199.200 () port 0 AF_INET
  Socket  Message  Elapsed  Messages
  SizeSize Time Okay Errors   Throughput
  bytes   bytessecs#  #   10^6bits/sec

  212992   1   10.00 2297941  0   1.84
  212992   10.00 1462024  1.17

  ➜  boot netperf -H 192.168.199.200 -t UDP_STREAM -- -m 128
  MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
192.168.199.200 () port 0 AF_INET
  Socket  Message  Elapsed  Messages
  SizeSize Time Okay Errors   Throughput
  bytes   bytessecs#  #   10^6bits/sec

  212992 128   10.00 2311547  0 236.70
  212992   10.00 1359834139.25

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1920871/+subscriptions



[Bug 1920013] Re: Unable to pass-through PCIe devices from a ppc64le host to an x86_64 guest

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire
automatically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days
(otherwise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1920013

Title:
  Unable to pass-through PCIe devices from a ppc64le host to an x86_64
  guest

Status in QEMU:
  Incomplete

Bug description:
  Attempting to pass through a PCIe device from a ppc64le host to an
  x86_64 guest with QEMU v5.2.0-3031-g571d413b5d (built from git master)
  fails with the following error:

  include/exec/memory.h:43:IOMMU_MEMORY_REGION: Object 0x10438eb00
  is not an instance of type qemu:iommu-memory-region

  To reproduce this issue, simply run the following command on a POWER9
  system:

  qemu-system-x86_64 -machine q35 -device vfio-pci,host=$DBSF

  Where $DBSF is a domain:bus:slot.function PCIe device address.

  This also fails with QEMU 3.1.0 (from Debian Buster), so I assume this
  has never worked. Helpfully, the error message it prints seems to
  indicate where the problem is:

  hw/vfio/spapr.c:147:vfio_spapr_create_window: Object 0x164473510
  is not an instance of type qemu:iommu-memory-region

  My kernel (Linux v5.8.0 plus some small unrelated patches) is built
  with the page size set to 4k, so this issue shouldn't be due to a page
  size mismatch. And as I stated earlier, my host arch is ppc64le, so it
  shouldn't be an endianness issue, either.

  I assume this should be possible (in theory) since I've seen reports
  of others getting PCIe passthrough working with aarch64 guests on
  x86_64 hosts, but of course that (passthrough to weird guest arch on
  x86) is somewhat the opposite of what I'm trying to do (passthrough to
  x86 on weird host arch) so I don't know for sure. If it is possible,
  I'm willing to develop a fix myself, but I'm almost completely
  unfamiliar with QEMU's internals so if anyone has any advice on where
  to start I'd greatly appreciate it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1920013/+subscriptions



[Bug 1920211] Re: shrink option for discard (for bad host-filesystems and -backup solutions)

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire
automatically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days
(otherwise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Tags added: feature-request

** Changed in: qemu
   Status: New => Incomplete

** Changed in: qemu
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1920211

Title:
  shrink option for discard (for bad host-filesystems and -backup
  solutions)

Status in QEMU:
  Incomplete

Bug description:
  When using discard=unmap for virtio or scsi devices with QCOW2 images,
  space discarded by the guest will be unmaped on the host, which is
  basically great!

  This will turn the QCOW2 image into a sparse file which is efficient
  for most scenarios. But it may be that you need to avoid big sparse
  files on your host. For example because you need to use a backup
  solution which doesn't support sparse files well. Or maybe the QCOW2
  image is on a filesystem mount which doesn't support sparse files at
  all.

  For those scenarios an alternative option for the discard setting
(discard=shrink) would be great, so that the QCOW2 file itself gets shrunk
again.
  I'm not sure how the initial growing* of QCOW2 images is implemented
and whether there are limitations. But I hope it may be possible to do the
inverse and actually shrink (not just sparsify) a QCOW2 image with internally
discarded blocks.

  
  I'm using Qemu-5.2.0 and Linux >= 5.3 (host and guest).

  *If you use "qemu-img create -f qcow2 ..." withOUT the "preallocation"
  option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1920211/+subscriptions



Re: [PATCH 03/10] python/machine: use subprocess.run instead of subprocess.Popen

2021-05-14 Thread John Snow

On 5/14/21 10:08 AM, Wainer dos Santos Moschetta wrote:

Hi,

On 5/12/21 6:46 PM, John Snow wrote:

use run() instead of Popen() -- to assert to pylint that we are not
forgetting to close a long-running program.

Signed-off-by: John Snow 
---
  python/qemu/machine.py | 15 +--
  1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/python/qemu/machine.py b/python/qemu/machine.py
index 41f51bd27d0..c13ff9b32bf 100644
--- a/python/qemu/machine.py
+++ b/python/qemu/machine.py
@@ -223,13 +223,16 @@ def send_fd_scm(self, fd: Optional[int] = None,
  assert fd is not None
  fd_param.append(str(fd))
-    proc = subprocess.Popen(
-    fd_param, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE,
-    stderr=subprocess.STDOUT, close_fds=False
+    proc = subprocess.run(
+    fd_param,
+    stdin=subprocess.DEVNULL,
+    stdout=subprocess.PIPE,
+    stderr=subprocess.STDOUT,
+    check=True,
+    close_fds=False,
  )


Now it might throw a CalledProcessError given that `check=True`. 
Shouldn't it capture the exception and (possible) re-throw as an 
QEMUMachineError?


- Wainer



I suppose I ought to so that it matches the other errors of this method, 
yes.


Setting it to false and checking manually may be less code, but yeah. 
I'll change this.


Thanks!




[Bug 1908062] Re: qemu-system-i386 virtio-vga: Assertion in address_space_stw_le_cached failed again

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/300


** Changed in: qemu
   Status: Incomplete => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #300
   https://gitlab.com/qemu-project/qemu/-/issues/300

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1908062

Title:
  qemu-system-i386 virtio-vga: Assertion in address_space_stw_le_cached
  failed again

Status in QEMU:
  Expired

Bug description:
  When I was fuzzing virtio-vga device of the latest QEMU (1758428, Dec
  12, built with --enable-sanitizers --enable-fuzzing), an assertion
  failed in include/exec/memory_ldst_cached.h.inc.

  --[ Reproducer

  cat << EOF | ./build/i386-softmmu/qemu-system-i386 -machine accel=qtest \
  -machine q35 -display none -nodefaults -device virtio-vga -qtest stdio
  outl 0xcf8 0x881c
  outb 0xcfc 0xc3
  outl 0xcf8 0x8804
  outb 0xcfc 0x06
  write 0xc31024 0x2 0x0040
  write 0xc31028 0x1 0x5a
  write 0xc3101c 0x1 0x01
  writel 0xc3100c 0x2000
  write 0xc31016 0x3 0x80a080
  write 0xc33002 0x1 0x80
  write 0x5c 0x1 0x10
  EOF

  --[ Output

  ==35337==WARNING: ASan doesn't fully support makecontext/swapcontext 
functions and may produce false positives in some cases!
  [I 1607946348.442865] OPENED
  [R +0.059305] outl 0xcf8 0x881c
  OK
  [S +0.059326] OK
  [R +0.059338] outb 0xcfc 0xc3
  OK
  [S +0.059355] OK
  [R +0.059363] outl 0xcf8 0x8804
  OK
  [S +0.059369] OK
  [R +0.059381] outb 0xcfc 0x06
  OK
  [S +0.061094] OK
  [R +0.061107] write 0xc31024 0x2 0x0040
  OK
  [S +0.061120] OK
  [R +0.061127] write 0xc31028 0x1 0x5a
  OK
  [S +0.061135] OK
  [R +0.061142] write 0xc3101c 0x1 0x01
  OK
  [S +0.061158] OK
  [R +0.061167] writel 0xc3100c 0x2000
  OK
  [S +0.061212] OK
  [R +0.061222] write 0xc31016 0x3 0x80a080
  OK
  [S +0.061231] OK
  [R +0.061238] write 0xc33002 0x1 0x80
  OK
  [S +0.061247] OK
  [R +0.061253] write 0x5c 0x1 0x10
  OK
  [S +0.061403] OK
  qemu-system-i386: 
/home/qiuhao/hack/qemu/include/exec/memory_ldst_cached.h.inc:88: void 
address_space_stw_le_cached(MemoryRegionCache *, hwaddr, uint32_t, MemTxAttrs, 
MemTxResult *): Assertion `addr < cache->len && 2 <= cache->len - addr' failed.

  --[ Environment

  Ubuntu 20.04.1 5.4.0-58-generic x86_64
  clang: 10.0.0-4ubuntu1
  glibc: 2.31-0ubuntu9.1
  libglib2.0-dev: 2.64.3-1~ubuntu20.04.1

  --[ Note

  Alexander Bulekov found the same assertion failure on 2020-08-04,
  https://bugs.launchpad.net/qemu/+bug/1890333, and it had been fixed in
  commit 2d69eba5fe52045b2c8b0d04fd3806414352afc1.

  Fam Zheng found the same assertion failure on 2018-09-29,
  https://bugs.launchpad.net/qemu/+bug/1795148, and it had been fixed in
  commit db812c4073c77c8a64db8d6663b3416a587c7b4a.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1908062/+subscriptions



[Bug 1918149] Re: qemu-user reports wrong fault_addr in signal handler

2021-05-14 Thread Thomas Huth
The QEMU project is currently moving its bug tracking to another system.
For this we need to know which bugs are still valid and which could be
closed already. Thus we are setting the bug state to "Incomplete" now.

If the bug has already been fixed in the latest upstream version of QEMU,
then please close this ticket as "Fix released".

If it is not fixed yet and you think that this bug report here is still
valid, then you have two options:

1) If you already have an account on gitlab.com, please open a new ticket
for this problem in our new tracker here:

https://gitlab.com/qemu-project/qemu/-/issues

and then close this ticket here on Launchpad (or let it expire
automatically after 60 days). Please mention the URL of this bug ticket on
Launchpad in the new ticket on GitLab.

2) If you don't have an account on gitlab.com and don't intend to get
one, but still would like to keep this ticket opened, then please switch
the state back to "New" or "Confirmed" within the next 60 days
(otherwise it will get closed as "Expired"). We will then eventually migrate
the ticket automatically to the new system (but you won't be the reporter
of the bug in the new system and thus you won't get notified on changes
anymore).

Thank you and sorry for the inconvenience.


** Changed in: qemu
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1918149

Title:
  qemu-user reports wrong fault_addr in signal handler

Status in QEMU:
  Incomplete

Bug description:
  When a SEGV signal occurs and si_addr of the info struct is nil, qemu
  still tries to translate the address from host to guest
  (handle_cpu_signal in accel/tcg/user-exec.c). This means that the
  actual signal handler will receive a fault_addr that is something
  like 0xbf709000.

  I was able to get this to happen by branching to a non-canonical address
on aarch64.
  I used 5.2 (commit: 553032db17). However, building from source, this only
seems to happen if I use the same configure flags as the Debian build:

  ../configure --static --target-list=aarch64-linux-user --disable-
  system --enable-trace-backends=simple --disable-linux-io-uring
  --disable-pie --extra-cflags="-fstack-protector-strong -Wformat
  -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"  --extra-
  ldflags="-Wl,-z,relro -Wl,--as-needed"

  Let me know, if you need more details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1918149/+subscriptions



[Bug 1910941] Re: Assertion `addr < cache->len && 2 <= cache->len - addr' in virtio-blk

2021-05-14 Thread Thomas Huth
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/301


** Changed in: qemu
   Status: New => Expired

** Bug watch added: gitlab.com/qemu-project/qemu/-/issues #301
   https://gitlab.com/qemu-project/qemu/-/issues/301

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1910941

Title:
  Assertion `addr < cache->len && 2 <= cache->len - addr' in virtio-blk

Status in QEMU:
  Expired

Bug description:
  Hello,

  Using the hypervisor fuzzer hyfuzz, I found an assertion failure in the
  virtio-blk emulator.

  A malicious guest user/process could use this flaw to abort the QEMU
  process on the host, resulting in a denial of service.

  This was found in version 5.2.0 (master)

  ```

  qemu-system-i386: 
/home/cwmyung/prj/hyfuzz/src/qemu-master/include/exec/memory_ldst_cached.h.inc:88:
 void address_space_stw_le_cached(MemoryRegionCache *, hwaddr, uint32_t, 
MemTxAttrs, MemTxResult *): Assertion `addr < cache->len && 2 <= cache->len - 
addr' failed.
  [1]1877 abort (core dumped)  
/home/cwmyung/prj/hyfuzz/src/qemu-master/build/i386-softmmu/qemu-system-i386

  Program terminated with signal SIGABRT, Aborted.
  #0  0x7f71cc171f47 in __GI_raise (sig=sig@entry=0x6) at 
../sysdeps/unix/sysv/linux/raise.c:51
  #1  0x7f71cc1738b1 in __GI_abort () at abort.c:79
  #2  0x7f71cc16342a in __assert_fail_base (fmt=0x7f71cc2eaa38 "%s%s%s:%u: 
%s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x56537b324230 "addr 
< cache->len && 2 <= cache->len - addr", file=file@entry=0x56537b32425c 
"/home/cwmyung/prj/hyfuzz/src/qemu-master/include/exec/memory_ldst_cached.h.inc",
 line=line@entry=0x58, function=function@entry=0x56537b3242ab "void 
address_space_stw_le_cached(MemoryRegionCache *, hwaddr, uint32_t, MemTxAttrs, 
MemTxResult *)") at assert.c:92
  #3  0x7f71cc1634a2 in __GI___assert_fail (assertion=0x56537b324230 "addr 
< cache->len && 2 <= cache->len - addr", file=0x56537b32425c 
"/home/cwmyung/prj/hyfuzz/src/qemu-master/include/exec/memory_ldst_cached.h.inc",
 line=0x58, function=0x56537b3242ab "void 
address_space_stw_le_cached(MemoryRegionCache *, hwaddr, uint32_t, MemTxAttrs, 
MemTxResult *)") at assert.c:101
  #4  0x56537af3c917 in address_space_stw_le_cached (attrs=..., 
result=, cache=, addr=, 
val=) at 
/home/cwmyung/prj/hyfuzz/src/qemu-master/include/exec/memory_ldst_cached.h.inc:88
  #5  0x56537af3c917 in stw_le_phys_cached (cache=, 
addr=, val=) at 
/home/cwmyung/prj/hyfuzz/src/qemu-master/include/exec/memory_ldst_phys.h.inc:121
  #6  0x56537af3c917 in virtio_stw_phys_cached (vdev=, 
cache=, pa=, value=) at 
/home/cwmyung/prj/hyfuzz/src/qemu-master/include/hw/virtio/virtio-access.h:196
  #7  0x56537af2b809 in vring_set_avail_event (vq=, val=0x0) 
at ../hw/virtio/virtio.c:429
  #8  0x56537af2b809 in virtio_queue_split_set_notification (vq=, enable=) at ../hw/virtio/virtio.c:438
  #9  0x56537af2b809 in virtio_queue_set_notification (vq=, 
enable=0x1) at ../hw/virtio/virtio.c:499
  #10 0x56537b07ce1c in virtio_blk_handle_vq (s=0x56537d6bb3a0, 
vq=0x56537d6c0680) at ../hw/block/virtio-blk.c:795
  #11 0x56537af3eb4d in virtio_queue_notify_aio_vq (vq=0x56537d6c0680) at 
../hw/virtio/virtio.c:2326
  #12 0x56537af3ba04 in virtio_queue_host_notifier_aio_read (n=) at ../hw/virtio/virtio.c:3533
  #13 0x56537b20901c in aio_dispatch_handler (ctx=0x56537c4179f0, 
node=0x7f71a810b370) at ../util/aio-posix.c:329
  #14 0x56537b20838c in aio_dispatch_handlers (ctx=) at 
../util/aio-posix.c:372
  #15 0x56537b20838c in aio_dispatch (ctx=0x56537c4179f0) at 
../util/aio-posix.c:382
  #16 0x56537b1f99cb in aio_ctx_dispatch (source=0x2, 
callback=0x7ffc8add9f90, user_data=0x0) at ../util/async.c:306
  #17 0x7f71d1c10417 in g_main_context_dispatch () at 
/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
  #18 0x56537b1f1bab in glib_pollfds_poll () at ../util/main-loop.c:232
  #19 0x56537b1f1bab in os_host_main_loop_wait (timeout=) at 
../util/main-loop.c:255
  #20 0x56537b1f1bab in main_loop_wait (nonblocking=) at 
../util/main-loop.c:531
  #21 0x56537af879d7 in qemu_main_loop () at ../softmmu/runstate.c:720
  #22 0x56537a928a3b in main (argc=, argc@entry=0x15, 
argv=, argv@entry=0x7ffc8adda718, envp=) at 
../softmmu/main.c:50
  #23 0x7f71cc154b97 in __libc_start_main (main=0x56537a928a30 , 
argc=0x15, argv=0x7ffc8adda718, init=, fini=, 
rtld_fini=, stack_end=0x7ffc8adda708) at ../csu/libc-start.c:310
  #24 0x56537a92894a in _start ()

  ```

  To reproduce this issue, please run the QEMU with the following
  command line.

  ```

  # To reproduce this issue, please run the QEMU process with the
  following command line.

  $ qemu-system-i386 -m 512  -drive
  file

[Bug 1918084] Re: Build fails on macOS 11.2.2

2021-05-14 Thread Thomas Huth
So is this working now with the final release of v6.0?

** Changed in: qemu
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1918084

Title:
  Build fails on macOS 11.2.2

Status in QEMU:
  Incomplete

Bug description:
  Hi,

  I got the latest version from git. I have pre-compiled the dependency
  libraries. All good. configure creates the necessary files. When I
  build, I get the following error:

  [1368/6454] Compiling C object 
libcapstone.a.p/capstone_arch_AArch64_AArch64InstPrinter.c.o
  ninja: build stopped: subcommand failed.
  make[1]: *** [run-ninja] Error 1
  make: *** [all] Error 2

  I've run make as make -j 8

  original config:

  
PKG_CONFIG_PATH="$SERVERPLUS_DIR/dependencies/glib/lib/pkgconfig:$SERVERPLUS_DIR/dependencies/pixman/lib/pkgconfig:$SERVERPLUS_DIR/dependencies
  /cyrus-sasl/lib/pkgconfig" ./configure --prefix="$SERVERPLUS_DIR"
  --enable-hvf --enable-cocoa --enable-vnc-sasl --enable-auth-pam
  --ninja=/opt/build/build/stage/tools/ninja/ninja
  --python="$SERVERPLUS_DIR/dependencies/python/bin/python3" --enable-
  bsd-user

  If I build with --target-list=x86_64-softmmu then it will build, but I
  will get only the x86_64 QEMU built. With 5.0 I could build all
  emulators.

  $SERVERPLUS_DIR is my target dir.

  Thanks,

  Eddy

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1918084/+subscriptions



Re: [PATCH v6 59/82] target/arm: Implement SVE mixed sign dot product (indexed)

2021-05-14 Thread Richard Henderson

On 5/13/21 7:57 AM, Peter Maydell wrote:

Maybe we should macroify this, as unless I'm misreading them
gvec_sdot_idx_b, gvec_udot_idx_b, gvec_sudot_idx_b and gvec_usdot_idx_b
only differ in the types of the index and the data.


Done.

r~



Re: [PULL v3 0/1] Rtd patches

2021-05-14 Thread Peter Maydell
On Fri, 14 May 2021 at 12:13,  wrote:
>
> From: Marc-André Lureau 
>
> The following changes since commit 2d3fc4e2b069494b1e9e2e4a1e3de24cbc036426:
>
>   Merge remote-tracking branch 'remotes/armbru/tags/pull-misc-2021-05-12' 
> into staging (2021-05-13 20:13:24 +0100)
>
> are available in the Git repository at:
>
>   g...@gitlab.com:marcandre.lureau/qemu.git tags/rtd-pull-request
>
> for you to fetch changes up to 73e6aec6522e1edd63f631c52577b49a39bc234f:
>
>   sphinx: adopt kernel readthedoc theme (2021-05-14 15:05:03 +0400)
>
> 
> Pull request
>
> 


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/6.1
for any user-visible changes.

-- PMM



Re: [RFC PATCH 0/9] Initial support for machine creation via QMP

2021-05-14 Thread Paolo Bonzini
On Fri 14 May 2021, 18:20 Daniel P. Berrangé  wrote:

> My gut feeling though is accel-set would be more logical being done
> first, as that also influences the set of features available in other
> areas of QEMU configuration. Was there a reason you listed it after
> machine-set ?
>

That was also my initial gut feeling, but actually right now the machine
influences the accelerator more than the other way round. For example, the
initialization of the accelerator takes a machine so that on x86 the
per-architecture KVM code knows whether to set up SMM. Also different
machines could use different modes for KVM (HV vs PR for ppc), and some
machines may not be virtualizable at all so they require TCG.

The host CPU these days is really a virtualization-only synonym for -cpu
max, which works for TCG as well. But you're right that x86 CPU flags are
dictated by the accelerator rather than the machine, so specifying it in
machine-set would be clumsy. On the other hand on ARM it's a bit of both:
for KVM it's basically always -cpu host so the accelerator is important;
but some machines may have an M profile CPU and some may have an A.

I don't have the sources at hand to check in which phase CPUs are created,
but it's definitely after ACCEL_CREATED. Adding a third command
cpu-model-set is probably the easiest way to proceed.

Paolo


> Regards,
> Daniel
> --
> |: https://berrange.com  -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-
> https://www.instagram.com/dberrange :|
>
>


Re: [PATCH v6 55/82] target/arm: Implement SVE2 saturating multiply-add (indexed)

2021-05-14 Thread Richard Henderson

On 5/13/21 7:42 AM, Peter Maydell wrote:

On Fri, 30 Apr 2021 at 22:07, Richard Henderson
 wrote:


Signed-off-by: Richard Henderson 
---
  target/arm/helper-sve.h|  9 +
  target/arm/sve.decode  | 18 ++
  target/arm/sve_helper.c| 30 ++
  target/arm/translate-sve.c | 32 
  4 files changed, 81 insertions(+), 8 deletions(-)

+#define DO_ZZXW(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc)  \
+{ \
+intptr_t i, j, oprsz = simd_oprsz(desc);  \
+intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN);   \
+intptr_t idx = extract32(desc, SIMD_DATA_SHIFT + 1, 3) * sizeof(TYPEN); \
+for (i = 0; i < oprsz; i += 16) { \
+TYPEW mm = *(TYPEN *)(vm + i + idx);  \


Doesn't this need an H macro ?


Yep.
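
i.e. route the byte offset through the HN parameter so that big-endian
hosts index the 2-byte elements correctly. A sketch of one concrete
instantiation (TYPEW = int32_t, TYPEN = int16_t, HN = H1_2, reusing the
H1_2 definition from sve_helper.c; illustrative only, not the final patch):

    static inline int32_t load_indexed_elt(void *vm, intptr_t i, intptr_t idx)
    {
        /*
         * H1_2 swizzles the byte offset of a 2-byte element so that the
         * element order matches the architectural view on big-endian
         * hosts; on little-endian hosts it is the identity.
         */
        return *(int16_t *)(vm + H1_2(i + idx));
    }

so the quoted line presumably becomes
"TYPEW mm = *(TYPEN *)(vm + HN(i + idx));".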


r~



Re: [RFC PATCH 10/11] target/ppc: created tcg-stub.c file

2021-05-14 Thread Bruno Piazera Larsen


On 12/05/2021 15:39, Richard Henderson wrote:

On 5/12/21 9:08 AM, Bruno Larsen (billionai) wrote:

+++ b/target/ppc/tcg-stub.c
@@ -0,0 +1,33 @@
+
+#include "qemu/osdep.h"


All files get copyright boilerplate.


+#include "exec/hwaddr.h"
+#include "cpu.h"
+#include "hw/ppc/spapr.h"
+
+hwaddr ppc_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
+{
+    return 0;
+}


This is used by gdbstub.

If there's a way for kvm to convert a virtual address to a physical 
address using the hardware, then use that.  I suspect there is not.


Otherwise, you have to keep all of the mmu page table walking stuff 
for kvm as well as tcg.  Which probably means that all of the other 
stuff that you're stubbing out is used or usable as well.


From what I can tell, KVM can't do it, so we'll have to extract the 
function. Looking at it, the main problem is that it might call 
get_physical_address and use struct mmu_ctx_t, if the mmu_model isn't 
one of: POWERPC_MMU_64B, POWERPC_MMU_2_03, POWERPC_MMU_2_06, 
POWERPC_MMU_2_07, POWERPC_MMU_3_00, POWERPC_MMU_32B, POWERPC_MMU_601.


Is it possible that a machine with an mmu not listed in here could build 
a !TCG version of qemu? If it's not possible, we can separate that part 
into a separate function left in mmu_helper.c and move the rest to 
somewhere else.


Looking at dump_mmu and ppc_tlb_invalidate_all, looks like we need to 
move enough code that it makes sense to create an mmu_common.c for common 
code. Otherwise, it's probably easier to compile all of mmu_helper.c 
instead of picking those functions out.


--

Bruno Piazera Larsen
Instituto de Pesquisas ELDORADO
Embedded Computing Department
Trainee Software Analyst


Re: [PATCH 5/6] co-shared-resource: protect with a mutex

2021-05-14 Thread Paolo Bonzini
On Fri 14 May 2021, 16:10 Emanuele Giuseppe Esposito  wrote:

> > I'm not sure I like it since callers may still need coarser grained
> > locks to protect their own state or synchronize access to multiple
> > items of data. Also, some callers may not need thread-safety.
> >
> > Can the caller to be responsible for locking instead (e.g. using
> > CoMutex)?
>
> Right now co-shared-resource is being used only by block-copy, so I
> guess locking it from the caller or within the API won't really matter
> in this case.
>
> One possible idea on how to delegate this to the caller without adding
> additional small lock/unlock in block-copy is to move co_get_from_shres
> in block_copy_task_end, and calling it only when a boolean passed to
> block_copy_task_end is true.
>

The patch below won't work because qemu_co_queue_wait would have to unlock
the CoMutex; therefore you would have to pass it as an additional argument
to co_get_from_shres.

Overall, neither co_get_from_shres nor AioTaskPool should be fast paths, so
using a local lock seems to produce the simplest API.
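
Concretely, the local-lock variant could look something like this (a
sketch that assumes SharedResource grows the QemuMutex from the hunk
quoted below; not necessarily the final code):

    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "qemu/thread.h"

    struct SharedResource {
        uint64_t total;
        uint64_t available;   /* protected by @lock */
        CoQueue queue;        /* coroutines waiting for resources */
        QemuMutex lock;
    };

    /* Caller must hold s->lock. */
    static bool co_try_get_locked(SharedResource *s, uint64_t n)
    {
        if (s->available >= n) {
            s->available -= n;
            return true;
        }
        return false;
    }

    bool co_try_get_from_shres(SharedResource *s, uint64_t n)
    {
        bool res;

        qemu_mutex_lock(&s->lock);
        res = co_try_get_locked(s, n);
        qemu_mutex_unlock(&s->lock);

        return res;
    }

    void coroutine_fn co_get_from_shres(SharedResource *s, uint64_t n)
    {
        assert(n <= s->total);

        qemu_mutex_lock(&s->lock);
        while (!co_try_get_locked(s, n)) {
            /* Drops and re-takes s->lock around the sleep. */
            qemu_co_queue_wait(&s->queue, &s->lock);
        }
        qemu_mutex_unlock(&s->lock);
    }

The CoQueue/QemuMutex pairing works because qemu_co_queue_wait() takes the
lock to release while the coroutine sleeps, which is exactly the extra
argument mentioned above.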

Paolo


> Otherwise make b_c_task_end always call co_get_from_shres and then
> include co_get_from_shres in block_copy_task_create, so that we always
> add and in case remove (if error) in the shared resource.
>
> Something like:
>
> diff --git a/block/block-copy.c b/block/block-copy.c
> index 3a447a7c3d..1e4914b0cb 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
> @@ -233,6 +233,7 @@ static coroutine_fn BlockCopyTask
> *block_copy_task_create(BlockCopyState *s,
>   /* region is dirty, so no existent tasks possible in it */
>   assert(!find_conflicting_task(s, offset, bytes));
>   QLIST_INSERT_HEAD(&s->tasks, task, list);
> +co_get_from_shres(s->mem, task->bytes);
>   qemu_co_mutex_unlock(&s->tasks_lock);
>
>   return task;
> @@ -269,6 +270,7 @@ static void coroutine_fn
> block_copy_task_end(BlockCopyTask *task, int ret)
>   bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->offset,
> task->bytes);
>   }
>   qemu_co_mutex_lock(&task->s->tasks_lock);
> +co_put_to_shres(task->s->mem, task->bytes);
>   task->s->in_flight_bytes -= task->bytes;
>   QLIST_REMOVE(task, list);
>   progress_set_remaining(task->s->progress,
> @@ -379,7 +381,6 @@ static coroutine_fn int
> block_copy_task_run(AioTaskPool *pool,
>
>   aio_task_pool_wait_slot(pool);
>   if (aio_task_pool_status(pool) < 0) {
> -co_put_to_shres(task->s->mem, task->bytes);
>   block_copy_task_end(task, -ECANCELED);
>   g_free(task);
>   return -ECANCELED;
> @@ -498,7 +499,6 @@ static coroutine_fn int
> block_copy_task_entry(AioTask *task)
>   }
>   qemu_mutex_unlock(&t->s->calls_lock);
>
> -co_put_to_shres(t->s->mem, t->bytes);
>   block_copy_task_end(t, ret);
>
>   return ret;
> @@ -687,8 +687,6 @@ block_copy_dirty_clusters(BlockCopyCallState
> *call_state)
>
>   trace_block_copy_process(s, task->offset);
>
> -co_get_from_shres(s->mem, task->bytes);
> -
>   offset = task_end(task);
>   bytes = end - offset;
>
>
>
>
> >
> >> diff --git a/util/qemu-co-shared-resource.c
> b/util/qemu-co-shared-resource.c
> >> index 1c83cd9d29..c455d02a1e 100644
> >> --- a/util/qemu-co-shared-resource.c
> >> +++ b/util/qemu-co-shared-resource.c
> >> @@ -32,6 +32,7 @@ struct SharedResource {
> >>   uint64_t available;
> >>
> >>   CoQueue queue;
> >> +QemuMutex lock;
> >
> > Please add a comment indicating what this lock protects.
> >
> > Thread safety should also be documented in the header file so API users
> > know what to expect.
>
> Will do, thanks.
>
> Emanuele
>
>


Re: [PATCH v2 49/50] target/i386: Move helper_check_io to sysemu

2021-05-14 Thread Richard Henderson

On 5/14/21 10:13 AM, Richard Henderson wrote:

--- a/target/i386/tcg/translate.c
+++ b/target/i386/tcg/translate.c
@@ -193,6 +193,7 @@ typedef struct DisasContext {
  { qemu_build_not_reached(); }
  
  #ifdef CONFIG_USER_ONLY

+STUB_HELPER(check_io, TCGv_env env, TCGv_i32 port, TCGv_i32 size)
  STUB_HELPER(clgi, TCGv_env env)
  STUB_HELPER(flush_page, TCGv_env env, TCGv addr)
  STUB_HELPER(hlt, TCGv_env env, TCGv_i32 pc_ofs)

...

@@ -681,6 +683,14 @@ static void gen_helper_out_func(MemOp ot, TCGv_i32 v, 
TCGv_i32 n)
  static bool gen_check_io(DisasContext *s, MemOp ot, TCGv_i32 port,
   uint32_t svm_flags)
  {
+#ifdef CONFIG_USER_ONLY
+/*
+ * We do not implement the ioperm(2) syscall, so the TSS check
+ * will always fail.
+ */
+gen_exception_gpf(s);
+return false;
+#else
  if (PE(s) && (CPL(s) > IOPL(s) || VM86(s))) {
  gen_helper_check_io(cpu_env, port, tcg_constant_i32(1 << ot));
  }
@@ -699,6 +709,7 @@ static bool gen_check_io(DisasContext *s, MemOp ot, 
TCGv_i32 port,
  tcg_constant_i32(next_eip - cur_eip));
  }
  return true;
+#endif


This ifdef means the STUB_HELPER above isn't even used.
This is caught by clang as an unused inline function.
Will fix for v3.


r~



Re: [PATCH v2 12/12] configure: bump min required CLang to 6.0 / XCode 10.0

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:06 AM Daniel P. Berrangé  wrote:
>
> Several distros have been dropped since the last time we bumped the
> minimum required CLang version.
>
> Per repology, currently shipping versions are:
>
>  RHEL-8: 10.0.1
>   Debian Buster: 7.0.1
>  openSUSE Leap 15.2: 9.0.1
>Ubuntu LTS 18.04: 6.0.0
>Ubuntu LTS 20.04: 10.0.0
>  FreeBSD 12: 8.0.1
>   Fedora 33: 11.0.0
>   Fedora 34: 11.1.0
>
> With this list Ubuntu LTS 18.04 is the constraint at 6.0.0
>
> An LLVM version of 6.0.0 corresponds to macOS XCode version of 10.0
> which dates from Sept 2018.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  configure | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)
>

Reviewed-by: Willian Rampazzo 




Re: [PATCH 5/6] co-shared-resource: protect with a mutex

2021-05-14 Thread Emanuele Giuseppe Esposito




On 14/05/2021 17:30, Vladimir Sementsov-Ogievskiy wrote:

14.05.2021 17:32, Emanuele Giuseppe Esposito wrote:



On 14/05/2021 16:26, Vladimir Sementsov-Ogievskiy wrote:

14.05.2021 17:10, Emanuele Giuseppe Esposito wrote:



On 12/05/2021 17:44, Stefan Hajnoczi wrote:
On Mon, May 10, 2021 at 10:59:40AM +0200, Emanuele Giuseppe 
Esposito wrote:

co-shared-resource is currently not thread-safe, as also reported
in co-shared-resource.h. Add a QemuMutex because 
co_try_get_from_shres

can also be invoked from non-coroutine context.

Signed-off-by: Emanuele Giuseppe Esposito 
---
  util/qemu-co-shared-resource.c | 26 ++
  1 file changed, 22 insertions(+), 4 deletions(-)


Hmm...this thread-safety change is more fine-grained than I was
expecting. If we follow this strategy basically any data structure 
used

by coroutines needs its own fine-grained lock (like Java's Object base
class which has its own lock).

I'm not sure I like it since callers may still need coarser grained
locks to protect their own state or synchronize access to multiple
items of data. Also, some callers may not need thread-safety.

Can the caller to be responsible for locking instead (e.g. using
CoMutex)?


Right now co-shared-resource is being used only by block-copy, so I 
guess locking it from the caller or within the API won't really 
matter in this case.


One possible idea on how to delegate this to the caller without 
adding additional small lock/unlock in block-copy is to move 
co_get_from_shres in block_copy_task_end, and calling it only when a 
boolean passed to block_copy_task_end is true.


Otherwise make b_c_task_end always call co_get_from_shres and then 
include co_get_from_shres in block_copy_task_create, so that we 
always add and in case remove (if error) in the shared resource.


Something like:

diff --git a/block/block-copy.c b/block/block-copy.c
index 3a447a7c3d..1e4914b0cb 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -233,6 +233,7 @@ static coroutine_fn BlockCopyTask 
*block_copy_task_create(BlockCopyState *s,

  /* region is dirty, so no existent tasks possible in it */
  assert(!find_conflicting_task(s, offset, bytes));
  QLIST_INSERT_HEAD(&s->tasks, task, list);
+    co_get_from_shres(s->mem, task->bytes);
  qemu_co_mutex_unlock(&s->tasks_lock);

  return task;
@@ -269,6 +270,7 @@ static void coroutine_fn 
block_copy_task_end(BlockCopyTask *task, int ret)
  bdrv_set_dirty_bitmap(task->s->copy_bitmap, task->offset, 
task->bytes);

  }
  qemu_co_mutex_lock(&task->s->tasks_lock);
+    co_put_to_shres(task->s->mem, task->bytes);
  task->s->in_flight_bytes -= task->bytes;
  QLIST_REMOVE(task, list);
  progress_set_remaining(task->s->progress,
@@ -379,7 +381,6 @@ static coroutine_fn int 
block_copy_task_run(AioTaskPool *pool,


  aio_task_pool_wait_slot(pool);
  if (aio_task_pool_status(pool) < 0) {
-    co_put_to_shres(task->s->mem, task->bytes);
  block_copy_task_end(task, -ECANCELED);
  g_free(task);
  return -ECANCELED;
@@ -498,7 +499,6 @@ static coroutine_fn int 
block_copy_task_entry(AioTask *task)

  }
  qemu_mutex_unlock(&t->s->calls_lock);

-    co_put_to_shres(t->s->mem, t->bytes);
  block_copy_task_end(t, ret);

  return ret;
@@ -687,8 +687,6 @@ block_copy_dirty_clusters(BlockCopyCallState 
*call_state)


  trace_block_copy_process(s, task->offset);

-    co_get_from_shres(s->mem, task->bytes);


we want to get from shres here, after possible call to 
block_copy_task_shrink(), as task->bytes may be reduced.


Ah right, I missed that. So I guess if we want the caller to protect 
co-shared-resource, get_from_shres stays where it is, and put_ instead 
can still go into task_end (with a boolean enabling it).


honestly, I don't follow how it helps thread-safety


From my understanding, the whole point here is to have no lock in 
co-shared-resource but let the caller take care of it (block-copy).


The above was just an idea on how to do it.






-
  offset = task_end(task);
  bytes = end - offset;






diff --git a/util/qemu-co-shared-resource.c 
b/util/qemu-co-shared-resource.c

index 1c83cd9d29..c455d02a1e 100644
--- a/util/qemu-co-shared-resource.c
+++ b/util/qemu-co-shared-resource.c
@@ -32,6 +32,7 @@ struct SharedResource {
  uint64_t available;
  CoQueue queue;
+    QemuMutex lock;


Please add a comment indicating what this lock protects.

Thread safety should also be documented in the header file so API 
users

know what to expect.


Will do, thanks.

Emanuele



Re: [PATCH v6 82/82] target/arm: Enable SVE2 and related extensions

2021-05-14 Thread Richard Henderson

On 5/13/21 2:35 PM, Peter Maydell wrote:

On Fri, 30 Apr 2021 at 22:37, Richard Henderson
 wrote:


Signed-off-by: Richard Henderson 
---
  target/arm/cpu.c   |  1 +
  target/arm/cpu64.c | 13 +
  2 files changed, 14 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 0dd623e590..30fd5d5ff7 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1464,6 +1464,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error 
**errp)

  u = cpu->isar.id_isar6;
  u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
+u = FIELD_DP32(u, ID_ISAR6, I8MM, 0);
  cpu->isar.id_isar6 = u;

  u = cpu->isar.mvfr0;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index f0a9e968c9..379f90fab8 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -662,6 +662,7 @@ static void aarch64_max_initfn(Object *obj)
  t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
  t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
  t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
+t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
  cpu->isar.id_aa64isar1 = t;

  t = cpu->isar.id_aa64pfr0;
@@ -702,6 +703,17 @@ static void aarch64_max_initfn(Object *obj)
  t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
  cpu->isar.id_aa64mmfr2 = t;

+t = cpu->isar.id_aa64zfr0;
+t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2);  /* PMULL */
+t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
+cpu->isar.id_aa64zfr0 = t;
+
  /* Replicate the same data to the 32-bit id registers.  */
  u = cpu->isar.id_isar5;
  u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
@@ -718,6 +730,7 @@ static void aarch64_max_initfn(Object *obj)
  u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
  u = FIELD_DP32(u, ID_ISAR6, SB, 1);
  u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
+u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
  cpu->isar.id_isar6 = u;

  u = cpu->isar.id_pfr0;


Do we need to clear any of these in the "user set has_neon and/or
has_vfp to false" code in arm_cpu_realizefn() ?


Oh, hmm, yes.  Indeed, I guess we need to disable SVE as well?

I also see that ID_ISAR6.I8MM is currently handled by !has_vfp, but it's really 
an AdvSIMD aka has_neon feature.



r~



Re: [PATCH v2 11/12] configure: bump min required GCC to 7.5.0

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:05 AM Daniel P. Berrangé  wrote:
>
> Several distros have been dropped since the last time we bumped the
> minimum required GCC version.
>
> Per repology, currently shipping versions are:
>
>  RHEL-8: 8.3.1
>   Debian Buster: 8.3.0
>  openSUSE Leap 15.2: 7.5.0
>Ubuntu LTS 18.04: 7.5.0
>Ubuntu LTS 20.04: 9.3.0
> FreeBSD: 10.3.0
>   Fedora 33: 9.2.0
>   Fedora 34: 11.0.1
> OpenBSD: 8.4.0
>  macOS HomeBrew: 11.1.0
>
> With this list Ubuntu LTS 18.04 / openSUSE Leap 15.2 are the
> constraint at 7.5.0
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  configure | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>

Reviewed-by: Willian Rampazzo 




Re: [PATCH v2 10/12] configure: bump min required glib version to 2.56

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:05 AM Daniel P. Berrangé  wrote:
>
> The glib version was not previously constrained by RHEL-7 since it
> rebases fairly often. Instead SLES 12 and Ubuntu 16.04 were the
> constraints in 00f2cfbbec63fb6f5a7789797a62ccedd22466ea. Both of
> these are old enough that they are outside our platform support
> matrix now.
>
> Per repology, current shipping versions are:
>
>  RHEL-8: 2.56.4
>   Debian Buster: 2.58.3
>  openSUSE Leap 15.2: 2.62.6
>Ubuntu LTS 18.04: 2.56.4
>Ubuntu LTS 20.04: 2.64.6
> FreeBSD: 2.66.7
>   Fedora 33: 2.66.8
>   Fedora 34: 2.68.1
> OpenBSD: 2.68.1
>  macOS HomeBrew: 2.68.1
>
> Thus Ubuntu LTS 18.04 / RHEL-8 are the constraint for GLib version
> at 2.56
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  configure |   2 +-
>  include/glib-compat.h |  13 +--
>  util/oslib-win32.c| 204 --
>  3 files changed, 3 insertions(+), 216 deletions(-)
>

Reviewed-by: Willian Rampazzo 




[PULL 18/19] test-write-threshold: drop extra TestStruct structure

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

We don't need this extra logic: it doesn't make code simpler.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20210506090621.11848-8-vsement...@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Max Reitz 
---
 tests/unit/test-write-threshold.c | 20 +++-
 1 file changed, 3 insertions(+), 17 deletions(-)

diff --git a/tests/unit/test-write-threshold.c 
b/tests/unit/test-write-threshold.c
index 9e9986aefc..49b1ef7a20 100644
--- a/tests/unit/test-write-threshold.c
+++ b/tests/unit/test-write-threshold.c
@@ -37,26 +37,12 @@ static void test_threshold_trigger(void)
 g_assert_cmpuint(bdrv_write_threshold_get(&bs), ==, 0);
 }
 
-typedef struct TestStruct {
-const char *name;
-void (*func)(void);
-} TestStruct;
-
 
 int main(int argc, char **argv)
 {
-size_t i;
-TestStruct tests[] = {
-{ "/write-threshold/not-trigger",
-  test_threshold_not_trigger },
-{ "/write-threshold/trigger",
-  test_threshold_trigger },
-{ NULL, NULL }
-};
-
 g_test_init(&argc, &argv, NULL);
-for (i = 0; tests[i].name != NULL; i++) {
-g_test_add_func(tests[i].name, tests[i].func);
-}
+g_test_add_func("/write-threshold/not-trigger", 
test_threshold_not_trigger);
+g_test_add_func("/write-threshold/trigger", test_threshold_trigger);
+
 return g_test_run();
 }
-- 
2.31.1




Re: [PATCH v6 80/82] target/arm: Implement integer matrix multiply accumulate

2021-05-14 Thread Richard Henderson

On 5/13/21 2:49 PM, Peter Maydell wrote:

On Fri, 30 Apr 2021 at 22:36, Richard Henderson
 wrote:


This is {S,U,US}MMLA for both AArch64 AdvSIMD and SVE,
and V{S,U,US}MMLA.S8 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
  target/arm/helper.h   |  7 
  target/arm/neon-shared.decode |  7 
  target/arm/sve.decode |  6 +++
  target/arm/translate-a64.c| 18 
  target/arm/translate-neon.c   | 27 
  target/arm/translate-sve.c| 27 
  target/arm/vec_helper.c   | 77 +++
  7 files changed, 169 insertions(+)


I have to say the decode parts for SVE and A32 (using decodetree)
were much easier to review than the A64 part...


Indeed, this was painful enough to write that I'm on the verge of converting 
a64 to decodetree as well.



r~



[PATCH 4/4] sasl: remove comment about obsolete kerberos versions

2021-05-14 Thread Daniel P . Berrangé
This is not relevant to any OS distro that QEMU currently targets.

Signed-off-by: Daniel P. Berrangé 
---
 qemu.sasl | 4 
 1 file changed, 4 deletions(-)

diff --git a/qemu.sasl b/qemu.sasl
index abdfc686be..851acc7e8f 100644
--- a/qemu.sasl
+++ b/qemu.sasl
@@ -29,10 +29,6 @@ mech_list: gssapi
 # client.
 #mech_list: scram-sha-256 gssapi
 
-# Some older builds of MIT kerberos on Linux ignore this option &
-# instead need KRB5_KTNAME env var.
-# For modern Linux, and other OS, this should be sufficient
-#
 # This file needs to be populated with the service principal that
 # was created on the Kerberos v5 server. If switching to a non-gssapi
 # mechanism this can be commented out.
-- 
2.31.1




Re: [PATCH v2 08/12] tests/vm: convert centos VM recipe to CentOS 8

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:05 AM Daniel P. Berrangé  wrote:
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  tests/vm/centos | 17 -
>  1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/tests/vm/centos b/tests/vm/centos
> index efe3dbbb36..5c7bc1c1a9 100755
> --- a/tests/vm/centos
> +++ b/tests/vm/centos
> @@ -26,24 +26,23 @@ class CentosVM(basevm.BaseVM):
>  export SRC_ARCHIVE=/dev/vdb;
>  sudo chmod a+r $SRC_ARCHIVE;
>  tar -xf $SRC_ARCHIVE;
> -make docker-test-block@centos7 {verbose} J={jobs} NETWORK=1;
> -make docker-test-quick@centos7 {verbose} J={jobs} NETWORK=1;
> +make docker-test-block@centos8 {verbose} J={jobs} NETWORK=1;
> +make docker-test-quick@centos8 {verbose} J={jobs} NETWORK=1;
>  make docker-test-mingw@fedora  {verbose} J={jobs} NETWORK=1;
>  """
>
>  def build_image(self, img):
> -cimg = 
> self._download_with_cache("https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1802.qcow2.xz";)
> +cimg = 
> self._download_with_cache("https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.3.2011-20201204.2.x86_64.qcow2";)

I wonder why they didn't keep the compressed option for download.

Reviewed-by: Willian Rampazzo 




[PATCH 3/4] docs: recommend SCRAM-SHA-256 SASL mech instead of SHA-1 variant

2021-05-14 Thread Daniel P . Berrangé
The SHA-256 variant better meets modern security expectations.
Also warn that the password file is storing entries in clear
text.

Signed-off-by: Daniel P. Berrangé 
---
 docs/system/vnc-security.rst |  7 ---
 qemu.sasl| 11 ++-
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/docs/system/vnc-security.rst b/docs/system/vnc-security.rst
index 830f6acc73..4c1769eeb8 100644
--- a/docs/system/vnc-security.rst
+++ b/docs/system/vnc-security.rst
@@ -168,7 +168,7 @@ used is drastically reduced. In fact only the GSSAPI SASL 
mechanism
 provides an acceptable level of security by modern standards. Previous
 versions of QEMU referred to the DIGEST-MD5 mechanism, however, it has
 multiple serious flaws described in detail in RFC 6331 and thus should
-never be used any more. The SCRAM-SHA-1 mechanism provides a simple
+never be used any more. The SCRAM-SHA-256 mechanism provides a simple
 username/password auth facility similar to DIGEST-MD5, but does not
 support session encryption, so can only be used in combination with TLS.
 
@@ -191,11 +191,12 @@ reasonable configuration is
 
 ::
 
-   mech_list: scram-sha-1
+   mech_list: scram-sha-256
sasldb_path: /etc/qemu/passwd.db
 
 The ``saslpasswd2`` program can be used to populate the ``passwd.db``
-file with accounts.
+file with accounts. Note that the ``passwd.db`` file stores passwords
+in clear text.
 
 Other SASL configurations will be left as an exercise for the reader.
 Note that all mechanisms, except GSSAPI, should be combined with use of
diff --git a/qemu.sasl b/qemu.sasl
index fb8a92ba58..abdfc686be 100644
--- a/qemu.sasl
+++ b/qemu.sasl
@@ -19,15 +19,15 @@ mech_list: gssapi
 
 # If using TLS with VNC, or a UNIX socket only, it is possible to
 # enable plugins which don't provide session encryption. The
-# 'scram-sha-1' plugin allows plain username/password authentication
+# 'scram-sha-256' plugin allows plain username/password authentication
 # to be performed
 #
-#mech_list: scram-sha-1
+#mech_list: scram-sha-256
 
 # You can also list many mechanisms at once, and the VNC server will
 # negotiate which to use by considering the list enabled on the VNC
 # client.
-#mech_list: scram-sha-1 gssapi
+#mech_list: scram-sha-256 gssapi
 
 # Some older builds of MIT kerberos on Linux ignore this option &
 # instead need KRB5_KTNAME env var.
@@ -38,7 +38,8 @@ mech_list: gssapi
 # mechanism this can be commented out.
 keytab: /etc/qemu/krb5.tab
 
-# If using scram-sha-1 for username/passwds, then this is the file
+# If using scram-sha-256 for username/passwds, then this is the file
 # containing the passwds. Use 'saslpasswd2 -a qemu [username]'
-# to add entries, and 'sasldblistusers2 -f [sasldb_path]' to browse it
+# to add entries, and 'sasldblistusers2 -f [sasldb_path]' to browse it.
+# Note that this file stores passwords in clear text.
 #sasldb_path: /etc/qemu/passwd.db
-- 
2.31.1




[PULL 13/19] block/write-threshold: don't use write notifiers

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

write-notifiers are used only for write-threshold. New code for such
purpose should create filters.

Let's better special-case write-threshold and drop write notifiers at
all. (Actually, write-threshold is special-cased anyway, as the only
user of write-notifiers)

So, create a new direct interface for bdrv_co_write_req_prepare() and
drop all write-notifier related logic from write-threshold.c.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20210506090621.11848-2-vsement...@virtuozzo.com>
Reviewed-by: Eric Blake 
Reviewed-by: Stefan Hajnoczi 
[mreitz: Adjusted comment as per Eric's suggestion]
Signed-off-by: Max Reitz 
---
 include/block/block_int.h   |  1 -
 include/block/write-threshold.h |  9 +
 block/io.c  |  5 ++-
 block/write-threshold.c | 70 +++--
 4 files changed, 27 insertions(+), 58 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index 731ffedb27..aff948fb63 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -959,7 +959,6 @@ struct BlockDriverState {
 
 /* threshold limit for writes, in bytes. "High water mark". */
 uint64_t write_threshold_offset;
-NotifierWithReturn write_threshold_notifier;
 
 /* Writing to the list requires the BQL _and_ the dirty_bitmap_mutex.
  * Reading from the list can be done with either the BQL or the
diff --git a/include/block/write-threshold.h b/include/block/write-threshold.h
index c646f267a4..848a5dde85 100644
--- a/include/block/write-threshold.h
+++ b/include/block/write-threshold.h
@@ -59,4 +59,13 @@ bool bdrv_write_threshold_is_set(const BlockDriverState *bs);
 uint64_t bdrv_write_threshold_exceeded(const BlockDriverState *bs,
const BdrvTrackedRequest *req);
 
+/*
+ * bdrv_write_threshold_check_write
+ *
+ * Check whether the specified request exceeds the write threshold.
+ * If so, send a corresponding event and disable write threshold checking.
+ */
+void bdrv_write_threshold_check_write(BlockDriverState *bs, int64_t offset,
+  int64_t bytes);
+
 #endif
diff --git a/block/io.c b/block/io.c
index 35b6c56efc..3520de51bb 100644
--- a/block/io.c
+++ b/block/io.c
@@ -30,6 +30,7 @@
 #include "block/blockjob_int.h"
 #include "block/block_int.h"
 #include "block/coroutines.h"
+#include "block/write-threshold.h"
 #include "qemu/cutils.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
@@ -2008,8 +2009,8 @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t 
offset, int64_t bytes,
 } else {
 assert(child->perm & BLK_PERM_WRITE);
 }
-return notifier_with_return_list_notify(&bs->before_write_notifiers,
-req);
+bdrv_write_threshold_check_write(bs, offset, bytes);
+return 0;
 case BDRV_TRACKED_TRUNCATE:
 assert(child->perm & BLK_PERM_RESIZE);
 return 0;
diff --git a/block/write-threshold.c b/block/write-threshold.c
index 85b78dc2a9..71df3c434f 100644
--- a/block/write-threshold.c
+++ b/block/write-threshold.c
@@ -29,14 +29,6 @@ bool bdrv_write_threshold_is_set(const BlockDriverState *bs)
 return bs->write_threshold_offset > 0;
 }
 
-static void write_threshold_disable(BlockDriverState *bs)
-{
-if (bdrv_write_threshold_is_set(bs)) {
-notifier_with_return_remove(&bs->write_threshold_notifier);
-bs->write_threshold_offset = 0;
-}
-}
-
 uint64_t bdrv_write_threshold_exceeded(const BlockDriverState *bs,
const BdrvTrackedRequest *req)
 {
@@ -51,55 +43,9 @@ uint64_t bdrv_write_threshold_exceeded(const 
BlockDriverState *bs,
 return 0;
 }
 
-static int coroutine_fn before_write_notify(NotifierWithReturn *notifier,
-void *opaque)
-{
-BdrvTrackedRequest *req = opaque;
-BlockDriverState *bs = req->bs;
-uint64_t amount = 0;
-
-amount = bdrv_write_threshold_exceeded(bs, req);
-if (amount > 0) {
-qapi_event_send_block_write_threshold(
-bs->node_name,
-amount,
-bs->write_threshold_offset);
-
-/* autodisable to avoid flooding the monitor */
-write_threshold_disable(bs);
-}
-
-return 0; /* should always let other notifiers run */
-}
-
-static void write_threshold_register_notifier(BlockDriverState *bs)
-{
-bs->write_threshold_notifier.notify = before_write_notify;
-bdrv_add_before_write_notifier(bs, &bs->write_threshold_notifier);
-}
-
-static void write_threshold_update(BlockDriverState *bs,
-   int64_t threshold_bytes)
-{
-bs->write_threshold_offset = threshold_bytes;
-}
-
 void bdrv_write_threshold_set(BlockDriverState *bs, uint64_t threshold_bytes)
 {
-if (bdrv_write_threshold_is_set(bs)) {
-if (threshold_bytes > 0) {
-
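
A minimal Python sketch of the check-write semantics described above
(illustrative only; the BlockDriverState and event helpers here are
simplified stand-ins, not QEMU code):

class BlockDriverState:
    def __init__(self, node_name, threshold=0):
        self.node_name = node_name
        self.write_threshold_offset = threshold

def send_block_write_threshold_event(node, amount, offset):
    print(f"BLOCK_WRITE_THRESHOLD node={node} amount={amount} offset={offset}")

def write_threshold_check_write(bs, offset, nbytes):
    end = offset + nbytes
    if bs.write_threshold_offset and end > bs.write_threshold_offset:
        amount = end - bs.write_threshold_offset
        send_block_write_threshold_event(bs.node_name, amount,
                                         bs.write_threshold_offset)
        bs.write_threshold_offset = 0  # auto-disable to avoid event floods

bs = BlockDriverState("node0", threshold=4 * 1024 * 1024)
write_threshold_check_write(bs, 4 * 1024 * 1024 - 1024, 2 * 1024)
assert bs.write_threshold_offset == 0  # threshold fired and was disabled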

[PATCH 2/4] docs: document usage of the authorization framework

2021-05-14 Thread Daniel P . Berrangé
The authorization framework provides a way to control access to network
services after a client has been authenticated. This documents how to
actually use it.

Signed-off-by: Daniel P. Berrangé 
---
 docs/system/authz.rst | 263 ++
 docs/system/index.rst |   1 +
 2 files changed, 264 insertions(+)
 create mode 100644 docs/system/authz.rst

diff --git a/docs/system/authz.rst b/docs/system/authz.rst
new file mode 100644
index 00..2276546d23
--- /dev/null
+++ b/docs/system/authz.rst
@@ -0,0 +1,263 @@
+.. _client authorization:
+
+Client authorization
+
+
+When configuring a QEMU network backend with either TLS certificates or SASL
+authentication, access will be granted if the client successfully proves
+their identity. If the authorization identity database is scoped to the QEMU
+client this may be sufficient. It is common, however, for the identity database
+to be much broader and thus authentication alone does not enable sufficient
+access control. In this case QEMU provides a flexible system for enforcing
+finer grained authorization on clients post-authentication.
+
+Identity providers
+~~
+
+At the time of writing there are two authentication frameworks used by QEMU
+that emit an identity upon completion.
+
+ * TLS x509 certificate distinguished name.
+
+   When configuring the QEMU backend as a network server with TLS, there
+   is a choice of credentials to use. The most common scenario is to utilize
+   x509 certificates. The simplest configuration only involves issuing
+   certificates to the servers, allowing the client to avoid a MITM attack
+   against their intended server.
+
+   It is possible, however, to enable mutual verification by requiring that
+   the client provide a certificate to the server to prove its own identity.
+   This is done by setting the property ``verify-peer=yes`` on the
+   ``tls-creds-x509`` object, which is in fact the default.
+
+   When peer verification is enabled, the client will need to be issued with a
+   certificate by the same certificate authority as the server. If this is
+   still not sufficiently strong access control, the Distinguished Name of
+   the certificate can be used as an identity in the QEMU authorization
+   framework.
+
+ * SASL username.
+
+   When configuring the QEMU backend as a network server with SASL, upon
+   completion of the SASL authentication mechanism, a username will be
+   provided. The format of this username will vary depending on the choice
+   of mechanism configured for SASL. It might be a simple UNIX style user
+   ``joebloggs``, while if using Kerberos/GSSAPI it can have a realm
+   attached ``joeblo...@qemu.org``.  Whatever format the username is presented
+   in, it can be used with the QEMU authorization framework.
+
+Authorization drivers
+~
+
+The QEMU authorization framework is a general purpose design with a choice of
+user customizable drivers. These are provided as objects that can be
+created at startup using the ``-object`` argument, or at runtime using the
+``object_add`` monitor command.
+
+Simple
+^^
+
+This authorization driver provides a simple mechanism for granting access
+based on an exact match against a single identity. This is useful when it is
+known that only a single client is to be allowed access.
+
+A possible use case would be when configuring QEMU for an incoming live
+migration. It is known exactly which source QEMU the migration is expected
+to arrive from. The x509 certificate associated with this source QEMU would
+thus be used as the identity to match against. Alternatively if the virtual
+machine is dedicated to a specific tenant, then the VNC server would be
+configured with SASL and the username of only that tenant listed.
+
+To create an instance of this driver via QMP:
+
+::
+
+   {
+ "execute": "object-add",
+ "arguments": {
+   "qom-type": "authz-simple",
+   "id": "authz0",
+   "props": {
+ "identity": "fred"
+   }
+ }
+   }
+
+
+Or via the command line
+
+::
+
+   -object authz-simple,id=authz0,identity=fred
+
+
+List
+
+
+In some network backends it will be desirable to grant access to a range of
+clients. This authorization driver provides a list mechanism for granting
+access by matching identities against a list of permitted ones. Each match
+rule has an associated policy, and a catch-all policy applies if no rule
+matches. The match can either be done as an exact string comparison, or can
+use the shell-like glob syntax, which allows for use of wildcards.
+
+To create an instance of this class via QMP:
+
+::
+
+   {
+ "execute": "object-add",
+ "arguments": {
+   "qom-type": "authz-list",
+   "id": "authz0",
+   "props": {
+ "rules": [
+{ "match": "fred", "policy": "allow", "format": "exact" },
+{ "match": "bob", "policy": "allow", "format": "exact" },
+{ "match": "danb", "po

[PATCH 1/4] docs: document how to pass secret data to QEMU

2021-05-14 Thread Daniel P . Berrangé
Signed-off-by: Daniel P. Berrangé 
---
 docs/system/index.rst   |   1 +
 docs/system/secrets.rst | 162 
 2 files changed, 163 insertions(+)
 create mode 100644 docs/system/secrets.rst

diff --git a/docs/system/index.rst b/docs/system/index.rst
index b05af716a9..6aa2f8c05c 100644
--- a/docs/system/index.rst
+++ b/docs/system/index.rst
@@ -30,6 +30,7 @@ Contents:
guest-loader
vnc-security
tls
+   secrets
gdb
managed-startup
cpu-hotplug
diff --git a/docs/system/secrets.rst b/docs/system/secrets.rst
new file mode 100644
index 00..4a177369b6
--- /dev/null
+++ b/docs/system/secrets.rst
@@ -0,0 +1,162 @@
+.. _secret data:
+
+Providing secret data to QEMU
+-
+
+There are a variety of objects in QEMU which require secret data to be provided
+by the administrator or management application. For example, network block
+devices often require a password, LUKS block devices require a passphrase to
+unlock key material, remote desktop services require an access password.
+QEMU has a general purpose mechanism for providing secret data to QEMU in a
+secure manner, using the ``secret`` object type.
+
+At startup this can be done using the ``-object secret,...`` command line
+argument. At runtime this can be done using the ``object_add`` QMP / HMP
+monitor commands. The examples that follow will illustrate use of ``-object``
+command lines, but they all apply equivalently in QMP / HMP. When creating
+a ``secret`` object it must be given a unique ID string. This ID is then
+used to identify the object when configuring the thing which needs the data.
+
+
+INSECURE: Passing secrets as clear text inline
+~~
+
+**The following should never be done in a production environment or on a
+multi-user host. Command line arguments are usually visible in the process
+listings and are often collected in log files by system monitoring agents
+or bug reporting tools. QMP/HMP commands and their arguments are also often
+logged and attached to bug reports. This all risks compromising secrets that
+are passed inline.**
+
+For the convenience of people debugging / developing with QEMU, it is possible
+to pass secret data inline on the command line.
+
+::
+
+   -object secret,id=secvnc0,data=87539319
+
+
+It is also possible to provide the data in base64 encoded format, which is
+particularly useful if the data contains binary characters that would clash
+with argument parsing.
+
+::
+
+   -object secret,id=secvnc0,data=ODc1MzkzMTk=,format=base64
+
+
+**Note: base64 encoding does not provide any security benefit.**
+
+Passing secrets as clear text via a file
+
+
+The simplest approach to providing data securely is to use a file to store
+the secret:
+
+::
+
+   -object secret,id=secvnc0,file=vnc-password.txt
+
+
+In this example the file ``vnc-password.txt`` contains the plain text secret
+data. It is important to note that the contents of the file are treated as an
+opaque blob. The entire raw file contents are used as the value, thus it is
+important not to mistakenly add any trailing newline character in the file if
+this newline is not intended to be part of the secret data.
+
+In some cases it might be more convenient to pass the secret data in base64
+format and have QEMU decode to get the raw bytes before use:
+
+::
+
+   -object secret,id=sec0,file=vnc-password.txt,format=base64
+
+
+The file should generally be given mode ``0600`` or ``0400`` permissions, and
+have its user/group ownership set to the same account that the QEMU process
+will be launched under. If using mandatory access control such as SELinux, then
+the file should be labelled to only grant access to the specific QEMU process
+that needs access. This will prevent other processes/users from compromising the
+secret data.
+
+
+Passing secrets as cipher text inline
+~
+
+To address the insecurity of passing secrets inline as clear text, it is
+possible to configure a second secret as an AES key to use for decrypting
+the data.
+
+The secret used as the AES key must always be configured using the file based
+storage mechanism:
+
+::
+
+   -object secret,id=secmaster,file=masterkey.data,format=base64
+
+
+In this case the ``masterkey.data`` file would be initialized with 32
+cryptographically secure random bytes, which are then base64 encoded.
+The contents of this file will be used as an AES-256 key to encrypt the
+real secret that can now be safely passed to QEMU inline as cipher text:
+
+::
+
+   -object secret,id=secvnc0,keyid=secmaster,data=BASE64-CIPHERTEXT,iv=BASE64-IV,format=base64
+
+
+In this example ``BASE64-CIPHERTEXT`` is the result of AES-256-CBC encrypting
+the secret with ``masterkey.data`` and then base64 encoding the ciphertext.
+The ``BASE64-IV`` data is 16 random bytes which have been base64 encoded.
+These bytes are used as the initialization vector.
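
For illustration, a matching BASE64-CIPHERTEXT / BASE64-IV pair could be
produced along these lines with the Python "cryptography" package. This is a
sketch only; the block-size padding convention shown is an assumption, so
defer to the final version of this document for the authoritative procedure:

import base64, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

master_key = os.urandom(32)   # raw bytes behind masterkey.data
iv = os.urandom(16)           # becomes BASE64-IV once encoded

secret = b"87539319"
pad = 16 - len(secret) % 16   # assumed: pad up to the AES block size
padded = secret + bytes([pad]) * pad

enc = Cipher(algorithms.AES(master_key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(padded) + enc.finalize()

print("masterkey.data   :", base64.b64encode(master_key).decode())
print("BASE64-IV        :", base64.b64encode(iv).decode())
print("BASE64-CIPHERTEXT:", base64.b64encode(ciphertext).decode())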

[PULL 10/19] Document qemu-img options data_file and data_file_raw

2021-05-14 Thread Max Reitz
From: Connor Kuehl 

The contents of this patch were initially developed and posted by Han
Han[1]; however, it appears the original patch was not applied. Since
then, the relevant documentation has been moved and adapted to a new
format.

I've taken most of the original wording and tweaked it according to
some of the feedback from the original patch submission. I've also
adapted it to restructured text, which is the format the documentation
currently uses.

[1] https://lists.nongnu.org/archive/html/qemu-block/2019-10/msg01253.html

Fixes: https://bugzilla.redhat.com/1763105
Signed-off-by: Han Han 
Suggested-by: Max Reitz 
[ Max: provided description of data_file_raw behavior ]
Signed-off-by: Connor Kuehl 
Message-Id: <20210505195512.391128-1-cku...@redhat.com>
Signed-off-by: Max Reitz 
---
 docs/tools/qemu-img.rst | 31 +++
 1 file changed, 31 insertions(+)

diff --git a/docs/tools/qemu-img.rst b/docs/tools/qemu-img.rst
index c9efcfaefc..cfe1147879 100644
--- a/docs/tools/qemu-img.rst
+++ b/docs/tools/qemu-img.rst
@@ -866,6 +866,37 @@ Supported image file formats:
 issue ``lsattr filename`` to check if the NOCOW flag is set or not
 (Capital 'C' is NOCOW flag).
 
+  ``data_file``
+Filename where all guest data will be stored. If this option is used,
+the qcow2 file will only contain the image's metadata.
+
+Note: Data loss will occur if the given filename already exists when
+using this option with ``qemu-img create`` since ``qemu-img`` will create
+the data file anew, overwriting the file's original contents. To simply
+update the reference to point to the given pre-existing file, use
+``qemu-img amend``.
+
+  ``data_file_raw``
+If this option is set to ``on``, QEMU will always keep the external data
+file consistent as a standalone read-only raw image.
+
+It does this by forwarding all write accesses to the qcow2 file through to
+the raw data file, including their offsets. Therefore, data that is visible
+on the qcow2 node (i.e., to the guest) at some offset is visible at the same
+offset in the raw data file. This results in a read-only raw image. Writes
+that bypass the qcow2 metadata may corrupt the qcow2 metadata because the
+out-of-band writes may result in the metadata falling out of sync with the
+raw image.
+
+If this option is ``off``, QEMU will use the data file to store data in an
+arbitrary manner. The file’s content will not make sense without the
+accompanying qcow2 metadata. Where data is written will have no relation to
+its offset as seen by the guest, and some writes (specifically zero writes)
+may not be forwarded to the data file at all, but will only be handled by
+modifying qcow2 metadata.
+
+This option can only be enabled if ``data_file`` is set.
+
 ``Other``
 
   QEMU also supports various other image file formats for
-- 
2.31.1
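
To see the two options in action, an image with an external data file can be
created and inspected. A minimal sketch driving qemu-img from Python,
assuming qemu-img is on PATH and, per the data-loss note above, that
disk.raw does not already exist:

import subprocess

# Create a 1G qcow2 image whose guest data lives in disk.raw, kept
# consistent as a standalone raw image via data_file_raw=on.
subprocess.run([
    "qemu-img", "create", "-f", "qcow2",
    "-o", "data_file=disk.raw,data_file_raw=on",
    "disk.qcow2", "1G",
], check=True)
subprocess.run(["qemu-img", "info", "disk.qcow2"], check=True)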




Re: [PATCH v2 03/12] crypto: bump min nettle to 3.4, dropping RHEL-7 support

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:04 AM Daniel P. Berrangé  wrote:
>
> It has been over two years since RHEL-8 was released, and thus per the
> platform build policy, we no longer need to support RHEL-7 as a build
> target. This lets us increment the minimum required nettle version and
> drop a lot of backwards compatibility code for 2.x series of nettle.
>
> Per repology, current shipping versions are:
>
>  RHEL-8: 3.4.1
>   Debian Buster: 3.4.1
>  openSUSE Leap 15.2: 3.4.1
>Ubuntu LTS 18.04: 3.4
>Ubuntu LTS 20.04: 3.5.1
> FreeBSD: 3.7.2
>   Fedora 33: 3.5.1
>   Fedora 34: 3.7.2
> OpenBSD: 3.7.2
>  macOS HomeBrew: 3.7.2
>
> Ubuntu LTS 18.04 has the oldest version and so 3.4 is the new minimum.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  .gitlab-ci.yml | 10 --
>  configure  |  4 +---
>  crypto/cipher-nettle.c.inc | 31 ---
>  crypto/hash-nettle.c   |  4 
>  crypto/hmac-nettle.c   |  4 
>  5 files changed, 1 insertion(+), 52 deletions(-)
>

Reviewed-by: Willian Rampazzo 




[PATCH 0/4] docs: add user facing docs for secret passing and authorization controls

2021-05-14 Thread Daniel P . Berrangé
These are an important part of the overall QEMU network backend security
controls but were never previously documented aside from in blog posts.

Daniel P. Berrangé (4):
  docs: document how to pass secret data to QEMU
  docs: document usage of the authorization framework
  docs: recommend SCRAM-SHA-256 SASL mech instead of SHA-1 variant
  sasl: remove comment about obsolete kerberos versions

 docs/system/authz.rst| 263 +++
 docs/system/index.rst|   2 +
 docs/system/secrets.rst  | 162 +
 docs/system/vnc-security.rst |   7 +-
 qemu.sasl|  15 +-
 5 files changed, 437 insertions(+), 12 deletions(-)
 create mode 100644 docs/system/authz.rst
 create mode 100644 docs/system/secrets.rst

-- 
2.31.1





[PULL 08/19] qemu-iotests: let "check" spawn an arbitrary test command

2021-05-14 Thread Max Reitz
From: Paolo Bonzini 

Right now there is no easy way for "check" to print a reproducer command.
Because such a reproducer command line would be huge, we can instead teach
check to start a command of our choice.  This can be for example a Python
unit test with arguments to only run a specific subtest.

Move the trailing empty line to print_env(), since it always looks better
and one caller was not adding it.

Signed-off-by: Paolo Bonzini 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Tested-by: Emanuele Giuseppe Esposito 
Message-Id: <20210323181928.311862-5-pbonz...@redhat.com>
Message-Id: <20210503110110.476887-5-pbonz...@redhat.com>
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/check | 19 ++-
 tests/qemu-iotests/testenv.py|  3 ++-
 tests/qemu-iotests/testrunner.py |  1 -
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/tests/qemu-iotests/check b/tests/qemu-iotests/check
index 08f51366f1..2dd529eb75 100755
--- a/tests/qemu-iotests/check
+++ b/tests/qemu-iotests/check
@@ -19,6 +19,9 @@
 import os
 import sys
 import argparse
+import shutil
+from pathlib import Path
+
 from findtests import TestFinder
 from testenv import TestEnv
 from testrunner import TestRunner
@@ -100,7 +103,7 @@ def make_argparser() -> argparse.ArgumentParser:
'rerun failed ./check command, starting from the '
'middle of the process.')
 g_sel.add_argument('tests', metavar='TEST_FILES', nargs='*',
-   help='tests to run')
+   help='tests to run, or "--" followed by a command')
 
 return p
 
@@ -113,6 +116,20 @@ if __name__ == '__main__':
   imgopts=args.imgopts, misalign=args.misalign,
   debug=args.debug, valgrind=args.valgrind)
 
+if len(sys.argv) > 1 and sys.argv[-len(args.tests)-1] == '--':
+if not args.tests:
+sys.exit("missing command after '--'")
+cmd = args.tests
+env.print_env()
+exec_pathstr = shutil.which(cmd[0])
+if exec_pathstr is None:
+sys.exit('command not found: ' + cmd[0])
+exec_path = Path(exec_pathstr).resolve()
+cmd[0] = str(exec_path)
+full_env = env.prepare_subprocess(cmd)
+os.chdir(exec_path.parent)
+os.execve(cmd[0], cmd, full_env)
+
 testfinder = TestFinder(test_dir=env.source_iotests)
 
 groups = args.groups.split(',') if args.groups else None
diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index fca3a609e0..cd0e39b789 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -284,7 +284,8 @@ def print_env(self) -> None:
 PLATFORM  -- {platform}
 TEST_DIR  -- {TEST_DIR}
 SOCK_DIR  -- {SOCK_DIR}
-SOCKET_SCM_HELPER -- {SOCKET_SCM_HELPER}"""
+SOCKET_SCM_HELPER -- {SOCKET_SCM_HELPER}
+"""
 
 args = collections.defaultdict(str, self.get_env())
 
diff --git a/tests/qemu-iotests/testrunner.py b/tests/qemu-iotests/testrunner.py
index 519924dc81..2f56ac545d 100644
--- a/tests/qemu-iotests/testrunner.py
+++ b/tests/qemu-iotests/testrunner.py
@@ -316,7 +316,6 @@ def run_tests(self, tests: List[str]) -> bool:
 
 if not self.makecheck:
 self.env.print_env()
-print()
 
 test_field_width = max(len(os.path.basename(t)) for t in tests) + 2
 
-- 
2.31.1




Re: [PATCH v3 06/33] util/async: aio_co_schedule(): support reschedule in same ctx

2021-05-14 Thread Roman Kagan
On Thu, May 13, 2021 at 11:04:37PM +0200, Paolo Bonzini wrote:
> On 12/05/21 09:15, Vladimir Sementsov-Ogievskiy wrote:
> > > > 
> > > 
> > > I don't understand.  Why doesn't aio_co_enter go through the ctx !=
> > > qemu_get_current_aio_context() branch and just do aio_co_schedule?
> > > That was at least the idea behind aio_co_wake and aio_co_enter.
> > > 
> > 
> > Because ctx is exactly qemu_get_current_aio_context(), as we are not in
> > iothread but in nbd connection thread. So,
> > qemu_get_current_aio_context() returns qemu_aio_context.
> 
> So the problem is that threads other than the main thread and
> the I/O thread should not return qemu_aio_context.  The vCPU thread
> may need to return it when running with BQL taken, though.

I'm not sure this is the only case.

AFAICS your patch has basically the same effect as Vladimir's
patch "util/async: aio_co_enter(): do aio_co_schedule in general case"
(https://lore.kernel.org/qemu-devel/20210408140827.332915-4-vsement...@virtuozzo.com/).
That one was found to break e.g. aio=threads cases.  I guessed it
implicitly relied upon aio_co_enter() acquiring the aio_context but we
didn't dig further to pinpoint the exact scenario.

Roman.

> Something like this (untested):
> 
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 5f342267d5..10fcae1515 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -691,10 +691,13 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
>   * Return the AioContext whose event loop runs in the current thread.
>   *
>   * If called from an IOThread this will be the IOThread's AioContext.  If
> - * called from another thread it will be the main loop AioContext.
> + * called from the main thread or with the "big QEMU lock" taken it
> + * will be the main loop AioContext.
>   */
>  AioContext *qemu_get_current_aio_context(void);
> +void qemu_set_current_aio_context(AioContext *ctx);
> +
>  /**
>   * aio_context_setup:
>   * @ctx: the aio context
> diff --git a/iothread.c b/iothread.c
> index 7f086387be..22b967e77c 100644
> --- a/iothread.c
> +++ b/iothread.c
> @@ -39,11 +39,23 @@ DECLARE_CLASS_CHECKERS(IOThreadClass, IOTHREAD,
>  #define IOTHREAD_POLL_MAX_NS_DEFAULT 0ULL
>  #endif
> -static __thread IOThread *my_iothread;
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +assert(!my_aiocontext);
> +my_aiocontext = ctx;
> +}
>  AioContext *qemu_get_current_aio_context(void)
>  {
> -return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
> +if (my_aiocontext) {
> +return my_aiocontext;
> +}
> +if (qemu_mutex_iothread_locked()) {
> +return qemu_get_aio_context();
> +}
> +return NULL;
>  }
>  static void *iothread_run(void *opaque)
> @@ -56,7 +68,7 @@ static void *iothread_run(void *opaque)
>   * in this new thread uses glib.
>   */
>  g_main_context_push_thread_default(iothread->worker_context);
> -my_iothread = iothread;
> +qemu_set_current_aio_context(iothread->ctx);
>  iothread->thread_id = qemu_get_thread_id();
>  qemu_sem_post(&iothread->init_done_sem);
> diff --git a/stubs/iothread.c b/stubs/iothread.c
> index 8cc9e28c55..25ff398894 100644
> --- a/stubs/iothread.c
> +++ b/stubs/iothread.c
> @@ -6,3 +6,7 @@ AioContext *qemu_get_current_aio_context(void)
>  {
>  return qemu_get_aio_context();
>  }
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +}
> diff --git a/tests/unit/iothread.c b/tests/unit/iothread.c
> index afde12b4ef..cab38b3da8 100644
> --- a/tests/unit/iothread.c
> +++ b/tests/unit/iothread.c
> @@ -30,13 +30,26 @@ struct IOThread {
>  bool stopping;
>  };
> -static __thread IOThread *my_iothread;
> +static __thread AioContext *my_aiocontext;
> +
> +void qemu_set_current_aio_context(AioContext *ctx)
> +{
> +assert(!my_aiocontext);
> +my_aiocontext = ctx;
> +}
>  AioContext *qemu_get_current_aio_context(void)
>  {
> -return my_iothread ? my_iothread->ctx : qemu_get_aio_context();
> +if (my_aiocontext) {
> +return my_aiocontext;
> +}
> +if (qemu_mutex_iothread_locked()) {
> +return qemu_get_aio_context();
> +}
> +return NULL;
>  }
> +
>  static void iothread_init_gcontext(IOThread *iothread)
>  {
>  GSource *source;
> @@ -54,7 +67,7 @@ static void *iothread_run(void *opaque)
>  rcu_register_thread();
> -my_iothread = iothread;
> +qemu_set_current_aio_context(iothread->ctx);
>  qemu_mutex_lock(&iothread->init_done_lock);
>  iothread->ctx = aio_context_new(&error_abort);
> diff --git a/util/main-loop.c b/util/main-loop.c
> index d9c55df6f5..4ae5b23e99 100644
> --- a/util/main-loop.c
> +++ b/util/main-loop.c
> @@ -170,6 +170,7 @@ int qemu_init_main_loop(Error **errp)
>  if (!qemu_aio_context) {
>  return -EMFILE;
>  }
> +qemu_set_current_aio_context(qemu_aio_context);
>  qemu_notify_bh = qemu_bh_new(notify_event_cb, NULL);
> 
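
The lookup order proposed here (thread-local context first, then the
main-loop context only while the BQL is held, NULL otherwise) can be
restated as a small Python sketch; this illustrates the untested patch
above, not current QEMU behavior:

import threading

_tls = threading.local()
MAIN_LOOP_CTX = object()  # stand-in for qemu_aio_context

def qemu_set_current_aio_context(ctx):
    # Each thread may register its context exactly once.
    assert getattr(_tls, "ctx", None) is None
    _tls.ctx = ctx

def qemu_get_current_aio_context(bql_held):
    # IOThreads and the main loop register explicitly; any other thread
    # only sees the main context while it holds the BQL, else nothing.
    ctx = getattr(_tls, "ctx", None)
    if ctx is not None:
        return ctx
    return MAIN_LOOP_CTX if bql_held else None

qemu_set_current_aio_context(MAIN_LOOP_CTX)
assert qemu_get_current_aio_context(bql_held=True) is MAIN_LOOP_CTX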

[PULL 19/19] write-threshold: deal with includes

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

"qemu/typedefs.h" is enough for include/block/write-threshold.h header
with forward declaration of BlockDriverState. Also drop extra includes
from block/write-threshold.c and tests/unit/test-write-threshold.c

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-Id: <20210506090621.11848-9-vsement...@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Max Reitz 
---
 include/block/write-threshold.h   | 2 +-
 block/write-threshold.c   | 2 --
 tests/unit/test-write-threshold.c | 1 -
 3 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/block/write-threshold.h b/include/block/write-threshold.h
index a03ee1cacd..f50f923e7e 100644
--- a/include/block/write-threshold.h
+++ b/include/block/write-threshold.h
@@ -13,7 +13,7 @@
 #ifndef BLOCK_WRITE_THRESHOLD_H
 #define BLOCK_WRITE_THRESHOLD_H
 
-#include "block/block_int.h"
+#include "qemu/typedefs.h"
 
 /*
  * bdrv_write_threshold_set:
diff --git a/block/write-threshold.c b/block/write-threshold.c
index 65a6acd142..35cafbc22d 100644
--- a/block/write-threshold.c
+++ b/block/write-threshold.c
@@ -12,9 +12,7 @@
 
 #include "qemu/osdep.h"
 #include "block/block_int.h"
-#include "qemu/coroutine.h"
 #include "block/write-threshold.h"
-#include "qemu/notify.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block-core.h"
 #include "qapi/qapi-events-block-core.h"
diff --git a/tests/unit/test-write-threshold.c b/tests/unit/test-write-threshold.c
index 49b1ef7a20..0158e4637a 100644
--- a/tests/unit/test-write-threshold.c
+++ b/tests/unit/test-write-threshold.c
@@ -7,7 +7,6 @@
  */
 
 #include "qemu/osdep.h"
-#include "qapi/error.h"
 #include "block/block_int.h"
 #include "block/write-threshold.h"
 
-- 
2.31.1




[PULL 09/19] qemu-iotests: fix case of SOCK_DIR already in the environment

2021-05-14 Thread Max Reitz
From: Paolo Bonzini 

Due to a typo, in this case the SOCK_DIR was not being created.

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Paolo Bonzini 
Tested-by: Emanuele Giuseppe Esposito 
Message-Id: <20210323181928.311862-6-pbonz...@redhat.com>
Message-Id: <20210503110110.476887-6-pbonz...@redhat.com>
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/testenv.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index cd0e39b789..0c3fe75636 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -120,7 +120,7 @@ def init_directories(self) -> None:
 try:
 self.sock_dir = os.environ['SOCK_DIR']
 self.tmp_sock_dir = False
-Path(self.test_dir).mkdir(parents=True, exist_ok=True)
+Path(self.sock_dir).mkdir(parents=True, exist_ok=True)
 except KeyError:
 self.sock_dir = tempfile.mkdtemp()
 self.tmp_sock_dir = True
-- 
2.31.1




[PULL 17/19] test-write-threshold: drop extra tests

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

Testing set/get of one 64-bit variable doesn't seem necessary. We have a
lot of such variables. Also, the remaining tests exercise set/get anyway.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20210506090621.11848-7-vsement...@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Max Reitz 
---
 tests/unit/test-write-threshold.c | 43 ---
 1 file changed, 43 deletions(-)

diff --git a/tests/unit/test-write-threshold.c b/tests/unit/test-write-threshold.c
index bb5c1a5217..9e9986aefc 100644
--- a/tests/unit/test-write-threshold.c
+++ b/tests/unit/test-write-threshold.c
@@ -12,43 +12,6 @@
 #include "block/write-threshold.h"
 
 
-static void test_threshold_not_set_on_init(void)
-{
-uint64_t res;
-BlockDriverState bs;
-memset(&bs, 0, sizeof(bs));
-
-res = bdrv_write_threshold_get(&bs);
-g_assert_cmpint(res, ==, 0);
-}
-
-static void test_threshold_set_get(void)
-{
-uint64_t threshold = 4 * 1024 * 1024;
-uint64_t res;
-BlockDriverState bs;
-memset(&bs, 0, sizeof(bs));
-
-bdrv_write_threshold_set(&bs, threshold);
-
-res = bdrv_write_threshold_get(&bs);
-g_assert_cmpint(res, ==, threshold);
-}
-
-static void test_threshold_multi_set_get(void)
-{
-uint64_t threshold1 = 4 * 1024 * 1024;
-uint64_t threshold2 = 15 * 1024 * 1024;
-uint64_t res;
-BlockDriverState bs;
-memset(&bs, 0, sizeof(bs));
-
-bdrv_write_threshold_set(&bs, threshold1);
-bdrv_write_threshold_set(&bs, threshold2);
-res = bdrv_write_threshold_get(&bs);
-g_assert_cmpint(res, ==, threshold2);
-}
-
 static void test_threshold_not_trigger(void)
 {
 uint64_t threshold = 4 * 1024 * 1024;
@@ -84,12 +47,6 @@ int main(int argc, char **argv)
 {
 size_t i;
 TestStruct tests[] = {
-{ "/write-threshold/not-set-on-init",
-  test_threshold_not_set_on_init },
-{ "/write-threshold/set-get",
-  test_threshold_set_get },
-{ "/write-threshold/multi-set-get",
-  test_threshold_multi_set_get },
 { "/write-threshold/not-trigger",
   test_threshold_not_trigger },
 { "/write-threshold/trigger",
-- 
2.31.1




[PULL 05/19] qemu-iotests: do not buffer the test output

2021-05-14 Thread Max Reitz
From: Paolo Bonzini 

Instead of buffering the test output into a StringIO, patch it on
the fly by wrapping sys.stdout's write method.  This can be
done unconditionally, even if using -d, which makes execute_unittest
a bit simpler.

Signed-off-by: Paolo Bonzini 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Tested-by: Emanuele Giuseppe Esposito 
Message-Id: <20210323181928.311862-2-pbonz...@redhat.com>
Message-Id: <20210503110110.476887-2-pbonz...@redhat.com>
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/240.out|  8 ++--
 tests/qemu-iotests/245.out|  8 ++--
 tests/qemu-iotests/295.out|  6 +--
 tests/qemu-iotests/296.out|  8 ++--
 tests/qemu-iotests/iotests.py | 70 ---
 5 files changed, 56 insertions(+), 44 deletions(-)

diff --git a/tests/qemu-iotests/240.out b/tests/qemu-iotests/240.out
index e0982831ae..89ed25e506 100644
--- a/tests/qemu-iotests/240.out
+++ b/tests/qemu-iotests/240.out
@@ -15,7 +15,7 @@
 {"return": {}}
 {"execute": "blockdev-del", "arguments": {"node-name": "hd0"}}
 {"return": {}}
-==Attach two SCSI disks using the same block device and the same iothread==
+.==Attach two SCSI disks using the same block device and the same iothread==
 {"execute": "blockdev-add", "arguments": {"driver": "null-co", "node-name": 
"hd0", "read-only": true, "read-zeroes": true}}
 {"return": {}}
 {"execute": "object-add", "arguments": {"id": "iothread0", "qom-type": 
"iothread"}}
@@ -32,7 +32,7 @@
 {"return": {}}
 {"execute": "blockdev-del", "arguments": {"node-name": "hd0"}}
 {"return": {}}
-==Attach two SCSI disks using the same block device but different iothreads==
+.==Attach two SCSI disks using the same block device but different iothreads==
 {"execute": "blockdev-add", "arguments": {"driver": "null-co", "node-name": 
"hd0", "read-only": true, "read-zeroes": true}}
 {"return": {}}
 {"execute": "object-add", "arguments": {"id": "iothread0", "qom-type": 
"iothread"}}
@@ -55,7 +55,7 @@
 {"return": {}}
 {"execute": "blockdev-del", "arguments": {"node-name": "hd0"}}
 {"return": {}}
-==Attach a SCSI disks using the same block device as a NBD server==
+.==Attach a SCSI disks using the same block device as a NBD server==
 {"execute": "blockdev-add", "arguments": {"driver": "null-co", "node-name": 
"hd0", "read-only": true, "read-zeroes": true}}
 {"return": {}}
 {"execute": "nbd-server-start", "arguments": {"addr": {"data": {"path": 
"SOCK_DIR/PID-nbd.sock"}, "type": "unix"}}}
@@ -68,7 +68,7 @@
 {"return": {}}
 {"execute": "device_add", "arguments": {"drive": "hd0", "driver": "scsi-hd", 
"id": "scsi-hd0"}}
 {"return": {}}
-
+.
 --
 Ran 4 tests
 
diff --git a/tests/qemu-iotests/245.out b/tests/qemu-iotests/245.out
index 4b33dcaf5c..99c12f4f98 100644
--- a/tests/qemu-iotests/245.out
+++ b/tests/qemu-iotests/245.out
@@ -1,16 +1,16 @@
-{"execute": "job-finalize", "arguments": {"id": "commit0"}}
+..{"execute": "job-finalize", "arguments": {"id": "commit0"}}
 {"return": {}}
 {"data": {"id": "commit0", "type": "commit"}, "event": "BLOCK_JOB_PENDING", 
"timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "commit0", "len": 3145728, "offset": 3145728, "speed": 0, 
"type": "commit"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
-{"execute": "job-finalize", "arguments": {"id": "stream0"}}
+...{"execute": "job-finalize", "arguments": {"id": "stream0"}}
 {"return": {}}
 {"data": {"id": "stream0", "type": "stream"}, "event": "BLOCK_JOB_PENDING", 
"timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "stream0", "len": 3145728, "offset": 3145728, "speed": 0, 
"type": "stream"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
-{"execute": "job-finalize", "arguments": {"id": "stream0"}}
+.{"execute": "job-finalize", "arguments": {"id": "stream0"}}
 {"return": {}}
 {"data": {"id": "stream0", "type": "stream"}, "event": "BLOCK_JOB_PENDING", 
"timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
 {"data": {"device": "stream0", "len": 3145728, "offset": 3145728, "speed": 0, 
"type": "stream"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
-.
+...
 --
 Ran 21 tests
 
diff --git a/tests/qemu-iotests/295.out b/tests/qemu-iotests/295.out
index ad34b2ca2c..5ff91f116c 100644
--- a/tests/qemu-iotests/295.out
+++ b/tests/qemu-iotests/295.out
@@ -4,7 +4,7 @@
 {"return": {}}
 {"execute": "job-dismiss", "arguments": {"id": "job_erase_key"}}
 {"return": {}}
-{"execute": "job-dismiss", "arguments": {"id": "job_add_key"}}
+.{"execute": "job-dismiss", "arguments": {"id": "job_add_key"}}
 {"return": {}}
 {"execute": "job-dismiss", "arguments": {"id": "job_erase_key"}}
 {"return": {}}
@@ -13,7 +13,7 @@ Job failed: Invalid password, cannot unlock
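
The iotests.py side of the change is not visible in the quoted hunks. One
way to patch a stream on the fly can be sketched in Python; this is a
hypothetical shape, and the transform shown (normalizing line endings) is
only an example, not the exact filter the patch installs:

import sys

class StreamWrapper:
    """Forward everything to the real stream, filtering write() chunks."""
    def __init__(self, stream, transform):
        self._stream = stream
        self._transform = transform

    def write(self, s):
        return self._stream.write(self._transform(s))

    def __getattr__(self, name):
        # flush(), fileno(), encoding, ... fall through untouched
        return getattr(self._stream, name)

sys.stdout = StreamWrapper(sys.stdout, lambda s: s.replace('\r\n', '\n'))
print("patched on the fly")  # now routed through the wrapper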

[PULL 15/19] test-write-threshold: rewrite test_threshold_(not_)trigger tests

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

These tests use the bdrv_write_threshold_exceeded() API, which is used only
by tests (since the pre-previous commit). Better is to test the real API,
which is used in block.c as well.

So, let's call bdrv_write_threshold_check_write(), and check whether
bs->write_threshold_offset is cleared or not (it is cleared iff the
threshold triggered).

Also we get rid of BdrvTrackedRequest use here. Note that the paranoid
bdrv_check_request() calls were added in 8b1170012b1 to protect
BdrvTrackedRequest. Drop them now.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20210506090621.11848-4-vsement...@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Max Reitz 
---
 tests/unit/test-write-threshold.c | 22 --
 1 file changed, 4 insertions(+), 18 deletions(-)

diff --git a/tests/unit/test-write-threshold.c b/tests/unit/test-write-threshold.c
index fc1c45a2eb..fd40a815b8 100644
--- a/tests/unit/test-write-threshold.c
+++ b/tests/unit/test-write-threshold.c
@@ -55,41 +55,27 @@ static void test_threshold_multi_set_get(void)
 
 static void test_threshold_not_trigger(void)
 {
-uint64_t amount = 0;
 uint64_t threshold = 4 * 1024 * 1024;
 BlockDriverState bs;
-BdrvTrackedRequest req;
 
 memset(&bs, 0, sizeof(bs));
-memset(&req, 0, sizeof(req));
-req.offset = 1024;
-req.bytes = 1024;
-
-bdrv_check_request(req.offset, req.bytes, &error_abort);
 
 bdrv_write_threshold_set(&bs, threshold);
-amount = bdrv_write_threshold_exceeded(&bs, &req);
-g_assert_cmpuint(amount, ==, 0);
+bdrv_write_threshold_check_write(&bs, 1024, 1024);
+g_assert_cmpuint(bdrv_write_threshold_get(&bs), ==, threshold);
 }
 
 
 static void test_threshold_trigger(void)
 {
-uint64_t amount = 0;
 uint64_t threshold = 4 * 1024 * 1024;
 BlockDriverState bs;
-BdrvTrackedRequest req;
 
 memset(&bs, 0, sizeof(bs));
-memset(&req, 0, sizeof(req));
-req.offset = (4 * 1024 * 1024) - 1024;
-req.bytes = 2 * 1024;
-
-bdrv_check_request(req.offset, req.bytes, &error_abort);
 
 bdrv_write_threshold_set(&bs, threshold);
-amount = bdrv_write_threshold_exceeded(&bs, &req);
-g_assert_cmpuint(amount, >=, 1024);
+bdrv_write_threshold_check_write(&bs, threshold - 1024, 2 * 1024);
+g_assert_cmpuint(bdrv_write_threshold_get(&bs), ==, 0);
 }
 
 typedef struct TestStruct {
-- 
2.31.1




[PULL 07/14] tests/qtest: add multi-queue test case to vhost-user-blk-test

2021-05-14 Thread Kevin Wolf
From: Stefan Hajnoczi 

Signed-off-by: Stefan Hajnoczi 
Message-Id: <20210309094106.196911-4-stefa...@redhat.com>
Signed-off-by: Kevin Wolf 
Message-Id: <20210322092327.150720-3-stefa...@redhat.com>
Signed-off-by: Kevin Wolf 
---
 tests/qtest/vhost-user-blk-test.c | 81 +--
 1 file changed, 76 insertions(+), 5 deletions(-)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 3e79549899..d37e1c30bd 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -569,6 +569,67 @@ static void pci_hotplug(void *obj, void *data, QGuestAllocator *t_alloc)
 qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
 }
 
+static void multiqueue(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+QVirtioPCIDevice *pdev1 = obj;
+QVirtioDevice *dev1 = &pdev1->vdev;
+QVirtioPCIDevice *pdev8;
+QVirtioDevice *dev8;
+QTestState *qts = pdev1->pdev->bus->qts;
+uint64_t features;
+uint16_t num_queues;
+
+/*
+ * The primary device has 1 queue and VIRTIO_BLK_F_MQ is not enabled. The
+ * VIRTIO specification allows VIRTIO_BLK_F_MQ to be enabled when there is
+ * only 1 virtqueue, but --device vhost-user-blk-pci doesn't do this (which
+ * is also spec-compliant).
+ */
+features = qvirtio_get_features(dev1);
+g_assert_cmpint(features & (1u << VIRTIO_BLK_F_MQ), ==, 0);
+features = features & ~(QVIRTIO_F_BAD_FEATURE |
+(1u << VIRTIO_RING_F_INDIRECT_DESC) |
+(1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+(1u << VIRTIO_BLK_F_SCSI));
+qvirtio_set_features(dev1, features);
+
+/* Hotplug a secondary device with 8 queues */
+qtest_qmp_device_add(qts, "vhost-user-blk-pci", "drv1",
+ "{'addr': %s, 'chardev': 'char2', 'num-queues': 8}",
+ stringify(PCI_SLOT_HP) ".0");
+
+pdev8 = virtio_pci_new(pdev1->pdev->bus,
+   &(QPCIAddress) {
+   .devfn = QPCI_DEVFN(PCI_SLOT_HP, 0)
+   });
+g_assert_nonnull(pdev8);
+g_assert_cmpint(pdev8->vdev.device_type, ==, VIRTIO_ID_BLOCK);
+
+qos_object_start_hw(&pdev8->obj);
+
+dev8 = &pdev8->vdev;
+features = qvirtio_get_features(dev8);
+g_assert_cmpint(features & (1u << VIRTIO_BLK_F_MQ),
+==,
+(1u << VIRTIO_BLK_F_MQ));
+features = features & ~(QVIRTIO_F_BAD_FEATURE |
+(1u << VIRTIO_RING_F_INDIRECT_DESC) |
+(1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+(1u << VIRTIO_BLK_F_SCSI) |
+(1u << VIRTIO_BLK_F_MQ));
+qvirtio_set_features(dev8, features);
+
+num_queues = qvirtio_config_readw(dev8,
+offsetof(struct virtio_blk_config, num_queues));
+g_assert_cmpint(num_queues, ==, 8);
+
+qvirtio_pci_device_disable(pdev8);
+qos_object_destroy(&pdev8->obj);
+
+/* unplug secondary disk */
+qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
+}
+
 /*
  * Check that setting the vring addr on a non-existent virtqueue does
  * not crash.
@@ -688,7 +749,8 @@ static void quit_storage_daemon(void *data)
 g_free(data);
 }
 
-static void start_vhost_user_blk(GString *cmd_line, int vus_instances)
+static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
+ int num_queues)
 {
 const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
 int i;
@@ -713,8 +775,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances)
 g_string_append_printf(storage_daemon_command,
 "--blockdev driver=file,node-name=disk%d,filename=%s "
 "--export 
type=vhost-user-blk,id=disk%d,addr.type=unix,addr.path=%s,"
-"node-name=disk%i,writable=on ",
-i, img_path, i, sock_path, i);
+"node-name=disk%i,writable=on,num-queues=%d ",
+i, img_path, i, sock_path, i, num_queues);
 
 g_string_append_printf(cmd_line, "-chardev socket,id=char%d,path=%s ",
i + 1, sock_path);
@@ -748,7 +810,7 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances)
 
 static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
 {
-start_vhost_user_blk(cmd_line, 1);
+start_vhost_user_blk(cmd_line, 1, 1);
 return arg;
 }
 
@@ -762,7 +824,13 @@ static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
 static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
 {
 /* "-chardev socket,id=char2" is used for pci_hotplug*/
-start_vhost_user_blk(cmd_line, 2);
+start_vhost_user_blk(cmd_line, 2, 1);
+return arg;
+}
+
+static void *vhost_user_blk_multiqueue_test_setup(GString *cmd_line, void *arg)
+{
+start_vhost_user_blk(cmd_line, 2, 8);
 

[PULL 14/19] block: drop write notifiers

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

They are unused now.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20210506090621.11848-3-vsement...@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi 
Signed-off-by: Max Reitz 
---
 include/block/block_int.h | 12 
 block.c   |  1 -
 block/io.c|  6 --
 3 files changed, 19 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index aff948fb63..b2c8b09d0f 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -954,9 +954,6 @@ struct BlockDriverState {
  */
 int64_t total_sectors;
 
-/* Callback before write request is processed */
-NotifierWithReturnList before_write_notifiers;
-
 /* threshold limit for writes, in bytes. "High water mark". */
 uint64_t write_threshold_offset;
 
@@ -1083,15 +1080,6 @@ void bdrv_parse_filename_strip_prefix(const char *filename, const char *prefix,
 bool bdrv_backing_overridden(BlockDriverState *bs);
 
 
-/**
- * bdrv_add_before_write_notifier:
- *
- * Register a callback that is invoked before write requests are processed but
- * after any throttling or waiting for overlapping requests.
- */
-void bdrv_add_before_write_notifier(BlockDriverState *bs,
-NotifierWithReturn *notifier);
-
 /**
  * bdrv_add_aio_context_notifier:
  *
diff --git a/block.c b/block.c
index 9ad725d205..75a82af641 100644
--- a/block.c
+++ b/block.c
@@ -400,7 +400,6 @@ BlockDriverState *bdrv_new(void)
 for (i = 0; i < BLOCK_OP_TYPE_MAX; i++) {
 QLIST_INIT(&bs->op_blockers[i]);
 }
-notifier_with_return_list_init(&bs->before_write_notifiers);
 qemu_co_mutex_init(&bs->reqs_lock);
 qemu_mutex_init(&bs->dirty_bitmap_mutex);
 bs->refcnt = 1;
diff --git a/block/io.c b/block/io.c
index 3520de51bb..1e826ba9e8 100644
--- a/block/io.c
+++ b/block/io.c
@@ -3165,12 +3165,6 @@ bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
 return true;
 }
 
-void bdrv_add_before_write_notifier(BlockDriverState *bs,
-NotifierWithReturn *notifier)
-{
-notifier_with_return_list_add(&bs->before_write_notifiers, notifier);
-}
-
 void bdrv_io_plug(BlockDriverState *bs)
 {
 BdrvChild *child;
-- 
2.31.1




[PULL 12/19] qemu-iotests: fix pylint 2.8 consider-using-with error

2021-05-14 Thread Max Reitz
From: Emanuele Giuseppe Esposito 

pylint 2.8 introduces the consider-using-with error, suggesting
to use the 'with' block statement when possible.

Modify all subprocess.Popen calls to use the 'with' statement,
except the one in __init__ of QemuIoInteractive class, since
it is assigned to a class field and used in other methods.

Signed-off-by: Emanuele Giuseppe Esposito 
Message-Id: <20210510190449.65948-1-eespo...@redhat.com>
[mreitz: Disable bad-option-value warning in the iotests' pylintrc, so
 that disabling consider-using-with in QemuIoInteractive will
 not produce a warning in pre-2.8 pylint versions]
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/iotests.py| 65 
 tests/qemu-iotests/pylintrc  |  3 ++
 tests/qemu-iotests/testrunner.py | 22 +--
 3 files changed, 47 insertions(+), 43 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 5ead94229f..777fa2ec0e 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -112,15 +112,14 @@ def qemu_tool_pipe_and_status(tool: str, args: Sequence[str],
 Run a tool and return both its output and its exit code
 """
 stderr = subprocess.STDOUT if connect_stderr else None
-subp = subprocess.Popen(args,
-stdout=subprocess.PIPE,
-stderr=stderr,
-universal_newlines=True)
-output = subp.communicate()[0]
-if subp.returncode < 0:
-cmd = ' '.join(args)
-sys.stderr.write(f'{tool} received signal {-subp.returncode}: {cmd}\n')
-return (output, subp.returncode)
+with subprocess.Popen(args, stdout=subprocess.PIPE,
+  stderr=stderr, universal_newlines=True) as subp:
+output = subp.communicate()[0]
+if subp.returncode < 0:
+cmd = ' '.join(args)
+sys.stderr.write(f'{tool} received signal \
+   {-subp.returncode}: {cmd}\n')
+return (output, subp.returncode)
 
 def qemu_img_pipe_and_status(*args: str) -> Tuple[str, int]:
 """
@@ -236,6 +235,9 @@ def qemu_io_silent_check(*args):
 class QemuIoInteractive:
 def __init__(self, *args):
 self.args = qemu_io_args_no_fmt + list(args)
+# We need to keep the Popen object around, and not
+# close it immediately. Therefore, disable the pylint check:
+# pylint: disable=consider-using-with
 self._p = subprocess.Popen(self.args, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
@@ -309,22 +311,22 @@ def qemu_nbd_popen(*args):
 cmd.extend(args)
 
 log('Start NBD server')
-p = subprocess.Popen(cmd)
-try:
-while not os.path.exists(pid_file):
-if p.poll() is not None:
-raise RuntimeError(
-"qemu-nbd terminated with exit code {}: {}"
-.format(p.returncode, ' '.join(cmd)))
-
-time.sleep(0.01)
-yield
-finally:
-if os.path.exists(pid_file):
-os.remove(pid_file)
-log('Kill NBD server')
-p.kill()
-p.wait()
+with subprocess.Popen(cmd) as p:
+try:
+while not os.path.exists(pid_file):
+if p.poll() is not None:
+raise RuntimeError(
+"qemu-nbd terminated with exit code {}: {}"
+.format(p.returncode, ' '.join(cmd)))
+
+time.sleep(0.01)
+yield
+finally:
+if os.path.exists(pid_file):
+os.remove(pid_file)
+log('Kill NBD server')
+p.kill()
+p.wait()
 
 def compare_images(img1, img2, fmt1=imgfmt, fmt2=imgfmt):
 '''Return True if two image files are identical'''
@@ -333,13 +335,12 @@ def compare_images(img1, img2, fmt1=imgfmt, fmt2=imgfmt):
 
 def create_image(name, size):
 '''Create a fully-allocated raw image with sector markers'''
-file = open(name, 'wb')
-i = 0
-while i < size:
-sector = struct.pack('>l504xl', i // 512, i // 512)
-file.write(sector)
-i = i + 512
-file.close()
+with open(name, 'wb') as file:
+i = 0
+while i < size:
+sector = struct.pack('>l504xl', i // 512, i // 512)
+file.write(sector)
+i = i + 512
 
 def image_size(img):
 '''Return image's virtual size'''
diff --git a/tests/qemu-iotests/pylintrc b/tests/qemu-iotests/pylintrc
index 7a6c0a9474..f2c0b522ac 100644
--- a/tests/qemu-iotests/pylintrc
+++ b/tests/qemu-iotests/pylintrc
@@ -19,6 +19,9 @@ disable=invalid-name,
 too-many-public-methods,
 # pylint warns about Optional[] etc. as unsubscriptable in 3.9
 unsubscriptable-object,
+# Sometimes we need to disable a newly introduced pylint warning.
+  

[PULL 14/14] vhost-user-blk: Check that num-queues is supported by backend

2021-05-14 Thread Kevin Wolf
Creating a device with a number of queues that isn't supported by the
backend is pointless: the device won't work properly and the error
messages are rather confusing.

Just fail to create the device if num-queues is higher than what the
backend supports.

Since the relationship between num-queues and the number of virtqueues
depends on the specific device, this is an additional value that needs
to be initialised by the device. For convenience, allow leaving it 0 if
the check should be skipped. This makes sense for vhost-user-net where
separate vhost devices are used for the queues and custom initialisation
code is needed to perform the check.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1935031
Signed-off-by: Kevin Wolf 
Reviewed-by: Raphael Norwitz 
Message-Id: <20210429171316.162022-7-kw...@redhat.com>
Reviewed-by: Michael S. Tsirkin 
Signed-off-by: Kevin Wolf 
---
 include/hw/virtio/vhost.h | 2 ++
 hw/block/vhost-user-blk.c | 1 +
 hw/virtio/vhost-user.c| 5 +
 3 files changed, 8 insertions(+)

diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 4a8bc75415..21a9a52088 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -74,6 +74,8 @@ struct vhost_dev {
 int nvqs;
 /* the first virtqueue which would be used by this vhost dev */
 int vq_index;
+/* if non-zero, minimum required value for max_queues */
+int num_queues;
 uint64_t features;
 uint64_t acked_features;
 uint64_t backend_features;
diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
index c7e502f4c7..c6210fad0c 100644
--- a/hw/block/vhost-user-blk.c
+++ b/hw/block/vhost-user-blk.c
@@ -324,6 +324,7 @@ static int vhost_user_blk_connect(DeviceState *dev, Error **errp)
 }
 s->connected = true;
 
+s->dev.num_queues = s->num_queues;
 s->dev.nvqs = s->num_queues;
 s->dev.vqs = s->vhost_vqs;
 s->dev.vq_index = 0;
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index ded0c10453..ee57abe045 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -1909,6 +1909,11 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque)
 return err;
 }
 }
+if (dev->num_queues && dev->max_queues < dev->num_queues) {
+error_report("The maximum number of queues supported by the "
+ "backend is %" PRIu64, dev->max_queues);
+return -EINVAL;
+}
 
 if (virtio_has_feature(features, VIRTIO_F_IOMMU_PLATFORM) &&
 !(virtio_has_feature(dev->protocol_features,
-- 
2.30.2
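
The rule being enforced is small but easy to get wrong; restated as a
Python sketch (0 means the device opted out of the check, as the commit
message explains; max_queues is the limit reported by the backend):

def validate_num_queues(num_queues, max_queues):
    # num_queues == 0: device skips the check (e.g. vhost-user-net)
    return num_queues == 0 or num_queues <= max_queues

assert validate_num_queues(0, 1)       # check skipped
assert validate_num_queues(8, 16)      # within the backend limit
assert not validate_num_queues(8, 4)   # rejected at device creation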




[PULL 11/19] block/copy-on-read: use bdrv_drop_filter() and drop s->active

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

Now, after the huge update of the block graph permission update algorithm,
we don't need this workaround with the active state of the filter. Drop it
and use the new smart bdrv_drop_filter() function.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-Id: <20210506194143.394141-1-vsement...@virtuozzo.com>
Signed-off-by: Max Reitz 
---
 block/copy-on-read.c | 33 +
 1 file changed, 1 insertion(+), 32 deletions(-)

diff --git a/block/copy-on-read.c b/block/copy-on-read.c
index 9cad9e1b8c..c428682272 100644
--- a/block/copy-on-read.c
+++ b/block/copy-on-read.c
@@ -29,7 +29,6 @@
 
 
 typedef struct BDRVStateCOR {
-bool active;
 BlockDriverState *bottom_bs;
 bool chain_frozen;
 } BDRVStateCOR;
@@ -89,7 +88,6 @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
  */
 bdrv_ref(bottom_bs);
 }
-state->active = true;
 state->bottom_bs = bottom_bs;
 
 /*
@@ -112,17 +110,6 @@ static void cor_child_perm(BlockDriverState *bs, BdrvChild *c,
uint64_t perm, uint64_t shared,
uint64_t *nperm, uint64_t *nshared)
 {
-BDRVStateCOR *s = bs->opaque;
-
-if (!s->active) {
-/*
- * While the filter is being removed
- */
-*nperm = 0;
-*nshared = BLK_PERM_ALL;
-return;
-}
-
 *nperm = perm & PERM_PASSTHROUGH;
 *nshared = (shared & PERM_PASSTHROUGH) | PERM_UNCHANGED;
 
@@ -280,32 +267,14 @@ static BlockDriver bdrv_copy_on_read = {
 
 void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
 {
-BdrvChild *child;
-BlockDriverState *bs;
 BDRVStateCOR *s = cor_filter_bs->opaque;
 
-child = bdrv_filter_child(cor_filter_bs);
-if (!child) {
-return;
-}
-bs = child->bs;
-
-/* Retain the BDS until we complete the graph change. */
-bdrv_ref(bs);
-/* Hold a guest back from writing while permissions are being reset. */
-bdrv_drained_begin(bs);
-/* Drop permissions before the graph change. */
-s->active = false;
 /* unfreeze, as otherwise bdrv_replace_node() will fail */
 if (s->chain_frozen) {
 s->chain_frozen = false;
 bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
 }
-bdrv_child_refresh_perms(cor_filter_bs, child, &error_abort);
-bdrv_replace_node(cor_filter_bs, bs, &error_abort);
-
-bdrv_drained_end(bs);
-bdrv_unref(bs);
+bdrv_drop_filter(cor_filter_bs, &error_abort);
 bdrv_unref(cor_filter_bs);
 }
 
-- 
2.31.1




[PULL 08/14] vhost-user-blk-test: test discard/write zeroes invalid inputs

2021-05-14 Thread Kevin Wolf
From: Stefan Hajnoczi 

Exercise input validation code paths in
block/export/vhost-user-blk-server.c.

Signed-off-by: Stefan Hajnoczi 
Message-Id: <20210309094106.196911-5-stefa...@redhat.com>
Signed-off-by: Kevin Wolf 
Message-Id: <20210322092327.150720-4-stefa...@redhat.com>
Signed-off-by: Kevin Wolf 
---
 tests/qtest/vhost-user-blk-test.c | 124 ++
 1 file changed, 124 insertions(+)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index d37e1c30bd..8796c74ca4 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -94,6 +94,124 @@ static uint64_t virtio_blk_request(QGuestAllocator *alloc, QVirtioDevice *d,
 return addr;
 }
 
+static void test_invalid_discard_write_zeroes(QVirtioDevice *dev,
+  QGuestAllocator *alloc,
+  QTestState *qts,
+  QVirtQueue *vq,
+  uint32_t type)
+{
+QVirtioBlkReq req;
+struct virtio_blk_discard_write_zeroes dwz_hdr;
+struct virtio_blk_discard_write_zeroes dwz_hdr2[2];
+uint64_t req_addr;
+uint32_t free_head;
+uint8_t status;
+
+/* More than one dwz is not supported */
+req.type = type;
+req.data = (char *) dwz_hdr2;
+dwz_hdr2[0].sector = 0;
+dwz_hdr2[0].num_sectors = 1;
+dwz_hdr2[0].flags = 0;
+dwz_hdr2[1].sector = 1;
+dwz_hdr2[1].num_sectors = 1;
+dwz_hdr2[1].flags = 0;
+
+virtio_blk_fix_dwz_hdr(dev, &dwz_hdr2[0]);
+virtio_blk_fix_dwz_hdr(dev, &dwz_hdr2[1]);
+
+req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr2));
+
+free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr2), false, true);
+qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr2), 1, true,
+   false);
+
+qvirtqueue_kick(qts, dev, vq, free_head);
+
+qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+   QVIRTIO_BLK_TIMEOUT_US);
+status = readb(req_addr + 16 + sizeof(dwz_hdr2));
+g_assert_cmpint(status, ==, VIRTIO_BLK_S_UNSUPP);
+
+guest_free(alloc, req_addr);
+
+/* num_sectors must be less than config->max_write_zeroes_sectors */
+req.type = type;
+req.data = (char *) &dwz_hdr;
+dwz_hdr.sector = 0;
+dwz_hdr.num_sectors = 0xffffffff;
+dwz_hdr.flags = 0;
+
+virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr), 1, true,
+   false);
+
+qvirtqueue_kick(qts, dev, vq, free_head);
+
+qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+   QVIRTIO_BLK_TIMEOUT_US);
+status = readb(req_addr + 16 + sizeof(dwz_hdr));
+g_assert_cmpint(status, ==, VIRTIO_BLK_S_IOERR);
+
+guest_free(alloc, req_addr);
+
+/* sector must be less than the device capacity */
+req.type = type;
+req.data = (char *) &dwz_hdr;
+dwz_hdr.sector = TEST_IMAGE_SIZE / 512 + 1;
+dwz_hdr.num_sectors = 1;
+dwz_hdr.flags = 0;
+
+virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr), 1, true,
+   false);
+
+qvirtqueue_kick(qts, dev, vq, free_head);
+
+qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+   QVIRTIO_BLK_TIMEOUT_US);
+status = readb(req_addr + 16 + sizeof(dwz_hdr));
+g_assert_cmpint(status, ==, VIRTIO_BLK_S_IOERR);
+
+guest_free(alloc, req_addr);
+
+/* reserved flag bits must be zero */
+req.type = type;
+req.data = (char *) &dwz_hdr;
+dwz_hdr.sector = 0;
+dwz_hdr.num_sectors = 1;
+dwz_hdr.flags = ~VIRTIO_BLK_WRITE_ZEROES_FLAG_UNMAP;
+
+virtio_blk_fix_dwz_hdr(dev, &dwz_hdr);
+
+req_addr = virtio_blk_request(alloc, dev, &req, sizeof(dwz_hdr));
+
+free_head = qvirtqueue_add(qts, vq, req_addr, 16, false, true);
+qvirtqueue_add(qts, vq, req_addr + 16, sizeof(dwz_hdr), false, true);
+qvirtqueue_add(qts, vq, req_addr + 16 + sizeof(dwz_hdr), 1, true,
+   false);
+
+qvirtqueue_kick(qts, dev, vq, free_head);
+
+qvirtio_wait_used_elem(qts, dev, vq, free_head, NULL,
+   QVIRTIO_BLK_TIMEOUT_US);
+status = readb(req_addr + 16 + sizeof(dwz_hdr));
+g_assert_cmpint(status, ==, VIRTIO_BLK_S_UNSUPP);
+
+guest_free(alloc, req_addr);
+}
+
 /* Returns the request virtqueue so th

[PULL 07/19] qemu-iotests: move command line and environment handling from TestRunner to TestEnv

2021-05-14 Thread Max Reitz
From: Paolo Bonzini 

In the next patch, "check" will learn how to execute a test script without
going through TestRunner.  To enable this, keep only the text output
and subprocess handling in the TestRunner; move into TestEnv the logic
to prepare for running a subprocess.

Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Paolo Bonzini 
Tested-by: Emanuele Giuseppe Esposito 
Message-Id: <20210323181928.311862-4-pbonz...@redhat.com>
Message-Id: <20210503110110.476887-4-pbonz...@redhat.com>
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/testenv.py| 17 -
 tests/qemu-iotests/testrunner.py | 14 +-
 2 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/tests/qemu-iotests/testenv.py b/tests/qemu-iotests/testenv.py
index 6d27712617..fca3a609e0 100644
--- a/tests/qemu-iotests/testenv.py
+++ b/tests/qemu-iotests/testenv.py
@@ -25,7 +25,7 @@
 import random
 import subprocess
 import glob
-from typing import Dict, Any, Optional, ContextManager
+from typing import List, Dict, Any, Optional, ContextManager
 
 
 def isxfile(path: str) -> bool:
@@ -74,6 +74,21 @@ class TestEnv(ContextManager['TestEnv']):
  'CACHEMODE_IS_DEFAULT', 'IMGFMT_GENERIC', 'IMGOPTSSYNTAX',
  'IMGKEYSECRET', 'QEMU_DEFAULT_MACHINE', 'MALLOC_PERTURB_']
 
+def prepare_subprocess(self, args: List[str]) -> Dict[str, str]:
+if self.debug:
+args.append('-d')
+
+with open(args[0], encoding="utf-8") as f:
+try:
+if f.readline().rstrip() == '#!/usr/bin/env python3':
+args.insert(0, self.python)
+except UnicodeDecodeError:  # binary test? for future.
+pass
+
+os_env = os.environ.copy()
+os_env.update(self.get_env())
+return os_env
+
 def get_env(self) -> Dict[str, str]:
 env = {}
 for v in self.env_variables:
diff --git a/tests/qemu-iotests/testrunner.py b/tests/qemu-iotests/testrunner.py
index 1fc61fcaa3..519924dc81 100644
--- a/tests/qemu-iotests/testrunner.py
+++ b/tests/qemu-iotests/testrunner.py
@@ -129,7 +129,6 @@ class TestRunner(ContextManager['TestRunner']):
 def __init__(self, env: TestEnv, makecheck: bool = False,
  color: str = 'auto') -> None:
 self.env = env
-self.test_run_env = self.env.get_env()
 self.makecheck = makecheck
 self.last_elapsed = LastElapsedTime('.last-elapsed-cache', env)
 
@@ -243,18 +242,7 @@ def do_run_test(self, test: str) -> TestResult:
 silent_unlink(p)
 
 args = [str(f_test.resolve())]
-if self.env.debug:
-args.append('-d')
-
-with f_test.open(encoding="utf-8") as f:
-try:
-if f.readline().rstrip() == '#!/usr/bin/env python3':
-args.insert(0, self.env.python)
-except UnicodeDecodeError:  # binary test? for future.
-pass
-
-env = os.environ.copy()
-env.update(self.test_run_env)
+env = self.env.prepare_subprocess(args)
 
 t0 = time.time()
 with f_bad.open('w', encoding="utf-8") as f:
-- 
2.31.1
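
A minimal sketch of how a caller is expected to use the new helper (illustrative
only — the `test_env` variable and the standalone invocation below are assumed,
not taken from the patch):

import subprocess

# prepare_subprocess() may prepend the Python interpreter to `args`
# (for scripts whose first line is '#!/usr/bin/env python3') and
# returns the merged environment for the child process.
args = ['./001']                            # hypothetical test script
env = test_env.prepare_subprocess(args)     # note: may mutate args in place
subprocess.run(args, env=env, check=False)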




[PULL 03/14] block: Fix Transaction leak in bdrv_reopen_multiple()

2021-05-14 Thread Kevin Wolf
Like other error paths, this one needs to call tran_finalize() and clean
up the BlockReopenQueue, too.

Fixes: CID 1452772
Fixes: 72373e40fbc7e4218061a8211384db362d3e7348
Signed-off-by: Kevin Wolf 
Message-Id: <20210503110555.24001-3-kw...@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: Kevin Wolf 
---
 block.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block.c b/block.c
index c411e8a5c6..13321c1cc5 100644
--- a/block.c
+++ b/block.c
@@ -4051,7 +4051,7 @@ int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp)
 ret = bdrv_flush(bs_entry->state.bs);
 if (ret < 0) {
 error_setg_errno(errp, -ret, "Error flushing drive");
-goto cleanup;
+goto abort;
 }
 }
 
-- 
2.30.2




[PULL 03/19] monitor: hmp_qemu_io: acquire aio contex, fix crash

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

Max reported the following bug:

$ ./qemu-img create -f raw src.img 1G
$ ./qemu-img create -f raw dst.img 1G

$ (echo '
   {"execute":"qmp_capabilities"}
   {"execute":"blockdev-mirror",
"arguments":{"job-id":"mirror",
 "device":"source",
 "target":"target",
 "sync":"full",
 "filter-node-name":"mirror-top"}}
'; sleep 3; echo '
   {"execute":"human-monitor-command",
"arguments":{"command-line":
 "qemu-io mirror-top \"write 0 1G\""}}') \
| x86_64-softmmu/qemu-system-x86_64 \
   -qmp stdio \
   -blockdev file,node-name=source,filename=src.img \
   -blockdev file,node-name=target,filename=dst.img \
   -object iothread,id=iothr0 \
   -device virtio-blk,drive=source,iothread=iothr0

crashes:

0  raise () at /usr/lib/libc.so.6
1  abort () at /usr/lib/libc.so.6
2  error_exit
   (err=<optimized out>,
   msg=msg@entry=0x55fbb1634790 <__func__.27> "qemu_mutex_unlock_impl")
   at ../util/qemu-thread-posix.c:37
3  qemu_mutex_unlock_impl
   (mutex=mutex@entry=0x55fbb25ab6e0,
   file=file@entry=0x55fbb1636957 "../util/async.c",
   line=line@entry=650)
   at ../util/qemu-thread-posix.c:109
4  aio_context_release (ctx=ctx@entry=0x55fbb25ab680) at ../util/async.c:650
5  bdrv_do_drained_begin
   (bs=bs@entry=0x55fbb3a87000, recursive=recursive@entry=false,
   parent=parent@entry=0x0,
   ignore_bds_parents=ignore_bds_parents@entry=false,
   poll=poll@entry=true) at ../block/io.c:441
6  bdrv_do_drained_begin
   (poll=true, ignore_bds_parents=false, parent=0x0, recursive=false,
   bs=0x55fbb3a87000) at ../block/io.c:448
7  blk_drain (blk=0x55fbb26c5a00) at ../block/block-backend.c:1718
8  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:498
9  blk_unref (blk=0x55fbb26c5a00) at ../block/block-backend.c:491
10 hmp_qemu_io (mon=0x7fffaf3fc7d0, qdict=<optimized out>)
   at ../block/monitor/block-hmp-cmds.c:628

man pthread_mutex_unlock
...
EPERM  The  mutex type is PTHREAD_MUTEX_ERRORCHECK or
PTHREAD_MUTEX_RECURSIVE, or the mutex is a robust mutex, and the
current thread does not own the mutex.

So, the thread doesn't own the mutex. And we have an iothread here.

Next, note that AIO_WAIT_WHILE() documents that ctx must be acquired
exactly once by the caller. But where is it acquired in the call stack?
Seemingly nowhere.

qemuio_command() does acquire the aio context.. But we need the context
acquired around blk_unref() as well, and actually around blk_insert_bs() too.

Let's refactor qemuio_command() so that it doesn't acquire the aio context
but callers do that instead. This way we can cleanly acquire the aio
context in hmp_qemu_io() around all three calls.

Reported-by: Max Reitz 
Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-Id: <20210423134233.51495-1-vsement...@virtuozzo.com>
[mreitz: Fixed comment]
Signed-off-by: Max Reitz 
---
 block/monitor/block-hmp-cmds.c | 31 +--
 qemu-io-cmds.c |  8 
 qemu-io.c  | 17 +++--
 3 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
index ebf1033f31..3e6670c963 100644
--- a/block/monitor/block-hmp-cmds.c
+++ b/block/monitor/block-hmp-cmds.c
@@ -557,8 +557,10 @@ void hmp_eject(Monitor *mon, const QDict *qdict)
 
 void hmp_qemu_io(Monitor *mon, const QDict *qdict)
 {
-BlockBackend *blk;
+BlockBackend *blk = NULL;
+BlockDriverState *bs = NULL;
 BlockBackend *local_blk = NULL;
+AioContext *ctx = NULL;
 bool qdev = qdict_get_try_bool(qdict, "qdev", false);
 const char *device = qdict_get_str(qdict, "device");
 const char *command = qdict_get_str(qdict, "command");
@@ -573,20 +575,24 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
 } else {
 blk = blk_by_name(device);
 if (!blk) {
-BlockDriverState *bs = bdrv_lookup_bs(NULL, device, &err);
-if (bs) {
-blk = local_blk = blk_new(bdrv_get_aio_context(bs),
-  0, BLK_PERM_ALL);
-ret = blk_insert_bs(blk, bs, &err);
-if (ret < 0) {
-goto fail;
-}
-} else {
+bs = bdrv_lookup_bs(NULL, device, &err);
+if (!bs) {
 goto fail;
 }
 }
 }
 
+ctx = blk ? blk_get_aio_context(blk) : bdrv_get_aio_context(bs);
+aio_context_acquire(ctx);
+
+if (bs) {
+blk = local_blk = blk_new(bdrv_get_aio_context(bs), 0, BLK_PERM_ALL);
+ret = blk_insert_bs(blk, bs, &err);
+if (ret < 0) {
+goto fail;
+}
+}
+
 /*
  * Notably absent: Proper permission management. This is sad, but it seems
  * almost impossible to achieve without changing the semantics and thereby
@@ -616,6 +622,11 @@ void hmp_qemu_io(Monitor *mon, const QDict *qdict)
 
 fail:
 blk_unref(local_blk);
+
+if (ctx) {
+aio_c
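
In outline, the pattern the patch establishes is that the caller owns the
AioContext around every block-layer access. A condensed, hypothetical sketch
(error handling and permission setup omitted; simplified from hmp_qemu_io()
above, not the exact resulting code):

ctx = blk ? blk_get_aio_context(blk) : bdrv_get_aio_context(bs);
aio_context_acquire(ctx);

/* All block accesses happen while the context is held:
 *   blk_insert_bs(blk, bs, &err);    wrap a node in a temporary backend
 *   qemuio_command(blk, command);    no longer acquires the context itself
 *   blk_unref(local_blk);            may drain, needs the context too
 */

aio_context_release(ctx);
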

Re: [PATCH v2 04/12] crypto: drop back compatibility typedefs for nettle

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:04 AM Daniel P. Berrangé  wrote:
>
> Now that we only support modern nettle, we don't need to have local
> typedefs to mask the real nettle types.
>
> Reviewed-by: Thomas Huth 
> Reviewed-by: Richard Henderson 
> Signed-off-by: Daniel P. Berrangé 
> ---
>  crypto/cipher-nettle.c.inc | 60 --
>  crypto/hash-nettle.c   |  6 ++--
>  crypto/hmac-nettle.c   |  8 ++---
>  3 files changed, 30 insertions(+), 44 deletions(-)
>

Reviewed-by: Willian Rampazzo 




[PULL 05/14] block/export: improve vu_blk_sect_range_ok()

2021-05-14 Thread Kevin Wolf
From: Stefan Hajnoczi 

The checks in vu_blk_sect_range_ok() assume VIRTIO_BLK_SECTOR_SIZE is
equal to BDRV_SECTOR_SIZE. This is true, but let's add a
QEMU_BUILD_BUG_ON() to make it explicit.

We might as well check that the request buffer size is a multiple of
VIRTIO_BLK_SECTOR_SIZE while we're at it.

Suggested-by: Max Reitz 
Signed-off-by: Stefan Hajnoczi 
Message-Id: <20210331142727.391465-1-stefa...@redhat.com>
Reviewed-by: Eric Blake 
Signed-off-by: Kevin Wolf 
---
 block/export/vhost-user-blk-server.c | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index fa06996d37..1862563336 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -70,9 +70,16 @@ static void vu_blk_req_complete(VuBlkReq *req)
 static bool vu_blk_sect_range_ok(VuBlkExport *vexp, uint64_t sector,
  size_t size)
 {
-uint64_t nb_sectors = size >> BDRV_SECTOR_BITS;
+uint64_t nb_sectors;
 uint64_t total_sectors;
 
+if (size % VIRTIO_BLK_SECTOR_SIZE) {
+return false;
+}
+
+nb_sectors = size >> VIRTIO_BLK_SECTOR_BITS;
+
+QEMU_BUILD_BUG_ON(BDRV_SECTOR_SIZE != VIRTIO_BLK_SECTOR_SIZE);
 if (nb_sectors > BDRV_REQUEST_MAX_SECTORS) {
 return false;
 }
-- 
2.30.2
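
The QEMU_BUILD_BUG_ON() above turns the implicit equality of the two sector
sizes into a compile-time guarantee. In standard C11 the same idea can be
sketched as follows (illustrative values and message; this is not QEMU's
macro implementation):

#include <assert.h>

#define BDRV_SECTOR_SIZE        512   /* assumed here for illustration */
#define VIRTIO_BLK_SECTOR_SIZE  512   /* assumed here for illustration */

/* Compilation fails if the sizes ever diverge, so shifting by
 * VIRTIO_BLK_SECTOR_BITS stays interchangeable with BDRV_SECTOR_BITS. */
static_assert(BDRV_SECTOR_SIZE == VIRTIO_BLK_SECTOR_SIZE,
              "vhost-user-blk export assumes equal sector sizes");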




[PULL 04/16] virtio-blk: Constify VirtIOFeature feature_sizes[]

2021-05-14 Thread Michael S. Tsirkin
From: Philippe Mathieu-Daudé 

Signed-off-by: Philippe Mathieu-Daudé 
Message-Id: <20210511104157.2880306-3-phi...@redhat.com>
---
 hw/block/virtio-blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index d28979efb8..f139cd7cc9 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -40,7 +40,7 @@
  * Starting from the discard feature, we can use this array to properly
  * set the config size depending on the features enabled.
  */
-static VirtIOFeature feature_sizes[] = {
+static const VirtIOFeature feature_sizes[] = {
 {.flags = 1ULL << VIRTIO_BLK_F_DISCARD,
  .end = endof(struct virtio_blk_config, discard_sector_alignment)},
 {.flags = 1ULL << VIRTIO_BLK_F_WRITE_ZEROES,
-- 
MST




[PULL 16/19] block/write-threshold: drop extra APIs

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

bdrv_write_threshold_exceeded() is unused.

bdrv_write_threshold_is_set() is used only to double-check the value of
bs->write_threshold_offset in tests. There is no real point to it (both
tests already check the real value with the help of bdrv_write_threshold_get()).

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20210506090621.11848-5-vsement...@virtuozzo.com>
Reviewed-by: Eric Blake 
Reviewed-by: Stefan Hajnoczi 
[mreitz: Adjusted commit message as per Eric's suggestion]
Signed-off-by: Max Reitz 
---
 include/block/write-threshold.h   | 24 
 block/write-threshold.c   | 19 ---
 tests/unit/test-write-threshold.c |  4 
 3 files changed, 47 deletions(-)

diff --git a/include/block/write-threshold.h b/include/block/write-threshold.h
index 848a5dde85..a03ee1cacd 100644
--- a/include/block/write-threshold.h
+++ b/include/block/write-threshold.h
@@ -35,30 +35,6 @@ void bdrv_write_threshold_set(BlockDriverState *bs, uint64_t threshold_bytes);
  */
 uint64_t bdrv_write_threshold_get(const BlockDriverState *bs);
 
-/*
- * bdrv_write_threshold_is_set
- *
- * Tell if a write threshold is set for a given BDS.
- */
-bool bdrv_write_threshold_is_set(const BlockDriverState *bs);
-
-/*
- * bdrv_write_threshold_exceeded
- *
- * Return the extent of a write request that exceeded the threshold,
- * or zero if the request is below the threshold.
- * Return zero also if the threshold was not set.
- *
- * NOTE: here we assume the following holds for each request this code
- * deals with:
- *
- * assert((req->offset + req->bytes) <= UINT64_MAX)
- *
- * Please not there is *not* an actual C assert().
- */
-uint64_t bdrv_write_threshold_exceeded(const BlockDriverState *bs,
-   const BdrvTrackedRequest *req);
-
 /*
  * bdrv_write_threshold_check_write
  *
diff --git a/block/write-threshold.c b/block/write-threshold.c
index 71df3c434f..65a6acd142 100644
--- a/block/write-threshold.c
+++ b/block/write-threshold.c
@@ -24,25 +24,6 @@ uint64_t bdrv_write_threshold_get(const BlockDriverState *bs)
 return bs->write_threshold_offset;
 }
 
-bool bdrv_write_threshold_is_set(const BlockDriverState *bs)
-{
-return bs->write_threshold_offset > 0;
-}
-
-uint64_t bdrv_write_threshold_exceeded(const BlockDriverState *bs,
-   const BdrvTrackedRequest *req)
-{
-if (bdrv_write_threshold_is_set(bs)) {
-if (req->offset > bs->write_threshold_offset) {
-return (req->offset - bs->write_threshold_offset) + req->bytes;
-}
-if ((req->offset + req->bytes) > bs->write_threshold_offset) {
-return (req->offset + req->bytes) - bs->write_threshold_offset;
-}
-}
-return 0;
-}
-
 void bdrv_write_threshold_set(BlockDriverState *bs, uint64_t threshold_bytes)
 {
 bs->write_threshold_offset = threshold_bytes;
diff --git a/tests/unit/test-write-threshold.c b/tests/unit/test-write-threshold.c
index fd40a815b8..bb5c1a5217 100644
--- a/tests/unit/test-write-threshold.c
+++ b/tests/unit/test-write-threshold.c
@@ -18,8 +18,6 @@ static void test_threshold_not_set_on_init(void)
 BlockDriverState bs;
 memset(&bs, 0, sizeof(bs));
 
-g_assert(!bdrv_write_threshold_is_set(&bs));
-
 res = bdrv_write_threshold_get(&bs);
 g_assert_cmpint(res, ==, 0);
 }
@@ -33,8 +31,6 @@ static void test_threshold_set_get(void)
 
 bdrv_write_threshold_set(&bs, threshold);
 
-g_assert(bdrv_write_threshold_is_set(&bs));
-
 res = bdrv_write_threshold_get(&bs);
 g_assert_cmpint(res, ==, threshold);
 }
-- 
2.31.1




[PULL 06/19] qemu-iotests: allow passing unittest.main arguments to the test scripts

2021-05-14 Thread Max Reitz
From: Paolo Bonzini 

Python test scripts that use unittest consist of multiple tests.
unittest.main allows selecting which tests to run, but currently this
is not possible because the iotests wrapper ignores sys.argv.

unittest.main command line options also allow the user to pick the
desired options for verbosity, failfast mode, etc.  While "-d" is
currently translated to "-v", it also enables extra debug output,
and other options are not available at all.

These command line options only work if the unittest.main testRunner
argument is a type, rather than a TestRunner instance.  Therefore, pass
the class name and "verbosity" argument to unittest.main, and adjust for
the different default warnings between TextTestRunner and unittest.main.

Signed-off-by: Paolo Bonzini 
Reviewed-by: Vladimir Sementsov-Ogievskiy 
Tested-by: Emanuele Giuseppe Esposito 
Message-Id: <20210323181928.311862-3-pbonz...@redhat.com>
Message-Id: <20210503110110.476887-3-pbonz...@redhat.com>
Signed-off-by: Max Reitz 
---
 tests/qemu-iotests/iotests.py | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 55a017577f..5ead94229f 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -1308,12 +1308,16 @@ def __init__(self, stream: Optional[TextIO] = None,
  resultclass=resultclass,
  **kwargs)
 
-def execute_unittest(debug=False):
+def execute_unittest(argv: List[str], debug: bool = False) -> None:
 """Executes unittests within the calling module."""
 
-verbosity = 2 if debug else 1
-runner = ReproducibleTestRunner(verbosity=verbosity)
-unittest.main(testRunner=runner)
+# Some tests have warnings, especially ResourceWarnings for unclosed
+# files and sockets.  Ignore them for now to ensure reproducibility of
+# the test output.
+unittest.main(argv=argv,
+  testRunner=ReproducibleTestRunner,
+  verbosity=2 if debug else 1,
+  warnings=None if sys.warnoptions else 'ignore')
 
 def execute_setup_common(supported_fmts: Sequence[str] = (),
  supported_platforms: Sequence[str] = (),
@@ -1350,7 +1354,7 @@ def execute_test(*args, test_function=None, **kwargs):
 
 debug = execute_setup_common(*args, **kwargs)
 if not test_function:
-execute_unittest(debug)
+execute_unittest(sys.argv, debug)
 else:
 test_function()
 
-- 
2.31.1
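
The type-versus-instance distinction described above is easy to miss; a
minimal standalone illustration (the test class and names below are made up):

import unittest

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

if __name__ == '__main__':
    # Passing the runner *class* lets unittest.main() construct it with
    # options parsed from sys.argv (-v, -f/--failfast, -b/--buffer, ...):
    unittest.main(testRunner=unittest.TextTestRunner, verbosity=1)
    # Passing an *instance* instead would freeze those settings, and the
    # command line options would be silently ignored:
    #   unittest.main(testRunner=unittest.TextTestRunner(verbosity=2))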




Re: [PATCH v2 06/12] crypto: bump min gnutls to 3.5.18, dropping RHEL-7 support

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:04 AM Daniel P. Berrangé  wrote:
>
> It has been over two years since RHEL-8 was released, and thus per the
> platform build policy, we no longer need to support RHEL-7 as a build
> target. This lets us increment the minimum required gnutls version
>
> Per repology, current shipping versions are:
>
>  RHEL-8: 3.6.14
>   Debian Buster: 3.6.7
>  openSUSE Leap 15.2: 3.6.7
>Ubuntu LTS 18.04: 3.5.18
>Ubuntu LTS 20.04: 3.6.13
> FreeBSD: 3.6.15
>   Fedora 33: 3.6.16
>   Fedora 34: 3.7.1
> OpenBSD: 3.6.15
>  macOS HomeBrew: 3.6.15
>
> Ubuntu LTS 18.04 has the oldest version and so 3.5.18 is the new minimum.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  .gitlab-ci.yml | 15 ---
>  configure  |  2 +-
>  2 files changed, 1 insertion(+), 16 deletions(-)
>

Reviewed-by: Willian Rampazzo 




Re: [PATCH v3 3/4] migrate-bitmaps-test: Fix pylint warnings

2021-05-14 Thread Vladimir Sementsov-Ogievskiy

14.05.2021 18:43, Max Reitz wrote:

There are a couple of things pylint takes issue with:
- The "time" import is unused
- The import order (iotests should come last)
- get_bitmap_hash() doesn't use @self and so should be a function
- Semicolons at the end of some lines
- Parentheses after "if"
- Some lines are too long (80 characters instead of 79)
- inject_test_case()'s @name parameter shadows a top-level @name
   variable
- "lambda self: mc(self)" were equivalent to just "mc", but in
   inject_test_case(), it is not equivalent, so add a comment and disable
   the warning locally
- Always put two empty lines after a function
- f'exec: cat > /dev/null' does not need to be an f-string

Fix them.

Signed-off-by: Max Reitz 
---


[..]


-def inject_test_case(klass, name, method, *args, **kwargs):
+def inject_test_case(klass, suffix, method, *args, **kwargs):
  mc = operator.methodcaller(method, *args, **kwargs)
-setattr(klass, 'test_' + method + name, lambda self: mc(self))
+# The lambda is required to enforce the `self` parameter.  Without it,
+# `mc` would be called without any arguments, and then complain.
+# pylint: disable=unnecessary-lambda
+setattr(klass, 'test_' + method + suffix, lambda self: mc(self))
+
  


Interesting... I decided to experiment a bit, and here is what I can say now:

The actual reason is that a class attribute that is a function becomes a
bound method of the class instance on instantiation.

A lambda is a function, so on instantiation we'll have a method, and the
method can be called as obj.method(), and the original function will get
"self" as its first argument automatically.

mc is not a function, it's an operator.methodcaller object, so there is no
magic: the instance of the class doesn't get its own method but just a
reference to the class variable instead.

So, let's modify the comment to something like:

We want to add a function attribute to the class, so that it is correctly
converted to a method on instantiation. The lambda is necessary to "convert"
the methodcaller object (which is callable, but not a function) into a
function.


with it:
Reviewed-by: Vladimir Sementsov-Ogievskiy 

=== my experiments ===

# cat x.py
import operator

class X:
def hello(self, arg):
print("hello", arg)


mc = operator.methodcaller("hello", "Vova")
lmd = lambda self: mc(self)

print('mc:', type(mc))
print('lmd:', type(lmd))

setattr(X, "test_hello_direct", mc)
setattr(X, "test_hello_lambda", lmd)
X.simply_assigned = lmd

x = X()

x.assigned_to_instance = lmd

print('mc attached:', type(x.test_hello_direct))
print('lmd attached:', type(x.test_hello_lambda))
print('lmd simply assigned:', type(x.simply_assigned))
print('lmd assigned to instance:', type(x.assigned_to_instance))

x.test_hello_lambda()
x.simply_assigned()

print("x.test_hello_lambda is x.simply_assigned", x.test_hello_lambda is 
x.simply_assigned)
print("x.test_hello_lambda is X.test_hello_lambda", x.test_hello_lambda is 
X.test_hello_lambda)
print("x.test_hello_direct is X.test_hello_direct", x.test_hello_direct is 
X.test_hello_direct)
print("X.test_hello_lambda is X.simply_assigned", X.test_hello_lambda is 
X.simply_assigned)
print("X.test_hello_lambda type:", type(X.test_hello_lambda))

try:
x.assigned_to_instance()
except Exception as e:
print("assigned to instance call failed:", e)

try:
x.test_hello_direct()
except Exception as e:
print("direct call failed:", e)



# python3 x.py
mc: <class 'operator.methodcaller'>
lmd: <class 'function'>
mc attached: <class 'operator.methodcaller'>
lmd attached: <class 'method'>
lmd simply assigned: <class 'method'>
lmd assigned to instance: <class 'function'>
hello Vova
hello Vova
x.test_hello_lambda is x.simply_assigned False
x.test_hello_lambda is X.test_hello_lambda False
x.test_hello_direct is X.test_hello_direct True
X.test_hello_lambda is X.simply_assigned True
X.test_hello_lambda type: <class 'function'>
assigned to instance call failed: <lambda>() missing 1 required positional argument: 'self'
direct call failed: methodcaller expected 1 argument, got 0


--
Best regards,
Vladimir
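
The mechanism behind the experiment above is Python's descriptor protocol:
plain functions define __get__ and therefore bind `self` on attribute access
through an instance, while operator.methodcaller objects do not. A minimal
standalone check (for illustration):

import operator

class C:
    def hello(self, arg):
        print("hello", arg)

mc = operator.methodcaller("hello", "Vova")

# Functions are descriptors, so C().hello becomes a bound method:
print(hasattr(C.hello, "__get__"))  # True
# methodcaller instances are plain callables, so no binding happens:
print(hasattr(mc, "__get__"))       # False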



Re: [PATCH v2 01/12] gitlab: drop linux user build job for CentOS 7

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:04 AM Daniel P. Berrangé  wrote:
>
> It has been over two years since RHEL-8 was released, and thus per the
> platform build policy, we no longer need to support RHEL-7 as a build
> target.
>
> The build-user-centos7 job was to detect a failure specific to CentOS
> 7 and there are already other linux user jobs for other platforms.
> Thus we can drop this job rather than move it to CentOS 8.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  .gitlab-ci.yml | 9 -
>  1 file changed, 9 deletions(-)
>

Reviewed-by: Willian Rampazzo 




[PULL 04/19] mirror: stop cancelling in-flight requests on non-force cancel in READY

2021-05-14 Thread Max Reitz
From: Vladimir Sementsov-Ogievskiy 

If the mirror is READY, then the cancel operation does not discard the
whole result of the operation; instead, it is a documented way to get a
point-in-time snapshot of the source disk.

So, we should not cancel any requests if the mirror is READY and
force=false. Let's fix that case.

Note that the bug we had before this commit is not critical, as the
only .bdrv_cancel_in_flight implementation is nbd_cancel_in_flight(),
and it cancels only requests waiting for reconnection, so it should be
a rare case.

Fixes: 521ff8b779b11c394dbdc43f02e158dd99df308a
Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-Id: <20210421075858.40197-1-vsement...@virtuozzo.com>
Signed-off-by: Max Reitz 
---
 include/block/block_int.h | 2 +-
 include/qemu/job.h| 2 +-
 block/backup.c| 2 +-
 block/mirror.c| 6 --
 job.c | 2 +-
 tests/qemu-iotests/264| 2 +-
 6 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index c823f5b1b3..731ffedb27 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -357,7 +357,7 @@ struct BlockDriver {
  * of in-flight requests, so don't waste the time if possible.
  *
  * One example usage is to avoid waiting for an nbd target node reconnect
- * timeout during job-cancel.
+ * timeout during job-cancel with force=true.
  */
 void (*bdrv_cancel_in_flight)(BlockDriverState *bs);
 
diff --git a/include/qemu/job.h b/include/qemu/job.h
index efc6fa7544..41162ed494 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -254,7 +254,7 @@ struct JobDriver {
 /**
  * If the callback is not NULL, it will be invoked in job_cancel_async
  */
-void (*cancel)(Job *job);
+void (*cancel)(Job *job, bool force);
 
 
 /** Called when the job is freed */
diff --git a/block/backup.c b/block/backup.c
index 6cf2f974aa..bd3614ce70 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -331,7 +331,7 @@ static void coroutine_fn backup_set_speed(BlockJob *job, int64_t speed)
 }
 }
 
-static void backup_cancel(Job *job)
+static void backup_cancel(Job *job, bool force)
 {
 BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
 
diff --git a/block/mirror.c b/block/mirror.c
index 840b8e8c15..019f6deaa5 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1178,12 +1178,14 @@ static bool mirror_drained_poll(BlockJob *job)
 return !!s->in_flight;
 }
 
-static void mirror_cancel(Job *job)
+static void mirror_cancel(Job *job, bool force)
 {
 MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
 BlockDriverState *target = blk_bs(s->target);
 
-bdrv_cancel_in_flight(target);
+if (force || !job_is_ready(job)) {
+bdrv_cancel_in_flight(target);
+}
 }
 
 static const BlockJobDriver mirror_job_driver = {
diff --git a/job.c b/job.c
index 4aff13d95a..8775c1803b 100644
--- a/job.c
+++ b/job.c
@@ -716,7 +716,7 @@ static int job_finalize_single(Job *job)
 static void job_cancel_async(Job *job, bool force)
 {
 if (job->driver->cancel) {
-job->driver->cancel(job);
+job->driver->cancel(job, force);
 }
 if (job->user_paused) {
 /* Do not call job_enter here, the caller will handle it.  */
diff --git a/tests/qemu-iotests/264 b/tests/qemu-iotests/264
index 4f96825a22..bc431d1a19 100755
--- a/tests/qemu-iotests/264
+++ b/tests/qemu-iotests/264
@@ -95,7 +95,7 @@ class TestNbdReconnect(iotests.QMPTestCase):
 self.assert_qmp(result, 'return', {})
 
 def cancel_job(self):
-result = self.vm.qmp('block-job-cancel', device='drive0')
+result = self.vm.qmp('block-job-cancel', device='drive0', force=True)
 self.assert_qmp(result, 'return', {})
 
 start_t = time.time()
-- 
2.31.1
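
For reference, the behavioural difference fixed above, seen from an iotest's
point of view (a sketch along the lines of test 264; the device name is
illustrative):

# Graceful cancel of a READY mirror: in-flight requests are allowed to
# finish, and the target becomes a point-in-time snapshot of the source.
result = self.vm.qmp('block-job-cancel', device='drive0')

# Hard cancel: force=True also cancels in-flight requests (e.g. NBD
# requests stuck waiting for a reconnect).
result = self.vm.qmp('block-job-cancel', device='drive0', force=True)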




Re: [PATCH v2 02/12] patchew: move quick build job from CentOS 7 to CentOS 8 container

2021-05-14 Thread Willian Rampazzo
On Fri, May 14, 2021 at 9:04 AM Daniel P. Berrangé  wrote:
>
> It has been over two years since RHEL-8 was released, and thus per the
> platform build policy, we no longer need to support RHEL-7 as a build
> target.
>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  .patchew.yml | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>

Reviewed-by: Willian Rampazzo 



