Re: [Qemu-devel] [PATCH 3/3] pc: Don't make CPU properties mandatory unless necessary

2019-08-16 Thread Markus Armbruster
Eduardo Habkost  writes:

> On Fri, Aug 16, 2019 at 02:22:58PM +0200, Markus Armbruster wrote:
>> Erik Skultety  writes:
>> 
>> > On Fri, Aug 16, 2019 at 08:10:20AM +0200, Markus Armbruster wrote:
>> >> Eduardo Habkost  writes:
>> >>
>> >> > We have this issue reported when using libvirt to hotplug CPUs:
>> >> > https://bugzilla.redhat.com/show_bug.cgi?id=1741451
>> >> >
>> >> > Basically, libvirt is not copying die-id from
>> >> > query-hotpluggable-cpus, but die-id is now mandatory.
>> >>
>> >> Uh-oh, "is now mandatory": making an optional property mandatory is an
>> >> incompatible change.  When did we do that?  Commit hash, please.
>> >>
>> >> [...]
>> >>
>> >
>> > I don't even see it as being optional ever - the property wasn't even
>> > recognized before commit 176d2cda0de introduced it as mandatory.
>> 
>> Compatibility break.
>> 
>> Commit 176d2cda0de is in v4.1.0.  If I had learned about it a bit
>> earlier, I would've argued for a last minute fix or a revert.  Now we
>> have a regression in the release.
>> 
>> Eduardo, I think this fix should go into v4.1.1.  Please add cc:
>> qemu-stable.
>
> I did it in v2.
>
>> 
>> How can we best keep such compatibility breaks from slipping in undetected?
>> 
>> A static checker would be nice.  For vmstate, we have
>> scripts/vmstate-static-checker.py.  Not sure it's used.
>
> I don't think this specific bug would be detected with a static
> checker.  "die-id is mandatory" is not something that can be
> extracted by looking at QOM data structures.  The new rule was
> being enforced by the hotplug handler callbacks, and the hotplug
> handler call tree is a bit complex (too complex for my taste, but
> I digress).

QOM does too much in code.  Turing tarpit.

> We could have detected this with a simple CPU hotplug automated
> test case, though.  Or with a very simple -device test case like
> the one I have submitted with this patch.

The external QOM interface is huge.  Even if we had an army of
industrious gnomes writing simple test cases for all of it, we'd still
need a fleet of machines to actually run them, and at least a battalion
of gnomes to maintain them.

The extremely basic qom-test gobbles up a painful amount of CPU cycles
already:

$ time for i in `find bld/*-softmmu -maxdepth 1 -name qemu-system-\* -perm /u+x`; do
    QTEST_QEMU_BINARY=$i bld/tests/qom-test; done
/aarch64/qom/versatileab: OK
[260 lines of the form name/of/test: OK omitted...]
/xtensaeb/qom/lx60: OK

real    3m33.001s
user    2m18.081s
sys     1m31.809s

> This was detected by libvirt automated test cases.  It would be

Nice.

> nice if this was run during the -rc stage and not only after the
> 4.1.0 release, though.

We don't always get lucky.

> I don't know details of the test job.  Danilo, Mirek, Yash: do
> you know how this bug was detected, and what we could do to run
> the same test jobs in upstream QEMU release candidates?

Thinking about how to make the best use of the tests we have is in
order.
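
As a concrete illustration of the "very simple -device test case" idea above,
a rough libqtest-style sketch could look like the following. This is not the
test Eduardo actually submitted; the machine type, -smp topology, CPU driver
name and property values are only plausible examples.

/*
 * Rough sketch only: plug a CPU via device_add without passing die-id,
 * which is effectively what libvirt does.  The topology and CPU type
 * below are illustrative, not taken from the real test.
 */
#include "qemu/osdep.h"
#include "libqtest.h"
#include "qapi/qmp/qdict.h"

static void test_plug_cpu_without_die_id(void)
{
    QTestState *qts;
    QDict *resp;

    qts = qtest_init("-machine pc -smp 1,maxcpus=2,sockets=2,cores=1,threads=1");

    resp = qtest_qmp(qts,
                     "{ 'execute': 'device_add', 'arguments': {"
                     "  'driver': 'qemu64-x86_64-cpu', 'id': 'cpu1',"
                     "  'socket-id': 1, 'core-id': 0, 'thread-id': 0 } }");
    /* Before the fix, this failed because die-id had become mandatory. */
    g_assert(resp && !qdict_haskey(resp, "error"));
    qobject_unref(resp);

    qtest_quit(qts);
}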



Re: [Qemu-devel] [PATCH qemu] target/ppc: Add Directed Privileged Door-bell Exception State (DPDES) SPR

2019-08-16 Thread David Gibson
On Fri, Aug 16, 2019 at 04:17:33PM +1000, Alexey Kardashevskiy wrote:
> DPDES stores the status of a doorbell message, and if it is lost in
> migration, the destination CPU won't receive it. This does not hit us
> much, as IPIs complete too quickly to catch a pending one, and even if
> we missed one, broadcasts happen often enough to wake that CPU.
>
> This defines DPDES and registers it with KVM for migration.
> 
> Signed-off-by: Alexey Kardashevskiy 

Ouch, I'm kind of surprised this hasn't bitten us before.

Really we ought to wire this up to the emulated doorbell instructions
as well, but this certainly improves the behaviour, so I've merged it
to ppc-for-4.2.

> ---
>  target/ppc/cpu.h|  1 +
>  target/ppc/translate_init.inc.c | 14 ++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> index 64799386f9ab..f0521a435d2d 100644
> --- a/target/ppc/cpu.h
> +++ b/target/ppc/cpu.h
> @@ -1466,6 +1466,7 @@ typedef PowerPCCPU ArchCPU;
>  #define SPR_MPC_ICTRL (0x09E)
>  #define SPR_MPC_BAR   (0x09F)
>  #define SPR_PSPB  (0x09F)
> +#define SPR_DPDES (0x0B0)
>  #define SPR_DAWR  (0x0B4)
>  #define SPR_RPR   (0x0BA)
>  #define SPR_CIABR (0x0BB)
> diff --git a/target/ppc/translate_init.inc.c b/target/ppc/translate_init.inc.c
> index c9fcd87095f5..7e41ae145600 100644
> --- a/target/ppc/translate_init.inc.c
> +++ b/target/ppc/translate_init.inc.c
> @@ -8198,6 +8198,18 @@ static void gen_spr_power8_pspb(CPUPPCState *env)
>   KVM_REG_PPC_PSPB, 0);
>  }
>  
> +static void gen_spr_power8_dpdes(CPUPPCState *env)
> +{
> +#if !defined(CONFIG_USER_ONLY)
> +/* Directed Privileged Door-bell Exception State, used for IPI */
> +spr_register_kvm_hv(env, SPR_DPDES, "DPDES",
> +SPR_NOACCESS, SPR_NOACCESS,
> +&spr_read_generic, SPR_NOACCESS,
> +&spr_read_generic, &spr_write_generic,
> +KVM_REG_PPC_DPDES, 0x00000000);
> +#endif
> +}
> +
>  static void gen_spr_power8_ic(CPUPPCState *env)
>  {
>  #if !defined(CONFIG_USER_ONLY)
> @@ -8629,6 +8641,7 @@ static void init_proc_POWER8(CPUPPCState *env)
>  gen_spr_power8_pmu_user(env);
>  gen_spr_power8_tm(env);
>  gen_spr_power8_pspb(env);
> +gen_spr_power8_dpdes(env);
>  gen_spr_vtb(env);
>  gen_spr_power8_ic(env);
>  gen_spr_power8_book4(env);
> @@ -8817,6 +8830,7 @@ static void init_proc_POWER9(CPUPPCState *env)
>  gen_spr_power8_pmu_user(env);
>  gen_spr_power8_tm(env);
>  gen_spr_power8_pspb(env);
> +gen_spr_power8_dpdes(env);
>  gen_spr_vtb(env);
>  gen_spr_power8_ic(env);
>  gen_spr_power8_book4(env);

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson


signature.asc
Description: PGP signature


Re: [Qemu-devel] [PATCH] job: drop job_drain

2019-08-16 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20190816170457.522990-1-vsement...@virtuozzo.com/



Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

clang -iquote /tmp/qemu-test/build/tests -iquote tests -iquote 
/tmp/qemu-test/src/tcg -iquote /tmp/qemu-test/src/tcg/i386 
-I/tmp/qemu-test/src/linux-headers -I/tmp/qemu-test/build/linux-headers -iquote 
. -iquote /tmp/qemu-test/src -iquote /tmp/qemu-test/src/accel/tcg -iquote 
/tmp/qemu-test/src/include -I/usr/include/pixman-1  
-I/tmp/qemu-test/src/dtc/libfdt -Werror  -pthread -I/usr/include/glib-2.0 
-I/usr/lib64/glib-2.0/include  -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE 
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes 
-Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes 
-fno-strict-aliasing -fno-common -fwrapv -std=gnu99  -Wno-string-plus-int 
-Wno-typedef-redefinition -Wno-initializer-overrides -Wexpansion-to-defined 
-Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body 
-Wnested-externs -Wformat-security -Wformat-y2k -Winit-self 
-Wignored-qualifiers -Wold-style-definition -Wtype-limits 
-fstack-protector-strong  -I/usr/include/p11-kit-1 -I/usr/include/libpng16  
-I/usr/include/spice-1 -I/usr/include/spice-server -I/usr/include/cacard 
-I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/nss3 
-I/usr/include/nspr4 -pthread -I/usr/include/libmount -I/usr/include/blkid 
-I/usr/include/uuid -I/usr/include/pixman-1   -I/tmp/qemu-test/src/tests -MMD 
-MP -MT tests/test-shift128.o -MF tests/test-shift128.d -fsanitize=undefined 
-fsanitize=address -g   -c -o tests/test-shift128.o 
/tmp/qemu-test/src/tests/test-shift128.c
clang -iquote /tmp/qemu-test/build/tests -iquote tests -iquote 
/tmp/qemu-test/src/tcg -iquote /tmp/qemu-test/src/tcg/i386 
-I/tmp/qemu-test/src/linux-headers -I/tmp/qemu-test/build/linux-headers -iquote 
. -iquote /tmp/qemu-test/src -iquote /tmp/qemu-test/src/accel/tcg -iquote 
/tmp/qemu-test/src/include -I/usr/include/pixman-1  
-I/tmp/qemu-test/src/dtc/libfdt -Werror  -pthread -I/usr/include/glib-2.0 
-I/usr/lib64/glib-2.0/include  -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE 
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes 
-Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes 
-fno-strict-aliasing -fno-common -fwrapv -std=gnu99  -Wno-string-plus-int 
-Wno-typedef-redefinition -Wno-initializer-overrides -Wexpansion-to-defined 
-Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body 
-Wnested-externs -Wformat-security -Wformat-y2k -Winit-self 
-Wignored-qualifiers -Wold-style-definition -Wtype-limits 
-fstack-protector-strong  -I/usr/include/p11-kit-1 -I/usr/include/libpng16  
-I/usr/include/spice-1 -I/usr/include/spice-server -I/usr/include/cacard 
-I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/nss3 
-I/usr/include/nspr4 -pthread -I/usr/include/libmount -I/usr/include/blkid 
-I/usr/include/uuid -I/usr/include/pixman-1   -I/tmp/qemu-test/src/tests -MMD 
-MP -MT tests/test-mul64.o -MF tests/test-mul64.d -fsanitize=undefined 
-fsanitize=address -g   -c -o tests/test-mul64.o 
/tmp/qemu-test/src/tests/test-mul64.c
clang -iquote /tmp/qemu-test/build/tests -iquote tests -iquote 
/tmp/qemu-test/src/tcg -iquote /tmp/qemu-test/src/tcg/i386 
-I/tmp/qemu-test/src/linux-headers -I/tmp/qemu-test/build/linux-headers -iquote 
. -iquote /tmp/qemu-test/src -iquote /tmp/qemu-test/src/accel/tcg -iquote 
/tmp/qemu-test/src/include -I/usr/include/pixman-1  
-I/tmp/qemu-test/src/dtc/libfdt -Werror  -pthread -I/usr/include/glib-2.0 
-I/usr/lib64/glib-2.0/include  -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE 
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes 
-Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes 
-fno-strict-aliasing -fno-common -fwrapv -std=gnu99  -Wno-string-plus-int 
-Wno-typedef-redefinition -Wno-initializer-overrides -Wexpansion-to-defined 
-Wendif-labels -Wno-shift-negative-value -Wno-missing-include-dirs -Wempty-body 
-Wnested-externs -Wformat-security -Wformat-y2k -Winit-self 
-Wignored-qualifiers -Wold-style-definition -Wtype-limits 
-fstack-protector-strong  -I/usr/include/p11-kit-1 -I/usr/include/libpng16  
-I/usr/include/spice-1 -I/usr/include/spice-server -I/usr/include/cacard 
-I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include -I/usr/include/nss3 
-I/usr/include/nspr4 -pthread -I/usr/include/libmount -I/usr/include/blkid 
-I/usr/include/uuid -I/usr/include/pixman-1   -I/tmp/qemu-test/src/tests -MMD 
-MP -MT tests/test-int128.o -MF tests/test-int128.d -fsanitize=undefined 
-fsanitize=address -g   -c -o tests/test-int128.o 
/tmp/qemu-test/src/tests/test-int128.c

Re: [Qemu-devel] [PATCH] linux-user: Support gdb 'qOffsets' query for ELF

2019-08-16 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20190816233422.16715-1-...@google.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH] linux-user: Support gdb 'qOffsets' query for ELF
Message-id: 20190816233422.16715-1-...@google.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20190816233422.16715-1-...@google.com -> 
patchew/20190816233422.16715-1-...@google.com
Submodule 'capstone' (https://git.qemu.org/git/capstone.git) registered for 
path 'capstone'
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (https://git.qemu.org/git/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (https://git.qemu.org/git/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/edk2' (https://git.qemu.org/git/edk2.git) registered for path 
'roms/edk2'
Submodule 'roms/ipxe' (https://git.qemu.org/git/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (https://git.qemu.org/git/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (https://git.qemu.org/git/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/opensbi' (https://git.qemu.org/git/opensbi.git) registered for 
path 'roms/opensbi'
Submodule 'roms/qemu-palcode' (https://git.qemu.org/git/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (https://git.qemu.org/git/seabios.git/) registered for 
path 'roms/seabios'
Submodule 'roms/seabios-hppa' (https://git.qemu.org/git/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (https://git.qemu.org/git/sgabios.git) registered for 
path 'roms/sgabios'
Submodule 'roms/skiboot' (https://git.qemu.org/git/skiboot.git) registered for 
path 'roms/skiboot'
Submodule 'roms/u-boot' (https://git.qemu.org/git/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/u-boot-sam460ex' (https://git.qemu.org/git/u-boot-sam460ex.git) 
registered for path 'roms/u-boot-sam460ex'
Submodule 'slirp' (https://git.qemu.org/git/libslirp.git) registered for path 
'slirp'
Submodule 'tests/fp/berkeley-softfloat-3' 
(https://git.qemu.org/git/berkeley-softfloat-3.git) registered for path 
'tests/fp/berkeley-softfloat-3'
Submodule 'tests/fp/berkeley-testfloat-3' 
(https://git.qemu.org/git/berkeley-testfloat-3.git) registered for path 
'tests/fp/berkeley-testfloat-3'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'90c488d5f4a407342247b9ea869df1c2d9c8e266'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'ba1ab360eebe6338bb8d7d83a9220ccf7e213af3'
Cloning into 'roms/edk2'...
Submodule path 'roms/edk2': checked out 
'20d2e5a125e34fc8501026613a71549b2a1a3e54'
Submodule 'SoftFloat' (https://github.com/ucb-bar/berkeley-softfloat-3.git) 
registered for path 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
Submodule 'CryptoPkg/Library/OpensslLib/openssl' 
(https://github.com/openssl/openssl) registered for path 
'CryptoPkg/Library/OpensslLib/openssl'
Cloning into 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'...
Submodule path 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3': 
checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'CryptoPkg/Library/OpensslLib/openssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl': checked out 
'50eaac9f3337667259de725451f201e784599687'
Submodule 'boringssl' (https://boringssl.googlesource.com/boringssl) registered 
for path 'boringssl'
Submodule 'krb5' (https://github.com/krb5/krb5) registered for path 'krb5'
Submodule 'pyca.cryptography' (https://github.com/pyca/cryptography.git) 
registered for path 'pyca-cryptography'
Cloning into 'boringssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/boringssl': 
checked out '2070f8ad9151dc8f3a73bffaa146b5e6937a583f'
Cloning into 'krb5'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/krb5': checked 
out 'b9ad6c49505c96a088326b62a52568e3484f2168'
Cloning into 'pyca-cryptography'...
Submodule path 
'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/pyca-cryptography': checked out 
'09403100de2f6f1cdd0d484dcb8e620f1c335c8f'
Cloning into 'roms/ipxe'...
Submodule path 

Re: [Qemu-devel] [PATCH] linux-user: Support gdb 'qOffsets' query for ELF

2019-08-16 Thread Josh Kunz via Qemu-devel
+cc: riku.voi...@iki.fi, I typoed the email on the first go.

On Fri, Aug 16, 2019 at 4:34 PM Josh Kunz  wrote:

> This is needed to support debugging PIE ELF binaries running under QEMU
> user mode. Currently, `code_offset` and `data_offset` remain unset for
> all ELF binaries, so GDB is unable to correctly locate the position of
> the binary's text and data.
>
> The fields `code_offset` and `data_offset` were originally added way
> back in 2006 to support debugging of bFMT executables (978efd6aac6),
> and support was just never added for ELF. Since non-PIE binaries are
> loaded at exactly the address specified in the binary, GDB does not need
> to relocate any symbols, so the buggy behavior is not normally observed.
>
> Buglink: https://bugs.launchpad.net/qemu/+bug/1528239
> Signed-off-by: Josh Kunz 
> ---
>  linux-user/elfload.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
> index 3365e192eb..ceac035208 100644
> --- a/linux-user/elfload.c
> +++ b/linux-user/elfload.c
> @@ -2380,6 +2380,8 @@ static void load_elf_image(const char *image_name,
> int image_fd,
>  }
>
>  info->load_bias = load_bias;
> +info->code_offset = load_bias;
> +info->data_offset = load_bias;
>  info->load_addr = load_addr;
>  info->entry = ehdr->e_entry + load_bias;
>  info->start_code = -1;
> --
> 2.23.0.rc1.153.gdeed80330f-goog
>
>


Re: [Qemu-devel] [PATCH] Add support for ethtool via TARGET_SIOCETHTOOL ioctls.

2019-08-16 Thread mailer
Hi Shu-Chun Weng via Qemu-devel!

We received your email, but were unable to deliver it because it
contains content which has been blacklisted by the list admin. Please
remove your application/pkcs7-signature attachments and send again.

You are also advised to configure your email client to send emails in
plain text to avoid additional errors in the future:

https://useplaintext.email

If you have any questions, please reply to this email to reach the mail
admin. We apologise for the inconvenience.



[Qemu-devel] [PATCH] linux-user: Support gdb 'qOffsets' query for ELF

2019-08-16 Thread Josh Kunz via Qemu-devel
This is needed to support debugging PIE ELF binaries running under QEMU
user mode. Currently, `code_offset` and `data_offset` remain unset for
all ELF binaries, so GDB is unable to correctly locate the position of
the binary's text and data.

The fields `code_offset` and `data_offset` were originally added way
back in 2006 to support debugging of bFMT executables (978efd6aac6),
and support was just never added for ELF. Since non-PIE binaries are
loaded at exactly the address specified in the binary, GDB does not need
to relocate any symbols, so the buggy behavior is not normally observed.
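
Background, not part of the patch: the user-mode gdbstub answers GDB's
'qOffsets' query from exactly these image_info fields, using the remote
protocol's "Text=...;Data=...;Bss=..." reply form. A small self-contained
sketch of what that reply looks like before and after the change (the
load-bias value used here is purely illustrative):

/*
 * Stand-alone illustration, not QEMU code: format a qOffsets-style reply
 * from code_offset/data_offset.  In the unpatched case both offsets stay 0,
 * so GDB has nothing to relocate PIE symbols with.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct image_info_sketch {          /* stand-in for the relevant fields */
    uint64_t code_offset;
    uint64_t data_offset;
};

static void print_qoffsets_reply(const struct image_info_sketch *info)
{
    printf("Text=%" PRIx64 ";Data=%" PRIx64 ";Bss=%" PRIx64 "\n",
           info->code_offset, info->data_offset, info->data_offset);
}

int main(void)
{
    struct image_info_sketch unpatched = { 0, 0 };
    struct image_info_sketch patched = { 0x555555554000ULL,   /* example load_bias */
                                         0x555555554000ULL };

    print_qoffsets_reply(&unpatched);   /* Text=0;Data=0;Bss=0 */
    print_qoffsets_reply(&patched);
    return 0;
}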

Buglink: https://bugs.launchpad.net/qemu/+bug/1528239
Signed-off-by: Josh Kunz 
---
 linux-user/elfload.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 3365e192eb..ceac035208 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2380,6 +2380,8 @@ static void load_elf_image(const char *image_name, int 
image_fd,
 }
 
 info->load_bias = load_bias;
+info->code_offset = load_bias;
+info->data_offset = load_bias;
 info->load_addr = load_addr;
 info->entry = ehdr->e_entry + load_bias;
 info->start_code = -1;
-- 
2.23.0.rc1.153.gdeed80330f-goog




Re: [Qemu-devel] [PATCH v2] linux-user: Add support for SIOCETHTOOL ioctl

2019-08-16 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20190817000714.142802-1-...@google.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH v2] linux-user: Add support for SIOCETHTOOL ioctl
Message-id: 20190817000714.142802-1-...@google.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20190817000714.142802-1-...@google.com -> 
patchew/20190817000714.142802-1-...@google.com
Submodule 'capstone' (https://git.qemu.org/git/capstone.git) registered for 
path 'capstone'
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (https://git.qemu.org/git/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (https://git.qemu.org/git/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/edk2' (https://git.qemu.org/git/edk2.git) registered for path 
'roms/edk2'
Submodule 'roms/ipxe' (https://git.qemu.org/git/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (https://git.qemu.org/git/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (https://git.qemu.org/git/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/opensbi' (https://git.qemu.org/git/opensbi.git) registered for 
path 'roms/opensbi'
Submodule 'roms/qemu-palcode' (https://git.qemu.org/git/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (https://git.qemu.org/git/seabios.git/) registered for 
path 'roms/seabios'
Submodule 'roms/seabios-hppa' (https://git.qemu.org/git/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (https://git.qemu.org/git/sgabios.git) registered for 
path 'roms/sgabios'
Submodule 'roms/skiboot' (https://git.qemu.org/git/skiboot.git) registered for 
path 'roms/skiboot'
Submodule 'roms/u-boot' (https://git.qemu.org/git/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/u-boot-sam460ex' (https://git.qemu.org/git/u-boot-sam460ex.git) 
registered for path 'roms/u-boot-sam460ex'
Submodule 'slirp' (https://git.qemu.org/git/libslirp.git) registered for path 
'slirp'
Submodule 'tests/fp/berkeley-softfloat-3' 
(https://git.qemu.org/git/berkeley-softfloat-3.git) registered for path 
'tests/fp/berkeley-softfloat-3'
Submodule 'tests/fp/berkeley-testfloat-3' 
(https://git.qemu.org/git/berkeley-testfloat-3.git) registered for path 
'tests/fp/berkeley-testfloat-3'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'90c488d5f4a407342247b9ea869df1c2d9c8e266'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'ba1ab360eebe6338bb8d7d83a9220ccf7e213af3'
Cloning into 'roms/edk2'...
Submodule path 'roms/edk2': checked out 
'20d2e5a125e34fc8501026613a71549b2a1a3e54'
Submodule 'SoftFloat' (https://github.com/ucb-bar/berkeley-softfloat-3.git) 
registered for path 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
Submodule 'CryptoPkg/Library/OpensslLib/openssl' 
(https://github.com/openssl/openssl) registered for path 
'CryptoPkg/Library/OpensslLib/openssl'
Cloning into 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'...
Submodule path 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3': 
checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'CryptoPkg/Library/OpensslLib/openssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl': checked out 
'50eaac9f3337667259de725451f201e784599687'
Submodule 'boringssl' (https://boringssl.googlesource.com/boringssl) registered 
for path 'boringssl'
Submodule 'krb5' (https://github.com/krb5/krb5) registered for path 'krb5'
Submodule 'pyca.cryptography' (https://github.com/pyca/cryptography.git) 
registered for path 'pyca-cryptography'
Cloning into 'boringssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/boringssl': 
checked out '2070f8ad9151dc8f3a73bffaa146b5e6937a583f'
Cloning into 'krb5'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/krb5': checked 
out 'b9ad6c49505c96a088326b62a52568e3484f2168'
Cloning into 'pyca-cryptography'...
Submodule path 
'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/pyca-cryptography': checked out 
'09403100de2f6f1cdd0d484dcb8e620f1c335c8f'
Cloning into 'roms/ipxe'...
Submodule path 

Re: [Qemu-devel] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF

2019-08-16 Thread Yao, Jiewen


> -Original Message-
> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Saturday, August 17, 2019 6:20 AM
> To: Laszlo Ersek 
> Cc: Yao, Jiewen ; Paolo Bonzini
> ; de...@edk2.groups.io; edk2-rfc-groups-io
> ; qemu devel list ; Igor
> Mammedov ; Chen, Yingwen
> ; Nakajima, Jun ; Boris
> Ostrovsky ; Joao Marcal Lemos Martins
> ; Phillip Goerl 
> Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> 
> On Fri, 16 Aug 2019 22:15:15 +0200
> Laszlo Ersek  wrote:
> 
> > +Alex (direct question at the bottom)
> >
> > On 08/16/19 09:49, Yao, Jiewen wrote:
> > > below
> > >
> > >> -Original Message-
> > >> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> > >> Sent: Friday, August 16, 2019 3:20 PM
> > >> To: Yao, Jiewen ; Laszlo Ersek
> > >> ; de...@edk2.groups.io
> > >> Cc: edk2-rfc-groups-io ; qemu devel list
> > >> ; Igor Mammedov
> ;
> > >> Chen, Yingwen ; Nakajima, Jun
> > >> ; Boris Ostrovsky
> ;
> > >> Joao Marcal Lemos Martins ; Phillip
> Goerl
> > >> 
> > >> Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> > >>
> > >> On 16/08/19 04:46, Yao, Jiewen wrote:
> > >>> Comment below:
> > >>>
> > >>>
> >  -Original Message-
> >  From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> >  Sent: Friday, August 16, 2019 12:21 AM
> >  To: Laszlo Ersek ; de...@edk2.groups.io; Yao,
> > >> Jiewen
> >  
> >  Cc: edk2-rfc-groups-io ; qemu devel list
> >  ; Igor Mammedov
> > >> ;
> >  Chen, Yingwen ; Nakajima, Jun
> >  ; Boris Ostrovsky
> > >> ;
> >  Joao Marcal Lemos Martins ; Phillip
> Goerl
> >  
> >  Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> > 
> >  On 15/08/19 17:00, Laszlo Ersek wrote:
> > > On 08/14/19 16:04, Paolo Bonzini wrote:
> > >> On 14/08/19 15:20, Yao, Jiewen wrote:
> >  - Does this part require a new branch somewhere in the OVMF SEC code?
> >    How do we determine whether the CPU executing SEC is BSP or
> >    hot-plugged AP?
> > >>> [Jiewen] I think this is blocked from hardware perspective, since the
> > >>> first instruction.
> > >>> There are some hardware specific registers can be used to determine
> > >>> if the CPU is new added.
> > >>> I don’t think this must be same as the real hardware.
> > >>> You are free to invent some registers in device model to be used in
> > >>> OVMF hot plug driver.
> > >>
> > >> Yes, this would be a new operation mode for QEMU, that only applies to
> > >> hot-plugged CPUs.  In this mode the AP doesn't reply to INIT or SMI, in
> > >> fact it doesn't reply to anything at all.
> > >>
> >  - How do we tell the hot-plugged AP where to start execution? (I.e. that
> >    it should execute code at a particular pflash location.)
> > >>> [Jiewen] Same real mode reset vector at :FFF0.
> > >>
> > >> You do not need a reset vector or INIT/SIPI/SIPI sequence at all in
> > >> QEMU.  The AP does not start execution at all when it is unplugged, so
> > >> no cache-as-RAM etc.
> > >>
> > >> We only need to modify QEMU so that hot-plugged APIs do not reply to
> > >> INIT/SIPI/SMI.
> > >>
> > >>> I don’t think there is problem for real hardware, who always has CAR.
> > >>> Can QEMU provide some CPU specific space, such as MMIO region?
> > >>
> > >> Why is a CPU-specific region needed if every other processor is in SMM
> > >> and thus trusted.
> > >
> > > I was going through the steps Jiewen and Yingwen recommended.
> > >
> > > In step (02), the new CPU is expected to set up RAM access. In step
> > > (03), the new CPU, executing code from flash, is expected to "send board
> > > message to tell host CPU (GPIO->SCI) -- I am waiting for hot-add
> > > message." For that action, the new CPU may need a stack (minimally if we
> > > want to use C function calls).
> > >
> > > Until step (03), there had been no word about any other (= pre-plugged)
> > > CPUs (more precisely, Jiewen even confirmed "No impact to other
> > > processors"), so I didn't assume that other CPUs had entered SMM.
> > >
> > > Paolo, I've attempted to read Jiewen's response, and yours, as carefully
> > > as I can. I'm still very confused. If you have a better understanding,
> > > could you please write up the 15-step process from the thread starter
> > > again, with all QEMU customizations applied? Such as, unnecessary steps
> > > removed, and platform specifics filled in.
> >
> >  Sure.
> >
> >  (01a) QEMU: create new CPU.  The CPU already exists, but it does not
> >   start running code until unparked by the CPU hotplug controller.
> >
> >  (01b) QEMU: trigger SCI
> >
> >  (02-03) no equivalent
> >

[Qemu-devel] [PATCH v2] linux-user: Add support for SIOCETHTOOL ioctl

2019-08-16 Thread Shu-Chun Weng via Qemu-devel
The ioctl numeric values are platform-independent and determined by
the file include/uapi/linux/sockios.h in the Linux kernel source code:

  #define SIOCETHTOOL   0x8946

These ioctls get (or set) the field ifr_data of type char* in the
structure ifreq. Such functionality is achieved in QEMU by using
MK_STRUCT() and MK_PTR() macros with an appropriate argument, as
it was done for existing similar cases.
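
Background, not part of the patch: a guest program typically reaches this
path with code along the following lines, passing an ethtool command block
through ifr_data. It is this char * indirection that the ifreq conversion
on the QEMU side has to accommodate. The interface name is just an example.

/*
 * Minimal userspace sketch of an SIOCETHTOOL call (plain Linux API,
 * independent of QEMU).  "eth0" is an arbitrary example interface.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_drvinfo drvinfo = { .cmd = ETHTOOL_GDRVINFO };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&drvinfo;        /* the pointer QEMU must convert */

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0) {
        printf("driver: %s, version: %s\n", drvinfo.driver, drvinfo.version);
    } else {
        perror("SIOCETHTOOL");
    }
    close(fd);
    return 0;
}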

Signed-off-by: Shu-Chun Weng 
---
 linux-user/ioctls.h   | 1 +
 linux-user/syscall_defs.h | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/linux-user/ioctls.h b/linux-user/ioctls.h
index 3281c97ca2..9d231df665 100644
--- a/linux-user/ioctls.h
+++ b/linux-user/ioctls.h
@@ -208,6 +208,7 @@
   IOCTL(SIOCGIFINDEX, IOC_W | IOC_R, MK_PTR(MK_STRUCT(STRUCT_int_ifreq)))
   IOCTL(SIOCSIFPFLAGS, IOC_W, MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
   IOCTL(SIOCGIFPFLAGS, IOC_W | IOC_R, MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
+  IOCTL(SIOCETHTOOL, IOC_R | IOC_W, MK_PTR(MK_STRUCT(STRUCT_ptr_ifreq)))
   IOCTL(SIOCSIFLINK, 0, TYPE_NULL)
   IOCTL_SPECIAL(SIOCGIFCONF, IOC_W | IOC_R, do_ioctl_ifconf,
 MK_PTR(MK_STRUCT(STRUCT_ifconf)))
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index 0662270300..276f96039f 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -819,6 +819,8 @@ struct target_pollfd {
 #define TARGET_SIOCGIFTXQLEN   0x8942  /* Get the tx queue length  
*/
 #define TARGET_SIOCSIFTXQLEN   0x8943  /* Set the tx queue length  
*/
 
+#define TARGET_SIOCETHTOOL 0x8946  /* Ethtool interface
*/
+
 /* ARP cache control calls. */
 #define TARGET_OLD_SIOCDARP0x8950  /* old delete ARP table entry   
*/
 #define TARGET_OLD_SIOCGARP0x8951  /* old get ARP table entry  
*/
-- 
2.23.0.rc1.153.gdeed80330f-goog




Re: [Qemu-devel] [PATCH] Add support for ethtool via TARGET_SIOCETHTOOL ioctls.

2019-08-16 Thread Shu-Chun Weng via Qemu-devel
Thank you Aleksandar,

I've updated the patch description and will send out v2 soon.

As for the line length: the lines in syscall_defs.h are 81 characters long,
with a fixed-width comment at the end. I'm not sure that making the one line
I add 80 characters wide is the right choice.

Shu-Chun

On Fri, Aug 16, 2019 at 3:37 PM Aleksandar Markovic <
aleksandar.m.m...@gmail.com> wrote:

>
> 16.08.2019. 23.28, "Shu-Chun Weng via Qemu-devel" 
> је написао/ла:
> >
> > The ioctl numeric values are platform-independent and determined by
> > the file include/uapi/linux/sockios.h in Linux kernel source code:
> >
> >   #define SIOCETHTOOL   0x8946
> >
> > These ioctls get (or set) the field ifr_data of type char* in the
> > structure ifreq. Such functionality is achieved in QEMU by using
> > MK_STRUCT() and MK_PTR() macros with an appropriate argument, as
> > it was done for existing similar cases.
> >
> > Signed-off-by: Shu-Chun Weng 
> > ---
>
> Shu-Chun, hi, and welcome!
>
> Just a couple of cosmetic things:
>
>   - by convention, the title of this patch should start with
> "linux-user:", since this patch affects the linux-user QEMU module;
>
>   - the patch title is too long (and has some minor mistakes) -
> "linux-user: Add support for SIOCETHTOOL ioctl" should be good enough;
>
>   - the length of the code lines that you add or modify must not be
> greater than 80.
>
> Sincerely,
> Aleksandar
>
> >  linux-user/ioctls.h   | 1 +
> >  linux-user/syscall_defs.h | 2 ++
> >  2 files changed, 3 insertions(+)
> >
> > diff --git a/linux-user/ioctls.h b/linux-user/ioctls.h
> > index 3281c97ca2..9d231df665 100644
> > --- a/linux-user/ioctls.h
> > +++ b/linux-user/ioctls.h
> > @@ -208,6 +208,7 @@
> >IOCTL(SIOCGIFINDEX, IOC_W | IOC_R,
> MK_PTR(MK_STRUCT(STRUCT_int_ifreq)))
> >IOCTL(SIOCSIFPFLAGS, IOC_W, MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
> >IOCTL(SIOCGIFPFLAGS, IOC_W | IOC_R,
> MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
> > +  IOCTL(SIOCETHTOOL, IOC_R | IOC_W, MK_PTR(MK_STRUCT(STRUCT_ptr_ifreq)))
> >IOCTL(SIOCSIFLINK, 0, TYPE_NULL)
> >IOCTL_SPECIAL(SIOCGIFCONF, IOC_W | IOC_R, do_ioctl_ifconf,
> >  MK_PTR(MK_STRUCT(STRUCT_ifconf)))
> > diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
> > index 0662270300..276f96039f 100644
> > --- a/linux-user/syscall_defs.h
> > +++ b/linux-user/syscall_defs.h
> > @@ -819,6 +819,8 @@ struct target_pollfd {
> >  #define TARGET_SIOCGIFTXQLEN   0x8942  /* Get the tx queue
> length  */
> >  #define TARGET_SIOCSIFTXQLEN   0x8943  /* Set the tx queue
> length  */
> >
> > +#define TARGET_SIOCETHTOOL 0x8946  /* Ethtool interface
> */
> > +
> >  /* ARP cache control calls. */
> >  #define TARGET_OLD_SIOCDARP0x8950  /* old delete ARP table
> entry   */
> >  #define TARGET_OLD_SIOCGARP0x8951  /* old get ARP table
> entry  */
> > --
> > 2.23.0.rc1.153.gdeed80330f-goog
> >
> >
>


smime.p7s
Description: S/MIME Cryptographic Signature


[Qemu-devel] [PULL 3/3] hw/ide/atapi: Use the ldst API

2019-08-16 Thread John Snow
From: Philippe Mathieu-Daudé 

The big-endian load/store functions are already provided
by "qemu/bswap.h".
Avoid code duplication and use the generic API.
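
For reference, the generic helpers have the same byte-level behaviour as the
open-coded ones removed below. A stand-alone sketch of the 16-bit pair, using
stand-in definitions rather than the real "qemu/bswap.h" implementations:

/*
 * Stand-in definitions mirroring the semantics of stw_be_p()/lduw_be_p();
 * illustrative only, not the QEMU implementation.
 */
#include <stdint.h>
#include <stdio.h>

static void stw_be_p(void *buf, uint16_t val)
{
    uint8_t *p = buf;
    p[0] = val >> 8;            /* same effect as the removed cpu_to_ube16() */
    p[1] = val & 0xff;
}

static uint16_t lduw_be_p(const void *buf)
{
    const uint8_t *p = buf;
    return (p[0] << 8) | p[1];  /* same effect as the removed ube16_to_cpu() */
}

int main(void)
{
    uint8_t buf[2];

    stw_be_p(buf, 0x1234);
    printf("%02x %02x -> %04x\n", buf[0], buf[1], lduw_be_p(buf));
    return 0;
}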

Signed-off-by: Philippe Mathieu-Daudé 
Message-id: 20190808130454.9930-1-phi...@redhat.com
Signed-off-by: John Snow 
---
 hw/ide/atapi.c | 80 ++
 1 file changed, 28 insertions(+), 52 deletions(-)

diff --git a/hw/ide/atapi.c b/hw/ide/atapi.c
index 1b0f66cc089..17a9d635d84 100644
--- a/hw/ide/atapi.c
+++ b/hw/ide/atapi.c
@@ -45,30 +45,6 @@ static void padstr8(uint8_t *buf, int buf_size, const char 
*src)
 }
 }
 
-static inline void cpu_to_ube16(uint8_t *buf, int val)
-{
-buf[0] = val >> 8;
-buf[1] = val & 0xff;
-}
-
-static inline void cpu_to_ube32(uint8_t *buf, unsigned int val)
-{
-buf[0] = val >> 24;
-buf[1] = val >> 16;
-buf[2] = val >> 8;
-buf[3] = val & 0xff;
-}
-
-static inline int ube16_to_cpu(const uint8_t *buf)
-{
-return (buf[0] << 8) | buf[1];
-}
-
-static inline int ube32_to_cpu(const uint8_t *buf)
-{
-return (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3];
-}
-
 static void lba_to_msf(uint8_t *buf, int lba)
 {
 lba += 150;
@@ -485,7 +461,7 @@ static inline uint8_t ide_atapi_set_profile(uint8_t *buf, 
uint8_t *index,
 uint8_t *buf_profile = buf + 12; /* start of profiles */
 
 buf_profile += ((*index) * 4); /* start of indexed profile */
-cpu_to_ube16 (buf_profile, profile);
+stw_be_p(buf_profile, profile);
 buf_profile[2] = ((buf_profile[0] == buf[6]) && (buf_profile[1] == 
buf[7]));
 
 /* each profile adds 4 bytes to the response */
@@ -518,9 +494,9 @@ static int ide_dvd_read_structure(IDEState *s, int format,
 buf[7] = 0;   /* default densities */
 
 /* FIXME: 0x3 per spec? */
-cpu_to_ube32(buf + 8, 0); /* start sector */
-cpu_to_ube32(buf + 12, total_sectors - 1); /* end sector */
-cpu_to_ube32(buf + 16, total_sectors - 1); /* l0 end sector */
+stl_be_p(buf + 8, 0); /* start sector */
+stl_be_p(buf + 12, total_sectors - 1); /* end sector */
+stl_be_p(buf + 16, total_sectors - 1); /* l0 end sector */
 
 /* Size of buffer, not including 2 byte size field */
 stw_be_p(buf, 2048 + 2);
@@ -839,7 +815,7 @@ static void cmd_get_configuration(IDEState *s, uint8_t *buf)
 }
 
 /* XXX: could result in alignment problems in some architectures */
-max_len = ube16_to_cpu(buf + 7);
+max_len = lduw_be_p(buf + 7);
 
 /*
  * XXX: avoid overflow for io_buffer if max_len is bigger than
@@ -859,16 +835,16 @@ static void cmd_get_configuration(IDEState *s, uint8_t 
*buf)
  * to use as current.  0 means there is no media
  */
 if (media_is_dvd(s)) {
-cpu_to_ube16(buf + 6, MMC_PROFILE_DVD_ROM);
+stw_be_p(buf + 6, MMC_PROFILE_DVD_ROM);
 } else if (media_is_cd(s)) {
-cpu_to_ube16(buf + 6, MMC_PROFILE_CD_ROM);
+stw_be_p(buf + 6, MMC_PROFILE_CD_ROM);
 }
 
 buf[10] = 0x02 | 0x01; /* persistent and current */
 len = 12; /* headers: 8 + 4 */
len += ide_atapi_set_profile(buf, &index, MMC_PROFILE_DVD_ROM);
len += ide_atapi_set_profile(buf, &index, MMC_PROFILE_CD_ROM);
-cpu_to_ube32(buf, len - 4); /* data length */
+stl_be_p(buf, len - 4); /* data length */
 
 ide_atapi_cmd_reply(s, len, max_len);
 }
@@ -878,7 +854,7 @@ static void cmd_mode_sense(IDEState *s, uint8_t *buf)
 int action, code;
 int max_len;
 
-max_len = ube16_to_cpu(buf + 7);
+max_len = lduw_be_p(buf + 7);
 action = buf[2] >> 6;
 code = buf[2] & 0x3f;
 
@@ -886,7 +862,7 @@ static void cmd_mode_sense(IDEState *s, uint8_t *buf)
 case 0: /* current values */
 switch(code) {
 case MODE_PAGE_R_W_ERROR: /* error recovery */
-cpu_to_ube16(&buf[0], 16 - 2);
+stw_be_p(&buf[0], 16 - 2);
 buf[2] = 0x70;
 buf[3] = 0;
 buf[4] = 0;
@@ -905,7 +881,7 @@ static void cmd_mode_sense(IDEState *s, uint8_t *buf)
 ide_atapi_cmd_reply(s, 16, max_len);
 break;
 case MODE_PAGE_AUDIO_CTL:
-cpu_to_ube16(&buf[0], 24 - 2);
+stw_be_p(&buf[0], 24 - 2);
 buf[2] = 0x70;
 buf[3] = 0;
 buf[4] = 0;
@@ -924,7 +900,7 @@ static void cmd_mode_sense(IDEState *s, uint8_t *buf)
 ide_atapi_cmd_reply(s, 24, max_len);
 break;
 case MODE_PAGE_CAPABILITIES:
-cpu_to_ube16(&buf[0], 30 - 2);
+stw_be_p(&buf[0], 30 - 2);
 buf[2] = 0x70;
 buf[3] = 0;
 buf[4] = 0;
@@ -946,11 +922,11 @@ static void cmd_mode_sense(IDEState *s, uint8_t *buf)
 buf[14] |= 1 << 1;
 }
 buf[15] = 0x00; /* No volume & mute control, no changer */
-cpu_to_ube16(&buf[16], 704); /* 4x read speed */
+

[Qemu-devel] [PULL 0/3] Ide patches

2019-08-16 Thread John Snow
The following changes since commit afd760539308a5524accf964107cdb1d54a059e3:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190816' 
into staging (2019-08-16 17:21:40 +0100)

are available in the Git repository at:

  https://github.com/jnsnow/qemu.git tags/ide-pull-request

for you to fetch changes up to 614ab7d127536655ef105d4153ea264c88e855c1:

  hw/ide/atapi: Use the ldst API (2019-08-16 19:14:04 -0400)


Pull request

Stable notes: patches one and two can be considered
  for the next -stable release.



John Snow (1):
  Revert "ide/ahci: Check for -ECANCELED in aio callbacks"

Paolo Bonzini (1):
  dma-helpers: ensure AIO callback is invoked after cancellation

Philippe Mathieu-Daudé (1):
  hw/ide/atapi: Use the ldst API

 dma-helpers.c  | 13 +---
 hw/ide/ahci.c  |  3 --
 hw/ide/atapi.c | 80 ++
 hw/ide/core.c  | 14 -
 4 files changed, 37 insertions(+), 73 deletions(-)

-- 
2.21.0




[Qemu-devel] [PULL 32/36] iotests/257: test traditional sync modes

2019-08-16 Thread John Snow
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-12-js...@redhat.com
[Edit 'Bitmap' --> 'bitmap' in 257.out --js]
Signed-off-by: John Snow 
---
 tests/qemu-iotests/257 |   41 +-
 tests/qemu-iotests/257.out | 3089 
 2 files changed, 3128 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index 53ab31c92e1..c2a72c577aa 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -283,6 +283,12 @@ def test_bitmap_sync(bsync_mode, msync_mode='bitmap', 
failure=None):
   Bitmaps are always synchronized, regardless of failure.
   (Partial images must be kept.)
 
+:param msync_mode: The mirror sync mode to use for the first backup.
+   Can be any one of:
+- bitmap: Backups based on bitmap manifest.
+- full:   Full backups.
+- top:Full backups of the top layer only.
+
 :param failure: Is the (optional) failure mode, and can be any of:
 - None: No failure. Test the normative path. Default.
 - simulated:Cancel the job right before it completes.
@@ -393,7 +399,7 @@ def test_bitmap_sync(bsync_mode, msync_mode='bitmap', 
failure=None):
 # group 1 gets cleared first, then group two gets written.
 if ((bsync_mode == 'on-success' and not failure) or
 (bsync_mode == 'always')):
-ebitmap.clear_group(1)
+ebitmap.clear()
 ebitmap.dirty_group(2)
 
 vm.run_job(job, auto_dismiss=True, auto_finalize=False,
@@ -404,8 +410,19 @@ def test_bitmap_sync(bsync_mode, msync_mode='bitmap', 
failure=None):
 log('')
 
 if bsync_mode == 'always' and failure == 'intermediate':
+# TOP treats anything allocated as dirty, expect to see:
+if msync_mode == 'top':
+ebitmap.dirty_group(0)
+
 # We manage to copy one sector (one bit) before the error.
 ebitmap.clear_bit(ebitmap.first_bit)
+
+# Full returns all bits set except what was copied/skipped
+if msync_mode == 'full':
+fail_bit = ebitmap.first_bit
+ebitmap.clear()
+ebitmap.dirty_bits(range(fail_bit, SIZE // GRANULARITY))
+
 ebitmap.compare(get_bitmap(bitmaps, drive0.device, 'bitmap0'))
 
 # 2 - Writes and Reference Backup
@@ -499,10 +516,25 @@ def test_backup_api():
 'bitmap404': ['on-success', 'always', 'never', None],
 'bitmap0':   [None],
 },
+'full': {
+None:['on-success', 'always', 'never'],
+'bitmap404': ['on-success', 'always', 'never', None],
+'bitmap0':   ['never', None],
+},
+'top': {
+None:['on-success', 'always', 'never'],
+'bitmap404': ['on-success', 'always', 'never', None],
+'bitmap0':   ['never', None],
+},
+'none': {
+None:['on-success', 'always', 'never'],
+'bitmap404': ['on-success', 'always', 'never', None],
+'bitmap0':   ['on-success', 'always', 'never', None],
+}
 }
 
 # Dicts, as always, are not stably-ordered prior to 3.7, so use tuples:
-for sync_mode in ('incremental', 'bitmap'):
+for sync_mode in ('incremental', 'bitmap', 'full', 'top', 'none'):
 log("-- Sync mode {:s} tests --\n".format(sync_mode))
 for bitmap in (None, 'bitmap404', 'bitmap0'):
 for policy in error_cases[sync_mode][bitmap]:
@@ -517,6 +549,11 @@ def main():
 for failure in ("simulated", "intermediate", None):
 test_bitmap_sync(bsync_mode, "bitmap", failure)
 
+for sync_mode in ('full', 'top'):
+for bsync_mode in ('on-success', 'always'):
+for failure in ('simulated', 'intermediate', None):
+test_bitmap_sync(bsync_mode, sync_mode, failure)
+
 test_backup_api()
 
 if __name__ == '__main__':
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index 811b1b11f19..84b79d7bfe9 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -2246,6 +2246,3002 @@ qemu_img compare "TEST_DIR/PID-bsync2" 
"TEST_DIR/PID-fbackup2" ==> Identical, OK
 qemu_img compare "TEST_DIR/PID-img" "TEST_DIR/PID-fbackup2" ==> Identical, OK!
 
 
+=== Mode full; Bitmap Sync on-success with simulated failure ===
+
+--- Preparing image & VM ---
+
+{"execute": "blockdev-add", "arguments": {"driver": "qcow2", "file": 
{"driver": "file", "filename": "TEST_DIR/PID-img"}, "node-name": "drive0"}}
+{"return": {}}
+{"execute": "device_add", "arguments": {"drive": "drive0", "driver": 
"scsi-hd", "id": "device0", "share-rw": true}}
+{"return": {}}
+
+--- Write #0 ---
+
+write -P0x49 0x000 0x1

[Qemu-devel] [PULL 17/36] iotests: add test 257 for bitmap-mode backups

2019-08-16 Thread John Snow
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-18-js...@redhat.com
[Removed 'auto' group, as per new testing config guidelines --js]
Signed-off-by: John Snow 
---
 tests/qemu-iotests/257 |  416 +++
 tests/qemu-iotests/257.out | 2247 
 tests/qemu-iotests/group   |1 +
 3 files changed, 2664 insertions(+)
 create mode 100755 tests/qemu-iotests/257
 create mode 100644 tests/qemu-iotests/257.out

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
new file mode 100755
index 000..39526837499
--- /dev/null
+++ b/tests/qemu-iotests/257
@@ -0,0 +1,416 @@
+#!/usr/bin/env python
+#
+# Test bitmap-sync backups (incremental, differential, and partials)
+#
+# Copyright (c) 2019 John Snow for Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see .
+#
+# owner=js...@redhat.com
+
+from collections import namedtuple
+import math
+import os
+
+import iotests
+from iotests import log, qemu_img
+
+SIZE = 64 * 1024 * 1024
+GRANULARITY = 64 * 1024
+
+Pattern = namedtuple('Pattern', ['byte', 'offset', 'size'])
+def mkpattern(byte, offset, size=GRANULARITY):
+"""Constructor for Pattern() with default size"""
+return Pattern(byte, offset, size)
+
+class PatternGroup:
+"""Grouping of Pattern objects. Initialize with an iterable of Patterns."""
+def __init__(self, patterns):
+self.patterns = patterns
+
+def bits(self, granularity):
+"""Calculate the unique bits dirtied by this pattern grouping"""
+res = set()
+for pattern in self.patterns:
+lower = pattern.offset // granularity
+upper = (pattern.offset + pattern.size - 1) // granularity
+res = res | set(range(lower, upper + 1))
+return res
+
+GROUPS = [
+PatternGroup([
+# Batch 0: 4 clusters
+mkpattern('0x49', 0x000),
+mkpattern('0x6c', 0x010),   # 1M
+mkpattern('0x6f', 0x200),   # 32M
+mkpattern('0x76', 0x3ff)]), # 64M - 64K
+PatternGroup([
+# Batch 1: 6 clusters (3 new)
+mkpattern('0x65', 0x000),   # Full overwrite
+mkpattern('0x77', 0x00f8000),   # Partial-left (1M-32K)
+mkpattern('0x72', 0x2008000),   # Partial-right (32M+32K)
+mkpattern('0x69', 0x3fe)]), # Adjacent-left (64M - 128K)
+PatternGroup([
+# Batch 2: 7 clusters (3 new)
+mkpattern('0x74', 0x001),   # Adjacent-right
+mkpattern('0x69', 0x00e8000),   # Partial-left  (1M-96K)
+mkpattern('0x6e', 0x2018000),   # Partial-right (32M+96K)
+mkpattern('0x67', 0x3fe,
+  2*GRANULARITY)]), # Overwrite [(64M-128K)-64M)
+PatternGroup([
+# Batch 3: 8 clusters (5 new)
+# Carefully chosen such that nothing re-dirties the one cluster
+# that copies out successfully before failure in Group #1.
+mkpattern('0xaa', 0x001,
+  3*GRANULARITY),   # Overwrite and 2x Adjacent-right
+mkpattern('0xbb', 0x00d8000),   # Partial-left (1M-160K)
+mkpattern('0xcc', 0x2028000),   # Partial-right (32M+160K)
+mkpattern('0xdd', 0x3fc)]), # New; leaving a gap to the right
+]
+
+class Drive:
+"""Represents, vaguely, a drive attached to a VM.
+Includes format, graph, and device information."""
+
+def __init__(self, path, vm=None):
+self.path = path
+self.vm = vm
+self.fmt = None
+self.size = None
+self.node = None
+self.device = None
+
+@property
+def name(self):
+return self.node or self.device
+
+def img_create(self, fmt, size):
+self.fmt = fmt
+self.size = size
+iotests.qemu_img_create('-f', self.fmt, self.path, str(self.size))
+
+def create_target(self, name, fmt, size):
+basename = os.path.basename(self.path)
+file_node_name = "file_{}".format(basename)
+vm = self.vm
+
+log(vm.command('blockdev-create', job_id='bdc-file-job',
+   options={
+   'driver': 'file',
+   'filename': self.path,
+   'size': 0,
+   }))
+vm.run_job('bdc-file-job')
+log(vm.command('blockdev-add', driver='file',
+   node_name=file_node_name, 

[Qemu-devel] [PULL 2/3] Revert "ide/ahci: Check for -ECANCELED in aio callbacks"

2019-08-16 Thread John Snow
This reverts commit 0d910cfeaf2076b116b4517166d5deb0fea76394.

It's not correct to just ignore an error code in a callback; we need to
handle that error and possibly report failure to the guest so that it
doesn't wait indefinitely for an operation that will now never finish.

This ought to help cases reported by Nutanix where iSCSI returns a
legitimate -ECANCELED for certain operations which should be propagated
normally.

Reported-by: Shaju Abraham 
Signed-off-by: John Snow 
Message-id: 20190729223605.7163-1-js...@redhat.com
Signed-off-by: John Snow 
---
 hw/ide/ahci.c |  3 ---
 hw/ide/core.c | 14 --
 2 files changed, 17 deletions(-)

diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index d72da85605a..d45393c019d 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -1025,9 +1025,6 @@ static void ncq_cb(void *opaque, int ret)
IDEState *ide_state = &ncq_tfs->drive->port.ifs[0];
 
 ncq_tfs->aiocb = NULL;
-if (ret == -ECANCELED) {
-return;
-}
 
 if (ret < 0) {
 bool is_read = ncq_tfs->cmd == READ_FPDMA_QUEUED;
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 38b6cdac87b..e6e54c6c9a2 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -723,9 +723,6 @@ static void ide_sector_read_cb(void *opaque, int ret)
 s->pio_aiocb = NULL;
 s->status &= ~BUSY_STAT;
 
-if (ret == -ECANCELED) {
-return;
-}
 if (ret != 0) {
 if (ide_handle_rw_error(s, -ret, IDE_RETRY_PIO |
 IDE_RETRY_READ)) {
@@ -841,10 +838,6 @@ static void ide_dma_cb(void *opaque, int ret)
 uint64_t offset;
 bool stay_active = false;
 
-if (ret == -ECANCELED) {
-return;
-}
-
 if (ret == -EINVAL) {
 ide_dma_error(s);
 return;
@@ -976,10 +969,6 @@ static void ide_sector_write_cb(void *opaque, int ret)
 IDEState *s = opaque;
 int n;
 
-if (ret == -ECANCELED) {
-return;
-}
-
 s->pio_aiocb = NULL;
 s->status &= ~BUSY_STAT;
 
@@ -1059,9 +1048,6 @@ static void ide_flush_cb(void *opaque, int ret)
 
 s->pio_aiocb = NULL;
 
-if (ret == -ECANCELED) {
-return;
-}
 if (ret < 0) {
 /* XXX: What sector number to set here? */
 if (ide_handle_rw_error(s, -ret, IDE_RETRY_FLUSH)) {
-- 
2.21.0




[Qemu-devel] [PULL 24/36] iotests/257: Refactor backup helpers

2019-08-16 Thread John Snow
This test needs support for non-bitmap backups and missing or
unspecified bitmap sync modes, so rewrite the helpers to be a little
more generic.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-4-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/257 |  56 ++-
 tests/qemu-iotests/257.out | 192 ++---
 2 files changed, 128 insertions(+), 120 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index bc66ea03b24..aaa8f595043 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -207,31 +207,37 @@ def get_bitmap(bitmaps, drivename, name, recording=None):
 return bitmap
 return None
 
+def blockdev_backup(vm, device, target, sync, **kwargs):
+# Strip any arguments explicitly nulled by the caller:
+kwargs = {key: val for key, val in kwargs.items() if val is not None}
+result = vm.qmp_log('blockdev-backup',
+device=device,
+target=target,
+sync=sync,
+**kwargs)
+return result
+
+def blockdev_backup_mktarget(drive, target_id, filepath, sync, **kwargs):
+target_drive = Drive(filepath, vm=drive.vm)
+target_drive.create_target(target_id, drive.fmt, drive.size)
+blockdev_backup(drive.vm, drive.name, target_id, sync, **kwargs)
+
 def reference_backup(drive, n, filepath):
 log("--- Reference Backup #{:d} ---\n".format(n))
 target_id = "ref_target_{:d}".format(n)
 job_id = "ref_backup_{:d}".format(n)
-target_drive = Drive(filepath, vm=drive.vm)
-
-target_drive.create_target(target_id, drive.fmt, drive.size)
-drive.vm.qmp_log("blockdev-backup",
- job_id=job_id, device=drive.name,
- target=target_id, sync="full")
+blockdev_backup_mktarget(drive, target_id, filepath, "full",
+ job_id=job_id)
 drive.vm.run_job(job_id, auto_dismiss=True)
 log('')
 
-def bitmap_backup(drive, n, filepath, bitmap, bitmap_mode):
-log("--- Bitmap Backup #{:d} ---\n".format(n))
-target_id = "bitmap_target_{:d}".format(n)
-job_id = "bitmap_backup_{:d}".format(n)
-target_drive = Drive(filepath, vm=drive.vm)
-
-target_drive.create_target(target_id, drive.fmt, drive.size)
-drive.vm.qmp_log("blockdev-backup", job_id=job_id, device=drive.name,
- target=target_id, sync="bitmap",
- bitmap_mode=bitmap_mode,
- bitmap=bitmap,
- auto_finalize=False)
+def backup(drive, n, filepath, sync, **kwargs):
+log("--- Test Backup #{:d} ---\n".format(n))
+target_id = "backup_target_{:d}".format(n)
+job_id = "backup_{:d}".format(n)
+kwargs.setdefault('auto-finalize', False)
+blockdev_backup_mktarget(drive, target_id, filepath, sync,
+ job_id=job_id, **kwargs)
 return job_id
 
 def perform_writes(drive, n):
@@ -263,7 +269,7 @@ def compare_images(image, reference, baseimg=None, 
expected_match=True):
 "OK!" if ret == expected_ret else "ERROR!"),
 filters=[iotests.filter_testfiles])
 
-def test_bitmap_sync(bsync_mode, failure=None):
+def test_bitmap_sync(bsync_mode, msync_mode='bitmap', failure=None):
 """
 Test bitmap backup routines.
 
@@ -291,7 +297,7 @@ def test_bitmap_sync(bsync_mode, failure=None):
  fbackup0, fbackup1, fbackup2), \
  iotests.VM() as vm:
 
-mode = "Bitmap Sync Mode {:s}".format(bsync_mode)
+mode = "Mode {:s}; Bitmap Sync {:s}".format(msync_mode, bsync_mode)
 preposition = "with" if failure else "without"
 cond = "{:s} {:s}".format(preposition,
   "{:s} failure".format(failure) if failure
@@ -362,12 +368,13 @@ def test_bitmap_sync(bsync_mode, failure=None):
 ebitmap.compare(bitmap)
 reference_backup(drive0, 1, fbackup1)
 
-# 1 - Bitmap Backup (Optional induced failure)
+# 1 - Test Backup (w/ Optional induced failure)
 if failure == 'intermediate':
 # Activate blkdebug induced failure for second-to-next read
 log(vm.hmp_qemu_io(drive0.name, 'flush'))
 log('')
-job = bitmap_backup(drive0, 1, bsync1, "bitmap0", bsync_mode)
+job = backup(drive0, 1, bsync1, msync_mode,
+ bitmap="bitmap0", bitmap_mode=bsync_mode)
 
 def _callback():
 """Issue writes while the job is open to test bitmap divergence."""
@@ -408,7 +415,8 @@ def test_bitmap_sync(bsync_mode, failure=None):
 reference_backup(drive0, 2, fbackup2)
 
 # 2 - Bitmap Backup (In failure modes, this is a recovery.)
-job = bitmap_backup(drive0, 2, bsync2, "bitmap0", bsync_mode)
+job = backup(drive0, 2, bsync2, "bitmap",
+ bitmap="bitmap0", bitmap_mode=bsync_mode)
 

[Qemu-devel] [PULL 31/36] block/backup: support bitmap sync modes for non-bitmap backups

2019-08-16 Thread John Snow
Accept bitmaps and sync policies for the other backup modes.
This allows us to do things like create a bitmap synced to a full backup
without a transaction, or start a resumable backup process.

Some combinations don't make sense, though:

- NEVER policy combined with any non-BITMAP mode doesn't do anything,
  because the bitmap isn't used for input or output.
  It's harmless, but is almost certainly never what the user wanted.

- sync=NONE is more questionable. It can't use on-success because this
  job never completes with success anyway, and the resulting artifact
  of 'always' is suspect: because we start with a full bitmap and only
  copy out segments that get written to, the final output bitmap will
  always be ... a fully set bitmap.

  Maybe there are contexts in which bitmaps make sense for sync=none,
  but not without more severe changes to the current job, and omitting
  it here doesn't prevent us from adding it later.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-11-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c   |  8 +---
 blockdev.c   | 22 ++
 qapi/block-core.json |  6 --
 3 files changed, 27 insertions(+), 9 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 9e1382ec5c6..a9be07258c1 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -697,7 +697,7 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 return NULL;
 }
 
-if (sync_mode == MIRROR_SYNC_MODE_BITMAP) {
+if (sync_bitmap) {
 /* If we need to write to this bitmap, check that we can: */
 if (bitmap_mode != BITMAP_SYNC_MODE_NEVER &&
 bdrv_dirty_bitmap_check(sync_bitmap, BDRV_BITMAP_DEFAULT, errp)) {
@@ -708,12 +708,6 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 if (bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp) < 0) {
 return NULL;
 }
-} else if (sync_bitmap) {
-error_setg(errp,
-   "a bitmap was given to backup_job_create, "
-   "but it received an incompatible sync_mode (%s)",
-   MirrorSyncMode_str(sync_mode));
-return NULL;
 }
 
 len = bdrv_getlength(bs);
diff --git a/blockdev.c b/blockdev.c
index f889da0b427..64d06d1f672 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3567,6 +3567,28 @@ static BlockJob *do_backup_common(BackupCommon *backup,
 if (bdrv_dirty_bitmap_check(bmap, BDRV_BITMAP_ALLOW_RO, errp)) {
 return NULL;
 }
+
+/* This does not produce a useful bitmap artifact: */
+if (backup->sync == MIRROR_SYNC_MODE_NONE) {
+error_setg(errp, "sync mode '%s' does not produce meaningful 
bitmap"
+   " outputs", MirrorSyncMode_str(backup->sync));
+return NULL;
+}
+
+/* If the bitmap isn't used for input or output, this is useless: */
+if (backup->bitmap_mode == BITMAP_SYNC_MODE_NEVER &&
+backup->sync != MIRROR_SYNC_MODE_BITMAP) {
+error_setg(errp, "Bitmap sync mode '%s' has no meaningful effect"
+   " when combined with sync mode '%s'",
+   BitmapSyncMode_str(backup->bitmap_mode),
+   MirrorSyncMode_str(backup->sync));
+return NULL;
+}
+}
+
+if (!backup->has_bitmap && backup->has_bitmap_mode) {
+error_setg(errp, "Cannot specify bitmap sync mode without a bitmap");
+return NULL;
 }
 
 if (!backup->auto_finalize) {
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 8344fbe2030..d72cf5f354b 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1352,13 +1352,15 @@
 # @speed: the maximum speed, in bytes per second. The default is 0,
 # for unlimited.
 #
-# @bitmap: the name of a dirty bitmap if sync is "bitmap" or "incremental".
+# @bitmap: The name of a dirty bitmap to use.
 #  Must be present if sync is "bitmap" or "incremental".
+#  Can be present if sync is "full" or "top".
 #  Must not be present otherwise.
 #  (Since 2.4 (drive-backup), 3.1 (blockdev-backup))
 #
 # @bitmap-mode: Specifies the type of data the bitmap should contain after
-#   the operation concludes. Must be present if sync is "bitmap".
+#   the operation concludes.
+#   Must be present if a bitmap was provided,
 #   Must NOT be present otherwise. (Since 4.2)
 #
 # @compress: true to compress data, if the target format supports it.
-- 
2.21.0




[Qemu-devel] [PULL 36/36] tests/test-hbitmap: test next_zero and _next_dirty_area after truncate

2019-08-16 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

Test that hbitmap_next_zero and hbitmap_next_dirty_area can find things
after the old bitmap end.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Message-id: 20190805164652.42409-1-vsement...@virtuozzo.com
Tested-by: John Snow 
Reviewed-by: John Snow 
Signed-off-by: John Snow 
---
 tests/test-hbitmap.c | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
index 592d8219db2..eed5d288cbc 100644
--- a/tests/test-hbitmap.c
+++ b/tests/test-hbitmap.c
@@ -1004,6 +1004,15 @@ static void test_hbitmap_next_zero_4(TestHBitmapData 
*data, const void *unused)
 test_hbitmap_next_zero_do(data, 4);
 }
 
+static void test_hbitmap_next_zero_after_truncate(TestHBitmapData *data,
+  const void *unused)
+{
+hbitmap_test_init(data, L1, 0);
+hbitmap_test_truncate_impl(data, L1 * 2);
+hbitmap_set(data->hb, 0, L1);
+test_hbitmap_next_zero_check(data, 0);
+}
+
 static void test_hbitmap_next_dirty_area_check(TestHBitmapData *data,
uint64_t offset,
uint64_t count)
@@ -1104,6 +1113,15 @@ static void 
test_hbitmap_next_dirty_area_4(TestHBitmapData *data,
 test_hbitmap_next_dirty_area_do(data, 4);
 }
 
+static void test_hbitmap_next_dirty_area_after_truncate(TestHBitmapData *data,
+const void *unused)
+{
+hbitmap_test_init(data, L1, 0);
+hbitmap_test_truncate_impl(data, L1 * 2);
+hbitmap_set(data->hb, L1 + 1, 1);
+test_hbitmap_next_dirty_area_check(data, 0, UINT64_MAX);
+}
+
 int main(int argc, char **argv)
 {
 g_test_init(, , NULL);
@@ -1169,6 +1187,8 @@ int main(int argc, char **argv)
  test_hbitmap_next_zero_0);
 hbitmap_test_add("/hbitmap/next_zero/next_zero_4",
  test_hbitmap_next_zero_4);
+hbitmap_test_add("/hbitmap/next_zero/next_zero_after_truncate",
+ test_hbitmap_next_zero_after_truncate);
 
 hbitmap_test_add("/hbitmap/next_dirty_area/next_dirty_area_0",
  test_hbitmap_next_dirty_area_0);
@@ -1176,6 +1196,8 @@ int main(int argc, char **argv)
  test_hbitmap_next_dirty_area_1);
 hbitmap_test_add("/hbitmap/next_dirty_area/next_dirty_area_4",
  test_hbitmap_next_dirty_area_4);
+hbitmap_test_add("/hbitmap/next_dirty_area/next_dirty_area_after_truncate",
+ test_hbitmap_next_dirty_area_after_truncate);
 
 g_test_run();
 
-- 
2.21.0




[Qemu-devel] [PULL 35/36] block/backup: refactor write_flags

2019-08-16 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

The write flags are constant, so let's store them in BackupBlockJob
instead of recalculating them. This also leaves two boolean fields
unused, so drop them.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190730163251.755248-4-vsement...@virtuozzo.com
Signed-off-by: John Snow 
---
 block/backup.c | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 083fc189af9..2baf7bed65a 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -50,14 +50,13 @@ typedef struct BackupBlockJob {
 uint64_t len;
 uint64_t bytes_read;
 int64_t cluster_size;
-bool compress;
 NotifierWithReturn before_write;
 QLIST_HEAD(, CowRequest) inflight_reqs;
 
 bool use_copy_range;
 int64_t copy_range_size;
 
-bool serialize_target_writes;
+BdrvRequestFlags write_flags;
 bool initializing_bitmap;
 } BackupBlockJob;
 
@@ -113,10 +112,6 @@ static int coroutine_fn 
backup_cow_with_bounce_buffer(BackupBlockJob *job,
 BlockBackend *blk = job->common.blk;
 int nbytes;
 int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-int write_flags =
-(job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0) |
-(job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
-
 
 assert(QEMU_IS_ALIGNED(start, job->cluster_size));
 bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
@@ -135,7 +130,7 @@ static int coroutine_fn 
backup_cow_with_bounce_buffer(BackupBlockJob *job,
 }
 
 ret = blk_co_pwrite(job->target, start, nbytes, *bounce_buffer,
-write_flags);
+job->write_flags);
 if (ret < 0) {
 trace_backup_do_cow_write_fail(job, start, ret);
 if (error_is_read) {
@@ -163,7 +158,6 @@ static int coroutine_fn 
backup_cow_with_offload(BackupBlockJob *job,
 BlockBackend *blk = job->common.blk;
 int nbytes;
 int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
 
 assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
 assert(QEMU_IS_ALIGNED(start, job->cluster_size));
@@ -172,7 +166,7 @@ static int coroutine_fn 
backup_cow_with_offload(BackupBlockJob *job,
 bdrv_reset_dirty_bitmap(job->copy_bitmap, start,
 job->cluster_size * nr_clusters);
 ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
-read_flags, write_flags);
+read_flags, job->write_flags);
 if (ret < 0) {
 trace_backup_do_cow_copy_range_fail(job, start, ret);
 bdrv_set_dirty_bitmap(job->copy_bitmap, start,
@@ -751,10 +745,16 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 job->sync_mode = sync_mode;
 job->sync_bitmap = sync_bitmap;
 job->bitmap_mode = bitmap_mode;
-job->compress = compress;
 
-/* Detect image-fleecing (and similar) schemes */
-job->serialize_target_writes = bdrv_chain_contains(target, bs);
+/*
+ * Set write flags:
+ * 1. Detect image-fleecing (and similar) schemes
+ * 2. Handle compression
+ */
+job->write_flags =
+(bdrv_chain_contains(target, bs) ? BDRV_REQ_SERIALISING : 0) |
+(compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
+
 job->cluster_size = cluster_size;
 job->copy_bitmap = copy_bitmap;
 copy_bitmap = NULL;
-- 
2.21.0




[Qemu-devel] [PULL 1/3] dma-helpers: ensure AIO callback is invoked after cancellation

2019-08-16 Thread John Snow
From: Paolo Bonzini 

dma_aio_cancel unschedules the BH if there is one, which corresponds
to the reschedule_dma case of dma_blk_cb.  This can stall the DMA
permanently, because dma_complete will never get invoked and therefore
nobody will ever invoke the original AIO callback in dbs->common.cb.

Fix this by invoking the callback (which is ensured to happen after
a bdrv_aio_cancel_async, or done manually in the dbs->bh case), and
add assertions to check that the DMA state machine is indeed waiting
for dma_complete or reschedule_dma, but never both.

Reported-by: John Snow 
Signed-off-by: Paolo Bonzini 
Message-id: 20190729213416.1972-1-pbonz...@redhat.com
Signed-off-by: John Snow 
---
 dma-helpers.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 2d7e02d35e5..d3871dc61ea 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -90,6 +90,7 @@ static void reschedule_dma(void *opaque)
 {
 DMAAIOCB *dbs = (DMAAIOCB *)opaque;
 
+assert(!dbs->acb && dbs->bh);
 qemu_bh_delete(dbs->bh);
 dbs->bh = NULL;
 dma_blk_cb(dbs, 0);
@@ -111,15 +112,12 @@ static void dma_complete(DMAAIOCB *dbs, int ret)
 {
 trace_dma_complete(dbs, ret, dbs->common.cb);
 
+assert(!dbs->acb && !dbs->bh);
 dma_blk_unmap(dbs);
 if (dbs->common.cb) {
 dbs->common.cb(dbs->common.opaque, ret);
 }
 qemu_iovec_destroy(>iov);
-if (dbs->bh) {
-qemu_bh_delete(dbs->bh);
-dbs->bh = NULL;
-}
 qemu_aio_unref(dbs);
 }
 
@@ -179,14 +177,21 @@ static void dma_aio_cancel(BlockAIOCB *acb)
 
 trace_dma_aio_cancel(dbs);
 
+assert(!(dbs->acb && dbs->bh));
 if (dbs->acb) {
+/* This will invoke dma_blk_cb.  */
 blk_aio_cancel_async(dbs->acb);
+return;
 }
+
 if (dbs->bh) {
 cpu_unregister_map_client(dbs->bh);
 qemu_bh_delete(dbs->bh);
 dbs->bh = NULL;
 }
+if (dbs->common.cb) {
+dbs->common.cb(dbs->common.opaque, -ECANCELED);
+}
 }
 
 static AioContext *dma_get_aio_context(BlockAIOCB *acb)
-- 
2.21.0




[Qemu-devel] [PULL 34/36] block/backup: deal with zero detection

2019-08-16 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

We have the detect_zeroes option, so at least for blockdev-backup the
user should set it if zero detection is needed. For drive-backup, leave
detection enabled by default, but do it through the existing option
instead of open-coding it.
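
As a sketch of what explicit opt-in looks like for blockdev-backup after
this change (iotests-style Python; the node names, file path and job id
are made up for illustration):

    vm.qmp_log("blockdev-add",
               node_name="target0", driver=iotests.imgfmt,
               discard="unmap", detect_zeroes="unmap",
               file={"driver": "file", "filename": target_path})
    vm.qmp_log("blockdev-backup", device="drive0", target="target0",
               sync="full", job_id="backup0")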

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190730163251.755248-2-vsement...@virtuozzo.com
Signed-off-by: John Snow 
---
 block/backup.c | 15 ++-
 blockdev.c |  8 
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index a9be07258c1..083fc189af9 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -113,7 +113,10 @@ static int coroutine_fn 
backup_cow_with_bounce_buffer(BackupBlockJob *job,
 BlockBackend *blk = job->common.blk;
 int nbytes;
 int read_flags = is_write_notifier ? BDRV_REQ_NO_SERIALISING : 0;
-int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
+int write_flags =
+(job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0) |
+(job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0);
+
 
 assert(QEMU_IS_ALIGNED(start, job->cluster_size));
 bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
@@ -131,14 +134,8 @@ static int coroutine_fn 
backup_cow_with_bounce_buffer(BackupBlockJob *job,
 goto fail;
 }
 
-if (buffer_is_zero(*bounce_buffer, nbytes)) {
-ret = blk_co_pwrite_zeroes(job->target, start,
-   nbytes, write_flags | BDRV_REQ_MAY_UNMAP);
-} else {
-ret = blk_co_pwrite(job->target, start,
-nbytes, *bounce_buffer, write_flags |
-(job->compress ? BDRV_REQ_WRITE_COMPRESSED : 0));
-}
+ret = blk_co_pwrite(job->target, start, nbytes, *bounce_buffer,
+write_flags);
 if (ret < 0) {
 trace_backup_do_cow_write_fail(job, start, ret);
 if (error_is_read) {
diff --git a/blockdev.c b/blockdev.c
index 64d06d1f672..2e536dde3e9 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3615,7 +3615,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
JobTxn *txn,
 BlockDriverState *source = NULL;
 BlockJob *job = NULL;
 AioContext *aio_context;
-QDict *options = NULL;
+QDict *options;
 Error *local_err = NULL;
 int flags;
 int64_t size;
@@ -3688,10 +3688,10 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
JobTxn *txn,
 goto out;
 }
 
+options = qdict_new();
+qdict_put_str(options, "discard", "unmap");
+qdict_put_str(options, "detect-zeroes", "unmap");
 if (backup->format) {
-if (!options) {
-options = qdict_new();
-}
 qdict_put_str(options, "driver", backup->format);
 }
 
-- 
2.21.0




[Qemu-devel] [PULL 28/36] block/backup: centralize copy_bitmap initialization

2019-08-16 Thread John Snow
Just a few housekeeping changes that keep the following commit easier
to read: perform the initial copy_bitmap initialization in one place.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-8-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c | 29 +++--
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index b04ab2d5f0c..305f9b3468b 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -451,16 +451,22 @@ static int coroutine_fn backup_loop(BackupBlockJob *job)
 return ret;
 }
 
-/* init copy_bitmap from sync_bitmap */
-static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
+static void backup_init_copy_bitmap(BackupBlockJob *job)
 {
-bool ret = bdrv_dirty_bitmap_merge_internal(job->copy_bitmap,
-job->sync_bitmap,
-NULL, true);
-assert(ret);
+bool ret;
+uint64_t estimate;
 
-job_progress_set_remaining(>common.job,
-   bdrv_get_dirty_count(job->copy_bitmap));
+if (job->sync_mode == MIRROR_SYNC_MODE_BITMAP) {
+ret = bdrv_dirty_bitmap_merge_internal(job->copy_bitmap,
+   job->sync_bitmap,
+   NULL, true);
+assert(ret);
+} else {
+bdrv_set_dirty_bitmap(job->copy_bitmap, 0, job->len);
+}
+
+estimate = bdrv_get_dirty_count(job->copy_bitmap);
+job_progress_set_remaining(>common.job, estimate);
 }
 
 static int coroutine_fn backup_run(Job *job, Error **errp)
@@ -472,12 +478,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 QLIST_INIT(>inflight_reqs);
 qemu_co_rwlock_init(>flush_rwlock);
 
-if (s->sync_mode == MIRROR_SYNC_MODE_BITMAP) {
-backup_incremental_init_copy_bitmap(s);
-} else {
-bdrv_set_dirty_bitmap(s->copy_bitmap, 0, s->len);
-job_progress_set_remaining(job, s->len);
-}
+backup_init_copy_bitmap(s);
 
 s->before_write.notify = backup_before_write_notify;
 bdrv_add_before_write_notifier(bs, >before_write);
-- 
2.21.0




[Qemu-devel] [PULL 29/36] block/backup: add backup_is_cluster_allocated

2019-08-16 Thread John Snow
Modify bdrv_is_unallocated_range to utilize the pnum return from
bdrv_is_allocated, and in the process change the semantics from
"is unallocated" to "is allocated."

It also returns, via pnum, the number of contiguous clusters that share
the same allocation status.

This will be used to carefully toggle bits in the bitmap for sync=top
initialization in the following commits.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-9-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c | 62 +++---
 1 file changed, 44 insertions(+), 18 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 305f9b3468b..f6bf32c9438 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -185,6 +185,48 @@ static int coroutine_fn 
backup_cow_with_offload(BackupBlockJob *job,
 return nbytes;
 }
 
+/*
+ * Check if the cluster starting at offset is allocated or not.
+ * return via pnum the number of contiguous clusters sharing this allocation.
+ */
+static int backup_is_cluster_allocated(BackupBlockJob *s, int64_t offset,
+   int64_t *pnum)
+{
+BlockDriverState *bs = blk_bs(s->common.blk);
+int64_t count, total_count = 0;
+int64_t bytes = s->len - offset;
+int ret;
+
+assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
+
+while (true) {
+ret = bdrv_is_allocated(bs, offset, bytes, );
+if (ret < 0) {
+return ret;
+}
+
+total_count += count;
+
+if (ret || count == 0) {
+/*
+ * ret: partial segment(s) are considered allocated.
+ * otherwise: unallocated tail is treated as an entire segment.
+ */
+*pnum = DIV_ROUND_UP(total_count, s->cluster_size);
+return ret;
+}
+
+/* Unallocated segment(s) with uncertain following segment(s) */
+if (total_count >= s->cluster_size) {
+*pnum = total_count / s->cluster_size;
+return 0;
+}
+
+offset += count;
+bytes -= count;
+}
+}
+
 static int coroutine_fn backup_do_cow(BackupBlockJob *job,
   int64_t offset, uint64_t bytes,
   bool *error_is_read,
@@ -398,34 +440,18 @@ static bool coroutine_fn yield_and_check(BackupBlockJob 
*job)
 return false;
 }
 
-static bool bdrv_is_unallocated_range(BlockDriverState *bs,
-  int64_t offset, int64_t bytes)
-{
-int64_t end = offset + bytes;
-
-while (offset < end && !bdrv_is_allocated(bs, offset, bytes, )) {
-if (bytes == 0) {
-return true;
-}
-offset += bytes;
-bytes = end - offset;
-}
-
-return offset >= end;
-}
-
 static int coroutine_fn backup_loop(BackupBlockJob *job)
 {
 bool error_is_read;
 int64_t offset;
 BdrvDirtyBitmapIter *bdbi;
-BlockDriverState *bs = blk_bs(job->common.blk);
 int ret = 0;
+int64_t dummy;
 
 bdbi = bdrv_dirty_iter_new(job->copy_bitmap);
 while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
 if (job->sync_mode == MIRROR_SYNC_MODE_TOP &&
-bdrv_is_unallocated_range(bs, offset, job->cluster_size))
+!backup_is_cluster_allocated(job, offset, ))
 {
 bdrv_reset_dirty_bitmap(job->copy_bitmap, offset,
 job->cluster_size);
-- 
2.21.0




[Qemu-devel] [PULL 27/36] block/backup: improve sync=bitmap work estimates

2019-08-16 Thread John Snow
When making backups based on bitmaps, the work estimate can be more
accurate. Update iotests to reflect the new strategy.

TOP work estimates are broken, but do not get worse with this commit.
That issue is addressed in the following commits instead.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-7-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c |  8 +++-
 tests/qemu-iotests/256.out |  4 ++--
 tests/qemu-iotests/257.out | 36 ++--
 3 files changed, 23 insertions(+), 25 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index f704c83a98f..b04ab2d5f0c 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -459,9 +459,8 @@ static void 
backup_incremental_init_copy_bitmap(BackupBlockJob *job)
 NULL, true);
 assert(ret);
 
-/* TODO job_progress_set_remaining() would make more sense */
-job_progress_update(>common.job,
-job->len - bdrv_get_dirty_count(job->copy_bitmap));
+job_progress_set_remaining(>common.job,
+   bdrv_get_dirty_count(job->copy_bitmap));
 }
 
 static int coroutine_fn backup_run(Job *job, Error **errp)
@@ -473,12 +472,11 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 QLIST_INIT(>inflight_reqs);
 qemu_co_rwlock_init(>flush_rwlock);
 
-job_progress_set_remaining(job, s->len);
-
 if (s->sync_mode == MIRROR_SYNC_MODE_BITMAP) {
 backup_incremental_init_copy_bitmap(s);
 } else {
 bdrv_set_dirty_bitmap(s->copy_bitmap, 0, s->len);
+job_progress_set_remaining(job, s->len);
 }
 
 s->before_write.notify = backup_before_write_notify;
diff --git a/tests/qemu-iotests/256.out b/tests/qemu-iotests/256.out
index eec38614ec4..f18ecb0f912 100644
--- a/tests/qemu-iotests/256.out
+++ b/tests/qemu-iotests/256.out
@@ -113,7 +113,7 @@
 {
   "return": {}
 }
-{"data": {"device": "j2", "len": 67108864, "offset": 67108864, "speed": 0, 
"type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "j3", "len": 67108864, "offset": 67108864, "speed": 0, 
"type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "j2", "len": 0, "offset": 0, "speed": 0, "type": 
"backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": 
"USECS", "seconds": "SECS"}}
+{"data": {"device": "j3", "len": 0, "offset": 0, "speed": 0, "type": 
"backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": 
"USECS", "seconds": "SECS"}}
 
 --- Done ---
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index 43f2e0f9c99..811b1b11f19 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -150,7 +150,7 @@ expecting 7 dirty sectors; have 7. OK!
 {"execute": "job-cancel", "arguments": {"id": "backup_1"}}
 {"return": {}}
 {"data": {"id": "backup_1", "type": "backup"}, "event": "BLOCK_JOB_PENDING", 
"timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_1", "len": 67108864, "offset": 67108864, "speed": 
0, "type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_1", "len": 393216, "offset": 393216, "speed": 0, 
"type": "backup"}, "event": "BLOCK_JOB_CANCELLED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
 "device0": [
@@ -228,7 +228,7 @@ expecting 15 dirty sectors; have 15. OK!
 {"execute": "job-finalize", "arguments": {"id": "backup_2"}}
 {"return": {}}
 {"data": {"id": "backup_2", "type": "backup"}, "event": "BLOCK_JOB_PENDING", 
"timestamp": {"microseconds": "USECS", "seconds": "SECS"}}
-{"data": {"device": "backup_2", "len": 67108864, "offset": 67108864, "speed": 
0, "type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
+{"data": {"device": "backup_2", "len": 983040, "offset": 983040, "speed": 0, 
"type": "backup"}, "event": "BLOCK_JOB_COMPLETED", "timestamp": 
{"microseconds": "USECS", "seconds": "SECS"}}
 {
   "bitmaps": {
 "device0": [
@@ -367,7 +367,7 @@ expecting 6 dirty sectors; have 6. OK!
 {"execute": "blockdev-backup", "arguments": {"auto-finalize": false, "bitmap": 
"bitmap0", "bitmap-mode": "never", "device": "drive0", "job-id": "backup_1", 
"sync": "bitmap", "target": "backup_target_1"}}
 {"return": {}}
 {"data": {"action": "report", "device": "backup_1", "operation": "read"}, 
"event": "BLOCK_JOB_ERROR", "timestamp": {"microseconds": "USECS", "seconds": 
"SECS"}}
-{"data": {"device": "backup_1", "error": "Input/output error", "len": 
67108864, "offset": 66781184, "speed": 0, "type": "backup"}, "event": 
"BLOCK_JOB_COMPLETED", "timestamp": {"microseconds": "USECS", "seconds": 
"SECS"}}
+{"data": {"device": "backup_1", "error": "Input/output 

[Qemu-devel] [PULL 33/36] qapi: add dirty-bitmaps to query-named-block-nodes result

2019-08-16 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

Let's add the ability to query dirty-bitmaps not only on root nodes.
It is useful when dealing with both snapshots and incremental backups.
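
A minimal sketch of how a client could consume the new field (the field
is optional, so the .get() calls below are just one way to cope; node
and bitmap names depend on the actual configuration):

    result = vm.qmp("query-named-block-nodes")["return"]
    for node in result:
        for bm in node.get("dirty-bitmaps", []):
            print(node.get("node-name"), bm.get("name", "(anonymous)"),
                  bm["count"], bm["granularity"])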

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: John Snow 
Message-id: 20190717173937.18747-1-js...@redhat.com
[Added deprecation information. --js]
Signed-off-by: John Snow 
[Fixed spelling --js]
---
 block/qapi.c |  5 +
 qapi/block-core.json |  6 +-
 qemu-deprecated.texi | 12 
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/block/qapi.c b/block/qapi.c
index 917435f0226..15f10302647 100644
--- a/block/qapi.c
+++ b/block/qapi.c
@@ -79,6 +79,11 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *blk,
 info->backing_file = g_strdup(bs->backing_file);
 }
 
+if (!QLIST_EMPTY(>dirty_bitmaps)) {
+info->has_dirty_bitmaps = true;
+info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
+}
+
 info->detect_zeroes = bs->detect_zeroes;
 
 if (blk && blk_get_public(blk)->throttle_group_member.throttle_state) {
diff --git a/qapi/block-core.json b/qapi/block-core.json
index d72cf5f354b..e9364a4a293 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -360,6 +360,9 @@
 # @write_threshold: configured write threshold for the device.
 #   0 if disabled. (Since 2.3)
 #
+# @dirty-bitmaps: dirty bitmaps information (only present if node
+# has one or more dirty bitmaps) (Since 4.2)
+#
 # Since: 0.14.0
 #
 ##
@@ -378,7 +381,7 @@
 '*bps_wr_max_length': 'int', '*iops_max_length': 'int',
 '*iops_rd_max_length': 'int', '*iops_wr_max_length': 'int',
 '*iops_size': 'int', '*group': 'str', 'cache': 'BlockdevCacheInfo',
-'write_threshold': 'int' } }
+'write_threshold': 'int', '*dirty-bitmaps': ['BlockDirtyInfo'] } }
 
 ##
 # @BlockDeviceIoStatus:
@@ -656,6 +659,7 @@
 #
 # @dirty-bitmaps: dirty bitmaps information (only present if the
 # driver has one or more dirty bitmaps) (Since 2.0)
+# Deprecated in 4.2; see BlockDeviceInfo instead.
 #
 # @io-status: @BlockDeviceIoStatus. Only present if the device
 # supports it and the VM is configured to stop on errors
diff --git a/qemu-deprecated.texi b/qemu-deprecated.texi
index f7680c08e10..00a4b6f3504 100644
--- a/qemu-deprecated.texi
+++ b/qemu-deprecated.texi
@@ -154,6 +154,18 @@ The ``status'' field of the ``BlockDirtyInfo'' structure, 
returned by
 the query-block command is deprecated. Two new boolean fields,
 ``recording'' and ``busy'' effectively replace it.
 
+@subsection query-block result field dirty-bitmaps (Since 4.2)
+
+The ``dirty-bitmaps`` field of the ``BlockInfo`` structure, returned by
+the query-block command is itself now deprecated. The ``dirty-bitmaps``
+field of the ``BlockDeviceInfo`` struct should be used instead, which is the
+type of the ``inserted`` field in query-block replies, as well as the
+type of array items in query-named-block-nodes.
+
+Since the ``dirty-bitmaps`` field is optionally present in both the old and
+new locations, clients must use introspection to learn where to anticipate
+the field if/when it does appear in command output.
+
 @subsection query-cpus (since 2.12.0)
 
 The ``query-cpus'' command is replaced by the ``query-cpus-fast'' command.
-- 
2.21.0




[Qemu-devel] [PULL 23/36] iotests/257: add EmulatedBitmap class

2019-08-16 Thread John Snow
Represent a bitmap with an object that we can mark and clear bits in.
This makes it easier to manage partial writes when we don't write a
full group's worth of patterns before an error.
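
The intended flow, roughly, mirroring how the test below uses it
(drive0, bitmaps and get_bitmap() come from the surrounding test):

    ebitmap = EmulatedBitmap()
    ebitmap.dirty_group(1)        # writes from pattern group 1 land
    ebitmap.compare(get_bitmap(bitmaps, drive0.device, 'bitmap0'))
    ebitmap.clear_group(1)        # a successful bitmap backup clears them
    ebitmap.dirty_group(2)        # ...and new writes re-dirty the bitmap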

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-3-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/257 | 124 +
 1 file changed, 75 insertions(+), 49 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index 02f9ae06490..bc66ea03b24 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -85,6 +85,59 @@ GROUPS = [
 Pattern('0xdd', 0x3fc)]), # New; leaving a gap to the right
 ]
 
+
+class EmulatedBitmap:
+def __init__(self, granularity=GRANULARITY):
+self._bits = set()
+self.granularity = granularity
+
+def dirty_bits(self, bits):
+self._bits |= set(bits)
+
+def dirty_group(self, n):
+self.dirty_bits(GROUPS[n].bits(self.granularity))
+
+def clear(self):
+self._bits = set()
+
+def clear_bits(self, bits):
+self._bits -= set(bits)
+
+def clear_bit(self, bit):
+self.clear_bits({bit})
+
+def clear_group(self, n):
+self.clear_bits(GROUPS[n].bits(self.granularity))
+
+@property
+def first_bit(self):
+return sorted(self.bits)[0]
+
+@property
+def bits(self):
+return self._bits
+
+@property
+def count(self):
+return len(self.bits)
+
+def compare(self, qmp_bitmap):
+"""
+Print a nice human-readable message checking that a bitmap as reported
+by the QMP interface has as many bits set as we expect it to.
+"""
+
+name = qmp_bitmap.get('name', '(anonymous)')
+log("= Checking Bitmap {:s} =".format(name))
+
+want = self.count
+have = qmp_bitmap['count'] // qmp_bitmap['granularity']
+
+log("expecting {:d} dirty sectors; have {:d}. {:s}".format(
+want, have, "OK!" if want == have else "ERROR!"))
+log('')
+
+
 class Drive:
 """Represents, vaguely, a drive attached to a VM.
 Includes format, graph, and device information."""
@@ -195,27 +248,6 @@ def perform_writes(drive, n):
 log('')
 return bitmaps
 
-def calculate_bits(groups=None):
-"""Calculate how many bits we expect to see dirtied."""
-if groups:
-bits = set.union(*(GROUPS[group].bits(GRANULARITY) for group in 
groups))
-return len(bits)
-return 0
-
-def bitmap_comparison(bitmap, groups=None, want=0):
-"""
-Print a nice human-readable message checking that this bitmap has as
-many bits set as we expect it to.
-"""
-log("= Checking Bitmap {:s} =".format(bitmap.get('name', '(anonymous)')))
-
-if groups:
-want = calculate_bits(groups)
-have = bitmap['count'] // bitmap['granularity']
-
-log("expecting {:d} dirty sectors; have {:d}. {:s}".format(
-want, have, "OK!" if want == have else "ERROR!"))
-log('')
 
 def compare_images(image, reference, baseimg=None, expected_match=True):
 """
@@ -321,12 +353,13 @@ def test_bitmap_sync(bsync_mode, failure=None):
 vm.qmp_log("block-dirty-bitmap-add", node=drive0.name,
name="bitmap0", granularity=GRANULARITY)
 log('')
+ebitmap = EmulatedBitmap()
 
 # 1 - Writes and Reference Backup
 bitmaps = perform_writes(drive0, 1)
-dirty_groups = {1}
+ebitmap.dirty_group(1)
 bitmap = get_bitmap(bitmaps, drive0.device, 'bitmap0')
-bitmap_comparison(bitmap, groups=dirty_groups)
+ebitmap.compare(bitmap)
 reference_backup(drive0, 1, fbackup1)
 
 # 1 - Bitmap Backup (Optional induced failure)
@@ -342,54 +375,47 @@ def test_bitmap_sync(bsync_mode, failure=None):
 log('')
 bitmaps = perform_writes(drive0, 2)
 # Named bitmap (static, should be unchanged)
-bitmap_comparison(get_bitmap(bitmaps, drive0.device, 'bitmap0'),
-  groups=dirty_groups)
+ebitmap.compare(get_bitmap(bitmaps, drive0.device, 'bitmap0'))
 # Anonymous bitmap (dynamic, shows new writes)
-bitmap_comparison(get_bitmap(bitmaps, drive0.device, '',
- recording=True), groups={2})
-dirty_groups.add(2)
+anonymous = EmulatedBitmap()
+anonymous.dirty_group(2)
+anonymous.compare(get_bitmap(bitmaps, drive0.device, '',
+ recording=True))
+
+# Simulate the order in which this will happen:
+# group 1 gets cleared first, then group two gets written.
+if ((bsync_mode == 'on-success' and not failure) or
+(bsync_mode == 'always')):
+ebitmap.clear_group(1)
+ebitmap.dirty_group(2)
 
 vm.run_job(job, auto_dismiss=True, 

[Qemu-devel] [PULL 22/36] iotests/257: add Pattern class

2019-08-16 Thread John Snow
Just kidding, this is easier to manage with a full class instead of a
namedtuple.
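
The new bits() method makes the dirty-bit bookkeeping explicit; a worked
example with the test's 64K granularity:

    # A 64K write at 1M-32K (0xf8000) straddles two clusters, so it
    # dirties bits 15 and 16: 0xf8000 // 0x10000 == 15, and the last
    # byte written falls into cluster 16.
    p = Pattern('0x77', 0x00f8000)
    assert p.bits(GRANULARITY) == {15, 16}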

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-2-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/257 | 58 +++---
 1 file changed, 32 insertions(+), 26 deletions(-)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index 39526837499..02f9ae06490 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -19,7 +19,6 @@
 #
 # owner=js...@redhat.com
 
-from collections import namedtuple
 import math
 import os
 
@@ -29,10 +28,18 @@ from iotests import log, qemu_img
 SIZE = 64 * 1024 * 1024
 GRANULARITY = 64 * 1024
 
-Pattern = namedtuple('Pattern', ['byte', 'offset', 'size'])
-def mkpattern(byte, offset, size=GRANULARITY):
-"""Constructor for Pattern() with default size"""
-return Pattern(byte, offset, size)
+
+class Pattern:
+def __init__(self, byte, offset, size=GRANULARITY):
+self.byte = byte
+self.offset = offset
+self.size = size
+
+def bits(self, granularity):
+lower = self.offset // granularity
+upper = (self.offset + self.size - 1) // granularity
+return set(range(lower, upper + 1))
+
 
 class PatternGroup:
 """Grouping of Pattern objects. Initialize with an iterable of Patterns."""
@@ -43,40 +50,39 @@ class PatternGroup:
 """Calculate the unique bits dirtied by this pattern grouping"""
 res = set()
 for pattern in self.patterns:
-lower = pattern.offset // granularity
-upper = (pattern.offset + pattern.size - 1) // granularity
-res = res | set(range(lower, upper + 1))
+res |= pattern.bits(granularity)
 return res
 
+
 GROUPS = [
 PatternGroup([
 # Batch 0: 4 clusters
-mkpattern('0x49', 0x000),
-mkpattern('0x6c', 0x010),   # 1M
-mkpattern('0x6f', 0x200),   # 32M
-mkpattern('0x76', 0x3ff)]), # 64M - 64K
+Pattern('0x49', 0x000),
+Pattern('0x6c', 0x010),   # 1M
+Pattern('0x6f', 0x200),   # 32M
+Pattern('0x76', 0x3ff)]), # 64M - 64K
 PatternGroup([
 # Batch 1: 6 clusters (3 new)
-mkpattern('0x65', 0x000),   # Full overwrite
-mkpattern('0x77', 0x00f8000),   # Partial-left (1M-32K)
-mkpattern('0x72', 0x2008000),   # Partial-right (32M+32K)
-mkpattern('0x69', 0x3fe)]), # Adjacent-left (64M - 128K)
+Pattern('0x65', 0x000),   # Full overwrite
+Pattern('0x77', 0x00f8000),   # Partial-left (1M-32K)
+Pattern('0x72', 0x2008000),   # Partial-right (32M+32K)
+Pattern('0x69', 0x3fe)]), # Adjacent-left (64M - 128K)
 PatternGroup([
 # Batch 2: 7 clusters (3 new)
-mkpattern('0x74', 0x001),   # Adjacent-right
-mkpattern('0x69', 0x00e8000),   # Partial-left  (1M-96K)
-mkpattern('0x6e', 0x2018000),   # Partial-right (32M+96K)
-mkpattern('0x67', 0x3fe,
-  2*GRANULARITY)]), # Overwrite [(64M-128K)-64M)
+Pattern('0x74', 0x001),   # Adjacent-right
+Pattern('0x69', 0x00e8000),   # Partial-left  (1M-96K)
+Pattern('0x6e', 0x2018000),   # Partial-right (32M+96K)
+Pattern('0x67', 0x3fe,
+2*GRANULARITY)]), # Overwrite [(64M-128K)-64M)
 PatternGroup([
 # Batch 3: 8 clusters (5 new)
 # Carefully chosen such that nothing re-dirties the one cluster
 # that copies out successfully before failure in Group #1.
-mkpattern('0xaa', 0x001,
-  3*GRANULARITY),   # Overwrite and 2x Adjacent-right
-mkpattern('0xbb', 0x00d8000),   # Partial-left (1M-160K)
-mkpattern('0xcc', 0x2028000),   # Partial-right (32M+160K)
-mkpattern('0xdd', 0x3fc)]), # New; leaving a gap to the right
+Pattern('0xaa', 0x001,
+3*GRANULARITY),   # Overwrite and 2x Adjacent-right
+Pattern('0xbb', 0x00d8000),   # Partial-left (1M-160K)
+Pattern('0xcc', 0x2028000),   # Partial-right (32M+160K)
+Pattern('0xdd', 0x3fc)]), # New; leaving a gap to the right
 ]
 
 class Drive:
-- 
2.21.0




[Qemu-devel] [PULL 26/36] iotests/257: test API failures

2019-08-16 Thread John Snow
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-6-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/257 | 67 ++
 tests/qemu-iotests/257.out | 85 ++
 2 files changed, 152 insertions(+)

diff --git a/tests/qemu-iotests/257 b/tests/qemu-iotests/257
index aaa8f595043..53ab31c92e1 100755
--- a/tests/qemu-iotests/257
+++ b/tests/qemu-iotests/257
@@ -447,10 +447,77 @@ def test_bitmap_sync(bsync_mode, msync_mode='bitmap', 
failure=None):
 compare_images(img_path, fbackup2)
 log('')
 
+def test_backup_api():
+"""
+Test malformed and prohibited invocations of the backup API.
+"""
+with iotests.FilePaths(['img', 'bsync1']) as \
+ (img_path, backup_path), \
+ iotests.VM() as vm:
+
+log("\n=== API failure tests ===\n")
+log('--- Preparing image & VM ---\n')
+drive0 = Drive(img_path, vm=vm)
+drive0.img_create(iotests.imgfmt, SIZE)
+vm.add_device("{},id=scsi0".format(iotests.get_virtio_scsi_device()))
+vm.launch()
+
+file_config = {
+'driver': 'file',
+'filename': drive0.path
+}
+
+vm.qmp_log('blockdev-add',
+   filters=[iotests.filter_qmp_testfiles],
+   node_name="drive0",
+   driver=drive0.fmt,
+   file=file_config)
+drive0.node = 'drive0'
+drive0.device = 'device0'
+vm.qmp_log("device_add", id=drive0.device,
+   drive=drive0.name, driver="scsi-hd")
+log('')
+
+target0 = Drive(backup_path, vm=vm)
+target0.create_target("backup_target", drive0.fmt, drive0.size)
+log('')
+
+vm.qmp_log("block-dirty-bitmap-add", node=drive0.name,
+   name="bitmap0", granularity=GRANULARITY)
+log('')
+
+log('-- Testing invalid QMP commands --\n')
+
+error_cases = {
+'incremental': {
+None:['on-success', 'always', 'never', None],
+'bitmap404': ['on-success', 'always', 'never', None],
+'bitmap0':   ['always', 'never']
+},
+'bitmap': {
+None:['on-success', 'always', 'never', None],
+'bitmap404': ['on-success', 'always', 'never', None],
+'bitmap0':   [None],
+},
+}
+
+# Dicts, as always, are not stably-ordered prior to 3.7, so use tuples:
+for sync_mode in ('incremental', 'bitmap'):
+log("-- Sync mode {:s} tests --\n".format(sync_mode))
+for bitmap in (None, 'bitmap404', 'bitmap0'):
+for policy in error_cases[sync_mode][bitmap]:
+blockdev_backup(drive0.vm, drive0.name, "backup_target",
+sync_mode, job_id='api_job',
+bitmap=bitmap, bitmap_mode=policy)
+log('')
+
+
 def main():
 for bsync_mode in ("never", "on-success", "always"):
 for failure in ("simulated", "intermediate", None):
 test_bitmap_sync(bsync_mode, "bitmap", failure)
 
+test_backup_api()
+
 if __name__ == '__main__':
 iotests.script_main(main, supported_fmts=['qcow2'])
diff --git a/tests/qemu-iotests/257.out b/tests/qemu-iotests/257.out
index 0abc96acd36..43f2e0f9c99 100644
--- a/tests/qemu-iotests/257.out
+++ b/tests/qemu-iotests/257.out
@@ -2245,3 +2245,88 @@ qemu_img compare "TEST_DIR/PID-bsync1" 
"TEST_DIR/PID-fbackup1" ==> Identical, OK
 qemu_img compare "TEST_DIR/PID-bsync2" "TEST_DIR/PID-fbackup2" ==> Identical, 
OK!
 qemu_img compare "TEST_DIR/PID-img" "TEST_DIR/PID-fbackup2" ==> Identical, OK!
 
+
+=== API failure tests ===
+
+--- Preparing image & VM ---
+
+{"execute": "blockdev-add", "arguments": {"driver": "qcow2", "file": 
{"driver": "file", "filename": "TEST_DIR/PID-img"}, "node-name": "drive0"}}
+{"return": {}}
+{"execute": "device_add", "arguments": {"drive": "drive0", "driver": 
"scsi-hd", "id": "device0"}}
+{"return": {}}
+
+{}
+{"execute": "job-dismiss", "arguments": {"id": "bdc-file-job"}}
+{"return": {}}
+{}
+{}
+{"execute": "job-dismiss", "arguments": {"id": "bdc-fmt-job"}}
+{"return": {}}
+{}
+
+{"execute": "block-dirty-bitmap-add", "arguments": {"granularity": 65536, 
"name": "bitmap0", "node": "drive0"}}
+{"return": {}}
+
+-- Testing invalid QMP commands --
+
+-- Sync mode incremental tests --
+
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "on-success", 
"device": "drive0", "job-id": "api_job", "sync": "incremental", "target": 
"backup_target"}}
+{"error": {"class": "GenericError", "desc": "must provide a valid bitmap name 
for 'incremental' sync mode"}}
+
+{"execute": "blockdev-backup", "arguments": {"bitmap-mode": "always", 
"device": "drive0", "job-id": "api_job", "sync": "incremental", "target": 
"backup_target"}}
+{"error": {"class": 

[Qemu-devel] [PULL 30/36] block/backup: teach TOP to never copy unallocated regions

2019-08-16 Thread John Snow
Presently, if sync=TOP is selected, we mark the entire bitmap as dirty.
In the write notifier handler, we dutifully copy out such regions.

Fix this in three parts:

1. Mark the bitmap as being initialized before the first yield.
2. After the first yield but before the backup loop, interrogate the
allocation status asynchronously and initialize the bitmap.
3. Teach the write notifier to interrogate allocation status if it is
invoked during bitmap initialization.

As an effect of this patch, the job progress for TOP backups
now behaves like this:

- total progress starts at bdrv_length.
- As allocation status is interrogated, total progress decreases.
- As blocks are copied, current progress increases.

Taken together, the floor and ceiling move to meet each other.


Signed-off-by: John Snow 
Message-id: 20190716000117.25219-10-js...@redhat.com
[Remove ret = -ECANCELED change. --js]
[Squash in conflict resolution based on Max's patch --js]
Message-id: c8b0ab36-79c8-0b4b-3193-4e12ed8c8...@redhat.com
Reviewed-by: Max Reitz 
Signed-off-by: John Snow 
---
 block/backup.c | 79 --
 block/trace-events |  1 +
 2 files changed, 71 insertions(+), 9 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index f6bf32c9438..9e1382ec5c6 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -58,6 +58,7 @@ typedef struct BackupBlockJob {
 int64_t copy_range_size;
 
 bool serialize_target_writes;
+bool initializing_bitmap;
 } BackupBlockJob;
 
 static const BlockJobDriver backup_job_driver;
@@ -227,6 +228,35 @@ static int backup_is_cluster_allocated(BackupBlockJob *s, 
int64_t offset,
 }
 }
 
+/**
+ * Reset bits in copy_bitmap starting at offset if they represent unallocated
+ * data in the image. May reset subsequent contiguous bits.
+ * @return 0 when the cluster at @offset was unallocated,
+ * 1 otherwise, and -ret on error.
+ */
+static int64_t backup_bitmap_reset_unallocated(BackupBlockJob *s,
+   int64_t offset, int64_t *count)
+{
+int ret;
+int64_t clusters, bytes, estimate;
+
+ret = backup_is_cluster_allocated(s, offset, );
+if (ret < 0) {
+return ret;
+}
+
+bytes = clusters * s->cluster_size;
+
+if (!ret) {
+bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
+estimate = bdrv_get_dirty_count(s->copy_bitmap);
+job_progress_set_remaining(>common.job, estimate);
+}
+
+*count = bytes;
+return ret;
+}
+
 static int coroutine_fn backup_do_cow(BackupBlockJob *job,
   int64_t offset, uint64_t bytes,
   bool *error_is_read,
@@ -236,6 +266,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 int ret = 0;
 int64_t start, end; /* bytes */
 void *bounce_buffer = NULL;
+int64_t status_bytes;
 
 qemu_co_rwlock_rdlock(>flush_rwlock);
 
@@ -262,6 +293,17 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 dirty_end = end;
 }
 
+if (job->initializing_bitmap) {
+ret = backup_bitmap_reset_unallocated(job, start, _bytes);
+if (ret == 0) {
+trace_backup_do_cow_skip_range(job, start, status_bytes);
+start += status_bytes;
+continue;
+}
+/* Clamp to known allocated region */
+dirty_end = MIN(dirty_end, start + status_bytes);
+}
+
 trace_backup_do_cow_process(job, start);
 
 if (job->use_copy_range) {
@@ -446,18 +488,9 @@ static int coroutine_fn backup_loop(BackupBlockJob *job)
 int64_t offset;
 BdrvDirtyBitmapIter *bdbi;
 int ret = 0;
-int64_t dummy;
 
 bdbi = bdrv_dirty_iter_new(job->copy_bitmap);
 while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
-if (job->sync_mode == MIRROR_SYNC_MODE_TOP &&
-!backup_is_cluster_allocated(job, offset, ))
-{
-bdrv_reset_dirty_bitmap(job->copy_bitmap, offset,
-job->cluster_size);
-continue;
-}
-
 do {
 if (yield_and_check(job)) {
 goto out;
@@ -488,6 +521,13 @@ static void backup_init_copy_bitmap(BackupBlockJob *job)
NULL, true);
 assert(ret);
 } else {
+if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
+/*
+ * We can't hog the coroutine to initialize this thoroughly.
+ * Set a flag and resume work when we are able to yield safely.
+ */
+job->initializing_bitmap = true;
+}
 bdrv_set_dirty_bitmap(job->copy_bitmap, 0, job->len);
 }
 
@@ -509,6 +549,26 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 s->before_write.notify = backup_before_write_notify;
 bdrv_add_before_write_notifier(bs, >before_write);
 
+if (s->sync_mode 

[Qemu-devel] [PULL 19/36] blockdev: reduce aio_context locked sections in bitmap add/remove

2019-08-16 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

Commit 0a6c86d024c52 reintroduced these locks in the add/remove
functions, to protect persistent-bitmap-related IO from intersecting
with other IO. But the other bitmap-related functions called here are
unrelated to that problem, and there is no need to keep those calls
inside the critical sections.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: John Snow 
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190708220502.12977-2-js...@redhat.com
Signed-off-by: John Snow 
---
 blockdev.c | 30 +-
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index a44ab1f709e..bcd766a1a24 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2813,7 +2813,6 @@ void qmp_block_dirty_bitmap_add(const char *node, const 
char *name,
 {
 BlockDriverState *bs;
 BdrvDirtyBitmap *bitmap;
-AioContext *aio_context = NULL;
 
 if (!name || name[0] == '\0') {
 error_setg(errp, "Bitmap name cannot be empty");
@@ -2849,16 +2848,20 @@ void qmp_block_dirty_bitmap_add(const char *node, const 
char *name,
 }
 
 if (persistent) {
-aio_context = bdrv_get_aio_context(bs);
+AioContext *aio_context = bdrv_get_aio_context(bs);
+bool ok;
+
 aio_context_acquire(aio_context);
-if (!bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp)) {
-goto out;
+ok = bdrv_can_store_new_dirty_bitmap(bs, name, granularity, errp);
+aio_context_release(aio_context);
+if (!ok) {
+return;
 }
 }
 
 bitmap = bdrv_create_dirty_bitmap(bs, granularity, name, errp);
 if (bitmap == NULL) {
-goto out;
+return;
 }
 
 if (disabled) {
@@ -2866,10 +2869,6 @@ void qmp_block_dirty_bitmap_add(const char *node, const 
char *name,
 }
 
 bdrv_dirty_bitmap_set_persistence(bitmap, persistent);
- out:
-if (aio_context) {
-aio_context_release(aio_context);
-}
 }
 
 void qmp_block_dirty_bitmap_remove(const char *node, const char *name,
@@ -2877,8 +2876,6 @@ void qmp_block_dirty_bitmap_remove(const char *node, 
const char *name,
 {
 BlockDriverState *bs;
 BdrvDirtyBitmap *bitmap;
-Error *local_err = NULL;
-AioContext *aio_context = NULL;
 
 bitmap = block_dirty_bitmap_lookup(node, name, , errp);
 if (!bitmap || !bs) {
@@ -2891,20 +2888,19 @@ void qmp_block_dirty_bitmap_remove(const char *node, 
const char *name,
 }
 
 if (bdrv_dirty_bitmap_get_persistence(bitmap)) {
-aio_context = bdrv_get_aio_context(bs);
+AioContext *aio_context = bdrv_get_aio_context(bs);
+Error *local_err = NULL;
+
 aio_context_acquire(aio_context);
 bdrv_remove_persistent_dirty_bitmap(bs, name, _err);
+aio_context_release(aio_context);
 if (local_err != NULL) {
 error_propagate(errp, local_err);
-goto out;
+return;
 }
 }
 
 bdrv_release_dirty_bitmap(bs, bitmap);
- out:
-if (aio_context) {
-aio_context_release(aio_context);
-}
 }
 
 /**
-- 
2.21.0




[Qemu-devel] [PULL 16/36] iotests: Add virtio-scsi device helper

2019-08-16 Thread John Snow
Seems that it comes up enough.
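
Typical use, as seen in the conversions below (the device IDs and drive
name are arbitrary):

    vm.add_device("{},id=scsi0".format(iotests.get_virtio_scsi_device()))
    vm.add_device("scsi-hd,id=scsi-hd0,drive=drive0")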

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-17-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/040| 6 +-
 tests/qemu-iotests/093| 6 ++
 tests/qemu-iotests/139| 7 ++-
 tests/qemu-iotests/238| 5 +
 tests/qemu-iotests/iotests.py | 4 
 5 files changed, 10 insertions(+), 18 deletions(-)

diff --git a/tests/qemu-iotests/040 b/tests/qemu-iotests/040
index aa0b1847e30..6db9abf8e6e 100755
--- a/tests/qemu-iotests/040
+++ b/tests/qemu-iotests/040
@@ -85,11 +85,7 @@ class TestSingleDrive(ImageCommitTestCase):
 qemu_io('-f', 'raw', '-c', 'write -P 0xab 0 524288', backing_img)
 qemu_io('-f', iotests.imgfmt, '-c', 'write -P 0xef 524288 524288', 
mid_img)
 self.vm = iotests.VM().add_drive(test_img, 
"node-name=top,backing.node-name=mid,backing.backing.node-name=base", 
interface="none")
-if iotests.qemu_default_machine == 's390-ccw-virtio':
-self.vm.add_device("virtio-scsi-ccw")
-else:
-self.vm.add_device("virtio-scsi-pci")
-
+self.vm.add_device(iotests.get_virtio_scsi_device())
 self.vm.add_device("scsi-hd,id=scsi0,drive=drive0")
 self.vm.launch()
 self.has_quit = False
diff --git a/tests/qemu-iotests/093 b/tests/qemu-iotests/093
index 4b2cac1d0c6..3c4f5173cea 100755
--- a/tests/qemu-iotests/093
+++ b/tests/qemu-iotests/093
@@ -367,10 +367,8 @@ class ThrottleTestGroupNames(iotests.QMPTestCase):
 class ThrottleTestRemovableMedia(iotests.QMPTestCase):
 def setUp(self):
 self.vm = iotests.VM()
-if iotests.qemu_default_machine == 's390-ccw-virtio':
-self.vm.add_device("virtio-scsi-ccw,id=virtio-scsi")
-else:
-self.vm.add_device("virtio-scsi-pci,id=virtio-scsi")
+self.vm.add_device("{},id=virtio-scsi".format(
+iotests.get_virtio_scsi_device()))
 self.vm.launch()
 
 def tearDown(self):
diff --git a/tests/qemu-iotests/139 b/tests/qemu-iotests/139
index 933b45121a9..2176ea51ba8 100755
--- a/tests/qemu-iotests/139
+++ b/tests/qemu-iotests/139
@@ -35,11 +35,8 @@ class TestBlockdevDel(iotests.QMPTestCase):
 def setUp(self):
 iotests.qemu_img('create', '-f', iotests.imgfmt, base_img, '1M')
 self.vm = iotests.VM()
-if iotests.qemu_default_machine == 's390-ccw-virtio':
-self.vm.add_device("virtio-scsi-ccw,id=virtio-scsi")
-else:
-self.vm.add_device("virtio-scsi-pci,id=virtio-scsi")
-
+self.vm.add_device("{},id=virtio-scsi".format(
+iotests.get_virtio_scsi_device()))
 self.vm.launch()
 
 def tearDown(self):
diff --git a/tests/qemu-iotests/238 b/tests/qemu-iotests/238
index 08bc7e6b4be..e5ac2b2ff84 100755
--- a/tests/qemu-iotests/238
+++ b/tests/qemu-iotests/238
@@ -23,10 +23,7 @@ import os
 import iotests
 from iotests import log
 
-if iotests.qemu_default_machine == 's390-ccw-virtio':
-virtio_scsi_device = 'virtio-scsi-ccw'
-else:
-virtio_scsi_device = 'virtio-scsi-pci'
+virtio_scsi_device = iotests.get_virtio_scsi_device()
 
 vm = iotests.VM()
 vm.launch()
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 385dbad16ac..84438e837cb 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -164,6 +164,10 @@ def qemu_io_silent(*args):
  (-exitcode, ' '.join(args)))
 return exitcode
 
+def get_virtio_scsi_device():
+if qemu_default_machine == 's390-ccw-virtio':
+return 'virtio-scsi-ccw'
+return 'virtio-scsi-pci'
 
 class QemuIoInteractive:
 def __init__(self, *args):
-- 
2.21.0




[Qemu-devel] [PULL 25/36] block/backup: hoist bitmap check into QMP interface

2019-08-16 Thread John Snow
This is nicer to do in the unified QMP interface that we have now,
because it lets us report errors to the user with the right terminology.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190716000117.25219-5-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c | 13 -
 blockdev.c | 10 ++
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index f8309be01b3..f704c83a98f 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -576,6 +576,10 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 assert(bs);
 assert(target);
 
+/* QMP interface protects us from these cases */
+assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
+assert(sync_bitmap || sync_mode != MIRROR_SYNC_MODE_BITMAP);
+
 if (bs == target) {
 error_setg(errp, "Source and target cannot be the same");
 return NULL;
@@ -607,16 +611,7 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 return NULL;
 }
 
-/* QMP interface should have handled translating this to bitmap mode */
-assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
-
 if (sync_mode == MIRROR_SYNC_MODE_BITMAP) {
-if (!sync_bitmap) {
-error_setg(errp, "must provide a valid bitmap name for "
-   "'%s' sync mode", MirrorSyncMode_str(sync_mode));
-return NULL;
-}
-
 /* If we need to write to this bitmap, check that we can: */
 if (bitmap_mode != BITMAP_SYNC_MODE_NEVER &&
 bdrv_dirty_bitmap_check(sync_bitmap, BDRV_BITMAP_DEFAULT, errp)) {
diff --git a/blockdev.c b/blockdev.c
index 210226d8290..f889da0b427 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3529,6 +3529,16 @@ static BlockJob *do_backup_common(BackupCommon *backup,
 return NULL;
 }
 
+if ((backup->sync == MIRROR_SYNC_MODE_BITMAP) ||
+(backup->sync == MIRROR_SYNC_MODE_INCREMENTAL)) {
+/* done before desugaring 'incremental' to print the right message */
+if (!backup->has_bitmap) {
+error_setg(errp, "must provide a valid bitmap name for "
+   "'%s' sync mode", MirrorSyncMode_str(backup->sync));
+return NULL;
+}
+}
+
 if (backup->sync == MIRROR_SYNC_MODE_INCREMENTAL) {
 if (backup->has_bitmap_mode &&
 backup->bitmap_mode != BITMAP_SYNC_MODE_ON_SUCCESS) {
-- 
2.21.0




[Qemu-devel] [PULL 20/36] qapi: implement block-dirty-bitmap-remove transaction action

2019-08-16 Thread John Snow
It is used for transactional movement of a bitmap (possible in
conjunction with the merge command). Transactional bitmap movement is
needed in external-snapshot scenarios, when we don't want to leave a
copy of the bitmap in the base image.
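
A sketch of such a transaction (node names follow iotests 254, which
exercises exactly this pattern; 'top' is the new external snapshot
file):

    vm.qmp_log('transaction', indent=2, actions=[
        {'type': 'blockdev-snapshot-sync',
         'data': {'device': 'drive0', 'snapshot-file': top,
                  'snapshot-node-name': 'snap'}},
        # re-create and re-populate the bitmap on the new top layer...
        {'type': 'block-dirty-bitmap-add',
         'data': {'node': 'snap', 'name': 'bitmap2', 'persistent': True}},
        {'type': 'block-dirty-bitmap-merge',
         'data': {'node': 'snap', 'target': 'bitmap2',
                  'bitmaps': [{'node': 'base', 'name': 'bitmap2'}]}},
        # ...and atomically drop the original so it is not stored in base
        {'type': 'block-dirty-bitmap-remove',
         'data': {'node': 'base', 'name': 'bitmap2'}},
    ])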

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190708220502.12977-3-js...@redhat.com
[Edited "since" version to 4.2 --js]
Signed-off-by: John Snow 
---
 block.c|  2 +-
 block/dirty-bitmap.c   | 15 +++
 blockdev.c | 79 +++---
 include/block/dirty-bitmap.h   |  2 +-
 migration/block-dirty-bitmap.c |  2 +-
 qapi/transaction.json  |  2 +
 6 files changed, 85 insertions(+), 17 deletions(-)

diff --git a/block.c b/block.c
index 2a2d0696672..3e698e9cabd 100644
--- a/block.c
+++ b/block.c
@@ -5346,7 +5346,7 @@ static void coroutine_fn 
bdrv_co_invalidate_cache(BlockDriverState *bs,
 for (bm = bdrv_dirty_bitmap_next(bs, NULL); bm;
  bm = bdrv_dirty_bitmap_next(bs, bm))
 {
-bdrv_dirty_bitmap_set_migration(bm, false);
+bdrv_dirty_bitmap_skip_store(bm, false);
 }
 
 ret = refresh_total_sectors(bs, bs->total_sectors);
diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 75a5daf116f..134e0c9a0c8 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -48,10 +48,9 @@ struct BdrvDirtyBitmap {
 bool inconsistent;  /* bitmap is persistent, but inconsistent.
It cannot be used at all in any way, except
a QMP user can remove it. */
-bool migration; /* Bitmap is selected for migration, it should
-   not be stored on the next inactivation
-   (persistent flag doesn't matter until next
-   invalidation).*/
+bool skip_store;/* We are either migrating or deleting this
+ * bitmap; it should not be stored on the next
+ * inactivation. */
 QLIST_ENTRY(BdrvDirtyBitmap) list;
 };
 
@@ -762,16 +761,16 @@ void bdrv_dirty_bitmap_set_inconsistent(BdrvDirtyBitmap 
*bitmap)
 }
 
 /* Called with BQL taken. */
-void bdrv_dirty_bitmap_set_migration(BdrvDirtyBitmap *bitmap, bool migration)
+void bdrv_dirty_bitmap_skip_store(BdrvDirtyBitmap *bitmap, bool skip)
 {
 qemu_mutex_lock(bitmap->mutex);
-bitmap->migration = migration;
+bitmap->skip_store = skip;
 qemu_mutex_unlock(bitmap->mutex);
 }
 
 bool bdrv_dirty_bitmap_get_persistence(BdrvDirtyBitmap *bitmap)
 {
-return bitmap->persistent && !bitmap->migration;
+return bitmap->persistent && !bitmap->skip_store;
 }
 
 bool bdrv_dirty_bitmap_inconsistent(const BdrvDirtyBitmap *bitmap)
@@ -783,7 +782,7 @@ bool bdrv_has_changed_persistent_bitmaps(BlockDriverState 
*bs)
 {
 BdrvDirtyBitmap *bm;
 QLIST_FOREACH(bm, >dirty_bitmaps, list) {
-if (bm->persistent && !bm->readonly && !bm->migration) {
+if (bm->persistent && !bm->readonly && !bm->skip_store) {
 return true;
 }
 }
diff --git a/blockdev.c b/blockdev.c
index bcd766a1a24..210226d8290 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2136,6 +2136,51 @@ static void 
block_dirty_bitmap_merge_prepare(BlkActionState *common,
 errp);
 }
 
+static BdrvDirtyBitmap *do_block_dirty_bitmap_remove(
+const char *node, const char *name, bool release,
+BlockDriverState **bitmap_bs, Error **errp);
+
+static void block_dirty_bitmap_remove_prepare(BlkActionState *common,
+  Error **errp)
+{
+BlockDirtyBitmap *action;
+BlockDirtyBitmapState *state = DO_UPCAST(BlockDirtyBitmapState,
+ common, common);
+
+if (action_check_completion_mode(common, errp) < 0) {
+return;
+}
+
+action = common->action->u.block_dirty_bitmap_remove.data;
+
+state->bitmap = do_block_dirty_bitmap_remove(action->node, action->name,
+ false, >bs, errp);
+if (state->bitmap) {
+bdrv_dirty_bitmap_skip_store(state->bitmap, true);
+bdrv_dirty_bitmap_set_busy(state->bitmap, true);
+}
+}
+
+static void block_dirty_bitmap_remove_abort(BlkActionState *common)
+{
+BlockDirtyBitmapState *state = DO_UPCAST(BlockDirtyBitmapState,
+ common, common);
+
+if (state->bitmap) {
+bdrv_dirty_bitmap_skip_store(state->bitmap, false);
+bdrv_dirty_bitmap_set_busy(state->bitmap, false);
+}
+}
+
+static void block_dirty_bitmap_remove_commit(BlkActionState *common)
+{
+BlockDirtyBitmapState *state = DO_UPCAST(BlockDirtyBitmapState,
+ common, common);
+
+

[Qemu-devel] [PULL 13/36] iotests: add testing shim for script-style python tests

2019-08-16 Thread John Snow
Because the new-style Python tests don't use the iotests.main() test
launcher, we don't turn on debug logging for these scripts when they
are invoked via ./check -d.

Refactor the launcher shim into new and old style shims so that they
share environmental configuration.

Two cleanup notes: 'debug' was not actually used as a global, and there
was no reason to create a class in an inner scope just to supply default
arguments; we can simply create an instance of the runner with the
values we want instead.
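
For a script-style test the resulting entry point is a one-liner (this
is how iotests 257 uses it later in this series; the qcow2 restriction
is just that test's choice):

    def main():
        ...  # script-style test body: plain functions and log() calls

    if __name__ == '__main__':
        iotests.script_main(main, supported_fmts=['qcow2'])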

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-14-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/iotests.py | 40 +++
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 91172c39a52..7fc062cdcf4 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -61,7 +61,6 @@ cachemode = os.environ.get('CACHEMODE')
 qemu_default_machine = os.environ.get('QEMU_DEFAULT_MACHINE')
 
 socket_scm_helper = os.environ.get('SOCKET_SCM_HELPER', 'socket_scm_helper')
-debug = False
 
 luks_default_secret_object = 'secret,id=keysec0,data=' + \
  os.environ.get('IMGKEYSECRET', '')
@@ -858,11 +857,22 @@ def skip_if_unsupported(required_formats=[], 
read_only=False):
 return func_wrapper
 return skip_test_decorator
 
-def main(supported_fmts=[], supported_oses=['linux'], supported_cache_modes=[],
- unsupported_fmts=[]):
-'''Run tests'''
+def execute_unittest(output, verbosity, debug):
+runner = unittest.TextTestRunner(stream=output, descriptions=True,
+ verbosity=verbosity)
+try:
+# unittest.main() will use sys.exit(); so expect a SystemExit
+# exception
+unittest.main(testRunner=runner)
+finally:
+if not debug:
+sys.stderr.write(re.sub(r'Ran (\d+) tests? in [\d.]+s',
+r'Ran \1 tests', output.getvalue()))
 
-global debug
+def execute_test(test_function=None,
+ supported_fmts=[], supported_oses=['linux'],
+ supported_cache_modes=[], unsupported_fmts=[]):
+"""Run either unittest or script-style tests."""
 
 # We are using TEST_DIR and QEMU_DEFAULT_MACHINE as proxies to
 # indicate that we're not being run via "check". There may be
@@ -894,13 +904,15 @@ def main(supported_fmts=[], supported_oses=['linux'], 
supported_cache_modes=[],
 
 logging.basicConfig(level=(logging.DEBUG if debug else logging.WARN))
 
-class MyTestRunner(unittest.TextTestRunner):
-def __init__(self, stream=output, descriptions=True, 
verbosity=verbosity):
-unittest.TextTestRunner.__init__(self, stream, descriptions, 
verbosity)
+if not test_function:
+execute_unittest(output, verbosity, debug)
+else:
+test_function()
 
-# unittest.main() will use sys.exit() so expect a SystemExit exception
-try:
-unittest.main(testRunner=MyTestRunner)
-finally:
-if not debug:
-sys.stderr.write(re.sub(r'Ran (\d+) tests? in [\d.]+s', r'Ran \1 
tests', output.getvalue()))
+def script_main(test_function, *args, **kwargs):
+"""Run script-style tests outside of the unittest framework"""
+execute_test(test_function, *args, **kwargs)
+
+def main(*args, **kwargs):
+"""Run tests using the unittest framework"""
+execute_test(None, *args, **kwargs)
-- 
2.21.0




[Qemu-devel] [PULL 21/36] iotests: test bitmap moving inside 254

2019-08-16 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

Test persistent bitmap copying with and without removal of original
bitmap.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190708220502.12977-4-js...@redhat.com
[Edited comment "bitmap1" --> "bitmap2" as per review. --js]
Signed-off-by: John Snow 
---
 tests/qemu-iotests/254 | 30 +-
 tests/qemu-iotests/254.out | 82 ++
 2 files changed, 110 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/254 b/tests/qemu-iotests/254
index 8edba91c5d4..09584f3f7de 100755
--- a/tests/qemu-iotests/254
+++ b/tests/qemu-iotests/254
@@ -1,6 +1,6 @@
 #!/usr/bin/env python
 #
-# Test external snapshot with bitmap copying.
+# Test external snapshot with bitmap copying and moving.
 #
 # Copyright (c) 2019 Virtuozzo International GmbH. All rights reserved.
 #
@@ -32,6 +32,10 @@ vm = iotests.VM().add_drive(disk, opts='node-name=base')
 vm.launch()
 
 vm.qmp_log('block-dirty-bitmap-add', node='drive0', name='bitmap0')
+vm.qmp_log('block-dirty-bitmap-add', node='drive0', name='bitmap1',
+   persistent=True)
+vm.qmp_log('block-dirty-bitmap-add', node='drive0', name='bitmap2',
+   persistent=True)
 
 vm.hmp_qemu_io('drive0', 'write 0 512K')
 
@@ -39,16 +43,38 @@ vm.qmp_log('transaction', indent=2, actions=[
 {'type': 'blockdev-snapshot-sync',
  'data': {'device': 'drive0', 'snapshot-file': top,
   'snapshot-node-name': 'snap'}},
+
+# copy non-persistent bitmap0
 {'type': 'block-dirty-bitmap-add',
  'data': {'node': 'snap', 'name': 'bitmap0'}},
 {'type': 'block-dirty-bitmap-merge',
  'data': {'node': 'snap', 'target': 'bitmap0',
-  'bitmaps': [{'node': 'base', 'name': 'bitmap0'}]}}
+  'bitmaps': [{'node': 'base', 'name': 'bitmap0'}]}},
+
+# copy persistent bitmap1, original will be saved to base image
+{'type': 'block-dirty-bitmap-add',
+ 'data': {'node': 'snap', 'name': 'bitmap1', 'persistent': True}},
+{'type': 'block-dirty-bitmap-merge',
+ 'data': {'node': 'snap', 'target': 'bitmap1',
+  'bitmaps': [{'node': 'base', 'name': 'bitmap1'}]}},
+
+# move persistent bitmap2, original will be removed and not saved
+# to base image
+{'type': 'block-dirty-bitmap-add',
+ 'data': {'node': 'snap', 'name': 'bitmap2', 'persistent': True}},
+{'type': 'block-dirty-bitmap-merge',
+ 'data': {'node': 'snap', 'target': 'bitmap2',
+  'bitmaps': [{'node': 'base', 'name': 'bitmap2'}]}},
+{'type': 'block-dirty-bitmap-remove',
+ 'data': {'node': 'base', 'name': 'bitmap2'}}
 ], filters=[iotests.filter_qmp_testfiles])
 
 result = vm.qmp('query-block')['return'][0]
 log("query-block: device = {}, node-name = {}, dirty-bitmaps:".format(
 result['device'], result['inserted']['node-name']))
 log(result['dirty-bitmaps'], indent=2)
+log("\nbitmaps in backing image:")
+log(result['inserted']['image']['backing-image']['format-specific'] \
+['data']['bitmaps'], indent=2)
 
 vm.shutdown()
diff --git a/tests/qemu-iotests/254.out b/tests/qemu-iotests/254.out
index d7394cf0026..d185c0532f6 100644
--- a/tests/qemu-iotests/254.out
+++ b/tests/qemu-iotests/254.out
@@ -1,5 +1,9 @@
 {"execute": "block-dirty-bitmap-add", "arguments": {"name": "bitmap0", "node": 
"drive0"}}
 {"return": {}}
+{"execute": "block-dirty-bitmap-add", "arguments": {"name": "bitmap1", "node": 
"drive0", "persistent": true}}
+{"return": {}}
+{"execute": "block-dirty-bitmap-add", "arguments": {"name": "bitmap2", "node": 
"drive0", "persistent": true}}
+{"return": {}}
 {
   "execute": "transaction",
   "arguments": {
@@ -31,6 +35,55 @@
   "target": "bitmap0"
 },
 "type": "block-dirty-bitmap-merge"
+  },
+  {
+"data": {
+  "name": "bitmap1",
+  "node": "snap",
+  "persistent": true
+},
+"type": "block-dirty-bitmap-add"
+  },
+  {
+"data": {
+  "bitmaps": [
+{
+  "name": "bitmap1",
+  "node": "base"
+}
+  ],
+  "node": "snap",
+  "target": "bitmap1"
+},
+"type": "block-dirty-bitmap-merge"
+  },
+  {
+"data": {
+  "name": "bitmap2",
+  "node": "snap",
+  "persistent": true
+},
+"type": "block-dirty-bitmap-add"
+  },
+  {
+"data": {
+  "bitmaps": [
+{
+  "name": "bitmap2",
+  "node": "base"
+}
+  ],
+  "node": "snap",
+  "target": "bitmap2"
+},
+"type": "block-dirty-bitmap-merge"
+  },
+  {
+"data": {
+  "name": "bitmap2",
+  "node": "base"
+},
+"type": "block-dirty-bitmap-remove"
   }
 ]
   }
@@ -40,6 +93,24 @@
 }
 query-block: device = drive0, node-name = snap, dirty-bitmaps:
 [
+ 

[Qemu-devel] [PULL 15/36] iotests: teach FilePath to produce multiple paths

2019-08-16 Thread John Snow
Use "FilePaths" instead of "FilePath" to request multiple files be
cleaned up after we leave that object's scope.

This is not crucial, but it saves a little typing.
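
A minimal usage sketch (the image names here are arbitrary):

    # Both paths are generated under the test directory, and both files
    # are removed when the with-block exits (missing files are ignored).
    with iotests.FilePaths(['base.img', 'top.img']) as (base_path, top_path):
        iotests.qemu_img('create', '-f', iotests.imgfmt, base_path, '1M')
        iotests.qemu_img('create', '-f', iotests.imgfmt,
                         '-b', base_path, top_path)

The single-name FilePath('test.img') form keeps working as before.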

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-16-js...@redhat.com
Signed-off-by: John Snow 
---
 tests/qemu-iotests/iotests.py | 34 --
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 81ae7b911ac..385dbad16ac 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -358,31 +358,45 @@ class Timeout:
 def timeout(self, signum, frame):
 raise Exception(self.errmsg)
 
+def file_pattern(name):
+return "{0}-{1}".format(os.getpid(), name)
 
-class FilePath(object):
-'''An auto-generated filename that cleans itself up.
+class FilePaths(object):
+"""
+FilePaths is an auto-generated filename that cleans itself up.
 
 Use this context manager to generate filenames and ensure that the file
 gets deleted::
 
-with TestFilePath('test.img') as img_path:
+with FilePaths(['test.img']) as img_path:
 qemu_img('create', img_path, '1G')
 # migration_sock_path is automatically deleted
-'''
-def __init__(self, name):
-filename = '{0}-{1}'.format(os.getpid(), name)
-self.path = os.path.join(test_dir, filename)
+"""
+def __init__(self, names):
+self.paths = []
+for name in names:
+self.paths.append(os.path.join(test_dir, file_pattern(name)))
 
 def __enter__(self):
-return self.path
+return self.paths
 
 def __exit__(self, exc_type, exc_val, exc_tb):
 try:
-os.remove(self.path)
+for path in self.paths:
+os.remove(path)
 except OSError:
 pass
 return False
 
+class FilePath(FilePaths):
+"""
+FilePath is a specialization of FilePaths that takes a single filename.
+"""
+def __init__(self, name):
+super(FilePath, self).__init__([name])
+
+def __enter__(self):
+return self.paths[0]
 
 def file_path_remover():
 for path in reversed(file_path_remover.paths):
@@ -407,7 +421,7 @@ def file_path(*names):
 
 paths = []
 for name in names:
-filename = '{0}-{1}'.format(os.getpid(), name)
+filename = file_pattern(name)
 path = os.path.join(test_dir, filename)
 file_path_remover.paths.append(path)
 paths.append(path)
-- 
2.21.0




[Qemu-devel] [PULL 11/36] block/backup: upgrade copy_bitmap to BdrvDirtyBitmap

2019-08-16 Thread John Snow
This simplifies some interface matters, namely the initialization and
(later) the merging of the manifest back into the sync_bitmap, if one
was provided.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-12-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c | 84 ++
 1 file changed, 44 insertions(+), 40 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index d07b838930f..474f8eeae29 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -38,7 +38,10 @@ typedef struct CowRequest {
 typedef struct BackupBlockJob {
 BlockJob common;
 BlockBackend *target;
+
 BdrvDirtyBitmap *sync_bitmap;
+BdrvDirtyBitmap *copy_bitmap;
+
 MirrorSyncMode sync_mode;
 BitmapSyncMode bitmap_mode;
 BlockdevOnError on_source_error;
@@ -51,7 +54,6 @@ typedef struct BackupBlockJob {
 NotifierWithReturn before_write;
 QLIST_HEAD(, CowRequest) inflight_reqs;
 
-HBitmap *copy_bitmap;
 bool use_copy_range;
 int64_t copy_range_size;
 
@@ -113,7 +115,7 @@ static int coroutine_fn 
backup_cow_with_bounce_buffer(BackupBlockJob *job,
 int write_flags = job->serialize_target_writes ? BDRV_REQ_SERIALISING : 0;
 
 assert(QEMU_IS_ALIGNED(start, job->cluster_size));
-hbitmap_reset(job->copy_bitmap, start, job->cluster_size);
+bdrv_reset_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
 nbytes = MIN(job->cluster_size, job->len - start);
 if (!*bounce_buffer) {
 *bounce_buffer = blk_blockalign(blk, job->cluster_size);
@@ -146,7 +148,7 @@ static int coroutine_fn 
backup_cow_with_bounce_buffer(BackupBlockJob *job,
 
 return nbytes;
 fail:
-hbitmap_set(job->copy_bitmap, start, job->cluster_size);
+bdrv_set_dirty_bitmap(job->copy_bitmap, start, job->cluster_size);
 return ret;
 
 }
@@ -169,12 +171,14 @@ static int coroutine_fn 
backup_cow_with_offload(BackupBlockJob *job,
 assert(QEMU_IS_ALIGNED(start, job->cluster_size));
 nbytes = MIN(job->copy_range_size, end - start);
 nr_clusters = DIV_ROUND_UP(nbytes, job->cluster_size);
-hbitmap_reset(job->copy_bitmap, start, job->cluster_size * nr_clusters);
+bdrv_reset_dirty_bitmap(job->copy_bitmap, start,
+job->cluster_size * nr_clusters);
 ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
 read_flags, write_flags);
 if (ret < 0) {
 trace_backup_do_cow_copy_range_fail(job, start, ret);
-hbitmap_set(job->copy_bitmap, start, job->cluster_size * nr_clusters);
+bdrv_set_dirty_bitmap(job->copy_bitmap, start,
+  job->cluster_size * nr_clusters);
 return ret;
 }
 
@@ -204,13 +208,14 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
 while (start < end) {
 int64_t dirty_end;
 
-if (!hbitmap_get(job->copy_bitmap, start)) {
+if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
 trace_backup_do_cow_skip(job, start);
 start += job->cluster_size;
 continue; /* already copied */
 }
 
-dirty_end = hbitmap_next_zero(job->copy_bitmap, start, (end - start));
+dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
+(end - start));
 if (dirty_end < 0) {
 dirty_end = end;
 }
@@ -307,14 +312,16 @@ static void backup_abort(Job *job)
 static void backup_clean(Job *job)
 {
 BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
+BlockDriverState *bs = blk_bs(s->common.blk);
+
+if (s->copy_bitmap) {
+bdrv_release_dirty_bitmap(bs, s->copy_bitmap);
+s->copy_bitmap = NULL;
+}
+
 assert(s->target);
 blk_unref(s->target);
 s->target = NULL;
-
-if (s->copy_bitmap) {
-hbitmap_free(s->copy_bitmap);
-s->copy_bitmap = NULL;
-}
 }
 
 void backup_do_checkpoint(BlockJob *job, Error **errp)
@@ -329,7 +336,7 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
 return;
 }
 
-hbitmap_set(backup_job->copy_bitmap, 0, backup_job->len);
+bdrv_set_dirty_bitmap(backup_job->copy_bitmap, 0, backup_job->len);
 }
 
 static void backup_drain(BlockJob *job)
@@ -398,59 +405,52 @@ static bool bdrv_is_unallocated_range(BlockDriverState 
*bs,
 
 static int coroutine_fn backup_loop(BackupBlockJob *job)
 {
-int ret;
 bool error_is_read;
 int64_t offset;
-HBitmapIter hbi;
+BdrvDirtyBitmapIter *bdbi;
 BlockDriverState *bs = blk_bs(job->common.blk);
+int ret = 0;
 
-hbitmap_iter_init(&hbi, job->copy_bitmap, 0);
-while ((offset = hbitmap_iter_next(&hbi)) != -1) {
+bdbi = bdrv_dirty_iter_new(job->copy_bitmap);
+while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
 if (job->sync_mode == MIRROR_SYNC_MODE_TOP &&
 bdrv_is_unallocated_range(bs, offset, job->cluster_size))
 {
-   

[Qemu-devel] [PULL 18/36] block/backup: loosen restriction on readonly bitmaps

2019-08-16 Thread John Snow
With the "never" sync policy, we actually can utilize readonly bitmaps
now. Loosen the check at the QMP level, and tighten it based on
provided arguments down at the job creation level instead.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-19-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c | 6 ++
 blockdev.c | 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/block/backup.c b/block/backup.c
index 2be570c0bfd..f8309be01b3 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -617,6 +617,12 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 return NULL;
 }
 
+/* If we need to write to this bitmap, check that we can: */
+if (bitmap_mode != BITMAP_SYNC_MODE_NEVER &&
+bdrv_dirty_bitmap_check(sync_bitmap, BDRV_BITMAP_DEFAULT, errp)) {
+return NULL;
+}
+
 /* Create a new bitmap, and freeze/disable this one. */
 if (bdrv_dirty_bitmap_create_successor(bs, sync_bitmap, errp) < 0) {
 return NULL;
diff --git a/blockdev.c b/blockdev.c
index 985b6cd75c0..a44ab1f709e 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3491,7 +3491,7 @@ static BlockJob *do_backup_common(BackupCommon *backup,
"when providing a bitmap");
 return NULL;
 }
-if (bdrv_dirty_bitmap_check(bmap, BDRV_BITMAP_DEFAULT, errp)) {
+if (bdrv_dirty_bitmap_check(bmap, BDRV_BITMAP_ALLOW_RO, errp)) {
 return NULL;
 }
 }
-- 
2.21.0




[Qemu-devel] [PULL 10/36] block/dirty-bitmap: add bdrv_dirty_bitmap_get

2019-08-16 Thread John Snow
Add a public interface for get. While we're at it,
rename "bdrv_get_dirty_bitmap_locked" to "bdrv_dirty_bitmap_get_locked".

(There are more functions to rename to the bdrv_dirty_bitmap_VERB form,
but they will wait until the conclusion of this series.)

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-11-js...@redhat.com
Signed-off-by: John Snow 
---
 block/dirty-bitmap.c | 19 ---
 block/mirror.c   |  2 +-
 include/block/dirty-bitmap.h |  4 ++--
 migration/block.c|  5 ++---
 nbd/server.c |  2 +-
 5 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 7881fea684b..75a5daf116f 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -509,14 +509,19 @@ BlockDirtyInfoList 
*bdrv_query_dirty_bitmaps(BlockDriverState *bs)
 }
 
 /* Called within bdrv_dirty_bitmap_lock..unlock */
-bool bdrv_get_dirty_locked(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
-   int64_t offset)
+bool bdrv_dirty_bitmap_get_locked(BdrvDirtyBitmap *bitmap, int64_t offset)
 {
-if (bitmap) {
-return hbitmap_get(bitmap->bitmap, offset);
-} else {
-return false;
-}
+return hbitmap_get(bitmap->bitmap, offset);
+}
+
+bool bdrv_dirty_bitmap_get(BdrvDirtyBitmap *bitmap, int64_t offset)
+{
+bool ret;
+bdrv_dirty_bitmap_lock(bitmap);
+ret = bdrv_dirty_bitmap_get_locked(bitmap, offset);
+bdrv_dirty_bitmap_unlock(bitmap);
+
+return ret;
 }
 
 /**
diff --git a/block/mirror.c b/block/mirror.c
index 70f24d9ef63..2b870683f14 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -476,7 +476,7 @@ static uint64_t coroutine_fn 
mirror_iteration(MirrorBlockJob *s)
 int64_t next_offset = offset + nb_chunks * s->granularity;
 int64_t next_chunk = next_offset / s->granularity;
 if (next_offset >= s->bdev_length ||
-!bdrv_get_dirty_locked(source, s->dirty_bitmap, next_offset)) {
+!bdrv_dirty_bitmap_get_locked(s->dirty_bitmap, next_offset)) {
 break;
 }
 if (test_bit(next_chunk, s->in_flight_bitmap)) {
diff --git a/include/block/dirty-bitmap.h b/include/block/dirty-bitmap.h
index 62682eb865f..0120ef3f05a 100644
--- a/include/block/dirty-bitmap.h
+++ b/include/block/dirty-bitmap.h
@@ -84,12 +84,12 @@ void bdrv_dirty_bitmap_set_busy(BdrvDirtyBitmap *bitmap, 
bool busy);
 void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
  HBitmap **backup, Error **errp);
 void bdrv_dirty_bitmap_set_migration(BdrvDirtyBitmap *bitmap, bool migration);
+bool bdrv_dirty_bitmap_get(BdrvDirtyBitmap *bitmap, int64_t offset);
 
 /* Functions that require manual locking.  */
 void bdrv_dirty_bitmap_lock(BdrvDirtyBitmap *bitmap);
 void bdrv_dirty_bitmap_unlock(BdrvDirtyBitmap *bitmap);
-bool bdrv_get_dirty_locked(BlockDriverState *bs, BdrvDirtyBitmap *bitmap,
-   int64_t offset);
+bool bdrv_dirty_bitmap_get_locked(BdrvDirtyBitmap *bitmap, int64_t offset);
 void bdrv_set_dirty_bitmap_locked(BdrvDirtyBitmap *bitmap,
   int64_t offset, int64_t bytes);
 void bdrv_reset_dirty_bitmap_locked(BdrvDirtyBitmap *bitmap,
diff --git a/migration/block.c b/migration/block.c
index e81fd7e14fa..aa747b55fa8 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -521,7 +521,6 @@ static int mig_save_device_dirty(QEMUFile *f, 
BlkMigDevState *bmds,
  int is_async)
 {
 BlkMigBlock *blk;
-BlockDriverState *bs = blk_bs(bmds->blk);
 int64_t total_sectors = bmds->total_sectors;
 int64_t sector;
 int nr_sectors;
@@ -536,8 +535,8 @@ static int mig_save_device_dirty(QEMUFile *f, 
BlkMigDevState *bmds,
 blk_mig_unlock();
 }
 bdrv_dirty_bitmap_lock(bmds->dirty_bitmap);
-if (bdrv_get_dirty_locked(bs, bmds->dirty_bitmap,
-  sector * BDRV_SECTOR_SIZE)) {
+if (bdrv_dirty_bitmap_get_locked(bmds->dirty_bitmap,
+ sector * BDRV_SECTOR_SIZE)) {
 if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
 nr_sectors = total_sectors - sector;
 } else {
diff --git a/nbd/server.c b/nbd/server.c
index 3eacb898757..f55ccf8edfd 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2004,7 +2004,7 @@ static unsigned int bitmap_to_extents(BdrvDirtyBitmap 
*bitmap, uint64_t offset,
 bdrv_dirty_bitmap_lock(bitmap);
 
 it = bdrv_dirty_iter_new(bitmap);
-dirty = bdrv_get_dirty_locked(NULL, bitmap, offset);
+dirty = bdrv_dirty_bitmap_get_locked(bitmap, offset);
 
 assert(begin < overall_end && nb_extents);
 while (begin < overall_end && i < nb_extents) {
-- 
2.21.0




[Qemu-devel] [PULL 14/36] iotests: teach run_job to cancel pending jobs

2019-08-16 Thread John Snow
run_job can cancel pending jobs to simulate failure. This lets us use
the pending callback to issue test commands while the job is open, but
then still have the job fail in the end.
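
A sketch of the intended use (job and node names are made up, and the
'target0' node is assumed to have been added with blockdev-add):

    def check_pending():
        # Runs while the job sits in PENDING, before it is cancelled.
        iotests.log(vm.qmp('query-jobs'))

    vm.qmp('blockdev-backup', job_id='job0', device='drive0',
           target='target0', sync='full', auto_finalize=False)
    vm.run_job('job0', auto_finalize=False,
               pre_finalize=check_pending, cancel=True)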

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-15-js...@redhat.com
[Maintainer edit: Merge conflict resolution in run_job]
Signed-off-by: John Snow 
---
 tests/qemu-iotests/iotests.py | 24 ++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 7fc062cdcf4..81ae7b911ac 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -541,7 +541,23 @@ class VM(qtest.QEMUQtestMachine):
 
 # Returns None on success, and an error string on failure
 def run_job(self, job, auto_finalize=True, auto_dismiss=False,
-pre_finalize=None, use_log=True, wait=60.0):
+pre_finalize=None, cancel=False, use_log=True, wait=60.0):
+"""
+run_job moves a job from creation through to dismissal.
+
+:param job: String. ID of recently-launched job
+:param auto_finalize: Bool. True if the job was launched with
+  auto_finalize. Defaults to True.
+:param auto_dismiss: Bool. True if the job was launched with
+ auto_dismiss=True. Defaults to False.
+:param pre_finalize: Callback. A callable that takes no arguments to be
+ invoked prior to issuing job-finalize, if any.
+:param cancel: Bool. When true, cancels the job after the pre_finalize
+   callback.
+:param use_log: Bool. When false, does not log QMP messages.
+:param wait: Float. Timeout value specifying how long to wait for any
+ event, in seconds. Defaults to 60.0.
+"""
 match_device = {'data': {'device': job}}
 match_id = {'data': {'id': job}}
 events = [
@@ -570,7 +586,11 @@ class VM(qtest.QEMUQtestMachine):
 elif status == 'pending' and not auto_finalize:
 if pre_finalize:
 pre_finalize()
-if use_log:
+if cancel and use_log:
+self.qmp_log('job-cancel', id=job)
+elif cancel:
+self.qmp('job-cancel', id=job)
+elif use_log:
 self.qmp_log('job-finalize', id=job)
 else:
 self.qmp('job-finalize', id=job)
-- 
2.21.0




[Qemu-devel] [PULL 05/36] block/backup: Add mirror sync mode 'bitmap'

2019-08-16 Thread John Snow
We don't need or want a new sync mode for every simple difference in
semantics.  Create a new mode simply named "BITMAP" that is designed to
make use of the new Bitmap Sync Mode field.

Because the only bitmap sync mode is 'on-success', this adds no new
functionality to the backup job (yet). The old incremental backup mode
is maintained as syntactic sugar for sync=bitmap, bitmap-mode=on-success.

Add all of the plumbing necessary to support this new instruction.
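
As a rough sketch (device, bitmap and target names are made up, and the
QMP helper used below translates '_' to '-' in argument names), the two
spellings should end up equivalent:

    # Old spelling, still accepted:
    vm.qmp('drive-backup', device='drive0', target='/tmp/backup.img',
           sync='incremental', bitmap='bitmap0')

    # What it is sugar for:
    vm.qmp('drive-backup', device='drive0', target='/tmp/backup.img',
           sync='bitmap', bitmap='bitmap0', bitmap_mode='on-success')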

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-6-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c| 20 
 block/mirror.c|  6 --
 block/replication.c   |  2 +-
 blockdev.c| 25 +++--
 include/block/block_int.h |  4 +++-
 qapi/block-core.json  | 21 +++--
 6 files changed, 58 insertions(+), 20 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 4743c8f0bc5..2b4c5c23e4e 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -38,9 +38,9 @@ typedef struct CowRequest {
 typedef struct BackupBlockJob {
 BlockJob common;
 BlockBackend *target;
-/* bitmap for sync=incremental */
 BdrvDirtyBitmap *sync_bitmap;
 MirrorSyncMode sync_mode;
+BitmapSyncMode bitmap_mode;
 BlockdevOnError on_source_error;
 BlockdevOnError on_target_error;
 CoRwlock flush_rwlock;
@@ -461,7 +461,7 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
 
 job_progress_set_remaining(job, s->len);
 
-if (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+if (s->sync_mode == MIRROR_SYNC_MODE_BITMAP) {
 backup_incremental_init_copy_bitmap(s);
 } else {
 hbitmap_set(s->copy_bitmap, 0, s->len);
@@ -545,6 +545,7 @@ static int64_t 
backup_calculate_cluster_size(BlockDriverState *target,
 BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
   BlockDriverState *target, int64_t speed,
   MirrorSyncMode sync_mode, BdrvDirtyBitmap *sync_bitmap,
+  BitmapSyncMode bitmap_mode,
   bool compress,
   BlockdevOnError on_source_error,
   BlockdevOnError on_target_error,
@@ -592,10 +593,13 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 return NULL;
 }
 
-if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+/* QMP interface should have handled translating this to bitmap mode */
+assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
+
+if (sync_mode == MIRROR_SYNC_MODE_BITMAP) {
 if (!sync_bitmap) {
 error_setg(errp, "must provide a valid bitmap name for "
- "\"incremental\" sync mode");
+   "'%s' sync mode", MirrorSyncMode_str(sync_mode));
 return NULL;
 }
 
@@ -605,8 +609,8 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 }
 } else if (sync_bitmap) {
 error_setg(errp,
-   "a sync_bitmap was provided to backup_run, "
-   "but received an incompatible sync_mode (%s)",
+   "a bitmap was given to backup_job_create, "
+   "but it received an incompatible sync_mode (%s)",
MirrorSyncMode_str(sync_mode));
 return NULL;
 }
@@ -649,8 +653,8 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 job->on_source_error = on_source_error;
 job->on_target_error = on_target_error;
 job->sync_mode = sync_mode;
-job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
-   sync_bitmap : NULL;
+job->sync_bitmap = sync_bitmap;
+job->bitmap_mode = bitmap_mode;
 job->compress = compress;
 
 /* Detect image-fleecing (and similar) schemes */
diff --git a/block/mirror.c b/block/mirror.c
index 9b36391bb97..70f24d9ef63 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1755,8 +1755,10 @@ void mirror_start(const char *job_id, BlockDriverState 
*bs,
 bool is_none_mode;
 BlockDriverState *base;
 
-if (mode == MIRROR_SYNC_MODE_INCREMENTAL) {
-error_setg(errp, "Sync mode 'incremental' not supported");
+if ((mode == MIRROR_SYNC_MODE_INCREMENTAL) ||
+(mode == MIRROR_SYNC_MODE_BITMAP)) {
+error_setg(errp, "Sync mode '%s' not supported",
+   MirrorSyncMode_str(mode));
 return;
 }
 is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
diff --git a/block/replication.c b/block/replication.c
index 23b2993d747..936b2f8b5a4 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -543,7 +543,7 @@ static void replication_start(ReplicationState *rs, 
ReplicationMode mode,
 
 s->backup_job = backup_job_create(
 NULL, s->secondary_disk->bs, 
s->hidden_disk->bs,
-0, MIRROR_SYNC_MODE_NONE, NULL, false,
+

[Qemu-devel] [PULL 02/36] drive-backup: create do_backup_common

2019-08-16 Thread John Snow
Create a common core that comprises the actual meat of what the backup API
boundary needs to do, and then switch drive-backup to use it.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-3-js...@redhat.com
Signed-off-by: John Snow 
---
 blockdev.c | 122 +
 1 file changed, 67 insertions(+), 55 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 95cdd5a5cb0..d822b19b4b0 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3427,6 +3427,70 @@ out:
 aio_context_release(aio_context);
 }
 
+/* Common QMP interface for drive-backup and blockdev-backup */
+static BlockJob *do_backup_common(BackupCommon *backup,
+  BlockDriverState *bs,
+  BlockDriverState *target_bs,
+  AioContext *aio_context,
+  JobTxn *txn, Error **errp)
+{
+BlockJob *job = NULL;
+BdrvDirtyBitmap *bmap = NULL;
+int job_flags = JOB_DEFAULT;
+int ret;
+
+if (!backup->has_speed) {
+backup->speed = 0;
+}
+if (!backup->has_on_source_error) {
+backup->on_source_error = BLOCKDEV_ON_ERROR_REPORT;
+}
+if (!backup->has_on_target_error) {
+backup->on_target_error = BLOCKDEV_ON_ERROR_REPORT;
+}
+if (!backup->has_job_id) {
+backup->job_id = NULL;
+}
+if (!backup->has_auto_finalize) {
+backup->auto_finalize = true;
+}
+if (!backup->has_auto_dismiss) {
+backup->auto_dismiss = true;
+}
+if (!backup->has_compress) {
+backup->compress = false;
+}
+
+ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
+if (ret < 0) {
+return NULL;
+}
+
+if (backup->has_bitmap) {
+bmap = bdrv_find_dirty_bitmap(bs, backup->bitmap);
+if (!bmap) {
+error_setg(errp, "Bitmap '%s' could not be found", backup->bitmap);
+return NULL;
+}
+if (bdrv_dirty_bitmap_check(bmap, BDRV_BITMAP_DEFAULT, errp)) {
+return NULL;
+}
+}
+
+if (!backup->auto_finalize) {
+job_flags |= JOB_MANUAL_FINALIZE;
+}
+if (!backup->auto_dismiss) {
+job_flags |= JOB_MANUAL_DISMISS;
+}
+
+job = backup_job_create(backup->job_id, bs, target_bs, backup->speed,
+backup->sync, bmap, backup->compress,
+backup->on_source_error, backup->on_target_error,
+job_flags, NULL, NULL, txn, errp);
+return job;
+}
+
 static BlockJob *do_drive_backup(DriveBackup *backup, JobTxn *txn,
  Error **errp)
 {
@@ -3434,39 +3498,16 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
JobTxn *txn,
 BlockDriverState *target_bs;
 BlockDriverState *source = NULL;
 BlockJob *job = NULL;
-BdrvDirtyBitmap *bmap = NULL;
 AioContext *aio_context;
 QDict *options = NULL;
 Error *local_err = NULL;
-int flags, job_flags = JOB_DEFAULT;
+int flags;
 int64_t size;
 bool set_backing_hd = false;
-int ret;
 
-if (!backup->has_speed) {
-backup->speed = 0;
-}
-if (!backup->has_on_source_error) {
-backup->on_source_error = BLOCKDEV_ON_ERROR_REPORT;
-}
-if (!backup->has_on_target_error) {
-backup->on_target_error = BLOCKDEV_ON_ERROR_REPORT;
-}
 if (!backup->has_mode) {
 backup->mode = NEW_IMAGE_MODE_ABSOLUTE_PATHS;
 }
-if (!backup->has_job_id) {
-backup->job_id = NULL;
-}
-if (!backup->has_auto_finalize) {
-backup->auto_finalize = true;
-}
-if (!backup->has_auto_dismiss) {
-backup->auto_dismiss = true;
-}
-if (!backup->has_compress) {
-backup->compress = false;
-}
 
 bs = bdrv_lookup_bs(backup->device, backup->device, errp);
 if (!bs) {
@@ -3543,12 +3584,6 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
JobTxn *txn,
 goto out;
 }
 
-ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
-if (ret < 0) {
-bdrv_unref(target_bs);
-goto out;
-}
-
 if (set_backing_hd) {
 bdrv_set_backing_hd(target_bs, source, _err);
 if (local_err) {
@@ -3556,31 +3591,8 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
JobTxn *txn,
 }
 }
 
-if (backup->has_bitmap) {
-bmap = bdrv_find_dirty_bitmap(bs, backup->bitmap);
-if (!bmap) {
-error_setg(errp, "Bitmap '%s' could not be found", backup->bitmap);
-goto unref;
-}
-if (bdrv_dirty_bitmap_check(bmap, BDRV_BITMAP_DEFAULT, errp)) {
-goto unref;
-}
-}
-if (!backup->auto_finalize) {
-job_flags |= JOB_MANUAL_FINALIZE;
-}
-if (!backup->auto_dismiss) {
-job_flags |= JOB_MANUAL_DISMISS;
-}
-
-job = backup_job_create(backup->job_id, bs, target_bs, 

[Qemu-devel] [PULL 12/36] block/backup: add 'always' bitmap sync policy

2019-08-16 Thread John Snow
This adds an "always" policy for bitmap synchronization. Regardless of
whether the job succeeds or fails, the bitmap is *always* synchronized. This means
that for backups that fail part-way through, the bitmap retains a record of
which sectors need to be copied out to accomplish a new backup using the
old, partial result.

In effect, this allows us to "resume" a failed backup; however the new backup
will be from the new point in time, so it isn't a "resume" as much as it is
an "incremental retry." This can be useful in the case of extremely large
backups that fail a considerable way through the operation, where we'd like
not to waste the work that was already performed.
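
A sketch of the flow (illustrative names; 'target0' is assumed to be an
existing node added via blockdev-add, and the QMP helper converts '_'
to '-' in argument names):

    # First attempt.  On failure, bitmap0 keeps every cluster that was
    # not copied, plus whatever the guest dirtied in the meantime.
    vm.qmp('blockdev-backup', job_id='backup0', device='drive0',
           target='target0', sync='bitmap', bitmap='bitmap0',
           bitmap_mode='always')

    # ... the job fails part-way through ...

    # Retry from the new point in time: only clusters still set in
    # bitmap0 need to be copied.
    vm.qmp('blockdev-backup', job_id='backup1', device='drive0',
           target='target0', sync='bitmap', bitmap='bitmap0',
           bitmap_mode='always')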

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-13-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c   | 27 +++
 qapi/block-core.json |  5 -
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 474f8eeae29..2be570c0bfd 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -278,18 +278,29 @@ static void backup_cleanup_sync_bitmap(BackupBlockJob 
*job, int ret)
 {
 BdrvDirtyBitmap *bm;
 BlockDriverState *bs = blk_bs(job->common.blk);
+bool sync = (((ret == 0) || (job->bitmap_mode == BITMAP_SYNC_MODE_ALWAYS)) 
\
+ && (job->bitmap_mode != BITMAP_SYNC_MODE_NEVER));
 
-if (ret < 0 || job->bitmap_mode == BITMAP_SYNC_MODE_NEVER) {
+if (sync) {
 /*
- * Failure, or we don't want to synchronize the bitmap.
- * Merge the successor back into the parent, delete nothing.
+ * We succeeded, or we always intended to sync the bitmap.
+ * Delete this bitmap and install the child.
  */
-bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
-assert(bm);
-} else {
-/* Everything is fine, delete this bitmap and install the backup. */
 bm = bdrv_dirty_bitmap_abdicate(bs, job->sync_bitmap, NULL);
-assert(bm);
+} else {
+/*
+ * We failed, or we never intended to sync the bitmap anyway.
+ * Merge the successor back into the parent, keeping all data.
+ */
+bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
+}
+
+assert(bm);
+
+if (ret < 0 && job->bitmap_mode == BITMAP_SYNC_MODE_ALWAYS) {
+/* If we failed and synced, merge in the bits we didn't copy: */
+bdrv_dirty_bitmap_merge_internal(bm, job->copy_bitmap,
+ NULL, true);
 }
 }
 
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 06e34488a30..8344fbe2030 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1149,10 +1149,13 @@
 # @never: The bitmap is never synchronized with the operation, and is
 # treated solely as a read-only manifest of blocks to copy.
 #
+# @always: The bitmap is always synchronized with the operation,
+#  regardless of whether or not the operation was successful.
+#
 # Since: 4.2
 ##
 { 'enum': 'BitmapSyncMode',
-  'data': ['on-success', 'never'] }
+  'data': ['on-success', 'never', 'always'] }
 
 ##
 # @MirrorCopyMode:
-- 
2.21.0




[Qemu-devel] [PULL 09/36] block/dirty-bitmap: add bdrv_dirty_bitmap_merge_internal

2019-08-16 Thread John Snow
I'm surprised it didn't come up sooner, but sometimes we have a +busy
bitmap as a source. This is dangerous from the QMP API, but if we are
the owner that marked the bitmap busy, it's safe to merge it, using it as
a read-only source.

It is not safe in the general case to allow users to read from in-use
bitmaps, so create an internal variant that foregoes the safety
checking.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-10-js...@redhat.com
Signed-off-by: John Snow 
---
 block/dirty-bitmap.c  | 54 +++
 include/block/block_int.h |  3 +++
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/block/dirty-bitmap.c b/block/dirty-bitmap.c
index 95a9c2a5d8a..7881fea684b 100644
--- a/block/dirty-bitmap.c
+++ b/block/dirty-bitmap.c
@@ -810,6 +810,12 @@ bool bdrv_dirty_bitmap_next_dirty_area(BdrvDirtyBitmap 
*bitmap,
 return hbitmap_next_dirty_area(bitmap->bitmap, offset, bytes);
 }
 
+/**
+ * bdrv_merge_dirty_bitmap: merge src into dest.
+ * Ensures permissions on bitmaps are reasonable; use for public API.
+ *
+ * @backup: If provided, make a copy of dest here prior to merge.
+ */
 void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const BdrvDirtyBitmap *src,
  HBitmap **backup, Error **errp)
 {
@@ -833,6 +839,42 @@ void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const 
BdrvDirtyBitmap *src,
 goto out;
 }
 
+ret = bdrv_dirty_bitmap_merge_internal(dest, src, backup, false);
+assert(ret);
+
+out:
+qemu_mutex_unlock(dest->mutex);
+if (src->mutex != dest->mutex) {
+qemu_mutex_unlock(src->mutex);
+}
+}
+
+/**
+ * bdrv_dirty_bitmap_merge_internal: merge src into dest.
+ * Does NOT check bitmap permissions; not suitable for use as public API.
+ *
+ * @backup: If provided, make a copy of dest here prior to merge.
+ * @lock: If true, lock and unlock bitmaps on the way in/out.
+ * returns true if the merge succeeded; false if unattempted.
+ */
+bool bdrv_dirty_bitmap_merge_internal(BdrvDirtyBitmap *dest,
+  const BdrvDirtyBitmap *src,
+  HBitmap **backup,
+  bool lock)
+{
+bool ret;
+
+assert(!bdrv_dirty_bitmap_readonly(dest));
+assert(!bdrv_dirty_bitmap_inconsistent(dest));
+assert(!bdrv_dirty_bitmap_inconsistent(src));
+
+if (lock) {
+qemu_mutex_lock(dest->mutex);
+if (src->mutex != dest->mutex) {
+qemu_mutex_lock(src->mutex);
+}
+}
+
 if (backup) {
 *backup = dest->bitmap;
 dest->bitmap = hbitmap_alloc(dest->size, hbitmap_granularity(*backup));
@@ -840,11 +882,13 @@ void bdrv_merge_dirty_bitmap(BdrvDirtyBitmap *dest, const 
BdrvDirtyBitmap *src,
 } else {
 ret = hbitmap_merge(dest->bitmap, src->bitmap, dest->bitmap);
 }
-assert(ret);
 
-out:
-qemu_mutex_unlock(dest->mutex);
-if (src->mutex != dest->mutex) {
-qemu_mutex_unlock(src->mutex);
+if (lock) {
+qemu_mutex_unlock(dest->mutex);
+if (src->mutex != dest->mutex) {
+qemu_mutex_unlock(src->mutex);
+}
 }
+
+return ret;
 }
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 80953ac8aeb..aa697f1f694 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1253,6 +1253,9 @@ void bdrv_set_dirty(BlockDriverState *bs, int64_t offset, 
int64_t bytes);
 
 void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out);
 void bdrv_restore_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *backup);
+bool bdrv_dirty_bitmap_merge_internal(BdrvDirtyBitmap *dest,
+  const BdrvDirtyBitmap *src,
+  HBitmap **backup, bool lock);
 
 void bdrv_inc_in_flight(BlockDriverState *bs);
 void bdrv_dec_in_flight(BlockDriverState *bs);
-- 
2.21.0




[Qemu-devel] [PULL 01/36] qapi/block-core: Introduce BackupCommon

2019-08-16 Thread John Snow
drive-backup and blockdev-backup have an awful lot of things in common
that are the same. Let's fix that.

I don't deduplicate 'target', because the semantics actually did change
between each structure. Leave that one alone so it can be documented
separately.

Where documentation was not identical, use the most up-to-date version.
For "speed", use Blockdev-Backup's version. For "sync", use
Drive-Backup's version.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
[Maintainer edit: modified commit message. --js]
Reviewed-by: Markus Armbruster 
Message-id: 20190709232550.10724-2-js...@redhat.com
Signed-off-by: John Snow 
---
 qapi/block-core.json | 103 ++-
 1 file changed, 33 insertions(+), 70 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index f1e7701fbea..8ca12004ae9 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1315,32 +1315,23 @@
   'data': { 'node': 'str', 'overlay': 'str' } }
 
 ##
-# @DriveBackup:
+# @BackupCommon:
 #
 # @job-id: identifier for the newly-created block job. If
 #  omitted, the device name will be used. (Since 2.7)
 #
 # @device: the device name or node-name of a root node which should be copied.
 #
-# @target: the target of the new image. If the file exists, or if it
-#  is a device, the existing file/device will be used as the new
-#  destination.  If it does not exist, a new file will be created.
-#
-# @format: the format of the new destination, default is to
-#  probe if @mode is 'existing', else the format of the source
-#
 # @sync: what parts of the disk image should be copied to the destination
 #(all the disk, only the sectors allocated in the topmost image, from a
 #dirty bitmap, or only new I/O).
 #
-# @mode: whether and how QEMU should create a new image, default is
-#'absolute-paths'.
-#
-# @speed: the maximum speed, in bytes per second
+# @speed: the maximum speed, in bytes per second. The default is 0,
+# for unlimited.
 #
 # @bitmap: the name of dirty bitmap if sync is "incremental".
 #  Must be present if sync is "incremental", must NOT be present
-#  otherwise. (Since 2.4)
+#  otherwise. (Since 2.4 (drive-backup), 3.1 (blockdev-backup))
 #
 # @compress: true to compress data, if the target format supports it.
 #(default: false) (since 2.8)
@@ -1370,75 +1361,47 @@
 # I/O.  If an error occurs during a guest write request, the device's
 # rerror/werror actions will be used.
 #
+# Since: 4.2
+##
+{ 'struct': 'BackupCommon',
+  'data': { '*job-id': 'str', 'device': 'str',
+'sync': 'MirrorSyncMode', '*speed': 'int',
+'*bitmap': 'str', '*compress': 'bool',
+'*on-source-error': 'BlockdevOnError',
+'*on-target-error': 'BlockdevOnError',
+'*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
+
+##
+# @DriveBackup:
+#
+# @target: the target of the new image. If the file exists, or if it
+#  is a device, the existing file/device will be used as the new
+#  destination.  If it does not exist, a new file will be created.
+#
+# @format: the format of the new destination, default is to
+#  probe if @mode is 'existing', else the format of the source
+#
+# @mode: whether and how QEMU should create a new image, default is
+#'absolute-paths'.
+#
 # Since: 1.6
 ##
 { 'struct': 'DriveBackup',
-  'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
-'*format': 'str', 'sync': 'MirrorSyncMode',
-'*mode': 'NewImageMode', '*speed': 'int',
-'*bitmap': 'str', '*compress': 'bool',
-'*on-source-error': 'BlockdevOnError',
-'*on-target-error': 'BlockdevOnError',
-'*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
+  'base': 'BackupCommon',
+  'data': { 'target': 'str',
+'*format': 'str',
+'*mode': 'NewImageMode' } }
 
 ##
 # @BlockdevBackup:
 #
-# @job-id: identifier for the newly-created block job. If
-#  omitted, the device name will be used. (Since 2.7)
-#
-# @device: the device name or node-name of a root node which should be copied.
-#
 # @target: the device name or node-name of the backup target node.
 #
-# @sync: what parts of the disk image should be copied to the destination
-#(all the disk, only the sectors allocated in the topmost image, or
-#only new I/O).
-#
-# @speed: the maximum speed, in bytes per second. The default is 0,
-# for unlimited.
-#
-# @bitmap: the name of dirty bitmap if sync is "incremental".
-#  Must be present if sync is "incremental", must NOT be present
-#  otherwise. (Since 3.1)
-#
-# @compress: true to compress data, if the target format supports it.
-#(default: false) (since 2.8)
-#
-# @on-source-error: the action to take on an error on the source,
-#   default 'report'.  'stop' and 'enospc' can only be used
-# 

[Qemu-devel] [PULL 04/36] qapi: add BitmapSyncMode enum

2019-08-16 Thread John Snow
Depending on what a user is trying to accomplish, there are a few bitmap
cleanup actions that could be useful when an operation finishes.

I am proposing three:
- NEVER: The bitmap is never synchronized against what was copied.
- ALWAYS: The bitmap is always synchronized, even on failures.
- ON-SUCCESS: The bitmap is synchronized only on success.

The existing incremental backup modes use 'on-success' semantics,
so add just that one for right now.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Reviewed-by: Markus Armbruster 
Message-id: 20190709232550.10724-5-js...@redhat.com
Signed-off-by: John Snow 
---
 qapi/block-core.json | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 8ca12004ae9..06eb3bb3d78 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1134,6 +1134,20 @@
 { 'enum': 'MirrorSyncMode',
   'data': ['top', 'full', 'none', 'incremental'] }
 
+##
+# @BitmapSyncMode:
+#
+# An enumeration of possible behaviors for the synchronization of a bitmap
+# when used for data copy operations.
+#
+# @on-success: The bitmap is only synced when the operation is successful.
+#  This is the behavior always used for 'INCREMENTAL' backups.
+#
+# Since: 4.2
+##
+{ 'enum': 'BitmapSyncMode',
+  'data': ['on-success'] }
+
 ##
 # @MirrorCopyMode:
 #
-- 
2.21.0




[Qemu-devel] [PULL 07/36] hbitmap: Fix merge when b is empty, and result is not an alias of a

2019-08-16 Thread John Snow
Nobody calls the function like this currently, but we neither prohibit
nor cope with this behavior. I decided to make the function cope with it.

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-8-js...@redhat.com
Signed-off-by: John Snow 
---
 util/hbitmap.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/util/hbitmap.c b/util/hbitmap.c
index bcc0acdc6a0..83927f3c08a 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -785,8 +785,9 @@ bool hbitmap_can_merge(const HBitmap *a, const HBitmap *b)
 }
 
 /**
- * Given HBitmaps A and B, let A := A (BITOR) B.
- * Bitmap B will not be modified.
+ * Given HBitmaps A and B, let R := A (BITOR) B.
+ * Bitmaps A and B will not be modified,
+ * except when bitmap R is an alias of A or B.
  *
  * @return true if the merge was successful,
  * false if it was not attempted.
@@ -801,7 +802,13 @@ bool hbitmap_merge(const HBitmap *a, const HBitmap *b, 
HBitmap *result)
 }
 assert(hbitmap_can_merge(b, result));
 
-if (hbitmap_count(b) == 0) {
+if ((!hbitmap_count(a) && result == b) ||
+(!hbitmap_count(b) && result == a)) {
+return true;
+}
+
+if (!hbitmap_count(a) && !hbitmap_count(b)) {
+hbitmap_reset_all(result);
 return true;
 }
 
-- 
2.21.0




[Qemu-devel] [PULL 08/36] hbitmap: enable merging across granularities

2019-08-16 Thread John Snow
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-9-js...@redhat.com
Signed-off-by: John Snow 
---
 util/hbitmap.c | 36 +++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/util/hbitmap.c b/util/hbitmap.c
index 83927f3c08a..fd44c897ab0 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -781,7 +781,27 @@ void hbitmap_truncate(HBitmap *hb, uint64_t size)
 
 bool hbitmap_can_merge(const HBitmap *a, const HBitmap *b)
 {
-return (a->size == b->size) && (a->granularity == b->granularity);
+return (a->orig_size == b->orig_size);
+}
+
+/**
+ * hbitmap_sparse_merge: performs dst = dst | src
+ * works with differing granularities.
+ * best used when src is sparsely populated.
+ */
+static void hbitmap_sparse_merge(HBitmap *dst, const HBitmap *src)
+{
+uint64_t offset = 0;
+uint64_t count = src->orig_size;
+
+while (hbitmap_next_dirty_area(src, &offset, &count)) {
+hbitmap_set(dst, offset, count);
+offset += count;
+if (offset >= src->orig_size) {
+break;
+}
+count = src->orig_size - offset;
+}
 }
 
 /**
@@ -812,10 +832,24 @@ bool hbitmap_merge(const HBitmap *a, const HBitmap *b, 
HBitmap *result)
 return true;
 }
 
+if (a->granularity != b->granularity) {
+if ((a != result) && (b != result)) {
+hbitmap_reset_all(result);
+}
+if (a != result) {
+hbitmap_sparse_merge(result, a);
+}
+if (b != result) {
+hbitmap_sparse_merge(result, b);
+}
+return true;
+}
+
 /* This merge is O(size), as BITS_PER_LONG and HBITMAP_LEVELS are constant.
  * It may be possible to improve running times for sparsely populated maps
  * by using hbitmap_iter_next, but this is suboptimal for dense maps.
  */
+assert(a->size == b->size);
 for (i = HBITMAP_LEVELS - 1; i >= 0; i--) {
 for (j = 0; j < a->sizes[i]; j++) {
 result->levels[i][j] = a->levels[i][j] | b->levels[i][j];
-- 
2.21.0




[Qemu-devel] [PULL 03/36] blockdev-backup: utilize do_backup_common

2019-08-16 Thread John Snow
Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-4-js...@redhat.com
Signed-off-by: John Snow 
---
 blockdev.c | 65 +-
 1 file changed, 6 insertions(+), 59 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index d822b19b4b0..8e4f70a8d66 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3626,78 +3626,25 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
JobTxn *txn,
 {
 BlockDriverState *bs;
 BlockDriverState *target_bs;
-Error *local_err = NULL;
-BdrvDirtyBitmap *bmap = NULL;
 AioContext *aio_context;
-BlockJob *job = NULL;
-int job_flags = JOB_DEFAULT;
-int ret;
-
-if (!backup->has_speed) {
-backup->speed = 0;
-}
-if (!backup->has_on_source_error) {
-backup->on_source_error = BLOCKDEV_ON_ERROR_REPORT;
-}
-if (!backup->has_on_target_error) {
-backup->on_target_error = BLOCKDEV_ON_ERROR_REPORT;
-}
-if (!backup->has_job_id) {
-backup->job_id = NULL;
-}
-if (!backup->has_auto_finalize) {
-backup->auto_finalize = true;
-}
-if (!backup->has_auto_dismiss) {
-backup->auto_dismiss = true;
-}
-if (!backup->has_compress) {
-backup->compress = false;
-}
+BlockJob *job;
 
 bs = bdrv_lookup_bs(backup->device, backup->device, errp);
 if (!bs) {
 return NULL;
 }
 
-aio_context = bdrv_get_aio_context(bs);
-aio_context_acquire(aio_context);
-
 target_bs = bdrv_lookup_bs(backup->target, backup->target, errp);
 if (!target_bs) {
-goto out;
+return NULL;
 }
 
-ret = bdrv_try_set_aio_context(target_bs, aio_context, errp);
-if (ret < 0) {
-goto out;
-}
+aio_context = bdrv_get_aio_context(bs);
+aio_context_acquire(aio_context);
 
-if (backup->has_bitmap) {
-bmap = bdrv_find_dirty_bitmap(bs, backup->bitmap);
-if (!bmap) {
-error_setg(errp, "Bitmap '%s' could not be found", backup->bitmap);
-goto out;
-}
-if (bdrv_dirty_bitmap_check(bmap, BDRV_BITMAP_DEFAULT, errp)) {
-goto out;
-}
-}
+job = do_backup_common(qapi_BlockdevBackup_base(backup),
+   bs, target_bs, aio_context, txn, errp);
 
-if (!backup->auto_finalize) {
-job_flags |= JOB_MANUAL_FINALIZE;
-}
-if (!backup->auto_dismiss) {
-job_flags |= JOB_MANUAL_DISMISS;
-}
-job = backup_job_create(backup->job_id, bs, target_bs, backup->speed,
-backup->sync, bmap, backup->compress,
-backup->on_source_error, backup->on_target_error,
-job_flags, NULL, NULL, txn, &local_err);
-if (local_err != NULL) {
-error_propagate(errp, local_err);
-}
-out:
 aio_context_release(aio_context);
 return job;
 }
-- 
2.21.0




[Qemu-devel] [PULL 00/36] Bitmaps patches

2019-08-16 Thread John Snow
The following changes since commit afd760539308a5524accf964107cdb1d54a059e3:

  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190816' 
into staging (2019-08-16 17:21:40 +0100)

are available in the Git repository at:

  https://github.com/jnsnow/qemu.git tags/bitmaps-pull-request

for you to fetch changes up to a5f8a60b3eafd5563af48546d5d126d448e62ac5:

  tests/test-hbitmap: test next_zero and _next_dirty_area after truncate 
(2019-08-16 18:29:43 -0400)


Pull request

Rebase notes:

011/36:[0003] [FC] 'block/backup: upgrade copy_bitmap to BdrvDirtyBitmap'
016/36:[] [-C] 'iotests: Add virtio-scsi device helper'
017/36:[0002] [FC] 'iotests: add test 257 for bitmap-mode backups'
030/36:[0011] [FC] 'block/backup: teach TOP to never copy unallocated regions'
032/36:[0018] [FC] 'iotests/257: test traditional sync modes'

11: A new hbitmap call was added late in 4.1, changed to
bdrv_dirty_bitmap_next_zero.
16: Context-only (self.has_quit is new context in 040)
17: Removed 'auto' to follow upstream trends in iotest fashion
30: Handled explicitly on-list with R-B from Max.
32: Fix capitalization in test, as mentioned on-list.



John Snow (30):
  qapi/block-core: Introduce BackupCommon
  drive-backup: create do_backup_common
  blockdev-backup: utilize do_backup_common
  qapi: add BitmapSyncMode enum
  block/backup: Add mirror sync mode 'bitmap'
  block/backup: add 'never' policy to bitmap sync mode
  hbitmap: Fix merge when b is empty, and result is not an alias of a
  hbitmap: enable merging across granularities
  block/dirty-bitmap: add bdrv_dirty_bitmap_merge_internal
  block/dirty-bitmap: add bdrv_dirty_bitmap_get
  block/backup: upgrade copy_bitmap to BdrvDirtyBitmap
  block/backup: add 'always' bitmap sync policy
  iotests: add testing shim for script-style python tests
  iotests: teach run_job to cancel pending jobs
  iotests: teach FilePath to produce multiple paths
  iotests: Add virtio-scsi device helper
  iotests: add test 257 for bitmap-mode backups
  block/backup: loosen restriction on readonly bitmaps
  qapi: implement block-dirty-bitmap-remove transaction action
  iotests/257: add Pattern class
  iotests/257: add EmulatedBitmap class
  iotests/257: Refactor backup helpers
  block/backup: hoist bitmap check into QMP interface
  iotests/257: test API failures
  block/backup: improve sync=bitmap work estimates
  block/backup: centralize copy_bitmap initialization
  block/backup: add backup_is_cluster_allocated
  block/backup: teach TOP to never copy unallocated regions
  block/backup: support bitmap sync modes for non-bitmap backups
  iotests/257: test traditional sync modes

Vladimir Sementsov-Ogievskiy (6):
  blockdev: reduce aio_context locked sections in bitmap add/remove
  iotests: test bitmap moving inside 254
  qapi: add dirty-bitmaps to query-named-block-nodes result
  block/backup: deal with zero detection
  block/backup: refactor write_flags
  tests/test-hbitmap: test next_zero and _next_dirty_area after truncate

 block.c|2 +-
 block/backup.c |  312 +-
 block/dirty-bitmap.c   |   88 +-
 block/mirror.c |8 +-
 block/qapi.c   |5 +
 block/replication.c|2 +-
 block/trace-events |1 +
 blockdev.c |  353 ++-
 include/block/block_int.h  |7 +-
 include/block/dirty-bitmap.h   |6 +-
 migration/block-dirty-bitmap.c |2 +-
 migration/block.c  |5 +-
 nbd/server.c   |2 +-
 qapi/block-core.json   |  146 +-
 qapi/transaction.json  |2 +
 qemu-deprecated.texi   |   12 +
 tests/qemu-iotests/040 |6 +-
 tests/qemu-iotests/093 |6 +-
 tests/qemu-iotests/139 |7 +-
 tests/qemu-iotests/238 |5 +-
 tests/qemu-iotests/254 |   30 +-
 tests/qemu-iotests/254.out |   82 +
 tests/qemu-iotests/256.out |4 +-
 tests/qemu-iotests/257 |  560 
 tests/qemu-iotests/257.out | 5421 
 tests/qemu-iotests/group   |1 +
 tests/qemu-iotests/iotests.py  |  102 +-
 tests/test-hbitmap.c   |   22 +
 util/hbitmap.c |   49 +-
 29 files changed, 6843 insertions(+), 405 deletions(-)
 create mode 100755 tests/qemu-iotests/257
 create mode 100644 tests/qemu-iotests/257.out

-- 
2.21.0




[Qemu-devel] [PULL 06/36] block/backup: add 'never' policy to bitmap sync mode

2019-08-16 Thread John Snow
This adds a "never" policy for bitmap synchronization. Regardless of
whether the job succeeds or fails, we never update the bitmap. This can be used
to perform differential backups, or simply to avoid the job modifying a
bitmap.
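
For example (a sketch with made-up names; 'target0' is assumed to be an
existing node), a differential backup can be taken repeatedly against the
same, never-cleared bitmap:

    # bitmap0 is left untouched, so each run copies everything recorded
    # since the bitmap was created (differential, not incremental).
    vm.qmp('blockdev-backup', job_id='diff0', device='drive0',
           target='target0', sync='bitmap', bitmap='bitmap0',
           bitmap_mode='never')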

Signed-off-by: John Snow 
Reviewed-by: Max Reitz 
Message-id: 20190709232550.10724-7-js...@redhat.com
Signed-off-by: John Snow 
---
 block/backup.c   | 7 +--
 qapi/block-core.json | 5 -
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 2b4c5c23e4e..d07b838930f 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -274,8 +274,11 @@ static void backup_cleanup_sync_bitmap(BackupBlockJob 
*job, int ret)
 BdrvDirtyBitmap *bm;
 BlockDriverState *bs = blk_bs(job->common.blk);
 
-if (ret < 0) {
-/* Merge the successor back into the parent, delete nothing. */
+if (ret < 0 || job->bitmap_mode == BITMAP_SYNC_MODE_NEVER) {
+/*
+ * Failure, or we don't want to synchronize the bitmap.
+ * Merge the successor back into the parent, delete nothing.
+ */
 bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
 assert(bm);
 } else {
diff --git a/qapi/block-core.json b/qapi/block-core.json
index dd926f78285..06e34488a30 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1146,10 +1146,13 @@
 # @on-success: The bitmap is only synced when the operation is successful.
 #  This is the behavior always used for 'INCREMENTAL' backups.
 #
+# @never: The bitmap is never synchronized with the operation, and is
+# treated solely as a read-only manifest of blocks to copy.
+#
 # Since: 4.2
 ##
 { 'enum': 'BitmapSyncMode',
-  'data': ['on-success'] }
+  'data': ['on-success', 'never'] }
 
 ##
 # @MirrorCopyMode:
-- 
2.21.0




Re: [Qemu-devel] [Qemu-block] [PATCH] block: posix: Always allocate the first block

2019-08-16 Thread John Snow



On 8/16/19 6:45 PM, Nir Soffer wrote:
> On Sat, Aug 17, 2019 at 12:57 AM John Snow  > wrote:
> 
> On 8/16/19 5:21 PM, Nir Soffer wrote:
> > When creating an image with preallocation "off" or "falloc", the first
> > block of the image is typically not allocated. When using Gluster
> > storage backed by XFS filesystem, reading this block using direct I/O
> > succeeds regardless of request length, fooling alignment detection.
> >
> > In this case we fallback to a safe value (4096) instead of the optimal
> > value (512), which may lead to unneeded data copying when aligning
> > requests.  Allocating the first block avoids the fallback.
> >
> 
> Where does this detection/fallback happen? (Can it be improved?)
> 
> 
> In raw_probe_alignment().
> 
> This patch explain the issues:
> https://lists.nongnu.org/archive/html/qemu-block/2019-08/msg00568.html
> 
> Here Kevin and me discussed ways to improve it:
> https://lists.nongnu.org/archive/html/qemu-block/2019-08/msg00426.html
> 

Thanks for the reading!
That does help explain this patch better.

> > When using preallocation=off, we always allocate at least one
> filesystem
> > block:
> >
> >     $ ./qemu-img create -f raw test.raw 1g
> >     Formatting 'test.raw', fmt=raw size=1073741824
> >
> >     $ ls -lhs test.raw
> >     4.0K -rw-r--r--. 1 nsoffer nsoffer 1.0G Aug 16 23:48 test.raw
> >
> > I did quick performance tests for these flows:
> > - Provisioning a VM with a new raw image.
> > - Copying disks with qemu-img convert to new raw target image
> >
> > I installed Fedora 29 server on raw sparse image, measuring the time
> > from clicking "Begin installation" until the "Reboot" button appears:
> >
> > Before(s)  After(s)     Diff(%)
> > ---
> >      356        389        +8.4
> >
> > I ran this only once, so we cannot tell much from these results.
> >
> 
> That seems like a pretty big difference for just having pre-allocated a
> single block. What was the actual command line / block graph for
> that test?
> 
> 
> Having the first block allocated changes the alignment.
> 
> Before this patch, we detect request_alignment=1, so we fallback to 4096.
> Then we detect buf_align=1, so we fallback to value of request alignment.
> 
> The guest see a disk with:
> logical_block_size = 512
> physical_block_size = 512
> 
> But qemu uses:
> request_alignment = 4096
> buf_align = 4096
> 
> storage uses:
> logical_block_size = 512
> physical_block_size = 512
> 
> If the guest does direct I/O using 512 bytes aligment, qemu has to copy
> the buffer to align them to 4096 bytes.
> 
> After this patch, qemu detects the alignment correctly, so we have:
> 
> guest
> logical_block_size = 512
> physical_block_size = 512
> 
> qemu
> request_alignment = 512
> buf_align = 512
> 
> storage:
> logical_block_size = 512
> physical_block_size = 512
> 
> We expect this to be more efficient because qemu does not have to emulate
> anything.
> 
> Was this over a network that could explain the variance?
> 
> 
> Maybe; this is a complete install of Fedora 29 Server, and I'm not sure
> if the installation
> accesses the network.
> 
> > The second test was cloning the installation image with qemu-img
> > convert, doing 10 runs:
> >
> >     for i in $(seq 10); do
> >         rm -f dst.raw
> >         sleep 10
> >         time ./qemu-img convert -f raw -O raw -t none -T none
> src.raw dst.raw
> >     done
> >
> > Here is a table comparing the total time spent:
> >
> > Type    Before(s)   After(s)    Diff(%)
> > ---
> > real      530.028    469.123      -11.4
> > user       17.204     10.768      -37.4
> > sys        17.881      7.011      -60.7
> >
> > Here we see very clear improvement in CPU usage.
> >
> 
> Hard to argue much with that. I feel a little strange trying to force
> the allocation of the first block, but I suppose in practice "almost no
> preallocation" is indistinguishable from "exactly no preallocation" if
> you squint.
> 
> 
> Right.
> 
> The real issue is that filesystems and block devices do not expose the
> alignment
> requirement for direct I/O, so we need to use these hacks and assumptions.
> 
> With local XFS we use xfsctl(XFS_IOC_DIOINFO) to get request_alignment,
> but this does
> not help for XFS filesystem used by Gluster on the server side.
> 
> I hope that Niels is working on adding a similar ioctl for Gluster, so it
> can expose the properties of the remote filesystem.
> 
> Nir

That sounds quite a bit less hacky, but I agree we still have to do what
we can in the meantime.

(It looks like you've been hashing this out with Kevin for a while, so
I'm going to sheepishly defer to his judgment on this patch. While I
think it's probably a fine 

Re: [Qemu-devel] [PATCH] ppc: Three floating point fixes

2019-08-16 Thread Aleksandar Markovic
16.08.2019. 21.28, "Paul A. Clarke"  wrote:
>
> From: "Paul A. Clarke" 
>
> - target/ppc/fpu_helper.c:
>   - helper_todouble() was not properly converting INFINITY from 32 bit
>   float to 64 bit double.
>   - helper_todouble() was not properly converting any denormalized
>   32 bit float to 64 bit double.
>
> - GCC, as of version 8 or so, takes advantage of the hardware's
>   implementation of the xscvdpspn instruction to optimize the following
>   sequence:
> xscvdpspn vs0,vs1
> mffprwz   r8,f0
>   ISA 3.0B has xscvdpspn leaving its result in word 1 of the target
register,
>   and mffprwz expecting its input to come from word 0 of the source
register.
>   This sequence fails with QEMU, as a shift is required between those two
>   instructions.  However, the hardware splats the result to both word 0
and
>   word 1 of its output register, so the shift is not necessary.
>   Expect a future revision of the ISA to specify this behavior.
>

Hmmm... Isn't this a gcc bug (using an undocumented hardware feature), given
everything you said here?

Sincerely,
Aleksandar

> Signed-off-by: Paul A. Clarke 
> ---
>  target/ppc/fpu_helper.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
> index 5611cf0..82b5425 100644
> --- a/target/ppc/fpu_helper.c
> +++ b/target/ppc/fpu_helper.c
> @@ -62,13 +62,14 @@ uint64_t helper_todouble(uint32_t arg)
>  ret  = (uint64_t)extract32(arg, 30, 2) << 62;
>  ret |= ((extract32(arg, 30, 1) ^ 1) * (uint64_t)7) << 59;
>  ret |= (uint64_t)extract32(arg, 0, 30) << 29;
> +ret |= (0x7ffULL * (extract32(arg, 23, 8) == 0xff)) << 52;
>  } else {
>  /* Zero or Denormalized operand.  */
>  ret = (uint64_t)extract32(arg, 31, 1) << 63;
>  if (unlikely(abs_arg != 0)) {
>  /* Denormalized operand.  */
>  int shift = clz32(abs_arg) - 9;
> -int exp = -126 - shift + 1023;
> +int exp = -127 - shift + 1023;
>  ret |= (uint64_t)exp << 52;
>  ret |= abs_arg << (shift + 29);
>  }
> @@ -2871,10 +2872,14 @@ void helper_xscvqpdp(CPUPPCState *env, uint32_t
opcode,
>
>  uint64_t helper_xscvdpspn(CPUPPCState *env, uint64_t xb)
>  {
> +uint64_t result;
> +
>  float_status tstat = env->fp_status;
>  set_float_exception_flags(0, &tstat);
>
> -return (uint64_t)float64_to_float32(xb, &tstat) << 32;
> +result = (uint64_t)float64_to_float32(xb, &tstat);
> +/* hardware replicates result to both words of the doubleword
result.  */
> +return (result << 32) | result;
>  }
>
>  uint64_t helper_xscvspdpn(CPUPPCState *env, uint64_t xb)
> --
> 1.8.3.1
>
>


[Qemu-devel] [Bug 1810400] Re: Failed to make dirty bitmaps writable: Can't update bitmap directory: Operation not permitted

2019-08-16 Thread John Snow
Acknowledged; target is 4.2.

Vladimir Sementsov-Ogievskiy has some patches in-flight that seek to
correct block commit behavior with bitmaps:
https://lists.gnu.org/archive/html/qemu-devel/2019-08/msg01160.html


** Changed in: qemu
   Status: New => Confirmed

** Changed in: qemu
 Assignee: (unassigned) => John Snow (jnsnow)

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1810400

Title:
   Failed to make dirty bitmaps writable: Can't update bitmap directory:
  Operation not permitted

Status in QEMU:
  Confirmed

Bug description:
  blockcommit does not work if there is a dirty block.

  virsh version
  Compiled against library: libvirt 4.10.0
  Using library: libvirt 4.10.0
  Using API: QEMU 4.10.0
  Running hypervisor: QEMU 2.12.0

  Scenario:
  1. Create an instance
  2. Add dirty bitmap to vm disk.
  3. create a snapshot(external or internal)
  4. revert snapshot or blockcommit disk

  virsh blockcommit rota-test vda  --active
  Active Block Commit started

  virsh blockjob rota-test vda --info
  No current block job for vda

  
  rota-test.log:
   starting up libvirt version: 4.10.0, package: 1.el7 (CBS , 
2018-12-05-12:27:12, c1bk.rdu2.centos.org), qemu version: 
2.12.0qemu-kvm-ev-2.12.0-18.el7_6.1.1, kernel: 4.1.12-103.9.7.el7uek.x86_64, 
hostname: vm-kvm07
  LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name 
guest=rota-test,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-101-rota-test/master-key.aes
 -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off,dump-guest-core=off -cpu 
SandyBridge,hypervisor=on,xsaveopt=on -m 8192 -realtime mlock=off -smp 
3,sockets=3,cores=1,threads=1 -uuid 50dec55c-a80a-4adc-a788-7ba23230064e 
-no-user-config -nodefaults -chardev socket,id=charmonitor,fd=59,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew 
-global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global 
PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device 
ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device 
ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive 
file=/var/lib/libvirt/images/rota-0003,format=qcow2,if=none,id=drive-virtio-disk0,cache=none
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
 -netdev tap,fd=61,id=hostnet0,vhost=on,vhostfd=62 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e8:09:94,bus=pci.0,addr=0x3 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-chardev spicevmc,id=charchannel0,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
 -spice port=5902,addr=0.0.0.0,disable-ticketing,seamless-migration=on -device 
qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
 -chardev spicevmc,id=charredir0,name=usbredir -device 
usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev 
spicevmc,id=charredir1,name=usbredir -device 
usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -sandbox 
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg 
timestamp=on
  2019-01-03T07:50:43.810142Z qemu-kvm: -chardev pty,id=charserial0: char 
device redirected to /dev/pts/3 (label charserial0)
  main_channel_link: add main channel client
  red_qxl_set_cursor_peer:
  inputs_connect: inputs channel client create
  inputs_channel_detach_tablet:
  #block339: Failed to make dirty bitmaps writable: Can't update bitmap 
directory: Operation not permitted

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1810400/+subscriptions



Re: [Qemu-devel] [Qemu-block] [PATCH] block: posix: Always allocate the first block

2019-08-16 Thread Nir Soffer
On Sat, Aug 17, 2019 at 12:57 AM John Snow  wrote:

> On 8/16/19 5:21 PM, Nir Soffer wrote:
> > When creating an image with preallocation "off" or "falloc", the first
> > block of the image is typically not allocated. When using Gluster
> > storage backed by XFS filesystem, reading this block using direct I/O
> > succeeds regardless of request length, fooling alignment detection.
> >
> > In this case we fall back to a safe value (4096) instead of the optimal
> > value (512), which may lead to unneeded data copying when aligning
> > requests.  Allocating the first block avoids the fallback.
> >
>
> Where does this detection/fallback happen? (Can it be improved?)
>

In raw_probe_alignment().

This patch explains the issues:
https://lists.nongnu.org/archive/html/qemu-block/2019-08/msg00568.html

Here Kevin and me discussed ways to improve it:
https://lists.nongnu.org/archive/html/qemu-block/2019-08/msg00426.html

> When using preallocation=off, we always allocate at least one filesystem
> > block:
> >
> > $ ./qemu-img create -f raw test.raw 1g
> > Formatting 'test.raw', fmt=raw size=1073741824
> >
> > $ ls -lhs test.raw
> > 4.0K -rw-r--r--. 1 nsoffer nsoffer 1.0G Aug 16 23:48 test.raw
> >
> > I did quick performance tests for these flows:
> > - Provisioning a VM with a new raw image.
> > - Copying disks with qemu-img convert to new raw target image
> >
> > I installed Fedora 29 server on raw sparse image, measuring the time
> > from clicking "Begin installation" until the "Reboot" button appears:
> >
> > Before(s)  After(s) Diff(%)
> > ---
> >      356        389        +8.4
> >
> > I ran this only once, so we cannot tell much from these results.
> >
>
> That seems like a pretty big difference for just having pre-allocated a
> single block. What was the actual command line / block graph for that test?
>

Having the first block allocated changes the alignment.

Before this patch, we detect request_alignment=1, so we fall back to 4096.
Then we detect buf_align=1, so we fall back to the value of request_alignment.

The guest sees a disk with:
logical_block_size = 512
physical_block_size = 512

But qemu uses:
request_alignment = 4096
buf_align = 4096

storage uses:
logical_block_size = 512
physical_block_size = 512

If the guest does direct I/O using 512-byte alignment, qemu has to copy
the buffers to align them to 4096 bytes.

After this patch, qemu detects the alignment correctly, so we have:

guest
logical_block_size = 512
physical_block_size = 512

qemu
request_alignment = 512
buf_align = 512

storage:
logical_block_size = 512
physical_block_size = 512

We expect this to be more efficient because qemu does not have to emulate
anything.
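
For illustration, a rough sketch of this probing scheme, assuming a file
descriptor already opened with O_DIRECT; this is a simplification, not
QEMU's actual raw_probe_alignment(), and the fixed 4096 fallback is just
the conservative guess described above:

    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch only: probe the request alignment by issuing reads of
     * increasing length at offset 0 through a descriptor opened with
     * O_DIRECT.  The first length the kernel accepts is taken as the
     * alignment.  If even a 1-byte read succeeds, as it does on an
     * unallocated first block with Gluster on XFS, the probe has learned
     * nothing and falls back to a conservative 4096. */
    static size_t probe_request_alignment(int fd)
    {
        static const size_t lengths[] = { 1, 512, 1024, 2048, 4096 };
        void *buf = NULL;
        size_t align = 4096;                          /* safe fallback */

        if (posix_memalign(&buf, 4096, 4096) != 0) {
            return align;
        }
        for (size_t i = 0; i < sizeof(lengths) / sizeof(lengths[0]); i++) {
            if (pread(fd, buf, lengths[i], 0) >= 0) {
                align = lengths[i] > 1 ? lengths[i] : 4096;
                break;
            }
        }
        free(buf);
        return align;
    }

With the first block allocated, the 1-byte read is rejected again and the
loop can land on the real 512 instead of guessing.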

Was this over a network that could explain the variance?
>

Maybe; this is a complete install of Fedora 29 Server, and I'm not sure
whether the installation accesses the network.

> The second test was cloning the installation image with qemu-img
> > convert, doing 10 runs:
> >
> > for i in $(seq 10); do
> > rm -f dst.raw
> > sleep 10
> > time ./qemu-img convert -f raw -O raw -t none -T none src.raw
> dst.raw
> > done
> >
> > Here is a table comparing the total time spent:
> >
> > Type    Before(s)   After(s)    Diff(%)
> > ---
> > real      530.028    469.123      -11.4
> > user       17.204     10.768      -37.4
> > sys        17.881      7.011      -60.7
> >
> > Here we see very clear improvement in CPU usage.
> >
>
> Hard to argue much with that. I feel a little strange trying to force
> the allocation of the first block, but I suppose in practice "almost no
> preallocation" is indistinguishable from "exactly no preallocation" if
> you squint.
>

Right.

The real issue is that filesystems and block devices do not expose the
alignment
requirement for direct I/O, so we need to use these hacks and assumptions.

With local XFS we use xfsctl(XFS_IOC_DIOINFO) to get request_alignment, but
this does
not help for XFS filesystem used by Gluster on the server side.
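
For reference, the local-XFS query mentioned above looks roughly like
this; a sketch assuming the xfsprogs headers are available, not the exact
QEMU code:

    #include <xfs/xfs.h>    /* xfsctl(), XFS_IOC_DIOINFO, struct dioattr */

    /* Ask the local XFS filesystem for its direct I/O constraints instead
     * of probing; returns 0 if the query is unavailable so the caller can
     * fall back to probing. */
    static unsigned int xfs_dio_request_alignment(const char *path, int fd)
    {
        struct dioattr da;

        if (xfsctl(path, fd, XFS_IOC_DIOINFO, &da) >= 0) {
            return da.d_miniosz;    /* minimum direct I/O size */
        }
        return 0;
    }

A Gluster equivalent of this query is exactly what is missing on the
server side.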

I hope that Niels is working on adding a similar ioctl for Gluster, so it can
expose the properties of the remote filesystem.

Nir


Re: [Qemu-devel] [POC Seabios PATCH] seabios: use isolated SMM address space for relocation

2019-08-16 Thread Boris Ostrovsky
On 8/16/19 7:24 AM, Igor Mammedov wrote:
> For the purpose of the demo, SMRAM (at 0x3) is aliased at a in the system
> address space for easy initialization of the SMI entry point.
> Here is the resulting debug output, showing that RAM at 0x3 is not affected
> by SMM and only RAM in the SMM address space is modified:
>
> init smm
> smm_relocate: before relocaten
> smm_relocate: RAM codeentry 0
> smm_relocate: RAM  cpu.i64.smm_base  0
> smm_relocate: SMRAM  codeentry f000c831eac88c
> smm_relocate: SMRAM  cpu.i64.smm_base  0
> handle_smi cmd=0 smbase=0x0003
> smm_relocate: after relocaten
> smm_relocate: RAM codeentry 0
> smm_relocate: RAM  cpu.i64.smm_base  0
> smm_relocate: SMRAM  codeentry f000c831eac88c
> smm_relocate: SMRAM  cpu.i64.smm_base  a


I most likely don't understand how this is supposed to work, but aren't
we here successfully reading SMRAM from a non-SMM context, something we
are not supposed to be able to do?


-boris




Re: [Qemu-devel] [PATCH] Add support for ethtool via TARGET_SIOCETHTOOL ioctls.

2019-08-16 Thread Aleksandar Markovic
16.08.2019. 23.28, "Shu-Chun Weng via Qemu-devel"  wrote:
>
> The ioctl numeric values are platform-independent and determined by
> the file include/uapi/linux/sockios.h in Linux kernel source code:
>
>   #define SIOCETHTOOL   0x8946
>
> These ioctls get (or set) the field ifr_data of type char* in the
> structure ifreq. Such functionality is achieved in QEMU by using
> MK_STRUCT() and MK_PTR() macros with an appropriate argument, as
> it was done for existing similar cases.
>
> Signed-off-by: Shu-Chun Weng 
> ---

Shu-Chun, hi, and welcome!

Just a couple of cosmetic things:

  - by convention, the title of this patch should start with "linux-user:",
since this patch affects the linux-user QEMU module;

  - the patch title is too long (and has some minor mistakes) -
"linux-user: Add support for SIOCETHTOOL ioctl" should be good enough;

  - the length of the code lines that you add or modify must not be greater
than 80 characters.

Sincerely,
Aleksandar

>  linux-user/ioctls.h   | 1 +
>  linux-user/syscall_defs.h | 2 ++
>  2 files changed, 3 insertions(+)
>
> diff --git a/linux-user/ioctls.h b/linux-user/ioctls.h
> index 3281c97ca2..9d231df665 100644
> --- a/linux-user/ioctls.h
> +++ b/linux-user/ioctls.h
> @@ -208,6 +208,7 @@
>IOCTL(SIOCGIFINDEX, IOC_W | IOC_R, MK_PTR(MK_STRUCT(STRUCT_int_ifreq)))
>IOCTL(SIOCSIFPFLAGS, IOC_W, MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
>IOCTL(SIOCGIFPFLAGS, IOC_W | IOC_R,
MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
> +  IOCTL(SIOCETHTOOL, IOC_R | IOC_W, MK_PTR(MK_STRUCT(STRUCT_ptr_ifreq)))
>IOCTL(SIOCSIFLINK, 0, TYPE_NULL)
>IOCTL_SPECIAL(SIOCGIFCONF, IOC_W | IOC_R, do_ioctl_ifconf,
>  MK_PTR(MK_STRUCT(STRUCT_ifconf)))
> diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
> index 0662270300..276f96039f 100644
> --- a/linux-user/syscall_defs.h
> +++ b/linux-user/syscall_defs.h
> @@ -819,6 +819,8 @@ struct target_pollfd {
>  #define TARGET_SIOCGIFTXQLEN   0x8942  /* Get the tx queue
length  */
>  #define TARGET_SIOCSIFTXQLEN   0x8943  /* Set the tx queue
length  */
>
> +#define TARGET_SIOCETHTOOL 0x8946  /* Ethtool interface
  */
> +
>  /* ARP cache control calls. */
>  #define TARGET_OLD_SIOCDARP0x8950  /* old delete ARP table
entry   */
>  #define TARGET_OLD_SIOCGARP0x8951  /* old get ARP table
entry  */
> --
> 2.23.0.rc1.153.gdeed80330f-goog
>
>


Re: [Qemu-devel] bitmaps branch conflict resolution

2019-08-16 Thread Max Reitz
On 17.08.19 00:07, John Snow wrote:
> Hi Max, I took your patch and adjusted it slightly: I don't like
> "skip_bytes" anymore because it's clear now that we don't only read that
> value when we're skipping bytes, so now it's just status_bytes.

Yep, sure.

> Since this is based on your fixup, would you like to offer an
> Ack/S-o-b/R-B/whichever here?

Sure:

Reviewed-by: Max Reitz 

Additional explanation for others:

The conflict resolution in itself is just a matter of the
“backup_bitmap_reset_unallocated” block and the
“bdrv_dirty_bitmap_next_zero” block introduced in the same place in two
separate patches (one went to master, the other to bitmaps-next).

So the question is how to order them.  At first glance it doesn’t
matter; it can go both ways.

On a second glance, it turns out we need to combine the results, hence
the new MIN() here.

If we are initializing the bitmap, bdrv_dirty_bitmap_next_zero() does
not necessarily return the correct result.  It is only accurate insofar as
we have actually initialized the bitmap.  We can get that information
from backup_bitmap_reset_unallocated(): It ensures that the bitmap is
accurate in the [start, start + status_bytes) range.

Therefore, we have to limit dirty_end by start + status_bytes.
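
A quick numeric illustration of the clamp, with made-up values rather
than anything from the series:

    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        long long start        = 0;
        long long dirty_end    = 1 << 20;   /* next zero bit reported at 1 MiB */
        long long status_bytes = 64 << 10;  /* bitmap only known-accurate for 64 KiB */

        dirty_end = MIN(dirty_end, start + status_bytes);
        printf("dirty_end = %lld\n", dirty_end);    /* 65536, not 1048576 */
        return 0;
    }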

I don’t think it really matters whether we do the
backup_bitmap_reset_unallocated() or the bdrv_dirty_bitmap_next_zero()
first.  It’s just that it’s slightly simpler to do the latter first,
because the former is in a conditional block, so we can put the MIN()
right there.  Hence the order change here.

(If we did it the other way around, we’d need another conditional block
“if (job->initializing_bitmap) { dirty_end = MIN(...) }” after we have
both dirty_end and status_bytes.)

Max

> diff --git a/block/backup.c b/block/backup.c
> index ee4d5598986..9e1382ec5c6 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -266,7 +266,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob
> *job,
>  int ret = 0;
>  int64_t start, end; /* bytes */
>  void *bounce_buffer = NULL;
> -int64_t skip_bytes;
> +int64_t status_bytes;
> 
>  qemu_co_rwlock_rdlock(&job->flush_rwlock);
> 
> @@ -287,21 +287,23 @@ static int coroutine_fn
> backup_do_cow(BackupBlockJob *job,
>  continue; /* already copied */
>  }
> 
> -if (job->initializing_bitmap) {
> -ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
> -if (ret == 0) {
> -trace_backup_do_cow_skip_range(job, start, skip_bytes);
> -start += skip_bytes;
> -continue;
> -}
> -}
> -
>  dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>  (end - start));
>  if (dirty_end < 0) {
>  dirty_end = end;
>  }
> 
> +if (job->initializing_bitmap) {
> +ret = backup_bitmap_reset_unallocated(job, start, &status_bytes);
> +if (ret == 0) {
> +trace_backup_do_cow_skip_range(job, start, status_bytes);
> +start += status_bytes;
> +continue;
> +}
> +/* Clamp to known allocated region */
> +dirty_end = MIN(dirty_end, start + status_bytes);
> +}
> +
>  trace_backup_do_cow_process(job, start);
> 
>  if (job->use_copy_range) {
> 




signature.asc
Description: OpenPGP digital signature


Re: [Qemu-devel] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF

2019-08-16 Thread Alex Williamson
On Fri, 16 Aug 2019 22:15:15 +0200
Laszlo Ersek  wrote:

> +Alex (direct question at the bottom)
> 
> On 08/16/19 09:49, Yao, Jiewen wrote:
> > below
> >   
> >> -Original Message-
> >> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
> >> Sent: Friday, August 16, 2019 3:20 PM
> >> To: Yao, Jiewen ; Laszlo Ersek
> >> ; de...@edk2.groups.io
> >> Cc: edk2-rfc-groups-io ; qemu devel list
> >> ; Igor Mammedov ;
> >> Chen, Yingwen ; Nakajima, Jun
> >> ; Boris Ostrovsky ;
> >> Joao Marcal Lemos Martins ; Phillip Goerl
> >> 
> >> Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> >>
> >> On 16/08/19 04:46, Yao, Jiewen wrote:  
> >>> Comment below:
> >>>
> >>>  
>  -Original Message-
>  From: Paolo Bonzini [mailto:pbonz...@redhat.com]
>  Sent: Friday, August 16, 2019 12:21 AM
>  To: Laszlo Ersek ; de...@edk2.groups.io; Yao,  
> >> Jiewen  
>  
>  Cc: edk2-rfc-groups-io ; qemu devel list
>  ; Igor Mammedov  
> >> ;  
>  Chen, Yingwen ; Nakajima, Jun
>  ; Boris Ostrovsky  
> >> ;  
>  Joao Marcal Lemos Martins ; Phillip Goerl
>  
>  Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
> 
>  On 15/08/19 17:00, Laszlo Ersek wrote:  
> > On 08/14/19 16:04, Paolo Bonzini wrote:  
> >> On 14/08/19 15:20, Yao, Jiewen wrote:  
>  - Does this part require a new branch somewhere in the OVMF SEC  
>  code?  
>    How do we determine whether the CPU executing SEC is BSP or
>    hot-plugged AP?  
> >>> [Jiewen] I think this is blocked from hardware perspective, since the 
> >>>  
> >> first  
>  instruction.  
> >>> There are some hardware specific registers can be used to determine  
> >> if  
>  the CPU is new added.  
> >>> I don’t think this must be same as the real hardware.
> >>> You are free to invent some registers in device model to be used in  
>  OVMF hot plug driver.  
> >>
> >> Yes, this would be a new operation mode for QEMU, that only applies  
> >> to  
> >> hot-plugged CPUs.  In this mode the AP doesn't reply to INIT or SMI,  
> >> in  
> >> fact it doesn't reply to anything at all.
> >>  
>  - How do we tell the hot-plugged AP where to start execution? (I.e.  
>  that  
>    it should execute code at a particular pflash location.)  
> >>> [Jiewen] Same real mode reset vector at :FFF0.  
> >>
> >> You do not need a reset vector or INIT/SIPI/SIPI sequence at all in
> >> QEMU.  The AP does not start execution at all when it is unplugged,  
> >> so  
> >> no cache-as-RAM etc.
> >>
> >> We only need to modify QEMU so that hot-plugged APIs do not reply  
> >> to  
> >> INIT/SIPI/SMI.
> >>  
> >>> I don’t think there is problem for real hardware, who always has CAR.
> >>> Can QEMU provide some CPU specific space, such as MMIO region?  
> >>
> >> Why is a CPU-specific region needed if every other processor is in SMM
> >> and thus trusted.  
> >
> > I was going through the steps Jiewen and Yingwen recommended.
> >
> > In step (02), the new CPU is expected to set up RAM access. In step
> > (03), the new CPU, executing code from flash, is expected to "send  
> >> board  
> > message to tell host CPU (GPIO->SCI) -- I am waiting for hot-add
> > message." For that action, the new CPU may need a stack (minimally if  
> >> we  
> > want to use C function calls).
> >
> > Until step (03), there had been no word about any other (= pre-plugged)
> > CPUs (more precisely, Jiewen even confirmed "No impact to other
> > processors"), so I didn't assume that other CPUs had entered SMM.
> >
> > Paolo, I've attempted to read Jiewen's response, and yours, as carefully
> > as I can. I'm still very confused. If you have a better understanding,
> > could you please write up the 15-step process from the thread starter
> > again, with all QEMU customizations applied? Such as, unnecessary  
> >> steps  
> > removed, and platform specifics filled in.  
> 
>  Sure.
> 
>  (01a) QEMU: create new CPU.  The CPU already exists, but it does not
>   start running code until unparked by the CPU hotplug controller.
> 
>  (01b) QEMU: trigger SCI
> 
>  (02-03) no equivalent
> 
>  (04) Host CPU: (OS) execute GPE handler from DSDT
> 
>  (05) Host CPU: (OS) Port 0xB2 write, all CPUs enter SMM (NOTE: New CPU
  will not enter SMM because SMI is disabled)
> 
>  (06) Host CPU: (SMM) Save 38000, Update 38000 -- fill simple SMM
>   rebase code.
> 
>  (07a) Host CPU: (SMM) Write to CPU hotplug controller to enable
>   new CPU
> 
>  (07b) Host CPU: (SMM) Send INIT/SIPI/SIPI to new CPU.  
> >>> [Jiewen] NOTE: INIT/SIPI/SIPI can be sent by a malicious CPU. There is no
> >>> restriction that INIT/SIPI/SIPI can only be sent in SMM.  
> 

Re: [Qemu-devel] [Qemu-block] [PATCH] block: posix: Always allocate the first block

2019-08-16 Thread John Snow



On 8/16/19 5:21 PM, Nir Soffer wrote:
> When creating an image with preallocation "off" or "falloc", the first
> block of the image is typically not allocated. When using Gluster
> storage backed by XFS filesystem, reading this block using direct I/O
> succeeds regardless of request length, fooling alignment detection.
> 
> In this case we fall back to a safe value (4096) instead of the optimal
> value (512), which may lead to unneeded data copying when aligning
> requests.  Allocating the first block avoids the fallback.
> 

Where does this detection/fallback happen? (Can it be improved?)

> When using preallocation=off, we always allocate at least one filesystem
> block:
> 
> $ ./qemu-img create -f raw test.raw 1g
> Formatting 'test.raw', fmt=raw size=1073741824
> 
> $ ls -lhs test.raw
> 4.0K -rw-r--r--. 1 nsoffer nsoffer 1.0G Aug 16 23:48 test.raw
> 
> I did quick performance tests for these flows:
> - Provisioning a VM with a new raw image.
> - Copying disks with qemu-img convert to new raw target image
> 
> I installed Fedora 29 server on raw sparse image, measuring the time
> from clicking "Begin installation" until the "Reboot" button appears:
> 
> Before(s)  After(s) Diff(%)
> ---
>      356        389        +8.4
> 
> I ran this only once, so we cannot tell much from these results.
> 

That seems like a pretty big difference for just having pre-allocated a
single block. What was the actual command line / block graph for that test?

Was this over a network that could explain the variance?

> The second test was cloning the installation image with qemu-img
> convert, doing 10 runs:
> 
> for i in $(seq 10); do
> rm -f dst.raw
> sleep 10
> time ./qemu-img convert -f raw -O raw -t none -T none src.raw dst.raw
> done
> 
> Here is a table comparing the total time spent:
> 
> Type    Before(s)   After(s)    Diff(%)
> ---
> real      530.028    469.123      -11.4
> user       17.204     10.768      -37.4
> sys        17.881      7.011      -60.7
> 
> Here we see very clear improvement in CPU usage.
> 

Hard to argue much with that. I feel a little strange trying to force
the allocation of the first block, but I suppose in practice "almost no
preallocation" is indistinguishable from "exactly no preallocation" if
you squint.

> Signed-off-by: Nir Soffer 
> ---
>  block/file-posix.c | 25 +
>  tests/qemu-iotests/150.out |  1 +
>  tests/qemu-iotests/160 |  4 
>  tests/qemu-iotests/175 | 19 +--
>  tests/qemu-iotests/175.out |  8 
>  tests/qemu-iotests/221.out | 12 
>  tests/qemu-iotests/253.out | 12 
>  7 files changed, 63 insertions(+), 18 deletions(-)
> 
> diff --git a/block/file-posix.c b/block/file-posix.c
> index b9c33c8f6c..3964dd2021 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -1755,6 +1755,27 @@ static int handle_aiocb_discard(void *opaque)
>  return ret;
>  }
>  
> +/*
> + * Help alignment detection by allocating the first block.
> + *
> + * When reading with direct I/O from unallocated area on Gluster backed by 
> XFS,
> + * reading succeeds regardless of request length. In this case we fall back to
> + * a safe alignment, which is not optimal. Allocating the first block avoids this
> + * fallback.
> + *
> + * Returns: 0 on success, -errno on failure.
> + */
> +static int allocate_first_block(int fd)
> +{
> +ssize_t n;
> +
> +do {
> +n = pwrite(fd, "\0", 1, 0);
> +} while (n == -1 && errno == EINTR);
> +
> +return (n == -1) ? -errno : 0;
> +}
> +
>  static int handle_aiocb_truncate(void *opaque)
>  {
>  RawPosixAIOData *aiocb = opaque;
> @@ -1794,6 +1815,8 @@ static int handle_aiocb_truncate(void *opaque)
>  /* posix_fallocate() doesn't set errno. */
>  error_setg_errno(errp, -result,
>   "Could not preallocate new data");
> +} else if (current_length == 0) {
> +allocate_first_block(fd);
>  }
>  } else {
>  result = 0;
> @@ -1855,6 +1878,8 @@ static int handle_aiocb_truncate(void *opaque)
>  if (ftruncate(fd, offset) != 0) {
>  result = -errno;
>  error_setg_errno(errp, -result, "Could not resize file");
> +} else if (current_length == 0 && offset > current_length) {
> +allocate_first_block(fd);
>  }
>  return result;
>  default:
> diff --git a/tests/qemu-iotests/150.out b/tests/qemu-iotests/150.out
> index 2a54e8dcfa..3cdc7727a5 100644
> --- a/tests/qemu-iotests/150.out
> +++ b/tests/qemu-iotests/150.out
> @@ -3,6 +3,7 @@ QA output created by 150
>  === Mapping sparse conversion ===
>  
>  Offset  Length  File
> +0   0x1000  TEST_DIR/t.IMGFMT
>  
>  === Mapping non-sparse conversion ===
>  
> diff --git 

Re: [Qemu-devel] [PATCH] Add support for ethtool via TARGET_SIOCETHTOOL ioctls.

2019-08-16 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20190816211356.59244-1-...@google.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH] Add support for ethtool via TARGET_SIOCETHTOOL 
ioctls.
Message-id: 20190816211356.59244-1-...@google.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20190816211356.59244-1-...@google.com -> 
patchew/20190816211356.59244-1-...@google.com
Submodule 'capstone' (https://git.qemu.org/git/capstone.git) registered for 
path 'capstone'
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (https://git.qemu.org/git/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (https://git.qemu.org/git/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/edk2' (https://git.qemu.org/git/edk2.git) registered for path 
'roms/edk2'
Submodule 'roms/ipxe' (https://git.qemu.org/git/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (https://git.qemu.org/git/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (https://git.qemu.org/git/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/opensbi' (https://git.qemu.org/git/opensbi.git) registered for 
path 'roms/opensbi'
Submodule 'roms/qemu-palcode' (https://git.qemu.org/git/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (https://git.qemu.org/git/seabios.git/) registered for 
path 'roms/seabios'
Submodule 'roms/seabios-hppa' (https://git.qemu.org/git/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (https://git.qemu.org/git/sgabios.git) registered for 
path 'roms/sgabios'
Submodule 'roms/skiboot' (https://git.qemu.org/git/skiboot.git) registered for 
path 'roms/skiboot'
Submodule 'roms/u-boot' (https://git.qemu.org/git/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/u-boot-sam460ex' (https://git.qemu.org/git/u-boot-sam460ex.git) 
registered for path 'roms/u-boot-sam460ex'
Submodule 'slirp' (https://git.qemu.org/git/libslirp.git) registered for path 
'slirp'
Submodule 'tests/fp/berkeley-softfloat-3' 
(https://git.qemu.org/git/berkeley-softfloat-3.git) registered for path 
'tests/fp/berkeley-softfloat-3'
Submodule 'tests/fp/berkeley-testfloat-3' 
(https://git.qemu.org/git/berkeley-testfloat-3.git) registered for path 
'tests/fp/berkeley-testfloat-3'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'90c488d5f4a407342247b9ea869df1c2d9c8e266'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'ba1ab360eebe6338bb8d7d83a9220ccf7e213af3'
Cloning into 'roms/edk2'...
Submodule path 'roms/edk2': checked out 
'20d2e5a125e34fc8501026613a71549b2a1a3e54'
Submodule 'SoftFloat' (https://github.com/ucb-bar/berkeley-softfloat-3.git) 
registered for path 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
Submodule 'CryptoPkg/Library/OpensslLib/openssl' 
(https://github.com/openssl/openssl) registered for path 
'CryptoPkg/Library/OpensslLib/openssl'
Cloning into 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'...
Submodule path 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3': 
checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'CryptoPkg/Library/OpensslLib/openssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl': checked out 
'50eaac9f3337667259de725451f201e784599687'
Submodule 'boringssl' (https://boringssl.googlesource.com/boringssl) registered 
for path 'boringssl'
Submodule 'krb5' (https://github.com/krb5/krb5) registered for path 'krb5'
Submodule 'pyca.cryptography' (https://github.com/pyca/cryptography.git) 
registered for path 'pyca-cryptography'
Cloning into 'boringssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/boringssl': 
checked out '2070f8ad9151dc8f3a73bffaa146b5e6937a583f'
Cloning into 'krb5'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/krb5': checked 
out 'b9ad6c49505c96a088326b62a52568e3484f2168'
Cloning into 'pyca-cryptography'...
Submodule path 
'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/pyca-cryptography': checked out 
'09403100de2f6f1cdd0d484dcb8e620f1c335c8f'
Cloning into 'roms/ipxe'...
Submodule path 

[Qemu-devel] [PATCH] linux-user: add memfd_create

2019-08-16 Thread Shu-Chun Weng via Qemu-devel
Add support for the memfd_create syscall. If the host does not have the
libc wrapper, translate it to a direct syscall using the __NR_memfd_create macro.
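
For context, a guest program exercising this path could be as small as the
following; a hypothetical test, not part of the patch, using the raw
syscall so it builds even where libc lacks the wrapper (it assumes
__NR_memfd_create is defined for the build architecture):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef MFD_CLOEXEC
    #define MFD_CLOEXEC 0x0001U
    #endif

    int main(void)
    {
        /* Raw syscall, mirroring the fallback added to util/memfd.c. */
        int fd = syscall(__NR_memfd_create, "demo", MFD_CLOEXEC);

        if (fd < 0) {
            perror("memfd_create");
            return 1;
        }
        if (ftruncate(fd, 4096) != 0) {     /* size the anonymous file */
            perror("ftruncate");
            return 1;
        }
        printf("memfd fd=%d\n", fd);
        return 0;
    }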

Buglink: https://bugs.launchpad.net/qemu/+bug/1734792
Signed-off-by: Shu-Chun Weng 
---
 include/qemu/memfd.h |  4 
 linux-user/syscall.c | 11 +++
 util/memfd.c |  2 +-
 3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/include/qemu/memfd.h b/include/qemu/memfd.h
index d551c28b68..975b6bdb77 100644
--- a/include/qemu/memfd.h
+++ b/include/qemu/memfd.h
@@ -32,6 +32,10 @@
 #define MFD_HUGE_SHIFT 26
 #endif
 
+#if defined CONFIG_LINUX && !defined CONFIG_MEMFD
+int memfd_create(const char *name, unsigned int flags);
+#endif
+
 int qemu_memfd_create(const char *name, size_t size, bool hugetlb,
   uint64_t hugetlbsize, unsigned int seals, Error **errp);
 bool qemu_memfd_alloc_check(void);
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 8367cb138d..b506c1f40e 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -20,6 +20,7 @@
 #include "qemu/osdep.h"
 #include "qemu/cutils.h"
 #include "qemu/path.h"
+#include "qemu/memfd.h"
 #include 
 #include 
 #include 
@@ -11938,6 +11939,16 @@ static abi_long do_syscall1(void *cpu_env, int num, 
abi_long arg1,
 /* PowerPC specific.  */
 return do_swapcontext(cpu_env, arg1, arg2, arg3);
 #endif
+#ifdef TARGET_NR_memfd_create
+case TARGET_NR_memfd_create:
+p = lock_user_string(arg1);
+if (!p) {
+return -TARGET_EFAULT;
+}
+ret = get_errno(memfd_create(p, arg2));
+unlock_user(p, arg1, 0);
+return ret;
+#endif
 
 default:
 qemu_log_mask(LOG_UNIMP, "Unsupported syscall: %d\n", num);
diff --git a/util/memfd.c b/util/memfd.c
index 00334e5b21..4a3c07e0be 100644
--- a/util/memfd.c
+++ b/util/memfd.c
@@ -35,7 +35,7 @@
 #include 
 #include 
 
-static int memfd_create(const char *name, unsigned int flags)
+int memfd_create(const char *name, unsigned int flags)
 {
 #ifdef __NR_memfd_create
 return syscall(__NR_memfd_create, name, flags);
-- 
2.23.0.rc1.153.gdeed80330f-goog




[Qemu-devel] [PATCH] Add support for ethtool via TARGET_SIOCETHTOOL ioctls.

2019-08-16 Thread Shu-Chun Weng via Qemu-devel
The ioctl numeric values are platform-independent and determined by
the file include/uapi/linux/sockios.h in Linux kernel source code:

  #define SIOCETHTOOL   0x8946

These ioctls get (or set) the field ifr_data of type char* in the
structure ifreq. Such functionality is achieved in QEMU by using
MK_STRUCT() and MK_PTR() macros with an appropriate argument, as
it was done for existing similar cases.
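
For illustration, this is the kind of guest code the translation has to
service; a sketch, not part of the patch, and the interface name is made
up:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        struct ethtool_drvinfo info = { .cmd = ETHTOOL_GDRVINFO };
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* made-up interface */
        ifr.ifr_data = (char *)&info;      /* the char * payload described above */

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0) {
            printf("driver %s version %s\n", info.driver, info.version);
        } else {
            perror("SIOCETHTOOL");
        }
        return 0;
    }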

Signed-off-by: Shu-Chun Weng 
---
 linux-user/ioctls.h   | 1 +
 linux-user/syscall_defs.h | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/linux-user/ioctls.h b/linux-user/ioctls.h
index 3281c97ca2..9d231df665 100644
--- a/linux-user/ioctls.h
+++ b/linux-user/ioctls.h
@@ -208,6 +208,7 @@
   IOCTL(SIOCGIFINDEX, IOC_W | IOC_R, MK_PTR(MK_STRUCT(STRUCT_int_ifreq)))
   IOCTL(SIOCSIFPFLAGS, IOC_W, MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
   IOCTL(SIOCGIFPFLAGS, IOC_W | IOC_R, MK_PTR(MK_STRUCT(STRUCT_short_ifreq)))
+  IOCTL(SIOCETHTOOL, IOC_R | IOC_W, MK_PTR(MK_STRUCT(STRUCT_ptr_ifreq)))
   IOCTL(SIOCSIFLINK, 0, TYPE_NULL)
   IOCTL_SPECIAL(SIOCGIFCONF, IOC_W | IOC_R, do_ioctl_ifconf,
 MK_PTR(MK_STRUCT(STRUCT_ifconf)))
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index 0662270300..276f96039f 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -819,6 +819,8 @@ struct target_pollfd {
 #define TARGET_SIOCGIFTXQLEN   0x8942  /* Get the tx queue length  
*/
 #define TARGET_SIOCSIFTXQLEN   0x8943  /* Set the tx queue length  
*/
 
+#define TARGET_SIOCETHTOOL 0x8946  /* Ethtool interface
*/
+
 /* ARP cache control calls. */
 #define TARGET_OLD_SIOCDARP0x8950  /* old delete ARP table entry   
*/
 #define TARGET_OLD_SIOCGARP0x8951  /* old get ARP table entry  
*/
-- 
2.23.0.rc1.153.gdeed80330f-goog




[Qemu-devel] [PATCH] block: posix: Always allocate the first block

2019-08-16 Thread Nir Soffer
When creating an image with preallocation "off" or "falloc", the first
block of the image is typically not allocated. When using Gluster
storage backed by XFS filesystem, reading this block using direct I/O
succeeds regardless of request length, fooling alignment detection.

In this case we fall back to a safe value (4096) instead of the optimal
value (512), which may lead to unneeded data copying when aligning
requests.  Allocating the first block avoids the fallback.

When using preallocation=off, we always allocate at least one filesystem
block:

$ ./qemu-img create -f raw test.raw 1g
Formatting 'test.raw', fmt=raw size=1073741824

$ ls -lhs test.raw
4.0K -rw-r--r--. 1 nsoffer nsoffer 1.0G Aug 16 23:48 test.raw

I did quick performance tests for these flows:
- Provisioning a VM with a new raw image.
- Copying disks with qemu-img convert to new raw target image

I installed Fedora 29 server on raw sparse image, measuring the time
from clicking "Begin installation" until the "Reboot" button appears:

Before(s)  After(s) Diff(%)
---
     356        389        +8.4

I ran this only once, so we cannot tell much from these results.

The second test was cloning the installation image with qemu-img
convert, doing 10 runs:

for i in $(seq 10); do
rm -f dst.raw
sleep 10
time ./qemu-img convert -f raw -O raw -t none -T none src.raw dst.raw
done

Here is a table comparing the total time spent:

Type    Before(s)   After(s)    Diff(%)
---
real      530.028    469.123      -11.4
user       17.204     10.768      -37.4
sys        17.881      7.011      -60.7

Here we see very clear improvement in CPU usage.

Signed-off-by: Nir Soffer 
---
 block/file-posix.c | 25 +
 tests/qemu-iotests/150.out |  1 +
 tests/qemu-iotests/160 |  4 
 tests/qemu-iotests/175 | 19 +--
 tests/qemu-iotests/175.out |  8 
 tests/qemu-iotests/221.out | 12 
 tests/qemu-iotests/253.out | 12 
 7 files changed, 63 insertions(+), 18 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index b9c33c8f6c..3964dd2021 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1755,6 +1755,27 @@ static int handle_aiocb_discard(void *opaque)
 return ret;
 }
 
+/*
+ * Help alignment detection by allocating the first block.
+ *
+ * When reading with direct I/O from unallocated area on Gluster backed by XFS,
+ * reading succeeds regardless of request length. In this case we fall back to
+ * a safe alignment, which is not optimal. Allocating the first block avoids this
+ * fallback.
+ *
+ * Returns: 0 on success, -errno on failure.
+ */
+static int allocate_first_block(int fd)
+{
+ssize_t n;
+
+do {
+n = pwrite(fd, "\0", 1, 0);
+} while (n == -1 && errno == EINTR);
+
+return (n == -1) ? -errno : 0;
+}
+
 static int handle_aiocb_truncate(void *opaque)
 {
 RawPosixAIOData *aiocb = opaque;
@@ -1794,6 +1815,8 @@ static int handle_aiocb_truncate(void *opaque)
 /* posix_fallocate() doesn't set errno. */
 error_setg_errno(errp, -result,
  "Could not preallocate new data");
+} else if (current_length == 0) {
+allocate_first_block(fd);
 }
 } else {
 result = 0;
@@ -1855,6 +1878,8 @@ static int handle_aiocb_truncate(void *opaque)
 if (ftruncate(fd, offset) != 0) {
 result = -errno;
 error_setg_errno(errp, -result, "Could not resize file");
+} else if (current_length == 0 && offset > current_length) {
+allocate_first_block(fd);
 }
 return result;
 default:
diff --git a/tests/qemu-iotests/150.out b/tests/qemu-iotests/150.out
index 2a54e8dcfa..3cdc7727a5 100644
--- a/tests/qemu-iotests/150.out
+++ b/tests/qemu-iotests/150.out
@@ -3,6 +3,7 @@ QA output created by 150
 === Mapping sparse conversion ===
 
 Offset  Length  File
+0   0x1000  TEST_DIR/t.IMGFMT
 
 === Mapping non-sparse conversion ===
 
diff --git a/tests/qemu-iotests/160 b/tests/qemu-iotests/160
index df89d3864b..ad2d054a47 100755
--- a/tests/qemu-iotests/160
+++ b/tests/qemu-iotests/160
@@ -57,6 +57,10 @@ for skip in $TEST_SKIP_BLOCKS; do
 $QEMU_IMG dd if="$TEST_IMG" of="$TEST_IMG.out" skip="$skip" -O "$IMGFMT" \
 2> /dev/null
 TEST_IMG="$TEST_IMG.out" _check_test_img
+
+# We always write the first byte of an image.
+printf "\0" > "$TEST_IMG.out.dd"
+
 dd if="$TEST_IMG" of="$TEST_IMG.out.dd" skip="$skip" status=none
 
 echo
diff --git a/tests/qemu-iotests/175 b/tests/qemu-iotests/175
index 51e62c8276..c6a3a7bb1e 100755
--- a/tests/qemu-iotests/175
+++ b/tests/qemu-iotests/175
@@ -37,14 +37,16 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
 # the file size.  This function hides the resulting difference in the
 

Re: [Qemu-devel] [PATCH 3/3] pc: Don't make CPU properties mandatory unless necessary

2019-08-16 Thread Yash Mankad



On 8/16/19 1:42 PM, Eduardo Habkost wrote:
> On Fri, Aug 16, 2019 at 02:22:58PM +0200, Markus Armbruster wrote:
>> Erik Skultety  writes:
>>
>>> On Fri, Aug 16, 2019 at 08:10:20AM +0200, Markus Armbruster wrote:
 Eduardo Habkost  writes:

> We have this issue reported when using libvirt to hotplug CPUs:
> https://bugzilla.redhat.com/show_bug.cgi?id=1741451
>
> Basically, libvirt is not copying die-id from
> query-hotpluggable-cpus, but die-id is now mandatory.
 Uh-oh, "is now mandatory": making an optional property mandatory is an
 incompatible change.  When did we do that?  Commit hash, please.

 [...]

>>> I don't even see it as being optional ever - the property wasn't even
>>> recognized before commit 176d2cda0de introduced it as mandatory.
>> Compatibility break.
>>
>> Commit 176d2cda0de is in v4.1.0.  If I had learned about it a bit
>> earlier, I would've argued for a last minute fix or a revert.  Now we
>> have a regression in the release.
>>
>> Eduardo, I think this fix should go into v4.1.1.  Please add cc:
>> qemu-stable.
> I did it in v2.
>
>> How can we best avoid such compatibility breaks to slip in undetected?
>>
>> A static checker would be nice.  For vmstate, we have
>> scripts/vmstate-static-checker.py.  Not sure it's used.
> I don't think this specific bug would be detected with a static
> checker.  "die-id is mandatory" is not something that can be
> extracted by looking at QOM data structures.  The new rule was
> being enforced by the hotplug handler callbacks, and the hotplug
> handler call tree is a bit complex (too complex for my taste, but
> I digress).
>
> We could have detected this with a simple CPU hotplug automated
> test case, though.  Or with a very simple -device test case like
> the one I have submitted with this patch.
>
> This was detected by libvirt automated test cases.  It would be
> nice if this was run during the -rc stage and not only after the
> 4.1.0 release, though.
>
> I don't know details of the test job.  Danilo, Mirek, Yash: do
> you know how this bug was detected, and what we could do to run
> the same test jobs in upstream QEMU release candidates?

This bug was caught by our internal gating tests.

The libvirt gating tests for the virt module include the
following Avocado-VT test case:

libvirt_vcpu_plug_unplug.positive_test.vcpu_set.live.vm_operate.save

This job failed with the error that you can see in the description
of BZ#1741451 [0].

If you think that this would have been caught by a simple hotplug
case, I'd recommend adding a test for hotplug to avocado_qemu.
Otherwise, if you want, I can look into adding this particular
libvirt test case to our QEMU CI efforts.

Thanks,
Yash

[0] https://bugzilla.redhat.com/show_bug.cgi?id=1741451#c0



>





Re: [Qemu-devel] Translation of qemu to Swedish...

2019-08-16 Thread Aleksandar Markovic
16.08.2019. 22.20, "Sebastian Rasmussen"  је написао/ла:
>
> Hi!
>
> I noticed that a translation to Swedish was missing,
> so I'd like to contribute that. Let me know if there is
> some issue and I'll do my best to fix it. :)
>
>  / Sebastian
>
> From 9d8525b987e0db8309b6221a7e2a292fa5db9eec Mon Sep 17 00:00:00 2001
> From: Sebastian Rasmussen 
> Date: Fri, 16 Aug 2019 21:22:11 +0200
> Subject: [PATCH] Added Swedish translation.
>
> Signed-off-by: Sebastian Rasmussen 
> ---

Very kind of you, Sebastian!

I don't have any computer at hand to check, so I am asking you, or anybody
else, to check that there is no clash between hot keys: you used 'f' for both
"_Fånga inmatning" and "Visa _flika", 'a' for "_Avsluta" and "Zooma till
_anpassad storlek", and 'h' for "_Helskärm" and "Fånga vid _hovring". (If
those hot keys apply in different situations, they are OK, but not
in the same situation.)

Tack ska du ha!

Aleksandar

>  po/sv.po | 75 
>  1 file changed, 75 insertions(+)
>  create mode 100644 po/sv.po
>
> diff --git a/po/sv.po b/po/sv.po
> new file mode 100644
> index 00..e1ef3f7776
> --- /dev/null
> +++ b/po/sv.po
> @@ -0,0 +1,75 @@
> +# Swedish translation of qemu po-file.
> +# This file is put in the public domain.
> +# Sebastian Rasmussen , 2019.
> +#
> +msgid ""
> +msgstr ""
> +"Project-Id-Version: QEMU 2.12.91\n"
> +"Report-Msgid-Bugs-To: qemu-devel@nongnu.org\n"
> +"POT-Creation-Date: 2018-07-18 07:56+0200\n"
> +"PO-Revision-Date: 2019-08-16 21:19+0200\n"
> +"Last-Translator: Sebastian Rasmussen \n"
> +"Language-Team: Swedish \n"
> +"Language: sv\n"
> +"MIME-Version: 1.0\n"
> +"Content-Type: text/plain; charset=UTF-8\n"
> +"Content-Transfer-Encoding: 8bit\n"
> +"Plural-Forms: nplurals=2; plural=(n != 1);\n"
> +"X-Generator: Poedit 2.2.3\n"
> +
> +msgid " - Press Ctrl+Alt+G to release grab"
> +msgstr " - Tryck Ctrl+Alt+G för att sluta fånga"
> +
> +msgid " [Paused]"
> +msgstr " [Pausad]"
> +
> +msgid "_Pause"
> +msgstr "_Paus"
> +
> +msgid "_Reset"
> +msgstr "_Starta om"
> +
> +msgid "Power _Down"
> +msgstr "Stäng _ner"
> +
> +msgid "_Quit"
> +msgstr "_Avsluta"
> +
> +msgid "_Fullscreen"
> +msgstr "_Helskärm"
> +
> +msgid "_Copy"
> +msgstr "_Kopiera"
> +
> +msgid "Zoom _In"
> +msgstr "Zooma _in"
> +
> +msgid "Zoom _Out"
> +msgstr "Zooma _ut"
> +
> +msgid "Best _Fit"
> +msgstr "Anpassad _storlek"
> +
> +msgid "Zoom To _Fit"
> +msgstr "Zooma till _anpassad storlek"
> +
> +msgid "Grab On _Hover"
> +msgstr "Fånga vid _hovring"
> +
> +msgid "_Grab Input"
> +msgstr "_Fånga inmatning"
> +
> +msgid "Show _Tabs"
> +msgstr "Visa _flika"
> +
> +msgid "Detach Tab"
> +msgstr "Frigör flik"
> +
> +msgid "Show Menubar"
> +msgstr "Visa menyrad"
> +
> +msgid "_Machine"
> +msgstr "_Maskin"
> +
> +msgid "_View"
> +msgstr "_Visa"
> --
> 2.23.0.rc1
>


Re: [Qemu-devel] [Qemu-block] [PATCH v5 3/6] iotests: Add casenotrun report to bash tests

2019-08-16 Thread Cleber Rosa
On Thu, Aug 15, 2019 at 08:44:11PM -0400, John Snow wrote:
> 
> 
> On 7/19/19 12:30 PM, Andrey Shinkevich wrote:
> > The new function _casenotrun() is to be invoked if a test case cannot
> > be run for some reason. The user will be notified by a message passed
> > to the function.
> > 
> 
> Oh, I assume this is at sub-test granularity, for when we need to skip
> individual items.
> 
> I'm good with this, but we should CC Cleber Rosa, who has struggled
> against this in the past, too.
>

The discussion I was involved in was not that much about skipping
tests per se, but about how to determine if a test should be skipped
or not.  At that time, we proposed an integration with the build
system, but the downside (and the reason for not pushing it forward)
was the requirement to run the iotest outside of a build tree.

> > Suggested-by: Kevin Wolf 
> > Signed-off-by: Andrey Shinkevich 
> > ---
> >  tests/qemu-iotests/common.rc | 7 +++
> >  1 file changed, 7 insertions(+)
> > 
> > diff --git a/tests/qemu-iotests/common.rc b/tests/qemu-iotests/common.rc
> > index 6e461a1..1089050 100644
> > --- a/tests/qemu-iotests/common.rc
> > +++ b/tests/qemu-iotests/common.rc
> > @@ -428,6 +428,13 @@ _notrun()
> >  exit
> >  }
> >  
> > +# bail out, setting up .casenotrun file
> > +#
> > +_casenotrun()
> > +{
> > +echo "[case not run] $*" >>"$OUTPUT_DIR/$seq.casenotrun"
> > +}
> > +
> >  # just plain bail out
> >  #
> >  _fail()
> > 
> 
> seems fine to me otherwise.
> 
> Reviewed-by: John Snow 

Yeah, this also LGTM.

Reviewed-by: Cleber Rosa 



[Qemu-devel] Translation of qemu to Swedish...

2019-08-16 Thread Sebastian Rasmussen
Hi!

I noticed that a translation to Swedish was missing,
so I'd like to contribute that. Let me know if there is
some issue and I'll do my best to fix it. :)

 / Sebastian

From 9d8525b987e0db8309b6221a7e2a292fa5db9eec Mon Sep 17 00:00:00 2001
From: Sebastian Rasmussen 
Date: Fri, 16 Aug 2019 21:22:11 +0200
Subject: [PATCH] Added Swedish translation.

Signed-off-by: Sebastian Rasmussen 
---
 po/sv.po | 75 
 1 file changed, 75 insertions(+)
 create mode 100644 po/sv.po

diff --git a/po/sv.po b/po/sv.po
new file mode 100644
index 00..e1ef3f7776
--- /dev/null
+++ b/po/sv.po
@@ -0,0 +1,75 @@
+# Swedish translation of qemu po-file.
+# This file is put in the public domain.
+# Sebastian Rasmussen , 2019.
+#
+msgid ""
+msgstr ""
+"Project-Id-Version: QEMU 2.12.91\n"
+"Report-Msgid-Bugs-To: qemu-devel@nongnu.org\n"
+"POT-Creation-Date: 2018-07-18 07:56+0200\n"
+"PO-Revision-Date: 2019-08-16 21:19+0200\n"
+"Last-Translator: Sebastian Rasmussen \n"
+"Language-Team: Swedish \n"
+"Language: sv\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=2; plural=(n != 1);\n"
+"X-Generator: Poedit 2.2.3\n"
+
+msgid " - Press Ctrl+Alt+G to release grab"
+msgstr " - Tryck Ctrl+Alt+G för att sluta fånga"
+
+msgid " [Paused]"
+msgstr " [Pausad]"
+
+msgid "_Pause"
+msgstr "_Paus"
+
+msgid "_Reset"
+msgstr "_Starta om"
+
+msgid "Power _Down"
+msgstr "Stäng _ner"
+
+msgid "_Quit"
+msgstr "_Avsluta"
+
+msgid "_Fullscreen"
+msgstr "_Helskärm"
+
+msgid "_Copy"
+msgstr "_Kopiera"
+
+msgid "Zoom _In"
+msgstr "Zooma _in"
+
+msgid "Zoom _Out"
+msgstr "Zooma _ut"
+
+msgid "Best _Fit"
+msgstr "Anpassad _storlek"
+
+msgid "Zoom To _Fit"
+msgstr "Zooma till _anpassad storlek"
+
+msgid "Grab On _Hover"
+msgstr "Fånga vid _hovring"
+
+msgid "_Grab Input"
+msgstr "_Fånga inmatning"
+
+msgid "Show _Tabs"
+msgstr "Visa _flika"
+
+msgid "Detach Tab"
+msgstr "Frigör flik"
+
+msgid "Show Menubar"
+msgstr "Visa menyrad"
+
+msgid "_Machine"
+msgstr "_Maskin"
+
+msgid "_View"
+msgstr "_Visa"
-- 
2.23.0.rc1



Re: [Qemu-devel] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF

2019-08-16 Thread Laszlo Ersek
+Alex (direct question at the bottom)

On 08/16/19 09:49, Yao, Jiewen wrote:
> below
> 
>> -Original Message-
>> From: Paolo Bonzini [mailto:pbonz...@redhat.com]
>> Sent: Friday, August 16, 2019 3:20 PM
>> To: Yao, Jiewen ; Laszlo Ersek
>> ; de...@edk2.groups.io
>> Cc: edk2-rfc-groups-io ; qemu devel list
>> ; Igor Mammedov ;
>> Chen, Yingwen ; Nakajima, Jun
>> ; Boris Ostrovsky ;
>> Joao Marcal Lemos Martins ; Phillip Goerl
>> 
>> Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF
>>
>> On 16/08/19 04:46, Yao, Jiewen wrote:
>>> Comment below:
>>>
>>>
 -Original Message-
 From: Paolo Bonzini [mailto:pbonz...@redhat.com]
 Sent: Friday, August 16, 2019 12:21 AM
 To: Laszlo Ersek ; de...@edk2.groups.io; Yao,
>> Jiewen
 
 Cc: edk2-rfc-groups-io ; qemu devel list
 ; Igor Mammedov
>> ;
 Chen, Yingwen ; Nakajima, Jun
 ; Boris Ostrovsky
>> ;
 Joao Marcal Lemos Martins ; Phillip Goerl
 
 Subject: Re: [edk2-devel] CPU hotplug using SMM with QEMU+OVMF

 On 15/08/19 17:00, Laszlo Ersek wrote:
> On 08/14/19 16:04, Paolo Bonzini wrote:
>> On 14/08/19 15:20, Yao, Jiewen wrote:
 - Does this part require a new branch somewhere in the OVMF SEC
 code?
   How do we determine whether the CPU executing SEC is BSP or
   hot-plugged AP?
>>> [Jiewen] I think this is blocked from hardware perspective, since the
>> first
 instruction.
>>> There are some hardware specific registers can be used to determine
>> if
 the CPU is new added.
>>> I don’t think this must be same as the real hardware.
>>> You are free to invent some registers in device model to be used in
 OVMF hot plug driver.
>>
>> Yes, this would be a new operation mode for QEMU, that only applies
>> to
>> hot-plugged CPUs.  In this mode the AP doesn't reply to INIT or SMI,
>> in
>> fact it doesn't reply to anything at all.
>>
 - How do we tell the hot-plugged AP where to start execution? (I.e.
 that
   it should execute code at a particular pflash location.)
>>> [Jiewen] Same real mode reset vector at :FFF0.
>>
>> You do not need a reset vector or INIT/SIPI/SIPI sequence at all in
>> QEMU.  The AP does not start execution at all when it is unplugged,
>> so
>> no cache-as-RAM etc.
>>
>> We only need to modify QEMU so that hot-plugged APIs do not reply
>> to
>> INIT/SIPI/SMI.
>>
>>> I don’t think there is problem for real hardware, who always has CAR.
>>> Can QEMU provide some CPU specific space, such as MMIO region?
>>
>> Why is a CPU-specific region needed if every other processor is in SMM
>> and thus trusted.
>
> I was going through the steps Jiewen and Yingwen recommended.
>
> In step (02), the new CPU is expected to set up RAM access. In step
> (03), the new CPU, executing code from flash, is expected to "send
>> board
> message to tell host CPU (GPIO->SCI) -- I am waiting for hot-add
> message." For that action, the new CPU may need a stack (minimally if
>> we
> want to use C function calls).
>
> Until step (03), there had been no word about any other (= pre-plugged)
> CPUs (more precisely, Jiewen even confirmed "No impact to other
> processors"), so I didn't assume that other CPUs had entered SMM.
>
> Paolo, I've attempted to read Jiewen's response, and yours, as carefully
> as I can. I'm still very confused. If you have a better understanding,
> could you please write up the 15-step process from the thread starter
> again, with all QEMU customizations applied? Such as, unnecessary
>> steps
> removed, and platform specifics filled in.

 Sure.

 (01a) QEMU: create new CPU.  The CPU already exists, but it does not
  start running code until unparked by the CPU hotplug controller.

 (01b) QEMU: trigger SCI

 (02-03) no equivalent

 (04) Host CPU: (OS) execute GPE handler from DSDT

 (05) Host CPU: (OS) Port 0xB2 write, all CPUs enter SMM (NOTE: New CPU
  will not enter SMM because SMI is disabled)

 (06) Host CPU: (SMM) Save 38000, Update 38000 -- fill simple SMM
  rebase code.

 (07a) Host CPU: (SMM) Write to CPU hotplug controller to enable
  new CPU

 (07b) Host CPU: (SMM) Send INIT/SIPI/SIPI to new CPU.
>>> [Jiewen] NOTE: INIT/SIPI/SIPI can be sent by a malicious CPU. There is no
>>> restriction that INIT/SIPI/SIPI can only be sent in SMM.
>>
>> All of the CPUs are now in SMM, and INIT/SIPI/SIPI will be discarded
>> before 07a, so this is okay.
> [Jiewen] May I know why INIT/SIPI/SIPI is discarded before 07a but is 
> delivered at 07a?
> I don’t see any extra step between 06 and 07a.
> What is the magic here?

The magic is 07a itself, IIUC. The CPU hotplug controller would be
accessible only in SMM. And until 07a happens, the new CPU ignores

Re: [Qemu-devel] [PATCH v5 0/6] Allow Valgrind checking all QEMU processes

2019-08-16 Thread Cleber Rosa
On Fri, Jul 19, 2019 at 07:30:10PM +0300, Andrey Shinkevich wrote:
> In the current implementation of the QEMU bash iotests, only qemu-io
> processes may be run under Valgrind, which is a useful tool for
> finding memory usage issues. Let's allow the common.rc bash script to
> run all the QEMU processes, such as qemu-kvm, qemu-img, qemu-nbd
> and qemu-vxhs, under the Valgrind tool.
>

FYI, this looks very similar (in purpose) to:

   https://avocado-framework.readthedocs.io/en/71.0/WrapProcess.html

And in fact Valgrind was one of the original motivations:

   
https://github.com/avocado-framework/avocado/blob/master/examples/wrappers/valgrind.sh

Maybe this can be helpful for the Python based iotests.
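
For the Python side, such a wrapper boils down to something like this (a rough
sketch, not the avocado script above; the VALGRIND_QEMU switch and the log
path are just placeholders):

#!/usr/bin/env python3
# Sketch: re-exec a QEMU binary under valgrind when requested.
import os
import sys

real_binary = sys.argv[1]      # e.g. path to qemu-img or qemu-system-x86_64
args = sys.argv[2:]

if os.environ.get('VALGRIND_QEMU') == 'y':
    # valgrind expands %p to the PID of the traced process
    cmd = ['valgrind', '--log-file=/tmp/vg.%p.log', real_binary] + args
else:
    cmd = [real_binary] + args

os.execvp(cmd[0], cmd)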

- Cleber.

> v5:
>   01: The patch "block/nbd: NBDReply is used being uninitialized" was detached
>   and taken into account in the patch "nbd: Initialize reply on failure"
>   by Eric Blake.
> 
> v4:
>   01: The patch "iotests: Set read-zeroes on in null block driver for 
> Valgrind"
>   was extended with new cases and issued as a separate series.
>   02: The new patch "block/nbd: NBDReply is used being uninitialized" was
>   added to resolve the failure of the iotest 083 run under Valgrind.
> 
> v3:
>   01: The new function _casenotrun() was added to the common.rc bash
>   script to notify the user of test cases dropped for some reason.
>   Suggested by Kevin.
>   Particularly, the notification about the nonexistent TMPDIR in
>   the test 051 was added (noticed by Vladimir).
>   02: The timeout in some test cases was extended for Valgrind because
>   it differs when running on the ramdisk.
>   03: Since the common.nbd script has been changed by commit
>   b28f582c, the patch "iotests: amend QEMU NBD process synchronization"
>   is no longer needed. Note that QEMU_NBD is launched in a nested bash
>   shell in _qemu_nbd_wrapper(), as it was before in common.rc.
>   04: The patch "iotests: new file to suppress Valgrind errors" was dropped
>   due to my superficial understanding of the work of the function
>   blk_pread_unthrottled(). Special thanks to Kevin, who shed light
>   on the null block driver involved. Now, the parameter 'read-zeroes=on'
>   is passed to the null block driver to initialize the buffer in the
>   function guess_disk_lchs() that Valgrind was complaining about.
> 
> v2:
>   01: The patch 2/7 of v1 was merged into the patch 1/7, suggested by Daniel.
>   02: Another patch 7/7 was added to introduce the Valgrind error suppression
>   file into the QEMU project.
>   Discussed in the email thread with the message ID:
>   <1560276131-683243-1-git-send-email-andrey.shinkev...@virtuozzo.com>
> 
> Andrey Shinkevich (6):
>   iotests: allow Valgrind checking all QEMU processes
>   iotests: exclude killed processes from running under  Valgrind
>   iotests: Add casenotrun report to bash tests
>   iotests: Valgrind fails with nonexistent directory
>   iotests: extended timeout under Valgrind
>   iotests: extend sleeping time under Valgrind
> 
>  tests/qemu-iotests/028   |  6 +++-
>  tests/qemu-iotests/039   |  5 +++
>  tests/qemu-iotests/039.out   | 30 +++--
>  tests/qemu-iotests/051   |  4 +++
>  tests/qemu-iotests/061   |  2 ++
>  tests/qemu-iotests/061.out   | 12 ++-
>  tests/qemu-iotests/137   |  1 +
>  tests/qemu-iotests/137.out   |  6 +---
>  tests/qemu-iotests/183   |  9 +-
>  tests/qemu-iotests/192   |  6 +++-
>  tests/qemu-iotests/247   |  6 +++-
>  tests/qemu-iotests/common.rc | 76 
> +---
>  12 files changed, 101 insertions(+), 62 deletions(-)
> 
> -- 
> 1.8.3.1
> 
> 



Re: [Qemu-devel] [edk2-devel] CPU hotplug using SMM with QEMU+OVMF

2019-08-16 Thread Laszlo Ersek
On 08/15/19 18:21, Paolo Bonzini wrote:
> On 15/08/19 17:00, Laszlo Ersek wrote:
>> On 08/14/19 16:04, Paolo Bonzini wrote:
>>> On 14/08/19 15:20, Yao, Jiewen wrote:
> - Does this part require a new branch somewhere in the OVMF SEC code?
>   How do we determine whether the CPU executing SEC is BSP or
>   hot-plugged AP?
 [Jiewen] I think this is blocked from hardware perspective, since the 
 first instruction.
 There are some hardware specific registers can be used to determine if the 
 CPU is new added.
 I don’t think this must be same as the real hardware.
 You are free to invent some registers in device model to be used in OVMF 
 hot plug driver.
>>>
>>> Yes, this would be a new operation mode for QEMU, that only applies to
>>> hot-plugged CPUs.  In this mode the AP doesn't reply to INIT or SMI, in
>>> fact it doesn't reply to anything at all.
>>>
> - How do we tell the hot-plugged AP where to start execution? (I.e. that
>   it should execute code at a particular pflash location.)
 [Jiewen] Same real mode reset vector at :FFF0.
>>>
>>> You do not need a reset vector or INIT/SIPI/SIPI sequence at all in
>>> QEMU.  The AP does not start execution at all when it is unplugged, so
>>> no cache-as-RAM etc.
>>>
>>> We only need to modify QEMU so that hot-plugged APs do not reply to
>>> INIT/SIPI/SMI.
>>>
 I don’t think there is problem for real hardware, who always has CAR.
 Can QEMU provide some CPU specific space, such as MMIO region?
>>>
>>> Why is a CPU-specific region needed if every other processor is in SMM
>>> and thus trusted.
>>
>> I was going through the steps Jiewen and Yingwen recommended.
>>
>> In step (02), the new CPU is expected to set up RAM access. In step
>> (03), the new CPU, executing code from flash, is expected to "send board
>> message to tell host CPU (GPIO->SCI) -- I am waiting for hot-add
>> message." For that action, the new CPU may need a stack (minimally if we
>> want to use C function calls).
>>
>> Until step (03), there had been no word about any other (= pre-plugged)
>> CPUs (more precisely, Jiewen even confirmed "No impact to other
>> processors"), so I didn't assume that other CPUs had entered SMM.
>>
>> Paolo, I've attempted to read Jiewen's response, and yours, as carefully
>> as I can. I'm still very confused. If you have a better understanding,
>> could you please write up the 15-step process from the thread starter
>> again, with all QEMU customizations applied? Such as, unnecessary steps
>> removed, and platform specifics filled in.
> 
> Sure.
> 
> (01a) QEMU: create new CPU.  The CPU already exists, but it does not
>  start running code until unparked by the CPU hotplug controller.
> 
> (01b) QEMU: trigger SCI
> 
> (02-03) no equivalent
> 
> (04) Host CPU: (OS) execute GPE handler from DSDT
> 
> (05) Host CPU: (OS) Port 0xB2 write, all CPUs enter SMM (NOTE: New CPU
>  will not enter SMM because SMI is disabled)
> 
> (06) Host CPU: (SMM) Save 38000, Update 38000 -- fill simple SMM
>  rebase code.

(Could Intel open source code for this?)

> (07a) Host CPU: (SMM) Write to CPU hotplug controller to enable
>  new CPU
> 
> (07b) Host CPU: (SMM) Send INIT/SIPI/SIPI to new CPU.
> 
> (08a) New CPU: (Low RAM) Enter protected mode.

PCI DMA attack might be relevant (but yes, I see you've mentioned that
too, down-thread)

> 
> (08b) New CPU: (Flash) Signals host CPU to proceed and enter cli;hlt loop.
> 
> (09) Host CPU: (SMM) Send SMI to the new CPU only.
> 
> (10) New CPU: (SMM) Run SMM code at 38000, and rebase SMBASE to
>  TSEG.

I wish we could simply wake the new CPU -- after step 07a -- with an
SMI. IOW, if we could excise steps 07b, 08a, 08b.

Our CPU hotplug controller, and the initial parked state in 01a for the
new CPU, are going to be home-brewed anyway.

On the other hand...

> (11) Host CPU: (SMM) Restore 38000.
> 
> (12) Host CPU: (SMM) Update located data structure to add the new CPU
>  information. (This step will involve CPU_SERVICE protocol)
> 
> (13) New CPU: (Flash) do whatever other initialization is needed
> 
> (14) New CPU: (Flash) Deadloop, and wait for INIT-SIPI-SIPI.

basically step 08b is the environment to which the new CPU returns in
13/14, after the RSM.

Do we absolutely need low RAM for 08a (for entering protected mode)? We
could execute from pflash, no? OTOH we'd still need RAM for the stack,
and that could be attacked with PCI DMA similarly, I believe.

> (15) Host CPU: (OS) Send INIT-SIPI-SIPI to pull new CPU in..
> 
> 
> In other words, the cache-as-RAM phase of 02-03 is replaced by the
> INIT-SIPI-SIPI sequence of 07b-08a-08b.
> 
> 
>>> The QEMU DSDT could be modified (when secure boot is in effect) to OUT
>>> to 0xB2 when hotplug happens.  It could write a well-known value to
>>> 0xB2, to be read by an SMI handler in edk2.
>>
>> I dislike involving QEMU's generated DSDT in anything SMM (even
>> injecting the SMI), because the AML interpreter runs in the 

[Qemu-devel] [PATCH] monitor/qmp: Update comment for commit 4eaca8de268

2019-08-16 Thread Markus Armbruster
Commit 4eaca8de268 dropped monitor_qmp_respond()'s parameter @id
without updating its function comment.  Fix that.

Signed-off-by: Markus Armbruster 
---
 monitor/qmp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/monitor/qmp.c b/monitor/qmp.c
index e1b196217d..9d9e5d8b27 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -97,7 +97,7 @@ void qmp_send_response(MonitorQMP *mon, const QDict *rsp)
 }
 
 /*
- * Emit QMP response @rsp with ID @id to @mon.
+ * Emit QMP response @rsp to @mon.
  * Null @rsp can only happen for commands with QCO_NO_SUCCESS_RESP.
  * Nothing is emitted then.
  */
-- 
2.21.0




[Qemu-devel] [PATCH] ppc: Three floating point fixes

2019-08-16 Thread Paul A. Clarke
From: "Paul A. Clarke" 

- target/ppc/fpu_helper.c:
  - helper_todouble() was not properly converting INFINITY from 32 bit
  float to 64 bit double.
  - helper_todouble() was not properly converting any denormalized
  32 bit float to 64 bit double.

- GCC, as of version 8 or so, takes advantage of the hardware's
  implementation of the xscvdpspn instruction to optimize the following
  sequence:
xscvdpspn vs0,vs1
mffprwz   r8,f0
  ISA 3.0B has xscvdpspn leaving its result in word 1 of the target register,
  and mffprwz expecting its input to come from word 0 of the source register.
  This sequence fails with QEMU, as a shift is required between those two
  instructions.  However, the hardware splats the result to both word 0 and
  word 1 of its output register, so the shift is not necessary.
  Expect a future revision of the ISA to specify this behavior.
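
To make the first item above concrete: for normal inputs the bit shuffle in
helper_todouble() amounts to re-biasing the 8-bit exponent into 11 bits, but
an all-ones exponent (0xff, i.e. infinity or NaN) re-biases to 0x47f instead
of the 0x7ff a double infinity needs, hence the extra OR when every exponent
bit is set.  A quick stand-alone check of that arithmetic (illustration only,
not part of the patch):

# Illustration: why plain exponent re-biasing mishandles a 32-bit infinity.
import struct

inf32 = struct.unpack('<I', struct.pack('<f', float('inf')))[0]  # 0x7f800000
exp32 = (inf32 >> 23) & 0xff                                     # 0xff

rebias = exp32 - 127 + 1023   # what the old conversion produces: 0x47f
needed = 0x7ff                # exponent field of a 64-bit infinity

print(hex(rebias), hex(needed))   # 0x47f 0x7ff -> large finite value vs. inf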

Signed-off-by: Paul A. Clarke 
---
 target/ppc/fpu_helper.c | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/target/ppc/fpu_helper.c b/target/ppc/fpu_helper.c
index 5611cf0..82b5425 100644
--- a/target/ppc/fpu_helper.c
+++ b/target/ppc/fpu_helper.c
@@ -62,13 +62,14 @@ uint64_t helper_todouble(uint32_t arg)
 ret  = (uint64_t)extract32(arg, 30, 2) << 62;
 ret |= ((extract32(arg, 30, 1) ^ 1) * (uint64_t)7) << 59;
 ret |= (uint64_t)extract32(arg, 0, 30) << 29;
+ret |= (0x7ffULL * (extract32(arg, 23, 8) == 0xff)) << 52;
 } else {
 /* Zero or Denormalized operand.  */
 ret = (uint64_t)extract32(arg, 31, 1) << 63;
 if (unlikely(abs_arg != 0)) {
 /* Denormalized operand.  */
 int shift = clz32(abs_arg) - 9;
-int exp = -126 - shift + 1023;
+int exp = -127 - shift + 1023;
 ret |= (uint64_t)exp << 52;
 ret |= abs_arg << (shift + 29);
 }
@@ -2871,10 +2872,14 @@ void helper_xscvqpdp(CPUPPCState *env, uint32_t opcode,
 
 uint64_t helper_xscvdpspn(CPUPPCState *env, uint64_t xb)
 {
+uint64_t result;
+
 float_status tstat = env->fp_status;
 set_float_exception_flags(0, &tstat);
 
-return (uint64_t)float64_to_float32(xb, &tstat) << 32;
+result = (uint64_t)float64_to_float32(xb, &tstat);
+/* hardware replicates result to both words of the doubleword result.  */
+return (result << 32) | result;
 }
 
 uint64_t helper_xscvspdpn(CPUPPCState *env, uint64_t xb)
-- 
1.8.3.1




Re: [Qemu-devel] [PATCH v4] ppc: Add support for 'mffsl' instruction

2019-08-16 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/1565982203-11048-1-git-send-email...@us.ibm.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH v4] ppc: Add support for 'mffsl' instruction
Message-id: 1565982203-11048-1-git-send-email...@us.ibm.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/1565982203-11048-1-git-send-email...@us.ibm.com -> 
patchew/1565982203-11048-1-git-send-email...@us.ibm.com
Submodule 'capstone' (https://git.qemu.org/git/capstone.git) registered for 
path 'capstone'
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (https://git.qemu.org/git/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (https://git.qemu.org/git/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/edk2' (https://git.qemu.org/git/edk2.git) registered for path 
'roms/edk2'
Submodule 'roms/ipxe' (https://git.qemu.org/git/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (https://git.qemu.org/git/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (https://git.qemu.org/git/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/opensbi' (https://git.qemu.org/git/opensbi.git) registered for 
path 'roms/opensbi'
Submodule 'roms/qemu-palcode' (https://git.qemu.org/git/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (https://git.qemu.org/git/seabios.git/) registered for 
path 'roms/seabios'
Submodule 'roms/seabios-hppa' (https://git.qemu.org/git/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (https://git.qemu.org/git/sgabios.git) registered for 
path 'roms/sgabios'
Submodule 'roms/skiboot' (https://git.qemu.org/git/skiboot.git) registered for 
path 'roms/skiboot'
Submodule 'roms/u-boot' (https://git.qemu.org/git/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/u-boot-sam460ex' (https://git.qemu.org/git/u-boot-sam460ex.git) 
registered for path 'roms/u-boot-sam460ex'
Submodule 'slirp' (https://git.qemu.org/git/libslirp.git) registered for path 
'slirp'
Submodule 'tests/fp/berkeley-softfloat-3' 
(https://git.qemu.org/git/berkeley-softfloat-3.git) registered for path 
'tests/fp/berkeley-softfloat-3'
Submodule 'tests/fp/berkeley-testfloat-3' 
(https://git.qemu.org/git/berkeley-testfloat-3.git) registered for path 
'tests/fp/berkeley-testfloat-3'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) 
registered for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'90c488d5f4a407342247b9ea869df1c2d9c8e266'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'ba1ab360eebe6338bb8d7d83a9220ccf7e213af3'
Cloning into 'roms/edk2'...
Submodule path 'roms/edk2': checked out 
'20d2e5a125e34fc8501026613a71549b2a1a3e54'
Submodule 'SoftFloat' (https://github.com/ucb-bar/berkeley-softfloat-3.git) 
registered for path 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
Submodule 'CryptoPkg/Library/OpensslLib/openssl' 
(https://github.com/openssl/openssl) registered for path 
'CryptoPkg/Library/OpensslLib/openssl'
Cloning into 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'...
Submodule path 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3': 
checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'CryptoPkg/Library/OpensslLib/openssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl': checked out 
'50eaac9f3337667259de725451f201e784599687'
Submodule 'boringssl' (https://boringssl.googlesource.com/boringssl) registered 
for path 'boringssl'
Submodule 'krb5' (https://github.com/krb5/krb5) registered for path 'krb5'
Submodule 'pyca.cryptography' (https://github.com/pyca/cryptography.git) 
registered for path 'pyca-cryptography'
Cloning into 'boringssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/boringssl': 
checked out '2070f8ad9151dc8f3a73bffaa146b5e6937a583f'
Cloning into 'krb5'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/krb5': checked 
out 'b9ad6c49505c96a088326b62a52568e3484f2168'
Cloning into 'pyca-cryptography'...
Submodule path 
'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/pyca-cryptography': checked out 
'09403100de2f6f1cdd0d484dcb8e620f1c335c8f'
Cloning into 

Re: [Qemu-devel] [PATCH 0/4] backup: fix skipping unallocated clusters

2019-08-16 Thread John Snow



On 8/14/19 12:54 PM, Vladimir Sementsov-Ogievskiy wrote:
> 
> 
> On 14 Aug 2019 at 17:43, Vladimir Sementsov-Ogievskiy
>  wrote:
> 
> Hi all!
> 
> There is a bug in the not-yet-merged patch
> "block/backup: teach TOP to never copy unallocated regions"
> in https://github.com/jnsnow/qemu bitmaps. 04 fixes it. So, I propose
> to put 01-03 somewhere before
> "block/backup: teach TOP to never copy unallocated regions"
> and squash 04 into "block/backup: teach TOP to never copy
> unallocated regions" 
> 
> 
> Hmm, don't bother with it. It is simpler to fix the bug in your commit by
> just using the skip_bytes variable when initializing dirty_end.
> 

OK, just use Max's fix instead of this entire 4 patch series?

--js



[Qemu-devel] [PATCH v4] ppc: Add support for 'mffsl' instruction

2019-08-16 Thread Paul A. Clarke
From: "Paul A. Clarke" 

ISA 3.0B added a set of Floating-Point Status and Control Register (FPSCR)
instructions: mffsce, mffscdrn, mffscdrni, mffscrn, mffscrni, mffsl.
This patch adds support for 'mffsl'.

'mffsl' is identical to 'mffs', except it only returns mode, status, and enable
bits from the FPSCR.

On CPUs without support for 'mffsl' (below ISA 3.0), the 'mffsl' instruction
will execute identically to 'mffs'.

Note: I renamed FPSCR_RN to FPSCR_RN0 so I could create an FPSCR_RN mask which
is both bits of the FPSCR rounding mode, as defined in the ISA.

I also fixed a typo in the definition of FPSCR_FR.

Signed-off-by: Paul A. Clarke 

v4:
- nit: added some braces to resolve a checkpatch complaint.

v3:
- Changed tcg_gen_and_i64 to tcg_gen_andi_i64, eliminating the need for a
  temporary, per review from Richard Henderson.

v2:
- I found that I copied too much of the 'mffs' implementation.
  The 'Rc' condition code bits are not needed for 'mffsl'.  Removed.
- I now free the (renamed) 'tmask' temporary.
- I now bail early for older ISA to the original 'mffs' implementation.

---
 disas/ppc.c|  5 +
 target/ppc/cpu.h   | 15 ++-
 target/ppc/fpu_helper.c|  4 ++--
 target/ppc/translate/fp-impl.inc.c | 22 ++
 target/ppc/translate/fp-ops.inc.c  |  4 +++-
 5 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/disas/ppc.c b/disas/ppc.c
index a545437..63e97cf 100644
--- a/disas/ppc.c
+++ b/disas/ppc.c
@@ -1765,6 +1765,9 @@ extract_tbr (unsigned long insn,
 /* An X_MASK with the RA and RB fields fixed.  */
 #define XRARB_MASK (X_MASK | RA_MASK | RB_MASK)
 
+/* An X form instruction with the RA field fixed.  */
+#define XRA(op, xop, ra) (X((op), (xop)) | (((ra) << 16) & XRA_MASK))
+
 /* An XRARB_MASK, but with the L bit clear.  */
 #define XRLARB_MASK (XRARB_MASK & ~((unsigned long) 1 << 16))
 
@@ -4998,6 +5001,8 @@ const struct powerpc_opcode powerpc_opcodes[] = {
 { "ddivq",   XRC(63,546,0), X_MASK,POWER6, { FRT, FRA, FRB } },
 { "ddivq.",  XRC(63,546,1), X_MASK,POWER6, { FRT, FRA, FRB } },
 
+{ "mffsl",   XRA(63,583,12), XRARB_MASK,   POWER9, { FRT } },
+
 { "mffs",XRC(63,583,0), XRARB_MASK,COM,{ FRT } },
 { "mffs.",   XRC(63,583,1), XRARB_MASK,COM,{ FRT } },
 
diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index c9beba2..74e8da4 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -591,7 +591,7 @@ enum {
 #define FPSCR_XE 3  /* Floating-point inexact exception enable   */
 #define FPSCR_NI 2  /* Floating-point non-IEEE mode  */
 #define FPSCR_RN1 1
-#define FPSCR_RN 0  /* Floating-point rounding control   */
+#define FPSCR_RN0 0  /* Floating-point rounding control   */
 #define fpscr_fex (((env->fpscr) >> FPSCR_FEX) & 0x1)
 #define fpscr_vx (((env->fpscr) >> FPSCR_VX) & 0x1)
 #define fpscr_ox (((env->fpscr) >> FPSCR_OX) & 0x1)
@@ -614,7 +614,7 @@ enum {
 #define fpscr_ze (((env->fpscr) >> FPSCR_ZE) & 0x1)
 #define fpscr_xe (((env->fpscr) >> FPSCR_XE) & 0x1)
 #define fpscr_ni (((env->fpscr) >> FPSCR_NI) & 0x1)
-#define fpscr_rn (((env->fpscr) >> FPSCR_RN) & 0x3)
+#define fpscr_rn (((env->fpscr) >> FPSCR_RN0) & 0x3)
 /* Invalid operation exception summary */
 #define fpscr_ix ((env->fpscr) & ((1 << FPSCR_VXSNAN) | (1 << FPSCR_VXISI)  | \
   (1 << FPSCR_VXIDI)  | (1 << FPSCR_VXZDZ)  | \
@@ -640,7 +640,7 @@ enum {
 #define FP_VXZDZ(1ull << FPSCR_VXZDZ)
 #define FP_VXIMZ(1ull << FPSCR_VXIMZ)
 #define FP_VXVC (1ull << FPSCR_VXVC)
-#define FP_FR   (1ull << FSPCR_FR)
+#define FP_FR   (1ull << FPSCR_FR)
 #define FP_FI   (1ull << FPSCR_FI)
 #define FP_C(1ull << FPSCR_C)
 #define FP_FL   (1ull << FPSCR_FL)
@@ -648,7 +648,7 @@ enum {
 #define FP_FE   (1ull << FPSCR_FE)
 #define FP_FU   (1ull << FPSCR_FU)
 #define FP_FPCC (FP_FL | FP_FG | FP_FE | FP_FU)
-#define FP_FPRF (FP_C  | FP_FL | FP_FG | FP_FE | FP_FU)
+#define FP_FPRF (FP_C | FP_FPCC)
 #define FP_VXSOFT   (1ull << FPSCR_VXSOFT)
 #define FP_VXSQRT   (1ull << FPSCR_VXSQRT)
 #define FP_VXCVI(1ull << FPSCR_VXCVI)
@@ -659,7 +659,12 @@ enum {
 #define FP_XE   (1ull << FPSCR_XE)
 #define FP_NI   (1ull << FPSCR_NI)
 #define FP_RN1  (1ull << FPSCR_RN1)
-#define FP_RN   (1ull << FPSCR_RN)
+#define FP_RN0  (1ull << FPSCR_RN0)
+#define FP_RN   (FP_RN1 | FP_RN0)
+
+#define FP_MODE FP_RN
+#define FP_ENABLES  (FP_VE | FP_OE | FP_UE | FP_ZE | FP_XE)
+#define FP_STATUS   (FP_FR | FP_FI | FP_FPRF)
 
 /* the exception bits which can be cleared by mcrfs - includes FX */
 #define FP_EX_CLEAR_BITS (FP_FX | FP_OX | FP_UX | FP_ZX | \
diff --git 

Re: [Qemu-devel] [PATCH 0/2] Add virtio-fs (experimental)

2019-08-16 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20190816143321.20903-1-dgilb...@redhat.com/



Hi,

This series failed build test on s390x host. Please find the details below.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
# Testing script will be invoked under the git checkout with
# HEAD pointing to a commit that has the patches applied on top of "base"
# branch
set -e

echo
echo "=== ENV ==="
env

echo
echo "=== PACKAGES ==="
rpm -qa

echo
echo "=== UNAME ==="
uname -a

CC=$HOME/bin/cc
INSTALL=$PWD/install
BUILD=$PWD/build
mkdir -p $BUILD $INSTALL
SRC=$PWD
cd $BUILD
$SRC/configure --cc=$CC --prefix=$INSTALL
make -j4
# XXX: we need reliable clean up
# make check -j4 V=1
make install
=== TEST SCRIPT END ===




The full log is available at
http://patchew.org/logs/20190816143321.20903-1-dgilb...@redhat.com/testing.s390x/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [Qemu-devel] [Qemu-block] [PATCH] virtio-blk: Cancel the pending BH when the dataplane is reset

2019-08-16 Thread John Snow



On 8/16/19 1:15 PM, Philippe Mathieu-Daudé wrote:
> When 'system_reset' is called, the main loop clears the memory
> region cache before the BH has a chance to execute. Later, when
> the deferred function is called, some assumptions that were
> made when scheduling it are no longer true when it actually
> executes.
> 
> This is what happens using a virtio-blk device (fresh RHEL7.8 install):
> 
>  $ (sleep 12.3; echo system_reset; sleep 12.3; echo system_reset; sleep 1; 
> echo q) \
>| qemu-system-x86_64 -m 4G -smp 8 -boot menu=on \
>  -device virtio-blk-pci,id=image1,drive=drive_image1 \
>  -drive 
> file=/var/lib/libvirt/images/rhel78.qcow2,if=none,id=drive_image1,format=qcow2,cache=none
>  \
>  -device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c4:e7:84 \
>  -netdev tap,id=net0,script=/bin/true,downscript=/bin/true,vhost=on \
>  -monitor stdio -serial null -nographic
>   (qemu) system_reset
>   (qemu) system_reset
>   (qemu) qemu-system-x86_64: hw/virtio/virtio.c:225: vring_get_region_caches: 
> Assertion `caches != NULL' failed.
>   Aborted
> 
>   (gdb) bt
>   Thread 1 (Thread 0x7f109c17b680 (LWP 10939)):
>   #0  0x5604083296d1 in vring_get_region_caches (vq=0x56040a24bdd0) at 
> hw/virtio/virtio.c:227
>   #1  0x56040832972b in vring_avail_flags (vq=0x56040a24bdd0) at 
> hw/virtio/virtio.c:235
>   #2  0x56040832d13d in virtio_should_notify (vdev=0x56040a240630, 
> vq=0x56040a24bdd0) at hw/virtio/virtio.c:1648
>   #3  0x56040832d1f8 in virtio_notify_irqfd (vdev=0x56040a240630, 
> vq=0x56040a24bdd0) at hw/virtio/virtio.c:1662
>   #4  0x5604082d213d in notify_guest_bh (opaque=0x56040a243ec0) at 
> hw/block/dataplane/virtio-blk.c:75
>   #5  0x56040883dc35 in aio_bh_call (bh=0x56040a243f10) at util/async.c:90
>   #6  0x56040883dccd in aio_bh_poll (ctx=0x560409161980) at 
> util/async.c:118
>   #7  0x560408842af7 in aio_dispatch (ctx=0x560409161980) at 
> util/aio-posix.c:460
>   #8  0x56040883e068 in aio_ctx_dispatch (source=0x560409161980, 
> callback=0x0, user_data=0x0) at util/async.c:261
>   #9  0x7f10a8fca06d in g_main_context_dispatch () at 
> /lib64/libglib-2.0.so.0
>   #10 0x560408841445 in glib_pollfds_poll () at util/main-loop.c:215
>   #11 0x5604088414bf in os_host_main_loop_wait (timeout=0) at 
> util/main-loop.c:238
>   #12 0x5604088415c4 in main_loop_wait (nonblocking=0) at 
> util/main-loop.c:514
>   #13 0x560408416b1e in main_loop () at vl.c:1923
>   #14 0x56040841e0e8 in main (argc=20, argv=0x7ffc2c3f9c58, 
> envp=0x7ffc2c3f9d00) at vl.c:4578
> 
> Fix this by cancelling the BH when the virtio dataplane is stopped.
> 
> Reported-by: Yihuang Yu 
> Suggested-by: Stefan Hajnoczi 
> Fixes: https://bugs.launchpad.net/qemu/+bug/1839428
> Signed-off-by: Philippe Mathieu-Daudé 
> ---
>  hw/block/dataplane/virtio-blk.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 9299a1a7c2..4030faa21d 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -301,6 +301,8 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
>  /* Clean up guest notifier (irq) */
>  k->set_guest_notifiers(qbus->parent, nvqs, false);
>  
> +qemu_bh_cancel(s->bh);
> +
>  vblk->dataplane_started = false;
>  s->stopping = false;
>  }
> 

Naive question:

Since we're canceling the BH here and we're stopping the device; do we
need to do anything like clear out batch_notify_vqs? I assume in
system_reset contexts that's going to be handled anyway, are there
non-reset contexts where it matters?

--js



Re: [Qemu-devel] [PATCH v2 0/3] colo: Add support for continious replication

2019-08-16 Thread Lukas Straub
On Fri, 16 Aug 2019 01:51:20 +
"Zhang, Chen"  wrote:

> > -Original Message-
> > From: Lukas Straub [mailto:lukasstra...@web.de]
> > Sent: Friday, August 16, 2019 3:48 AM
> > To: Dr. David Alan Gilbert 
> > Cc: qemu-devel ; Zhang, Chen
> > ; Jason Wang ; Xie
> > Changlong ; Wen Congyang
> > 
> > Subject: Re: [Qemu-devel] [PATCH v2 0/3] colo: Add support for continious
> > replication
> >
> > On Thu, 15 Aug 2019 19:57:37 +0100
> > "Dr. David Alan Gilbert"  wrote:
> >
> > > * Lukas Straub (lukasstra...@web.de) wrote:
> > > > Hello Everyone,
> > > > These Patches add support for continious replication to colo.
> > > > Please review.
> > >
> > >
> > > OK, for those who haven't followed COLO for so long; 'continuous
> > > replication' is when after the first primary fails, you can promote
> > > the original secondary to a new primary and start replicating again;
> > >
> > > i.e. current COLO gives you
> > >
> > > p<->s
> > > 
> > > s
> > >
> > > with your patches you can do
> > >
> > > s becomes p2
> > > p2<->s2
> > >
> > > and you're back to being resilient again.
> > >
> > > Which is great; because that was always an important missing piece.
> > >
> > > Do you have some test scripts/setup for this - it would be great to
> > > automate some testing.
> >
> > My plan is to write a Pacemaker Resource Agent[1] for qemu-colo and then do
> > some long-term testing in my small cluster here. Writing standalone tests
> > using that Resource Agent should be easy; it just needs to be provided with
> > the right arguments and environment variables.
>
> Thanks Dave's explanation.
> It looks good for me and I will test this series in my side.
>
> Another question: is the "Pacemaker Resource Agent"[1] like a heartbeat module?

It's a bit more than that. Pacemaker itself is a Cluster Resource Manager; you
can think of it like sysvinit, but for clusters. It controls where in the
cluster resources run, in what state (master/slave), and what to do in case of
a node or resource failure. Resources can be anything (an SQL server, a web
server, a VM, etc.), and Pacemaker itself doesn't control them directly;
that's the job of the resource agents. So a resource agent is like an init
script, but cluster-aware, with more actions like start, stop, monitor,
promote (to master) or migrate-to.
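
To make the shape of such an agent concrete, a bare-bones skeleton could look
like the following (a sketch only: real OCF agents are usually shell scripts,
print proper XML meta-data and use the full set of OCF exit codes, and all the
qemu-colo specifics are left out here):

#!/usr/bin/env python3
# Bare-bones OCF-style resource agent skeleton (illustration only).
import subprocess
import sys

OCF_SUCCESS, OCF_ERR_GENERIC, OCF_NOT_RUNNING = 0, 1, 7

def qemu_is_running():
    # Placeholder liveness check; a real agent would probe the QMP socket.
    return subprocess.call(['pidof', 'qemu-system-x86_64']) == 0

def monitor():
    return OCF_SUCCESS if qemu_is_running() else OCF_NOT_RUNNING

ACTIONS = {
    'start':     lambda: OCF_SUCCESS,   # launch QEMU with the COLO arguments
    'stop':      lambda: OCF_SUCCESS,   # quit QEMU cleanly via QMP
    'monitor':   monitor,
    'promote':   lambda: OCF_SUCCESS,   # issue the COLO failover QMP commands
    'demote':    lambda: OCF_SUCCESS,
    'meta-data': lambda: OCF_SUCCESS,   # a real agent prints its XML meta-data
}

action = sys.argv[1] if len(sys.argv) > 1 else ''
sys.exit(ACTIONS.get(action, lambda: OCF_ERR_GENERIC)())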

> I have written an internal heartbeat module running in QEMU; it lets COLO
> detect failures and trigger failover automatically, with no need for an
> external app to call the QMP command "x-colo-lost-heartbeat". If you need it,
> I can send an RFC version soon.

Cool, this should fail over faster than with Pacemaker.
What is the plan for cases like primary failover, which need multiple
commands to be issued?
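
For comparison, the external-app approach such a module would replace is
roughly the following (a sketch only; the socket path, peer address and
ping-based liveness check are made up, and only the x-colo-lost-heartbeat
command itself comes from this thread):

#!/usr/bin/env python3
# Sketch: external heartbeat watcher that triggers COLO failover via QMP.
import json
import socket
import subprocess
import time

QMP_SOCK = '/var/run/colo-primary-qmp.sock'   # hypothetical QMP socket path
PEER = '192.168.0.2'                          # hypothetical peer address

def qmp_command(cmd):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(QMP_SOCK)
    f = s.makefile('rw')
    f.readline()                              # discard the QMP greeting
    f.write(json.dumps({'execute': 'qmp_capabilities'}) + '\n')
    f.flush()
    f.readline()                              # discard the capabilities reply
    f.write(json.dumps({'execute': cmd}) + '\n')
    f.flush()
    return f.readline()                       # note: ignores async QMP events

while True:
    if subprocess.call(['ping', '-c', '3', '-W', '1', PEER]) != 0:
        print(qmp_command('x-colo-lost-heartbeat'))
        break
    time.sleep(1)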

> Thanks
> Zhang Chen
> >
> > Regards,
> > Lukas Straub
> >
> > [1] 
> > https://github.com/ClusterLabs/resource-agents/blob/master/doc/dev-guides/ra-dev-guide.asc#what-is-a-resource-agent




[Qemu-devel] more automated/public CI for QEMU pullreqs

2019-08-16 Thread Peter Maydell
We had a conversation some months back about ways we might switch
away from the current handling of pull requests which I do via some
hand-curated scripts and personal access to machines, to a more
automated system that could be operated by a wider range of people.
Unfortunately that conversation took place off-list (largely my fault
for forgetting a cc: at the beginning of the email chain), and in any
case it sort of fizzled out.  So let's restart it, on the mailing
list this time.

Here's a summary of stuff from the old thread and general
explanation of the problem:

My current setup is mostly just running the equivalent of
"make && make check" on a bunch of machines and configs
on the merge commit before I push it to master. I also do
a 'make check-tcg' on one of the builds and run a variant
of the 'linux-user-test' tarball of 'ls' binaries.
The scripts do some simple initial checks which mostly are
preventing problems seen in the past:
 * don't allow submodules to be updated unless I kick the
   merge off with a command line option saying submodule updates
   are OK here (this catches accidental misdriving of git by
   a submaintainer folding a submodule change into a patch
   during a rebase)
 * check we aren't trying to merge after tagging the final
   release but before doing the 'reopen development tree'
   commit that bumps the VERSION file
 * check for bogus "author is qemu-devel@nongnu.org" commits
 * check for UTF-8 mangling
 * check the gpg signature on the pullreq
A human needs to also eyeball the commits and the diffstat
for weird stuff (usually cursory for frequent pullreq submitters,
and more carefully for new submaintainers).
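
To give a flavour of what those checks amount to, here is a rough sketch (not
my actual scripts; it assumes the pull request arrives as a signed tag, that
the candidate merge spans BASE..HEAD, and it leaves out the release-state
check entirely):

#!/usr/bin/env python3
# Rough sketch of the pre-merge sanity checks described above.
import subprocess
import sys

def git(*args):
    return subprocess.check_output(('git',) + args).decode('utf-8', 'replace')

base, head, tag = sys.argv[1:4]
errors = []

# Submodule updates need an explicit opt-in.
submodule_paths = {line.split()[1] for line in
                   git('config', '-f', '.gitmodules',
                       '--get-regexp', 'path').splitlines()}
changed = set(git('diff', '--name-only', base, head).splitlines())
if changed & submodule_paths and '--allow-submodules' not in sys.argv:
    errors.append('submodule update without explicit opt-in')

# No "author is the mailing list" commits.
if 'qemu-devel@nongnu.org' in git('log', '--format=%ae', base + '..' + head):
    errors.append('commit authored by qemu-devel@nongnu.org')

# Commit messages must be valid UTF-8.
raw = subprocess.check_output(['git', 'log', '--format=%B', base + '..' + head])
try:
    raw.decode('utf-8')
except UnicodeDecodeError:
    errors.append('non-UTF-8 commit message')

# GPG signature on the pull request tag.
if subprocess.call(['git', 'verify-tag', tag]) != 0:
    errors.append('bad or missing GPG signature on ' + tag)

for err in errors:
    print('ERROR:', err)
sys.exit(1 if errors else 0)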

I have this semi-automated with some hacky scripts.  The major thing we
need for a replacement is the coverage of different host
architectures and operating systems, which is a poor match to most of
the cloud-CI services out there (including Travis).  We also want the
tests to run in a reasonably short wall-clock time from being kicked
off.

Awkward bonus extra requirement: it would be useful to be
able to do a merge CI run "privately", eg because the thing
being tested is a fix for a security bug that's not yet
public. But that's rare so we can probably do it by hand.

There are some other parts to this, like getting some kind of
project-role-account access to machines where that's OK, or finding
replacements where the machines really are my personal ones or
otherwise not OK for project access.  But I think that should be
fairly easy to resolve so let's keep this thread to the
automating-the-CI part.

The two major contenders suggested were:

(1) GitLab CI, which supports custom 'runners' which we can set
up to run builds and tests on machines we have project access to

(2) Patchew, which can handle running tests on multiple machines (eg
we do s390 testing today for all patches on list), and which we could
enhance to provide support for the release-manager to do their work

Advantages of Gitlab CI:
 * somebody else is doing the development and maintenance of the
   CI tool -- bigger 'bus factor' than patchew
 * already does (more or less) what we want without needing
   extra coding work

Advantages of Patchew:
 * we're already using it for patch submissions, so we know it's
   not going to go away
 * it's very easy to deploy to a new host
 * no dependencies except Python, so works anywhere we expect
   to be able to build QEMU (whereas gitlab CI's runner is
   written in Go, and there seem to be ongoing issues with getting
   it actually to compile for other architectures than x86)

I don't have an opinion really, but I think it would be good to
make a choice and start working forwards towards getting this
a bit less hacky and a bit more offloadable to other people.

Perhaps a good first step would be to keep the 'simple checks
of broken commits' part done as a local script but have the
CI done via "push proposed merge commit to $SOMEWHERE to
kick off the CI".

Input, opinions, recommendations, offers to do some of the work? :-)

thanks
-- PMM



Re: [Qemu-devel] [PATCH 0/2] target/arm: Take exceptions on ATS instructions

2019-08-16 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/20190816125802.25877-1-peter.mayd...@linaro.org/



Hi,

This series failed build test on s390x host. Please find the details below.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
# Testing script will be invoked under the git checkout with
# HEAD pointing to a commit that has the patches applied on top of "base"
# branch
set -e

echo
echo "=== ENV ==="
env

echo
echo "=== PACKAGES ==="
rpm -qa

echo
echo "=== UNAME ==="
uname -a

CC=$HOME/bin/cc
INSTALL=$PWD/install
BUILD=$PWD/build
mkdir -p $BUILD $INSTALL
SRC=$PWD
cd $BUILD
$SRC/configure --cc=$CC --prefix=$INSTALL
make -j4
# XXX: we need reliable clean up
# make check -j4 V=1
make install
=== TEST SCRIPT END ===




The full log is available at
http://patchew.org/logs/20190816125802.25877-1-peter.mayd...@linaro.org/testing.s390x/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

Re: [Qemu-devel] [PATCH for-4.2 09/13] qcow2: Fix overly long snapshot tables

2019-08-16 Thread Max Reitz
On 31.07.19 11:22, Max Reitz wrote:
> On 30.07.19 21:08, Eric Blake wrote:
>> On 7/30/19 12:25 PM, Max Reitz wrote:
>>> We currently refuse to open qcow2 images with overly long snapshot
>>> tables.  This patch makes qemu-img check -r all drop all offending
>>> entries past what we deem acceptable.
>>>
>>> Signed-off-by: Max Reitz 
>>> ---
>>>  block/qcow2-snapshot.c | 89 +-
>>>  1 file changed, 79 insertions(+), 10 deletions(-)
>>
>> I'm less sure about this one.  8/13 should have no semantic effect (if
>> the user _depended_ on that much extra data, they should have set an
>> incompatible feature flag bit, at which point we'd leave their data
>> alone because we don't recognize the feature bit; so it is safe to
>> assume the user did not depend on the data and that we can thus nuke it
>> with impunity).  But here, we are throwing away the user's internal
>> snapshots, and not even giving them a say in which ones to throw away
>> (more likely, by trimming from the end, we are destroying the most
>> recent snapshots in favor of the older ones - but I could argue that
>> throwing away the oldest also has its uses).
> 
> First, I don’t think there really is a legitimate use case for having an
> overly long snapshot table.  In fact, I think our limit is too high as
> it is and we just introduced it this way because we didn’t have any
> repair functionality, and so just had to pick some limit that nobody
> could ever reasonably reach.
> 
> (As the test shows, you need more than 500 snapshots with 64 kB names
> and ID strings, and 1 kB of extra data to reach this limit.)
> 
> So the only likely cause to reach this number of snapshots is
> corruption.  OK, so maybe we don’t need to be able to fix it, then,
> because the image is corrupted anyway.
> 
> But I think we do want to be able to fix it, because otherwise you just
> can’t open the image at all and thus not even read the active layer.
> 
> 
> This gets me to: Second, it doesn’t make things worse.  Right now, we
> just refuse to open such images in all cases.  I’d personally prefer
> discarding some data on my image over losing it all.
> 
> 
> And third, I wonder what interface you have in mind.  I think adding an
> interface to qemu-img check to properly address this problem (letting
> the user discard individual snapshots) is hard.  I could imagine two things:
> 
> (A) Making qemu-img snapshot sometimes set BDRV_O_CHECK, too, or
> something.  For qemu-img snapshot -d, you don’t need to read the whole
> table into memory, and thus we don’t need to impose any limit.  But that
> seems pretty hackish to me.
> 
> (B) Maybe the proper solution would be to add an interactive interface
> to bdrv_check().  I can imagine that in the future, we may get more
> cases where we want interaction with the user on what data to delete and
> so on.  But that's hard...  (I’ll try.  Good thing stdio is already the
> standard interface in bdrv_check(), so I won’t have to feel bad if I go
> down that route even further.)

After some fiddling around, I don’t think this is worth it.  As I said,
this is an extremely rare case anyway, so the main goal should be to
just being able to access the active layer to copy at least that data
off the image.

The other side is that this would introduce quite complex code that
basically cannot be tested reasonably.  I’d rather not do that.

Max





Re: [Qemu-devel] [PATCH 3/3] pc: Don't make CPU properties mandatory unless necessary

2019-08-16 Thread Eduardo Habkost
On Fri, Aug 16, 2019 at 02:22:58PM +0200, Markus Armbruster wrote:
> Erik Skultety  writes:
> 
> > On Fri, Aug 16, 2019 at 08:10:20AM +0200, Markus Armbruster wrote:
> >> Eduardo Habkost  writes:
> >>
> >> > We have this issue reported when using libvirt to hotplug CPUs:
> >> > https://bugzilla.redhat.com/show_bug.cgi?id=1741451
> >> >
> >> > Basically, libvirt is not copying die-id from
> >> > query-hotpluggable-cpus, but die-id is now mandatory.
> >>
> >> Uh-oh, "is now mandatory": making an optional property mandatory is an
> >> incompatible change.  When did we do that?  Commit hash, please.
> >>
> >> [...]
> >>
> >
> > I don't even see it as being optional ever - the property wasn't even
> > recognized before commit 176d2cda0de introduced it as mandatory.
> 
> Compatibility break.
> 
> Commit 176d2cda0de is in v4.1.0.  If I had learned about it a bit
> earlier, I would've argued for a last minute fix or a revert.  Now we
> have a regression in the release.
> 
> Eduardo, I think this fix should go into v4.1.1.  Please add cc:
> qemu-stable.

I did it in v2.

> 
> How can we best avoid such compatibility breaks to slip in undetected?
> 
> A static checker would be nice.  For vmstate, we have
> scripts/vmstate-static-checker.py.  Not sure it's used.

I don't think this specific bug would be detected with a static
checker.  "die-id is mandatory" is not something that can be
extracted by looking at QOM data structures.  The new rule was
being enforced by the hotplug handler callbacks, and the hotplug
handler call tree is a bit complex (too complex for my taste, but
I digress).

We could have detected this with a simple CPU hotplug automated
test case, though.  Or with a very simple -device test case like
the one I have submitted with this patch.

This was detected by libvirt automated test cases.  It would be
nice if this was run during the -rc stage and not only after the
4.1.0 release, though.

I don't know details of the test job.  Danilo, Mirek, Yash: do
you know how this bug was detected, and what we could do to run
the same test jobs in upstream QEMU release candidates?

-- 
Eduardo



Re: [Qemu-devel] RISCV: when will the CLIC be ready?

2019-08-16 Thread Alistair Francis
On Thu, Aug 15, 2019 at 8:39 PM liuzhiwei  wrote:
>
> Hi, Palmer
>
> When Michael Clark was still the maintainer of RISC-V QEMU, he wrote on the
> mailing list, "the CLIC interrupt controller is under testing,
> and will be included in QEMU 3.1 or 3.2". It is a pity that the CLIC is not
> included even in QEMU 4.1.0.

I see that there is a CLIC branch available here:
https://github.com/riscv/riscv-qemu/pull/157

It looks like all of the work is in a single commit
(https://github.com/riscv/riscv-qemu/pull/157/commits/206d9ac339feb9ef2c325402a00f0f45f453d019)
and that most of the other commits in the PR have already made it into
master.

Although the CLIC commit is very large it doesn't seem impossible to
manually pull out the CLIC bits and apply it onto master.

Do you know the state of the CLIC model? If it's working it shouldn't
be too hard to rebase the work and get the code into mainline.

Alistair

>
> As we have CPUs using the CLIC, I have had to use the out-of-tree QEMU code
> from SiFive for a long time. Could you tell me when it will be upstreamed?
>
> Best Regards
> Zhiwei
>



[Qemu-devel] [PATCH] virtio-blk: Cancel the pending BH when the dataplane is reset

2019-08-16 Thread Philippe Mathieu-Daudé
When 'system_reset' is called, the main loop clears the memory
region cache before the BH has a chance to execute. Later, when
the deferred function is called, some assumptions that were
made when scheduling it are no longer true when it actually
executes.

This is what happens using a virtio-blk device (fresh RHEL7.8 install):

 $ (sleep 12.3; echo system_reset; sleep 12.3; echo system_reset; sleep 1; echo 
q) \
   | qemu-system-x86_64 -m 4G -smp 8 -boot menu=on \
 -device virtio-blk-pci,id=image1,drive=drive_image1 \
 -drive 
file=/var/lib/libvirt/images/rhel78.qcow2,if=none,id=drive_image1,format=qcow2,cache=none
 \
 -device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c4:e7:84 \
 -netdev tap,id=net0,script=/bin/true,downscript=/bin/true,vhost=on \
 -monitor stdio -serial null -nographic
  (qemu) system_reset
  (qemu) system_reset
  (qemu) qemu-system-x86_64: hw/virtio/virtio.c:225: vring_get_region_caches: 
Assertion `caches != NULL' failed.
  Aborted

  (gdb) bt
  Thread 1 (Thread 0x7f109c17b680 (LWP 10939)):
  #0  0x5604083296d1 in vring_get_region_caches (vq=0x56040a24bdd0) at 
hw/virtio/virtio.c:227
  #1  0x56040832972b in vring_avail_flags (vq=0x56040a24bdd0) at 
hw/virtio/virtio.c:235
  #2  0x56040832d13d in virtio_should_notify (vdev=0x56040a240630, 
vq=0x56040a24bdd0) at hw/virtio/virtio.c:1648
  #3  0x56040832d1f8 in virtio_notify_irqfd (vdev=0x56040a240630, 
vq=0x56040a24bdd0) at hw/virtio/virtio.c:1662
  #4  0x5604082d213d in notify_guest_bh (opaque=0x56040a243ec0) at 
hw/block/dataplane/virtio-blk.c:75
  #5  0x56040883dc35 in aio_bh_call (bh=0x56040a243f10) at util/async.c:90
  #6  0x56040883dccd in aio_bh_poll (ctx=0x560409161980) at util/async.c:118
  #7  0x560408842af7 in aio_dispatch (ctx=0x560409161980) at 
util/aio-posix.c:460
  #8  0x56040883e068 in aio_ctx_dispatch (source=0x560409161980, 
callback=0x0, user_data=0x0) at util/async.c:261
  #9  0x7f10a8fca06d in g_main_context_dispatch () at 
/lib64/libglib-2.0.so.0
  #10 0x560408841445 in glib_pollfds_poll () at util/main-loop.c:215
  #11 0x5604088414bf in os_host_main_loop_wait (timeout=0) at 
util/main-loop.c:238
  #12 0x5604088415c4 in main_loop_wait (nonblocking=0) at 
util/main-loop.c:514
  #13 0x560408416b1e in main_loop () at vl.c:1923
  #14 0x56040841e0e8 in main (argc=20, argv=0x7ffc2c3f9c58, 
envp=0x7ffc2c3f9d00) at vl.c:4578

Fix this by cancelling the BH when the virtio dataplane is stopped.

Reported-by: Yihuang Yu 
Suggested-by: Stefan Hajnoczi 
Fixes: https://bugs.launchpad.net/qemu/+bug/1839428
Signed-off-by: Philippe Mathieu-Daudé 
---
 hw/block/dataplane/virtio-blk.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 9299a1a7c2..4030faa21d 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -301,6 +301,8 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
 /* Clean up guest notifier (irq) */
 k->set_guest_notifiers(qbus->parent, nvqs, false);
 
+qemu_bh_cancel(s->bh);
+
 vblk->dataplane_started = false;
 s->stopping = false;
 }
-- 
2.20.1




Re: [Qemu-devel] [PATCH] job: drop job_drain

2019-08-16 Thread Vladimir Sementsov-Ogievskiy
16.08.2019 20:10, Vladimir Sementsov-Ogievskiy wrote:
> 16.08.2019 20:04, Vladimir Sementsov-Ogievskiy wrote:
>> In job_finish_sync, job_enter should be enough for a job to make some
>> progress, and draining is the wrong tool for it. So use job_enter directly
>> here and drop job_drain together with all the related stuff that is no
>> longer used.
>>
>> Suggested-by: Kevin Wolf 
>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>> ---
>>
>> It's a continuation for
>>     [PATCH v4] blockjob: drain all job nodes in block_job_drain
>>
>>   include/block/blockjob_int.h | 19 ---
>>   include/qemu/job.h   | 13 -
>>   block/backup.c   | 19 +--
>>   block/commit.c   |  1 -
>>   block/mirror.c   | 28 +++-
>>   block/stream.c   |  1 -
>>   blockjob.c   | 13 -
>>   job.c    | 12 +---
>>   8 files changed, 5 insertions(+), 101 deletions(-)
>>
>> diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
>> index e4a318dd15..e2824a36a8 100644
>> --- a/include/block/blockjob_int.h
>> +++ b/include/block/blockjob_int.h
>> @@ -52,17 +52,6 @@ struct BlockJobDriver {
>>    * besides job->blk to the new AioContext.
>>    */
>>   void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
>> -
>> -    /*
>> - * If the callback is not NULL, it will be invoked when the job has to 
>> be
>> - * synchronously cancelled or completed; it should drain 
>> BlockDriverStates
>> - * as required to ensure progress.
>> - *
>> - * Block jobs must use the default implementation for job_driver.drain,
>> - * which will in turn call this callback after doing generic block job
>> - * stuff.
>> - */
>> -    void (*drain)(BlockJob *job);
>>   };
>>   /**
>> @@ -107,14 +96,6 @@ void block_job_free(Job *job);
>>    */
>>   void block_job_user_resume(Job *job);
>> -/**
>> - * block_job_drain:
>> - * Callback to be used for JobDriver.drain in all block jobs. Drains the 
>> main
>> - * block node associated with the block jobs and calls BlockJobDriver.drain 
>> for
>> - * job-specific actions.
>> - */
>> -void block_job_drain(Job *job);
>> -
>>   /**
>>    * block_job_ratelimit_get_delay:
>>    *
>> diff --git a/include/qemu/job.h b/include/qemu/job.h
>> index 9e7cd1e4a0..09739b8dd9 100644
>> --- a/include/qemu/job.h
>> +++ b/include/qemu/job.h
>> @@ -220,13 +220,6 @@ struct JobDriver {
>>    */
>>   void (*complete)(Job *job, Error **errp);
>> -    /*
>> - * If the callback is not NULL, it will be invoked when the job has to 
>> be
>> - * synchronously cancelled or completed; it should drain any activities
>> - * as required to ensure progress.
>> - */
>> -    void (*drain)(Job *job);
>> -
>>   /**
>>    * If the callback is not NULL, prepare will be invoked when all the 
>> jobs
>>    * belonging to the same transaction complete; or upon this job's 
>> completion
>> @@ -470,12 +463,6 @@ bool job_user_paused(Job *job);
>>    */
>>   void job_user_resume(Job *job, Error **errp);
>> -/*
>> - * Drain any activities as required to ensure progress. This can be called 
>> in a
>> - * loop to synchronously complete a job.
>> - */
>> -void job_drain(Job *job);
>> -
>>   /**
>>    * Get the next element from the list of block jobs after @job, or the
>>    * first one if @job is %NULL.
>> diff --git a/block/backup.c b/block/backup.c
>> index 715e1d3be8..d1ecdfa9aa 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -320,21 +320,6 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
>>   hbitmap_set(backup_job->copy_bitmap, 0, backup_job->len);
>>   }
>> -static void backup_drain(BlockJob *job)
>> -{
>> -    BackupBlockJob *s = container_of(job, BackupBlockJob, common);
>> -
>> -    /* Need to keep a reference in case blk_drain triggers execution
>> - * of backup_complete...
>> - */
>> -    if (s->target) {
>> -    BlockBackend *target = s->target;
>> -    blk_ref(target);
>> -    blk_drain(target);
>> -    blk_unref(target);
>> -    }
>> -}
>> -
>>   static BlockErrorAction backup_error_action(BackupBlockJob *job,
>>   bool read, int error)
>>   {
>> @@ -488,13 +473,11 @@ static const BlockJobDriver backup_job_driver = {
>>   .job_type   = JOB_TYPE_BACKUP,
>>   .free   = block_job_free,
>>   .user_resume    = block_job_user_resume,
>> -    .drain  = block_job_drain,
>>   .run    = backup_run,
>>   .commit = backup_commit,
>>   .abort  = backup_abort,
>>   .clean  = backup_clean,
>> -    },
>> -    .drain  = backup_drain,
>> +    }
>>   };
>>   static int64_t backup_calculate_cluster_size(BlockDriverState *target,
>> diff --git 

Re: [Qemu-devel] [PATCH] job: drop job_drain

2019-08-16 Thread Vladimir Sementsov-Ogievskiy
16.08.2019 20:04, Vladimir Sementsov-Ogievskiy wrote:
> In job_finish_sync, job_enter should be enough for a job to make some
> progress, and draining is the wrong tool for it. So use job_enter directly
> here and drop job_drain together with all the related stuff that is no
> longer used.
> 
> Suggested-by: Kevin Wolf 
> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> ---
> 
> It's a continuation for
> [PATCH v4] blockjob: drain all job nodes in block_job_drain
> 
>   include/block/blockjob_int.h | 19 ---
>   include/qemu/job.h   | 13 -
>   block/backup.c   | 19 +--
>   block/commit.c   |  1 -
>   block/mirror.c   | 28 +++-
>   block/stream.c   |  1 -
>   blockjob.c   | 13 -
>   job.c| 12 +---
>   8 files changed, 5 insertions(+), 101 deletions(-)
> 
> diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
> index e4a318dd15..e2824a36a8 100644
> --- a/include/block/blockjob_int.h
> +++ b/include/block/blockjob_int.h
> @@ -52,17 +52,6 @@ struct BlockJobDriver {
>* besides job->blk to the new AioContext.
>*/
>   void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
> -
> -/*
> - * If the callback is not NULL, it will be invoked when the job has to be
> - * synchronously cancelled or completed; it should drain 
> BlockDriverStates
> - * as required to ensure progress.
> - *
> - * Block jobs must use the default implementation for job_driver.drain,
> - * which will in turn call this callback after doing generic block job
> - * stuff.
> - */
> -void (*drain)(BlockJob *job);
>   };
>   
>   /**
> @@ -107,14 +96,6 @@ void block_job_free(Job *job);
>*/
>   void block_job_user_resume(Job *job);
>   
> -/**
> - * block_job_drain:
> - * Callback to be used for JobDriver.drain in all block jobs. Drains the main
> - * block node associated with the block jobs and calls BlockJobDriver.drain 
> for
> - * job-specific actions.
> - */
> -void block_job_drain(Job *job);
> -
>   /**
>* block_job_ratelimit_get_delay:
>*
> diff --git a/include/qemu/job.h b/include/qemu/job.h
> index 9e7cd1e4a0..09739b8dd9 100644
> --- a/include/qemu/job.h
> +++ b/include/qemu/job.h
> @@ -220,13 +220,6 @@ struct JobDriver {
>*/
>   void (*complete)(Job *job, Error **errp);
>   
> -/*
> - * If the callback is not NULL, it will be invoked when the job has to be
> - * synchronously cancelled or completed; it should drain any activities
> - * as required to ensure progress.
> - */
> -void (*drain)(Job *job);
> -
>   /**
>* If the callback is not NULL, prepare will be invoked when all the 
> jobs
>* belonging to the same transaction complete; or upon this job's 
> completion
> @@ -470,12 +463,6 @@ bool job_user_paused(Job *job);
>*/
>   void job_user_resume(Job *job, Error **errp);
>   
> -/*
> - * Drain any activities as required to ensure progress. This can be called 
> in a
> - * loop to synchronously complete a job.
> - */
> -void job_drain(Job *job);
> -
>   /**
>* Get the next element from the list of block jobs after @job, or the
>* first one if @job is %NULL.
> diff --git a/block/backup.c b/block/backup.c
> index 715e1d3be8..d1ecdfa9aa 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -320,21 +320,6 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
>   hbitmap_set(backup_job->copy_bitmap, 0, backup_job->len);
>   }
>   
> -static void backup_drain(BlockJob *job)
> -{
> -BackupBlockJob *s = container_of(job, BackupBlockJob, common);
> -
> -/* Need to keep a reference in case blk_drain triggers execution
> - * of backup_complete...
> - */
> -if (s->target) {
> -BlockBackend *target = s->target;
> -blk_ref(target);
> -blk_drain(target);
> -blk_unref(target);
> -}
> -}
> -
>   static BlockErrorAction backup_error_action(BackupBlockJob *job,
>   bool read, int error)
>   {
> @@ -488,13 +473,11 @@ static const BlockJobDriver backup_job_driver = {
>   .job_type   = JOB_TYPE_BACKUP,
>   .free   = block_job_free,
>   .user_resume= block_job_user_resume,
> -.drain  = block_job_drain,
>   .run= backup_run,
>   .commit = backup_commit,
>   .abort  = backup_abort,
>   .clean  = backup_clean,
> -},
> -.drain  = backup_drain,
> +}
>   };
>   
>   static int64_t backup_calculate_cluster_size(BlockDriverState *target,
> diff --git a/block/commit.c b/block/commit.c
> index 2c5a6d4ebc..697a779d8e 100644
> --- a/block/commit.c
> +++ b/block/commit.c
> @@ -216,7 +216,6 @@ static const BlockJobDriver 

[Qemu-devel] [PATCH v2] pc: Don't make die-id mandatory unless necessary

2019-08-16 Thread Eduardo Habkost
We have this issue reported when using libvirt to hotplug CPUs:
https://bugzilla.redhat.com/show_bug.cgi?id=1741451

Basically, libvirt is not copying die-id from
query-hotpluggable-cpus, but die-id is now mandatory.

We could blame libvirt and say it is not following the documented
interface, because we have this buried in the QAPI schema
documentation:

> Note: currently there are 5 properties that could be present
> but management should be prepared to pass through other
> properties with device_add command to allow for future
> interface extension. This also requires the filed names to be kept in
> sync with the properties passed to -device/device_add.

But I don't think this would be reasonable of us.  We can just
make QEMU more flexible and let die-id be omitted when there's
no ambiguity.  This will allow us to keep compatibility with
existing libvirt versions.

Test case included to ensure we don't break this again.

Fixes: commit 176d2cda0dee ("i386/cpu: Consolidate die-id validity in smp 
context")
Signed-off-by: Eduardo Habkost 
---
Changes v1 -> v2:
* v1 was "pc: Don't make CPU properties mandatory unless necessary"
* Make only die-id optional (Igor Mammedov)
---
 hw/i386/pc.c |  8 ++
 tests/acceptance/pc_cpu_hotplug_props.py | 35 
 2 files changed, 43 insertions(+)
 create mode 100644 tests/acceptance/pc_cpu_hotplug_props.py

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 3ab4bcb3ca..9c3f6ae828 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2406,6 +2406,14 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
 int max_socket = (ms->smp.max_cpus - 1) /
 smp_threads / smp_cores / pcms->smp_dies;
 
+/*
+ * die-id was optional in QEMU 4.0 and older, so keep it optional
+ * if there's only one die per socket.
+ */
+if (cpu->die_id < 0 && pcms->smp_dies == 1) {
+cpu->die_id = 0;
+}
+
 if (cpu->socket_id < 0) {
 error_setg(errp, "CPU socket-id is not set");
 return;
diff --git a/tests/acceptance/pc_cpu_hotplug_props.py 
b/tests/acceptance/pc_cpu_hotplug_props.py
new file mode 100644
index 00..08b7e632c6
--- /dev/null
+++ b/tests/acceptance/pc_cpu_hotplug_props.py
@@ -0,0 +1,35 @@
+#
+# Ensure CPU die-id can be omitted on -device
+#
+#  Copyright (c) 2019 Red Hat Inc
+#
+# Author:
+#  Eduardo Habkost 
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
+#
+
+from avocado_qemu import Test
+
+class OmittedCPUProps(Test):
+    """
+    :avocado: tags=arch:x86_64
+    """
+    def test_no_die_id(self):
+        self.vm.add_args('-nodefaults', '-S')
+        self.vm.add_args('-smp', '1,sockets=2,cores=2,threads=2,maxcpus=8')
+        self.vm.add_args('-cpu', 'qemu64')
+        self.vm.add_args('-device',
+                         'qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0')
+        self.vm.launch()
+        self.assertEquals(len(self.vm.command('query-cpus')), 2)
-- 
2.21.0




[Qemu-devel] [PATCH] job: drop job_drain

2019-08-16 Thread Vladimir Sementsov-Ogievskiy
In job_finish_sync, job_enter should be enough for a job to make some
progress, and draining is the wrong tool for it. So use job_enter directly
here and drop job_drain along with all the related code that is no longer used.

Suggested-by: Kevin Wolf 
Signed-off-by: Vladimir Sementsov-Ogievskiy 
---

It's a continuation of
   [PATCH v4] blockjob: drain all job nodes in block_job_drain

 include/block/blockjob_int.h | 19 ---
 include/qemu/job.h   | 13 -
 block/backup.c   | 19 +--
 block/commit.c   |  1 -
 block/mirror.c   | 28 +++-
 block/stream.c   |  1 -
 blockjob.c   | 13 -
 job.c| 12 +---
 8 files changed, 5 insertions(+), 101 deletions(-)

diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index e4a318dd15..e2824a36a8 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -52,17 +52,6 @@ struct BlockJobDriver {
  * besides job->blk to the new AioContext.
  */
 void (*attached_aio_context)(BlockJob *job, AioContext *new_context);
-
-/*
- * If the callback is not NULL, it will be invoked when the job has to be
- * synchronously cancelled or completed; it should drain BlockDriverStates
- * as required to ensure progress.
- *
- * Block jobs must use the default implementation for job_driver.drain,
- * which will in turn call this callback after doing generic block job
- * stuff.
- */
-void (*drain)(BlockJob *job);
 };
 
 /**
@@ -107,14 +96,6 @@ void block_job_free(Job *job);
  */
 void block_job_user_resume(Job *job);
 
-/**
- * block_job_drain:
- * Callback to be used for JobDriver.drain in all block jobs. Drains the main
- * block node associated with the block jobs and calls BlockJobDriver.drain for
- * job-specific actions.
- */
-void block_job_drain(Job *job);
-
 /**
  * block_job_ratelimit_get_delay:
  *
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 9e7cd1e4a0..09739b8dd9 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -220,13 +220,6 @@ struct JobDriver {
  */
 void (*complete)(Job *job, Error **errp);
 
-/*
- * If the callback is not NULL, it will be invoked when the job has to be
- * synchronously cancelled or completed; it should drain any activities
- * as required to ensure progress.
- */
-void (*drain)(Job *job);
-
 /**
  * If the callback is not NULL, prepare will be invoked when all the jobs
  * belonging to the same transaction complete; or upon this job's 
completion
@@ -470,12 +463,6 @@ bool job_user_paused(Job *job);
  */
 void job_user_resume(Job *job, Error **errp);
 
-/*
- * Drain any activities as required to ensure progress. This can be called in a
- * loop to synchronously complete a job.
- */
-void job_drain(Job *job);
-
 /**
  * Get the next element from the list of block jobs after @job, or the
  * first one if @job is %NULL.
diff --git a/block/backup.c b/block/backup.c
index 715e1d3be8..d1ecdfa9aa 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -320,21 +320,6 @@ void backup_do_checkpoint(BlockJob *job, Error **errp)
 hbitmap_set(backup_job->copy_bitmap, 0, backup_job->len);
 }
 
-static void backup_drain(BlockJob *job)
-{
-BackupBlockJob *s = container_of(job, BackupBlockJob, common);
-
-/* Need to keep a reference in case blk_drain triggers execution
- * of backup_complete...
- */
-if (s->target) {
-BlockBackend *target = s->target;
-blk_ref(target);
-blk_drain(target);
-blk_unref(target);
-}
-}
-
 static BlockErrorAction backup_error_action(BackupBlockJob *job,
 bool read, int error)
 {
@@ -488,13 +473,11 @@ static const BlockJobDriver backup_job_driver = {
 .job_type   = JOB_TYPE_BACKUP,
 .free   = block_job_free,
 .user_resume= block_job_user_resume,
-.drain  = block_job_drain,
 .run= backup_run,
 .commit = backup_commit,
 .abort  = backup_abort,
 .clean  = backup_clean,
-},
-.drain  = backup_drain,
+}
 };
 
 static int64_t backup_calculate_cluster_size(BlockDriverState *target,
diff --git a/block/commit.c b/block/commit.c
index 2c5a6d4ebc..697a779d8e 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -216,7 +216,6 @@ static const BlockJobDriver commit_job_driver = {
 .job_type  = JOB_TYPE_COMMIT,
 .free  = block_job_free,
 .user_resume   = block_job_user_resume,
-.drain = block_job_drain,
 .run   = commit_run,
 .prepare   = commit_prepare,
 .abort = commit_abort,
diff --git a/block/mirror.c b/block/mirror.c
index 8cb75fb409..b91abe0288 

Re: [Qemu-devel] [PULL 00/29] target-arm queue

2019-08-16 Thread Peter Maydell
On Fri, 16 Aug 2019 at 14:17, Peter Maydell  wrote:
>
> First arm pullreq of 4.2...
>
> thanks
> -- PMM
>
> The following changes since commit 27608c7c66bd923eb5e5faab80e795408cbe2b51:
>
>   Merge remote-tracking branch 
> 'remotes/dgilbert/tags/pull-migration-20190814a' into staging (2019-08-16 
> 12:00:18 +0100)
>
> are available in the Git repository at:
>
>   https://git.linaro.org/people/pmaydell/qemu-arm.git 
> tags/pull-target-arm-20190816
>
> for you to fetch changes up to 664b7e3b97d6376f3329986c465b3782458b0f8b:
>
>   target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word (2019-08-16 
> 14:02:53 +0100)
>
> 
> target-arm queue:
>  * target/arm: generate a custom MIDR for -cpu max
>  * hw/misc/zynq_slcr: refactor to use standard register definition
>  * Set ENET_BD_BDU in I.MX FEC controller
>  * target/arm: Fix routing of singlestep exceptions
>  * refactor a32/t32 decoder handling of PC
>  * minor optimisations/cleanups of some a32/t32 codegen
>  * target/arm/cpu64: Ensure kvm really supports aarch64=off
>  * target/arm/cpu: Ensure we can use the pmu with kvm
>  * target/arm: Minor cleanups preparatory to KVM SVE support
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/4.2
for any user-visible changes.

-- PMM



Re: [Qemu-devel] [Bug 645662] Re: QEMU x87 emulation of trig and other complex ops is only at 64-bit precision, not 80-bit

2019-08-16 Thread Arno Wagner
Fine by me. I suggest keeping track of this, though, if necessary
in another bug item.

Regards,
Arno


On Fri, Aug 16, 2019 at 16:06:29 CEST, Peter Maydell wrote:
> Looking at our code we're still implementing the x87 insns FSIN, FCOS,
> FSINCOS, FPTAN, FPATAN, F2XM1, FYL2X, FYL2XP1 by "convert the floatx80
> to a host double and use the host C library functions", so I think this
> bug is still unfixed. If the C program in comment 1 and/or the Python
> code has stopped reporting failures it's probably just because the guest
> C library routines have stopped using the x87 80-bit FPU instructions
> internally.
> 
> 
> ** Changed in: qemu
>Status: Fix Released => Confirmed
> 
> -- 
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/645662
> 
> Title:
>   QEMU x87 emulation of trig and other complex ops is only at 64-bit
>   precision, not 80-bit
> 
> Status in QEMU:
>   Confirmed
> 
> Bug description:
>   When doing the regression tests for Python 3.1.2 with Qemu 0.12.5, (Linux 
> version 2.6.26-2-686 (Debian 2.6.26-25lenny1)),
>   gcc (Debian 4.3.2-1.1) 4.3.2, Python compiled from sources within qemu,
>   3 math tests fail, apparently because the floating point unit is buggy. 
> QEMU was compiled from original sources
>   on Debian Lenny with kernel  2.6.34.6 from kernel.org, gcc  (Debian 
> 4.3.2-1.1) 4.3. 
> 
>   Regression testing errors:
> 
>   test_cmath
>   test test_cmath failed -- Traceback (most recent call last):
> File "/root/tools/python3/Python-3.1.2/Lib/test/test_cmath.py", line 364, 
> in
>   self.fail(error_message)
>   AssertionError: acos0034: acos(complex(-1.0002, 0.0))
>   Expected: complex(3.141592653589793, -2.1073424255447014e-08)
>   Received: complex(3.141592653589793, -2.1073424338879928e-08)
>   Received value insufficiently close to expected value.
> 
>   
>   test_float
>   test test_float failed -- Traceback (most recent call last):
> File "/root/tools/python3/Python-3.1.2/Lib/test/test_float.py", line 479, 
> in
>   self.assertEqual(s, repr(float(s)))
>   AssertionError: '8.72293771110361e+25' != '8.722937711103609e+25'
> 
>   
>   test_math
>   test test_math failed -- multiple errors occurred; run in verbose mode for 
> deta
> 
>   =>
> 
>   runtests.sh -v test_math
> 
>   le01:~/tools/python3/Python-3.1.2# ./runtests.sh -v test_math
>   test_math BAD
>1 BAD
>0 GOOD
>0 SKIPPED
>1 total
>   le01:~/tools/python3/Python-3.1.2#
> 
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/qemu/+bug/645662/+subscriptions

-- 
Arno Wagner, Dr. sc. techn., Dipl. Inform.,Email: a...@wagner.name
GnuPG: ID: CB5D9718  FP: 12D6 C03B 1B30 33BB 13CF  B774 E35C 5FA1 CB5D 9718

A good decision is based on knowledge and not on numbers. -- Plato

If it's in the news, don't worry about it.  The very definition of 
"news" is "something that hardly ever happens." -- Bruce Schneier

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/645662
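
For reference, the test_float failure quoted above reduces to a shortest-repr
round-trip check in Python (values taken verbatim from the bug description;
just a sketch, not an official test):

s = '8.72293771110361e+25'
# Per the bug report this string should round-trip unchanged; under the
# 64-bit-precision emulation the guest produced '8.722937711103609e+25'.
print(s == repr(float(s)), repr(float(s)))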




Re: [Qemu-devel] [PATCH 3/3] pc: Don't make CPU properties mandatory unless necessary

2019-08-16 Thread Eduardo Habkost
On Fri, Aug 16, 2019 at 03:20:11PM +0200, Igor Mammedov wrote:
> On Thu, 15 Aug 2019 15:38:03 -0300
> Eduardo Habkost  wrote:
> 
> > We have this issue reported when using libvirt to hotplug CPUs:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1741451
> > 
> > Basically, libvirt is not copying die-id from
> > query-hotpluggable-cpus, but die-id is now mandatory.
> 
> this should have been gated on compat property and affect
> only new machine types.
> Maybe we should do just that instead of fixup so libvirt
> would finally make proper handling of query-hotpluggable-cpus.
> 
>  
> > We could blame libvirt and say it is not following the documented
> > interface, because we have this buried in the QAPI schema
> > documentation:
> 
> I wouldn't say buried, if I understand it right QAPI schema
> should be the authoritative source of interface description.
> 
> If I recall it's not the first time, there was similar issue
> for exactly the same reason (libvirt not passing through
> all properties from query-hotpluggable-cpus).
> 
> And we had to fix it up on QEMU side (numa_cpu_pre_plug),
> but it seems 2 years later libvirt is still broken the same way :(
> 
> Should we really do fixups or finaly fix it on libvirt side?

Is it truly a bug in libvirt?  Making QEMU behave differently
when getting exactly the same input sounds like a bad idea, even
if we documented that in the QAPI documentation.

My suggestion is to instead drop the comment below from the QAPI
documentation.  New properties shouldn't become mandatory.

>  
> > > Note: currently there are 5 properties that could be present
> > > but management should be prepared to pass through other
> > > properties with device_add command to allow for future
> > > interface extension. This also requires the field names to be kept in
> > > sync with the properties passed to -device/device_add.  
> > 
> > But I don't think this would be reasonable from us.  We can just
> > make QEMU more flexible and let CPU properties to be omitted when
> > there's no ambiguity.  This will allow us to keep compatibility
> > with existing libvirt versions.
> 
> I don't really like making rule from exceptions so I'd suggest doing
> it only for  die_id if we have to do fix it up (with fat comment
> like in numa_cpu_pre_plug).
> The rest are working fine as is.

I will insist we make it consistent for all properties, but I
don't want this discussion to hold up the bug fix.  So I'll do this:

I will submit a new patch that makes only die-id optional, and CC
qemu-stable.

After that, I will submit this patch again, and we can discuss
whether we should make all properties optional.

-- 
Eduardo



Re: [Qemu-devel] [PATCH 3/3] pc: Don't make CPU properties mandatory unless necessary

2019-08-16 Thread Eduardo Habkost
On Fri, Aug 16, 2019 at 08:10:20AM +0200, Markus Armbruster wrote:
> Eduardo Habkost  writes:
> 
> > We have this issue reported when using libvirt to hotplug CPUs:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1741451
> >
> > Basically, libvirt is not copying die-id from
> > query-hotpluggable-cpus, but die-id is now mandatory.
> 
> Uh-oh, "is now mandatory": making an optional property mandatory is an
> incompatible change.  When did we do that?  Commit hash, please.

Sorry, forgot to include a "Fixes:" line.

commit 176d2cda0dee9f4f78f604ad72d6a111e8e38f3b
Author: Like Xu 
Date:   Wed Jun 12 16:40:58 2019 +0800

i386/cpu: Consolidate die-id validity in smp context

The field die_id (default as 0) and has_die_id are introduced to X86CPU.
Following the legacy smp check rules, the die_id validity is added to
the same contexts as leagcy smp variables such as hmp_hotpluggable_cpus(),
machine_set_cpu_numa_node(), cpu_slot_to_string() and pc_cpu_pre_plug().

Acked-by: Dr. David Alan Gilbert 
Signed-off-by: Like Xu 
Message-Id: <20190612084104.34984-4-like...@linux.intel.com>
Reviewed-by: Eduardo Habkost 
Signed-off-by: Eduardo Habkost 


-- 
Eduardo



Re: [Qemu-devel] [PATCH v3 0/4] qcow2: async handling of fragmented io

2019-08-16 Thread Vladimir Sementsov-Ogievskiy
15.08.2019 18:39, Vladimir Sementsov-Ogievskiy wrote:
> 15.08.2019 17:09, Max Reitz wrote:
>> On 15.08.19 14:10, Vladimir Sementsov-Ogievskiy wrote:
>>> Hi all!
>>>
>>> Here is an asynchronous scheme for handling fragmented qcow2
>>> reads and writes. Both qcow2 read and write functions loops through
>>> sequential portions of data. The series aim it to parallelize these
>>> loops iterations.
>>> It improves performance for fragmented qcow2 images, I've tested it
>>> as described below.
>>
>> Looks good to me, but I can’t take it yet because I need to wait for
>> Stefan’s branch to be merged, of course.
>>
>> Speaking of which, why didn’t you add any tests for the *_part()
>> methods?  I find it a bit unsettling that nothing would have caught the
>> bug you had in v2 in patch 3.
>>
> 
> Hmm, any test with write to fragmented area should have caught it.. Ok,
> I'll think of something.
> 
> 

And now I see that it's not trivial to make such a test:

1. qcow2 write is broken when we give a nonzero qiov_offset to it, but only
qcow2_write calls bdrv_co_pwritev_part, so we need a test where qcow2
is a file child of qcow2.

2. Then, the bug leads to the beginning of the qiov being written to all parts.
But our testing tool qemu-io only has the "write -P" command, which fills the
buffer with the same single byte, so we can't catch it (see the sketch below).
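
A rough sketch of what a dedicated check could do instead (hypothetical test
helper, not an existing iotest): fill the request with an offset-dependent
pattern, so that writing the beginning of the qiov to every part would show up
on read-back:

CLUSTER = 64 * 1024

def pattern(offset, length):
    # Each byte depends on its absolute offset, unlike "write -P".
    return bytes((offset + i) & 0xff for i in range(length))

def verify(read_at, total_len):
    # read_at(offset, length) is a placeholder for however the test reads
    # the image back (qemu-io, a block backend wrapper, ...).
    for off in range(0, total_len, CLUSTER):
        assert read_at(off, CLUSTER) == pattern(off, CLUSTER), \
            "fragment at offset %#x holds data from the qiov start" % off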


-- 
Best regards,
Vladimir


Re: [Qemu-devel] [PULL 00/16] Block layer patches

2019-08-16 Thread Peter Maydell
On Fri, 16 Aug 2019 at 10:36, Kevin Wolf  wrote:
>
> The following changes since commit 9e06029aea3b2eca1d5261352e695edc1e7d7b8b:
>
>   Update version for v4.1.0 release (2019-08-15 13:03:37 +0100)
>
> are available in the Git repository at:
>
>   git://repo.or.cz/qemu/kevin.git tags/for-upstream
>
> for you to fetch changes up to a6b257a08e3d72219f03e461a52152672fec0612:
>
>   file-posix: Handle undetectable alignment (2019-08-16 11:29:11 +0200)
>
> 
> Block layer patches:
>
> - file-posix: Fix O_DIRECT alignment detection
> - Fixes for concurrent block jobs
> - block-backend: Queue requests while drained (fix IDE vs. job crashes)
> - qemu-img convert: Deprecate using -n and -o together
> - iotests: Migration tests with filter nodes
> - iotests: More media change tests
>


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/4.2
for any user-visible changes.

-- PMM



Re: [Qemu-devel] [PATCH v4 0/3] Add Aspeed GPIO controller model

2019-08-16 Thread Cédric Le Goater
On 16/08/2019 09:32, Rashmica Gupta wrote:
> v5:
> - integrated AspeedGPIOController fields into AspeedGPIOClass
> - separated ast2600_3_6v and ast2600_1_8v into two classes

Rashmica,

This looks much nicer!

Please take a look at branch aspeed-4.2, in which I have merged your
v5 and slightly modified the ast2600 part.

https://github.com/legoater/qemu/commit/02b3df3f1a380eec4df7c49e88fa7ba27f75a610

I introduced a gpio_1_8v controller with its specific MMIO and IRQ
definitions. Tell me what you think of it. The principal motivation
behind these adjustments is that I don't know yet how we are going
to instantiate/realize the specific models of the AST2600 SoC. The
GPIO 1.8V is one of these extra controllers.

Thanks,

C.

> v4:
> - proper interupt handling thanks to Andrew
> - switch statements for reading and writing suggested by Peter
> - some small cleanups suggested by Alexey
> 
> v3:
> - didn't have each gpio set up as an irq 
> - now can't access set AC on ast2400 (only exists on ast2500)
> - added ast2600 implementation (patch 3)
> - renamed a couple of variables for clarity
> 
> v2: Addressed Andrew's feedback, added debounce regs, renamed get/set to
> read/write to minimise confusion with a 'set' of registers.
> 
> Rashmica Gupta (3):
>   hw/gpio: Add basic Aspeed GPIO model for AST2400 and AST2500
>   aspeed: add a GPIO controller to the SoC
>   hw/gpio: Add in AST2600 specific implementation
> 
>  include/hw/arm/aspeed_soc.h   |3 +
>  include/hw/gpio/aspeed_gpio.h |  100 
>  hw/arm/aspeed_soc.c   |   17 +
>  hw/gpio/aspeed_gpio.c | 1006 +
>  hw/gpio/Makefile.objs |1 +
>  5 files changed, 1127 insertions(+)
>  create mode 100644 include/hw/gpio/aspeed_gpio.h
>  create mode 100644 hw/gpio/aspeed_gpio.c
> 




Re: [Qemu-devel] [PATCH v6 37/42] block: Leave BDS.backing_file constant

2019-08-16 Thread Vladimir Sementsov-Ogievskiy
09.08.2019 19:14, Max Reitz wrote:
> Parts of the block layer treat BDS.backing_file as if it were whatever
> the image header says (i.e., if it is a relative path, it is relative to
> the overlay), other parts treat it like a cache for
> bs->backing->bs->filename (relative paths are relative to the CWD).
> Considering bs->backing->bs->filename exists, let us make it mean the
> former.
> 
> Among other things, this now allows the user to specify a base when
> using qemu-img to commit an image file in a directory that is not the
> CWD (assuming, everything uses relative filenames).
> 
> Before this patch:
> 
> $ ./qemu-img create -f qcow2 foo/bot.qcow2 1M
> $ ./qemu-img create -f qcow2 -b bot.qcow2 foo/mid.qcow2
> $ ./qemu-img create -f qcow2 -b mid.qcow2 foo/top.qcow2
> $ ./qemu-img commit -b mid.qcow2 foo/top.qcow2
> qemu-img: Did not find 'mid.qcow2' in the backing chain of 'foo/top.qcow2'
> $ ./qemu-img commit -b foo/mid.qcow2 foo/top.qcow2
> qemu-img: Did not find 'foo/mid.qcow2' in the backing chain of 'foo/top.qcow2'
> $ ./qemu-img commit -b $PWD/foo/mid.qcow2 foo/top.qcow2
> qemu-img: Did not find '[...]/foo/mid.qcow2' in the backing chain of 
> 'foo/top.qcow2'

nothing works

> 
> After this patch:
> 
> $ ./qemu-img commit -b mid.qcow2 foo/top.qcow2
> Image committed.
> $ ./qemu-img commit -b foo/mid.qcow2 foo/top.qcow2
> qemu-img: Did not find 'foo/mid.qcow2' in the backing chain of 'foo/top.qcow2'
> $ ./qemu-img commit -b $PWD/foo/mid.qcow2 foo/top.qcow2
> Image committed.

something works. However, it seems the form that does not work is actually the
one most likely to be used. Anyway, something is better than nothing.
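
For illustration, a minimal sketch (hypothetical helper, not QEMU code) of the
two interpretations of a relative backing file name being untangled here:

import os

def resolve_relative_to_overlay(overlay_path, backing_file):
    # The meaning this patch makes canonical: relative to the overlay.
    return os.path.join(os.path.dirname(overlay_path), backing_file)

def resolve_relative_to_cwd(backing_file):
    # The meaning some code paths effectively used before: relative to the CWD.
    return os.path.join(os.getcwd(), backing_file)

print(resolve_relative_to_overlay('foo/top.qcow2', 'mid.qcow2'))  # foo/mid.qcow2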

> 
> With this change, bdrv_find_backing_image() must look at whether the
> user has overridden a BDS's backing file.  If so, it can no longer use
> bs->backing_file, but must instead compare the given filename against
> the backing node's filename directly.
> 
> Note that this changes the QAPI output for a node's backing_file.  We
> had very inconsistent output there (sometimes what the image header
> said, sometimes the actual filename of the backing image).  This
> inconsistent output was effectively useless, so we have to decide one
> way or the other.  Considering that bs->backing_file usually at runtime
> contained the path to the image relative to qemu's CWD (or absolute),
> this patch changes QAPI's backing_file to always report the
> bs->backing->bs->filename from now on.  If you want to receive the image
> header information, you have to refer to full-backing-filename.
> 
> This necessitates a change to iotest 228.  The interesting information
> it really wanted is the image header, and it can get that now, but it
> has to use full-backing-filename instead of backing_file.  Because of
> this patch's changes to bs->backing_file's behavior, we also need some
> reference output changes.
> 
> Along with the changes to bs->backing_file, stop updating
> BDS.backing_format in bdrv_backing_attach() as well.  This necessitates
> a change to the reference output of iotest 191.
> 
> iotest 245 changes in behavior: With the backing node no longer
> overriding the parent node's backing_file string, you can now omit the
> @backing option when reopening a node with neither a default nor a
> current backing file even if it used to have a backing node at some
> point.
> 
> Signed-off-by: Max Reitz 
> ---
>   include/block/block_int.h  | 19 ++-
>   block.c| 35 ---
>   block/qapi.c   |  7 ---
>   tests/qemu-iotests/191.out |  1 -
>   tests/qemu-iotests/228 |  6 +++---
>   tests/qemu-iotests/228.out |  6 +++---
>   tests/qemu-iotests/245 |  4 +++-
>   7 files changed, 55 insertions(+), 23 deletions(-)
> 
> diff --git a/include/block/block_int.h b/include/block/block_int.h
> index 42ee2fcf7f..993bafc090 100644
> --- a/include/block/block_int.h
> +++ b/include/block/block_int.h
> @@ -784,11 +784,20 @@ struct BlockDriverState {
>   bool walking_aio_notifiers; /* to make removal during iteration safe */
>   
>   char filename[PATH_MAX];
> -char backing_file[PATH_MAX]; /* if non zero, the image is a diff of
> -this file image */
> -/* The backing filename indicated by the image header; if we ever
> - * open this file, then this is replaced by the resulting BDS's
> - * filename (i.e. after a bdrv_refresh_filename() run). */
> +/*
> + * If not empty, this image is a diff in relation to backing_file.
> + * Note that this is the name given in the image header

Is it synced when the image header is updated? If yes, it's not constant; if
not, it's just wrong.

> and
> + * therefore may or may not be equal to .backing->bs->filename.
> + * If this field contains a relative path, it is to be resolved
> + * relatively to the overlay's location.
> + */
> +char backing_file[PATH_MAX];
> +/*
> + * The backing filename 

Re: [Qemu-devel] [PATCH v5 2/3] aspeed: add a GPIO controller to the SoC

2019-08-16 Thread Cédric Le Goater
On 16/08/2019 09:40, Rashmica Gupta wrote:
> Cédric, this is how I thought changes to the SOC for your aspeed-4.1
> branch would look

Some comments, 
  
> From 13a07834476fa266c352d9a075b341c483b2edf9 Mon Sep 17 00:00:00 2001
> From: Rashmica Gupta 
> Date: Fri, 16 Aug 2019 15:18:22 +1000
> Subject: [PATCH] Aspeed SOC changes
> 
> ---
>  include/hw/arm/aspeed_soc.h |  4 +++-
>  hw/arm/aspeed_soc.c | 32 ++--
>  2 files changed, 25 insertions(+), 11 deletions(-)
> 
> diff --git a/include/hw/arm/aspeed_soc.h b/include/hw/arm/aspeed_soc.h
> index 8673661de8..f375271d5a 100644
> --- a/include/hw/arm/aspeed_soc.h
> +++ b/include/hw/arm/aspeed_soc.h
> @@ -28,6 +28,7 @@
>  #define ASPEED_WDTS_NUM  3
>  #define ASPEED_CPUS_NUM  2
>  #define ASPEED_MACS_NUM  2
> +#define ASPEED_GPIOS_NUM  2
>
>  
>  typedef struct AspeedSoCState {
>  /*< private >*/
> @@ -48,7 +49,7 @@ typedef struct AspeedSoCState {
>  AspeedSDMCState sdmc;
>  AspeedWDTState wdt[ASPEED_WDTS_NUM];
>  FTGMAC100State ftgmac100[ASPEED_MACS_NUM];
> -AspeedGPIOState gpio;
> +AspeedGPIOState gpio[ASPEED_GPIOS_NUM];

Even if they look the same, I think these are two different controllers 
and not multiple instances of the same. So I would rather introduce a new 
field 'gpio_1_8v' for the AST2600. 

>  } AspeedSoCState;
>  
>  #define TYPE_ASPEED_SOC "aspeed-soc"
> @@ -61,6 +62,7 @@ typedef struct AspeedSoCInfo {
>  uint64_t sram_size;
>  int spis_num;
>  int wdts_num;
> +int gpios_num;
>  const int *irqmap;
>  const hwaddr *memmap;
>  uint32_t num_cpus;
> diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
> index 7d47647016..414b99c4f3 100644
> --- a/hw/arm/aspeed_soc.c
> +++ b/hw/arm/aspeed_soc.c
> @@ -119,6 +119,7 @@ static const AspeedSoCInfo aspeed_socs[] = {
>  .sram_size= 0x8000,
>  .spis_num = 1,
>  .wdts_num = 2,
> +.gpios_num= 1,
>  .irqmap   = aspeed_soc_ast2400_irqmap,
>  .memmap   = aspeed_soc_ast2400_memmap,
>  .num_cpus = 1,
> @@ -132,6 +133,7 @@ static const AspeedSoCInfo aspeed_socs[] = {
>  .irqmap   = aspeed_soc_ast2500_irqmap,
>  .memmap   = aspeed_soc_ast2500_memmap,
>  .num_cpus = 1,
> +.gpios_num= 1,
>  },
>  };
>  
> @@ -226,9 +228,15 @@ static void aspeed_soc_init(Object *obj)
>  sysbus_init_child_obj(obj, "xdma", OBJECT(&s->xdma), sizeof(s->xdma),
>TYPE_ASPEED_XDMA);
>  
> -snprintf(typename, sizeof(typename), "aspeed.gpio-%s", socname);
> -sysbus_init_child_obj(obj, "gpio", OBJECT(&s->gpio), sizeof(s->gpio),
> -  typename);
> +for (i = 0; i < sc->info->gpios_num; i++) {
> +if (ASPEED_IS_AST2600(sc->info->silicon_rev)) {
> +snprintf(typename, sizeof(typename), "aspeed.gpio%d-%s", i, socname);
> +} else {
> +snprintf(typename, sizeof(typename), "aspeed.gpio-%s", socname);
> +}
> +sysbus_init_child_obj(obj, "gpio[*]", OBJECT(&s->gpio[i]), sizeof(s->gpio[i]),
> +   typename);
> +}
>  }
>  
>  static void aspeed_soc_realize(DeviceState *dev, Error **errp)
> @@ -410,15 +418,19 @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
>  aspeed_soc_get_irq(s, ASPEED_XDMA));
>  
>  /* GPIO */
> -object_property_set_bool(OBJECT(&s->gpio), true, "realized", &err);
> -if (err) {
> -error_propagate(errp, err);
> -return;
> +for (i = 0; i < sc->info->gpios_num; i++) {
> +hwaddr addr = sc->info->memmap[ASPEED_GPIO] + i * 0x800;

I would introduce ASPEED_GPIO_V1_8V instead. 

> +object_property_set_bool(OBJECT(&s->gpio[i]), true, "realized", &err);
> +if (err) {
> +error_propagate(errp, err);
> +return;
> +}
> +sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio[i]), 0, addr);
> +sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio[i]), 0,
> +   aspeed_soc_get_irq(s, ASPEED_GPIO));

The interrupt is different.

C. 

>  }
> -sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio), 0, sc->info->memmap[ASPEED_GPIO]);
> -sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio), 0,
> -   aspeed_soc_get_irq(s, ASPEED_GPIO));
>  }
> +
>  static Property aspeed_soc_properties[] = {
>  DEFINE_PROP_UINT32("num-cpus", AspeedSoCState, num_cpus, 0),
>  DEFINE_PROP_END_OF_LIST(),
> 




Re: [Qemu-devel] [PATCH v5 2/3] aspeed: add a GPIO controller to the SoC

2019-08-16 Thread Cédric Le Goater
On 16/08/2019 09:32, Rashmica Gupta wrote:
> Signed-off-by: Rashmica Gupta 



Reviewed-by: Cédric Le Goater 

Thanks,

C.

> ---
>  include/hw/arm/aspeed_soc.h |  3 +++
>  hw/arm/aspeed_soc.c | 17 +
>  2 files changed, 20 insertions(+)
> 
> diff --git a/include/hw/arm/aspeed_soc.h b/include/hw/arm/aspeed_soc.h
> index cef605ad6b..fa04abddd8 100644
> --- a/include/hw/arm/aspeed_soc.h
> +++ b/include/hw/arm/aspeed_soc.h
> @@ -22,6 +22,7 @@
>  #include "hw/ssi/aspeed_smc.h"
>  #include "hw/watchdog/wdt_aspeed.h"
>  #include "hw/net/ftgmac100.h"
> +#include "hw/gpio/aspeed_gpio.h"
>  
>  #define ASPEED_SPIS_NUM  2
>  #define ASPEED_WDTS_NUM  3
> @@ -47,6 +48,7 @@ typedef struct AspeedSoCState {
>  AspeedSDMCState sdmc;
>  AspeedWDTState wdt[ASPEED_WDTS_NUM];
>  FTGMAC100State ftgmac100[ASPEED_MACS_NUM];
> +AspeedGPIOState gpio;
>  } AspeedSoCState;
>  
>  #define TYPE_ASPEED_SOC "aspeed-soc"
> @@ -60,6 +62,7 @@ typedef struct AspeedSoCInfo {
>  int spis_num;
>  const char *fmc_typename;
>  const char **spi_typename;
> +const char *gpio_typename;
>  int wdts_num;
>  const int *irqmap;
>  const hwaddr *memmap;
> diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
> index c6fb3700f2..ff422c8ad1 100644
> --- a/hw/arm/aspeed_soc.c
> +++ b/hw/arm/aspeed_soc.c
> @@ -124,6 +124,7 @@ static const AspeedSoCInfo aspeed_socs[] = {
>  .spis_num = 1,
>  .fmc_typename = "aspeed.smc.fmc",
>  .spi_typename = aspeed_soc_ast2400_typenames,
> +.gpio_typename = "aspeed.gpio-ast2400",
>  .wdts_num = 2,
>  .irqmap   = aspeed_soc_ast2400_irqmap,
>  .memmap   = aspeed_soc_ast2400_memmap,
> @@ -136,6 +137,7 @@ static const AspeedSoCInfo aspeed_socs[] = {
>  .spis_num = 1,
>  .fmc_typename = "aspeed.smc.fmc",
>  .spi_typename = aspeed_soc_ast2400_typenames,
> +.gpio_typename = "aspeed.gpio-ast2400",
>  .wdts_num = 2,
>  .irqmap   = aspeed_soc_ast2400_irqmap,
>  .memmap   = aspeed_soc_ast2400_memmap,
> @@ -148,6 +150,7 @@ static const AspeedSoCInfo aspeed_socs[] = {
>  .spis_num = 1,
>  .fmc_typename = "aspeed.smc.fmc",
>  .spi_typename = aspeed_soc_ast2400_typenames,
> +.gpio_typename = "aspeed.gpio-ast2400",
>  .wdts_num = 2,
>  .irqmap   = aspeed_soc_ast2400_irqmap,
>  .memmap   = aspeed_soc_ast2400_memmap,
> @@ -160,6 +163,7 @@ static const AspeedSoCInfo aspeed_socs[] = {
>  .spis_num = 2,
>  .fmc_typename = "aspeed.smc.ast2500-fmc",
>  .spi_typename = aspeed_soc_ast2500_typenames,
> +.gpio_typename = "aspeed.gpio-ast2500",
>  .wdts_num = 3,
>  .irqmap   = aspeed_soc_ast2500_irqmap,
>  .memmap   = aspeed_soc_ast2500_memmap,
> @@ -246,6 +250,9 @@ static void aspeed_soc_init(Object *obj)
>  
>  sysbus_init_child_obj(obj, "xdma", OBJECT(&s->xdma), sizeof(s->xdma),
>TYPE_ASPEED_XDMA);
> +
> +sysbus_init_child_obj(obj, "gpio", OBJECT(&s->gpio), sizeof(s->gpio),
> +  sc->info->gpio_typename);
>  }
>  
>  static void aspeed_soc_realize(DeviceState *dev, Error **errp)
> @@ -425,6 +432,16 @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
>  sc->info->memmap[ASPEED_XDMA]);
>  sysbus_connect_irq(SYS_BUS_DEVICE(&s->xdma), 0,
> aspeed_soc_get_irq(s, ASPEED_XDMA));
> +
> +/* GPIO */
> +object_property_set_bool(OBJECT(&s->gpio), true, "realized", &err);
> +if (err) {
> +error_propagate(errp, err);
> +return;
> +}
> +sysbus_mmio_map(SYS_BUS_DEVICE(&s->gpio), 0, sc->info->memmap[ASPEED_GPIO]);
> +sysbus_connect_irq(SYS_BUS_DEVICE(&s->gpio), 0,
> +   aspeed_soc_get_irq(s, ASPEED_GPIO));
>  }
>  static Property aspeed_soc_properties[] = {
>  DEFINE_PROP_UINT32("num-cpus", AspeedSoCState, num_cpus, 0),
> 



