Re: [Qemu-devel] [PATCH v2 40/67] target/arm: Implement SVE Integer Compare - Scalars Group

2018-02-23 Thread Richard Henderson
On 02/23/2018 09:00 AM, Peter Maydell wrote:
>> +
>> +uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
> 
> This could really use a comment about what part of the overall
> instruction it's doing.

Ok.

>> +
>> +/* For the helper, compress the different conditions into a computation
>> + * of how many iterations for which the condition is true.
>> + *
>> + * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
>> + * 2**64 iterations, overflowing to 0.  Of course, predicate registers
>> + * aren't that large, so any value >= predicate size is sufficient.
>> + */
...

> I got confused by this -- it is too far different from what the
> pseudocode is doing. Could we have more explanatory comments, please?

Ok.  I guess the comment above wasn't as helpful as I imagined.  I'll come up
with something for the next round.
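
For illustration, here is a minimal sketch of the kind of clamping that comment
describes, using the unsigned "lower or same" case; the names are hypothetical
and this is not the actual helper:

/* Hypothetical sketch, not the QEMU helper: turn a WHILELS-style unsigned
 * "op0 <= op1" condition into an iteration count.  The nominal count
 * op1 - op0 + 1 would be 2**64 (wrapping to 0) for op0 == 0 and
 * op1 == UINT64_MAX, but clamping to the number of predicate elements
 * first makes that overflow irrelevant, as the comment argues.
 */
static uint64_t while_ls_count(uint64_t op0, uint64_t op1, uint64_t max_elems)
{
    if (op0 > op1) {
        return 0;                 /* condition false on the first iteration */
    }
    if (op1 - op0 >= max_elems) {
        return max_elems;         /* any value >= predicate size is enough */
    }
    return op1 - op0 + 1;         /* exact count; cannot overflow here */
}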


r~



[Qemu-devel] [PATCH V5 3/4] tests/migration: Add migration-test header file

2018-02-23 Thread Wei Huang
This patch moves the settings related to migration-test from the
migration-test.c file to a separate header file. It also renames
x86-a-b-bootblock.s to x86-a-b-bootblock.S, so that the gcc
preprocessor can include the C-style header file correctly.
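
For context, a rough C rendering of what the boot block's inner loop does with
these shared constants (illustration only; the function name is made up and the
real code is the assembly in x86-a-b-bootblock.S):

#include "migration-test.h"

/* Illustration only: increment one byte in every page of the test range,
 * which is what the boot block's inner loop does in assembly.
 */
static void touch_test_range(void)
{
    unsigned long addr;

    for (addr = TEST_MEM_START; addr < TEST_MEM_END;
         addr += TEST_MEM_PAGE_SIZE) {
        (*(volatile unsigned char *)addr)++;
    }
}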

Signed-off-by: Wei Huang 
---
 tests/migration-test.c | 28 +++---
 tests/migration/Makefile   |  4 ++--
 tests/migration/migration-test.h   | 18 ++
 .../{x86-a-b-bootblock.s => x86-a-b-bootblock.S}   |  7 +++---
 tests/migration/x86-a-b-bootblock.h|  2 +-
 5 files changed, 39 insertions(+), 20 deletions(-)
 create mode 100644 tests/migration/migration-test.h
 rename tests/migration/{x86-a-b-bootblock.s => x86-a-b-bootblock.S} (94%)

diff --git a/tests/migration-test.c b/tests/migration-test.c
index 74f9361bdd..ce2922df6a 100644
--- a/tests/migration-test.c
+++ b/tests/migration-test.c
@@ -21,10 +21,10 @@
 #include "sysemu/sysemu.h"
 #include "hw/nvram/chrp_nvram.h"
 
-#define MIN_NVRAM_SIZE 8192 /* from spapr_nvram.c */
+#include "migration/migration-test.h"
 
-const unsigned start_address = 1024 * 1024;
-const unsigned end_address = 100 * 1024 * 1024;
+const unsigned start_address = TEST_MEM_START;
+const unsigned end_address = TEST_MEM_END;
 bool got_stop;
 
 #if defined(__linux__)
@@ -77,8 +77,8 @@ static bool ufd_version_check(void)
 
 static const char *tmpfs;
 
-/* A simple PC boot sector that modifies memory (1-100MB) quickly
- * outputting a 'B' every so often if it's still running.
+/* The boot file modifies memory area in [start_address, end_address)
+ * repeatedly. It outputs a 'B' at a fixed rate while it's still running.
  */
 #include "tests/migration/x86-a-b-bootblock.h"
 
@@ -104,9 +104,8 @@ static void init_bootfile_ppc(const char *bootpath)
 memcpy(header->name, "common", 6);
 chrp_nvram_finish_partition(header, MIN_NVRAM_SIZE);
 
-/* FW_MAX_SIZE is 4MB, but slof.bin is only 900KB,
- * so let's modify memory between 1MB and 100MB
- * to do like PC bootsector
+/* FW_MAX_SIZE is 4MB, but slof.bin is only 900KB. So it is OK to modify
+ * memory between start_address and end_address like PC bootsector does.
  */
 
 sprintf(buf + 16,
@@ -263,11 +262,11 @@ static void wait_for_migration_pass(QTestState *who)
 static void check_guests_ram(QTestState *who)
 {
 /* Our ASM test will have been incrementing one byte from each page from
- * 1MB to <100MB in order.
- * This gives us a constraint that any page's byte should be equal or less
- * than the previous pages byte (mod 256); and they should all be equal
- * except for one transition at the point where we meet the incrementer.
- * (We're running this with the guest stopped).
+ * start_address to  $@
diff --git a/tests/migration/migration-test.h b/tests/migration/migration-test.h
new file mode 100644
index 00..48b59b3281
--- /dev/null
+++ b/tests/migration/migration-test.h
@@ -0,0 +1,18 @@
+/*
+ * Copyright (c) 2018 Red Hat, Inc. and/or its affiliates
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef _TEST_MIGRATION_H_
+#define _TEST_MIGRATION_H_
+
+/* Common */
+#define TEST_MEM_START  (1 * 1024 * 1024)
+#define TEST_MEM_END(100 * 1024 * 1024)
+#define TEST_MEM_PAGE_SIZE  4096
+
+/* PPC */
+#define MIN_NVRAM_SIZE 8192 /* from spapr_nvram.c */
+
+#endif /* _TEST_MIGRATION_H_ */
diff --git a/tests/migration/x86-a-b-bootblock.s 
b/tests/migration/x86-a-b-bootblock.S
similarity index 94%
rename from tests/migration/x86-a-b-bootblock.s
rename to tests/migration/x86-a-b-bootblock.S
index 98dbfab084..08b51f9e7f 100644
--- a/tests/migration/x86-a-b-bootblock.s
+++ b/tests/migration/x86-a-b-bootblock.S
@@ -12,6 +12,7 @@
 #
 # Author: dgilb...@redhat.com
 
+#include "migration-test.h"
 
 .code16
 .org 0x7c00
@@ -45,11 +46,11 @@ start: # at 0x7c00 ?
 mov $0, %bl
 mainloop:
 # Start from 1MB
-mov $(1024*1024),%eax
+mov $TEST_MEM_START,%eax
 innerloop:
 incb (%eax)
-add $4096,%eax
-cmp $(100*1024*1024),%eax
+add $TEST_MEM_PAGE_SIZE,%eax
+cmp $TEST_MEM_END,%eax
 jl innerloop
 
 inc %bl
diff --git a/tests/migration/x86-a-b-bootblock.h 
b/tests/migration/x86-a-b-bootblock.h
index 9e8e2e028b..44e4b99506 100644
--- a/tests/migration/x86-a-b-bootblock.h
+++ b/tests/migration/x86-a-b-bootblock.h
@@ -1,5 +1,5 @@
 /* This file is automatically generated from
- * tests/migration/x86-a-b-bootblock.s, edit that and then run
+ * tests/migration/x86-a-b-bootblock.S, edit that and then run
  * "make x86-a-b-bootblock.h" inside tests/migration to update,
  * and then remember to send both in your patch submission.
  */
-- 
2.14.3




[Qemu-devel] [PATCH V5 1/4] rules: Move cross compilation auto detection functions to rules.mak

2018-02-23 Thread Wei Huang
This patch moves the auto-detection functions for cross compilation from
roms/Makefile to rules.mak so that the functions can be shared among the
Makefiles in QEMU.

Signed-off-by: Wei Huang 
---
 roms/Makefile | 24 +++-
 rules.mak | 15 +++
 2 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/roms/Makefile b/roms/Makefile
index b5e5a69e91..e972c65333 100644
--- a/roms/Makefile
+++ b/roms/Makefile
@@ -21,23 +21,6 @@ pxe-rom-virtio   efi-rom-virtio   : DID := 1000
 pxe-rom-vmxnet3  efi-rom-vmxnet3  : VID := 15ad
 pxe-rom-vmxnet3  efi-rom-vmxnet3  : DID := 07b0
 
-#
-# cross compiler auto detection
-#
-path := $(subst :, ,$(PATH))
-system := $(shell uname -s | tr "A-Z" "a-z")
-
-# first find cross binutils in path
-find-cross-ld = $(firstword $(wildcard $(patsubst 
%,%/$(1)-*$(system)*-ld,$(path
-# then check we have cross gcc too
-find-cross-gcc = $(firstword $(wildcard $(patsubst %ld,%gcc,$(call 
find-cross-ld,$(1)
-# finally strip off path + toolname so we get the prefix
-find-cross-prefix = $(subst gcc,,$(notdir $(call find-cross-gcc,$(1
-
-powerpc64_cross_prefix := $(call find-cross-prefix,powerpc64)
-powerpc_cross_prefix := $(call find-cross-prefix,powerpc)
-x86_64_cross_prefix := $(call find-cross-prefix,x86_64)
-
 # tag our seabios builds
 SEABIOS_EXTRAVERSION="-prebuilt.qemu-project.org"
 
@@ -66,6 +49,13 @@ default:
@echo "  skiboot-- update skiboot.lid"
@echo "  u-boot.e500-- update u-boot.e500"
 
+SRC_PATH=..
+include $(SRC_PATH)/rules.mak
+
+powerpc64_cross_prefix := $(call find-cross-prefix,powerpc64)
+powerpc_cross_prefix := $(call find-cross-prefix,powerpc)
+x86_64_cross_prefix := $(call find-cross-prefix,x86_64)
+
 bios: build-seabios-config-seabios-128k build-seabios-config-seabios-256k
cp seabios/builds/seabios-128k/bios.bin ../pc-bios/bios.bin
cp seabios/builds/seabios-256k/bios.bin ../pc-bios/bios-256k.bin
diff --git a/rules.mak b/rules.mak
index 6e943335f3..ef8adee3f8 100644
--- a/rules.mak
+++ b/rules.mak
@@ -62,6 +62,21 @@ expand-objs = $(strip $(sort $(filter %.o,$1)) \
   $(foreach o,$(filter %.mo,$1),$($o-objs)) \
   $(filter-out %.o %.mo,$1))
 
+# Cross compilation auto detection. Use find-cross-prefix to detect the
+# target architecture's prefix, and then append it to the build tool or pass
+# it to CROSS_COMPILE directly. Here is one example:
+#  x86_64_cross_prefix := $(call find-cross-prefix,x86_64)
+#  $(x86_64_cross_prefix)gcc -c test.c -o test.o
+#  make -C testdir CROSS_COMPILE=$(x86_64_cross_prefix)
+cross-search-path := $(subst :, ,$(PATH))
+cross-host-system := $(shell uname -s | tr "A-Z" "a-z")
+
+find-cross-ld = $(firstword $(wildcard $(patsubst \
+%,%/$(1)-*$(cross-host-system)*-ld,$(cross-search-path
+find-cross-gcc = $(firstword $(wildcard \
+$(patsubst %ld,%gcc,$(call find-cross-ld,$(1)
+find-cross-prefix = $(subst gcc,,$(notdir $(call find-cross-gcc,$(1
+
 %.o: %.c
$(call quiet-command,$(CC) $(QEMU_LOCAL_INCLUDES) $(QEMU_INCLUDES) \
   $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) $($@-cflags) \
-- 
2.14.3




[Qemu-devel] [PATCH V5 0/4] tests: Add migration test for aarch64

2018-02-23 Thread Wei Huang
This patchset adds a migration test for aarch64. It leverages
Dave Gilbert's recent patch "tests/migration: Add source to PC boot block"
to create a new test case for aarch64.

V4->V5:
 * Extract cross compilation detection code into rules.mak for sharing
 * Minor comment and code revision in migration-test.c & aarch64-a-b-kernel.S
 
V3->V4:
 * Rename .s to .S, allowing assembly to include C-style header file
 * Move test defines into a new migration-test.h file
 * Use different cpu & gic settings for kvm and tcg modes on aarch64
 * Clean up aarch64-a-b-kernel.S based on Andrew Jones' comments
 
V2->V3:
 * Convert build script to Makefile
 * Add cross-compilation support
 * Fix CPU type for "tcg" machine type
 * Revise asm code and the compilation process from asm to header file

V1->V2:
 * Similar to Dave Gilbert's recent changes to migration-test, we
   provide the test source and a build script in V2.
 * aarch64 kernel blob is defined as "unsigned char" because the source
   is now provided in V2.
 * Add "-machine none" to test_deprecated() because aarch64 doesn't have
   a default machine type.

RFC->V1:
 * aarch64 kernel blob is defined as an uint32_t array
 * The test code is re-written to address a data caching issue under KVM.
   Tests passed under both x86 and aarch64.
 * Re-use init_bootfile_x86() for both x86 and aarch64
 * Other minor fixes

Thanks,
-Wei

Wei Huang (4):
  rules: Move cross compilation auto detection functions to rules.mak
  tests/migration: Convert the boot block compilation script into
Makefile
  tests/migration: Add migration-test header file
  tests: Add migration test for aarch64

 roms/Makefile  | 24 ++-
 rules.mak  | 15 +
 tests/Makefile.include |  1 +
 tests/migration-test.c | 74 --
 tests/migration/Makefile   | 44 +
 tests/migration/aarch64-a-b-kernel.S   | 71 +
 tests/migration/aarch64-a-b-kernel.h   | 19 ++
 tests/migration/migration-test.h   | 23 +++
 tests/migration/rebuild-x86-bootblock.sh   | 33 --
 .../{x86-a-b-bootblock.s => x86-a-b-bootblock.S}   | 12 ++--
 tests/migration/x86-a-b-bootblock.h|  4 +-
 11 files changed, 244 insertions(+), 76 deletions(-)
 create mode 100644 tests/migration/Makefile
 create mode 100644 tests/migration/aarch64-a-b-kernel.S
 create mode 100644 tests/migration/aarch64-a-b-kernel.h
 create mode 100644 tests/migration/migration-test.h
 delete mode 100755 tests/migration/rebuild-x86-bootblock.sh
 rename tests/migration/{x86-a-b-bootblock.s => x86-a-b-bootblock.S} (88%)

-- 
2.14.3




Re: [Qemu-devel] [PATCH v3 04/31] target/arm/cpu.h: add additional float_status flags

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> Half-precision flush to zero behaviour is controlled by a separate
> FZ16 bit in the FPCR. To handle this we pass a pointer to
> fp_status_fp16 when working on half-precision operations. The value of
> the presented FPCR is calculated from an amalgam of the two when read.
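
A minimal sketch of the "amalgam" idea, assuming softfloat's
get_float_exception_flags() and the fp_status_fp16 field named above; this is
illustrative, not the patch's actual code:

/* Sketch only: the guest-visible cumulative exception flags are the OR of
 * the standard float_status and the separate half-precision status that
 * FZ16 controls.
 */
static int vfp_get_accumulated_exc_flags(CPUARMState *env)
{
    return get_float_exception_flags(&env->vfp.fp_status)
         | get_float_exception_flags(&env->vfp.fp_status_fp16);
}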
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v3
>   - add FPCR_[FZ/FZ16/DN] defines to cpu.h and use
>   - only propagate flag status to fp_status as they are ored later
>   - ensure dnan and round mode propagated to fp_status_fp16
> ---

Reviewed-by: Richard Henderson 


r~



[Qemu-devel] [RFC v4 06/21] iotests: add pause_wait

2018-02-23 Thread John Snow
Split out the pause command into the actual pause and the wait.
Not every usage presently needs to resubmit a pause request.

The intent with the next commit will be to explicitly disallow
redundant or meaningless pause/resume requests, so the tests
need to become more judicious to reflect that.

Signed-off-by: John Snow 
---
 tests/qemu-iotests/030|  6 ++
 tests/qemu-iotests/055| 17 ++---
 tests/qemu-iotests/iotests.py | 12 
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index 457984b8e9..251883226c 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -86,11 +86,9 @@ class TestSingleDrive(iotests.QMPTestCase):
 result = self.vm.qmp('block-stream', device='drive0')
 self.assert_qmp(result, 'return', {})
 
-result = self.vm.qmp('block-job-pause', device='drive0')
-self.assert_qmp(result, 'return', {})
-
+self.pause_job('drive0', wait=False)
 self.vm.resume_drive('drive0')
-self.pause_job('drive0')
+self.pause_wait('drive0')
 
 result = self.vm.qmp('query-block-jobs')
 offset = self.dictpath(result, 'return[0]/offset')
diff --git a/tests/qemu-iotests/055 b/tests/qemu-iotests/055
index 8a5d9fd269..3437c11507 100755
--- a/tests/qemu-iotests/055
+++ b/tests/qemu-iotests/055
@@ -86,11 +86,9 @@ class TestSingleDrive(iotests.QMPTestCase):
  target=target, sync='full')
 self.assert_qmp(result, 'return', {})
 
-result = self.vm.qmp('block-job-pause', device='drive0')
-self.assert_qmp(result, 'return', {})
-
+self.pause_job('drive0', wait=False)
 self.vm.resume_drive('drive0')
-self.pause_job('drive0')
+self.pause_wait('drive0')
 
 result = self.vm.qmp('query-block-jobs')
 offset = self.dictpath(result, 'return[0]/offset')
@@ -303,13 +301,12 @@ class TestSingleTransaction(iotests.QMPTestCase):
 ])
 self.assert_qmp(result, 'return', {})
 
-result = self.vm.qmp('block-job-pause', device='drive0')
-self.assert_qmp(result, 'return', {})
+self.pause_job('drive0', wait=False)
 
 result = self.vm.qmp('block-job-set-speed', device='drive0', speed=0)
 self.assert_qmp(result, 'return', {})
 
-self.pause_job('drive0')
+self.pause_wait('drive0')
 
 result = self.vm.qmp('query-block-jobs')
 offset = self.dictpath(result, 'return[0]/offset')
@@ -534,11 +531,9 @@ class TestDriveCompression(iotests.QMPTestCase):
 result = self.vm.qmp(cmd, device='drive0', sync='full', compress=True, 
**args)
 self.assert_qmp(result, 'return', {})
 
-result = self.vm.qmp('block-job-pause', device='drive0')
-self.assert_qmp(result, 'return', {})
-
+self.pause_job('drive0', wait=False)
 self.vm.resume_drive('drive0')
-self.pause_job('drive0')
+self.pause_wait('drive0')
 
 result = self.vm.qmp('query-block-jobs')
 offset = self.dictpath(result, 'return[0]/offset')
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index 1bcc9ca57d..5303bbc8e2 100644
--- a/tests/qemu-iotests/iotests.py
+++ b/tests/qemu-iotests/iotests.py
@@ -473,10 +473,7 @@ class QMPTestCase(unittest.TestCase):
 event = self.wait_until_completed(drive=drive)
 self.assert_qmp(event, 'data/type', 'mirror')
 
-def pause_job(self, job_id='job0'):
-result = self.vm.qmp('block-job-pause', device=job_id)
-self.assert_qmp(result, 'return', {})
-
+def pause_wait(self, job_id='job0'):
 with Timeout(1, "Timeout waiting for job to pause"):
 while True:
 result = self.vm.qmp('query-block-jobs')
@@ -484,6 +481,13 @@ class QMPTestCase(unittest.TestCase):
 if job['device'] == job_id and job['paused'] == True and 
job['busy'] == False:
 return job
 
+def pause_job(self, job_id='job0', wait=True):
+result = self.vm.qmp('block-job-pause', device=job_id)
+self.assert_qmp(result, 'return', {})
+if wait:
+return self.pause_wait(job_id)
+return result
+
 
 def notrun(reason):
 '''Skip this test suite'''
-- 
2.14.3




[Qemu-devel] [RFC v4 01/21] blockjobs: fix set-speed kick

2018-02-23 Thread John Snow
If speed is '0' it's not actually "less than" the previous speed.
Kick the job in this case too.

Signed-off-by: John Snow 
---
 blockjob.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/blockjob.c b/blockjob.c
index 3f52f29f75..24833ef30f 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -499,7 +499,7 @@ void block_job_set_speed(BlockJob *job, int64_t speed, 
Error **errp)
 }
 
 job->speed = speed;
-if (speed <= old_speed) {
+if (speed && speed <= old_speed) {
 return;
 }
 
-- 
2.14.3




[Qemu-devel] [RFC v4 03/21] blockjobs: add manual property

2018-02-23 Thread John Snow
This property will be used to opt in to the new BlockJobs workflow,
which allows tighter, more explicit control over transitions from
one runstate to another.

While we're here, fix up the documentation for block_job_create
a little bit.

Signed-off-by: John Snow 
---
 blockjob.c   |  1 +
 include/block/blockjob.h | 10 ++
 include/block/blockjob_int.h |  4 +++-
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/blockjob.c b/blockjob.c
index 7ba3683ee3..47468331ec 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -700,6 +700,7 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 job->paused= true;
 job->pause_count   = 1;
 job->refcnt= 1;
+job->manual= (flags & BLOCK_JOB_MANUAL);
 aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
block_job_sleep_timer_cb, job);
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 00403d9482..8ffabdcbc4 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -141,14 +141,24 @@ typedef struct BlockJob {
  */
 QEMUTimer sleep_timer;
 
+/**
+ * Set to true when the management API has requested manual job
+ * management semantics.
+ */
+bool manual;
+
 /** Non-NULL if this job is part of a transaction */
 BlockJobTxn *txn;
 QLIST_ENTRY(BlockJob) txn_list;
 } BlockJob;
 
 typedef enum BlockJobCreateFlags {
+/* Default behavior */
 BLOCK_JOB_DEFAULT = 0x00,
+/* BlockJob is not QMP-created and should not send QMP events */
 BLOCK_JOB_INTERNAL = 0x01,
+/* BlockJob requests manual job management steps. */
+BLOCK_JOB_MANUAL = 0x02,
 } BlockJobCreateFlags;
 
 /**
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index becaae74c2..259d49b32a 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -114,11 +114,13 @@ struct BlockJobDriver {
  * block_job_create:
  * @job_id: The id of the newly-created job, or %NULL to have one
  * generated automatically.
- * @job_type: The class object for the newly-created job.
+ * @driver: The class object for the newly-created job.
  * @txn: The transaction this job belongs to, if any. %NULL otherwise.
  * @bs: The block
  * @perm, @shared_perm: Permissions to request for @bs
  * @speed: The maximum speed, in bytes per second, or 0 for unlimited.
+ * @flags: Creation flags for the Block Job.
+ * See @BlockJobCreateFlags
  * @cb: Completion function for the job.
  * @opaque: Opaque pointer value passed to @cb.
  * @errp: Error object.
-- 
2.14.3




[Qemu-devel] [RFC v4 07/21] blockjobs: add block_job_verb permission table

2018-02-23 Thread John Snow
Keeping track of which commands ("verbs") are appropriate for jobs in
which state is also somewhat burdensome.

As of this commit, it looks rather useless, but begins to look more
interesting the more states we add to the STM table.

A recurring theme is that no verb will apply to an 'undefined' job.

Further, it's not presently possible to restrict the "pause" or "resume"
verbs any more than they are in this commit because of the asynchronous
nature of how jobs enter the PAUSED state; justifications for some
seemingly erroneous applications are given below.

=
Verbs
=

Cancel:Any state except undefined.
Pause: Any state except undefined;
   'created': Requests that the job pauses as it starts.
   'running': Normal usage. (PAUSED)
   'paused':  The job may be paused for internal reasons,
  but the user may wish to force an indefinite
  user-pause, so this is allowed.
   'ready':   Normal usage. (STANDBY)
   'standby': Same logic as above.
Resume:Any state except undefined;
   'created': Will lift a user's pause-on-start request.
   'running': Will lift a pause request before it takes effect.
   'paused':  Normal usage.
   'ready':   Will lift a pause request before it takes effect.
   'standby': Normal usage.
Set-speed: Any state except undefined, though ready may not be meaningful.
Complete:  Only a 'ready' job may accept a complete request.


===
Changes
===

(1)

To facilitate "nice" error checking, all five major block-job verb
interfaces in blockjob.c now support an errp parameter:

- block_job_user_cancel is added as a new interface.
- block_job_user_pause gains an errp parameter
- block_job_user_resume gains an errp parameter
- block_job_set_speed already had an errp parameter.
- block_job_complete already had an errp parameter.

(2)

block-job-pause and block-job-resume will no longer no-op when trying
to pause an already paused job, or trying to resume a job that isn't
paused. These functions will now report that they did not perform the
action requested because it was not possible.

iotests have been adjusted to address this new behavior.

(3)

block-job-complete doesn't worry about checking !block_job_started,
because the permission table guards against this.

(4)

test-bdrv-drain's job implementation needs to announce that it is
'ready' now, in order to be completed.
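
As a minimal sketch of what such a gate can look like (the real helper is in
the diff below, which this archive truncates; the *_lookup table names here are
assumptions):

/* Sketch only: reject a verb that the permission table disallows for the
 * job's current state, and report why via errp.
 */
static int block_job_apply_verb(BlockJob *job, BlockJobVerb verb, Error **errp)
{
    assert(verb >= 0 && verb < BLOCK_JOB_VERB__MAX);
    if (BlockJobVerbTable[verb][job->status]) {
        return 0;
    }
    error_setg(errp, "Job '%s' in state '%s' cannot accept command verb '%s'",
               job->id,
               qapi_enum_lookup(&BlockJobStatus_lookup, job->status),
               qapi_enum_lookup(&BlockJobVerb_lookup, verb));
    return -EPERM;
}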

Signed-off-by: John Snow 
---
 block/trace-events   |  1 +
 blockdev.c   | 10 +++
 blockjob.c   | 71 ++--
 include/block/blockjob.h | 13 +++--
 qapi/block-core.json | 20 ++
 tests/test-bdrv-drain.c  |  1 +
 6 files changed, 100 insertions(+), 16 deletions(-)

diff --git a/block/trace-events b/block/trace-events
index b75a0c8409..3fe89f7ea6 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -6,6 +6,7 @@ bdrv_lock_medium(void *bs, bool locked) "bs %p locked %d"
 
 # blockjob.c
 block_job_state_transition(void *job,  int ret, const char *legal, const char 
*s0, const char *s1) "job %p (ret: %d) attempting %s transition (%s-->%s)"
+block_job_apply_verb(void *job, const char *state, const char *verb, const 
char *legal) "job %p in state %s; applying verb %s (%s)"
 
 # block/block-backend.c
 blk_co_preadv(void *blk, void *bs, int64_t offset, unsigned int bytes, int 
flags) "blk %p bs %p offset %"PRId64" bytes %u flags 0x%x"
diff --git a/blockdev.c b/blockdev.c
index 3fb1ca803c..cba935a0a6 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3805,7 +3805,7 @@ void qmp_block_job_cancel(const char *device,
 }
 
 trace_qmp_block_job_cancel(job);
-block_job_cancel(job);
+block_job_user_cancel(job, errp);
 out:
 aio_context_release(aio_context);
 }
@@ -3815,12 +3815,12 @@ void qmp_block_job_pause(const char *device, Error 
**errp)
 AioContext *aio_context;
 BlockJob *job = find_block_job(device, &aio_context, errp);
 
-if (!job || block_job_user_paused(job)) {
+if (!job) {
 return;
 }
 
 trace_qmp_block_job_pause(job);
-block_job_user_pause(job);
+block_job_user_pause(job, errp);
 aio_context_release(aio_context);
 }
 
@@ -3829,12 +3829,12 @@ void qmp_block_job_resume(const char *device, Error 
**errp)
 AioContext *aio_context;
 BlockJob *job = find_block_job(device, &aio_context, errp);
 
-if (!job || !block_job_user_paused(job)) {
+if (!job) {
 return;
 }
 
 trace_qmp_block_job_resume(job);
-block_job_user_resume(job);
+block_job_user_resume(job, errp);
 aio_context_release(aio_context);
 }
 
diff --git a/blockjob.c b/blockjob.c
index d745b3bb69..4e424fef72 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -53,6 +53,15 @@ bool 
BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
 /* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0},
 };
 
+bool 

Re: [Qemu-devel] [RFC v4 00/21] blockjobs: add explicit job management

2018-02-23 Thread no-reply
Hi,

This series failed build test on ppcbe host. Please find the details below.

Type: series
Message-id: 20180223235142.21501-1-js...@redhat.com
Subject: [Qemu-devel] [RFC v4 00/21] blockjobs: add explicit job management

=== TEST SCRIPT BEGIN ===
#!/bin/bash
# Testing script will be invoked under the git checkout with
# HEAD pointing to a commit that has the patches applied on top of "base"
# branch
set -e
echo "=== ENV ==="
env
echo "=== PACKAGES ==="
rpm -qa
echo "=== TEST BEGIN ==="
INSTALL=$PWD/install
BUILD=$PWD/build
mkdir -p $BUILD $INSTALL
SRC=$PWD
cd $BUILD
$SRC/configure --prefix=$INSTALL
make -j100
# XXX: we need reliable clean up
# make check -j100 V=1
make install
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 - [tag update]  patchew/20180223153636.29809-1-alex.ben...@linaro.org -> 
patchew/20180223153636.29809-1-alex.ben...@linaro.org
 * [new tag] patchew/20180223235142.21501-1-js...@redhat.com -> 
patchew/20180223235142.21501-1-js...@redhat.com
Submodule 'capstone' (git://git.qemu.org/capstone.git) registered for path 
'capstone'
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (git://git.qemu.org/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (git://git.qemu-project.org/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/ipxe' (git://git.qemu-project.org/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (git://git.qemu-project.org/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (git://git.qemu-project.org/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/qemu-palcode' (git://github.com/rth7680/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (git://git.qemu-project.org/seabios.git/) registered 
for path 'roms/seabios'
Submodule 'roms/seabios-hppa' (git://github.com/hdeller/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (git://git.qemu-project.org/sgabios.git) registered 
for path 'roms/sgabios'
Submodule 'roms/skiboot' (git://git.qemu.org/skiboot.git) registered for path 
'roms/skiboot'
Submodule 'roms/u-boot' (git://git.qemu-project.org/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/vgabios' (git://git.qemu-project.org/vgabios.git/) registered 
for path 'roms/vgabios'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered 
for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'd4e7d7ac663fcb55f1b93575445fcbca372f17a7'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'fa981320a1e0968d6fc1b8de319723ff8212b337'
Cloning into 'roms/ipxe'...
Submodule path 'roms/ipxe': checked out 
'0600d3ae94f93efd10fc6b3c7420a9557a3a1670'
Cloning into 'roms/openbios'...
Submodule path 'roms/openbios': checked out 
'54d959d97fb331708767b2fd4a878efd2bbc41bb'
Cloning into 'roms/openhackware'...
Submodule path 'roms/openhackware': checked out 
'c559da7c8eec5e45ef1f67978827af6f0b9546f5'
Cloning into 'roms/qemu-palcode'...
Submodule path 'roms/qemu-palcode': checked out 
'f3c7e44c70254975df2a00af39701eafbac4d471'
Cloning into 'roms/seabios'...
Submodule path 'roms/seabios': checked out 
'63451fca13c75870e1703eb3e20584d91179aebc'
Cloning into 'roms/seabios-hppa'...
Submodule path 'roms/seabios-hppa': checked out 
'649e6202b8d65d46c69f542b1380f840fbe8ab13'
Cloning into 'roms/sgabios'...
Submodule path 'roms/sgabios': checked out 
'cbaee52287e5f32373181cff50a00b6c4ac9015a'
Cloning into 'roms/skiboot'...
Submodule path 'roms/skiboot': checked out 
'e0ee24c27a172bcf482f6f2bc905e6211c134bcc'
Cloning into 'roms/u-boot'...
Submodule path 'roms/u-boot': checked out 
'd85ca029f257b53a96da6c2fb421e78a003a9943'
Cloning into 'roms/vgabios'...
Submodule path 'roms/vgabios': checked out 
'19ea12c230ded95928ecaef0db47a82231c2e485'
Cloning into 'ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out 
'6b3d716e2b6472eb7189d3220552280ef3d832ce'
Switched to a new branch 'test'
230e578 blockjobs: add manual_mgmt option to transactions
f278a51 iotests: test manual job dismissal
8e473ab blockjobs: Expose manual property
7ad2d01 blockjobs: add block-job-finalize
3857c91 blockjobs: add PENDING status and event
18eb8a4 blockjobs: add waiting status
daf9613 blockjobs: add prepare callback
78be501 blockjobs: add block_job_txn_apply function
4b659ab blockjobs: add commit, abort, clean helpers
4023046 blockjobs: ensure abort is called for cancelled jobs
e9300b1 blockjobs: add block_job_dismiss
4fc045e blockjobs: add NULL state
e6aa454 blockjobs: add CONCLUDED state
78efa2f 

Re: [Qemu-devel] [PATCH v6 00/23] RISC-V QEMU Port Submission

2018-02-23 Thread Richard Henderson
On 02/22/2018 04:11 PM, Michael Clark wrote:
> QEMU RISC-V Emulation Support (RV64GC, RV32GC)
> 
> This is hopefully the "fix remaining issues in-tree" release.

FWIW, I'm happy with this.

For those patches that I haven't given an explicit R-b, e.g. most of hw/, I
didn't see anything obviously wrong.  So I'll give them

Acked-by: Richard Henderson 

Unless anyone has any other comments, I would expect the next step would be for
you to create a signed pull request for Peter.


r~



Re: [Qemu-devel] [PATCH V4 3/3] tests: Add migration test for aarch64

2018-02-23 Thread Wei Huang


On 02/22/2018 03:00 AM, Andrew Jones wrote:
> On Wed, Feb 21, 2018 at 10:44:17PM -0600, Wei Huang wrote:
>> This patch adds migration test support for aarch64. The test code, which
>> implements the same functionality as x86, is booted as a kernel in qemu.
>> Here are the design choices we make for aarch64:
>>
>>  * We choose this -kernel approach because aarch64 QEMU doesn't provide a
>>built-in fw like x86 does. So instead of relying on a boot loader, we
>>use -kernel approach for aarch64.
>>  * The serial output is sent to PL011 directly.
>>  * The physical memory base for the mach-virt machine is 0x40000000. We change
>>the start_address and end_address for aarch64.
>>
>> In addition to providing the binary, this patch also includes the source
>> code and the build script in tests/migration/. So users can change the
>> source and/or re-compile the binary as they wish.
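
For reference, a sketch of the serial-output choice described in the bullets
above (illustration only, assuming the virt machine's PL011 data register at
0x09000000; the real code is the assembly in aarch64-a-b-kernel.S):

#define PL011_UARTDR  0x09000000UL   /* PL011 data register on the virt board */

/* Illustration only: bare-metal output by storing bytes straight into the
 * PL011 data register, with no firmware or boot loader involved.
 */
static void serial_putc(char c)
{
    *(volatile unsigned char *)PL011_UARTDR = (unsigned char)c;
}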
>>
>> Signed-off-by: Wei Huang 
>> ---
>>  tests/Makefile.include   |  1 +
>>  tests/migration-test.c   | 47 +---
>>  tests/migration/Makefile | 12 +-
>>  tests/migration/aarch64-a-b-kernel.S | 71 
>> 
>>  tests/migration/aarch64-a-b-kernel.h | 19 ++
>>  tests/migration/migration-test.h |  5 +++
>>  6 files changed, 147 insertions(+), 8 deletions(-)
>>  create mode 100644 tests/migration/aarch64-a-b-kernel.S
>>  create mode 100644 tests/migration/aarch64-a-b-kernel.h
>>
>> diff --git a/tests/Makefile.include b/tests/Makefile.include
>> index a1bcbffe12..df9f64438f 100644
>> --- a/tests/Makefile.include
>> +++ b/tests/Makefile.include
>> @@ -372,6 +372,7 @@ check-qtest-arm-y += tests/sdhci-test$(EXESUF)
>>  check-qtest-aarch64-y = tests/numa-test$(EXESUF)
>>  check-qtest-aarch64-y += tests/sdhci-test$(EXESUF)
>>  check-qtest-aarch64-y += tests/boot-serial-test$(EXESUF)
>> +check-qtest-aarch64-y += tests/migration-test$(EXESUF)
>>  
>>  check-qtest-microblazeel-y = $(check-qtest-microblaze-y)
>>  
>> diff --git a/tests/migration-test.c b/tests/migration-test.c
>> index e2e06ed337..a4f6732a59 100644
>> --- a/tests/migration-test.c
>> +++ b/tests/migration-test.c
>> @@ -11,6 +11,7 @@
>>   */
>>  
>>  #include "qemu/osdep.h"
>> +#include 
>>  
>>  #include "libqtest.h"
>>  #include "qapi/qmp/qdict.h"
>> @@ -23,8 +24,8 @@
>>  
>>  #include "migration/migration-test.h"
>>  
>> -const unsigned start_address = TEST_MEM_START;
>> -const unsigned end_address = TEST_MEM_END;
>> +unsigned start_address = TEST_MEM_START;
>> +unsigned end_address = TEST_MEM_END;
>>  bool got_stop;
>>  
>>  #if defined(__linux__)
>> @@ -81,12 +82,13 @@ static const char *tmpfs;
>>   * outputting a 'B' every so often if it's still running.
>>   */
>>  #include "tests/migration/x86-a-b-bootblock.h"
>> +#include "tests/migration/aarch64-a-b-kernel.h"
>>  
>> -static void init_bootfile_x86(const char *bootpath)
>> +static void init_bootfile(const char *bootpath, void *content)
>>  {
>>  FILE *bootfile = fopen(bootpath, "wb");
>>  
>> -g_assert_cmpint(fwrite(x86_bootsect, 512, 1, bootfile), ==, 1);
>> +g_assert_cmpint(fwrite(content, 512, 1, bootfile), ==, 1);
>>  fclose(bootfile);
>>  }
>>  
>> @@ -393,7 +395,7 @@ static void test_migrate_start(QTestState **from, 
>> QTestState **to,
>>  got_stop = false;
>>  
>>  if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
>> -init_bootfile_x86(bootpath);
>> +init_bootfile(bootpath, x86_bootsect);
>>  cmd_src = g_strdup_printf("-machine accel=%s -m 150M"
>>" -name source,debug-threads=on"
>>" -serial file:%s/src_serial"
>> @@ -422,6 +424,39 @@ static void test_migrate_start(QTestState **from, 
>> QTestState **to,
>>" -serial file:%s/dest_serial"
>>" -incoming %s",
>>accel, tmpfs, uri);
>> +} else if (strcmp(arch, "aarch64") == 0) {
>> +const char *cpu;
>> +const char *gic_ver;
>> +struct utsname utsname;
>> +
>> +/* kvm and tcg need different cpu and gic-version configs */
>> +if (access("/dev/kvm", F_OK) == 0 && uname(&utsname) == 0 &&
>> +strcmp(utsname.machine, "aarch64") == 0) {
>> +accel = "kvm";
>> +cpu = "host";
>> +gic_ver = "host";
>> +} else {
>> +accel = "tcg";
>> +cpu = "cortex-a57";
>> +gic_ver = "2";
>> +}
>> +
>> +init_bootfile(bootpath, aarch64_kernel);
>> +cmd_src = g_strdup_printf("-machine virt,accel=%s,gic-version=%s "
>> +  "-name vmsource,debug-threads=on -cpu %s "
>> +  "-m 150M -serial file:%s/src_serial "
>> +  "-kernel %s ",
>> +  accel, gic_ver, cpu, tmpfs, 

Re: [Qemu-devel] [PATCH v3 03/31] target/arm/cpu.h: update comment for half-precision values

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> @@ -168,6 +168,7 @@ typedef struct {
>   *  Qn = regs[n].d[1]:regs[n].d[0]
>   *  Dn = regs[n].d[0]
>   *  Sn = regs[n].d[0] bits 31..0
> + *  Hn = regs[n].d[0] bits 15..0 for even n, and bits 31..16 for odd n

Everything past here --^ is wrong.


r~



[Qemu-devel] [RFC v4 05/21] blockjobs: add state transition table

2018-02-23 Thread John Snow
The state transition table has mostly been implied. We're about to make
it a bit more complex, so let's make the STM explicit instead.

Perform state transitions with a function that for now just asserts the
transition is appropriate.

Transitions:
Undefined -> Created: During job initialization.
Created   -> Running: Once the job is started.
  Jobs cannot transition from "Created" to "Paused"
  directly, but will instead synchronously transition
  through running to paused immediately.
Running   -> Paused:  Normal workflow for pauses.
Running   -> Ready:   Normal workflow for jobs reaching their sync point.
  (e.g. mirror)
Ready -> Standby: Normal workflow for pausing ready jobs.
Paused-> Running: Normal resume.
Standby   -> Ready:   Resume of a Standby job.


+-+
|UNDEFINED|
+--+--+
   |
+--v+
|CREATED|
+--++
   |
+--v+ +--+
|RUNNING<->PAUSED|
+--++ +--+
   |
+--v--+   +---+
|READY<--->STANDBY|
+-+   +---+


Notably, there is no state presently defined as of this commit that
deals with a job after the "running" or "ready" states, so this table
will be adjusted alongside the commits that introduce those states.

Signed-off-by: John Snow 
---
 block/trace-events |  3 +++
 blockjob.c | 42 --
 2 files changed, 39 insertions(+), 6 deletions(-)

diff --git a/block/trace-events b/block/trace-events
index 02dd80ff0c..b75a0c8409 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -4,6 +4,9 @@
 bdrv_open_common(void *bs, const char *filename, int flags, const char 
*format_name) "bs %p filename \"%s\" flags 0x%x format_name \"%s\""
 bdrv_lock_medium(void *bs, bool locked) "bs %p locked %d"
 
+# blockjob.c
+block_job_state_transition(void *job,  int ret, const char *legal, const char 
*s0, const char *s1) "job %p (ret: %d) attempting %s transition (%s-->%s)"
+
 # block/block-backend.c
 blk_co_preadv(void *blk, void *bs, int64_t offset, unsigned int bytes, int 
flags) "blk %p bs %p offset %"PRId64" bytes %u flags 0x%x"
 blk_co_pwritev(void *blk, void *bs, int64_t offset, unsigned int bytes, int 
flags) "blk %p bs %p offset %"PRId64" bytes %u flags 0x%x"
diff --git a/blockjob.c b/blockjob.c
index 1be9c20cff..d745b3bb69 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -28,6 +28,7 @@
 #include "block/block.h"
 #include "block/blockjob_int.h"
 #include "block/block_int.h"
+#include "block/trace.h"
 #include "sysemu/block-backend.h"
 #include "qapi/error.h"
 #include "qapi/qmp/qerror.h"
@@ -41,6 +42,34 @@
  * block_job_enter. */
 static QemuMutex block_job_mutex;
 
+/* BlockJob State Transition Table */
+bool BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
+  /* U, C, R, P, Y, S */
+/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0},
+/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0},
+/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0},
+/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0},
+/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1},
+/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0},
+};
+
+static void block_job_state_transition(BlockJob *job, BlockJobStatus s1)
+{
+BlockJobStatus s0 = job->status;
+if (s0 == s1) {
+return;
+}
+assert(s1 >= 0 && s1 <= BLOCK_JOB_STATUS__MAX);
+trace_block_job_state_transition(job, job->ret, BlockJobSTT[s0][s1] ?
+ "allowed" : "disallowed",
+ qapi_enum_lookup(&BlockJobStatus_lookup,
+  s0),
+ qapi_enum_lookup(&BlockJobStatus_lookup,
+  s1));
+assert(BlockJobSTT[s0][s1]);
+job->status = s1;
+}
+
 static void block_job_lock(void)
 {
 qemu_mutex_lock(_job_mutex);
@@ -320,7 +349,7 @@ void block_job_start(BlockJob *job)
 job->pause_count--;
 job->busy = true;
 job->paused = false;
-job->status = BLOCK_JOB_STATUS_RUNNING;
+block_job_state_transition(job, BLOCK_JOB_STATUS_RUNNING);
 bdrv_coroutine_enter(blk_bs(job->blk), job->co);
 }
 
@@ -704,6 +733,7 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 job->refcnt= 1;
 job->manual= (flags & BLOCK_JOB_MANUAL);
 job->status= BLOCK_JOB_STATUS_CREATED;
+block_job_state_transition(job, BLOCK_JOB_STATUS_CREATED);
 aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
block_job_sleep_timer_cb, job);
@@ -818,13 +848,13 @@ void coroutine_fn block_job_pause_point(BlockJob *job)
 
 if (block_job_should_pause(job) && !block_job_is_cancelled(job)) {
 BlockJobStatus status = job->status;
-job->status = 

[Qemu-devel] [RFC v4 08/21] blockjobs: add ABORTING state

2018-02-23 Thread John Snow
Add a new state ABORTING.

This makes transitions from normative states to error states explicit
in the STM, and disambiguates which states may complete normally once
normal end-states (CONCLUDED) are added in future commits.

Notably, Paused/Standby jobs do not transition directly to aborting,
as they must wake up first and cooperate in their cancellation.

Transitions:
Running -> Aborting: can be cancelled or encounter an error
Ready   -> Aborting: can be cancelled or encounter an error

Verbs:
None. The job must finish cleaning itself up and report its final status.

 +-+
 |UNDEFINED|
 +--+--+
|
 +--v+
 |CREATED|
 +--++
|
 +--v+ +--+
   +-+RUNNING<->PAUSED|
   | +--++ +--+
   ||
   | +--v--+   +---+
   +-+READY<--->STANDBY|
   | +-+   +---+
   |
+--v-+
|ABORTING|
++

Signed-off-by: John Snow 
---
 blockjob.c   | 31 ++-
 qapi/block-core.json |  7 ++-
 2 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 4e424fef72..4c3fcda46c 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -44,22 +44,23 @@ static QemuMutex block_job_mutex;
 
 /* BlockJob State Transition Table */
 bool BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S */
-/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0},
-/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0},
-/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0},
-/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0},
-/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1},
-/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0},
+  /* U, C, R, P, Y, S, X */
+/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0},
+/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0},
+/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1},
+/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0},
+/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1},
+/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0},
+/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0},
 };
 
 bool BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S */
-[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1},
-[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1},
-[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1},
-[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1},
-[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0},
+  /* U, C, R, P, Y, S, X */
+[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 0},
+[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0},
+[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0},
+[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0},
+[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0},
 };
 
 static void block_job_state_transition(BlockJob *job, BlockJobStatus s1)
@@ -383,6 +384,10 @@ static void block_job_completed_single(BlockJob *job)
 {
 assert(job->completed);
 
+if (job->ret || block_job_is_cancelled(job)) {
+block_job_state_transition(job, BLOCK_JOB_STATUS_ABORTING);
+}
+
 if (!job->ret) {
 if (job->driver->commit) {
 job->driver->commit(job);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 11659496c5..3f7d559fc0 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -996,10 +996,15 @@
 # @standby: The job is ready, but paused. This is nearly identical to @paused.
 #   The job may return to @ready or otherwise be canceled.
 #
+# @aborting: The job is in the process of being aborted, and will finish with
+#an error. The job will afterwards report that it is @concluded.
+#This status may not be visible to the management process.
+#
 # Since: 2.12
 ##
 { 'enum': 'BlockJobStatus',
-  'data': ['undefined', 'created', 'running', 'paused', 'ready', 'standby'] }
+  'data': ['undefined', 'created', 'running', 'paused', 'ready', 'standby',
+   'aborting' ] }
 
 ##
 # @BlockJobInfo:
-- 
2.14.3




[Qemu-devel] [RFC v4 04/21] blockjobs: add status enum

2018-02-23 Thread John Snow
We're about to add several new states, and booleans are becoming
unwieldy and difficult to reason about. It would help to have a
more explicit bookkeeping of the state of blockjobs. To this end,
add a new "status" field and add our existing states in a redundant
manner alongside the bools they are replacing:

UNDEFINED: Placeholder, default state. Not currently visible to QMP
   unless changes occur in the future to allow creating jobs
   without starting them via QMP.
CREATED:   replaces !!job->co && paused && !busy
RUNNING:   replaces effectively (!paused && busy)
PAUSED:Nearly redundant with info->paused, which shows pause_count.
   This reports the actual status of the job, which almost always
   matches the paused request status. It differs in that it is
   strictly only true when the job has actually gone dormant.
READY: replaces job->ready.
STANDBY:   Paused, but job->ready is true.

New state additions in coming commits will not be quite so redundant:

WAITING:   Waiting on transaction. This job has finished all the work
   it can until the transaction converges, fails, or is canceled.
PENDING:   Pending authorization from user. This job has finished all the
   work it can until the job or transaction is finalized via
   block_job_finalize. This implies the transaction has converged
   and left the WAITING phase.
ABORTING:  Job has encountered an error condition and is in the process
   of aborting.
CONCLUDED: Job has ceased all operations and has a return code available
   for query and may be dismissed via block_job_dismiss.
NULL:  Job has been dismissed and (should) be destroyed. Should never
   be visible to QMP.

Some of these states appear somewhat superfluous, but it helps define the
expected flow of a job; so some of the states wind up being synchronous
empty transitions. Importantly, jobs can be in only one of these states
at any given time, which helps code and external users alike reason about
the current condition of a job unambiguously.

Signed-off-by: John Snow 
---
 blockjob.c |  9 +
 include/block/blockjob.h   |  7 +--
 qapi/block-core.json   | 31 ++-
 tests/qemu-iotests/109.out | 24 
 4 files changed, 56 insertions(+), 15 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 47468331ec..1be9c20cff 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -320,6 +320,7 @@ void block_job_start(BlockJob *job)
 job->pause_count--;
 job->busy = true;
 job->paused = false;
+job->status = BLOCK_JOB_STATUS_RUNNING;
 bdrv_coroutine_enter(blk_bs(job->blk), job->co);
 }
 
@@ -598,6 +599,7 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
 info->speed = job->speed;
 info->io_status = job->iostatus;
 info->ready = job->ready;
+info->status= job->status;
 return info;
 }
 
@@ -701,6 +703,7 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 job->pause_count   = 1;
 job->refcnt= 1;
 job->manual= (flags & BLOCK_JOB_MANUAL);
+job->status= BLOCK_JOB_STATUS_CREATED;
 aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
block_job_sleep_timer_cb, job);
@@ -814,9 +817,14 @@ void coroutine_fn block_job_pause_point(BlockJob *job)
 }
 
 if (block_job_should_pause(job) && !block_job_is_cancelled(job)) {
+BlockJobStatus status = job->status;
+job->status = status == BLOCK_JOB_STATUS_READY ? \
+BLOCK_JOB_STATUS_STANDBY : \
+BLOCK_JOB_STATUS_PAUSED;
 job->paused = true;
 block_job_do_yield(job, -1);
 job->paused = false;
+job->status = status;
 }
 
 if (job->driver->resume) {
@@ -922,6 +930,7 @@ void block_job_iostatus_reset(BlockJob *job)
 
 void block_job_event_ready(BlockJob *job)
 {
+job->status = BLOCK_JOB_STATUS_READY;
 job->ready = true;
 
 if (block_job_is_internal(job)) {
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index 8ffabdcbc4..e254359d6b 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -143,10 +143,13 @@ typedef struct BlockJob {
 
 /**
  * Set to true when the management API has requested manual job
- * management semantics.
+ * management semantics. See @BlockJobStatus for details.
  */
 bool manual;
 
+/** Current state; See @BlockJobStatus for details. */
+BlockJobStatus status;
+
 /** Non-NULL if this job is part of a transaction */
 BlockJobTxn *txn;
 QLIST_ENTRY(BlockJob) txn_list;
@@ -157,7 +160,7 @@ typedef enum BlockJobCreateFlags {
 BLOCK_JOB_DEFAULT = 0x00,
 /* BlockJob is not QMP-created and should not send QMP events */
 BLOCK_JOB_INTERNAL = 0x01,

Re: [Qemu-devel] [PATCH v3 14/31] arm/translate-a64: add FP16 FMULX/MLS/FMLA to simd_indexed

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
>  case 0x9: /* FMUL, FMULX */
> -if (!extract32(size, 1, 1)) {
> +if (size == 1 ||
> +(size < 2 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
>  unallocated_encoding(s);

You get to drop the check here...

> +case 0: /* half precision */
> +size = MO_16;
> +index = h << 2 | l << 1 | m;
> +is_fp16 = true;
> +if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
> +break;
> +}

... because you added it here instead.


r~



Re: [Qemu-devel] [PATCH v3 17/31] arm/translate-a64: add FP16 FPRINTx to simd_two_reg_misc_fp16

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> This adds the full range of half-precision floating point to integral
> instructions.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v3
>   - fix re-base conflicts
>   - move comment to previous commit
>   - don't double test is_scalar in unallocated checks
> ---

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [PATCH v3 28/31] arm/translate-a64: add FP16 FMOV to simd_mod_imm

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> Only one half-precision instruction has been added to this group.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v2
>   - checkpatch fixes
> v3
>   - use vfp_expand_imm
> ---
>  target/arm/translate-a64.c | 35 +--
>  1 file changed, 25 insertions(+), 10 deletions(-)

Reviewed-by: Richard Henderson 


r~




[Qemu-devel] [PULL v1 1/2] tests: Move common TPM test code into tpm-emu.c

2018-02-23 Thread Stefan Berger
Move threads and other common TPM test code into tpm-emu.c.

Signed-off-by: Stefan Berger 
Reviewed-by: Marc-André Lureau 
---
 tests/Makefile.include |   2 +-
 tests/tpm-crb-test.c   | 174 +
 tests/tpm-emu.c| 167 +++
 tests/tpm-emu.h|  38 +++
 4 files changed, 209 insertions(+), 172 deletions(-)
 create mode 100644 tests/tpm-emu.c
 create mode 100644 tests/tpm-emu.h

diff --git a/tests/Makefile.include b/tests/Makefile.include
index a1bcbff..ff44264 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -714,7 +714,7 @@ tests/test-crypto-tlssession$(EXESUF): 
tests/test-crypto-tlssession.o \
 tests/test-io-task$(EXESUF): tests/test-io-task.o $(test-io-obj-y)
 tests/test-io-channel-socket$(EXESUF): tests/test-io-channel-socket.o \
 tests/io-channel-helpers.o $(test-io-obj-y)
-tests/tpm-crb-test$(EXESUF): tests/tpm-crb-test.o $(test-io-obj-y)
+tests/tpm-crb-test$(EXESUF): tests/tpm-crb-test.o tests/tpm-emu.o 
$(test-io-obj-y)
 tests/test-io-channel-file$(EXESUF): tests/test-io-channel-file.o \
 tests/io-channel-helpers.o $(test-io-obj-y)
 tests/test-io-channel-tls$(EXESUF): tests/test-io-channel-tls.o \
diff --git a/tests/tpm-crb-test.c b/tests/tpm-crb-test.c
index 8bf1507..e1513cb 100644
--- a/tests/tpm-crb-test.c
+++ b/tests/tpm-crb-test.c
@@ -14,177 +14,9 @@
 #include 
 
 #include "hw/acpi/tpm.h"
-#include "hw/tpm/tpm_ioctl.h"
 #include "io/channel-socket.h"
 #include "libqtest.h"
-#include "qapi/error.h"
-
-#define TPM_RC_FAILURE 0x101
-#define TPM2_ST_NO_SESSIONS 0x8001
-
-struct tpm_hdr {
-uint16_t tag;
-uint32_t len;
-uint32_t code; /*ordinal/error */
-char buffer[];
-} QEMU_PACKED;
-
-typedef struct TestState {
-CompatGMutex data_mutex;
-CompatGCond data_cond;
-SocketAddress *addr;
-QIOChannel *tpm_ioc;
-GThread *emu_tpm_thread;
-struct tpm_hdr *tpm_msg;
-} TestState;
-
-static void test_wait_cond(TestState *s)
-{
-gint64 end_time = g_get_monotonic_time() + 5 * G_TIME_SPAN_SECOND;
-
-g_mutex_lock(&s->data_mutex);
-if (!g_cond_wait_until(&s->data_cond, &s->data_mutex, end_time)) {
-g_assert_not_reached();
-}
-g_mutex_unlock(&s->data_mutex);
-}
-
-static void *emu_tpm_thread(void *data)
-{
-TestState *s = data;
-QIOChannel *ioc = s->tpm_ioc;
-
-s->tpm_msg = g_new(struct tpm_hdr, 1);
-while (true) {
-int minhlen = sizeof(s->tpm_msg->tag) + sizeof(s->tpm_msg->len);
-
-if (!qio_channel_read(ioc, (char *)s->tpm_msg, minhlen, &error_abort)) 
{
-break;
-}
-s->tpm_msg->tag = be16_to_cpu(s->tpm_msg->tag);
-s->tpm_msg->len = be32_to_cpu(s->tpm_msg->len);
-g_assert_cmpint(s->tpm_msg->len, >=, minhlen);
-g_assert_cmpint(s->tpm_msg->tag, ==, TPM2_ST_NO_SESSIONS);
-
-s->tpm_msg = g_realloc(s->tpm_msg, s->tpm_msg->len);
-qio_channel_read(ioc, (char *)&s->tpm_msg->code,
- s->tpm_msg->len - minhlen, &error_abort);
-s->tpm_msg->code = be32_to_cpu(s->tpm_msg->code);
-
-/* reply error */
-s->tpm_msg->tag = cpu_to_be16(TPM2_ST_NO_SESSIONS);
-s->tpm_msg->len = cpu_to_be32(sizeof(struct tpm_hdr));
-s->tpm_msg->code = cpu_to_be32(TPM_RC_FAILURE);
-qio_channel_write(ioc, (char *)s->tpm_msg, 
be32_to_cpu(s->tpm_msg->len),
-  &error_abort);
-}
-
-g_free(s->tpm_msg);
-s->tpm_msg = NULL;
-object_unref(OBJECT(s->tpm_ioc));
-return NULL;
-}
-
-static void *emu_ctrl_thread(void *data)
-{
-TestState *s = data;
-QIOChannelSocket *lioc = qio_channel_socket_new();
-QIOChannel *ioc;
-
-qio_channel_socket_listen_sync(lioc, s->addr, &error_abort);
-g_cond_signal(&s->data_cond);
-
-qio_channel_wait(QIO_CHANNEL(lioc), G_IO_IN);
-ioc = QIO_CHANNEL(qio_channel_socket_accept(lioc, &error_abort));
-g_assert(ioc);
-
-{
-uint32_t cmd = 0;
-struct iovec iov = { .iov_base = &cmd, .iov_len = sizeof(cmd) };
-int *pfd = NULL;
-size_t nfd = 0;
-
-qio_channel_readv_full(ioc, &iov, 1, &pfd, &nfd, &error_abort);
-cmd = be32_to_cpu(cmd);
-g_assert_cmpint(cmd, ==, CMD_SET_DATAFD);
-g_assert_cmpint(nfd, ==, 1);
-s->tpm_ioc = QIO_CHANNEL(qio_channel_socket_new_fd(*pfd, 
&error_abort));
-g_free(pfd);
-
-cmd = 0;
-qio_channel_write(ioc, (char *)&cmd, sizeof(cmd), &error_abort);
-
-s->emu_tpm_thread = g_thread_new(NULL, emu_tpm_thread, s);
-}
-
-while (true) {
-uint32_t cmd;
-ssize_t ret;
-
-ret = qio_channel_read(ioc, (char *)&cmd, sizeof(cmd), NULL);
-if (ret <= 0) {
-break;
-}
-
-cmd = be32_to_cpu(cmd);
-switch (cmd) {
-case CMD_GET_CAPABILITY: {
-ptm_cap cap = cpu_to_be64(0x3fff);
-qio_channel_write(ioc, (char *)&cap, 

[Qemu-devel] [PULL v1 0/2] Merge tpm 2018/02/21 v1

2018-02-23 Thread Stefan Berger
This patch series adds a test case for the TPM TIS interface.

   Stefan

The following changes since commit a6e0344fa0e09413324835ae122c4cadd7890231:

  Merge remote-tracking branch 'remotes/kraxel/tags/ui-20180220-pull-request' 
into staging (2018-02-20 14:05:00 +)

are available in the git repository at:

  git://github.com/stefanberger/qemu-tpm.git tpm-pull-2018-02-21

for you to fetch changes up to adb0e917e6ee93631e40265ca145bc31cd3b6c9a:

  tests: add test for TPM TIS device (2018-02-21 07:24:50 -0500)


Stefan Berger (2):
  tests: Move common TPM test code into tpm-emu.c
  tests: add test for TPM TIS device

 MAINTAINERS|   1 +
 hw/tpm/tpm_tis.c   | 101 -
 include/hw/acpi/tpm.h  | 105 ++
 tests/Makefile.include |   4 +-
 tests/tpm-crb-test.c   | 174 +
 tests/tpm-emu.c| 174 +
 tests/tpm-emu.h|  38 +++
 tests/tpm-tis-test.c   | 486 
+
 8 files changed, 810 insertions(+), 273 deletions(-)
 create mode 100644 tests/tpm-emu.c
 create mode 100644 tests/tpm-emu.h
 create mode 100644 tests/tpm-tis-test.c

-- 
2.5.5




[Qemu-devel] [Bug 1688231] Re: [Qemu-ppc] sendkey is not working for any of the keystrokes

2018-02-23 Thread Fabiano Rosas
I see this happening in ppc64le and x86_64 with QEMU
v2.11.0-1684-ga6e0344fa0. The keystrokes are being sent to tty1:

in x86_64:

./v2.11.0-1684-ga6e0344fa0/bin/qemu-system-x86_64 -enable-kvm -m 512
-kernel vmlinuz -initrd initramfs.img -chardev
serial,id=s1,path=/dev/pts/10 -mon chardev=s1 -qmp
tcp:localhost:,server,nowait -vga none -nographic -append
"console=ttyS0 i8042.debug"

QEMU 2.11.50 monitor - type 'help' for more information
(qemu) sendkey a
(qemu) sendkey b
(qemu) sendkey c
(qemu) sendkey ret

# cat /dev/tty1
abc

---
same thing with input-send-event:

{"events": [{ "type": "key", "data" : { "down": true, "key": {"type": "qcode", 
"data": "a" } } }]}
{"events": [{ "type": "key", "data" : { "down": true, "key": {"type": "qcode", 
"data": "ret" } } }]}

# cat /dev/tty1
abc
a


I'm not sure what the expected behavior is when using two input sources in this 
way (serial line + PS/2 keyboard). I'm inclined to say that the keys should 
indeed not be seen on the serial console.

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1688231

Title:
  [Qemu-ppc] sendkey is not working for any of the keystrokes

Status in QEMU:
  New

Bug description:
  sendkey option is not working for any of the keystrokes in ppc64le,

  Qemu version:
  # qemu-img --version
  qemu-img version 2.9.50 (v2.9.0-303-g81b2d5c-dirty)

  Qemu command line:
  # qemu-system-ppc64 --enable-kvm --nographic -vga none -machine pseries -m 
4G,slots=32,maxmem=32G -smp 16,maxcpus=32 -device virtio-blk-pci,drive=rootdisk 
-drive 
file=/var/lib/libvirt/images/f25-upstream-ppc64le.qcow2,if=none,cache=none,format=qcow2,id=rootdisk
 -monitor telnet:127.0.0.1:1234,server,nowait -net nic,model=virtio -net user 
-redir tcp:2000::22

  Guest booted successfully and logged in
  Fedora 25 (Twenty Five)
  Kernel 4.11.0-rc4 on an ppc64le (hvc0)

  atest-guest login: updatedb (5582) used greatest stack depth: 9568 bytes left
  root
  Password: 
  Last login: Mon Mar 27 01:57:51 on hvc0
  [root@atest-guest ~]# 

  Qemu monitor:
  # telnet 127.0.0.1 1234
  Trying 127.0.0.1...
  Connected to 127.0.0.1.
  Escape character is '^]'.
  QEMU 2.9.50 monitor - type 'help' for more information
  (qemu) sendkey a
  (qemu) sendkey ret

  But from the console, I couldn't observe the keystroke a or return.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1688231/+subscriptions



[Qemu-devel] [RFC v4 10/21] blockjobs: add NULL state

2018-02-23 Thread John Snow
Add a new state that specifically demarcates when we begin to permanently
demolish a job after it has performed all work. This makes the transition
explicit in the STM table and highlights conditions under which a job may
be demolished.


Transitions:
Created   -> Null: Early failure event before the job is started
Concluded -> Null: Standard transition.

Verbs:
None. This should not ever be visible to the monitor.

 +-+
 |UNDEFINED|
 +--+--+
|
 +--v+
 |CREATED+--+
 +--++  |
|   |
 +--v+ +--+ |
   +-+RUNNING<->PAUSED| |
   | +--+-+--+ +--+ |
   || | |
   || +--+  |
   |||  |
   | +--v--+   +---+ |  |
   +-+READY<--->STANDBY| |  |
   | +--+--+   +---+ |  |
   |||  |
+--v-+   +--v--+ |  |
|ABORTING+--->CONCLUDED<-+  |
++   +--+--+|
|   |
 +--v-+ |
 |NULL<-+
 ++

Signed-off-by: John Snow 
---
 blockjob.c   | 35 +--
 qapi/block-core.json |  5 -
 2 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 93b0a36306..7b5c4063cf 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -44,24 +44,25 @@ static QemuMutex block_job_mutex;
 
 /* BlockJob State Transition Table */
 bool BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, X, E */
-/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0},
-/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0},
-/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 1},
-/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0},
-/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 1},
-/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0},
-/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 1},
-/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0},
+  /* U, C, R, P, Y, S, X, E, N */
+/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0, 0},
+/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0, 1},
+/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 1, 0},
+/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0, 0},
+/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 1, 0},
+/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0, 0},
+/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 1, 0},
+/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0, 1},
+/* N: */ [BLOCK_JOB_STATUS_NULL]  = {0, 0, 0, 0, 0, 0, 0, 0, 0},
 };
 
 bool BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, X, E */
-[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 0, 0},
-[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0, 0},
-[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0},
-[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0},
-[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0},
+  /* U, C, R, P, Y, S, X, E, N */
+[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 0, 0, 0},
+[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0, 0, 0},
+[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0, 0},
+[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0, 0},
+[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0, 0},
 };
 
 static void block_job_state_transition(BlockJob *job, BlockJobStatus s1)
@@ -423,6 +424,7 @@ static void block_job_completed_single(BlockJob *job)
 QLIST_REMOVE(job, txn_list);
 block_job_txn_unref(job->txn);
 block_job_event_concluded(job);
+block_job_state_transition(job, BLOCK_JOB_STATUS_NULL);
 block_job_unref(job);
 }
 
@@ -734,9 +736,6 @@ static void block_job_event_completed(BlockJob *job, const 
char *msg)
 
 static void block_job_event_concluded(BlockJob *job)
 {
-if (block_job_is_internal(job) || !job->manual) {
-return;
-}
 block_job_state_transition(job, BLOCK_JOB_STATUS_CONCLUDED);
 }
 
diff --git a/qapi/block-core.json b/qapi/block-core.json
index aeb9b1937b..578c0c91ca 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1003,11 +1003,14 @@
 # @concluded: The 

[Qemu-devel] [RFC v4 11/21] blockjobs: add block_job_dismiss

2018-02-23 Thread John Snow
For jobs that have reached their CONCLUDED state, prior to having their
last reference put down (meaning jobs that have completed successfully,
unsuccessfully, or have been canceled), allow the user to dismiss the
job's lingering status report via block-job-dismiss.

This gives management APIs the chance to conclusively determine if a job
failed or succeeded, even if the event broadcast was missed.

Note that jobs do not yet linger in any such state; they are freed
immediately upon reaching this previously-unnamed state. Such a state is
added in the next commit.

Verbs:
Dismiss: operates on CONCLUDED jobs only.
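
As a rough illustration only (not part of this patch), the intended QMP
usage in the iotests style used later in this series; 'vm' is an
iotests.VM() instance and 'drive0' a placeholder job id:

    # Sketch: dismiss a job that has already reached CONCLUDED.
    res = vm.qmp('query-block-jobs')
    assert res['return'][0]['status'] == 'concluded'  # result still queryable
    res = vm.qmp('block-job-dismiss', id='drive0')
    assert res['return'] == {}
    res = vm.qmp('query-block-jobs')
    assert res['return'] == []                         # lingering record gone
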
Signed-off-by: John Snow 
---
 block/trace-events   |  1 +
 blockdev.c   | 14 ++
 blockjob.c   | 34 --
 include/block/blockjob.h |  9 +
 qapi/block-core.json | 24 +++-
 5 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/block/trace-events b/block/trace-events
index 3fe89f7ea6..266afd9e99 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -50,6 +50,7 @@ qmp_block_job_cancel(void *job) "job %p"
 qmp_block_job_pause(void *job) "job %p"
 qmp_block_job_resume(void *job) "job %p"
 qmp_block_job_complete(void *job) "job %p"
+qmp_block_job_dismiss(void *job) "job %p"
 qmp_block_stream(void *bs, void *job) "bs %p job %p"
 
 # block/file-win32.c
diff --git a/blockdev.c b/blockdev.c
index cba935a0a6..3180130782 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3852,6 +3852,20 @@ void qmp_block_job_complete(const char *device, Error 
**errp)
 aio_context_release(aio_context);
 }
 
+void qmp_block_job_dismiss(const char *id, Error **errp)
+{
+AioContext *aio_context;
+BlockJob *job = find_block_job(id, &aio_context, errp);
+
+if (!job) {
+return;
+}
+
+trace_qmp_block_job_dismiss(job);
+block_job_dismiss(&job, errp);
+aio_context_release(aio_context);
+}
+
 void qmp_change_backing_file(const char *device,
  const char *image_node_name,
  const char *backing_file,
diff --git a/blockjob.c b/blockjob.c
index 7b5c4063cf..4d29391673 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -63,6 +63,7 @@ bool 
BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
 [BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0, 0},
 [BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0, 0},
 [BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0, 0},
+[BLOCK_JOB_VERB_DISMISS]  = {0, 0, 0, 0, 0, 0, 0, 1, 0},
 };
 
 static void block_job_state_transition(BlockJob *job, BlockJobStatus s1)
@@ -424,7 +425,6 @@ static void block_job_completed_single(BlockJob *job)
 QLIST_REMOVE(job, txn_list);
 block_job_txn_unref(job->txn);
 block_job_event_concluded(job);
-block_job_state_transition(job, BLOCK_JOB_STATUS_NULL);
 block_job_unref(job);
 }
 
@@ -441,6 +441,13 @@ static void block_job_cancel_async(BlockJob *job)
 job->cancelled = true;
 }
 
+static void block_job_do_dismiss(BlockJob *job)
+{
+assert(job);
+block_job_state_transition(job, BLOCK_JOB_STATUS_NULL);
+block_job_unref(job);
+}
+
 static int block_job_finish_sync(BlockJob *job,
  void (*finish)(BlockJob *, Error **errp),
  Error **errp)
@@ -590,6 +597,19 @@ void block_job_complete(BlockJob *job, Error **errp)
 job->driver->complete(job, errp);
 }
 
+void block_job_dismiss(BlockJob **jobptr, Error **errp)
+{
+BlockJob *job = *jobptr;
+/* similarly to _complete, this is QMP-interface only. */
+assert(job->id);
+if (block_job_apply_verb(job, BLOCK_JOB_VERB_DISMISS, errp)) {
+return;
+}
+
+block_job_do_dismiss(job);
+*jobptr = NULL;
+}
+
 void block_job_user_pause(BlockJob *job, Error **errp)
 {
 if (block_job_apply_verb(job, BLOCK_JOB_VERB_PAUSE, errp)) {
@@ -626,7 +646,7 @@ void block_job_user_resume(BlockJob *job, Error **errp)
 void block_job_cancel(BlockJob *job)
 {
 if (job->status == BLOCK_JOB_STATUS_CONCLUDED) {
-return;
+block_job_do_dismiss(job);
 } else if (block_job_started(job)) {
 block_job_cancel_async(job);
 block_job_enter(job);
@@ -737,6 +757,10 @@ static void block_job_event_completed(BlockJob *job, const 
char *msg)
 static void block_job_event_concluded(BlockJob *job)
 {
 block_job_state_transition(job, BLOCK_JOB_STATUS_CONCLUDED);
+/* for pre-2.12 style jobs, automatically destroy */
+if (!job->manual) {
+block_job_do_dismiss(job);
+}
 }
 
 /*
@@ -841,6 +865,9 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 block_job_txn_add_job(txn, job);
 }
 
+/* For the expanded job control STM, grab an extra
+ * reference for finalize() to put down */
+block_job_ref(job);
 return job;
 }
 
@@ -859,6 +886,9 @@ void 

[Qemu-devel] [RFC v4 13/21] blockjobs: add commit, abort, clean helpers

2018-02-23 Thread John Snow
The completed_single function is getting a little mucked up with
checking to see which callbacks exist, so let's factor them out.

Signed-off-by: John Snow 
---
 blockjob.c | 35 ++-
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index ef17dea004..431ce9c220 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -394,6 +394,29 @@ static void block_job_update_rc(BlockJob *job)
 }
 }
 
+static void block_job_commit(BlockJob *job)
+{
+assert(!job->ret);
+if (job->driver->commit) {
+job->driver->commit(job);
+}
+}
+
+static void block_job_abort(BlockJob *job)
+{
+assert(job->ret);
+if (job->driver->abort) {
+job->driver->abort(job);
+}
+}
+
+static void block_job_clean(BlockJob *job)
+{
+if (job->driver->clean) {
+job->driver->clean(job);
+}
+}
+
 static void block_job_completed_single(BlockJob *job)
 {
 assert(job->completed);
@@ -402,17 +425,11 @@ static void block_job_completed_single(BlockJob *job)
 block_job_update_rc(job);
 
 if (!job->ret) {
-if (job->driver->commit) {
-job->driver->commit(job);
-}
+block_job_commit(job);
 } else {
-if (job->driver->abort) {
-job->driver->abort(job);
-}
-}
-if (job->driver->clean) {
-job->driver->clean(job);
+block_job_abort(job);
 }
+block_job_clean(job);
 
 if (job->cb) {
 job->cb(job->opaque, job->ret);
-- 
2.14.3




[Qemu-devel] [RFC v4 19/21] blockjobs: Expose manual property

2018-02-23 Thread John Snow
Expose the "manual" property via QAPI for the backup-related jobs.
As of this commit, this allows the management API to request the
"concluded" and "dismiss" semantics for backup jobs.

Signed-off-by: John Snow 
---
 blockdev.c   | 19 ---
 qapi/block-core.json | 32 ++--
 2 files changed, 42 insertions(+), 9 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 05fd421cdc..2eddb0e726 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3260,7 +3260,7 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
BlockJobTxn *txn,
 AioContext *aio_context;
 QDict *options = NULL;
 Error *local_err = NULL;
-int flags;
+int flags, job_flags = BLOCK_JOB_DEFAULT;
 int64_t size;
 bool set_backing_hd = false;
 
@@ -3279,6 +3279,9 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
BlockJobTxn *txn,
 if (!backup->has_job_id) {
 backup->job_id = NULL;
 }
+if (!backup->has_manual) {
+backup->manual = false;
+}
 if (!backup->has_compress) {
 backup->compress = false;
 }
@@ -3370,11 +3373,14 @@ static BlockJob *do_drive_backup(DriveBackup *backup, 
BlockJobTxn *txn,
 goto out;
 }
 }
+if (backup->manual) {
+job_flags |= BLOCK_JOB_MANUAL;
+}
 
 job = backup_job_create(backup->job_id, bs, target_bs, backup->speed,
 backup->sync, bmap, backup->compress,
 backup->on_source_error, backup->on_target_error,
-BLOCK_JOB_DEFAULT, NULL, NULL, txn, &local_err);
+job_flags, NULL, NULL, txn, &local_err);
 bdrv_unref(target_bs);
 if (local_err != NULL) {
 error_propagate(errp, local_err);
@@ -3409,6 +3415,7 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
BlockJobTxn *txn,
 Error *local_err = NULL;
 AioContext *aio_context;
 BlockJob *job = NULL;
+int job_flags = BLOCK_JOB_DEFAULT;
 
 if (!backup->has_speed) {
 backup->speed = 0;
@@ -3422,6 +3429,9 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
BlockJobTxn *txn,
 if (!backup->has_job_id) {
 backup->job_id = NULL;
 }
+if (!backup->has_manual) {
+backup->manual = false;
+}
 if (!backup->has_compress) {
 backup->compress = false;
 }
@@ -3450,10 +3460,13 @@ BlockJob *do_blockdev_backup(BlockdevBackup *backup, 
BlockJobTxn *txn,
 goto out;
 }
 }
+if (backup->manual) {
+job_flags |= BLOCK_JOB_MANUAL;
+}
 job = backup_job_create(backup->job_id, bs, target_bs, backup->speed,
 backup->sync, NULL, backup->compress,
 backup->on_source_error, backup->on_target_error,
-BLOCK_JOB_DEFAULT, NULL, NULL, txn, &local_err);
+job_flags, NULL, NULL, txn, &local_err);
 if (local_err != NULL) {
 error_propagate(errp, local_err);
 }
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 549c6c02d8..7b3af93682 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1177,6 +1177,16 @@
 # @job-id: identifier for the newly-created block job. If
 #  omitted, the device name will be used. (Since 2.7)
 #
+# @manual: True to use an expanded, more explicit job control flow.
+#  Jobs may transition from a running state to a pending state,
+#  where they must be instructed to complete manually via
+#  block-job-finalize.
+#  Jobs belonging to a transaction must either all or all not use this
+#  setting. Once a transaction reaches a pending state, issuing the
+#  finalize command to any one job in the transaction is sufficient
+#  to finalize the entire transaction.
+#  (Since 2.12)
+#
 # @device: the device name or node-name of a root node which should be copied.
 #
 # @target: the target of the new image. If the file exists, or if it
@@ -1217,9 +1227,10 @@
 # Since: 1.6
 ##
 { 'struct': 'DriveBackup',
-  'data': { '*job-id': 'str', 'device': 'str', 'target': 'str',
-'*format': 'str', 'sync': 'MirrorSyncMode', '*mode': 
'NewImageMode',
-'*speed': 'int', '*bitmap': 'str', '*compress': 'bool',
+  'data': { '*job-id': 'str', '*manual': 'bool', 'device': 'str',
+'target': 'str', '*format': 'str', 'sync': 'MirrorSyncMode',
+'*mode': 'NewImageMode', '*speed': 'int',
+'*bitmap': 'str', '*compress': 'bool',
 '*on-source-error': 'BlockdevOnError',
 '*on-target-error': 'BlockdevOnError' } }
 
@@ -1229,6 +1240,16 @@
 # @job-id: identifier for the newly-created block job. If
 #  omitted, the device name will be used. (Since 2.7)
 #
+# @manual: True to use an expanded, more explicit job control flow.
+#  Jobs may transition from a running state to a pending state,
+#  where they must be 

Re: [Qemu-devel] [edk2] [PATCH 1/7] SecurityPkg/Tcg2Pei: drop Tcg2PhysicalPresenceLib dependency

2018-02-23 Thread Yao, Jiewen
Reviewed-by: jiewen@intel.com

> -Original Message-
> From: edk2-devel [mailto:edk2-devel-boun...@lists.01.org] On Behalf Of
> marcandre.lur...@redhat.com
> Sent: Friday, February 23, 2018 9:23 PM
> To: edk2-de...@lists.01.org
> Cc: qemu-devel@nongnu.org; javi...@redhat.com; pjo...@redhat.com; Yao,
> Jiewen ; ler...@redhat.com
> Subject: [edk2] [PATCH 1/7] SecurityPkg/Tcg2Pei: drop Tcg2PhysicalPresenceLib
> dependency
> 
> From: Marc-André Lureau 
> 
> Apparently, unnecessary. Avoids extra build dependency and churn.
> 
> CC: Laszlo Ersek 
> CC: Stefan Berger 
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Marc-André Lureau 
> ---
>  SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.c   | 2 --
>  SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.inf | 1 -
>  2 files changed, 3 deletions(-)
> 
> diff --git a/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.c
> b/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.c
> index a7ae3354b5..3758fc6a41 100644
> --- a/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.c
> +++ b/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.c
> @@ -18,7 +18,6 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
>  #include 
>  #include 
>  #include 
> -#include 
>  #include 
>  #include 
>  #include 
> @@ -44,7 +43,6 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY
> KIND, EITHER EXPRESS OR IMPLIED.
>  #include 
>  #include 
>  #include 
> -#include 
> 
>  #define PERF_ID_TCG2_PEI  0x3080
> 
> diff --git a/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.inf
> b/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.inf
> index f7b85444d9..bc910c3baf 100644
> --- a/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.inf
> +++ b/SecurityPkg/Tcg/Tcg2Pei/Tcg2Pei.inf
> @@ -58,7 +58,6 @@
>PerformanceLib
>MemoryAllocationLib
>ReportStatusCodeLib
> -  Tcg2PhysicalPresenceLib
>ResetSystemLib
> 
>  [Guids]
> --
> 2.16.1.73.g5832b7e9f2
> 
> ___
> edk2-devel mailing list
> edk2-de...@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel


Re: [Qemu-devel] [PATCH v3 21/31] arm/translate-a64: add FP16 FNEG/FABS to simd_two_reg_misc_fp16

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> Neither of these operations alter the floating point status registers
> so we can do a pure bitwise operation, either squashing any sign
> bit (ABS) or inverting it (NEG).
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v3
>   - fixup re-base conflicts
>   - make both operations pure bitwise TCG
> ---
>  target/arm/translate-a64.c | 16 +++-
>  1 file changed, 15 insertions(+), 1 deletion(-)

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [PATCH v6 11/23] Add symbol table callback function interface to load_elf

2018-02-23 Thread Richard Henderson
On 02/22/2018 04:11 PM, Michael Clark wrote:
> The RISC-V HTIF (Host Target Interface) console device requires access
> to the symbol table to locate the 'tohost' and 'fromhost' symbols.
> 
> Signed-off-by: Michael Clark 
> ---
>  hw/core/loader.c | 18 --
>  include/hw/elf_ops.h | 34 +-
>  include/hw/loader.h  | 17 -
>  3 files changed, 53 insertions(+), 16 deletions(-)

Reviewed-by: Richard Henderson 


r~



Re: [Qemu-devel] [PATCH v6 12/23] RISC-V HTIF Console

2018-02-23 Thread Richard Henderson
On 02/22/2018 04:11 PM, Michael Clark wrote:
> HTIF (Host Target Interface) provides console emulation for QEMU. HTIF
> allows identical copies of BBL (Berkeley Boot Loader) and linux to run
> on both Spike and QEMU. BBL provides HTIF console access via the
> SBI (Supervisor Binary Interface) and the linux kernel SBI console.
> 
> The HTIT chardev implements the pre qom legacy interface consistent
> with the 16550a UART in 'hw/char/serial.c'.
> 
> Signed-off-by: Michael Clark 
> ---
>  hw/riscv/riscv_htif.c | 263 
> ++
>  include/hw/riscv/riscv_htif.h |  64 ++
>  2 files changed, 327 insertions(+)
>  create mode 100644 hw/riscv/riscv_htif.c
>  create mode 100644 include/hw/riscv/riscv_htif.h

Reviewed-by: Richard Henderson 


r~



[Qemu-devel] [PATCH V5 4/4] tests: Add migration test for aarch64

2018-02-23 Thread Wei Huang
This patch adds migration test support for aarch64. The test code, which
implements the same functionality as x86, is booted as a kernel in qemu.
Here are the design choices we make for aarch64:

 * We choose the -kernel approach because aarch64 QEMU doesn't provide a
   built-in firmware like x86 does; instead of relying on a boot loader, the
   test code is booted directly as a kernel.
 * The serial output is sent to the PL011 directly.
 * The physical memory base of the mach-virt machine is 0x40000000, so we
   change start_address and end_address accordingly for aarch64.

In addition to providing the binary, this patch also includes the source
code and the build script in tests/migration/. So users can change the
source and/or re-compile the binary as they wish.

Signed-off-by: Wei Huang 
---
 tests/Makefile.include   |  1 +
 tests/migration-test.c   | 50 ++---
 tests/migration/Makefile | 12 +-
 tests/migration/aarch64-a-b-kernel.S | 71 
 tests/migration/aarch64-a-b-kernel.h | 19 ++
 tests/migration/migration-test.h |  5 +++
 6 files changed, 150 insertions(+), 8 deletions(-)
 create mode 100644 tests/migration/aarch64-a-b-kernel.S
 create mode 100644 tests/migration/aarch64-a-b-kernel.h

diff --git a/tests/Makefile.include b/tests/Makefile.include
index a1bcbffe12..df9f64438f 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -372,6 +372,7 @@ check-qtest-arm-y += tests/sdhci-test$(EXESUF)
 check-qtest-aarch64-y = tests/numa-test$(EXESUF)
 check-qtest-aarch64-y += tests/sdhci-test$(EXESUF)
 check-qtest-aarch64-y += tests/boot-serial-test$(EXESUF)
+check-qtest-aarch64-y += tests/migration-test$(EXESUF)
 
 check-qtest-microblazeel-y = $(check-qtest-microblaze-y)
 
diff --git a/tests/migration-test.c b/tests/migration-test.c
index ce2922df6a..d60e34c82d 100644
--- a/tests/migration-test.c
+++ b/tests/migration-test.c
@@ -11,6 +11,7 @@
  */
 
 #include "qemu/osdep.h"
+#include <sys/utsname.h>
 
 #include "libqtest.h"
 #include "qapi/qmp/qdict.h"
@@ -23,8 +24,8 @@
 
 #include "migration/migration-test.h"
 
-const unsigned start_address = TEST_MEM_START;
-const unsigned end_address = TEST_MEM_END;
+unsigned start_address = TEST_MEM_START;
+unsigned end_address = TEST_MEM_END;
 bool got_stop;
 
 #if defined(__linux__)
@@ -81,12 +82,13 @@ static const char *tmpfs;
  * repeatedly. It outputs a 'B' at a fixed rate while it's still running.
  */
 #include "tests/migration/x86-a-b-bootblock.h"
+#include "tests/migration/aarch64-a-b-kernel.h"
 
-static void init_bootfile_x86(const char *bootpath)
+static void init_bootfile(const char *bootpath, void *content)
 {
 FILE *bootfile = fopen(bootpath, "wb");
 
-g_assert_cmpint(fwrite(x86_bootsect, 512, 1, bootfile), ==, 1);
+g_assert_cmpint(fwrite(content, 512, 1, bootfile), ==, 1);
 fclose(bootfile);
 }
 
@@ -392,7 +394,7 @@ static void test_migrate_start(QTestState **from, 
QTestState **to,
 got_stop = false;
 
 if (strcmp(arch, "i386") == 0 || strcmp(arch, "x86_64") == 0) {
-init_bootfile_x86(bootpath);
+init_bootfile(bootpath, x86_bootsect);
 cmd_src = g_strdup_printf("-machine accel=%s -m 150M"
   " -name source,debug-threads=on"
   " -serial file:%s/src_serial"
@@ -421,6 +423,42 @@ static void test_migrate_start(QTestState **from, 
QTestState **to,
   " -serial file:%s/dest_serial"
   " -incoming %s",
   accel, tmpfs, uri);
+} else if (strcmp(arch, "aarch64") == 0) {
+const char *cpu;
+const char *gic_ver;
+struct utsname utsname;
+
+/* kvm and tcg need different cpu and gic-version configs */
+if (access("/dev/kvm", F_OK) == 0 && uname() == 0 &&
+strcmp(utsname.machine, "aarch64") == 0) {
+accel = "kvm";
+cpu = "host";
+gic_ver = "host";
+} else {
+accel = "tcg";
+cpu = "cortex-a57";
+gic_ver = "2";
+}
+
+init_bootfile(bootpath, aarch64_kernel);
+cmd_src = g_strdup_printf("-machine virt,accel=%s,gic-version=%s "
+  "-name vmsource,debug-threads=on -cpu %s "
+  "-m 150M -serial file:%s/src_serial "
+  "-kernel %s ",
+  accel, gic_ver, cpu, tmpfs, bootpath);
+cmd_dst = g_strdup_printf("-machine virt,accel=%s,gic-version=%s "
+  "-name vmdest,debug-threads=on -cpu %s "
+  "-m 150M -serial file:%s/dest_serial "
+  "-kernel %s "
+  "-incoming %s ",
+  accel, gic_ver, cpu, tmpfs, bootpath, uri);
+
+/* aarch64 virt 

[Qemu-devel] [PATCH V5 2/4] tests/migration: Convert the boot block compilation script into Makefile

2018-02-23 Thread Wei Huang
The x86 boot block header is currently generated with a shell script.
To better support other CPUs (e.g. aarch64), we convert the script
into a Makefile. This allows us to 1) support cross-compilation easily,
and 2) avoid creating a script file for every architecture.

Signed-off-by: Wei Huang 
---
 tests/migration/Makefile | 36 
 tests/migration/rebuild-x86-bootblock.sh | 33 -
 tests/migration/x86-a-b-bootblock.h  |  2 +-
 tests/migration/x86-a-b-bootblock.s  |  5 ++---
 4 files changed, 39 insertions(+), 37 deletions(-)
 create mode 100644 tests/migration/Makefile
 delete mode 100755 tests/migration/rebuild-x86-bootblock.sh

diff --git a/tests/migration/Makefile b/tests/migration/Makefile
new file mode 100644
index 00..8fbedaa8b8
--- /dev/null
+++ b/tests/migration/Makefile
@@ -0,0 +1,36 @@
+#
+# Copyright (c) 2016-2018 Red Hat, Inc. and/or its affiliates
+#
+# Authors:
+#   Dave Gilbert 
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or later.
+# See the COPYING file in the top-level directory.
+#
+export __note
+override define __note
+/* This file is automatically generated from
+ * tests/migration/$<, edit that and then run
+ * "make $@" inside tests/migration to update,
+ * and then remember to send both in your patch submission.
+ */
+endef
+
+all: x86-a-b-bootblock.h
+# Dummy command so that make thinks it has done something
+   @true
+
+SRC_PATH=../..
+include $(SRC_PATH)/rules.mak
+
+x86_64_cross_prefix := $(call find-cross-prefix,x86_64)
+
+x86-a-b-bootblock.h: x86-a-b-bootblock.s
+   $(x86_64_cross_prefix)as --32 -march=i486 $< -o x86.o
+   $(x86_64_cross_prefix)objcopy -O binary x86.o x86.boot
+   dd if=x86.boot of=x86.bootsect bs=256 count=2 skip=124
+   echo "$$__note" > $@
+   xxd -i x86.bootsect | sed -e 's/.*int.*//' >> $@
+
+clean:
+   rm -f *.bootsect *.boot *.o
diff --git a/tests/migration/rebuild-x86-bootblock.sh 
b/tests/migration/rebuild-x86-bootblock.sh
deleted file mode 100755
index 86cec5d284..00
--- a/tests/migration/rebuild-x86-bootblock.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/sh
-# Copyright (c) 2016-2018 Red Hat, Inc. and/or its affiliates
-# This work is licensed under the terms of the GNU GPL, version 2 or later.
-# See the COPYING file in the top-level directory.
-#
-# Author: dgilb...@redhat.com
-
-ASMFILE=$PWD/tests/migration/x86-a-b-bootblock.s
-HEADER=$PWD/tests/migration/x86-a-b-bootblock.h
-
-if [ ! -e "$ASMFILE" ]
-then
-  echo "Couldn't find $ASMFILE" >&2
-  exit 1
-fi
-
-ASM_WORK_DIR=$(mktemp -d --tmpdir X86BB.XX)
-cd "$ASM_WORK_DIR" &&
-as --32 -march=i486 "$ASMFILE" -o x86.o &&
-objcopy -O binary x86.o x86.boot &&
-dd if=x86.boot of=x86.bootsect bs=256 count=2 skip=124 &&
-xxd -i x86.bootsect |
-sed -e 's/.*int.*//' > x86.hex &&
cat - x86.hex <<HERE > "$HEADER"
-/* This file is automatically generated from
- * tests/migration/x86-a-b-bootblock.s, edit that and then run
- * tests/migration/rebuild-x86-bootblock.sh to update,
- * and then remember to send both in your patch submission.
- */
-HERE
-
-rm x86.hex x86.bootsect x86.boot x86.o
-cd .. && rmdir "$ASM_WORK_DIR"
diff --git a/tests/migration/x86-a-b-bootblock.h 
b/tests/migration/x86-a-b-bootblock.h
index 78a151fe2a..9e8e2e028b 100644
--- a/tests/migration/x86-a-b-bootblock.h
+++ b/tests/migration/x86-a-b-bootblock.h
@@ -1,6 +1,6 @@
 /* This file is automatically generated from
  * tests/migration/x86-a-b-bootblock.s, edit that and then run
- * tests/migration/rebuild-x86-bootblock.sh to update,
+ * "make x86-a-b-bootblock.h" inside tests/migration to update,
  * and then remember to send both in your patch submission.
  */
 unsigned char x86_bootsect[] = {
diff --git a/tests/migration/x86-a-b-bootblock.s 
b/tests/migration/x86-a-b-bootblock.s
index b1642641a7..98dbfab084 100644
--- a/tests/migration/x86-a-b-bootblock.s
+++ b/tests/migration/x86-a-b-bootblock.s
@@ -3,9 +3,8 @@
 #  range.
 #  Outputs an initial 'A' on serial followed by repeated 'B's
 #
-# run   tests/migration/rebuild-x86-bootblock.sh
-#   to regenerate the hex, and remember to include both the .h and .s
-#   in any patches.
+#  In tests/migration dir, run 'make x86-a-b-bootblock.h' to regenerate
+#  the hex, and remember to include both the .h and .s in any patches.
 #
 # Copyright (c) 2016 Red Hat, Inc. and/or its affiliates
 # This work is licensed under the terms of the GNU GPL, version 2 or later.
-- 
2.14.3




[Qemu-devel] [PATCH v2 1/2] xilinx_spips: Enable only two slaves when reading/writing with stripe

2018-02-23 Thread Francisco Iglesias
Assert only the lower CS on bus 0 and the upper CS on bus 1 when both buses
and both chip selects are enabled (e.g. when reading/writing with stripe).

Signed-off-by: Francisco Iglesias 
---
 hw/ssi/xilinx_spips.c | 41 +
 1 file changed, 37 insertions(+), 4 deletions(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index 8af36ca3d4..0cb484ecf4 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -223,7 +223,7 @@ static void xilinx_spips_update_cs(XilinxSPIPS *s, int 
field)
 {
 int i;
 
-for (i = 0; i < s->num_cs; i++) {
+for (i = 0; i < s->num_cs * s->num_busses; i++) {
 bool old_state = s->cs_lines_state[i];
 bool new_state = field & (1 << i);
 
@@ -234,7 +234,7 @@ static void xilinx_spips_update_cs(XilinxSPIPS *s, int 
field)
 }
 qemu_set_irq(s->cs_lines[i], !new_state);
 }
-if (!(field & ((1 << s->num_cs) - 1))) {
+if (!(field & ((1 << (s->num_cs * s->num_busses)) - 1))) {
 s->snoop_state = SNOOP_CHECKING;
 s->cmd_dummies = 0;
 s->link_state = 1;
@@ -248,7 +248,40 @@ static void 
xlnx_zynqmp_qspips_update_cs_lines(XlnxZynqMPQSPIPS *s)
 {
 if (s->regs[R_GQSPI_GF_SNAPSHOT]) {
 int field = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, CHIP_SELECT);
-xilinx_spips_update_cs(XILINX_SPIPS(s), field);
+bool upper_cs_sel = field & (1 << 1);
+bool lower_cs_sel = field & 1;
+bool bus0_enabled;
+bool bus1_enabled;
+uint8_t buses;
+int cs = 0;
+
+buses = ARRAY_FIELD_EX32(s->regs, GQSPI_GF_SNAPSHOT, DATA_BUS_SELECT);
+bus0_enabled = buses & 1;
+bus1_enabled = buses & (1 << 1);
+
+if (bus0_enabled && bus1_enabled) {
+if (lower_cs_sel) {
+cs |= 1;
+}
+if (upper_cs_sel) {
+cs |= 1 << 3;
+}
+} else if (bus0_enabled) {
+if (lower_cs_sel) {
+cs |= 1;
+}
+if (upper_cs_sel) {
+cs |= 1 << 1;
+}
+} else if (bus1_enabled) {
+if (lower_cs_sel) {
+cs |= 1 << 2;
+}
+if (upper_cs_sel) {
+cs |= 1 << 3;
+}
+}
+xilinx_spips_update_cs(XILINX_SPIPS(s), cs);
 }
 }
 
@@ -260,7 +293,7 @@ static void xilinx_spips_update_cs_lines(XilinxSPIPS *s)
 if (num_effective_busses(s) == 2) {
 /* Single bit chip-select for qspi */
 field &= 0x1;
-field |= field << 1;
+field |= field << 3;
 /* Dual stack U-Page */
 } else if (s->regs[R_LQSPI_CFG] & LQSPI_CFG_TWO_MEM &&
s->regs[R_LQSPI_STS] & LQSPI_CFG_U_PAGE) {
-- 
2.11.0




[Qemu-devel] [RFC v4 00/21] blockjobs: add explicit job management

2018-02-23 Thread John Snow
This series seeks to address two distinct but closely related issues
concerning the job management API.

(1) For jobs that complete when a monitor is not attached and receiving
events or notifications, there's no way to discern the job's final
return code. Jobs must remain in the query list until dismissed
for reliable management.

(2) Jobs that change the block graph structure at an indeterminate point
after the job starts compete with the management layer that relies
on that graph structure to issue meaningful commands.

This structure should change only at the behest of the management
API, and not asynchronously at unknown points in time. Before a job
issues such changes, it must rely on explicit and synchronous
confirmation from the management API.

This series is a rough sketch that solves these problems by adding three
new distinct job states, and two new job command verbs.

These changes are implemented by formalizing a State Transition Machine
for the BlockJob subsystem.

Job States:

UNDEFINED   Default state. Internal state only.
CREATED Job has been created
RUNNING Job has been started and is running
PAUSED  Job is not ready and has been paused
READY   Job is ready and is running
STANDBY Job is ready and is paused

WAITING Job is waiting on peers in transaction
PENDING Job is waiting on ACK from QMP
ABORTINGJob is aborting or has been cancelled
CONCLUDED   Job has finished and has a retcode available
NULLJob is being dismantled. Internal state only.

Job Verbs:

CANCEL  Instructs a running job to terminate with error,
(Except when that job is READY, which produces no error.)
PAUSE   Request a job to pause.
RESUME  Request a job to resume from a pause.
SET-SPEED   Change the speed limiting parameter of a job.
COMPLETEAsk a READY job to finish and exit.

FINALIZEAsk a PENDING job to perform its graph finalization.
DISMISS Finish cleaning up an empty job.

And here's my stab at a diagram:

 +-+
 |UNDEFINED|
 +--+--+
|
 +--v+
 |CREATED+-+
 +--++ |
|  |
 +--++ +--+|
   +-+RUNNING<->PAUSED||
   | +--+-+--+ +--+|
   || ||
   || +--+ |
   ||| |
   | +--v--+   +---+ | |
   +-+READY<--->STANDBY| | |
   | +--+--+   +---+ | |
   ||| |
   | +--v+   | |
   +-+WAITING+---+ |
   | +--++ |
   ||  |
   | +--v+ |
   +-+PENDING| |
   | +--++ |
   ||  |
+--v-+   +--v--+   |
|ABORTING+--->CONCLUDED|   |
++   +--+--+   |
|  |
 +--v-+|
 |NULL++
 ++
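
To make the intended flow concrete, a rough end-to-end sketch in the
iotests style (illustrative only; the device name and target_img path are
placeholders, and the commands/events are the ones introduced by this
series):

    # Sketch of the explicit ("manual") job management workflow.
    vm.qmp('drive-backup', device='drive0', sync='full',
           format=iotests.imgfmt, target=target_img, manual=True)
    # The job runs; once all work is done it emits BLOCK_JOB_PENDING.
    vm.event_wait(name='BLOCK_JOB_PENDING', match={'data': {'id': 'drive0'}})
    vm.qmp('block-job-finalize', id='drive0')        # authorize graph changes
    vm.event_wait(name='BLOCK_JOB_COMPLETED',
                  match={'data': {'device': 'drive0'}})
    res = vm.qmp('query-block-jobs')                 # retcode still queryable
    assert res['return'][0]['status'] == 'concluded'
    vm.qmp('block-job-dismiss', id='drive0')         # CONCLUDED -> NULL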

V4:
 - All jobs are now transactions.
 - All jobs now transition through states in a uniform way.
 - Verb permissions are now enforced.

V3:
 - Added WAITING and PENDING events
 - Added block_job_finalize verb
 - Added .pending() callback for jobs
 - Tweaked how .commit/.abort work

V2:
 - Added tests!
 - Changed property name (Jeff, Paolo)

RFC / Known problems:
- I need a lot more tests, still...

- STANDBY is a dumb name, and maybe not even really needed or wanted.
  However, a Paused job will return to either READY or RUNNING depending on
  the state it was in when it was PAUSED. We can keep that in an internal
  variable, or we can make it explicit in the STM.

- is "manual" descriptive as a property name?
  Kevin conceives of the new workflow as
  "No automatic transitions, please." (i.e. automatic-transitions: False)
  Whereas I think of it more like:
  "Enable manual workflow mode, please." (manual-transitions: True)

  I like the idea of the new property defaulting to false and have coded
  in a way mindful of that.

- Mirror needs to be refactored to use the commit/abort/pending/clean callbacks
  to fulfill the promise made by "no graph changes without user authorization"
  that PENDING is supposed to offer



For convenience, this branch is available at:
https://github.com/jnsnow/qemu.git branch block-job-reap
https://github.com/jnsnow/qemu/tree/block-job-reap

This version is tagged 

[Qemu-devel] [RFC v4 09/21] blockjobs: add CONCLUDED state

2018-02-23 Thread John Snow
add a new state "CONCLUDED" that identifies a job that has ceased all
operations. The wording was chosen to avoid any phrasing that might
imply success, error, or cancellation. The task has simply ceased all
operation and can never again perform any work.

("finished", "done", and "completed" might all imply success.)

Transitions:
Running  -> Concluded: normal completion
Ready-> Concluded: normal completion
Aborting -> Concluded: error and cancellations

Verbs:
None as of this commit. (a future commit adds 'dismiss')

 +-+
 |UNDEFINED|
 +--+--+
|
 +--v+
 |CREATED|
 +--++
|
 +--v+ +--+
   +-+RUNNING<->PAUSED|
   | +--+-+--+ +--+
   || |
   || +--+
   |||
   | +--v--+   +---+ |
   +-+READY<--->STANDBY| |
   | +--+--+   +---+ |
   |||
+--v-+   +--v--+ |
|ABORTING+--->CONCLUDED<-+
++   +-+

Signed-off-by: John Snow 
---
 blockjob.c   | 43 ---
 qapi/block-core.json |  5 -
 2 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 4c3fcda46c..93b0a36306 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -44,23 +44,24 @@ static QemuMutex block_job_mutex;
 
 /* BlockJob State Transition Table */
 bool BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, X */
-/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0},
-/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0},
-/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1},
-/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0},
-/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1},
-/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0},
-/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0},
+  /* U, C, R, P, Y, S, X, E */
+/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0},
+/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0},
+/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 1},
+/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0},
+/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 1},
+/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0},
+/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 1},
+/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0},
 };
 
 bool BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, X */
-[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 0},
-[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0},
-[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0},
-[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0},
-[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0},
+  /* U, C, R, P, Y, S, X, E */
+[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 0, 0},
+[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0, 0},
+[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0},
+[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0},
+[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0},
 };
 
 static void block_job_state_transition(BlockJob *job, BlockJobStatus s1)
@@ -114,6 +115,7 @@ static void __attribute__((__constructor__)) 
block_job_init(void)
 
 static void block_job_event_cancelled(BlockJob *job);
 static void block_job_event_completed(BlockJob *job, const char *msg);
+static void block_job_event_concluded(BlockJob *job);
 static void block_job_enter_cond(BlockJob *job, bool(*fn)(BlockJob *job));
 
 /* Transactional group of block jobs */
@@ -420,6 +422,7 @@ static void block_job_completed_single(BlockJob *job)
 
 QLIST_REMOVE(job, txn_list);
 block_job_txn_unref(job->txn);
+block_job_event_concluded(job);
 block_job_unref(job);
 }
 
@@ -620,7 +623,9 @@ void block_job_user_resume(BlockJob *job, Error **errp)
 
 void block_job_cancel(BlockJob *job)
 {
-if (block_job_started(job)) {
+if (job->status == BLOCK_JOB_STATUS_CONCLUDED) {
+return;
+} else if (block_job_started(job)) {
 block_job_cancel_async(job);
 block_job_enter(job);
 } else {
@@ -727,6 +732,14 @@ static void block_job_event_completed(BlockJob *job, const 
char *msg)
                                        &error_abort);
 }
 
+static void block_job_event_concluded(BlockJob *job)
+{
+if 

Re: [Qemu-devel] [PATCH v3 19/31] arm/translate-a64: add FP16 FCMxx (zero) to simd_two_reg_misc_fp16

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> I re-use the existing handle_2misc_fcmp_zero handler and tweak it
> slightly to deal with the half-precision case.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v3
>   - use size directly wuth read/write_vec_element
>   - drop unneeded break
>   - WIP: mess with calculating maxpasses
> ---

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [PATCH v2 43/67] target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group

2018-02-23 Thread Richard Henderson
On 02/23/2018 09:25 AM, Peter Maydell wrote:
> On 17 February 2018 at 18:22, Richard Henderson
>  wrote:
>> Signed-off-by: Richard Henderson 
>> ---
>>  target/arm/helper-sve.h| 14 +++
>>  target/arm/helper.h| 19 ++
>>  target/arm/translate-sve.c | 41 
>>  target/arm/vec_helper.c| 94 
>> ++
>>  target/arm/Makefile.objs   |  2 +-
>>  target/arm/sve.decode  | 10 +
>>  6 files changed, 179 insertions(+), 1 deletion(-)
>>  create mode 100644 target/arm/vec_helper.c
>>
> 
>> +/* Floating-point trigonometric starting value.
>> + * See the ARM ARM pseudocode function FPTrigSMul.
>> + */
>> +static float16 float16_ftsmul(float16 op1, uint16_t op2, float_status *stat)
>> +{
>> +float16 result = float16_mul(op1, op1, stat);
>> +if (!float16_is_any_nan(result)) {
>> +result = float16_set_sign(result, op2 & 1);
>> +}
>> +return result;
>> +}
>> +
>> +static float32 float32_ftsmul(float32 op1, uint32_t op2, float_status *stat)
>> +{
>> +float32 result = float32_mul(op1, op1, stat);
>> +if (!float32_is_any_nan(result)) {
>> +result = float32_set_sign(result, op2 & 1);
>> +}
>> +return result;
>> +}
>> +
>> +static float64 float64_ftsmul(float64 op1, uint64_t op2, float_status *stat)
>> +{
>> +float64 result = float64_mul(op1, op1, stat);
>> +if (!float64_is_any_nan(result)) {
>> +result = float64_set_sign(result, op2 & 1);
>> +}
>> +return result;
>> +}
>> +
>> +#define DO_3OP(NAME, FUNC, TYPE) \
>> +void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
>> +{  \
>> +intptr_t i, oprsz = simd_oprsz(desc);  \
>> +TYPE *d = vd, *n = vn, *m = vm;\
>> +for (i = 0; i < oprsz / sizeof(TYPE); i++) {   \
>> +d[i] = FUNC(n[i], m[i], stat); \
>> +}  \
>> +}
>> +
>> +DO_3OP(gvec_fadd_h, float16_add, float16)
>> +DO_3OP(gvec_fadd_s, float32_add, float32)
>> +DO_3OP(gvec_fadd_d, float64_add, float64)
>> +
>> +DO_3OP(gvec_fsub_h, float16_sub, float16)
>> +DO_3OP(gvec_fsub_s, float32_sub, float32)
>> +DO_3OP(gvec_fsub_d, float64_sub, float64)
>> +
>> +DO_3OP(gvec_fmul_h, float16_mul, float16)
>> +DO_3OP(gvec_fmul_s, float32_mul, float32)
>> +DO_3OP(gvec_fmul_d, float64_mul, float64)
>> +
>> +DO_3OP(gvec_ftsmul_h, float16_ftsmul, float16)
>> +DO_3OP(gvec_ftsmul_s, float32_ftsmul, float32)
>> +DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
>> +
>> +#ifdef TARGET_AARCH64
> 
> This seems a bit odd given SVE is AArch64-only anyway...

Ah right.

The thing to notice here is that the helpers have been placed such that the
helpers can be shared with AA32 and AA64 AdvSIMD.  One call to one of these
would replace the 2-8 calls that we currently generate for such an operation.

I thought it better to plan ahead for that cleanup as opposed to moving them 
later.

Here you see where AA64 differs from AA32 (and in particular where the scalar
operation is also conditionalized).


r~



[Qemu-devel] [PULL v1 2/2] tests: add test for TPM TIS device

2018-02-23 Thread Stefan Berger
Move the TPM TIS related register and flag #defines into
include/hw/acpi/tpm.h for access by the test case.

Write a test case that covers the TIS functionality.

Add the tests cases to the MAINTAINERS file.

Signed-off-by: Stefan Berger 
Reviewed-by: Marc-André Lureau 
---
 MAINTAINERS|   1 +
 hw/tpm/tpm_tis.c   | 101 --
 include/hw/acpi/tpm.h  | 105 +++
 tests/Makefile.include |   2 +
 tests/tpm-emu.c|   7 +
 tests/tpm-tis-test.c   | 486 +
 6 files changed, 601 insertions(+), 101 deletions(-)
 create mode 100644 tests/tpm-tis-test.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 6e7adad..8c1ae1d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1633,6 +1633,7 @@ F: include/hw/acpi/tpm.h
 F: include/sysemu/tpm*
 F: qapi/tpm.json
 F: backends/tpm.c
+F: tests/*tpm*
 T: git git://github.com/stefanberger/qemu-tpm.git tpm-next
 
 Checkpatch
diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
index f81168a..834eef7 100644
--- a/hw/tpm/tpm_tis.c
+++ b/hw/tpm/tpm_tis.c
@@ -92,107 +92,6 @@ typedef struct TPMState {
 } \
 } while (0)
 
-/* tis registers */
-#define TPM_TIS_REG_ACCESS0x00
-#define TPM_TIS_REG_INT_ENABLE0x08
-#define TPM_TIS_REG_INT_VECTOR0x0c
-#define TPM_TIS_REG_INT_STATUS0x10
-#define TPM_TIS_REG_INTF_CAPABILITY   0x14
-#define TPM_TIS_REG_STS   0x18
-#define TPM_TIS_REG_DATA_FIFO 0x24
-#define TPM_TIS_REG_INTERFACE_ID  0x30
-#define TPM_TIS_REG_DATA_XFIFO0x80
-#define TPM_TIS_REG_DATA_XFIFO_END0xbc
-#define TPM_TIS_REG_DID_VID   0xf00
-#define TPM_TIS_REG_RID   0xf04
-
-/* vendor-specific registers */
-#define TPM_TIS_REG_DEBUG 0xf90
-
-#define TPM_TIS_STS_TPM_FAMILY_MASK (0x3 << 26)/* TPM 2.0 */
-#define TPM_TIS_STS_TPM_FAMILY1_2   (0 << 26)  /* TPM 2.0 */
-#define TPM_TIS_STS_TPM_FAMILY2_0   (1 << 26)  /* TPM 2.0 */
-#define TPM_TIS_STS_RESET_ESTABLISHMENT_BIT (1 << 25)  /* TPM 2.0 */
-#define TPM_TIS_STS_COMMAND_CANCEL  (1 << 24)  /* TPM 2.0 */
-
-#define TPM_TIS_STS_VALID (1 << 7)
-#define TPM_TIS_STS_COMMAND_READY (1 << 6)
-#define TPM_TIS_STS_TPM_GO(1 << 5)
-#define TPM_TIS_STS_DATA_AVAILABLE(1 << 4)
-#define TPM_TIS_STS_EXPECT(1 << 3)
-#define TPM_TIS_STS_SELFTEST_DONE (1 << 2)
-#define TPM_TIS_STS_RESPONSE_RETRY(1 << 1)
-
-#define TPM_TIS_BURST_COUNT_SHIFT 8
-#define TPM_TIS_BURST_COUNT(X) \
-((X) << TPM_TIS_BURST_COUNT_SHIFT)
-
-#define TPM_TIS_ACCESS_TPM_REG_VALID_STS  (1 << 7)
-#define TPM_TIS_ACCESS_ACTIVE_LOCALITY(1 << 5)
-#define TPM_TIS_ACCESS_BEEN_SEIZED(1 << 4)
-#define TPM_TIS_ACCESS_SEIZE  (1 << 3)
-#define TPM_TIS_ACCESS_PENDING_REQUEST(1 << 2)
-#define TPM_TIS_ACCESS_REQUEST_USE(1 << 1)
-#define TPM_TIS_ACCESS_TPM_ESTABLISHMENT  (1 << 0)
-
-#define TPM_TIS_INT_ENABLED   (1 << 31)
-#define TPM_TIS_INT_DATA_AVAILABLE(1 << 0)
-#define TPM_TIS_INT_STS_VALID (1 << 1)
-#define TPM_TIS_INT_LOCALITY_CHANGED  (1 << 2)
-#define TPM_TIS_INT_COMMAND_READY (1 << 7)
-
-#define TPM_TIS_INT_POLARITY_MASK (3 << 3)
-#define TPM_TIS_INT_POLARITY_LOW_LEVEL(1 << 3)
-
-#define TPM_TIS_INTERRUPTS_SUPPORTED (TPM_TIS_INT_LOCALITY_CHANGED | \
-  TPM_TIS_INT_DATA_AVAILABLE   | \
-  TPM_TIS_INT_STS_VALID | \
-  TPM_TIS_INT_COMMAND_READY)
-
-#define TPM_TIS_CAP_INTERFACE_VERSION1_3 (2 << 28)
-#define TPM_TIS_CAP_INTERFACE_VERSION1_3_FOR_TPM2_0 (3 << 28)
-#define TPM_TIS_CAP_DATA_TRANSFER_64B(3 << 9)
-#define TPM_TIS_CAP_DATA_TRANSFER_LEGACY (0 << 9)
-#define TPM_TIS_CAP_BURST_COUNT_DYNAMIC  (0 << 8)
-#define TPM_TIS_CAP_INTERRUPT_LOW_LEVEL  (1 << 4) /* support is mandatory */
-#define TPM_TIS_CAPABILITIES_SUPPORTED1_3 \
-(TPM_TIS_CAP_INTERRUPT_LOW_LEVEL | \
- TPM_TIS_CAP_BURST_COUNT_DYNAMIC | \
- TPM_TIS_CAP_DATA_TRANSFER_64B | \
- TPM_TIS_CAP_INTERFACE_VERSION1_3 | \
- TPM_TIS_INTERRUPTS_SUPPORTED)
-
-#define TPM_TIS_CAPABILITIES_SUPPORTED2_0 \
-(TPM_TIS_CAP_INTERRUPT_LOW_LEVEL | \
- TPM_TIS_CAP_BURST_COUNT_DYNAMIC | \
- TPM_TIS_CAP_DATA_TRANSFER_64B | \
- TPM_TIS_CAP_INTERFACE_VERSION1_3_FOR_TPM2_0 | \
- TPM_TIS_INTERRUPTS_SUPPORTED)
-
-#define TPM_TIS_IFACE_ID_INTERFACE_TIS1_3   (0xf) /* TPM 2.0 */
-#define TPM_TIS_IFACE_ID_INTERFACE_FIFO (0x0) /* TPM 2.0 */
-#define TPM_TIS_IFACE_ID_INTERFACE_VER_FIFO (0 << 4)  /* TPM 2.0 */
-#define TPM_TIS_IFACE_ID_CAP_5_LOCALITIES   (1 << 8)  /* TPM 2.0 */
-#define TPM_TIS_IFACE_ID_CAP_TIS_SUPPORTED  (1 << 13) /* TPM 2.0 */
-#define TPM_TIS_IFACE_ID_INT_SEL_LOCK   (1 << 19) /* TPM 2.0 */
-

Re: [Qemu-devel] [PATCH v6 00/23] RISC-V QEMU Port Submission

2018-02-23 Thread Michael Clark
On Sat, Feb 24, 2018 at 10:31 AM, Richard Henderson <
richard.hender...@linaro.org> wrote:

> On 02/22/2018 04:11 PM, Michael Clark wrote:
> > QEMU RISC-V Emulation Support (RV64GC, RV32GC)
> >
> > This is hopefully the "fix remaining issues in-tree" release.
>
> FWIW, I'm happy with this.
>
> For those patches that I haven't given an explicit R-b, e.g. most of hw/, I
> didn't see anything obviously wrong.  So I'll give them
>
> Acked-by: Richard Henderson 
>
> Unless anyone has any other comments, I would expect the next step would
> be for
> you to create a signed pull request for Peter.
>

Thanks Richard!

I might see if I can sort out the license changes for include/hw/riscv/*.h
and hw/riscv/*.h

In either case, I've set the wheels in motion for the license change so it
can happen out-of-tree or in-tree... as SiFive is fine with GPLv2+, at
least for their stuff...


Re: [Qemu-devel] [PATCH v8 09/21] null: Switch to .bdrv_co_block_status()

2018-02-23 Thread Eric Blake

On 02/23/2018 11:05 AM, Kevin Wolf wrote:

Am 23.02.2018 um 17:43 hat Eric Blake geschrieben:

OFFSET_VALID | DATA might be excusable because I can see that it's
convenient that a protocol driver refers to itself as *file instead of
returning NULL there and then the offset is valid (though it would be
pointless to actually follow the file pointer), but OFFSET_VALID without
DATA probably isn't.


So OFFSET_VALID | DATA for a protocol BDS is not just convenient, but
necessary to avoid breaking qemu-img map output.  But you are also right
that OFFSET_VALID without data makes little sense at a protocol layer. So
with that in mind, I'm auditing all of the protocol layers to make sure
OFFSET_VALID ends up as something sane.


That's one way to look at it.

The other way is that qemu-img map shouldn't ask the protocol layer for
its offset because it already knows the offset (it is what it passes as
a parameter to bdrv_co_block_status).

Anyway, it's probably not worth changing the interface, we should just
make sure that the return values of the individual drivers are
consistent.


Yet another inconsistency, and it's making me scratch my head today.

By the way, in my byte-based stuff that is now pending on your tree, I 
tried hard to NOT change semantics or the set of flags returned by a 
given driver, and we agreed that's why you'd accept the series as-is and 
make me do this followup exercise.  But it's looking like my followups 
may end up touching a lot of the same drivers again, now that I'm 
looking at what the semantics SHOULD be (and whatever I do end up 
tweaking, I will at least make sure that iotests is still happy with it).


First, let's read what states the NBD spec is proposing:


It defines the following flags for the flags field:

NBD_STATE_HOLE (bit 0): if set, the block represents a hole (and future 
writes to that area may cause fragmentation or encounter an ENOSPC error); if 
clear, the block is allocated or the server could not otherwise determine its 
status. Note that the use of NBD_CMD_TRIM is related to this status, but that 
the server MAY report a hole even where NBD_CMD_TRIM has not been requested, 
and also that a server MAY report that the block is allocated even where 
NBD_CMD_TRIM has been requested.
NBD_STATE_ZERO (bit 1): if set, the block contents read as all zeroes; if 
clear, the block contents are not known. Note that the use of 
NBD_CMD_WRITE_ZEROES is related to this status, but that the server MAY report 
zeroes even where NBD_CMD_WRITE_ZEROES has not been requested, and also that a 
server MAY report unknown content even where NBD_CMD_WRITE_ZEROES has been 
requested.

It is not an error for a server to report that a region of the export has both 
NBD_STATE_HOLE set and NBD_STATE_ZERO clear. The contents of such an area are 
undefined, and a client reading such an area should make no assumption as to 
its contents or stability.


So here's how Vladimir proposed implementing it in his series (written 
before my byte-based block status stuff went in to your tree):

https://lists.gnu.org/archive/html/qemu-devel/2018-02/msg04038.html

Server side (3/9):

+int ret = bdrv_block_status_above(bs, NULL, offset, tail_bytes, 
,

+  NULL, NULL);
+if (ret < 0) {
+return ret;
+}
+
+flags = (ret & BDRV_BLOCK_ALLOCATED ? 0 : NBD_STATE_HOLE) |
+(ret & BDRV_BLOCK_ZERO  ? NBD_STATE_ZERO : 0);

Client side (6/9):

+*pnum = extent.length >> BDRV_SECTOR_BITS;
+return (extent.flags & NBD_STATE_HOLE ? 0 : BDRV_BLOCK_DATA) |
+   (extent.flags & NBD_STATE_ZERO ? BDRV_BLOCK_ZERO : 0);

Does anything there strike you as odd?  In isolation, they seemed fine 
to me, but side-by-side, I'm scratching my head: the server queries the 
block layer, and turns BDRV_BLOCK_ALLOCATED into !NBD_STATE_HOLE; the 
client side then takes the NBD protocol and tries to turn it back into 
information to feed the block layer, where !NBD_STATE_HOLE now feeds 
BDRV_BLOCK_DATA.  Why the different choice of bits?
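
To restate that round trip compactly (sketch only, with booleans standing in
for the real flag words):

    # Server side (3/9): block status -> NBD extent flags.
    def server_flags(allocated, zero):
        hole = not allocated     # BDRV_BLOCK_ALLOCATED -> !NBD_STATE_HOLE
        return hole, zero        # BDRV_BLOCK_ZERO -> NBD_STATE_ZERO

    # Client side (6/9): NBD extent flags -> block status.
    def client_status(hole, zero):
        data = not hole          # !NBD_STATE_HOLE -> BDRV_BLOCK_DATA
        return data, zero        # NBD_STATE_ZERO -> BDRV_BLOCK_ZERO

    # Composed: what went in as ALLOCATED comes back out as DATA.
    assert client_status(*server_flags(True, False)) == (True, False)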


Part of the story is that right now, we document that ONLY the block 
layer sets _ALLOCATED, in io.c, as a result of the driver layer 
returning HOLE || ZERO (there are cases where the block layer can return 
ZERO but not ALLOCATED, because the driver layer returned 0 but the 
block layer still knows that area reads as zero).  So Vladimir's patch 
matches the fact that the driver shouldn't set ALLOCATED.  Still, if we 
are tying ALLOCATED to whether there is a hole, then that seems like 
information we should be getting from the driver, not something 
synthesized after we've left the driver!


Then there's the question of file-posix.c: what should it return for a 
hole, ZERO|OFFSET_VALID or DATA|ZERO|OFFSET_VALID?  The wording in 
block.h implies that if DATA is not set, then the area reads as zero to 
the guest, but may have indeterminate value on the underlying file - but 
we KNOW 

[Qemu-devel] [RFC v4 20/21] iotests: test manual job dismissal

2018-02-23 Thread John Snow
Signed-off-by: John Snow 
---
 tests/qemu-iotests/056 | 195 +
 tests/qemu-iotests/056.out |   4 +-
 2 files changed, 197 insertions(+), 2 deletions(-)

diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
index 04f2c3c841..bc21ba9af8 100755
--- a/tests/qemu-iotests/056
+++ b/tests/qemu-iotests/056
@@ -29,6 +29,26 @@ backing_img = os.path.join(iotests.test_dir, 'backing.img')
 test_img = os.path.join(iotests.test_dir, 'test.img')
 target_img = os.path.join(iotests.test_dir, 'target.img')
 
+def img_create(img, fmt=iotests.imgfmt, size='64M', **kwargs):
+fullname = os.path.join(iotests.test_dir, '%s.%s' % (img, fmt))
+optargs = []
+for k,v in kwargs.iteritems():
+optargs = optargs + ['-o', '%s=%s' % (k,v)]
+args = ['create', '-f', fmt] + optargs + [fullname, size]
+iotests.qemu_img(*args)
+return fullname
+
+def try_remove(img):
+try:
+os.remove(img)
+except OSError:
+pass
+
+def io_write_patterns(img, patterns):
+for pattern in patterns:
+iotests.qemu_io('-c', 'write -P%s %s %s' % pattern, img)
+
+
 class TestSyncModesNoneAndTop(iotests.QMPTestCase):
 image_len = 64 * 1024 * 1024 # MB
 
@@ -108,5 +128,180 @@ class TestBeforeWriteNotifier(iotests.QMPTestCase):
 event = self.cancel_and_wait()
 self.assert_qmp(event, 'data/type', 'backup')
 
+class BackupTest(iotests.QMPTestCase):
+def setUp(self):
+self.vm = iotests.VM()
+self.test_img = img_create('test')
+self.dest_img = img_create('dest')
+self.vm.add_drive(self.test_img)
+self.vm.launch()
+
+def tearDown(self):
+self.vm.shutdown()
+try_remove(self.test_img)
+try_remove(self.dest_img)
+
+def hmp_io_writes(self, drive, patterns):
+for pattern in patterns:
+self.vm.hmp_qemu_io(drive, 'write -P%s %s %s' % pattern)
+self.vm.hmp_qemu_io(drive, 'flush')
+
+def qmp_job_pending_wait(self, device):
+event = self.vm.event_wait(name="BLOCK_JOB_PENDING",
+   match={'data': {'id': device}})
+self.assertNotEqual(event, None)
+res = self.vm.qmp("block-job-finalize", id=device)
+self.assert_qmp(res, 'return', {})
+
+def qmp_backup_and_wait(self, cmd='drive-backup', serror=None,
+aerror=None, **kwargs):
+if not self.qmp_backup(cmd, serror, **kwargs):
+return False
+if 'manual' in kwargs and kwargs['manual']:
+self.qmp_job_pending_wait(kwargs['device'])
+return self.qmp_backup_wait(kwargs['device'], aerror)
+
+def qmp_backup(self, cmd='drive-backup',
+   error=None, **kwargs):
+self.assertTrue('device' in kwargs)
+res = self.vm.qmp(cmd, **kwargs)
+if error:
+self.assert_qmp(res, 'error/desc', error)
+return False
+self.assert_qmp(res, 'return', {})
+return True
+
+def qmp_backup_wait(self, device, error=None):
+event = self.vm.event_wait(name="BLOCK_JOB_COMPLETED",
+   match={'data': {'device': device}})
+self.assertNotEqual(event, None)
+try:
+failure = self.dictpath(event, 'data/error')
+except AssertionError:
+# Backup succeeded.
+self.assert_qmp(event, 'data/offset', event['data']['len'])
+return True
+else:
+# Failure.
+self.assert_qmp(event, 'data/error', error)
+return False
+
+def test_dismiss_false(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
+ sync='full', target=self.dest_img, 
manual=False)
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+
+def test_dismiss_true(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
+ sync='full', target=self.dest_img, 
manual=True)
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return[0]/status', 'concluded')
+res = self.vm.qmp('block-job-dismiss', id='drive0')
+self.assert_qmp(res, 'return', {})
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+
+def test_dismiss_bad_id(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+res = self.vm.qmp('block-job-dismiss', id='foobar')
+self.assert_qmp(res, 'error/class', 'DeviceNotActive')
+
+def test_dismiss_collision(self):
+res = self.vm.qmp('query-block-jobs')
+self.assert_qmp(res, 'return', [])
+

[Qemu-devel] [RFC v4 12/21] blockjobs: ensure abort is called for cancelled jobs

2018-02-23 Thread John Snow
Presently, even if a job is canceled post-completion as a result of
a failing peer in a transaction, it will still call .commit because
nothing has updated or changed its return code.

The reason why this does not cause problems currently is because
backup's implementation of .commit checks for cancellation itself.

I'd like to simplify this contract:

(1) Abort is called if the job/transaction fails
(2) Commit is called if the job/transaction succeeds

To this end: A job's return code, if 0, will be forcibly set as
-ECANCELED if that job has already concluded. Remove the now
redundant check in the backup job implementation.

We need to check for cancellation in both block_job_completed
AND block_job_completed_single, because jobs may be cancelled between
those two calls; for instance in transactions.

The check in block_job_completed could be removed, but there's no
point in starting to attempt to succeed a transaction that we know
in advance will fail.

This does NOT affect mirror jobs that are "canceled" during their
synchronous phase. The mirror job itself forcibly sets the canceled
property to false prior to ceding control, so such cases will invoke
the "commit" callback.

Signed-off-by: John Snow 
---
 block/backup.c |  2 +-
 block/trace-events |  1 +
 blockjob.c | 19 +++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 7e254dabff..453cd62c24 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -206,7 +206,7 @@ static void backup_cleanup_sync_bitmap(BackupBlockJob *job, 
int ret)
 BdrvDirtyBitmap *bm;
 BlockDriverState *bs = blk_bs(job->common.blk);
 
-if (ret < 0 || block_job_is_cancelled(&job->common)) {
+if (ret < 0) {
 /* Merge the successor back into the parent, delete nothing. */
 bm = bdrv_reclaim_dirty_bitmap(bs, job->sync_bitmap, NULL);
 assert(bm);
diff --git a/block/trace-events b/block/trace-events
index 266afd9e99..5e531e0310 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -5,6 +5,7 @@ bdrv_open_common(void *bs, const char *filename, int flags, 
const char *format_n
 bdrv_lock_medium(void *bs, bool locked) "bs %p locked %d"
 
 # blockjob.c
+block_job_completed(void *job, int ret, int jret) "job %p ret %d corrected ret 
%d"
 block_job_state_transition(void *job,  int ret, const char *legal, const char 
*s0, const char *s1) "job %p (ret: %d) attempting %s transition (%s-->%s)"
 block_job_apply_verb(void *job, const char *state, const char *verb, const 
char *legal) "job %p in state %s; applying verb %s (%s)"
 
diff --git a/blockjob.c b/blockjob.c
index 4d29391673..ef17dea004 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -384,13 +384,22 @@ void block_job_start(BlockJob *job)
 bdrv_coroutine_enter(blk_bs(job->blk), job->co);
 }
 
+static void block_job_update_rc(BlockJob *job)
+{
+if (!job->ret && block_job_is_cancelled(job)) {
+job->ret = -ECANCELED;
+}
+if (job->ret) {
+block_job_state_transition(job, BLOCK_JOB_STATUS_ABORTING);
+}
+}
+
 static void block_job_completed_single(BlockJob *job)
 {
 assert(job->completed);
 
-if (job->ret || block_job_is_cancelled(job)) {
-block_job_state_transition(job, BLOCK_JOB_STATUS_ABORTING);
-}
+/* Ensure abort is called for late-transactional failures */
+block_job_update_rc(job);
 
 if (!job->ret) {
 if (job->driver->commit) {
@@ -898,7 +907,9 @@ void block_job_completed(BlockJob *job, int ret)
 assert(blk_bs(job->blk)->job == job);
 job->completed = true;
 job->ret = ret;
-if (ret < 0 || block_job_is_cancelled(job)) {
+block_job_update_rc(job);
+trace_block_job_completed(job, ret, job->ret);
+if (job->ret) {
 block_job_completed_txn_abort(job);
 } else {
 block_job_completed_txn_success(job);
-- 
2.14.3




[Qemu-devel] [RFC v4 15/21] blockjobs: add prepare callback

2018-02-23 Thread John Snow
Some jobs upon finalization may need to perform some work that can
still fail. If these jobs are part of a transaction, it's important
that these callbacks fail the entire transaction.

We allow for a new callback in addition to commit/abort/clean that
allows us the opportunity to have fairly late-breaking failures
in the transactional process.

The expected flow is:

- All jobs in a transaction converge to the WAITING state
  (added in a forthcoming commit)
- All jobs prepare to call either commit/abort
- If any job fails, is canceled, or fails preparation, all jobs
  call their .abort callback.
- All jobs enter the PENDING state, awaiting manual intervention
  (also added in a forthcoming commit)
- block-job-finalize is issued by the user/management layer
- All jobs call their commit callbacks.
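
Seen from the management layer, that flow might look roughly like the following
(a hypothetical iotests-style sketch; the BLOCK_JOB_PENDING event and the
block-job-finalize command come from later patches in this series, and the job
ids are made up):

    # Hypothetical sketch: both jobs converge on PENDING; one
    # block-job-finalize then drives .prepare()/.commit() for the whole
    # transaction.
    for device in ('drive0', 'drive1'):
        ev = self.vm.event_wait(name='BLOCK_JOB_PENDING',
                                match={'data': {'id': device}})
        self.assertNotEqual(ev, None)
    res = self.vm.qmp('block-job-finalize', id='drive0')
    self.assert_qmp(res, 'return', {})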

Signed-off-by: John Snow 
---
 blockjob.c   | 34 +++---
 include/block/blockjob_int.h | 10 ++
 2 files changed, 41 insertions(+), 3 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 8f02c03880..1c010ec100 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -394,6 +394,18 @@ static void block_job_update_rc(BlockJob *job)
 }
 }
 
+static int block_job_prepare(BlockJob *job)
+{
+if (job->ret) {
+goto out;
+}
+if (job->driver->prepare) {
+job->ret = job->driver->prepare(job);
+}
+ out:
+return job->ret;
+}
+
 static void block_job_commit(BlockJob *job)
 {
 assert(!job->ret);
@@ -417,7 +429,7 @@ static void block_job_clean(BlockJob *job)
 }
 }
 
-static void block_job_completed_single(BlockJob *job)
+static int block_job_completed_single(BlockJob *job)
 {
 assert(job->completed);
 
@@ -452,6 +464,7 @@ static void block_job_completed_single(BlockJob *job)
 block_job_txn_unref(job->txn);
 block_job_event_concluded(job);
 block_job_unref(job);
+return 0;
 }
 
 static void block_job_cancel_async(BlockJob *job)
@@ -467,17 +480,22 @@ static void block_job_cancel_async(BlockJob *job)
 job->cancelled = true;
 }
 
-static void block_job_txn_apply(BlockJobTxn *txn, void fn(BlockJob *))
+static int block_job_txn_apply(BlockJobTxn *txn, int fn(BlockJob *))
 {
 AioContext *ctx;
 BlockJob *job, *next;
+int rc;
 
 QLIST_FOREACH_SAFE(job, &txn->jobs, txn_list, next) {
 ctx = blk_get_aio_context(job->blk);
 aio_context_acquire(ctx);
-fn(job);
+rc = fn(job);
 aio_context_release(ctx);
+if (rc) {
+break;
+}
 }
+return rc;
 }
 
 static void block_job_do_dismiss(BlockJob *job)
@@ -567,6 +585,8 @@ static void block_job_completed_txn_success(BlockJob *job)
 {
 BlockJobTxn *txn = job->txn;
 BlockJob *other_job;
+int rc = 0;
+
 /*
  * Successful completion, see if there are other running jobs in this
  * txn.
@@ -576,6 +596,14 @@ static void block_job_completed_txn_success(BlockJob *job)
 return;
 }
 }
+
+/* Jobs may require some prep-work to complete without failure */
+rc = block_job_txn_apply(txn, block_job_prepare);
+if (rc) {
+block_job_completed_txn_abort(job);
+return;
+}
+
 /* We are the last completed job, commit the transaction. */
 block_job_txn_apply(txn, block_job_completed_single);
 }
diff --git a/include/block/blockjob_int.h b/include/block/blockjob_int.h
index 259d49b32a..642adce68b 100644
--- a/include/block/blockjob_int.h
+++ b/include/block/blockjob_int.h
@@ -53,6 +53,16 @@ struct BlockJobDriver {
  */
 void (*complete)(BlockJob *job, Error **errp);
 
+/**
+ * If the callback is not NULL, prepare will be invoked when all the jobs
+ * belonging to the same transaction complete; or upon this job's 
completion
+ * if it is not in a transaction.
+ *
+ * This callback will not be invoked if the job has already failed.
+ * If it fails, abort and then clean will be called.
+ */
+int (*prepare)(BlockJob *job);
+
 /**
  * If the callback is not NULL, it will be invoked when all the jobs
  * belonging to the same transaction complete; or upon this job's
-- 
2.14.3




[Qemu-devel] [RFC v4 16/21] blockjobs: add waiting status

2018-02-23 Thread John Snow
For jobs that are stuck waiting on others in a transaction, it would
be nice to know that they are no longer "running" in that sense, but
instead are waiting on other jobs in the transaction.

Jobs that are "waiting" in this sense cannot be meaningfully altered
any longer as they have left their running loop. The only meaningful
user verb for jobs in this state is "cancel," which will cancel the
whole transaction, too.

Transitions:
Running -> Waiting:   Normal transition.
Ready   -> Waiting:   Normal transition.
Waiting -> Aborting:  Transactional cancellation.
Waiting -> Concluded: Normal transition.

Removed Transitions:
Running -> Concluded: Jobs must go to WAITING first.
Ready   -> Concluded: Jobs must go to WAITING first.

Verbs:
Cancel: Can be applied to WAITING jobs.

 +-+
 |UNDEFINED|
 +--+--+
|
 +--v+
 |CREATED+-+
 +--++ |
|  |
 +--v+ +--+|
   +-+RUNNING<->PAUSED||
   | +--+-+--+ +--+|
   || ||
   || +--+ |
   ||| |
   | +--v--+   +---+ | |
   +-+READY<--->STANDBY| | |
   | +--+--+   +---+ | |
   ||| |
   | +--v+   | |
   +-+WAITING<---+ |
   | +--++ |
   ||  |
+--+-+   +--v--+   |
|ABORTING+--->CONCLUDED|   |
++   +--+--+   |
|  |
 +--v-+|
 |NULL<+
 ++
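
As a hypothetical iotests-style sketch (not part of this patch; the status
string and device name are assumed), the only thing a management layer can
still do with a WAITING job is cancel it, which takes the whole transaction
down the abort path:

    # Hypothetical sketch: 'drive0' has left its running loop and is waiting
    # on its transaction peer; cancel is the only verb still permitted.
    res = self.vm.qmp('query-block-jobs')
    self.assert_qmp(res, 'return[0]/status', 'waiting')
    res = self.vm.qmp('block-job-cancel', device='drive0')
    self.assert_qmp(res, 'return', {})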

Signed-off-by: John Snow 
---
 blockjob.c   | 37 -
 qapi/block-core.json | 29 -
 2 files changed, 48 insertions(+), 18 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 1c010ec100..4aed86fc6b 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -44,26 +44,27 @@ static QemuMutex block_job_mutex;
 
 /* BlockJob State Transition Table */
 bool BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, X, E, N */
-/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0, 0},
-/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0, 1},
-/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 1, 0},
-/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0, 0},
-/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 1, 0},
-/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0, 0},
-/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 1, 0},
-/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0, 1},
-/* N: */ [BLOCK_JOB_STATUS_NULL]  = {0, 0, 0, 0, 0, 0, 0, 0, 0},
+  /* U, C, R, P, Y, S, W, X, E, N */
+/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0, 0, 0},
+/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0, 0, 1},
+/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 1, 0, 0},
+/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
+/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 1, 0, 0},
+/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0, 0, 0},
+/* W: */ [BLOCK_JOB_STATUS_WAITING]   = {0, 0, 0, 0, 0, 0, 0, 1, 1, 0},
+/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 0, 1, 0},
+/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
+/* N: */ [BLOCK_JOB_STATUS_NULL]  = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 };
 
 bool BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, X, E, N */
-[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 0, 0, 0},
-[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0, 0, 0},
-[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0, 0},
-[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0, 0},
-[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0, 0},
-[BLOCK_JOB_VERB_DISMISS]  = {0, 0, 0, 0, 0, 0, 0, 1, 0},
+  /* U, C, R, P, Y, S, W, X, E, N */
+[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 1, 0, 0, 0},
+[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0, 0, 0, 0},
+[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0, 0, 0},
+[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0, 0, 0},
+[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 

Re: [Qemu-devel] [PATCH v3 13/31] arm/translate-a64: add FP16 pairwise ops simd_three_reg_same_fp16

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> This includes FMAXNMP, FADDP, FMAXP, FMINNMP, FMINP.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v2
>   - checkpatch fixes
> ---
>  target/arm/translate-a64.c | 208 
> +
>  1 file changed, 133 insertions(+), 75 deletions(-)

Reviewed-by: Richard Henderson 


r~



Re: [Qemu-devel] [PATCH v3 22/31] arm/helper.c: re-factor recpe and add recepe_f16

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> It looks like the ARM ARM has simplified the pseudo code for the
> calculation which is done on a fixed point 9 bit integer maths. So
> while adding f16 we can also clean this up to be a little less heavy
> on the floating point and just return the fractional part and leave
> the callers to do the final packing of the result.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v3
>   - fix comment 2.0^-16
>   - f16_exp >= 29 (biased)
>   - remove confusing comment about fpst
> ---

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [PATCH v3 5/5] aarch64-linux-user: Add support for SVE signal frame records

2018-02-23 Thread Richard Henderson
On 02/23/2018 01:59 AM, Peter Maydell wrote:
> On 22 February 2018 at 20:14, Richard Henderson
>  wrote:
>> On 02/22/2018 08:41 AM, Peter Maydell wrote:
>>> On 16 February 2018 at 21:56, Richard Henderson
>>>  wrote:
> 
 +if (sve_size <= std_size) {
 +sve_ofs = size;
 +size += sve_size;
 +end1_ofs = size;
 +} else {
 +/* Otherwise we need to allocate extra space.  */
 +extra_ofs = size;
 +size += sizeof(struct target_extra_context);
 +end1_ofs = size;
 +size += QEMU_ALIGN_UP(sizeof(struct target_aarch64_ctx), 16);
>>>
>>> Why do we add the size of target_aarch64_ctx to size here?
>>> We already account for the size of the end record later, so
>>> what is this one?
>>
>> This is for the end record within the extra space, as opposed to the end 
>> record
>> within the standard space which is what we accounted for before.  A comment
>> would help, I supposed.
> 
> Oh, so 'size' is accounting for both the standard space used
> and the extra space? I had thought that 'size' was just counting
> up the standard space used, and the extra space count was in
> extra_size.

Yes.  The revised code uses a different name, total_size, that hopefully makes
this more clear.


r~




[Qemu-devel] [RFC v4 02/21] blockjobs: model single jobs as transactions

2018-02-23 Thread John Snow
Model all independent jobs as single-job transactions.

It's one less case we have to worry about when we add more states to the
transition machine. This way, we can just treat all job lifetimes exactly
the same. This helps tighten assertions of the STM graph and removes some
conditionals that would have been needed in the coming commits adding a
more explicit job lifetime management API.

Signed-off-by: John Snow 
---
 block/backup.c   |  3 +--
 block/commit.c   |  2 +-
 block/mirror.c   |  2 +-
 block/stream.c   |  2 +-
 blockjob.c   | 25 -
 include/block/blockjob_int.h |  3 ++-
 tests/test-bdrv-drain.c  |  4 ++--
 tests/test-blockjob-txn.c| 19 +++
 tests/test-blockjob.c|  2 +-
 9 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 4a16a37229..7e254dabff 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -621,7 +621,7 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 }
 
 /* job->common.len is fixed, so we can't allow resize */
-job = block_job_create(job_id, &backup_job_driver, bs,
+job = block_job_create(job_id, &backup_job_driver, txn, bs,
BLK_PERM_CONSISTENT_READ,
BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD,
@@ -677,7 +677,6 @@ BlockJob *backup_job_create(const char *job_id, 
BlockDriverState *bs,
 block_job_add_bdrv(>common, "target", target, 0, BLK_PERM_ALL,
_abort);
 job->common.len = len;
-block_job_txn_add_job(txn, >common);
 
 return >common;
 
diff --git a/block/commit.c b/block/commit.c
index bb6c904704..9682158ee7 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -289,7 +289,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
 return;
 }
 
-s = block_job_create(job_id, &commit_job_driver, bs, 0, BLK_PERM_ALL,
+s = block_job_create(job_id, &commit_job_driver, NULL, bs, 0, BLK_PERM_ALL,
  speed, BLOCK_JOB_DEFAULT, NULL, NULL, errp);
 if (!s) {
 return;
diff --git a/block/mirror.c b/block/mirror.c
index c9badc1203..6bab7cfdd8 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1166,7 +1166,7 @@ static void mirror_start_job(const char *job_id, 
BlockDriverState *bs,
 }
 
 /* Make sure that the source is not resized while the job is running */
-s = block_job_create(job_id, driver, mirror_top_bs,
+s = block_job_create(job_id, driver, NULL, mirror_top_bs,
  BLK_PERM_CONSISTENT_READ,
  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
  BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD, speed,
diff --git a/block/stream.c b/block/stream.c
index 499cdacdb0..f3b53f49e2 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -244,7 +244,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
 /* Prevent concurrent jobs trying to modify the graph structure here, we
  * already have our own plans. Also don't allow resize as the image size is
  * queried only at the job start and then cached. */
-s = block_job_create(job_id, &stream_job_driver, bs,
+s = block_job_create(job_id, &stream_job_driver, NULL, bs,
  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
  BLK_PERM_GRAPH_MOD,
  BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
diff --git a/blockjob.c b/blockjob.c
index 24833ef30f..7ba3683ee3 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -357,10 +357,8 @@ static void block_job_completed_single(BlockJob *job)
 }
 }
 
-if (job->txn) {
-QLIST_REMOVE(job, txn_list);
-block_job_txn_unref(job->txn);
-}
+QLIST_REMOVE(job, txn_list);
+block_job_txn_unref(job->txn);
 block_job_unref(job);
 }
 
@@ -647,7 +645,7 @@ static void block_job_event_completed(BlockJob *job, const 
char *msg)
  */
 
 void *block_job_create(const char *job_id, const BlockJobDriver *driver,
-   BlockDriverState *bs, uint64_t perm,
+   BlockJobTxn *txn, BlockDriverState *bs, uint64_t perm,
uint64_t shared_perm, int64_t speed, int flags,
BlockCompletionFunc *cb, void *opaque, Error **errp)
 {
@@ -729,6 +727,17 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 return NULL;
 }
 }
+
+/* Single jobs are modeled as single-job transactions for sake of
+ * consolidating the job management logic */
+if (!txn) {
+txn = block_job_txn_new();
+block_job_txn_add_job(txn, job);
+block_job_txn_unref(txn);
+} else {
+block_job_txn_add_job(txn, job);
+}
+
 return job;
 }
 
@@ -752,13 +761,11 @@ void 

[Qemu-devel] [RFC v4 21/21] blockjobs: add manual_mgmt option to transactions

2018-02-23 Thread John Snow
This allows us to easily force the option for all jobs belonging
to a transaction to ensure consistency with how all those jobs
will be handled.

This is purely a convenience.
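
For example, usage could look roughly like this (a hypothetical iotests-style
sketch; the 'manual-mgmt' spelling is taken from the qapi hunk below, the rest
is made up):

    # Hypothetical sketch: with grouped completion and manual-mgmt, every job
    # created by this transaction uses the manual (2.12+) job semantics.
    res = self.vm.qmp('transaction', actions=[
        {'type': 'drive-backup',
         'data': {'device': 'drive0', 'sync': 'full', 'target': self.dest_img}},
    ], properties={'completion-mode': 'grouped', 'manual-mgmt': True})
    self.assert_qmp(res, 'return', {})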

Signed-off-by: John Snow 
---
 blockdev.c|  7 ++-
 blockjob.c| 10 +++---
 include/block/blockjob.h  |  5 -
 qapi/transaction.json |  3 ++-
 tests/test-blockjob-txn.c |  6 +++---
 5 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 2eddb0e726..34181c41c2 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2225,6 +2225,11 @@ static TransactionProperties *get_transaction_properties(
 props->completion_mode = ACTION_COMPLETION_MODE_INDIVIDUAL;
 }
 
+if (!props->has_manual_mgmt) {
+props->has_manual_mgmt = true;
+props->manual_mgmt = false;
+}
+
 return props;
 }
 
@@ -2250,7 +2255,7 @@ void qmp_transaction(TransactionActionList *dev_list,
  */
 props = get_transaction_properties(props);
 if (props->completion_mode != ACTION_COMPLETION_MODE_INDIVIDUAL) {
-block_job_txn = block_job_txn_new();
+block_job_txn = block_job_txn_new(props->manual_mgmt);
 }
 
 /* drain all i/o before any operations */
diff --git a/blockjob.c b/blockjob.c
index f9e8a64261..eaaa2aea65 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -136,6 +136,9 @@ struct BlockJobTxn {
 
 /* Reference count */
 int refcnt;
+
+/* Participating jobs must use manual completion */
+bool manual;
 };
 
 static QLIST_HEAD(, BlockJob) block_jobs = QLIST_HEAD_INITIALIZER(block_jobs);
@@ -176,11 +179,12 @@ BlockJob *block_job_get(const char *id)
 return NULL;
 }
 
-BlockJobTxn *block_job_txn_new(void)
+BlockJobTxn *block_job_txn_new(bool manual_mgmt)
 {
 BlockJobTxn *txn = g_new0(BlockJobTxn, 1);
 QLIST_INIT(>jobs);
 txn->refcnt = 1;
+txn->manual = manual_mgmt;
 return txn;
 }
 
@@ -944,7 +948,7 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 job->paused= true;
 job->pause_count   = 1;
 job->refcnt= 1;
-job->manual= (flags & BLOCK_JOB_MANUAL);
+job->manual= (flags & BLOCK_JOB_MANUAL) || (txn && txn->manual);
 job->status= BLOCK_JOB_STATUS_CREATED;
 block_job_state_transition(job, BLOCK_JOB_STATUS_CREATED);
 aio_timer_init(qemu_get_aio_context(), &job->sleep_timer,
@@ -978,7 +982,7 @@ void *block_job_create(const char *job_id, const 
BlockJobDriver *driver,
 /* Single jobs are modeled as single-job transactions for sake of
  * consolidating the job management logic */
 if (!txn) {
-txn = block_job_txn_new();
+txn = block_job_txn_new(false);
 block_job_txn_add_job(txn, job);
 block_job_txn_unref(txn);
 } else {
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index e09064c342..f3d026f13d 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -371,8 +371,11 @@ void block_job_iostatus_reset(BlockJob *job);
  * All jobs in the transaction either complete successfully or fail/cancel as a
  * group.  Jobs wait for each other before completing.  Cancelling one job
  * cancels all jobs in the transaction.
+ *
+ * @manual_mgmt: whether or not jobs that belong to this transaction will be
+ *   forced to use 2.12+ job management semantics
  */
-BlockJobTxn *block_job_txn_new(void);
+BlockJobTxn *block_job_txn_new(bool manual_mgmt);
 
 /**
  * block_job_ref:
diff --git a/qapi/transaction.json b/qapi/transaction.json
index bd312792da..9611758cb6 100644
--- a/qapi/transaction.json
+++ b/qapi/transaction.json
@@ -79,7 +79,8 @@
 ##
 { 'struct': 'TransactionProperties',
   'data': {
-   '*completion-mode': 'ActionCompletionMode'
+   '*completion-mode': 'ActionCompletionMode',
+   '*manual-mgmt': 'bool'
   }
 }
 
diff --git a/tests/test-blockjob-txn.c b/tests/test-blockjob-txn.c
index 34f09ef8c1..2d84f9a41e 100644
--- a/tests/test-blockjob-txn.c
+++ b/tests/test-blockjob-txn.c
@@ -119,7 +119,7 @@ static void test_single_job(int expected)
 BlockJobTxn *txn;
 int result = -EINPROGRESS;
 
-txn = block_job_txn_new();
+txn = block_job_txn_new(false);
 job = test_block_job_start(1, true, expected, , txn);
 block_job_start(job);
 
@@ -158,7 +158,7 @@ static void test_pair_jobs(int expected1, int expected2)
 int result1 = -EINPROGRESS;
 int result2 = -EINPROGRESS;
 
-txn = block_job_txn_new();
+txn = block_job_txn_new(false);
 job1 = test_block_job_start(1, true, expected1, , txn);
 job2 = test_block_job_start(2, true, expected2, , txn);
 block_job_start(job1);
@@ -220,7 +220,7 @@ static void test_pair_jobs_fail_cancel_race(void)
 int result1 = -EINPROGRESS;
 int result2 = -EINPROGRESS;
 
-txn = block_job_txn_new();
+txn = block_job_txn_new(false);
 job1 = test_block_job_start(1, true, -ECANCELED, , txn);
 job2 = 

Re: [Qemu-devel] [PATCH v2 11/32] arm/translate-a64: add FP16 F[A]C[EQ/GE/GT] to simd_three_reg_same_fp16

2018-02-23 Thread Richard Henderson
On 02/23/2018 03:59 AM, Alex Bennée wrote:
>> Not using float16_eq etc?
> 
> These don't actually exist.

Ah.

> But I guess we could make stubs for them
> based on the generic float_compare support. But would it buy us much?
...
>>> +return ADVSIMD_CMPRES(compare == float_relation_greater ||
>>> +  compare == float_relation_equal);
>>
>> Especially float16_le(b, a, fpst).

It buys us knowledge of the float_relation_* values, such that instead of the
two comparisons above you can use <= 0 (note that this only works for le not
ge, because of float_relation_unordered == 2).

I'll grant you that two compares vs one isn't much, but it is simpler...


r~



[Qemu-devel] [PATCH v2 2/2] xilinx_spips: Use 8 dummy cycles with the QIOR/QIOR4 commands

2018-02-23 Thread Francisco Iglesias
Use 8 dummy cycles (4 dummy bytes) with the QIOR/QIOR4 commands in legacy mode
to match what is expected by Micron (Numonyx) flashes (the default target
flash type of the QSPI).

Signed-off-by: Francisco Iglesias 
Tested-by: Alistair Francis 
Reviewed-by: Alistair Francis 
---
 hw/ssi/xilinx_spips.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index 0cb484ecf4..426f971311 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -577,7 +577,7 @@ static int xilinx_spips_num_dummies(XilinxQSPIPS *qs, 
uint8_t command)
 return 2;
 case QIOR:
 case QIOR_4:
-return 5;
+return 4;
 default:
 return -1;
 }
-- 
2.11.0




[Qemu-devel] [PATCH v2 0/2] xilinx_spips: Update CS assertion when striping

2018-02-23 Thread Francisco Iglesias
Hi,

The first patch in this series attempts to correct the slave selection when
using the striping functionality in the QSPI. The second patch in the series
updates the QIOR/QIOR4 commands to use 8 dummy cycles in the QSPI for matching
Micron (Numonyx) flashes (the default target flash type of the QSPI).

Best regards,
Francisco Iglesias

Changelog:
v1 -> v2
  * Attempted to improve readability in the patch 'xilinx_spips: Enable only
two slaves when reading/writing with stripe' when selecting chip selects.


Francisco Iglesias (2):
  xilinx_spips: Enable only two slaves when reading/writing with stripe
  xilinx_spips: Use 8 dummy cycles with the QIOR/QIOR4 commands

 hw/ssi/xilinx_spips.c | 43 ++-
 1 file changed, 38 insertions(+), 5 deletions(-)

-- 
2.11.0




[Qemu-devel] [RFC v4 18/21] blockjobs: add block-job-finalize

2018-02-23 Thread John Snow
Instead of automatically transitioning from PENDING to CONCLUDED, gate
the .prepare() and .commit() phases behind an explicit acknowledgement
provided by the QMP monitor if manual completion mode has been requested.

This allows us to perform graph changes in prepare and/or commit so that
graph changes do not occur autonomously without knowledge of the
controlling management layer.

Transactions that have reached the "PENDING" state together can all be
moved to invoke their finalization methods by issuing block_job_finalize
to any one job in the transaction.

Jobs in a transaction with mixed job->manual settings will remain stuck
in the "WAITING" state until block_job_finalize is authored on the job(s)
that have reached the "PENDING" state.

These jobs are not allowed to progress because other jobs in the
transaction may still fail during their preparation phase during
finalization, so these jobs must remain in the WAITING phase until
success is guaranteed. These jobs will then automatically dismiss
themselves, but jobs that had the manual property set will remain
at CONCLUDED as normal.
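
Roughly, the intended management-layer sequence for a single manual job looks
like this (a hypothetical iotests-style sketch; the event names, the 'manual'
argument and the image/device names are assumed from the rest of this series):

    # Hypothetical sketch: run a manual backup, wait for PENDING, then
    # explicitly finalize and finally dismiss it.
    res = self.vm.qmp('drive-backup', device='drive0', sync='full',
                      format=iotests.imgfmt, target=self.dest_img,
                      manual=True)
    self.assert_qmp(res, 'return', {})
    self.vm.event_wait(name='BLOCK_JOB_PENDING',
                       match={'data': {'id': 'drive0'}})
    res = self.vm.qmp('block-job-finalize', id='drive0')
    self.assert_qmp(res, 'return', {})
    self.vm.event_wait(name='BLOCK_JOB_COMPLETED',
                       match={'data': {'device': 'drive0'}})
    res = self.vm.qmp('block-job-dismiss', id='drive0')
    self.assert_qmp(res, 'return', {})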

Signed-off-by: John Snow 
---
 block/trace-events   |  1 +
 blockdev.c   | 14 ++
 blockjob.c   | 69 +---
 include/block/blockjob.h | 17 
 qapi/block-core.json | 23 +++-
 5 files changed, 108 insertions(+), 16 deletions(-)

diff --git a/block/trace-events b/block/trace-events
index 5e531e0310..a81b66ff36 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -51,6 +51,7 @@ qmp_block_job_cancel(void *job) "job %p"
 qmp_block_job_pause(void *job) "job %p"
 qmp_block_job_resume(void *job) "job %p"
 qmp_block_job_complete(void *job) "job %p"
+qmp_block_job_finalize(void *job) "job %p"
 qmp_block_job_dismiss(void *job) "job %p"
 qmp_block_stream(void *bs, void *job) "bs %p job %p"
 
diff --git a/blockdev.c b/blockdev.c
index 3180130782..05fd421cdc 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3852,6 +3852,20 @@ void qmp_block_job_complete(const char *device, Error 
**errp)
 aio_context_release(aio_context);
 }
 
+void qmp_block_job_finalize(const char *id, Error **errp)
+{
+AioContext *aio_context;
+BlockJob *job = find_block_job(id, _context, errp);
+
+if (!job) {
+return;
+}
+
+trace_qmp_block_job_finalize(job);
+block_job_finalize(job, errp);
+aio_context_release(aio_context);
+}
+
 void qmp_block_job_dismiss(const char *id, Error **errp)
 {
 AioContext *aio_context;
diff --git a/blockjob.c b/blockjob.c
index 23b4b99fd4..f9e8a64261 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -65,14 +65,15 @@ bool 
BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
 [BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0},
 [BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0},
 [BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0},
+[BLOCK_JOB_VERB_FINALIZE] = {0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
 [BLOCK_JOB_VERB_DISMISS]  = {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0},
 };
 
-static void block_job_state_transition(BlockJob *job, BlockJobStatus s1)
+static bool block_job_state_transition(BlockJob *job, BlockJobStatus s1)
 {
 BlockJobStatus s0 = job->status;
 if (s0 == s1) {
-return;
+return false;
 }
 assert(s1 >= 0 && s1 <= BLOCK_JOB_STATUS__MAX);
 trace_block_job_state_transition(job, job->ret, BlockJobSTT[s0][s1] ?
@@ -83,6 +84,7 @@ static void block_job_state_transition(BlockJob *job, 
BlockJobStatus s1)
   s1));
 assert(BlockJobSTT[s0][s1]);
 job->status = s1;
+return true;
 }
 
 static int block_job_apply_verb(BlockJob *job, BlockJobVerb bv, Error **errp)
@@ -432,7 +434,7 @@ static void block_job_clean(BlockJob *job)
 }
 }
 
-static int block_job_completed_single(BlockJob *job)
+static int block_job_finalize_single(BlockJob *job)
 {
 assert(job->completed);
 
@@ -581,18 +583,44 @@ static void block_job_completed_txn_abort(BlockJob *job)
 assert(other_job->cancelled);
 block_job_finish_sync(other_job, NULL, NULL);
 }
-block_job_completed_single(other_job);
+block_job_finalize_single(other_job);
 aio_context_release(ctx);
 }
 
 block_job_txn_unref(txn);
 }
 
+static int block_job_is_manual(BlockJob *job)
+{
+return job->manual;
+}
+
+static void block_job_do_finalize(BlockJob *job)
+{
+int rc;
+assert(job && job->txn);
+
+/* For jobs set !job->manual, transition to pending synchronously now */
+block_job_txn_apply(job->txn, block_job_event_pending, false);
+
+/* prepare the transaction to complete */
+rc = block_job_txn_apply(job->txn, block_job_prepare, true);
+if (rc) {
+block_job_completed_txn_abort(job);
+} else {
+

[Qemu-devel] [RFC v4 14/21] blockjobs: add block_job_txn_apply function

2018-02-23 Thread John Snow
Simply apply a function transaction-wide.
A few more uses of this in forthcoming patches.

Signed-off-by: John Snow 
---
 blockjob.c | 24 +++-
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 431ce9c220..8f02c03880 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -467,6 +467,19 @@ static void block_job_cancel_async(BlockJob *job)
 job->cancelled = true;
 }
 
+static void block_job_txn_apply(BlockJobTxn *txn, void fn(BlockJob *))
+{
+AioContext *ctx;
+BlockJob *job, *next;
+
+QLIST_FOREACH_SAFE(job, &txn->jobs, txn_list, next) {
+ctx = blk_get_aio_context(job->blk);
+aio_context_acquire(ctx);
+fn(job);
+aio_context_release(ctx);
+}
+}
+
 static void block_job_do_dismiss(BlockJob *job)
 {
 assert(job);
@@ -552,9 +565,8 @@ static void block_job_completed_txn_abort(BlockJob *job)
 
 static void block_job_completed_txn_success(BlockJob *job)
 {
-AioContext *ctx;
 BlockJobTxn *txn = job->txn;
-BlockJob *other_job, *next;
+BlockJob *other_job;
 /*
  * Successful completion, see if there are other running jobs in this
  * txn.
@@ -565,13 +577,7 @@ static void block_job_completed_txn_success(BlockJob *job)
 }
 }
 /* We are the last completed job, commit the transaction. */
-QLIST_FOREACH_SAFE(other_job, &txn->jobs, txn_list, next) {
-ctx = blk_get_aio_context(other_job->blk);
-aio_context_acquire(ctx);
-assert(other_job->ret == 0);
-block_job_completed_single(other_job);
-aio_context_release(ctx);
-}
+block_job_txn_apply(txn, block_job_completed_single);
 }
 
 /* Assumes the block_job_mutex is held */
-- 
2.14.3




[Qemu-devel] [RFC v4 17/21] blockjobs: add PENDING status and event

2018-02-23 Thread John Snow
For jobs utilizing the new manual workflow, we intend to prohibit
them from modifying the block graph until the management layer provides
an explicit ACK via block-job-finalize to move the process forward.

To distinguish this runstate from "ready" or "waiting," we add a new
"pending" event.

For now, the transition from PENDING to CONCLUDED/ABORTING is automatic,
but a future commit will add the explicit block-job-finalize step.

Transitions:
Waiting -> Pending:   Normal transition.
Pending -> Concluded: Normal transition.
Pending -> Aborting:  Late transactional failures and cancellations.

Removed Transitions:
Waiting -> Concluded: Jobs must go to PENDING first.

Verbs:
Cancel: Can be applied to a pending job.

 +-+
 |UNDEFINED|
 +--+--+
|
 +--v+
 |CREATED+-+
 +--++ |
|  |
 +--++ +--+|
   +-+RUNNING<->PAUSED||
   | +--+-+--+ +--+|
   || ||
   || +--+ |
   ||| |
   | +--v--+   +---+ | |
   +-+READY<--->STANDBY| | |
   | +--+--+   +---+ | |
   ||| |
   | +--v+   | |
   +-+WAITING+---+ |
   | +--++ |
   ||  |
   | +--v+ |
   +-+PENDING| |
   | +--++ |
   ||  |
+--v-+   +--v--+   |
|ABORTING+--->CONCLUDED|   |
++   +--+--+   |
|  |
 +--v-+|
 |NULL++
 ++
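
A management layer would observe this state roughly like so (a hypothetical
iotests-style sketch; the event name is assumed from this patch's qapi change,
which is truncated in this archive):

    # Hypothetical sketch: wait for a job to report that it reached PENDING.
    # Until block-job-finalize is added later in this series, the job then
    # moves on to CONCLUDED automatically.
    ev = self.vm.event_wait(name='BLOCK_JOB_PENDING')
    self.assertNotEqual(ev, None)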

Signed-off-by: John Snow 
---
 blockjob.c   | 66 +---
 qapi/block-core.json | 31 +++-
 2 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 4aed86fc6b..23b4b99fd4 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -44,27 +44,28 @@ static QemuMutex block_job_mutex;
 
 /* BlockJob State Transition Table */
 bool BlockJobSTT[BLOCK_JOB_STATUS__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, W, X, E, N */
-/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0, 0, 0},
-/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0, 0, 1},
-/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 1, 0, 0},
-/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
-/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 1, 0, 0},
-/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0, 0, 0},
-/* W: */ [BLOCK_JOB_STATUS_WAITING]   = {0, 0, 0, 0, 0, 0, 0, 1, 1, 0},
-/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 0, 1, 0},
-/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
-/* N: */ [BLOCK_JOB_STATUS_NULL]  = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+  /* U, C, R, P, Y, S, W, D, X, E, N */
+/* U: */ [BLOCK_JOB_STATUS_UNDEFINED] = {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+/* C: */ [BLOCK_JOB_STATUS_CREATED]   = {0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1},
+/* R: */ [BLOCK_JOB_STATUS_RUNNING]   = {0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0},
+/* P: */ [BLOCK_JOB_STATUS_PAUSED]= {0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0},
+/* Y: */ [BLOCK_JOB_STATUS_READY] = {0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0},
+/* S: */ [BLOCK_JOB_STATUS_STANDBY]   = {0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0},
+/* W: */ [BLOCK_JOB_STATUS_WAITING]   = {0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0},
+/* D: */ [BLOCK_JOB_STATUS_PENDING]   = {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0},
+/* X: */ [BLOCK_JOB_STATUS_ABORTING]  = {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0},
+/* E: */ [BLOCK_JOB_STATUS_CONCLUDED] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
+/* N: */ [BLOCK_JOB_STATUS_NULL]  = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
 };
 
 bool BlockJobVerbTable[BLOCK_JOB_VERB__MAX][BLOCK_JOB_STATUS__MAX] = {
-  /* U, C, R, P, Y, S, W, X, E, N */
-[BLOCK_JOB_VERB_CANCEL]   = {0, 1, 1, 1, 1, 1, 1, 0, 0, 0},
-[BLOCK_JOB_VERB_PAUSE]= {0, 1, 1, 1, 1, 1, 0, 0, 0, 0},
-[BLOCK_JOB_VERB_RESUME]   = {0, 1, 1, 1, 1, 1, 0, 0, 0, 0},
-[BLOCK_JOB_VERB_SET_SPEED]= {0, 1, 1, 1, 1, 1, 0, 0, 0, 0},
-[BLOCK_JOB_VERB_COMPLETE] = {0, 0, 0, 0, 1, 0, 0, 0, 0, 0},
-[BLOCK_JOB_VERB_DISMISS]  = {0, 0, 0, 0, 0, 0, 0, 0, 1, 0},
+  /* U, C, R, P, Y, S, W, D, X, E, N */
+

Re: [Qemu-devel] [PATCH v3 31/31] arm/translate-a64: add all single op FP16 to handle_fp_1src_half

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> This includes FMOV, FABS, FNEG, FSQRT and  FRINT[NPMZAXI]. We re-use
> existing helpers to achieve this.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v3
>   - make fabs a bitwise operation
>   - use read_vec_element_i32 to read value
>   - properly wire into disas_fp_1rc
> ---
>  target/arm/translate-a64.c | 71 
> ++
>  1 file changed, 71 insertions(+)

Reviewed-by: Richard Henderson 


r~




Re: [Qemu-devel] [PATCH] hw/acpi-build: build SRAT memory affinity structures for NVDIMM

2018-02-23 Thread no-reply
Hi,

This series failed build test on s390x host. Please find the details below.

N/A. Internal error while reading log file



---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-de...@freelists.org

[Qemu-devel] [PATCH v2 1/5] target/i386: Fix a minor typo found while reviewing

2018-02-23 Thread Babu Moger
Changed KVM_CPUID_FLAG_SIGNIFCANT_INDEX to KVM_CPUID_FLAG_SIGNIFICANT_INDEX

Signed-off-by: Babu Moger 
---
 linux-headers/asm-x86/kvm.h | 2 +-
 target/i386/kvm.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h
index f3a9604..6aec661 100644
--- a/linux-headers/asm-x86/kvm.h
+++ b/linux-headers/asm-x86/kvm.h
@@ -220,7 +220,7 @@ struct kvm_cpuid_entry2 {
__u32 padding[3];
 };
 
-#define KVM_CPUID_FLAG_SIGNIFCANT_INDEX(1 << 0)
+#define KVM_CPUID_FLAG_SIGNIFICANT_INDEX   (1 << 0)
 #define KVM_CPUID_FLAG_STATEFUL_FUNC   (1 << 1)
 #define KVM_CPUID_FLAG_STATE_READ_NEXT (1 << 2)
 
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index ad4b159..85856b6 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -844,7 +844,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
 break;
 }
 c->function = i;
-c->flags = KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+c->flags = KVM_CPUID_FLAG_SIGNIFICANT_INDEX;
 c->index = j;
 cpu_x86_cpuid(env, i, j, &c->eax, &c->ebx, &c->ecx, &c->edx);
 
-- 
1.8.3.1




[Qemu-devel] [PATCH v2 5/5] target/i386: Remove generic SMT thread check

2018-02-23 Thread Babu Moger
Remove the generic non-Intel check while validating hyperthreading support.
Certain AMD CPUs can support hyperthreading now.

CPU families with the TOPOEXT feature can support hyperthreading now.

Signed-off-by: Babu Moger 
---
 target/i386/cpu.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 6d06637..295c409 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4336,17 +4336,20 @@ static void x86_cpu_realizefn(DeviceState *dev, Error 
**errp)
 
 qemu_init_vcpu(cs);
 
-/* Only Intel CPUs support hyperthreading. Even though QEMU fixes this
- * issue by adjusting CPUID_0000_0001_EBX and CPUID_8000_0008_ECX
- * based on inputs (sockets,cores,threads), it is still better to gives
+/* Most Intel and certain AMD CPUs support hyperthreading. Even though QEMU
+ * fixes this issue by adjusting CPUID_0000_0001_EBX and 
CPUID_8000_0008_ECX
+ * based on inputs (sockets,cores,threads), it is still better to give
  * users a warning.
  *
  * NOTE: the following code has to follow qemu_init_vcpu(). Otherwise
  * cs->nr_threads hasn't be populated yet and the checking is incorrect.
  */
-if (!IS_INTEL_CPU(env) && cs->nr_threads > 1 && !ht_warned) {
-error_report("AMD CPU doesn't support hyperthreading. Please configure"
- " -smp options properly.");
+ if (IS_AMD_CPU(env) &&
+ !(env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_TOPOEXT) &&
+ cs->nr_threads > 1 && !ht_warned) {
+error_report("This family of AMD CPU doesn't support "
+ "hyperthreading. Please configure -smp "
+ "options properly.");
 ht_warned = true;
 }
 
-- 
1.8.3.1




[Qemu-devel] [PATCH v2 4/5] target/i386: Enable TOPOEXT feature on AMD EPYC CPU

2018-02-23 Thread Babu Moger
Enable TOPOEXT feature on EPYC CPU. This is required to support
hyperthreading on VM guests. Also extend xlevel to 0x8000001E.
These are supported via CPUID_8000_001E extended functions.

Signed-off-by: Babu Moger 
---
 target/i386/cpu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 191e850..6d06637 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1955,7 +1955,8 @@ static X86CPUDefinition builtin_x86_defs[] = {
 .features[FEAT_8000_0001_ECX] =
 CPUID_EXT3_OSVW | CPUID_EXT3_3DNOWPREFETCH |
 CPUID_EXT3_MISALIGNSSE | CPUID_EXT3_SSE4A | CPUID_EXT3_ABM |
-CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM,
+CPUID_EXT3_CR8LEG | CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM |
+CPUID_EXT3_TOPOEXT,
 .features[FEAT_7_0_EBX] =
 CPUID_7_0_EBX_FSGSBASE | CPUID_7_0_EBX_BMI1 | CPUID_7_0_EBX_AVX2 |
 CPUID_7_0_EBX_SMEP | CPUID_7_0_EBX_BMI2 | CPUID_7_0_EBX_RDSEED |
@@ -1970,7 +1971,7 @@ static X86CPUDefinition builtin_x86_defs[] = {
 CPUID_XSAVE_XGETBV1,
 .features[FEAT_6_EAX] =
 CPUID_6_EAX_ARAT,
-.xlevel = 0x8000000A,
+.xlevel = 0x8000001E,
 .model_id = "AMD EPYC Processor",
 },
 {
-- 
1.8.3.1




[Qemu-devel] [PATCH v2 3/5] target/i386: Add support for CPUID_8000_001E for AMD

2018-02-23 Thread Babu Moger
From: Stanislav Lanci 

Populate threads/core_id/apic_ids/socket_id when CPUID_EXT3_TOPOEXT
feature is supported. This is required to support hyperthreading
feature on AMD CPUs. These are supported via CPUID_8000_001E extended
functions.
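
As a worked example of the EBX encoding added below (a hypothetical sketch; the
topology numbers are made up, the field layout is from the hunk in this patch):

    # Hypothetical sketch: EBX of CPUID leaf 0x8000001E for a guest with
    # 2 threads per core and core_id 3.
    nr_threads = 2
    core_id = 3
    ebx = ((nr_threads - 1) << 8) | core_id
    assert ebx == 0x0103  # bits 15:8: threads per core - 1, bits 7:0: core id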

Signed-off-by: Stanislav Lanci 
Signed-off-by: Babu Moger 
---
 target/i386/cpu.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index a5a480e..191e850 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -3666,6 +3666,14 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, 
uint32_t count,
 *edx = 0;
 }
 break;
+case 0x8000001E:
+if (env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_TOPOEXT) {
+*eax = cpu->apic_id;
+*ebx = (cs->nr_threads - 1) << 8 | cpu->core_id;
+*ecx = cpu->socket_id;
+*edx = 0;
+}
+break;
 case 0xC000:
 *eax = env->cpuid_xlevel2;
 *ebx = 0;
-- 
1.8.3.1




Re: [Qemu-devel] [PATCH 01/19] loader: Add new load_ramdisk_as()

2018-02-23 Thread Richard Henderson
On 02/20/2018 10:03 AM, Peter Maydell wrote:
> Add a function load_ramdisk_as() which behaves like the existing
> load_ramdisk() but allows the caller to specify the AddressSpace
> to use. This matches the pattern we have already for various
> other loader functions.
> 
> Signed-off-by: Peter Maydell 
> Reviewed-by: Philippe Mathieu-Daudé 
> ---
>  include/hw/loader.h | 12 +++-
>  hw/core/loader.c|  8 +++-
>  2 files changed, 18 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: [Qemu-devel] [PATCH v3 4/4] target/m68k: add fscale, fgetman and fgetexp

2018-02-23 Thread Richard Henderson
On 02/23/2018 06:59 AM, Laurent Vivier wrote:
> Using local m68k floatx80_getman(), floatx80_getexp(), floatx80_scale()
> [copied from previous:
> Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
> 
> Signed-off-by: Laurent Vivier 
> ---
>  target/m68k/fpu_helper.c |  15 +
>  target/m68k/helper.h |   3 +
>  target/m68k/softfloat.c  | 144 
> +++
>  target/m68k/softfloat.h  |   3 +
>  target/m68k/translate.c  |   9 +++
>  5 files changed, 174 insertions(+)

Reviewed-by: Richard Henderson 

r~



[Qemu-devel] [PATCH v2 0/5] Enable TOPOEXT to support hyperthreading on AMD CPU

2018-02-23 Thread Babu Moger
This series enables the TOPOEXT feature on AMD CPUs. It is required to support
hyperthreading on KVM guests. It addresses the issues reported by these bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1481253
https://bugs.launchpad.net/qemu/+bug/1703506 

v2:
Fixed a few more minor issues per Gary Hooks' comments. Thank you Gary.
Removed patch #1. We need to handle the instruction cache associativity
separately. It varies based on the CPU family. I will come back to that later.
Added two more typo corrections in patch #1 and patch #5.

v1:
Stanislav Lanci posted a few patches earlier.
https://patchwork.kernel.org/patch/10040903/

Rebased his patches with a few changes.
1. Split the patches into two, separating cpuid functions
   0x8000001D and 0x8000001E (Patch 2 and 3).
2. Removed the generic non-Intel check and made a separate patch
   with some changes (Patch 5).
3. Fixed L3_N_SETS_AMD (from 4096 to 8192) based on CPUID_Fn8000_001D_ECX_x03.

Added 2 more patches.
Patch 1. Fixes cache associativity.
Patch 4. Adds TOPOEXT feature on AMD EPYC CPU.


Babu Moger (3):
  target/i386: Fix a minor typo found while reviewing
  target/i386: Enable TOPOEXT feature on AMD EPYC CPU
  target/i386: Remove generic SMT thread check

Stanislav Lanci (2):
  target/i386: Populate AMD Processor Cache Information
  target/i386: Add support for CPUID_8000_001E for AMD

 linux-headers/asm-x86/kvm.h |   2 +-
 target/i386/cpu.c   | 104 
 target/i386/kvm.c   |  31 +++--
 3 files changed, 124 insertions(+), 13 deletions(-)

-- 
1.8.3.1




Re: [Qemu-devel] [PATCH 03/19] hw/arm/armv7m: Honour CPU's address space for image loads

2018-02-23 Thread Richard Henderson
On 02/20/2018 10:03 AM, Peter Maydell wrote:
> Instead of loading guest images to the system address space, use the
> CPU's address space.  This is important if we're trying to load the
> file to memory or via an alias memory region that is provided by an
> SoC object and thus not mapped into the system address space.
> 
> Signed-off-by: Peter Maydell 
> Reviewed-by: Philippe Mathieu-Daudé 
> ---
>  hw/arm/armv7m.c | 17 ++---
>  1 file changed, 14 insertions(+), 3 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: [Qemu-devel] [PATCH v3 2/4] target/m68k: add fmod/frem

2018-02-23 Thread Richard Henderson
On 02/23/2018 06:59 AM, Laurent Vivier wrote:
> Using a local m68k floatx80_mod()
> [copied from previous:
> Written by Andreas Grabher for Previous, NeXT Computer Emulator.]
> 
> The quotient byte of the FPSR is updated with
> the result of the operation.
> 
> Signed-off-by: Laurent Vivier 
> ---
>  target/m68k/Makefile.objs |   3 +-
>  target/m68k/cpu.h |   1 +
>  target/m68k/fpu_helper.c  |  35 +++-
>  target/m68k/helper.h  |   2 +
>  target/m68k/softfloat.c   | 105 
> ++
>  target/m68k/softfloat.h   |  26 
>  target/m68k/translate.c   |   6 +++
>  7 files changed, 176 insertions(+), 2 deletions(-)
>  create mode 100644 target/m68k/softfloat.c
>  create mode 100644 target/m68k/softfloat.h

Reviewed-by: Richard Henderson 


r~



Re: [Qemu-devel] [PATCH] hw/acpi-build: build SRAT memory affinity structures for NVDIMM

2018-02-23 Thread Haozhong Zhang
Hi Fam,

On 02/23/18 17:17 -0800, no-re...@patchew.org wrote:
> Hi,
> 
> This series failed build test on s390x host. Please find the details below.
> 
> N/A. Internal error while reading log file

What does this message mean? Where can I get the log file?

Thanks,
Haozhong



Re: [Qemu-devel] [PATCH] scsi: Remove automatic creation of SCSI controllers with -drive if=scsi

2018-02-23 Thread no-reply
Hi,

This series failed build test on s390x host. Please find the details below.

N/A. Internal error while reading log file



---
Email generated automatically by Patchew [http://patchew.org/].
Please send your feedback to patchew-de...@freelists.org

Re: [Qemu-devel] [PATCH v3 3/4] softfloat: use floatx80_infinity in softfloat

2018-02-23 Thread Richard Henderson
On 02/23/2018 06:59 AM, Laurent Vivier wrote:
> @@ -4550,8 +4556,8 @@ int64_t floatx80_to_int64(floatx80 a, float_status 
> *status)
>  if ( shiftCount ) {
>  float_raise(float_flag_invalid, status);
>  if (! aSign
> - || (( aExp == 0x7FFF )
> -  && ( aSig != LIT64( 0x8000000000000000 ) ) )
> + || ((aExp == floatx80_infinity_high)
> + && (aSig != floatx80_infinity_low))
> ) {

As long as you're cleaning this up, m68k ignores the explicit integer bit when
considering an infinity.  However, Intel doesn't ignore the bit -- it appears
to treat 7fff.0* as a NaN.


r~



Re: [Qemu-devel] [RFC] exec: eliminate ram naming issue as migration

2018-02-23 Thread Tan, Jianfeng
Hi Igor and all,

> -Original Message-
> From: Igor Mammedov [mailto:imamm...@redhat.com]
> Sent: Thursday, February 8, 2018 7:30 PM
> To: Tan, Jianfeng
> Cc: Paolo Bonzini; Jason Wang; Maxime Coquelin; qemu-devel@nongnu.org;
> Michael S . Tsirkin
> Subject: Re: [Qemu-devel] [RFC] exec: eliminate ram naming issue as
> migration
> 
[...]
> > > It could be solved by adding memdev option to machine,
> > > which would allow to specify backend object. And then on
> > > top make -mem-path alias new option to clean thing up.
> >
> > Do you mean?
> >
> > src vm: -m xG
> > dst vm: -m xG,memdev=pc.ram -object 
> > memory-backend-file,id=pc.ram,size=xG,mem-path=xxx,share=on ...
> Yep, I've meant something like it
> 
> src vm: -m xG,memdev=SHARED_RAM -object 
> memory-backend-file,id=SHARED_RAM,size=xG,mem-path=xxx,share=on
> dst vm: -m xG,memdev=SHARED_RAM -object 
> memory-backend-file,id=SHARED_RAM,size=xG,mem-path=xxx,share=on

After a second thought, I find adding a backend for non-NUMA pc RAM is a
roundabout way.

And we actually have an existing way to add a file-backed RAM: commit 
c902760fb25f ("Add option to use file backed guest memory"). Basically, this 
commit adds two options, --mem-path and --mem-prealloc, without specifying a 
backend explicitly.

So how about just adding a new option --mem-share to decide whether that's
private or shared memory? That seems a much more straightforward way to me;
after this change we can migrate like:

src vm: -m xG
dst vm: -m xG --mem-path xxx --mem-share

Thanks,
Jianfeng

> 
> or it could be -machine FOO,inital_ram_memdev=...
> maybe making -M optional in this case as size is specified by backend
> 
> PS:
> it's not a good idea to use QEMU's internal id 'pc.ram'
> for user specified objects as it might cause problems.



Re: [Qemu-devel] [PATCH v3 30/31] arm/translate-a64: implement simd_scalar_three_reg_same_fp16

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> This covers the encoding group:
> 
>   Advanced SIMD scalar three same FP16
> 
> As all the helpers are already there it is simply a case of calling the
> existing helpers in the scalar context.
> 
> Signed-off-by: Alex Bennée 
> 
> ---
> v2
>   - checkpatch fixes
> v3
>   - check for FP16 feature
>   - remove stray debug
>   - make abs a bitwise operation
>   - checkpatch long line
> ---
>  target/arm/translate-a64.c | 99 
> ++
>  1 file changed, 99 insertions(+)

Reviewed-by: Richard Henderson 


r~



Re: [Qemu-devel] [PATCH v3 1/4] softfloat: export some functions

2018-02-23 Thread Richard Henderson
On 02/23/2018 06:59 AM, Laurent Vivier wrote:
> Move fpu/softfloat-macros.h to include/fpu/
> 
> Export floatx80 functions to be used by target floatx80
> specific implementations.
> 
> Exports:
>   propagateFloatx80NaN(), extractFloatx80Frac(),
>   extractFloatx80Exp(), extractFloatx80Sign(),
>   normalizeFloatx80Subnormal(), packFloatx80(),
>   roundAndPackFloatx80(), normalizeRoundAndPackFloatx80()
> 
> Also exports packFloat32() that will be used to implement
> m68k fsinh, fcos, fsin, ftan operations.
> 
> Signed-off-by: Laurent Vivier 
> ---
> CC: Aurelien Jarno 
> CC: Alex Bennée 
> CC: Peter Maydell 

Reviewed-by: Richard Henderson 


r~



[Qemu-devel] [PATCH v2 2/5] target/i386: Populate AMD Processor Cache Information

2018-02-23 Thread Babu Moger
From: Stanislav Lanci 

Adds information about cache size and topology from cpuid 0x801D leaf
for different cache types on AMD processors.
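
For reference, the sizes implied by the constants in this patch follow from
line_size * associativity * sets * partitions (a hedged sketch; the L1I
associativity value is not visible in this hunk and is assumed to be 4):

    # Hypothetical sketch of the cache sizes encoded below, in KiB.
    def cache_kib(line_size, assoc, sets, partitions=1):
        return line_size * assoc * sets * partitions // 1024

    assert cache_kib(64, 4, 256) == 64      # L1I, assuming 4-way
    assert cache_kib(64, 8, 1024) == 512    # L2
    assert cache_kib(64, 16, 8192) == 8192  # L3 (8 MiB)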

Signed-off-by: Stanislav Lanci 
Signed-off-by: Babu Moger 
---
 target/i386/cpu.c | 76 +++
 target/i386/kvm.c | 29 ++---
 2 files changed, 102 insertions(+), 3 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index b5e431e..a5a480e 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -118,6 +118,7 @@
 #define L1I_LINE_SIZE 64
 #define L1I_ASSOCIATIVITY  8
 #define L1I_SETS  64
+#define L1I_SETS_AMD 256
 #define L1I_PARTITIONS 1
 /* Size = LINE_SIZE*ASSOCIATIVITY*SETS*PARTITIONS = 32KiB */
 #define L1I_DESCRIPTOR CPUID_2_L1I_32KB_8WAY_64B
@@ -129,7 +130,9 @@
 /* Level 2 unified cache: */
 #define L2_LINE_SIZE  64
 #define L2_ASSOCIATIVITY  16
+#define L2_ASSOCIATIVITY_AMD   8
 #define L2_SETS 4096
+#define L2_SETS_AMD 1024
 #define L2_PARTITIONS  1
 /* Size = LINE_SIZE*ASSOCIATIVITY*SETS*PARTITIONS = 4MiB */
 /*FIXME: CPUID leaf 2 descriptor is inconsistent with CPUID leaf 4 */
@@ -146,6 +149,7 @@
 #define L3_N_LINE_SIZE 64
 #define L3_N_ASSOCIATIVITY 16
 #define L3_N_SETS   16384
+#define L3_N_SETS_AMD8192
 #define L3_N_PARTITIONS 1
 #define L3_N_DESCRIPTOR CPUID_2_L3_16MB_16WAY_64B
 #define L3_N_LINES_PER_TAG  1
@@ -3590,6 +3594,78 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, 
uint32_t count,
 *edx = 0;
 }
 break;
+case 0x8000001D: /* AMD TOPOEXT cache info */
+if (cpu->cache_info_passthrough) {
+host_cpuid(index, count, eax, ebx, ecx, edx);
+break;
+} else if (env->features[FEAT_8000_0001_ECX] & CPUID_EXT3_TOPOEXT) {
+*eax = 0;
+switch (count) {
+case 0: /* L1 dcache info */
+*eax |= CPUID_4_TYPE_DCACHE | \
+CPUID_4_LEVEL(1) | \
+CPUID_4_SELF_INIT_LEVEL | \
+((cs->nr_threads - 1) << 14);
+*ebx = (L1D_LINE_SIZE - 1) | \
+   ((L1D_PARTITIONS - 1) << 12) | \
+   ((L1D_ASSOCIATIVITY - 1) << 22);
+*ecx = L1D_SETS - 1;
+*edx = 0;
+break;
+case 1: /* L1 icache info */
+*eax |= CPUID_4_TYPE_ICACHE | \
+CPUID_4_LEVEL(1) | \
+CPUID_4_SELF_INIT_LEVEL | \
+((cs->nr_threads - 1) << 14);
+*ebx = (L1I_LINE_SIZE - 1) | \
+   ((L1I_PARTITIONS - 1) << 12) | \
+   ((L1I_ASSOCIATIVITY_AMD - 1) << 22);
+*ecx = L1I_SETS_AMD - 1;
+*edx = 0;
+break;
+case 2: /* L2 cache info */
+*eax |= CPUID_4_TYPE_UNIFIED | \
+CPUID_4_LEVEL(2) | \
+CPUID_4_SELF_INIT_LEVEL | \
+((cs->nr_threads - 1) << 14);
+*ebx = (L2_LINE_SIZE - 1) | \
+   ((L2_PARTITIONS - 1) << 12) | \
+   ((L2_ASSOCIATIVITY_AMD - 1) << 22);
+*ecx = L2_SETS_AMD - 1;
+*edx = CPUID_4_INCLUSIVE;
+break;
+case 3: /* L3 cache info */
+if (!cpu->enable_l3_cache) {
+*eax = 0;
+*ebx = 0;
+*ecx = 0;
+*edx = 0;
+break;
+}
+*eax |= CPUID_4_TYPE_UNIFIED | \
+CPUID_4_LEVEL(3) | \
+CPUID_4_SELF_INIT_LEVEL | \
+((cs->nr_cores * cs->nr_threads - 1) << 14);
+*ebx = (L3_N_LINE_SIZE - 1) | \
+   ((L3_N_PARTITIONS - 1) << 12) | \
+   ((L3_N_ASSOCIATIVITY - 1) << 22);
+*ecx = L3_N_SETS_AMD - 1;
+*edx = CPUID_4_NO_INVD_SHARING;
+break;
+default: /* end of info */
+*eax = 0;
+*ebx = 0;
+*ecx = 0;
+*edx = 0;
+break;
+}
+} else {
+*eax = 0;
+*ebx = 0;
+*ecx = 0;
+*edx = 0;
+}
+break;
 case 0xC0000000:
 *eax = env->cpuid_xlevel2;
 *ebx = 0;
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 85856b6..8adf7d1 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -909,9 +909,32 @@ int kvm_arch_init_vcpu(CPUState *cs)
 }
 c = &cpuid_data.entries[cpuid_i++];
 
-c->function = i;
-c->flags = 0;
-cpu_x86_cpuid(env, i, 0, &c->eax, &c->ebx, 

Re: [Qemu-devel] [RFC] exec: eliminate ram naming issue as migration

2018-02-23 Thread Tan, Jianfeng


> -Original Message-
> From: Tan, Jianfeng
> Sent: Saturday, February 24, 2018 11:08 AM
> To: 'Igor Mammedov'
> Cc: Paolo Bonzini; Jason Wang; Maxime Coquelin; qemu-devel@nongnu.org;
> Michael S . Tsirkin
> Subject: RE: [Qemu-devel] [RFC] exec: eliminate ram naming issue as
> migration
> 
> Hi Igor and all,
> 
> > -Original Message-
> > From: Igor Mammedov [mailto:imamm...@redhat.com]
> > Sent: Thursday, February 8, 2018 7:30 PM
> > To: Tan, Jianfeng
> > Cc: Paolo Bonzini; Jason Wang; Maxime Coquelin; qemu-
> de...@nongnu.org;
> > Michael S . Tsirkin
> > Subject: Re: [Qemu-devel] [RFC] exec: eliminate ram naming issue as
> > migration
> >
> [...]
> > > > It could be solved by adding memdev option to machine,
> > > > which would allow to specify backend object. And then on
> > > > top make -mem-path alias new option to clean thing up.
> > >
> > > Do you mean?
> > >
> > > src vm: -m xG
> > > dst vm: -m xG,memdev=pc.ram -object memory-backend-
> file,id=pc.ram,size=xG,mem-path=xxx,share=on ...
> > Yep, I've meant something like it
> >
> > src vm: -m xG,memdev=SHARED_RAM -object memory-backend-
> file,id=SHARED_RAM,size=xG,mem-path=xxx,share=on
> > dst vm: -m xG,memdev=SHARED_RAM -object memory-backend-
> file,id=SHARED_RAM,size=xG,mem-path=xxx,share=on
> 
> After a second thought, I find adding a backend for nonnuma pc RAM is
> roundabout way.
> 
> And we actually have an existing way to add a file-backed RAM: commit
> c902760fb25f ("Add option to use file backed guest memory"). Basically, this
> commit adds two options, --mem-path and --mem-prealloc, without specify
> a backend explicitly.
> 
> So how about just adding a new option --mem-share to decide if that's a
> private memory or shared memory? That seems much straightforward way
> to me; after this change we can migrate like:
> 
> src vm: -m xG
> dst vm: -m xG --mem-path xxx --mem-share
> 

Attach the patch FYI. Look forward to your thoughts.

diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index 31612ca..5eaf367 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -127,6 +127,7 @@ extern bool enable_mlock;
 extern uint8_t qemu_extra_params_fw[2];
 extern QEMUClockType rtc_clock;
 extern const char *mem_path;
+extern int mem_share;
 extern int mem_prealloc;
 
 #define MAX_NODES 128
diff --git a/numa.c b/numa.c
index 7b9c33a..322289f 100644
--- a/numa.c
+++ b/numa.c
@@ -456,7 +456,7 @@ static void allocate_system_memory_nonnuma(MemoryRegion 
*mr, Object *owner,
 if (mem_path) {
 #ifdef __linux__
 Error *err = NULL;
-memory_region_init_ram_from_file(mr, owner, name, ram_size, false,
+memory_region_init_ram_from_file(mr, owner, name, ram_size, mem_share,
  mem_path, &err);
 if (err) {
 error_report_err(err);
diff --git a/qemu-options.hx b/qemu-options.hx
index 678181c..c968c53 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -389,6 +389,15 @@ STEXI
 Allocate guest RAM from a temporarily created file in @var{path}.
 ETEXI
 
+DEF("mem-share", 0, QEMU_OPTION_memshare,
+"-mem-share   make guest memory shareable (use with -mem-path)\n",
+QEMU_ARCH_ALL)
+STEXI
+@item -mem-share
+@findex -mem-share
+Make file-backed guest RAM shareable when using -mem-path.
+ETEXI
+
 DEF("mem-prealloc", 0, QEMU_OPTION_mem_prealloc,
 "-mem-prealloc   preallocate guest memory (use with -mem-path)\n",
 QEMU_ARCH_ALL)
diff --git a/vl.c b/vl.c
index 444b750..0ff06c2 100644
--- a/vl.c
+++ b/vl.c
@@ -140,6 +140,7 @@ int display_opengl;
 const char* keyboard_layout = NULL;
 ram_addr_t ram_size;
 const char *mem_path = NULL;
+int mem_share = 0;
 int mem_prealloc = 0; /* force preallocation of physical target memory */
 bool enable_mlock = false;
 int nb_nics;
@@ -3395,6 +3396,9 @@ int main(int argc, char **argv, char **envp)
 case QEMU_OPTION_mempath:
 mem_path = optarg;
 break;
+case QEMU_OPTION_memshare:
+mem_share = 1;
+break;
 case QEMU_OPTION_mem_prealloc:
 mem_prealloc = 1;
 break;



Re: [Qemu-devel] [PATCH v3 00/31] Add ARMv8.2 half-precision functions

2018-02-23 Thread Richard Henderson
On 02/23/2018 07:36 AM, Alex Bennée wrote:
> Now that the softfloat re-factoring has been merged I re-based this
> directly from master. Alternatively you can grab the full tree from:
> 
>   https://github.com/stsquad/qemu/tree/arm-fp16-v3
> 
> I've tested with the following RISU test binaries:
> 
>   
> http://people.linaro.org/~alex.bennee/testcases/arm64.risu/testcases.armv8.2_hp.tar.xz
> 
> Which now includes insn_FP1SRC.risu.bin which tests the final patch in
> the series which wasn't being exercised by my previous set of tests.
> 
> I've dropped the fp16 patch to both avoid the bikesheding but also
> because I could achieve the same effect by running RISU with:
> 
>   -cpu cortex-a57
> 
> The changes are all relatively minor based on feedback. The details
> are as usual included in the commit messages below ---.

Unless I've missed something, that's the whole patch set reviewed.


r~



Re: [Qemu-devel] [RFC, PATCH, v1] hw/audio/opl2lpt: add support for OPL2LPT

2018-02-23 Thread no-reply
Hi,

This series failed build test on ppcbe host. Please find the details below.

Type: series
Message-id: 20180218144021.11641-1-vinc...@bernat.im
Subject: [Qemu-devel] [RFC, PATCH, v1] hw/audio/opl2lpt: add support for OPL2LPT

=== TEST SCRIPT BEGIN ===
#!/bin/bash
# Testing script will be invoked under the git checkout with
# HEAD pointing to a commit that has the patches applied on top of "base"
# branch
set -e
echo "=== ENV ==="
env
echo "=== PACKAGES ==="
rpm -qa
echo "=== TEST BEGIN ==="
INSTALL=$PWD/install
BUILD=$PWD/build
mkdir -p $BUILD $INSTALL
SRC=$PWD
cd $BUILD
$SRC/configure --prefix=$INSTALL
make -j100
# XXX: we need reliable clean up
# make check -j100 V=1
make install
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] 
patchew/1519439425-27883-1-git-send-email-babu.mo...@amd.com -> 
patchew/1519439425-27883-1-git-send-email-babu.mo...@amd.com
 - [tag update]  patchew/20180223145959.18761-1-laur...@vivier.eu -> 
patchew/20180223145959.18761-1-laur...@vivier.eu
Submodule 'capstone' (git://git.qemu.org/capstone.git) registered for path 
'capstone'
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (git://git.qemu.org/QemuMacDrivers.git) 
registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (git://git.qemu-project.org/SLOF.git) registered for path 
'roms/SLOF'
Submodule 'roms/ipxe' (git://git.qemu-project.org/ipxe.git) registered for path 
'roms/ipxe'
Submodule 'roms/openbios' (git://git.qemu-project.org/openbios.git) registered 
for path 'roms/openbios'
Submodule 'roms/openhackware' (git://git.qemu-project.org/openhackware.git) 
registered for path 'roms/openhackware'
Submodule 'roms/qemu-palcode' (git://github.com/rth7680/qemu-palcode.git) 
registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (git://git.qemu-project.org/seabios.git/) registered 
for path 'roms/seabios'
Submodule 'roms/seabios-hppa' (git://github.com/hdeller/seabios-hppa.git) 
registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (git://git.qemu-project.org/sgabios.git) registered 
for path 'roms/sgabios'
Submodule 'roms/skiboot' (git://git.qemu.org/skiboot.git) registered for path 
'roms/skiboot'
Submodule 'roms/u-boot' (git://git.qemu-project.org/u-boot.git) registered for 
path 'roms/u-boot'
Submodule 'roms/vgabios' (git://git.qemu-project.org/vgabios.git/) registered 
for path 'roms/vgabios'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered 
for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out 
'22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out 
'd4e7d7ac663fcb55f1b93575445fcbca372f17a7'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 
'fa981320a1e0968d6fc1b8de319723ff8212b337'
Cloning into 'roms/ipxe'...
Submodule path 'roms/ipxe': checked out 
'0600d3ae94f93efd10fc6b3c7420a9557a3a1670'
Cloning into 'roms/openbios'...
Submodule path 'roms/openbios': checked out 
'54d959d97fb331708767b2fd4a878efd2bbc41bb'
Cloning into 'roms/openhackware'...
Submodule path 'roms/openhackware': checked out 
'c559da7c8eec5e45ef1f67978827af6f0b9546f5'
Cloning into 'roms/qemu-palcode'...
Submodule path 'roms/qemu-palcode': checked out 
'f3c7e44c70254975df2a00af39701eafbac4d471'
Cloning into 'roms/seabios'...
Submodule path 'roms/seabios': checked out 
'63451fca13c75870e1703eb3e20584d91179aebc'
Cloning into 'roms/seabios-hppa'...
Submodule path 'roms/seabios-hppa': checked out 
'649e6202b8d65d46c69f542b1380f840fbe8ab13'
Cloning into 'roms/sgabios'...
Submodule path 'roms/sgabios': checked out 
'cbaee52287e5f32373181cff50a00b6c4ac9015a'
Cloning into 'roms/skiboot'...
Submodule path 'roms/skiboot': checked out 
'e0ee24c27a172bcf482f6f2bc905e6211c134bcc'
Cloning into 'roms/u-boot'...
Submodule path 'roms/u-boot': checked out 
'd85ca029f257b53a96da6c2fb421e78a003a9943'
Cloning into 'roms/vgabios'...
Submodule path 'roms/vgabios': checked out 
'19ea12c230ded95928ecaef0db47a82231c2e485'
Cloning into 'ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out 
'6b3d716e2b6472eb7189d3220552280ef3d832ce'
Switched to a new branch 'test'
M   roms/openbios
c83e23c hw/audio/opl2lpt: add support for OPL2LPT

=== OUTPUT BEGIN ===
=== ENV ===
XDG_SESSION_ID=29961
SHELL=/bin/sh
USER=patchew
PATCHEW=./patchew-cli -s https://patchew.org
PATH=/usr/bin:/bin
PWD=/var/tmp/patchew-tester-tmp-pbz7m8xl/src
LANG=en_US.UTF-8
HOME=/home/patchew
SHLVL=2
LOGNAME=patchew
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/env
=== PACKAGES ===
telepathy-filesystem-0.0.2-6.el7.noarch
ipa-common-4.5.0-20.el7.centos.noarch
ipa-client-common-4.5.0-20.el7.centos.noarch
nhn-nanum-fonts-common-3.020-9.el7.noarch

Re: [Qemu-devel] [PATCH 02/19] hw/arm/boot: Honour CPU's address space for image loads

2018-02-23 Thread Richard Henderson
On 02/20/2018 10:03 AM, Peter Maydell wrote:
> Instead of loading kernels, device trees, and the like to
> the system address space, use the CPU's address space. This
> is important if we're trying to load the file to memory or
> via an alias memory region that is provided by an SoC
> object and thus not mapped into the system address space.
> 
> Signed-off-by: Peter Maydell 
> Reviewed-by: Philippe Mathieu-Daudé 
> ---
> Function name changed to arm_boot_address_space()
> rather than arm_boot_addressspace(), following irc
> conversation...
> ---
>  hw/arm/boot.c | 119 
> +-
>  1 file changed, 76 insertions(+), 43 deletions(-)

Reviewed-by: Richard Henderson 

r~



Re: [Qemu-devel] [PATCH 0/9] nbd block status base:allocation

2018-02-23 Thread no-reply
Hi,

This series failed docker-mingw@fedora build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

Type: series
Message-id: 1518702707-7077-1-git-send-email-vsement...@virtuozzo.com
Subject: [Qemu-devel] [PATCH 0/9] nbd block status base:allocation

=== TEST SCRIPT BEGIN ===
#!/bin/bash
set -e
git submodule update --init dtc
# Let docker tests dump environment info
export SHOW_ENV=1
export J=8
time make docker-test-mingw@fedora
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
7d95dcdd92 iotests: new test 206 for NBD BLOCK_STATUS
906b0164c4 iotests: add file_path helper
015ee723d2 iotests.py: tiny refactor: move system imports up
1377201bee nbd: BLOCK_STATUS for standard get_block_status function: client part
a750bdb375 nbd/client: fix error messages in nbd_handle_reply_err
6ec660434e block/nbd-client: save first fatal error in nbd_iter_error
1b609ef226 nbd: BLOCK_STATUS for standard get_block_status function: server part
ac6e460a1f nbd: change indenting in nbd.h
5e399e16f0 nbd/server: add nbd_opt_invalid helper

=== OUTPUT BEGIN ===
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-ffe51cdm/src/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
  BUILD   fedora
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-ffe51cdm/src'
  GEN 
/var/tmp/patchew-tester-tmp-ffe51cdm/src/docker-src.2018-02-24-01.47.30.11028/qemu.tar
Cloning into 
'/var/tmp/patchew-tester-tmp-ffe51cdm/src/docker-src.2018-02-24-01.47.30.11028/qemu.tar.vroot'...
done.
Your branch is up-to-date with 'origin/test'.
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into 
'/var/tmp/patchew-tester-tmp-ffe51cdm/src/docker-src.2018-02-24-01.47.30.11028/qemu.tar.vroot/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered 
for path 'ui/keycodemapdb'
Cloning into 
'/var/tmp/patchew-tester-tmp-ffe51cdm/src/docker-src.2018-02-24-01.47.30.11028/qemu.tar.vroot/ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out 
'6b3d716e2b6472eb7189d3220552280ef3d832ce'
  COPYRUNNER
RUN test-mingw in qemu:fedora 
Packages installed:
PyYAML-3.12-5.fc27.x86_64
SDL-devel-1.2.15-29.fc27.x86_64
bc-1.07.1-3.fc27.x86_64
bison-3.0.4-8.fc27.x86_64
bzip2-1.0.6-24.fc27.x86_64
ccache-3.3.5-1.fc27.x86_64
clang-5.0.1-1.fc27.x86_64
findutils-4.6.0-14.fc27.x86_64
flex-2.6.1-5.fc27.x86_64
gcc-7.3.1-2.fc27.x86_64
gcc-c++-7.3.1-2.fc27.x86_64
gettext-0.19.8.1-12.fc27.x86_64
git-2.14.3-2.fc27.x86_64
glib2-devel-2.54.3-2.fc27.x86_64
hostname-3.18-4.fc27.x86_64
libaio-devel-0.3.110-9.fc27.x86_64
libasan-7.3.1-2.fc27.x86_64
libfdt-devel-1.4.6-1.fc27.x86_64
libubsan-7.3.1-2.fc27.x86_64
make-4.2.1-4.fc27.x86_64
mingw32-SDL-1.2.15-9.fc27.noarch
mingw32-bzip2-1.0.6-9.fc27.noarch
mingw32-curl-7.54.1-2.fc27.noarch
mingw32-glib2-2.54.1-1.fc27.noarch
mingw32-gmp-6.1.2-2.fc27.noarch
mingw32-gnutls-3.5.13-2.fc27.noarch
mingw32-gtk2-2.24.31-4.fc27.noarch
mingw32-gtk3-3.22.16-1.fc27.noarch
mingw32-libjpeg-turbo-1.5.1-3.fc27.noarch
mingw32-libpng-1.6.29-2.fc27.noarch
mingw32-libssh2-1.8.0-3.fc27.noarch
mingw32-libtasn1-4.13-1.fc27.noarch
mingw32-nettle-3.3-3.fc27.noarch
mingw32-pixman-0.34.0-3.fc27.noarch
mingw32-pkg-config-0.28-9.fc27.x86_64
mingw64-SDL-1.2.15-9.fc27.noarch
mingw64-bzip2-1.0.6-9.fc27.noarch
mingw64-curl-7.54.1-2.fc27.noarch
mingw64-glib2-2.54.1-1.fc27.noarch
mingw64-gmp-6.1.2-2.fc27.noarch
mingw64-gnutls-3.5.13-2.fc27.noarch
mingw64-gtk2-2.24.31-4.fc27.noarch
mingw64-gtk3-3.22.16-1.fc27.noarch
mingw64-libjpeg-turbo-1.5.1-3.fc27.noarch
mingw64-libpng-1.6.29-2.fc27.noarch
mingw64-libssh2-1.8.0-3.fc27.noarch
mingw64-libtasn1-4.13-1.fc27.noarch
mingw64-nettle-3.3-3.fc27.noarch
mingw64-pixman-0.34.0-3.fc27.noarch
mingw64-pkg-config-0.28-9.fc27.x86_64
nettle-devel-3.4-1.fc27.x86_64
perl-5.26.1-402.fc27.x86_64
pixman-devel-0.34.0-4.fc27.x86_64
python3-3.6.2-13.fc27.x86_64
sparse-0.5.1-2.fc27.x86_64
tar-1.29-7.fc27.x86_64
which-2.21-4.fc27.x86_64
zlib-devel-1.2.11-4.fc27.x86_64

Environment variables:
TARGET_LIST=
PACKAGES=ccache gettext git tar PyYAML sparse flex bison python3 bzip2 hostname 
glib2-devel pixman-devel zlib-devel SDL-devel libfdt-devel gcc gcc-c++ 
clang make perl which bc findutils libaio-devel nettle-devel libasan 
libubsan mingw32-pixman mingw32-glib2 mingw32-gmp mingw32-SDL 
mingw32-pkg-config mingw32-gtk2 mingw32-gtk3 mingw32-gnutls mingw32-nettle 
mingw32-libtasn1 mingw32-libjpeg-turbo mingw32-libpng mingw32-curl 
mingw32-libssh2 mingw32-bzip2 mingw64-pixman mingw64-glib2 mingw64-gmp 
mingw64-SDL mingw64-pkg-config mingw64-gtk2 mingw64-gtk3 mingw64-gnutls 
mingw64-nettle mingw64-libtasn1 mingw64-libjpeg-turbo 

Re: [Qemu-devel] [PATCH 00/19] Add Cortex-M33 and mps2-an505 board model

2018-02-23 Thread no-reply
Hi,

This series failed docker-quick@centos6 build test. Please find the testing 
commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

Type: series
Message-id: 20180220180325.29818-1-peter.mayd...@linaro.org
Subject: [Qemu-devel] [PATCH 00/19] Add Cortex-M33 and mps2-an505 board model

=== TEST SCRIPT BEGIN ===
#!/bin/bash
set -e
git submodule update --init dtc
# Let docker tests dump environment info
export SHOW_ENV=1
export J=8
time make docker-test-quick@centos6
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
7d3e83003a mps2-an505: New board model: MPS2 with AN505 Cortex-M33 FPGA image
008d624b9c hw/arm/iotkit: Model Arm IOT Kit
c9fe4b0446 hw/misc/iotkit-secctl: Add remaining simple registers
3e31bda208 hw/misc/iotkit-secctl: Add handling for PPCs
49891ade94 hw/misc/iotkit-secctl: Arm IoT Kit security controller initial 
skeleton
01484af03f hw/misc/tz-ppc: Model TrustZone peripheral protection controller
21a39893f5 hw/misc/mps2-fpgaio: FPGA control block for MPS2 AN505
58df172c0a hw/core/split-irq: Device that splits IRQ lines
93f8bafcf7 qdev: Add new qdev_init_gpio_in_named_with_opaque()
e8b01923f5 include/hw/or-irq.h: Add missing include guard
c84bfdeac7 hw/misc/unimp: Move struct to header file
0eef77da17 target/arm: Add Cortex-M33
6bc121a492 armv7m: Forward init-svtor property to CPU object
06b6e8cd1f target/arm: Define init-svtor property for the reset secure VTOR 
value
856e5cff56 armv7m: Forward idau property to CPU object
a0d1e1c86d target/arm: Define an IDAU interface
c33cc72800 hw/arm/armv7m: Honour CPU's address space for image loads
bca17d128e hw/arm/boot: Honour CPU's address space for image loads
a5069eedc1 loader: Add new load_ramdisk_as()

=== OUTPUT BEGIN ===
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into '/var/tmp/patchew-tester-tmp-e9pukhul/src/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
  BUILD   centos6
make[1]: Entering directory '/var/tmp/patchew-tester-tmp-e9pukhul/src'
  GEN 
/var/tmp/patchew-tester-tmp-e9pukhul/src/docker-src.2018-02-24-01.10.28.9237/qemu.tar
Cloning into 
'/var/tmp/patchew-tester-tmp-e9pukhul/src/docker-src.2018-02-24-01.10.28.9237/qemu.tar.vroot'...
done.
Your branch is up-to-date with 'origin/test'.
Submodule 'dtc' (git://git.qemu-project.org/dtc.git) registered for path 'dtc'
Cloning into 
'/var/tmp/patchew-tester-tmp-e9pukhul/src/docker-src.2018-02-24-01.10.28.9237/qemu.tar.vroot/dtc'...
Submodule path 'dtc': checked out 'e54388015af1fb4bf04d0bca99caba1074d9cc42'
Submodule 'ui/keycodemapdb' (git://git.qemu.org/keycodemapdb.git) registered 
for path 'ui/keycodemapdb'
Cloning into 
'/var/tmp/patchew-tester-tmp-e9pukhul/src/docker-src.2018-02-24-01.10.28.9237/qemu.tar.vroot/ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out 
'6b3d716e2b6472eb7189d3220552280ef3d832ce'
  COPYRUNNER
RUN test-quick in qemu:centos6 
Packages installed:
SDL-devel-1.2.14-7.el6_7.1.x86_64
bison-2.4.1-5.el6.x86_64
bzip2-devel-1.0.5-7.el6_0.x86_64
ccache-3.1.6-2.el6.x86_64
csnappy-devel-0-6.20150729gitd7bc683.el6.x86_64
flex-2.5.35-9.el6.x86_64
gcc-4.4.7-18.el6.x86_64
gettext-0.17-18.el6.x86_64
git-1.7.1-9.el6_9.x86_64
glib2-devel-2.28.8-9.el6.x86_64
libepoxy-devel-1.2-3.el6.x86_64
libfdt-devel-1.4.0-1.el6.x86_64
librdmacm-devel-1.0.21-0.el6.x86_64
lzo-devel-2.03-3.1.el6_5.1.x86_64
make-3.81-23.el6.x86_64
mesa-libEGL-devel-11.0.7-4.el6.x86_64
mesa-libgbm-devel-11.0.7-4.el6.x86_64
package g++ is not installed
pixman-devel-0.32.8-1.el6.x86_64
spice-glib-devel-0.26-8.el6.x86_64
spice-server-devel-0.12.4-16.el6.x86_64
tar-1.23-15.el6_8.x86_64
vte-devel-0.25.1-9.el6.x86_64
xen-devel-4.6.6-2.el6.x86_64
zlib-devel-1.2.3-29.el6.x86_64

Environment variables:
PACKAGES=bison bzip2-devel ccache csnappy-devel flex g++
 gcc gettext git glib2-devel libepoxy-devel libfdt-devel
 librdmacm-devel lzo-devel make mesa-libEGL-devel 
mesa-libgbm-devel pixman-devel SDL-devel spice-glib-devel 
spice-server-devel tar vte-devel xen-devel zlib-devel
HOSTNAME=c9a30019043f
MAKEFLAGS= -j8
J=8
CCACHE_DIR=/var/tmp/ccache
EXTRA_CONFIGURE_OPTS=
V=
SHOW_ENV=1
PATH=/usr/lib/ccache:/usr/lib64/ccache:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
TARGET_LIST=
SHLVL=1
HOME=/root
TEST_DIR=/tmp/qemu-test
FEATURES= dtc
DEBUG=
_=/usr/bin/env

Configure options:
--enable-werror --target-list=x86_64-softmmu,aarch64-softmmu 
--prefix=/tmp/qemu-test/install
No C++ compiler available; disabling C++ specific optional code
Install prefix/tmp/qemu-test/install
BIOS directory/tmp/qemu-test/install/share/qemu
firmware path /tmp/qemu-test/install/share/qemu-firmware
binary directory  /tmp/qemu-test/install/bin
library directory /tmp/qemu-test/install/lib
module directory  

[Qemu-devel] [Bug 1751422] [NEW] some instructions translate error in x86

2018-02-23 Thread yabi
Public bug reported:

There are some instruction translation errors on target i386 in many versions,
such as 2.11.1, 2.10.2, 2.7.1 and so on.
The incorrectly translated instructions include les and lds. I have a patch, but I
have no idea how to apply it.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1751422

Title:
  some instructions translate error in x86

Status in QEMU:
  New

Bug description:
  There are some instruction translation errors on target i386 in many versions,
such as 2.11.1, 2.10.2, 2.7.1 and so on.
  The incorrectly translated instructions include les and lds. I have a patch, but
I have no idea how to apply it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1751422/+subscriptions



Re: [Qemu-devel] [PATCH v1 4/8] new contrib/generate_all.sh: batch risugen script

2018-02-23 Thread Peter Maydell
On 23 February 2018 at 16:49, Daniel P. Berrangé  wrote:
> If it is going to live in QEMU source tree, then avoiding introducing a new
> license is desirable.

It isn't -- risu has its own repository.

thanks
-- PMM



Re: [Qemu-devel] [PATCH v2 08/29] qapi-gen: New common driver for code and doc generators

2018-02-23 Thread Markus Armbruster
Eric Blake  writes:

> On 02/11/2018 03:35 AM, Markus Armbruster wrote:
>> Whenever qapi-schema.json changes, we run six programs eleven times to
>> update eleven files.  Similar for qga/qapi-schema.json.  This is
>> silly.  Replace the six programs by a single program that spits out
>> all eleven files.
>>
>> The programs become modules in new Python package qapi, along with the
>> helper library.  This requires moving them to scripts/qapi/.
>>
>> Signed-off-by: Markus Armbruster 
>> Reviewed-by: Marc-André Lureau 
>> ---
>>   .gitignore |  2 +
>>   Makefile   | 86 +--
>>   docs/devel/qapi-code-gen.txt   | 97 
>> ++
>>   monitor.c  |  2 +-
>>   qapi-schema.json   |  2 +-
>>   scripts/qapi-gen.py| 41 +
>>   scripts/qapi/__init__.py   |  0
>>   scripts/{qapi-commands.py => qapi/commands.py} | 23 ++---
>>   scripts/{qapi.py => qapi/common.py}| 18 +---
>>   scripts/{qapi2texi.py => qapi/doc.py}  | 29 ++-
>>   scripts/{qapi-event.py => qapi/events.py}  | 23 ++---
>>   scripts/{qapi-introspect.py => qapi/introspect.py} | 32 ++-
>>   scripts/{qapi-types.py => qapi/types.py}   | 34 ++--
>>   scripts/{qapi-visit.py => qapi/visit.py}   | 34 ++--
>>   tests/Makefile.include | 56 ++---
>>   tests/qapi-schema/test-qapi.py |  4 +-
>>   16 files changed, 193 insertions(+), 290 deletions(-)
>>   create mode 100755 scripts/qapi-gen.py
>>   create mode 100644 scripts/qapi/__init__.py
>>   rename scripts/{qapi-commands.py => qapi/commands.py} (94%)
>>   rename scripts/{qapi.py => qapi/common.py} (99%)
>>   rename scripts/{qapi2texi.py => qapi/doc.py} (92%)
>>   mode change 100755 => 100644
>
> Still forgot mention that the mode bit change was intentional, but not
> worth a respin for just that.
>
>>   rename scripts/{qapi-event.py => qapi/events.py} (92%)
>>   rename scripts/{qapi-introspect.py => qapi/introspect.py} (90%)
>>   rename scripts/{qapi-types.py => qapi/types.py} (90%)
>>   rename scripts/{qapi-visit.py => qapi/visit.py} (92%)
>
> Reviewed-by: Eric Blake 
>
>> +++ b/docs/devel/qapi-code-gen.txt
>
>> -$ python scripts/qapi-event.py --output-dir="qapi-generated"
>> ---prefix="example-" example-schema.json
>>   $ cat qapi-generated/example-qapi-event.h
>>   [Uninteresting stuff omitted...]
>>   @@ -1302,23 +1296,22 @@ Example:
>>   }
>> const char *const example_QAPIEvent_lookup[] = {
>> -[EXAMPLE_QAPI_EVENT_MY_EVENT] = "MY_EVENT",
>> +
>> +[EXAMPLE_QAPI_EVENT_MY_EVENT] = "MY_EVENT",
>>   [EXAMPLE_QAPI_EVENT__MAX] = NULL,
>>   };
>
> Looks like our generated code indentation has slightly regressed from
> what we would write by hand, but it's still okay for generated code
> (and the commit message in 5/29 did call that out)

I'd prefer to have this tidied up.

>> +++ b/scripts/qapi-gen.py
>
>> +++ b/scripts/qapi/commands.py
>> @@ -13,7 +13,7 @@ This work is licensed under the terms of the GNU GPL, 
>> version 2.
>>   See the COPYING file in the top-level directory.
>>   """
>>   -from qapi import *
>> +from qapi.common import *
>>   def gen_command_decl(name, arg_type, boxed, ret_type):
>> @@ -255,13 +255,8 @@ class QAPISchemaGenCommandVisitor(QAPISchemaVisitor):
>>   self._regy += gen_register_command(name, success_response)
>> -def main(argv):
>> -(input_file, output_dir, do_c, do_h, prefix, opts) = 
>> parse_command_line()
>> -
>> -blurb = '''
>> - * Schema-defined QAPI/QMP commands
>> -'''
>> -
>> +def gen_commands(schema, output_dir, prefix):
>> +blurb = ' * Schema-defined QAPI/QMP commands'
>
> We discussed whether to make the assignment to blurb be a one-liner in
> an earlier patch, but I'm also fine with the churn you have over the
> course of the series.

I ran out of time, and decided to leave this one as is.



Re: [Qemu-devel] [PATCH v2 10/29] qapi: Touch generated files only when they change

2018-02-23 Thread Markus Armbruster
Marc-Andre Lureau  writes:

> Hi
>
> On Mon, Feb 12, 2018 at 8:48 PM, Eric Blake  wrote:
>> On 02/11/2018 03:35 AM, Markus Armbruster wrote:
>>>
>>> A massive number of objects depends on QAPI-generated headers.  In my
>>> "build everything" tree, it's roughly 4800 out of 5100.  This is
>>> particularly annoying when only some of the generated files change,
>>> say for a doc fix.
>>>
>>> Improve qapi-gen.py to touch its output files only if they actually
>>> change.  Rebuild time for a QAPI doc fix drops from many minutes to a
>>> few seconds.  Rebuilds get faster for certain code changes, too.  For
>>> instance, adding a simple QMP event now recompiles less than 200
>>> instead of 4800 objects.  But adding a QAPI type is as bad as ever;
>>> we've clearly got more work to do.
>>>
>>> Signed-off-by: Markus Armbruster 
>>> Reviewed-by: Eric Blake 
>>> ---
>>>   scripts/qapi/common.py | 11 +--
>>>   1 file changed, 9 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/scripts/qapi/common.py b/scripts/qapi/common.py
>>> index 8290795dc1..2e58573a39 100644
>>> --- a/scripts/qapi/common.py
>>> +++ b/scripts/qapi/common.py
>>> @@ -1951,9 +1951,16 @@ class QAPIGen(object):
>>>   except os.error as e:
>>>   if e.errno != errno.EEXIST:
>>>   raise
>>> -f = open(os.path.join(output_dir, fname), 'w')
>>> -f.write(self._top(fname) + self._preamble + self._body
>>> +fd = os.open(os.path.join(output_dir, fname),
>>> + os.O_RDWR | os.O_CREAT, 0666)
>>
>>
>> patchew complained here for mingw; I'm not sure why.
>
> python3 syntax error.
> https://stackoverflow.com/questions/1837874/invalid-token-when-using-octal-numbers

Yes, we need to spell the mode 0o666.
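
(Purely illustrative aside: the write-only-when-it-changes idea from the
commit message, sketched in C with a simple read-compare-rewrite; the real
implementation is the Python code in scripts/qapi/common.py quoted above,
and the function name write_if_changed is made up here.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rewrite 'fname' only if its current contents differ from 'contents',
 * so the file's mtime is left alone when nothing changed and make does
 * not rebuild everything that depends on it.  Returns 0 on success.
 */
static int write_if_changed(const char *fname, const char *contents)
{
    size_t len = strlen(contents);
    FILE *f = fopen(fname, "r");

    if (f) {
        char *old = malloc(len + 1);
        /* Read one byte more than expected to detect a longer file. */
        size_t got = old ? fread(old, 1, len + 1, f) : 0;

        fclose(f);
        if (old && got == len && memcmp(old, contents, len) == 0) {
            free(old);
            return 0;           /* identical: do not touch the file */
        }
        free(old);
    }

    f = fopen(fname, "w");
    if (!f) {
        return -1;
    }
    if (fwrite(contents, 1, len, f) != len) {
        fclose(f);
        return -1;
    }
    return fclose(f);
}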



Re: [Qemu-devel] [PATCH v2 43/67] target/arm: Implement SVE Floating Point Arithmetic - Unpredicated Group

2018-02-23 Thread Peter Maydell
On 17 February 2018 at 18:22, Richard Henderson
 wrote:
> Signed-off-by: Richard Henderson 
> ---
>  target/arm/helper-sve.h| 14 +++
>  target/arm/helper.h| 19 ++
>  target/arm/translate-sve.c | 41 
>  target/arm/vec_helper.c| 94 
> ++
>  target/arm/Makefile.objs   |  2 +-
>  target/arm/sve.decode  | 10 +
>  6 files changed, 179 insertions(+), 1 deletion(-)
>  create mode 100644 target/arm/vec_helper.c
>

> +/* Floating-point trigonometric starting value.
> + * See the ARM ARM pseudocode function FPTrigSMul.
> + */
> +static float16 float16_ftsmul(float16 op1, uint16_t op2, float_status *stat)
> +{
> +float16 result = float16_mul(op1, op1, stat);
> +if (!float16_is_any_nan(result)) {
> +result = float16_set_sign(result, op2 & 1);
> +}
> +return result;
> +}
> +
> +static float32 float32_ftsmul(float32 op1, uint32_t op2, float_status *stat)
> +{
> +float32 result = float32_mul(op1, op1, stat);
> +if (!float32_is_any_nan(result)) {
> +result = float32_set_sign(result, op2 & 1);
> +}
> +return result;
> +}
> +
> +static float64 float64_ftsmul(float64 op1, uint64_t op2, float_status *stat)
> +{
> +float64 result = float64_mul(op1, op1, stat);
> +if (!float64_is_any_nan(result)) {
> +result = float64_set_sign(result, op2 & 1);
> +}
> +return result;
> +}
> +
> +#define DO_3OP(NAME, FUNC, TYPE) \
> +void HELPER(NAME)(void *vd, void *vn, void *vm, void *stat, uint32_t desc) \
> +{  \
> +intptr_t i, oprsz = simd_oprsz(desc);  \
> +TYPE *d = vd, *n = vn, *m = vm;\
> +for (i = 0; i < oprsz / sizeof(TYPE); i++) {   \
> +d[i] = FUNC(n[i], m[i], stat); \
> +}  \
> +}
> +
> +DO_3OP(gvec_fadd_h, float16_add, float16)
> +DO_3OP(gvec_fadd_s, float32_add, float32)
> +DO_3OP(gvec_fadd_d, float64_add, float64)
> +
> +DO_3OP(gvec_fsub_h, float16_sub, float16)
> +DO_3OP(gvec_fsub_s, float32_sub, float32)
> +DO_3OP(gvec_fsub_d, float64_sub, float64)
> +
> +DO_3OP(gvec_fmul_h, float16_mul, float16)
> +DO_3OP(gvec_fmul_s, float32_mul, float32)
> +DO_3OP(gvec_fmul_d, float64_mul, float64)
> +
> +DO_3OP(gvec_ftsmul_h, float16_ftsmul, float16)
> +DO_3OP(gvec_ftsmul_s, float32_ftsmul, float32)
> +DO_3OP(gvec_ftsmul_d, float64_ftsmul, float64)
> +
> +#ifdef TARGET_AARCH64

This seems a bit odd given SVE is AArch64-only anyway...

> +
> +DO_3OP(gvec_recps_h, helper_recpsf_f16, float16)
> +DO_3OP(gvec_recps_s, helper_recpsf_f32, float32)
> +DO_3OP(gvec_recps_d, helper_recpsf_f64, float64)
> +
> +DO_3OP(gvec_rsqrts_h, helper_rsqrtsf_f16, float16)
> +DO_3OP(gvec_rsqrts_s, helper_rsqrtsf_f32, float32)
> +DO_3OP(gvec_rsqrts_d, helper_rsqrtsf_f64, float64)
> +
> +#endif
> +#undef DO_3OP

> +### SVE Floating Point Arithmetic - Unpredicated Group
> +
> +# SVE floating-point arithmetic (unpredicated)
> +FADD_zzz   01100101 .. 0 . 000 000 . . @rd_rn_rm
> +FSUB_zzz   01100101 .. 0 . 000 001 . . @rd_rn_rm
> +FMUL_zzz   01100101 .. 0 . 000 010 . . @rd_rn_rm
> +FTSMUL 01100101 .. 0 . 000 011 . . @rd_rn_rm
> +FRECPS 01100101 .. 0 . 000 110 . . @rd_rn_rm
> +FRSQRTS01100101 .. 0 . 000 111 . . 
> @rd_rn_rm

Another misaligned line.

Reviewed-by: Peter Maydell 

thanks
-- PMM



Re: [Qemu-devel] [PATCH v2 14/67] target/arm: Implement SVE Integer Arithmetic - Unary Predicated Group

2018-02-23 Thread Richard Henderson
On 02/23/2018 05:08 AM, Peter Maydell wrote:
>> +# SVE unary bit operations (predicated)
>> +# Note esz != 0 for FABS and FNEG.
>> +CLS0100 .. 011 000 101 ... . . @rd_pg_rn
>> +CLZ0100 .. 011 001 101 ... . . @rd_pg_rn
>> +CNT_zpz0100 .. 011 010 101 ... . . 
>> @rd_pg_rn
>> +CNOT   0100 .. 011 011 101 ... . . @rd_pg_rn
>> +NOT_zpz0100 .. 011 110 101 ... . . 
>> @rd_pg_rn
>> +FABS   0100 .. 011 100 101 ... . . @rd_pg_rn
>> +FNEG   0100 .. 011 101 101 ... . . @rd_pg_rn
> 
> Indentation seems to be a bit skew for the _zpz lines.

There are tabs in here.  I know they're not allowed for C, but this isn't.


r~



Re: [Qemu-devel] [PATCH v2 14/29] qapi: Concentrate QAPISchemaParser.exprs updates in .__init__()

2018-02-23 Thread Markus Armbruster
Michael Roth  writes:

> Quoting Markus Armbruster (2018-02-11 03:35:52)
>> Signed-off-by: Markus Armbruster 
>> Reviewed-by: Marc-André Lureau 
>> ---
>>  scripts/qapi/common.py | 15 +--
>>  1 file changed, 9 insertions(+), 6 deletions(-)
>> 
>> diff --git a/scripts/qapi/common.py b/scripts/qapi/common.py
>> index dce289ae21..cc5a5941dd 100644
>> --- a/scripts/qapi/common.py
>> +++ b/scripts/qapi/common.py
>> @@ -290,8 +290,12 @@ class QAPISchemaParser(object):
>>  if not isinstance(include, str):
>>  raise QAPISemError(info,
>> "Value of 'include' must be a 
>> string")
>> -self._include(include, info, os.path.dirname(self.fname),
>> -  previously_included)
>> +exprs_include = self._include(include, info,
>> +  os.path.dirname(self.fname),
>> +  previously_included)
>> +if exprs_include:
>> +self.exprs.extend(exprs_include.exprs)
>> +self.docs.extend(exprs_include.docs)
>>  elif "pragma" in expr:
>>  self.reject_expr_doc(cur_doc)
>>  if len(expr) != 1:
>> @@ -334,14 +338,13 @@ class QAPISchemaParser(object):
>> 
>>  # skip multiple include of the same file
>>  if incl_abs_fname in previously_included:
>> -return
>> +return None
>> +
>>  try:
>>  fobj = open(incl_fname, 'r')
>>  except IOError as e:
>>  raise QAPISemError(info, '%s: %s' % (e.strerror, incl_fname))
>> -exprs_include = QAPISchemaParser(fobj, previously_included, info)
>> -self.exprs.extend(exprs_include.exprs)
>> -self.docs.extend(exprs_include.docs)
>
> minor nit, but the function of _include() seems more appropriately
> described now as _parse_include() or _parse_if_new() or something similar.

I'm open to renaming, but "parse" doesn't really fit.  _include()'s
caller checks syntax, then calls _include() to check semantic
constraints and evaluate the include, then splices in the evaluated
result.

> Maybe it would be more readable to just move the previously_included
> checks into init() and just drop _include() entirely?

Maybe.  But the conditional gets rather long then.

> Reviewed-by: Michael Roth 

Thanks!



Re: [Qemu-devel] [Qemu-arm] [PATCH v2 21/67] target/arm: Implement SVE floating-point exponential accelerator

2018-02-23 Thread Richard Henderson
On 02/23/2018 05:48 AM, Peter Maydell wrote:
>> +void HELPER(sve_fexpa_d)(void *vd, void *vn, uint32_t desc)
>> +{
>> +static const uint64_t coeff[] = {
>> +0x0, 0x02C9A3E778061, 0x059B0D3158574, 0x0874518759BC8,
>> +0x0B5586CF9890F, 0x0E3EC32D3D1A2, 0x11301D0125B51, 0x1429AAEA92DE0,
>> +0x172B83C7D517B, 0x1A35BEB6FCB75, 0x1D4873168B9AA, 0x2063B88628CD6,
>> +0x2387A6E756238, 0x26B4565E27CDD, 0x29E9DF51FDEE1, 0x2D285A6E4030B,
>> +0x306FE0A31B715, 0x33C08B26416FF, 0x371A7373AA9CB, 0x3A7DB34E59FF7,
>> +0x3DEA64C123422, 0x4160A21F72E2A, 0x44E086061892D, 0x486A2B5C13CD0,
>> +0x4BFDAD5362A27, 0x4F9B2769D2CA7, 0x5342B569D4F82, 0x56F4736B527DA,
>> +0x5AB07DD485429, 0x5E76F15AD2148, 0x6247EB03A5585, 0x6623882552225,
>> +0x6A09E667F3BCD, 0x6DFB23C651A2F, 0x71F75E8EC5F74, 0x75FEB564267C9,
>> +0x7A11473EB0187, 0x7E2F336CF4E62, 0x82589994CCE13, 0x868D99B4492ED,
>> +0x8ACE5422AA0DB, 0x8F1AE99157736, 0x93737B0CDC5E5, 0x97D829FDE4E50,
>> +0x9C49182A3F090, 0xA0C667B5DE565, 0xA5503B23E255D, 0xA9E6B5579FDBF,
>> +0xAE89F995AD3AD, 0xB33A2B84F15FB, 0xB7F76F2FB5E47, 0xBCC1E904BC1D2,
>> +0xC199BDD85529C, 0xC67F12E57D14B, 0xCB720DCEF9069, 0xD072D4A07897C,
>> +0xD5818DCFBA487, 0xDA9E603DB3285, 0xDFC97337B9B5F, 0xE502EE78B3FF6,
>> +0xEA4AFA2A490DA, 0xEFA1BEE615A27, 0xF50765B6E4540, 0xFA7C1819E90D8,
> 
> This confused me at first because it looks like these are 64-bit numbers
> but they are only 52 bits. Maybe comment? (or add the leading '000'?)

Interesting... I didn't even notice.  This was pure cut-and-paste from the
pseudocode.  As such, with the comment, I wouldn't modify them.


r~



Re: [Qemu-devel] [PATCH v2 25/67] target/arm: Implement SVE Integer Wide Immediate - Predicated Group

2018-02-23 Thread Richard Henderson
On 02/23/2018 06:18 AM, Peter Maydell wrote:
>> +mm = (mm & 0xff) * (-1ull / 0xff);
> 
> What is this expression doing? I guess from context that it's
> replicating the low 8 bits of mm across the 64-bit value,
> but this is too obscure to do without a comment or wrapping
> it in a helper function with a useful name, I think.

I do have a helper now -- dup_const.  I thought I'd converted all of the uses,
but clearly missed one/some.
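
(For illustration only -- a self-contained sketch, not taken from the patch,
of what that expression computes; the helper name dup_byte is made up, the
real helper in the tree is dup_const:)

#include <stdint.h>

/* Replicate the low byte of 'x' into every byte of a 64-bit value. */
static inline uint64_t dup_byte(uint64_t x)
{
    /* -1ull / 0xff == 0x0101010101010101, so multiplying the low byte
     * by it copies that byte into all eight byte lanes.
     */
    return (x & 0xff) * (-1ull / 0xff);
}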


r~



Re: [Qemu-devel] [PATCH v2 17/29] qapi: Record 'include' directives in intermediate representation

2018-02-23 Thread Markus Armbruster
Eric Blake  writes:

> On 02/11/2018 03:35 AM, Markus Armbruster wrote:
>> The include directive permits modular QAPI schemata, but the generated
>> code is monolithic all the same.  To permit generating modular code,
>> the front end needs to pass more information on inclusions to the back
>> ends.  The commit before last added the necessary information to the
>> parse tree.  This commit adds it to the intermediate representation
>> and its QAPISchemaVisitor.  A later commit will use this to to
>> generate modular code.
>>
>> New entity QAPISchemaInclude represents inclusions.  Call new visitor
>> method visit_include() for it, so visitors can see the sub-modules a
>> module includes.
>>
>> Note that unlike other entities, QAPISchemaInclude has no name, and is
>> therefore not added to entity_dict.
>>
>> New QAPISchemaEntity attribute @module names the entity's source file.
>> Call new visitor method visit_module() when it changes during a visit,
>> so visitors can keep track of the module being visited.
>>
>> Signed-off-by: Markus Armbruster 
>> Reviewed-by: Marc-André Lureau 
>> ---
>> @@ -1479,16 +1497,19 @@ class QAPISchema(object):
>>   self._entity_dict = {}
>>   self._predefining = True
>>   self._def_predefineds()
>> -self._predefining = False
>>   self._def_exprs(exprs)
>>   self.check()
>
> Why does self._predfining not need to be toggled anymore?  Do we even
> need this variable any more...
>
>>
>>  def _def_entity(self, ent):
>>  # Only the predefined types are allowed to not have info
>>  assert ent.info or self._predefining
>> -assert ent.name not in self._entity_dict
>
> ...and/or is this assert now worthless?
>
>
>> +++ b/tests/qapi-schema/comments.out
>> @@ -1,4 +1,5 @@
>>   object q_empty
>>   enum QType ['none', 'qnull', 'qnum', 'qstring', 'qdict', 'qlist', 'qbool']
>>   prefix QTYPE
>> +module comments.json
>>   enum Status ['good', 'bad', 'ugly']
>
> Based on the generated output, it looks like you can tell whether you
> are in the predefining stage by not having any module at all; the
> first visit_module call is what flips the switch that everything else
> is defined by a module and must therefore have associated info.

Editing accident.  I started down the road you described, decided I lack
the time to reach its end, then failed to back out completely.



Re: [Qemu-devel] [PATCH v2 19/29] qapi: Make code-generating visitors use QAPIGen more

2018-02-23 Thread Markus Armbruster
Michael Roth  writes:

> Quoting Markus Armbruster (2018-02-11 03:35:57)
>> The use of QAPIGen is rather shallow so far: most of the output
>> accumulation is not converted.  Take the next step: convert output
>> accumulation in the code-generating visitor classes.  Helper functions
>> outside these classes are not converted.
>> 
>> Signed-off-by: Markus Armbruster 
>> ---
>>  scripts/qapi/commands.py   | 71 
>>  scripts/qapi/common.py | 13 
>>  scripts/qapi/doc.py| 74 --
>>  scripts/qapi/events.py | 55 ---
>>  scripts/qapi/introspect.py | 56 +---
>>  scripts/qapi/types.py  | 81 
>> +++---
>>  scripts/qapi/visit.py  | 80 
>> +++--
>>  7 files changed, 188 insertions(+), 242 deletions(-)
>> 
>
> 
>
>> diff --git a/scripts/qapi/common.py b/scripts/qapi/common.py
>> index 29d98ca934..31d2f73e7e 100644
>> --- a/scripts/qapi/common.py
>> +++ b/scripts/qapi/common.py
>> @@ -2049,3 +2049,16 @@ class QAPIGenDoc(QAPIGen):
>>  def _top(self, fname):
>>  return (QAPIGen._top(self, fname)
>>  + '@c AUTOMATICALLY GENERATED, DO NOT MODIFY\n\n')
>> +
>> +
>> +class QAPISchemaMonolithicCVisitor(QAPISchemaVisitor):
>> +
>> +def __init__(self, prefix, what, blurb, pydoc):
>> +self._prefix = prefix
>> +self._what = what
>> +self._genc = QAPIGenC(blurb, pydoc)
>> +self._genh = QAPIGenH(blurb, pydoc)
>> +
>> +def write(self, output_dir):
>> +self._genc.write(output_dir, self._prefix + self._what + '.c')
>> +self._genh.write(output_dir, self._prefix + self._what + '.h')
>
> minor nit: since subclasses of QAPISchemaVisitor and
> QAPISchemaMonolithicCVisitor all rely on .write() now, should we declare it
> in the abstract QAPISchemaVisitor?

Perhaps.  If they had more in common, the case for an abstract super
class would be clearer.

> Reviewed-by: Michael Roth 

Thanks!



Re: [Qemu-devel] [PATCH v2 20/29] qapi/types qapi/visit: Generate built-in stuff into separate files

2018-02-23 Thread Markus Armbruster
Eric Blake  writes:

> On 02/11/2018 03:35 AM, Markus Armbruster wrote:
>> Linking code from multiple separate QAPI schemata into the same
>> program is possible, but involves some weirdness around built-in
>> types:
>>
>> * We generate code for built-in types into .c only with option
>>--builtins.  The user is responsible for generating code for exactly
>>one QAPI schema per program with --builtins.
>>
>> * We generate code for built-in types into .h regardless of
>>--builtins, but guarded by #ifndef QAPI_VISIT_BUILTIN.  Because all
>>copies of this code are exactly the same, including any combination
>>of these headers works.
>>
>> Replace this contraption by something more conventional: generate code
>> for built-in types into their very own files: qapi-builtin-types.c,
>> qapi-builtin-visit.c, qapi-builtin-types.h, qapi-builtin-visit.h, but
>> only with --builtins.  Obey --output-dir, but ignore --prefix for
>> them.
>>
>> Make qapi-types.h include qapi-builtin-types.h.  With multiple
>> schemata you now have multiple qapi-types.[ch], but only one
>> qapi-builtin-types.[ch].  Same for qapi-visit.[ch] and
>> qapi-builtin-visit.[ch].
>>
>> Bonus: if all you need is built-in stuff, you can include a much
>> smaller header.  To be exploited shortly.
>>
>> Signed-off-by: Markus Armbruster 
>> ---
>> @@ -2046,6 +2046,7 @@ class QAPIGenH(QAPIGenC):
>>   class QAPIGenDoc(QAPIGen):
>> +
>>   def _top(self, fname):
>>   return (QAPIGen._top(self, fname)
>>   + '@c AUTOMATICALLY GENERATED, DO NOT MODIFY\n\n')
>
> Does this hunk belong in an earlier patch?

Yes: PATCH 05.

> Otherwise,
> Reviewed-by: Eric Blake 

Thanks!



Re: [Qemu-devel] [Qemu-arm] [PATCH v2 26/67] target/arm: Implement SVE Permute - Extract Group

2018-02-23 Thread Richard Henderson
On 02/23/2018 06:24 AM, Peter Maydell wrote:
> On 17 February 2018 at 18:22, Richard Henderson
>  wrote:
>> Signed-off-by: Richard Henderson 
>> ---
>>  target/arm/helper-sve.h|  2 ++
>>  target/arm/sve_helper.c| 81 
>> ++
>>  target/arm/translate-sve.c | 29 +
>>  target/arm/sve.decode  |  9 +-
>>  4 files changed, 120 insertions(+), 1 deletion(-)
>>
>> diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
>> index 79493ab647..94f4356ce9 100644
>> --- a/target/arm/helper-sve.h
>> +++ b/target/arm/helper-sve.h
>> @@ -414,6 +414,8 @@ DEF_HELPER_FLAGS_4(sve_cpy_z_h, TCG_CALL_NO_RWG, void, 
>> ptr, ptr, i64, i32)
>>  DEF_HELPER_FLAGS_4(sve_cpy_z_s, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
>>  DEF_HELPER_FLAGS_4(sve_cpy_z_d, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
>>
>> +DEF_HELPER_FLAGS_4(sve_ext, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
>> +
>>  DEF_HELPER_FLAGS_5(sve_and_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
>> i32)
>>  DEF_HELPER_FLAGS_5(sve_bic_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
>> i32)
>>  DEF_HELPER_FLAGS_5(sve_eor_, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, 
>> i32)
>> diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
>> index 6a95d1ec48..fb3f54300b 100644
>> --- a/target/arm/sve_helper.c
>> +++ b/target/arm/sve_helper.c
>> @@ -1469,3 +1469,84 @@ void HELPER(sve_cpy_z_d)(void *vd, void *vg, uint64_t 
>> val, uint32_t desc)
>>  d[i] = (pg[H1(i)] & 1 ? val : 0);
>>  }
>>  }
>> +
>> +/* Big-endian hosts need to frob the byte indicies.  If the copy
>> + * happens to be 8-byte aligned, then no frobbing necessary.
>> + */
> 
> Have you run risu tests with a big endian host?

Some, early on.  It's probably time to do it again.

Running those tests was why I dropped the ZIP/UZP/TRN patches from the host
vector support patch set.  Supporting those in an endian-agnostic way is
incompatible with our "pdp-endian-like" storage of vectors for ARM -- we would
have to put the vectors in full host-endian order for that.

In the meantime, the frobbing within helpers does work.


r~



Re: [Qemu-devel] [PATCH v2 27/29] qapi: Move qapi-schema.json to qapi/, rename generated files

2018-02-23 Thread Markus Armbruster
Eric Blake  writes:

> On 02/11/2018 03:36 AM, Markus Armbruster wrote:
>> Move qapi-schema.json to qapi/, so it's next to its modules, and all
>> files get generated to qapi/, not just the ones generated for modules.
>>
>> Consistently name the generated files qapi-MODULE.EXT:
>> qmp-commands.[ch] become qapi-commands.[ch], qapi-event.[ch] become
>> qapi-events.[ch], and qmp-introspect.[ch] become qapi-introspect.[ch].
>> This gets rid of the temporary hacks in scripts/qapi/commands.py and
>> scripts/qapi/events.py.
>
> Ah, so my parallel series that proposed naming the file
> qapi/qmp-schema.qapi gets interesting, with your patch favoring the
> qapi- naming everywhere.  I'll have to think about how much (or
> little) of my series to rebase on top of this (I like my notion of
> renaming to the .qapi suffix, though, as we really are using files
> that aren't JSON, but only resemble it).
>
>>
>> Signed-off-by: Markus Armbruster 
>> ---
>
>> +++ b/.gitignore
>> @@ -29,8 +29,8 @@
>>   /qga/qapi-generated
>>   /qapi-generated
>>   /qapi-gen-timestamp
>> -/qapi-builtin-types.[ch]
>> -/qapi-builtin-visit.[ch]
>> +/qapi/qapi-builtin-types.[ch]
>> +/qapi/qapi-builtin-visit.[ch]
>
> Might be some interesting churn if you like my idea of using globs for
> easier maintenance of this file.
>
>> +++ b/tpm.c
>> @@ -182,7 +182,6 @@ int tpm_config_parse(QemuOptsList *opts_list, const char 
>> *optarg)
>> /*
>>* Walk the list of active TPM backends and collect information about them
>> - * following the schema description in qapi-schema.json.
>>*/
>
> Should the overall comment keep the trailing '.'?

I'm fine either way.

> Reviewed-by: Eric Blake 

Thanks!



Re: [Qemu-devel] [PATCH] loader: don't perform overlapping address check for memory region ROM images

2018-02-23 Thread Peter Maydell
On 23 February 2018 at 11:29, Mark Cave-Ayland
 wrote:
> All memory region ROM images have a base address of 0 which causes the 
> overlapping
> address check to fail if more than one memory region ROM image is present, or 
> an
> existing ROM image is loaded at address 0.
>
> Make sure that we ignore the overlapping address check in
> rom_check_and_register_reset() if this is a memory region ROM image. In 
> particular
> this fixes the "rom: requested regions overlap" error on startup when trying 
> to
> run qemu-system-sparc with a -kernel image since commit 7497638642: "tcx: 
> switch to
> load_image_mr() and remove prom_addr hack".
>
> Suggested-by: Peter Maydell 
> Signed-off-by: Mark Cave-Ayland 

Reviewed-by: Peter Maydell 

Do you want to take this via your sparc tree?

thanks
-- PMM



Re: [Qemu-devel] [PATCH v2 40/67] target/arm: Implement SVE Integer Compare - Scalars Group

2018-02-23 Thread Peter Maydell
On 17 February 2018 at 18:22, Richard Henderson
 wrote:
> Signed-off-by: Richard Henderson 
> ---
>  target/arm/helper-sve.h|  2 +
>  target/arm/sve_helper.c| 31 
>  target/arm/translate-sve.c | 92 
> ++
>  target/arm/sve.decode  |  8 
>  4 files changed, 133 insertions(+)
>
> diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
> index dd4f8f754d..1863106d0f 100644
> --- a/target/arm/helper-sve.h
> +++ b/target/arm/helper-sve.h
> @@ -678,3 +678,5 @@ DEF_HELPER_FLAGS_4(sve_brkn, TCG_CALL_NO_RWG, void, ptr, 
> ptr, ptr, i32)
>  DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
>
>  DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
> +
> +DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
> diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
> index dd884bdd1c..80b78da834 100644
> --- a/target/arm/sve_helper.c
> +++ b/target/arm/sve_helper.c
> @@ -2716,3 +2716,34 @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t 
> pred_desc)
>  }
>  return sum;
>  }
> +
> +uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)

This could really use a comment about what part of the overall
instruction it's doing.

> +{
> +uintptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
> +intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
> +uint64_t esz_mask = pred_esz_masks[esz];
> +ARMPredicateReg *d = vd;
> +uint32_t flags;
> +intptr_t i;
> +
> +/* Begin with a zero predicate register.  */
> +flags = do_zero(d, oprsz);
> +if (count == 0) {
> +return flags;
> +}
> +
> +/* Scale from predicate element count to bits.  */
> +count <<= esz;
> +/* Bound to the bits in the predicate.  */
> +count = MIN(count, oprsz * 8);
> +
> +/* Set all of the requested bits.  */
> +for (i = 0; i < count / 64; ++i) {
> +d->p[i] = esz_mask;
> +}
> +if (count & 63) {
> +d->p[i] = ~(-1ull << (count & 63)) & esz_mask;
> +}
> +
> +return predtest_ones(d, oprsz, esz_mask);
> +}
> diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
> index 038800cc86..4b92a55c21 100644
> --- a/target/arm/translate-sve.c
> +++ b/target/arm/translate-sve.c
> @@ -2847,6 +2847,98 @@ static void trans_SINCDECP_z(DisasContext *s, 
> arg_incdec2_pred *a,
>  do_sat_addsub_vec(s, a->esz, a->rd, a->rn, val, a->u, a->d);
>  }
>
> +static void trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
> +{
> +TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
> +TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
> +TCGv_i64 t0 = tcg_temp_new_i64();
> +TCGv_i64 t1 = tcg_temp_new_i64();
> +TCGv_i32 t2, t3;
> +TCGv_ptr ptr;
> +unsigned desc, vsz = vec_full_reg_size(s);
> +TCGCond cond;
> +
> +if (!a->sf) {
> +if (a->u) {
> +tcg_gen_ext32u_i64(op0, op0);
> +tcg_gen_ext32u_i64(op1, op1);
> +} else {
> +tcg_gen_ext32s_i64(op0, op0);
> +tcg_gen_ext32s_i64(op1, op1);
> +}
> +}
> +
> +/* For the helper, compress the different conditions into a computation
> + * of how many iterations for which the condition is true.
> + *
> + * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
> + * 2**64 iterations, overflowing to 0.  Of course, predicate registers
> + * aren't that large, so any value >= predicate size is sufficient.
> + */
> +tcg_gen_sub_i64(t0, op1, op0);
> +
> +/* t0 = MIN(op1 - op0, vsz).  */
> +if (a->eq) {
> +/* Equality means one more iteration.  */
> +tcg_gen_movi_i64(t1, vsz - 1);
> +tcg_gen_movcond_i64(TCG_COND_LTU, t0, t0, t1, t0, t1);
> +tcg_gen_addi_i64(t0, t0, 1);
> +} else {
> +tcg_gen_movi_i64(t1, vsz);
> +tcg_gen_movcond_i64(TCG_COND_LTU, t0, t0, t1, t0, t1);
> +}
> +
> +/* t0 = (condition true ? t0 : 0).  */
> +cond = (a->u
> +? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
> +: (a->eq ? TCG_COND_LE : TCG_COND_LT));
> +tcg_gen_movi_i64(t1, 0);
> +tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
> +
> +t2 = tcg_temp_new_i32();
> +tcg_gen_extrl_i64_i32(t2, t0);
> +tcg_temp_free_i64(t0);
> +tcg_temp_free_i64(t1);
> +
> +desc = (vsz / 8) - 2;
> +desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
> +t3 = tcg_const_i32(desc);
> +
> +ptr = tcg_temp_new_ptr();
> +tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
> +
> +gen_helper_sve_while(t2, ptr, t2, t3);
> +do_pred_flags(t2);
> +
> +tcg_temp_free_ptr(ptr);
> +tcg_temp_free_i32(t2);
> +tcg_temp_free_i32(t3);
> +}

I got confused by this -- it is too far different from what the
pseudocode is doing. Could we have more explanatory comments, please?


Re: [Qemu-devel] [PATCH v8 09/21] null: Switch to .bdrv_co_block_status()

2018-02-23 Thread Kevin Wolf
Am 23.02.2018 um 17:43 hat Eric Blake geschrieben:
> > OFFSET_VALID | DATA might be excusable because I can see that it's
> > convenient that a protocol driver refers to itself as *file instead of
> > returning NULL there and then the offset is valid (though it would be
> > pointless to actually follow the file pointer), but OFFSET_VALID without
> > DATA probably isn't.
> 
> So OFFSET_VALID | DATA for a protocol BDS is not just convenient, but
> necessary to avoid breaking qemu-img map output.  But you are also right
> that OFFSET_VALID without data makes little sense at a protocol layer. So
> with that in mind, I'm auditing all of the protocol layers to make sure
> OFFSET_VALID ends up as something sane.

That's one way to look at it.

The other way is that qemu-img map shouldn't ask the protocol layer for
its offset because it already knows the offset (it is what it passes as
a parameter to bdrv_co_block_status).

Anyway, it's probably not worth changing the interface, we should just
make sure that the return values of the individual drivers are
consistent.

Kevin



Re: [Qemu-devel] [PATCH v2 21/36] rbd: Pass BlockdevOptionsRbd to qemu_rbd_connect()

2018-02-23 Thread Kevin Wolf
Am 23.02.2018 um 17:43 hat Max Reitz geschrieben:
> On 2018-02-23 17:19, Kevin Wolf wrote:
> > Am 23.02.2018 um 00:25 hat Max Reitz geschrieben:
> >> On 2018-02-21 14:53, Kevin Wolf wrote:
> >>> With the conversion to a QAPI options object, the function is now
> >>> prepared to be used in a .bdrv_co_create implementation.
> >>>
> >>> Signed-off-by: Kevin Wolf 
> > 
> >>> -*s_snap = g_strdup(snap);
> >>> -*s_image_name = g_strdup(image_name);
> >>> +*s_snap = g_strdup(opts->snapshot);
> >>> +*s_image_name = g_strdup(opts->image);
> >>>  
> >>>  /* try default location when conf=NULL, but ignore failure */
> >>> -r = rados_conf_read_file(*cluster, conf);
> >>> -if (conf && r < 0) {
> >>> -error_setg_errno(errp, -r, "error reading conf file %s", conf);
> >>> +r = rados_conf_read_file(*cluster, opts->conf);
> >>> +if (opts->has_conf && r < 0) {
> >>
> >> Reading opts->conf without knowing whether opts->has_conf is true is a
> >> bit weird.  Would you mind "s->has_conf ? opts->conf : NULL" for the
> >> rados_conf_read() call?
> >>
> >> On that thought, opts->snapshot and opts->user are optional, too.  Are
> >> they guaranteed to be NULL if they haven't been specified?  Should we
> >> guard those accesses with opts->has_* queries, too?
> > 
> > These days, both the QMP marshalling code (for the outermost struct when
> > called from x-blockdev-create) and the input visitor (for nested structs
> > and non-QMP callers) initialise the objects with {0} and g_malloc0().
> > 
> > I think Markus once told me that I shouldn't do pointless has_* checks
> > any more in QMP commands, so I intentionally did the same here.
> 
> I'm a bit cautious because of non-zero defaults (like sslverify in the
> ssh driver), but as long as you're aware...

I still hope that QAPI will allow specifying default values in the
schema sometime. But yes, for the time being, not checking has_*
obviously only works when the default is 0/false/NULL.

Kevin


signature.asc
Description: PGP signature


Re: [Qemu-devel] [PATCH v2 41/67] target/arm: Implement FDUP/DUP

2018-02-23 Thread Peter Maydell
On 17 February 2018 at 18:22, Richard Henderson
 wrote:
> Signed-off-by: Richard Henderson 
> ---
>  target/arm/translate-sve.c | 35 +++
>  target/arm/sve.decode  |  8 
>  2 files changed, 43 insertions(+)
>

Reviewed-by: Peter Maydell 

thanks
-- PMM



Re: [Qemu-devel] [PATCH v2 05/29] qapi: New classes QAPIGenC, QAPIGenH, QAPIGenDoc

2018-02-23 Thread Markus Armbruster
Michael Roth  writes:

> Quoting Markus Armbruster (2018-02-11 03:35:43)
>> These classes encapsulate accumulating and writing output.
>> 
>> Convert C code generation to QAPIGenC and QAPIGenH.  The conversion is
>> rather shallow: most of the output accumulation is not converted.
>> Left for later.
>> 
>> The indentation machinery uses a single global variable indent_level,
>> even though we generally interleave creation of a .c and its .h.  It
>> should become instance variable of QAPIGenC.  Also left for later.
>> 
>> Documentation generation isn't converted, and QAPIGenDoc isn't used.
>> This will change shortly.
>> 
>> Signed-off-by: Markus Armbruster 
>> Reviewed-by: Eric Blake 
>> Reviewed-by: Marc-André Lureau 
>
> 2 minor nits below, but in any case:
>
> Reviewed-by: Michael Roth 
>
>> ---
>>  scripts/qapi-commands.py   | 23 +--
>>  scripts/qapi-event.py  | 22 ++-
>>  scripts/qapi-introspect.py | 18 +
>>  scripts/qapi-types.py  | 22 ++-
>>  scripts/qapi-visit.py  | 22 ++-
>>  scripts/qapi.py| 99 
>> +-
>>  6 files changed, 112 insertions(+), 94 deletions(-)
>> 
>> diff --git a/scripts/qapi-commands.py b/scripts/qapi-commands.py
>> index c3aa52fce1..8d38ade076 100644
>> --- a/scripts/qapi-commands.py
>> +++ b/scripts/qapi-commands.py
>> @@ -260,12 +260,10 @@ blurb = '''
>>   * Schema-defined QAPI/QMP commands
>>  '''
>> 
>> -(fdef, fdecl) = open_output(output_dir, do_c, do_h, prefix,
>> -'qmp-marshal.c', 'qmp-commands.h',
>> -blurb, __doc__)
>> -
>> -fdef.write(mcgen('''
>> +genc = QAPIGenC(blurb, __doc__)
>> +genh = QAPIGenH(blurb, __doc__)
>> 
>> +genc.add(mcgen('''
>>  #include "qemu/osdep.h"
>>  #include "qemu-common.h"
>>  #include "qemu/module.h"
>> @@ -280,20 +278,23 @@ fdef.write(mcgen('''
>>  #include "%(prefix)sqmp-commands.h"
>> 
>>  ''',
>> - prefix=prefix))
>> +   prefix=prefix))
>> 
>> -fdecl.write(mcgen('''
>> +genh.add(mcgen('''
>>  #include "%(prefix)sqapi-types.h"
>>  #include "qapi/qmp/dispatch.h"
>> 
>>  void %(c_prefix)sqmp_init_marshal(QmpCommandList *cmds);
>>  ''',
>> -  prefix=prefix, c_prefix=c_name(prefix, protect=False)))
>> +   prefix=prefix, c_prefix=c_name(prefix, protect=False)))
>> 
>>  schema = QAPISchema(input_file)
>>  vis = QAPISchemaGenCommandVisitor()
>>  schema.visit(vis)
>> -fdef.write(vis.defn)
>> -fdecl.write(vis.decl)
>> +genc.add(vis.defn)
>> +genh.add(vis.decl)
>> 
>> -close_output(fdef, fdecl)
>> +if do_c:
>> +genc.write(output_dir, prefix + 'qmp-marshal.c')
>> +if do_h:
>> +genh.write(output_dir, prefix + 'qmp-commands.h')
>> diff --git a/scripts/qapi-event.py b/scripts/qapi-event.py
>> index edb9ddb650..bd7a9be3dc 100644
>> --- a/scripts/qapi-event.py
>> +++ b/scripts/qapi-event.py
>> @@ -176,11 +176,10 @@ blurb = '''
>>   * Schema-defined QAPI/QMP events
>>  '''
>> 
>> -(fdef, fdecl) = open_output(output_dir, do_c, do_h, prefix,
>> -'qapi-event.c', 'qapi-event.h',
>> -blurb, __doc__)
>> +genc = QAPIGenC(blurb, __doc__)
>> +genh = QAPIGenH(blurb, __doc__)
>> 
>> -fdef.write(mcgen('''
>> +genc.add(mcgen('''
>>  #include "qemu/osdep.h"
>>  #include "qemu-common.h"
>>  #include "%(prefix)sqapi-event.h"
>> @@ -191,21 +190,24 @@ fdef.write(mcgen('''
>>  #include "qapi/qmp-event.h"
>> 
>>  ''',
>> - prefix=prefix))
>> +   prefix=prefix))
>> 
>> -fdecl.write(mcgen('''
>> +genh.add(mcgen('''
>>  #include "qapi/util.h"
>>  #include "%(prefix)sqapi-types.h"
>> 
>>  ''',
>> -  prefix=prefix))
>> +   prefix=prefix))
>> 
>>  event_enum_name = c_name(prefix + 'QAPIEvent', protect=False)
>> 
>>  schema = QAPISchema(input_file)
>>  vis = QAPISchemaGenEventVisitor()
>>  schema.visit(vis)
>> -fdef.write(vis.defn)
>> -fdecl.write(vis.decl)
>> +genc.add(vis.defn)
>> +genh.add(vis.decl)
>> 
>> -close_output(fdef, fdecl)
>> +if do_c:
>> +genc.write(output_dir, prefix + 'qapi-event.c')
>> +if do_h:
>> +genh.write(output_dir, prefix + 'qapi-event.h')
>> diff --git a/scripts/qapi-introspect.py b/scripts/qapi-introspect.py
>> index ebe8706f41..3d65690fe3 100644
>> --- a/scripts/qapi-introspect.py
>> +++ b/scripts/qapi-introspect.py
>> @@ -181,21 +181,23 @@ blurb = '''
>>   * QAPI/QMP schema introspection
>>  '''
>> 
>> -(fdef, fdecl) = open_output(output_dir, do_c, do_h, prefix,
>> -'qmp-introspect.c', 'qmp-introspect.h',
>> -blurb, __doc__)
>> +genc = QAPIGenC(blurb, __doc__)
>> +genh = QAPIGenH(blurb, __doc__)
>> 
>> -fdef.write(mcgen('''
>> +genc.add(mcgen('''
>>  #include "qemu/osdep.h"
>>  #include "%(prefix)sqmp-introspect.h"
>> 
>>  ''',
>> -

Re: [Qemu-devel] [PATCH v2 09/29] qapi-gen: Convert from getopt to argparse

2018-02-23 Thread Markus Armbruster
Michael Roth  writes:

> Quoting Markus Armbruster (2018-02-11 03:35:47)
>> argparse is nicer to use than getopt, and gives us --help almost for
>> free.
>> 
>> Signed-off-by: Markus Armbruster 
>> ---
>>  scripts/qapi-gen.py| 48 ++--
>>  scripts/qapi/common.py | 43 ---
>>  2 files changed, 30 insertions(+), 61 deletions(-)
>> 
>> diff --git a/scripts/qapi-gen.py b/scripts/qapi-gen.py
>> index 2100ca1145..e5be484e3e 100755
>> --- a/scripts/qapi-gen.py
>> +++ b/scripts/qapi-gen.py
>> @@ -4,8 +4,11 @@
>>  # This work is licensed under the terms of the GNU GPL, version 2 or later.
>>  # See the COPYING file in the top-level directory.
>> 
>> +from __future__ import print_function
>> +import argparse
>> +import re
>>  import sys
>> -from qapi.common import parse_command_line, QAPISchema
>> +from qapi.common import QAPISchema
>>  from qapi.types import gen_types
>>  from qapi.visit import gen_visit
>>  from qapi.commands import gen_commands
>> @@ -15,26 +18,35 @@ from qapi.doc import gen_doc
>> 
>> 
>>  def main(argv):
>> -(input_file, output_dir, prefix, opts) = \
>> -parse_command_line('bu', ['builtins', 'unmask-non-abi-names'])
>> +parser = argparse.ArgumentParser(
>> +description='Generate code from a QAPI schema')
>> +parser.add_argument('-b', '--builtins', action='store_true',
>> +help="generate code for built-in types")
>> +parser.add_argument('-o', '--output_dir', action='store', default='',
>
> Was the change from --output-dir to --output_dir intentional? The former
> seems more consistent.

Editing accident, good catch!


