Re: [Qemu-devel] [RFC v4 02/21] blockjobs: model single jobs as transactions

2018-02-27 Thread Kevin Wolf
Am 24.02.2018 um 00:51 hat John Snow geschrieben:
> model all independent jobs as single job transactions.
> 
> It's one less case we have to worry about when we add more states to the
> transition machine. This way, we can just treat all job lifetimes exactly
> the same. This helps tighten assertions of the STM graph and removes some
> conditionals that would have been needed in the coming commits adding a
> more explicit job lifetime management API.
> 
> Signed-off-by: John Snow 

Reviewed-by: Kevin Wolf 



Re: [Qemu-devel] [RFC v4 02/21] blockjobs: model single jobs as transactions

2018-02-27 Thread Eric Blake

On 02/23/2018 05:51 PM, John Snow wrote:

model all independent jobs as single job transactions.

It's one less case we have to worry about when we add more states to the
transition machine. This way, we can just treat all job lifetimes exactly
the same. This helps tighten assertions of the STM graph and removes some
conditionals that would have been needed in the coming commits adding a
more explicit job lifetime management API.

Signed-off-by: John Snow 
---


Reviewed-by: Eric Blake 


@@ -729,6 +727,17 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
             return NULL;
         }
     }
+
+    /* Single jobs are modeled as single-job transactions for sake of
+     * consolidating the job management logic */
+    if (!txn) {
+        txn = block_job_txn_new();
+        block_job_txn_add_job(txn, job);
+        block_job_txn_unref(txn);
+    } else {
+        block_job_txn_add_job(txn, job);
+    }
+
     return job;


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



[Qemu-devel] [RFC v4 02/21] blockjobs: model single jobs as transactions

2018-02-23 Thread John Snow
model all independent jobs as single job transactions.

It's one less case we have to worry about when we add more states to the
transition machine. This way, we can just treat all job lifetimes exactly
the same. This helps tighten assertions of the STM graph and removes some
conditionals that would have been needed in the coming commits adding a
more explicit job lifetime management API.

Signed-off-by: John Snow 
---
 block/backup.c   |  3 +--
 block/commit.c   |  2 +-
 block/mirror.c   |  2 +-
 block/stream.c   |  2 +-
 blockjob.c   | 25 -
 include/block/blockjob_int.h |  3 ++-
 tests/test-bdrv-drain.c  |  4 ++--
 tests/test-blockjob-txn.c| 19 +++
 tests/test-blockjob.c|  2 +-
 9 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 4a16a37229..7e254dabff 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -621,7 +621,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     }
 
     /* job->common.len is fixed, so we can't allow resize */
-    job = block_job_create(job_id, &backup_job_driver, bs,
+    job = block_job_create(job_id, &backup_job_driver, txn, bs,
                            BLK_PERM_CONSISTENT_READ,
                            BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
                            BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD,
@@ -677,7 +677,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
                        &error_abort);
     job->common.len = len;
-    block_job_txn_add_job(txn, &job->common);
 
     return &job->common;
 
diff --git a/block/commit.c b/block/commit.c
index bb6c904704..9682158ee7 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -289,7 +289,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         return;
     }
 
-    s = block_job_create(job_id, &commit_job_driver, bs, 0, BLK_PERM_ALL,
+    s = block_job_create(job_id, &commit_job_driver, NULL, bs, 0, BLK_PERM_ALL,
                          speed, BLOCK_JOB_DEFAULT, NULL, NULL, errp);
     if (!s) {
         return;
diff --git a/block/mirror.c b/block/mirror.c
index c9badc1203..6bab7cfdd8 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1166,7 +1166,7 @@ static void mirror_start_job(const char *job_id, BlockDriverState *bs,
     }
 
     /* Make sure that the source is not resized while the job is running */
-    s = block_job_create(job_id, driver, mirror_top_bs,
+    s = block_job_create(job_id, driver, NULL, mirror_top_bs,
                          BLK_PERM_CONSISTENT_READ,
                          BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
                          BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD, speed,
diff --git a/block/stream.c b/block/stream.c
index 499cdacdb0..f3b53f49e2 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -244,7 +244,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
     /* Prevent concurrent jobs trying to modify the graph structure here, we
      * already have our own plans. Also don't allow resize as the image size is
      * queried only at the job start and then cached. */
-    s = block_job_create(job_id, &stream_job_driver, bs,
+    s = block_job_create(job_id, &stream_job_driver, NULL, bs,
                          BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
                          BLK_PERM_GRAPH_MOD,
                          BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
diff --git a/blockjob.c b/blockjob.c
index 24833ef30f..7ba3683ee3 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -357,10 +357,8 @@ static void block_job_completed_single(BlockJob *job)
         }
     }
 
-    if (job->txn) {
-        QLIST_REMOVE(job, txn_list);
-        block_job_txn_unref(job->txn);
-    }
+    QLIST_REMOVE(job, txn_list);
+    block_job_txn_unref(job->txn);
     block_job_unref(job);
 }
 
@@ -647,7 +645,7 @@ static void block_job_event_completed(BlockJob *job, const char *msg)
  */
 
 void *block_job_create(const char *job_id, const BlockJobDriver *driver,
-                       BlockDriverState *bs, uint64_t perm,
+                       BlockJobTxn *txn, BlockDriverState *bs, uint64_t perm,
                        uint64_t shared_perm, int64_t speed, int flags,
                        BlockCompletionFunc *cb, void *opaque, Error **errp)
 {
@@ -729,6 +727,17 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
             return NULL;
         }
     }
+
+    /* Single jobs are modeled as single-job transactions for sake of
+     * consolidating the job management logic */
+    if (!txn) {
+        txn = block_job_txn_new();
+        block_job_txn_add_job(txn, job);
+        block_job_txn_unref(txn);
+    } else {
+        block_job_txn_add_job(txn, job);
+    }
+
     return job;
 }
 
@@ -752,13 +761,11 @@ void