Re: [PATCH v12 14/14] block: apply COR-filter to block-stream jobs

2020-12-03 Thread Vladimir Sementsov-Ogievskiy

02.12.2020 21:18, Andrey Shinkevich wrote:


On 27.10.2020 21:24, Andrey Shinkevich wrote:


On 27.10.2020 20:57, Vladimir Sementsov-Ogievskiy wrote:

27.10.2020 20:48, Andrey Shinkevich wrote:


On 27.10.2020 19:13, Vladimir Sementsov-Ogievskiy wrote:

22.10.2020 21:13, Andrey Shinkevich wrote:

This patch completes the series with the COR-filter insertion for
block-stream operations. Adding the filter makes it possible for copied
regions to be discarded in backing files during the block-stream job,
which will reduce disk overuse.
The COR-filter insertion requires changes in the iotests case
245:test_block_stream_4, which reopens the backing chain during a
block-stream job. There are changes in iotests #030 as well.
The iotests case 030:test_stream_parallel was deleted due to multiple
conflicts between concurrent job operations over the same backing
chain. The base backing node for one job is the top node for another
job, and it may change when the filter node is inserted into the backing
chain while both jobs are running. Another issue is that parts of the
backing chain are frozen by the running job and may not be changed by
the concurrent job when needed. The concept of parallel jobs with
common nodes is no longer considered vital.

Signed-off-by: Andrey Shinkevich 
---
  block/stream.c | 98 ++
  tests/qemu-iotests/030 | 51 +++-
  tests/qemu-iotests/030.out |  4 +-
  tests/qemu-iotests/141.out |  2 +-
  tests/qemu-iotests/245 | 22 +++
  5 files changed, 87 insertions(+), 90 deletions(-)

diff --git a/block/stream.c b/block/stream.c



[...]


+    s = block_job_create(job_id, &stream_job_driver, NULL, cor_filter_bs,
+                         BLK_PERM_CONSISTENT_READ,
+                         basic_flags | BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD,


I think that BLK_PERM_GRAPH_MOD is something outdated. We have chain-freeze;
what does BLK_PERM_GRAPH_MOD add to it? I don't know, and I doubt that anybody knows.



That is true for the commit/mirror jobs also. If we agree to remove the
BLK_PERM_GRAPH_MOD flag from all these jobs, that should be done in a separate
series, shouldn't it?


Hmm. At least, let's not implement new logic based on BLK_PERM_GRAPH_MOD. In
the original code it appears only in block_job_create's perm, not in shared_perm,
nor anywhere else. So, if we keep it, let's keep it as is: only in the perm of
block_job_create, without implementing additional perm/shared_perm logic.



With @perm=0 in the block_job_add_bdrv(&s->common, "active node"...), it won't.




+                         speed, creation_flags, NULL, NULL, errp);
     if (!s) {
         goto fail;
     }
+    /*
+     * Prevent concurrent jobs trying to modify the graph structure here, we
+     * already have our own plans. Also don't allow resize as the image size is
+     * queried only at the job start and then cached.
+     */
+    if (block_job_add_bdrv(&s->common, "active node", bs,
+                           basic_flags | BLK_PERM_GRAPH_MOD,


Why not 0, like for the other nodes? We don't use this BdrvChild at all, so why
require permissions?



Yes, '0' is right.


+                           basic_flags | BLK_PERM_WRITE, &error_abort)) {
+    goto fail;
+    }
+
     /* Block all intermediate nodes between bs and base, because 



[...]
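For reference, the call shape agreed on above would look roughly as follows (a
sketch only: BLK_PERM_GRAPH_MOD dropped from the shared permissions, and
@perm=0 for the "active node" child, per this discussion):

    s = block_job_create(job_id, &stream_job_driver, NULL, cor_filter_bs,
                         BLK_PERM_CONSISTENT_READ,
                         basic_flags | BLK_PERM_WRITE,
                         speed, creation_flags, NULL, NULL, errp);
    if (!s) {
        goto fail;
    }

    /* The job never uses this BdrvChild directly, so it takes no permissions. */
    if (block_job_add_bdrv(&s->common, "active node", bs, 0,
                           basic_flags | BLK_PERM_WRITE, &error_abort)) {
        goto fail;
    }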


diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index dcb4b5d..0064590 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -227,61 +227,20 @@ class TestParallelOps(iotests.QMPTestCase):
  for img in self.imgs:
  os.remove(img)
-    # Test that it's possible to run several block-stream operations
-    # in parallel in the same snapshot chain
-    @unittest.skipIf(os.environ.get('QEMU_CHECK_BLOCK_AUTO'), 'disabled in CI')
-    def test_stream_parallel(self):


Didn't we agree to add a "bottom" parameter to QMP? Then this test case can be
rewritten using node-names and the new "bottom" stream argument.



The QMP new "bottom" option is passed to the COR-driver. It is done withing the 
stream-job code. So, it works.
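For reference, the wiring inside the stream job looks roughly like this (a
sketch only; helper and option names as used elsewhere in this series):

    QDict *opts = qdict_new();

    qdict_put_str(opts, "driver", "copy-on-read");
    qdict_put_str(opts, "file", bdrv_get_node_name(bs));
    if (base_overlay) {
        /* Limit the COR-driver's work to the chain above the bottom node */
        qdict_put_str(opts, "bottom", base_overlay->node_name);
    }
    if (filter_node_name) {
        qdict_put_str(opts, "node-name", filter_node_name);
    }

    cor_filter_bs = bdrv_insert_node(bs, opts, BDRV_O_RDWR, errp);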


Yes. But we also want a "bottom" option for the stream job, and to deprecate
the "base" option. Then we can rewrite the test using the "bottom" option; all
should work.
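As an illustration, such a rewritten case might look something like this (a
sketch only; the "bottom" argument for block-stream is hypothetical at this
point, and the node names are made up):

    # Two parallel jobs, each bounded by an explicit bottom node, so the
    # inserted filter of one job never falls inside the range of the other.
    def test_stream_bottom(self):
        result = self.vm.qmp('block-stream', job_id='stream-node4',
                             device='node4', bottom='node3')
        self.assert_qmp(result, 'return', {})

        result = self.vm.qmp('block-stream', job_id='stream-node2',
                             device='node2', bottom='node1')
        self.assert_qmp(result, 'return', {})

        # Drain the completion events for both jobs, in whatever order
        pending = {'stream-node4', 'stream-node2'}
        while pending:
            for event in self.vm.get_qmp_events(wait=True):
                if event['event'] == 'BLOCK_JOB_COMPLETED':
                    pending.discard(self.dictpath(event, 'data/device'))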





I guess it will not help for the whole test. In particular, there is an issue
with freezing the child link to the COR-filter of the concurrent job; because
of it, that job fails to finish first.


We should not have such a frozen link, as our bottom node should be above the
COR-filter of the concurrent job.




The bdrv_freeze_backing_chain(bs, above_base, errp) call does that job. Max
insisted on keeping it.

Andrey
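For reference, the job-start side of that freeze, which pairs with the
unfreeze calls visible in the diff, looks roughly like this (a sketch):

    if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
        goto fail;
    }
    s->chain_frozen = true;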


I have kept test_stream_parallel() deleted in the coming v13 because it was
agreed to keep the above_base node frozen. With that, the test case cannot
pass. It also cannot pass because operations on the COR-filter node are
blocked for the parallel jobs.

Andrey



--
Best regards,
Vladimir



[PATCH v12 14/14] block: apply COR-filter to block-stream jobs

2020-10-22 Thread Andrey Shinkevich
This patch completes the series with the COR-filter insertion for
block-stream operations. Adding the filter makes it possible for copied
regions to be discarded in backing files during the block-stream job,
which will reduce disk overuse.
The COR-filter insertion requires changes in the iotests case
245:test_block_stream_4, which reopens the backing chain during a
block-stream job. There are changes in iotests #030 as well.
The iotests case 030:test_stream_parallel was deleted due to multiple
conflicts between concurrent job operations over the same backing
chain. The base backing node for one job is the top node for another
job, and it may change when the filter node is inserted into the backing
chain while both jobs are running. Another issue is that parts of the
backing chain are frozen by the running job and may not be changed by
the concurrent job when needed. The concept of parallel jobs with
common nodes is no longer considered vital.

Signed-off-by: Andrey Shinkevich 
---
 block/stream.c | 98 ++
 tests/qemu-iotests/030 | 51 +++-
 tests/qemu-iotests/030.out |  4 +-
 tests/qemu-iotests/141.out |  2 +-
 tests/qemu-iotests/245 | 22 +++
 5 files changed, 87 insertions(+), 90 deletions(-)
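
To illustrate the parallel-jobs conflict described above, consider two stream
jobs on one chain (an illustrative sketch; image names are made up):

    base <- img1 <- img2 <- img3 <- img4
           (job A: base..img2)   (job B: img2..img4)

With the COR filters inserted while both jobs run, the chain becomes:

    base <- img1 <- img2 <- cor-A <- img3 <- img4 <- cor-B

Job B's frozen range now contains job A's filter node cor-A, so job A cannot
remove its filter (a graph change) while job B is still running.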

diff --git a/block/stream.c b/block/stream.c
index 1ba74ab..f6ed315 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -17,8 +17,10 @@
 #include "block/blockjob_int.h"
 #include "qapi/error.h"
 #include "qapi/qmp/qerror.h"
+#include "qapi/qmp/qdict.h"
 #include "qemu/ratelimit.h"
 #include "sysemu/block-backend.h"
+#include "block/copy-on-read.h"
 
 enum {
     /*
@@ -33,6 +35,8 @@ typedef struct StreamBlockJob {
     BlockJob common;
     BlockDriverState *base_overlay; /* COW overlay (stream from this) */
     BlockDriverState *above_base;   /* Node directly above the base */
+    BlockDriverState *cor_filter_bs;
+    BlockDriverState *target_bs;
     BlockdevOnError on_error;
     char *backing_file_str;
     bool bs_read_only;
@@ -44,8 +48,7 @@ static int coroutine_fn stream_populate(BlockBackend *blk,
 {
     assert(bytes < SIZE_MAX);
 
-    return blk_co_preadv(blk, offset, bytes, NULL,
-                         BDRV_REQ_COPY_ON_READ | BDRV_REQ_PREFETCH);
+    return blk_co_preadv(blk, offset, bytes, NULL, BDRV_REQ_PREFETCH);
 }
 
 static void stream_abort(Job *job)
@@ -53,23 +56,20 @@ static void stream_abort(Job *job)
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
 
     if (s->chain_frozen) {
-        BlockJob *bjob = &s->common;
-        bdrv_unfreeze_backing_chain(blk_bs(bjob->blk), s->above_base);
+        bdrv_unfreeze_backing_chain(s->cor_filter_bs, s->above_base);
     }
 }
 
 static int stream_prepare(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-    BlockJob *bjob = &s->common;
-    BlockDriverState *bs = blk_bs(bjob->blk);
-    BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
+    BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
     BlockDriverState *base = bdrv_filter_or_cow_bs(s->above_base);
     BlockDriverState *base_unfiltered = NULL;
     Error *local_err = NULL;
     int ret = 0;
 
-    bdrv_unfreeze_backing_chain(bs, s->above_base);
+    bdrv_unfreeze_backing_chain(s->cor_filter_bs, s->above_base);
     s->chain_frozen = false;
 
     if (bdrv_cow_child(unfiltered_bs)) {
@@ -105,15 +105,16 @@ static void stream_clean(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockJob *bjob = &s->common;
-    BlockDriverState *bs = blk_bs(bjob->blk);
 
     /* Reopen the image back in read-only mode if necessary */
     if (s->bs_read_only) {
         /* Give up write permissions before making it read-only */
         blk_set_perm(bjob->blk, 0, BLK_PERM_ALL, &error_abort);
-        bdrv_reopen_set_read_only(bs, true, NULL);
+        bdrv_reopen_set_read_only(s->target_bs, true, NULL);
     }
 
+    bdrv_cor_filter_drop(s->cor_filter_bs);
+
     g_free(s->backing_file_str);
 }
 
@@ -121,9 +122,7 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockBackend *blk = s->common.blk;
-    BlockDriverState *bs = blk_bs(blk);
-    BlockDriverState *unfiltered_bs = bdrv_skip_filters(bs);
-    bool enable_cor = !bdrv_cow_child(s->base_overlay);
+    BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
     int64_t len;
     int64_t offset = 0;
     uint64_t delay_ns = 0;
@@ -135,21 +134,12 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
         return 0;
     }
 
-    len = bdrv_getlength(bs);
+    len = bdrv_getlength(s->target_bs);
     if (len < 0) {
         return len;
     }
     job_progress_set_remaining(&s->common.job, len);
 
-    /* Turn on copy-on-read for the whole block device so that guest read
-     * requests help us make progress.  Only do this when copying