Create a job that remains on STANDBY after a drained section, and see
that invoking job_wait_unpaused() will get it unstuck.
Signed-off-by: Max Reitz
---
tests/unit/test-blockjob.c | 121 +
1 file changed, 121 insertions(+)
diff --git a/tests/unit/test
and job_complete()
refuses to do anything).
I’m not sure we want that iotest, because it does quite a bit of I/O and
it’s unreliable, and I don’t think there’s anything I can do to make it
reliable.
Max Reitz (5):
mirror: Move open_backing_file to exit_common
mirror: Do not enter a paused job
The only job that implements .complete is the mirror job, and it can
handle completion requests just fine while the job is paused.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1945635
Signed-off-by: Max Reitz
---
job.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
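The state check being relaxed by this patch can be sketched as a toy state machine (illustrative Python with invented names; QEMU's real implementation is C in job.c):

```python
from enum import Enum, auto

class JobStatus(Enum):
    RUNNING = auto()
    READY = auto()
    STANDBY = auto()   # ready, but paused (e.g. inside a drained section)

class Job:
    """Toy model of the job state machine discussed in this thread."""
    def __init__(self):
        self.status = JobStatus.RUNNING
        self.completed = False

    def ready(self):
        self.status = JobStatus.READY

    def pause(self):
        # Draining pauses the job: a READY job enters STANDBY
        if self.status == JobStatus.READY:
            self.status = JobStatus.STANDBY

    def resume(self):
        if self.status == JobStatus.STANDBY:
            self.status = JobStatus.READY

    def complete(self):
        # The relaxed check: STANDBY (ready, but paused) is accepted
        # in addition to READY; only truly non-ready jobs are rejected.
        if self.status not in (JobStatus.READY, JobStatus.STANDBY):
            raise RuntimeError(f"job is not ready: {self.status.name}")
        self.completed = True

job = Job()
job.ready()
job.pause()        # drained section begins: READY -> STANDBY
job.complete()     # previously failed in STANDBY; now accepted
print(job.completed)  # True
```

A job that has never become READY still gets an error, which mirrors the point that only the READY/STANDBY distinction is being relaxed, not the readiness requirement itself.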
This is a graph change and therefore should be done in job-finalize
(which is what invokes mirror_exit_common()).
Signed-off-by: Max Reitz
---
block/mirror.c | 22 --
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index
Test what happens when you have multiple busy block jobs, drain all (via
an empty transaction), and immediately issue a block-job-complete on one
of the jobs.
Sometimes it will still be in STANDBY, in which case block-job-complete
used to fail. It should not.
Signed-off-by: Max Reitz
is paused.
So technically this is a no-op, but obviously the intention is to accept
block-job-complete even for jobs on standby, which we need this patch
for first.
Signed-off-by: Max Reitz
---
block/mirror.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/mirror.c b
On 09.04.21 12:07, Vladimir Sementsov-Ogievskiy wrote:
09.04.2021 12:51, Max Reitz wrote:
On 08.04.21 19:26, Vladimir Sementsov-Ogievskiy wrote:
08.04.2021 20:04, John Snow wrote:
On 4/8/21 12:58 PM, Vladimir Sementsov-Ogievskiy wrote:
job-complete command is async. Can we instead just add
On 09.04.21 11:44, Kevin Wolf wrote:
Am 08.04.2021 um 18:55 hat John Snow geschrieben:
On 4/8/21 12:20 PM, Max Reitz wrote:
block-job-complete can only be applied when the job is READY, not when
it is on STANDBY (ready, but paused). Draining a job technically pauses
it (which makes a READY
On 08.04.21 19:26, Vladimir Sementsov-Ogievskiy wrote:
08.04.2021 20:04, John Snow wrote:
On 4/8/21 12:58 PM, Vladimir Sementsov-Ogievskiy wrote:
job-complete command is async. Can we instead just add a boolean like
job->completion_requested, and set it if job-complete called in
STANDBY
On 08.04.21 18:58, Vladimir Sementsov-Ogievskiy wrote:
08.04.2021 19:20, Max Reitz wrote:
block-job-complete can only be applied when the job is READY, not when
it is on STANDBY (ready, but paused). Draining a job technically pauses
it (which makes a READY job enter STANDBY), and ending
On 08.04.21 18:55, John Snow wrote:
On 4/8/21 12:20 PM, Max Reitz wrote:
block-job-complete can only be applied when the job is READY, not when
it is on STANDBY (ready, but paused). Draining a job technically pauses
it (which makes a READY job enter STANDBY), and ending the drained
section
to have as part of the iotests.
Instead, I opted for a unit test, which allows me to cheat a bit
(specifically, locking the job IO thread before ending the drained
section).
Max Reitz (3):
job: Add job_wait_unpaused() for block-job-complete
test-blockjob: Test job_wait_unpaused()
iotests/041
then.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1945635
Signed-off-by: Max Reitz
---
include/qemu/job.h | 15 +++
blockdev.c | 3 +++
job.c | 42 ++
3 files changed, 60 insertions(+)
diff --git a/include/qemu
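The idea behind job_wait_unpaused() — block until the job has left its pause point before acting on the completion request, so a block-job-complete issued right after a drained section does not hit a job still on STANDBY — can be sketched like this (hypothetical Python model; the real function loops on aio_poll() in C):

```python
import threading
import time

class FakeJob:
    """Stand-in for a job that is paused by a drained section."""
    def __init__(self):
        self.paused = True

def job_wait_unpaused(job, timeout=5.0):
    """Poll until the job has resumed (a toy analog of the C helper;
    the sleep stands in for aio_poll(ctx, true))."""
    deadline = time.monotonic() + timeout
    while job.paused:
        if time.monotonic() > deadline:
            raise TimeoutError("job did not resume after drained section")
        time.sleep(0.001)

job = FakeJob()
# Simulate the drained section ending asynchronously on another thread
threading.Timer(0.05, lambda: setattr(job, 'paused', False)).start()
job_wait_unpaused(job)   # returns once the job has resumed
print(job.paused)        # False
```

As the later iotest discussion notes, the real helper must also fail rather than hang when the job was paused by the user instead of by a drain; this sketch only models the drain case.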
Create a job that remains on STANDBY after a drained section, and see
that invoking job_wait_unpaused() will get it unstuck.
Signed-off-by: Max Reitz
---
tests/unit/test-blockjob.c | 140 +
1 file changed, 140 insertions(+)
diff --git a/tests/unit/test
Expand test_pause() to check what happens when issuing
block-job-complete on a job that is on STANDBY because it has been
paused by the user. (This should be an error, and in particular not
hang job_wait_unpaused().)
Signed-off-by: Max Reitz
---
tests/qemu-iotests/041 | 13 -
1
On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
If main job coroutine called job_yield (while some background process
is in progress), we should give it a chance to call job_pause_point().
It will be used in backup, when moved on async block-copy.
Note that job_user_pause is not enough:
file changed, 6 insertions(+), 4 deletions(-)
Reviewed-by: Max Reitz
I’m not quite sure whether this is fit for 6.0... I think it’s too late
for rc2, so I don’t know.
Max
On 01.04.21 23:01, Connor Kuehl wrote:
Sometimes the parser needs to further split a token it has collected
from the token input stream. Right now, it does a cursory check to see
if the relevant characters appear in the token to determine if it should
break it down further.
However,
On 01.04.21 17:52, Connor Kuehl wrote:
That's qemu_rbd_unescape()'s job! No need to duplicate the labor.
Furthermore, this was causing some confusion in the parsing logic to
where the caller might test for the presence of a character to split on
like so:
if (strchr(image_name, '/')) {
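The pitfall being fixed: testing for '/' with strchr() before unescaping mistakes an escaped `\/` inside the image name for a separator. A Python sketch of splitting only on unescaped separators (function names invented; the real code is C and uses qemu_rbd_unescape()):

```python
def unescape(s: str) -> str:
    """Drop backslash escapes: r'my\/image' -> 'my/image'."""
    out, i = [], 0
    while i < len(s):
        if s[i] == '\\' and i + 1 < len(s):
            i += 1          # skip the backslash, keep the escaped char
        out.append(s[i])
        i += 1
    return ''.join(out)

def split_unescaped(s: str, sep: str):
    """Split on *unescaped* separators first, then unescape each part."""
    parts, cur, i = [], [], 0
    while i < len(s):
        if s[i] == '\\' and i + 1 < len(s):
            cur.append(s[i]); cur.append(s[i + 1]); i += 2
        elif s[i] == sep:
            parts.append(''.join(cur)); cur = []; i += 1
        else:
            cur.append(s[i]); i += 1
    parts.append(''.join(cur))
    return [unescape(p) for p in parts]

# The escaped slash belongs to the image name, not the pool separator:
print(split_unescaped(r'pool/my\/image', '/'))  # ['pool', 'my/image']
```

A naive `s.split('/')` on the same input would yield three fields and break the image name apart, which is the kind of parsing confusion the commit message describes.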
On 01.04.21 18:52, Max Reitz wrote:
On 01.04.21 17:52, Connor Kuehl wrote:
The deprecation message changed slightly at some point in the past, but
the expected output wasn't updated along with it, causing the test to fail.
Fix it, so it passes.
Signed-off-by: Connor Kuehl
---
tests/qemu-iotests
On 01.04.21 17:52, Connor Kuehl wrote:
The deprecation message changed slightly at some point in the past, but
the expected output wasn't updated along with it, causing the test to fail.
Fix it, so it passes.
Signed-off-by: Connor Kuehl
---
tests/qemu-iotests/231.out | 4 +---
1 file changed, 1
On 01.04.21 16:44, Vladimir Sementsov-Ogievskiy wrote:
01.04.2021 16:28, Max Reitz wrote:
Using common.qemu allows us to wait for specific replies, so we can for
example wait for events. This allows starting the active commit job and
then wait for it to be ready before quitting the QSD, so we
On 01.04.21 16:37, Vladimir Sementsov-Ogievskiy wrote:
01.04.2021 17:37, Vladimir Sementsov-Ogievskiy wrote:
01.04.2021 16:28, Max Reitz wrote:
The job may or may not be ready before the 'quit' is issued. Whether it
is is irrelevant; for the purpose of the test, it only needs to still
in
qsd-jobs, but we might as well make the second one use common.qemu's
infrastructure, too.)
Reported-by: Peter Maydell
Signed-off-by: Max Reitz
---
tests/qemu-iotests/tests/qsd-jobs | 55 ---
tests/qemu-iotests/tests/qsd-jobs.out | 10 -
2 files changed, 49
Sementsov-Ogievskiy
Signed-off-by: Max Reitz
---
This is an alternative to "iotests/qsd-jobs: Use common.qemu for the
QSD". I can't disagree with Vladimir that perhaps this test just should
not care about the job status events, because all that matters is that
the job is still running
For block things, we often do not need to run all of qemu, so allow
using the qemu-storage-daemon instead.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/common.qemu | 53 +++---
1 file changed, 43 insertions(+), 10 deletions(-)
diff --git a/tests/qemu-iotests
might
want to fix qsd-jobs in 6.0.
Max Reitz (2):
iotests/common.qemu: Allow using the QSD
iotests/qsd-jobs: Use common.qemu for the QSD
tests/qemu-iotests/common.qemu| 53 +-
tests/qemu-iotests/tests/qsd-jobs | 55 ---
tests/qemu
On 01.04.21 10:32, Vladimir Sementsov-Ogievskiy wrote:
31.03.2021 15:28, Max Reitz wrote:
Add a test accompanying commit 53431b9086b2832ca1aeff0c55e186e9ed79bd11
("block/mirror: Fix mirror_top's permissions").
Signed-off-by: Max Reitz
---
tests/qemu-iotests/tests/mirror-top-perms
Add a test accompanying commit 53431b9086b2832ca1aeff0c55e186e9ed79bd11
("block/mirror: Fix mirror_top's permissions").
Signed-off-by: Max Reitz
---
tests/qemu-iotests/tests/mirror-top-perms | 121 ++
tests/qemu-iotests/tests/mirror-top-perms.out | 5 +
2 fil
On 30.03.21 18:47, Vladimir Sementsov-Ogievskiy wrote:
29.03.2021 16:26, Max Reitz wrote:
pylint complains that discards1_sha256 and all_discards_sha256 are first
set in non-__init__ methods. Let's make it happy.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/tests/migrate-bitmaps
On 30.03.21 15:25, Vladimir Sementsov-Ogievskiy wrote:
30.03.2021 15:51, Max Reitz wrote:
On 30.03.21 12:51, Vladimir Sementsov-Ogievskiy wrote:
30.03.2021 12:49, Max Reitz wrote:
On 25.03.21 20:12, Vladimir Sementsov-Ogievskiy wrote:
ping. Do we want it for 6.0?
I’d rather wait. I think
On 30.03.21 12:51, Vladimir Sementsov-Ogievskiy wrote:
30.03.2021 12:49, Max Reitz wrote:
On 25.03.21 20:12, Vladimir Sementsov-Ogievskiy wrote:
ping. Do we want it for 6.0?
I’d rather wait. I think the conclusion was that guests shouldn’t hit
this because they serialize discards?
I
error messages pertaining to
'node-name'")
Signed-off-by: Connor Kuehl
Message-Id: <20210318200949.1387703-2-cku...@redhat.com>
Tested-by: Christian Borntraeger
Reviewed-by: John Snow
Signed-off-by: Max Reitz
---
tests/qemu-iotests/051.out | 6 +++---
1 file changed, 3 insertions(+), 3 de
Let's fix the reference output, which has the nice side effect of
demonstrating 15ce94a68ca's improvements.
Fixes: 15ce94a68ca6730466c565c3d29971aab3087bf1
("block/qed: bdrv_qed_do_open: deal with errp")
Signed-off-by: Max Reitz
Message-Id: <20210326141419.156831-1-mre...@redh
Implementing FUSE exports required no changes to the storage daemon, so
we forgot to document them there. Considering that both NBD and
vhost-user-blk exports are documented in its man page (and NBD exports
in its --help text), we should probably do the same for FUSE.
Signed-off-by: Max Reitz
request), the test fails (for no good reason).
Filter the length, too.
Signed-off-by: Max Reitz
Message-Id: <20200918153323.108932-1-mre...@redhat.com>
---
tests/qemu-iotests/046 | 3 +-
tests/qemu-iotests/046.out | 104 ++---
2 files changed, 54 inse
to be the case, because without the L2 tables
preallocated, all clusters would appear as unallocated, and so the
qcow2 driver would fall through to the backing file.)
Signed-off-by: Max Reitz
Message-Id: <20210326145509.163455-3-mre...@redhat.com>
Reviewed-by: Eric Blake
---
tests/qemu-iotes
area.
Signed-off-by: Max Reitz
Message-Id: <20210326145509.163455-2-mre...@redhat.com>
Reviewed-by: Eric Blake
---
block/qcow2.c | 34 ++
tests/qemu-iotests/244.out | 9 -
2 files changed, 38 insertions(+), 5 deletions(-)
diff --git a
external flag to the timer for qcow2
cache clean.
Signed-off-by: Pavel Dovgalyuk
Reviewed-by: Paolo Bonzini
Message-Id: <161700516327.1141158.8366564693714562536.stgit@pasha-ThinkPad-X280>
Signed-off-by: Max Reitz
---
block/qcow2.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
for
all images in the backing chain, so the mirror job can take it for the
target BB).
Signed-off-by: Max Reitz
Message-Id: <20210211172242.146671-2-mre...@redhat.com>
Reviewed-by: Eric Blake
Reviewed-by: Vladimir Sementsov-Ogievskiy
---
block/mirror.c | 32 +---
From: Tao Xu
There is a typo in iotest 051; correct it.
Signed-off-by: Tao Xu
Message-Id: <20210324084321.90952-1-tao3...@intel.com>
Signed-off-by: Max Reitz
---
tests/qemu-iotests/051| 2 +-
tests/qemu-iotests/051.pc.out | 4 ++--
2 files changed, 3 insertions(+), 3 del
structures should be preallocated
- iotest fixes
Connor Kuehl (1):
iotests: fix 051.out expected output after error text touchups
Max Reitz (6):
iotests/116: Fix reference output
iotests/046: Filter request length
block
On 30.03.21 13:32, Max Reitz wrote:
On 26.03.21 15:23, Paolo Bonzini wrote:
This series adds a few usability improvements to qemu-iotests, in
particular:
- arguments can be passed to Python unittests scripts, for example
to run only a subset of the test cases (patches 1-2)
- it is possible
On 26.03.21 15:23, Paolo Bonzini wrote:
This series adds a few usability improvements to qemu-iotests, in
particular:
- arguments can be passed to Python unittests scripts, for example
to run only a subset of the test cases (patches 1-2)
- it is possible to do "./check --
On 26.03.21 15:55, Max Reitz wrote:
v1: https://lists.nongnu.org/archive/html/qemu-block/2020-06/msg00992.html
Hi,
I think that qcow2 images with data-file-raw should always have
preallocated 1:1 L1/L2 tables, so that the image always looks the same
whether you respect or ignore the qcow2
On 30.03.21 12:44, Max Reitz wrote:
On 30.03.21 12:38, Max Reitz wrote:
On 26.03.21 16:05, Max Reitz wrote:
On 26.03.21 15:23, Paolo Bonzini wrote:
Right now there is no easy way for "check" to print a reproducer
command.
Because such a reproducer command line would be huge, we c
On 30.03.21 12:38, Max Reitz wrote:
On 26.03.21 16:05, Max Reitz wrote:
On 26.03.21 15:23, Paolo Bonzini wrote:
Right now there is no easy way for "check" to print a reproducer
command.
Because such a reproducer command line would be huge, we can instead
teach
check to start a comm
On 26.03.21 16:05, Max Reitz wrote:
On 26.03.21 15:23, Paolo Bonzini wrote:
Right now there is no easy way for "check" to print a reproducer command.
Because such a reproducer command line would be huge, we can instead
teach
check to start a command of our choice. This can be f
On 25.03.21 20:12, Vladimir Sementsov-Ogievskiy wrote:
ping. Do we want it for 6.0?
I’d rather wait. I think the conclusion was that guests shouldn’t hit
this because they serialize discards?
There’s also something Kevin wrote on IRC a couple of weeks ago, for
which I had hoped he’d sent
On 17.02.21 12:58, Max Reitz wrote:
Implementing FUSE exports required no changes to the storage daemon, so
we forgot to document them there. Considering that both NBD and
vhost-user-blk exports are documented in its man page (and NBD exports
in its --help text), we should probably do the same
On 11.02.21 18:22, Max Reitz wrote:
Hi,
[...]
(Speaking of “unless the WRITE permission is shared”: mirror_top is a
bit broken in that it takes no permissions (but WRITE if necessary) and
shares everything. That seems wrong. Patch 1 addresses that, so that
patch 2 can actually do something
On 18.09.20 17:33, Max Reitz wrote:
For its concurrent requests, 046 has always filtered the offset,
probably because concurrent requests may settle in any order. However,
it did not filter the request length, and so if requests with different
lengths settle in an unexpected order (notably
org/archive/html/qemu-block/2021-03/msg00654.html)
to rewrite the patch to use common.qemu to test QSD. With that we could
issue the block-commit command, wait for BLOCK_JOB_READY, and only then
issue 'quit'.
Max
From e77dc43cae17883cefb5766e6932fde359806dbd Mon Sep 17 00:00:00 2001
From: M
297 so far does not check the named tests, which reside in the tests/
directory (i.e. full path tests/qemu-iotests/tests). Fix it.
Thanks to the previous two commits, all named tests pass its scrutiny,
so we do not have to add anything to SKIP_FILES.
Signed-off-by: Max Reitz
---
tests/qemu
yet. I think it would be nice if we could keep
all of tests/ clean.
Max Reitz (4):
iotests/297: Drop 169 and 199 from the skip list
migrate-bitmaps-postcopy-test: Fix pylint warnings
migrate-bitmaps-test: Fix pylint warnings
iotests/297: Cover tests/
tests/qemu-iotests/297
- Some lines are too long (80 characters instead of 79)
- inject_test_case()'s @name parameter shadows a top-level @name
variable
- "lambda self: mc(self)" is equivalent to just "mc"
- Always put two empty lines after a function
- f'exec: cat > /dev/null' does not need to be an f-string
Fix
169 and 199 have been renamed and moved to tests/ (commit a44be0334be:
"iotests: rename and move 169 and 199 tests"), so we can drop them from
the skip list.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/297 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/
pylint complains that discards1_sha256 and all_discards_sha256 are first
set in non-__init__ methods. Let's make it happy.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/tests/migrate-bitmaps-postcopy-test | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tests/qemu-iotests/tests/migrate
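The usual shape of such a fix is to declare the attributes in a method pylint treats as an initializer (setUp is in pylint's default defining-attr-methods for its attribute-defined-outside-init check). A sketch — only the attribute names come from the commit message; the class and test body are invented:

```python
import unittest

class TestMigrationBitmaps(unittest.TestCase):
    """Hypothetical illustration of silencing pylint W0201."""

    def setUp(self):
        # Declaring the attributes here keeps pylint happy even though
        # the real values are only computed later, inside test methods.
        self.discards1_sha256 = None
        self.all_discards_sha256 = None

    def test_sha256(self):
        # Stand-in for the real computation done mid-test
        self.discards1_sha256 = 'deadbeef'
        self.all_discards_sha256 = 'cafebabe'
        self.assertIsNotNone(self.discards1_sha256)

case = TestMigrationBitmaps('test_sha256')
case.setUp()
print(case.discards1_sha256)  # None until the test method runs
```

The alternative is a per-project pylint configuration change, but initializing the attributes to None is the smaller, self-documenting fix.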
On 17.03.21 17:02, Vladimir Sementsov-Ogievskiy wrote:
Just demonstrate one of x-blockdev-reopen usecases. We can't simply
remove persistent bitmap from RO node (for example from backing file),
as we need to remove it from the image too. So, we should reopen the
node first.
Signed-off-by:
On 26.03.21 16:17, Eric Blake wrote:
On 3/26/21 9:55 AM, Max Reitz wrote:
Three test cases:
(1) Adding a qcow2 (metadata) file to an existing data file, see whether
we can read the existing data through the qcow2 image.
(2) Append data to the data file, grow the qcow2 image accordingly
On 26.03.21 15:23, Paolo Bonzini wrote:
Right now there is no easy way for "check" to print a reproducer command.
Because such a reproducer command line would be huge, we can instead teach
check to start a command of our choice. This can be for example a Python
unit test with arguments to only
contextual differences, respectively
001/2:[0012] [FC] 'qcow2: Force preallocation with data-file-raw'
002/2:[0110] [FC] 'iotests/244: Test preallocation for data-file-raw'
Max Reitz (2):
qcow2: Force preallocation with data-file-raw
iotests/244: Test preallocation for data-file-raw
block/qcow2.c
area.
Signed-off-by: Max Reitz
---
block/qcow2.c | 34 ++
tests/qemu-iotests/244.out | 9 -
2 files changed, 38 insertions(+), 5 deletions(-)
diff --git a/block/qcow2.c b/block/qcow2.c
index 0db1227ac9..9920c756eb 100644
--- a/block/qcow2.c
to be the case, because without the L2 tables
preallocated, all clusters would appear as unallocated, and so the
qcow2 driver would fall through to the backing file.)
Signed-off-by: Max Reitz
---
tests/qemu-iotests/244 | 104 +
tests/qemu-iotests/244.out | 59
Let's fix the reference output, which has the nice side effect of
demonstrating 15ce94a68ca's improvements.
Fixes: 15ce94a68ca6730466c565c3d29971aab3087bf1
("block/qed: bdrv_qed_do_open: deal with errp")
Signed-off-by: Max Reitz
---
tests/qemu-iotests/116.out | 12 ++-
On 26.03.21 12:48, Vladimir Sementsov-Ogievskiy wrote:
10.03.2021 18:59, Max Reitz wrote:
When rebuilding the refcount structures (when qemu-img check -r found
errors with refcount = 0, but reference count > 0), the new refcount
table defaults to being put at the image file en
On 23.03.21 19:19, Paolo Bonzini wrote:
This series adds a few usability improvements to qemu-iotests, in
particular:
- arguments can be passed to Python unittests scripts, for example
to run only a subset of the test cases (patches 1-2)
- it is possible to do "./check --
On 23.03.21 19:19, Paolo Bonzini wrote:
This series adds a few usability improvements to qemu-iotests, in
particular:
- arguments can be passed to Python unittests scripts, for example
to run only a subset of the test cases (patches 1-2)
- it is possible to do "./check --
On 18.03.21 21:09, Connor Kuehl wrote:
Oops, sorry about the churn. I can see why this would have caused a
failure but I'm surprised I can't reproduce this when I run the test
locally.
Christian, would you be willing to test this patch out as a quick sanity
check too?
Connor Kuehl (1):
On 24.03.21 09:43, Tao Xu wrote:
There is a typo in iotest 051; correct it.
Signed-off-by: Tao Xu
---
tests/qemu-iotests/051| 2 +-
tests/qemu-iotests/051.pc.out | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
Thanks, applied to my block branch:
On 22.03.21 10:25, ChangLimin wrote:
For Linux 5.10/5.11, QEMU writing zeros to a multipath device using
ioctl(fd, BLKZEROOUT, range) with cache=none or directsync returns -EBUSY
permanently.
So as far as I can track back the discussion, Kevin asked on v1 why we’d
set has_write_zeroes to false,
weird to keep
that check if *block were freed by that point, I suppose this isn’t
making it worse, though, so:
Reviewed-by: Max Reitz
+header = g_malloc(sizeof(*header));
+
logout("now writing modified header\n");
assert(VDI_IS_ALLOCATED(bmap_first));
On 17.03.21 19:00, Paolo Bonzini wrote:
This is a resubmit of David Edmondson's series at
https://patchew.org/QEMU/20210309144015.557477-1-david.edmond...@oracle.com/.
After closer analysis on IRC, the CoRwlock's attempt to ensure
fairness turned out to be flawed. Therefore, this series
Message-Id: <20210309144015.557477-2-david.edmond...@oracle.com>
Signed-off-by: Paolo Bonzini
---
block/vdi.c | 1 +
1 file changed, 1 insertion(+)
Reviewed-by: Max Reitz
On 22.03.21 12:27, Patrik Janoušek wrote:
On 3/22/21 11:48 AM, Max Reitz wrote:
Hi,
On 20.03.21 11:01, Patrik Janoušek wrote:
I'm sorry, but I forgot to add you to the cc, so I'm forwarding the
patch to you additionally. I don't want to spam the mailing list
unnecessarily.
I think it’s
On 22.03.21 11:48, Klaus Jensen wrote:
On Mar 22 11:02, Max Reitz wrote:
On 22.03.21 07:19, Klaus Jensen wrote:
From: Klaus Jensen
In nvme_format_ns(), if the namespace is of zero size (which might be
useless, but not invalid), the `count` variable will leak. Fix this by
returning early
Hi,
On 20.03.21 11:01, Patrik Janoušek wrote:
I'm sorry, but I forgot to add you to the cc, so I'm forwarding the
patch to you additionally. I don't want to spam the mailing list
unnecessarily.
I think it’s better to still CC the list. It’s so full of mail, one
more won’t hurt. :)
On 22.03.21 07:19, Klaus Jensen wrote:
From: Klaus Jensen
In nvme_format_ns(), if the namespace is of zero size (which might be
useless, but not invalid), the `count` variable will leak. Fix this by
returning early in that case.
When looking at the Coverity report, something else caught my
On 19.03.21 11:51, Max Reitz wrote:
On 19.03.21 11:50, Laurent Vivier wrote:
Le 19/03/2021 à 10:20, Max Reitz a écrit :
On 19.03.21 07:32, Thomas Huth wrote:
On 18/03/2021 18.28, Max Reitz wrote:
[...]
From that it follows that I don’t see much use in testing
specific devices either. Say
On 19.03.21 11:50, Laurent Vivier wrote:
Le 19/03/2021 à 10:20, Max Reitz a écrit :
On 19.03.21 07:32, Thomas Huth wrote:
On 18/03/2021 18.28, Max Reitz wrote:
[...]
From that it follows that I don’t see much use in testing specific devices
either. Say there’s
a platform that provides
On 19.03.21 07:32, Thomas Huth wrote:
On 18/03/2021 18.28, Max Reitz wrote:
[...]
From that it follows that I don’t see much use in testing specific
devices either. Say there’s a platform that provides both virtio-pci
and virtio-mmio, the default (say virtio-pci) is fine for the iotests.
I
On 18.03.21 17:25, Philippe Mathieu-Daudé wrote:
On 3/18/21 4:56 PM, Laurent Vivier wrote:
Le 18/03/2021 à 16:51, Laurent Vivier a écrit :
Le 18/03/2021 à 16:36, Philippe Mathieu-Daudé a écrit :
On 3/18/21 11:06 AM, Laurent Vivier wrote:
Le 18/03/2021 à 11:02, Philippe Mathieu-Daudé a écrit
On 16.03.21 18:48, Vladimir Sementsov-Ogievskiy wrote:
16.03.2021 15:25, Max Reitz wrote:
On 15.03.21 15:40, Vladimir Sementsov-Ogievskiy wrote:
15.03.2021 12:58, Max Reitz wrote:
[...]
The question is whether it really makes sense to even have a
seqcache_read() path when in reality it’s
On 15.03.21 15:40, Vladimir Sementsov-Ogievskiy wrote:
15.03.2021 12:58, Max Reitz wrote:
[...]
The question is whether it really makes sense to even have a
seqcache_read() path when in reality it’s probably never accessed. I
mean, besides the fact that it seems based purely on chance
On 12.03.21 19:43, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 21:15, Max Reitz wrote:
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
Compressed writes are unaligned to 512, which is very slow in
O_DIRECT mode. Let's use the cache.
Signed-off-by: Vladimir Sementsov-Ogievskiy
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
Compressed writes are unaligned to 512, which is very slow in
O_DIRECT mode. Let's use the cache.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
block/coroutines.h | 3 +
block/qcow2.h | 4 ++
deletions(-)
Reviewed-by: Max Reitz
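The write-coalescing idea behind that cache — buffer small sequential writes until a whole cluster has accumulated, then write it out in one aligned request — can be sketched as a toy model (invented Python, not the actual seqcache implementation):

```python
class SeqWriteCache:
    """Toy model: coalesce small sequential writes into cluster-sized ones."""

    def __init__(self, backend, cluster_size=64 * 1024):
        self.backend = backend          # callable(offset, data)
        self.cluster_size = cluster_size
        self.start = None               # offset of the buffered region
        self.buf = bytearray()

    def write(self, offset, data):
        if self.start is None:
            self.start = offset
        elif offset != self.start + len(self.buf):
            self.flush()                # non-sequential write: flush first
            self.start = offset
        self.buf += data
        # Emit every complete cluster we have accumulated so far
        while len(self.buf) >= self.cluster_size:
            self.backend(self.start, bytes(self.buf[:self.cluster_size]))
            self.start += self.cluster_size
            del self.buf[:self.cluster_size]

    def flush(self):
        if self.buf:
            self.backend(self.start, bytes(self.buf))
        self.start, self.buf = None, bytearray()

writes = []
cache = SeqWriteCache(lambda off, b: writes.append((off, len(b))),
                      cluster_size=8)
for off in range(0, 20, 5):             # four 5-byte sequential writes
    cache.write(off, b'x' * 5)
cache.flush()
print(writes)   # [(0, 8), (8, 8), (16, 4)]
```

The backend sees only cluster-aligned, cluster-sized writes (plus a final tail on flush), which is exactly what O_DIRECT handles efficiently and unaligned 512-byte-odd compressed writes do not.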
On 12.03.21 16:24, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 18:10, Max Reitz wrote:
On 12.03.21 13:46, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 15:32, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 14:17, Max Reitz wrote:
On 12.03.21 10:09, Vladimir Sementsov-Ogievskiy wrote
On 12.03.21 15:37, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 16:41, Max Reitz wrote:
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
Implement cache for small sequential unaligned writes, so that they may
be cached until we get a complete cluster and then write it.
The cache
On 12.03.21 13:46, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 15:32, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 14:17, Max Reitz wrote:
On 12.03.21 10:09, Vladimir Sementsov-Ogievskiy wrote:
11.03.2021 22:58, Max Reitz wrote:
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote
On 12.03.21 13:42, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 15:32, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 14:17, Max Reitz wrote:
On 12.03.21 10:09, Vladimir Sementsov-Ogievskiy wrote:
11.03.2021 22:58, Max Reitz wrote:
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote
On 12.03.21 13:32, Vladimir Sementsov-Ogievskiy wrote:
12.03.2021 14:17, Max Reitz wrote:
On 12.03.21 10:09, Vladimir Sementsov-Ogievskiy wrote:
11.03.2021 22:58, Max Reitz wrote:
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
There is a bug in qcow2: host cluster can be discarded
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
Implement cache for small sequential unaligned writes, so that they may
be cached until we get a complete cluster and then write it.
The cache is intended to be used for backup to qcow2 compressed target
opened in O_DIRECT mode, but can be
On 12.03.21 10:09, Vladimir Sementsov-Ogievskiy wrote:
11.03.2021 22:58, Max Reitz wrote:
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
There is a bug in qcow2: host cluster can be discarded (refcount
becomes 0) and reused during data write. In this case data write may
pollute another
On 05.03.21 18:35, Vladimir Sementsov-Ogievskiy wrote:
There is a bug in qcow2: host cluster can be discarded (refcount
becomes 0) and reused during data write. In this case data write may
pollute another cluster (recently allocated) or even metadata.
I was about to ask whether we couldn’t
++
block/mirror.c | 2 ++
block/stream.c | 2 ++
blockjob.c | 16
6 files changed, 45 insertions(+), 3 deletions(-)
Reviewed-by: Max Reitz
Just a nit on the function’s description.
diff --git a/include/block/blockjob_int.h b
On 10.03.21 17:35, Fam Zheng wrote:
On Wed, 10 Mar 2021 at 15:02, Max Reitz <mailto:mre...@redhat.com>> wrote:
On 10.03.21 15:17, f...@euphon.net <mailto:f...@euphon.net> wrote:
> From: Fam Zheng mailto:famzh...@amazon.com>>
>
> null-co://
_cleanup_qemu cleans up all qemu instances, which sometimes is not very
useful. Pull out _cleanup_single_qemu, which does the same only for a
single instance.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/common.qemu | 55 +-
1 file changed, 34 insertions
the end, i.e. outside of what the block device provides, which
cannot work. HEAD^ should have fixed that.
("Something like a block device" means a loop device if we can use
one ("sudo -n losetup" works), or a FUSE block export with
growable=false otherwise.)
Signed-off-
For block things, we often do not need to run all of qemu, so allow
using the qemu-storage-daemon instead.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/common.qemu | 53 +++---
1 file changed, 43 insertions(+), 10 deletions(-)
diff --git a/tests/qemu-iotests