On 26.02.26 16:15, Kevin Wolf wrote:
Am 25.02.2026 um 21:18 hat Vladimir Sementsov-Ogievskiy geschrieben:
On 25.02.26 19:38, Kevin Wolf wrote:
Am 02.04.2025 um 10:37 hat Vladimir Sementsov-Ogievskiy geschrieben:
Actually, a block job is not complete without the final flush. It's
rather unexpected to end up with a broken target when the job was
successfully completed long ago and only now we fail to flush, or the
process just crashed or was killed.
The mirror job already has mirror_flush() for this, so it's OK.
Do the same for the stream, commit and backup jobs.
Note that the jobs behave a bit differently around the IGNORE action:
backup and commit just retry the operation, whereas stream skips the
failed operation and stores the error to report later. Keep these
different behaviors for the final flush too.
Signed-off-by: Vladimir Sementsov-Ogievskiy <[email protected]>
---
v2 was old "[PATCH v2 0/3] block-jobs: add final flush"[1]
v3: follow Kevin's suggestion to introduce block_job_handle_error()
(still, it's not obvious how to rewrite the commit and stream operation
loops to reuse this helper without making things more complicated...
I decided to keep them as is, using the new helper only for the final flush.)
[1] https://patchew.org/QEMU/[email protected]/
Supersedes: <[email protected]>
block/backup.c | 8 ++++++++
block/commit.c | 6 +++++-
block/stream.c | 8 ++++++--
blockjob.c | 34 ++++++++++++++++++++++++++++++++++
include/block/blockjob.h | 9 +++++++++
5 files changed, 62 insertions(+), 3 deletions(-)
diff --git a/block/commit.c b/block/commit.c
index 5df3d05346..711093504f 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -201,7 +201,11 @@ static int coroutine_fn commit_run(Job *job, Error **errp)
}
}
- return 0;
+ do {
+ ret = blk_co_flush(s->base);
+ } while (block_job_handle_error(&s->common, ret, s->on_error, true, true));
Why does commit still flush even if we return an error?
Hmm. We are on the success path here; we do the flush where "return 0;" was.
True, disregard it here. I saw it in stream and didn't read commit
carefully enough, assuming it was the same.
@@ -235,8 +235,12 @@ static int coroutine_fn stream_run(Job *job, Error **errp)
}
}
+ do {
+ ret = blk_co_flush(s->blk);
+ } while (block_job_handle_error(&s->common, ret, s->on_error, true,
false));
Same here about flushing even on error.
Here I keep the behavior of stream: with the "ignore" action, it continues
streaming, but returns failure in the end. That's documented in QAPI, and I
note it in the commit message.
If we continue writing after a failure, I assume we still need these writes,
so why not flush them?
It probably doesn't hurt much, but since we already know that the
streaming won't complete successfully, it's probably also not incorrect
not to flush - nobody can rely on everything being present in the active
layer.
The only thing that can happen is that you get another failure,
which could be nasty if the failure comes only after a long timeout. But
I guess either way is fine.
As far as I understand the semantics of "ignore", it means "ignore errors,
but still try to make as much progress as possible". And it seems strange
that other jobs retry endlessly in case of the "ignore" action...
Yes, I wonder if anyone is using this mode in practice.
Me too. Maybe we should try to deprecate it? Then we'll see whether someone is against it.
+
/* Do not remove the backing file if an error was there but ignored. */
- return error;
+ return error ?: ret;
}
static const BlockJobDriver stream_job_driver = {
diff --git a/blockjob.c b/blockjob.c
index 32007f31a9..70a7af2077 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -626,3 +626,37 @@ AioContext *block_job_get_aio_context(BlockJob *job)
GLOBAL_STATE_CODE();
return job->job.aio_context;
}
+
+bool coroutine_fn
+block_job_handle_error(BlockJob *job, int ret, BlockdevOnError on_err,
+ bool is_read, bool retry_on_ignore)
+{
+    assert(ret <= 0);
+
+ if (ret == 0) {
+ return false;
+ }
+
+ if (job_is_cancelled(&job->job)) {
+ return false;
+ }
+
+ BlockErrorAction action =
+ block_job_error_action(job, on_err, is_read, -ret);
+ switch (action) {
+ case BLOCK_ERROR_ACTION_REPORT:
+ return false;
+ case BLOCK_ERROR_ACTION_IGNORE:
+ if (!retry_on_ignore) {
+ return false;
+ }
+ /* fallthrough */
What is the idea behind having a pause point for "ignore"? Is it to at
least avoid QEMU hanging completely if it goes into an infinite loop
with retry_on_ignore?
Yes, just to have a pause point on every iteration of the retry loop,
like we do in the data-copying iterations of the block jobs.
Ok, that's fair. If you want, we could add a comment to this effect.
Will do. Honestly, it took me a few minutes to understand why we fall through
from the "ignore" to the "stop" case, even though I wrote it myself.
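Something along these lines, perhaps (just a sketch of where the comment
could go, wording up to you):

```c
    case BLOCK_ERROR_ACTION_IGNORE:
        if (!retry_on_ignore) {
            return false;
        }
        /*
         * Fall through to the STOP handling so that the retry loop
         * gets a pause point on every iteration, like the
         * data-copying loops of the block jobs.
         */
        /* fallthrough */
```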
--
Best regards,
Vladimir