On 2022-04-21 07:51, Mark Cave-Ayland wrote:
> One of the mechanisms MacOS uses to identify drives compatible with MacOS is
> to send a custom MODE SELECT command for page 0x30 to the drive. The response
> to this is a hard-coded manufacturer string which must match in order for the
> drive to
> On 22 Jun 2021, at 13:42, Philippe Mathieu-Daudé wrote:
>
> On 6/22/21 10:06 AM, Philippe Mathieu-Daudé wrote:
>> On 6/22/21 9:29 AM, Philippe Mathieu-Daudé wrote:
>>> On 6/21/21 5:36 PM, Fam Zheng wrote:
>>>>> On 21 Jun 2021, at 16:13, Philippe Mathie
> On 21 Jun 2021, at 16:13, Philippe Mathieu-Daudé wrote:
>
> On 6/21/21 3:18 PM, Fam Zheng wrote:
>>
>>
>>> On 21 Jun 2021, at 10:32, Philippe Mathieu-Daudé wrote:
>>>
>>> When the NVMe block driver was introduced (see commit bdd6a90a9e5,
&
: blk_get_aio_context:
> Assertion `ctx == blk->ctx' failed.
Hi Phil,
The diff looks good to me, but I'm not sure what exactly caused the assertion
failure. There is an `if (r) { goto fail; }` earlier that handles -ENOSPC, so it
should be treated as a general case. What am I missing?
Reviewed-b
| 25 +
> util/main-loop.c | 4 ++--
> 5 files changed, 55 insertions(+), 11 deletions(-)
>
> --
> 2.30.2
>
Reviewed-by: Fam Zheng
On Wed, 10 Mar 2021 at 15:02, Max Reitz wrote:
> On 10.03.21 15:17, f...@euphon.net wrote:
> > From: Fam Zheng
> >
> > null-co:// has a read-zeroes=off default; when used in security
> > analysis, this can cause false positives because the driver doesn't
&
On Wed, 10 Mar 2021 at 14:51, Philippe Mathieu-Daudé
wrote:
> On 3/10/21 3:28 PM, Fam Zheng wrote:
> > On Wed, 10 Mar 2021 at 14:24, Philippe Mathieu-Daudé wrote:
> >
> > On 3/10/21 3:17 PM, f...@euphon.net
On Wed, 10 Mar 2021 at 14:41, Vladimir Sementsov-Ogievskiy <
vsement...@virtuozzo.com> wrote:
> 10.03.2021 17:17, f...@euphon.net wrote:
> > From: Fam Zheng
> >
> > null-co:// has a read-zeroes=off default; when used in security
> > analysis, this can cause f
On Wed, 10 Mar 2021 at 14:24, Philippe Mathieu-Daudé
wrote:
> On 3/10/21 3:17 PM, f...@euphon.net wrote:
> > From: Fam Zheng
> >
> > null-co:// has a read-zeroes=off default; when used in security
> > analysis, this can cause false positives because the driver do
On Wed, 10 Mar 2021 at 12:38, Philippe Mathieu-Daudé
wrote:
> On 3/10/21 1:32 PM, Fam Zheng wrote:
> > On Wed, 10 Mar 2021 at 11:44, Philippe Mathieu-Daudé wrote:
> >
> > Hi,
> >
> > This is an alternative approa
On Wed, 10 Mar 2021 at 11:44, Philippe Mathieu-Daudé
wrote:
> Hi,
>
> This is an alternative approach to changing null-co driver
> default 'read-zeroes' option to true:
> https://www.mail-archive.com/qemu-block@nongnu.org/msg80873.html
>
> Instead we introduce yet another block driver with an
On 2021-02-23 17:01, Max Reitz wrote:
> On 23.02.21 10:21, Fam Zheng wrote:
> > On 2021-02-22 18:55, Philippe Mathieu-Daudé wrote:
> > > On 2/22/21 6:35 PM, Fam Zheng wrote:
> > > > On 2021-02-19 15:09, Philippe Mathieu-Daudé wrote:
> > > &
On 2021-02-22 18:55, Philippe Mathieu-Daudé wrote:
> On 2/22/21 6:35 PM, Fam Zheng wrote:
> > On 2021-02-19 15:09, Philippe Mathieu-Daudé wrote:
> >> On 2/19/21 12:07 PM, Max Reitz wrote:
> >>> On 13.02.21 22:54, Fam Zheng wrote:
> >>>> On 2
On 2021-02-19 15:09, Philippe Mathieu-Daudé wrote:
> On 2/19/21 12:07 PM, Max Reitz wrote:
> > On 13.02.21 22:54, Fam Zheng wrote:
> >> On 2021-02-11 15:26, Philippe Mathieu-Daudé wrote:
> >>> The null-co driver doesn't zeroize buffer in its default config,
> >&
On 2021-02-11 15:26, Philippe Mathieu-Daudé wrote:
> The null-co driver doesn't zeroize buffer in its default config,
> because it is designed for testing and tests want to run fast.
> However this confuses security researchers (access to uninit
> buffers).
I'm a little surprised.
Is changing
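The point being debated above is that null-co:// with read-zeroes=off returns
read buffers untouched, so memory-analysis tools flag their contents as
uninitialized. A minimal toy model of that behaviour (illustration only, with
made-up names — not the actual block/null.c code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy sketch of the null driver's read path: with read_zeroes=false the
 * buffer is deliberately left untouched (fast, but whatever garbage was
 * there stays there, which is what confuses security scanners); with
 * read_zeroes=true the result is deterministic zeros. */
static int null_co_preadv_toy(unsigned char *buf, size_t bytes,
                              bool read_zeroes)
{
    if (read_zeroes) {
        memset(buf, 0, bytes);  /* deterministic, analysis-friendly */
    }
    /* read_zeroes=false: intentionally do nothing */
    return 0;
}
```

With read_zeroes=false the caller sees exactly the bytes that were in the
buffer before the "read", which is the source of the false positives.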
On Tue, 2020-10-20 at 09:34 +0800, Zhenyu Ye wrote:
> On 2020/10/19 21:25, Paolo Bonzini wrote:
> > On 19/10/20 14:40, Zhenyu Ye wrote:
> > > The kernel backtrace for io_submit in GUEST is:
> > >
> > > guest# ./offcputime -K -p `pgrep -nx fio`
> > > b'finish_task_switch'
> > >
On Wed, 2020-10-14 at 14:42 +0200, Philippe Mathieu-Daudé wrote:
> On 10/14/20 2:34 PM, Fam Zheng wrote:
> > On Wed, 2020-10-14 at 13:52 +0200, Philippe Mathieu-Daudé wrote:
> > > A bunch of boring patches that have been proven helpful
> > > while debugging.
> > &
++
>
> util/trace-events | 10 --
> 4 files changed, 54 insertions(+), 38 deletions(-)
>
> --
> 2.26.2
>
>
>
Modulo the g_strdup_printf() memleak I pointed out:
Reviewed-by: Fam Zheng
On Wed, 2020-10-14 at 13:52 +0200, Philippe Mathieu-Daudé wrote:
> For debug purpose, trace BAR regions info.
>
> Signed-off-by: Philippe Mathieu-Daudé
> ---
> util/vfio-helpers.c | 8
> util/trace-events | 1 +
> 2 files changed, 9 insertions(+)
>
> diff --git a/util/vfio-helpers.c
;
> -VmdkExtent *extent;
> +VmdkExtent *extent = NULL;
> BDRVVmdkState *s = bs->opaque;
> int64_t l1_backup_offset = 0;
> bool compressed;
> @@ -1088,7 +1088,7 @@ static int vmdk_parse_extents(const char *desc,
> BlockDriverState *bs,
> BdrvChild *extent_file;
> BdrvChildRole extent_role;
> BDRVVmdkState *s = bs->opaque;
> -VmdkExtent *extent;
> +VmdkExtent *extent = NULL;
> char extent_opt_prefix[32];
> Error *local_err = NULL;
>
Looks trivial, and correct.
Reviewed-by: Fam Zheng
> util/vfio-helpers.c | 4 +-
> 3 files changed, 44 insertions(+), 35 deletions(-)
>
Reviewed-by: Fam Zheng
On Tue, 2020-09-22 at 10:41 +0200, Philippe Mathieu-Daudé wrote:
> Hi Fam,
>
> +Paolo?
>
> On 9/22/20 10:18 AM, Fam Zheng wrote:
> > On Mon, 2020-09-21 at 18:29 +0200, Philippe Mathieu-Daudé wrote:
> > > Per the datasheet sections 3.1.13/3.1.14:
> > >
On Mon, 2020-09-21 at 18:29 +0200, Philippe Mathieu-Daudé wrote:
> Per the datasheet sections 3.1.13/3.1.14:
> "The host should not read the doorbell registers."
>
> As we don't need read access, map the doorbells with write-only
> permission. We keep a reference to this mapped address in the
>
On 2020-09-19 10:22, Zhenyu Ye wrote:
> On 2020/9/18 22:06, Fam Zheng wrote:
> >
> > I can see how blocking in a slow io_submit can cause trouble for main
> > thread. I think one way to fix it (until it's made truly async in new
> > kernels) is moving the io
On 2020-09-18 19:23, Zhenyu Ye wrote:
> Thread 5 (LWP 4802):
> #0 0x83086b54 in syscall () at /lib64/libc.so.6
> #1 0x834598b8 in io_submit () at /lib64/libaio.so.1
> #2 0xe851e89c in ioq_submit (s=0xfffd3c001bb0) at
> ../block/linux-aio.c:299
>
On 2020-09-17 16:44, Stefan Hajnoczi wrote:
> On Thu, Sep 17, 2020 at 03:36:57PM +0800, Zhenyu Ye wrote:
> > When the hang occurs, the QEMU is blocked at:
> >
> > #0 0x95762b64 in ?? () from target:/usr/lib64/libpthread.so.0
> > #1 0x9575bd88 in pthread_mutex_lock ()
On 2020-09-17 15:36, Zhenyu Ye wrote:
> Hi Stefan,
>
> On 2020/9/14 21:27, Stefan Hajnoczi wrote:
> >>
> >> Theoretically, everything running in an iothread is asynchronous. However,
> >> some 'asynchronous' actions are not non-blocking entirely, such as
> >> io_submit(). This will block while
irqs
> function to initialize multiple MSIX IRQs and attach eventfd to
> them.
>
> Since RFC v4:
> - addressed Alex review comment:
> check ioctl(VFIO_DEVICE_SET_IRQS) return value
Reviewed-by: Fam Zheng
On 2020-09-08 20:03, Philippe Mathieu-Daudé wrote:
> Instead of initializing one MSIX IRQ with the generic
> qemu_vfio_pci_init_irq() function, use the MSIX specific one which
> ill allow us to use multiple IRQs. For now we provide an array of
s/ill/will/
> a single IRQ.
>
> Signed-off-by:
On 2020-09-07 12:16, Stefan Hajnoczi wrote:
> Development of the userspace NVMe block driver picked up again recently.
> After talking with Fam I am stepping up as block/nvme.c maintainer.
> Patches will be merged through my 'block' tree.
>
> Cc: Kevin Wolf
> Cc: Klaus Jense
hieu-Daudé (3):
> block/nvme: Group controller registers in NVMeRegs structure
> block/nvme: Use generic NvmeBar structure
> block/nvme: Pair doorbell registers
>
> block/nvme.c | 43 +++
> 1 file changed, 15 insertions(+), 28 deletions(-)
>
> --
> 2.26.2
>
>
Reviewed-by: Fam Zheng
d
> NvmeCmd struct.
Yeah, that part looks sane to me. For the block/nvme.c bit:
Acked-by: Fam Zheng
On Mon, 06/10 19:19, Aarushi Mehta wrote:
> Signed-off-by: Aarushi Mehta
> ---
> qemu-io.c | 13 +
> 1 file changed, 13 insertions(+)
>
> diff --git a/qemu-io.c b/qemu-io.c
> index 8d5d5911cb..54b82151c4 100644
> --- a/qemu-io.c
> +++ b/qemu-io.c
> @@ -129,6 +129,7 @@ static void
On Mon, 06/10 19:19, Aarushi Mehta wrote:
> Signed-off-by: Aarushi Mehta
> ---
> block/io_uring.c | 14 --
> block/trace-events | 8
> 2 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/block/io_uring.c b/block/io_uring.c
> index f327c7ef96..47e027364a
On Mon, 06/10 19:18, Aarushi Mehta wrote:
> Aborts when sqe fails to be set as sqes cannot be returned to the ring.
>
> Signed-off-by: Aarushi Mehta
> ---
> MAINTAINERS | 7 +
> block/Makefile.objs | 3 +
> block/io_uring.c| 314
On Mon, 06/10 19:18, Aarushi Mehta wrote:
> Option only enumerates for hosts that support it.
>
> Signed-off-by: Aarushi Mehta
> Reviewed-by: Stefan Hajnoczi
> ---
> qapi/block-core.json | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/qapi/block-core.json
On Wed, 04/17 22:53, Maxim Levitsky wrote:
> Signed-off-by: Maxim Levitsky
> ---
> block/nvme.c | 80 ++
> block/trace-events | 2 ++
> 2 files changed, 82 insertions(+)
>
> diff --git a/block/nvme.c b/block/nvme.c
> index
On Wed, 04/17 22:53, Maxim Levitsky wrote:
> Signed-off-by: Maxim Levitsky
> ---
> block/nvme.c | 69 +++-
> block/trace-events | 1 +
> include/block/nvme.h | 19 +++-
> 3 files changed, 87 insertions(+), 2 deletions(-)
>
> diff --git
ret = raw_apply_lock_bytes(s, s->fd, s->perm | new_perm,
> ~s->shared_perm | ~new_shared,
> false, errp);
> --
> 2.18.0
>
>
Reviewed-by: Fam Zheng
_file);
> +pstrcpy(bs->backing_format, sizeof(bs->backing_format),
> +"vmdk");
> }
>
> out:
> --
> 2.13.3
>
Reviewed-by: Fam Zheng
> On Mar 15, 2019, at 01:31, Sergio Lopez wrote:
>
> Hi,
>
> Our current AIO path does a great job at unloading the work from the VM,
> and combined with IOThreads provides a good performance in most
> scenarios. But it also comes with its costs, in both a longer execution
> path and the
> On Mar 12, 2019, at 18:03, Kevin Wolf wrote:
>
> Am 12.03.2019 um 03:18 hat Fam Zheng geschrieben:
>>> On Mar 11, 2019, at 19:06, Kevin Wolf wrote:
>>> Am 09.03.2019 um 02:46 hat Yaowei Bai geschrieben:
>>>> Thanks for explaining the backgroun
> On Mar 11, 2019, at 19:06, Kevin Wolf wrote:
>
> Am 09.03.2019 um 02:46 hat Yaowei Bai geschrieben:
>> Thanks for explaining the background. It comes to my mind that actually we
>> talked about these two cases with Fam a bit long time ago and decided to
>> support both these two cases. The
On Tue, 11/20 20:34, Dongli Zhang wrote:
> Hi,
>
> Would you please help explain in which case AioHandler->is_external is true,
> and when it is false?
>
> I read about iothread and mainloop and I am a little bit confused about it.
VirtIO's ioeventfd is an example of is_external == true. It
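The distinction Fam is drawing: handlers flagged is_external are driven by
guest activity (virtio ioeventfds being the canonical case) and can be held
off with aio_disable_external(), e.g. in a drained section, while internal
handlers keep running. A toy sketch of that dispatch rule (made-up names, not
the real AioContext API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of how an AioContext-style loop could skip is_external
 * handlers while externals are disabled (as during bdrv_drained_begin). */
typedef struct {
    bool is_external;  /* true for guest-driven fds like a virtio ioeventfd */
    int fired;         /* how many times the handler ran */
} ToyHandler;

typedef struct {
    int external_disable_cnt;  /* raised by a disable-external call */
} ToyCtx;

static void toy_dispatch(ToyCtx *ctx, ToyHandler *h)
{
    if (h->is_external && ctx->external_disable_cnt > 0) {
        return;  /* guest activity is held off while draining */
    }
    h->fired++;
}
```

Internal handlers (is_external == false) are things like the context's own
event notifier, which must keep working even while the device is quiesced.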
On Thu, 10/11 15:21, Fam Zheng wrote:
> v5: Address Max's comments (Thanks for reviewing):
> - Clean up after test done.
> - Add rev-by to patch 1 and 2.
Ping?
Fam
On Thu, 11/01 18:38, Li Feng wrote:
> When the IO size is larger than 2 pages, we move the pointer one by
> one in the pagelist; this is inefficient.
>
> This is a simple benchmark result:
>
> Before:
> $ qemu-io -c 'write 0 1G' nvme://:00:04.0/1
>
> wrote 1073741824/1073741824 bytes at
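A rough sketch of the optimization being described (hypothetical names, not
the actual block/nvme.c code): each entry of a PRP-style page list can be
computed directly from the base address and an index, rather than repeatedly
advancing a pointer one page at a time with per-step bookkeeping.

```c
#include <assert.h>
#include <stdint.h>

#define TOY_PAGE_SIZE 4096u

/* Fill npages consecutive page-list entries for a physically contiguous
 * buffer starting at base: entry i is just base + i * page_size. */
static void fill_pagelist(uint64_t *list, uint64_t base, unsigned npages)
{
    for (unsigned i = 0; i < npages; i++) {
        list[i] = base + (uint64_t)i * TOY_PAGE_SIZE;
    }
}
```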
.
Suggested-by: Markus Armbruster
Signed-off-by: Fam Zheng
---
v3: Use error_setg_errno. [Eric]
v2: Add Error ** to raw_normalize_devicepath. [Markus]
Use error_printf for splitting multi-sentence message. [Markus]
---
block/file-posix.c | 39 ---
1 file
On Wed, 10/31 15:51, Markus Armbruster wrote:
> Fam Zheng writes:
>
> > Use error_report for situations that affect user operation (i.e. we're
> > actually returning error), and warn_report/warn_report_err when some
> > less critical error happened but the user o
On Wed, 10/31 16:04, Fam Zheng wrote:
> Commit 9cbef9d68e (qemu-option: improve qemu_opts_print_help() output)
> affected qemu-img help output, and broke this test case.
>
> Update the output reference to fix it.
>
> Signed-off-by: Fam Zheng
>
> ---
>
> I'm
Commit 9cbef9d68e (qemu-option: improve qemu_opts_print_help() output)
affected qemu-img help output, and broke this test case.
Update the output reference to fix it.
Signed-off-by: Fam Zheng
---
I'm once again looking at enabling iotests on patchew (via vm based
tests), but immediately got
.
Suggested-by: Markus Armbruster
Signed-off-by: Fam Zheng
---
v2: Add Error ** to raw_normalize_devicepath. [Markus]
Use error_printf for splitting multi-sentence message. [Markus]
---
block/file-posix.c | 39 ---
1 file changed, 16 insertions(+), 23 deletions
Use error_report for situations that affect user operation (i.e. we're
actually returning error), and warn_report/warn_report_err when some
less critical error happened but the user operation can still carry on.
Suggested-by: Markus Armbruster
Signed-off-by: Fam Zheng
---
block/file-posix.c
Signed-off-by: Fam Zheng
---
tests/Makefile.include | 2 +
tests/test-image-locking.c | 157 +
2 files changed, 159 insertions(+)
create mode 100644 tests/test-image-locking.c
diff --git a/tests/Makefile.include b/tests/Makefile.include
index
The lock_fd field is not strictly necessary because transferring locked
bytes from old fd to the new one shouldn't fail anyway. This spares the
user one fd per image.
Signed-off-by: Fam Zheng
Reviewed-by: Max Reitz
---
block/file-posix.c | 37 +
1 file
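The series above relies on the semantics of the byte-range locks that QEMU's
image locking is built on. As a minimal sketch (assuming Linux and the
F_OFD_SETLK open-file-description locks; helper name is made up): OFD locks
belong to the open file description, not the process, so two fds from
separate open()s of the same file conflict with each other even within one
process, while dup()ed fds share the same lock.

```c
#define _GNU_SOURCE  /* for F_OFD_SETLK on glibc */
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Try to take a non-blocking write lock on a single byte of the file.
 * Returns 0 on success, -1 if another open file description holds a
 * conflicting lock. l_pid must be 0 for OFD locks. */
static int lock_byte(int fd, off_t byte)
{
    struct flock fl = {
        .l_type = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start = byte,
        .l_len = 1,
    };
    return fcntl(fd, F_OFD_SETLK, &fl);
}
```

This is why transferring locked bytes from an old fd to a new one is the
delicate step when dropping the extra lock_fd.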
changes.
This patch is an easy fix, and the change is reasonable regardless, so do it.
Signed-off-by: Fam Zheng
Reviewed-by: Max Reitz
---
block/file-posix.c | 54 +-
1 file changed, 44 insertions(+), 10 deletions(-)
diff --git a/block/file
message).
The second patch halves fd for images.
The third adds some more test for patch one (would have caught the regression
caused by v2).
Fam Zheng (3):
file-posix: Skip effectiveless OFD lock operations
file-posix: Drop s->lock_fd
tests: Add unit tests for image locking
block/file-posi
On Wed, 10/10 13:19, Paolo Bonzini wrote:
> On 09/10/2018 21:37, John Snow wrote:
> >
> >
> > On 08/14/2018 02:27 AM, Paolo Bonzini wrote:
> >> nvme_poll_queues is already protected by q->lock, and
> >> AIO callbacks are invoked outside the AioContext lock.
> >> So remove the acquire/release
On Wed, 10/10 13:19, Paolo Bonzini wrote:
> On 09/10/2018 21:37, John Snow wrote:
> >
> >
> > On 08/14/2018 02:27 AM, Paolo Bonzini wrote:
> >> nvme_poll_queues is already protected by q->lock, and
> >> AIO callbacks are invoked outside the AioContext lock.
> >> So remove the acquire/release
On Fri, 10/05 10:00, yuchenlin wrote:
> Ping?
Hi,
This was merged as 51b3c6b73acae1e3fd3c7d441fc86dd17356695f.
Fam
>
> On 2018-09-13 16:34, Fam Zheng wrote:
> > On Thu, 09/13 16:29, yuchen...@synology.com wrote:
> > > From: yuchenlin
> > >
> > > T
hared_perm = shared_perm;
> blk_set_enable_write_cache(blk, true);
>
> +blk->on_read_error = BLOCKDEV_ON_ERROR_REPORT;
> +blk->on_write_error = BLOCKDEV_ON_ERROR_ENOSPC;
> +
> block_acct_init(&blk->stats);
>
> notifier_list_init(&blk->remove_bs_notifiers);
> --
> 2.13.6
>
>
Reviewed-by: Fam Zheng
On Tue, 09/25 09:37, Markus Armbruster wrote:
> Do we want to have a dedicated VHDX driver submaintainer again? Fam,
> you're maintaining VMDK, could you cover VHDX as well?
I don't know a lot about VHDX internals. Considering my capacity at the moment
I'd rather not take this one.
Fam
om/codyprime/qemu-kvm-jtc.git block
> >
> > CURL
> > -M: Jeff Cody
> > L: qemu-block@nongnu.org
> > S: Supported
> > F: block/curl.c
> > -T: git git://github.com/codyprime/qemu-kvm-jtc.git block
>
> Likewise.
>
> > GLUSTER
> > -M: Jeff Cody
> > L: qemu-block@nongnu.org
> > S: Supported
> > F: block/gluster.c
> > -T: git git://github.com/codyprime/qemu-kvm-jtc.git block
>
> Likewise.
>
> > Null Block Driver
> > M: Fam Zheng
>
Block drivers without an M: naturally fall under the overall maintainership of
the block layer (Kevin), so IMO keeping the statuses is fine. Maybe CURL can be
downgraded to Maintained, though.
Fam
ge.
Hoist the error_append_hint to the caller of raw_check_lock_bytes where
file name is known, and include it in the error hint.
Signed-off-by: Fam Zheng
---
block/file-posix.c | 10 +++--
tests/qemu-iotests/153.out | 76 +++---
tests/qemu-iotests/182.out
On Tue, 08/21 08:58, Fam Zheng wrote:
> v4: Fix test on systems without OFD. [Patchew]
Ping?
>
> The first patch reduces chances of QEMU crash in unusual (but not unlikely)
> cases especially when used by Libvirt (see commit message).
>
> The second patch halves fd for imag
On Wed, 09/12 19:10, Paolo Bonzini wrote:
> Patch 1 fixes a too-strict assertion that could fire when aio_poll
> is called in parallel with aio_set_fd_handler.
>
> Patch 2 and 3 reinstate the performance benefits of polling, which were
> essentially disabled by commit 70232b5253 ("aio-posix:
On Mon, 09/17 10:55, Thomas Huth wrote:
> On 2018-09-17 10:31, Fam Zheng wrote:
> > This option is added together with scsi-disk but is never honoured,
> > because we don't emulate the VPD page for scsi-block. We could intercept
> > and inject the user specified value
On Thu, 09/13 18:59, Kevin Wolf wrote:
> Am 13.09.2018 um 17:10 hat Paolo Bonzini geschrieben:
> > On 13/09/2018 14:52, Kevin Wolf wrote:
> > > + if (qemu_get_current_aio_context() == qemu_get_aio_context()) {
> > > + /* If we are in the main thread, the callback is allowed to unref
> > > + * the
ter_sector is aligned
> to sector, the last one should be like this, too. Third, it ease
> reading with sector based I/Os.
>
> Signed-off-by: yuchenlin
Reviewed-by: Fam Zheng
On Thu, 09/13 10:29, Paolo Bonzini wrote:
> On 13/09/2018 08:56, Fam Zheng wrote:
> >> +/* No need to order poll_disable_cnt writes against other updates;
> >> + * the counter is only used to avoid wasting time and latency on
> >> + * iterate
On Thu, 09/13 15:47, yuchenlin wrote:
> On 2018-09-13 10:54, Fam Zheng wrote:
> > On Thu, 09/13 10:31, yuchen...@synology.com wrote:
> > > From: yuchenlin
> > >
> > > There is a rare case which the size of last compressed cluster
> > > is larger tha
er) {
> +node->io_poll(node->opaque)) {
> *timeout = 0;
> -progress = true;
> +if (node->opaque != &ctx->notifier) {
> +progress = true;
> +}
> }
>
> /* Caller handles freeing deleted nodes. Don't do it here. */
> --
> 2.17.1
>
Reviewed-by: Fam Zheng
g.txt for syntax documentation.
>
> # util/aio-posix.c
> -run_poll_handlers_begin(void *ctx, int64_t max_ns) "ctx %p max_ns %"PRId64
> -run_poll_handlers_end(void *ctx, bool progress) "ctx %p progress %d"
> +run_poll_handlers_begin(void *ctx, int64_t max_ns, int64_t timeout) "ctx %p
> max_ns %"PRId64 " timeout %"PRId64
> +run_poll_handlers_end(void *ctx, bool progress, int64_t timeout) "ctx %p
> progress %d new timeout %"PRId64
> poll_shrink(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new
> %"PRId64
> poll_grow(void *ctx, int64_t old, int64_t new) "ctx %p old %"PRId64" new
> %"PRId64
>
> --
> 2.17.1
>
>
Reviewed-by: Fam Zheng
c_read(>poll_disable_cnt));
>
> trace_run_poll_handlers_end(ctx, progress);
>
> @@ -552,7 +556,7 @@ static bool run_poll_handlers(AioContext *ctx, int64_t
> max_ns)
> */
> static bool try_poll_mode(AioContext *ctx, bool blocking)
> {
> - if (blocking && ctx->poll_max_ns && ctx->poll_disable_cnt == 0) {
> +if (blocking && ctx->poll_max_ns &&
> !atomic_read(&ctx->poll_disable_cnt)) {
> /* See qemu_soonest_timeout() uint64_t hack */
> int64_t max_ns = MIN((uint64_t)aio_compute_timeout(ctx),
> (uint64_t)ctx->poll_ns);
> --
> 2.17.1
>
>
Reviewed-by: Fam Zheng
On Thu, 09/13 10:31, yuchen...@synology.com wrote:
> From: yuchenlin
>
> There is a rare case which the size of last compressed cluster
> is larger than the cluster size, which will cause the file is
> not aligned at the sector boundary.
The code looks good to me. Can you also explain why it is
On Wed, 09/12 17:52, yuchenlin wrote:
>
> Fam Zheng 於 2018-09-12 17:34 寫道:
> > On Tue, 08/28 11:17, yuchen...@synology.com wrote: > From: yuchenlin
> > > > There is a rare case which the size of last
> > compressed cluster > is larger than the cluster
On Wed, 09/12 13:11, Paolo Bonzini wrote:
> On 12/09/2018 03:31, Fam Zheng wrote:
> >>>
> >>> ctx is qemu_aio_context here, so there's no interaction with IOThread.
> >> In this case, it should be okay to have the reentrancy, what is the bug
> >> th
On Tue, 08/28 11:17, yuchen...@synology.com wrote:
> From: yuchenlin
>
> There is a rare case which the size of last compressed cluster
> is larger than the cluster size, which will cause the file is
> not aligned at the sector boundary.
I don't understand. Doesn't it mean that if you force the
On Wed, 09/05 11:33, Sergio Lopez wrote:
> AIO Coroutines shouldn't by managed by an AioContext different than the
> one assigned when they are created. aio_co_enter avoids entering a
> coroutine from a different AioContext, calling aio_co_schedule instead.
>
> Scheduled coroutines are then
*opaque)
>
> /* Protected by write barrier in qemu_aio_coroutine_enter */
> atomic_set(&co->scheduled, NULL);
> -qemu_coroutine_enter(co);
> +qemu_aio_coroutine_enter(ctx, co);
> aio_context_release(ctx);
> }
> }
> --
> 2.17.0
>
Reviewed-by: Fam Zheng
On Tue, 09/11 17:30, Paolo Bonzini wrote:
> On 11/09/2018 16:12, Fam Zheng wrote:
> > On Tue, 09/11 13:32, Paolo Bonzini wrote:
> >> On 10/09/2018 16:56, Fam Zheng wrote:
> >>> We have this unwanted call stack:
> >>>
> >>> > ...
>
On Tue, 09/11 13:32, Paolo Bonzini wrote:
> On 10/09/2018 16:56, Fam Zheng wrote:
> > We have this unwanted call stack:
> >
> > > ...
> > > #13 0x5586602b7793 in virtio_scsi_handle_cmd_vq
> > > #14 0x5586602b8d66 in virtio_scsi_data_plane
would cause a possible AIO_WAIT_WHILE() in
> the callback to hang.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
ll(), which waits for block jobs to reach a quiescent
> state.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
won't
> deadlock because they ignore the job, and outer drains will wait for the
> job to really reach a quiescent state because the callback is already
> running.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
On Fri, 09/07 18:15, Kevin Wolf wrote:
> When starting an active commit job, other callbacks can run before
> mirror_start_job() calls bdrv_ref() where needed and cause the nodes to
> go away. Add another pair of bdrv_ref/unref() around it to protect
> against this case.
>
> Signed-off-by: Kevin
leting while the callback hasn't actually completed
> yet.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
BlockBackend before decreasing the refcount to 0.
> Assert in blk_ref() that it never takes the first refcount (which would
> mean that the BlockBackend is already being deleted).
>
> Signed-off-by: Kevin Wolf
Good one!
Reviewed-by: Fam Zheng
new requests.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
d and should then be able to kick a drain
> in the main loop context.
We can now move the atomic_inc/atomic_dec pair outside the if/else block,
but that's cosmetic.
Reviewed-by: Fam Zheng
>
> Signed-off-by: Kevin Wolf
> ---
> include/block/aio-wait.h | 2 ++
> 1 file changed, 2
On Fri, 09/07 18:15, Kevin Wolf wrote:
> bdrv_do_drained_begin/end() assume that they are called with the
> AioContext lock of bs held. If we call drain functions from a coroutine
> with the AioContext lock held, we yield and schedule a BH to move out of
> coroutine context. This means that the
. This would cause AIO_WAIT_WHILE() to hang.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
> -while (!job->deferred_to_main_loop && !job_is_completed(job)) {
> -job_drain(job);
> -}
> -while (!job_is_completed(job)) {
> -aio_poll(qemu_get_aio_context(), true);
> -}
> +
> +AIO_WAIT_WHILE(_wait, job->aio_context,
> +
On Fri, 09/07 18:15, Kevin Wolf wrote:
> All callers in QEMU proper hold the AioContext lock when calling
> job_finish_sync(). test-blockjob should do the same.
I think s/job_finish_sync/job_cancel_sync/ in the subject is more accurate?
Reviewed-by: Fam Zheng
>
> Signed-off-by
On Fri, 09/07 18:15, Kevin Wolf wrote:
> This extends the existing drain test with a block job to include
> variants where the block job runs in a different AioContext.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
> threads.
>
> Signed-off-by: Kevin Wolf
Reviewed-by: Fam Zheng
With this patch, virtio_queue_empty will now return 1 as soon as the
vdev is marked as broken, after a "virtio: zero sized buffers are not
allowed" error.
To be consistent, update virtio_queue_empty_rcu as well.
Signed-off-by: Fam Zheng
---
hw/virtio/virtio.c | 8
1 file
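The behaviour the patch above describes can be modelled with a toy sketch
(made-up struct and names, not the real hw/virtio/virtio.c code): once the
device is marked broken, the empty-check reports "empty" so handlers stop
trying to pop buffers from a wedged queue.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for a virtqueue: broken flag plus avail/last_avail
 * indices standing in for the guest-visible ring state. */
typedef struct {
    bool broken;
    unsigned avail;       /* index the guest has published up to */
    unsigned last_avail;  /* index the device has consumed up to */
} ToyVirtQueue;

static int toy_virtio_queue_empty(const ToyVirtQueue *vq)
{
    if (vq->broken) {
        return 1;  /* pretend empty: never pop from a broken device */
    }
    return vq->avail == vq->last_avail;
}
```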
0558660227148 in qemu_kvm_cpu_thread_fn
> #34 0x55866078bde7 in qemu_thread_start
> #35 0x7f5784906594 in start_thread
> #36 0x7f5784639e6f in clone
Avoid it with the aio_disable_external/aio_enable_external pair, so that
no vq poll handlers can be called in aio_wait_bh_oneshot.
due to virtio_error; the handler shouldn't be called in that situation in
the first place.
Fam Zheng (2):
virtio: Return true from virtio_queue_empty if broken
virtio-scsi/virtio-blk: Disable poll handlers when stopping vq handler
hw/block/dataplane/virtio-blk.c | 2 ++
hw/scsi/virtio-scsi
On Fri, 09/07 17:51, Kevin Wolf wrote:
> Am 09.08.2018 um 15:22 hat Fam Zheng geschrieben:
> > Furthermore, blocking aio_poll is only allowed on home thread
> > (in_aio_context_home_thread), because otherwise two blocking
> > aio_poll()'s can steal each other's ctx->
On Fri, 08/24 10:43, Fam Zheng wrote:
> All callers have acquired ctx already. Doing that again results in
> aio_poll() hang. This fixes the problem that a BDRV_POLL_WHILE() in the
> callback cannot make progress because ctx is recursively locked, for
> example, when drive-bac