bdrv_set_aio_context_ignore() can only work in the main loop:
bdrv_drained_begin() only works in the main loop and the node's (old)
AioContext; and bdrv_drained_end() really only works in the main loop
and the node's (new) AioContext (contrary to its current comment, which
is just wrong).
Decrementing drained_end_counter after bdrv_dec_in_flight() (which in
turn invokes bdrv_wakeup() and thus aio_wait_kick()) is not very clever.
We should decrement it beforehand, so that any waiting aio_poll() that
is woken by bdrv_dec_in_flight() sees the decremented
drained_end_counter.
Because
From: Maxim Levitsky
Currently the driver hardcodes the sector size to 512
and doesn't check the underlying device. Fix that.
Also fail if the underlying NVMe device is formatted with metadata,
as this needs special support.
Signed-off-by: Maxim Levitsky
Message-id:
From: Maxim Levitsky
Fix the math involving a non-standard doorbell stride.
Signed-off-by: Maxim Levitsky
Reviewed-by: Max Reitz
Message-id: 20190716163020.13383-2-mlevi...@redhat.com
Signed-off-by: Max Reitz
---
block/nvme.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
From: Maxim Levitsky
Completion entries are meant to be only read by the host and written by the
device.
The driver is supposed to scan the completions from the last point
where it left off,
until it sees a completion with a non-flipped phase bit.
Signed-off-by: Maxim Levitsky
Reviewed-by: Max
The following changes since commit 23da9e297b4120ca9702cabec91599a44255fe96:
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190722'
into staging (2019-07-22 15:16:48 +0100)
are available in the Git repository at:
https://github.com/XanClic/qemu.git tags/pull-block
On 7/22/19 8:17 AM, Fabian Grünbichler wrote:
> On Tue, Jul 09, 2019 at 07:25:32PM -0400, John Snow wrote:
>> This series adds a new "BITMAP" sync mode that is meant to replace the
>> existing "INCREMENTAL" sync mode.
>>
>> This mode can have its behavior modified by issuing any of three bitmap
On 22.07.19 15:30, Max Reitz wrote:
> Hi,
>
> I noted that test-bdrv-drain sometimes hangs (very rarely, though), and
> tried to write a test that triggers the issue. I failed to do so (there
> is a good reason for that, see patch 1), but on my way I noticed that
> calling
On Mon, 22 Jul 2019 at 17:51, Laszlo Ersek wrote:
>
> On 07/19/19 18:19, Philippe Mathieu-Daudé wrote:
> > Hi Laszlo,
> >
> > On 7/18/19 9:35 PM, Philippe Mathieu-Daudé wrote:
> >> On 7/18/19 8:38 PM, Laszlo Ersek wrote:
> >>> Regression-tested-by: Laszlo Ersek
> >
> > Patchwork doesn't
The following changes since commit 9d2e1fcd14c2bae5be1992214a03c0ddff714c80:
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
(2019-07-22 13:20:49 +0100)
are available in the Git repository at:
https://gitlab.com/philmd/qemu.git tags/pflash-next-20190722
GCC9 is confused by this comment when building with CFLAG
-Wimplicit-fallthrough=2:
hw/block/pflash_cfi02.c: In function ‘pflash_write’:
hw/block/pflash_cfi02.c:574:16: error: this statement may fall through
[-Werror=implicit-fallthrough=]
574 | if (boff == 0x55 && cmd ==
To avoid incoherent states when the machine resets (see bug report
below), add the device reset callback.
A "system reset" sets the device state machine in READ_ARRAY mode
and, after some delay, sets the SR.7 READY bit.
Since we do not model timings, we set the SR.7 bit directly.
Fixes:
On 7/18/19 12:48 PM, Philippe Mathieu-Daudé wrote:
> To avoid incoherent states when the machine resets (see bug report
> below), add the device reset callback.
>
> A "system reset" sets the device state machine in READ_ARRAY mode
> and, after some delay, sets the SR.7 READY bit.
>
> Since we do
On 7/22/19 6:51 PM, Laszlo Ersek wrote:
> On 07/19/19 18:19, Philippe Mathieu-Daudé wrote:
>> Hi Laszlo,
>>
>> On 7/18/19 9:35 PM, Philippe Mathieu-Daudé wrote:
>>> On 7/18/19 8:38 PM, Laszlo Ersek wrote:
On 07/18/19 17:03, Laszlo Ersek wrote:
> On 07/18/19 12:48, Philippe Mathieu-Daudé
On 07/19/19 18:19, Philippe Mathieu-Daudé wrote:
> Hi Laszlo,
>
> On 7/18/19 9:35 PM, Philippe Mathieu-Daudé wrote:
>> On 7/18/19 8:38 PM, Laszlo Ersek wrote:
>>> On 07/18/19 17:03, Laszlo Ersek wrote:
On 07/18/19 12:48, Philippe Mathieu-Daudé wrote:
> To avoid incoherent states when the
Signed-off-by: Max Reitz
---
tests/test-bdrv-drain.c | 167
1 file changed, 167 insertions(+)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 03fa1142a1..1600d41e9a 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
We already have 030 for that in general, but this tests very specific
cases of both jobs finishing concurrently.
Signed-off-by: Max Reitz
---
tests/qemu-iotests/258 | 163 +
tests/qemu-iotests/258.out | 33
tests/qemu-iotests/group | 1 +
3
Add a test for what happens when you call bdrv_replace_child_noperm()
for various drain situations ({old,new} child {drained,not drained}).
Most importantly, if both the old and the new child are drained, the
parent must not be undrained at any point.
Signed-off-by: Max Reitz
---
bdrv_drop_intermediate() calls BdrvChildRole.update_filename(). That
may poll, thus changing the graph, which potentially breaks the
QLIST_FOREACH_SAFE() loop.
Just keep the whole subtree drained. This is probably the right thing
to do anyway (dropping nodes while the subtree is not drained
Currently, bdrv_replace_child_noperm() undrains the parent until it is
completely undrained, then re-drains it after attaching the new child
node.
This is a problem with bdrv_drop_intermediate(): We want to keep the
whole subtree drained, including parents, while the operation is
under way.
I think the patches speak for themselves now.
(The title of this series alludes to what the iotest added in the final
patch tests.)
v3:
- Rebased on master
- Added two tests to test-bdrv-drain [Kevin]
- Removed new iotest from auto [Thomas]
git-backport-diff against v2:
Key:
[] : patches
Hi,
I noted that test-bdrv-drain sometimes hangs (very rarely, though), and
tried to write a test that triggers the issue. I failed to do so (there
is a good reason for that, see patch 1), but on my way I noticed that
calling bdrv_set_aio_context_ignore() from any AioContext but the main
one is
Decrementing drained_end_counter after bdrv_dec_in_flight() (which in
turn invokes bdrv_wakeup() and thus aio_wait_kick()) is not very clever.
We should decrement it beforehand, so that any waiting aio_poll() that
is woken by bdrv_dec_in_flight() sees the decremented
drained_end_counter.
Because
bdrv_set_aio_context_ignore() can only work in the main loop:
bdrv_drained_begin() only works in the main loop and the node's (old)
AioContext; and bdrv_drained_end() really only works in the main loop
and the node's (new) AioContext (contrary to its current comment, which
is just wrong).
Not sure if it has been reported before, but test 059 currently fails:
059 fail [14:55:21] [14:55:26] output mismatch (see 059.out.bad)
--- /home/thuth/devel/qemu/tests/qemu-iotests/059.out 2019-07-19
10:19:18.0 +0200
+++
On Tue, Jul 09, 2019 at 07:25:32PM -0400, John Snow wrote:
> This series adds a new "BITMAP" sync mode that is meant to replace the
> existing "INCREMENTAL" sync mode.
>
> This mode can have its behavior modified by issuing any of three bitmap sync
> modes, passed as arguments to the job.
>
>
On 21/07/19 10:08, l00284672 wrote:
> commit a6f230c moves the blockbackend back to the main AioContext on
> unplug. It sets the AioContext of
> the SCSIDevice to the main AioContext, but s->ctx is still the iothread
> AioContext (if the SCSI controller
> is configured with an iothread). So if there are having
On 7/19/19 3:14 PM, Philippe Mathieu-Daudé wrote:
> GCC9 is confused by this comment when building with CFLAG
> -Wimplicit-fallthrough=2:
>
> hw/block/pflash_cfi02.c: In function ‘pflash_write’:
> hw/block/pflash_cfi02.c:574:16: error: this statement may fall through
>
On 19/07/2019 20:20, Eric Blake wrote:
> We've had two separate reports of different callers running into use
> of uninitialized data if s->quit is set (one detected by gcc -O3,
> another by valgrind), due to checking 'nbd_reply_is_simple(reply) ||
> s->quit' in the wrong order. Rather than
On Mon, 2019-07-22 at 10:15 +0100, Daniel P. Berrangé wrote:
> On Sun, Jul 21, 2019 at 09:15:08PM +0300, Maxim Levitsky wrote:
> > Currently we print a message like this:
> >
> > "
> > new_file.qcow2 : error message
> > "
> >
> > However the error could have come from opening the backing file (e.g
On Mon, 2019-07-22 at 11:41 +0200, Kevin Wolf wrote:
> Am 21.07.2019 um 20:15 hat Maxim Levitsky geschrieben:
> > Currently we print a message like this:
> >
> > "
> > new_file.qcow2 : error message
> > "
> >
> > However the error could have come from opening the backing file (e.g. when
> > it
On Thu, Jul 18, 2019 at 07:00:37AM +0200, Philippe Mathieu-Daudé wrote:
> Cc'ing qemu-block@
>
> On 7/18/19 5:25 AM, no-re...@patchew.org wrote:
> > Patchew URL:
> > https://patchew.org/QEMU/20190717094728.31006-1-pbonz...@redhat.com/
> [...]> time make docker-test-debug@fedora
Am 21.07.2019 um 20:15 hat Maxim Levitsky geschrieben:
> Currently we print a message like this:
>
> "
> new_file.qcow2 : error message
> "
>
> However the error could have come from opening the backing file (e.g. when it
> is missing encryption keys),
> thus try to clarify this by using this format:
On Sun, Jul 21, 2019 at 09:15:08PM +0300, Maxim Levitsky wrote:
> Currently we print a message like this:
>
> "
> new_file.qcow2 : error message
> "
>
> However the error could have come from opening the backing file (e.g. when it
> is missing encryption keys),
> thus try to clarify this by using
On Fri, 19 Jul 2019 at 14:43, Kevin Wolf wrote:
>
> The following changes since commit 0274f45bdef73283f2c213610f11d4e5dcba43b6:
>
> Merge remote-tracking branch
> 'remotes/vivier2/tags/linux-user-for-4.1-pull-request' into staging
> (2019-07-19 09:44:43 +0100)
>
> are available in the Git
On Sun, Jul 21, 2019 at 09:15:07PM +0300, Maxim Levitsky wrote:
> Currently if you attempt to create too large a file with LUKS you
> get the following error message:
>
> Formatting 'test.luks', fmt=luks size=17592186044416 key-secret=sec0
> qemu-img: test.luks: Could not resize file: File too