Quick drive-by comment:
Kevin Wolf writes:
[...]
> Let me try to just consolidate all of the above into a single state
> machine:
>
> 1. CREATED --> RUNNING
> driver callback: .start
> 2a. RUNNING --> READY | CANCELLED
> via: auto transition (when bulk copy is
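The transitions that survive in this excerpt (CREATED to RUNNING via the driver's .start callback, then RUNNING to READY or CANCELLED) can be sketched as a validity table. This is a hypothetical illustration; the enum and function names are assumptions, not QEMU's actual job API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the block job state machine outlined above;
 * names are illustrative, not QEMU's real API. */
typedef enum {
    JOB_CREATED,
    JOB_RUNNING,
    JOB_READY,
    JOB_CANCELLED,
} JobState;

/* Transitions visible in the excerpt: CREATED -> RUNNING (driver
 * callback .start), then RUNNING -> READY or CANCELLED. */
static bool job_transition_valid(JobState from, JobState to)
{
    switch (from) {
    case JOB_CREATED:
        return to == JOB_RUNNING;
    case JOB_RUNNING:
        return to == JOB_READY || to == JOB_CANCELLED;
    default:
        return false;
    }
}
```

Centralizing the table this way makes illegal transitions (e.g. READY back to RUNNING) detectable with a single check.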
Jan Dakinevich writes:
> On 10/03/2017 05:02 PM, Eric Blake wrote:
>> On 10/03/2017 07:47 AM, Jan Dakinevich wrote:
>>> The command is intended for gathering virtio information such as status,
>>> feature bits, negotiation status. It is convenient and useful for
On 10/05/2017 05:36 AM, Paolo Bonzini wrote:
> On 05/10/2017 12:02, Vladimir Sementsov-Ogievskiy wrote:
>> 03.10.2017 17:06, Paolo Bonzini wrote:
>>> On 03/10/2017 15:35, Vladimir Sementsov-Ogievskiy wrote:
>> In the end this probably means that you have a read_chunk_header
>> function and
Add a test for qcow2 copy-on-read behavior, including exposure
for the just-fixed bugs.
The copy-on-read behavior is always to a qcow2 image, but the
test is careful to allow running with most image protocol/format
combos as the backing file being copied from (luks being the
exception, as it is
Improve our braindead copy-on-read implementation. Pre-patch,
we have multiple issues:
- we create a bounce buffer and perform a write for the entire
request, even if the active image already has 99% of the
clusters occupied, and really only needs to copy-on-read the
remaining 1% of the clusters
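The 99%/1% point above can be sketched as a cluster-granularity walk: instead of bounce-buffering the whole request, only clusters the active image does not yet hold are copied. The helper, the allocation bitmap, and the 64k cluster size are assumptions for illustration, not the patch's actual code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: given a per-cluster allocation map for the
 * request range, count how many bytes actually need copy-on-read.
 * Cluster size is an illustrative assumption. */
#define CLUSTER_SIZE 65536

static size_t cor_bytes_to_copy(const bool *allocated, size_t nb_clusters)
{
    size_t bytes = 0;
    for (size_t i = 0; i < nb_clusters; i++) {
        if (!allocated[i]) {
            /* only clusters missing from the active image are copied */
            bytes += CLUSTER_SIZE;
        }
    }
    return bytes;
}
```

With 99% of clusters already allocated, the write shrinks to the remaining 1% rather than the full request.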
Make it possible to inject errors on writes performed during a
read operation due to copy-on-read semantics.
Signed-off-by: Eric Blake
Reviewed-by: Jeff Cody
Reviewed-by: Kevin Wolf
Reviewed-by: John Snow
Reviewed-by:
Handle a 0-length block status request up front, with a uniform
return value claiming the area is not allocated.
Most callers don't pass a length of 0 to bdrv_get_block_status()
and friends; but it definitely happens with a 0-length read when
copy-on-read is enabled. While we could audit all
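The up-front handling described above amounts to a fast path before any driver is consulted. A minimal sketch, with an illustrative flag value rather than QEMU's real BDRV_BLOCK_* constants:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the 0-length fast path: a zero-byte status
 * query uniformly reports "not allocated" before any per-driver
 * lookup runs.  The flag value is illustrative. */
#define STATUS_ALLOCATED 0x1

static int64_t block_status(int64_t offset, int64_t bytes)
{
    (void)offset;
    if (bytes == 0) {
        return 0;               /* uniformly: not allocated */
    }
    /* ... normal per-driver status lookup would go here ... */
    return STATUS_ALLOCATED;
}
```

This keeps every driver from having to reason about a degenerate zero-length range.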
During my quest to switch block status to be byte-based, John
forced me to evaluate whether we have a situation during
copy-on-read where we could exceed BDRV_REQUEST_MAX_BYTES [1].
Sure enough, we have a number of pre-existing bugs in the
copy-on-read code. Fix those, along with adding a test.
Executing qemu with a terminal as stdin will temporarily alter stty
settings on that terminal (for example, disabling echo), because of
how we run both the monitor and any multiplexing with guest input.
Normally, qemu restores the original settings on exit; but if an
iotest triggers qemu to abort
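The save/alter/restore pattern being described is the standard POSIX termios one: remember the original settings, clear e.g. the echo flag, and put the settings back on a clean exit; an abort() skips the restore path, which is the leak the iotest can trigger. A minimal sketch (error handling omitted; the helper names are assumptions):

```c
#include <assert.h>
#include <termios.h>
#include <unistd.h>

/* Hypothetical sketch of the pattern described above. */
static struct termios saved_tty;

/* Pure helper: compute local-mode flags with echo disabled. */
static tcflag_t clear_echo(tcflag_t lflag)
{
    return lflag & ~(tcflag_t)(ECHO | ICANON);
}

static void tty_echo_off(int fd)
{
    struct termios t;
    tcgetattr(fd, &saved_tty);          /* remember original settings */
    t = saved_tty;
    t.c_lflag = clear_echo(t.c_lflag);  /* e.g. disable echo */
    tcsetattr(fd, TCSANOW, &t);
}

static void tty_restore(int fd)
{
    /* normally run from an atexit()/exit path; skipped on abort() */
    tcsetattr(fd, TCSANOW, &saved_tty);
}
```

An aborted process never reaches tty_restore(), leaving the terminal with echo off, which is why the surviving shell appears broken until `stty sane`.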
Make it easier to enable copy-on-read during iotests, by
exposing a new bool option to main and open.
Signed-off-by: Eric Blake
Reviewed-by: Jeff Cody
Reviewed-by: Kevin Wolf
Reviewed-by: John Snow
Reviewed-by: Stefan
Nikolay: You mentioned a while ago that you had issues with incremental
backup's eventual return status being unknown. Can you please elaborate
for me why this is a problem?
I assume due to the long running of a backup job it's entirely possible
to imagine losing connection to QEMU and missing
On 10/04/2017 07:00 PM, Eric Blake wrote:
> On 10/04/2017 09:26 AM, Jan Dakinevich wrote:
>
>> +{
>> +'struct': 'VirtioInfo',
>> +'data': {
>> +'feature-names': ['VirtioInfoBit'],
>
> Why is feature-names listed at two different nestings of the return
Am 25.09.2017 um 22:19 hat Eric Blake geschrieben:
> On 09/25/2017 07:28 AM, Kevin Wolf wrote:
> > +"data": {
> > +"compat": "1.1",
>
> You should make the test specifically exclude compat=0.10 images, or
> else have further filtering in place
On Tue, Oct 03, 2017 at 08:43:46PM -0500, Eric Blake wrote:
> Improve our braindead copy-on-read implementation. Pre-patch,
> we have multiple issues:
> - we create a bounce buffer and perform a write for the entire
> request, even if the active image already has 99% of the
> clusters occupied,
Am 04.10.2017 um 03:43 hat Eric Blake geschrieben:
> Improve our braindead copy-on-read implementation. Pre-patch,
> we have multiple issues:
> - we create a bounce buffer and perform a write for the entire
> request, even if the active image already has 99% of the
> clusters occupied, and really
On 10/05/2017 09:44 AM, Eric Blake wrote:
>>
>> Aside from the 2GB request issue:
>
> I'm wondering if it is easy enough to just capture the qemu-io output
> into a temporary holding area, grep that for success or OOM, then skip
> the test on OOM (for small machines) or log the success (for beefy
On 10/05/2017 09:41 AM, Stefan Hajnoczi wrote:
> On Tue, Oct 03, 2017 at 08:43:47PM -0500, Eric Blake wrote:
>> Add a test for qcow2 copy-on-read behavior, including exposure
>> for the just-fixed bugs.
>>
>> The copy-on-read behavior is always to a qcow2 image, but the
>> test is careful to allow
On 10/05/2017 09:35 AM, Stefan Hajnoczi wrote:
> On Tue, Oct 03, 2017 at 08:43:44PM -0500, Eric Blake wrote:
>> Handle a 0-length block status request up front, with a uniform
>> return value claiming the area is not allocated.
>>
>> Most callers don't pass a length of 0 to bdrv_get_block_status()
On Tue, Oct 03, 2017 at 08:43:47PM -0500, Eric Blake wrote:
> Add a test for qcow2 copy-on-read behavior, including exposure
> for the just-fixed bugs.
>
> The copy-on-read behavior is always to a qcow2 image, but the
> test is careful to allow running with most image protocol/format
> combos as
On Tue, Oct 03, 2017 at 08:43:44PM -0500, Eric Blake wrote:
> Handle a 0-length block status request up front, with a uniform
> return value claiming the area is not allocated.
>
> Most callers don't pass a length of 0 to bdrv_get_block_status()
> and friends; but it definitely happens with a
xen-pt doesn't set the is_express field, but is supposed to be
able to handle PCI Express devices too. Mark it as hybrid.
Suggested-by: Jan Beulich
Signed-off-by: Eduardo Habkost
---
hw/xen/xen_pt.c | 1 +
1 file changed, 1 insertion(+)
diff --git
Am 05.10.2017 um 03:46 hat John Snow geschrieben:
> On 10/04/2017 02:27 PM, Kevin Wolf wrote:
> > Am 04.10.2017 um 03:52 hat John Snow geschrieben:
> >> For jobs that complete when a monitor isn't looking, there's no way to
> >> tell what the job's final return code was. We need to allow jobs to
>
On 05/10/2017 12:02, Vladimir Sementsov-Ogievskiy wrote:
> 03.10.2017 17:06, Paolo Bonzini wrote:
>> On 03/10/2017 15:35, Vladimir Sementsov-Ogievskiy wrote:
> In the end this probably means that you have a read_chunk_header
> function and a read_chunk function. READ has a loop that calls
On Wed 04 Oct 2017 05:25:49 PM CEST, Max Reitz wrote:
> Besides the macro itself, this patch also adds a corresponding
> Coccinelle rule.
>
> Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Berto
03.10.2017 17:06, Paolo Bonzini wrote:
On 03/10/2017 15:35, Vladimir Sementsov-Ogievskiy wrote:
In the end this probably means that you have a read_chunk_header
function and a read_chunk function. READ has a loop that calls
read_chunk_header followed by direct reading into the QEMUIOVector,
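The loop being proposed, a READ reply parsed as a sequence of header-prefixed chunks with each payload read straight into the caller's buffer instead of a bounce buffer, can be sketched roughly as follows. The struct layout and the "final" flag are illustrative assumptions, not the actual NBD wire format.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the chunked-read loop described above. */
typedef struct {
    uint32_t length;    /* payload bytes announced by this chunk */
    bool     is_final;  /* last chunk of the reply */
} ChunkHeader;

/* Walk the chunk headers of one reply; the real code would read each
 * hdr.length payload directly into the QEMUIOVector at the chunk's
 * offset.  Returns total payload bytes consumed. */
static size_t read_all_chunks(const ChunkHeader *chunks, size_t n)
{
    size_t total = 0;
    for (size_t i = 0; i < n; i++) {
        total += chunks[i].length;
        if (chunks[i].is_final) {
            break;          /* reply complete */
        }
    }
    return total;
}
```

Splitting read_chunk_header out of read_chunk keeps the READ path free of intermediate buffers while other chunk types can still fall back to a generic read_chunk.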
12.09.2017 12:46, Kevin Wolf wrote:
Am 11.09.2017 um 18:51 hat Vladimir Sementsov-Ogievskiy geschrieben:
Hi Kevin!
I'm confused with relations of permissions and invalidation, can you please
help?
Now dirty bitmaps are loaded in invalidate_cache. Here is a problem with
migration:
1.