On 6/18/19 6:43 AM, Vladimir Sementsov-Ogievskiy wrote:
> Reconnect will be implemented in the following commit, so for now,
> in the semantics below, a disconnect itself is a "serious error".
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> Reviewed-by: Eric Blake
> ---
> qapi/block-core.json | 11
On 6/18/19 6:43 AM, Vladimir Sementsov-Ogievskiy wrote:
> To implement reconnect we need several states for the client:
> CONNECTED, QUIT and two different CONNECTING states. CONNECTING states
> will be added in the following patches. This patch implements CONNECTED
> and QUIT.
>
> QUIT means,
On 8/7/19 2:06 AM, Denis Plotnikov wrote:
> The patch allows providing a pattern file for the write
> command. There was no similar ability before.
>
> Signed-off-by: Denis Plotnikov
> ---
>
> +static void *qemu_io_alloc_from_file(BlockBackend *blk, size_t len,
> +
On 7/4/19 8:09 AM, Denis Plotnikov wrote:
> zstd significantly reduces cluster compression time.
> It provides better compression performance maintaining
> the same level of compression ratio in comparison with
> zlib, which, by the moment, has been the only compression
s/by/at/
> method
On 7/4/19 8:09 AM, Denis Plotnikov wrote:
> The patch adds some preparation parts for incompatible compression type
> feature to QCOW2 header that indicates that *all* compressed clusters
> must be (de)compressed using a certain compression type.
>
> It is implied that the compression type is set
On 8/7/19 6:12 PM, Max Reitz wrote:
>>
>> +static int check_compression_type(BDRVQcow2State *s, Error **errp)
>> +{
>> +    switch (s->compression_type) {
>> +    case QCOW2_COMPRESSION_TYPE_ZLIB:
>> +        break;
>> +
>> +    default:
>> +        error_setg(errp, "qcow2: unknown compression
On 8/5/19 12:46 PM, Vladimir Sementsov-Ogievskiy wrote:
> Test that hbitmap_next_zero and hbitmap_next_dirty_area can find things
> after old bitmap end.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
>
> It's a follow-up for
>
> [PATCH for-4.1] util/hbitmap: update orig_size on
On 7/31/19 6:29 AM, Vladimir Sementsov-Ogievskiy wrote:
> 30.07.2019 21:41, John Snow wrote:
>>
>>
>> On 7/30/19 12:32 PM, Vladimir Sementsov-Ogievskiy wrote:
>>> Hi all!
>>>
>>> Here are two small fixes.
>>>
>>> 01 is not a degradation at all, so it's OK for 4.2
>>> 02 is degradation of 3.0,
On 04.07.19 15:09, Denis Plotnikov wrote:
> The patch adds some preparation parts for incompatible compression type
> feature to QCOW2 header that indicates that *all* compressed clusters
> must be (de)compressed using a certain compression type.
>
> It is implied that the compression type is set
FYI: I rebased jsnow/bitmaps on top of kwolf/block-next, itself based on
top of v4.1.0-rc4.
I'll post this along with the eventual pull request, but here's the
diffstat against the published patches:
011/33:[0003] [FC] 'block/backup: upgrade copy_bitmap to BdrvDirtyBitmap'
016/33:[] [-C]
On 07.08.19 16:46, Kevin Wolf wrote:
> This fixes devices like IDE that can still start new requests from I/O
> handlers in the CPU thread while the block backend is drained.
>
> The basic assumption is that in a drain section, no new requests should
> be allowed through a BlockBackend
On 07.08.19 16:46, Kevin Wolf wrote:
> mirror_top_bs is currently implicitly drained through its connection to
> the source or the target node. However, the drain section for target_bs
> ends early after moving mirror_top_bs from src to target_bs, so that
> requests can already be restarted while
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> Use effective bdrv_dirty_bitmap_next_dirty_area interface.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/backup.c | 56 ++
> 1 file changed, 24 insertions(+), 32 deletions(-)
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> backup_cow_with_offload and backup_cow_with_bounce_buffer contain a
> lot of duplicated logic. Move it into backup_do_cow.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/backup.c | 83
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> backup_cow_with_offload can transfer more than one cluster. Let
> backup_cow_with_bounce_buffer behave similarly. It reduces the number
> of I/Os, and there is no need to copy cluster by cluster.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
>
On 8/7/19 11:00 AM, Paolo Bonzini wrote:
> On 07/08/19 19:49, Richard Henderson wrote:
>> On 8/7/19 1:33 AM, tony.ngu...@bt.com wrote:
>>> @@ -551,6 +551,7 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy,
>>> hwaddr addr,
>>> /* As length is under guest control, handle illegal
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> We shouldn't try to copy bytes beyond EOF. Fix it.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> ---
> block/backup.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Max Reitz
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> Limit block_status querying to request bounds on write notifier to
> avoid extra seeking.
I don’t understand this reasoning. Checking whether something is
allocated for qcow2 should just mean an L2 cache lookup. Which we have
to do anyway
On 07/08/19 19:49, Richard Henderson wrote:
> On 8/7/19 1:33 AM, tony.ngu...@bt.com wrote:
>> @@ -551,6 +551,7 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy,
>> hwaddr addr,
>> /* As length is under guest control, handle illegal values. */
>> return;
>> }
>> +
On 8/7/19 1:33 AM, tony.ngu...@bt.com wrote:
> @@ -551,6 +551,7 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy,
> hwaddr addr,
> /* As length is under guest control, handle illegal values. */
> return;
> }
> +/* FIXME: memory_region_dispatch_write ignores
On 8/7/19 1:33 AM, tony.ngu...@bt.com wrote:
> @@ -1246,7 +1246,7 @@ typedef uint64_t FullLoadHelper(CPUArchState *env,
> target_ulong addr,
>
> static inline uint64_t __attribute__((always_inline))
> load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
> -uintptr_t
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> copy_range ignores these limitations, let's improve it. block/backup
> code handles max_transfer for copy_range by itself; now it's no longer
> needed, so drop it.
Shouldn’t this be two separate patches?
> Signed-off-by: Vladimir
On 8/7/19 5:07 PM, Peter Maydell wrote:
> On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>>
>> This adds the reset-related sections for every QOM
>> device.
>
> A bit more detail in the commit message would help, I think --
> this is adding extra machinery which has to copy and modify
> the
On 8/7/19 5:18 PM, Peter Maydell wrote:
> On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>>
>> It adds the possibility to add two GPIOs to control the warm and cold reset.
>> With these IOs, the reset can be maintained for some time.
>> Each IO is associated with a state to detect level
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> write flags are constant, so let's store them in BackupBlockJob instead of
> recalculating. It also leaves two boolean fields unused, so
> drop them.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy
> Reviewed-by: John Snow
> ---
>
On 8/6/19 12:19 PM, Vladimir Sementsov-Ogievskiy wrote:
> 06.08.2019 19:09, Max Reitz wrote:
>> On 06.08.19 17:26, Vladimir Sementsov-Ogievskiy wrote:
>>> hbitmap_reset has an unobvious property: it rounds requested region up.
>>> It may provoke bugs, like in recently fixed write-blocking mode
On 8/7/19 1:32 AM, tony.ngu...@bt.com wrote:
> device_endian has been made redundant by MemOp.
>
> Signed-off-by: Tony Nguyen
> ---
> include/exec/cpu-common.h | 8
> 1 file changed, 8 deletions(-)
Reviewed-by: Richard Henderson
r~
On 8/7/19 1:31 AM, tony.ngu...@bt.com wrote:
> Simplify endianness comparisons with consistent use of the more
> expressive MemOp.
>
> Suggested-by: Richard Henderson
> Signed-off-by: Tony Nguyen
> ---
Reviewed-by: Richard Henderson
r~
On 8/7/19 8:59 AM, Richard Henderson wrote:
> On 8/7/19 1:31 AM, tony.ngu...@bt.com wrote:
>> + _mm_ops[end == DEVICE_LITTLE_ENDIAN ? 0 :
>> 1],
>
> This is of course "end != DEVICE_LITTLE_ENDIAN".
And by that I mean drop the ?: operator.
r~
On Wed, 31 Jul 2019 at 07:33, David Gibson wrote:
>
> On Mon, Jul 29, 2019 at 04:56:30PM +0200, Damien Hedde wrote:
> > Signed-off-by: Damien Hedde
> > +For Devices and Buses there are also corresponding helpers:
> > +void device_reset(Device *dev, bool cold)
> > +void bus_reset(Device *dev,
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
> +For Devices and Buses there are also corresponding helpers:
> +void device_reset(Device *dev, bool cold)
> +void bus_reset(Device *dev, bool cold
Just noticed, but the prototype here is wrong: bus_reset() takes
a BusState*, not a Device*.
On 8/7/19 1:31 AM, tony.ngu...@bt.com wrote:
> + _mm_ops[end == DEVICE_LITTLE_ENDIAN ? 0 :
> 1],
This is of course "end != DEVICE_LITTLE_ENDIAN".
r~
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> Signed-off-by: Damien Hedde
> ---
> docs/devel/reset.txt | 165 +++
> 1 file changed, 165 insertions(+)
> create mode 100644 docs/devel/reset.txt
>
> diff --git a/docs/devel/reset.txt
On 8/7/19 1:31 AM, tony.ngu...@bt.com wrote:
> Preparation to replace device_endian with MemOp.
>
> Mapping device_endian onto MemOp limits behaviour changes to this
> relatively smaller patch.
>
> The next patch will replace all device_endian usages with the
> equivalent MemOp. That patch will
On 8/7/19 1:30 AM, tony.ngu...@bt.com wrote:
> Temporarily no-op size_memop was introduced to aid the conversion of
> memory_region_dispatch_{read|write} operand "unsigned size" into
> "MemOp op".
>
> Now size_memop is implemented, again hard coded size but with
> MO_{8|16|32|64}. This is more
On 8/7/19 1:29 AM, tony.ngu...@bt.com wrote:
> Convert memory_region_dispatch_{read|write} operand "unsigned size"
> into a "MemOp op".
>
> Signed-off-by: Tony Nguyen
> ---
> include/exec/memop.h | 18 +-
> include/exec/memory.h | 9 +
> memory.c | 7
On 8/7/19 1:29 AM, tony.ngu...@bt.com wrote:
> The memory_region_dispatch_{read|write} operand "unsigned size" is
> being converted into a "MemOp op".
>
> Convert interfaces by using no-op size_memop.
>
> After all interfaces are converted, size_memop will be implemented
> and the
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> Replace the zynq_slcr registers enum and macros using the
> hw/registerfields.h macros.
>
> Signed-off-by: Damien Hedde
> Reviewed-by: Philippe Mathieu-Daudé
> Reviewed-by: Alistair Francis
> ---
> hw/misc/zynq_slcr.c | 472
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> Replace deprecated qdev/bus_reset_all by device/bus_reset_warm.
>
> This does not impact the behavior.
>
> Signed-off-by: Damien Hedde
I'll come back to patches 12-28 later. They're all ok
in principle, we just need to check that in each
On 8/7/19 1:26 AM, tony.ngu...@bt.com wrote:
> +/* Size in bytes to MemOp. */
> +static inline MemOp size_memop(unsigned size)
> +{
> +/*
> + * FIXME: No-op to aid conversion of memory_region_dispatch_{read|write}
> + * "unsigned size" operand into a "MemOp op".
> + */
> +
On Wed, 7 Aug 2019 at 16:23, Damien Hedde wrote:
> On 8/7/19 4:41 PM, Peter Maydell wrote:
> > On Mon, 29 Jul 2019 at 15:58, Damien Hedde
> > wrote:
> >> legacy resets are called in the "post" order (ie: children then parent)
> >> in hierarchical reset. That is the same order as legacy
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> Remove the functions now they are unused:
> + device_legacy_reset
> + qdev_reset_all[_fn]
> + qbus_reset_all[_fn]
>
> Signed-off-by: Damien Hedde
> ---
Reviewed-by: Peter Maydell
thanks
-- PMM
On 8/7/19 4:54 PM, Peter Maydell wrote:
> On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>>
>> It contains the resetting counter and cold flag status.
>>
>> At this point, migration of bus reset related state (counter and cold/warm
>> flag) is handled by parent device. This done using the
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> Replace deprecated qbus_reset_all by resettable_reset_cold_fn for
> the ipl registration in the main reset handlers.
>
> This does not impact the behavior.
>
> Signed-off-by: Damien Hedde
> ---
> hw/s390x/ipl.c | 6 +-
> 1 file changed,
On 8/7/19 4:41 PM, Peter Maydell wrote:
> On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>>
>> This adds a Resettable interface implementation for both Bus and Device.
>>
>> *resetting* counter and *reset_is_cold* flag are added in DeviceState
>> and BusState.
>>
>> Compatibility with existing
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> Replace deprecated qbus_reset_all by resettable_reset_cold_fn for
> the sysbus reset registration.
> This does not impact the behavior.
>
> Signed-off-by: Damien Hedde
> ---
> vl.c | 6 +-
> 1 file changed, 5 insertions(+), 1 deletion(-)
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> It adds the possibility to add two GPIOs to control the warm and cold reset.
> With these IOs, the reset can be maintained for some time.
> Each IO is associated with a state to detect level changes.
>
> Vmstate subsections are also added to
Patchew URL: https://patchew.org/QEMU/20190807144628.4988-1-kw...@redhat.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Subject: [Qemu-devel] [PATCH v2 0/3] block-backend: Queue requests while drained
Message-id:
On Mon, 29 Jul 2019 at 15:59, Damien Hedde wrote:
>
> This adds the reset-related sections for every QOM
> device.
A bit more detail in the commit message would help, I think --
this is adding extra machinery which has to copy and modify
the VMStateDescription passed in by the device in order to
On 8/7/19 3:22 AM, Paolo Bonzini wrote:
> On 07/08/19 10:32, tony.ngu...@bt.com wrote:
>> +#if defined(HOST_WORDS_BIGENDIAN)
>> + .endianness = MO_BE,
>> +#else
>> + .endianness = MO_LE,
>> +#endif
>
> Host endianness is just 0, isn't it?
Yes. Just leaving a comment to that effect here
On 8/7/19 4:20 PM, Peter Maydell wrote:
> On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>>
>> This commit defines an interface allowing multi-phase reset.
>> The phases are INIT, HOLD and EXIT. Each phase has an associated method
>> in the class.
>>
>> The reset of a Resettable is
On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>
> It contains the resetting counter and cold flag status.
>
> At this point, migration of bus reset related state (counter and cold/warm
> flag) is handled by parent device. This done using the post_load
> function in the vmsd subsection.
>
>
On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>
> It contains the resetting counter and cold flag status.
>
> At this point, migration of bus reset related state (counter and cold/warm
> flag) is handled by parent device. This done using the post_load
"is done"
> function in the vmsd
On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>
> Deprecate old reset apis and make them use the new one while they
> are still used somewhere.
>
> Signed-off-by: Damien Hedde
> ---
> hw/core/qdev.c | 22 +++---
> include/hw/qdev-core.h | 28
This fixes devices like IDE that can still start new requests from I/O
handlers in the CPU thread while the block backend is drained.
The basic assumption is that in a drain section, no new requests should
be allowed through a BlockBackend (blk_drained_begin/end don't exist,
we get drain sections
mirror_top_bs is currently implicitly drained through its connection to
the source or the target node. However, the drain section for target_bs
ends early after moving mirror_top_bs from src to target_bs, so that
requests can already be restarted while mirror_top_bs is still present
in the chain,
The functionality offered by blk_pread_unthrottled() goes back to commit
498e386c584. Then, we couldn't perform I/O throttling with synchronous
requests because timers wouldn't be executed in polling loops. So the
commit automatically disabled I/O throttling as soon as a synchronous
request was
This series fixes the problem that devices like IDE, which submit
requests as a direct result of I/O from the CPU thread, can continue to
submit new requests even in a drained section.
v2:
- Rebased on top of block-next
- Replaced patch 2 with draining mirror_top_bs instead [Max]
- Removed
On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>
> This adds a Resettable interface implementation for both Bus and Device.
>
> *resetting* counter and *reset_is_cold* flag are added in DeviceState
> and BusState.
>
> Compatibility with existing code base is ensured.
> The legacy bus or device
On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>
> Provide a temporary function doing what device_reset does, to ease the
> transition to the Resettable API, which will trigger a prototype change
> of device_reset.
The other point here is that device_legacy_reset() resets
only that device, not any
On Mon, 29 Jul 2019 at 15:58, Damien Hedde wrote:
>
> This commit defines an interface allowing multi-phase reset.
> The phases are INIT, HOLD and EXIT. Each phase has an associated method
> in the class.
>
> The reset of a Resettable is controlled with 2 functions:
> - resettable_assert_reset
It's needed to fix reopening qcow2 with bitmaps to RW. Currently it
can't work, as qcow2 needs write access to its file child to mark bitmaps
in the image with the IN_USE flag. But usually children go after parents in the
reopen queue, so the file child is still RO on qcow2 reopen commit. Reverse the
reopen order to
Two testcases with persistent bitmaps are not added here, as there are
bugs to be fixed soon.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/260 | 87 ++
tests/qemu-iotests/260.out | 52 +++
tests/qemu-iotests/group
The only reason I can imagine for this strange code at the very end of
bdrv_reopen_commit is the fact that bs->read_only is updated after
calling drv->bdrv_reopen_commit in bdrv_reopen_commit. And at the same
time, prior to the previous commit, qcow2_reopen_bitmaps_rw did a wrong
check for being
qcow2_reopen_bitmaps_ro wants to store bitmaps and then mark them all
readonly. But the latter doesn't work, as
qcow2_store_persistent_dirty_bitmaps removes bitmaps after storing.
That's OK for inactivation but a bad idea for reopen-ro. And this leads to
the following bug:
Assume we have persistent
Firstly, there is no reason to optimize the failure path. Also, the function
name is ambiguous: it checks for readonly and similar things, but someone may
think that it will ignore normal bitmaps which were simply unchanged, and
this sits badly with the fact that we should drop the IN_USE flag
for unchanged
- Correct the check for write access to the file child, and do it in the
  correct place (only if we want to write).
- Support reopen rw -> rw (which will be used in following commit),
for example, !bdrv_dirty_bitmap_readonly() is not a corruption if
bitmap is marked IN_USE in the image.
- Consider unexpected
We'll need reverse-foreach in the following commit, and QTAILQ supports it,
so move to QTAILQ.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block.h | 2 +-
block.c | 22 +++---
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git
Reopening bitmaps to RW was broken prior to previous commit. Check that
it works now.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/165 | 46 --
tests/qemu-iotests/165.out | 4 ++--
2 files changed, 46 insertions(+), 4 deletions(-)
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/iotests.py | 10 ++
1 file changed, 10 insertions(+)
diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
index ce74177ab1..4ad265f140 100644
--- a/tests/qemu-iotests/iotests.py
+++
The function is unused, drop it.
Signed-off-by: Vladimir Sementsov-Ogievskiy
Reviewed-by: John Snow
---
block/qcow2.h| 2 --
block/qcow2-bitmap.c | 15 +--
2 files changed, 1 insertion(+), 16 deletions(-)
diff --git a/block/qcow2.h b/block/qcow2.h
index
Hi all!
Bitmap reopening is buggy: reopening-rw just doesn't work at all, and
reopening-ro may lead to producing a broken incremental
backup if we do a temporary snapshot in the meantime.
v4: Drop complicated solution around reopening logic [Kevin], fix
the existing bug in a simplest way
Am 22.07.2019 um 15:33 hat Max Reitz geschrieben:
> I think the patches speak for themselves now.
>
> (The title of this series alludes to what the iotest added in the final
> patch tests.)
>
> v3:
> - Rebased on master
> - Added two tests to test-bdrv-drain [Kevin]
> - Removed new iotest from
The fast path is taken when TLB_FLAGS_MASK is all zero.
TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit to force the slow path,
there are no other side effects.
Signed-off-by: Tony Nguyen
Reviewed-by: Richard Henderson
---
include/exec/cpu-all.h | 10 --
1 file changed, 8 insertions(+),
On 8/7/19 8:37 PM, Philippe Mathieu-Daudé wrote:
> I'm confused, I think I already reviewed various patches of your previous
> series but don't see my Reviewed-by tags.
Apologies Philippe! I am the confused one here =/
Will append.
Thank you very much for the reviews and qemu-devel newbie
Sorry, I missed a tag.
Reviewed-by: Philippe Mathieu-Daudé
Sorry, I missed a tag.
Tested-by: Mark Cave-Ayland
Sorry, I missed a tag.
Reviewed-by: Philippe Mathieu-Daudé
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be
Temporarily no-op size_memop was introduced to aid the conversion of
memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".
Now size_memop is implemented; the size is again hard-coded, but with
MO_{8|16|32|64}. This is more expressive and avoids size_memop calls.
Signed-off-by: Tony
Now that MemOp has been pushed down into the memory API, and
callers are encoding endianness, we can collapse byte swaps
along the I/O path into the accelerator and target independent
adjust_endianness.
Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g.
Sorry, I missed a tag.
Reviewed-by: Philippe Mathieu-Daudé
This bit configures endianness of PCI MMIO devices. It is used by
Solaris and OpenBSD sunhme drivers.
Tested working on OpenBSD.
Unfortunately Solaris 10 had an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)
Signed-off-by: Tony Nguyen
Reviewed-by:
Temporarily no-op size_memop was introduced to aid the conversion of
memory_region_dispatch_{read|write} operand "unsigned size" into
"MemOp op".
Now size_memop is implemented; the size is again hard-coded, but with
MO_{8|16|32|64}. This is more expressive and avoids size_memop calls.
Signed-off-by: Tony
DEVICE_HOST_ENDIAN is conditional upon HOST_WORDS_BIGENDIAN.
Code is cleaner if the single use of DEVICE_HOST_ENDIAN is instead
directly conditional upon HOST_WORDS_BIGENDIAN.
Signed-off-by: Tony Nguyen
---
include/exec/cpu-common.h | 8
memory.c | 6 +-
2 files
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be
Append MemTxAttrs to interfaces so we can pass along the upcoming Invert
Endian TTE bit on SPARC64.
Signed-off-by: Tony Nguyen
Reviewed-by: Richard Henderson
---
target/sparc/mmu_helper.c | 32 ++--
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git
Sorry, I missed a tag.
Reviewed-by: Philippe Mathieu-Daudé
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.
Signed-off-by: Tony Nguyen
---
accel/tcg/cputlb.c | 170 +--
include/exec/memop.h | 6 ++
memory.c | 11 +---
3 files changed, 90
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be
Notice new attribute, byte swap, and force the transaction through the
memory slow path.
Required by architectures that can invert endianness of memory
transaction, e.g. SPARC64 has the Invert Endian TTE bit.
Suggested-by: Richard Henderson
Signed-off-by: Tony Nguyen
Reviewed-by: Richard
Preparation for replacing device_endian with MemOp.
Device realizing code with MemoryRegionOps endianness as
DEVICE_NATIVE_ENDIAN is not common code.
Corrected devices were identified by making the declaration of
DEVICE_NATIVE_ENDIAN conditional upon NEED_CPU_H and then listing
what failed to
Preparation to replace device_endian with MemOp.
Mapping device_endian onto MemOp limits behaviour changes to this
relatively smaller patch.
The next patch will replace all device_endian usages with the
equivalent MemOp. That patch will be large but have no behaviour
changes.
A subsequent patch
device_endian has been made redundant by MemOp.
Signed-off-by: Tony Nguyen
---
include/exec/cpu-common.h | 8
1 file changed, 8 deletions(-)
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 01a29ba..7eeb78c 100644
--- a/include/exec/cpu-common.h
+++
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented
and the memory_region_dispatch_{read|write} operand "unsigned size"
will be