On Wed, Nov 01, 2023 at 06:16:08AM -0700, Mattias Nissler wrote:
> When DMA memory can't be directly accessed, as is the case when
> running the device model in a separate process without shareable DMA
> file descriptors, bounce buffering is used.
>
> It is not uncommon for device models to
On Wed, Nov 01, 2023 at 06:16:07AM -0700, Mattias Nissler wrote:
> Instead of using a single global bounce buffer, give each AddressSpace
> its own bounce buffer. The MapClient callback mechanism moves to
> AddressSpace accordingly.
>
> This is in preparation for generalizing bounce buffer
Peter Xu writes:
> On Wed, Nov 01, 2023 at 02:20:32PM -0300, Fabiano Rosas wrote:
>> I wonder if adapting multifd to use a QIOTask for the channels would
>> make sense as an intermediary step. Seems simpler and would force us to
>> format multifd in more generic terms.
>
> Isn't QIOTask event
01.11.2023 18:38, Andrey Drobyshev wrote:
Hi Michael,
Since this series is already merged in master, I'm not sure whether it's
necessary to forward this particular patch to qemu-stable, or it should
rather be cherry-picked to -stable by one of the block maintainers.
It's been my job lately to
On Wed, Nov 01, 2023 at 02:20:32PM -0300, Fabiano Rosas wrote:
> I wonder if adapting multifd to use a QIOTask for the channels would
> make sense as an intermediary step. Seems simpler and would force us to
> format multifd in more generic terms.
Isn't QIOTask event based, too?
From my
On Wed, Nov 01, 2023 at 04:37:12PM +, Daniel P. Berrangé wrote:
> It doesn't contain thread number information directly, but it can
> be implicit from the data layout.
>
> If you want parallel I/O, each thread has to know it is the only
> one reading/writing to a particular region of the
On Wed, Nov 01, 2023 at 06:03:36PM +0100, Denis V. Lunev wrote:
> On 11/1/23 17:51, Daniel P. Berrangé wrote:
> > On Tue, Oct 31, 2023 at 03:33:52PM +0100, Hanna Czenczek wrote:
> > > On 01.10.23 22:46, Denis V. Lunev wrote:
> > > > Can you please not top-post. This makes the discussion complex.
Peter Xu writes:
> On Tue, Oct 31, 2023 at 08:18:06PM -0300, Fabiano Rosas wrote:
>> Peter Xu writes:
>>
>> > On Mon, Oct 23, 2023 at 05:36:00PM -0300, Fabiano Rosas wrote:
>> >> Currently multifd does not need to have knowledge of pages on the
>> >> receiving side because all the information
On Mon, 30 Oct 2023, Marc-André Lureau wrote:
Hi
On Tue, Oct 10, 2023 at 5:03 PM BALATON Zoltan wrote:
Apparently these should be half the memory region sizes confirmed at
least by Radeon drivers while Rage 128 Pro drivers don't seem to use
these.
There doesn't seem to be adjustments for
On 11/1/23 17:51, Daniel P. Berrangé wrote:
On Tue, Oct 31, 2023 at 03:33:52PM +0100, Hanna Czenczek wrote:
On 01.10.23 22:46, Denis V. Lunev wrote:
Can you please not top-post. This makes the discussion complex. This
approach is followed in this mailing list and in other similar lists
like
Steven Sistare writes:
> On 11/1/2023 9:57 AM, Steven Sistare wrote:
>> On 11/1/2023 9:34 AM, Fabiano Rosas wrote:
>>> Steve Sistare writes:
>>>
Signed-off-by: Steve Sistare
---
tests/qtest/migration-test.c | 27 +++
1 file changed, 27 insertions(+)
On Tue, Oct 31, 2023 at 03:33:52PM +0100, Hanna Czenczek wrote:
> On 01.10.23 22:46, Denis V. Lunev wrote:
> > Can you please not top-post. This makes the discussion complex. This
> > approach is followed in this mailing list and in other similar lists
> > like LKML.
> >
> > On 10/1/23 19:08,
On Wed, Nov 01, 2023 at 12:24:22PM -0400, Peter Xu wrote:
> On Wed, Nov 01, 2023 at 03:52:18PM +, Daniel P. Berrangé wrote:
> > On Wed, Nov 01, 2023 at 11:23:37AM -0400, Peter Xu wrote:
> > > On Wed, Oct 25, 2023 at 10:39:58AM +0100, Daniel P. Berrangé wrote:
> > > > If I'm reading the code
On 11/1/2023 9:57 AM, Steven Sistare wrote:
> On 11/1/2023 9:34 AM, Fabiano Rosas wrote:
>> Steve Sistare writes:
>>
>>> Signed-off-by: Steve Sistare
>>> ---
>>> tests/qtest/migration-test.c | 27 +++
>>> 1 file changed, 27 insertions(+)
>>>
>>> diff --git
On Tue, Oct 31, 2023 at 03:33:52PM +0100, Hanna Czenczek wrote:
> Personally, and honestly, I see no actual use for qemu-img dd at all,
> because we’re trying to mimic a subset of an interface of a rather complex
> program that has been designed to do what it does. We can only fail at
> that.
On Wed, Nov 01, 2023 at 03:52:18PM +, Daniel P. Berrangé wrote:
> On Wed, Nov 01, 2023 at 11:23:37AM -0400, Peter Xu wrote:
> > On Wed, Oct 25, 2023 at 10:39:58AM +0100, Daniel P. Berrangé wrote:
> > > If I'm reading the code correctly the new format has some padding
> > > such that each
On 11/1/23 16:23, Andrey Drobyshev wrote:
Currently we emit GUEST_PANICKED event in case kvm_vcpu_ioctl() returns
KVM_EXIT_SYSTEM_EVENT with the event type KVM_SYSTEM_EVENT_CRASH. Let's
extend this scenario and emit GUEST_PANICKED in case of an abnormal KVM
exit. That's a natural thing to do
On Tue, Oct 31, 2023 at 08:18:06PM -0300, Fabiano Rosas wrote:
> Peter Xu writes:
>
> > On Mon, Oct 23, 2023 at 05:36:00PM -0300, Fabiano Rosas wrote:
> >> Currently multifd does not need to have knowledge of pages on the
> >> receiving side because all the information needed is within the
> >>
On Wed, Nov 01, 2023 at 11:23:37AM -0400, Peter Xu wrote:
> On Wed, Oct 25, 2023 at 10:39:58AM +0100, Daniel P. Berrangé wrote:
> > If I'm reading the code correctly the new format has some padding
> > such that each "ramblock pages" region starts on a 1 MB boundary.
> >
> > eg so we get:
> >
>
On 11/1/23 11:50, Michael Tokarev wrote:
> 19.09.2023 19:57, Andrey Drobyshev via wrote:
>> In case when we're rebasing within one backing chain, and when target
>> image
>> is larger than old backing file, bdrv_is_allocated_above() ends up
>> setting
>> *pnum = 0. As a result, target offset
On Wed, Oct 25, 2023 at 10:39:58AM +0100, Daniel P. Berrangé wrote:
> If I'm reading the code correctly the new format has some padding
> such that each "ramblock pages" region starts on a 1 MB boundary.
>
> eg so we get:
>
>
> | ramblock 1 header|
Currently we emit GUEST_PANICKED event in case kvm_vcpu_ioctl() returns
KVM_EXIT_SYSTEM_EVENT with the event type KVM_SYSTEM_EVENT_CRASH. Let's
extend this scenario and emit GUEST_PANICKED in case of an abnormal KVM
exit. That's a natural thing to do since in this case guest is no
longer
On Wed, Nov 01, 2023 at 02:28:24PM +, Daniel P. Berrangé wrote:
> On Wed, Nov 01, 2023 at 10:21:07AM -0400, Peter Xu wrote:
> > On Wed, Nov 01, 2023 at 09:26:46AM +, Daniel P. Berrangé wrote:
> > > On Tue, Oct 31, 2023 at 03:03:50PM -0400, Peter Xu wrote:
> > > > On Wed, Oct 25, 2023 at
The dirty limit feature was introduced in the 8.1 QEMU
release but has not been reflected in the documentation;
add a section for it.
Signed-off-by: Hyman Huang
Reviewed-by: Fabiano Rosas
Message-Id:
<36194a8a23d937392bf13d9fff8e898030c827a3.1697815117.git.yong.hu...@smartx.com>
---
On 10/31/23 15:46, Anthony Harivel wrote:
+/* Get QEMU PID*/
+pid = getpid();
This should be gettid(), or perhaps a VCPU thread's TID.
+/* Those MSR values should not change as well */
+vmsr->msr_unit = vmsr_read_msr(MSR_RAPL_POWER_UNIT, 0, pid,
+
On 10/31/23 15:46, Anthony Harivel wrote:
+
+static uint64_t vmsr_read_msr(uint32_t reg, unsigned int cpu_id)
+{
+int fd;
+uint64_t data;
+
+char path[MAX_PATH_LEN];
+snprintf(path, MAX_PATH_LEN, "/dev/cpu/%u/msr", cpu_id);
If you allow any CPU here, the thread id is really
Dirty ring size configuration is not supported by the guestperf tool.
Introduce a dirty-ring-size option (valid range [1024, 65536]) so
developers can play with the dirty-ring and dirty-limit features more
easily.
To set a dirty ring size of 4096 during a migration test:
$ ./tests/migration/guestperf.py
On Mon, Oct 23, 2023 at 05:36:02PM -0300, Fabiano Rosas wrote:
> We'll need to set the shadow_bmap bits from outside ram.c soon and
> TARGET_PAGE_BITS is poisoned, so add a wrapper to it.
>
> Signed-off-by: Fabiano Rosas
Merge this into existing patch to add ram.c usage?
> ---
>
On Wed, Nov 01, 2023 at 10:21:07AM -0400, Peter Xu wrote:
> On Wed, Nov 01, 2023 at 09:26:46AM +, Daniel P. Berrangé wrote:
> > On Tue, Oct 31, 2023 at 03:03:50PM -0400, Peter Xu wrote:
> > > On Wed, Oct 25, 2023 at 11:07:33AM -0300, Fabiano Rosas wrote:
> > > > >> +static int
Currently, guestperf does not cover dirty-limit migration; add
support for this feature.
Note that dirty-limit requires 'dirty-ring-size' to be set.
To enable dirty-limit, set x-vcpu-dirty-limit-period
to 500ms and x-vcpu-dirty-limit to 10MB/s:
$ ./tests/migration/guestperf.py \
--dirty-ring-size
On 11/1/23 11:20, Daniel P. Berrangé wrote:
On Tue, Oct 31, 2023 at 03:46:01PM +0100, Anthony Harivel wrote:
The function qio_channel_get_peercred() returns a pointer to the
credentials of the peer process connected to this socket.
This credentials structure is defined in <sys/socket.h> as follows:
struct
On Wed, Nov 01, 2023 at 09:26:46AM +, Daniel P. Berrangé wrote:
> On Tue, Oct 31, 2023 at 03:03:50PM -0400, Peter Xu wrote:
> > On Wed, Oct 25, 2023 at 11:07:33AM -0300, Fabiano Rosas wrote:
> > > >> +static int parse_ramblock_fixed_ram(QEMUFile *f, RAMBlock *block,
> > > >> ram_addr_t
v3:
- do nothing but rebase on master
v2:
- rebase on master.
- fix the document typo.
v1:
This is a miscellaneous patchset for dirtylimit that contains
the following parts:
1. dirtylimit module: fix for a race situation and
replace usleep by g_usleep.
2. migration test: add dirtylimit test
Add a migration dirty-limit capability test, run when the kernel
supports dirty ring.
The dirty-limit capability introduces two parameters,
x-vcpu-dirty-limit-period and vcpu-dirty-limit, to
implement live migration with a dirty limit.
The test case does the
Checking whether the dirty limit is in service is already done by the
dirtylimit_query_all function; drop the duplicate check
in the qmp_query_vcpu_dirty_limit function.
Signed-off-by: Hyman Huang
Reviewed-by: Fabiano Rosas
Message-Id:
Fix a race condition on the global variable dirtylimit_state.
Also, replace usleep with g_usleep to improve the
portability of the sleep call.
Signed-off-by: Hyman Huang
Reviewed-by: Fabiano Rosas
Message-Id:
---
system/dirtylimit.c | 20 ++--
1 file changed, 14
Eiichi Tsukata writes:
> FYI: The EINVAL in vmx_set_nested_state() is caused by the following
> condition:
> * vcpu->arch.hflags == 0
> * kvm_state->hdr.vmx.smm.flags == KVM_STATE_NESTED_SMM_VMXON
This is a weird state indeed,
'vcpu->arch.hflags == 0' means we're not in SMM and not in guest
On 11/1/2023 9:34 AM, Fabiano Rosas wrote:
> Steve Sistare writes:
>
>> Signed-off-by: Steve Sistare
>> ---
>> tests/qtest/migration-test.c | 27 +++
>> 1 file changed, 27 insertions(+)
>>
>> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
>>
There are a number of things that are broken in the test currently, so
let's fix them up:
- replace retired Debian kernel for tuxrun_baseline one
- remove "detected repeat instructions test" since ea185a55
- log total counted instructions/memory accesses
Signed-off-by: Alex Bennée
---
Steve Sistare writes:
> Signed-off-by: Steve Sistare
> ---
> tests/qtest/migration-test.c | 27 +++
> 1 file changed, 27 insertions(+)
>
> diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
> index e1c1105..de29fc5 100644
> ---
The patch below fixes a bug in the VSX_CVT_FP_TO_INT and VSX_CVT_FP_TO_INT2
macros in target/ppc/fpu_helper.c where a non-NaN floating point value from the
source vector is incorrectly converted to 0, 0x8000, or 0x8000
instead of the expected value if a preceding source floating
PCI config space is little-endian, so on a big-endian host we need to
perform byte swaps for values as they are passed to and received from
the generic PCI config space access machinery.
Signed-off-by: Mattias Nissler
---
hw/remote/vfio-user-obj.c | 4 ++--
1 file changed, 2 insertions(+), 2
Brings in assorted bug fixes. The following are of particular interest
with respect to message-based DMA support:
* bb308a2 "Fix address calculation for message-based DMA"
Corrects a bug in DMA address calculation.
* 1569a37 "Pass server->client command over a separate socket pair"
Adds
When DMA memory can't be directly accessed, as is the case when
running the device model in a separate process without shareable DMA
file descriptors, bounce buffering is used.
It is not uncommon for device models to request mapping of several DMA
regions at the same time. Examples include:
*
Wire up support for DMA for the case where the vfio-user client does not
provide mmap()-able file descriptors, but DMA requests must be performed
via the VFIO-user protocol. This installs an indirect memory region,
which already works for pci_dma_{read,write}, and pci_dma_map works
thanks to the
Instead of using a single global bounce buffer, give each AddressSpace
its own bounce buffer. The MapClient callback mechanism moves to
AddressSpace accordingly.
This is in preparation for generalizing bounce buffer handling further
to allow multiple bounce buffers, with a total allocation limit
This series adds basic support for message-based DMA in qemu's vfio-user
server. This is useful for cases where the client does not provide file
descriptors for accessing system memory via memory mappings. My motivating use
case is to hook up device models as PCIe endpoints to a hardware design.
Daniel P. Berrangé writes:
> On Wed, Nov 01, 2023 at 09:16:33AM -0300, Fabiano Rosas wrote:
>> Daniel P. Berrangé writes:
>>
>> >
>> > So the problem with add-fd is that when requesting a FD, the monitor
>> > code masks flags with O_ACCMODE. What if we extended it such that
>> > the monitor
On Wed, Nov 01, 2023 at 09:16:33AM -0300, Fabiano Rosas wrote:
> Daniel P. Berrangé writes:
>
> >
> > So the problem with add-fd is that when requesting a FD, the monitor
> > code masks flags with O_ACCMODE. What if we extended it such that
> > the monitor masked with O_ACCMODE | O_DIRECT.
> >
Daniel P. Berrangé writes:
> On Tue, Oct 31, 2023 at 04:05:46PM -0300, Fabiano Rosas wrote:
>> Daniel P. Berrangé writes:
>>
>> > On Tue, Oct 31, 2023 at 12:52:41PM -0300, Fabiano Rosas wrote:
>> >> Daniel P. Berrangé writes:
>> >> >
>> >> > I guess I'm not seeing the problem still. A single
> On 31-Oct-2023, at 9:13 PM, Philippe Mathieu-Daudé wrote:
>
> On 27/9/23 17:12, Peter Maydell wrote:
>> Convert docs/specs/vmgenid.txt to rST format.
>> Signed-off-by: Peter Maydell
>> ---
>> MAINTAINERS| 2 +-
>> docs/specs/index.rst | 1 +
>> docs/specs/vmgenid.rst |
On Tue, Oct 31, 2023 at 03:46:03PM +0100, Anthony Harivel wrote:
> Starting with the "Sandy Bridge" generation, Intel CPUs provide a RAPL
> interface (Running Average Power Limit) for advertising the accumulated
> energy consumption of various power domains (e.g. CPU packages, DRAM,
> etc.).
>
>
On Tue, 31 Oct 2023 at 18:45, Kevin Wolf wrote:
> Am 16.10.2023 um 13:58 hat Michael Tokarev geschrieben:
> > Almost everyone mentions -blockdev as a replacement for -drive.
>
> More specifically for -drive if=none. I honestly don't know many common
> use cases for that one.
One use case for it
Hi Alex,
On Tue, Oct 31, 2023 at 12:02:03PM +, Alex Bennée wrote:
>
> Hi All,
>
> Since 8.1 we enabled the FEAT_RME CPU feature to allow for Arm CCA
> guests to be run under QEMU's Arm emulation. While this is enough for
> pure software guests eventually we would want to support modelling
>
On Tue, Oct 31, 2023 at 03:46:02PM +0100, Anthony Harivel wrote:
> Introduce a privileged helper to access RAPL MSR.
>
> The privileged helper tool, qemu-vmsr-helper, is designed to provide
> virtual machines with the ability to read specific RAPL (Running Average
> Power Limit) MSRs without
On Tue, 24 Oct 2023, Mark Cave-Ayland wrote:
This series adds a simple implementation of legacy/native mode switching for PCI
IDE controllers and updates the via-ide device to use it.
This is needed for my amigaone machine to boot (as that uses the legacy
mode of this controller) so is
On Tue, 24 Oct 2023, BALATON Zoltan wrote:
These are some small clean ups for target/ppc/excp_helper.c trying to
make this code a bit simpler. No functional change is intended. This
series was submitted before but only partially merged due to freeze
and conflicting series, so this was postponed
On Tue, Oct 31, 2023 at 03:46:02PM +0100, Anthony Harivel wrote:
> Introduce a privileged helper to access RAPL MSR.
>
> The privileged helper tool, qemu-vmsr-helper, is designed to provide
> virtual machines with the ability to read specific RAPL (Running Average
> Power Limit) MSRs without
On Tue, Oct 31, 2023 at 03:46:01PM +0100, Anthony Harivel wrote:
> The function qio_channel_get_peercred() returns a pointer to the
> credentials of the peer process connected to this socket.
>
> This credentials structure is defined in <sys/socket.h> as follows:
>
> struct ucred {
> pid_t pid;/*
31.07.2023 12:10, Akihiko Odaki:
A build of GCC 13.2 will have stack protector enabled by default if it was
configured with --enable-default-ssp option. For such a compiler, it is
necessary to explicitly disable stack protector when linking without
standard libraries.
This is a tree-wide change
19.09.2023 19:57, Andrey Drobyshev via wrote:
In case when we're rebasing within one backing chain, and when target image
is larger than old backing file, bdrv_is_allocated_above() ends up setting
*pnum = 0. As a result, target offset isn't getting incremented, and we
get stuck in an infinite
On Wed, Nov 01, 2023 at 06:27:02AM -0300, Daniel Henrique Barboza wrote:
>
>
> On 11/1/23 06:02, Andrew Jones wrote:
> > On Tue, Oct 31, 2023 at 05:39:03PM -0300, Daniel Henrique Barboza wrote:
> > > We don't have any form of a 'bare bones' CPU. rv64, our default CPUs,
> > > comes with a lot of
On Tue, Oct 31, 2023 at 04:05:46PM -0300, Fabiano Rosas wrote:
> Daniel P. Berrangé writes:
>
> > On Tue, Oct 31, 2023 at 12:52:41PM -0300, Fabiano Rosas wrote:
> >> Daniel P. Berrangé writes:
> >> >
> >> > I guess I'm not seeing the problem still. A single FD is passed across
> >> > from
On 11/1/23 06:02, Andrew Jones wrote:
On Tue, Oct 31, 2023 at 05:39:03PM -0300, Daniel Henrique Barboza wrote:
We don't have any form of a 'bare bones' CPU. rv64, our default CPUs,
comes with a lot of defaults. This is fine for most regular uses but
it's not suitable when more control of
On Tue, Oct 31, 2023 at 03:03:50PM -0400, Peter Xu wrote:
> On Wed, Oct 25, 2023 at 11:07:33AM -0300, Fabiano Rosas wrote:
> > >> +static int parse_ramblock_fixed_ram(QEMUFile *f, RAMBlock *block,
> > >> ram_addr_t length)
> > >> +{
> > >> +g_autofree unsigned long *bitmap = NULL;
> > >> +
On Tue, Oct 31, 2023 at 05:39:01PM -0300, Daniel Henrique Barboza wrote:
> We want to add a new CPU type for bare CPUs that will inherit specific
> traits of the 2 existing types:
>
> - it will allow for extensions to be enabled/disabled, like generic
> CPUs;
>
> - it will NOT inherit
On Tue, Oct 31, 2023 at 05:39:02PM -0300, Daniel Henrique Barboza wrote:
> Our current logic in get/setters of MISA and multi-letter extensions
> works because we have only 2 CPU types, generic and vendor, and by using
> "!generic" we're implying that we're talking about vendor CPUs. When adding
>
On Tue, Oct 31, 2023 at 05:39:16PM -0300, Daniel Henrique Barboza wrote:
> Expose all profile flags for all CPUs when executing
> query-cpu-model-expansion. This will allow callers to quickly determine
> if a certain profile is implemented by a given CPU. This includes
> vendor CPUs - the fact
On Tue, Oct 31, 2023 at 05:39:15PM -0300, Daniel Henrique Barboza wrote:
> Enabling a profile and then disabling some of its mandatory extensions
> is a valid use. It can be useful for debugging and testing. But the
> common expected use of enabling a profile is to enable all its mandatory
>
On 2023/11/01 18:09, Michael S. Tsirkin wrote:
On Wed, Nov 01, 2023 at 05:35:50PM +0900, Akihiko Odaki wrote:
On 2023/11/01 15:38, Michael S. Tsirkin wrote:
On Wed, Nov 01, 2023 at 01:50:00PM +0900, Akihiko Odaki wrote:
We had another discussion regarding migration for patch "virtio-net: Do
On Tue, Oct 31, 2023 at 05:39:05PM -0300, Daniel Henrique Barboza wrote:
> zic64b is defined in the RVA22U64 profile [1] as a named feature for
> "Cache blocks must be 64 bytes in size, naturally aligned in the address
> space". It's a fantasy name for 64 bytes cache blocks. The RVA22U64
> profile
On Wed, Nov 01, 2023 at 05:35:50PM +0900, Akihiko Odaki wrote:
> On 2023/11/01 15:38, Michael S. Tsirkin wrote:
> > On Wed, Nov 01, 2023 at 01:50:00PM +0900, Akihiko Odaki wrote:
> > > We had another discussion regarding migration for patch "virtio-net: Do
> > > not
> > > clear
On Tue, Oct 31, 2023 at 05:39:03PM -0300, Daniel Henrique Barboza wrote:
> We don't have any form of a 'bare bones' CPU. rv64, our default CPUs,
> comes with a lot of defaults. This is fine for most regular uses but
> it's not suitable when more control of what is actually loaded in the
> CPU is
On 2023/11/01 15:38, Michael S. Tsirkin wrote:
On Wed, Nov 01, 2023 at 01:50:00PM +0900, Akihiko Odaki wrote:
We had another discussion regarding migration for patch "virtio-net: Do not
clear VIRTIO_NET_F_HASH_REPORT". It does change the runtime behavior so we
need to take migration into
On Wed, Nov 01, 2023 at 01:50:00PM +0900, Akihiko Odaki wrote:
> We had another discussion regarding migration for patch "virtio-net: Do not
> clear VIRTIO_NET_F_HASH_REPORT". It does change the runtime behavior so we
> need to take migration into account. I still think the patch does not
>