using the same writev call */
> p->iov[0].iov_len = p->packet_len;
> @@ -728,6 +729,8 @@ static void *multifd_send_thread(void *opaque)
> break;
> }
>
> +        stat64_add(&mig_stats.multifd_bytes, p->next_packet_size);
> +
On Thu, May 18, 2023 at 06:40:18PM +0200, Juan Quintela wrote:
> Peter Xu wrote:
> > On Mon, May 08, 2023 at 03:09:09PM +0200, Juan Quintela wrote:
> >> In the past, we had to put in the main thread all the operations
> >> related to sizes due to qemu_file not
On Mon, Feb 05, 2024 at 08:46:54AM +0100, Markus Armbruster wrote:
> qapi/migration.json
> MigrateSetParameters 1
It's tls-authz. I'll send a patch for this one.
Thanks,
--
Peter Xu
ry using a signed type for the size anyway, and it should be a
compatible change as we doubled the size.
I'll hold a bit to see whether there's some comment, then I can try to post
a patch.
> + * that is in some other struct field, but it's a runtime constant and
> + * we can assume the memory has already been allocated.
> + */
> +
> #define VMSTATE_VBUFFER_UINT32(_field, _state, _version, _test, _field_size) { \
> @@ -688,2 +698,9 @@ extern const VMStateInfo vmstate_info_qlist;
>
> +/**
> + * VMSTATE_VBUFFER_ALLOC_UINT32:
> + *
> + * We need to migrate an array of uint32_t of variable size dependent
> + * on the inbound migration data, and so the migration code must
> + * allocate it.
> + */
> #define VMSTATE_VBUFFER_ALLOC_UINT32(_field, _state, _version, \
> ---
>
--
Peter Xu
On Thu, Nov 30, 2023 at 03:43:25PM -0500, Stefan Hajnoczi wrote:
> On Thu, Nov 30, 2023 at 03:08:49PM -0500, Peter Xu wrote:
> > On Wed, Nov 29, 2023 at 04:26:20PM -0500, Stefan Hajnoczi wrote:
> > > The Big QEMU Lock (BQL) has many names and they are confusing. The
> > >
oid)
> - bool qemu_bql_locked(void)
>
> There are more APIs with "iothread" in their names. Subsequent patches
> will rename them. There are also comments and documentation that will be
> updated in later patches.
>
> Signed-off-by: Stefan Hajnoczi
Acked-by: Pete
On Wed, Apr 10, 2024 at 09:49:15AM -0400, Peter Xu wrote:
> On Wed, Apr 10, 2024 at 02:28:59AM +, Zhijian Li (Fujitsu) via wrote:
> >
> >
> > on 4/10/2024 3:46 AM, Peter Xu wrote:
> >
> > >> Is there document/link about the unittest/CI for migration te
On Wed, Apr 10, 2024 at 02:28:59AM +, Zhijian Li (Fujitsu) via wrote:
>
>
> on 4/10/2024 3:46 AM, Peter Xu wrote:
>
> >> Is there document/link about the unittest/CI for migration tests, Why
> >> are those tests missing?
> >> Is it hard o
. I think we need people
that understand this stuff well enough, have dedicated time, and can look
after it.
Thanks,
--
Peter Xu
e/howto-configure-soft-roce
> >
> > Thanks and best regards!
> >
> > On Thu, Apr 11, 2024 at 4:20 PM Peter Xu wrote:
> > > On Wed, Apr 10, 2024 at 09:49:15AM -0400, P
please check the whole thread discussion; it may help to understand
what we are looking for regarding rdma migrations [1]. Meanwhile, please feel
free to sync with Jinpu's team and see how to move forward with such a project.
[1] https://lore.kernel.org/qemu-devel/87frwatp7n@suse.de/
Thanks,
--
Peter Xu
sy task.
It'll be good to know whether Dan's suggestion would work first, without
rewriting everything so far. Not sure whether some perf test could
help with the rsocket APIs even without QEMU's involvement (or looking for
test data supporting / invalidating such conversions).
Thanks,
--
Peter Xu
On Wed, May 01, 2024 at 04:59:38PM +0100, Daniel P. Berrangé wrote:
> On Wed, May 01, 2024 at 11:31:13AM -0400, Peter Xu wrote:
> > What I worry about more is whether this is really how we want to keep rdma
> > in qemu, and that's also why I was trying to ask for some serious
On Tue, Apr 30, 2024 at 09:00:49AM +0100, Daniel P. Berrangé wrote:
> On Tue, Apr 30, 2024 at 09:15:03AM +0200, Markus Armbruster wrote:
> > Peter Xu writes:
> >
> > > On Mon, Apr 29, 2024 at 08:08:10AM -0500, Michael Galaxy wrote:
> > >> Hi All
see how we can
test together. And btw I don't think we need a cluster, IIUC we simply
need two hosts, 100G nic on both sides? IOW, it seems to me we only need
two cards just for experiments, systems that can drive the cards, and a
wire supporting 100G?
>
> >
> > - Michael
>
>
ode as Dan
mentioned?
Thanks,
>
> On Fri, May 3, 2024 at 4:33 PM Peter Xu wrote:
> >
> > On Fri, May 03, 2024 at 08:40:03AM +0200, Jinpu Wang wrote:
> > > I had a brief check in the rsocket changelog; there seems to be some
> > > improvement over time,
> > >
ke a decision
on whether to drop rdma, iow, even if rdma performs well, the community
still has the right to drop it if nobody can actively work and maintain it.
It's just that if NICs can perform as well, that's more of a reason to drop
it, unless companies can help to provide good support and work together.
Thanks,
--
Peter Xu
On Tue, May 07, 2024 at 01:50:43AM +, Gonglei (Arei) wrote:
> Hello,
>
> > -----Original Message-----
> > From: Peter Xu [mailto:pet...@redhat.com]
> > Sent: Monday, May 6, 2024 11:18 PM
> > To: Gonglei (Arei)
> > Cc: Daniel P. Berrangé ; Markus Armbru
ors < 0) {
> ret = sectors;
> bdrv_next_cleanup();
> goto out;
> --
> 2.44.0
>
>
--
Peter Xu
On Tue, Mar 12, 2024 at 05:34:26PM -0400, Stefan Hajnoczi wrote:
> I understand now. I missed that returning from init_blk_migration_it()
> did not abort iteration. Thank you!
I queued it, thanks both!
--
Peter Xu
t;
> > Remove:
> > - RDMA handling from migration
> > - dependencies on libibumad, libibverbs and librdmacm
> >
> > Keep the RAM_SAVE_FLAG_HOOK definition since it might appear
> > in old migration streams.
> >
> > Cc: Peter Xu
> > Cc:
On Tue, Apr 09, 2024 at 09:32:46AM +0200, Jinpu Wang wrote:
> Hi Peter,
>
> On Mon, Apr 8, 2024 at 6:18 PM Peter Xu wrote:
> >
> > On Mon, Apr 08, 2024 at 04:07:20PM +0200, Jinpu Wang wrote:
> > > Hi Peter,
> >
> > Jinpu,
> >
> > Thanks for
ly on top of:
> https://lore.kernel.org/qemu-devel/20240320064911.545001-1-...@redhat.com/
queued, thanks.
--
Peter Xu
two more
releases. Hopefully that can ring a louder alarm for the current users with
such warnings, so that people can either stick with old binaries or invest
developer/test resources in the community.
Thanks,
--
Peter Xu
BLOCK_SIZE, the previous
> loop is entered at least once, so 'ret' is assigned a value in all conditions.
>
> Signed-off-by: Marc-André Lureau
Acked-by: Peter Xu
--
Peter Xu
d uninitialized
> [-Werror=maybe-uninitialized]
> ../migration/migration.c:2273:5: error: ‘file’ may be used uninitialized
> [-Werror=maybe-uninitialized]
>
> Signed-off-by: Marc-André Lureau
Acked-by: Peter Xu
--
Peter Xu
igned-off-by: Marc-André Lureau
Acked-by: Peter Xu
--
Peter Xu
On Mon, Apr 08, 2024 at 04:07:20PM +0200, Jinpu Wang wrote:
> Hi Peter,
Jinpu,
Thanks for joining the discussion.
>
> On Tue, Apr 2, 2024 at 11:24 PM Peter Xu wrote:
> >
> > On Mon, Apr 01, 2024 at 11:26:25PM +0200, Yu Zhang wrote:
> > > Hello Peter und Zhjian,
IC to outperform RDMAs, then it may make
little sense to maintain multiple protocols, considering the RDMA migration
code is so special that it has the most custom code compared to other
protocols.
Thanks,
--
Peter Xu
erful now, but again as I mentioned I don't
think it's a reason we need to deprecate rdma, especially if QEMU's rdma
migration has the chance to be refactored using rsocket.
Is there anyone who started looking into that direction? Would it make
sense we start some PoC now?
Thanks,
--
Peter Xu
QMP core from
>
> An IO error has occurred
>
> to
> saving Xen device state failed
>
> and
>
> loading Xen device state failed
>
> respectively.
>
> Signed-off-by: Markus Armbruster
Acked-by: Peter Xu
--
Peter Xu
On Mon, May 27, 2024 at 12:53:22PM +0200, Markus Armbruster wrote:
> Peter Xu writes:
>
> > On Mon, May 13, 2024 at 04:17:02PM +0200, Markus Armbruster wrote:
> >> Functions that use an Error **errp parameter to return errors should
> >> not also report them
ut I didn't
further check either. I've put that issue aside just to see whether this
may or may not make sense.
Thanks,
--
Peter Xu
dea you mention), if
> the destination can issue an RDMA read itself, it doesn't need to send
> messages
> to the source to ask for a page fetch; it just goes and grabs it itself,
> that's got to be good for latency.
Oh, that's pretty internal stuff of rdma to me and beyond my knowledge..
but from what I can tell it sounds very reasonable indeed!
Thanks!
--
Peter Xu
On Tue, May 28, 2024 at 09:06:04AM +, Gonglei (Arei) wrote:
> Hi Peter,
>
> > -----Original Message-----
> > From: Peter Xu [mailto:pet...@redhat.com]
> > Sent: Wednesday, May 22, 2024 6:15 AM
> > To: Yu Zhang
> > Cc: Michael Galaxy ; Jinpu Wang
> >
icely for postcopy.
I'm not sure whether it'll still be a problem if the rdma recv side is based
on zero-copy. It would be a matter of whether atomicity can be guaranteed, so
that the guest vcpus never see a partially copied page during in-flight DMAs.
UFFDIO_COPY (or a friend) is currently the only solution for that.
Thanks,
--
Peter Xu
On Wed, Jun 05, 2024 at 10:10:57AM -0400, Peter Xu wrote:
> > e) Someone made a good suggestion (sorry can't remember who) - that the
> > RDMA migration structure was the wrong way around - it should be the
> > destination which initiates an RDMA read, rath