* Daniel P. Berrangé (berra...@redhat.com) wrote:
> On Thu, Apr 08, 2021 at 08:11:54PM +0100, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilb...@redhat.com>
> > 
> > Hi,
> >   This RFC set adds support for multipath TCP (mptcp),
> > in particular on the migration path - but should be extensible
> > to other users.
> > 
> >   Multipath-tcp is a bit like bonding, but at L3; you can use
> > it to handle failure, but can also use it to split traffic across
> > multiple interfaces.
> > 
> >   Using a pair of 10Gb interfaces, I've managed to get 19Gbps
> > (with the only tuning being using huge pages and turning the MTU up).
> > 
> >   It needs a bleeding-edge Linux kernel (in some older ones you get
> > false accept messages for the subflows), and a C lib that has the
> > constants defined (as current glibc does).
> > 
> >   To use it you just need to append ,mptcp to an address;
> > 
> >   -incoming tcp:0:4444,mptcp
> >   migrate -d tcp:192.168.11.20:4444,mptcp
> 
> What happens if you only enable mptcp flag on one side of the
> stream (whether client or server), does it degrade to boring
> old single path TCP, or does it result in an error ?

I've just tested this and it matches what pabeni said; it seems to just
fall back.
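For reference, that fallback can also be seen at the plain socket level; here's a minimal sketch (assuming a Linux kernel built with CONFIG_MPTCP; the helper name is mine, not anything in QEMU):

```c
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262  /* not yet defined in older glibc headers */
#endif

/* Try to create an MPTCP socket; fall back to plain TCP if this
 * kernel doesn't support it.  Note that even when MPTCP is available
 * locally, the kernel transparently falls back to regular TCP on the
 * same fd if the peer doesn't negotiate MPTCP. */
static int mptcp_or_tcp_socket(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
    if (fd < 0 && (errno == EPROTONOSUPPORT || errno == EINVAL ||
                   errno == ENOPROTOOPT)) {
        fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    }
    return fd;
}
```

Either way the caller just uses the returned fd as an ordinary stream socket.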

> >   I had a quick go at trying NBD as well, but I think it needs
> > some work with the parsing of NBD addresses.
> 
> In theory this is applicable to anywhere that we use sockets.
> Anywhere that is configured with the QAPI SocketAddress /
> SocketAddressLegacy type will get it for free AFAICT.

That was my hope.

> Anywhere that is configured via QemuOpts will need an enhancement.
> 
> IOW, I would think NBD already works if you configure NBD via
> QMP with nbd-server-start, or block-export-add.  qemu-nbd will
> need cli options added.
> 
> The block layer clients for NBD, Gluster, Sheepdog and SSH also
> all get it for free when configured via QMP, or -blockdev AFAICT.

Have you got some examples via QMP?
I failed when trying -drive if=virtio,file=nbd://192.168.11.20:3333,mptcp=on/zero
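For what it's worth, I'd expect the QMP route to look something like this, assuming the inet SocketAddress grows an mptcp flag as in this series (an untested sketch, not a confirmed interface):

```json
{ "execute": "nbd-server-start",
  "arguments": {
    "addr": { "type": "inet",
              "data": { "host": "0.0.0.0", "port": "3333",
                        "mptcp": true } } } }
```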

> Legacy blocklayer filename syntax would need extra parsing, or
> we can just not bother and say if you want new features, use
> blockdev.
> 
> 
> Overall this is impressively simple.

Yeh; there were lots of small unexpected tidy-ups that took a while to fix.

> It feels like it obsoletes the multifd migration code, at least
> if you assume Linux platform and new enough kernel ?
>
> Except TLS... We already bottleneck on TLS encryption with
> a single FD, since userspace encryption is limited to a
> single thread.

Even without TLS we already run out of CPU, probably on the receiving
thread, at around 20Gbps; which is a bit meh compared to multifd, which
I have seen hit 80Gbps on a particularly well greased 100Gbps
connection.
Curiously, my attempts with multifd+mptcp so far have it being slower
than with just mptcp on its own, not even hitting the 20Gbps - I'm not
sure why yet.

> There is the KTLS feature which offloads TLS encryption/decryption
> to the kernel. This benefits even regular single FD performance,
> because the encryption work can be done by the kernel in a separate
> thread from the userspace IO syscalls.
> 
> Any idea if KTLS is fully compatible with MPTCP ?  If so, then that
> would look like it makes it a full replacement for multifd on Linux.

I've not tried kTLS at all yet; as pabeni says, not currently
compatible.
The other ones I'd like to try are zero-copy receive/transmit offload
(again, I'm not sure whether those are compatible).

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK

