On Sun, Feb 09, 2025 at 09:46:43PM -0500, Demi Marie Obenour wrote:
> On Mon, Feb 10, 2025 at 02:07:17AM +0100, Marek Marczykowski-Górecki wrote:
> > On Sun, Feb 09, 2025 at 07:54:23PM -0500, Demi Marie Obenour wrote:
> > > On Sun, Feb 09, 2025 at 12:04:20PM +0100, David Hobach wrote:
> > > > On 2/8/25 15:11, Marek Marczykowski-Górecki wrote:
> > > > > Hi,
> > > > > 
> > > > > We've spent some time recently on improving qrexec performance,
> > > > > specifically lowering the overhead of making a qrexec call. To get some
> > > > > visibility into the effects, we started by adding simple performance
> > > > > tests:
> > > > > https://github.com/QubesOS/qubes-core-admin/pull/647
> > > > > 
> > > > > Here I'll focus on just one test, which makes 500 calls and measures
> > > > > the total time in seconds - the lower the better.
> > > > > 
> > > > > Here are the results:
> > > > > baseline (qrexec 4.3.1): fedora-41-xfce_exec 53.047245962000034 [1]
> > > > > remove qubes-rpc-multiplexer [2] (qrexec 4.3.2): fedora-41-xfce_exec 21.449519581999994 [3]
> > > > > cache system info for policy [4]: fedora-41-xfce_exec 9.012277056000016 [5]
> > > > > 
> > > > > So, in total over 5x improvement :)
> > > > 
> > > > That sounds great and I look forward to that change. Thanks a lot in 
> > > > advance! :)
> > > > 
> > > > However, for an overall improvement in user experience, not only the
> > > > qrexec call speed is relevant, but also the time it takes to get the
> > > > qrexec service running inside a newly started VM.
> > > > For example, on my machine a qrexec call to a running VM takes ~530ms
> > > > (hopefully less in the future with the changes you mentioned), while a
> > > > call to a small, non-running VM takes ~6s, out of which
> > > > qubes-qrexec-agent.service alone takes 2.8s to start:
> > > >     qubes-qrexec-agent.service +20ms
> > > >     └─systemd-user-sessions.service @2.855s +18ms
> > > >       └─network.target @2.852s
> > > >         └─networking.service @2.750s +101ms
> > > >           └─network-pre.target @2.732s
> > > >             └─qubes-iptables.service @2.416s +315ms
> > > >               └─qubes-antispoof.service @2.210s +205ms
> > > >                 └─basic.target @2.206s
> > > >                   └─sockets.target @2.206s
> > > >                     └─qubes-updates-proxy-forwarder.socket @2.206s
> > > >                       └─sysinit.target @2.187s
> > > >                         └─systemd-binfmt.service @1.860s +327ms
> > > >                           └─proc-sys-fs-binfmt_misc.mount @2.114s +69ms
> > > >                             └─systemd-journald.socket @1.015s
> > > >                               └─-.mount @984ms
> > > >                                 └─-.slice @985ms
> > > > 
> > > > So improving the startup speed of any of the services in the
> > > > qubes-qrexec-agent.service critical chain, or possibly getting rid of
> > > > some of these dependencies entirely, should improve overall Qubes OS
> > > > performance.
> > > > For comparison, these numbers were smaller in 4.1 on the same machine
> > > > with a comparable VM [6].
> > > > 
> > > > [6] 
> > > > https://github.com/3hhh/qubes-performance/blob/master/samples/4.1/t530_debian-11_01.txt#L32-L40
> > > 
> > > Ouch.  500ms to set up networking is way too slow, and it looks like
> > > setting up the root filesystem is also slow.  dev-mapper-dmroot.device
> > > takes 1.310s to start up,
> > 
> > Where did you get that from? I don't see dev-mapper-dmroot.device
> > mentioned in any of the above...
> 
> systemd-analyze blame (output attached).
> 
> > Anyway, even if it were there, it would be interesting to learn what
> > that actually means. If the dom0-provided kernel is used, the initramfs
> > is _not_ using systemd, so there are no time measurements of how long
> > it takes to actually construct that device (which, in any currently
> > supported Qubes version, is simply a symlink to /dev/xvda3, not a real
> > dm device).
> 
> It means that 1.310s elapses between the kernel transferring control to
> systemd and systemd finding that /dev/mapper/dmroot is ready.

Interesting, since it already exists there literally before systemd gets
started... Maybe udev needs to enumerate it or something...
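
A quick way to check from inside the VM would be something like this
(a hypothetical sketch - `describe_node` is not an existing tool, just a
helper for illustration):

```shell
#!/bin/sh
# describe_node PATH: report whether PATH is a symlink (and to what),
# a real block device, or missing entirely. On current Qubes releases,
# /dev/mapper/dmroot is expected to be a plain symlink to /dev/xvda3.
describe_node() {
    if [ -L "$1" ]; then
        printf 'symlink: %s -> %s\n' "$1" "$(readlink "$1")"
    elif [ -b "$1" ]; then
        printf 'block device: %s\n' "$1"
    else
        printf 'missing: %s\n' "$1"
    fi
}
```

`describe_node /dev/mapper/dmroot` should report the symlink to
/dev/xvda3; `udevadm info --name=/dev/mapper/dmroot` would additionally
show whether udev has already processed the node.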

> > > which is nearly half of the 2.170s spent in
> > > userspace on the VM I used to write this message.  I suspect this is
> > > largely a problem with the Xen toolstack, which is not optimized, to
> > > put it mildly.  Replacing it with an optimized toolstack like the one
> > > Edera uses would make things much, much faster.
> > 
> > I have no idea how you got to the Xen toolstack here. The above is from
> > within a VM, after the toolstack has already done its job. It isn't
> > even installed in the VM...
> 
> I assumed that the toolstack booted the VM and _then_ attached the
> devices.  That assumption is probably wrong.

Yes, devices are set up before the VM is started (and if setting up the
devices fails, the VM's kernel isn't started at all).

> > > > > And also, now it can do over 50 calls per second, I'd say it's way
> > > > > more than enough for its intended use.
> > > 
> > > _Not_ fast enough for an internet-facing qrexec-call-per-request
> > > service, though, unless one checks authentication before the call to
> > > prevent denial-of-service attacks.
> > 
> > As I said, "for its intended use". Qubes OS is not a server operating
> > system.
> 
> The Qubes OS build servers run Qubes OS 🙂.

That's a _very_ stretched definition...
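
For the record, the "over 50 calls per second" figure above follows
directly from the 500-call timing: 500 / 9.01s is roughly 55 calls/s.
A minimal sketch of such a timing loop (hypothetical - the real test
lives in the PR linked above, and `qubes.GetDate` in the usage below is
just an example service):

```shell
#!/bin/sh
# time_calls N CMD...: run CMD N times and print total wall-clock seconds.
time_calls() {
    n=$1; shift
    start=$(date +%s.%N)
    i=0
    while [ "$i" -lt "$n" ]; do
        "$@" >/dev/null 2>&1
        i=$((i + 1))
    done
    end=$(date +%s.%N)
    # total elapsed seconds - lower is better
    awk -v a="$start" -v b="$end" 'BEGIN { printf "%.3f\n", b - a }'
}
```

Inside a qube, `time_calls 500 qrexec-client-vm dom0 qubes.GetDate`
would then roughly reproduce the measurement, policy permitting.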

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-devel" group.
To view this discussion visit 
https://groups.google.com/d/msgid/qubes-devel/Z6nXynvz9NdNrLaO%40mail-itl.
