On Mon, Feb 10, 2025 at 02:07:17AM +0100, Marek Marczykowski-Górecki wrote:
> On Sun, Feb 09, 2025 at 07:54:23PM -0500, Demi Marie Obenour wrote:
> > On Sun, Feb 09, 2025 at 12:04:20PM +0100, David Hobach wrote:
> > > On 2/8/25 15:11, Marek Marczykowski-Górecki wrote:
> > > > Hi,
> > > > 
> > > > We've spent some time recently on improving qrexec performance,
> > > > specifically lowering the overhead of making a qrexec call. To gain
> > > > some visibility into the effects, we started by adding simple
> > > > performance tests:
> > > > https://github.com/QubesOS/qubes-core-admin/pull/647
> > > > 
> > > > Here I'll focus on just one test that makes 500 calls and measures
> > > > the total time in seconds - lower is better.
> > > > 
> > > > Here are the results:
> > > > baseline (qrexec 4.3.1): fedora-41-xfce_exec 53.047245962000034 [1]
> > > > remove qubes-rpc-multiplexer [2] (qrexec 4.3.2): fedora-41-xfce_exec 21.449519581999994 [3]
> > > > cache system info for policy [4]: fedora-41-xfce_exec 9.012277056000016 [5]
> > > > 
> > > > So, in total over 5x improvement :)
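A harness for this kind of measurement can be sketched roughly as follows. This is a minimal illustration only, not the actual test code (which lives in qubes-core-admin PR #647); the argv list and service name in the docstring are placeholders:

```python
import subprocess
import time

def bench(cmd, n=500):
    """Time n sequential invocations of cmd and return total seconds.

    cmd is an argv list, e.g. ["qrexec-client-vm", "target-qube",
    "qubes.SomeService"] -- both names are placeholders here, not the
    actual test setup.
    """
    start = time.perf_counter()
    for _ in range(n):
        # Discard I/O so we measure call overhead, not terminal output.
        subprocess.run(cmd, check=True,
                       stdin=subprocess.DEVNULL,
                       stdout=subprocess.DEVNULL)
    return time.perf_counter() - start
```

With a harness like this, 500 calls in ~9s corresponds to the "over 50 calls per second" figure mentioned below.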
> > > 
> > > That sounds great and I look forward to that change. Thanks a lot in 
> > > advance! :)
> > > 
> > > However, for an overall improvement in user experience, not only the
> > > qrexec call speed is relevant, but also the time it takes to get the
> > > qrexec service running inside a newly started VM.
> > > For example, on my machine a qrexec call to a running VM takes ~530ms
> > > (hopefully less in the future with the changes you mentioned) and one
> > > to a small non-running VM takes 6s, of which qubes-qrexec-agent.service
> > > alone takes 2.8s to start:
> > >     qubes-qrexec-agent.service +20ms
> > >     └─systemd-user-sessions.service @2.855s +18ms
> > >       └─network.target @2.852s
> > >         └─networking.service @2.750s +101ms
> > >           └─network-pre.target @2.732s
> > >             └─qubes-iptables.service @2.416s +315ms
> > >               └─qubes-antispoof.service @2.210s +205ms
> > >                 └─basic.target @2.206s
> > >                   └─sockets.target @2.206s
> > >                     └─qubes-updates-proxy-forwarder.socket @2.206s
> > >                       └─sysinit.target @2.187s
> > >                         └─systemd-binfmt.service @1.860s +327ms
> > >                           └─proc-sys-fs-binfmt_misc.mount @2.114s +69ms
> > >                             └─systemd-journald.socket @1.015s
> > >                               └─-.mount @984ms
> > >                                 └─-.slice @985ms
> > > 
> > > So improving the speed at which any of the services in the
> > > qubes-qrexec-agent.service critical chain start, or possibly getting
> > > rid of some dependencies entirely, should improve overall Qubes OS
> > > performance.
> > > For example, these numbers were smaller in 4.1 on the same machine
> > > with a comparable VM [6].
> > > 
> > > [6] 
> > > https://github.com/3hhh/qubes-performance/blob/master/samples/4.1/t530_debian-11_01.txt#L32-L40
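For anyone wanting to reproduce these measurements in their own qube: the chain above is systemd-analyze output, obtainable roughly like this (diagnostic commands only; the unit name is taken from the output above):

```shell
# Inside the qube: show the dependency chain that gates the agent's start.
systemd-analyze critical-chain qubes-qrexec-agent.service

# Per-unit startup cost, worst offenders first.
systemd-analyze blame | head -n 20
```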
> > 
> > Ouch.  500ms to set up networking is way too slow, and it looks like
> > setting up the root filesystem is also slow.  dev-mapper-dmroot.device
> > takes 1.310s to start up,
> 
> Where did you get that from? I don't see dev-mapper-dmroot.device
> mentioned in any of the above...

systemd-analyze blame (output attached).

> Anyway, even if it were there, it would be interesting to learn what
> that actually means. If a dom0-provided kernel is used, the initramfs
> is _not_ using systemd, and so there is no measurement of how long it
> takes to actually construct that device (which, in any currently
> supported Qubes version, is simply a symlink to /dev/xvda3, not a real
> dm device).

It means that 1.310s elapses between the kernel transferring control to
systemd and systemd finding that /dev/mapper/dmroot is ready.

> > which is nearly half of the 2.170s spent in
> > userspace on the VM I used to write this message.  I suspect this is
> > largely a problem with the Xen toolstack, which is not optimized, to
> > put it mildly.  Replacing it with an optimized toolstack like the one
> > Edera uses would make things much, much faster.
> 
> I have no idea how you got to the Xen toolstack here. The above is
> from within a VM, after the toolstack has done its job. It isn't
> even installed in the VM...

I assumed that the toolstack booted the VM and _then_ attached the
devices.  That assumption is probably wrong.

> > > > And also, it can now do over 50 calls per second; I'd say that's
> > > > way more than enough for its intended use.
> > 
> > _Not_ fast enough for an internet-facing qrexec-call-per-request
> > service, though, unless one checks authentication before the call to
> > prevent denial-of-service attacks.
> 
> As I said, "for its intended use". Qubes OS is not a server operating
> system.

The Qubes OS build servers run Qubes OS 🙂.
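For completeness, "checking before the call" could be as simple as a token bucket in front of whatever dispatches the qrexec call. This is an illustrative sketch only, not anything qrexec ships:

```python
import time

class TokenBucket:
    """Tiny token-bucket limiter to gate requests *before* they become
    qrexec calls (illustrative only; not part of qrexec)."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        """Return True if one request may proceed, consuming a token."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A request that fails `allow()` (or authentication) would be rejected without ever spawning a qrexec call, so the per-call cost never becomes an amplification vector.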
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-devel+unsubscr...@googlegroups.com.
To view this discussion visit 
https://groups.google.com/d/msgid/qubes-devel/Z6lokyThfV0BQjWD%40itl-email.
