Hi Richard,

On 11.11.24 at 14:15, Richard Clark wrote:
Your comment not to use malloc is extremely confusing. I've also seen your response that using a lot of small malloc/free calls will slow down the kernel.

I'm sensing a misunderstanding here. There is the L4Re microkernel, aka Fiasco.OC, and there is the l4re_kernel, as in pkg/l4re-core/l4re_kernel. These are two completely different things. The topic of the other thread was pkg/l4re-core/l4re_kernel, which is a service running in each application.

Furthermore, I didn't want to suggest that there is any design-inherent slowdown attached to the use of malloc/free with respect to l4re_kernel. The l4re_kernel only gets involved when new memory is mapped into your application, which happens only when the memory backing the chunks malloc manages runs out.
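To illustrate (a hypothetical sketch, not code from l4re-core): if you want l4re_kernel out of the measured path entirely, you can grow and touch the heap once before the timed loop, so that later small malloc/free pairs are served from memory that is already mapped:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical warm-up: grow the heap once up front. Later small
     * malloc/free pairs are then typically served from this already
     * mapped memory, so l4re_kernel is not involved mid-benchmark. */
    static void warm_up_heap(size_t bytes)
    {
      char *p = malloc(bytes);
      if (p)
        memset(p, 0, bytes); /* touch the pages so they really get mapped */
      free(p);               /* usually stays in malloc's free lists */
    }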

Cheers,
Philipp



-----Original Message-----
From: Adam Lackorzynski <a...@l4re.org>
Sent: Monday, November 11, 2024 5:29 AM
To: Richard Clark <richard.cl...@coheretechnology.us>; 
l4-hackers@os.inf.tu-dresden.de
Subject: Re: Throughput questions....

Hi Richard,

For shared-memory-based communication I'd like to suggest using L4::Irqs instead of IPC messages, especially IPC calls, which involve a back and forth. Please also do not use malloc within a benchmark (or benchmark malloc separately, to get an understanding of how the time is split between L4 operations and libc). On QEMU it should be OK when running with KVM, less so without KVM.
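A rough sketch of what I mean, in l4sys C API terms (simplified and illustrative only; the capability setup and the exact bind call depend on your L4Re version):

    #include <l4/sys/irq.h>
    #include <l4/sys/ipc.h>

    /* Producer side: write the message into shared memory, then fire
     * the Irq. The trigger does not block and needs no reply. */
    void notify(l4_cap_idx_t irq)
    {
      /* ... put message into the shared-memory ring ... */
      l4_irq_trigger(irq);
    }

    /* Consumer side: assumes 'irq' was bound to this thread beforehand
     * (l4_rcv_ep_bind_thread() in current trees; older ones differ).
     * Block until triggered, then drain all pending messages. */
    void wait_and_consume(l4_cap_idx_t irq)
    {
      l4_irq_receive(irq, L4_IPC_NEVER);
      /* ... read message(s) from the shared-memory ring ... */
    }

Since the trigger is fire-and-forget, the producer never blocks on the consumer, and several messages can be drained under a single wakeup.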

I do not have a recommendation for an AMD-based laptop.


Cheers,
Adam

On Thu Nov 07, 2024 at 13:36:06 +0000, Richard Clark wrote:
Dear L4Re experts,

We now have a couple of projects in which we are going to be using your OS, so I've been implementing and testing some of the basic functionality that we will need, namely message passing.
I've been using the Hello World QEMU example as my starting point and have created a number of processes that communicate via a pair of unidirectional channels using IPC and shared memory: one channel for messages coming in, one channel for messages going out. The sender does an IPC_CALL() when a message has been put into shared memory. The receiver completes an IPC_RECEIVE(), fetches the message, and then responds with an IPC_REPLY() to the original IPC_CALL(). It is all interrupt/event driven; no sleeping, no polling.

It works. I've tested it for robustness, and it behaves exactly as expected, with the exception of throughput.
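In l4sys C API terms, the shape is roughly this (a simplified sketch, not my actual code; 'gate' stands in for the channel's IPC gate capability):

    #include <l4/sys/ipc.h>

    /* Sender: the message is already in shared memory; the call
     * blocks until the receiver replies. */
    void send_one(l4_cap_idx_t gate)
    {
      l4_msgtag_t tag = l4_msgtag(0 /* proto */, 0 /* words */, 0, 0);
      l4_ipc_call(gate, l4_utcb(), tag, L4_IPC_NEVER);
    }

    /* Receiver: wait for a call, fetch the message from shared
     * memory, then reply to unblock the sender and wait again. */
    void receive_loop(void)
    {
      l4_umword_t label;
      l4_ipc_wait(l4_utcb(), &label, L4_IPC_NEVER);
      for (;;)
        {
          /* ... fetch message from shared memory ... */
          l4_ipc_reply_and_wait(l4_utcb(), l4_msgtag(0, 0, 0, 0),
                                &label, L4_IPC_NEVER);
        }
    }

So every message pays a full call/reply round trip between the two processes.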

I seem to be getting only 4000 messages per second, or roughly 4 messages per millisecond, i.e. about 250 microseconds per message. Now, there are a couple of malloc()/free() and condition_wait()/condition_signal() calls as the events and messages get passed through the sender and receiver threads, but nothing (IMHO) that should slow things down too much. Messages are very small, around 50 bytes, as I'm really just trying to get a handle on basic overhead. So pretty much, yes, I'm beating the context-switching mechanisms to death.

My questions:
Is this normal(ish) throughput for a single-core x86_64 QEMU system?
Am I getting hit by a time-slicing scheduler issue, with most of my CPU being wasted?
How do I switch to a different, non-time-sliced scheduler?
Any thoughts on what I could try to improve throughput?

And lastly: we are going to be signing up for training soon. Do you have a recommendation for a big, beefy AMD-based Linux laptop?


Thanks!

Richard H. Clark

--
philipp.epp...@kernkonzept.com - Tel. 0351-41 883 221
http://www.kernkonzept.com

Kernkonzept GmbH.  Registered office: Dresden.  Amtsgericht Dresden, HRB 31129.
Managing Director: Dr.-Ing. Michael Hohmuth


