Hi Mohamed,

The scheduling proxy is there to describe the maximum resources accessible to its clients. You can schedule multiple applications through this proxy, and they will only have access to the cores and priorities defined by the scheduling proxy. So there can be a one-to-one mapping of scheduling proxies to applications, but it can also be a one-to-many mapping. In the latter case, each application's threads can still set their priority and core themselves, but the scheduling proxy limits what they can get. For example, if a client thread selects prio 100 and core 0, the proxy created with (0x50, 0xE) will map that request into the priority range [0x50,0x59] and onto one of the cores 1-3. In this case the system-wide result for this thread will likely be prio 0x59 on core 1.
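
To make the one-to-many case concrete, here is a minimal ned sketch. It reuses the vmm.new_sched helper from your snippet below (so it assumes the uvmm Lua helpers are loaded) together with the standard L4.default_loader:start call; the client binary names and log tags are made up:

    local L4 = require("L4")
    local ld = L4.default_loader

    -- One proxy shared by two clients: both are confined to cores 1-3
    -- (bitmap 0xE) and to the priority window starting at 0x50.
    local sched = vmm.new_sched(0x50, 0xE)

    -- Whatever priorities and cores the clients' threads request, the
    -- proxy maps the requests into that window and onto those cores.
    ld:start({ scheduler = sched, log = {"cl1", "green"} }, "rom/client1")
    ld:start({ scheduler = sched, log = {"cl2", "blue"}  }, "rom/client2")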

Of course, you can also write your own scheduling proxy that implements a different translation.

In your example, the thread priority of 0x52 is already a translated priority: 2 is the default base priority every thread starts with, and 0x50 is the lower bound of the scheduling proxy's priority range.
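
If it helps, the same arithmetic written out (purely illustrative Lua):

    -- Observed priority = proxy's lower bound + thread's default priority.
    print(string.format("%#x", 0x50 + 2))  -- prints 0x52 (82 decimal)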

Think of scheduling proxies as a tool to manage resources in the sense of partitioning and not in the sense of a scheduling algorithm.


Now to your question about vCPUs. From the host system's view, a vCPU is a thread and gets scheduled like any other thread. For a thread to become a vCPU, some kernel primitives must be invoked; however, it is best to use uvmm and let it take care of this. The scheduling proxy has nothing to do with vCPUs. A uvmm will create one thread per core it has access to and will turn each of them into a vCPU for the guest. Note that the number of vCPUs created by uvmm is the minimum of the number of CPU nodes in uvmm's device tree and the number of cores physically available to it. A uvmm will place only one thread per core, but you can of course have multiple uvmm instances share a physical core. The CPU time will then be divided among the clients. (In the sharing case, be aware of thread priorities and the resulting schedule!)
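
Continuing the ned sketch from above, the sharing case could look roughly like this (the priority values follow the translation described earlier, and starting uvmm directly via rom/uvmm is simplified; in practice the uvmm Lua helpers also wire up memory, kernel image and device tree):

    -- Both uvmm instances may only use physical core 1 (bitmap 0x2), so
    -- each creates exactly one vCPU thread there. vm_high's priority
    -- window lies above vm_low's, so its vCPU runs whenever it is ready;
    -- vm_low only gets the core while vm_high's vCPU is blocked.
    ld:start({ scheduler = vmm.new_sched(0x60, 0x2),
               log = {"vm_high", "green"} }, "rom/uvmm")
    ld:start({ scheduler = vmm.new_sched(0x50, 0x2),
               log = {"vm_low", "blue"}  }, "rom/uvmm")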


Cheers,
Philipp


On 25.11.24 11:22, Mohamed Dawod wrote:
I know that the scheduling proxy in L4 is used to set the cores and priority range for a dedicated task. For example, the scheduling proxy below makes the task run on cores 1, 2, 3 with the priority range [52,62].

    scheduler = vmm.new_sched(0x50, 0xE)

When I switch to debugging mode, I see that this task has priority = 52 in the *pr* column! So why does the scheduling proxy set a priority range instead of a single value like Linux? How does L4 use the priority range?


My second question is about the relationship between physical CPUs and virtual CPUs in L4. I know that CPU virtualization should give the ability to virtualize the available physical CPUs, so that even if we have a single physical CPU core, CPU virtualization enables us to provide multiple virtual CPU cores for the VMs running on top of the hypervisor. Actually, I don't see this behaviour in the scheduling proxy in L4! Instead, the cores parameter only takes a bitmap of the physical cores. Can I use the scheduling proxy or any other method/workaround to provide CPU virtualization?


Thanks,
Mohamed Dawod

--
philipp.epp...@kernkonzept.com - Tel. 0351-41 883 221
http://www.kernkonzept.com

Kernkonzept GmbH.  Sitz: Dresden.  Amtsgericht Dresden, HRB 31129.
Geschäftsführer: Dr.-Ing. Michael Hohmuth
