Hello Dario,

On 09.02.18 15:18, Dario Faggioli wrote:
> Ok, so you're giving:
>  - 40% CPU time to Domain-0
>  - 50% CPU time to DomR
>  - 40% CPU time to DomA
>  - 40% CPU time to DomD
> total utilization is 170%. As far as I've understood you have 4 CPUs,
> right? If yes, there *should* be no problems. (Well, in theory, we'd
> need a schedulability test to know for sure whether the system is
> "feasible", but I'm going to assume that it sort of is, and leave to
> Meng any further real-time scheduling analysis related configurations.
> :-) ).
Being a bit more specific, I give:

 - 4*10% CPU time to Domain-0
 - 1*50% CPU time to DomR
 - 4*10% CPU time to DomA
 - 4*10% CPU time to DomD

This seems to be schedulable on 4*100% CPU. I guess Meng could shed more light on this from a theoretical point of view.
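For what it's worth, here is the arithmetic behind that as a little Python sketch. It only checks the necessary condition sum(U_i) <= m (number of pcpus), not any sufficient multiprocessor schedulability test; I leave the latter to Meng:

```python
# Rough feasibility check for the RTDS parameters above.
# This verifies only the necessary condition sum(U_i) <= m;
# it is NOT a sufficient multiprocessor schedulability test.

vcpus = {
    "Domain-0": [0.10] * 4,  # 4 vcpus at 10% each
    "DomR":     [0.50],      # 1 vcpu at 50%
    "DomA":     [0.10] * 4,
    "DomD":     [0.10] * 4,
}
pcpus = 4

total = sum(u for us in vcpus.values() for u in us)
print("total utilization: %.0f%% of %d00%%" % (total * 100, pcpus))
print("necessary condition holds:", total <= pcpus)
```

Which gives 170% of 400%, matching the numbers quoted above.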

> So, this should work, as allowing the other domains to use extratime
> should *not* allow them to prevent DomR from getting its 50% share of
> CPU time.
That is my point.

> I wonder, though, if this case would not be better if cpupools are
> used. E.g., you can leave the non real-time domains in the default
> pool (and have Credit or Credit2 there), and then have an RTDS
> cpupool in which you put DomR, with its 50% share, and perhaps
> someone else (just to avoid wasting the other 50%).
The problem here is that a domain cannot have its vcpus in different cpupools, so we would waste that fraction of a CPU. IMHO, with pcpu partitioning through cpupools we lose the practical application of the RTDS scheduler: a pool with the null scheduler would do the job for the RT domain just as well. One benefits from the RTDS scheduler only when an RT vcpu needs just a fraction of a pcpu and the remaining resources should not be wasted.
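To put a number on the "wasted fraction" point, here is a trivial accounting sketch (the utilization figures are the ones from this thread):

```python
# Sketch of the "wasted fraction" argument (numbers from this thread).
# If DomR's single 50% vcpu is pinned into its own cpupool, the rest of
# that pcpu's capacity is unusable by domains in other pools.
pcpu_capacity = 1.0      # one full pcpu in DomR's dedicated pool
domr_util = 0.50         # DomR's single vcpu utilization

wasted = pcpu_capacity - domr_util
print("capacity idling in DomR's pool: %.0f%%" % (wasted * 100))
# Under RTDS in a shared pool, that 50% can instead be handed out as
# extratime to the non-RT domains.
```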

> Basically, can you also fully load (like with dd as above, or just yes
> or while(1)) DomR, and then check if it is getting 50%? For a first
> approximation of this, you can check with xentop.

For sure I did this run. xentop clearly shows 50% when DomR is loaded with dd, and an equal distribution of CPU resources among the other domains, i.e.:

      NAME STATE  CPU(sec) CPU(%)  MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
  DomA     -----r    20325  117.0 2178960   53.6   2180096      53.7     4    0        0        0    0      0      0      0         0         0   11
  DomD     -----r    20282  116.4 1048464   25.8   1049600      25.8     4    0        0        0    0      0      0      0         0         0   11
  Domain-0 -----r    21123  117.2  262144    6.5  no limit       n/a     4    0        0        0    0      0      0      0         0         0    2
  DomR     -----r      284   50.0  196496    4.8    197632       4.9     1    1        0        0    0      0      0      0         0         0   11

When I run my test, xentop shows something like the following:

      NAME STATE  CPU(sec) CPU(%)  MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
  Domain-0 -----r   223493  120.2 2178960   53.6   2180096      53.7     4    0        0        0    0      0      0      0         0         0    2
  DomD     -----r   215036  118.7 1048464   25.8   1049600      25.8     4    0        0        0    0      0      0      0         0         0   11
  DomA     ------   215396  115.4 2178960   53.6   2180096      53.7     4    0        0        0    0      0      0      0         0         0   11
  DomR     -----r     6145   38.7  196496    4.8    197632       4.9     1    1        0        0    0      0      0      0         0         0   11

Which is OK for the litmus-rt test load, which by default runs the task for 0.95 of the given wcet.

Still, I get several deadline misses in a 4-minute run of a task with a 10ms period. xentop would not show such deviations; it is too coarse-grained.
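Just to put numbers on "too coarse-grained", a back-of-the-envelope sketch (the 20 misses below is a made-up placeholder for "several"):

```python
# Why second-granularity CPU% averages hide deadline misses at a 10 ms period.
period_ms = 10
run_s = 4 * 60

jobs = run_s * 1000 // period_ms
print("jobs in the run:", jobs)   # 24000

missed = 20                       # hypothetical stand-in for "several" misses
print("miss ratio: %.3f%%" % (100.0 * missed / jobs))
# A fraction this small is invisible in xentop's averaged CPU(%), which is
# why tracing (or litmus-rt's own statistics) is needed to see it.
```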

> If you want to be even more sure/you want to know it precisely, you
> can use tracing.
Yep, maybe it's time for me to get familiar with tracing in Xen.

> If DomR is not able to get its share, then we have an issue/bug in the
> scheduler. If it does, then the scheduler is doing its job, and the
> issue may be somewhere else (e.g., something inside the guest may eat
> some of the budget, in such a way that not all of it is available when
> you actually need it).
The DomR guest is really lean; I already showed its process list earlier. I really doubt that init together with getty on HVC can eat another 10% of the CPU at any moment.


--

*Andrii Anisov*



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
