On 10.09.19 17:36, Jan Beulich wrote:
> On 09.08.2019 16:58, Juergen Gross wrote:
>> +static bool sched_tasklet_check(unsigned int cpu)
>> +{
>> +    bool tasklet_work_scheduled = false;
>> +    const cpumask_t *mask = get_sched_res(cpu)->cpus;
>> +    int cpu_iter;
> unsigned int?
Yes.
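
To make the change concrete, a minimal sketch of the corrected
declaration (sched_tasklet_check_cpu() is a hypothetical stand-in for
the per-CPU tasklet check the patch actually performs):

static bool sched_tasklet_check(unsigned int cpu)
{
    bool tasklet_work_scheduled = false;
    const cpumask_t *mask = get_sched_res(cpu)->cpus;
    unsigned int cpu_iter;    /* was: int cpu_iter */

    /* Check each cpu of the scheduling resource for pending tasklet work. */
    for_each_cpu ( cpu_iter, mask )
        if ( sched_tasklet_check_cpu(cpu_iter) )
            tasklet_work_scheduled = true;

    return tasklet_work_scheduled;
}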
>> +static void context_saved(struct vcpu *prev)
>> +{
>> +    struct sched_unit *unit = prev->sched_unit;
>> +
>> +    /* Clear running flag /after/ writing context to memory. */
>> +    smp_wmb();
>> +
>> +    prev->is_running = 0;
>> +    unit->is_running = 0;
>> +    unit->state_entry_time = NOW();
>> +
>> +    /* Check for migration request /after/ clearing running flag. */
>> +    smp_mb();
>> +
>> +    sched_context_saved(vcpu_scheduler(prev), unit);
>> +
>> +    sched_unit_migrate_finish(unit);
>> +}
>> +
>> +/*
>> + * Rendezvous on end of context switch.
>> + * As no lock is protecting this rendezvous function, we need to use atomic
>> + * access functions on the counter.
>> + * The counter will be 0 in case no rendezvous is needed. For the rendezvous
>> + * case it is initialised to the number of cpus to rendezvous plus 1. Each
>> + * member entering decrements the counter. The last one will decrement it to
>> + * 1 and perform the final needed action in that case (call of context_saved()
>> + * if the vcpu was switched), and then set the counter to zero. The other
>> + * members will wait until the counter becomes zero before they proceed.
>> + */
>> +void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
>> +{
>> +    struct sched_unit *next = vnext->sched_unit;
>> +
>> +    if ( atomic_read(&next->rendezvous_out_cnt) )
>> +    {
>> +        int cnt = atomic_dec_return(&next->rendezvous_out_cnt);
>> +
>> +        /* Call context_saved() before releasing other waiters. */
>> +        if ( cnt == 1 )
>> +        {
>> +            if ( vprev != vnext )
>> +                context_saved(vprev);
>> +            atomic_set(&next->rendezvous_out_cnt, 0);
>> +        }
>> +        else
>> +            while ( atomic_read(&next->rendezvous_out_cnt) )
>> +                cpu_relax();
> How come context_saved() is not called on this "else" branch? How
> will vprev->is_running get cleared there? Or, since everything
> else in the function is per-unit, does this clearing want to move
> here?
That's a bug I've already corrected in my version 3 (which is the
version being tested by Sergey).
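
For reference, one plausible shape of such a fix (illustrative only,
not the actual v3 code): split context_saved() into a per-vcpu part
which every member of the rendezvous runs for its own vprev, and a
per-unit part which only the last member runs. vcpu_context_saved()
and unit_context_saved() below are made-up names for those two halves:

static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext)
{
    /* Clear running flag /after/ writing context to memory. */
    smp_wmb();

    if ( vprev != vnext )
        vprev->is_running = 0;
}

void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
{
    struct sched_unit *next = vnext->sched_unit;

    if ( atomic_read(&next->rendezvous_out_cnt) )
    {
        int cnt = atomic_dec_return(&next->rendezvous_out_cnt);

        /* Each member clears its own vcpu's running flag. */
        vcpu_context_saved(vprev, vnext);

        /* Do the per-unit work before releasing the other waiters. */
        if ( cnt == 1 )
        {
            /*
             * unit_context_saved() stands for the remaining per-unit
             * part of context_saved() above (unit->is_running, the
             * smp_mb()/migration check, sched_context_saved()).
             */
            if ( vprev != vnext )
                unit_context_saved(vprev->sched_unit);
            atomic_set(&next->rendezvous_out_cnt, 0);
        }
        else
            while ( atomic_read(&next->rendezvous_out_cnt) )
                cpu_relax();
    }
}

This way vprev->is_running is cleared on both branches, while the
per-unit work still happens exactly once.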
>> -void context_saved(struct vcpu *prev)
>> +static void sched_slave(void)
>>  {
>> -    /* Clear running flag /after/ writing context to memory. */
>> -    smp_wmb();
>> +    struct vcpu *vprev = current;
>> +    struct sched_unit *prev = vprev->sched_unit, *next;
>> +    s_time_t now;
>> +    spinlock_t *lock;
>> +    int cpu = smp_processor_id();
> unsigned int?
Yes.
>> @@ -1971,6 +2164,7 @@ void __init scheduler_init(void)
>>      int i;
>>  
>>      open_softirq(SCHEDULE_SOFTIRQ, schedule);
>> +    open_softirq(SCHED_SLAVE_SOFTIRQ, sched_slave);
> Noticing the "we have a race" comment and code in schedule() I
> wonder if there isn't enough state for schedule() to know
> whether to call sched_slave(), rather than having this extra
> softirq.
Patch 35 in particular adds further uses of SCHED_SLAVE_SOFTIRQ. I have
tried to avoid it, but the results looked rather ugly and were still
full of deadlocks.
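
To illustrate why the extra softirq is convenient: the cpu having made
the scheduling decision needs to make its siblings enter the
context-switch path without them running a full schedule() of their
own. A sketch of how the master might kick the slaves (the helper and
its parameters are invented, not taken from the series):

static void sched_kick_slaves(const cpumask_t *siblings, unsigned int cpu)
{
    cpumask_t mask;

    /* Kick all cpus of the scheduling resource except ourselves. */
    cpumask_copy(&mask, siblings);
    __cpumask_clear_cpu(cpu, &mask);

    if ( !cpumask_empty(&mask) )
        cpumask_raise_softirq(&mask, SCHED_SLAVE_SOFTIRQ);
}

Each kicked sibling then runs sched_slave() from softirq context and
joins the rendezvous without making a scheduling decision of its own.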
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel