On 1/21/2016 11:06 PM, Tianyang Chen wrote:
Budget replenishment and enforcement are separated by adding
a replenishment timer, which fires at the next most imminent
release time of all runnable vcpus.
A new runningq has been added to keep track of all vcpus that
are on pcpus.
The following functions have major changes to manage the runningq
and replenishment
Well, I'm just gonna comment on a couple of things here. The major
difference between this patch and the previous one is the addition of a
runningq, which leads to extra code in context_save().

Since the scheduler doesn't run until the first vcpu wakes up, wake()
does some of the timer work, in addition to the logic in the handler.
It works nicely, especially when a vcpu wakes up and has the earliest
next release time.
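A minimal sketch of that wake-path decision (the helper name and
parameters are illustrative, not from the patch): wake() only needs to
rearm the replenishment timer when the waking vcpu's next release time
precedes the currently programmed expiry.

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t s_time_t;   /* Xen's signed system time, in ns */

/* Hypothetical helper for wake(): 'expiry' is the firing time the
 * timer is currently armed for; 'next_release' is the waking vcpu's
 * next release time. Rearm only when the new event is earlier. */
static int needs_rearm(s_time_t expiry, s_time_t next_release)
{
    return next_release < expiry;
}
```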
@@ -160,6 +169,7 @@ struct rt_private {
*/
struct rt_vcpu {
struct list_head q_elem; /* on the runq/depletedq list */
+ struct list_head runningq_elem; /* on the runningq list */
struct list_head sdom_elem; /* on the domain VCPU list */
Is it better to re-use q_elem so extra helper functions can be
avoided? Maybe another flag that shows which queue a vcpu is on?
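A rough sketch of that flag idea (the enum and struct names here are
made up for illustration, not from the patch): each vcpu keeps a single
q_elem plus a tag recording which queue that element is currently
linked on, so one set of queue helpers can serve all three queues.

```c
#include <assert.h>

/* Hypothetical queue tag: which list a vcpu's single q_elem sits on. */
enum rt_queue { Q_NONE, Q_RUNQ, Q_DEPLETEDQ, Q_RUNNINGQ };

struct toy_vcpu {
    enum rt_queue on_q;   /* queue the vcpu is currently on, if any */
    /* struct list_head q_elem; ...the one shared list element... */
};

static void toy_q_insert(struct toy_vcpu *v, enum rt_queue q)
{
    assert(v->on_q == Q_NONE);   /* a vcpu is on at most one queue */
    v->on_q = q;
    /* list_add(&v->q_elem, <head for q>); */
}

static void toy_q_remove(struct toy_vcpu *v)
{
    assert(v->on_q != Q_NONE);
    v->on_q = Q_NONE;
    /* list_del_init(&v->q_elem); */
}
```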
@@ -848,8 +880,6 @@ rt_schedule(const struct scheduler *ops, s_time_t now,
bool_t tasklet_work_sched
/* burn_budget would return for IDLE VCPU */
burn_budget(ops, scurr, now);
- __repl_update(ops, now);
-
if ( tasklet_work_scheduled )
{
snext = rt_vcpu(idle_vcpu[cpu]);
@@ -875,6 +905,9 @@ rt_schedule(const struct scheduler *ops, s_time_t now,
bool_t tasklet_work_sched
set_bit(__RTDS_delayed_runq_add, &scurr->flags);
snext->last_start = now;
+
+ ret.time = -1; /* if an idle vcpu is picked */
+
I kinda just stuck this in based on the previous discussion about RTDS
busy-waiting.
+/* The replenishment timer handler scans
+ * all three queues and finds the most
+ * imminent release time. If there are no
+ * runnable vcpus, the timer is set in wake()
+ */
+static void repl_handler(void *data){
+ unsigned long flags;
+ s_time_t now = NOW();
+    s_time_t min_repl = LONG_MAX; /* max time used in comparison */
+ struct scheduler *ops = data;
+ struct rt_private *prv = rt_priv(ops);
+ struct list_head *runq = rt_runq(ops);
+ struct list_head *depletedq = rt_depletedq(ops);
+ struct list_head *runningq = rt_runningq(ops);
+ struct list_head *iter;
+ struct list_head *tmp;
+ struct rt_vcpu *svc = NULL;
+
+ stop_timer(&repl_timer);
+
+ spin_lock_irqsave(&prv->lock, flags);
+
Here is a lock for protecting the queues. Correct me if I'm wrong:
since in alloc_pdata() the private lock points to the global system
lock, I assume there is no difference here.
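The scan the handler performs under that lock can be sketched like this
(a standalone toy, assuming an array of release times standing in for
the three queues; the function name and 'none' fallback are mine): walk
all queued vcpus' next release times and keep the most imminent one,
returning the sentinel when the queues are empty, in which case rearming
is left to wake().

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t s_time_t;

/* Hypothetical sketch: find the earliest release time among n queued
 * vcpus; 'none' (e.g. LONG_MAX) is returned when nothing is queued. */
static s_time_t most_imminent(const s_time_t *release, int n, s_time_t none)
{
    s_time_t min_repl = none;
    for (int i = 0; i < n; i++)
        if (release[i] < min_repl)
            min_repl = release[i];
    return min_repl;
}
```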
--
Tianyang Chen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel