Actually CCing Rik now!
On Thu, Dec 02, 2010 at 08:57:16PM +0530, Srivatsa Vaddagiri wrote:
> On Thu, Dec 02, 2010 at 03:49:44PM +0200, Avi Kivity wrote:
> > On 12/02/2010 03:13 PM, Srivatsa Vaddagiri wrote:
> > >On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
> > >> >> What I'd like to see in directed yield is donating exactly the
> > >> >> amount of vruntime that's needed to make the target thread run. [...]
On Thu, Dec 02, 2010 at 05:33:40PM +0200, Avi Kivity wrote:
> A0 and A1's vruntime will keep growing, eventually B will become
> leftmost and become runnable (assuming leftmost == min vruntime, not
> sure what the terminology is).
Donation (in directed yield) will cause vruntime to drop as well [...]
On 12/02/2010 05:27 PM, Srivatsa Vaddagiri wrote:
> >Even that would require some precaution in directed yield to ensure that it
> >doesn't unduly inflate vruntime of target, hurting fairness for other
> >guests on same cpu as target (example guest code that can lead to this
> >situation below) [...]
On Thu, Dec 02, 2010 at 03:49:44PM +0200, Avi Kivity wrote:
> On 12/02/2010 03:13 PM, Srivatsa Vaddagiri wrote:
> >On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
> >> >> What I'd like to see in directed yield is donating exactly the
> >> >> amount of vruntime that's needed to make the target thread run.
On 12/02/2010 03:13 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
> >> What I'd like to see in directed yield is donating exactly the
> >> amount of vruntime that's needed to make the target thread run.
> >
> >I presume this requires the target vcpu to move left in rb-tree to run
> >earlier than scheduled [...]
On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
> >> What I'd like to see in directed yield is donating exactly the
> >> amount of vruntime that's needed to make the target thread run.
> >
> >I presume this requires the target vcpu to move left in rb-tree to run
> >earlier than scheduled [...]
On 12/02/2010 02:19 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
> What I'd like to see in directed yield is donating exactly the
> amount of vruntime that's needed to make the target thread run. The
How would that work well with hard-limits? The target thread would have been
rate limited and no amount [...]
On 12/02/2010 01:47 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
> On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
> >>
> >> We are dealing with just one task here (the task that is yielding).
> >> After recording how much timeslice we are "giving up" in
> >> current->donate_time (donate_time is perhaps not the right name to use),
> >> we adjust the yielding task's vruntime [...]
On Thu, Dec 02, 2010 at 05:17:00PM +0530, Srivatsa Vaddagiri wrote:
> Just was wondering how this would work in case of buggy guests. Let's say
> that a guest ran into an AB<->BA deadlock. VCPU0 spins on lock B (held by
> VCPU1 currently), while VCPU1 spins on lock A (held by VCPU0 currently).
> Both [...]
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
> What I'd like to see in directed yield is donating exactly the
> amount of vruntime that's needed to make the target thread run. The
How would that work well with hard-limits? The target thread would have been
rate limited and no amount [...]
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
> On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
> >>
> >> We are dealing with just one task here (the task that is yielding).
> >> After recording how much timeslice we are "giving up" in
> >> current->donate_time (donate_time is perhaps not the right name to use),
> >> we adjust the yielding task's vruntime [...]
On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
>
> We are dealing with just one task here (the task that is yielding).
> After recording how much timeslice we are "giving up" in current->donate_time
> (donate_time is perhaps not the right name to use), we adjust the yielding
> task's vruntime [...]
On 12/01/2010 07:29 PM, Srivatsa Vaddagiri wrote:
> > > A plain yield (ignoring no-opiness on Linux) will penalize the
> > > running guest wrt other guests. We need to maintain fairness.
Avi, any idea how much penalty are we talking of here in using plain yield?
If that is acceptable [...]
On 12/01/2010 09:07 PM, Peter Zijlstra wrote:
>
> The pause loop exiting & directed yield patches I am working on
> preserve inter-vcpu fairness by round robining among the vcpus
> inside one KVM guest.
I don't necessarily think that's enough.
Suppose you've got 4 vcpus, one is holding a lock [...]
On Wed, 2010-12-01 at 14:42 -0500, Rik van Riel wrote:
> On 12/01/2010 02:35 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
>
> >> Even if we equalized the amount of CPU time each VCPU
> >> ends up getting across some time interval, that is no
> >> guarantee they get useful work done, or that the time
> >> gets fairly divided to _user processes_ [...]
On 12/01/2010 02:35 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
Even if we equalized the amount of CPU time each VCPU
ends up getting across some time interval, that is no
guarantee they get useful work done, or that the time
gets fairly divided to _user processes_ [...]
On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
> On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> >> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
>
> >> The pause loop exiting & directed yield patches I am working on
> >> preserve inter-vcpu fairness by round robining among the vcpus
> >> inside one KVM guest.
On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
The pause loop exiting & directed yield patches I am working on
preserve inter-vcpu fairness by round robining among the vcpus
inside one KVM guest.
On Wed, 2010-12-01 at 23:30 +0530, Srivatsa Vaddagiri wrote:
> On Wed, Dec 01, 2010 at 06:45:02PM +0100, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
> > >
> > > yield_task_fair(...)
> > > {
> > >
> > > + ideal_runtime = sched_slice(cfs_rq, curr);
> [...]
On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
> >> Directed yield and fairness don't mix well either. You can end up
> >> feeding the other tasks more time than you'll ever get back.
On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
Directed yield and fairness don't mix well either. You can end up
feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your cgroup then
inter-guest scheduling fairness should be maintained [...]
On Wed, Dec 01, 2010 at 06:45:02PM +0100, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
> >
> > yield_task_fair(...)
> > {
> >
> > + ideal_runtime = sched_slice(cfs_rq, curr);
> > + delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> > [...]
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
> On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
> > Directed yield and fairness don't mix well either. You can end up
> > feeding the other tasks more time than you'll ever get back.
>
> If the directed yield is always to another task in your cgroup then
> inter-guest scheduling fairness should be maintained [...]
On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
>
> yield_task_fair(...)
> {
>
> + ideal_runtime = sched_slice(cfs_rq, curr);
> + delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> + rem_time_slice = ideal_runtime - delta_exec;
> + [...]
On Wed, Dec 01, 2010 at 05:25:18PM +0100, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
>
> > Not if yield() remembers what timeslice was given up and adds that back when
> > thread is finally ready to run. Figure below illustrates this idea:
> > [figure truncated]
On Wed, 2010-12-01 at 09:17 -0800, Chris Wright wrote:
> Directed yield and fairness don't mix well either. You can end up
> feeding the other tasks more time than you'll ever get back.
If the directed yield is always to another task in your cgroup then
inter-guest scheduling fairness should be maintained [...]
* Peter Zijlstra (a.p.zijls...@chello.nl) wrote:
> On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
>
> > Not if yield() remembers what timeslice was given up and adds that back when
> > thread is finally ready to run. Figure below illustrates this idea:
> > [ASCII timeline figure, truncated]
On Wed, 2010-12-01 at 21:42 +0530, Srivatsa Vaddagiri wrote:
> Not if yield() remembers what timeslice was given up and adds that back when
> thread is finally ready to run. Figure below illustrates this idea:
>
> [ASCII timeline: slices of A0, C0, D0 alternating on cpu p0; truncated]
On Wed, Dec 01, 2010 at 02:56:44PM +0200, Avi Kivity wrote:
> >> (a directed yield implementation would find that all vcpus are
> >> runnable, yielding optimal results under this test case).
> >
> >I would think a plain yield() (rather than usleep/directed yield) would
> >suffice here (yield [...]
On 12/01/2010 02:37 PM, Srivatsa Vaddagiri wrote:
On Wed, Nov 24, 2010 at 04:23:15PM +0200, Avi Kivity wrote:
> >>I'm more concerned about lock holder preemption, and interaction
> >>of this mechanism with any kernel solution for LHP.
> >
> >Can you suggest some scenarios and I'll create some test cases?
On Wed, Nov 24, 2010 at 04:23:15PM +0200, Avi Kivity wrote:
> >>I'm more concerned about lock holder preemption, and interaction
> >>of this mechanism with any kernel solution for LHP.
> >
> >Can you suggest some scenarios and I'll create some test cases?
> >I'm trying to figure out the best way to evaluate [...]
On 11/24/2010 03:58 PM, Anthony Liguori wrote:
On 11/24/2010 02:18 AM, Avi Kivity wrote:
On 11/23/2010 06:49 PM, Anthony Liguori wrote:
qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of teaching
them to respond to these signals (which cannot be trapped), use SIGUSR1 to
approximate the behavior of SIGSTOP/SIGCONT. [...]
On 11/24/2010 02:18 AM, Avi Kivity wrote:
On 11/23/2010 06:49 PM, Anthony Liguori wrote:
qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of teaching
them to respond to these signals (which cannot be trapped), use SIGUSR1 to
approximate the behavior of SIGSTOP/SIGCONT.
The purpose of this is to implement CPU hard limits [...]
On 11/23/2010 06:49 PM, Anthony Liguori wrote:
qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of teaching
them to respond to these signals (which cannot be trapped), use SIGUSR1 to
approximate the behavior of SIGSTOP/SIGCONT.
The purpose of this is to implement CPU hard limits using an external tool
that watches the CPU consumption [...]
On 11/24/2010 02:15 AM, Anthony Liguori wrote:
Is it signal safe?
Yes, at heart it is just a somewhat more expensive access to
pthread_self()->some_array[key].
BTW, this is all only theoretical. This is in the KVM io thread code
which is already highly unportable.
True, and newer versions [...]
On 11/23/2010 05:43 PM, Paolo Bonzini wrote:
On 11/23/2010 10:46 PM, Anthony Liguori wrote:
+static __thread int sigusr1_wfd;
While OpenBSD finally updated the default compiler to 4.2.1 from 3.x
series, thread local storage is still not supported:
Hrm, is there a portable way to do this (distinguish a signal on a
particular thread)?
On 11/23/2010 10:46 PM, Anthony Liguori wrote:
+static __thread int sigusr1_wfd;
While OpenBSD finally updated the default compiler to 4.2.1 from 3.x
series, thread local storage is still not supported:
Hrm, is there a portable way to do this (distinguish a signal on a
particular thread)?
[...]
On 11/23/2010 01:35 PM, Blue Swirl wrote:
On Tue, Nov 23, 2010 at 4:49 PM, Anthony Liguori wrote:
qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of teaching
them to respond to these signals (which cannot be trapped), use SIGUSR1 to
approximate the behavior of SIGSTOP/SIGCONT. [...]
On Tue, Nov 23, 2010 at 4:49 PM, Anthony Liguori wrote:
> qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of teaching
> them to respond to these signals (which cannot be trapped), use SIGUSR1 to
> approximate the behavior of SIGSTOP/SIGCONT.
>
> The purpose of this is to implement CPU hard limits using an external tool
> that watches the CPU consumption [...]
qemu-kvm vcpu threads don't respond to SIGSTOP/SIGCONT. Instead of teaching
them to respond to these signals (which cannot be trapped), use SIGUSR1 to
approximate the behavior of SIGSTOP/SIGCONT.
The purpose of this is to implement CPU hard limits using an external tool
that watches the CPU consumption [...]