Ingo Molnar wrote:
> i dont understand: how are you separating 'stolen time' drifts from
> events generated for absolute timeouts?
>
I'm not sure what you're asking; I think we're talking past each other.
I can extract from Xen how much time was stolen over some real-time
interval. If I call
On Thu, 2007-03-15 at 13:35 -0700, Dan Hecht wrote:
> >> Yes, the part in the "i.e." above is describing available time. So,
> >> it is essentially the same definition of stolen time VMI uses:
> >
> >> stolen time == ready to run but not running
> >> available time == running or not ready to run
* Jeremy Fitzhardinge <[EMAIL PROTECTED]> wrote:
> Ingo Molnar wrote:
> > touching the 'timer tick' is the wrong approach. 'stolen time' only
> > matters to the /scheduler tick/. So extend the hypervisor interface to
> > allow the injection of 'virtual' scheduler tick events: via the use of a
> > special clockevents device
On 03/15/2007 01:14 PM, Rik van Riel wrote:
Dan Hecht wrote:
Yes, the part in the "i.e." above is describing available time. So,
it is essentially the same definition of stolen time VMI uses:
stolen time == ready to run but not running
available time == running or not ready to run
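The two definitions partition real time into three vcpu states. A minimal sketch with invented numbers (not taken from any hypervisor) shows how they fit together:

```python
# Hypothetical per-vcpu time buckets over a 1000 ms real-time window.
running = 600    # vcpu actually executing on a physical cpu
runnable = 250   # ready to run, but the hypervisor scheduled someone else
halted = 150     # halted with nothing pending (guest idle)

real_time = running + runnable + halted

stolen_time = runnable              # "ready to run but not running"
available_time = running + halted   # "running or not ready to run"

# The two definitions are complementary: available == real - stolen.
assert available_time == real_time - stolen_time
print(stolen_time, available_time)
```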
Dan Hecht wrote:
Yes, the part in the "i.e." above is describing available time. So, it
is essentially the same definition of stolen time VMI uses:
stolen time == ready to run but not running
available time == running or not ready to run
S390 too. We were quite careful to make sure
On 03/15/2007 12:53 PM, Jeremy Fitzhardinge wrote:
Dan Hecht wrote:
Available time is defined to be (real_time - stolen_time). i.e. time
in which the vcpu is either running or not ready to run [because it is
halted, and nothing is pending]).
Hm, the Xen definition of stolen time is "time VCPU spent in runnable
(vs running) or offline state"
Dan Hecht wrote:
Available time is defined to be (real_time - stolen_time). i.e. time in
which the vcpu is either running or not ready to run [because it is
halted, and nothing is pending]).
From the guest perspective, steal time is:
"Time I would have liked to run"
Dan Hecht wrote:
> Available time is defined to be (real_time - stolen_time). i.e. time
> in which the vcpu is either running or not ready to run [because it is
> halted, and nothing is pending]).
Hm, the Xen definition of stolen time is "time VCPU spent in runnable
(vs running) or offline state"
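Under that Xen definition, stolen time falls out of the per-vcpu runstate counters. A sketch (state names follow the quoted definition and Xen's runstate naming; the numbers are invented):

```python
# Time (ns) a vcpu spent in each runstate over some real-time interval.
runstate = {
    "running": 700_000_000,   # executing on a physical cpu
    "runnable": 200_000_000,  # ready to run, waiting for a physical cpu
    "blocked": 50_000_000,    # halted, nothing pending
    "offline": 50_000_000,    # vcpu not schedulable at all
}

# Per the quoted definition: stolen = runnable (vs running) or offline.
stolen_ns = runstate["runnable"] + runstate["offline"]
print(stolen_ns)
```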
Paul Mackerras wrote:
> A cycle on one thread of a machine with SMT/hyperthreading when the
> other thread is idle *isn't* equivalent to a cycle when the other
> thread is busy. We run into this on POWER5, where we have hardware
> that counts cycles when each of the two threads in each core gets t
Dan Hecht wrote:
> So, yes, it is per-vcpu. But, the sched_clock() samples are rebased
> when processes are migrated between runqueues; search sched.c for
> most_recent_timestamp. It's not perfect since most_recent_timestamp
> between cpu0 and cpu1 do not correspond to the exact same instant, but
On 03/14/2007 02:18 PM, Jeremy Fitzhardinge wrote:
Dan Hecht wrote:
On 03/14/2007 01:31 PM, Jeremy Fitzhardinge wrote:
Dan Hecht wrote:
Sounds good. I don't see this in your patchset you sent yesterday
though; did you add it after sending out those patches?
Yes.
if so, could you forward the new patch?
Jeremy Fitzhardinge writes:
> Sure. But on a given machine, the CPUs are likely to be closely enough
> matched that a cycle on one CPU is more or less equivalent to a cycle on
> another CPU. The fact that a cycle represents a different amount of
A cycle on one thread of a machine with SMT/hyperthreading
Daniel Walker wrote:
> It's used for measuring execution time, but timers are triggered based
> on that time, so it needs to be actual execution time. I don't know to
> what extent this is already inaccurate on some system tho.
>
Well, "actual execution time" is a bit ambiguous: should that be
Con Kolivas wrote:
> I think you're looking for a complex solution to a problem that doesn't
> exist.
>
The problem is subtle, but I think the solution is actually fairly simple.
> The job of the process scheduler is to meter out the available cpu resources.
> It cannot make up cycles for a
On Wed, 2007-03-14 at 14:16 -0700, Jeremy Fitzhardinge wrote:
>
> > It's also used for some posix cpu timers
> > (sched_ns), and it's used for migration thread initialization.
>
> sched_ns doesn't use it directly except for the case where the process
> is currently running. Anyway, it's compati
On Thursday 15 March 2007 08:36, Con Kolivas wrote:
> On Wednesday 14 March 2007 03:31, Jeremy Fitzhardinge wrote:
> > The current Linux scheduler makes one big assumption: that 1ms of CPU
> > time is the same as any other 1ms of CPU time, and that therefore a
> > process makes the same amount of progress regardless of which particular
> > ms of time it gets.
On Wednesday 14 March 2007 03:31, Jeremy Fitzhardinge wrote:
> The current Linux scheduler makes one big assumption: that 1ms of CPU
> time is the same as any other 1ms of CPU time, and that therefore a
> process makes the same amount of progress regardless of which particular
> ms of time it gets.
Dan Hecht wrote:
> On 03/14/2007 01:31 PM, Jeremy Fitzhardinge wrote:
>> Dan Hecht wrote:
>>> Sounds good. I don't see this in your patchset you sent yesterday
>>> though; did you add it after sending out those patches?
>>
>> Yes.
>>
>>> if so, could you forward the new patch? does it explicitly prevent
>>> stolen time from getting accounted as user/system time
Daniel Walker wrote:
> For interactive tasks (basic scheduling) the execution time, and sleep
> time need to be measured.
Sleep time is interesting. It doesn't make much sense to talk about
time that was stolen while a process was sleeping (it was either stolen
from another running process, or th
Ingo Molnar wrote:
> touching the 'timer tick' is the wrong approach. 'stolen time' only
> matters to the /scheduler tick/. So extend the hypervisor interface to
> allow the injection of 'virtual' scheduler tick events: via the use of a
> special clockevents device - do not change clockevents it
On 03/14/2007 01:31 PM, Jeremy Fitzhardinge wrote:
Dan Hecht wrote:
Sounds good. I don't see this in your patchset you sent yesterday
though; did you add it after sending out those patches?
Yes.
if so, could you forward the new patch? does it explicitly prevent
stolen time from getting accounted as user/system time or does it just
* Jeremy Fitzhardinge <[EMAIL PROTECTED]> wrote:
> I added stolen time accounting to xen-pv_ops last night. For Xen, at
> least, it wasn't hard to fit into the clockevent infrastructure. I
> just update the stolen time accounting for each cpu when it gets a
> timer tick; they seem to get a tick
On Wed, 2007-03-14 at 12:44 -0700, Jeremy Fitzhardinge wrote:
> Daniel Walker wrote:
> > sched_clock is used to bank real time against some specific states
> > inside the scheduler, and no it doesn't _just_ measure a process's
> > executing time.
> >
>
> Could you point these places out? All uses of sched_clock() that I
> could see in kernel/sched.c seemed to be related
Dan Hecht wrote:
> Sounds good. I don't see this in your patchset you sent yesterday
> though; did you add it after sending out those patches?
Yes.
> if so, could you forward the new patch? does it explicitly prevent
> stolen time from getting accounted as user/system time or does it
> just
How is cpustat->steal used? How does it get out to usermode?
Via /proc/stat, used by modern 'top', maybe other utilities. It is
useful to users who want to see where the time is really going from
inside a guest when running on a (para)virtual machine.
I believe previous set of xen paravirt-
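To make the /proc/stat path concrete: steal is reported as one more cumulative tick counter on the per-cpu lines, after softirq. A sketch parsing a sample line (the values are made up for illustration):

```python
# Parse the cumulative steal field from a /proc/stat "cpu" line.
# Field order after the label: user nice system idle iowait irq softirq steal
# (the numbers below are invented).
sample = "cpu  10132153 290696 3084719 46828483 16683 0 25195 175628"
fields = sample.split()
steal_ticks = int(fields[8])   # stolen time, in USER_HZ ticks since boot
print(steal_ticks)
```

Tools like top compute a steal percentage from the delta of this counter between two samples, the same way they handle user/system/idle.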
Jeremy Fitzhardinge wrote:
Rik van Riel wrote:
Steal time allows you to see the difference between a busy
system and an overloaded system.
Sure, the various accounting tools can go into as much detail as you
want. I just added stolen time accounting to the xen-pv_ops patchset
which is equiv
Jeremy Fitzhardinge wrote:
How is time quantum getting stolen less important? Time quantum
getting stolen results directly in more unnecessary context switches
since we might steal the entire timeslice before the process even ran.
It doesn't matter why you didn't get the time;
Oh, but it does.
Rik van Riel wrote:
> Jeremy Fitzhardinge wrote:
>
>> It doesn't matter why you didn't get the time;
>
> Oh, but it does.
I meant specifically from a scheduling perspective.
> System administrators can use steal time the same way they
> use iowait time: to spot bottlenecks on their systems.
>
>
Daniel Walker wrote:
> sched_clock is used to bank real time against some specific states
> inside the scheduler, and no it doesn't _just_ measure a process's
> executing time.
>
Could you point these places out? All uses of sched_clock() that I
could see in kernel/sched.c seemed to be related
Dan Hecht wrote:
> Yes, I was just trying to use some consistent terminology, so I picked
> linux (hrtimer.c) terms: CLOCK_REALTIME == wallclock, CLOCK_MONOTONIC
> == "real" time counter.
OK. I had used "monotonic" in its more general sense earlier in the
thread, and I wanted to be sure.
> Even
On Wed, 2007-03-14 at 11:41 -0700, Jeremy Fitzhardinge wrote:
> Daniel Walker wrote:
> > From prior emails I think you're suggesting that 1ms (or 5 or 10) of time
> > should actually be a variable X that is changed inside sched_clock().
> > That's not the purpose of that API call, sched_clock() measures real time,
> > period.
On 03/13/2007 09:37 PM, Jeremy Fitzhardinge wrote:
Dan Hecht wrote:
With your previous definition of work time, would it be that:
monotonic_time == work_time + stolen_time ??
(By monotonic time, I presume you mean monotonic real time.)
Yes, I was just trying to use some consistent terminology, so I picked
linux (hrtimer.c) terms: CLOCK_REALTIME == wallclock, CLOCK_MONOTONIC ==
"real" time counter.
Daniel Walker wrote:
> From prior emails I think you're suggesting that 1ms (or 5 or 10) of time
> should actually be a variable X that is changed inside sched_clock().
> That's not the purpose of that API call, sched_clock() measures real time,
> period.
>
To what purpose? What is it really measuring?
On Wed, 2007-03-14 at 10:08 -0700, Jeremy Fitzhardinge wrote:
> The actual length of the timeslices is an orthogonal issue. It may be
> that you want to give processes more cpu time by making their quanta
> longer to compensate for lost cpu time, but that would affect their
> real-time characteri
Daniel Walker wrote:
>> I suppose you could, but that seems more complex. I think you could
>> encode the same information in the measurement of how much work a cpu
>> actually got done while a process was scheduled on it.
>>
>
> I know it's more complex, but that seems more like the "right"
On Wed, 2007-03-14 at 09:37 -0700, Jeremy Fitzhardinge wrote:
> Daniel Walker wrote:
> > Then your direction is wrong, sched_clock() should be constant ideally
> > (1 millisecond should really be 1 millisecond).
>
> Rather than repeating myself, I suggest you read my original post
> again. But my
Daniel Walker wrote:
> Then your direction is wrong, sched_clock() should be constant ideally
> (1 millisecond should really be 1 millisecond).
Rather than repeating myself, I suggest you read my original post
again. But my point is that "I was runnable on a cpu for 1ms of real
time" is a meaningl
On Tue, 2007-03-13 at 23:52 -0700, Jeremy Fitzhardinge wrote:
> >
> > That's true, but given a constant clock (like what sched_clock should
> > have) then the accounting is similarly inaccurate. Any connection
> > between the scheduler and the TSC frequency changes aren't part of the
> > design AF
On Wed, Mar 14, 2007 at 08:08:17AM -0700, Jeremy Fitzhardinge wrote:
> You're right. That's a very tough case. I don't know if there's any
> way to do a reasonable estimate of the slowdown. You could handwave it
> and say "if both threads are running a process, then apply an X scaling
> factor t
Lennart Sorensen wrote:
> How would you deal with something like a pentium 4 HT processor where
> you may run slower just because you got scheduled on the sibling of a
> cpu that happens to run something else needing the same execution units
> you do, causing you to get delayed more, even though th
On Tue, Mar 13, 2007 at 09:37:59PM -0700, Jeremy Fitzhardinge wrote:
> (By monotonic time, I presume you mean monotonic real time.) Yes, I
> suppose you could, but I don't think that's terribly useful. I think
> work_time is probably most naturally measured in cpu clock cycles rather
> than an a
On Tue, 2007-03-13 at 23:52 -0700, Jeremy Fitzhardinge wrote:
> Yep. But the tsc is just an example of a clocksource, and doesn't have
> any real bearing on what I'm saying.
[cut/snip/slash]
> Well, it doesn't need to be a constant clock if its modelling a changing
> rate. And it doesn't need to
Daniel Walker wrote:
> The adjustments that I spoke of above are working regardless of ntp ..
> The stability of the TSC directly affects the clock mult adjustments in
> timekeeping, as does interrupt latency since the clock is essentially
> validated against the timer interrupt.
>
Yep. But the tsc is just an example of a clocksource, and doesn't have
any real bearing on what I'm saying.
Dan Hecht wrote:
> With your previous definition of work time, would it be that:
>
> monotonic_time == work_time + stolen_time ??
(By monotonic time, I presume you mean monotonic real time.) Yes, I
suppose you could, but I don't think that's terribly useful. I think
work_time is probably most naturally measured in cpu clock cycles
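Given the identity monotonic_time == work_time + stolen_time, a "virtual" sched_clock of the kind being discussed just subtracts hypervisor-reported stolen time from monotonic time. A toy sketch with invented numbers (a real implementation would read both values from the hypervisor per-cpu):

```python
# Toy sketch: a sched_clock that banks only unstolen ("work") time,
# so the scheduler never charges a process for time it couldn't use.
def virtual_sched_clock(monotonic_ns: int, stolen_ns: int) -> int:
    """Return time in which the vcpu could actually make progress."""
    return monotonic_ns - stolen_ns

# Over a 10 ms real-time window in which 4 ms was stolen,
# only 6 ms is credited against the running process.
print(virtual_sched_clock(10_000_000, 4_000_000))
```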
On Tue, 2007-03-13 at 14:59 -0700, Jeremy Fitzhardinge wrote:
> Daniel Walker wrote:
> > The frequency tracking you mention is done to some extent inside the
> > timekeeping adjustment functions, but I'm not sure it's totally accurate
> > for non-timekeeping, and it also tracks things like interrupt latency.
On 03/13/2007 02:59 PM, Jeremy Fitzhardinge wrote:
Daniel Walker wrote:
The frequency tracking you mention is done to some extent inside the
timekeeping adjustment functions, but I'm not sure it's totally accurate
for non-timekeeping, and it also tracks things like interrupt latency.
Tracking frequency changes where it's important to get it
Daniel Walker wrote:
> The frequency tracking you mention is done to some extent inside the
> timekeeping adjustment functions, but I'm not sure it's totally accurate
> for non-timekeeping, and it also tracks things like interrupt latency.
> Tracking frequency changes where it's important to get it
On Tue, 2007-03-13 at 13:32 -0700, Jeremy Fitzhardinge wrote:
> Most of the existing clocksource infrastructure would only operate on
> CLOCK_TIMEBASE_REALTIME clocksources, so I'm not sure how much overlap
> there would be here. In the case of dealing with cpufreq, there's a
> certain appeal to
john stultz wrote:
> My gut reaction would be to avoid using clocksources for now. While
> there is some thought going into how to expand clocksources for other
> uses (Daniel is working on this, for example), the design for
> clocksources has been very focused on its utility to timekeeping, so I'm
On Tue, 2007-03-13 at 09:31 -0700, Jeremy Fitzhardinge wrote:
> The current Linux scheduler makes one big assumption: that 1ms of CPU
> time is the same as any other 1ms of CPU time, and that therefore a
> process makes the same amount of progress regardless of which particular
> ms of time it gets
The current Linux scheduler makes one big assumption: that 1ms of CPU
time is the same as any other 1ms of CPU time, and that therefore a
process makes the same amount of progress regardless of which particular
ms of time it gets.
This assumption is wrong now, and will become more wrong as
virtual
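A minimal illustration of why that assumption breaks under virtualization (the numbers are invented): two processes each granted a 10 ms slice of real time make very different progress if the hypervisor steals part of one slice.

```python
# Two processes each "get" a 10 ms timeslice of real time.
slice_ms = 10
stolen_from_a = 0   # vcpu on an otherwise idle host
stolen_from_b = 7   # vcpu on a heavily shared host

progress_a = slice_ms - stolen_from_a   # full 10 ms of actual progress
progress_b = slice_ms - stolen_from_b   # only 3 ms of actual progress
print(progress_a, progress_b)
```

A scheduler that charges both processes a full slice of real time treats them as having made equal progress, which is exactly the assumption the thread is questioning.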