Hi...

On Mon, Mar 30, 2009 at 4:55 AM, David Neiss <[email protected]> wrote:
> So, having kernel code which does:
>
> set_current_state(TASK_INTERRUPTIBLE);
> schedule();
>
> causes blocking, as expected, and /proc/stat shows increasing idle
> task time and non increasing io wait time. Fine.
>
> Changing schedule() to io_schedule(), time now seems to be split
> between idle task and io_wait time.
>
> Question, why does the system impute any running time to io_wait time?
>  The task that got blocked is presumably not doing anything and just
> sitting there inactive until it gets woken up, so I would expect just
> idle time to be increasing. I know its trying to account for io_wait
> time, but how does it know how many jiffies to be adding to io_wait
> time? Googling on io_wait and/or io_schedule are not turning up much.
> So, I'm obviously not understanding things here. Any input?

Interesting, I had never thought about it before. As usual, lxr helps
us locate the relevant function:
void __sched io_schedule(void)
{
        struct rq *rq = &__raw_get_cpu_var(runqueues);

        delayacct_blkio_start();
        atomic_inc(&rq->nr_iowait);
        schedule();
        atomic_dec(&rq->nr_iowait);
        delayacct_blkio_end();
}

Further checking shows two separate things going on here.
delayacct_blkio_start()/delayacct_blkio_end() bracket the sleep for the
per-task block I/O delay accounting (the delayacct/taskstats
interface). The part that feeds /proc/stat, though, is the
atomic_inc(&rq->nr_iowait): while that per-runqueue counter is
non-zero, the timer-tick accounting charges ticks in which the CPU
would otherwise be idle to io_wait instead of idle.
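
In 2.6.29 the tick side looks roughly like the sketch below (lightly
simplified from kernel/sched.c; in older kernels the same check sits
inside account_system_time() instead):

void account_idle_time(cputime_t cputime)
{
        struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
        cputime64_t cputime64 = cputime_to_cputime64(cputime);
        struct rq *rq = this_rq();

        if (atomic_read(&rq->nr_iowait) > 0)
                /* somebody on this runqueue sleeps in io_schedule(),
                 * so charge the otherwise-idle tick to iowait */
                cpustat->iowait = cputime64_add(cpustat->iowait, cputime64);
        else
                cpustat->idle = cputime64_add(cpustat->idle, cputime64);
}

So the kernel never measures how long your task actually waited; at
every tick it just asks "is this CPU idle, and does its runqueue have
at least one task sleeping in io_schedule()?" If yes, the jiffy is
charged to io_wait, otherwise to idle. And since nr_iowait is
per-runqueue, on SMP only the CPU the sleeper last ran on converts its
idle ticks into io_wait; the other CPUs keep accumulating plain idle
time, which is probably why you see the time split between the two.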

However, because this accounting is purely tick based (the counters are
only bumped at each timer tick), it is only jiffy-accurate: a task that
blocks for, say, 2.5 ticks gets charged either 2 or 3 ticks of io_wait,
depending on where the ticks happen to land.
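
By the way, if you want to watch both counters while your code is
blocked, a quick throwaway hack like the one below (my own example,
nothing from the kernel tree) reads the aggregate "cpu" line of
/proc/stat once a second; both columns are in USER_HZ ticks:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* /proc/stat "cpu" line: user nice system idle iowait ... */
        for (;;) {
                unsigned long long idle, iowait;
                FILE *f = fopen("/proc/stat", "r");

                if (!f)
                        return 1;
                /* skip user/nice/system, grab idle and iowait */
                if (fscanf(f, "cpu %*llu %*llu %*llu %llu %llu",
                           &idle, &iowait) == 2)
                        printf("idle=%llu iowait=%llu\n", idle, iowait);
                fclose(f);
                sleep(1);
        }
}

Running it with your code sleeping in io_schedule() should show iowait
growing; with plain schedule() only idle should grow.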

regards,

Mulyadi.
