Mike Gerdts wrote:
[...] However, leaving these same people blind is not a good approach either. We have a long-term history of vmstat data and would prefer to continue to use it as an indicator.
Yeah, change always leads to challenges. So just add your old %wio to your old %idle, and your archived data will match up with the new data. You should not feel blind to I/O wait, though; applications should instrument it themselves. After all, only individual workload elements really "experience" I/O waits. It is not a valid systemic metric, and it never was.
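For the archived data, the adjustment is a one-line transformation. A minimal C sketch, assuming a hypothetical record layout for an archived per-sample set of CPU percentages (the struct and field names are illustrative, not from any real archive format):

/* Hypothetical layout for an archived vmstat CPU sample; the field
 * names are illustrative, not from any real archive format. */
struct cpu_sample {
	int us;    /* %user                 */
	int sy;    /* %system               */
	int wt;    /* %wio (older releases) */
	int id;    /* %idle                 */
};

/* Fold the retired %wio column into %idle so that old samples line
 * up with data from releases that always report wio as 0. */
void
fold_wio(struct cpu_sample *s)
{
	s->id += s->wt;
	s->wt = 0;
}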
Actually, one might have many AIO threads blocked for I/O via libaio without the AIO-issuing application incurring any user-level waits at all; that is more or less why AIO exists. Whether the AIO occurs via the LWP-based path, the KAIO path, or the QFS "samaio" path, a "good" metric would report them all the same. So, while I'm happy that the 'b' statistic will be re-exposed, I'm certain we'll have further questions arising when users switch between the various filesystem options and see their "blocked" statistics change "mysteriously". The sketch below illustrates the point.
Considering the rapid rise of virtualization technologies, it makes more and more sense to me every day that we killed the %wio metric.
In any event, it makes no sense whatsoever, in terms of engineering units, for I/O wait to be expressed in units of "%cpu". There is simply no generic propensity for a thread to do computation once it is unblocked from an I/O wait; wait time belongs to the waiting thread and is naturally measured in time, not as a share of processor capacity.
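In that spirit, application-level instrumentation can report the wait in its natural unit. A sketch using Solaris's gethrtime(); the wrapper name timed_read is hypothetical:

/* Report the wait for one read in nanoseconds (a time unit, not a
 * percentage of CPU). */
#include <sys/time.h>	/* gethrtime() on Solaris */
#include <stdio.h>
#include <unistd.h>

ssize_t
timed_read(int fd, void *buf, size_t n)
{
	hrtime_t t0 = gethrtime();
	ssize_t  r  = read(fd, buf, n);
	hrtime_t t1 = gethrtime();

	(void) fprintf(stderr, "I/O wait: %lld ns\n",
	    (long long)(t1 - t0));
	return (r);
}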
Cheers,
-- Bob
It sounds like the fix for this bug would be a kernel patch - is this correct?

Thank you very much for the quick analysis.

Mike