On Fri, Apr 25, 2008 at 12:24 PM, Eric Saxe <Eric.Saxe at sun.com> wrote:
> Rafael Vanoni wrote:
>  > Eric Saxe wrote:
>  >
>  >> Rafael Vanoni wrote:
>  >>
>  >>> It depends. For instance, writing a lot to disk shows a lower wakeup
>  >>> count than events:
>  >>>
>  >>> Wakeups-from-idle per second: 1566.7    interval: 2.0s
>  >>> no ACPI power usage estimate available
>  >>>
>  >>> Top causes for wakeups:
>  >>> 88.4% (1384.6) <interrupt> :  pci-ide#0
>  >>>   9.5% (149.3)     <kernel> :  uhci`uhci_handle_root_hub_status_change
>  >>>   7.7% (120.4)  <interrupt> :  nvidia#0
>  >>>   4.4% ( 68.2)         java :  <scheduled timeout expiration>
>  >>>   4.3% ( 66.7)     <kernel> :  ehci`ehci_handle_root_hub_status_change
>  >>>   4.3% ( 66.7)     <kernel> :  genunix`clock
>  >>>   2.1% ( 33.3)     <kernel> :  genunix`cyclic_timer
>  >>>   0.9% ( 13.9)     <kernel> :  genunix`lwp_timer_timeout
>  >>>
>  >>>
>  >> Right, because there's a DTrace probe that's firing when we get an
>  >> interrupt for the pci-ide#0 device...but that doesn't necessarily result
>  >> in an idle CPU waking up. :) It's a similar issue to the cyclics being
>  >> batch-processed.
>  >>
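(As an aside, I assume the per-device attribution comes from the
interrupt-start SDT probe, along these lines -- just a minimal sketch, with
the "pci-ide#0"-style name built from the dev_info node. That probe fires on
every interrupt, whether or not a CPU was actually sleeping, which would
explain the mismatch:)

    sdt:::interrupt-start
    {
        /* arg0 is the dev_info node of the interrupting device */
        @counts[stringof(((struct dev_info *)arg0)->devi_node_name),
            ((struct dev_info *)arg0)->devi_instance] = count();
    }

    tick-2sec
    {
        /* print and reset every 2s, like powertop's interval */
        printa("%s#%d %@d\n", @counts);
        trunc(@counts);
    }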
>  >
>  > Yep :)
>  > So do you think the solution to the batch-processing issue (reporting
>  > one event per expire timestamp) is a good one?
>  >
>  >
>
>  I guess it's ok for cyclics, but on the other hand it only addresses the
>  cyclic accounting piece of what you point out is a more generic issue.
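For the cyclic side, would something like this capture "one event per expire
timestamp"? A rough sketch -- it assumes fbt exposes cyclic_expire() with
typed args on this build, and it credits a whole batch to the first handler
due at that expiration time:

    fbt::cyclic_expire:entry
    /args[2]->cy_expire != last_expire[cpu]/
    {
        /* first cyclic due at this timestamp: count the batch once */
        last_expire[cpu] = args[2]->cy_expire;
        @fires[args[2]->cy_handler] = count();
    }

    tick-2sec
    {
        printa("%a %@d\n", @fires);
        trunc(@fires);
    }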

I got a chance to attend the Beijing OpenSolaris user group meeting the day
before yesterday. I kept being asked why, when the system is idle, the
percentages of the top wakeup causes sum to more than 100%.
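I think the answer follows from Eric's point above: each percentage is that
cause's event rate divided by the wakeups-from-idle rate, and the numerator
counts every probe firing whether or not it actually pulled a CPU out of
idle. Working it through with the disk-write trace above:

    1384.6 + 149.3 + 120.4 + 68.2 + 66.7 + 66.7 + 33.3 + 13.9 = 1903.1 events/s
    1903.1 / 1566.7 = ~121%, which matches the listed percentages
    (88.4 + 9.5 + 7.7 + 4.4 + 4.3 + 4.3 + 2.1 + 0.9 = 121.6)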

>
>  >>> While sitting idle shows a higher wakeup count:
>  >>>
>  >>> Wakeups-from-idle per second: 562.7     interval: 2.0s
>  >>> no ACPI power usage estimate available
>  >>>
>  >>> Top causes for wakeups:
>  >>> 26.5% (149.3)        <kernel> :  uhci`uhci_handle_root_hub_status_change
>  >>> 21.5% (120.9)     <interrupt> :  nvidia#0
>  >>> 12.2% ( 68.7)            java :  <scheduled timeout expiration>
>  >>> 11.8% ( 66.7)        <kernel> :  ehci`ehci_handle_root_hub_status_change
>  >>> 10.3% ( 57.7)        <kernel> :  genunix`cyclic_timer
>  >>>   7.5% ( 42.3)        <kernel> :  genunix`clock
>  >>>   2.7% ( 15.4) thunderbird-bin :  <scheduled timeout expiration>
>  >>>   2.4% ( 13.4)        <kernel> :  genunix`lwp_timer_timeout
>  >>>
>  >>>
>  >> This makes me wonder if there's a way we could dig around, when the
>  >> idle-state-transition probe fires (because we're coming out of
>  >> halt/mwait/etc.), to see what's waking us up (rather than trying to
>  >> correlate wakeups with events elsewhere in the system).
>  >>
>  >> It's probably a short list...either:
>  >>     - Device interrupt
>  >>     - cyclic/APIC timer-related firing
>  >>     - cross call (poke) from another CPU because something became runnable
>  >>
>  >> If we can figure out which of the above things is responsible for the
>  >> idle-state-transition, and can then get information about the thing that
>  >> happened:
>  >>     - Which device
>  >>     - which cyclic is due
>  >>     - what became runnable
>  >>
>  >> ...and then have powertop report that...that should get us pretty close,
>  >> I would think.
>  >> What do you think?
>  >>
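Something along these lines, maybe? A rough sketch of tying idle exits to
the last event seen on that CPU -- it assumes arg0 == 0 on
idle-state-transition means "back to busy" (worth double-checking), and that
the waking interrupt's handler runs before the idle thread resumes:

    sdt:::interrupt-start
    {
        /* remember the last device interrupt seen on this CPU */
        cause[cpu] = stringof(((struct dev_info *)arg0)->devi_node_name);
        seen[cpu] = 1;
    }

    sdt:::idle-state-transition
    /arg0 == 0 && seen[cpu]/
    {
        /* coming out of idle: credit the wakeup to that device */
        @wakeups[cause[cpu]] = count();
        seen[cpu] = 0;
    }

    sdt:::idle-state-transition
    /arg0 == 0 && !seen[cpu]/
    {
        /* no interrupt since the last exit: cyclic or poke */
        @wakeups["cyclic/poke/other"] = count();
    }

Splitting that last bucket into "which cyclic" and "what became runnable"
would presumably need extra probes (cyclic_expire, the poke/x-call path).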
>  >
>  > Sounds good. Not sure if it's possible since the idle-state probe
>  > doesn't tell us much, but I'm gonna have a look around. Maybe one way
>  > would be to have a script that ties every event to a firing of idle-state.
>  >
>  That's sort of what I was thinking as well.
>
>
>  > I've got a couple of ideas on how to report it visually.
>  > Should be fun.
>  >
>  :)
>
>  -Eric
>
