According to your intrstat data, it looks like an nge network
device and a SATA disk controller both have interrupts bound
to cpu 7.
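
If you want to cross-check that with DTrace, something along these
lines should work (a sketch: it assumes arg0 to the sdt
interrupt-start probe is the dev_info pointer of the interrupting
device, which is my understanding on recent Solaris releases):

dtrace -n 'sdt:::interrupt-start
{
        /* count interrupts by driver node name and CPU */
        @[stringof(((struct dev_info *)arg0)->devi_node_name), cpu] = count();
}'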

Max provided a DTrace script to determine which process has its
thread(s) pinned when an interrupt comes in.
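
I don't have Max's script handy, but the core of the idea is a sketch
like this - it assumes that when a low-level interrupt thread is
running, curthread->t_intr points at the thread it pinned:

dtrace -n 'sdt:::interrupt-start
/curthread->t_intr != NULL/
{
        /* count pinned threads by their process name and CPU */
        @num[stringof(curthread->t_intr->t_procp->p_user.u_comm), cpu] = count();
}'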

If you're concerned that a CPU handling a high rate of interrupts
while also running application threads is causing a performance
issue, just fence the CPU off. It's easy, and it's dynamic.

For example, if cpu7 is handling most of the interrupts, just put it
in a processor set:

psrset -c 7

Once you've done that, the kernel will not schedule user threads on
cpu7 unless you explicitly bind processes/threads to the processor
set. By NOT doing any binding, you've effectively fenced off cpu7 -
all it will do is handle interrupts.
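
A session might look like this (a sketch; run as root, and note that
the system assigns the set ID, so yours may differ from 1):

# create a set containing cpu7; psrset prints the new set's ID
psrset -c 7

# verify - list each set and the processors assigned to it
psrset -i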

If it doesn't help, or makes things worse, just "psrset -d 1" (using
whatever set ID psrset -c reported) to delete the set.

/jim


Qihua Wu wrote:
So do we have a way to know what causes the interrupt (intrstat may
show this) and which pid/execname is interrupted?

Thanks
Daniel

On Thu, Jan 21, 2010 at 10:43 PM, Jim Mauro <james.ma...@sun.com> wrote:

    "sched" is the execname of the PID 0 process (run "ps -e").

    The string "sched" gets plugged into the DTrace execname variable
    if the CPU is in an interrupt handler when a probe fires.

    CPU 0 is very likely taking the clock interrupts, which by default
    occur every 10 milliseconds.
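
    You can verify this with a quick one-liner - clock() is the kernel
    function invoked for each clock tick, so counting its entries by
    CPU should show cpu 0 doing nearly all the work (a sketch; it
    assumes the fbt provider can see clock() on your release):

    dtrace -n 'fbt::clock:entry { @[cpu] = count(); }'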

    HTH,
    /jim




    Qihua Wu wrote:

        Why is sched interrupted so often on cpu 0, instead of the
        interrupts being distributed evenly across all cpus?

        dtrace -n 'sdt:::interrupt-start { @num[execname,cpu] = count(); }'
        dtrace: description 'sdt:::interrupt-start ' matched 1 probe
        ^C

        sched       4      355
        sched      18      358
        sched       8      369
        sched      10      370
        sched      11      370
        sched      12      408
        sched      13      408
        sched      15      408
        sched      19      408
        sched      20      408
        sched      21      408
        sched      22      408
        sched      23      408
        sched      14      409
        sched       9      410
        sched       2      411
        sched      16      411
        sched       6      422
        sched      17      456
        sched       3      469
        sched       5      510
        sched       7      564
        sched       1     3079
        sched       0   535071

        Thanks,
        Daniel