Function mips64_send_ipi() will block if the target core already has a
pending IPI request. This degrades performance on systems with more
than a few cores under heavy IPI load.

The blocking can be avoided by coalescing requests. The mips64 code
should handle this just fine. The only place where exact,
non-coalescing IPI delivery is needed is the rendezvous mechanism, but
there, invocations are serialized by the rendezvous mutex anyway.

The order of the sending and receiving steps is important: the sender
sets the mailbox bits before raising the interrupt, and the receiver
clears the interrupt before swapping the mailbox, so a request that
races with the handler re-raises the interrupt instead of getting
lost. I hope that I got it right.


Index: arch/mips64/mips64/ipifuncs.c
===================================================================
RCS file: src/sys/arch/mips64/mips64/ipifuncs.c,v
retrieving revision 1.10
diff -u -p -r1.10 ipifuncs.c
--- arch/mips64/mips64/ipifuncs.c       20 Apr 2015 19:08:52 -0000      1.10
+++ arch/mips64/mips64/ipifuncs.c       13 Jul 2015 13:18:12 -0000
@@ -98,15 +98,12 @@ mips64_ipi_intr(void *arg)
 
        KASSERT (cpuid == cpu_number());
 
-       /* figure out which ipi are pending */
-       pending_ipis = ipi_mailbox[cpuid];
        /* clear ipi interrupt */
        hw_ipi_intr_clear(cpuid);
+       /* get and clear pending ipis */
+       pending_ipis = atomic_swap_uint(&ipi_mailbox[cpuid], 0);
        
        if (pending_ipis > 0) {
-               /* clear pending ipi, since we're about to handle them */
-               atomic_clearbits_int(&ipi_mailbox[cpuid], pending_ipis);
-
                for (bit = 0; bit < MIPS64_NIPIS; bit++)
                        if (pending_ipis & (1UL << bit))
                                (*ipifuncs[bit])();
@@ -128,7 +125,7 @@ mips64_send_ipi(unsigned int cpuid, unsi
                panic("mips_send_ipi: CPU %ld not running", cpuid);
 #endif
 
-       atomic_wait_and_setbits_int(&ipi_mailbox[cpuid], ipimask);
+       atomic_setbits_int(&ipi_mailbox[cpuid], ipimask);
 
        hw_ipi_intr_set(cpuid);
 }