On 30/08/06, Jan Kiszka <[EMAIL PROTECTED]> wrote:

Mmh, considering this and also the existing code I wonder if we could
optimise this a bit. I'm only looking at xnintr_irq_handler now (sharing
is slow anyway): currently the intr object is touched both before
(naturally...) and after the call to the ISR handler. Maybe we can push
all accesses before the handler for the fast path. E.g.:

int unhandled = intr->unhandled;
intr->unhandled = 0;
++intr->hits;
s = intr->isr(...);

if (s == XN_ISR_NONE) {
        intr->unhandled = ++unhandled;
        if (unhandled == XNINTR_MAX_UNHANDLED)
                ALARM!
}

So the idea is that we touch all intr members while they are (hopefully) still in the same cache line (assuming the line is 128 bits long), since the ISR may disturb the cache afterwards.
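For reference, here is a self-contained sketch of that reordered fast path. The xnintr struct below is a simplified stand-in I made up for illustration (the real Xenomai descriptor has more fields); XN_ISR_NONE / XN_ISR_HANDLED and XNINTR_MAX_UNHANDLED follow the names in the snippet above, with made-up values:

```c
#include <stdio.h>

#define XN_ISR_NONE           0
#define XN_ISR_HANDLED        1
#define XNINTR_MAX_UNHANDLED  1000   /* illustrative threshold */

/* simplified stand-in for the Xenomai interrupt descriptor */
typedef struct xnintr {
    int (*isr)(struct xnintr *intr);
    unsigned unhandled;
    unsigned hits;
} xnintr_t;

static void xnintr_fastpath(xnintr_t *intr)
{
    /* touch every intr field before calling the ISR, so the whole
       descriptor is (hopefully) pulled into one cache line up front */
    unsigned unhandled = intr->unhandled;
    int s;

    intr->unhandled = 0;
    ++intr->hits;

    s = intr->isr(intr);

    if (s == XN_ISR_NONE) {
        intr->unhandled = ++unhandled;
        if (unhandled == XNINTR_MAX_UNHANDLED)
            printf("suspicious: IRQ unhandled %u times in a row\n",
                   unhandled);   /* the "ALARM!" spot */
    }
}
```

Note that after the ISR returns, the only writes left are on the slow (unhandled) path, which is the point of the reordering.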

But OTOH, we introduce an additional early access: the read "int unhandled = intr->unhandled;" followed by the write "intr->unhandled = 0;"

(write-through cache) ---> line put in cache + the write synced with memory immediately

(write-back cache) ---> line put in cache + syncing delayed (but will it still happen?)

So while we avoid one possible read from main memory into the cache when intr->unhandled and ->hits are accessed after the ISR (assuming they are not in the cache by then; ok, ->hits can be moved earlier for sure), we introduce another memory access that takes place before the ISR (with write-through it is guaranteed), and that one is a "+" component to the IRQ latency.
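To make the bookkeeping above concrete, here is a toy single-line cache model (not real hardware, just counting the bus transactions discussed) applied to the early-access sequence read unhandled / write unhandled=0 / ++hits, all assumed to hit the same line:

```c
enum policy { WRITE_THROUGH, WRITE_BACK };

/* toy model of one cache line; counts memory-bus transactions */
struct toy_line {
    int valid, dirty;
    unsigned mem_reads, mem_writes;
    enum policy pol;
};

static void touch_line(struct toy_line *l, int is_write)
{
    if (!l->valid) {             /* miss: fill the line from memory */
        l->mem_reads++;
        l->valid = 1;
    }
    if (is_write) {
        if (l->pol == WRITE_THROUGH)
            l->mem_writes++;     /* every store synced immediately */
        else
            l->dirty = 1;        /* store absorbed, sync deferred */
    }
}

static void evict_line(struct toy_line *l)
{
    if (l->valid && l->dirty)
        l->mem_writes++;         /* the deferred write-back happens here */
    l->valid = l->dirty = 0;
}

/* the early-access sequence from the snippet above */
static void run_sequence(struct toy_line *l)
{
    touch_line(l, 0);   /* int unhandled = intr->unhandled; */
    touch_line(l, 1);   /* intr->unhandled = 0;             */
    touch_line(l, 1);   /* ++intr->hits; (same line)        */
}
```

In this model, write-through pays one fill plus two immediate memory writes before the ISR runs, while write-back pays one fill and defers the single write-back until eviction, which is exactly the latency question raised above.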

Or I may be wrong, though.

Assume a cache line of 128 bits, i.e. four 32-bit words: let's call it cacheline[4].

int a = 1;   // e.g. &a == 0xabcd0004

This part of memory is currently not in the cache, so:

1) [0xabcd0000, 0xabcd0010) == 128 bits is loaded from memory into the cache line.

2) then the value 1 is stored into cacheline[1] (the word holding a).

3) [ write-through ] ---> sync with memory immediately
or
    [ write-back ] ---> delay syncing (until the line is evicted)

Is that correct?
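The line/slot arithmetic in that example can be sketched as follows; CACHE_LINE_BYTES and WORD_BYTES are assumptions matching the 128-bit line and 32-bit int above:

```c
#include <stdint.h>

#define CACHE_LINE_BYTES 16   /* 128-bit line, as in the example */
#define WORD_BYTES       4    /* sizeof(int) assumed to be 4 */

/* base address of the cache line holding addr */
static uintptr_t line_base(uintptr_t addr)
{
    return addr & ~(uintptr_t)(CACHE_LINE_BYTES - 1);
}

/* word slot within that line, i.e. the N in cacheline[N] */
static unsigned line_slot(uintptr_t addr)
{
    return (unsigned)((addr - line_base(addr)) / WORD_BYTES);
}
```

For &a == 0xabcd0004 this gives the line [0xabcd0000, 0xabcd0010) and slot cacheline[1], matching step 2 above.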






--
Best regards,
Dmitry Adamushko
_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
