We are still trying to fix some timing issues with our Freescale i.MXL/i.MX21 port.

With our current port, we measured a delay of 6-7 us between the timer IRQ and 
its reprogramming; this corresponds more or less to the IRQ latency inherent to 
the IRQ handling layer. However, on plain Linux without Xenomai, we measured a 
latency of 5 us.

Taking a close look at the IRQ entry point in ipipe-root.c - the patch we used 
is the ARM patch available in CVS - we discovered that __ipipe_handle_irq() is 
called before the timer is reprogrammed (i.e., before __ipipe_mach_set_dec()).
We moved the call to __ipipe_handle_irq() to the end of the function, and then 
got a latency of 2 us; this is great, but....
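Concretely, the change amounts to something like the following, sketched against the timer branch of the function attached below (context lines abbreviated, so the exact hunk may differ):

```diff
 	if (irq == __ipipe_mach_timerint) {
 		...
-		__ipipe_handle_irq(irq, regs);
-
 		if (__ipipe_decr_ticks != __ipipe_mach_ticks_per_jiffy) {
 			...
 			__ipipe_decr_next[cpuid] = next_date;
 		}
+
+		__ipipe_handle_irq(irq, regs);
 	}
```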

Does this change have an impact on the upper layers of Adeos? Timer 
reprogramming at this level shouldn't be affected by other tasks, right? Can 
anybody confirm or deny this?

I attached the function below.

Thanks a lot for your inputs.



asmlinkage int __ipipe_grab_irq(int irq, struct pt_regs *regs)
{
	ipipe_declare_cpuid;

	if (irq == __ipipe_mach_timerint) {
		__ipipe_mach_acktimer();

		ipipe_load_cpuid();

		__ipipe_tick_regs[cpuid].ARM_cpsr = regs->ARM_cpsr;
		__ipipe_tick_regs[cpuid].ARM_pc = regs->ARM_pc;

		__ipipe_handle_irq(irq, regs);

		ipipe_load_cpuid();

		if (__ipipe_decr_ticks != __ipipe_mach_ticks_per_jiffy) {
			unsigned long long next_date, now;

			next_date = __ipipe_decr_next[cpuid];

			while ((now = __ipipe_read_timebase()) >= next_date)
				next_date += __ipipe_decr_ticks;

			__ipipe_mach_set_dec(next_date - now);

			__ipipe_decr_next[cpuid] = next_date;
		}
	} else {
		__ipipe_handle_irq(irq, regs);
		ipipe_load_cpuid();
	}

	return (ipipe_percpu_domain[cpuid] == ipipe_root_domain &&
		!test_bit(IPIPE_STALL_FLAG,
			  &ipipe_root_domain->cpudata[cpuid].status));
}
