On Tue, 2009-10-06 at 14:26 -0400, Andreas Glatz wrote:
> On Fri, 2009-10-02 at 21:00 +0200, Philippe Gerum wrote:
> > On Fri, 2009-10-02 at 14:01 -0400, Andreas Glatz wrote:
> > > >
> > > > - powerpc32 updates for 2.6.30. Mainly to merge the once experimental
> > > > bits that prevent most alignment faults from triggering a secondary mode
> > > > switch. Andreas told me this works like a charm on 83xx, and I did not
> > > > see any issue on 52xx, 85xx or 86xx either.
> > > >
> > >
> > > Can I get a version of that patch for testing? Is it in your git
> > > repository?
> >
> > I just pushed this commit to my remote tree (ipipe-2.6.30-powerpc
> > branch); it should appear in a few hours once mirrored (cron job).
> >
>
> I finally had a chance to test the ipipe-2.6.30-powerpc version
> from the git repository. Unfortunately, our application dies after
> some time, and the behaviour is related to that alignment patch
> (if I take it out, everything runs fine for > 2 days).
>
> Currently I'm investigating the reasons for that crash. It has
> something to do with floating point registers not being restored
> properly. Our alignment exceptions are mainly triggered by accesses
> to unaligned floating point data.
Does it work any better with this patch applied?

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 32cc3df..a04a5e3 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -80,7 +80,9 @@ void flush_fp_to_thread(struct task_struct *tsk)
* FPU, and then when we get scheduled again we would store
* bogus values for the remaining FP registers.
*/
- ipipe_preempt_disable(flags);
+ if (ipipe_root_domain_p)
+ preempt_disable();
+ local_irq_save_hw_cond(flags);
if (tsk->thread.regs->msr & MSR_FP) {
#ifdef CONFIG_SMP
/*
@@ -94,7 +96,9 @@ void flush_fp_to_thread(struct task_struct *tsk)
#endif
giveup_fpu(tsk);
}
- ipipe_preempt_enable(flags);
+ local_irq_restore_hw_cond(flags);
+ if (ipipe_root_domain_p)
+ preempt_enable();
}
}
>
> Andreas
--
Philippe.
_______________________________________________
Xenomai-core mailing list
[email protected]
https://mail.gna.org/listinfo/xenomai-core