On Fri, Oct 29, 2004 at 03:54:00PM +0300, Pantelis Antoniou wrote:
> Joakim Tjernlund wrote:
>
> >>Joakim Tjernlund wrote:
> >>
> >>>>On Thu, Oct 14, 2004 at 04:53:49PM +0200, Joakim Tjernlund wrote:
> >>>>
> >>>>>Is it not time to merge 8xx from linuxppc-2.5 into Linus tree?
> >>>>>
> >>>>>I know the 8xx is not fully functional yet, but if this isn't done
> >>>>>soon I think it won't happen at all. The 8xx arch can be made to
> >>>>>depend on BROKEN in Linus tree to make it clear that it isn't
> >>>>>working properly yet.
> >>>>
> >>>>I've been the hard-ass about holding back on moving 8xx forward. Once
> >>>>2.6.9 finally comes out (assuming and hoping that Linus really intends
> >>>>to do a release and not -rc5), I'll start moving stuff over and make it
> >>>>depend on BROKEN, hopefully in time for 2.6.10-rc1.
> >>
> >>I'm currently battling to make 2.6.10-rc1 work the same way it used to.
> >>
> >>But something changed in slab management, and the kmallocs in request_irq
> >>called by init_IRQ fail.
> >
> >Hmm, I think I saw something about that in the www log for the Linus kernel.
> >It is offline so I can't check now.
> >
> >The www I/F for both the Linus tree (http://linux.bitkeeper.com/) and the ppc
> >tree (http://ppc.bitkeeper.com/) are offline a lot. Anyone know what's going on?
BitMover has been moving and upgrading the machines of late; I think
this is done now.

> Found the bug.
>
> IRQ code is now common for all arches, but ppc used a special
> irq_kmalloc routine, since request_irq was called very early.
>
> The generic code just calls straight kmalloc, which obviously craps out
> when called too early.
>
> Let's see what I can do to fix it...
>
> BTW, I'm curious if any embedded PPC actually works on 2.6.10...

In the 2.6.10-rc1 release? No. In linuxppc-2.5? Now, yes. The changes
came from Randy Vinson: the request_irq() / openpic_hookup_cascade()
calls become arch_initcalls(), which fixed at least some platforms. I
would guess 8xx will need something similar.

--
Tom Rini
http://gate.crashing.org/~trini/
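[Editor's note: for readers unfamiliar with the pattern described above, here
is a minimal sketch of deferring an early IRQ hookup to an arch_initcall(), so
that request_irq() (and the kmalloc inside it) only runs after the slab
allocator is up. The board name, handler, and IRQ number are hypothetical
illustrations for 2.6-era kernels, not Randy Vinson's actual patch.]

    /*
     * Sketch: instead of calling request_irq() from init_IRQ(), which
     * runs before kmalloc() is usable, register the cascade hookup as
     * an arch_initcall().  Initcalls run after slab initialization.
     */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/interrupt.h>

    #define CASCADE_IRQ    64    /* hypothetical cascade interrupt number */

    static irqreturn_t cascade_action(int irq, void *dev_id,
                                      struct pt_regs *regs)
    {
            /* demultiplex the secondary interrupt controller here */
            return IRQ_HANDLED;
    }

    static int __init my_board_setup_cascade(void)
    {
            /* kmalloc() works by now, so request_irq() no longer fails */
            if (request_irq(CASCADE_IRQ, cascade_action, 0, "cascade", NULL))
                    printk(KERN_ERR "Failed to hook up cascade IRQ\n");
            return 0;
    }
    arch_initcall(my_board_setup_cascade);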