In the last mail Philip Blundell said:
> >My problem is a segv in perl. If I do any of:
> >Add another empty function into the source code
> >Add 1 or more instructions to the troublesome function
> >Remove 1 instruction from the troublesome function
> >
> >the segv goes away.
> >Is it likely that this is a StrongARM revision K bug?
>
> Well, I always hate to point the finger without conclusive proof. But it does
> sound rather like it might be (I assume you're using a rev K chip!).
Well, it says "K" on the outside (it's DEC, dating from October 1996).
Is my understanding correct:
There are two bugs in the silicon of the revision K StrongARM.
The first relates to STMs using ^ to access user-mode registers while in a
privileged mode, which doesn't affect ARMLinux (but does affect RISC OS).
The second relates to the PC being the destination of LDM and LDR
instructions, where the LDM or LDR sits on the last word of a page. The
pipeline fetch of the instruction following the LDM or LDR causes a
prefetch abort to be scheduled, but the K chip doesn't realise that the PC
was altered by the LDM or LDR and generates the abort anyway. On return
from the abort, execution carries on with the instruction at the memory
location after the LDM or LDR, rather than at the address that was loaded
into the PC.
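To make the danger condition concrete, here's a rough Python sketch (not any
real tool; the function names are mine, and the decoding is simplified to just
the two encodings in question) that flags an LDM or LDR with PC as destination
sitting on the last word of a 4K page:

```python
PAGE_SIZE = 4096
LAST_WORD = PAGE_SIZE - 4   # byte offset of the last 32-bit word in a page

def loads_pc(insn):
    """True if a 32-bit ARM instruction is an LDM or LDR that writes r15."""
    if (insn >> 25) & 0b111 == 0b100 and (insn >> 20) & 1:
        # LDM (block data transfer, L=1): PC is loaded if bit 15
        # of the register list is set.
        return bool(insn & (1 << 15))
    if (insn >> 26) & 0b11 == 0b01 and (insn >> 20) & 1:
        # LDR (single data transfer, L=1): Rd is in bits 15..12.
        return (insn >> 12) & 0xF == 15
    return False

def danger_offsets(words, base=0):
    """Offsets (relative to base) of loads-to-PC that would land on
    the last word of a page."""
    return [i * 4 for i, w in enumerate(words)
            if (base + i * 4) % PAGE_SIZE == LAST_WORD and loads_pc(w)]
```

So a function ending in `ldmfd sp!, {fp, pc}` is only a problem when the
placement happens to put that one instruction at offset 4092 within a page.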
OK. It has been suggested that one foolproof solution would be to arrange for
gcc never to generate loads into the PC. However, I assume this would have a
big impact on performance, as LDM is the preferred return-from-function
instruction.
I had an alternative thought. I'm assuming something about the operation of
the linker, and that executables are loaded at fixed addresses.
Is there anything fundamentally wrong with this:
I assume that when ld is run it has an internal list of object files to add
to the final executable. I assume that it takes the first file in the list,
places it at the address of the start of the final executable, then takes the
second file, places it at (start address + length of first), and so on
through the list. If my above assumption is correct, is there a problem with
modifying the linker so that with a flag it can change its behaviour to:
  take the current object file at the top of the list
  assume it's to be placed at the current address in the executable
  check the last word of every will-be page for an LDM or LDR with PC
    as destination
  if any are found, defer linking this file:
    if there are other object files remaining on the list, take the
      next one and loop back
    otherwise (no object file is safe at this point, as all would put
      an LDM or LDR in a danger position):
      pad before the object file with 4 bytes of 0
      has this shift moved all LDM/LDRs out of danger?
      if not, try a pad of 8 bytes, and so on up to the page size
        (4K always?)
At this point, give up trying to put the LDM/LDR out of harm's way and use no
padding: someone has managed to make a pathological object file with 1024
LDR/LDMs at all possible positions (modulo 4096).
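The whole deferral-and-padding scheme could be sketched like this (again
Python of my own invention; `place` and its helpers are made-up names, each
"object file" is just a list of instruction words, and a real ld would work
on sections with alignment constraints rather than whole files):

```python
PAGE_SIZE = 4096

def _loads_pc(insn):
    """LDM with PC in the register list, or LDR with Rd == pc."""
    if (insn >> 25) & 0b111 == 0b100 and (insn >> 20) & 1:
        return bool(insn & (1 << 15))      # ldm ..., {..., pc}
    if (insn >> 26) & 0b11 == 0b01 and (insn >> 20) & 1:
        return (insn >> 12) & 0xF == 15    # ldr pc, [...]
    return False

def _dangerous(words, base):
    """True if any load-to-PC would sit on the last word of a page."""
    return any(_loads_pc(w) and (base + i * 4) % PAGE_SIZE == PAGE_SIZE - 4
               for i, w in enumerate(words))

def place(objects, base=0):
    """Greedy sketch of the scheme above; returns (address, object) pairs."""
    remaining = list(objects)
    layout = []
    addr = base
    while remaining:
        # First choice: any remaining object that is safe as-is.
        safe = next((o for o in remaining if not _dangerous(o, addr)), None)
        if safe is not None:
            layout.append((addr, safe))
            addr += len(safe) * 4
            remaining.remove(safe)
            continue
        # No object is safe here: pad in 4-byte steps up to a page.
        # If no pad works (the pathological file), give up and place
        # it unpadded at the current address.
        obj = remaining.pop(0)
        for pad in range(4, PAGE_SIZE, 4):
            if not _dangerous(obj, addr + pad):
                addr += pad
                break
        layout.append((addr, obj))
        addr += len(obj) * 4
    return layout
```

In the common case the reordering alone should find a safe layout, so the
4-bytes-at-a-time padding only costs anything on the rare stragglers.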
Would this work? As far as I can see, you end up with the linker ensuring that
no executable has any LDM or LDR in the danger position, hence your
executables never actually tickle the bug.
Nick
PS What's the revision S bug again? LDMIB and what? Is there a web page I
should read all this on?