https://bugs.kde.org/show_bug.cgi?id=322935

--- Comment #27 from Julian Seward <jsew...@acm.org> ---
This keeps cropping up, for example most recently in bug 366464.  Maybe
I should explain more why this isn't supported.  It's because we don't have
a feasible way to do it.  Valgrind's JIT instruments code blocks as they are
first visited, and the endianness in effect at translation time is "baked in"
to the instrumentation.  So there are two options:

(1) when a SETEND instruction is executed, throw away all the JITted code
    that Valgrind has created, and JIT new code blocks with the new endianness.

(2) JIT code blocks in an endian-agnostic way and add a runtime test
    to each memory access, to decide whether to call a big- or
    little-endian instrumentation helper function.

(1) gives zero performance overhead for code that doesn't use SETEND but
  a gigantic (completely infeasible) hit for code that does.

(2) makes endian changes free, but penalises all memory traffic regardless of
  whether SETEND is actually used.

So I don't find either of those acceptable.  And I can't think of any other
way to implement it.

Truth be told, I don't believe this is really even necessary, either.  In the
old days, on x86 (32-bit) linux and ppc32-linux (note: 32-bit, little- and
big-endian respectively) glibc used platform-specific code -- sometimes in C,
sometimes in assembly -- to implement the str* functions, and these normally
process data in 32-bit chunks.  For example, strlen on x86 was done with
32-bit loads and some tricks to do with carry-bit propagation, by adding the
magic constants 0x80808080 and/or 0xFEFEFEFF to the loaded values.

So I don't get why rpi has to be special about this.  Why can't it just
follow existing practice?
