On Wed, Jun 25, 2008 at 11:11 PM, Nadav <[EMAIL PROTECTED]> wrote:
> I came across a problem with the x86 backend of gcc (3.3 - 4.1). When I
> compile code which accesses a lookup table of size 65536, gcc generates
> code which uses 16-bit opcodes.
>
> The Intel architecture optimization guide warns that 16-bit opcodes should
> be avoided whenever possible due to Length-Changing Prefixes (LCP). The
> recommendation is: "If imm16 is needed, load equivalent imm32 into a
> register and use the word value in the register instead."
>
> I would expect gcc to use a 32-bit register and mask it, if needed.
> Even when I use 32-bit registers and mask them (& 0xFFFF), gcc correctly
> detects that my variable is effectively only 16 bits wide and still
> generates 16-bit code.
> I modified the generated assembly to use 32-bit registers and it ran much
> faster.
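For reference, a minimal sketch of the kind of C code that can run into this (the array name, function, and masking are illustrative assumptions, not the original test case):

    /* 65536-entry table of 16-bit values.  */
    unsigned short table[65536];

    unsigned short lookup(unsigned int x)
    {
        /* Even though idx is a 32-bit variable, masking with 0xFFFF lets the
           compiler prove it fits in 16 bits, so it may narrow the index and
           the loaded value to 16-bit (66h-prefixed) instructions, which is
           exactly what the Intel guide warns causes LCP decoder stalls.  */
        unsigned int idx = x & 0xFFFF;
        return table[idx];
    }
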
Please file a bug report following the instructions at
http://gcc.gnu.org/bugs.html. Please also attach a runtime test case that
can be used to analyze the problem.

Thanks,
Uros.