On Thu, Feb 21, 2013 at 03:58:56PM +0400, Konstantin Vladimirov wrote:
> Hi,
> 
> Sorry, mistyped. Please read `jne` instead of `je` in the handwritten
> "optimized" assembler.
> 
> ---
> With best regards, Konstantin
> 
> On Thu, Feb 21, 2013 at 3:57 PM, Konstantin Vladimirov
> <konstantin.vladimi...@gmail.com> wrote:
> > Hi,
> >
> > Discovered this optimization possibility on a private backend, but it can
> > easily be reproduced on x86.
> >
> > Consider the following code, say test.c:
> >
> > static __attribute__((noinline)) unsigned int*
> > proxy1( unsigned int* codeBuffer, unsigned int oper, unsigned int a,
> >         unsigned int b )
> > {
> >     return codeBuffer;
> > }
> >
> > static __attribute__((noinline)) unsigned int*
> > proxy2( unsigned int* codeBuffer, unsigned int oper, unsigned int a,
> >         unsigned int b )
> > {
> >     return codeBuffer;
> > }
> >
> > __attribute__((noinline)) unsigned int*
> > myFunc( unsigned int* codeBuffer, unsigned int oper)
> > {
> >     if( (oper & 0xF) == 14)
> >     {
> >         return proxy1( codeBuffer, oper, 0x22, 0x2102400b);
> >     }
> >     else
> >     {
> >         return proxy2( codeBuffer, oper, 0x22, 0x1102400b);
> >     }
> > }
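(For reference, a minimal sketch of the transformation being described, assuming the intent is that the two call sites collapse into one: proxy1 and proxy2 have identical bodies, so only the constant fourth argument differs between the branches. myFunc_merged is a hypothetical name, not the poster's handwritten assembler.)

/* Hedged sketch of the expected result: select the constant, then make
   a single (tail) call instead of branching to two identical callees. */
__attribute__((noinline)) unsigned int*
myFunc_merged( unsigned int* codeBuffer, unsigned int oper)
{
    unsigned int c = ((oper & 0xF) == 14) ? 0x2102400b : 0x1102400b;
    return proxy1( codeBuffer, oper, 0x22, c);
}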

This cannot be done in general, as proxy1 could be self-modifying code.

I considered writing a post-optimizer for binaries, but I do not know how
to detect self-modifying behaviour, so what I can do there is limited.
