On Sat, Dec 13, 2008 at 7:26 PM, Vishnu Param <vishnu...@hotmail.com> wrote:
>> Which is faster (memcpy, or inline | and << operations)? It varies widely
[...]
> But I guess this only applies for my hardware and not others. It is really
> hard to say what the compiler is doing beneath the system, and I am no
> expert in these matters.

Correct. Though as a rule of thumb, one can assume
memcpy()/memcmp()/memmove() to be faster than 'home grown' versions of
the same.
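
To illustrate (just a sketch, with made-up function names): a hand-rolled
byte-by-byte loop like the one below is the kind of 'home grown' copy that
the library routine usually beats, since memcpy() typically moves word-sized
or wider chunks per iteration and may be inlined by the compiler.

#include <string.h>
#include <stddef.h>

/* 'Home grown' copy: one byte per loop iteration. Correct, but usually
 * slower than the library routine. */
static void my_copy(unsigned char *dst, const unsigned char *src, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        dst[i] = src[i];
}

/* The library call is normally the better choice: */
static void copy_block(unsigned char *dst, const unsigned char *src, size_t n)
{
    memcpy(dst, src, n);
}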

BTW: if you ever want or need to know what's done under the hood,
check out disassemblers (such as IDA Pro). Powerful stuff.

And many compilers (e.g. Intel ICC, GNU C) recognize several standard
library calls (mem*/str*) and replace them with 'intrinsics', i.e.
highly optimized bits of inline assembly; memcpy() is one such call.
How much of this happens depends very much on the compiler, the
(hardware) platform and the compiler settings.
Other compilers instead provide optimized memcpy() library routines,
which are a tad slower due to the function call/return overhead, but
which very probably still outperform 'home grown' code.
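
A small sketch of where such an intrinsic typically kicks in (the function
name is made up; check the actual codegen with a disassembler or 'gcc -S',
since it depends on compiler, platform and flags):

#include <string.h>
#include <stdint.h>

/* A fixed, small-size memcpy() like this is a prime candidate for the
 * intrinsic treatment: with optimization on, GCC/ICC commonly turn it
 * into a single load/store pair instead of an actual function call. */
static uint32_t load_u32(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}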

All in all, the most important thing is to feed (relatively) large
chunks of data on each call, so that the call overhead and such is
'negligible' compared to the data processing itself.
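
For example (hypothetical record type and function names): copying an array
record-by-record pays the call overhead n times, while one big call pays it
once for the same amount of data.

#include <string.h>
#include <stddef.h>

struct record { unsigned char data[16]; };   /* hypothetical small record */

/* Many tiny calls: call/return overhead is paid once per record. */
static void copy_per_record(struct record *dst, const struct record *src, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        memcpy(&dst[i], &src[i], sizeof(dst[i]));
}

/* One large call: the same bytes move, but the overhead is paid once. */
static void copy_all_at_once(struct record *dst, const struct record *src, size_t n)
{
    memcpy(dst, src, n * sizeof(*dst));
}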

And when you can devise your software in such a way that data which
must be aligned already enters the system at an aligned boundary, that
cuts out the aligning memcpy()/memmove() and gives some additional
speed.
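
A sketch of that idea (the 16-byte alignment and the function name are
assumptions for the example): allocate the receive buffer aligned up front,
so data can be read straight into it instead of being copied out of an
unaligned staging buffer afterwards.

#include <stdlib.h>

/* Request a buffer aligned to a 16-byte boundary (posix_memalign() is
 * POSIX; C11 offers aligned_alloc()). Data read directly into such a
 * buffer already sits at an aligned address, so no extra aligning
 * memcpy()/memmove() is needed later. */
static unsigned char *alloc_aligned_buffer(size_t size)
{
    void *buf = NULL;
    if (posix_memalign(&buf, 16, size) != 0)
        return NULL;
    return (unsigned char *)buf;
}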


-- 
Met vriendelijke groeten / Best regards,

Ger Hobbelt

--------------------------------------------------
web:    http://www.hobbelt.com/
        http://www.hebbut.net/
mail:   g...@hobbelt.com
mobile: +31-6-11 120 978
--------------------------------------------------