> On Jul 14, 2016, at 1:23 AM, Frank Siebenlist <frank.siebenl...@gmail.com> 
> wrote:
> 
> Guess hashlib used some better optimization on the C-calls (?).
> 
> This is my last update on this observation.
> Conclusion is "so be it", and using bigger chunks for hashing gives
> (much) better performance.


I believe this is due to the overhead of CFFI on CPython. Every time we call a 
C function via CFFI there is some marshaling of arguments and crossing of the 
Python/C boundary, so when you call update() a whole lot of times (once per 
byte) you pay that per-call cost a whole lot of times.
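
To make that concrete, here is a rough sketch using cryptography's hazmat 
hashes interface (details like the required backend argument vary by version, 
and the 64 KiB chunk size is just an arbitrary example):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes

    data = b"x" * (1024 * 1024)

    # One CFFI call per byte: the per-call marshaling overhead is paid
    # a million times.
    h = hashes.Hash(hashes.SHA256(), backend=default_backend())
    for i in range(len(data)):
        h.update(data[i:i + 1])
    slow_digest = h.finalize()

    # One CFFI call per 64 KiB chunk: the same overhead is amortized
    # over 65536 bytes at a time.
    h = hashes.Hash(hashes.SHA256(), backend=default_backend())
    for i in range(0, len(data), 65536):
        h.update(data[i:i + 65536])
    fast_digest = h.finalize()

    assert slow_digest == fast_digest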

In contrast, hashlib is written using the C-EXT API in CPython, which means 
that it integrates directly into the internals of CPython and doesn’t need to 
pay that marshaling cost.
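
For comparison, the hashlib version of the chunked loop looks essentially the 
same at the Python level, but each update() goes straight into CPython's C-EXT 
machinery rather than through CFFI's per-call marshaling:

    import hashlib

    data = b"x" * (1024 * 1024)

    h = hashlib.sha256()
    for i in range(0, len(data), 65536):
        h.update(data[i:i + 65536])
    digest = h.digest()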

In terms of safety, CFFI is far superior to writing C directly against the 
C-EXT API. It’s also more portable, since it utilizes a pluggable backend 
approach, and on PyPy it tends to be much faster, since it offers introspection 
that the JIT can take advantage of.

The downside is that putting a bunch of CFFI calls in a hot loop on CPython can 
be slower than the equivalent C-EXT code.
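
If anyone wants to measure this on their own machine, a timeit sketch along 
these lines (the 256 KiB input size is arbitrary, and the numbers will vary 
with versions and hardware) should show the gap between per-byte and 
single-call updates:

    import timeit

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes

    data = b"x" * (256 * 1024)

    def per_byte():
        # One CFFI call per byte of input.
        h = hashes.Hash(hashes.SHA256(), backend=default_backend())
        for i in range(len(data)):
            h.update(data[i:i + 1])
        return h.finalize()

    def one_call():
        # A single CFFI call for the whole input.
        h = hashes.Hash(hashes.SHA256(), backend=default_backend())
        h.update(data)
        return h.finalize()

    print("per-byte:", timeit.timeit(per_byte, number=3))
    print("one call:", timeit.timeit(one_call, number=3))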

—
Donald Stufft


