On 1/3/07, Joe Orton <[EMAIL PROTECTED]> wrote:
> This would need to have some severe future-proofing to be safe against
> any change to MD5_CTX in future versions of OpenSSL, e.g. only using it
> for the specific sizeof() of that structure as currently defined.  (That
> would cover the cases where MD5_LONG is currently not 32-bit, too.)
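
As a concrete illustration of that sizeof() idea, a compile-time guard
along these lines could be baked into the wrapper.  This is just an
untested sketch, and APU_MD5_CTX_SIZE is a made-up name for whatever
value we would freeze at today's sizeof(MD5_CTX):

#include <openssl/md5.h>

/* Example value only: whatever sizeof(MD5_CTX) works out to with the
   headers we build against today. */
#define APU_MD5_CTX_SIZE 92

/* Ill-formed (negative array size) if a future OpenSSL grows MD5_CTX
   past the size we froze, so the build breaks loudly instead of the
   wrapper silently overrunning memory. */
typedef char apu_md5_ctx_size_check
        [sizeof(MD5_CTX) <= APU_MD5_CTX_SIZE ? 1 : -1];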

Well, I think the 'right' solution may be to deploy a new MD5 API that
takes a pool, so we can dynamically allocate the MD5_CTX rather than
asking the caller to allocate it on the stack.

Basically, change apr_md5_init() to:

APU_DECLARE(apr_status_t) apr_md5_init(apr_md5_ctx_t **context,
                                       apr_pool_t *pool);

combined with making apr_md5_ctx_t an opaque structure.
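
A rough sketch of how a caller would then use it (assuming
apr_md5_update() and apr_md5_final() keep their current prototypes and
just take the now-opaque pointer; pool, data and len are whatever the
caller already has in scope):

apr_md5_ctx_t *ctx;
unsigned char digest[APR_MD5_DIGESTSIZE];
apr_status_t rv;

/* The context is allocated out of the pool, so its size (and the
   underlying OpenSSL MD5_CTX) never appears in the caller's ABI. */
rv = apr_md5_init(&ctx, pool);
if (rv != APR_SUCCESS)
    return rv;

apr_md5_update(ctx, data, len);
apr_md5_final(digest, ctx);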

If we change the signature, I don't know what we want to do - do we
follow Subversion's example and add 'apr_md5_init2' and friends?  Or
should we open the floodgates for APR 2.0?

> 1. is having an ENOTIMPL _set_xlate really an excusable regression?

Yes.  We already do that for the !APR_HAS_XLATE case, so callers need
to handle that return anyway.
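
In other words, any portable caller already has to treat translation as
optional, roughly like the sketch below (ctx and xlate are just
placeholders here):

rv = apr_md5_set_xlate(ctx, xlate);
if (rv == APR_ENOTIMPL) {
    /* No translation in this build (!APR_HAS_XLATE, or an
       OpenSSL-backed implementation): hash the bytes as-is. */
}
else if (rv != APR_SUCCESS) {
    return rv;
}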

> 2. is it intended that the return values from the OpenSSL MD5_*
> functions are ignored?  (can those functions even use the OpenSSL error
> stack and all the mess that entails?)

My look through those functions doesn't indicate any error returns -
just like our own implementation, FWIW.
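
So the glue presumably ends up looking roughly like this (a sketch
only, not the actual patch; the 'ossl' member name is invented and
assumes apr_md5_ctx_t privately wraps an OpenSSL MD5_CTX):

#include <openssl/md5.h>

APU_DECLARE(apr_status_t) apr_md5_update(apr_md5_ctx_t *context,
                                         const void *input,
                                         apr_size_t inputLen)
{
    /* MD5_Update() does return an int, but it never reports failure
       and doesn't touch the error stack, so the result is discarded. */
    MD5_Update(&context->ossl, input, inputLen);
    return APR_SUCCESS;
}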

> 3. does this mean that apr_md5_* can only be guaranteed to work after
> apu_ssl_init() has been called?

AFAICT from the source code, the OpenSSL MD5 code doesn't use the
OpenSSL error stack or any of its helper functions.  (It doesn't report
any errors at all.)  So, it shouldn't depend on apu_ssl_init() being
called first.  -- justin
