Hi,

As far as I know, OpenSSL 1.0 is not meant to be binary compatible with 
OpenSSL 0.9.8x, at least for low-level APIs like the AES one you are 
referring to.
So, as you suggest, an application should know whether it is using a 0.9.8
libeay32 or a 1.0 one, and depending on that use the correct prototype.
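
For illustration, a runtime dispatch could look roughly like the sketch below
(the wrapper name, the two typedefs, and the use of SSLeay() to read the
loaded version are my own illustration; "const void *" stands in for the real
AES_KEY pointer so that neither version's header has to be included):

#include <windows.h>
#include <stddef.h>

/* 0.9.8x prototype: length is "unsigned long" (32-bit on Win64) */
typedef void (*aes_ctr_098_fn)(const unsigned char *in, unsigned char *out,
                               unsigned long length, const void *key,
                               unsigned char ivec[16],
                               unsigned char ecount_buf[16],
                               unsigned int *num);

/* 1.0.0 prototype: length is "size_t" (64-bit on Win64) */
typedef void (*aes_ctr_100_fn)(const unsigned char *in, unsigned char *out,
                               size_t length, const void *key,
                               unsigned char ivec[16],
                               unsigned char ecount_buf[16],
                               unsigned int *num);

/* SSLeay() returns the version number, e.g. 0x1000001fL for 1.0.0a */
typedef unsigned long (*ssleay_fn)(void);

/* "libeay" would come from LoadLibrary("libeay32.dll") */
static void aes_ctr128_compat(HMODULE libeay,
                              const unsigned char *in, unsigned char *out,
                              size_t length, const void *key,
                              unsigned char ivec[16],
                              unsigned char ecount_buf[16],
                              unsigned int *num)
{
    ssleay_fn version = (ssleay_fn)GetProcAddress(libeay, "SSLeay");
    FARPROC enc = GetProcAddress(libeay, "AES_ctr128_encrypt");

    if (version == NULL || enc == NULL)
        return;                         /* symbols not found */

    if (version() >= 0x10000000L)       /* 1.0.0 and later: size_t */
        ((aes_ctr_100_fn)enc)(in, out, length, key, ivec, ecount_buf, num);
    else                                /* 0.9.8x: unsigned long */
        ((aes_ctr_098_fn)enc)(in, out, (unsigned long)length, key,
                              ivec, ecount_buf, num);
}

The cast to the matching function-pointer type is what gets the width of
"length" right on Win64; on LP64 Unix the two typedefs describe the same
calling sequence anyway.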

Cheers,
--
Mounir IDRASSI
IDRIX
http://www.idrix.fr

On 7/28/2010 3:02 PM, John Skodon via RT wrote:
> Hi guys:
>
> I'm probably wrong here, but it looks like some function prototypes, e.g.,
> in aes.h, changed from "unsigned long" in 0.9.8o to "size_t" in 1.0.0a.
>
> E.g.,
> 0.9.8o, AES.H:
> void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
>   const unsigned long length, const AES_KEY *key,
>   unsigned char ivec[AES_BLOCK_SIZE],
>   unsigned char ecount_buf[AES_BLOCK_SIZE],
>   unsigned int *num);
>
> 1.0.0a, AES.H:
> void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
>   size_t length, const AES_KEY *key,
>   unsigned char ivec[AES_BLOCK_SIZE],
>   unsigned char ecount_buf[AES_BLOCK_SIZE],
>   unsigned int *num);
>
> The eccentric LLP64 model of Microsoft Windows on AMD64 and Itanium has
> "long" and "unsigned long" as 32 bits, and "size_t" as 64 bits. So it would
> seem that code calling AES_ctr128_encrypt() compiled against the 0.9.8o
> headers would push 32 bits less onto the stack on AMD64/Itanium than code
> using the 1.0.0a headers.
>
> Just about every other popular compiler model I can think of, primarily
> Win32, 32-bit Unix, and LP64 64-bit Unix, would not experience a problem.
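>
> For illustration, a trivial check (a hypothetical test program, not part of
> OpenSSL) makes the width difference visible:
>
> #include <stdio.h>
>
> int main(void)
> {
>     /* LLP64 (Win64) prints 4 and 8; LP64 Unix prints 8 and 8 */
>     printf("unsigned long: %u bytes, size_t: %u bytes\n",
>            (unsigned)sizeof(unsigned long), (unsigned)sizeof(size_t));
>     return 0;
> }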
>
> If I'm correct, code calling these functions on AMD64/x64 might need two
> different function pointers defined for AES_ctr128_encrypt(), and on-the-fly
> switching between the two, depending on the version retrieved from
> LIBEAY32.DLL.
>
> Am I missing something here?
>
> Thanks in advance for your help,
> JGS

