Hi guys:
I'm probably wrong here, but it looks like you've changed some function
prototypes, e.g., in aes.h, from "unsigned long" in 0.9.8o to "size_t" in
version 1.0.0a.
E.g.,
0.9.8o, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
const unsigned long length, const AES_KEY *key,
unsigned char ivec[AES_BLOCK_SIZE],
unsigned char ecount_buf[AES_BLOCK_SIZE],
unsigned int *num);
1.0.0a, AES.H:
void AES_ctr128_encrypt(const unsigned char *in, unsigned char *out,
size_t length, const AES_KEY *key,
unsigned char ivec[AES_BLOCK_SIZE],
unsigned char ecount_buf[AES_BLOCK_SIZE],
unsigned int *num);
The eccentric LLP64 model of Microsoft Windows on AMD64 and Itanium has
"long" and "unsigned long" as 32 bits but "size_t" as 64 bits. So it would
seem that code calling AES_ctr128_encrypt() compiled against the 0.9.8o
headers would pass 32 fewer bits of length on AMD64/Itanium than code
built against the 1.0.0a headers.
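For example, a tiny check like this (just my own sanity-check snippet, not
OpenSSL code) prints 4 and 8 when built with the 64-bit Microsoft compiler,
but 8 and 8 on a typical LP64 Unix build:

#include <stdio.h>

int main(void)
{
    /* LLP64 (Win64): unsigned long is 4 bytes, size_t is 8.
       LP64 (64-bit Unix): both are 8.  32-bit targets: both are 4. */
    printf("sizeof(unsigned long) = %u\n", (unsigned)sizeof(unsigned long));
    printf("sizeof(size_t)        = %u\n", (unsigned)sizeof(size_t));
    return 0;
}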
Just about every other popular compiler model I can think of, primarily
Win32, 32-bit Unix, and LP64 64-bit Unix, would not experience a problem,
since "unsigned long" and "size_t" are the same width there.
If I'm correct, code on Windows AMD64/x64 that loads the library
dynamically would need two different function pointer types defined for
AES_ctr128_encrypt(), switching between them on the fly depending on the
version of LIBEAY32.DLL retrieved.
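Something like the following is what I have in mind (only a rough sketch;
the typedef and helper names are mine, and I'm assuming the caller has
already worked out which library version it loaded, e.g. via
SSLeay_version()):

#include <windows.h>
#include <openssl/aes.h>

/* One pointer type per prototype (names are mine): */
typedef void (*ctr128_098_fn)(const unsigned char *in, unsigned char *out,
                              const unsigned long length, const AES_KEY *key,
                              unsigned char ivec[AES_BLOCK_SIZE],
                              unsigned char ecount_buf[AES_BLOCK_SIZE],
                              unsigned int *num);
typedef void (*ctr128_100_fn)(const unsigned char *in, unsigned char *out,
                              size_t length, const AES_KEY *key,
                              unsigned char ivec[AES_BLOCK_SIZE],
                              unsigned char ecount_buf[AES_BLOCK_SIZE],
                              unsigned int *num);

/* Wrapper that casts to the prototype matching the DLL actually loaded. */
static void my_ctr128_encrypt(HMODULE dll, int is_100,
                              const unsigned char *in, unsigned char *out,
                              size_t length, const AES_KEY *key,
                              unsigned char ivec[AES_BLOCK_SIZE],
                              unsigned char ecount_buf[AES_BLOCK_SIZE],
                              unsigned int *num)
{
    FARPROC p = GetProcAddress(dll, "AES_ctr128_encrypt");
    if (p == NULL)
        return; /* symbol not found; real code would report an error */
    if (is_100)
        ((ctr128_100_fn)p)(in, out, length, key, ivec, ecount_buf, num);
    else
        ((ctr128_098_fn)p)(in, out, (unsigned long)length, key,
                           ivec, ecount_buf, num);
}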
Am I missing something here?
Thanks in advance for your help,
JGS