Attempting to decrypt/decode a large smime-encoded file created with
openssl fails regardless of the amount of OS memory available.
OpenSSL version 1.0.2d
Ubuntu 15.10 Linux 4.2.0-41 x86_64
Create keypair:
openssl req -x509 -nodes -newkey rsa:2048 \
-keyout mysqldump-secure.priv.pem \
-out mysqldump-secure.pub.pem  # (cert filename assumed)
Hi Rich,
This is a 64-bit kernel on AMD64 architecture. Do I need ints bigger than
that? Is there any reason not to patch the code to use size_t? I could
craft such a change for a ticket if desired.
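To answer my own first question: Linux x86_64 is LP64, so int stays 4 bytes
while size_t is 8, and any length stored in an int caps out at 2^31-1
regardless of how much RAM is available. A quick generic-C check, nothing
OpenSSL-specific:
cat > sizes.c <<'EOF'
#include <stdio.h>
int main(void)
{
    /* LP64 (x86_64 Linux): int is still 32-bit, size_t is 64-bit */
    printf("int=%zu long=%zu size_t=%zu\n",
           sizeof(int), sizeof(long), sizeof(size_t));
    return 0;
}
EOF
gcc sizes.c -o sizes && ./sizes   # prints: int=4 long=8 size_t=8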
Thanks,
Brian
On Wed, Aug 17, 2016 at 1:28 PM, Rich Salz via RT wrote:
> You'll
That doesn't sound like an ideal case for a bugfix. Any other creative
ideas on how to fix this one? Some suggestions I read previously included
adding support for streaming decode to avoid such a large memory
allocation. That may not be easily feasible, though, because of the need
to verify signatures.
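In the meantime, one possible workaround sketch, assuming signature
verification isn't needed on the bulk payload: keep the big file out of
smime entirely by encrypting it symmetrically with openssl enc (which
streams, so size isn't an issue) and wrapping only a small random session
key with smime. The filenames reuse the keypair from the report and are
illustrative only:
openssl rand -hex 32 > session.key
openssl enc -aes-256-cbc -salt -pass file:session.key \
    -in bigfile -out bigfile.enc
openssl smime -encrypt -aes256 -binary -in session.key \
    -out session.key.smime mysqldump-secure.pub.pem
To decrypt, unwrap the session key, then stream-decrypt the bulk data:
openssl smime -decrypt -in session.key.smime \
    -inkey mysqldump-secure.priv.pem -out session.key
openssl enc -d -aes-256-cbc -pass file:session.key \
    -in bigfile.enc -out bigfile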
OK, so this might be a separate issue. Please let me know what you think
and I can file it. The issue is pretty much irrelevant, though, since you
can't decrypt anything over 1.5G.
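For what it's worth, ~1.5G is about what an INT_MAX cap on the
base64-encoded form would predict, since the S/MIME encoding inflates the
payload by roughly 4/3 (speculation on my part about where the cap
actually sits):
bmorton@athens:~$ echo $(( 2**31 - 1 ))              # INT_MAX
2147483647
bmorton@athens:~$ echo $(( (2**31 - 1) * 3 / 4 ))    # plaintext that base64-expands to INT_MAX
1610612735
1610612735 bytes is almost exactly 1.5 GiB.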
Try this:
bmorton@athens:~$ dd if=/dev/urandom of=sample.txt bs=512K count=6144
6144+0 records in
6144+0 records out
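From there, encrypt the 3 GiB sample and try to decrypt it, reusing the
keypair from above (these options are one plausible set; the decrypt step
is where the failure shows up):
bmorton@athens:~$ openssl smime -encrypt -aes256 -binary -in sample.txt \
    -out sample.txt.smime mysqldump-secure.pub.pem
bmorton@athens:~$ openssl smime -decrypt -in sample.txt.smime \
    -inkey mysqldump-secure.priv.pem -out sample.decrypted.txt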
Very helpful, thanks! So that's not an actual issue.
Which do you think is more pressing from your project's perspective:
removing BUF_MEM's dependency on int, or streaming decode for smime? While
certainly non-trivial, the latter is more isolated for a newcomer. On the
other hand, the
Thank you Steve (and Rich) for your insights so far. It looks like BUF_MEM
is the structure in question? If so, that might also explain the odd
behavior with smime encryption, which makes this issue sort of moot. Even
if I could fix this, the encrypted files I've created so far would be