Re: Blocking on a non-blocking socket?

2024-05-23 Thread Wiebe Cazemier via openssl-users
Hi Detlef,

- Original Message -
> From: "Detlef Vollmann" 
> To: openssl-users@openssl.org
> Sent: Friday, 24 May, 2024 12:02:37
> Subject: Re: Blocking on a non-blocking socket?
> 
> That's correct, but if I understand Matt correctly, this isn't the case.
> The idea of SSL_MODE_AUTO_RETRY is that if there's data, but it isn't
> application data but some kind of handshake data, then SSL_read doesn't
> return (after handling the handshake data), but immediately retries.
> If this retry fails with EWOULDBLOCK (or actually BIO_read returns 0),
> then SSL_read returns with 0 and SSL_WANT_READ.

Wouldn't the option then have to be called 'read more than one record at a 
time'? To me, 'retry' is a bit of a misnomer in that description.

Tracing the code, the retry seems to be considered based on 
BIO_fd_non_fatal_error(), which looks at EWOULDBLOCK. See [1] and [2].

Wiebe


[1] 
https://github.com/openssl/openssl/blob/b9e084f139c53ce133e66aba2f523c680141c0e6/crypto/bio/bss_fd.c#L226
[2] 
https://github.com/openssl/openssl/blob/b9e084f139c53ce133e66aba2f523c680141c0e6/crypto/bio/bss_fd.c#L113


Re: Blocking on a non-blocking socket?

2024-05-23 Thread Wiebe Cazemier via openssl-users
Hi Matt,

- Original Message -
> From: "Matt Caswell" 
> To: openssl-users@openssl.org
> Sent: Friday, 24 May, 2024 00:26:28
> Subject: Re: Blocking on a non-blocking socket?

> Not quite.
> 
> When you call SSL_read() it is because you are hoping to read
> application data.
> 
> OpenSSL will go ahead and attempt to read a record from the socket. If
> there is no data (and you are using a non-blocking socket), or only a
> partial record available then the SSL_read() call will fail and indicate
> SSL_ERROR_WANT_READ.
> 
> If a full record is available it will process it. If the record contains
> application data then the SSL_read() call will return successfully and
> provide the application data to the application.
> 
> If the record contains non-application data (i.e. some TLS protocol
> message like a key update, or new session ticket) then, with
> SSL_MODE_AUTO_RETRY on it will automatically try and read another record
> (and the above process repeats). 

Can you show me in the code where that is? It seems the callers of BIO_read() 
[1] are responsible for doing the retry, because the reader functions abort 
when retry is set. There are many such callers (x509, evp, b64, etc.), but the 
code is hard to trace, because it's all calls through bio_method_st.bread 
function pointers.

My main concern is: if it gets EWOULDBLOCK, there is (almost) no point in 
retrying, because in the 100 microseconds or so that have passed there is 
likely still no data. Plus, is there a limit on how often it retries? If the 
connection is broken (packet loss, so nobody is aware) in the middle of 
rekeying, it can retry all it wants, but nothing will ever come. If it does 
that, then at some point reads on the socket would fail with ETIMEDOUT, which 
is what I'm seeing.


[1] 
https://github.com/openssl/openssl/blob/b9e084f139c53ce133e66aba2f523c680141c0e6/crypto/bio/bio_lib.c#L303


Re: Blocking on a non-blocking socket?

2024-05-23 Thread Wiebe Cazemier via openssl-users
Hi Neil,

- Original Message -
> From: "Neil Horman" 
> To: "Wiebe Cazemier" 
> Cc: "udhayakumar" , openssl-users@openssl.org
> Sent: Thursday, 23 May, 2024 23:42:18
> Subject: Re: Blocking on a non-blocking socket?

> from:
> https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_set_mode.html

> SSL_MODE_AUTO_RETRY in non-blocking mode should cause SSL_read/SSL_write to
> return -1 with an error code of WANT_READ/WANT_WRITE until such time as the
> re-negotiation has completed. I need to confirm that's the case in the code,
> but it seems to be. If the underlying socket is in non-blocking mode, there
> should be no way for calls to block in SSL_read/SSL_write on the socket
> read/write system call.

I still don't really see what the difference is between SSL_MODE_AUTO_RETRY on 
or off in non-blocking mode.

The person at [1] seems to have had a similar issue, and was convinced clearing 
SSL_MODE_AUTO_RETRY fixed it. But I agree, I don't know how it could be. 
OpenSSL would have to remove the O_NONBLOCK, or do select/poll, and I can't 
find it doing that.

I hope it happens again soon and I'm around to attach a debugger.

Regards,

Wiebe


[1] https://github.com/alanxz/rabbitmq-c/issues/586


Re: Blocking on a non-blocking socket?

2024-05-23 Thread Wiebe Cazemier via openssl-users
- Original Message -
> From: "Neil Horman" 
> To: "udhayakumar" 
> Cc: "Wiebe Cazemier" , openssl-users@openssl.org
> Sent: Thursday, 23 May, 2024 22:05:22
> Subject: Re: Blocking on a non-blocking socket?

> do you have a stack trace of the thread hung in this state? That would confirm
> what's going on here.
> Neil

Hi Neil, 

No, I don't. I wasn't there to attach a debugger. It recovered before I could 
do that. And despite a lot of effort, I can't reproduce it either.

But in general, what does SSL_MODE_AUTO_RETRY on/off change in non-blocking 
mode? The documentation is too vague for me. It says:

> Setting SSL_MODE_AUTO_RETRY for a nonblocking BIO will process 
> non-application data records until either no more data is available or an 
> application data record has been processed.

But how is that different from disabling SSL_MODE_AUTO_RETRY?
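
My current reading of that paragraph, expressed as a sketch (this is only how I 
understand the quoted documentation, not something I have verified against the 
source; the function names are standard OpenSSL APIs):

#include <openssl/ssl.h>

/* Caller-visible difference, assuming a non-blocking fd behind the SSL:
 *
 * - mode cleared (pre-1.1.1 behaviour): after processing a non-application
 *   record, SSL_read() can return <= 0 with SSL_ERROR_WANT_READ even if
 *   more records are already buffered; the caller simply calls it again.
 *
 * - mode set (default since 1.1.1): SSL_read() keeps consuming records
 *   itself until it produces application data or the transport reports
 *   no more data. Either way, nothing should block.
 */
static void choose_mode(SSL_CTX *ctx, int auto_retry)
{
    if (auto_retry)
        SSL_CTX_set_mode(ctx, SSL_MODE_AUTO_RETRY);    /* 1.1.1+ default */
    else
        SSL_CTX_clear_mode(ctx, SSL_MODE_AUTO_RETRY);  /* old behaviour  */
}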

Regards,

Wiebe


Blocking on a non-blocking socket?

2024-05-22 Thread Wiebe Cazemier via openssl-users
Hi List,

I have a very obscure problem with an application using O_NONBLOCK still 
blocking. Over the course of a year of running with hundreds of thousands of 
clients, it has happened twice over the last month that a worker thread froze. 
It's a long story, but I'm pretty sure it's not a deadlock or a spinning event 
loop, primarily because the application recovers after about 20 minutes, with a 
client erroring out with ETIMEDOUT. Coincidentally, that 20 minutes matches the 
timeout described in the tcp man page [1].

It really looks like a non-blocking socket is still blocking. I found someone 
with a similar problem ([2]), but what they say about SSL_MODE_AUTO_RETRY does 
not match the documentation.

So, is there indeed any way an application that has SSL_MODE_AUTO_RETRY on 
(which is the default since 1.1.1) can block? Looking at the source code, I 
don't see any calls to fcntl() that remove O_NONBLOCK.

My IO method is SSL_read() and SSL_write() with an SSL object given to 
SSL_set_fd().

The only SSL mode I change from the default is 
SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER.

There are two primary deployments of this application, one with OpenSSL 1.1.1 
and one with 3.0.0. Only 1.1.1 has shown this problem, but it may be a 
coincidence.

Side question: is it a problem to call SSL_set_fd() before using fcntl() to set 
the fd to O_NONBLOCK? I ask because the docs say "The BIO and hence the SSL 
engine inherit the behaviour of fd. If fd is non-blocking, the ssl will also 
have non-blocking behaviour." The word 'inherit' may be key here; I'm not sure 
when that happens.

Regards,

Wiebe Cazemier



[1] https://man7.org/linux/man-pages/man7/tcp.7.html
[2] https://github.com/alanxz/rabbitmq-c/issues/586


Trouble decoding key in provider

2024-05-22 Thread Bernd Ritter via openssl-users

Hi there,

I am trying to implement a provider. The decoder successfully decodes the 
key (it is an ED25519 key with a custom OID, hence the provider).


Currently I am facing two problems:

1. The PEM decoding is ignored unless I comment out the DER decoding part.

The private key is packaged in PKCS#8 format. It can be decoded from PEM, but 
only if I comment out the DER part. This does not leave me confident that my 
code is correct.


static const OSSL_ALGORITHM ed25519ph_decoder_dispatch[] = {
    {"ed25519-my:1.2.3.4", "provider=my,input=pem,structure=pkcs8",
     dispatch_decoder_my_ed25519_private_der, PROV_DESCS_ED25519},
    {"ed25519-my:1.2.3.4", "provider=my,input=der", , PROV_DESCS_ED25519},
    {NULL, NULL, NULL, NULL}};

Each part works on its own. Maybe someone can enlighten me here :-)

2. The DER part works as well. I can extract the exact key I created in the 
keygen part before. But despite my efforts to find the correct export 
mechanism by trial and error, and reading through all the provider 
implementations I could find in the provider corner and on the internet, 
something is still missing here. After reading out the key, it does not 
seem to be available:
PROVIDER: making public key from example-priv.pem

0:d=0  hl=2 l=  46 cons: SEQUENCE
2:d=1  hl=2 l=   1 prim: INTEGER   :00
5:d=1  hl=2 l=   5 cons: SEQUENCE
7:d=2  hl=2 l=   3 prim: OBJECT:1.2.3.4
   12:d=1  hl=2 l=  34 prim: OCTET STRING  [HEX 
DUMP]:0420E2F448C53E58A6233CBFD306D62B2875EE175F3A88C20DCA7C4DF0BE945F192F

ed25519ph provider init...
registered  oids
new  provider context
ed25519ph provider init complete
operating switch: 22 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
operating switch: 21 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
operating switch: 10 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
operating switch: 21 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
/home/rittebe/Entwicklung/ed25519ph-provider/src/ed25519ph_decoder.c - 
Decoder context new 0x5c304c501a50

operating switch: 21 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
src/ed25519ph_decoder.c - Decoder context new 0x5c304c503b60
src/ed25519ph_decoder.c - Decoder decode DER (selection=87) 0x5c304c503b60
PKCS8_pkey_get0
dump with len=34
04 20 e2 f4 48 c5 3e 58 a6 23 3c bf d3 06 d6 2b 28 75 ee 17 5f 3a 88 c2 
0d ca 7c 4d f0 be 94 5f 19 2f

dump done
asn1 octet string. read 34
asn1 octet string len=32
dump with len=32
e2 f4 48 c5 3e 58 a6 23 3c bf d3 06 d6 2b 28 75 ee 17 5f 3a 88 c2 0d ca 
7c 4d f0 be 94 5f 19 2f

dump done
src/ed25519ph_decoder.c - Decoder Export to provider independent params 
format

src/ed25519ph_decoder.c - 32
Private DER decode rc from callback=0, rc=1
src/ed25519ph_decoder.c - Decoder context free 0x5c304c503b60
operating switch: 10 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
operating switch: 21 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
operating switch: 10 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
operating switch: 21 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
src/ed25519ph_decoder.c - Decoder context new 0x5c304c5050c0
operating switch: 21 (10=KEYMGMT, 12=SIG, 20=ENC, 21=DEC, 22=STOR)
src/ed25519ph_decoder.c - Decoder context new 0x5c304c507080
src/ed25519ph_decoder.c - Decoder context free 0x5c304c507080
Could not find private key of key from example-priv.pem
800BD7ED6273:error:1608010C:STORE 
routines:ossl_store_handle_load_result:unsupported:crypto/store/store_result.c:151:

src/ed25519ph_decoder.c - Decoder context free 0x5c304c5050c0
src/ed25519ph_decoder.c - Decoder context free 0x5c304c501a50

The first part is from openssl asn1parse, to check whether the key matches. 
After it is read correctly, I get this "could not find private key of key 
from xxx" error.


At the moment I am "exporting" the key from the decoder like this:

OSSL_PARAM params[4];
int object_type = OSSL_OBJECT_PKEY;

params[0] = OSSL_PARAM_construct_int(OSSL_OBJECT_PARAM_TYPE, &object_type);
params[1] = OSSL_PARAM_construct_utf8_string(OSSL_OBJECT_PARAM_DATA_TYPE,
                                             (char *)"privkey", 0);
params[2] = OSSL_PARAM_construct_octet_string(OSSL_OBJECT_PARAM_DATA,
                                              (void *)objref, objref_sz);
params[3] = OSSL_PARAM_construct_end();

objref just contains the plain key, which should be usable for import in the 
key management.
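
For comparison, a sketch of how such an object is typically handed back to the 
core from the decoder's decode() entry point (illustrative only, not 
necessarily the missing piece; "ed25519-my" is a placeholder for whatever name 
the keymgmt is registered under, and one thing possibly worth double-checking 
is that OSSL_OBJECT_PARAM_DATA_TYPE usually carries that algorithm name rather 
than a role such as "privkey"):

#include <openssl/core.h>
#include <openssl/core_object.h>
#include <openssl/params.h>

/* Tail of a decoder decode() implementation; data_cb/data_cbarg are the
 * OSSL_CALLBACK arguments OpenSSL passes into decode(). */
static int pass_key_to_core(OSSL_CALLBACK *data_cb, void *data_cbarg,
                            void *objref, size_t objref_sz)
{
    OSSL_PARAM params[4];
    int object_type = OSSL_OBJECT_PKEY;

    params[0] = OSSL_PARAM_construct_int(OSSL_OBJECT_PARAM_TYPE, &object_type);
    /* Usually the algorithm name the core should use to find a matching
     * keymgmt ("ed25519-my" is a placeholder here): */
    params[1] = OSSL_PARAM_construct_utf8_string(OSSL_OBJECT_PARAM_DATA_TYPE,
                                                 (char *)"ed25519-my", 0);
    params[2] = OSSL_PARAM_construct_octet_string(OSSL_OBJECT_PARAM_DATA,
                                                  objref, objref_sz);
    params[3] = OSSL_PARAM_construct_end();

    return data_cb(params, data_cbarg);
}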


Looking for help. Thanks.

Bernd

--
Bernd Ritter
Linux Developer
Tel.: +49 175 534 4534
Mail: rit...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt, HRB 3537


Re: OpenSSL version 3.3.0 published

2024-05-17 Thread Dennis Clarke via openssl-users

On 5/16/24 08:28, Neil Horman wrote:

Glad its working a bit better for you.  If you are inclined, please feel
free to open a PR with your changes for review.


Well, the changes are *really* trivial. Necessary and trivial.


--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken



Re: OpenSSL version 3.3.0 published

2024-05-16 Thread Dennis Clarke via openssl-users

On 5/15/24 18:34, Neil Horman wrote:

You are correct, the files you reference (most of them in fact) get built
into separate objects in the event the build flags are different for shared
and static libraries, and should be unrelated to the issue you are seeing



I was somewhat puzzled by this also. Yes.


As for the undefined symbols, that's definitely a mystery.  Most notably,
the symbols referenced are internal.  That is to say they shouldn't be
exported in the symbol table for libssl.so (though a quick look with
objdump shows they are, which may be a separate issue)

Looking at the sources, I can see what _might_ be happening

cert_comp_tests.c includes "../ssl/ssl_local.h"
if quic is enabled (i.e. OPENSSL_NO_QUIC is undefined), ssl_local.h
includes quic/quic_local.h
quic_local.h includes internal/quic_txp.h
quic_txp.h includes internal/quic_stream_map.h
quic_stream_map.h defines a static inline function
named ossl_quic_stream_recv_get_final_size, which calls
ossl_quic_rxfc_get_final_size

I'm guessing the other symbols have similar patterns.



I am still digging into the issue.
Thank you for the thoughtful reply.



As to why it's happening, my first guess would be that more recent compilers
like gcc and clang do lazy symbol resolution, only resolving a subordinate
symbol when its calling parent is found to be used. Given
ossl_quic_stream_recv_get_final_size isn't called anywhere in
comp_stream_test.c, the compiler disposes of the symbol prior to any need
to resolve its called symbols, and so everything is ok.



I also suspect a linker issue here, and the sad fact is that the GNU
ld just will not suffice on this server. C'est la vie ici.



Conversely (again, I'm guessing here), the Solaris 5.10 compiler likely takes
a more bulldozer-ish approach, leaving everything in the object file and
only stripping symbols after all resolutions are complete, leading to the
missing-symbols error despite the symbol not being needed.



I have to laugh at the "bulldozer" idea as you are likely quite 
correct there.




As to what to do about this...I'm unsure.  The quick hack I would imagine
would be to move the definition of ossl_quic_stream_recv_get_final_size
into a C file (likely quic_stream_map.c) and just declare a prototype in
the quic_stream_map.h header, so as to avoid the unneeded symbol
resolution.  You would have to lather, rinse, repeat with the other missing
symbols of course.

As to your prior question about how long the ability to support SunOS will
last, well, unfortunately I don't think any of us can say.  I believe the
platform you are building on is on our unadopted platform list:
https://www.openssl.org/policies/general-supplemental/platforms.html

And while we endeavor to keep openssl building on as many platforms as
possible, it's not feasible to cover all the currently
unmaintained platforms.  You do have some agency here however. If you are
willing and interested, you could volunteer to be a community platform
maintainer for your target platform.  This would entail you building
openssl on your adopted platform, running the test suite routinely, and
reporting bugs and fixing errors as they occur.  It's not a small amount of
work, but it would be a significant contribution toward ensuring that
openssl stays viable on the targets you need.
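
To illustrate the shape of that hack with made-up names (foo stands in for the 
QUIC stream helpers; the real signatures are not reproduced here):

/* foo.h -- before: a static inline definition in the header forces every
 * includer to be able to resolve bar() at link time, even when
 * foo_get_size() is never called:
 *
 *     static inline int foo_get_size(const struct foo *f) { return bar(f); }
 *
 * foo.h -- after: just a prototype. */
struct foo;
int foo_get_size(const struct foo *f);

/* foo.c -- after: the body lives here, so bar() is resolved exactly once,
 * when the library itself is linked. */
extern int bar(const struct foo *f);
int foo_get_size(const struct foo *f)
{
    return bar(f);
}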


I can tell you that this morning I see :

.
.
.
All tests successful.
Files=312, Tests=3182, 6714 wallclock secs (25.22 usr  3.10 sys + 
6370.32 cusr 170.55 csys = 6569.19 CPU)

Result: PASS
`test' is up to date.

hubble $ pwd
/opt/bw/build/openssl-3.3.0_SunOS_5.10_SPARC64.005

hubble $
hubble $ psrinfo -pv
The physical processor has 8 virtual processors (0-7)
  SPARC64-VII+ (portid 1024 impl 0x7 ver 0xa1 clock 2860 MHz)
hubble $
hubble $ uname -a
SunOS hubble 5.10 Generic_150400-67 sun4u sparc SUNW,SPARC-Enterprise
hubble $

hubble $ hash -r
hubble $ which openssl
/opt/bw/bin/openssl
hubble $

hubble $ ldd /opt/bw/bin/openssl
libssl.so.3 =>   /opt/bw/lib/libssl.so.3
libcrypto.so.3 =>/opt/bw/lib/libcrypto.so.3
libsocket.so.1 =>/lib/64/libsocket.so.1
libnsl.so.1 =>   /lib/64/libnsl.so.1
libdl.so.1 =>/lib/64/libdl.so.1
librt.so.1 =>/lib/64/librt.so.1
libstatomic.so.1 =>  /opt/bw/lib/libstatomic.so.1
libc.so.1 => /lib/64/libc.so.1
libmp.so.2 =>/lib/64/libmp.so.2
libmd.so.1 =>/lib/64/libmd.so.1
libscf.so.1 =>   /lib/64/libscf.so.1
libaio.so.1 =>   /lib/64/libaio.so.1
libdoor.so.1 =>  /lib/64/libdoor.so.1
libuutil.so.1 => /lib/64/libuutil.so.1
libgen.so.1 =>   /lib/64/libgen.so.1
libm.so.2 => /lib/64/libm.so.2
/lib/sparcv9/../libm/sparcv9/libm_hwcap1.so.2
/platform/SUNW,SPARC-Enterprise/lib/sparcv9/libc_psr.so.1
hubble $ /opt/bw/bin/op

Re: OpenSSL version 3.3.0 published

2024-05-15 Thread Dennis Clarke via openssl-users

On 5/13/24 03:34, Matt Caswell wrote:



On 13/05/2024 02:42, Neil Horman wrote:
We added support for RCU locks in 3.3 which required the use of 
atomics (or emulated atomics where they couldn't be supported), but 
those were in libcrypto, not libssl




Right - it's supposed to fall back to emulated atomic calls where atomics 
aren't available on a particular platform.


Some platforms have some atomics support but you have to link in a 
separate atomics library to get it to work. You might try adding 
"-latomic" to the Configure command line and see if that helps at all.



Well first the good news : managed to get past the need for C11 atomics
with the bundled libatomic.so.1 that the Oracle people provide in the
dev tools.

 So that works now.  Yay.

Now comes the next horrible hurdle to jump and that seems to be called
the quic protocol goodness.  For the record I am able to get a good
result if I go with "no-quic" in the config :

hubble $ $PERL ./Configure solaris64-sparcv9-cc \
> --prefix=/opt/bw no-asm no-engine shared zlib-dynamic \
> no-quic enable-weak-ssl-ciphers -DPEDANTIC 2>&1
Configuring OpenSSL version 3.3.0 for target solaris64-sparcv9-cc
Using os-specific seed configuration
Created configdata.pm
Running configdata.pm
Created Makefile.in
Created Makefile
Created include/openssl/configuration.h

**
***    ***
***   OpenSSL has been successfully configured ***
******
***   If you encounter a problem while building, please open an***
***   issue on GitHub <https://github.com/openssl/openssl/issues>  ***
***   and include the output from the following command:   ***
******
***   perl configdata.pm --dump***
***        ***
***   (If you are new to OpenSSL, you might want to consult the***
***   'Troubleshooting' section in the INSTALL.md file first)  ***
******
**
hubble $


That all builds neatly on this old platform and all the testsuite looks
to be sweet :

.
.
.
All tests successful.
Files=312, Tests=3182, 6723 wallclock secs (25.17 usr  3.15 sys + 
6375.57 cusr 171.52 csys = 6575.41 CPU)

Result: PASS
`test' is up to date.

So that is cute.

However, if I leave in the "quic"-ness then I eventually land on this
weird linking problem :

Undefined   first referenced
 symbol in file
ossl_quic_rxfc_get_final_size   test/cert_comp_test-bin-cert_comp_test.o
ossl_quic_sstream_get_final_sizetest/cert_comp_test-bin-cert_comp_test.o
ossl_quic_vlint_decode_uncheckedtest/cert_comp_test-bin-cert_comp_test.o
ld: fatal: symbol referencing errors. No output written to 
test/cert_comp_test

*** Error code 2
make: Fatal error: Command failed for target `test/cert_comp_test'
Current working directory /opt/bw/build/openssl-3.3.0_SunOS_5.10_SPARC64.004
*** Error code 1
make: Fatal error: Command failed for target `build_sw'

These files refer to the above symbols :

1) headers
-rw-r--r--   1 dclarke  devl4670 Apr  9 12:12 
./include/internal/packet_quic.h
-rw-r--r--   1 dclarke  devl   10769 Apr  9 12:12 
./include/internal/quic_fc.h
-rw-r--r--   1 dclarke  devl   17692 Apr  9 12:12 
./include/internal/quic_stream.h
-rw-r--r--   1 dclarke  devl   34987 Apr  9 12:12 
./include/internal/quic_stream_map.h
-rw-r--r--   1 dclarke  devl4212 Apr  9 12:12 
./include/internal/quic_vlint.h


2) C sources
-rw-r--r--   1 dclarke  devl2060 Apr  9 12:12 ./crypto/quic_vlint.c
-rw-r--r--   1 dclarke  devl  121348 Apr  9 12:12 ./ssl/quic/quic_impl.c
-rw-r--r--   1 dclarke  devl   12010 Apr  9 12:12 
./ssl/quic/quic_sstream.c
-rw-r--r--   1 dclarke  devl   26592 Apr  9 12:12 
./ssl/quic/quic_stream_map.c
-rw-r--r--   1 dclarke  devl   17658 Apr  9 12:12 
./ssl/quic/quic_tserver.c

-rw-r--r--   1 dclarke  devl  114209 Apr  9 12:12 ./ssl/quic/quic_txp.c

Looking into my compile logs I see that quic_vlint.c gets processed into
three output objects :

{CC foo} -c -o crypto/libcrypto-lib-quic_vlint.o   crypto/quic_vlint.c
{CC foo} -c -o crypto/libcrypto-shlib-quic_vlint.o crypto/quic_vlint.c
{CC foo} -c -o crypto/libssl-shlib-quic_vlint.ocrypto/quic_vlint.c

I see that quic_impl.c gets processed into two output objects :

{CC foo} -c -o ssl/quic/libssl-lib-quic_impl.o ssl/quic/quic_impl.c
{CC foo} -c -o ssl/quic/libssl-shlib-quic_impl.o   ssl/quic/quic_impl.c


Similarly we see that quic_sstream.c results in two objects also :

out

Re: OpenSSL version 3.3.0 published

2024-05-12 Thread Dennis Clarke via openssl-users

On 5/12/24 21:42, Neil Horman wrote:

We added support for RCU locks in 3.3 which required the use of atomics (or
emulated atomics where they couldn't be supported), but those were in
libcrypto, not libssl



I see. I am having great difficulty with 3.3 on an old Sun SPARC64
server where there really is not any libatomic support. Well, there is
sort of but it is a hack. Given how portable the code is there must be a
configuration option somewhere to disable the need for those atomic ops.

Meanwhile, OpenSSL 3.0.x builds and tests flawlessly but ... how
long will that last?


--
--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken



Re: OpenSSL version 3.3.0 published

2024-05-12 Thread Dennis Clarke via openssl-users



On 4/9/24 08:56, OpenSSL wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


OpenSSL version 3.3.0 released
==




Trying to compile this on an old Solaris 10 machine and over and over 
and over I see these strange things as Undefined symbols :


Undefined   first referenced
 symbol in file
__atomic_store_4./libssl.so
__atomic_fetch_add_4./libssl.so
__atomic_fetch_sub_4./libssl.so
atomic_thread_fence ./libssl.so
__atomic_load_4 ./libssl.so
ld: fatal: symbol referencing errors. No output written to apps/openssl
gmake[1]: *** [Makefile:12601: apps/openssl] Error 2
gmake[1]: Leaving directory 
'/opt/bw/build/openssl-3.3.0_SunOS_5.10_SPARC64.002'

gmake: *** [Makefile:1978: build_sw] Error 2


Those look like strange C11 atomics. Are they really needed somewhere?

I see include/internal/refcount.h talks about C11 atomics and yet the
entire code base is supposed to be C90 clean.  See the relevant section of
the OpenSSL Coding Style policy:

https://www.openssl.org/policies/technical/coding-style.html

Chapter 14: Portability

To maximise portability the version of C defined in
ISO/IEC 9899:1990 should be used. This is more commonly
referred to as C90. ISO/IEC 9899:1999 (also known as C99) is
not supported on some platforms that OpenSSL is used on and
therefore should be avoided.


Perhaps I need to define OPENSSL_DEV_NO_ATOMICS ?


Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken



Re: goto out not working in tests

2024-05-06 Thread The Doctor via openssl-users
On Mon, May 06, 2024 at 11:34:59PM -0600, The Doctor via openssl-users wrote:
> Using clang version 18
> 
> and it is spewing errors at 'goto out'
> 

Lines 417 and 434 of test/threadstest.c

in openssl-3.3 daily

-- 
Member - Liberal International This is doc...@nk.ca Ici doc...@nk.ca
Yahweh, King & country!Never Satan President Republic!Beware AntiChrist rising!
Look at Psalms 14 and 53 on Atheism ;


goto out not working in tests

2024-05-06 Thread The Doctor via openssl-users
Using clang version 18

and it is spewing errors at 'goto out'

-- 
Member - Liberal International This is doc...@nk.ca Ici doc...@nk.ca
Yahweh, King & country!Never Satan President Republic!Beware AntiChrist rising!
Look at Psalms 14 and 53 on Atheism ;


RE: Open SSL 1.1.1 and Vxworks 5.4.2 - Query on Entropy source

2024-04-30 Thread Prithvi Raj R (Nokia) via openssl-users
Users,

An update here: I see that we have OPENSSL_RAND_SEED_OS defined on our VxWorks 
based system. Would that be a trusted entropy source? The default for VxWorks 
seems to be OPENSSL_RAND_SEED_NONE.

Thanks,
Prithvi
From: Prithvi Raj R (Nokia)
Sent: Tuesday, April 30, 2024 12:47 AM
To: openssl-users@openssl.org
Subject: Open SSL 1.1.1 and Vxworks 5.4.2 - Query on Entropy source

Hi Users,

A beginner on cryptography and Open SSL here.

First query - On our VxWorks 5.4.2 based system with OpenSSL 1.1.1, I would 
like to know what entropy source would be used by RAND_priv_bytes() to generate 
random numbers. Does VxWorks not use an OS based entropy source? I gather as 
much from this openssl-users post: 
https://mta.openssl.org/pipermail/openssl-users/2020-March/012087.html.
In our implementation, we have the OPENSSL_RAND_SEED_NONE macro definition 
commented out in the opensslconf.h file. What would be the default entropy 
source then, if OS based sources are not used? Which OpenSSL config 
file/compile parameter can help me zero in on the correct entropy source being 
used? I want to know whether the source is a trusted one or not. I see that 
rand_drbg_get_entropy is being used (no parent DRBG; rand_pool_acquire_entropy 
is used with entropy factor 2 being set) and the entropy available is greater 
than 0.

Second query - Please confirm whether the following are valid:

  1.  I understand the entropy size by default is 256 bits.
  2.  I understand that RAND_priv_bytes() is cryptographically secure (this 
depends on the entropy source again?)

Thanks,
Prithvi


Invalid code generated by GCC on 32-bit x86 in gcm128.c

2024-04-29 Thread Michael Wojcik via openssl-users
We recently debugged, and found a workaround for, a GCC [###version] 
code-generation error when compiling OpenSSL 3.0.8 for 32-bit on Intel x86. 
This error resulted in a use of a misaligned memory operand with a 
packed-quadword instruction, producing a SIGSEGV on RedHat 8. (I'm a bit 
surprised Linux doesn't raise SIGBUS for this particular trap, but whatever.) I 
wanted to document this here in case other people run into it.

Aside: This does raise the question: Why aren't other people running into it? 
And why are we only seeing it now? Honestly, I don't know. It is sensitive to 
stack layout, but in some of our tests we could reproduce it consistently. It's 
possible you'll never see this in a program where the path into the sensitive 
functions in gcm128.c, which appear to be CRYPTO_gcm128_aad, 
CRYPTO_gcm128_encrypt, and CRYPTO_gcm128_decrypt, is made up completely of code 
compiled with GCC. In our case we have non-GCC code along that path in some 
cases, and that non-GCC code does not follow GCC's rather arbitrary stack-frame 
alignment rules for x86, so GCC may be making an invalid assumption about 
callers further up the stack and how they'll pad and align stack frames.

(It's known that with default build flags and optimization, GCC requires that 
callers align *parameters* strictly, because it may generate SSE code for 
64-bit and larger operands. But the problem here isn't a parameter, as I'll 
show in a moment.)

Anyway, back to the issue.

The affected functions declare a 64-bit integer object with automatic storage 
class:

u64 alen = ctx->len.u[0];

and then operate on it:

alen += len;

GCC, under appropriate conditions, generates code that performs a 
packed-quadword operation (specifically a PADDQ) with alen as the destination. 
That requires alen have 64-bit alignment. However, the generated code puts alen 
on a 32-bit boundary; examining its address before the trap occurs confirms it 
ends with 0x8.
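
A tiny probe in the spirit of that check (illustrative only; taking the 
variable's address can itself change code generation, and whether the compiler 
actually mislays the variable depends on optimization level, surrounding code, 
and the caller's stack alignment):

#include <stdint.h>
#include <stdio.h>

void probe(void)
{
    uint64_t alen = 0;   /* stand-in for the automatic u64 in gcm128.c */

    if (((uintptr_t)&alen % 8) != 0)
        fprintf(stderr, "alen at %p is not 8-byte aligned\n", (void *)&alen);
}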

The fix we're using is to add -mstackrealign to the build flags for OpenSSL on 
GCC x86 platforms. That adds prologue code to each function which checks the 
stack alignment at runtime and fixes it if necessary. Unfortunately this does 
mean some performance cost, obviously, which we have not yet tried to measure.

After quite a bit of investigation, we're fairly confident we'd call this a GCC 
bug. It looks like a consequence of the "fix" for GCC bug 65105, which was made 
a couple of years ago, to use XMM registers in 32-bit generated code on x86. 
GCC has an unfortunate history of assuming stronger stack-alignment rules on 
this platform than are required by the ISA or enforced by other languages and 
compilers, and some members of the GCC team are a bit notorious for their ... 
enthusiasm ... in justifying this position.

We have not yet attempted to raise this as a GCC bug, because, well, I've read 
those discussions in the GCC forums.

-- 
Michael Wojcik


Re: [External] : Re: BIO_read() crash

2022-12-05 Thread Benjamin Kaduk via openssl-users
On Mon, Dec 05, 2022 at 11:31:18AM -0800, Thomas Dwyer III wrote:
> Why does EVP_get_digestbyname("md4") return non-NULL if the legacy provider
> isn't loaded? Similarly, why does it return non-NULL for "md5" after doing
> EVP_set_default_properties(NULL, "fips=yes")? This seems unintuitive. Legacy
> code that does not know about EVP_MD_fetch() checks the return value of
> EVP_get_digestbyname(). Isn't that where the error should be detected? Why
> let it get all the way to BIO_set_md() (or EVP_DigestInit() or whatever)
> before the error is detected?

To do so would introduce a time-of-check/time-of-use race, as the set of
providers available may change in the intervening period.

-Ben


OpenSSL version 3.1.0-alpha1 published

2022-12-01 Thread OpenSSL
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


   OpenSSL version 3.1 alpha 1 released
   

   OpenSSL - The Open Source toolkit for SSL/TLS
   https://www.openssl.org/

   OpenSSL 3.1 is currently in alpha.

   OpenSSL 3.1 alpha 1 has now been made available.

   Note: This OpenSSL pre-release has been provided for testing ONLY.
   It should NOT be used for security critical purposes.

   Specific notes on upgrading to OpenSSL 3.1 from previous versions are
   available in the OpenSSL Migration Guide, here:

https://www.openssl.org/docs/man3.0/man7/migration_guide.html

   The alpha release is available for download via HTTPS and FTP from the
   following master locations (you can find the various FTP mirrors under
   https://www.openssl.org/source/mirror.html):

 * https://www.openssl.org/source/
 * ftp://ftp.openssl.org/source/

   The distribution file name is:

o openssl-3.1.0-alpha1.tar.gz
  Size: 15343477
  SHA1 checksum:  91a7cbcb761c4bb8a460899bccddcbd5d047d3c3
  SHA256 checksum:  
ef10f70023f4e3f701c434db0b4b0c8cfea1e1e473a0eb3c9ccbc5c54f5f5566

   The checksums were calculated using the following commands:

openssl sha1 openssl-3.1.0-alpha1.tar.gz
openssl sha256 openssl-3.1.0-alpha1.tar.gz

   Please download and check this alpha release as soon as possible.
   To report a bug, open an issue on GitHub:

https://github.com/openssl/openssl/issues

   Please check the release notes and mailing lists to avoid duplicate
   reports of known issues. (Of course, the source is also available
   on GitHub.)

   Yours,

   The OpenSSL Project Team.

-BEGIN PGP SIGNATURE-

iQJGBAEBCAAwFiEE3HAyZir4heL0fyQ/UnRmohynnm0FAmOIqpASHHRvbWFzQG9w
ZW5zc2wub3JnAAoJEFJ0ZqIcp55tWrIQAJHT40JekEs3DacHjQrTmGLc56TmzaFD
oDp8Md2E0RpX/vuANdIVGB89zGQMag13TPa9CzT1yk7wFBilPoiuapolmo8N0nvF
OnMLIQjF+sbsQN0gqchuMKKD98omc1ZNNcijq/GlKM9wH6ey1uHnFAi2aXF4f6ai
2SviauJvHQDgDOe9tFfA5lDF1EdYZt20D46Yc+yJf/zr4MJZFcX2T2qmo+oew6VA
djZ+cRPeeNmRXrl5Banqpfcy2iH4N57wvEcM4dtGaGY+4Pwr0H9XN6MxfamGUbLv
oSySdFpTagPENPGDBPoRilPSXdapCD5m8Xd2FERM1HF5E1GaemqaQKUYiXbANqL/
SDBftayilhYf+tXg3/22xksZVEkEjFD79M0mj75dn+UgQilOTR/AOdup2imTB7PG
7Cgq2HGz93ppO3kG0iuTS5uc95Gfu9AfkjgfcydA2eZf+rmHAoocm8kpThdxD/a5
avpMudgklyXysmO+2MJ16806Sa27L8N52YTPzy4Zthx/SLR/RA//bXBnlSlguRGw
7+hIDPncmaCfegaI65yq/TgtU9z/OLhNTPmYaUQi3IFtsCrAahZNVYg8qZtnMtgC
iaVYQkNZsqE0wSDalgJANJkZUa8VHdh2O3sOBSYbZvHWEiYJJ+9ATgLSLDjiGq0e
l9cvtybysQsx
=upN5
-END PGP SIGNATURE-


Re: Upgrading OpenSSL on Windows 10

2022-11-25 Thread Michael Wojcik via openssl-users
> From: Steven_M.irc 
> Sent: Thursday, November 24, 2022 21:21

> > This is not true in the general case. There are applications which are 
> > available on Linux which do not use the
> > distribution's package manager. There are applications which use their own 
> > OpenSSL build, possibly linked
> > statically or linked into one of their own shared objects or with the 
> > OpenSSL shared objects renamed. Linux
> > distributions have not magically solved the problem of keeping all software 
> > on the system current.

> That's disheartening. My next computer will be running Linux and I was 
> thinking that (as long as I stick to
> installing software from appropriate repositories) my update worries would be 
> over soon.

That's the state of general-purpose software development. Believe me, having 
software automatically updated would by no means solve the most pressing 
security issues in the current software industry.
 
> > It is possible, with relatively little effort, to find all the copies of 
> > the OpenSSL DLLs under their usual names on a system

> Could you please provide me with a list of the usual names?

At the moment I'm not in a position to do that, and it wouldn't achieve 
anything useful anyway.

> I've got a lot of libssl DLL's on my system, but I'm not sure if they're part 
> of OpenSSL or some other implementation
> of SSL.

Filenames wouldn't prove anything anyway.

> >I'm not sure OpenSSL versions should be particularly high on anyone's 
> >priority list.

> As I understand it, OpenSSL is responsible for establishing HTTPS 
> connections, the primary protocol
> for ensuring security and authenticity over the Internet, and you *don't* 
> think OpenSSL versions should
> be a high priority? I don't understand your lack of alarm here.

I'm not alarmed because I'm operating under a sensible threat model.

What vulnerabilities are you concerned about? Why? What versions of OpenSSL do 
those apply to? Being "alarmed" without being able to answer those questions 
just means you're shooting in the dark.

Frankly, after 2014 -- the year that brought us Heartbleed, Goto Fail, and 
severe vulnerabilities in most major TLS implementations -- there have been few 
published vulnerabilities of much concern to client-side TLS use, and most of 
those only apply to very high-value targets. TLS connections are not the 
low-hanging fruit. Attackers have much better return and much lower cost 
exploiting other vulnerabilities, including on the user side phishing and other 
social-engineering attacks, typosquatting, credential stuffing, and so on. On 
the service-provider side, software supply-chain attacks and poor 
organizational defenses are common threat vectors.

Very few people will bother attacking HTTPS at the protocol level. It's not 
worth the effort.

> >What are you actually trying to accomplish? What's your task? Your threat 
> >model?

> I want to be able to trust the HTTPS connections between my PC and servers on 
> the Internet again;

"Again" since when? "Trust" in what sense? "Trust", like "secure", doesn't mean 
anything useful in an absolute sense. It's only meaningful in the context of a 
threat model.

For a typical computer user, TLS implementations is the wrong thing to worry 
about. Most home and employee individual users who are successfully attacked 
will fall victim to some sort of social engineering, such as phishing; to poor 
personal security practices such as weak passwords or password reuse; or to a 
server-side compromise they have absolutely no control over. Some will be 
compromised due to a failure to install updates to the OS or major software 
packages such as Microsoft Office long after those updates are released, but 
that's a less-common vector.

HTTPS compromise is statistically insignificant. In the vast majority of cases, 
the dangers with HTTPS are what people use it for -- online shopping at sites 
with poor security, for example, or downloading malicious software -- not with 
the channel itself.

-- 
Michael Wojcik

RE: Upgrading OpenSSL on Windows 10

2022-11-24 Thread Steven_M.irc via openssl-users
Hi Job,
Thanks very much for your reply. Apologies for the lateness of mine.

I will ask around and get more information about Powershell and PDQ Inventory.

Thanks again,
Steven




Sent with Proton Mail secure email.

--- Original Message ---
On Wednesday, November 23rd, 2022 at 5:36 AM, Job Cacka  wrote:


> Michael's point should be asked and answered first for your environment.
> 
> To find all of the OpenSSL bits used on a windows system you would use
> Powershell or a tool that flexes its use like PDQ Inventory. There is a
> steep learning curve and it is probably off topic for this group but there
> are several different ways to use powershell to gain this information from
> different viewpoints (Installed files, registry, event log, etc...).
> 
> Thanks,
> Job
> 
> -Original Message-
> From: openssl-users openssl-users-boun...@openssl.org On Behalf Of Michael
> 
> Wojcik via openssl-users
> Sent: Monday, November 21, 2022 4:18 PM
> To: openssl-users@openssl.org
> Subject: Re: Upgrading OpenSSL on Windows 10
> 
> > From: openssl-users openssl-users-boun...@openssl.org on behalf of
> > Steven_M.irc via openssl-users openssl-users@openssl.org
> > Sent: Monday, November 21, 2022 15:56
> 
> > However, I am running Windows 10, and since (unlike Linux) every piece
> > of software outside of Windows itself needs to be updated
> > individually, I don't know how to track down every single application that
> 
> might be using OpenSSL and make sure that the copy of OpenSSL it uses is
> up-to-date.
> 
> You don't. There may be applications that have OpenSSL linked statically, or
> linked into one of its own DLLs, or just with the OpenSSL DLLs renamed.
> 
> > As many of you would know, under repository-based systems (such as
> > most Linux distros), this would not be an issue as I could update every
> 
> single application (system or non-system) at once.
> 
> This is not true in the general case. There are applications which are
> available on Linux which do not use the distribution's package manager.
> There are applications which use their own OpenSSL build, possibly linked
> statically or linked into one of their own shared objects or with the
> OpenSSL shared objects renamed. Linux distributions have not magically
> solved the problem of keeping all software on the system current.
> 
> 
> Back to Windows: It is possible, with relatively little effort, to find all
> the copies of the OpenSSL DLLs under their usual names on a system, and then
> glean from them their version information. With significantly more effort,
> you can search for exported OpenSSL symbols within third-party binaries,
> which will detect some more instances. With quite a lot of additional
> effort, you can winkle out binaries which contain significant portions of
> code matching some OpenSSL release (see various research efforts on
> function-point and code-block matching, and compare with alignment
> strategies in other fields, such as genomics). If your definition of
> "OpenSSL in an application" is not too ambitious, this might even be
> feasible.
> 
> But to what end? Each application will either be well-supported, in which
> case you can find out from the vendor what OpenSSL version it contains and
> whether an update is available; or it is not, in which you'll be out of
> luck.
> 
> This is true of essentially every software component, most of which are not
> as well-maintained or monitored as OpenSSL. Modern software development is
> mostly a haphazard hodgepodge of accumulating software of uncertain
> provenance and little trustworthiness into enormous systems with
> unpredictable behavior and failure modes. I'm not sure OpenSSL versions
> should be particularly high on anyone's priority list.
> 
> What are you actually trying to accomplish? What's your task? Your threat
> model?
> 
> --
> Michael Wojcik


Re: Upgrading OpenSSL on Windows 10

2022-11-24 Thread Steven_M.irc via openssl-users
Hi Michael,
Thanks very much for replying to my e-mail/post. I apologize for the lateness 
of my reply.

> This is not true in the general case. There are applications which are 
> available on Linux which do not use the distribution's package manager. There 
> are applications which use their own OpenSSL build, possibly linked 
> statically or linked into one of their own shared objects or with the OpenSSL 
> shared objects renamed. Linux distributions have not magically solved the 
> problem of keeping all software on the system current.

That's disheartening. My next computer will be running Linux and I was thinking 
that (as long as I stick to installing software from appropriate repositories) 
my update worries would be over soon.
 
>It is possible, with relatively little effort, to find all the copies of the 
>OpenSSL DLLs under their usual names on a system

Could you please provide me with a list of the usual names? I've got a lot of 
libssl DLL's on my system, but I'm not sure if they're part of OpenSSL or some 
other implementation of SSL.

>I'm not sure OpenSSL versions should be particularly high on anyone's priority 
>list.

As I understand it, OpenSSL is responsible for establishing HTTPS connections, 
the primary protocol for ensuring security and authenticity over the Internet, 
and you *don't* think OpenSSL versions should be a high priority? I don't 
understand your lack of alarm here.

>What are you actually trying to accomplish? What's your task? Your threat 
>model?

I want to be able to trust the HTTPS connections between my PC and servers on 
the Internet again; whether I'm using a browser, a software installer (that 
downloads data from the Internet before installing), a peer-to-peer 
application, or any other network application.

Thank you for your time.

Steven


Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-23 Thread Jakob Bohm via openssl-users

On 2022-11-15 21:36, Phillip Susi wrote:


Jakob Bohm via openssl-users  writes:


Performance wise, using a newer compiler that implements int64_t etc. via
frequent library calls, while technically correct, is going to run
unnecessarily slow compared to having algorithms that actually use the
optimal integral sizes for the hardware/compiler combination.

Why would you think that?  If you can rewrite the code to break things
up into 32 bit chunks and handle overflows etc, the compiler certainly
can do so at least as well, and probably faster than you ever could.


When a compiler breaks up operations, it will do so separately for
every operation such as +, -, *, /, %, <<, >> .  In doing so,
compilers will generally use expansions that are supposedly
valid for all numbers, while manually breaking up code can often
skip cases not possible in the algorithm in question, for example
taking advantage of some values always being less than
SIZE_T_MAX.
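
As a concrete illustration of the kind of manual splitting being discussed (a 
sketch only; stdint.h names are used for clarity even though a strict C90 
target would spell the 32-bit types differently):

#include <stdint.h>

typedef struct { uint32_t hi, lo; } u64_pair;

/* 64-bit addition built from 32-bit operations, as one might hand-write it
 * for a compiler with no 64-bit integer type. */
static u64_pair add64(u64_pair a, u64_pair b)
{
    u64_pair r;

    r.lo = a.lo + b.lo;                    /* wraps modulo 2^32 */
    r.hi = a.hi + b.hi + (r.lo < a.lo);    /* add carry if the low word wrapped */
    return r;
}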

Also, I already mentioned that some compilers do the breaking
incorrectly, resulting in code that makes incorrect calculations.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Upgrading OpenSSL on Windows 10

2022-11-21 Thread Michael Wojcik via openssl-users
> From: openssl-users  on behalf of 
> Steven_M.irc via openssl-users 
> Sent: Monday, November 21, 2022 15:56
 
> However, I am running Windows 10, and since (unlike Linux) every piece of 
> software outside of Windows itself
> needs to be updated individually, I don't know how to track down every single 
> application that might be using
> OpenSSL and make sure that the copy of OpenSSL it uses is up-to-date.

You don't. There may be applications that have OpenSSL linked statically, or 
linked into one of its own DLLs, or just with the OpenSSL DLLs renamed.

> As many of you would know, under repository-based systems (such as most Linux 
> distros), this would not be an
> issue as I could update every single application (system or non-system) at 
> once.

This is not true in the general case. There are applications which are 
available on Linux which do not use the distribution's package manager. There 
are applications which use their own OpenSSL build, possibly linked statically 
or linked into one of their own shared objects or with the OpenSSL shared 
objects renamed. Linux distributions have not magically solved the problem of 
keeping all software on the system current.


Back to Windows: It is possible, with relatively little effort, to find all the 
copies of the OpenSSL DLLs under their usual names on a system, and then glean 
from them their version information. With significantly more effort, you can 
search for exported OpenSSL symbols within third-party binaries, which will 
detect some more instances. With quite a lot of additional effort, you can 
winkle out binaries which contain significant portions of code matching some 
OpenSSL release (see various research efforts on function-point and code-block 
matching, and compare with alignment strategies in other fields, such as 
genomics). If your definition of "OpenSSL in an application" is not too 
ambitious, this might even be feasible.

But to what end? Each application will either be well-supported, in which case 
you can find out from the vendor what OpenSSL version it contains and whether 
an update is available; or it is not, in which you'll be out of luck.

This is true of essentially every software component, most of which are not as 
well-maintained or monitored as OpenSSL. Modern software development is mostly 
a haphazard hodgepodge of accumulating software of uncertain provenance and 
little trustworthiness into enormous systems with unpredictable behavior and 
failure modes. I'm not sure OpenSSL versions should be particularly high on 
anyone's priority list.

What are you actually trying to accomplish? What's your task? Your threat model?

-- 
Michael Wojcik

Upgrading OpenSSL on Windows 10

2022-11-21 Thread Steven_M.irc via openssl-users
Hi All,
A few weeks ago I sent this e-mail to the group: 
https://mta.openssl.org/pipermail/openssl-users/2022-November/015613.html I 
received a couple of replies, but sadly I have been too busy to respond to 
them. Regardless, I need a bit more information please.

In one of the replies, Viktor said "Just upgrade any affected systems and 
you'll be fine.". However, I am running Windows 10, and since (unlike Linux) 
every piece of software outside of Windows itself needs to be updated 
individually, I don't know how to track down every single application that 
might be using OpenSSL and make sure that the copy of OpenSSL it uses is 
up-to-date. As many of you would know, under repository-based systems (such as 
most Linux distros), this would not be an issue as I could update every single 
application (system or non-system) at once.

For those of you who may be thinking "but Windows doesn't use OpenSSL": when 
the latest OpenSSL vulnerabilities were discovered, I asked a Windows IRC 
channel whether or not Windows uses OpenSSL; the reply was that Windows itself 
does not use it, but many applications running on Windows do.

Thank you all for your time.


Re: X52219/X448 export public key coordinates

2022-11-21 Thread ORNEST Matej - Contractor via openssl-users
Thanks for the explanation, that probably makes sense.

Thank you
Matt

From: Kyle Hamilton 
Date: Monday, 21 November 2022 12:46
To: ORNEST Matej - Contractor 
Cc: openssl-users 
Subject: Re: X52219/X448 export public key coordinates
The reason has to do with the type of curve representation. X25519 is typically 
represented in (I believe, but I'm not an expert and I haven't looked at the 
primary sources recently so take this with a grain of salt) Montgomery form. 
Its digital signature counterpart Ed25519 uses the same curve represented in 
Edwards form.

Conversely, the NIST curves are in Weierstrass form. The EC_KEY interface deals 
solely with Weierstrass form.

To my understanding, you can convert any curve to any representation. However, 
different forms can be acted on with different values at different levels of 
efficiency, which is why the different forms exist.

I hope this helps!

-Kyle H

On Fri, Nov 18, 2022, 11:47 ORNEST Matej - Contractor via openssl-users 
<openssl-users@openssl.org> wrote:
Yeah, of course, sorry for the typo. I've already found a solution that seems 
to be working by using EVP_PKEY_get_raw_public_key() for these types of curves. 
I was confused about why it doesn't work with the EC_KEY interfaces even though 
it's a type of elliptic curve. Then I found somewhere that it's implemented 
outside the context of EC. It's not clear to me why, but I believe there's a 
good reason for it.
Anyway, thanks for your answer!
Regards
Matt


On 18. 11. 2022, at 17:13, Kyle Hamilton <aerow...@gmail.com> wrote:

X25519?

On Mon, Nov 14, 2022, 05:23 ORNEST Matej - Contractor via openssl-users 
<openssl-users@openssl.org> wrote:
Hi all,

I need to implement support for X52219/X448 for DH key exchange (and 
Ed52219/Ed448 for DSA) elliptic curves in our project. I need to export the 
public key for DH exchange as a DER encoded chunk of the form 
tag+X-coordinate+Y-coordinate. Thus I need to get the EC_POINT from the 
EVP_PKEY and encode it as needed. I understand that those key types differ from 
EC types in that I need just the X coordinate and a flag bit to reconstruct the 
key, but still, how do I get the X coordinate?
My solution works for all other EC types such as the SecpX and Brainpool 
families, but not for X52219/X448 keys, and I do not completely understand why. 
Specifically, when I decode a public key previously encoded with i2d_PUBKEY() 
to an EVP_PKEY and try to get the EC_KEY by calling EVP_PKEY_get0_EC_KEY(), it 
returns NULL and issues an error that it's not an EC key…

I'm using the following code:


EVP_PKEY *key = … // Decode from DER encoded public key

if (key != NULL) {
    EC_KEY *ecKey = EVP_PKEY_get0_EC_KEY(key);
    /// When X52219 or X448 key is passed, ecKey is NULL
    if (ecKey != NULL) {
        const EC_POINT *point = EC_KEY_get0_public_key(ecKey);
        const EC_GROUP *group = EC_KEY_get0_group(ecKey);

        if (point != NULL && group != NULL) {
            BIGNUM *bnX = BN_new();
            BIGNUM *bnY = BN_new();

            if (EC_POINT_get_affine_coordinates(group, point, bnX, bnY, NULL)) {
                char *hexX = BN_bn2hex(bnX);
                char *hexY = BN_bn2hex(bnY);

                // Convert to custom data structures
                …
            }

            BN_free(bnX);
            BN_free(bnY);
        }
    }
}


Is there any way to export those key types in the desired format? I'm using 
OpenSSL version 1.1.1q.

Thank you very much for any hint
Matt


Re: X52219/X448 export public key coordinates

2022-11-18 Thread ORNEST Matej - Contractor via openssl-users
Yeah, of course, sorry for the typo. I've already found a solution that seems 
to be working by using EVP_PKEY_get_raw_public_key() for these types of curves. 
I was confused about why it doesn't work with the EC_KEY interfaces even though 
it's a type of elliptic curve. Then I found somewhere that it's implemented 
outside the context of EC. It's not clear to me why, but I believe there's a 
good reason for it.
Anyway, thanks for your answer!
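
For anyone finding this thread later, a minimal sketch of that approach 
(illustrative only, error handling trimmed; 'pkey' is the decoded EVP_PKEY):

#include <openssl/evp.h>
#include <openssl/crypto.h>

/* Query-then-fetch pattern for X25519/X448 (and Ed25519/Ed448) raw public
 * keys. Caller frees the returned buffer with OPENSSL_free(). */
static unsigned char *get_raw_pub(EVP_PKEY *pkey, size_t *outlen)
{
    unsigned char *buf = NULL;
    size_t len = 0;

    if (EVP_PKEY_get_raw_public_key(pkey, NULL, &len) != 1)
        return NULL;                       /* ask for the required length */
    if ((buf = OPENSSL_malloc(len)) == NULL)
        return NULL;
    if (EVP_PKEY_get_raw_public_key(pkey, buf, &len) != 1) {
        OPENSSL_free(buf);
        return NULL;
    }
    *outlen = len;                         /* 32 bytes for X25519, 56 for X448 */
    return buf;
}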

Regards
Matt

On 18. 11. 2022, at 17:13, Kyle Hamilton  wrote:


X25519?

On Mon, Nov 14, 2022, 05:23 ORNEST Matej - Contractor via openssl-users 
<openssl-users@openssl.org> wrote:
Hi all,

I need to implement support for X52219/X448 for DH key exchange (and 
Ed52219/Ed448 for DSA) elliptic curves in our project. I need to export the 
public key for DH exchange as a DER encoded chunk of the form 
tag+X-coordinate+Y-coordinate. Thus I need to get the EC_POINT from the 
EVP_PKEY and encode it as needed. I understand that those key types differ from 
EC types in that I need just the X coordinate and a flag bit to reconstruct the 
key, but still, how do I get the X coordinate?
My solution works for all other EC types such as the SecpX and Brainpool 
families, but not for X52219/X448 keys, and I do not completely understand why. 
Specifically, when I decode a public key previously encoded with i2d_PUBKEY() 
to an EVP_PKEY and try to get the EC_KEY by calling EVP_PKEY_get0_EC_KEY(), it 
returns NULL and issues an error that it's not an EC key…

I'm using the following code:


EVP_PKEY *key = … // Decode from DER encoded public key

if (key != NULL) {
    EC_KEY *ecKey = EVP_PKEY_get0_EC_KEY(key);
    /// When X52219 or X448 key is passed, ecKey is NULL
    if (ecKey != NULL) {
        const EC_POINT *point = EC_KEY_get0_public_key(ecKey);
        const EC_GROUP *group = EC_KEY_get0_group(ecKey);

        if (point != NULL && group != NULL) {
            BIGNUM *bnX = BN_new();
            BIGNUM *bnY = BN_new();

            if (EC_POINT_get_affine_coordinates(group, point, bnX, bnY, NULL)) {
                char *hexX = BN_bn2hex(bnX);
                char *hexY = BN_bn2hex(bnY);

                // Convert to custom data structures
                …
            }

            BN_free(bnX);
            BN_free(bnY);
        }
    }
}


Is there any way to export those key types in the desired format? I'm using 
OpenSSL version 1.1.1q.

Thank you very much for any hint
Matt


X52219/X448 export public key coordinates

2022-11-14 Thread ORNEST Matej - Contractor via openssl-users
Hi all,

I need to implement support for X52219/X448 for DH key exchange (and 
Ed52219/Ed448 for DSA) elliptic curves in our project. I need to export the 
public key for DH exchange as a DER encoded chunk of the form 
tag+X-coordinate+Y-coordinate. Thus I need to get the EC_POINT from the 
EVP_PKEY and encode it as needed. I understand that those key types differ from 
EC types in that I need just the X coordinate and a flag bit to reconstruct the 
key, but still, how do I get the X coordinate?
My solution works for all other EC types such as the SecpX and Brainpool 
families, but not for X52219/X448 keys, and I do not completely understand why. 
Specifically, when I decode a public key previously encoded with i2d_PUBKEY() 
to an EVP_PKEY and try to get the EC_KEY by calling EVP_PKEY_get0_EC_KEY(), it 
returns NULL and issues an error that it's not an EC key…

I'm using the following code:


EVP_PKEY *key = … // Decode from DER encoded public key

if (key != NULL) {
    EC_KEY *ecKey = EVP_PKEY_get0_EC_KEY(key);
    /// When X52219 or X448 key is passed, ecKey is NULL
    if (ecKey != NULL) {
        const EC_POINT *point = EC_KEY_get0_public_key(ecKey);
        const EC_GROUP *group = EC_KEY_get0_group(ecKey);

        if (point != NULL && group != NULL) {
            BIGNUM *bnX = BN_new();
            BIGNUM *bnY = BN_new();

            if (EC_POINT_get_affine_coordinates(group, point, bnX, bnY, NULL)) {
                char *hexX = BN_bn2hex(bnX);
                char *hexY = BN_bn2hex(bnY);

                // Convert to custom data structures
                …
            }

            BN_free(bnX);
            BN_free(bnY);
        }
    }
}


Is there any way to export those key types in the desired format? I'm using 
OpenSSL version 1.1.1q.

Thank you very much for any hint
Matt


Fw:OpenSSL AES Decryption fails randomly C++

2022-11-12 Thread WuJinze via openssl-users
Sorry for my mistake. I found that the gist URL does not display well in mail, 
so here is the 
URL: https://gist.github.com/GoGim1/77c9bebec1cc71cea066515b4623a051




WuJinze
294843...@qq.com








--- Original Message ---
From: "WuJinze" <294843...@qq.com>
Date: Sat, Nov 12, 2022 06:17 PM
To: "openssl-users"

OpenSSL AES Decryption fails randomly C++

2022-11-12 Thread WuJinze via openssl-users
Dear OpenSSL Group,
Greetings. I was working on writing a simple AES encrypt/decrypt wrapper 
function in C++ and ran into a strange problem. The minimal reproducible 
example in the gist seems to work fine, but when I uncomment lines 90-92, it 
fails to decrypt randomly. Can someone help me figure out what's wrong with 
the code? Here is my code: OpenSSL AES Decryption fails randomly C++ 
(github.com). OpenSSL version is OpenSSL 1.1.1f. G++ version is 9.4.0.
Regards, Jinze

Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-11 Thread Jakob Bohm via openssl-users

On 2022-11-06 23:14, raf via openssl-users wrote:

On Sat, Nov 05, 2022 at 02:22:55PM +, Michael Wojcik 
 wrote:


From: openssl-users  On Behalf Of raf via
openssl-users
Sent: Friday, 4 November, 2022 18:54

On Wed, Nov 02, 2022 at 06:29:45PM +, Michael Wojcik via openssl-users
 wrote:


I'm inclined to agree. While there's an argument for backward compatibility,
C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is
younger than C99. It doesn't seem like an unreasonable requirement.

Would this be a choice between backwards-compatibility with C90
compilers and compatibility with 32-bit architectures?

I don't see how.

It's a question of the C implementation, not the underlying
architecture. A C implementation for a 32-bit system can certainly
provide a 64-bit integer type. If that C implementation conforms to
C99 or later, it ought to do so using long long and unsigned long
long. (I'm excluding C implementations for exotic systems where, for
example, CHAR_BIT != 8, such as some DSPs; those aren't going to be
viable targets for OpenSSL anyway.)


Is there another way to get 64-bit integers on 32-bit systems?

Sure. There's a standard one, which is to include  and
use int64_t and uint64_t. That also requires C99 or later and an
implementation which provides those types; they're not required.

Sorry. I assumed that it was clear from context that I was only
thinking about C90-compliant 64-bit integers on 32-bit systems.


And for some implementations there are implementation-specific
extensions, which by definition are not standard.

And you can roll your own. In a non-OO language like C, this would
be intrusive for the parts of the source base that rely on a 64-bit
integer type.


I suspect that that there are more 32-bit systems than there are
C90 compilers.

Perhaps, but I don't think it's relevant here. In any case, OpenSSL is
not in the business of supporting every platform and C implementation
in existence. There are the platforms supported by the project, and
there are contributed platforms which are included in the code base
and supported by the community (hopefully), and there are unsupported
platforms.

If someone wants OpenSSL on an unsupported platform, then it's up to
them to do the work.

So it sounds like C90 is now officially unsupported.
I got the impression that, before this thread, it was believed
that C90 was supported, and the suggestion of a pull request
indicated a willingness to retain/return support for C90.
Perhaps it just indicated a willingness to accept community
support for it.

I'd be amazed if anyone could actually still be using a
30 year old C90 compiler, rather than a compiler that
just gives warnings about C90. :-)


Regarding C90 compilers, it is important to realize that some system
vendors kept providing (arbitrarily extended) C90 compilers long after
1999.  Microsoft is one example, with many of their system compilers
for "older" OS versions being based on Microsoft's C90 compilers.
 These compilers did not provide a good stdint.h, but might be coaxed
to load a porter-provided stdint.h that maps int64_t and uint64_t to
their vendor-specific C90 extensions (named __int64 and unsigned __int64).
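
For illustration only, such a porter-provided stdint.h might be little more than
the sketch below; the exact typedef set OpenSSL needs, and the sanity of the
vendor's __int64, are assumptions:

/* porter-provided stdint.h sketch for an extended C90 compiler that offers
 * __int64 as a vendor extension (assumption) */
typedef __int64            int64_t;
typedef unsigned __int64   uint64_t;
typedef int                int32_t;
typedef unsigned int       uint32_t;
typedef short              int16_t;
typedef unsigned short     uint16_t;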

Even worse, I seem to recall at least one of those compilers miscompiling
64-bit integer arithmetic, but working acceptably with the older OpenSSL
1.0.x library implementations of stuff like bignums (BN) and various pure
C algorithm implementations in OpenSSL 1.0.x, which happened to do everything
by means of 32- and 16-bit types.

Part of our company business is to provide software for the affected
"older" systems, hence the desire to be able to compile OpenSSL 3.x with
options indicating "the compiler has no good integral types larger than
uint32_t, and floating point is also problematic".

Other major vendors with somewhat old C compilers include a few embedded
platforms such as older ARM and MIPS chips that were mass produced in
vast quantities.

Performance wise, using a newer compiler that implements int64_t etc. via
frequent library calls, while technically correct, is going to run
unnecessarily slow compared to having algorithms that actually use the
optimal integral sizes for the hardware/compiler combination.

I seem to recall using at least one bignum library (not sure if OpenSSL
or not) that could be configured to use uint32_t and uint16_t using the
same C code that combines uint64_t and uint32_t on newer hardware.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-06 Thread raf via openssl-users
On Sat, Nov 05, 2022 at 02:22:55PM +, Michael Wojcik 
 wrote:

> > From: openssl-users  On Behalf Of raf 
> > via
> > openssl-users
> > Sent: Friday, 4 November, 2022 18:54
> > 
> > On Wed, Nov 02, 2022 at 06:29:45PM +, Michael Wojcik via openssl-users
> >  wrote:
> > 
> > >
> > > I'm inclined to agree. While there's an argument for backward 
> > > compatibility,
> > > C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is
> > > younger than C99. It doesn't seem like an unreasonable requirement.
> > 
> > Would this be a choice between backwards-compatibility with C90
> > compilers and compatibility with 32-bit architectures?
> 
> I don't see how.
> 
> It's a question of the C implementation, not the underlying
> architecture. A C implementation for a 32-bit system can certainly
> provide a 64-bit integer type. If that C implementation conforms to
> C99 or later, it ought to do so using long long and unsigned long
> long. (I'm excluding C implementations for exotic systems where, for
> example, CHAR_BIT != 8, such as some DSPs; those aren't going to be
> viable targets for OpenSSL anyway.)
> 
> > Is there another way to get 64-bit integers on 32-bit systems?
> 
> Sure. There's a standard one, which is to include  and
> use int64_t and uint64_t. That also requires C99 or later and an
> implementation which provides those types; they're not required.

Sorry. I assumed that it was clear from context that I was only
thinking about C90-compliant 64-bit integers on 32-bit systems.

> And for some implementations there are implementation-specific
> extensions, which by definition are not standard.
> 
> And you can roll your own. In a non-OO language like C, this would
> be intrusive for the parts of the source base that rely on a 64-bit
> integer type.
> 
> > I suspect that that there are more 32-bit systems than there are
> > C90 compilers.
> 
> Perhaps, but I don't think it's relevant here. In any case, OpenSSL is
> not in the business of supporting every platform and C implementation
> in existence. There are the platforms supported by the project, and
> there are contributed platforms which are included in the code base
> and supported by the community (hopefully), and there are unsupported
> platforms.
> 
> If someone wants OpenSSL on an unsupported platform, then it's up to
> them to do the work.

So it sounds like C90 is now officially unsupported.
I got the impression that, before this thread, it was believed
that C90 was supported, and the suggestion of a pull request
indicated a willingness to retain/return support for C90.
Perhaps it just indicated a willingness to accept community
support for it.

I'd be amazed if anyone could actually still be using a
30 year old C90 compiler, rather than a compiler that
just gives warnings about C90. :-)

> -- 
> Michael Wojcik

cheers,
raf



Re: TLS 1.3 Early data

2022-11-05 Thread Benjamin Kaduk via openssl-users
On Sat, Nov 05, 2022 at 11:50:18AM +0100, Dirk Menstermann wrote:
> Hello,
> 
> I did a few experiments with early data but was not successful in solving my
> exotic use case: "Using early data dependent on the SNI"
> 
> I control the server (linux, supports http2) based on OpenSSL 1.1.1q and use a
> recent firefox as client:
> 
> 1) Setting SSL_CTX_set_max_early_data in the SSL_CTX* works (FF sends early 
> data)
> 2) Setting SSL_set_max_early_data on the just created SSL* works (FF sends 
> early
> data)
> 3) Setting SSL_set_max_early_data in the SNI callback during the handshake 
> does
> not work (FF does not send early data)
> 
> I guess there is a dirty way to "peek" into the client hello and parse it
> without OpenSSL, extracting the SNI and make it then like in 2), but I wonder 
> if
> there is a better way.
> 
> Any idea?

The SNI callback runs far too late for this purpose (and, to be honest, a lot of
other purposes).  You should be able to use the client_hello callback for it,
though 
(https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_client_hello_cb.html).

Note that SSL_get_servername() does not provide something useful within the
client hello callback execution and you'll have to do something like
https://github.com/openssl/openssl/blob/master/test/helpers/handshake.c#L146-L198
in order to access the provided SNI value from the client.

-Ben
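
A rough, untested sketch of that client_hello callback approach; the SNI parsing
is deliberately minimal (a single host_name entry is assumed), and wanted_host
and EARLY_DATA_LIMIT are illustrative names, not OpenSSL API:

#include <string.h>
#include <openssl/ssl.h>

#define EARLY_DATA_LIMIT 16384        /* assumed application-chosen limit */

static int client_hello_cb(SSL *s, int *al, void *arg)
{
    const unsigned char *ext;
    size_t extlen;
    const char *wanted_host = arg;

    (void)al;
    if (SSL_client_hello_get0_ext(s, TLSEXT_TYPE_server_name, &ext, &extlen)
            && extlen > 5) {
        /* skip server_name_list length (2), name type (1), name length (2) */
        const char *name = (const char *)ext + 5;
        size_t namelen = extlen - 5;

        if (strlen(wanted_host) == namelen
                && memcmp(name, wanted_host, namelen) == 0)
            SSL_set_max_early_data(s, EARLY_DATA_LIMIT);
    }
    return SSL_CLIENT_HELLO_SUCCESS;
}

/* Registration, somewhere during SSL_CTX setup:
 *   SSL_CTX_set_client_hello_cb(ctx, client_hello_cb, (void *)"example.com");
 */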


RE: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-05 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of raf via
> openssl-users
> Sent: Friday, 4 November, 2022 18:54
> 
> On Wed, Nov 02, 2022 at 06:29:45PM +, Michael Wojcik via openssl-users
>  wrote:
> 
> >
> > I'm inclined to agree. While there's an argument for backward compatibility,
> > C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is
> > younger than C99. It doesn't seem like an unreasonable requirement.
> 
> Would this be a choice between backwards-compatibility with C90
> compilers and compatibility with 32-bit architectures?

I don't see how.

It's a question of the C implementation, not the underlying architecture. A C 
implementation for a 32-bit system can certainly provide a 64-bit integer type. 
If that C implementation conforms to C99 or later, it ought to do so using long 
long and unsigned long long. (I'm excluding C implementations for exotic 
systems where, for example, CHAR_BIT != 8, such as some DSPs; those aren't 
going to be viable targets for OpenSSL anyway.)

> Is there another way to get 64-bit integers on 32-bit systems?

Sure. There's a standard one, which is to include  and use int64_t 
and uint64_t. That also requires C99 or later and an implementation which 
provides those types; they're not required.

And for some implementations there are implementation-specific extensions, 
which by definition are not standard.

And you can roll your own. In a non-OO language like C, this would be intrusive 
for the parts of the source base that rely on a 64-bit integer type.

> I suspect that that there are more 32-bit systems than there are
> C90 compilers.

Perhaps, but I don't think it's relevant here. In any case, OpenSSL is not in 
the business of supporting every platform and C implementation in existence. 
There are the platforms supported by the project, and there are contributed 
platforms which are included in the code base and supported by the community 
(hopefully), and there are unsupported platforms.

If someone wants OpenSSL on an unsupported platform, then it's up to them to do 
the work.

-- 
Michael Wojcik


Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-04 Thread raf via openssl-users
On Wed, Nov 02, 2022 at 06:29:45PM +, Michael Wojcik via openssl-users 
 wrote:

> > From: openssl-users  On Behalf Of Phillip
> > Susi
> > Sent: Wednesday, 2 November, 2022 11:45
> > 
> > The only thing to fix is don't put your compiler in strict C90 mode.
> 
> I'm inclined to agree. While there's an argument for backward compatibility,
> C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is
> younger than C99. It doesn't seem like an unreasonable requirement.
> 
> But as Tomas wrote, anyone who thinks it is can submit a pull request.
> 
> -- 
> Michael Wojcik

Would this be a choice between backwards-compatibility with C90
compilers and compatibility with 32-bit architectures?

Is there another way to get 64-bit integers on 32-bit systems?

I suspect that that there are more 32-bit systems than there are
C90 compilers.

cheers,
raf



RE: OpenSSL 3.0.7 make failure on Debian 10 (buster)

2022-11-04 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Matt
> Caswell
> Sent: Friday, 4 November, 2022 06:43
> 
> This looks like something environmental rather than a problem with
> OpenSSL itself. /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h
> is clearly a system include file, trying to include some other system
> include file ("recurse down to the real one") which it is failing to find.

Specifically, limits.h is part of the C standard library (see e.g. ISO 
9899:1999 7.10). This is a GCC issue; there's something wrong with John's GCC 
installation, or how his environment configures it.

GCC often appears to have adopted "too clever by half" as a design goal.

-- 
Michael Wojcik


Re: Output buffer length in EVP_EncryptUpdate for ECB mode

2022-11-04 Thread Wiktor Kwapisiewicz via openssl-users

Matt,

EVP_EncryptUpdate() can be called repeatedly, incrementally feeding in 
the data to be encrypted. The ECB mode (when used with AES-128) will 
encrypt input data 16 bytes at a time, and the output size will also be 
16 bytes per input block. If the data that you feed in to 
EVP_EncryptUpdate() is not a multiple of 16 bytes then the amount of 
data that is over a multiple of 16 bytes will be cached until a 
subsequent call where it does have 16 bytes.


Let's say you call EVP_EncryptUpdate() with 15 bytes of data. In that 
case all 15 bytes will be cached and 0 bytes will be output.


If you then call it again with 17 bytes of data, then added to the 15 
bytes already cached we have a total of 32 bytes. This is a multiple of 
16, so 2 blocks (32 bytes) will be output, so:


(inl + cipher_block_size - 1) = (17 + 16 - 1) = 32


This explanation makes perfect sense. Thank you!

The context in which I asked is that the rust-openssl wrapper always requires the 
output buffer to be at least as big as the input buffer + the cipher's 
block size [0] (assuming the pessimistic case). That is, even if I always 
feed EVP_EncryptUpdate with blocks exactly 16 bytes long, the wrapper 
requires 32-byte output buffers, while, based on your description, 16-byte 
output buffers should be sufficient.


Thank you for your time!

Kind regards,
Wiktor

[0]: https://docs.rs/openssl/latest/src/openssl/cipher_ctx.rs.html#504
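
As a concrete illustration of the caching behaviour described above, a small
untested sketch (key is assumed to be 16 bytes, in at least 32 bytes; error
checking omitted):

#include <openssl/evp.h>

/* Sketch of the 15-byte-then-17-byte EVP_EncryptUpdate() sequence. */
static void ecb_caching_demo(const unsigned char *key, const unsigned char *in)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    unsigned char out[32 + 16];                     /* inl + block size is enough */
    int outl = 0;

    EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
    EVP_EncryptUpdate(ctx, out, &outl, in, 15);      /* outl == 0, 15 bytes cached */
    EVP_EncryptUpdate(ctx, out, &outl, in + 15, 17); /* outl == 32, two blocks out */
    EVP_CIPHER_CTX_free(ctx);
}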


RE: SSL_read empty -> close?

2022-11-03 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Thursday, 3 November, 2022 10:43
> >
> > And your description looks wrong anyway: shutdown(SHUT_RD) has
> > implementation-defined behavior for TCP sockets (because TCP does not
> > announce the read side of half-close to the peer), and on Linux causes
> > blocked receives and subsequent receives to return 0 (according to 
> > references
> 
> perl -MSocket -MIO::Socket::INET -e'my $s = IO::Socket::INET->new( Server =>
> 1, Listen => 1 ) or die; my $port = $s->sockport(); my $c = IO::Socket::INET-
> >new("localhost:$port") or die; syswrite $c, "hello"; my $sc = $s->accept();
> shutdown($sc, SHUT_RD); sysread $sc, my $buf, 512 or die $!; print $buf'
> 
> ^^ The above, I believe, demonstrates to the contrary: the read buffer is
> populated prior to shutdown and drained afterward.

As I noted, I hadn't tested it. The Linux man page is ambiguous:

   If how is SHUT_RD, further receptions will be disallowed.

It doesn't define "receptions". It's entirely possible that SHUT_RD will cause 
the stack to reject further application data (i.e. packets that increment the 
sequence number for anything other than ACK) from the peer, but permit the 
socket owner to continue to receive already-buffered data. That's arguably a 
poor implementation, and not what the man page appears to imply. And it looks 
to be in conflict with the Single UNIX Specification Issue 7 (not that Linux 
claims to be UNIX-conformant), which states that SHUT_SD "Disables further 
receive operations"; "operations" certainly seems to refer to actions taken by 
the caller, not by the peer.

There is a fair bit of debate about this online, and a number of people opine 
that the Linux behavior is correct, and SUS (they often refer to "POSIX", but 
POSIX has been superseded by SUS) is wrong. Others disagree.

The Linux kernel does take some action for a TCP socket that has SHUT_RD 
requested for it, but the behavior is not simple. (One SO comment mentions it 
causes it to exit the read loop in tcp_splice_read(), for example.) I'd be 
leery about relying on it.

I'm not sure how shutdown(SHUT_RD) is useful in the case of a TCP socket being 
used for TLS, to be perfectly honest. If the application protocol delimits 
messages properly and is half-duplex (request/response), then one side should 
know that no more data is expected and the other can detect incomplete 
messages, so there's likely no issue. If not, there's no way to guarantee you 
haven't encountered an incomplete message in bounded time (the FLP theorem 
applies). SHUT_RD does not signal the peer, so the peer can still get a RST if 
it continues to send. Perhaps I'm missing something, but I don't see what 
failure mode is being avoided by using SHUT_RD.

-- 
Michael Wojcik


RE: SSL_read empty -> close?

2022-11-03 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Thursday, 3 November, 2022 08:51
> 
> You probably know this, but: On Linux, at least, if a TCP socket close()s
> with a non-empty read buffer, the kernel sends TCP RST to the peer.

Yes, that's a conditional-compliance (SHOULD) requirement from the Host 
Requirements RFC. See RFC 1122, 4.2.2.13.

> Some
> applications “panic” when they receive the RST and discard data.

Well, applications do a lot of things. Receiving an RST informs the peer that 
some of the data they sent was not successfully processed by the local 
application, so treating that as an error condition is not inappropriate.

But generally it's better if the application protocol imposes its own record 
structure and control information on top of TCP's very basic stream.

> It’s a rare
> issue, but when it does it’s a head-scratcher. To avoid that, it’s necessary
> to shutdown(SHUT_RD) then drain the read buffer before close().

Well, it's not *necessary* to do a half-close. Applications often know when 
they've received all the data the peer intends to send, thanks to 
record-delimiting mechanisms in the application protocol.

And your description looks wrong anyway: shutdown(SHUT_RD) has 
implementation-defined behavior for TCP sockets (because TCP does not announce 
the read side of half-close to the peer), and on Linux causes blocked receives 
and subsequent receives to return 0 (according to references -- I haven't tested 
it), which means after shutdown(SHUT_RD) you *can't* drain the receive buffer. 
shutdown(SHUT_WR) would work, since it sends a FIN, telling the peer you won't 
be sending any more data, and still allows you to receive.

> So it seems like this *shouldn’t* be obscure, if applications do the
> shutdown/drain thing.

It's obscure in the sense that a great many people trying to use TLS get much 
more basic things wrong.

More generally, the OpenSSL documentation mostly covers the OpenSSL APIs, and 
leaves networking up to the OpenSSL consumer to figure out. The OpenSSL wiki 
covers topics that people have written, and those are going to focus on common 
questions and areas of particular interest for someone. If the interactions 
among the OpenSSL API, the TLS protocol (in its various versions), and the 
shutdown system call haven't historically been a problem for many people, then 
it's "obscure" in the literal sense of not having 
attracted much notice.

And in practice, the majority of TLS use is with HTTP, and HTTP does a fairly 
good job of determining when more data is expected, and handling cases where it 
isn't. An HTTP client that receives a complete response and then attempts to 
use the conversation for its next request, and gets an RST on that, for 
example, will just open a new conversation; it doesn't care that the old one 
was terminated. HTTP servers are similarly tolerant because interactive user 
agents in particular cancel requests by closing (or, unfortunately, aborting) 
the connection all the time.

> I would guess that many don’t and just don’t see the
> RST thing frequently enough to worry about it. Regardless, the documentation
> is already pretty voluminous, so if this doesn’t bite many folks, then hey.

Yes, but wiki articles are always appreciated.

-- 
Michael Wojcik
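
For reference, a bare-bones sketch of the shutdown(SHUT_WR)-then-drain pattern
discussed above (plain POSIX sockets, error handling omitted):

#include <sys/socket.h>
#include <unistd.h>

/* Sketch: announce we are done sending, drain whatever the peer still
 * sends, then close -- avoids the RST-on-close problem described above. */
static void graceful_close(int fd)
{
    char buf[4096];
    ssize_t n;

    shutdown(fd, SHUT_WR);                      /* sends FIN; receiving still works */
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
        ;                                       /* discard (or process) remaining data */
    close(fd);
}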


Output buffer length in EVP_EncryptUpdate for ECB mode

2022-11-03 Thread Wiktor Kwapisiewicz via openssl-users

Hello,

I'd like to clarify one aspect of the API regarding EVP_EncryptUpdate
[0] that is the length of the output buffer that should be passed to
that function ("out" parameter). (Actually I'm using EVP_CipherUpdate 
but the docs are more comprehensive for EVP_EncryptUpdate).


[0]: https://www.openssl.org/docs/manmaster/man3/EVP_EncryptUpdate.html

For the record I'm using AES-128 cipher in ECB mode and the docs say:


For most ciphers and modes, the amount of data written can be
anything from zero bytes to (inl + cipher_block_size - 1) bytes. For
wrap cipher modes, the amount of data written can be anything from
zero bytes to (inl + cipher_block_size) bytes. For stream ciphers,
the amount of data written can be anything from zero bytes to inl
bytes.


AES-128-ECB doesn't appear to be a stream cipher (since the "block size" 
returns 16 not the magical value of 1) and I'm unable to find any 
mentions of "wrap cipher modes" in search engines. Apparently ECB is a 
block cipher mode.


Does that mean that "wrap cipher modes" == "block cipher modes"?

Is there any documentation I could read on the reasoning of why a space 
for additional block is needed in this case ("(inl + cipher_block_size) 
bytes")? I'm trying to understand the differences between OpenSSL and 
other cryptographic backends in an OpenPGP library [1].


Thank you for your time and help!

Kind regards,
Wiktor

[1]: 
https://gitlab.com/sequoia-pgp/sequoia/-/merge_requests/1361#note_1150958453


RE: SSL_read empty -> close?

2022-11-03 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Thursday, 3 November, 2022 07:42
> 
> It sounds, then like shutdown() (i.e., TCP half-close) is a no-no during a
> TLS session.

Um, maybe. Might generally be OK in practice, particularly with TLSv1.3, which 
got rid of some of the less-well-considered ideas of earlier TLS versions. 
Honestly I'd have to spend some time digging through chapter & verse of the 
RFCs to arrive at any reliable opinion on the matter, though. Someone else here 
may have already considered it.

> Does OpenSSL’s documentation mention that? (I’m not exhaustively
> familiar with it, but I don’t remember having seen such.)

I doubt it. I don't see anything on the wiki, and this is a pretty obscure 
issue, all things considered.

> It almost seems like, given that TLS notify-close then TCP close() (i.e.,
> without awaiting the peer’s TLS notify-close) is legitimate, OpenSSL could
> gainfully tolerate/hide the EPIPE that that close() likely produces, and have
> SSL_read() et al just return empty-string.

Well, it could, but OpenSSL generally doesn't try to provide that type of 
abstraction.

Also note this paragraph from the wiki page on TLSv1.3 
(https://wiki.openssl.org/index.php/TLS1.3):

   If a client sends it's [sic] data and directly sends the close
   notify request and closes the connection, the server will still
   try to send tickets if configured to do so. Since the connection
   is already closed by the client, this might result in a write
   error and receiving the SIGPIPE signal. The write error will be
   ignored if it's a session ticket. But server applications can
   still get SIGPIPE they didn't get before.

So session tickets can also be a source of EPIPE when a client closes the 
connection.

> It surprises me that notify-close then close() is considered legitimate use.

There are so many TLS implementations and TLS-using applications out there that 
interoperability would be hugely compromised if we didn't allow a large helping 
of Postel's Interoperability Principle. So most applications try to be 
accommodating. There's even an OpenSSL flag to ignore the case where a peer 
closes without sending a close-notify, in case you run into one of those and 
want to suppress the error.

-- 
Michael Wojcik


RE: Worried about the vulnerabilities recently found in OpenSSL versions 3.0.0 - 3.0.6.

2022-11-03 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of
> Steven_M.irc via openssl-users
> Sent: Wednesday, 2 November, 2022 17:18
> 
> I'm really worried about the vulnerabilities recently found in OpenSSL
> versions 3.0.0 - 3.0.6.

Why? What's your threat model?

> If I understand things correctly (and please do
> correct me if I'm wrong), it doesn't matter which version of OpenSSL clients
> are running, only which version of OpenSSL *servers* are running. Thus it
> seems like end-users can do very little to protect themselves.

Protect themselves from what?

Take the most recent issues, CVE-2022-3786 and -3602. 3786 is a potential 
4-byte buffer overflow when parsing an email address component of a 
distinguished name in a certificate. (Note, contrary to what you wrote above, 
this could affect both servers and clients, since it would be triggered by 
parsing a malformed certificate.) This is probably not exploitable, per the 
OpenSSL blog post and analyses performed elsewhere, but let's imagine the worst 
case: OpenSSL 3.0.6 running on some platform where it's possible to leverage 
this BOF into an RCE.

If that's a server system, then:
1) If the server doesn't request client certificates, it should reject a 
Certificate message from the client, and not try to parse any, so there's no 
exposure.
2) We'll assume *you* aren't going to send a malicious certificate, so for your 
connection the vulnerability is irrelevant.
3) So the only case we care about is where some other actor sends a malicious 
certificate and chains the RCE with other attacks to pivot and escalate and 
subvert the server. We're on a pretty narrow branch of the attack tree here, 
and more importantly, the same could be true of a vast array of potential 
vulnerabilities in the server site. This is only an issue if an attacker can't 
find any other more useful vulnerability in the site. If you pay attention to 
IT security, you know *that* isn't likely.

If it's a client system, then you only care if it's *your* client, and you 
visit a malicious site. If you're in the habit of using OpenSSL 3.0.6 to 
connect to malicious servers, well, 3786 is not likely to be high on your list 
of problems.

3602 is even less likely to be exploitable.

Vulnerabilities are only meaningful in the context of a threat model. I don't 
see a plausible threat model where these should matter to a client-side end 
user.

-- 
Michael Wojcik


How to upgrade openssl from 3.0.2 to 3.0.7

2022-11-02 Thread Anupam Dutta via openssl-users
Hi Team,

I want to upgrade the openssl version from 3.0.2 to 3.0.7. My OS version is
Ubuntu 22.04.1 LTS (Jammy Jellyfish). Please help. It is urgent.

Regards,
Anupam


Auto-reply: Re: Worried about the vulnerabilities recently found in OpenSSL versions 3.0.0 - 3.0.6.

2022-11-02 Thread kjjhh7 via openssl-users
This is an automatic reply. Your message has been received; I will respond as soon as possible.

Worried about the vulnerabilities recently found in OpenSSL versions 3.0.0 - 3.0.6.

2022-11-02 Thread Steven_M.irc via openssl-users
Hi All,
I'm really worried about the vulnerabilities recently found in OpenSSL versions 
3.0.0 - 3.0.6. If I understand things correctly (and please do correct me if 
I'm wrong), it doesn't matter which version of OpenSSL clients are running, 
only which version of OpenSSL *servers* are running. Thus it seems like 
end-users can do very little to protect themselves. For example, how can an 
end-user tell if a website they're visiting is using a safe or an unsafe 
version of OpenSSL?

I did try putting my bank's website through an SSL tester (www.ssllabs.com), 
but I couldn't find an easy way to determine which version of OpenSSL they're 
running. I did get a protocol report, which read as follows:
TLS 1.3 Yes
TLS 1.2 Yes
TLS 1.1 No
TLS 1.0 No
SSL 3 No
SSL 2 No

However, I don't know if any of those protocol version numbers give any 
indication as to the OpenSSL version number(s)?

Any advice would be greatly appreciated.

Many thanks,
Steven_M



Sent with Proton Mail secure email.


RE: SSL_read empty -> close?

2022-11-02 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Wednesday, 2 November, 2022 12:46
> 
> I wouldn’t normally expect EPIPE from a read operation. I get why it happens;
> it just seems odd. Given that it’s legitimate for a TLS peer to send the
> close_notify and then immediately do TCP close, it also seems like EPIPE is a
> “fact of life” here.

Yeah. That's because an OpenSSL "read" operation can do sends under the covers, 
and an OpenSSL "send" can do receives, in order to satisfy the requirements of 
TLS. Depending on the TLS version and cipher suite being used, it might need to 
do that for renegotiation or the like. Or if the socket is non-blocking you can 
get WANT_READ from a send and WANT_WRITE from a receive.

In your example it was actually a sendmsg that produced the EPIPE, but within 
the logical "read" operation.

The original idea of SSL was "just be a duplex bytestream service for the 
application", i.e. be socket-like; but that abstraction proved to be rather 
leaky. Much as sockets themselves are a leaky abstraction once you try to do 
anything non-trivial.

-- 
Michael Wojcik
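
A rough sketch of what that means for the caller of a non-blocking SSL_read();
the surrounding poll/select loop is assumed to exist elsewhere:

#include <openssl/ssl.h>

/* Sketch: a logical SSL_read() may ask to wait for either direction. */
static int handle_read(SSL *ssl, char *buf, int buflen)
{
    int n = SSL_read(ssl, buf, buflen);

    if (n > 0)
        return n;                               /* got application data        */
    switch (SSL_get_error(ssl, n)) {
    case SSL_ERROR_WANT_READ:   return 0;       /* wait for readable, retry    */
    case SSL_ERROR_WANT_WRITE:  return 0;       /* wait for writable, retry    */
    case SSL_ERROR_ZERO_RETURN: return -1;      /* peer sent close_notify      */
    default:                    return -2;      /* SYSCALL / SSL: treat fatal  */
    }
}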


Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-02 Thread Dennis Clarke via openssl-users

On 11/2/22 18:29, Michael Wojcik via openssl-users wrote:

From: openssl-users  On Behalf Of Phillip
Susi
Sent: Wednesday, 2 November, 2022 11:45

The only thing to fix is don't put your compiler in strict C90 mode.


I'm inclined to agree. While there's an argument for backward compatibility, 
C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is younger 
than C99. It doesn't seem like an unreasonable requirement.

But as Tomas wrote, anyone who thinks it is can submit a pull request.




The more that I dig into this and look at the new OpenSSL 3.x the
more I am inclined to think C99 is good enough. Everywhere. Also I doubt
that the age of the thing matters much. The portability does.

Now I await, in a flame-proof suit, for someone to yell "rewrite it
all in rust!"  Not bloody likely.


--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional


RE: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-02 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Phillip
> Susi
> Sent: Wednesday, 2 November, 2022 11:45
> 
> The only thing to fix is don't put your compiler in strict C90 mode.

I'm inclined to agree. While there's an argument for backward compatibility, 
C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is younger 
than C99. It doesn't seem like an unreasonable requirement.

But as Tomas wrote, anyone who thinks it is can submit a pull request.

-- 
Michael Wojcik


Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-02 Thread Dennis Clarke via openssl-users

On 11/2/22 07:30, Tomas Mraz wrote:

No, long long and unsigned long long is required and it was required
for quite some time. The code is mostly C90 but not strictly.

I suppose on platforms with 64bit long type we could make it work
without long long though. Pull requests are welcome.

Tomas Mraz, OpenSSL




So fix it?

Feels like we are just going around and around in circles here :


Strict C90 CFLAGS results in sha.h:91 ISO C90 does not support long long
https://github.com/openssl/openssl/issues/10547


OPENSSL_strnlen SIGSEGV in o_str.c line 76
https://github.com/openssl/openssl/issues/8048



So the code is *mostly* C90 but not really. Got it.
Certainly worth looking at.




--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional


RE: upgrade openssl 3.0.2 to 3.0.7

2022-11-02 Thread Dr. Matthias St. Pierre via openssl-users
Anupam,

please don’t attempt to install an openssl version which you built yourself on 
your Linux system, it might break your applications. Your Linux distribution 
(Ubuntu) installs its own compiled versions, which you can upgrade using its 
package manager (apt).

Regards,

Matthias


From: openssl-users  On Behalf Of Anupam 
Dutta via openssl-users
Sent: Wednesday, November 2, 2022 9:12 AM
To: openssl-users@openssl.org
Subject: upgrade openssl 3.0.2 to 3.0.7

Hi Team,

I want to upgrade openssl from 3.0.2 to 3.0.7. I have downloaded 3.0.7 from 
https://www.openssl.org/source and installed it successfully. But it is still 
showing version 3.0.2. Please help. It's urgent.

My OS: 22.04.1 LTS (Jammy Jellyfish)

Regards,
Anupam


smime.p7s
Description: S/MIME cryptographic signature


upgrade openssl 3.0.2 to 3.0.7

2022-11-02 Thread Anupam Dutta via openssl-users
Hi Team,

I want to upgrade openssl from 3.0.2 to 3.0.7. I have downloaded 3.0.7 from
https://www.openssl.org/source and installed it successfully. But it is still
showing version 3.0.2. Please help. It's urgent.

My OS: 22.04.1 LTS (Jammy Jellyfish)

Regards,
Anupam


Auto-reply: Re: issues with OpenSSL 1.1.1n

2022-11-01 Thread kjjhh7 via openssl-users
This is an automatic reply. Your message has been received; I will respond as soon as possible.

an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-01 Thread Dennis Clarke via openssl-users



Good day :

 This always bites me when I try strict C90 :

In file included from include/openssl/x509.h:41,
 from apps/include/apps.h:29,
 from apps/lib/app_libctx.c:10:
include/openssl/sha.h:106:37: error: ISO C90 does not support 'long 
long' [-Wlong-long]

  106 | #   define SHA_LONG64 unsigned long long
  | ^~~~
include/openssl/sha.h:110:5: note: in expansion of macro 'SHA_LONG64'
  110 | SHA_LONG64 h[8];
  | ^~
include/openssl/sha.h:106:37: error: ISO C90 does not support 'long 
long' [-Wlong-long]

  106 | #   define SHA_LONG64 unsigned long long
  | ^~~~
include/openssl/sha.h:111:5: note: in expansion of macro 'SHA_LONG64'
  111 | SHA_LONG64 Nl, Nh;
  | ^~
include/openssl/sha.h:106:37: error: ISO C90 does not support 'long 
long' [-Wlong-long]

  106 | #   define SHA_LONG64 unsigned long long
  | ^~~~
include/openssl/sha.h:113:9: note: in expansion of macro 'SHA_LONG64'
  113 | SHA_LONG64 d[SHA_LBLOCK];
  | ^~
gmake[1]: *** [Makefile:3989: apps/lib/libapps-lib-app_libctx.o] Error 1
gmake[1]: Leaving directory '/opt/bw/build/openssl-3.0.7_debian_ppc64.002'
make: *** [Makefile:2958: build_sw] Error 2


etc etc ...

I can just as neatly go to C11 or some such but I thought the whole code
 base was C90 clean ?  At least it was.




--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional


stunnel 5.67 released

2022-11-01 Thread Michał Trojnara via openssl-users

Dear Users,

I have released version 5.67 of stunnel.

### Version 5.67, 2022.11.01, urgency: HIGH
* Security bugfixes
  - OpenSSL DLLs updated to version 3.0.7.
* New features
  - Provided a logging callback to custom engines.
* Bugfixes
  - Fixed "make cert" with OpenSSL older than 3.0.
  - Fixed the code and the documentation to use conscious
    language for SNI servers (thx to Clemens Lang).

Home page: https://www.stunnel.org/
Download: https://www.stunnel.org/downloads.html

SHA-256 hashes:
3086939ee6407516c59b0ba3fbf555338f9d52f459bcab6337c0f00e91ea8456 
stunnel-5.67.tar.gz
a6bdc2a735eb34465d10e3c7e61f32d679ba29a68de8ea8034db79c0c8b328a3 
stunnel-5.67-win64-installer.exe
893f53d6647900eb34041be8f21a21c052a31de3fb393a97627021a1ef2752f5 
stunnel-5.67-android.zip

Best regards,
    Mike


OpenPGP_signature
Description: OpenPGP digital signature


OpenSSL Security Advisory

2022-11-01 Thread OpenSSL
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

OpenSSL Security Advisory [01 November 2022]


X.509 Email Address 4-byte Buffer Overflow (CVE-2022-3602)
===========================================================

Severity: High

A buffer overrun can be triggered in X.509 certificate verification,
specifically in name constraint checking. Note that this occurs
after certificate chain signature verification and requires either a
CA to have signed the malicious certificate or for the application to
continue certificate verification despite failure to construct a path
to a trusted issuer. An attacker can craft a malicious email address
to overflow four attacker-controlled bytes on the stack. This buffer
overflow could result in a crash (causing a denial of service) or
potentially remote code execution.

Many platforms implement stack overflow protections which would mitigate
against the risk of remote code execution. The risk may be further
mitigated based on stack layout for any given platform/compiler.

Pre-announcements of CVE-2022-3602 described this issue as CRITICAL.
Further analysis based on some of the mitigating factors described above
have led this to be downgraded to HIGH. Users are still encouraged to
upgrade to a new version as soon as possible.

In a TLS client, this can be triggered by connecting to a malicious
server. In a TLS server, this can be triggered if the server requests
client authentication and a malicious client connects.

OpenSSL versions 3.0.0 to 3.0.6 are vulnerable to this issue.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.7.

OpenSSL 1.1.1 and 1.0.2 are not affected by this issue.

This issue was reported to OpenSSL on 17th October 2022 by Polar Bear.
The fixes were developed by Dr Paul Dale.

We are not aware of any working exploit that could lead to code execution,
and we have no evidence of this issue being exploited as of the time of
release of this advisory (November 1st 2022).

X.509 Email Address Variable Length Buffer Overflow (CVE-2022-3786)
====================================================================

Severity: High

A buffer overrun can be triggered in X.509 certificate verification,
specifically in name constraint checking. Note that this occurs after
certificate chain signature verification and requires either a CA to
have signed a malicious certificate or for an application to continue
certificate verification despite failure to construct a path to a trusted
issuer. An attacker can craft a malicious email address in a certificate
to overflow an arbitrary number of bytes containing the `.' character
(decimal 46) on the stack. This buffer overflow could result in a crash
(causing a denial of service).

In a TLS client, this can be triggered by connecting to a malicious
server. In a TLS server, this can be triggered if the server requests
client authentication and a malicious client connects.

OpenSSL versions 3.0.0 to 3.0.6 are vulnerable to this issue.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.7.

OpenSSL 1.1.1 and 1.0.2 are not affected by this issue.

This issue was discovered on 18th October 2022 by Viktor Dukhovni while
researching CVE-2022-3602. The fixes were developed by Dr Paul Dale.

We have no evidence of this issue being exploited as of the time of
release of this advisory (November 1st 2022).

References
==========

URL for this Security Advisory:
https://www.openssl.org/news/secadv/20221101.txt

Note: the online version of the advisory may be updated with additional details
over time.

For details of OpenSSL severity classifications please see:
https://www.openssl.org/policies/secpolicy.html
-BEGIN PGP SIGNATURE-

iQJGBAEBCAAwFiEE3HAyZir4heL0fyQ/UnRmohynnm0FAmNhRdsSHHRvbWFzQG9w
ZW5zc2wub3JnAAoJEFJ0ZqIcp55tARIP/R4TFlh4N3wH4enjT74oJowxjmwNIu0q
uRTmmwtMwJOd1Nw0tfydVEtd3qaN/KMcMnnBMzIzvCdzQ202g8SRSzX7zeHZtAEe
idu9qQyQep1ECK7UGybdN+4Ahey30Py6J99okWejCmdHSpxo7+OOtADFdraqrV5A
5vwyojD1Iv95Z0/RqYxMmMBEoJZitsGxeraw1IxBJCqw6sL2WwDelGb9NZwKFee1
BrfeF+dwaXlAZ97Hsaai6ssDf8VOoTNbCDsrsnbo4MAbFAc6ZraynMcWMm9kwF96
y+pO+0P9etzWeHkP+qHAeCCHZqU76Rexr58XtuWQpTdmbPbmLpnwr7wgwBAZxHA0
RkhpR244vPLYrF3cIssNxEstHCi2NFX0cMtOnbY84lJfmnxgHTJqH/7LvUmHibC6
FBNM9CCSezZgEiSvERB0R/auHZnpODj9riCyWWq82sXTkk3XrqkdnN3mAjgVpnDK
3Cacx9vJxpUDl2U4ObEVCE1I1qHKomAcKVAErAMmLLsdkbzoK9dUquG2VhFaJYJW
3TtqDMhQM0fqRgRu750P42w6dm1glH/UIK41viB0eVwbBZ0RdaAnI3+Tuk2NXH2o
nZdH5Lx6scgS+l4K+IF2WzO+WCYThG0Sg22hC6NnFbdksoGA/XaXl80Kf5Ec1LJr
QLeTSjQDj6Fc
=8mrQ
-END PGP SIGNATURE-


OpenSSL version 1.1.1s published

2022-11-01 Thread OpenSSL
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


   OpenSSL version 1.1.1s released
   ===

   OpenSSL - The Open Source toolkit for SSL/TLS
   https://www.openssl.org/

   The OpenSSL project team is pleased to announce the release of
   version 1.1.1s of our open source toolkit for SSL/TLS. For details
   of changes and known issues see the release notes at:

https://www.openssl.org/news/openssl-1.1.1-notes.html

   OpenSSL 1.1.1s is available for download via HTTP and FTP from the
   following master locations (you can find the various FTP mirrors under
   https://www.openssl.org/source/mirror.html):

 * https://www.openssl.org/source/
 * ftp://ftp.openssl.org/source/

   The distribution file name is:

o openssl-1.1.1s.tar.gz
  Size: 9868981
  SHA1 checksum: d316e1523a609bbfc4ddd3abfa9861db99f17044
  SHA256 checksum: 
c5ac01e760ee6ff0dab61d6b2bbd30146724d063eb322180c6f18a6f74e4b6aa

   The checksums were calculated using the following commands:

openssl sha1 openssl-1.1.1s.tar.gz
openssl sha256 openssl-1.1.1s.tar.gz

   Yours,

   The OpenSSL Project Team.

-BEGIN PGP SIGNATURE-

iQJGBAEBCAAwFiEE3HAyZir4heL0fyQ/UnRmohynnm0FAmNhEsESHHRvbWFzQG9w
ZW5zc2wub3JnAAoJEFJ0ZqIcp55tB9sP/0xTGoi3fCQNWE3tq2iSLbhMeoXNSrnT
kcKF98Dbzu1fuA+HRbb6rUr4Fnm8lp387cTM2ZQZQhpcMD8R16fwasZCkimaE64j
o9Szand1G6OauVqUSCumzyM7ZEYg3PMvCwM9tOdZoUwxAt7cXagXEl8d+WDX9Xdm
Gz8pAGTc2qk1oVfd25tBZkm6ievKq9a5B6QLmJdfYiycbRRLJV8bAcNrRNAy6EK/
aZDuQA7eYRgtg/K0LcwWKi0XYUT5zVTN1/GEEy4MzGASOw0UxWZ3B+gAje0bq2V0
3nt6+Ys/9THy418s3F16VRl9HiffZMICqDCPEYV7wQaKlm6dVTvc6kWQiGWR2C91
A1F/wOcvJzPuvNrqwwjmAzRJYdpyIS9FWhz39mOCbkm8C+ZAKyuhLzsZKeqDDGST
oNgoIcc+ewn3O3ZKT65n7cgllvco2YpfIkdkh+afmhC8Jyy4wOpvA1qo5fQb20bk
2/K+qj+oLWSwqUzDQ14Lij3QY6p9IJY87dY8wIheJSAaGsRx+59JIlKuc7Y+QMah
XJkugpXoht63j3phi8sDfz+be+oNNYNw9b43kkxPjT1T3403s5Eae3E8pPgj/ns3
12+nyNYe+e6O+i52QdjNVFG8DbIswrCWU2gm+5DZvd3ARffvWUykSZMuUGqz2d3R
vlAteLLJJpw/
=ysWZ
-END PGP SIGNATURE-


Auto-reply: Re: issue with 1.1.1n

2022-11-01 Thread kjjhh7 via openssl-users
This is an automatic reply. Your message has been received; I will respond as soon as possible.

OpenSSL version 3.0.7 published

2022-11-01 Thread OpenSSL
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256


   OpenSSL version 3.0.7 released
   ==

   OpenSSL - The Open Source toolkit for SSL/TLS
   https://www.openssl.org/

   The OpenSSL project team is pleased to announce the release of
   version 3.0.7 of our open source toolkit for SSL/TLS.
   For details of the changes, see the release notes at:

https://www.openssl.org/news/openssl-3.0-notes.html

   Specific notes on upgrading to OpenSSL 3.0 from previous versions are
   available in the OpenSSL Migration Guide, here:

https://www.openssl.org/docs/man3.0/man7/migration_guide.html

   OpenSSL 3.0.7 is available for download via HTTPS and FTP from the
   following master locations (you can find the various FTP mirrors under
   https://www.openssl.org/source/mirror.html):

 * https://www.openssl.org/source/
 * ftp://ftp.openssl.org/source/

   The distribution file name is:

o openssl-3.0.7.tar.gz
  Size: 15107575
  SHA1 checksum:  f20736d6aae36bcbfa9aba0d358c71601833bf27
  SHA256 checksum:  
83049d042a260e696f62406ac5c08bf706fd84383f945cf21bd61e9ed95c396e

   The checksums were calculated using the following commands:

openssl sha1 openssl-3.0.7.tar.gz
openssl sha256 openssl-3.0.7.tar.gz

   Yours,

   The OpenSSL Project Team.

-BEGIN PGP SIGNATURE-

iQJGBAEBCAAwFiEE3HAyZir4heL0fyQ/UnRmohynnm0FAmNhKfISHHRvbWFzQG9w
ZW5zc2wub3JnAAoJEFJ0ZqIcp55tI3sP/0LX5X5pav+ajK9Vr0noUbAJwouA3YHi
QMqkY30JjoUEc47PE2IJlEAObWebkeePz09UixdlNyQv/sZ8OdhKlDvzHJ1LxMM2
LfetggGkASQ4nQkjFxiyNDTdaP0feKQGzBfo/rjTz+H1plY7D6u7AtIeCnJW0qZX
7a4yzTV8FxEdHvr81XCYyYsuUlWYwoZk4iEstGR4jG4lzA12jh1DXuCfKhV6siTm
7530FQ4kid2R0eAwffiaZPPSG53AOUsRbc7M2xgjl3HKOdTCEIInwpVtUWqFOufo
L/vkxjmFq8Xyq/DKUjCjcysiqX/Q4or0riMMzYkqqoIIQHGPyUrH7YvidEJ/ynPz
BexjXLSFpx+McUxs711BR7p6pHOrp/Acu1619EKgzhVOGdgqxd3PW2/maVqx5YIZ
ntsy5XNHE7UZ3tMTNz8gkVBAgZvQhl0YUN+LW5K6V/6VGxXqwFe6ZjyeyHvbv95J
TRfZvC/T7ABmeWKAblQ5LL3EeLXyLSOL3mV/fp+dRNUyuFJFuHQmUTGFNRgx191c
2PbAbtHTd7Wihx4M/mEhRiklo/VQI9jdRq47yjtKgv6tji6+9v+txK7f7lMlVZP9
IxsHYgcomMo92vpj+FTCVQcOTXTiCfHi9A6PBSltd4sodMR2XxED44cNJ/FyJPj6
nuPkN6wv8d59
=9cNh
-END PGP SIGNATURE-


Re: Getting cert serial from an OCSP single response

2022-10-31 Thread Jakob Bohm via openssl-users

On 2022-10-31 01:11, Alexei Khlebnikov wrote:


Hello Geoff,

Try the following function and receive the serial number via the 
"pserial" pointer. But avoid changing the number via the pserial 
pointer, because it points inside the OCSP_CERTID structure.


int OCSP_id_get0_info(ASN1_OCTET_STRING **piNameHash, ASN1_OBJECT **pmd,
                     ASN1_OCTET_STRING **pikeyHash,
                     ASN1_INTEGER **pserial, OCSP_CERTID *cid);

Med vennlig hilsen / Best regards,
Alexei.


This function prototype really needs basic constification to mark
which arguments are inputs and which are outputs.  The pserial in
particular needs different const modifiers for each level of
indirection to indicate that this is output of a pointer to a
read-only number.

Quite surprised this hasn't been done during all the pointless API
changes after the new management took over the project.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded
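
For what it's worth, a small untested usage sketch of that prototype, assuming
cid is the OCSP_CERTID taken from the single response; unused outputs can simply
be passed as NULL:

#include <stdio.h>
#include <openssl/ocsp.h>
#include <openssl/bn.h>
#include <openssl/crypto.h>

/* Sketch: print the certificate serial number carried in an OCSP_CERTID. */
static void print_serial(OCSP_CERTID *cid)
{
    ASN1_INTEGER *serial = NULL;

    if (OCSP_id_get0_info(NULL, NULL, NULL, &serial, cid) && serial != NULL) {
        BIGNUM *bn = ASN1_INTEGER_to_BN(serial, NULL);
        char *hex = BN_bn2hex(bn);

        if (hex != NULL)
            printf("serial: %s\n", hex);
        OPENSSL_free(hex);
        BN_free(bn);
        /* do not modify or free serial: it points inside cid */
    }
}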



Snapshots

2022-10-31 Thread The Doctor via openssl-users
No snapshots since 2022-10-19.

-- 
Member - Liberal International This is doc...@nk.ca Ici doc...@nk.ca
Yahweh, King & country!Never Satan President Republic!Beware AntiChrist rising!
Look at Psalms 14 and 53 on Atheism https://www.empire.kred/ROOTNK?t=94a1f39b 
How can one be prejudiced and remain objective? -unknown Beware 
https://mindspring.com


OSSL api example to write DH params

2022-10-28 Thread Samiya Khanum via openssl-users
Hi All,

"PEM_write_DHparams" is deprecated in openssl3.0.
I am trying to replace "PEM_write_DHparams" with the OSSL api. But  getting
below compilation error while assigning the dh.
Could you please provide an example to use "OSSL_ENCODER_CTX_new_for_pkey"
in the correct way and also please provide your inputs on the below error.

FYI, I have included the openssl/evp.h and openssl/dh.h header files.



error: dereferencing pointer to incomplete type
    dh = pkey->pkey.dh;
         ^

PEM_write_DHparams code is replaced with OSSL_ENCODER_CTX_new_for_pkey.



EVP_PKEY *pkey = NULL;
OSSL_ENCODER_CTX *ectx = NULL;

ectx = OSSL_ENCODER_CTX_new_for_pkey(pkey,
                                     OSSL_KEYMGMT_SELECT_DOMAIN_PARAMETERS,
                                     "PEM", NULL, NULL);
if (NULL != ectx) {
    if (!OSSL_ENCODER_to_fp(ectx, fp)) {
        OSSL_ENCODER_CTX_free(ectx);
        EVP_PKEY_free(dhkey);
        fclose(fp);
        return;
    }
    dh = pkey->pkey.dh;
}

Thanks & Regards,
Samiya khanum

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.


smime.p7s
Description: S/MIME Cryptographic Signature
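
An untested sketch of how OSSL_ENCODER_CTX_new_for_pkey() can stand in for
PEM_write_DHparams(), assuming pkey already holds the DH domain parameters; note
that with the 3.0 API there should be no need to reach into pkey->pkey.dh at all,
which is what the compiler error is complaining about:

#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/encoder.h>

/* Sketch: write the DH domain parameters held by pkey to fp in PEM form. */
static int write_dh_params(EVP_PKEY *pkey, FILE *fp)
{
    int ok = 0;
    OSSL_ENCODER_CTX *ectx =
        OSSL_ENCODER_CTX_new_for_pkey(pkey,
                                      OSSL_KEYMGMT_SELECT_DOMAIN_PARAMETERS,
                                      "PEM", NULL, NULL);

    if (ectx != NULL && OSSL_ENCODER_CTX_get_num_encoders(ectx) > 0)
        ok = OSSL_ENCODER_to_fp(ectx, fp);
    OSSL_ENCODER_CTX_free(ectx);
    return ok;
}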


Proper way to "update" an expired CA certificate

2022-10-26 Thread Leroy Tennison via openssl-users
and continue to use unexpired certificate/key pairs signed by the expired CA 
certificate.  I did some research and found "openssl x509 -in ca.crt -days 3650 
-out new-ca.crt -signkey ca.key", which seems to work, but I want to make sure 
there aren't any less-than-obvious issues I missed and that there isn't a 
better way to address the issue.  Thanks for your help.

RE: SSL_read empty -> close?

2022-10-26 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Felipe
> Gasper
> Sent: Wednesday, 26 October, 2022 11:15
> 
>   I’m seeing that OpenSSL 3, when it reads empty on a socket, sends some
> sort of response, e.g.:
> 
> - before read
> [pid 42417] read(7276781]>, "", 5) = 0
> [pid 42417] sendmsg(7276781]>, {msg_name=NULL, msg_namelen=0,
> msg_iov=[{iov_base="\0022", iov_len=2}], msg_iovlen=1,
> msg_control=[{cmsg_len=17, cmsg_level=SOL_TLS, cmsg_type=0x1}],
> msg_controllen=17, msg_flags=0}, 0) = -1 EPIPE (Broken pipe)
> - after read
> 
>   What is that being sent after the read()? Is there a way to disable
> it?

I'd guess it's a TLS Alert Close_notify.

When read/recv on a TCP stream socket returns 0, it means a TCP FIN has been 
received from the peer (or possibly some interfering middleman, such as a 
firewall). This indicates the peer will no longer be sending any application 
data, only at most ACKs and perhaps a RST if conversation does not go quietly 
into that good night. Since TLS requires bidirectional communications, that 
means the TLS conversation is effectively open, and the local end needs to be 
closed; and TLS requires sending a close_notify so the peer knows the 
conversation has not been truncated.

Now, the most common cause of a FIN is the peer calling close(), which means it 
can't receive that close_notify. But TCP supports half-close, and the peer 
*could have* called shutdown(, SD_SEND), indicating that it was done sending 
but still wanted to be able to receive data. So the local side has no way of 
knowing, at the point where it gets a 0 from read(), that the peer definitely 
can't see the close_notify; and thus it's still obligated by the TLS 
specification (I believe) to send it.

At any rate, that's my understanding of the requirement for sending 
close_notify - I haven't confirmed that in the RFC - and what I suspect OpenSSL 
is doing there. I could well be wrong.

If the peer *has* called close, then EPIPE is what you'd expect. Note that on 
UNIXy systems this means you should have set the disposition of SIGPIPE to 
SIG_IGN to avoid being signaled, but all well-written UNIX programs should do 
that anyway. (SIGPIPE, as Dennis Ritchie noted many years ago, was always 
intended as a failsafe for poorly-written programs that fail to check for 
errors when writing.)

-- 
Michael Wojcik
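
For completeness, the usual idiom for the SIGPIPE point above, as a sketch (real
code may prefer sigaction(), or MSG_NOSIGNAL / SO_NOSIGPIPE where available):

#include <signal.h>

/* Call once early in startup: writes to a closed peer then fail with EPIPE
 * instead of terminating the process. */
static void ignore_sigpipe(void)
{
    signal(SIGPIPE, SIG_IGN);
}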


Auto-reply: Re: OpenSSL 1.1.1 Windows dependencies

2022-10-26 Thread kjjhh7 via openssl-users
This is an automatic reply. Your message has been received; I will respond as soon as possible.

Auto-reply: Re: OpenSSL 1.1.1 Windows dependencies

2022-10-26 Thread kjjhh7 via openssl-users
This is an automatic reply. Your message has been received; I will respond as soon as possible.

RE: Setting a group to an existing EVP_PKEY in OpenSSL 3

2022-10-24 Thread Martin via openssl-users
Kory,

 

Thanks for your response. I want to preserve the rest of the EC public key 
params. I did this, but I haven’t tested it yet.

 

OSSL_PARAM *extracted_params = NULL;
const char *curve_name = NULL;
OSSL_PARAM *param_ecgroup = NULL;

// sigkey is the EVP_PKEY ECDSA public key

if (EVP_PKEY_todata(sigkey, EVP_PKEY_PUBLIC_KEY, &extracted_params) == 0)
{
    // error
}
curve_name = OSSL_EC_curve_nid2name(nid);
if (curve_name == NULL)
{
    // error
}
if ((param_ecgroup = OSSL_PARAM_locate(extracted_params, "group")) != NULL)
{
    OSSL_PARAM_set_utf8_string(param_ecgroup, curve_name);
}
else
{
    // error
}

 

Martin

 

From: Kory Hamzeh  
Sent: Monday, October 24, 2022 7:22 PM
To: amar...@xtec.com
Cc: openssl-users@openssl.org
Subject: Re: Setting a group to an existing EVP_PKEY in OpenSSL 3

 

I haven’t done exactly what you are trying, but something similar.

 

 See EVP_PKEY_set_params:

 

https://www.openssl.org/docs/man3.0/man3/EVP_PKEY_set_params.html

 

The specific parm to set the group could be set like this:

 

OSSL_PARAM_BLD_push_utf8_string(param_bld, "group", curve, 0);

 

 

 

Please note that I have not tested the above code, as my code uses 
key-from-data. But I think it should work.

 





On Oct 24, 2022, at 2:31 PM, Martin via openssl-users 
<openssl-users@openssl.org> wrote:

 

Hi,

 

How can I set a GROUP to an existing EC type EVP_PKEY in OpenSSL 3?

 

In 1.0.2 I was using this code having the EC_KEY:

 

EC_KEY_set_group(eckey, EC_GROUP_new_by_curve_name(nid));

 

In OpenSSL 3 still EC_GROUP_new_by_curve_name(nid) can be used, but I don’t 
know how to go from that to set it on the existing key.

 

 

Thanks,

 

Martin
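
An untested alternative sketch that sets only the group name on the existing key
via EVP_PKEY_set_params(); whether the provider accepts changing the group on an
already-populated key is something to verify:

#include <openssl/evp.h>
#include <openssl/ec.h>
#include <openssl/params.h>
#include <openssl/core_names.h>

/* Sketch: set the curve/group name on an existing EC EVP_PKEY from a NID. */
static int set_group_by_nid(EVP_PKEY *pkey, int nid)
{
    const char *name = OSSL_EC_curve_nid2name(nid);   /* e.g. "P-256" */
    OSSL_PARAM params[2];

    if (name == NULL)
        return 0;
    params[0] = OSSL_PARAM_construct_utf8_string(OSSL_PKEY_PARAM_GROUP_NAME,
                                                 (char *)name, 0);
    params[1] = OSSL_PARAM_construct_end();
    return EVP_PKEY_set_params(pkey, params);
}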

 



Setting a group to an existing EVP_PKEY in OpenSSL 3

2022-10-24 Thread Martin via openssl-users
Hi,

 

How can I set a GROUP to an existing EC type EVP_PKEY in OpenSSL 3?

 

In 1.0.2 I was using this code having the EC_KEY:

 

EC_KEY_set_group(eckey, EC_GROUP_new_by_curve_name(nid));

 

In OpenSSL 3 still EC_GROUP_new_by_curve_name(nid) can be used, but I don't
know how to go from that to set it on the existing key.

 

 

Thanks,

 

Martin

 



RE: [building OpenSSL for vxWorks on Windows using Cygwin]

2022-10-24 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of  ???
> Sent: Friday, 21 October, 2022 02:39
> Subject: Re: openssl-users Digest, Vol 95, Issue 27

Please note the text in the footer of each openssl-users digest message:

> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of openssl-users digest..."

This is part of asking a good question. Also, you need to trim parts of the 
digest message you're replying to that aren't relevant to your question. Don't 
just send the entire digest back to the list. That's confusing and discourteous 
to your readers.

> - Why are you trying to build OpenSSL?
> My objective is to sign an 'image.bin' with RSA2048 and verify the signature.
> Now, I would like to port it to vxWorks 7. 

See, this is why you need to ask a good question. I believe this is the first 
time you mention vxWorks, which makes an enormous difference. Prior to this 
message, I assumed you were building OpenSSL *for Windows*, since that was the 
only platform you mentioned.

vxWorks is, I believe, an unsupported platform. Someone in the past ported 
OpenSSL to vxWorks and contributed the necessary changes to the project, but 
the project maintainers don't have the resources to maintain that port. OpenSSL 
consumers who want to run on vxWorks have to provide their own support for it.

Had you made it clear you were targeting vxWorks at the start, someone could 
have pointed that out, and saved us all some trouble.

Since you are targeting vxWorks, you'll need to get advice from someone who's 
familiar with building OpenSSL for that platform. I am not, and I haven't seen 
anyone else on the list comment on it yet, so there may not be any vxWorks 
users reading this thread. And so you may need to look elsewhere -- perhaps on 
vxWorks forums.

> A: If there is a 'libOpenssl.a' static library for vxWorks, then there would
> be no reason to build OpenSSL. Is there?

I don't know; I don't work with vxWorks.

> A: If there were an option to use only the signature-verification module, then I
> would just compile this module and not the entire OpenSSL. Is there such an
> option?

Not with OpenSSL. There are other cryptography libraries, some of which may be 
more convenient to get for vxWorks. Verifying an RSA signature in some fashion 
(you don't say anything about a message format or padding, but that's a whole 
other area of discussion) is a common primitive.

> > - What platform do you want to build OpenSSL for?
> A: vxWorks-7, the toolchain is windows exe files (gcc,ar,ld), thus the only 
> option
> I had in mind to build the OpenSSL is cygwin.

> > - What toolchain do you want to use, and if that's not the default 
> > toolchain for
> > that platform, why aren't you using the default?
> A: I have the vxWorks toolchain on a Windows platform. (It would definitely be
> easier if I had the vxWorks toolchain on Linux, but I don't.)

This still isn't clear to me. If you have the vxWorks toolchain for Windows, 
why do you need Cygwin? Is it just for Perl, for the configuration step? I have 
no idea what the vxWorks tools expect for things like file-path format, so I 
can't guess whether Cygwin's Perl would be appropriate.

> > - Have you read the text files in the top-level directory of the OpenSSL 
> > source
> > distribution?
> Please direct me to the relevant README on "how to build OpenSSL on vxWorks"
> (or a similar platform, in which all that is needed is to inject the relevant
> toolchain, i.e. perl Configure VxWorks)

That's not how it works. If you want to build OpenSSL, you should be consulting 
all of the files to figure out what's relevant for your build. Building OpenSSL 
is often not trivial, so particularly if you run into problems, the thing to do 
is actually read those files and understand the build process. Or find someone 
else who's done it for the platform you're working with, and ask them.

-- 
Michael Wojcik


OpenSSL 3 ECC Key use question

2022-10-23 Thread Martin via openssl-users
Hi,

 

How can I get the nid from the curve name for a EC key in OpenSSL 3? I'm
porting code from OpenSSL 1.0.2.

 

I'm converting this:

 

ecc_curve_type = EC_GROUP_get_curve_name(EC_KEY_get0_group((const EC_KEY
*)eckey));

if(ecc_curve_type == NID_undef)

{

 

to

 

EVP_PKEY_get_utf8_string_param(pkey, OSSL_PKEY_PARAM_GROUP_NAME, curve_name,
sizeof(curve_name), &name_len);

ecc_curve_type = ossl_ec_curve_name2nid(curve_name);

 

but ossl_ec_curve_name2nid() is internal and it is not defined in
/include/openssl/ec.h but in /include/crypto/ec.h

 

Thanks,

 

Martin
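
A public-API alternative to the internal ossl_ec_curve_name2nid(), sketched under the assumption that the returned group name is either a NIST alias ("P-384") or a short name ("prime256v1"):

    #include <openssl/ec.h>
    #include <openssl/evp.h>
    #include <openssl/objects.h>
    #include <openssl/core_names.h>

    char curve_name[64];
    size_t name_len = 0;
    int ecc_curve_type = NID_undef;

    /* Fetch the group name, then map it to a NID with public functions:
     * EC_curve_nist2nid() handles NIST aliases such as "P-256",
     * OBJ_txt2nid() handles short names such as "prime256v1". */
    if (EVP_PKEY_get_utf8_string_param(pkey, OSSL_PKEY_PARAM_GROUP_NAME,
                                       curve_name, sizeof(curve_name), &name_len)) {
        ecc_curve_type = EC_curve_nist2nid(curve_name);
        if (ecc_curve_type == NID_undef)
            ecc_curve_type = OBJ_txt2nid(curve_name);
    }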



RE: OpenSSL 1.1.1 Windows dependencies

2022-10-23 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of David
> Harris
> Sent: Saturday, 22 October, 2022 09:02
> 
> I now have wireshark captures showing the exchanges between the working
> instance and the non-working instance respectively; the problem is definitely
> happening after STARTTLS has been issued and during the TLS handshake.

A packet-inspecting firewall can monitor a TLS handshake (for TLS prior to 1.3) 
and terminate the conversation if it sees something in the unencrypted messages 
- ClientHello, ServerHello, ServerCertificate, etc - that it doesn't like. It's 
not beyond imagining that an organization would have a packet-inspecting 
firewall that terminates conversations using particular cipher suites, for 
example.

> I'm not high-level enough to be able to make any sense of the negotiation
> data though. The wireshark capture is quite short (22 items in the list)
> and I don't mind making it available if it would be useful to anyone.

Someone might be able to tell something from it.

Not much else is coming to mind, I'm afraid. It would help to know what system 
call is failing, with what errno value, but that's going to be a bit tough to 
determine on Windows. ProcMon, maybe? And it's curious that the OpenSSL error 
stack is empty, but without being able to debug you probably couldn't track 
that down, short of instrumenting a bunch of the OpenSSL code.

-- 
Michael Wojcik


RE: OpenSSL 1.1.1 Windows dependencies

2022-10-21 Thread Michael Wojcik via openssl-users
> From: David Harris 
> Sent: Friday, 21 October, 2022 01:42
>
> On 20 Oct 2022 at 20:04, Michael Wojcik wrote:
> 
> > I think more plausible causes of this failure are things like OpenSSL
> > configuration and interference from other software such as an endpoint
> > firewall. Getting SYSCALL from SSL_accept *really* looks like
> > network-stack-level interference, from a firewall or similar
> > mechanism.
> 
> That was my initial thought too, except that if it were firewall-related, the
> initial port 587 connection would be blocked, and it isn't - the failure 
> doesn't
> happen until after STARTTLS has been issued.

Not necessarily. That's true for a first-generation port-blocking firewall, but 
not for a packet-inspecting one. There are organizations which use 
packet-inspecting firewalls to block STARTTLS because they enforce their own 
TLS termination, in order to inspect all incoming traffic for malicious content 
and outgoing traffic for exfiltration.

> Furthermore, the OpenSSL
> configuration is identical between the systems/combinations of OpenSSL that
> work and those that don't.

Do you know that for certain? There's no openssl.cnf from some other source 
being picked up on the non-working system?

-- 
Michael Wojcik


RE: OpenSSL 1.1.1 Windows dependencies

2022-10-20 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of David
> Harris
> Sent: Wednesday, 19 October, 2022 18:54
> 
> Do recent versions of OpenSSL 1.1.1 have dependencies on some Windows
> facility (winsock and wincrypt seem likely candidates) that might work on
> Server 2019 but fail on Server 2012?

OpenSSL on Windows has always had a dependency on Winsock/Winsock2 (see 
b_sock.c, e_os.h, sockets.h) for supporting socket BIOs. Obviously OpenSSL used 
for TLS is going to be interacting with Winsock. I can't think of any 
difference between Server 2012 and Server 2019 that would be relevant to the 
issue you describe.

OpenSSL 1.1.1 uses Windows cryptographic routines in two areas I'm aware of: 
rand_win.c and the CAPI engine. I don't offhand see a way that a problem with 
the calls in rand_win.c would cause the particular symptom you described. My 
guess is that you're not using the CAPI engine, but you might check your 
OpenSSL configuration on the failing system.

I think more plausible causes of this failure are things like OpenSSL 
configuration and interference from other software such as an endpoint 
firewall. Getting SYSCALL from SSL_accept *really* looks like 
network-stack-level interference, from a firewall or similar mechanism.

Personally, if I ran into this, I'd just build OpenSSL for debug and debug into 
it. But I know that's not everyone's cup of tea.

-- 
Michael Wojcik


RE: openssl-users Digest, Vol 95, Issue 24

2022-10-19 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of  ???
> Sent: Tuesday, 18 October, 2022 11:58

> I have downloaded perl strawberry, but I have no clue how to get rid of the
> built-in perl that comes in cygwin, and point cygwin to use the strawberry 
> perl.

You don't have to remove the Cygwin version of perl, just change your PATH. 
This is basic both to the various shells available under Cygwin and to the 
Windows command line, so I'm getting the impression that you're not very 
familiar with your operating environment. That's not an ideal place to start 
from when trying to build, much less use, OpenSSL.

I can't be more detailed because at this point I frankly don't understand what 
you're trying to do. I suggest you try asking the right question, in a useful 
manner. (See https://catb.org/esr/faqs/smart-questions for advice in how to ask 
the right question.)

In particular:

- Why are you trying to build OpenSSL?
- Why did you clone the GitHub repository rather than downloading one of the 
released source tarballs? Did you read the instructions on www.openssl.org on 
how to download OpenSSL source releases?
- What platform do you want to build OpenSSL for?
- What toolchain do you want to use, and if that's not the default toolchain 
for that platform, why aren't you using the default?
- Have you read the text files in the top-level directory of the OpenSSL source 
distribution?

There may well be an easier way to accomplish whatever your goal is. OpenSSL 
may not even be a particularly good solution for you. You haven't given us 
enough information to go on.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-17 Thread Michael Wojcik via openssl-users
> From: רונן לוי  
> Sent: Monday, 17 October, 2022 12:03

Send messages to the list, not directly to me.

> And, in which header file am I expected to find the Definition for LONG?

That's a question about the Windows SDK, not OpenSSL.

It's in WinNT.h, per Microsoft's documentation (which is readily available 
online).

But for building OpenSSL this is not your concern. Building OpenSSL on Windows 
with the Microsoft toolchain requires a valid installation of the Windows SDK. 
If you're not building with the Microsoft toolchain, then you'll have to 
consult the OpenSSL build instructions for the toolchain you're using. Have you 
read the text files in the OpenSSL distribution which explain how to build it?

> Which linux command I can use to find if there exists a definition for LONG?

Assuming you mean "which Cygwin command can I use on Windows...": find + xargs 
+ grep would be the usual choice to find the definition, but as I already noted 
that's in WinNT.h. If that's not what you mean, then your question is unclear.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-17 Thread Michael Wojcik via openssl-users
> From: רונן לוי  
> Sent: Monday, 17 October, 2022 11:12

> see attached file for cygwin details.

I'm afraid I have no comment on that. I merely mentioned that for some OpenSSL 
releases, using a POSIXy perl implementation such as Cygwin's to configure 
OpenSSL for a Windows build did not work.

> ***   OpenSSL has been successfully configured                     ***

If memory serves, configuring with Cygwin perl would succeed, but the build 
would subsequently fail due to an issue with paths somewhere. I don't remember 
the details.

I suggest you try Strawberry Perl. It's free, and trying it would not take long.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-17 Thread Michael Wojcik via openssl-users
> From: רונן לוי  
> Sent: Monday, 17 October, 2022 11:16

Please send messages to the list, not to me directly.

> And for the question with regard to the Windows style, are you referring to 
> CRLF as
> opposed to LF from linux?

No, to Windows-style file paths, with drive letters and backslashes, rather 
than (sensible) POSIX-style ones.

-- 
Michael Wojcik


Need help on OpenSSL windows build errors

2022-10-17 Thread Ashok Kumar Sarode via openssl-users
Hello OpenSSL users,
I need help with the following errors, which I am getting on my Windows machine
building with Visual Studio 2019, Version 16.11.17.
Build started...
1>-- Build started: Project: executeHelloWorld, Configuration: Debug Win32 --
1>VerifyJWTSignUsingRSA.cpp
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(28,1): error C2447: '{': missing function header (old-style formal list?)
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(29,5): error C2018: unknown character '0x40'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(30,16): error C2018: unknown character '0x40'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(36,14): error C2018: unknown character '0x40'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(40,9): error C2018: unknown character '0x40'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(41,16): error C2018: unknown character '0x40'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(51,1): error C2447: '{': missing function header (old-style formal list?)
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(57,1): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(57,4): error C2065: '$config': undeclared identifier
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(57,12): error C2065: 'bn_ll': undeclared identifier
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(57,47): error C2059: syntax error: '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(57,47): error C2143: syntax error: missing ';' before '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(59,1): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(59,4): error C2065: '$config': undeclared identifier
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(59,12): error C2065: 'b64l': undeclared identifier
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(59,46): error C2059: syntax error: '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(59,46): error C2143: syntax error: missing ';' before '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(60,1): error C2143: syntax error: missing ';' before '{'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(60,1): error C2447: '{': missing function header (old-style formal list?)
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(61,1): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(61,4): error C2065: '$config': undeclared identifier
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(61,12): error C2065: 'b32': undeclared identifier
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(61,46): error C2059: syntax error: '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(61,46): error C2143: syntax error: missing ';' before '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(67,1): error C2143: syntax error: missing ';' before '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\configuration.h(67,1): error C2059: syntax error: '}'
1>C:\Users\myDir\WindowsUtils\executeHelloWorld\openssl-master\include\openssl\macros.h(138,6): fatal error C1017: invalid integer constant expression
1>Done building project "executeHelloWorld.vcxproj" -- FAILED.
== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==

NOTE: I have re-named the file openssl\configuration.h.in to openssl\configuration.h.
Likewise I re-named err.h, ssl.h, opensslv.h, crypto.h.
I downloaded the OpenSSL source from GitHub - openssl/openssl: TLS/SSL and crypto
library
Regards,
S.Ashok Kumar  

RE: Build openssl on windows 10 using cygwin

2022-10-16 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of  ???
> Sent: Saturday, 15 October, 2022 15:48

> I have tried to build openssl using cygwin:

> Both options starts compiling, but end up with error:
> In file included from providers/implementations/storemgmt/winstore_store.c:27:
> /usr/include/w32api/wincrypt.h:20:11: error: unknown type name 'LONG'
>   20 |   typedef LONG HRESULT;
> Q: What am I missing here?

Well, the version of OpenSSL you're using, for one thing. And what C 
implementation; there are various ones which can be used under Cygwin. Cygwin 
is an environment, not a build toolchain.

I don't know if this is still true, or if it differs for 1.1.1 and 3.0; but 
historically there have been issues using Cygwin perl to build OpenSSL, because 
OpenSSL on Windows wants a perl implementation that uses Windows-style file 
paths. We use Strawberry Perl.

That said, that error appears to be due to an issue with the Windows SDK 
headers, since it's the Windows SDK which should be typedef'ing LONG. (Because 
we wouldn't want Microsoft to use actual standard C type names, would we?) So 
this might be due to not having some macro defined when including the various 
Windows SDK headers.

-- 
Michael Wojcik


Include jeanmswe...@gmail.com please

2022-10-12 Thread Jean Sweeny via openssl-users



Sent from my iPad


PBKDF2 & HMAC-SHA1-128 Functions

2022-10-12 Thread John Deer via openssl-users
Which OpenSSL functions should I use in "Visual Studio 2022" to create a C++ program:
 
PSK = PBKDF2(Passphrase, SSID, 4096)
PMK = PBKDF2(HMAC-SHA1, PSK, SSID, 4096, 256)
PMKID = HMAC-SHA1-128(PMK, "PMK Name" | MAC_AP | MAC_STA)
 
Sample test data for PSK (Pre-Shared Key)
 
Network SSID:   linksys54gh
WPA passphrase: radiustest
PSK = 9e9988bde2cba74395c0289ffda07bc41ffa889a3309237a2240c934bcdc7ddb (Result)
 
See WPA key calculation in the link here
 
Caster
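
For what it's worth, a minimal sketch of how those three steps could map onto OpenSSL calls. The function names (derive_psk, derive_pmkid) are illustrative, the PMKID layout simply follows the description above, and the output has not been checked against the sample vector:

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    /* PSK/PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 256 bits) */
    int derive_psk(const char *passphrase, const char *ssid, unsigned char psk[32])
    {
        return PKCS5_PBKDF2_HMAC_SHA1(passphrase, (int)strlen(passphrase),
                                      (const unsigned char *)ssid, (int)strlen(ssid),
                                      4096, 32, psk);
    }

    /* PMKID = first 128 bits of HMAC-SHA1(PMK, "PMK Name" | MAC_AP | MAC_STA) */
    void derive_pmkid(const unsigned char pmk[32],
                      const unsigned char mac_ap[6], const unsigned char mac_sta[6],
                      unsigned char pmkid[16])
    {
        unsigned char data[8 + 6 + 6], digest[20];
        unsigned int len = sizeof(digest);

        memcpy(data, "PMK Name", 8);        /* literal label, no NUL terminator */
        memcpy(data + 8, mac_ap, 6);
        memcpy(data + 14, mac_sta, 6);
        HMAC(EVP_sha1(), pmk, 32, data, sizeof(data), digest, &len);
        memcpy(pmkid, digest, 16);
    }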


RE: CA/Server configuration

2022-10-03 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Dmitrii 
> Odintcov
> Sent: Sunday, 2 October, 2022 21:15
>
> This is where the confusion begins: if ‘bar’, the certificate requestor, 
> itself
> wants to be a CA (basicConstraints = CA:true),

I assume here you mean bar is going to be a subordinate CA for foo, or bar is a 
subordinate that's being cross-signed by foo. Otherwise foo issuing a CA 
certificate for bar doesn't make sense. Note that bar can't be a root, since 
it'll be signed by some entity other than itself. (A root is a self-signed CA 
certificate, by definition.)

> then its bar.conf must answerboth sets of questions at the same time!

Why? Creating a CSR and generating the certificate for it are separate 
operations. bar's configuration is used in creating the CSR. foo's is used in 
generating the certificate.

> For instance, if bar wants to request its own CA certificate to be valid for
> 5 years, but is only willing to issue others’ certificates for 1 year, what
> should `default_days` be in bar.conf?

Oh, I see, you're talking about generating bar's CSR versus signing 
certificates using bar. The answer is: you have two configurations, one for 
generating bar's CSR and the other for signing certificates using bar. Those 
are separate operations (obviously, since bar can't sign anything until it has 
its certificate), so they're not required to use the same configuration.

Configuration files are tied to *operations*, not to *entities*. You use the 
configuration file appropriate for the operation, where an operation is 
something like "requesting a CSR for a subordinate CA" or "signing a 
certificate for a subordinate CA" or "signing a certificate for a non-CA 
entity".

-- 
Michael Wojcik


Please allow the Apple ID and iCloud address to use open ssl for iCloud data communication

2022-10-02 Thread Jean Sweeny via openssl-users



Sent from my iPad


RE: Updating RSA public key generation and signature verification from 1.1.1 to 3.0

2022-09-30 Thread GonzalezVillalobos, Diego via openssl-users
[AMD Official Use Only - General]

Hello Tomas,

There was a logic error in my code, I did not realize that the first iteration 
of the verification was supposed to fail. The verification is working 
correctly! I apologize for my last response. I really appreciate all your help!

Thank you very much,

Diego Gonzalez
--
 

-Original Message-
From: Tomas Mraz  
Sent: Friday, September 30, 2022 1:22 AM
To: GonzalezVillalobos, Diego ; 
openssl-users@openssl.org
Subject: Re: Updating RSA public key generation and signature verification from 
1.1.1 to 3.0

Caution: This message originated from an External Source. Use proper caution 
when opening attachments, clicking links, or responding.


Hi,

unfortunately I do not see anything wrong with the code. Does the 
EVP_DigestVerifyFinal return 0 or negative value? I do not think this is a bug 
in OpenSSL as this API is thoroughly tested and it is highly improbable that 
there would be a bug in the ECDSA verification through this API.

I am currently out of ideas on what could be wrong or how to investigate 
further. Perhaps someone else can chime in on what can be wrong?

Tomas

On Thu, 2022-09-29 at 19:22 +, GonzalezVillalobos, Diego wrote:
> [AMD Official Use Only - General]
>
> Hello Tomas,
>
> So, I made sure that px_size and py_size are equal to the group order 
> (48). I was able to verify successfully using our previous method
> (deprecated) with the new key generation method, but I'm still not 
> able to get the digestverify to work successfully. As a reminder this 
> is how we were verifying before:
>
> // Determine if SHA_TYPE is 256 bit or 384 bit if 
> (parent_cert->pub_key_algo == SEV_SIG_ALGO_RSA_SHA256 || 
> parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDSA_SHA256 ||parent_cert-
> >pub_key_algo == SEV_SIG_ALGO_ECDH_SHA256)
> {
> sha_type = SHA_TYPE_256;
> sha_digest = sha_digest_256;
> sha_length = sizeof(hmac_sha_256);
> }
> else if (parent_cert->pub_key_algo == SEV_SIG_ALGO_RSA_SHA384 || 
> parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDSA_SHA384 || 
> parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDH_SHA384)
> {
> sha_type = SHA_TYPE_384;
> sha_digest = sha_digest_384;
> sha_length = sizeof(hmac_sha_512);
> }
> else
> {
> break;
> }
>
> // 1. SHA256 hash the cert from Version through pub_key 
> parameters
> // Calculate the digest of the input message   rsa.c ->
> rsa_pss_verify_msg()
> // SHA256/SHA384 hash the cert from the [Version:pub_key] 
> params
> uint32_t pub_key_offset = offsetof(sev_cert, sig_1_usage); // 
> 16 + sizeof(SEV_PUBKEY)
> if (!digest_sha((uint8_t *)child_cert, pub_key_offset, 
> sha_digest, sha_length, sha_type)) {
> break;
> }
> if ((parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDSA_SHA256) ||
>  (parent_cert->pub_key_algo ==
> SEV_SIG_ALGO_ECDSA_SHA384) ||
>  (parent_cert->pub_key_algo ==
> SEV_SIG_ALGO_ECDH_SHA256)  ||
>  (parent_cert->pub_key_algo ==
> SEV_SIG_ALGO_ECDH_SHA384)) {  // ecdsa.c -> sign_verify_msg
> ECDSA_SIG *tmp_ecdsa_sig = ECDSA_SIG_new();
> BIGNUM *r_big_num = BN_new();
> BIGNUM *s_big_num = BN_new();
>
> // Store the x and y components as separate BIGNUM 
> objects. The values in the
> // SEV certificate are little-endian, must reverse 
> bytes before storing in BIGNUM
> r_big_num = BN_lebin2bn(cert_sig[i].ecdsa.r,
> sizeof(sev_ecdsa_sig::r), r_big_num);// LE to BE
> s_big_num = BN_lebin2bn(cert_sig[i].ecdsa.s, 
> sizeof(sev_ecdsa_sig::s), s_big_num);
>
> // Calling ECDSA_SIG_set0() transfers the memory 
> management of the values to
> // the ECDSA_SIG object, and therefore the values that 
> have been passed
> // in should not be freed directly after this function 
> has been called
> if (ECDSA_SIG_set0(tmp_ecdsa_sig, r_big_num,
> s_big_num) != 1) {
> BN_free(s_big_num);   // Frees
> BIGNUMs manually here
> BN_free(r_big_num);
> ECDSA_SIG_free(tmp_ecdsa_sig);
> continue;
> }
> EC_KEY *tmp_ec_key =
> EVP_PKEY_get1_EC_KEY(parent_signing_key); // Make a local key so you 
> can free it later
> if (ECDSA_do_verify(sha_dige

RE: Updating RSA public key generation and signature verification from 1.1.1 to 3.0

2022-09-30 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Tomas
> Mraz
> Sent: Friday, 30 September, 2022 00:22
> 
> unfortunately I do not see anything wrong with the code. Does the
> EVP_DigestVerifyFinal return 0 or negative value? I do not think this
> is a bug in OpenSSL as this API is thoroughly tested and it is highly
> improbable that there would be a bug in the ECDSA verification through
> this API.
> 
> I am currently out of ideas on what could be wrong or how to
> investigate further. Perhaps someone else can chime in on what can be
> wrong?

Coincidentally, just yesterday I was helping someone debug a DigestVerify 
issue. We were consistently getting the "first octet is invalid" error out of 
the RSA PSS signature verification code, but the same inputs worked with 
openssl dgst.

I wrote a fresh minimal program from scratch (really minimal, with hard-coded 
filenames for the inputs), and it worked fine as soon as it compiled cleanly.

I'd suggest trying that. Get it working in a minimal program first. Make sure 
you have all the correct OpenSSL headers, and there are no compilation 
warnings. Then integrate that code into your application.

(I didn't have the original application to go back to, in my case, and the 
person I was working with is in another timezone and had left for the day.)

-- 
Michael Wojcik
Distinguished Engineer, Application Modernization and Connectivity




Regarding how to use symmetric key for an openssl engine

2022-09-29 Thread 董亚敏 via openssl-users
Hi,
Here is question,can you help me out? Thanks.
Background:
   I am working to write an openssl engine to use cryptographic algorithm in a 
hardware device. The hardware device support asymmetric/symmetric algorithm, 
for example:rsa/aes.
Question:
  When I write openssl engine, I shall use ENGINE_load_private_key() function 
to load and use asymmetric private key in the hardware device.
  How to set and use symmetric key in the hardware device ? is there any 
example for my case?
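
There is no symmetric counterpart to ENGINE_load_private_key(); a common pattern is for the engine to register its cipher implementations (via ENGINE_set_ciphers) and for the application to select the engine per cipher context. A rough sketch, with the caveat that how a hardware key handle (as opposed to raw key bytes) is passed in is entirely engine-specific:

    #include <openssl/engine.h>
    #include <openssl/evp.h>

    /* Sketch: encrypt with the engine's AES-256-CBC implementation.
     * 'key' is whatever the engine expects -- raw bytes or an
     * engine-specific key reference. */
    int encrypt_with_engine(ENGINE *e, const unsigned char *key,
                            const unsigned char *iv,
                            const unsigned char *in, int inlen,
                            unsigned char *out, int *outlen)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, total = 0, ok = 0;

        if (ctx == NULL)
            return 0;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), e, key, iv) == 1
            && EVP_EncryptUpdate(ctx, out, &len, in, inlen) == 1) {
            total = len;
            if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1) {
                total += len;
                ok = 1;
            }
        }
        *outlen = total;
        EVP_CIPHER_CTX_free(ctx);
        return ok;
    }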



RE: Updating RSA public key generation and signature verification from 1.1.1 to 3.0

2022-09-29 Thread GonzalezVillalobos, Diego via openssl-users
SHA384) ||
 (parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDH_SHA256)  ||
 (parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDH_SHA384)) { 
 // ecdsa.c -> sign_verify_msg

ECDSA_SIG *tmp_ecdsa_sig = ECDSA_SIG_new();
BIGNUM *r_big_num = BN_new();
BIGNUM *s_big_num = BN_new();
uint32_t sig_len;
unsigned char* der_sig = NULL;;

// Store the x and y components as separate BIGNUM objects. The 
values in the
// SEV certificate are little-endian, must reverse bytes before 
storing in BIGNUM
r_big_num = BN_lebin2bn(cert_sig[i].ecdsa.r, 
sizeof(sev_ecdsa_sig::r), r_big_num);// LE to BE
s_big_num = BN_lebin2bn(cert_sig[i].ecdsa.s, 
sizeof(sev_ecdsa_sig::s), s_big_num);

// Calling ECDSA_SIG_set0() transfers the memory management of 
the values to
// the ECDSA_SIG object, and therefore the values that have 
been passed
// in should not be freed directly after this function has been 
called
if (ECDSA_SIG_set0(tmp_ecdsa_sig, r_big_num,s_big_num) != 1) {
BN_free(s_big_num); // FreesBIGNUMs manually here
BN_free(r_big_num);
ECDSA_SIG_free(tmp_ecdsa_sig);
break;
}

            int der_sig_len = i2d_ECDSA_SIG(tmp_ecdsa_sig, &der_sig);
            // der_sig = static_cast<unsigned char*>(OPENSSL_malloc(der_sig_len));
            // unsigned char* der_iter = der_sig;
            // der_sig_len = i2d_ECDSA_SIG(tmp_ecdsa_sig, &der_iter); // <= bugfix here


if (der_sig_len == 0) {
cout << "sig length invalid" << endl;
break;
}

if (der_sig == NULL) {
cout << "sig generation failed" << endl;
break;
}

// loop through the array elements
for (size_t i = 0; i < der_sig_len; i++) {
cout << der_sig[i] << ' ';
}

verify_md_ctx = EVP_MD_CTX_new();


if (!verify_md_ctx) {
cout << "Error md verify context " << endl;;
break;
}

if (EVP_DigestVerifyInit(verify_md_ctx, NULL, 
(parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDSA_SHA256 || 
parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDH_SHA256) ? EVP_sha256(): 
EVP_sha384(), NULL, parent_signing_key) <= 0) {
cout << "Init fails " << endl;
break;
}

if (EVP_DigestVerifyUpdate(verify_md_ctx, (uint8_t 
*)child_cert, pub_key_offset) <= 0){// Calls SHA256_UPDATE
cout << "updating digest fails" << endl;
break;
}

int ret = EVP_DigestVerifyFinal(verify_md_ctx, der_sig, 
der_sig_len);
if (ret == 0) {
cout << "EC Verify digest fails" << endl;
break;
} else if (ret < 0) {
printf("Failed Final Verify 
%s\n",ERR_error_string(ERR_get_error(),NULL));
cout << "EC Verify error" << endl;
break;
}

found_match = true;
cout << "SEV EC verification Succesful" << endl;

if (verify_md_ctx)
EVP_MD_CTX_free(verify_md_ctx);

break;
}

The only difference still is using the der signature; besides that, it is the 
same. Could it be a bug?

Thank you,

Diego Gonzalez
--
 

-Original Message-
From: Tomas Mraz  
Sent: Thursday, September 29, 2022 1:12 AM
To: GonzalezVillalobos, Diego ; 
openssl-users@openssl.org
Subject: Re: Updating RSA public key generation and signature verification from 
1.1.1 to 3.0

Caution: This message originated from an External Source. Use proper caution 
when opening attachments, clicking links, or responding.


Hi,

comments below.

On Wed, 2022-09-28 at 22:12 +, GonzalezVillalobos, Diego wrote:
> [AMD Official Use Only - General]
>
> Hello Tomas,
>
> I generated the key as you suggested, and I am no longer getting an 
> error message! Thank you for that. Here is how I'm generating the key
> now:
>
> // SEV certificate are little-endian, must reverse bytes before 
> generating key
> if ((cert->pub_key_algo == SEV_SIG_ALGO_ECDSA_SHA256) ||
> (cert->pu

RE: Updating RSA public key generation and signature verification from 1.1.1 to 3.0

2022-09-28 Thread GonzalezVillalobos, Diego via openssl-users
// the ECDSA_SIG object, and therefore the values that have 
been passed
// in should not be freed directly after this function has been 
called
if (ECDSA_SIG_set0(tmp_ecdsa_sig, r_big_num,s_big_num) != 1) {
BN_free(s_big_num); // FreesBIGNUMs manually here
BN_free(r_big_num);
ECDSA_SIG_free(tmp_ecdsa_sig);
break;
}

int der_sig_len = i2d_ECDSA_SIG(tmp_ecdsa_sig, NULL);
der_sig = static_cast<unsigned char*>(OPENSSL_malloc(der_sig_len));
unsigned char* der_iter = der_sig;
der_sig_len = i2d_ECDSA_SIG(tmp_ecdsa_sig, &der_iter); // <= bugfix here

if (der_sig_len == 0) {
cout << "sig length invalid" << endl;
break;
}

if (der_sig == NULL) {
cout << "sig generation failed" << endl;
break;
}


verify_md_ctx = EVP_MD_CTX_new();


if (!verify_md_ctx) {
cout << "Error md verify context " << endl;;
break;
}

if (EVP_DigestVerifyInit(verify_md_ctx, NULL, 
(parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDSA_SHA256 || 
parent_cert->pub_key_algo == SEV_SIG_ALGO_ECDH_SHA256) ? EVP_sha256(): 
EVP_sha384(), NULL, parent_signing_key) <= 0) {
cout << "Init fails " << endl;
break;
}

if (EVP_DigestVerifyUpdate(verify_md_ctx, (uint8_t 
*)child_cert, pub_key_offset) <= 0){// Calls SHA256_UPDATE
cout << "updating digest fails" << endl;
break;
}

int ret = EVP_DigestVerifyFinal(verify_md_ctx, der_sig, 
der_sig_len);
cout << ret << endl;
if (ret == 0) {
cout << "EC Verify digest fails" << endl;
break;
} else if (ret < 0) {
printf("Failed Final Verify 
%s\n",ERR_error_string(ERR_get_error(),NULL));
cout << "EC Verify error" << endl;
break;
}

found_match = true;
cout << "SEV EC verification Succesful" << endl;

Could it be because I'm creating a ECDSA SIG object and then turning it into a 
der format to verify? Again, suggestions would be appreciated.

Thank you!

Diego Gonzalez Villalobos
--
 

-Original Message-
From: Tomas Mraz  
Sent: Friday, September 23, 2022 1:17 AM
To: GonzalezVillalobos, Diego ; 
openssl-users@openssl.org
Subject: Re: Updating RSA public key generation and signature verification from 
1.1.1 to 3.0

Caution: This message originated from an External Source. Use proper caution 
when opening attachments, clicking links, or responding.


Please look at the answer in this question in GitHub:

https://github.com/openssl/openssl/issues/19219#issuecomment-1247782572

Matt Caswell's answer to very similar question is presented there.

I'm copying the answer here for convenience:

You are attempting to create an EC public key using the "x" and "y"
parameters - but no such parameters exist. The list of available EC parameters 
is on this page:

https://www.openssl.org/docs/man3.0/man7/EVP_PKEY-EC.html

For your purposes you need to use the OSSL_PKEY_PARAM_PUB_KEY parameter
("pub") to supply the public key. It needs to be an octet string with the value 
POINT_CONVERSION_UNCOMPRESSED at the start followed by the x and y co-ords 
concatenated together. For that curve, x and y need to be zero padded to be 32 
bytes long each. There is an example of doing this on the EVP_PKEY_fromdata man 
page. Actually the example is for EVP_PK

RE: Updating RSA public key generation and signature verification from 1.1.1 to 3.0

2022-09-22 Thread GonzalezVillalobos, Diego via openssl-users
<< "EC Verify digest fails" << endl;
break; 
} else if (ret < 0) {
printf("Failed Final Verify 
%s\n",ERR_error_string(ERR_get_error(),NULL));
cout << "EC Verify error" << endl;
break;
}

found_match = true;
cout << "SEV EC verification Succesful" << endl;


My current output when I reach EVP_DigestVerifyFinal is showing this error:
Failed Final Verify error:0395:digital envelope routines::no operation set

I have been playing around with it for a while, but I am stuck at this point. 
Any advice would be appreciated.

Thank you,

Diego Gonzalez Villalobos
--
 

-Original Message-
From: Tomas Mraz  
Sent: Friday, September 9, 2022 10:36 AM
To: GonzalezVillalobos, Diego ; 
openssl-users@openssl.org
Subject: Re: Updating RSA public key generation and signature verification from 
1.1.1 to 3.0

[CAUTION: External Email]

On Thu, 2022-09-08 at 16:10 +, GonzalezVillalobos, Diego via openssl-users 
wrote:
> [AMD Official Use Only - General]
>
> Hello everyone,
>
> I am currently working on updating a signature verification function 
> in C++ and I am a bit stuck. I am trying to replace the deprecated
> 1.1.1 functions to the appropriate 3.0 versions. The function takes in 
> 2 certificate objects (parent and cert), which are not x509 
> certificates, but certificates the company had previously defined.
> Using the contents from parent we create an RSA public key and using 
> the contents from cert we create the digest and grab the signature to 
> verify.
>
> In the 1.1.1 version we were using the RSA Object and the rsa_set0_key 
> function to create the RSA public key and then used RSA_public_decrypt 
> to decrypt the signature and RSA_verify_PKCS1_PSS to verify it. This 
> whole workflow is now deprecated.
>
...
> Is this the correct way of creating RSA keys now? Where is my logic 
> failing? Can the same type of procedure even be done on 3.0? Any 
> advice would be really appreciated.
>

In the original code you seem to be using PSS padding for verification.
Did you try to set the PSS padding on the digest verify context? See 
demos/signature/rsa_pss_hash.c on how to do it.

--
Tomáš Mráz, OpenSSL
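
A minimal sketch of the point Tomas is making -- setting PSS padding on the context returned by EVP_DigestVerifyInit(), along the lines of demos/signature/rsa_pss_hash.c. SHA-256 and a digest-sized salt are assumptions here, not something taken from the original code:

    #include <openssl/evp.h>
    #include <openssl/rsa.h>

    /* pctx is owned by mctx and must not be freed separately */
    int verify_rsa_pss(EVP_PKEY *pkey,
                       const unsigned char *tbs, size_t tbslen,
                       const unsigned char *sig, size_t siglen)
    {
        EVP_MD_CTX *mctx = EVP_MD_CTX_new();
        EVP_PKEY_CTX *pctx = NULL;
        int rc = -1;

        if (mctx == NULL)
            return -1;
        if (EVP_DigestVerifyInit(mctx, &pctx, EVP_sha256(), NULL, pkey) == 1
            && EVP_PKEY_CTX_set_rsa_padding(pctx, RSA_PKCS1_PSS_PADDING) > 0
            && EVP_PKEY_CTX_set_rsa_pss_saltlen(pctx, RSA_PSS_SALTLEN_DIGEST) > 0
            && EVP_DigestVerifyUpdate(mctx, tbs, tbslen) == 1)
            rc = EVP_DigestVerifyFinal(mctx, sig, siglen);  /* 1 = signature OK */

        EVP_MD_CTX_free(mctx);
        return rc;
    }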


RE: Best Practices for private key files handling

2022-09-18 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Michael
> Ströder via openssl-users
> Sent: Sunday, 18 September, 2022 04:27
> 
> On 9/18/22 06:09, Philip Prindeville wrote:
> >> On Sep 15, 2022, at 4:27 PM, Michael Wojcik via openssl-users  us...@openssl.org> wrote:
> >> You still haven't explained your threat model, or what mitigation
> >> the application can take if this requirement is violated, or why
> >> you think this is a "best practice".
>
> > The threat model is impersonation, where the legitimate key has been
> > replaced by someone else's key, and the ensuing communication is
> > neither authentic nor private.
> 
> Maybe I'm ignorant but shouldn't this be prevented by ensuring the
> authenticity and correct identity mapping of the public key?

Exactly. In most protocols the public key, not the private key, authenticates 
the peer.

Relying on file system metadata (!) as the root of trust for authentication, 
particularly for an application that may be running with elevated privileges 
(!!), seems a marvelously poor design.

> > Otherwise, the owners of the system can't claim non-repudiation as to
> > the genuine provenance of communication.

I'm with Peter Gutmann on this. Non-repudiation is essentially meaningless for 
the vast majority of applications. But in any case, filesystem metadata is a 
poor foundation for it.

> More information is needed about how your system is working to comment
> on this.

Indeed. This is far from clear here.


-- 
Michael Wojcik


Re: Best Practices for private key files handling

2022-09-18 Thread Michael Ströder via openssl-users

On 9/18/22 06:09, Philip Prindeville wrote:

On Sep 15, 2022, at 4:27 PM, Michael Wojcik via openssl-users 
 wrote:
You still haven't explained your threat model, or what mitigation
the application can take if this requirement is violated, or why
you think this is a "best practice". >

The threat model is impersonation, where the legitimate key has been
replaced by someone else's key, and the ensuing communication is
neither authentic nor private.


Maybe I'm ignorant but shouldn't this be prevented by ensuring the 
authenticity and correct identity mapping of the public key?


More information is needed about how your system is working to comment 
on this.


Ciao, Michael.



AW: AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

2022-09-16 Thread Andrew Lynch via openssl-users
Understood.  My main reason for telling them is that Google Chrome complains 
bitterly when asked to download an http link from a page that was fetched with 
https.

I hadn't noticed that yesterday because I was analyzing the problem on a Linux 
VM and copy-pasted all the URLs from Chrome on my desktop to wget in the VM.

-Ursprüngliche Nachricht-
Von: openssl-users  Im Auftrag von Viktor 
Dukhovni
Gesendet: Freitag, 16. September 2022 16:22
An: openssl-users@openssl.org
Betreff: Re: AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared 
to 1.0.2?.

On Fri, Sep 16, 2022 at 02:11:38PM +, Andrew Lynch via openssl-users wrote:
...
>
> I’ve also asked my colleagues why the download is http instead of 
> https…

You should look to multiple independent sources to validate the authenticity of 
a trust anchor public key.  Trusting "https" to prove the validity of a WebPKI 
trust anchor is a bit too circular.

Also "https" is redundant for CRL and intermediate CA distribution, since these 
are signed by the issuing CA.  That said, the same ".crt"
file is available via "https":

...

Trust anchor certificates are often delivered as an operating system "package", 
and ideally the package maintainers apply proper due diligence.

--
Viktor.


AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

2022-09-16 Thread Andrew Lynch via openssl-users
Oops, sorry.  The correct intermediate is of course also SN2.

 

http://sm-pkitest.atos.net/cert/Atos-Smart-Grid-Test.CA.2.crt

Fingerprint a0 6d 32 c3 56 7d 8e 20 0f a3 8e d3 d0 0a 04 21 2a 0a 1e ae

 

I’ve also asked my colleagues why the download is http instead of https…

 

Von: openssl-users  Im Auftrag von Andrew 
Lynch via openssl-users
Gesendet: Freitag, 16. September 2022 15:53
An: Corey Bonnell ; openssl-users@openssl.org
Betreff: AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 
1.0.2?.

 

Hi Corey,

 

I believe Victor has explained the issue sufficiently (thanks!).  Just for 
completeness here are the actual root certificates relevant to the question.  
They are part of the German national Smart Metering environment:

 

SM-Test-Root-CA SN1 (O=SM-Test-PKI)

CN=SM-Test-Root.CA, SERIALNUMBER=1, valid until 19.05.2023
SHA256: 97 C2 68 C8 67 D7 6C 0E 13 4C B6 C9 AF F7 A9 E3 BD 9C 4E 30 E1 F6 CB F7 
8E DE 4C 3F 11 A3 8D 4D

https://www.telesec.de/assets/downloads/Smart-Metering-PKI/sm-test-root.ca_sn1.der

 

SM-Test-Root-CA Link-Zertifikat (1>2)

Download

CN=SM-Test-Root.CA, SERIALNUMBER=2, valid until 19.05.2023

SHA256: ED 54 7F 5D F0 BC 41 D9 D7 3D 92 8B 75 FE 7D B9 9C D9 23 31 78 95 BD 26 
BF D2 4A AF DE EF AE 10

https://www.telesec.de/assets/downloads/Smart-Metering-PKI/sm-test-root.ca_sn2_link.der

 

SM-Test-Root-CA SN2

Download

CN=SM-Test-Root.CA, SERIALNUMBER=2, valid until 19.10.2025

SHA256: 1D 77 21 17 16 69 66 41 AA B2 A3 61 5F E7 8E 76 73 C9 0E 16 E0 69 66 71 
47 0F A4 6A 74 FC 18 36

https://www.telesec.de/assets/downloads/Smart-Metering-PKI/sm-test-root.ca_sn2.der

 

(All from 
https://www.telesec.de/de/service/downloads/branchen-und-eco-systeme/.  There 
is an English language downloads page but that does not show the Smart Metering 
PKI section.)

 

Our intermediate CA that issued the end entity certificate is

 

http://sm-pkitest.atos.net/cert/Atos-Smart-Grid-Test.CA.3.crt

Fingerprint 14 f3 d2 f8 cd 00 ca 9d f6 41 ca 5b 10 55 9c d3 ac eb cc 5a

 

The chain Atos-Smart-Grid-Test.CA.3.crt <- sm-test-root.ca_sn2.der is fine.  It 
is a straightforward self-signed root plus intermediate setup.

The chain Atos-Smart-Grid-Test.CA.3.crt <- sm-test-root.ca_sn2_link.der <- 
sm-test-root.ca_sn1.der is problematic because the “link” certificate has SN2 
as subject but SN1 as issuer.  So I believe it is effectively adding another 
intermediate layer which then violates pathlen:1 in sm-test-root.ca_sn1.der.

 

My (naïve) understanding of such link or cross-certified CA certificates is 
that they are intended to help systems that only have SN1 as a trust anchor to 
verify certificates issued by SN2.  But wouldn’t they stumble over pathlen too?

 

My colleague doing the verifying initially had all three sm-test-root.ca 
certificates in his CAfile and OpenSSL 1.1.1 picked the path with the link 
certificate.  Once he removed that everything was fine as the verify then used 
the self-signed SN2 root directly.

 

Regards,

Andrew.

 

Von: Corey Bonnell <corey.bonn...@digicert.com>
Gesendet: Freitag, 16. September 2022 14:23
An: Andrew Lynch <andrew.ly...@atos.net>; openssl-users@openssl.org
Betreff: RE: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

 

Hi Andrew,

Can you provide the actual subject DNs for each certificate? RFC 5280 specifies 
that self-issued certificates (i.e., issuer DN == subject DN) are not 
considered in the pathLen calculation, so knowing whether these certificates 
are self-issued or not may be helpful in better diagnosing the issue.

 

Thanks,

Corey

 

From: openssl-users <openssl-users-boun...@openssl.org> On Behalf Of Andrew Lynch via openssl-users
Sent: Friday, September 16, 2022 4:32 AM
To: openssl-users@openssl.org
Subject: AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

 

So is this a possible bug or a feature of OpenSSL 1.1.1?  (using 1.1.1n right 
now)

 

If I set up the content of CAfile or CApath so that E <- D <- C <- A is the 
only path that can be taken then the validation fails with

 

error 25 at 3 depth lookup: path length constraint exceeded

 

If I create the first root certificate (A) with pathlen:2 instead of pathlen:1 
then validation succeeds

 

user1_cert.pem: OK

Chain:

depth=0: C = DE, O = Test Org, CN = Test User (untrusted)   E

depth=1: C = DE, O = Test Org, CN = Test Sub-CA  D

depth=2: C = DE, O = Test Org, CN = Test Root 2-CA C

depth=3: C = DE, O = Test Org, CN = Test Root 1-CA     A

 

So it appears to me that OpenSSL 1.1.1n is definitely taking the pathlen 
constraint of certificate A into account.

 

Andrew.

 

 

Von: Erwann Abalea <erwann.aba...@docusign.com>
Gesendet: Donnerstag, 15. September

AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

2022-09-16 Thread Andrew Lynch via openssl-users
Hi Corey,

 

I believe Victor has explained the issue sufficiently (thanks!).  Just for 
completeness here are the actual root certificates relevant to the question.  
They are part of the German national Smart Metering environment:

 

SM-Test-Root-CA SN1 (O=SM-Test-PKI)

CN=SM-Test-Root.CA, SERIALNUMBER=1, valid until 19.05.2023
SHA256: 97 C2 68 C8 67 D7 6C 0E 13 4C B6 C9 AF F7 A9 E3 BD 9C 4E 30 E1 F6 CB F7 
8E DE 4C 3F 11 A3 8D 4D

https://www.telesec.de/assets/downloads/Smart-Metering-PKI/sm-test-root.ca_sn1.der

 

SM-Test-Root-CA Link-Zertifikat (1>2)

Download

CN=SM-Test-Root.CA, SERIALNUMBER=2, valid until 19.05.2023

SHA256: ED 54 7F 5D F0 BC 41 D9 D7 3D 92 8B 75 FE 7D B9 9C D9 23 31 78 95 BD 26 
BF D2 4A AF DE EF AE 10

https://www.telesec.de/assets/downloads/Smart-Metering-PKI/sm-test-root.ca_sn2_link.der

 

SM-Test-Root-CA SN2

Download

CN=SM-Test-Root.CA, SERIALNUMBER=2, valid until 19.10.2025

SHA256: 1D 77 21 17 16 69 66 41 AA B2 A3 61 5F E7 8E 76 73 C9 0E 16 E0 69 66 71 
47 0F A4 6A 74 FC 18 36

https://www.telesec.de/assets/downloads/Smart-Metering-PKI/sm-test-root.ca_sn2.der

 

(All from 
https://www.telesec.de/de/service/downloads/branchen-und-eco-systeme/.  There 
is an English language downloads page but that does not show the Smart Metering 
PKI section.)

 

Our intermediate CA that issued the end entity certificate is

 

http://sm-pkitest.atos.net/cert/Atos-Smart-Grid-Test.CA.3.crt

Fingerprint 14 f3 d2 f8 cd 00 ca 9d f6 41 ca 5b 10 55 9c d3 ac eb cc 5a

 

The chain Atos-Smart-Grid-Test.CA.3.crt <- sm-test-root.ca_sn2.der is fine.  It 
is a straightforward self-signed root plus intermediate setup.

The chain Atos-Smart-Grid-Test.CA.3.crt <- sm-test-root.ca_sn2_link.der <- 
sm-test-root.ca_sn1.der is problematic because the “link” certificate has SN2 
as subject but SN1 as issuer.  So I believe it is effectively adding another 
intermediate layer which then violates pathlen:1 in sm-test-root.ca_sn1.der.

 

My (naïve) understanding of such link or cross-certified CA certificates is 
that they are intended to help systems that only have SN1 as a trust anchor to 
verify certificates issued by SN2.  But wouldn’t they stumble over pathlen too?

 

My colleague doing the verifying initially had all three sm-test-root.ca 
certificates in his CAfile and OpenSSL 1.1.1 picked the path with the link 
certificate.  Once he removed that everything was fine as the verify then used 
the self-signed SN2 root directly.

 

Regards,

Andrew.

 

Von: Corey Bonnell  
Gesendet: Freitag, 16. September 2022 14:23
An: Andrew Lynch ; openssl-users@openssl.org
Betreff: RE: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 
1.0.2?.

 

Hi Andrew,

Can you provide the actual subject DNs for each certificate? RFC 5280 specifies 
that self-issued certificates (i.e., issuer DN == subject DN) are not 
considered in the pathLen calculation, so knowing whether these certificates 
are self-issued or not may be helpful in better diagnosing the issue.

 

Thanks,

Corey

 

From: openssl-users <openssl-users-boun...@openssl.org> On Behalf Of Andrew Lynch via openssl-users
Sent: Friday, September 16, 2022 4:32 AM
To: openssl-users@openssl.org
Subject: AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

 

So is this a possible bug or a feature of OpenSSL 1.1.1?  (using 1.1.1n right 
now)

 

If I set up the content of CAfile or CApath so that E <- D <- C <- A is the 
only path that can be taken then the validation fails with

 

error 25 at 3 depth lookup: path length constraint exceeded

 

If I create the first root certificate (A) with pathlen:2 instead of pathlen:1 
then validation succeeds

 

user1_cert.pem: OK

Chain:

depth=0: C = DE, O = Test Org, CN = Test User (untrusted)   E

depth=1: C = DE, O = Test Org, CN = Test Sub-CA  D

depth=2: C = DE, O = Test Org, CN = Test Root 2-CA C

depth=3: C = DE, O = Test Org, CN = Test Root 1-CA     A

 

So it appears to me that OpenSSL 1.1.1n is definitely taking the pathlen 
constraint of certificate A into account.

 

Andrew.

 

 

Von: Erwann Abalea <erwann.aba...@docusign.com>
Gesendet: Donnerstag, 15. September 2022 19:51
An: Andrew Lynch <andrew.ly...@atos.net>
Cc: openssl-users@openssl.org
Betreff: Re: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

 

Assuming that all self-signed certificates are trusted (here, A and B), then 
providing a CAfile with D+C+B+A to validate E, the different possible paths 
are: 

 - E <- D <- B: this path is valid

 - E <- D <- C <- A: this path is valid

 

In the validation algorithm described in RFC5280 and X.509, the 
pathlenConstraints contained in the certificate of the Trust Anchor (here, A or 
B) is not taken into

RE: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

2022-09-16 Thread Corey Bonnell via openssl-users
Hi Andrew,

Can you provide the actual subject DNs for each certificate? RFC 5280 specifies 
that self-issued certificates (i.e., issuer DN == subject DN) are not 
considered in the pathLen calculation, so knowing whether these certificates 
are self-issued or not may be helpful in better diagnosing the issue.

 

Thanks,

Corey

 

From: openssl-users  On Behalf Of Andrew 
Lynch via openssl-users
Sent: Friday, September 16, 2022 4:32 AM
To: openssl-users@openssl.org
Subject: AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 
1.0.2?.

 

So is this a possible bug or a feature of OpenSSL 1.1.1?  (using 1.1.1n right 
now)

 

If I set up the content of CAfile or CApath so that E <- D <- C <- A is the 
only path that can be taken then the validation fails with

 

error 25 at 3 depth lookup: path length constraint exceeded

 

If I create the first root certificate (A) with pathlen:2 instead of pathlen:1 
then validation succeeds

 

user1_cert.pem: OK

Chain:

depth=0: C = DE, O = Test Org, CN = Test User (untrusted)   E

depth=1: C = DE, O = Test Org, CN = Test Sub-CA  D

depth=2: C = DE, O = Test Org, CN = Test Root 2-CA C

depth=3: C = DE, O = Test Org, CN = Test Root 1-CA A

 

So it appears to me that OpenSSL 1.1.1n is definitely taking the pathlen 
constraint of certificate A into account.

 

Andrew.

 

 

Von: Erwann Abalea <erwann.aba...@docusign.com>
Gesendet: Donnerstag, 15. September 2022 19:51
An: Andrew Lynch <andrew.ly...@atos.net>
Cc: openssl-users@openssl.org
Betreff: Re: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

 

Assuming that all self-signed certificates are trusted (here, A and B), then 
providing a CAfile with D+C+B+A to validate E, the different possible paths 
are: 

 - E <- D <- B: this path is valid

 - E <- D <- C <- A: this path is valid

 

In the validation algorithm described in RFC5280 and X.509, the 
pathlenConstraints contained in the certificate of the Trust Anchor (here, A or 
B) is not taken into account. Therefore, the only ones that matter are the 
values set in C and D, and these values are coherent with both chains.

 

 

On Thu, Sep 15, 2022 at 7:34 PM Andrew Lynch via openssl-users
<openssl-users@openssl.org> wrote:

Hi,

 

I would like to have my understanding of the following issue confirmed:

 

Given a two-level CA where the different generations of Root cross-sign each 
other, the verification of an end-entity certificate fails with OpenSSL 1.1.1 – 
“path length constraint exceeded”.  With OpenSSL 1.0.2 the same verify succeeds.

 

All Root CA certificates have Basic Constraints CA:TRUE, pathlen:1.  The Sub CA 
certificate has pathlen:0.

 

A) Issuer: CN=Root CA, serialNumber=1

   Subject: CN=Root CA, serialNumber=1

 

B) Issuer: CN=Root CA, serialNumber=2

   Subject: CN=Root CA, serialNumber=2

 

C) Issuer: CN=Root CA, serialNumber=1

   Subject: CN=Root CA, serialNumber=2

 

D) Issuer: CN=Root CA, serialNumber=2

   Subject: CN=Sub CA, serialNumber=2

 

E) Issuer: CN=Sub CA, serialNumber=2

   Subject: Some end entity

 

With a CAfile containing D, C, B, A in that order the verify of E fails.  If I 
remove the cross certificate C then the verify succeeds.

 

I believe OpenSSL 1.1.1 is building a chain of depth 3 (D – C – A) and so 
pathlen:1 of A is violated.  Without the cross certificate the chain is only 
depth 2 (D – B).

 

Is my understanding of the reason for this failure correct?

Why is OpenSSL 1.0.2 verifying successfully?  Does it not check the path length 
constraint or is it actually picking the depth 2 chain instead of the depth 3?

 

Regards,

Andrew.

 




 

-- 

Cordialement, 

Erwann Abalea.



smime.p7s
Description: S/MIME cryptographic signature


Need Help to check DH_generate_key() functionality

2022-09-16 Thread Priyanka C via openssl-users
Dear OpenSSL Team,

While migrating to OpenSSL 3.0 we are facing an issue with the use of
DH_generate_key(): dh->pub_key comes back NULL.
Logic used is as given below, I have omitted the error handling code.


  *   p and g buffer is of type unsigned char *
  *   p_len is 128 and g_len is 1.

  DH *dh;
dh = DH_new();
dh->params.p = BN_bin2bn(p, p_len, NULL);
dh->params.g = BN_bin2bn(g, g_len, NULL);
DH_generate_key(dh);

I have checked openssl man pages 
(https://www.openssl.org/docs/manmaster/man3/DH_generate_key.html).
According to that, DH_generate_key() expects dh to contain only the shared
parameters p and g, but we are still not able to generate pub_key.

Tried solutions given on following links:
Approach 1:
https://github.com/openssl/openssl/issues/11108
  Used DH_new_by_nid() instead of DH_new() .

Approach 2:
We were skeptical about the values of p and g so tried setting valid values for 
p q and g using DH_set0_pqg().

BIGNUM *a = BN_bin2bn(p, p_len, NULL);
BIGNUM *b = BN_bin2bn(g, g_len, NULL);
DH_set0_pqg(dh, a, NULL, b);

But this did not help, as this set function does not change the q value if NULL is
passed.
We don't have an idea of what a valid value for q would be for us to set.

Approach 3:
Currently working on the solution given on this link, using EVP wrappers for DH 
key generation.
https://www.mail-archive.com/openssl-users@openssl.org/msg88906.html

Please help to look into this and guide with possible solutions.

Thanks,
Priyanka
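
A sketch of that third approach -- importing p and g as DH domain parameters and generating the key pair through the EVP interface. Error handling is trimmed and the helper name dh_keygen_from_pg is illustrative:

    #include <openssl/evp.h>
    #include <openssl/bn.h>
    #include <openssl/core_names.h>
    #include <openssl/param_build.h>

    EVP_PKEY *dh_keygen_from_pg(const unsigned char *p, size_t p_len,
                                const unsigned char *g, size_t g_len)
    {
        BIGNUM *bn_p = BN_bin2bn(p, (int)p_len, NULL);
        BIGNUM *bn_g = BN_bin2bn(g, (int)g_len, NULL);
        OSSL_PARAM_BLD *bld = OSSL_PARAM_BLD_new();
        OSSL_PARAM *params;
        EVP_PKEY_CTX *pctx, *gctx;
        EVP_PKEY *domain = NULL, *key = NULL;

        OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_FFC_P, bn_p);
        OSSL_PARAM_BLD_push_BN(bld, OSSL_PKEY_PARAM_FFC_G, bn_g);
        params = OSSL_PARAM_BLD_to_param(bld);

        /* import p and g as DH domain parameters */
        pctx = EVP_PKEY_CTX_new_from_name(NULL, "DH", NULL);
        EVP_PKEY_fromdata_init(pctx);
        EVP_PKEY_fromdata(pctx, &domain, EVP_PKEY_KEY_PARAMETERS, params);

        /* generate a key pair on those parameters */
        gctx = EVP_PKEY_CTX_new_from_pkey(NULL, domain, NULL);
        EVP_PKEY_keygen_init(gctx);
        EVP_PKEY_generate(gctx, &key);

        /* the public key can then be read back with
         * EVP_PKEY_get_bn_param(key, OSSL_PKEY_PARAM_PUB_KEY, &pub) */
        EVP_PKEY_CTX_free(gctx);
        EVP_PKEY_CTX_free(pctx);
        EVP_PKEY_free(domain);
        OSSL_PARAM_free(params);
        OSSL_PARAM_BLD_free(bld);
        BN_free(bn_p);
        BN_free(bn_g);
        return key;
    }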



AW: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

2022-09-16 Thread Andrew Lynch via openssl-users
So is this a possible bug or a feature of OpenSSL 1.1.1?  (using 1.1.1n right 
now)

If I set up the content of CAfile or CApath so that E <- D <- C <- A is the 
only path that can be taken then the validation fails with

error 25 at 3 depth lookup: path length constraint exceeded

If I create the first root certificate (A) with pathlen:2 instead of pathlen:1 
then validation succeeds

user1_cert.pem: OK
Chain:
depth=0: C = DE, O = Test Org, CN = Test User (untrusted)   E
depth=1: C = DE, O = Test Org, CN = Test Sub-CA  D
depth=2: C = DE, O = Test Org, CN = Test Root 2-CA C
depth=3: C = DE, O = Test Org, CN = Test Root 1-CA A

So it appears to me that OpenSSL 1.1.1n is definitely taking the pathlen 
constraint of certificate A into account.

Andrew.


Von: Erwann Abalea 
Gesendet: Donnerstag, 15. September 2022 19:51
An: Andrew Lynch 
Cc: openssl-users@openssl.org
Betreff: Re: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 
1.0.2?.

Assuming that all self-signed certificates are trusted (here, A and B), then 
providing a CAfile with D+C+B+A to validate E, the different possible paths are:
 - E <- D <- B: this path is valid
 - E <- D <- C <- A: this path is valid

In the validation algorithm described in RFC5280 and X.509, the 
pathlenConstraints contained in the certificate of the Trust Anchor (here, A or 
B) is not taken into account. Therefore, the only ones that matter are the 
values set in C and D, and these values are coherent with both chains.


On Thu, Sep 15, 2022 at 7:34 PM Andrew Lynch via openssl-users 
<openssl-users@openssl.org> wrote:
Hi,

I would like to have my understanding of the following issue confirmed:

Given a two-level CA where the different generations of Root cross-sign each 
other, the verification of an end-entity certificate fails with OpenSSL 1.1.1 – 
“path length constraint exceeded”.  With OpenSSL 1.0.2 the same verify succeeds.

All Root CA certificates have Basic Constraints CA:TRUE, pathlen:1.  The Sub CA 
certificate has pathlen:0.

A) Issuer: CN=Root CA, serialNumber=1
   Subject: CN=Root CA, serialNumber=1

B) Issuer: CN=Root CA, serialNumber=2
   Subject: CN=Root CA, serialNumber=2

C) Issuer: CN=Root CA, serialNumber=1
   Subject: CN=Root CA, serialNumber=2

D) Issuer: CN=Root CA, serialNumber=2
   Subject: CN=Sub CA, serialNumber=2

E) Issuer: CN=Sub CA, serialNumber=2
   Subject: Some end entity

With a CAfile containing D, C, B, A in that order the verify of E fails.  If I 
remove the cross certificate C then the verify succeeds.

I believe OpenSSL 1.1.1 is building a chain of depth 3 (D – C – A) and so 
pathlen:1 of A is violated.  Without the cross certificate the chain is only 
depth 2 (D – B).

Is my understanding of the reason for this failure correct?
Why is OpenSSL 1.0.2 verifying successfully?  Does it not check the path length 
constraint or is it actually picking the depth 2 chain instead of the depth 3?

Regards,
Andrew.



--
Cordialement,
Erwann Abalea.


RE: Best Practices for private key files handling

2022-09-15 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Philip
> Prindeville
> Sent: Thursday, 15 September, 2022 15:41

> I was thinking of the case where the directory containing the keys (as
> configured) is correctly owned, but contains a symlink pointing outside of
> that directory somewhere else... say to a file owned by an ordinary user.
> 
> In that case, as has been pointed out, it might be sufficient to just pay
> attention to the owner/group/modes of the file and reject them if:
> 
> (1) the file isn't 600 or 400;
> (2) the file isn't owned by root or the app-id that the app runs at.

#2 is irrelevant if #1 holds and the application isn't running as root. And if 
the application doesn't need to run with elevated privileges, it shouldn't be 
run with elevated privileges.

You still haven't explained your threat model, or what mitigation the 
application can take if this requirement is violated, or why you think this is 
a "best practice".

It's true there's potentially some benefit to warning an administrator even 
after the fact if some violation of key hygiene is detected, but whether that's 
a "best practice" (and, for that matter, the extent to which file permissions 
constitute evidence of such a violation), much less whether an application 
should fail in some manner when it's detected, is certainly debatable.

-- 
Michael Wojcik


Re: Best Practices for private key files handling

2022-09-15 Thread Shawn Heisey via openssl-users

On 9/15/22 15:40, Philip Prindeville wrote:

I was thinking of the case where the directory containing the keys (as 
configured) is correctly owned, but contains a symlink pointing outside of that 
directory somewhere else... say to a file owned by an ordinary user.

In that case, as has been pointed out, it might be sufficient to just pay 
attention to the owner/group/modes of the file and reject them if:

(1) the file isn't 600 or 400;
(2) the file isn't owned by root or the app-id that the app runs at.

Do we agree on that?


Yes, that sounds very good.

That's the potential problem with symlinks.  Rarely should they ever 
point to something that is under the control of an unprivileged user.  
Exceptions might be in cases where you actually do want a configuration 
for that user to come from a directory that they control ... but that 
should only be done in situations where that input is considered 
untrusted and is stringently validated and sanitized before it is used.


If symlinks are used responsibly, they won't have security risks. In 
general, if the program checks the ownership and permissions of the 
actual file before using it, it shouldn't matter whether there is a 
symlink or not.
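
As a concrete (if hedged) illustration of checking the actual file: fstat() the 
already-opened descriptor, so the ownership/permission test applies to whatever 
file will really be read, symlink or not. The function name and the exact 
policy (owner root or the current euid, mode 0600/0400) are only assumptions 
for the example:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch: open first, then check the opened descriptor with fstat(), so
     * the test is done on the file actually being read. */
    static FILE *open_key_checked(const char *path)
    {
        int fd = open(path, O_RDONLY);
        struct stat st;

        if (fd < 0)
            return NULL;
        if (fstat(fd, &st) != 0
                || !S_ISREG(st.st_mode)
                || (st.st_uid != 0 && st.st_uid != geteuid())
                || (st.st_mode & 07177) != 0) {   /* anything beyond u+rw */
            close(fd);
            return NULL;
        }
        return fdopen(fd, "r");                   /* caller fclose()s it */
    }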


Thanks,
Shawn



Re: [EXTERNAL] Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?.

2022-09-15 Thread Erwann Abalea via openssl-users
Assuming that all self-signed certificates are trusted (here, A and B),
then providing a CAfile with D+C+B+A to validate E, the different possible
paths are:
 - E <- D <- B: this path is valid
 - E <- D <- C <- A: this path is valid

In the validation algorithm described in RFC5280 and X.509, the
pathlenConstraints contained in the certificate of the Trust Anchor (here,
A or B) is not taken into account. Therefore, the only ones that matter are
the values set in C and D, and these values are coherent with both chains.


On Thu, Sep 15, 2022 at 7:34 PM Andrew Lynch via openssl-users <
openssl-users@openssl.org> wrote:

> Hi,
>
>
>
> I would like to have my understanding of the following issue confirmed:
>
>
>
> Given a two-level CA where the different generations of Root cross-sign
> each other, the verification of an end-entity certificate fails with
> OpenSSL 1.1.1 – “path length constraint exceeded”.  With OpenSSL 1.0.2 the
> same verify succeeds.
>
>
>
> All Root CA certificates have Basic Constraints CA:TRUE, pathlen:1.  The
> Sub CA certificate has pathlen:0.
>
>
>
> A) Issuer: CN=Root CA, serialNumber=1
>
>Subject: CN=Root CA, serialNumber=1
>
>
>
> B) Issuer: CN=Root CA, serialNumber=2
>
>Subject: CN=Root CA, serialNumber=2
>
>
>
> C) Issuer: CN=Root CA, serialNumber=1
>
>Subject: CN=Root CA, serialNumber=2
>
>
>
> D) Issuer: CN=Root CA, serialNumber=2
>
>Subject: CN=Sub CA, serialNumber=2
>
>
>
> E) Issuer: CN=Sub CA, serialNumber=2
>
>Subject: Some end entity
>
>
>
> With a CAfile containing D, C, B, A in that order the verify of E fails.
> If I remove the cross certificate C then the verify succeeds.
>
>
>
> I believe OpenSSL 1.1.1 is building a chain of depth 3 (D – C – A) and so
> pathlen:1 of A is violated.  Without the cross certificate the chain is
> only depth 2 (D – B).
>
>
>
> Is my understanding of the reason for this failure correct?
>
> Why is OpenSSL 1.0.2 verifying successfully?  Does it not check the path
> length constraint or is it actually picking the depth 2 chain instead of
> the depth 3?
>
>
>
> Regards,
>
> Andrew.
>
>
>


-- 
Cordialement,
Erwann Abalea.


Stricter pathlen checks in OpenSSL 1.1.1 compared to 1.0.2?

2022-09-15 Thread Andrew Lynch via openssl-users
Hi,

I would like to have my understanding of the following issue confirmed:

Given a two-level CA where the different generations of Root cross-sign each 
other, the verification of an end-entity certificate fails with OpenSSL 1.1.1 - 
"path length constraint exceeded".  With OpenSSL 1.0.2 the same verify succeeds.

All Root CA certificates have Basic Constraints CA:TRUE, pathlen:1.  The Sub CA 
certificate has pathlen:0.

A) Issuer: CN=Root CA, serialNumber=1
   Subject: CN=Root CA, serialNumber=1

B) Issuer: CN=Root CA, serialNumber=2
   Subject: CN=Root CA, serialNumber=2

C) Issuer: CN=Root CA, serialNumber=1
   Subject: CN=Root CA, serialNumber=2

D) Issuer: CN=Root CA, serialNumber=2
   Subject: CN=Sub CA, serialNumber=2

E) Issuer: CN=Sub CA, serialNumber=2
   Subject: Some end entity

With a CAfile containing D, C, B, A in that order the verify of E fails.  If I 
remove the cross certificate C then the verify succeeds.

I believe OpenSSL 1.1.1 is building a chain of depth 3 (D - C - A) and so 
pathlen:1 of A is violated.  Without the cross certificate the chain is only 
depth 2 (D - B).

Is my understanding of the reason for this failure correct?
Why is OpenSSL 1.0.2 verifying successfully?  Does it not check the path length 
constraint or is it actually picking the depth 2 chain instead of the depth 3?

Regards,
Andrew.



Re: Best Practices for private key files handling

2022-09-13 Thread Shawn Heisey via openssl-users

On 9/13/22 14:17, Philip Prindeville wrote:

But what happens when the file we encounter is a symlink?  If the symlink is 
owned by root but the target isn't, or the target permissions aren't 0600 or 
0400...  Or the target is a symlink, or there's a symlink somewhere in the 
target path, etc.

So... what's the Best Practices list for handling private key materials?  Has 
anyone fleshed this out?


This is not really related to openssl, but I will tell you what you are 
likely to hear in another setting:


In most cases, applications are not really aware of symlinks, unless 
they have been explicitly written to treat them differently than regular 
files or directories.  Some software can choose to not follow symlinks, 
but usually when that is possible, the program has a configuration 
option to enable/disable that functionality.


All symlinks I have ever seen on POSIX systems have 777 permissions, and 
MOST of the symlinks I have seen have root:root ownership.  I've never 
seen a situation where the ownership of the link itself has any bearing 
on whether the target location can be accessed.  I'm not going to 
unilaterally claim that isn't possible, but I have never seen it.


Best practices do not change when there are symlinks involved, unless 
the software refuses to follow symlinks.  Anything that would apply to a 
real file/directory would apply to the target of a symlink.  My own best 
practices about private keys:  They should only be readable by root and 
whatever user/group is active when software needs to use them.  They 
should definitely not be writable by any user other than root.  Some 
software starts as root to handle security stuff, then throws away the 
elevated permissions and runs as an unprivileged user.  Apache httpd is 
a prime example of this.


You might be concerned that with 777 permissions, a symlink can be 
modified by anyone ... but I am about 98 percent sure that is not the 
case when proper permissions are used.  I believe that a symlink can 
only be modified by a user that has write permission to the directory 
containing the symlink.


Properly implemented, symlinks do not reduce security, but any tool can 
be misused.  If you have a situation where a symlink presents a security 
concern, it probably means someone did it wrong.


Thanks,
Shawn



RE: Best Practices for private key files handling

2022-09-13 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Philip
> Prindeville
> Sent: Tuesday, 13 September, 2022 14:17
> 
> I'm working on a bug in an application where the application config is given
> the directory path in which to find a key-store, which it then loads.
> 
> My issue is this: a regular UNIX file is trivial to handle (make sure it's
> owned by "root" or the uid that the app runs at, and that it's 0600 or 0400
> permissions... easy-peasy).
> 
> But what happens when the file we encounter is a symlink?

You read the target. What's the problem?

>  If the symlink is
> owned by root but the target isn't, or the target permissions aren't 0600 or
> 0400...

So what?

You can use lstat if you're really worried about symlinks, but frankly I'm not 
seeing the vulnerability, at least at first blush. What's the threat model?
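
(For reference, a rough sketch of the kind of lstat()-based check being 
discussed; the function name and the policy details are illustrative 
assumptions, not anything OpenSSL itself provides:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch: lstat() reports a symlink as itself, so anything that is not a
     * plain regular file with tight ownership and permissions is rejected. */
    static int key_path_looks_sane(const char *path)
    {
        struct stat st;

        if (lstat(path, &st) != 0) {
            perror(path);
            return 0;
        }
        if (!S_ISREG(st.st_mode))       /* rejects symlinks, fifos, etc. */
            return 0;
        if (st.st_uid != 0 && st.st_uid != geteuid())
            return 0;
        return (st.st_mode & 07777) == 0600 || (st.st_mode & 07777) == 0400;
    }

Whether such a check is worth having is, of course, exactly the question at 
hand.)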

This is reading a private key, not writing one, so there's no exfiltration 
issue simply from reading the file.

Suppose an attacker replaces the private key file, or the contents of the file. 
So what? Either the attacker is in a privileged position and so can satisfy 
your ownership and permissions checks; or the attacker isn't, and ... you read 
a private key that either is the correct one (i.e. corresponds to the public 
key in the certificate), and so there's no problem; or it isn't, and you can't 
use the certificate, and you fail safe.

Is this check meant to alert an administrator to a possibly-compromised, or 
prone-to-compromise, private key? Because if so, 1) it's too late, 2) a 
privileged attacker can trivially prevent it, and 3) why is that your job 
anyway?

It's also not clear to me why symbolic links are toxic under your threat model.

It's entirely possible I'm missing something here, but my initial impression is 
that these checks are of little value anyway. Can you explain what problem 
you're trying to solve?

-- 
Michael Wojcik

