Re: Upgrading OpenSSL on Windows 10

2022-11-25 Thread Michael Wojcik via openssl-users
> From: Steven_M.irc
> Sent: Thursday, November 24, 2022 21:21

> > This is not true in the general case. There are applications which are 
> > available on Linux which do not use the
> > distribution's package manager. There are applications which use their own 
> > OpenSSL build, possibly linked
> > statically or linked into one of their own shared objects or with the 
> > OpenSSL shared objects renamed. Linux
> > distributions have not magically solved the problem of keeping all software 
> > on the system current.

> That's disheartening. My next computer will be running Linux and I was 
> thinking that (as long as I stick to
> installing software from appropriate repositories) my update worries would be 
> over soon.

That's the state of general-purpose software development. Believe me, having 
software automatically updated would by no means solve the most pressing 
security issues in the current software industry.
 
> > It is possible, with relatively little effort, to find all the copies of 
> > the OpenSSL DLLs under their usual names on a system

> Could you please provide me with a list of the usual names?

At the moment I'm not in a position to do that, and it wouldn't achieve 
anything useful anyway.

> I've got a lot of libssl DLL's on my system, but I'm not sure if they're part 
> of OpenSSL or some other implementation
> of SSL.

Filenames wouldn't prove anything anyway.

> >I'm not sure OpenSSL versions should be particularly high on anyone's 
> >priority list.

> As I understand it, OpenSSL is responsible for establishing HTTPS 
> connections, the primary protocol
> for ensuring security and authenticity over the Internet, and you *don't* 
> think OpenSSL versions should
> be a high priority? I don't understand your lack of alarm here.

I'm not alarmed because I'm operating under a sensible threat model.

What vulnerabilities are you concerned about? Why? What versions of OpenSSL do 
those apply to? Being "alarmed" without being able to answer those questions 
just means you're shooting in the dark.

Frankly, after 2014 -- the year that brought us Heartbleed, Goto Fail, and 
severe vulnerabilities in most major TLS implementations -- there have been few 
published vulnerabilities of much concern to client-side TLS use, and most of 
those apply only to very high-value targets. TLS connections are not the 
low-hanging fruit. Attackers get much better return at much lower cost by 
exploiting other vulnerabilities: on the user side, phishing and other 
social-engineering attacks, typosquatting, credential stuffing, and so on; on 
the service-provider side, software supply-chain attacks and poor 
organizational defenses.

Very few people will bother attacking HTTPS at the protocol level. It's not 
worth the effort.

> >What are you actually trying to accomplish? What's your task? Your threat 
> >model?

> I want to be able to trust the HTTPS connections between my PC and servers on 
> the Internet again;

"Again" since when? "Trust" in what sense? "Trust", like "secure", doesn't mean 
anything useful in an absolute sense. It's only meaningful in the context of a 
threat model.

For a typical computer user, TLS implementations are the wrong thing to worry 
about. Most home and employee individual users who are successfully attacked 
will fall victim to some sort of social engineering, such as phishing; to poor 
personal security practices such as weak passwords or password reuse; or to a 
server-side compromise they have absolutely no control over. Some will be 
compromised due to a failure to install updates to the OS or major software 
packages such as Microsoft Office long after those updates are released, but 
that's a less-common vector.

HTTPS compromise is statistically insignificant. In the vast majority of cases, 
the danger with HTTPS lies in what people use it for -- online shopping at sites 
with poor security, for example, or downloading malicious software -- not in 
the channel itself.

-- 
Michael Wojcik

Re: Upgrading OpenSSL on Windows 10

2022-11-21 Thread Michael Wojcik via openssl-users
> From: openssl-users  on behalf of 
> Steven_M.irc via openssl-users 
> Sent: Monday, November 21, 2022 15:56
 
> However, I am running Windows 10, and since (unlike Linux) every piece of 
> software outside of Windows itself
> needs to be updated individually, I don't know how to track down every single 
> application that might be using
> OpenSSL and make sure that the copy of OpenSSL it uses is up-to-date.

You don't. There may be applications that have OpenSSL linked statically, or 
linked into one of their own DLLs, or just with the OpenSSL DLLs renamed.

> As many of you would know, under repository-based systems (such as most Linux 
> distros), this would not be an
> issue as I could update every single application (system or non-system) at 
> once.

This is not true in the general case. There are applications which are 
available on Linux which do not use the distribution's package manager. There 
are applications which use their own OpenSSL build, possibly linked statically 
or linked into one of their own shared objects or with the OpenSSL shared 
objects renamed. Linux distributions have not magically solved the problem of 
keeping all software on the system current.


Back to Windows: It is possible, with relatively little effort, to find all the 
copies of the OpenSSL DLLs under their usual names on a system, and then glean 
from them their version information. With significantly more effort, you can 
search for exported OpenSSL symbols within third-party binaries, which will 
detect some more instances. With quite a lot of additional effort, you can 
winkle out binaries which contain significant portions of code matching some 
OpenSSL release (see various research efforts on function-point and code-block 
matching, and compare with alignment strategies in other fields, such as 
genomics). If your definition of "OpenSSL in an application" is not too 
ambitious, this might even be feasible.
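
For illustration, a minimal sketch of the "glean version information" step,
assuming a Windows toolchain; the DLL path is hypothetical. Note that
LoadLibrary runs the DLL's initialization code, so only probe binaries you
trust (parsing the file's version resource instead would avoid that):

    /* Query the version string from a candidate OpenSSL DLL. */
    #include <stdio.h>
    #include <windows.h>

    typedef const char *(*version_fn)(int);

    int main(void)
    {
        /* Hypothetical path -- substitute whatever your scan turned up. */
        HMODULE mod = LoadLibraryA("C:\\some\\app\\libssl-3.dll");
        version_fn fn;

        if (mod == NULL)
            return 1;
        /* 1.1.0+ exports OpenSSL_version(); 1.0.x exported SSLeay_version(). */
        fn = (version_fn)GetProcAddress(mod, "OpenSSL_version");
        if (fn == NULL)
            fn = (version_fn)GetProcAddress(mod, "SSLeay_version");
        if (fn != NULL)
            printf("%s\n", fn(0));  /* 0 == OPENSSL_VERSION in both APIs */
        FreeLibrary(mod);
        return 0;
    }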

But to what end? Each application is either well-supported, in which case you 
can find out from the vendor what OpenSSL version it contains and whether an 
update is available; or it is not, in which case you'll be out of luck.

This is true of essentially every software component, most of which are not as 
well-maintained or monitored as OpenSSL. Modern software development is mostly 
a haphazard hodgepodge of accumulating software of uncertain provenance and 
little trustworthiness into enormous systems with unpredictable behavior and 
failure modes. I'm not sure OpenSSL versions should be particularly high on 
anyone's priority list.

What are you actually trying to accomplish? What's your task? Your threat model?

-- 
Michael Wojcik

RE: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-05 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of raf via
> openssl-users
> Sent: Friday, 4 November, 2022 18:54
> 
> On Wed, Nov 02, 2022 at 06:29:45PM +, Michael Wojcik via openssl-users
>  wrote:
> 
> >
> > I'm inclined to agree. While there's an argument for backward compatibility,
> > C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is
> > younger than C99. It doesn't seem like an unreasonable requirement.
> 
> Would this be a choice between backwards-compatibility with C90
> compilers and compatibility with 32-bit architectures?

I don't see how.

It's a question of the C implementation, not the underlying architecture. A C 
implementation for a 32-bit system can certainly provide a 64-bit integer type. 
If that C implementation conforms to C99 or later, it ought to do so using long 
long and unsigned long long. (I'm excluding C implementations for exotic 
systems where, for example, CHAR_BIT != 8, such as some DSPs; those aren't 
going to be viable targets for OpenSSL anyway.)

> Is there another way to get 64-bit integers on 32-bit systems?

Sure. There's a standard one, which is to include <stdint.h> and use int64_t 
and uint64_t. That also requires C99 or later and an implementation which 
provides those types; the exact-width types are optional even in C99.
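
A minimal C99 fragment, for illustration -- this compiles and performs 64-bit
arithmetic on a conforming 32-bit implementation that provides the
exact-width types:

    #include <inttypes.h>  /* PRIu64; C99 says this includes <stdint.h> */
    #include <stdio.h>

    int main(void)
    {
        uint64_t x = UINT64_C(1) << 40;  /* 64-bit math even if long is 32 bits */
        printf("x = %" PRIu64 "\n", x);
        return 0;
    }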

And for some implementations there are implementation-specific extensions, 
which by definition are not standard.

And you can roll your own. In a non-OO language like C, this would be intrusive 
for the parts of the source base that rely on a 64-bit integer type.

> I suspect that that there are more 32-bit systems than there are
> C90 compilers.

Perhaps, but I don't think it's relevant here. In any case, OpenSSL is not in 
the business of supporting every platform and C implementation in existence. 
There are the platforms supported by the project, and there are contributed 
platforms which are included in the code base and supported by the community 
(hopefully), and there are unsupported platforms.

If someone wants OpenSSL on an unsupported platform, then it's up to them to do 
the work.

-- 
Michael Wojcik


RE: OpenSSL 3.0.7 make failure on Debian 10 (buster)

2022-11-04 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Matt
> Caswell
> Sent: Friday, 4 November, 2022 06:43
> 
> This looks like something environmental rather than a problem with
> OpenSSL itself. /usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h
> is clearly a system include file, trying to include some other system
> include file ("recurse down to the real one") which it is failing to find.

Specifically, limits.h is part of the C standard library (see e.g. ISO 
9899:1999 7.10). This is a GCC issue; there's something wrong with John's GCC 
installation, or how his environment configures it.

GCC often appears to have adopted "too clever by half" as a design goal.

-- 
Michael Wojcik


RE: SSL_read empty -> close?

2022-11-03 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Thursday, 3 November, 2022 10:43
> >
> > And your description looks wrong anyway: shutdown(SHUT_RD) has
> > implementation-defined behavior for TCP sockets (because TCP does not
> > announce the read side of half-close to the peer), and on Linux causes
> > blocked receives and subsequent receives to return 0 (according to 
> > references
> 
> perl -MSocket -MIO::Socket::INET -e'my $s = IO::Socket::INET->new( Server =>
> 1, Listen => 1 ) or die; my $port = $s->sockport(); my $c = IO::Socket::INET-
> >new("localhost:$port") or die; syswrite $c, "hello"; my $sc = $s->accept();
> shutdown($sc, SHUT_RD); sysread $sc, my $buf, 512 or die $!; print $buf'
> 
> ^^ The above, I believe, demonstrates to the contrary: the read buffer is
> populated prior to shutdown and drained afterward.

As I noted, I hadn't tested it. The Linux man page is ambiguous:

   If how is SHUT_RD, further receptions will be disallowed.

It doesn't define "receptions". It's entirely possible that SHUT_RD will cause 
the stack to reject further application data (i.e. packets that increment the 
sequence number for anything other than ACK) from the peer, but permit the 
socket owner to continue to receive already-buffered data. That's arguably a 
poor implementation, and not what the man page appears to imply. And it looks 
to be in conflict with the Single UNIX Specification Issue 7 (not that Linux 
claims to be UNIX-conformant), which states that SHUT_RD "Disables further 
receive operations"; "operations" certainly seems to refer to actions taken by 
the caller, not by the peer.

There is a fair bit of debate about this online, and a number of people opine 
that the Linux behavior is correct, and SUS (they often refer to "POSIX", but 
POSIX has been superseded by SUS) is wrong. Others disagree.

The Linux kernel does take some action for a TCP socket that has SHUT_RD 
requested for it, but the behavior is not simple. (One SO comment mentions it 
causes it to exit the read loop in tcp_splice_read(), for example.) I'd be 
leery about relying on it.

I'm not sure how shutdown(SHUT_RD) is useful in the case of a TCP socket being 
used for TLS, to be perfectly honest. If the application protocol delimits 
messages properly and is half-duplex (request/response), then one side should 
know that no more data is expected and the other can detect incomplete 
messages, so there's likely no issue. If not, there's no way to guarantee you 
haven't encountered an incomplete message in bounded time (the FLP theorem 
applies). SHUT_RD does not signal the peer, so the peer can still get a RST if 
it continues to send. Perhaps I'm missing something, but I don't see what 
failure mode is being avoided by using SHUT_RD.

-- 
Michael Wojcik


RE: SSL_read empty -> close?

2022-11-03 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Thursday, 3 November, 2022 08:51
> 
> You probably know this, but: On Linux, at least, if a TCP socket close()s
> with a non-empty read buffer, the kernel sends TCP RST to the peer.

Yes, that's a conditional-compliance (SHOULD) requirement from the Host 
Requirements RFC. See RFC 1122, 4.2.2.13.

> Some
> applications “panic” when they receive the RST and discard data.

Well, applications do a lot of things. Receiving an RST tells the receiver that 
some of the data it sent may not have been successfully processed by the remote 
application, so treating that as an error condition is not inappropriate.

But generally it's better if the application protocol imposes its own record 
structure and control information on top of TCP's very basic stream.

> It’s a rare
> issue, but when it does it’s a head-scratcher. To avoid that, it’s necessary
> to shutdown(SHUT_RD) then drain the read buffer before close().

Well, it's not *necessary* to do a half-close. Applications often know when 
they've received all the data the peer intends to send, thanks to 
record-delimiting mechanisms in the application protocol.

And your description looks wrong anyway: shutdown(SHUT_RD) has 
implementation-defined behavior for TCP sockets (because TCP does not announce 
the read side of half-close to the peer), and on Linux causes blocked receives 
and subsequent receives to return 0 (according to references -- I haven't tested 
it), which means after shutdown(SHUT_RD) you *can't* drain the receive buffer. 
shutdown(SHUT_WR) would work, since it sends a FIN, telling the peer you won't 
be sending any more data, and still allows you to receive.
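
A sketch of that pattern for a plain TCP socket (POSIX; error handling
trimmed):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Signal EOF with a FIN, then drain whatever the peer still sends,
     * so a later close() doesn't trigger the RST described above. */
    static int drain_then_close(int fd)
    {
        char buf[4096];

        if (shutdown(fd, SHUT_WR) != 0)  /* half-close: we can still receive */
            return -1;
        while (read(fd, buf, sizeof buf) > 0)
            ;                            /* discard (or process) remaining data */
        return close(fd);
    }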

> So it seems like this *shouldn’t* be obscure, if applications do the
> shutdown/drain thing.

It's obscure in the sense that a great many people trying to use TLS get much 
more basic things wrong.

More generally, the OpenSSL documentation mostly covers the OpenSSL APIs, and 
leaves networking up to the OpenSSL consumer to figure out. The OpenSSL wiki 
covers topics that people have written, and those are going to focus on common 
questions and areas of particular interest for someone. If the interactions 
among the OpenSSL API, the TLS protocol (in its various versions), and the 
shutdown system call haven't historically been a problem for many people, then 
it's "obscure" in the literal sense of not having 
attracted much notice.

And in practice, the majority of TLS use is with HTTP, and HTTP does a fairly 
good job of determining when more data is expected, and handling cases where it 
isn't. An HTTP client that receives a complete response and then attempts to 
use the conversation for its next request, and gets an RST on that, for 
example, will just open a new conversation; it doesn't care that the old one 
was terminated. HTTP servers are similarly tolerant because interactive user 
agents in particular cancel requests by closing (or, unfortunately, aborting) 
the connection all the time.

> I would guess that many don’t and just don’t see the
> RST thing frequently enough to worry about it. Regardless, the documentation
> is already pretty voluminous, so if this doesn’t bite many folks, then hey.

Yes, but wiki articles are always appreciated.

-- 
Michael Wojcik


RE: SSL_read empty -> close?

2022-11-03 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Thursday, 3 November, 2022 07:42
> 
> It sounds, then like shutdown() (i.e., TCP half-close) is a no-no during a
> TLS session.

Um, maybe. Might generally be OK in practice, particularly with TLSv1.3, which 
got rid of some of the less-well-considered ideas of earlier TLS versions. 
Honestly I'd have to spend some time digging through chapter & verse of the 
RFCs to arrive at any reliable opinion on the matter, though. Someone else here 
may have already considered it.

> Does OpenSSL’s documentation mention that? (I’m not exhaustively
> familiar with it, but I don’t remember having seen such.)

I doubt it. I don't see anything on the wiki, and this is a pretty obscure 
issue, all things considered.

> It almost seems like, given that TLS notify-close then TCP close() (i.e.,
> without awaiting the peer’s TLS notify-close) is legitimate, OpenSSL could
> gainfully tolerate/hide the EPIPE that that close() likely produces, and have
> SSL_read() et al just return empty-string.

Well, it could, but OpenSSL generally doesn't try to provide that type of 
abstraction.

Also note this paragraph from the wiki page on TLSv1.3 
(https://wiki.openssl.org/index.php/TLS1.3):

   If a client sends it's [sic] data and directly sends the close
   notify request and closes the connection, the server will still
   try to send tickets if configured to do so. Since the connection
   is already closed by the client, this might result in a write
   error and receiving the SIGPIPE signal. The write error will be
   ignored if it's a session ticket. But server applications can
   still get SIGPIPE they didn't get before.

So session tickets can also be a source of EPIPE when a client closes the 
connection.

> It surprises me that notify-close then close() is considered legitimate use.

There are so many TLS implementations and TLS-using applications out there that 
interoperability would be hugely compromised if we didn't allow a large helping 
of Postel's Interoperability Principle. So most applications try to be 
accommodating. There's even an OpenSSL flag to ignore the case where a peer 
closes without sending a close-notify, in case you run into one of those and 
want to suppress the error.
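
For reference, in OpenSSL 3.0 that option is SSL_OP_IGNORE_UNEXPECTED_EOF,
set like any other context option:

    #include <openssl/ssl.h>

    /* Treat a peer that closes without sending close_notify as a normal
     * EOF rather than an error (flag added in OpenSSL 3.0). */
    static void tolerate_truncation(SSL_CTX *ctx)
    {
        SSL_CTX_set_options(ctx, SSL_OP_IGNORE_UNEXPECTED_EOF);
    }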

-- 
Michael Wojcik


RE: Worried about the vulnerabilities recently found in OpenSSL versions 3.0.0 - 3.0.6.

2022-11-03 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of
> Steven_M.irc via openssl-users
> Sent: Wednesday, 2 November, 2022 17:18
> 
> I'm really worried about the vulnerabilities recently found in OpenSSL
> versions 3.0.0 - 3.0.6.

Why? What's your threat model?

> If I understand things correctly (and please do
> correct me if I'm wrong), it doesn't matter which version of OpenSSL clients
> are running, only which version of OpenSSL *servers* are running. Thus it
> seems like end-users can do very little to protect themselves.

Protect themselves from what?

Take the most recent issues, CVE-2022-3786 and -3602. 3786 is a potential 
4-byte buffer overflow when parsing an email address component of a 
distinguished name in a certificate. (Note, contrary to what you wrote above, 
this could affect both servers and clients, since it would be triggered by 
parsing a malformed certificate.) This is probably not exploitable, per the 
OpenSSL blog post and analyses performed elsewhere, but let's imagine the worst 
case: OpenSSL 3.0.6 running on some platform where it's possible to leverage 
this BOF into an RCE.

If that's a server system, then:
1) If the server doesn't request client certificates, it should reject a 
Certificate message from the client, and not try to parse any, so there's no 
exposure.
2) We'll assume *you* aren't going to send a malicious certificate, so for your 
connection the vulnerability is irrelevant.
3) So the only case we care about is where some other actor sends a malicious 
certificate and chains the RCE with other attacks to pivot and escalate and 
subvert the server. We're on a pretty narrow branch of the attack tree here, 
and more importantly, the same could be true of a vast array of potential 
vulnerabilities in the server site. This is only an issue if an attacker can't 
find any other more useful vulnerability in the site. If you pay attention to 
IT security, you know *that* isn't likely.

If it's a client system, then you only care if it's *your* client, and you 
visit a malicious site. If you're in the habit of using OpenSSL 3.0.6 to 
connect to malicious servers, well, 3786 is not likely to be high on your list 
of problems.

3602 is even less likely to be exploitable.

Vulnerabilities are only meaningful in the context of a threat model. I don't 
see a plausible threat model where these should matter to a client-side end 
user.

-- 
Michael Wojcik


RE: SSL_read empty -> close?

2022-11-02 Thread Michael Wojcik via openssl-users
> From: Felipe Gasper 
> Sent: Wednesday, 2 November, 2022 12:46
> 
> I wouldn’t normally expect EPIPE from a read operation. I get why it happens;
> it just seems odd. Given that it’s legitimate for a TLS peer to send the
> close_notify and then immediately do TCP close, it also seems like EPIPE is a
> “fact of life” here.

Yeah. That's because an OpenSSL "read" operation can do sends under the covers, 
and an OpenSSL "send" can do receives, in order to satisfy the requirements of 
TLS. Depending on the TLS version and cipher suite being used, it might need to 
do that for renegotiation or the like. Or if the socket is non-blocking you can 
get WANT_READ from a send and WANT_WRITE from a receive.

In your example it was actually a sendmsg that produced the EPIPE, but within 
the logical "read" operation.

The original idea of SSL was "just be a duplex bytestream service for the 
application", i.e. be socket-like; but that abstraction proved to be rather 
leaky. Much as sockets themselves are a leaky abstraction once you try to do 
anything non-trivial.

-- 
Michael Wojcik


RE: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-02 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Phillip
> Susi
> Sent: Wednesday, 2 November, 2022 11:45
> 
> The only thing to fix is don't put your compiler in strict C90 mode.

I'm inclined to agree. While there's an argument for backward compatibility, 
C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is younger 
than C99. It doesn't seem like an unreasonable requirement.

But as Tomas wrote, anyone who thinks it is can submit a pull request.

-- 
Michael Wojcik


RE: SSL_read empty -> close?

2022-10-26 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Felipe
> Gasper
> Sent: Wednesday, 26 October, 2022 11:15
> 
>   I’m seeing that OpenSSL 3, when it reads empty on a socket, sends some
> sort of response, e.g.:
> 
> - before read
> [pid 42417] read(7276781]>, "", 5) = 0
> [pid 42417] sendmsg(7276781]>, {msg_name=NULL, msg_namelen=0,
> msg_iov=[{iov_base="\0022", iov_len=2}], msg_iovlen=1,
> msg_control=[{cmsg_len=17, cmsg_level=SOL_TLS, cmsg_type=0x1}],
> msg_controllen=17, msg_flags=0}, 0) = -1 EPIPE (Broken pipe)
> - after read
> 
>   What is that being sent after the read()? Is there a way to disable
> it?

I'd guess it's a TLS Alert Close_notify.

When read/recv on a TCP stream socket returns 0, it means a TCP FIN has been 
received from the peer (or possibly some interfering middleman, such as a 
firewall). This indicates the peer will no longer be sending any application 
data, only at most ACKs and perhaps a RST if conversation does not go quietly 
into that good night. Since TLS requires bidirectional communications, that 
means the TLS conversation is effectively open, and the local end needs to be 
closed; and TLS requires sending a close_notify so the peer knows the 
conversation has not been truncated.

Now, the most common cause of a FIN is the peer calling close(), which means it 
can't receive that close_notify. But TCP supports half-close, and the peer 
*could have* called shutdown(, SD_SEND), indicating that it was done sending 
but still wanted to be able to receive data. So the local side has no way of 
knowing, at the point where it gets a 0 from read(), that the peer definitely 
can't see the close_notify; and thus it's still obligated by the TLS 
specification (I believe) to send it.

At any rate, that's my understanding of the requirement for sending 
close_notify - I haven't confirmed that in the RFC - and what I suspect OpenSSL 
is doing there. I could well be wrong.
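
For what it's worth, the usual application-side close pattern looks like this
-- a sketch, with error handling omitted:

    #include <openssl/ssl.h>

    /* Send our close_notify, then optionally wait for the peer's.
     * SSL_shutdown() returns 0 once ours is sent, 1 once the peer's
     * close_notify has also been received. */
    static int close_tls(SSL *ssl)
    {
        int r = SSL_shutdown(ssl);

        if (r == 0)                 /* ours sent; peer's not yet seen */
            r = SSL_shutdown(ssl);  /* try to read the peer's close_notify */
        return r;                   /* 1 = clean bidirectional shutdown */
    }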

If the peer *has* called close, then EPIPE is what you'd expect. Note that on 
UNIXy systems this means you should have set the disposition of SIGPIPE to 
SIG_IGN to avoid being signaled, but all well-written UNIX programs should do 
that anyway. (SIGPIPE, as Dennis Ritchie noted many years ago, was always 
intended as a failsafe for poorly-written programs that fail to check for 
errors when writing.)
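
The conventional failsafe, for reference:

    #include <signal.h>
    #include <string.h>

    /* Ignore SIGPIPE so a write to a closed connection fails with EPIPE
     * instead of terminating the process. Do this once at startup. */
    static void ignore_sigpipe(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = SIG_IGN;
        sigaction(SIGPIPE, &sa, NULL);
    }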

-- 
Michael Wojcik


RE: [building OpenSSL for vxWorks on Windows using Cygwin]

2022-10-24 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of  ???
> Sent: Friday, 21 October, 2022 02:39
> Subject: Re: openssl-users Digest, Vol 95, Issue 27

Please note the text in the footer of each openssl-users digest message:

> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of openssl-users digest..."

This is part of asking a good question. Also, you need to trim parts of the 
digest message you're replying to that aren't relevant to your question. Don't 
just send the entire digest back to the list. That's confusing and discourteous 
to your readers.

> - Why are you trying to build OpenSSL?
> My objective is to sign an 'image.bin' with RSA2048 and verify the signature.
> Now, I would like to port it to vxWorks 7. 

See, this is why you need to ask a good question. I believe this is the first 
time you mention vxWorks, which makes an enormous difference. Prior to this 
message, I assumed you were building OpenSSL *for Windows*, since that was the 
only platform you mentioned.

vxWorks is, I believe, an unsupported platform. Someone in the past ported 
OpenSSL to vxWorks and contributed the necessary changes to the project, but 
the project maintainers don't have the resources to maintain that port. OpenSSL 
consumers who want to run on vxWorks have to provide their own support for it.

Had you made it clear you were targeting vxWorks at the start, someone could 
have pointed that out, and saved us all some trouble.

Since you are targeting vxWorks, you'll need to get advice from someone who's 
familiar with building OpenSSL for that platform. I am not, and I haven't seen 
anyone else on the list comment on it yet, so there may not be any vxWorks 
users reading this thread. And so you may need to look elsewhere -- perhaps on 
vxWorks forums.

> A: If there is a 'libOpenssl.a' static library for vxWorks, then there would 
> be no reason to build OpenSSL. Is there?

I don't know; I don't work with vxWorks.

> A: If there was on option to use Only the verify signature module, then I 
> would just
> compile this module and not the entire OpenSSL. Is there an option?

Not with OpenSSL. There are other cryptography libraries, some of which may be 
more convenient to get for vxWorks. Verifying an RSA signature in some fashion 
(you don't say anything about a message format or padding, but that's a whole 
other area of discussion) is a common primitive.

> > - What platform do you want to build OpenSSL for?
> A: vxWorks-7, the toolchain is windows exe files (gcc,ar,ld), thus the only 
> option
> I had in mind to build the OpenSSL is cygwin.

> > - What toolchain do you want to use, and if that's not the default 
> > toolchain for
> > that platform, why aren't you using the default?
> A: I have vxWorks toolchain, on windows platform. (It definitely be easier if 
> I had
> the vxWorks toochain on Linux, but I don't)

This still isn't clear to me. If you have the vxWorks toolchain for Windows, 
why do you need Cygwin? Is it just for Perl, for the configuration step? I have 
no idea what the vxWorks tools expect for things like file-path format, so I 
can't guess whether Cygwin's Perl would be appropriate.

> > - Have you read the text files in the top-level directory of the OpenSSL 
> > source
> > distribution?
> Please direct me to the relevant README on "how to build OpenSSL on vxWorks" 
> (or
> similar platform, in which all is needed is to inject the relevant toochain
> i.e. perl Configure VxWorks)

That's not how it works. If you want to build OpenSSL, you should be consulting 
all of the files to figure out what's relevant for your build. Building OpenSSL 
is often not trivial, so particularly if you run into problems, the thing to do 
is actually read those files and understand the build process. Or find someone 
else who's done it for the platform you're working with, and ask them.

-- 
Michael Wojcik


RE: OpenSSL 1.1.1 Windows dependencies

2022-10-23 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of David
> Harris
> Sent: Saturday, 22 October, 2022 09:02
> 
> I now have wireshark captures showing the exchanges between the working
> instance and the non-working instance respectively; the problem is definitely
> happening after STARTTLS has been issued and during the TLS handshake.

A packet-inspecting firewall can monitor a TLS handshake (for TLS prior to 1.3) 
and terminate the conversation if it sees something in the unencrypted messages 
- ClientHello, ServerHello, ServerCertificate, etc - that it doesn't like. It's 
not beyond imagining that an organization would have a packet-inspecting 
firewall that terminates conversations using particular cipher suites, for 
example.

> I'm not high-level enough to be able to make any sense of the negotiation
> data though. The wireshark capture is quite short (22 items in the list)
> and I don't mind making it available if it would be useful to anyone.

Someone might be able to tell something from it.

Not much else is coming to mind, I'm afraid. It would help to know what system 
call is failing, with what errno value, but that's going to be a bit tough to 
determine on Windows. ProcMon, maybe? And it's curious that the OpenSSL error 
stack is empty, but without being able to debug you probably couldn't track 
that down, short of instrumenting a bunch of the OpenSSL code.

-- 
Michael Wojcik


RE: OpenSSL 1.1.1 Windows dependencies

2022-10-21 Thread Michael Wojcik via openssl-users
> From: David Harris 
> Sent: Friday, 21 October, 2022 01:42
>
> On 20 Oct 2022 at 20:04, Michael Wojcik wrote:
> 
> > I think more plausible causes of this failure are things like OpenSSL
> > configuration and interference from other software such as an endpoint
> > firewall. Getting SYSCALL from SSL_accept *really* looks like
> > network-stack-level interference, from a firewall or similar
> > mechanism.
> 
> That was my initial thought too, except that if it were firewall-related, the
> initial port 587 connection would be blocked, and it isn't - the failure 
> doesn't
> happen until after STARTTLS has been issued.

Not necessarily. That's true for a first-generation port-blocking firewall, but 
not for a packet-inspecting one. There are organizations which use 
packet-inspecting firewalls to block STARTTLS because they enforce their own 
TLS termination, in order to inspect all incoming traffic for malicious content 
and outgoing traffic for exfiltration.

> Furthermore, the OpenSSL
> configuration is identical between the systems/combinations of OpenSSL that
> work and those that don't.

Do you know that for certain? There's no openssl.cnf from some other source 
being picked up on the non-working system?

-- 
Michael Wojcik


RE: OpenSSL 1.1.1 Windows dependencies

2022-10-20 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of David
> Harris
> Sent: Wednesday, 19 October, 2022 18:54
> 
> Do recent versions of OpenSSL 1.1.1 have dependencies on some Windows
> facility (winsock and wincrypt seem likely candidates) that might work on
> Server 2019 but fail on Server 2012?

OpenSSL on Windows has always had a dependency on Winsock/Winsock2 (see 
b_sock.c, e_os.h, sockets.h) for supporting socket BIOs. Obviously OpenSSL used 
for TLS is going to be interacting with Winsock. I can't think of any 
difference between Server 2012 and Server 2019 that would be relevant to the 
issue you describe.

OpenSSL 1.1.1 uses Windows cryptographic routines in two areas I'm aware of: 
rand_win.c and the CAPI engine. I don't offhand see a way that a problem with 
the calls in rand_win.c would cause the particular symptom you described. My 
guess is that you're not using the CAPI engine, but you might check your 
OpenSSL configuration on the failing system.

I think more plausible causes of this failure are things like OpenSSL 
configuration and interference from other software such as an endpoint 
firewall. Getting SYSCALL from SSL_accept *really* looks like 
network-stack-level interference, from a firewall or similar mechanism.

Personally, if I ran into this, I'd just build OpenSSL for debug and debug into 
it. But I know that's not everyone's cup of tea.

-- 
Michael Wojcik


RE: openssl-users Digest, Vol 95, Issue 24

2022-10-19 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of  ???
> Sent: Tuesday, 18 October, 2022 11:58

> I have downloaded perl strawberry, but I have no clue how to get rid of the
> built-in perl that comes in cygwin, and point cygwin to use the strawberry 
> perl.

You don't have to remove the Cygwin version of perl, just change your PATH. 
This is basic both to the various shells available under Cygwin and to the 
Windows command line, so I'm getting the impression that you're not very 
familiar with your operating environment. That's not an ideal place to start 
from when trying to build, much less use, OpenSSL.

I can't be more detailed because at this point I frankly don't understand what 
you're trying to do. I suggest you try asking the right question, in a useful 
manner. (See https://catb.org/esr/faqs/smart-questions for advice in how to ask 
the right question.)

In particular:

- Why are you trying to build OpenSSL?
- Why did you clone the GitHub repository rather than downloading one of the 
released source tarballs? Did you read the instructions on www.openssl.org on 
how to download OpenSSL source releases?
- What platform do you want to build OpenSSL for?
- What toolchain do you want to use, and if that's not the default toolchain 
for that platform, why aren't you using the default?
- Have you read the text files in the top-level directory of the OpenSSL source 
distribution?

There may well be an easier way to accomplish whatever your goal is. OpenSSL 
may not even be a particularly good solution for you. You haven't given us 
enough information to go on.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-17 Thread Michael Wojcik via openssl-users
> From: רונן לוי  
> Sent: Monday, 17 October, 2022 12:03

Send messages to the list, not directly to me.

> And, in which header file am I expected to find the Definition for LONG?

That's a question about the Windows SDK, not OpenSSL.

It's in WinNT.h, per Microsoft's documentation (which is readily available 
online).

But for building OpenSSL this is not your concern. Building OpenSSL on Windows 
with the Microsoft toolchain requires a valid installation of the Windows SDK. 
If you're not building with the Microsoft toolchain, then you'll have to 
consult the OpenSSL build instructions for the toolchain you're using. Have you 
read the text files in the OpenSSL distribution which explain how to build it?

> Which linux command I can use to find if there exists a definition for LONG?

Assuming you mean "which Cygwin command can I use on Windows...": find + xargs 
+ grep would be the usual choice to find the definition, but as I already noted 
that's in WinNT.h. If that's not what you mean, then your question is unclear.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-17 Thread Michael Wojcik via openssl-users
> From: רונן לוי  
> Sent: Monday, 17 October, 2022 11:12

> see attached file for cygwin details.

I'm afraid I have no comment on that. I merely mentioned that for some OpenSSL 
releases, using a POSIXy perl implementation such as Cygwin's to configure 
OpenSSL for a Windows build did not work.

> ***   OpenSSL has been successfully configured                     ***

If memory serves, configuring with Cygwin perl would succeed, but the build 
would subsequently fail due to an issue with paths somewhere. I don't remember 
the details.

I suggest you try Strawberry Perl. It's free, and trying it would not take long.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-17 Thread Michael Wojcik via openssl-users
> From: רונן לוי  
> Sent: Monday, 17 October, 2022 11:16

Please send messages to the list, not to me directly.

> And for the question with regard to the Windows style, are you referring to 
> CRLF as
> opposed to LF from linux?

No, to Windows-style file paths, with drive letters and backslashes, rather 
than (sensible) POSIX-style ones.

-- 
Michael Wojcik


RE: Build openssl on windows 10 using cygwin

2022-10-16 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of  ???
> Sent: Saturday, 15 October, 2022 15:48

> I have tried to build openssl using cygwin:

> Both options starts compiling, but end up with error:
> In file included from providers/implementations/storemgmt/winstore_store.c:27:
> /usr/include/w32api/wincrypt.h:20:11: error: unknown type name 'LONG'
>   20 |   typedef LONG HRESULT;
> Q: What am I missing here?

Well, the version of OpenSSL you're using, for one thing. And what C 
implementation; there are various ones which can be used under Cygwin. Cygwin 
is an environment, not a build toolchain.

I don't know if this is still true, or if it differs for 1.1.1 and 3.0; but 
historically there have been issues using Cygwin perl to build OpenSSL, because 
OpenSSL on Windows wants a perl implementation that uses Windows-style file 
paths. We use Strawberry Perl.

That said, that error appears to be due to an issue with the Windows SDK 
headers, since it's the Windows SDK which should be typedef'ing LONG. (Because 
we wouldn't want Microsoft to use actual standard C type names, would we?) So 
this might be due to not having some macro defined when including the various 
Windows SDK headers.
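
A plausible illustration of that failure mode -- an assumption about this
particular build, not a confirmed diagnosis: wincrypt.h expects the base
Windows types to be declared already.

    /* wincrypt.h uses LONG, DWORD, etc., but does not define them; they
     * come from the core headers pulled in by windows.h. */
    #include <windows.h>   /* defines LONG and friends */
    #include <wincrypt.h>  /* compiles now; included alone, it may not */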

-- 
Michael Wojcik


RE: CA/Server configuration

2022-10-03 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Dmitrii 
> Odintcov
> Sent: Sunday, 2 October, 2022 21:15
>
> This is where the confusion begins: if ‘bar’, the certificate requestor, 
> itself
> wants to be a CA (basicConstraints = CA:true),

I assume here you mean bar is going to be a subordinate CA for foo, or bar is a 
subordinate that's being cross-signed by foo. Otherwise foo issuing a CA 
certificate for bar doesn't make sense. Note that bar can't be a root, since 
it'll be signed by some entity other than itself. (A root is a self-signed CA 
certificate, by definition.)

> then its bar.conf must answer both sets of questions at the same time!

Why? Creating a CSR and generating the certificate for it are separate 
operations. bar's configuration is used in creating the CSR. foo's is used in 
generating the certificate.

> For instance, if bar wants to request its own CA certificate to be valid for
> 5 years, but is only willing to issue others’ certificates for 1 year, what
> should `default_days` be in bar.conf?

Oh, I see, you're talking about generating bar's CSR versus signing 
certificates using bar. The answer is: you have two configurations, one for 
generating bar's CSR and the other for signing certificates using bar. Those 
are separate operations (obviously, since bar can't sign anything until it has 
its certificate), so they're not required to use the same configuration.

Configuration files are tied to *operations*, not to *entities*. You use the 
configuration file appropriate for the operation, where an operation is 
something like "requesting a CSR for a subordinate CA" or "signing a 
certificate for a subordinate CA" or "signing a certificate for a non-CA 
entity".

-- 
Michael Wojcik


RE: Updating RSA public key generation and signature verification from 1.1.1 to 3.0

2022-09-30 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Tomas
> Mraz
> Sent: Friday, 30 September, 2022 00:22
> 
> unfortunately I do not see anything wrong with the code. Does the
> EVP_DigestVerifyFinal return 0 or negative value? I do not think this
> is a bug in OpenSSL as this API is thoroughly tested and it is highly
> improbable that there would be a bug in the ECDSA verification through
> this API.
> 
> I am currently out of ideas on what could be wrong or how to
> investigate further. Perhaps someone else can chime in on what can be
> wrong?

Coincidentally, just yesterday I was helping someone debug a DigestVerify 
issue. We were consistently getting the "first octet is invalid" error out of 
the RSA PSS signature verification code, but the same inputs worked with 
openssl dgst.

I wrote a fresh minimal program from scratch (really minimal, with hard-coded 
filenames for the inputs), and it worked fine as soon as it compiled cleanly.

I'd suggest trying that. Get it working in a minimal program first. Make sure 
you have all the correct OpenSSL headers, and there are no compilation 
warnings. Then integrate that code into your application.
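
As a starting point, the minimal program can be little more than this sketch
(error handling collapsed; key loading and input files left out). For RSA-PSS
you would additionally set the padding mode on the EVP_PKEY_CTX that
EVP_DigestVerifyInit hands back.

    #include <openssl/evp.h>

    /* Verify sig over msg with the given public key, using SHA-256.
     * Returns 1 on a good signature, 0 otherwise. */
    static int verify(EVP_PKEY *pkey,
                      const unsigned char *msg, size_t msglen,
                      const unsigned char *sig, size_t siglen)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = 0;

        if (ctx != NULL
            && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
            && EVP_DigestVerifyUpdate(ctx, msg, msglen) == 1)
            ok = EVP_DigestVerifyFinal(ctx, sig, siglen) == 1;
        EVP_MD_CTX_free(ctx);
        return ok;
    }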

(I didn't have the original application to go back to, in my case, and the 
person I was working with is in another timezone and had left for the day.)

-- 
Michael Wojcik
Distinguished Engineer, Application Modernization and Connectivity




RE: Best Practices for private key files handling

2022-09-18 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Michael
> Ströder via openssl-users
> Sent: Sunday, 18 September, 2022 04:27
> 
> On 9/18/22 06:09, Philip Prindeville wrote:
> >> On Sep 15, 2022, at 4:27 PM, Michael Wojcik via openssl-users wrote:
> >> You still haven't explained your threat model, or what mitigation
> >> the application can take if this requirement is violated, or why
> >> you think this is a "best practice".
>
> > The threat model is impersonation, where the legitimate key has been
> > replaced by someone else's key, and the ensuing communication is
> > neither authentic nor private.
> 
> Maybe I'm ignorant but shouldn't this be prevented by ensuring the
> authenticity and correct identity mapping of the public key?

Exactly. In most protocols the public key, not the private key, authenticates 
the peer.

Relying on file system metadata (!) as the root of trust for authentication, 
particularly for an application that may be running with elevated privileges 
(!!), seems a marvelously poor design.

> > Otherwise, the owners of the system can't claim non-repudiation as to
> > the genuine provenance of communication.

I'm with Peter Gutmann on this. Non-repudiation is essentially meaningless for 
the vast majority of applications. But in any case, filesystem metadata is a 
poor foundation for it.

> More information is needed about how your system is working to comment
> on this.

Indeed. This is far from clear here.


-- 
Michael Wojcik


RE: Best Practices for private key files handling

2022-09-15 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Philip
> Prindeville
> Sent: Thursday, 15 September, 2022 15:41

> I was thinking of the case where the directory containing the keys (as
> configured) is correctly owned, but contains a symlink pointing outside of
> that directory somewhere else... say to a file owned by an ordinary user.
> 
> In that case, as has been pointed out, it might be sufficient to just pay
> attention to the owner/group/modes of the file and reject them if:
> 
> (1) the file isn't 600 or 400;
> (2) the file isn't owned by root or the app-id that the app runs at.

#2 is irrelevant if #1 holds and the application isn't running as root. And if 
the application doesn't need to run with elevated privileges, it shouldn't be 
run with elevated privileges.

You still haven't explained your threat model, or what mitigation the 
application can take if this requirement is violated, or why you think this is 
a "best practice".

It's true there's potentially some benefit to warning an administrator even 
after the fact if some violation of key hygiene is detected, but whether that's 
a "best practice" (and, for that matter, the extent to which file permissions 
constitute evidence of such a violation), much less whether an application 
should fail in some manner when it's detected, is certainly debatable.

-- 
Michael Wojcik


RE: Best Practices for private key files handling

2022-09-13 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Philip
> Prindeville
> Sent: Tuesday, 13 September, 2022 14:17
> 
> I'm working on a bug in an application where the application config is given
> the directory path in which to find a key-store, which it then loads.
> 
> My issue is this: a regular UNIX file is trivial to handle (make sure it's
> owned by "root" or the uid that the app runs at, and that it's 0600 or 0400
> permissions... easy-peasy).
> 
> But what happens when the file we encounter is a symlink?

You read the target. What's the problem?

>  If the symlink is
> owned by root but the target isn't, or the target permissions aren't 0600 or
> 0400...

So what?

You can use lstat if you're really worried about symlinks, but frankly I'm not 
seeing the vulnerability, at least at first blush. What's the threat model?
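
For concreteness, the sort of check under discussion -- a sketch, not an
endorsement:

    #include <sys/stat.h>
    #include <unistd.h>

    /* Reject symlinks, group/other-accessible files, and unexpected
     * owners. lstat() reports on the link itself, so S_ISREG fails
     * for a symlink. */
    static int key_file_looks_private(const char *path)
    {
        struct stat st;

        if (lstat(path, &st) != 0 || !S_ISREG(st.st_mode))
            return 0;
        if ((st.st_mode & (S_IRWXG | S_IRWXO)) != 0)  /* not 0600/0400 */
            return 0;
        return st.st_uid == 0 || st.st_uid == geteuid();
    }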

This is reading a private key, not writing one, so there's no exfiltration 
issue simply from reading the file.

Suppose an attacker replaces the private key file, or the contents of the file. 
So what? Either the attacker is in a privileged position and so can satisfy 
your ownership and permissions checks; or the attacker isn't, and ... you read 
a private key that either is the correct one (i.e. corresponds to the public 
key in the certificate), and so there's no problem; or it isn't, and you can't 
use the certificate, and you fail safe.

Is this check meant to alert an administrator to a possibly-compromised, or 
prone-to-compromise, private key? Because if so, 1) it's too late, 2) a 
privileged attacker can trivially prevent it, and 3) why is that your job 
anyway?

It's also not clear to me why symbolic links are toxic under your threat model.

It's entirely possible I'm missing something here, but my initial impression is 
that these checks are of little value anyway. Can you explain what problem 
you're trying to solve?

-- 
Michael Wojcik


RE: using TLS (>1.2) with more than one certificate

2022-05-24 Thread Michael Wojcik via openssl-users
> From: openssl-users  On Behalf Of Matt
> Caswell
> Sent: Tuesday, 24 May, 2022 07:43
> To: openssl-users@openssl.org
> Subject: Re: using TLS (>1.2) with more than one certificate
> 
> On 24/05/2022 13:52, tobias.w...@t-systems.com wrote:
> > I’ve a server application and need to support RSA and ECC clients at the
> > same time.
> >
> > I don’t know which certificate from my local keystore I have to send to
> > the client, btw I have a rsa and a ecc certificate in my keystore
> already.
> >
> > I don’t know with which certificate (rsa or ecc) a client comes during
> > handshake of a tls connection.
> >
> > How can this technically work?
> >
> 
> It's perfectly find to add multiple certs/keys of different types to a
> single SSL_CTX/SSL. OpenSSL will select the appropriate cert to use
> based on the negotiated sigalg (for TLSv1.3).

Just to clarify - this works for earlier TLS versions as well.

Configure the server's SSL_CTX with both certificate chains and the private 
keys for the two entity certificates, and for older TLS versions the server 
will select the appropriate chain based on the cipher-suite list in the 
ClientHello. That is, it will use the ECC certificate (probably ECDSA, though 
EdDSA is becoming more common) if the client's cipher-suite list indicates it 
supports the necessary algorithms.
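
A sketch of the server-side configuration (file names hypothetical):

    #include <openssl/ssl.h>

    /* Load both an RSA and an ECDSA chain into one SSL_CTX. OpenSSL
     * keeps one certificate per key type and selects at handshake time. */
    static int load_dual_certs(SSL_CTX *ctx)
    {
        if (SSL_CTX_use_certificate_chain_file(ctx, "rsa-chain.pem") != 1
            || SSL_CTX_use_PrivateKey_file(ctx, "rsa-key.pem",
                                           SSL_FILETYPE_PEM) != 1)
            return 0;
        if (SSL_CTX_use_certificate_chain_file(ctx, "ecdsa-chain.pem") != 1
            || SSL_CTX_use_PrivateKey_file(ctx, "ecdsa-key.pem",
                                           SSL_FILETYPE_PEM) != 1)
            return 0;
        return 1;
    }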

-- 
Michael Wojcik


RE: Openssl 3.0.2- Build error - catgets_failed

2022-04-21 Thread Michael Wojcik
> From: Gaurav Mittal11 
> Sent: Thursday, 21 April, 2022 09:55
> 
> Yes, I have gone through internet search, I have not found any clue.
> 
> Still same error even after setting LANG to C
> 
> Yes, HP is a kind of legacy server and very little help is available on the internet.
> 
> Any more suggestions would be helpful.

All I can say at the moment is that we haven't seen this with OpenSSL 1.1.1n on 
HP-UX 11i. We haven't tried building any 3.x versions on that platform yet (and 
maybe we won't have to, which would be great).

Can you post the failing .s file somewhere? Maybe looking at it will provide 
some clue. Those messages look like they might be syntax cascade errors, so 
it's possible the Perl script tossed something bogus in there.

We do have HP-UX assembler expertise here (in the compiler team, since we've 
had HP-UX support since the PA-RISC days at least, quite possibly since the 
68K/FOCUS days) - not just Itanium experience, that is, but some working 
knowledge of the as program on that platform. I've battled through a bit of 
Itanium assembly now and then myself. So I may be able to find someone who can 
figure out where it's gone wrong.

-- 
Michael Wojcik


RE: Openssl 3.0.2- Build error - catgets_failed

2022-04-20 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Gaurav 
> Mittal11
> Sent: Wednesday, 20 April, 2022 06:52

> ...
> as: "crypto/aes/aes-ia64.s", catgets_failed 2: catgets_failed 1052: 
> catgets_failed - IDENT

A web search isn't turning anything up, but you probably tried that already.

I wonder if "catgets_failed" is a cryptic way of saying that as tried to 
retrieve message text from an NLS message catalog and failed, and it's just 
dumping parameters. What's your $LANG? Can you try the build with LANG set to 
C? Maybe make sure NLSPATH isn't set too?

(Someone at HP clearly didn't get the memo about emitting useful error 
messages. It's really not hard to wrap your message output to have a default 
string when the catalog lookup fails. Right up there in the list of Why 
Software Sucks, to use Platt's phrase.)

-- 
Michael Wojcik


RE: RSA and DES encryption and decryption with C++ on Windows

2022-04-10 Thread Michael Wojcik
> From: openssl-users  On Behalf Of John 
> Alway
> Sent: Saturday, 9 April, 2022 19:45

> From this site https://slproweb.com/products/Win32OpenSSL.html
>  I downloaded " Win32 OpenSSL v3.0.2" MSI 

Well, I suppose that's one option. Personally I would not use a build from some 
random website; I'd build it myself, after verifying the signature on the 
tarball.

> Anyway, the long and short of it is that I am having a bear of a time getting 
> things
> to work.  I did get base64 encoding/decoding working, and I was able to get 
> this guy's example working: ...
> However, his second example ... only half worked for me.  The encryption 
> worked, but
> the decryption threw an error in EVP_DecryptFinal_ex, where it returned error 
> code
> 0.

(Writing code based on videos? Seems baffling to me. Anyway...)

Many examples of using OpenSSL you might find online are not of particularly 
good quality. Many examples will be for older OpenSSL releases; the API has 
changed periodically.

I recommend you use a decent source, such as the OpenSSL Wiki, which can be 
found by going to openssl.org and looking around. (I'm not digging up a link 
because this will be a good exercise.) The wiki is haphazard and of mixed 
quality, which of course is the nature of a wiki, but at least much of it has 
been examined by people with some measure of OpenSSL experience.

> Anyway, I'm trying to encrypt/decrypt using RSA and DES schemes.  I've tried 
> some of
> the older code examples I could find, but some of the functions weren't 
> recognized by
> my header files.

Kenneth Goldman has already pointed out that your choice of encryption 
algorithms is suspect. To that I'd add:

- RSA as an asymmetric cipher is no longer preferred. It's useful primarily 
when you need to support peers who don't do anything better. That might be true 
in your case, but you've failed to tell us anything about your use case. That's 
a significant omission. When posting to openssl-users, it's almost always a 
good idea to explain your use case.

- DES is only useful if you have to support it for backward compatibility, or 
for academic interest.

- A cryptosystem is not just a cryptographic algorithm (which is what RSA and 
DES are; they are not "schemes", which suggests something more complete). It is 
very easy to misuse cryptographic algorithms in ways which defeat security for 
all but the most naive attacks. If you're not a cryptographer, you should not 
create your own cryptosystems, even using well-known algorithms, except for 
experimentation or learning purposes. Don't use homemade cryptosystems in 
production.

- If all you want is to encrypt some data, and do ... something ... with 
asymmetric crypography, and you're running on Windows, then why are you using 
OpenSSL? OpenSSL is a reasonably good choice for a cryptographic library if 
you're on Linux or UNIX, or you need to be cross-platform. If you're only 
working with Windows, it's come with cryptographic APIs since at least the 
Windows XP days. Those are designed to be convenient for Windows developers, 
and you get OS key management in the bargain.

> Can anyone help me with this?  I want to encrypt fairly long strings.  A few 
> hundred
> bytes or so.   Maybe longer.

Those aren't "long" for symmetric-encryption purposes. They may start to be 
troublesome for asymmetric encryption, but if you're encrypting application 
data asymmetrically you're Doing It Wrong anyway.
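
For context, the standard construction for "RSA-encrypting" bulk data is
hybrid encryption: a random symmetric key, encrypted under the RSA public
key. OpenSSL's EVP_Seal* API packages exactly that; a sketch, with buffer
sizing and error reporting omitted:

    #include <openssl/evp.h>

    /* Encrypt inlen bytes under a random AES-256-CBC key; the key is
     * itself encrypted to rsa_pub and returned in ek/eklen. Caller
     * provides ek of EVP_PKEY_size(rsa_pub) bytes and iv of
     * EVP_MAX_IV_LENGTH bytes. */
    static int seal(EVP_PKEY *rsa_pub,
                    const unsigned char *in, int inlen,
                    unsigned char *out, int *outlen,
                    unsigned char *ek, int *eklen, unsigned char *iv)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, ok = 0;

        if (ctx != NULL
            && EVP_SealInit(ctx, EVP_aes_256_cbc(), &ek, eklen, iv,
                            &rsa_pub, 1) == 1
            && EVP_SealUpdate(ctx, out, outlen, in, inlen) == 1
            && EVP_SealFinal(ctx, out + *outlen, &len) == 1) {
            *outlen += len;
            ok = 1;
        }
        EVP_CIPHER_CTX_free(ctx);
        return ok;
    }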

>  If I can do a continuous stream of blocks that would be great, as well.

"A continuous stream of blocks" could mean any number of things. To a first 
approximation, of course you can; but there isn't enough information here for 
us to discern what you're actually trying to do.

> Also, is there an efficient way to search this email list?  I was trying to 
> search
> for similar questions but wasn't able.

Possibly your questions are too broad and vague? There will be many discussions 
of encryption, for example.

If you need to use cryptography, it really helps to either use an API with 
high-level abstractions to minimize what might go wrong; or learn the basics of 
modern cryptography first, e.g. from a book like /Applied Cryptography/ or 
/Cryptographic Engineering/, before wading into writing code. Modern 
cryptography is complicated and easy to get wrong. I've seen plenty of cases 
where someone uses a cipher in a way that's obviously broken even to someone 
with only moderate practical experience in the field.

> I guess I could use google and the email list name?

I've never had a problem just using a web search engine (DDG, in my case) to 
search for past discussions on the list. It's not impossible that someone has a 
searchable archive of it somewhere. (I also save messages that seem like they 
might be particularly useful, but to be honest I rarely refer to my own 
collection because a web search generally finds what I need.)

-- 
Michael Wojcik


RE: looking for properly configured Windows VMs

2022-04-01 Thread Michael Wojcik
> From: Michael Wojcik
> Sent: Friday, 1 April, 2022 15:41
> >
> > View results: https://github.com/openssl/openssl/actions/runs/2073285321
> 
> I'll take a look when I get a chance to see if anything jumps out. I
> haven't had to deal with IPv6 raw or UDP programming in Windows yet, but I
> do a fair bit with Windows networking development in general.

Logs have been deleted, unfortunately.


RE: looking for properly configured Windows VMs

2022-04-01 Thread Michael Wojcik
> From: openssl-users  On Behalf Of
> Michael Richardson
> Sent: Friday, 1 April, 2022 07:34
> 
> Attempts to make bss_dgram.c compile with the right IPv6 include for
> Windows
> are not going well.
> 
> Some of the jobs actually die before my test case even runs, but at least,
> I
> guess they compile bss_dgram.c
> Others (the shared ones) seem to fail completely to compile bss_dgram.c
> 
> I haven't run a compile on DOS since the days of djgpp...

Well, to be fair, modern Windows isn't even slightly MS-DOS. But it is often a 
difficult and unnecessarily idiosyncratic environment.

> I wonder if anyone has VM images with the right OS and compilation tools
> installed?

I don't have public ones. I do have assorted Windows platforms available, 
though, and I'll try to pull your branch (do I remember correctly that you 
posted a link a while back?) over the weekend and build it, if I can find the 
time.

> The day-long cycle, making a change and then waiting for CI to give an
> opinion is just too slow.  (I didn't know WIN32 was still even a thing... I
> guess Windows ME still uses it maybe.)

Many modern Windows applications are 32-bit programs. Modern Windows itself is 
a 64-bit OS, but runs 32-bit programs in a translation environment (WOW64, for 
"Windows 32-bit on Windows 64-bit"). About 10% of the processes currently 
running on my main Windows development system are 32-bit. Many of our flagship 
products install both 32- and 64-bit binaries because customers may be using 
either or both at the same time.

So for Windows the 32-bit builds of OpenSSL are still quite important.

Not that it really matters one way or the other, I suppose. If Win32 is a 
supported platform, it's a supported platform, and we'd like to fix this to 
build there (rather than just not supporting the feature).

(Windows ME, on the other hand, is long dead. Last release was over 20 years 
ago. But Microsoft's parade of versions with different naming conventions makes 
this sort of thing tough to keep track of.)

> Subject: Re: [openssl/openssl] PR run failed: Windows GitHub CI -
> bio_dgram uses recvmsg/sendmsg to retrieve destination and set origin
> address (41cc92c)
> 
> View results: https://github.com/openssl/openssl/actions/runs/2073285321

I'll take a look when I get a chance to see if anything jumps out. I haven't 
had to deal with IPv6 raw or UDP programming in Windows yet, but I do a fair 
bit with Windows networking development in general.

-- 
Michael Wojcik


RE: [openssl/openssl] bio_dgram vs IPv6

2022-04-01 Thread Michael Wojcik
> From: Michael Richardson 
> Sent: Friday, 1 April, 2022 07:40
> 
> Michael Wojcik  wrote:
> > Actually, in the context of #if expressions, unrecognized tokens
> expand to 0 anyway:
> 
> > After all replacements due to macro expansion and the defined unary
> > operator have been performed, all remaining identifiers are replaced
> > with the pp-number 0...
> 
> > (ISO 9899:1999 6.10.1 #3)
> 
> Yes, but that generates a warning, and then error via -Werror with some
> set
> of compile options that at least one CI run uses.

Oh, well. An implementation is allowed to generate any diagnostics it wishes, 
and is allowed to fail to translate even a conforming program.

Ultimately we're at the mercy of the implementation, and GCC is not a 
particularly good C implementation. (Of course, in its default mode, it doesn't 
implement C; it implements a language similar to, but not, C.)


RE: [openssl/openssl] bio_dgram vs IPv6

2022-03-31 Thread Michael Wojcik
> From: Michael Richardson 
> Sent: Thursday, 31 March, 2022 14:18
> 
> Michael Wojcik  wrote:
> > #if defined OPENSSL_SYS_WINDOWS
> > # include <shared/netiodef.h>
> > #else
> > # include <netinet/ip6.h>
> > #endif
> 
> But, don't all the OPENSSL_* macros expand to 0/1, anyway, so we actually
> just want #if OPENSSL_SYS_WINDOWS?

I did a quick grep through the source for 1.1.1k (just because that's what I 
had to hand; we've actually just finished updating my portfolio to 1.1.1n), and 
there's a mix of #if OPENSSL_SYS_WINDOWS and #if defined(OPENSSL_SYS_WINDOWS). 
apps/s_client.c uses the latter, for example.

Actually, in the context of #if expressions, unrecognized tokens expand to 0 
anyway:

After all replacements due to macro expansion and the defined unary
operator have been performed, all remaining identifiers are replaced
with the pp-number 0...

(ISO 9899:1999 6.10.1 #3)

So defining a macro used for conditional inclusion to the value 0 is kind of a 
bad idea, since that means there's different behavior between #if FOO and #if 
defined FOO. Much better to not define it and get the default value of 0 if you 
want to turn it off.

But that said, #if OPENSSL_SYS_WINDOWS is safer for the same reason: it doesn't 
matter whether it's defined as 0, or not defined at all.
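
A tiny illustration of the divergence, using a hypothetical macro FOO:

    #define FOO 0

    #if FOO
    /* not compiled: FOO expands to 0 */
    #endif

    #if defined FOO
    /* compiled: FOO is defined, even though its value is 0 */
    #endif

Leave FOO undefined instead and both conditionals are false, so the two forms 
agree.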

The "defined" operator is overused in C source generally. It's good for things 
like header inclusion guards. It's not really a good choice for most other 
cases of conditional inclusion.

-- 
Michael Wojcik


RE: [openssl/openssl] bio_dgram vs IPv6

2022-03-31 Thread Michael Wojcik
> From: openssl-users  On Behalf Of
> Michael Richardson
> Sent: Thursday, 31 March, 2022 14:19
> 
> The clang-9 test fails with:
> 
> # ERROR:  @ test/bio_dgram_test_helpers.c:150
> # failed to v6 bind socket: Permission denied
> #
> #
> # OPENSSL_TEST_RAND_ORDER=1648577511
> not ok 2 - iteration 1
> 
> https://github.com/mcr/openssl/runs/5741887864?check_suite_focus=true
> 
> The other clang-XX tests seem to run fine.
> This smells like the problem with TRAVIS where IPv6 was not enabled in the
> Google VMs, but we aren't using those anymore.
> 
> It does not bind specific sockets (lets the kernel choose), so there
> shouldn't be a conflict between test cases.  Anyway, if that were the case,
> I'd expect to see an in-use error rather than permission denied.
> 
> Smells to me like someone has restricted network sockets in order to avoid
> being used as an attack system.

Yes, the EACCES ("Permission denied") certainly suggests that.

Are these running on Linux VMs? SELinux or similar in use, perhaps?

-- 
Michael Wojcik


RE: [openssl/openssl] bio_dgram vs IPv6

2022-03-29 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Matt
> Caswell
> Sent: Tuesday, 22 March, 2022 10:31
> 
> There is already code in bss_dgram.c that is conditionally compiled on
> OPENSSL_USE_IPV6. Is it reasonable to assume that if AF_INET6 is defined
> then ip6.h exists?

I meant to look into this earlier but got distracted.

Windows has IPv6 support and defines AF_INET6, but does not have ip6.h (at 
least in the SDK version I have installed on this machine). If you do a search 
online you'll see many projects have copied the ip6.h from some other platform 
into their source trees for use by Windows.

I've confirmed it's present on:
* AIX 7.1
* HP-UX 11.31
* Solaris 11.3

and of course on Linux generally. I don't have other platforms handy to test.

Windows will be the sticking point. However, the Microsoft Windows SDK ships a 
header, shared/netiodef.h, which defines at least some of the structures 
specified by RFC 3542, albeit with different type and field names, along with 
macros mapping the RFC 3542 names to those identifiers. At least the following 
are available in that header:

ip6_hdr 
ip6_flow
ip6_plen
ip6_nxt 
ip6_hops
ip6_hlim
ip6_src 
ip6_dst

So something like this might work:

#if defined OPENSSL_SYS_WINDOWS
# include <shared/netiodef.h>
#else
# include <netinet/ip6.h>
#endif

(Note C does not require the argument of the operator "defined" to be 
parenthesized. Doing so just adds visual noise. ISO 9899:1999 6.10.1 #1.)

-- 
Michael Wojcik


RE: [openssl/openssl] bio_dgram vs IPv6

2022-03-21 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Matt
> Caswell
> Sent: Monday, 21 March, 2022 05:33
> 
> Given that OpenSSL already supports IPv6 but we've never needed to
> include [netinet/ip6.h], I am wondering what is in that header that needs to
> be used?

netinet/ip6.h is for the "Advanced API for IPv6", detailed in RFC 3542. It's 
typically used for raw-socket access to IPv6, for things like traceroute and 
hop-by-hop options processing.

The RFC specifically mentions using this API to retrieve and set addresses, so 
it seems like a fix for issue 5257 does need to use it, if that's to be done in 
a portable way.

3542 is only Informational, but I'd expect most or all platforms with IPv6 
support to conform to it.

-- 
Michael Wojcik


RE: Certificate authority changes with OpenSSL

2022-03-17 Thread Michael Wojcik
> From: openssl-users  On Behalf Of 
> egoitz--- via openssl-users
> Sent: Thursday, 17 March, 2022 12:52

> 1 - Is it possible to update a whole CA with 2048 bit public and private keys
> (I used in req section of openssl.conf, the default_bits to 2048) to a
> signature algorithm that doesn't bother the SECLEVEL 2? I mean to have two
> versions of the same certificate, one for SECLEVEL1 and one for SECLEVEL2?
> I preserve all csr and so on.

It's not clear to me exactly what you're thinking of doing here. Usually what 
I'd do is create a new intermediate signing certificate with a modern signing 
algorithm, such as sha256WithRSAEncryption or an ECC equivalent, and a suitably 
strong key (I use a 4096-bit RSA key even for my RSA testing, to catch out 
implementations that don't support adequately-strong keys). Then I'd use that 
to generate entity certificates.

Presumably your client systems already trust your existing root, so replacing 
that is extra work.

If you've been signing entity certificates with the root, then 1) stop doing 
that, and 2) create a new root with a different Subject CN and suitable 
parameters. You'll need to distribute that new root to your client systems.

> 2 - I was wondering about another question too... although this is not urgent
> now. If the CA key pair is almost expiring, what's the proper process of
> doing what is supposed to be done?

Keys don't expire. That is, PKIX separates private keys from certificates; the 
latter expire, but the former do not. You can issue a new certificate that 
contains the same public key as an expired certificate.

That said, many people do periodically rotate keys. There is a great deal of 
(often tiresome and unenlightening) debate on this question, so I'm not going 
to express an opinion on whether you should do this, particularly if this is 
not a public CA, where key rotation would appear to be of minimal benefit. 
(Whoops.)

Also, CAs normally don't have a single keypair. They have one for the root 
certificate, which should only be used to sign intermediate signing 
certificates; and one for each intermediate.

Again, if you're signing entity certificates with the CA root, don't do that. 
Why not?

- If this is a real CA, i.e. the certificates it issues are used to protect 
anything of value, then you want to strongly protect the root's private key 
(preferably don't have it online at all, but on removable media or in an HSM). 
You want to be able to revoke a signing certificate if its key is compromised, 
and that's a lot less disruptive if an intermediate is used to sign, 
particularly if your entities send complete chains (or at least the signing 
intermediates), because then peers don't have to include the intermediates in 
their trust stores. Using intermediates also lets you partition your signing: 
intermediate X to sign this type of certificate, intermediate Y to sign this 
other type (or X to sign servers in this part of the organization, or whatever).

- If this is a test CA, it's better to test with a realistic PKIX hierarchy, 
and real hierarchies use intermediates.

If the CA root certificate is about to expire, then you'll need to create a new 
root. You can do that using the same Subject DN (if you revoke the old root) 
and Subject Key Identifier (SKID), which means your client systems can just 
update their trust stores with the new certificate and your server certificates 
should continue to work (until they expire).

-- 
Michael Wojcik



RE: OpenSSL version 1.1.1n published

2022-03-15 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Yann
> Droneaud
> Sent: Tuesday, 15 March, 2022 14:19
> 
> At the time of writing neither
> https://www.openssl.org/news/openssl-1.1.1-notes.html nor
> https://www.openssl.org/news/changelog.html#openssl-111 are updated to
> match 1.1.1n release.

Neither the Newslog (news/newslog.html) nor the Vulnerabilities 
(news/vulnerabilities) page has been updated, either.

This is not uncommon with new OpenSSL releases. Resources for updating the 
website are limited, and it's not a priority. I expect they'll be updated 
within the next few days. (Part of the problem is that the same information 
appears, in different forms, on multiple pages; that's not ideal for prompt and 
consistent updates. But overhauling the website would take yet more resources.)

openssl-users is a better channel if you want rapid notification, and a paid 
support contract is better yet.

-- 
Michael Wojcik


RE: RE: How to create indirect CRL using openssl ca command

2022-03-11 Thread Michael Wojcik
> From: edr 
> Sent: Friday, 11 March, 2022 03:59
> 
> On 10.03.2022 20:27, Michael Wojcik wrote:
> > Personally, I'd be leery of using openssl ca for anything other than
> dev/test purposes, in which case frequent CRL generation seems unlikely to
> be a requirement. AIUI, openssl ca isn't really intended for production
> use.
> 
> I did see the RESTRICTIONS [1] and WARNINGs [2] sections in the openssl-ca
> documentation. I think that I can handle the problems described there but
> would still be interested if you have any concerns beyond those warnings
> and the functional limitations I am currently running into.

My concerns are more general. CAs are tricky. Even dev/test CAs are not trivial 
to get right, and corporate CAs generally have a lot of requirements. (We 
recently revamped our internal corporate CA, and even after an extensive 
requirement-gathering process we're still shaking out issues.)

Commercial CAs, of course, are much worse. But even for internal CAs, you'll 
have to comply with various CA/BF requirements if you want to issue server 
certificates that current browser releases will accept, for example.

So building a CA out of what is essentially a utility for experimentation and 
testing raises a red flag.

Beyond that, "openssl ca" does not by itself do many of the things you'd want 
even an internal production CA to do. It doesn't provide any change management 
or backup of the database. It doesn't audit. It doesn't provide access control. 
Those are all things that need to be added on top of that, and if you're going 
to do that, it seems like looking for a solution that already addresses at 
least some of those issues might be a better option.

> Also what (open source) ca software do you recommend instead?

I've never had to build a production CA, so I don't have any suggestions, I'm 
afraid. And even if I had, I don't know what your use cases are, so I wouldn't 
know how well they mapped to my (hypothetical) ones. Different entities will 
have some difference in requirements.

-- 
Michael Wojcik


RE: How to create indirect CRL using openssl ca command

2022-03-10 Thread Michael Wojcik
> From: openssl-users  On Behalf Of
> Michael Ströder via openssl-users
> Sent: Thursday, 10 March, 2022 12:17
> 
> On 3/10/22 14:06, edr dr wrote:
> > At the same time, I do not want to store passwords used for
> > certificate creation in cleartext anywhere.

Personally, I'd be leery of using openssl ca for anything other than dev/test 
purposes, in which case frequent CRL generation seems unlikely to be a 
requirement. AIUI, openssl ca isn't really intended for production use.

> It's a pity that there is not something like an OpenSSL key agent
> (similar to ssh-agent) for interactively loading the CA's private key
> into memory during service start.

To be fair, this is not an OpenSSL limitation; it's a limitation of openssl, 
the utility. Which, again, is not intended to solve all production use cases.

openssl ca, like most openssl subcommands, allows the use of an engine (or 
provider in 3.0), which means in many cases it's possible to use an inexpensive 
USB-attached HSM (via the pkcs11 engine) rather than having an on-disk key in 
the first place. I did this some years ago as an experiment using a NitroKey 
and it worked well.

-- 
Michael Wojcik


RE: Doubt regarding ssl options

2022-01-31 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Jan Just 
> Keijser
> Sent: Monday, 31 January, 2022 03:51
> To: Srinivas, Saketh (c) ; openssl-users@openssl.org
> Subject: Re: Doubt regarding ssl options

> On 31/01/22 10:27, Srinivas, Saketh (c) wrote:

> > what is the difference between  SSL_CTX_set_min_proto_version
> > and SSL_set_min_proto_version.

> The effect of SSL_CTX_set_min_proto_version and SSL_set_min_proto_version is
> exactly the same...

More generally: The difference between SSL_CTX_something and SSL_something is 
that the former operates on an SSL_CTX object, and the latter on an SSL object.

An SSL object controls an SSL connection (more or less). An SSL_CTX object is 
used to create one or more SSL objects; it serves as a template for those SSL 
objects.

So if you want to set "something" for multiple SSL objects you will create 
later, use the SSL_CTX_something function. If you only need to alter the 
properties of an existing SSL object, use the SSL_something function.

This is a fundamental aspect of the OpenSSL API.
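
A minimal sketch of the distinction (error handling omitted):

    #include <openssl/ssl.h>

    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

    /* Template setting: every SSL created from ctx after this point
     * will require TLS 1.2 or later. */
    SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);

    SSL *ssl = SSL_new(ctx);    /* inherits the TLS 1.2 floor */

    /* Per-connection override: affects only this SSL object. */
    SSL_set_min_proto_version(ssl, TLS1_3_VERSION);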

-- 
Michael Wojcik


RE: [openssl-1.1.1l] TLS1.2 Server responses with Alert

2021-12-31 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Ma 
> Zhenhua
> Sent: Thursday, 30 December, 2021 23:59

> On the SSL/TLS server, there's one error as follows. 
> "SSL Error(118) - no suitable signature algorithm"

Debugging handshake failures isn't my area of expertise, but I note both 
ClientHellos include a signature_algorithms extension, and the contents are 
quite different. In particular, the successful ClientHello includes the 
Signature Hash Algorithm Hash and Signature Hash Algorithm Signature 
parameters, while the failing one doesn't.

The failing one also includes a signature_algorithms_cert extension, while the 
successful one does not. I don't know offhand how the algorithms specified in 
that extension correspond to the signature-algorithm OIDs in signatures, but 
the server's certificate has 1.2.840.113549.1.1.11 (sha256WithRSAEncryption) 
which seems like it ought to correspond to either rsa_pss_rsae_sha256 or 
rsa_pss_pss_sha256. (Apparently those are both RSA-PSS with SHA256, as the name 
implies, and the difference between the two of them is whether the public key 
is encoded using the rsaEncryption format in the certificate, or the 
id-RSASSA-PSS format. The failing client is saying it understands both, AIUI.)

So my guess would be the server is unhappy that the failing client's 
ClientHello doesn't include the parameters for the various supported signature 
schemes in its signature_algorithms extension. But that's just a guess, and I 
don't know how you'd fix it.

-- 
Michael Wojcik


RE: Enumerating TLS protocol versions and ciphers supported by the peer

2021-12-06 Thread Michael Wojcik
> From: Dr. Matthias St. Pierre 
> Sent: Monday, 6 December, 2021 07:53
> To: Michael Wojcik; openssl-users@openssl.org
> 
> 
> > "Comparable elegant" is underspecified.
> 
> (I guess, "Comparably elegant" would have been grammatically more
> correct.)

I just meant that elegance is in the eye of the beholder.

Many people might agree that having a single command line return the list of 
what suites the server supports is elegant, at least for the user. Others 
prefer the original UNIX philosophy of simpler tools which are scripted to 
perform more complex operations; that's the testssl.sh approach, and it's more 
elegant in the sense of being composed in a visible (and modifiable) way from 
smaller pieces.

A command-line option to s_client to do this sort of server profiling is 
conceivable, but it would be a significant departure from what s_client does 
now, since it would conflict with some other options and would involve making 
multiple connections. That doesn't mean it shouldn't be implemented, 
necessarily, just that it's not parallel to most of the other things s_client 
options do.

-- 
Michael Wojcik


RE: Enumerating TLS protocol versions and ciphers supported by the peer

2021-12-06 Thread Michael Wojcik
From: openssl-users  On Behalf Of Dr. 
Matthias St. Pierre
Sent: Monday, 6 December, 2021 07:12


> today I learned that nmap has a nice feature to enumerate the protocol 
> versions and cipher
> suites supported by the peer (see below).
> Is there a comparable elegant way to obtain the same results using the 
> `openssl s_client`
> tool?

"Comparable elegant" is underspecified.

Perhaps try testssl.sh (https://testssl.sh/)? It has various options for 
reducing the number and types of tests it runs. We've used it for profiling 
internal TLS-enabled servers.

-- 
Michael Wojcik


RE: “EC PUBLIC KEY”

2021-11-17 Thread Michael Wojcik
> From: Michael Wojcik
> Sent: Wednesday, 17 November, 2021 14:22
> To: openssl-users@openssl.org
> Subject: RE: “EC PUBLIC KEY”
> 
> > From: openssl-users  On Behalf Of
> Billy
> > Brumley
> > Sent: Wednesday, 17 November, 2021 12:40
> > To: openssl-users@openssl.org
> > Subject: Re: “EC PUBLIC KEY”
> >
> > That's an ed25519 key. Not an ECC key. They are different formats, at
> > both the OID and asn1 structure levels.
> 
> Oh, of course you're right. Apologies.

Further on this, I'd like to know where the OP got a file with a "BEGIN EC 
PUBLIC KEY" header. Various discussions elsewhere (including one from this list 
in 2017) cast doubt on the existence of any such beast.

The PEM header "BEGIN EC PRIVATE KEY" is used by the OpenSSL "traditional" 
format for EC private keys. EC private keys in PKCS#8 format (in PEM format) 
use "BEGIN PRIVATE KEY" because PKCS#8 includes metadata about the key type.

Public keys all use "BEGIN PUBLIC KEY" (in PEM format) because, if I understand 
correctly, they're all in SPKI (SubjectPublicKeyInfo) format, as specified in 
RFC 5280 (PKIX Certificate and CRL Profile); and SPKI also includes key-type 
metadata.
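
That key-type metadata is why, at the API level, a single call loads a public 
key of any algorithm. A sketch (error handling omitted; the filename is 
hypothetical):

    #include <openssl/pem.h>

    BIO *in = BIO_new_file("some-key-public.pem", "r");
    /* Works for RSA, EC, Ed25519, ...: the SPKI metadata identifies
     * the algorithm, so no type-specific reader is needed. */
    EVP_PKEY *pkey = PEM_read_bio_PUBKEY(in, NULL, NULL, NULL);
    BIO_free(in);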

If someone does have a file with a "BEGIN EC PUBLIC KEY" PEM header, it would 
be interesting to see it, or at least the output from openssl asn1parse, and to 
know where it came from.

Or I could be wrong about all of this once again. Live and learn.

-- 
Michael Wojcik


RE: “EC PUBLIC KEY”

2021-11-17 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Billy
> Brumley
> Sent: Wednesday, 17 November, 2021 12:40
> To: openssl-users@openssl.org
> Subject: Re: “EC PUBLIC KEY”
> 
> That's an ed25519 key. Not an ECC key. They are different formats, at
> both the OID and asn1 structure levels.

Oh, of course you're right. Apologies.


RE: “EC PUBLIC KEY”

2021-11-17 Thread Michael Wojcik
> From: openssl-users  On Behalf Of
> Felipe Gasper
> Sent: Wednesday, 17 November, 2021 09:12
> To: openssl-users@openssl.org
> Subject: “EC PUBLIC KEY”
> 
>   Does OpenSSL intend to handle EC public keys that in PEM begin
> “BEGIN EC PUBLIC KEY”?
> 
>   I can’t find a way to output this format and am not sure if it’s
> actually defined anywhere, but it seems like a logical analogue to the
> default/legacy RSA public key format.

With 1.1.1i (which is the version of the openssl command-line utility that I 
happen to have on my path at the moment):

# Generate a new Ed25519 key pair:
$ openssl genpkey -algorithm ed25519 -out ed25519-key.pem

# Extract its public key:
$ openssl pkey -in ed25519-key.pem -pubout -out ed25519-key-public.pem

# Confirm the public key:
$ openssl pkey -pubin -in ed25519-key-public.pem

This uses the PEM header "BEGIN PUBLIC KEY", but it's an ECC public key in PEM 
format.

This version of OpenSSL doesn't recognize "BEGIN EC PUBLIC KEY", but it'd be 
trivial to script copying the key to a temporary file and editing the PEM 
header and footer.

-- 
Michael Wojcik


RE: need help cross-compiling SSL for 5 different QNX OS target platforms

2021-11-08 Thread Michael Wojcik
> From: openssl-users  On Behalf Of 
> Williams, Roger
> Sent: Monday, 8 November, 2021 07:55


> I am trying to cross-compile the SSL software

Specifically, you're trying to build OpenSSL 1.1.1l, using cross-compilation. 
OpenSSL is only one implementation of SSL/TLS, so calling it "the SSL software" 
is not technically correct.

> ... for [various versions and platforms] of the QNX Operating System:

> While trying to compile SSH 8.8, it is providing the error:

>    In file included from openssl-compat.c:32:
> openssl-compat.h:37:3: error: #error OpenSSL 1.0.1 or greater is required

> To satisfy this condition, I downloaded openssl-1.1.1l.  I do not know how to
> configure/make this software to create the 5 sets of SSL libraries required by
> SSH to make for my 5 targets. 

Have you read the README and INSTALL files in the OpenSSL distribution?

That may not help you, however. According to CHANGES, QNX support was removed 
for the 1.1.1 stream due to licensing issues with the contributions that had 
been made for it. So it appears you'll have to do a fresh port, not simply a 
build. That's generally a job for someone who has a good understanding of 
cryptography, TLS, and OpenSSL.

You might be able to build the final 1.1.0 release (1.1.0l). It can be found at 
https://www.openssl.org/source/old/1.1.0/.

There are serious security issues with using older, unsupported OpenSSL 
versions, but 1) you'll presumably only be using portions of the cryptography 
layer, and not TLS support, since this is for SSH; and 2) you don't appear to 
have any other viable choice, at least based on what you've told us.

For all I know, there's an alternative cryptography library you can use with 
SSH on QNX. I don't work on that platform, and we don't know what possibilities 
you've investigated.

-- 
Michael Wojcik


RE: Openssl 1.1.1l compilation issue for aix64-cc

2021-10-28 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Thiagu 
> Mohan
> Sent: Thursday, 28 October, 2021 07:31

> Openssl  Version 1.1.1l

> I am trying to compile openssl in Aix 7.2 OS ( ./Configure aix64-cc   )

I don't recall seeing these errors in our own builds, but I think the latest 
version of AIX we're building on is 7.1. And it looks like we're still using 
1.1.1k (must not have 
needed the fixes from 1.1.1l). Also, I'm no longer the person doing our UNIX 
builds.

> While running make, receiving error 

> "Undeclared identifier RTLD_MEMBER" 

This appears to be because _ALL_SOURCE isn't defined.

>            and  
> "ldinfo_next" is not a member of "struct ld_info". 

The ldinfo_next error is a cascade from the earlier errors reported against 
sys/ldr.h. I have no idea at the moment why you're getting those; it's not 
obvious from a quick look at the header. And at least the one on the AIX 7.1 
box I checked hasn't changed since 2014.

> ...

> cc  -I. -Iinclude -qpic -q64 -qmaxmem=16384 -qro -qroconst -qthreaded -O 
> -DB_ENDIAN -DOPENSSL_PIC 
> -DOPENSSLDIR="\"/usr/mohant2/aix/openssl_1.1.1l_aix\"" 
> -DENGINESDIR="\"/usr/mohant2/aix/openssl_1.1.1l_aix/lib/engines-1.1\"" 
> -D_THREAD_SAFE -DNDEBUG -D_REENTRANT -D_XOPEN_SOURCE=700  -c -o 
> crypto/dso/dso_dlfcn.o crypto/dso/dso_dlfcn.c
> "crypto/dso/dso_dlfcn.c", line 114.18: 1506-045 (S) Undeclared identifier 
> RTLD_MEMBER.
> "/usr/include/sys/ldr.h", line 168.9: 1506-046 (S) Syntax error.
> "/usr/include/sys/ldr.h", line 205.5: 1506-046 (S) Syntax error.
> "/usr/include/sys/ldr.h", line 218.5: 1506-046 (S) Syntax error.
> "/usr/include/sys/ldr.h", line 225.5: 1506-046 (S) Syntax error.
> "/usr/include/sys/ldr.h", line 265.45: 1506-046 (S) Syntax error.
> "crypto/dso/dso_dlfcn.c", line 398.53: 1506-022 (S) "ldinfo_next" is not a 
> member of "struct ld_info".
> "crypto/dso/dso_dlfcn.c", line 400.24: 1506-022 (S) "ldinfo_next" is not a 
> member of "struct ld_info".

Try editing the Makefile and adding -D_ALL_SOURCE to see if that fixes the 
RTLD_MEMBER error. It might also have an effect on the ldr.h errors. If so, the 
Configure entry for aix64-cc might need an update.

-- 
Michael Wojcik


RE: Consultation:Additional “ephemeral public key” and “ephemeral private key" implementations for quictls/opens

2021-08-29 Thread Michael Wojcik
> From: openssl-users  On Behalf Of 
> Sent: Sunday, 29 August, 2021 07:04

> Specifically, we are trying to enable “ephemeral public key” and 
> “ephemeral private key" for SSL/TLS.

I'm afraid it is not clear to me, at least, what you are trying to do.

Are you attempting to implement a standard protocol that incorporates ephemeral 
key pairs, such as EKE, into TLS? Are you implementing a standard specifically 
for TLS that I'm not aware of? (That's quite possible; I don't follow TLS 
standards closely.)

If not, what is your use case? How do you see your protocol interacting with 
TLS?

Some might argue that OpenSSL is not especially well-suited for adding 
experimental ciphersuites and protocols to its TLS implementation. Its focus is 
on providing a secure and rich commercial implementation of TLS and various 
cryptographic operations and protocols, not on providing a toolkit for 
researchers.

I've never used quictls (as I think QUIC is broadly undesirable for most 
applications), but my understanding is that it's a fork of OpenSSL, so it's 
probably not any better in that regard.

-- 
Michael Wojcik



RE: SM2 fix in 1.1.1l

2021-08-27 Thread Michael Wojcik
> From: Nicola Tuveri 
> Sent: Friday, 27 August, 2021 07:04

> As such only applications programmatically using the SM2 public key encryption
> algorithm (and decryption in particular) should be affected by the mentioned
> security advisory.

Thanks -- that's exactly what I was looking for.

--
Michael Wojcik


SM2 fix in 1.1.1l

2021-08-27 Thread Michael Wojcik
I imagine I could figure this out by reading the source, but does the SM2 fix 
(the high-severity issue for OpenSSL 1.1.1l) apply to TLS using SMx (RFC 8998), 
or just to applications that use SM2 directly via the EVP API? It wasn't clear 
from the announcement, unless I missed something.

We'll be picking up 1.1.1l shortly, but I'd like to be able to clarify the 
situation for management and customers.

--
Michael Wojcik


RE: problems with too many ssl_read and ssl_write errors

2021-08-26 Thread Michael Wojcik
Please reply to the list rather than to me directly.

> From: Kamala Ayyar 
> Sent: Thursday, 26 August, 2021 08:57

> We call the WSAGetLastError immediately after SSL_ERROR_SYSCALL and we get
> the WSAETIMEDOUT

OK. This wasn't entirely clear to me from your previous message. So you are 
getting a network-stack timeout on a sockets operation; this isn't a TLS 
protocol issue or anything else at a level above the network stack.

> We also call the ERR_print_errors(bio); but it displays a blank line.  We call
> ERR_clear_error() before the SSL_read as mentioned in the manual.

I'm not sure why that might be happening. It may be that OpenSSL doesn't log 
any error messages in this case; I'd have to look at the OpenSSL source code to 
figure that out.

> The  ERR_print_errors() does not print anything- Is the error getting cleared
> because we called the WSAGetLastError() ?

That shouldn't affect the OpenSSL error list.

> Is there an order in which the Windows WSAGetLastError() should be called 
> before
> SSL_get_error()?

I don't believe so. They should be independent. The OpenSSL error list is 
maintained by OpenSSL; WSAGetLastError retrieves the Winsock error code. The 
two don't share data.

> We will try changing some of the timeouts on either side and try.

Make sure that's stack timeouts you're changing: calls to setsockopt, or 
Registry settings if you're not overriding them on your sockets. 
Application-level timeouts aren't the issue here.
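
For example (a sketch, assuming a connected socket s; on Winsock the timeout 
option values are DWORDs in milliseconds):

    #include <winsock2.h>

    DWORD timeout_ms = 30000;   /* 30-second stack-level receive timeout */
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
               (const char *)&timeout_ms, sizeof(timeout_ms));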

You may need to involve a network administrator to look at network interface 
statistics, check wire traces to see if receive windows are closed, and look 
for interference from middleboxes such as routers and firewall appliances or 
from application firewalls, IDSes, and so on. These sorts of issues are not 
uncommon when there are load balancers, traffic-inspecting firewalls, or the 
like interfering with network traffic.

--
Michael Wojcik


RE: problems with too many ssl_read and ssl_write errors

2021-08-25 Thread Michael Wojcik
> From: Kamala Ayyar  
> Sent: Monday, 23 August, 2021 09:22

> We get the SSL_ERROR_SYSCALL from SSL_Read and SSL_Write quite often.

You'll get SSL_ERROR_SYSCALL any time OpenSSL makes a system call (including, 
on Windows, a Winsock call) and gets an error.

> It seems the handshake is done correctly and over a period of time (few hours
> to 2-3 days random) the SSL_Read /SSL_Write fails.  We do not get the
> WSAEWOULDBLOCK error code

What is the underlying error, then? Are you logging the result of 
WSAGetLastError immediately after you get SSL_ERROR_SYSCALL? What about the SSL 
error stack (with ERR_print_errors_fp or similar)?
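
The pattern I'd suggest is to capture everything as close to the failing call 
as possible. A Windows-specific sketch (ssl is assumed to be an established 
connection):

    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <winsock2.h>
    #include <stdio.h>

    char buf[4096];
    int n = SSL_read(ssl, buf, sizeof(buf));
    if (n <= 0) {
        int sslerr = SSL_get_error(ssl, n);
        if (sslerr == SSL_ERROR_SYSCALL)
            /* Read the Winsock error before any other call can
             * disturb it. */
            fprintf(stderr, "SSL_read: syscall error, WSA error %d\n",
                    WSAGetLastError());
        ERR_print_errors_fp(stderr); /* OpenSSL error queue, if any */
    }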

> nor the OpenSSL's version of SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE 
> error.

SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE are not related to WSAEWOULDBLOCK, 
so I'm not sure why you're mentioning them here.

> We get WSAETIMEDOUT on Receive more often and a few times on the Send.

That's typically the case; generally speaking, a timeout is more likely when 
receiving (where you are at the mercy of the peer sending data) than when 
sending (where you simply need the peer to open the receive window and then ACK 
the sent data, both of which are often possible even if the application is not 
behaving, depending on the amount of data and other variables).

> We are not using SO_KEEPALIVE but using application specific heartbeat TO to
> keep the socket alive.

That could certainly cause send or receive timeouts on the socket if the peer 
becomes unresponsive. The same is true of any application-data transmission, of 
course.
 
> Based on blogs and googling we have seen that OpenSSL quite often issues a
> SSL_ERROR_SYSCALL when a Timeout is encountered 

Yes, that's what it should do, if "when a timeout is encountered" means "a 
socket-API function returns an error due to a timeout". SSL_ERROR_SYSCALL means 
exactly that: a system call returned an error.

I suspect one of the following:

- A client application is hanging (or blocking for some other reason), and 
consequently:
  - Not sending data, so the server's not receiving data until it times out, or
  - Not receiving data that the server is sending; that will cause its receive 
window to fill, and eventually the server's send will time out.

- Network issues are transiently preventing data and/or ACK reception by one 
side or the other. That will also eventually lead to timeouts.

-- 
Michael Wojcik


RE: Need some help signing a certificate request

2021-08-23 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Jakob
> Bohm via openssl-users
> Sent: Monday, 23 August, 2021 04:40
>
> On 21/08/2021 19:42, Michael Wojcik wrote:
> >> From: rgor...@centerprism.com 
> >> Sent: Saturday, 21 August, 2021 11:26
> >>
> >> My openssl.cnf (I have tried `\` and `\\` and `/` directory separators):
> > Use forward slashes. Backslashes should work on Windows, but forward
> > slashes work everywhere. I don't know that "\\" will work anywhere.
> \\ works only when invoking a \ expecting program from a unix-like shell
> that requires each \ to be escaped with a second backslash in order to
> pass it through.  A typical example is using CygWin bash to invoke a
> native Win32 program.

Yes, I know that. I use bash on Windows as my default shell. I meant I have no 
idea whether \\ would work in an OpenSSL configuration file on Windows. Windows 
APIs such as CreateFile normally tolerate extraneous backslashes, but I haven't 
tested them in OpenSSL configuration files.

> \\ where neither is an escape (so \\\\ in the above shell situation) is
> also used in native Windows programs to access a hypothetical root that
> is above the real file system roots, typically the syntax is
> "\\machine\share\ordinary\path", where:

I'm well aware of that too. And of the use of \\?\ as a prefix for CreateFileW 
et alia to enable long paths. That's not relevant in this case, since OP was 
talking about path separators, not prefixes.

--
Michael Wojcik


RE: Need some help signing a certificate request

2021-08-21 Thread Michael Wojcik
> From: rgor...@centerprism.com 
> Sent: Saturday, 21 August, 2021 11:26
> 
> My openssl.cnf (I have tried `\` and `\\` and `/` directory separators):

Use forward slashes. Backslashes should work on Windows, but forward slashes 
work everywhere. I don't know that "\\" will work anywhere. 

> [ ca ]
> default_ca = testca
> 
> [ testca ]
> dir = .
> certificate = $dir\\ca_certificate.pem
> database = $dir\\index.txt

What's in index.txt? Is it a valid OpenSSL CA index file, or completely empty 
(zero length)?

If it's not either of those, replace it with an empty file, for example with:

copy nul index.txt

> new_certs_dir = $dir\\certs
> private_key = $dir\\private\\ca_private_key.pem

These directories exist?

> serial = $dir\\serial

This file exists? Though you really shouldn't be assigning serial numbers; you 
should let OpenSSL create them using the -create_serial option.

> 
> default_crl_days = 7
> default_days = 365
> default_md = sha256
> 
> policy = testca_policy
> x509_extensions = certificate_extensions
> 
> [ testca_policy ]
> commonName = supplied
> stateOrProvinceName = optional
> countryName = optional
> emailAddress = optional
> organizationName = optional
> organizationalUnitName = optional
> domainComponent = optional
> 
> [ certificate_extensions ]
> basicConstraints = CA:false
> 
> [ req ]
> default_bits = 2048
> default_keyfile = .\\private\\ca_private_key.pem
> default_md = sha256
> prompt = yes
> distinguished_name = root_ca_distinguished_name
> x509_extensions = root_ca_extensions
> 
> [ root_ca_distinguished_name ]
> commonName = hostname
> 
> [ root_ca_extensions ]
> basicConstraints = CA:true
> keyUsage = keyCertSign, cRLSign
> 
> [ client_ca_extensions ]
> basicConstraints = CA:false
> keyUsage = digitalSignature,keyEncipherment
> extendedKeyUsage = 1.3.6.1.5.5.7.3.2

Why are you specifying this by OID? Just use "extendedKeyUsage = clientAuth". 
(I'm assuming a reasonably recent OpenSSL version.)

> 
> [ server_ca_extensions ]
> basicConstraints = CA:false
> keyUsage = digitalSignature,keyEncipherment
> extendedKeyUsage = 1.3.6.1.5.5.7.3.1


Your command line was:

> openssl ca -config .\openssl.cnf -in ../server/req.pem -out 
> server_certificate.pem -notext -batch -extensions server_ca_extensions

Try it without -batch and with -verbose. And again I'd recommend 
-create_serial, unless you have some strange requirement to control serial 
numbers. Browsers in particular may be unhappy if your serial numbers don't 
conform to the CA/BF Basic Requirements, and it's a pain trying to do that 
manually.

-- 
Michael Wojcik


RE: Need some help signing a certificate request

2021-08-21 Thread Michael Wojcik
> From: openssl-users  On Behalf Of 
> rgor...@centerprism.com
> Sent: Saturday, 21 August, 2021 09:48

> Thanks for the comment. I have tried both `/` and `\` with no change.

Most or all Windows APIs, and most programs, support the forward slash as a 
directory separator. The exceptions are mostly the cmd.exe built-ins.

> On Sat, Aug 21, 2021 at 09:21, rgor...@centerprism.com wrote:
> When I type ‘openssl ca -config .\openssl.cnf -in ../server/req.pem -out

We need to see the contents of openssl.cnf. It might also help to have the CSR 
(req.pem). Since a CSR doesn't contain the private key (the CA should never see 
the private key), this is safe to share.

-- 
Michael Wojcik


RE: problems with too many ssl_read and ssl_write errors

2021-08-19 Thread Michael Wojcik
Has a system administrator analyzed the Windows event logs and the network 
statistics? Has anyone looked at network traces when the problem is occurring?

-- 
Michael Wojcik


RE: Compilation error using OpenSSL 1.1.1i

2021-07-01 Thread Michael Wojcik
> From: openssl-users  On Behalf Of 
> Jayalakshmi bhat
> Sent: Wednesday, 30 June, 2021 08:29

> I am getting the below error. Does anyone have inputs. Any help would be 
> appreciated.

> openssl/safestack.h(159) : error C2054: expected '(' to follow '__inline__'

[I don't think I've seen a reply to this. If it's already been answered, my 
apologies for the noise.]

With OpenSSL build questions, please always supply the Configure command line.

Offhand, it looks like your compiler doesn't recognize __inline__ as a 
decoration on a function declaration, and the pp-token "ossl_inline" is defined 
to expand to "__inline__".

Without digging through the configuration mechanism I can't say exactly how 
ossl_inline is defined, but presumably it's set by configuration based on what 
C implementation it believes you're using. So you may be using the wrong 
Configure target, or that target may assume a different C compiler, or a newer 
version of it.
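
For illustration only -- this is the general shape of such a mapping, not 
OpenSSL's actual configuration logic -- the build has to define ossl_inline to 
a spelling the target compiler accepts:

    #if defined(__GNUC__)
    # define ossl_inline __inline__
    #elif defined(_MSC_VER)
    # define ossl_inline __inline
    #else
    # define ossl_inline inline    /* C99 and later */
    #endif

If the chosen Configure target assumes GCC and you're actually compiling with 
something else, you get exactly the kind of error you're seeing.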

--
Michael Wojcik


RE: openssl 1.1.1k on solaris 2.6 sparc

2021-06-24 Thread Michael Wojcik
> From: openssl-users  On Behalf Of david 
> raingeard
> Sent: Thursday, 24 June, 2021 07:06

> I compiled it using sun compiler, with some modifications to the source code.

If memory serves, OpenSSL doesn't work on Solaris SPARC if built using the Sun 
compiler. You have to use GCC. I'm pretty sure we discovered this in our SPARC 
product builds.

This, and some other platform issues (there's one with GCC optimization on x86 
64-bit, the details of which escape me now), are things I keep hoping to find 
time to dig into, but more-pressing work never seems to ease up.

--
Michael Wojcik


RE: reg: question about SSL server cert verification

2021-06-18 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Jakob
> Bohm via openssl-users
> Sent: Friday, 18 June, 2021 09:38
>
> On 2021-06-18 16:23, Michael Wojcik wrote:
>
> >> From: openssl-users  On Behalf Of Jakob
> >> Bohm via openssl-users
> >> Sent: Friday, 18 June, 2021 07:10
> >> To: openssl-users@openssl.org
> >> Subject: Re: reg: question about SSL server cert verification
> >>
> > And there are a whole bunch of other checks: signature, validity dates, key
> > usage, basic constraints...
>
> Those checks would presumably happen after chain building,
> verifying that signatures, dates, key usage and other constraints
> are correct.

Well, that depends on the implementation; it could perform those checks while 
building the chain, as each certificate is added to the chain. My point was 
that they'll happen eventually, since the OP's question was pretty broad.

> > Also, the correspondence between the peer identity as requested by the
> > client, and as represented by the entity certificate, should not be done
> > using the CN component of the Subject DN (as OP suggested), but by comparing
> > against the Subject Alternative Name extension values
> >
> > (Jakob knows all this.)
>
> Actually, I have heard of nothing at all proposing the use of
> SANs on CA certificates or their use in chain building.  Hence
> why I refer only to matching the complete DN and/or matching
> the "key identifier" field.

I was only talking about the entity ("server", in this case) certificate above. 
The original message wasn't clear about whether the OP understood the use of 
SANs for the entity certificate and its validation against the peer name 
presented by the local application.

> However it is something that should be documented in OpenSSL
> documents such as the "verify(1ssl)" manpage, but somehow isn't.

Yes, that would be ideal. But, of course, someone needs to write that 
documentation.

> Revocation checks would also be part of the post-chain-building
> checks.

Yeah. I was looking at the bigger verification process.

> > My advice, for someone who wants to understand the certificate-validation
> > process in TLS, is:
> > [Snipped: List of academic texts for those who want to implement their own
> > X.509 code]

Well, opinions can differ. I've dealt with many, many customers who simply 
couldn't diagnose PKI issues because they didn't understand all the technical 
aspects of the process. They didn't know that browsers were rejecting the 
entity certificates generated by their internal CA because they had CA: TRUE in 
the Basic Constraints. They didn't understand that an entity certificate with 
no SANs wouldn't match both the bare hostname and the FQDN. They didn't 
understand how to manually construct the chain to understand which intermediate 
certificates they needed.

PKIX is a horrible mess of arcane specifications, requirements, and 
implementation idiosyncrasies. In my experience, extensive technical knowledge 
is required to diagnose even a decent subset of the more common failure modes.

--
Michael Wojcik


RE: reg: question about SSL server cert verification

2021-06-18 Thread Michael Wojcik
[...] by various national governments. That's just one example of an 
X.509-related mess that almost no one pays attention to.

In practice you can learn enough about it to diagnose most 
certificate-validation problems. But it takes time and effort.

--
Michael Wojcik


RE: FW: X509_verify_cert() rejects all trusted certs with "default" X509_VERIFY_PARAM

2021-06-01 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Jakob
> Bohm via openssl-users
> Sent: Tuesday, 1 June, 2021 09:58
>
> There is a very common extension to the validation of X.509
> certificates (which should ideally be available as an option
> parameter to OpenSSL validation APIs): The EKU in a CA:True
> certificate limits the end cert EKU values that are acceptable.
> The rule is NOT applied to ocspSigning due to a conflict with
> that EKU authorizing the CA public key to sign OCSP responses
> for the parent CA.
>
> For example a CA with EKU=emailProtection,clientAuth cannot be
> used to issue valid EKU=serverAuth certificates, however it can
> still issue a delegated EKU=ocspSigning delegated OCSP signing
> certificate.
>
> In this filtering anyExtendedKeyUsage acts as a wildcard
> indicating a universal CA. In practice, the complete
> absence of the EKU extension acts as an equivalent wildcard.

Makes sense. It would be nice if this were standardized as an update to RFC 
5280.

> The OpenSSL 3 code discussed, as described by Graham, appears
> to incorrectly apply the wildcard check without ORing it with
> the normal check for inclusion of the usage for which the chain
> is built and validated.  (I recommend that where such filtering
> is done, it is part of chain building as different chains may
> succeed for different usages).

Yeah, I suspected that, but I wanted to see if other people more familiar with 
this area of the code were going to comment.

> The CAB/F "guidelines" tend to include arbitrary restrictions above and
> beyond what good X.509 software libraries should do, such as limiting
> validity to 1 year, requiring end certificate holders to be magically
> able to respond to sudden revocations for bureaucratic reasons etc.  Or
> as quoted by Michael, a rule that all roots must be universal roots with
> the no-EKU implicit wildcard.

Agreed. I refer our customers to the CA/BF Basic Requirements when they're 
dealing with browsers and mainstream web servers -- since those programs are 
often written to follow the CA/BF rules -- but try to make it clear that the 
CA/BF doesn't control PKIX.

--
Michael Wojcik


FW: X509_verify_cert() rejects all trusted certs with "default" X509_VERIFY_PARAM

2021-05-28 Thread Michael Wojcik
Just realized I sent this directly to Graham instead of to the list.

-Original Message-
From: Michael Wojcik
Sent: Friday, 28 May, 2021 09:37
To: 'Graham Leggett' 
Subject: RE: X509_verify_cert() rejects all trusted certs with "default" 
X509_VERIFY_PARAM

> From: openssl-users  On Behalf Of Graham
> Leggett via openssl-users
> Sent: Friday, 28 May, 2021 06:30
>
> I am lost - I can fully understand what the code is doing, but I can’t see
> why openssl only trusts certs with “anyExtendedKeyUsage”.

Interesting. I wondered if this might be enforcing some RFC 5280 or CA / 
Browser Forum Baseline Requirements rule.

5280 4.2.1.12 says:

   In general, this
   extension will appear only in end entity certificates.

and

   If the extension is present, then the certificate MUST only be used
   for one of the purposes indicated.

Your certificate has serverAuth and emailProtection, yes? So it cannot be used 
to sign other certificates, and OpenSSL is correct as far as that goes. 5280 
doesn't define an EKU for signing certificates; so perhaps the intent of the 
OpenSSL code is "if EKU is present, this probably can't be used as a CA cert 
without violating 5280, but I'll look for this 'any' usage just in case and 
allow that".

The errata for 5280 and the RFCs which update it do not appear to affect this 
section.


The CA/BF BR 7.1.2.1, the part of the certificate profile that covers root 
certificates, says:

   d. extKeyUsage
  This extension MUST NOT be present.

Now, there's no particular reason for OpenSSL to enforce CA/BF BR, and good 
reason for it not to (the "CA" part refers to commercial CAs, and not all 
clients are browsers). But it's more evidence that root certificates, at least, 
should not have extKeyUsage because browsers can correctly reject those.

The CA/BF profile is more complicated regarding what it calls "subordinate" 
certificates, aka intermediates, so for non-root trust anchors there are cases 
where you can get away with extKeyUsage. But a good rule is "only put 
extKeyUsage on entity [leaf] certificates".


So that really leaves us with the question "do we want OpenSSL enforcing the 
extKeyUsage rules of RFC 5280?". And I'm tempted to say yes. In principle, the 
basicConstraints CA flag and the keyUsage keyCertSign option should suffice for 
this, but defense in depth, and in cryptographic protocols consistency is 
extremely important.

--
Michael Wojcik


FW: Strange warnings while linking to openssl version 1.1.1k

2021-04-12 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Robert 
> Smith via openssl-users
> Sent: Monday, 12 April, 2021 14:52

Your message has a Reply-to header set, directing replies to you rather than to 
the list. Don't do that; it's rude. Ask a question here, read the reply here. 
Other people may be interested in the responses.

> I am getting the following warning while linking my app to openssl version 
> 1.1.1k.
> Could you advise what can cause these warnings and how to resolve them?
> ../../../artifacts/openssl/arm3531/lib/libcrypto.a(async_posix.o): In 
> function `ASYNC_is_capable':
> async_posix.c:(.text+0x48): warning: warning: getcontext is not implemented 
> and will always fail

DuckDuckGo is your friend. The first hit for "getcontext is not implemented and 
will always fail" explains that this is an ARM issue, not an OpenSSL one. 
Another hit a little further down provides more details. See:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=369453

No one has implemented getcontext, etc., for ARM yet. Consequently they don't 
work. The warning messages are emitted by the GNU toolchain, which knows the 
context functions are not available on this platform.

OpenSSL can detect this at runtime - see ASYNC_is_capable() in async_posix.c, 
and its use in e.g. speed.c. Since there is no viable async implementation on 
Linux-ARM, you won't be able to use the OpenSSL async-job APIs, as described in 
the OpenSSL docs. If you don't need those APIs, these warnings are irrelevant.
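
If your application does use those APIs, the runtime guard is straightforward 
(a sketch):

    #include <openssl/async.h>

    if (ASYNC_is_capable()) {
        /* safe to use ASYNC_start_job() and related APIs */
    } else {
        /* fall back to synchronous operation */
    }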

--
Michael Wojcik


RE: Why does OpenSSL report google's certificate is "self-signed"?

2021-04-01 Thread Michael Wojcik
> From: Blumenthal, Uri - 0553 - MITLL 
> Sent: Thursday, 1 April, 2021 10:09
> To: Michael Wojcik ; openssl-users@openssl.org
> Subject: Re: Why does OpenSSL report google's certificate is "self-signed"?
>
> In general - I concur, but there are nuances: sending root CA cert is mostly
> harmless, but mostly useless - except when there's a human on the receiving
> end that can and is allowed to make a decision to accept and trust that CA
> cert.

Agreed. I tried to capture the summary of pros and cons in the document I'm 
writing for our customers.

> Re. PQC - even the "smallest" among them are much larger than what the
> Classic keys and signatures are. E.g., Falcon-1024 signature is 1330 bytes
> (or often less - say, 1200 bytes). Falcon-1024 public key is 1793 bytes.
> Compare to, e.g., ECC-384 sizes... NTRU public keys are "easier", but not by
> that much: 1230 bytes. Kyber public key is 1568 bytes. And I picked the
> *smallest* ones - those I'd consider using myself.
>
> There's also McEliece...

Yeah, if NIST standardizes on Classic McEliece for KEM, that's going to give us 
some *big* keys.

Certainly for resource-constrained applications, like embedded or high-volume, 
it makes sense to omit the root even with ECC. A few KB here and there will add 
up.

--
Michael Wojcik


RE: Why does OpenSSL report google's certificate is "self-signed"?

2021-04-01 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Mark
> Hack
> Sent: Thursday, 1 April, 2021 07:45
> To: openssl-users@openssl.org
> Subject: Re: Why does OpenSSL report google's certificate is "self-signed"?
>
> RFC6066
>
>    Note that when a list of URLs for X.509 certificates is used, the
>    ordering of URLs is the same as that used in the TLS Certificate
>    message (see [RFC5246], Section 7.4.2), but opposite to the order in
>    which certificates are encoded in PkiPath.  In either case, the
>    self-signed root certificate MAY be omitted from the chain, under the
>    assumption that the server must already possess it in order to
>    validate it.

Thanks! I thought I'd seen something about the question in some standard. 
Having seen this, I see that RFC 8446 (TLSv1.3) has essentially the same 
language: "a certificate that specifies a trust anchor MAY be omitted from the 
chain" (4.4.2). So servers are good either way.

--
Michael Wojcik


RE: Why does OpenSSL report google's certificate is "self-signed"?

2021-04-01 Thread Michael Wojcik
Thanks to everyone who responded. You've confirmed my impression:

- There doesn't appear to be any applicable standard which requires or forbids 
including the root (or even endorses or discourages it).

- It's harmless except for performance issues and possible low-severity flags 
from analyses like Qualys's. (I wouldn't be surprised to have a customer raise 
this -- many of our customers run various scanning tools -- but for the 
products I work with, customers configure certificate chains anyway, so it's 
not a product issue.)

- Performance issues are likely negligible in many cases, where servers aren't 
dealing with huge workloads, but it's worth remembering that eventually people 
will be deploying PQC and most of the NIST finalists involve significantly 
larger keys or signatures. (They don't *all* have much larger keys/signatures; 
Falcon has a small combined public key and signature, if memory serves.)

--
Michael Wojcik


RE: Why does OpenSSL report google's certificate is "self-signed"?

2021-03-31 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Viktor
> Dukhovni
> Sent: Wednesday, 31 March, 2021 10:31
> To: openssl-users@openssl.org
> Subject: Re: Why does OpenSSL report google's certificate is "self-signed"?
>
> It looks like Google includes a self-signed root CA in the wire
> certificate chain, and if no match is found in the trust store,
> you'll get the reported error.

What do people think about this practice of including the root in the chain?

As far as I can see, neither PKIX (RFC 5280) nor the CA/BF Baseline 
Requirements say anything about the practice, though I may have missed 
something. I had a vague memory that some standard or "best practice" guideline 
somewhere said the server should send the chain up to but not including the 
root, but I don't know what that might have been.

On the one hand, including the root doesn't help with path validation: either 
some certificate along the chain is a trust anchor already, in which case 
there's no need to include the root; or it isn't, in which case the peer has no 
reason to trust the chain.

On the other, it's useful for debugging, and perhaps for quickly finding 
whether the highest intermediate in the chain is signed by a trusted root if 
that intermediate is missing an AKID (though we'd hope that isn't the case).

I can also see an application deferring trust to the user in this case: "this 
chain ends in this root, which you don't currently trust, but maybe you'd like 
to add it?". Which doesn't seem like a great plan either -- and PKIX says trust 
anchors should be added using a trustworthy out-of-band procedure, which this 
is not -- but I suppose it's a conceivable use case.

--
Michael Wojcik


RE: FIPS compliance with openssl-1.1.1j

2021-03-12 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Nagarjun 
> J
> Sent: Friday, 12 March, 2021 06:49

> How can we be FIPS-compliant with the openssl-1.1.1j version, as it does not
> have a FIPS object module? Is there any way?

It's possible, in theory; it's even been done. But it's almost certainly not 
feasible for your organization.

You can port the OpenSSL 1.0.2 FOM to work with 1.1.1; Red Hat and SUSE both 
did that. Or write your own FIPS-140-compliant crypto layer. Then there's just 
the small matter of getting it validated, which involves some expense (tens of 
thousands of dollars) and delay (the CMVP is booked solid for the rest of the 
year, I hear); and the CMVP probably aren't going to do any more FIPS 140-2 
validations after the current batch, now that FIPS 140-3 is here.

If you did get the 1.0.2 FOM working with 1.1.1, it's possible you'd be able to 
convince some customers to accept a self-validation based on the existing 
OpenSSL validation. Of course the OpenSSL validation for the existing FOM is on 
the Historical list, which means it's not supposed to be used for new 
procurements anyway.

So, in practice, no. Unless you're on Red Hat Enterprise Linux or SUSE 
Enterprise Linux and can use the FIPS-validated OpenSSL 1.1.1 they supply, I 
guess. (I assume that's available in some RHEL and SLES releases -- I haven't 
actually checked. I just know that Red Hat announced they'd done it, and SUSE 
actually published their patches.)

If it's any consolation, many organizations are in the same boat. We have 
products which are still shipping FIPS, but that's with an OpenSSL 1.0.2 with 
Premium Support and in some cases with a substitute FIPS module that we 
developed years ago and got our own validations for. That's not an option for 
most people. (I don't blame openssl.org for this state of affairs -- FIPS 
validations are expensive and resource-intensive, and few OpenSSL consumers 
support the project. Yes, 3.0 has slipped its original schedule by quite a lot, 
but better to get it right.)

--
Michael Wojcik


RE: Client certificate authentication

2021-03-11 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Viktor
> Dukhovni
> Sent: Thursday, 11 March, 2021 10:39
> To: openssl-users@openssl.org
> Subject: Re: Client certificate authentication
>
> > On Mar 11, 2021, at 2:16 PM, Robert Ionescu 
> wrote:
> >
> > I am searching for the functions in openssl used to verify the clients
> > certificate when using mutual authentication.
> > My intention is to find a way to log a wrong user certificate directly
> inside
> > the openssl source.
>
> What does "wrong" mean?

This is an important question. PKIX does not specify the interpretation of the 
client certificate. While chain construction and most of the validity checks 
(signature, validity dates, basic constraints, KU and EKU, etc) apply, the 
association between the identity claimed by the certificate and the client is 
not determined by the standard.

Even the form of that association and what is being identified are up to the 
application. Conventionally, I believe these options are most commonly used:

1. The client certificate identifies the peer system, i.e. the network node 
that the server is communicating with. This might look symmetric with the 
client's identification of the server, but it isn't, because a client specifies 
a server identity (e.g. by hostname) and then verifies that using the server 
certificate; but in the normal use case, the server has no prior notion of the 
client system's identity. So the server might get the peer IP address from the 
stack and then look for an IPADDR SAN in the client's certificate which matches 
that, for example. The server might also attempt reverse DNS (PTR record) 
resolution from the IP address to a hostname or FQDN and look for a 
corresponding DNS SAN or Subject CN, though that option is fraught with 
potential for abuse.

2. The client certificate identifies the user. Here the certificate is issued 
to, and identifies, a person or other actor (e.g. the peer application) rather 
than a network identity. What the server application does with this information 
is a further question.

3. The client certificate matches a preconfigured allow list: The server 
application just has some list of "permit any client identified by one of these 
certificates".

4. The client certificate is validated but beyond that is used as an opaque 
reference to some other database. This is a variation on #3. IBM's CICS Web 
Interface provides this mode, where clients can send arbitrary certificates as 
long as they're valid and belong to a chain that terminates in one of the 
configured trust anchors. The handshake is completed. Then the system will look 
that certificate up in the security database to see if it's known and 
associated with a user identity. If not, the application (or more properly the 
CWI subsystem) prompts for user credentials using HTTP Basic Authentication 
(over the TLS channel); if that's successful, the association between client 
certificate and user account is recorded and the conversation continues.

5. No further vetting of the certificate is done. Essentially the client 
authentication serves simply as a generic gatekeeper, so that only clients 
possessing an acceptable certificate are allowed to establish a TLS connection 
to the server. Any authentication beyond that is handled by the application 
using other means.

So a client certificate can be "wrong" in the basic PKIX sense of "invalid 
certificate" or "can't build a path", but beyond that the interpretation is up 
to the server-side application.
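
As a sketch of what the check in option 1 might look like -- the names are 
mine, error handling is minimal, and X509_check_ip_asc has been available 
since 1.0.2:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <openssl/ssl.h>
    #include <openssl/x509v3.h>

    /* Does the peer's source address match an iPAddress SAN in its
     * certificate? Returns 1 on match, 0 otherwise. */
    int peer_ip_matches_cert(SSL *ssl, int sockfd)
    {
        struct sockaddr_storage ss;
        socklen_t len = sizeof ss;
        char ip[INET6_ADDRSTRLEN];
        X509 *cert = SSL_get_peer_certificate(ssl);
        int ok = 0;

        if (cert != NULL
            && getpeername(sockfd, (struct sockaddr *)&ss, &len) == 0) {
            if (ss.ss_family == AF_INET)
                inet_ntop(AF_INET, &((struct sockaddr_in *)&ss)->sin_addr,
                          ip, sizeof ip);
            else
                inet_ntop(AF_INET6, &((struct sockaddr_in6 *)&ss)->sin6_addr,
                          ip, sizeof ip);
            ok = X509_check_ip_asc(cert, ip, 0) == 1;
        }
        X509_free(cert);
        return ok;
    }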

--
Michael Wojcik


RE: SP800-56A REV3

2021-02-08 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Nagarjun 
> J
> Sent: Monday, 8 February, 2021 10:33

> What is this SP800-56A REV3 new FIPS requirement,

Well, one thing it isn't is "new". Published April 2018, so nearly 3 years ago. 
See:
https://csrc.nist.gov/publications/detail/sp/800-56a/rev-3/final

> How it affects ECDH ,

I believe it mostly affects which curves are permitted, and which are required 
for security strength > 112 bits. See this description:
https://www.doncio.navy.mil/chips/ArticleDetails.aspx?ID=9667

> how it is different from  openssl-2.0.16 ECDH implication.

Presumably you're referring to the version of the OpenSSL FOM (FIPS Object 
Module), since OpenSSL itself never had a 2.0 release -- it went from 1.1.1 to 
3.0.

FOM 2.0.16 was completed before SP800-56A Rev.3 was published, so obviously it 
doesn't meet the Rev.3 specification. So the curves which are allowed in FIPS 
mode are different - and in particular, more restricted - in OpenSSL than they 
would be if it followed Rev.3.

> Which all functions that affects.

All of the ECDH ones, in FIPS mode, since it affects which curves are allowed. 
Outside of FIPS mode, it's irrelevant.

The OpenSSL FOM's validation has historical status anyway, so I don't see the 
lack of SP800-56A Rev.3 compliance as making much of a difference in terms of 
validation. I suppose it might create issues for interoperability, if a peer 
system using a different implementation in FIPS mode insisted on using a curve 
allowed by Rev.3 but not by earlier SP800-56A revisions. But I generally don't 
work with FIPS mode.

--
Michael Wojcik


RE: OpenSSL 1.1.1g Windows build slow rsa tests

2021-01-21 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Dr Paul
> Dale
> Sent: Wednesday, 20 January, 2021 19:28
>
> I'd suggest giving a build without the no-asm option a try.  The
> performance difference is usually quite significant.

I agree. It just doesn't explain what Dan's email claims.

> Statis vs dynamic builds wouldn't normally be associated with such a
> large difference.  If the difference were routinely this large, nobody
> would use dynamic linking.

In this case it's the static-linked version which is slower. But I'd be 
surprised if that's actually the cause.

--
Michael Wojcik


RE: OpenSSL 1.1.1g Windows build slow rsa tests

2021-01-20 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Dr Paul
> Dale
> Sent: Wednesday, 20 January, 2021 16:19
>
> Try building without the no-asm configuration option.

That was my first thought, but according to Dan's message, the firedaemon 
version is also built with no-asm.

The only relevant differences I see between the two builds are static (Dan's) 
versus dynamic (firedaemon's) linkage:

> On 21/1/21 6:18 am, Dan Heinz wrote:

> > compiler: cl /Fdossl_static.pdb  /Gs0 /GF /Gy /MT /Zi /W3 /wd4090
> > /nologo /O2 -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_NO_DEPRECATED

/MT uses the static-linked MSVC runtime.

> > Here is the downloaded binary from
> > https://kb.firedaemon.com/support/solutions/articles/4000121705 :
> > compiler: cl /Zi /Fdossl_static.pdb /Gs0 /GF /Gy /MD /W3 /wd4090 /nologo
> > /O2 -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_NO_DEPRECATED

/MD uses the dynamic-linked MSVC runtime.

> > Here are my configure parameters:
> > Configure VC-WIN64A no-shared  no-asm no-idea no-mdc2 no-rc5 no-ssl2
> > no-ssl3 no-zlib no-comp no-pinshared no-ui-console
> >   -DOPENSSL_NO_DEPRECATED --api=1.1.0
> >
> > And their configure parameters:
> > Configure VC-WIN64Ano-asm no-ssl3 no-zlib no-comp no-ui-console
> > --api=1.1.0 --prefix="%openssl-dst%" --openssldir=ssl
> > -DOPENSSL_NO_DEPRECATED

Assuming the lack of a space between "VC-WIN64A" and "no-asm" is a typo, 
they're also building with no-asm, and the only significant difference for this 
case that I can see is no-shared. (no-pinshared looks even less likely to 
affect this test, and does it even have any effect when building no-shared?)

Linking with /MT will affect code size and layout, which could adversely affect 
code caching. It's not impossible that would have a factor-of-four penalty on 
compute-bound code. I'm reluctant to conclude that's the problem, though, 
without more evidence.

Unfortunately tracking this down would likely require profiling.

That's assuming Dan is correct about the firedaemon build being configured with 
no-asm.

--
Michael Wojcik


RE: private key not available for client_cert_cb

2021-01-12 Thread Michael Wojcik
> From: openssl-users  On Behalf Of George
> Sent: Tuesday, 12 January, 2021 00:18

> I'm running this in Windows 10 and when I load the smart card middleware
> PKCS11 DLL, I see the exception:
> Exception thrown at 0x773046D2 in GENCom.exe: Microsoft C++ exception:
> unsigned long at memory location 0x07FCFA00.

OK. If I were debugging libp11, it would be useful to know what the exception 
actually was, but as it is all I can say is that it seems to be a libp11 
problem. As you noted further below:

> It looks like someone else using a smart card has also encountered similar
> problems in Windows but there is no real answer as to why they are occurring:
> https://www.codeproject.com/Questions/1254182/Smart-card-apis-throw-first-chance-
> exceptions-but

You'll probably have to just swallow the exceptions and retry until it works or 
your code decides to give up and return an error. Maybe one of the libp11 
maintainers or someone else using the library will dig into it at some point.

--
Michael Wojcik


RE: Sign without having the private key

2021-01-11 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Timo 
> Lange
> Sent: Monday, 11 January, 2021 10:56

> The root certificate, as well as the client private key is not available 
> inside
> the container, but stored in a HSM.
> For sure the private key may never leave the HSM

OK.

> and also the root certificate should not.

This doesn't make any sense. Certificates are not sensitive data, and it's 
inconvenient, if not impossible (depending on application software and HSM 
firmware) to split certificate chain validation across the host machine and the 
HSM.

Using the HSM as a certificate trust anchor *store* might make sense, depending 
on the use case. But the certificate would have to be extracted from the HSM by 
the application at runtime and made available to OpenSSL (or whatever's 
handling chain validation) so the peer's entity certificate can be verified.

> The application cannot directly interfere with the HSM through standardized 
> mechanisms
> as it is not accessible from inside the container.
> For doing so a proprietary interprocess-communication is required.

That certainly seems like unnecessary complexity, but I'll assume there's some 
valid justification for it.

> I assume I need to write a custom ENGINE, but failed with all my approaches.

You *could* write a custom engine, which you'd then have to rewrite as a custom 
provider when support for OpenSSL 1.1.1 ends and you need to move to OpenSSL 
3.0 or its successor.

However, you could also hide your IPC mechanism under a PKCS#11 implementation, 
and just use OpenSSL's PKCS#11 engine. PKCS#11 is the standard mechanism for 
talking to an HSM, and nothing says it can't involve IPC in the middle.

That is: OpenSSL -> pkcs11 engine -> your IPC client (written as a PKCS#11 
library) -> some communications channel -> your IPC server -> real PKCS#11 
library for your HSM. You could implement the IPC client and server using an 
open-source PKCS#11 shim such as pkcs11-helper. This area has been discussed 
recently on this list.
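
The OpenSSL side of that stack is fairly small. A minimal sketch, assuming the 
pkcs11 engine (libp11's engine_pkcs11) is discoverable -- installed in 
OpenSSL's engines directory or configured in openssl.cnf -- and with the module 
path and key URI below as placeholders for your own:

    #include <openssl/engine.h>
    #include <openssl/evp.h>

    /* Load the pkcs11 engine, point it at a PKCS#11 module (here, the
     * hypothetical IPC shim), and fetch a handle to a key on the HSM.
     * Error checks omitted for brevity. */
    EVP_PKEY *load_hsm_key(void)
    {
        ENGINE *e;

        ENGINE_load_builtin_engines();
        e = ENGINE_by_id("pkcs11");
        ENGINE_ctrl_cmd_string(e, "MODULE_PATH",
                               "/usr/lib/your-ipc-shim.so", 0);
        ENGINE_init(e);
        return ENGINE_load_private_key(e, "pkcs11:object=mykey", NULL, NULL);
    }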

However, now you have the problem of securing the IPC channel. This is an 
architecture I'd be reluctant to endorse, given the complexity and attack 
surface.

--
Michael Wojcik


RE: private key not available for client_cert_cb

2021-01-11 Thread Michael Wojcik
> From: openssl-users  On Behalf Of George
> Sent: Sunday, 10 January, 2021 21:01

> Right now I am using the "libp11" DLL (i.e. 
> libp11-libp11-0.4.11\src\pkcs11.dll)
> with my PKCS11 smart card middleware DLL. Should I be using the OpenSC pkcs11 
> DLL
> instead of my middleware DLL if I am using libp1?

Honestly, I have no idea. It's been years since I worked with PKCS#11, and then 
I was using a single piece of test hardware. I got it working with OpenSSL 
using the OpenSC modules, but that may have been specific to my case.

> Do you know if it is normal to see exceptions related to the PKCS11 function 
> calls
> in the libp11 code? For example, I can see  the following function generate an
> exception on C_GetSlotList(...) multiple times but it eventually is 
> successful.
> Is this normal behaviour?

What sort of "exception"? A Windows exception? UNIX signal? C++ exception?

My initial guess would be that this is a timing issue - maybe the device needs 
some time to become available, for example. But that's just a guess. Maybe 
someone with more experience with a variety of HSMs and PKCS#11 will weigh in.

--
Michael Wojcik


RE: private key not available for client_cert_cb

2021-01-08 Thread Michael Wojcik
> From: openssl-users  On Behalf Of George
> Sent: Friday, 8 January, 2021 14:35

> The comment indicates that the flag RSA_METHOD_FLAG_NO_CHECK should be set
> for smart cards[...]

> However, it is not actually set when I use a debugger to inspect the flag.
> Does it need to be set? If so, how is this done?

If memory serves, the PKCS#11 implementation invoked by the pkcs11 engine is 
supposed to set it.

See for example this patch to OpenSC's pkcs11-helper library:

https://github.com/OpenSC/pkcs11-helper/commit/5198bb1e557dfd4109bea41c086825bf6ebdd9f3

(That patch actually is to set a different flag, but it shows the code in 
question.)

I know, that's probably not terribly helpful.

If you do a web search for something like

pkcs11 "RSA_METHOD_FLAG_NO_CHECK"

you'll probably find a number of hits where other people ran into similar 
problems.

Isn't PKCS#11 grand? If you're bored with all the interoperability problems of 
X.509, PKIX, and TLS, we have good news!

--
Michael Wojcik


RE: Random and rare Seg faults at openssl library level

2021-01-07 Thread Michael Wojcik
> From: Jan Just Keijser 
> Sent: Thursday, 7 January, 2021 01:23
>
> On 06/01/21 21:57, Michael Wojcik wrote:
> >
> >
> > But you're asking the wrong question. The correct question is: Why are you
> > using an outdated version of OpenSSL?
>
> possibly because:
>
> $ cat /etc/redhat-release && openssl version
> CentOS Linux release 7.9.2009 (Core)
> OpenSSL 1.0.2k-fips  26 Jan 2017

Ugh. Well, OP should have made that clear in the original message.

And this is one of the problems with using an OpenSSL supplied by the OS vendor.

--
Michael Wojcik


RE: Random and rare Seg faults at openssl library level

2021-01-06 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Gimhani 
> Uthpala
> Sent: Wednesday, 6 January, 2021 10:10

> I'm running an application which uses openssl for secure communication between
> processes. I am getting seg-faults at openssl level. This only occurred very
> randomly and the following are stacks that seg faults  at openssl level in the
> given 2 cases.

> We are using openssl 1.0.2k.

Sometimes you see a question that nearly answers itself.

You're using a release that's approaching four years old, and which is 
unsupported, unless you have a premium support contract from openssl.org or 
similar through another vendor. If you do, that's whom you should ask.

In any case, why are you using 1.0.2k? At the very least you should be using 
the final 1.0.2 release -- and then only if you absolutely can't move to 1.1.1 
(generally because you need FIPS validation, but you don't mention FIPS). And 
then you need a premium support contract, if this is a commercial product. 
Particularly these days it's very hard to forgive a commercial-software vendor 
using an outdated, unsupported third-party component.

The most recent version of 1.0.2 that I happen to have lying around is 1.0.2n, 
and there's nothing in the changelog between 1.0.2k and 1.0.2n which looks 
likely to cause this particular problem (though CVE-2017-3735 is a slight 
contender). But that just means the cause isn't anything obvious between k and 
n.

> Went through the security vulnerabilities list for this version but couldn't
> find a clue. Running valgrind too didn't give an exact clue related to the 
> issue.
> Can you please guide me how can I find the exact root cause for the seg fault?

The same way you'd track down an intermittent cause of Undefined Behavior in 
any other program: some combination of dynamic monitoring, symbolic execution, 
static code analysis, source code review, testing variants, tracing, fuzzing, 
post-mortem analysis, and so on. This isn't specific to OpenSSL.

But you're asking the wrong question. The correct question is: Why are you 
using an outdated version of OpenSSL?

--
Michael Wojcik


RE: openssl fips patch for RSA Key Gen (186-4)

2021-01-05 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Matt
> Caswell
> Sent: Tuesday, 5 January, 2021 09:35
>
> On 05/01/2021 11:41, y vasavi wrote:
> >
> > We currently FOM 2.0 module for FIPS certification.
> > It doesn't have support for RSA Key generation(186-4)
> >
> > Are there any patches available ?
>
> Definitely there are no official ones (I'm also not aware of any
> unofficial ones).

And such a patched module would no longer be FIPS 140 validated.

I know of at least one commercial, proprietary fork of the OpenSSL FOM 2.0 with 
186-4 support. It has its own validations, obtained by the vendor. It's part of 
a commercial software package and not available for use by other software.

If memory serves, SUSE also implemented 186-4 when they ported the FOM 2.0 to 
OpenSSL 1.1.1. SUSE open-sourced their changes - you can find the diffs on one 
of the SUSE sites - but again, they had to get a new validation. It applies 
only to their module when used on SLES. (Red Hat similarly did their own ports 
and got their own validations for RHEL. I don't know whether they published 
their changes.)

So it's possible, but as usual with FIPS 140, you have the time and expense of 
validation. That's even more complicated now than it has been in past years, 
thanks in part to the transition from FIPS 140-2 to 140-3. I've heard from 
people with contacts in the CMVP that "the queue is full" for the year, and 
anyone not already in line will be waiting even longer than usual for a 
validation.

--
Michael Wojcik



RE: Directly trusted self-issued end-entity certs - Re: How to rotate cert when only first matching cert been verified

2021-01-01 Thread Michael Wojcik
> From: openssl-users  On Behalf Of ???
> Sent: Friday, 1 January, 2021 00:08

> How to pick up cert from trust store(or cert container as you say)
> is decided by different implementation themselves, do I understand correctly?

Yes, in some cases under partial or complete control by the application. Some 
APIs, including OpenSSL, give the application a lot of control over the 
building of the chain; others don't.

And almost everyone does it incorrectly. See for example:

https://duo.com/labs/research/chain-of-fools

https://nakedsecurity.sophos.com/2020/06/02/the-mystery-of-the-expiring-sectigo-web-certificate/

https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html

https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/CPT/CPT_Tool_Test-Report_Findings.pdf

(There was another article published not that long ago that surveyed a number 
of TLS implementations and how they built chains, pointing out how they failed 
to follow various requirements of PKIX, and what kinds of errors and failures 
they were prone to. It's similar to the CPT paper linked above, but included 
comparisons of different OpenSSL versions. I can't seem to find it at the 
moment.)

The path-validation algorithm in RFC 5280 and the path-building algorithm from 
RFC 4158 are agonizingly complex. Note, for example, that the description of 
the path-building algorithm in 4158 is 20 pages, without including the 
preliminary material or the longer section on optimizations.

TLS simplifies the general problem of X.509 chain construction by limiting what 
entities are supposed to send (X.509 lets you send any random collection of 
certificates, or for that matter any other data, in addition to the entity 
certificate; TLS says "send just a single chain from entity to root or to a 
certificate signed by the root"). But it's still awful, particularly when 
things like expiration and cross-signing come into play, and no version of 
OpenSSL (or any other popular library, as far as I remember) gets it entirely 
right for all cases.

In practice, if you use a supported OpenSSL release at the latest fix level 
(that means 1.1.1i at the moment), and you follow good advice about how to use 
it, and your use case isn't too complex, you probably achieve reasonable 
security under a typical application threat model. You'll want to make it 
relatively straightforward to update your trust-anchor collection. If you have 
to support an environment where things like cross-signing and multiple 
validation paths become important, that makes things harder. If you have 
stringent security requirements, that makes things harder. On the other hand, 
many applications fail to do even minimal certificate validation, so you can 
take comfort in knowing you're better than them, anyway.

--
Michael Wojcik



RE: SHA256 openssl-1.1.1i Checksum Error

2020-12-28 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Dr. 
> Matthias St. Pierre
> Sent: Monday, 28 December, 2020 11:50

> I have no experience with zsh, but it seems that quoting is handled
> differently by zsh?

Is the problem that quoting is handled differently, or that he actually had 
Unicode left-double-quote and right-double-quote characters there rather than 
proper ASCII double-quote characters? That's how it appears in the message as I 
received it.

> At least it looks like the double quotes ended up in the GET line

Agreed.

> and you simply received an HTTP 404 Not Found (which is the reason why your
> digest isn’t correct.)

Agreed.

I'll add: Don't check the checksum. Check the signature:

1. Install an OpenPGP implementation such as gpg, if you don't already have 
one. (One may come with macOS; I have no idea.)

2. Download the .asc file corresponding to the tarball you downloaded.

3. Check the signature. With gpg2, for example:

   $ gpg2 --verify openssl-1.1.1i.tar.gz.asc openssl-1.1.1i.tar.gz
   gpg: Signature made 12/08/20 06:21:06 MST using RSA key ID 0E604491

Now, you presumably won't have the signing public key (for 1.1.1i that's a key 
owned by Matt Caswell) in your keyring. You can download it from a public 
keyserver and mark it as trusted, so you'll also get verification that the 
signature was generated with the correct key:

   gpg: Good signature from "Matt Caswell " [full]
   gpg: aka "Matt Caswell " [full]

While checking the signature runs into all the well-documented issues with the 
PGP Web of Trust, it's still stronger (in the sense that it prunes more of the 
attack tree, under sensible threat models) than just checking the hash. And 
once you're set up to do it, it's a simpler operation for future downloads.

--
Michael Wojcik


RE: openssl-users Digest, Vol 73, Issue 29

2020-12-28 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Jochen
> Bern
> Sent: Friday, 25 December, 2020 03:37

I believe David von Oheimb has already provided a solution for the original 
problem in this thread (setting subjectKeyIdentifier and authorityKeyIdentifier 
lets OpenSSL pick the right certificate from the trust-anchor collection). I 
wanted to comment on this tangential point:

> For server
> certs, where you need the CN to match the FQDN, you might want to add an
> OU with a timestamp so as to have the *DN* as a whole differ ...

If your entity certificate is X.509v3 and the client complies with RFC 5280, 
the CN of the Subject DN shouldn't matter, as long as the server name *as 
expected by the peer* appears in a subjectAlternativeName extension.

That is, if the client wants to connect to "www.foo.com", the server's 
certificate should have a DNS-type sAN with the value "www.foo.com". If the 
client wants to connect to the unqualified hostname "foo", the server's 
certificate should have a DNS-type sAN with the value "foo". If the client 
wants to connect to "192.168.2.1", the server's certificate should have an 
IPADDR-type sAN with that value. And so on. If any sANs are present, the CN (if 
any) of the Subject DN should be ignored.

Here "wants to connect" is defined by the application and/or its TLS 
implementation. The implementation may provide a way for a client to specify 
the subject-name it wants to find in the entity certificate, or it may simply 
take whatever hostname or IP address string it's asked to connect to, and use 
that.

Also remember that OpenSSL prior to 1.0.2 didn't have support for checking 
hostnames at all. With 1.0.2 you have to make some non-obvious calls to set the 
expected name, and with 1.1.0 and later you need to use SSL_set1_host (or the 
1.0.2 method); there's a page on the OpenSSL wiki for this. I don't remember if 
this has changed again in 3.0.
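
For 1.1.0 and later, the minimal client-side incantation looks something like 
this (a sketch; the hostname is obviously a placeholder):

    #include <openssl/ssl.h>

    /* Require the server certificate to match "www.foo.com". With
     * SSL_VERIFY_PEER set, the handshake fails on a mismatch. */
    SSL *make_checked_ssl(SSL_CTX *ctx)
    {
        SSL *ssl = SSL_new(ctx);

        SSL_set1_host(ssl, "www.foo.com");
        SSL_set_verify(ssl, SSL_VERIFY_PEER, NULL);
        return ssl;
    }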

--
Michael Wojcik


RE: How to rotate cert when only first matching cert been verified

2020-12-23 Thread Michael Wojcik
> From: 定平袁 
> Sent: Tuesday, 22 December, 2020 20:08
> To: Michael Wojcik 

Please do not send messages regarding OpenSSL to me directly. Send them to the 
openssl-users list. That is where the discussion belongs.

> > Why are you appending it to the file containing the existing certificate?

> I am rotating certificate, before the server side cert been replaced, the 
> client
> side cert need to be valid, so when rotating, need both old and new cert 
> exist.

I'm afraid it still isn't clear to me what you're doing. Both the server's 
entity certificate and the client's entity certificate are in the same file? 
What does this file contain before you append the new certificate?

> > It sounds like you're updating the server's entity certificate.

> I guess it's entity certificate (still trying to understand different cert
> concept...)

Does it identify the server, in the Subject DN and/or one or more Subject 
Alternative Name extensions?

> Below is the error message:

I'm afraid that message doesn't appear to contain any useful information.

> All the 3 clients used the same ca.crt file, which has an old cert in
> first, then a new cert behind. Only Python (used OpenSSL) failed.

So *this* sounds like what you're changing in this particular file is the set 
of trust anchors, not the entity certificates.

Where did your "CA" certificates come from? A commercial CA or some personal or 
organizational CA? From your description it sounds like the problem may be that 
the CA certificates were not generated correctly. Without the certificates to 
examine, we can't say.

Can you post the old and new certificates in PEM form in your next message?

Please note that due to the holidays I will not be reading email for several 
days, and it's likely that some other regular list members will be similarly 
unavailable.

--
Michael Wojcik


RE: How to rotate cert when only first matching cert been verified

2020-12-21 Thread Michael Wojcik
> From: openssl-users  On Behalf Of ???
> Sent: Saturday, 19 December, 2020 17:59

> 1. Generate a new cert, and append it to the cert file

Why are you appending it to the file containing the existing certificate?

> (at this point, there are 2 certs in the file, first is old cert, second is
> new, they have the same Subject), restart client side process, (no problem
> here, because first cert matching server side cert, and it verifies
> successfully)

> 2. Replace server side with new cert.

It sounds like you're updating the server's entity certificate.

> As soon as I issue step #2, the client side process starts to show error
> “certificate verify failed”.

There are many possible reasons for verification to fail.

> https://www.openssl.org/docs/man1.0.2/man3/SSL_CTX_load_verify_locations.html,
> it says the exact behavior like my test:

Similar symptoms, perhaps. But this page discusses "CA certificates" - that is,
intermediate and root certificates that have been configured to be trust anchors
or contributors to the trust chain. It has nothing to do with entity 
certificates,
which is what you're changing here.

You haven't given us enough information to guess why the new certificate is
failing client verification. You need to get detailed failure information from
the client program, or use a different client that gives you detailed 
information,
or use a utility such as "openssl verify" to test the certificate chain locally.

--
Michael Wojcik


RE: private key not available for client_cert_cb

2020-12-14 Thread Michael Wojcik
> From: openssl-users  On Behalf Of George
> Sent: Monday, 14 December, 2020 13:01

> Once I get  the resulting EVP_PKEY using ENGINE_load_private_key(...),
> how do I assign it to pkey in the callback function?

I don't know offhand. As I said in my other message, that's not an area I had 
to get into when I was working with PKCS#11 some years ago.

My advice is to look at existing examples, such as the code Jan pointed you to.

--
Michael Wojcik


RE: private key not available for client_cert_cb

2020-12-14 Thread Michael Wojcik
> From: openssl-users  On Behalf Of George
> Sent: Monday, 14 December, 2020 09:36

>   I see what you mean. So once I have everything setup, i use the following
> to get the private key:
> EVP_PKEY *pkey = ENGINE_load_private_key(...);
>
> Will pkey actually contain the private key from the smart card?

It had better not.

> I thought it was not possible to get a private key from a smart card?

That's the point of the smartcard (or other HSM), yes.

> Once I have pkey, do I simply use it within the client_cert_cb callback 
> function?

You can't get the private key from the smartcard. Instead, you have to let the 
engine do the private-key operations (signing, in this case). I don't know what 
ENGINE_load_private_key actually does - in my PKCS#11 work I didn't have to get 
into this - but I suspect it just puts a key identifier into pkey.

Then what ought to happen is that you pass that pkey to OpenSSL where you need 
an EVP_PKEY, and OpenSSL will call the engine's appropriate method for whatever 
it needs to do, and the engine will tell the smartcard "do this thing using the 
key with this identifier".
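
Very roughly, the shape of the thing is something like this -- a sketch, not 
working code; the engine setup and certificate loading (g_eng, g_cert) happen 
elsewhere, and the key URI is a placeholder:

    #include <openssl/engine.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    static ENGINE *g_eng;   /* initialized pkcs11 engine */
    static X509 *g_cert;    /* client certificate, loaded e.g. from PEM */

    /* OpenSSL calls this when the server requests a client certificate.
     * On success we hand over one reference each to the cert and key. */
    static int client_cert_cb(SSL *ssl, X509 **x509, EVP_PKEY **pkey)
    {
        EVP_PKEY *k = ENGINE_load_private_key(g_eng, "pkcs11:object=mykey",
                                              NULL, NULL);

        (void)ssl;
        if (k == NULL || g_cert == NULL) {
            EVP_PKEY_free(k);           /* safe on NULL */
            return 0;                   /* no certificate to offer */
        }
        X509_up_ref(g_cert);
        *x509 = g_cert;
        *pkey = k;
        return 1;
    }

    /* ... and during setup:
     *     SSL_CTX_set_client_cert_cb(ctx, client_cert_cb);
     */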

I suggest you refer to an example such as the PPP code that Jan cited to see how 
it does this sort of thing.

Or you can take the approach that Paul suggests in his reply of writing your 
own engine specifically for your hardware, if you don't need generic PKCS#11 
support. Basically, PKCS#11 gives you support for more devices, and in 
principle should do some of the work for you; but as Paul suggests, the PKCS#11 
API and its dependence on external drivers and libraries means it's not easy to 
work with. In some cases where you only need to support one type of device (or 
a family of devices that all use the same driver / library) it might well be 
easier to just write a simple engine that only supports the features you need. 
You can use the source for the existing engines in OpenSSL to get an idea of 
what that looks like.

A few years back I forked the OpenSSL CAPI engine to make some fixes and 
enhancements, and that was pretty straightforward.

So if you have a well-documented API for your particular smartcard, with handy 
functions like "do this to get an RSA signature of a blob of data with this key 
ID and these parameters", you may want to try Paul's route. Really depends on 
your requirements and what kind of support you already have for your device.

And all of this changes in 3.0 with the new "provider" architecture, so you'll 
get to take another crack at it soon.

--
Michael Wojcik


RE: private key not available for client_cert_cb

2020-12-14 Thread Michael Wojcik
> From: openssl-users  On Behalf Of George
> Sent: Monday, 14 December, 2020 08:15

>   Thanks for your response. It looks like I don't already have the PPP and 
> PPPD.

You don't need PPP to use a smartcard or other PKCS#11 device. Jan just 
mentioned the source as an exemplar of the interactions your code will need to 
have with OpenSSL.

> Are there any other ways to get the Smart Card to work without needing to
> install additional software?

Probably not.

OpenSSL's PKCS#11 Engine drives devices through the PKCS#11 API. That API needs a way to 
talk to the particular PKCS#11-compatible hardware you're using. That means it 
needs a driver, and generally some configuration as well.

It's been a few years since I last played around with this - I got OpenSSL 
working with a NitroKey as part of a code-signing spike - but you'll need to 
investigate PKCS#11 support for your particular device. There are Open Source 
projects such as OpenSC which may give you part or all of what you need to get 
OpenSSL's PKCS#11 Engine working with your hardware.

When I did it, it wasn't trivial. I spent a couple of days on investigation and 
experimenting before I got anything working, and a couple more days making sure 
I understood the entire process and documenting procedures that worked 
consistently. (With some applications I had persistent problems such as Windows 
insisting on prompting for the device PIN instead of letting me supply it 
programmatically, but I think that was only when using Microsoft APIs rather 
than going through OpenSSL.)

If the client certificate uses a public key that corresponds to a private key 
on the smartcard, though, that's what you'll have to do. You can't use a 
certificate as a proof of identity without the corresponding private key. (Some 
HSMs and other crypto devices have support for exporting private keys, often as 
multiple shares, for backup and cloning purposes. Using that to get the private 
key for direct use defeats the whole purpose of an HSM, of course, so that 
shouldn't be used to bypass the card.)

--
Michael Wojcik


RE: Client-Certificate blocking without conrolling the issuing CA

2020-12-04 Thread Michael Wojcik
> From: Vincent Truchsess - rockenstein AG 
> Sent: Friday, 4 December, 2020 08:59
>
> That would be the the ideal solution. The problem is that the customer's
> security-policy demands dedicated hardware performing IDS/IPS functionality
> at the point of TLS-termination. The devices at hand do not provide the
> functionality to call a user-defined external service for certificate
> validation apart from OCSP.
>
> The future workaround will be a mockup OCSP-responder but that solution will
> need some time for implementation. our current focus lies on a rather quick
> than perfect solution that buys some time to ship something more solid.

Ah, I see. Thanks for the clarification.

I don't offhand see a quick workaround for your situation. I'm not sure what 
would happen if you cross-signed all the client certificates with a CA under 
your control, and then generated a CRL for the ones you want to exclude. Or 
actually you could just cross-sign only the ones you want to allow, and made 
your CA the only trust root for the TLS termination systems; that would work. 
But I'm guessing modifying every client certificate is not a feasible solution 
for you either.

If it is, cross-signing with a CA under your control and trusting only that CA 
is probably the approach I'd go for. That's a legitimate approach under PKIX. 
It could even be mostly automated, except the end users would have to install 
updated user certificates, which is probably a deal-breaker.

--
Michael Wojcik


RE: Client-Certificate blocking without conrolling the issuing CA

2020-12-04 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Vincent
> Truchsess - rockenstein AG
> Sent: Friday, 4 December, 2020 04:27
>
> The organization legally responsible for the application maintains a
> blocklist of certificate serials they consider to be invalidated. Also, this
> organization does not bother to get those certificates revoked by their CA so
> using OCSP or CRLs against the CAs services has no effect on denying access
> to invalid users.
>
> The hardware performing the certificate-validation allows for locally stored
> CRLs. Our intention was to generate those ourselves using a selfsigned CA. As
> far as I went, it seems that openssl only allows for revocations of
> certificates signed by the local CA.

I assume you mean "certificates signed by the issuing CA". The CRL has to be 
generated by the CA that issued the certificates.

It seems to me that the simplest solution would be to have the application add 
a certificate validation callback that checks the serial number against this 
not-really-a-CRL list of forbidden client certificates. That's the sort of 
thing certificate validation callbacks are for: implementing additional 
restrictions (or removing existing ones) on which certificates will be accepted.
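
A sketch of what that might look like -- my names; serial_is_blocked() stands 
in for a lookup against the organization's list:

    #include <openssl/ssl.h>
    #include <openssl/x509v3.h>

    extern int serial_is_blocked(const ASN1_INTEGER *serial); /* hypothetical */

    /* Runs after OpenSSL's own checks at each depth; rejects entity
     * certificates whose serial number is on the local blocklist. */
    static int verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
    {
        if (!preverify_ok)
            return 0;                   /* keep normal failures fatal */
        if (X509_STORE_CTX_get_error_depth(ctx) == 0) {
            X509 *cert = X509_STORE_CTX_get_current_cert(ctx);

            if (serial_is_blocked(X509_get_serialNumber(cert))) {
                X509_STORE_CTX_set_error(ctx, X509_V_ERR_CERT_REJECTED);
                return 0;
            }
        }
        return 1;
    }

    /* ... at setup:
     *     SSL_CTX_set_verify(ctx,
     *         SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb);
     */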

--
Michael Wojcik


RE: EC curve preferences

2020-11-20 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Skip
> Carter
> Sent: Friday, 20 November, 2020 09:44
>
> What are the preferred ECDH curves for a given keysize ?  Which curves
> are considered obsolete/deprecated/untrustworthy ?

For TLSv1.3, this is easy. RFC 8446 B.3.1.4 only allows the following: 
secp256r1(0x0017), secp384r1(0x0018), secp521r1(0x0019), x25519(0x001D), 
x448(0x001E). Those are your choices. If you want interoperability, enable them 
all; if you want maximum security, only use X25519 and X448. See 
safecurves.cr.yp.to for the arguments in favor of the latter position.

Frankly, unless you're dealing with something of very high value or that needs 
to resist breaking for a long time, I don't see any real-world risk in using 
the SEC 2 curves. You might want to disallow just secp256r1 if you're concerned 
about that key size becoming tractable under new attacks or quantum computing 
within your threat timeframe. Ultimately, this is a question for your threat 
model.
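
If you want to pin that down in code, OpenSSL 1.1.1 makes it a one-liner. A 
sketch, with the group list just one reasonable choice:

    #include <openssl/ssl.h>

    /* Offer only these groups for key exchange, in preference order.
     * Returns 1 on success, 0 on failure. */
    int restrict_groups(SSL_CTX *ctx)
    {
        return SSL_CTX_set1_groups_list(ctx, "X25519:X448:P-384:P-256");
    }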


For TLSv1.2, well...

- Some people recommend avoiding non-prime curves (i.e. over binary fields, 
such as the sect* ones) for intellectual-property reasons. I'm not going to try 
to get into that, because IANAL and even if I were, I wouldn't touch that 
without a hefty retainer.

- Current consensus, more or less, seems to be to use named curves and not 
custom ones. The arguments for that seem pretty persuasive to me. So don't use 
custom curves.

- Beyond that? Well, here's one Stack Exchange response from Thomas Pornin (who 
knows a hell of a lot more about this stuff than I do) where he suggests using 
just prime256v1 (which is the same as secp256r1 I believe?) and secp384r1:

https://security.stackexchange.com/questions/78621/which-elliptic-curve-should-i-use

Those are the curves in Suite B, before the NSA decided to emit vague warnings 
about ECC. They subsequently decided P384 aka secp384r1 is OK until 
post-quantum primitives are standardized. So if your application prefers 
secp384r1 for TLSv1.2, then you can decide whether to also allow prime256v1 for 
interoperability. Again, that's a question for your threat model.

All that said, some people will have different, and quite possibly 
better-informed, opinions on this.

--
Michael Wojcik


RE: Server application hangs on SS_read, even when client disconnects

2020-11-17 Thread Michael Wojcik
> From: Kyle Hamilton 
> Sent: Tuesday, 17 November, 2020 02:37
> On Fri, Nov 13, 2020 at 11:51 AM Michael Wojcik
>  wrote:
> >
> > > From: Brice André 
> > > Sent: Friday, 13 November, 2020 09:13
> >
> > > "Does the server parent process close its copy of the conversation 
> > > socket?"
> > > I checked in my code, but it seems that no. Is it needed?
> >
> > You'll want to do it, for a few reasons: ...
>
> There's another reason why you'll want to close your socket with
> SSL_close(): SSL (and TLS) view a prematurely-closed stream as an
> exceptional condition to be reported to the application. This is to
> prevent truncation attacks against the data communication layer.
> While your application may not need that level of protection, it helps
> to keep the state of your application in lockstep with the state of
> the TLS protocol.  If your application doesn't expect to send any more
> data, SSL_close() sends another record across the TCP connection to
> tell the remote side that it should not keep the descriptor open.

This is true, but not what we're talking about here. When the
application is done with the conversation, it should use SSL_shutdown
(there is no SSL_close in the OpenSSL API) to terminate the conversation.

Here, though, we're talking about the server parent process closing
its descriptor for the socket after forking the child process. At that
point the application is not done with the conversation, and calling
SSL_shutdown in the server would be a mistake.

Now, if the server is unable to start a child process (e.g. fork fails
because the user's process limit has been reached), or if for whatever
other reason it decides to terminate the conversation without further
processing, SSL_shutdown would be appropriate.

--
Michael Wojcik


RE: Server application hangs on SS_read, even when client disconnects

2020-11-13 Thread Michael Wojcik
> From: Brice André 
> Sent: Friday, 13 November, 2020 09:13

> "Does the server parent process close its copy of the conversation socket?"
> I checked in my code, but it seems that no. Is it needed?

You'll want to do it, for a few reasons:

- You'll be leaking descriptors in the server, and eventually it will hit its 
limit.
- If the child process dies without cleanly closing its end of the conversation,
the parent will still have an open descriptor for the socket, so the network 
stack
won't terminate the TCP connection.
- A related problem: If the child just closes its socket without calling 
shutdown,
no FIN will be sent to the client system (because the parent still has its copy 
of
the socket open). The client system will have the connection in one of the 
termination
states (FIN_WAIT, maybe? I don't have my references handy) until it times out.
- A bug in the parent process might cause it to operate on the connected socket,
causing unexpected traffic on the connection.
- All such sockets will be inherited by future child processes, and one of them 
might
erroneously perform some operation on one of them. Obviously there could also 
be a
security issue with this, depending on what your application does.

Basically, when a descriptor is "handed off" to a child process by forking, you
generally want to close it in the parent, unless it's used for parent-child
communication. (There are some cases where the parent wants to keep it open for
some reason, but they're rare.)

On a similar note, if you exec a different program in the child process (I 
wasn't
sure from your description), it's a good idea for the parent to set the 
FD_CLOEXEC
option (with fcntl) on its listening socket and any other descriptors that 
shouldn't
be passed along to child processes. You could close these manually in the child
process between the fork and exec, but FD_CLOEXEC is often easier to maintain.

For some applications, you might just dup2 the socket over descriptor 0 or
descriptor 3, depending on whether the child needs access to stdio, and then 
close
everything higher.

Closing descriptors not needed by the child process is a good idea even if you
don't exec, since it can prevent various problems and vulnerabilities that 
result
from certain classes of bugs. It's a defensive measure.
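
In outline, the pattern looks like this -- a sketch, where handle_client is a 
stand-in for your per-connection logic and error handling is elided:

    #include <sys/socket.h>
    #include <unistd.h>

    extern void handle_client(int conn);    /* your TLS conversation */

    void serve_one(int listen_fd)
    {
        int conn = accept(listen_fd, NULL, NULL);

        if (conn < 0)
            return;
        if (fork() == 0) {          /* child owns the conversation */
            close(listen_fd);       /* child doesn't need the listener */
            handle_client(conn);
            _exit(0);
        }
        close(conn);                /* parent drops its copy, so the child's
                                     * close/shutdown actually sends the FIN */
    }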

The best source for this sort of recommendation, in my opinion, remains W. 
Richard
Stevens' /Advanced Programming in the UNIX Environment/. The book is old, and 
Linux
isn't UNIX, but I don't know of any better explanation of how and why to do 
things
in a UNIX-like OS.

And my favorite source of TCP/IP information is Stevens' /TCP/IP Illustrated/.

> May it explain my problem?

In this case, I don't offhand see how it does, but I may be overlooking 
something.

> I suppose that, if for some reason, the communication with the client is lost
> (crash of client, loss of network, etc.) and keepalive is not enabled, this 
> may
> fully explain my problem ?

It would give you those symptoms, yes.

> If yes, do you have an idea of why keepalive is not enabled?

The Host Requirements RFC mandates that it be disabled by default. I think the
primary reasoning for that was to avoid re-establishing virtual circuits (e.g.
dial-up connections) for long-running connections that had long idle periods.

Linux may well have a kernel tunable or similar to enable TCP keepalive by
default, but it seems to be switched off on your system. You'd have to consult
the documentation for your distribution, I think.

By default (again per the Host Requirements RFC), it takes quite a long time for
TCP keepalive to detect a broken connection. It doesn't start probing until the
connection has been idle for 2 hours, and then you have to wait for the TCP
retransmit timer times the retransmit count to be exhausted - typically over 10
minutes. Again, some OSes let you change these defaults, and some let you change
them on an individual connection.
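
For reference, enabling it per-connection looks like this on Linux (a sketch; 
the TCP_KEEP* options and these particular timings are Linux-specific choices 
of mine):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Probe after 60s idle, every 10s, giving up after 6 failed probes,
     * so a dead peer is detected in roughly two minutes. */
    void enable_keepalive(int fd)
    {
        int on = 1, idle = 60, intvl = 10, cnt = 6;

        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt);
    }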

--
Michael Wojcik



RE: Server application hangs on SS_read, even when client disconnects

2020-11-13 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Brice 
> André
> Sent: Friday, 13 November, 2020 05:06

> ... it seems that in some rare execution cases, the server performs a 
> SSL_read,
> the client disconnects in the meantime, and the server never detects the
> disconnection and remains stuck in the SSL_read operation.

...

> #0  0x7f836575d210 in __read_nocancel () from 
> /lib/x86_64-linux-gnu/libpthread.so.0
> #1  0x7f8365c8ccec in ?? () from 
> /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
> #2  0x7f8365c8772b in BIO_read () from 
> /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1

So OpenSSL is in a blocking read of the socket descriptor.

> tcp0  0 5.196.111.132:5413  85.27.92.8:25856
> ESTABLISHED 19218/./MabeeServer
> tcp0  0 5.196.111.132:5412  85.27.92.8:26305
> ESTABLISHED 19218/./MabeeServer

> From this log, I can see that I have two established connections with remote
> client machine on IP 109.133.193.70. Note that it's normal to have two 
> connexions
> because my client-server protocol relies on two distinct TCP connexions.

So the client has not, in fact, disconnected.

When a system closes one end of a TCP connection, the stack will send a TCP 
packet
with either the FIN or the RST flag set. (Which one you get depends on whether 
the
stack on the closing side was holding data for the conversation which the 
application
hadn't read.)

The sockets are still in ESTABLISHED state; therefore, no FIN or RST has been
received by the local stack.

There are various possibilities:

- The client system has not in fact closed its end of the conversation. 
Sometimes
this happens for reasons that aren't immediately apparent; for example, if the
client forked and allowed the descriptor for the conversation socket to be 
inherited
by the child, and the child still has it open.

- The client system shut down suddenly (crashed) and so couldn't send the 
FIN/RST.

- There was a failure in network connectivity between the two systems, and 
consequently
the FIN/RST couldn't be received by the local system.

- The connection is in a state where the peer can't send the FIN/RST, for 
example
because the local side's receive window is zero. That shouldn't be the case, 
since
OpenSSL is (apparently) blocked in a receive on the connection. but as I don't 
have
the complete picture I can't rule it out.

> This let me think that the connexion on which the SSL_read is listening is
> definitively dead (no more TCP keepalive)

"definitely dead" doesn't have any meaning in TCP. That's not one of the TCP 
states,
or part of the other TCP or IP metadata associated with the local port (which is
what matters).

Do you have keepalives enabled?

> and that, for a reason I do not understand, the SSL_read keeps blocked into 
> it.

The reason is simple: The connection is still established, but there's no data 
to
receive. The question isn't why SSL_read is blocking; it's why you think the
connection is gone, but the stack thinks otherwise.

> Note that the normal behavior of my application is : client connects, server
> daemon forks a new instance,

Does the server parent process close its copy of the conversation socket?

--
Michael Wojcik


RE: openssl ocsp(responder) cmd is giving error for ipv6

2020-11-04 Thread Michael Wojcik
> From: perumal v 
> Sent: Wednesday, 4 November, 2020 02:13

> change is highlighted below and basically keeping [] brackets for ipv6 :
>
> OCSP_parse_url:
>     p = host;
>     if (host[0] == '[') {
>         /* ipv6 literal */
>         // host++;
>         p = strchr(host, ']');
>         if (!p)
>             goto parse_err;
>         // *p = '\0';
>         p++;
>     }
>
> Is this the correct way to do so?

Based on my very cursory investigation, that looks right to me, but I don't 
know where else OCSP_parse_url might be used, and whether anything else depends 
on the existing semantics of removing the brackets. Someone should take a 
closer look.

You could open an issue in GitHub and do a pull request for your change, to 
make your suggestion official.

--
Michael Wojcik


RE: openssl ocsp(responder) cmd is giving error for ipv6

2020-11-03 Thread Michael Wojcik
> From: openssl-users  On Behalf Of perumal v
> Sent: Monday, 2 November, 2020 07:57

> I tried openssl ocsp for ipv6 and got the error message for the OCSP.

> openssl ocsp -url http://[2001:DB8:64:FF9B:0:0:A0A:285E]:8090/ocsp-100/ 
> -issuer ...
> Error creating connect BIO
> 140416130504448:error:20088081:BIO routines:BIO_parse_hostserv:ambiguous host 
> or
> service:crypto/bio/b_addr.c:547:

A quick look at the code suggests this is a bug in OpenSSL. OCSP_parse_url 
removes the square brackets from a literal IPv6 address in the URL, but 
BIO_parse_hostserv requires they be present. But I didn't look closely, so I'm 
not entirely sure that's the issue.

> IPv6 address without the "[]" bracket.

The square brackets are required by the URL specification. There's no point 
testing without them.

--
Michael Wojcik


RE: OpenSSL version 1.1.1h published

2020-09-22 Thread Michael Wojcik
changelog.html hasn't been updated since 1.1.1e.

https://www.openssl.org/news/changelog.html#openssl-111 shows:

-
OpenSSL 1.1.1
Changes between 1.1.1e and 1.1.1f [xx XXX ]
Changes between 1.1.1d and 1.1.1e [17 Mar 2020]
-

I noticed this because the Release Notes page 
(https://www.openssl.org/news/openssl-1.1.1-notes.html) has a link to 
changelog.html, and I popped over there to see what minor changes might be in 
h. (I haven't downloaded it yet because it's usually someone else on the team 
who does that these days.)

--
Michael Wojcik


RE: Tunelling using OpenSSL.

2020-09-04 Thread Michael Wojcik
> From: openssl-users  On Behalf Of Jason 
> Long via openssl-users
> Sent: Friday, 4 September, 2020 16:55

[Your message had a Reply-To header directing replies to your address rather 
than the list. If you did that deliberately, please don't. It's rude. You post 
here, you read here.]

> Is it possible to tunnel a connection by OpenSSL?

Yes, but probably not the way you mean.

The OpenSSL project delivers a variety of artifacts, including:

- a library (typically built as a collection of binaries, but notionally a 
single library for most purposes) that implements TLS, various cryptographic 
primitives, and related useful functions

- a command-line utility (also named "openssl") which can be used for testing, 
manipulating cryptographic file formats, and other purposes

- SDK components such as headers for developing applications that use OpenSSL

- documentation

What it does NOT include is an end-user application for general-purpose 
cryptography, similar to what OpenSSH provides. That's a rather different 
function.

Of course you can tunnel anything through a TLS connection; you can tunnel 
anything through anything. Any channel that lets you convey unbounded 
information entropy, at whatever rate, can be used as a tunnel. You can tunnel 
IP traffic in DNS requests or carrier-pigeon messages.

But OpenSSL isn't going to do that for you. There are generic tunnel 
applications (e.g. stunnel) that use TLS and (I believe this is true of 
stunnel) specifically use OpenSSL as their TLS implementation, but those are 
separate projects.

Now, you could play games with, say, NetCat and the openssl utility to create 
proxy TLS connections. For example, on the client side:

   1. ncat -l ... | openssl s_client ...
   2. connect plaintext client to the ncat port via loopback

And on the server side:

   1. openssl s_server ... | ncat ...
   2. ncat connects to the server via loopback

That sort of thing might even have its uses, for example as a simple 
exfiltration shroud. But it's not something you want to use under normal 
circumstances.

> For example, use OpenSSL and a browser to encrypt browsing.

Er ... you know browsers already do that, right? That's the quintessential TLS 
application.

It might help if you explained what you're actually trying to accomplish, and 
why.

--
Michael Wojcik

