Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-23 Thread Jakob Bohm via openssl-users

On 2022-11-15 21:36, Phillip Susi wrote:


Jakob Bohm via openssl-users  writes:


Performance wise, using a newer compiler that implements int64_t etc. via
frequent library calls, while technically correct, is going to run
unnecessarily slow compared to having algorithms that actually use the
optimal integral sizes for the hardware/compiler combination.

Why would you think that?  If you can rewrite the code to break things
up into 32 bit chunks and handle overflows etc, the compiler certainly
can do so at least as well, and probably faster than you ever could.


When a compiler breaks up operations, it will do so separately for
every operation such as +, -, *, /, %, <<, >>.  In doing so,
compilers will generally use expansions that have to be valid for
all possible operand values, while manually broken-up code can often
skip cases not possible in the algorithm in question, for example by
taking advantage of some values always being less than
SIZE_T_MAX.
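
As an illustration only (not taken from the thread), the difference might
look like this for a 64-bit addition split into 32-bit halves; the function
names and the guarantee about the high halves are assumptions:

    typedef unsigned long u32;   /* assumed to hold at least 32 bits */

    typedef struct { u32 lo, hi; } u64_parts;

    /* Generic form: must always propagate the carry into the high word,
     * as a compiler's runtime helper for a full 64-bit '+' would. */
    static u64_parts add64_generic(u64_parts a, u64_parts b)
    {
        u64_parts r;
        r.lo = (a.lo + b.lo) & 0xFFFFFFFFul;
        r.hi = (a.hi + b.hi + (r.lo < a.lo)) & 0xFFFFFFFFul;
        return r;
    }

    /* Specialised form: if the algorithm guarantees both high halves are
     * zero (values below 2^32), the high-word addition collapses. */
    static u64_parts add64_small(u64_parts a, u64_parts b)
    {
        u64_parts r;
        r.lo = (a.lo + b.lo) & 0xFFFFFFFFul;
        r.hi = (r.lo < a.lo);
        return r;
    }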

Also, I already mentioned that some compilers do the breaking
incorrectly, resulting in code that makes incorrect calculations.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: an oldie but a goodie .. ISO C90 does not support 'long long'

2022-11-11 Thread Jakob Bohm via openssl-users

On 2022-11-06 23:14, raf via openssl-users wrote:

On Sat, Nov 05, 2022 at 02:22:55PM +, Michael Wojcik 
 wrote:


From: openssl-users  On Behalf Of raf via
openssl-users
Sent: Friday, 4 November, 2022 18:54

On Wed, Nov 02, 2022 at 06:29:45PM +, Michael Wojcik via openssl-users
 wrote:


I'm inclined to agree. While there's an argument for backward compatibility,
C99 was standardized nearly a quarter of a century ago. OpenSSL 1.x is
younger than C99. It doesn't seem like an unreasonable requirement.

Would this be a choice between backwards-compatibility with C90
compilers and compatibility with 32-bit architectures?

I don't see how.

It's a question of the C implementation, not the underlying
architecture. A C implementation for a 32-bit system can certainly
provide a 64-bit integer type. If that C implementation conforms to
C99 or later, it ought to do so using long long and unsigned long
long. (I'm excluding C implementations for exotic systems where, for
example, CHAR_BIT != 8, such as some DSPs; those aren't going to be
viable targets for OpenSSL anyway.)


Is there another way to get 64-bit integers on 32-bit systems?

Sure. There's a standard one, which is to include  and
use int64_t and uint64_t. That also requires C99 or later and an
implementation which provides those types; they're not required.

Sorry. I assumed that it was clear from context that I was only
thinking about C90-compliant 64-bit integers on 32-bit systems.


And for some implementations there are implementation-specific
extensions, which by definition are not standard.

And you can roll your own. In a non-OO language like C, this would
be intrusive for the parts of the source base that rely on a 64-bit
integer type.


I suspect that that there are more 32-bit systems than there are
C90 compilers.

Perhaps, but I don't think it's relevant here. In any case, OpenSSL is
not in the business of supporting every platform and C implementation
in existence. There are the platforms supported by the project, and
there are contributed platforms which are included in the code base
and supported by the community (hopefully), and there are unsupported
platforms.

If someone wants OpenSSL on an unsupported platform, then it's up to
them to do the work.

So it sounds like C90 is now officially unsupported.
I got the impression that, before this thread, it was believed
that C90 was supported, and the suggestion of a pull request
indicated a willingness to retain/return support for C90.
Perhaps it just indicated a willingness to accept community
support for it.

I'd be amazed if anyone could actually still be using a
30 year old C90 compiler, rather than a compiler that
just gives warnings about C90. :-)


Regarding C90 compilers, it is important to realize that some system
vendors kept providing (arbitrarily extended) C90 compilers long after
1999.  Microsoft is one example, with many of their system compilers
for "older" OS versions being based on Microsoft's C90 compilers.
These compilers did not provide a good stdint.h, but might be coaxed
into loading a porter-provided stdint.h that maps int64_t and uint64_t to
their vendor-specific C90 extensions (named __int64 and unsigned __int64).
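
As an illustration only (not from the original mail), such a porter-provided
shim for an old Microsoft-style C90 compiler might look roughly like this;
the guard name and suffix macros are assumptions:

    /* stdint.h -- hypothetical minimal shim for an old Microsoft C90
     * compiler that offers __int64 as a vendor extension. */
    #ifndef PORTER_STDINT_H
    #define PORTER_STDINT_H

    typedef signed char        int8_t;
    typedef unsigned char      uint8_t;
    typedef short              int16_t;
    typedef unsigned short     uint16_t;
    typedef int                int32_t;
    typedef unsigned int       uint32_t;
    typedef __int64            int64_t;
    typedef unsigned __int64   uint64_t;

    /* MSVC-style 64-bit constant suffixes */
    #define INT64_C(x)  (x ## i64)
    #define UINT64_C(x) (x ## ui64)

    #endif /* PORTER_STDINT_H */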

Even worse, I seem to recall at least one of those compilers miscompiling
64-bit integer arithmetic, but working acceptably with the older OpenSSL
1.0.x library implementations of things like bignums (BN) and the various
pure C algorithm implementations in OpenSSL 1.0.x, which happened to do
everything by means of 32- and 16-bit types.

Part of our company business is to provide software for the affected
"older" systems, hence our desire for the ability to compile OpenSSL 3.x
with options indicating "the compiler has no good integral types larger
than uint32_t; floating point is also problematic".

Other major vendors with somewhat old C compilers include a few embedded
platforms such as older ARM and MIPS chips that were mass produced in
vast quantities.

Performance wise, using a newer compiler that implements int64_t etc. via
frequent library calls, while technically correct, is going to run
unnecessarily slow compared to having algorithms that actually use the
optimal integral sizes for the hardware/compiler combination.

I seem to recall using at least one bignum library (not sure if OpenSSL
or not) that could be configured to use uint32_t and uint16_t using the
same C code that combines uint64_t and uint32_t on newer hardware.
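
The configurable-width idea can be sketched like this (a hypothetical
simplification, not any particular library's actual code): the limb type
and its double-width partner are chosen at build time, and the arithmetic
source is shared.

    /* Hypothetical sketch of a width-configurable bignum limb type:
     * the same algorithm source works with 64/32-bit or 32/16-bit pairs. */
    #ifdef HAVE_GOOD_UINT64
    typedef uint64_t limb_wide;   /* holds a full limb x limb product */
    typedef uint32_t limb;        /* one bignum "digit"               */
    #define LIMB_BITS 32
    #else
    typedef uint32_t limb_wide;
    typedef uint16_t limb;
    #define LIMB_BITS 16
    #endif

    /* *r = a * b + *r + carry (low part); returns the new carry.
     * Identical source code for both configurations. */
    static limb mul_add_limb(limb *r, limb a, limb b, limb carry)
    {
        limb_wide t = (limb_wide)a * b + *r + carry;
        *r = (limb)t;
        return (limb)(t >> LIMB_BITS);
    }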


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Getting cert serial from an OCSP single response

2022-10-31 Thread Jakob Bohm via openssl-users

On 2022-10-31 01:11, Alexei Khlebnikov wrote:


Hello Geoff,

Try the following function, receive the serial number via the 
"pserial" pointer. But avoid changing the number via the pserial 
pointer because it points inside the OCSP_CERTID structure.


int OCSP_id_get0_info(ASN1_OCTET_STRING **piNameHash, ASN1_OBJECT **pmd,
                     ASN1_OCTET_STRING **pikeyHash,
                     ASN1_INTEGER **pserial, OCSP_CERTID *cid);
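
For illustration, a minimal sketch of using this accessor to print the
serial (assuming an OCSP_CERTID already obtained from the single response;
error handling omitted):

    #include <stdio.h>
    #include <openssl/ocsp.h>
    #include <openssl/bn.h>

    /* Sketch: print the certificate serial held inside an OCSP_CERTID. */
    static void print_serial(OCSP_CERTID *cid)
    {
        ASN1_INTEGER *serial = NULL;

        /* Only pserial is requested; the other outputs are left NULL. */
        if (OCSP_id_get0_info(NULL, NULL, NULL, &serial, cid)) {
            BIGNUM *bn = ASN1_INTEGER_to_BN(serial, NULL);
            char *hex = BN_bn2hex(bn);

            printf("serial: %s\n", hex);
            OPENSSL_free(hex);
            BN_free(bn);
        }
    }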

Med vennlig hilsen / Best regards,
Alexei.


This function prototype really needs basic constification to mark
which arguments are inputs and which are outputs.  The pserial in
particular needs different const modifiers for each level of
indirection to indicate that this is output of a pointer to a
read-only number.
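
A constified variant along those lines might hypothetically be declared as
follows (not the actual OpenSSL API):

    /* Hypothetical constified prototype: the CERTID input is read-only,
     * and each output receives a pointer to a read-only object that is
     * still owned by the CERTID. */
    int OCSP_id_get0_info_const(const ASN1_OCTET_STRING **piNameHash,
                                const ASN1_OBJECT **pmd,
                                const ASN1_OCTET_STRING **pikeyHash,
                                const ASN1_INTEGER **pserial,
                                const OCSP_CERTID *cid);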

Quite surprised this hasn't been done during all the pointless API
changes after the new management took over the project.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Order of providers breaks my keymgmt

2022-01-17 Thread Jakob Bohm via openssl-users

On 17/01/2022 09:49, Tomas Mraz wrote:

On Mon, 2022-01-17 at 09:36 +0100, Milan Kaše wrote:

Hi,
I successfully implemented OpenSSL v3 provider which provides store
and keymgmt and I can use it to sign a cms with the following
command:

openssl cms -sign -signer myprov:cert=0014 -provider myprov -provider
default

However when I swap the order of providers (in the real world
scenario
the providers are configured through the configuration file), i.e.

openssl cms -sign -signer myprov:cert=0014 -provider default -
provider myprov

the command stops working.

I return the private key from the store through the reference:

int construct_ec_key(LOADER_CTX *myloader, OSSL_CALLBACK *object_cb,
                     void *object_cbarg) {
    static const int object_type = OSSL_OBJECT_PKEY;
    static const char data_type[] = "EC";
    KEYREF ref = { 0, };
    OSSL_PARAM objparams[] = {
        OSSL_PARAM_int(OSSL_OBJECT_PARAM_TYPE, (int *)&object_type),
        OSSL_PARAM_octet_string(OSSL_OBJECT_PARAM_REFERENCE, &ref, sizeof(ref)),
        OSSL_PARAM_utf8_string(OSSL_OBJECT_PARAM_DATA_TYPE,
                               (char *)data_type, COUNTOF(data_type) - 1),
        OSSL_PARAM_END,
    };
    return object_cb(objparams, object_cbarg);
}

The try_key_ref function then tries to transform data from the store
into the EVP_PKEY. It first looks up a keymgmt that can handle the
"EC" data type. Since the default provider is the first one that can
do that it is selected. It then tries to export data from my keymgmt
and import it into the selected default keymgmt. But obviously I
can't
export the private key and the operation fails.

We need to add a fallback in the try_key_ref() to try to fetch the
keymgmt from the provider of the store if the key is unexportable.
Could you please open an issue?



When my provider is activated before the default one then everything
works because the EVP_PKEY is constructed from my keymgmt.

What am I doing wrong? Shouldn't OpenSSL first try to construct
EVP_PKEY from the provider it actually returned the data? Is there a
way to force OpenSSL to use the specified provider (some property
"provider=myprov")?

You can set a default property query in the configuration file with
"?provider=myprov" as a workaround. That way your provider will be
preferred for the operations. However it might have some unwanted and
unexpected consequences.
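
For reference, the configuration-file form of that workaround might look
roughly like the fragment below (section names are illustrative; the
essential part is the default_properties line):

    # openssl.cnf fragment (hypothetical section names)
    openssl_conf = openssl_init

    [openssl_init]
    providers = provider_sect
    alg_section = algorithm_sect

    [provider_sect]
    default = default_sect
    myprov = myprov_sect

    [default_sect]
    activate = 1

    [myprov_sect]
    activate = 1

    [algorithm_sect]
    default_properties = ?provider=myprov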


Please, this is clearly a bug.  When the input specifies a specific
provider in the key/cert reference ("-signer myprov:cert=0014"), it
is a serious bug for the code to ignore that and query other
providers from the general priority list.  Ditto when a cert storage
provider identifies a key: that provider should get first chance to
find/provide the key.

Enjoy,


Jakob Bohm

--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: EVP_PKEY_get_int_param is not getting degree from EC key

2022-01-05 Thread Jakob Bohm via openssl-users

On 2022-01-05 09:45, Tomas Mraz wrote:


...
So you're basically asking to put something like - "The parameter most
probably won't fit into unsigned int." - to every such parameter
documented for PKEYs?

"unsigned BIGNUM" instead of "unsigned integer" would be short and much 
clearer

in the description and naming of parameters unlikely to fit in a C int/long.

Also to me "the degree of an EC curve" refers to the form of the curve 
equation,
not the bit length of the point coordinates, for example, the P-384 
curve uses a

degree 3 equation, and modulo prime p and curve order n both being 384-bit
bignums.

What many API users probably want is a quick way to get the nominal bit
length of a public key or group, as a proxy for the cryptographic strength
and as a rough guide to allocating data buffers.  This API should not give
access to or reveal the exact group parameters or public key; that would
be different (but still needed) APIs/parameters.  For example, it would
return 4096 for RSA-4096 and 384 for the NIST P-384 curve.
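
For what it is worth, the closest existing shortcut appears to be
EVP_PKEY_get_bits() or the OSSL_PKEY_PARAM_BITS integer parameter; a
minimal sketch, assuming an already-loaded key:

    #include <openssl/evp.h>
    #include <openssl/core_names.h>

    /* Sketch: nominal key/group size in bits, e.g. for buffer sizing. */
    static int nominal_bits(const EVP_PKEY *pkey)
    {
        int bits = EVP_PKEY_get_bits(pkey);  /* 4096 for RSA-4096, 384 for P-384 */
        int bits2 = 0;

        /* Equivalent parameter-based form: */
        EVP_PKEY_get_int_param(pkey, OSSL_PKEY_PARAM_BITS, &bits2);

        return bits > 0 ? bits : bits2;
    }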

Enjoy,

Jakob Bohm

--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Larger RSA keys (Modulus bits > 16384)

2021-12-26 Thread Jakob Bohm via openssl-users

On 26/12/2021 16:21, Grégory Widmer via openssl-users wrote:


Dear OpenSSL users,

I have a question about OpenSSL. Recently, I asked myself if there was 
a maximum bit length for the modulus of a RSA key.


I tried to type :

user@host:~$ openssl genrsa 32768
Warning: It is not recommended to use more than 16384 bit for RSA keys.
 Your key size is 32768! Larger key size may behave not as 
expected.

Generating RSA private key, 32768 bit long modulus (2 primes)

I got this warning, and I wonder why a larger key size may behave not 
as expected.



I don't know, but maybe it is a reference to other RSA libraries not working
with keys larger than 2 kibibytes (16384 bits).  In particular the GPG
documentation warns that using larger RSA or DH keys is so much less
efficient, in terms of security gained per unit of overhead, that they
recommend ECC instead.

However, only the author of that warning message can answer why they
wrote it.


Could anyone explain or give resources on why this doesn't work ?

My guess is that, having the following : (M = message, C = Ciphered)


> C = M^e ≡ n
>
> e = 65537
>
> n = p X q


If M^e is < n, we could easily compute the original message ?


In general the formula is C = (M^e % n), also written as C ≡ M^e (mod n);
I am not sure why you used the ≡ congruence symbol as a modulus operator
(% in C, C++, etc.; "mod" in many textbooks).

Also, many systems for using RSA pad M to enough bits that M^e > n, thus
ensuring that the modulo operation affects the result.  In particular,
both versions of PKCS#1 do that in different ways.  There was an
unfortunate ISO standard that forgot to do that and it was found to be
insecure.

For signing, the keys are swapped so S = (M^d % n) or S ≡ M^d (mod n),
where d is the secret key, while the recipient checks that M ≡ S^e (mod n)
or that M2 = (S^e % n) can be securely unpadded back to the actual M.
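
To make the modular arithmetic concrete, here is a toy sketch using OpenSSL
BIGNUMs with the well-known textbook parameters p = 61, q = 53 (so n = 3233,
e = 17, d = 2753); these values are purely illustrative, and real code must
use the padded high-level APIs instead:

    #include <stdio.h>
    #include <openssl/bn.h>

    int main(void)
    {
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *n = BN_new(), *e = BN_new(), *d = BN_new();
        BIGNUM *m = BN_new(), *c = BN_new(), *m2 = BN_new();
        char *cs, *ms;

        BN_set_word(n, 3233);            /* n = p * q = 61 * 53   */
        BN_set_word(e, 17);              /* public exponent       */
        BN_set_word(d, 2753);            /* private exponent      */
        BN_set_word(m, 65);              /* the "message" M       */

        BN_mod_exp(c, m, e, n, ctx);     /* C = M^e % n  (encrypt) */
        BN_mod_exp(m2, c, d, n, ctx);    /* M = C^d % n  (decrypt) */

        cs = BN_bn2dec(c);
        ms = BN_bn2dec(m2);
        printf("C = %s, recovered M = %s\n", cs, ms);
        OPENSSL_free(cs);
        OPENSSL_free(ms);

        BN_free(n); BN_free(e); BN_free(d);
        BN_free(m); BN_free(c); BN_free(m2);
        BN_CTX_free(ctx);
        return 0;
    }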


Also, I want to apologize if my question is redundant, I tried to 
search on GitHub and through the mailing list, but there is no search 
feature in the mailing list.


Have a nice day !

Grégory Widmer


PS : This question is for knowledge purpose only, I don't use RSA keys 
anymore (except with GPG), I prefer ECC :)




--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: New Blog Post

2021-12-03 Thread Jakob Bohm via openssl-users

On 2021-11-25 15:00, Matt Caswell wrote:
Please see the new blog post by Tim Hudson giving an update on the 
OpenSSL Project.


https://www.openssl.org/blog/blog/2021/11/25/openssl-update/


Followup:

While the OpenSSL leadership may think they have made things easier
for algorithm developers, the changes have actually removed the
existing APIs for implementing new modes on top of the existing
library:

1. The ability to easily provide or override new EVP algorithm
implementations within "application" code has been removed with
the opaqueness of the structure defining the implementation
function pointers for an algorithm.

2. The interfaces to directly call primitives like the AES block
function have gone, leaving only awkward workarounds based on
setting specific block cipher modes and calling the entire EVP
stack for each block (a sketch of such a workaround follows this
list).  Other "trivial" operations on block mode states (such as
saving and restoring running states) have also been lost.

3. Some BigNum library features have also been lost in the opaque
everything push, in particular the ability to preallocate buffers
for bignums up to an application specific bit count using the
BN_FLG_STATIC_DATA option.

4. Any attempt to compare the "modern" source code to the classic
source code from before the influx of new developers and money is
heavily frustrated by the decision to reformat all source files
midway through the 1.0.x patch series.
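
For illustration, a hedged sketch of the single-block workaround mentioned
in point 2, going through the EVP layer with AES-256-ECB (error handling
omitted):

    #include <openssl/evp.h>

    /* Sketch: encrypting ONE 16-byte AES block via the EVP stack, since
     * the raw AES block-function entry points are no longer available. */
    static void aes256_encrypt_block(const unsigned char key[32],
                                     const unsigned char in[16],
                                     unsigned char out[16])
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int outl = 0;

        EVP_EncryptInit_ex(ctx, EVP_aes_256_ecb(), NULL, key, NULL);
        EVP_CIPHER_CTX_set_padding(ctx, 0);   /* exactly one block, no padding */
        EVP_EncryptUpdate(ctx, out, &outl, in, 16);
        EVP_CIPHER_CTX_free(ctx);
    }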

All 4 changes have greatly affected my own work to use OpenSSL in
an application originally designed around another open
cryptographic API.  Where the application included such things as
optional use of a different AES mode, and security rules for when/if
to restore algorithm states in error/trial decryption scenarios.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Starting the QUIC Design

2021-12-03 Thread Jakob Bohm via openssl-users

Please note that the embedded github links don't work for me, as all
I get is an error page with a log in form.

One major issue with any QUIC implementation is how closely that
protocol is tied to Google and their desire to have web browsers
quickly load elements from 3rd part webservers, such as Google's
own tracking code.

On 2021-12-03 13:04, Matt Caswell wrote:

Please see my blog post on starting the QUIC design here:

https://www.openssl.org/blog/blog/2021/12/03/starting-the-quic-design/

Matt


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: OpenSSL 1.1 on OSX

2021-12-03 Thread Jakob Bohm via openssl-users
Which is indeed what I do in our notarized macOS and iOS applications.
However, to do so I have historically needed to clean up the OpenSSL source
code to actually behave as a proper static library where only used
functions are linked in.  Most notably, the source files named xxx_lib.c
tend to cause the opposite behavior by bundling used and unused code
into a single .o file, so I had to do thematic splitting of those source
files, not only to avoid the unused functions getting linked in, but
also the unused .o files referenced by those unused functions.

This problem is fully cross-platform, although some more detail work had to
be done to ensure compatibility of certain source files with Xcode-bundled
tool chains (in particular the optimized assembler files).


On 2021-11-20 07:47, Dr Paul Dale wrote:
An alternative would be to statically link libssl and libcrypto.  No 
more dependencies.



Pauli

On 20/11/21 3:48 pm, Viktor Dukhovni wrote:

On Sat, Nov 20, 2021 at 01:38:39PM +1100, Grahame Grieve wrote:


I agree it's sure not a core openSSL issue. But surely lots of people
want to use openSSL in cross platform apps and openSSL is interested
in adoption issues?

Most of the users here are building applications that are not notarised,
and so work with the upstream builds.


Anyway, it looks like I now have to figure out how to maintain a
custom build of openSSL :-(

It shouldn't be too difficult to execute the build, once you've figured
out the actual requirements.  Apparently you need to make sure that
signed code has very explicit dependencies, which makes some sense, so
the libraries bundled with the application need to be built in a way
that can be verified along with the application.

My best guess is that Apple are not specifically picking on OpenSSL
here, and similar issues would arise with any other libraries you'd
want to package with your application.  Good luck.

Feel free to share your findings.  Perhaps someone will then help
you find a way to improve on them, or to add a template to the
build to support this going forward...




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Examples of adding Private Enterprise Numbers (PEN's) Extensions to CSR

2021-10-15 Thread Jakob Bohm via openssl-users

On 2021-10-14 18:43, Philip Prindeville wrote:

Hi,

I need to generate CSR's with some Extensions included that use our PEN's as 
allocated by IANA.

Are there any examples of this?

Also, I'm looking at X509_REQ_add_extensions_nid() and it takes a 
STACK_OF(X509_EXTENSION) but it doesn't seem to actually iterate through the 
stack...  Is this code even correct?  What am I missing?  Ditto for 
X509_REQ_add_extensions().

Thanks,

-Philip


I don't know how to do this via the API, but the source code for the
command line tools may give some good clues.  Here is how I would do it
with the command line tools:

First of all, you need to (administratively) decide how to subdivide
your private OID tree below your enterprise ID.  This would be a
company-internal document listing how you use the OIDs and where to
put future OIDs of various kinds.  Use whatever document editing
system is used for other long-term company documents.  Something like:

   Redfish solutions has been allocated the following OID prefix
   via the IANA "Enterprise numbers" process:

   RedfishOid = 1.3.6.1.4.1.999

   We subdivide this as follows:

   RedfishOid.1 = Redfish X.509 extensions
   RedfishOid.1.1 = FooBar extension, see design document RS12345
   RedfishOid.1.2 = BazQux extension, see design document RS12346
   RedfishOid.2 = Redfish SNMP extensions
   RedfishOid.2.1 = Redfish hardware-box-A SNMP extensions
   RedfishOid.3 = Redfish contributions to public standardisation efforts
   RedfishOid.4 = Redfish internal LDAP extensions used by HR

Next, for the OpenSSL command line tools, you need to add the individual
X.509-related OIDs to the openssl.cnf file:

   In the [default] section:
   oid_section = new_oids

   In the [new_oids] section
   RedFishFooBar=1.3.6.1.4.1.999.1.1
   RedFishBazQux=1.3.6.1.4.1.999.1.2

From there, you should be able to use the new OID names in relevant
sections and options, using the generic syntax that explicitly
states how each value needs to be encoded.
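
For example, a hypothetical request-extension section using those names
with the generic "ASN1:" encoding syntax (section names and values are
purely illustrative):

   [ req ]
   req_extensions = redfish_ext

   [ redfish_ext ]
   # arbitrary extensions with explicitly stated encodings
   RedFishFooBar = ASN1:UTF8String:value defined in RS12345
   RedFishBazQux = critical,ASN1:INTEGER:42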

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: OpenSSL 3.0.0 custom entropy source

2021-09-25 Thread Jakob Bohm via openssl-users
So is there no longer an API to feed entropy to the default or FIPS 
default RNG?


Creating an entire provider just to feed input to the FIPS provider 
seems overkill.


On 2021-09-14 01:00, Dr Paul Dale wrote:
Try working from providers/implementations/rands/seed_src.c  You'll 
need to reimplement seed_src_generate() to use your RNG.


To use your custom seed source, you can either use the OpenSSL 
configuration file to set a "random" section that includes a "seed" 
setting or you can call RAND_set_seed_source_type() early in your 
startup sequence.
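
A minimal sketch of the programmatic route, assuming OpenSSL 3.0's
RAND_set_seed_source_type() and that "SEED-SRC" is the name under which
the reimplemented seed source is registered:

    #include <openssl/rand.h>

    /* Sketch: select the custom seed source early, before any RNG use. */
    static int init_rng(void)
    {
        /* NULL = default library context; NULL property query. */
        return RAND_set_seed_source_type(NULL, "SEED-SRC", NULL);
    }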



Pauli

On 14/9/21 8:19 am, Kory Hamzeh wrote:

Hi,

We are upgrading from OpenSSL 1.0.1g+OpenSSL-FIPS-2.0.5 to 3.0.0. 
Yes, I know, big jump. We have our own entropy source we use to seed 
the OpenSSL DRBG. This is a basic code snippet of how we set it up:


 DRBG_CTX *dctx = FIPS_get_default_drbg();
 FIPS_drbg_init(dctx, NID_aes_256_ctr, DRBG_FLAG_CTR_USE_DF);
 FIPS_drbg_set_callbacks(dctx,
rand_get_entropy,
rand_free_entropy,
   0,
rand_get_entropy,
rand_free_entropy);


Error checking has been removed in the example for the sake of brevity.

I am trying to figure out  how to implement this with OpenSSL 3. From 
what I have read in the docs, I need to create a rand provider. But I 
still feel like I don’t understand how it all fit together. I did 
look at fuzz_rand.c and fake_rand.c, and if I understood everything 
correctly, neither of them use an external entropy/seed source.


Are there better examples of what I am looking for?

Thanks,
Kory




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Reducing the footprint of a simple application

2021-09-15 Thread Jakob Bohm via openssl-users

On 2021-09-14 12:14, Dr Paul Dale wrote:



> ...low security RNGs and other antifeatures.

Huh  Where?  Why plural?

The only **one** I'm aware of is the one I added to stochastically 
flush the property cache where it doesn't need to be cryptographically 
secure.


Some applications need more than 256 independent random bits to satisfy
their security design.  Some of the newer RNGs in OpenSSL presume
otherwise in their government design.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Reducing the footprint of a simple application

2021-09-14 Thread Jakob Bohm via openssl-users

Hi fellow sufferer,

I used to do a lot of manual patching of OpenSSL 1.0.x to remove the 
insane object interdependencies (such as objects named foolib.c being 
nexus points that bring in tonnes of irrelevant code because someone was 
too unfamiliar with basic library concepts to make an actual library 
instead of a monolithic file).


However the rush to repeatedly rewrite and deprecate everything after 
the new people joined the OpenSSL project made maintaining the needed 
patches too time consuming.


Some day, I will have to start over turning the 3.0.x code into an 
actual library while removing linker mishandling, low security RNGs and 
other antifeatures.


On 2021-09-12 19:34, Reinier Torenbeek wrote:

Hi,

I have a simple application that uses OpenSSL 3.0.0 for AES-GCM 
encryption and decryption only. Looking at the size of the binary on 
disc, I see it's a few KBs when linking dynamically with libcrypto, 
and  4.8 MB when linking statically. Although I know the large 
footprint of OpenSSL is considered "a fact of life", this seems 
excessive. From experience with other crypto implementations, I know 
that the *actual* crypto functionality that I am using can fit in 10s 
of KBsand I would like to understand the reasons for OpenSSL's size 
better. I am on Linux, 64 bits, using gcc 9.3.0.


Some analysis of the binary reveals (not surprisingly) that it 
contains many OpenSSL symbols that have nothing to do with the 
functionality that I am using. Those seem to get pulled in because 
objects get linked in as a whole and apparently the layout of the 
object contents are such that the symbols needed for my functionality 
are spread out over many different objects.


It was my hope that I could mitigate this by compiling OpenSSL and my 
application with the flags -ffunction-sections, -fdata-sections, -Os 
and -flto and using --gc-sections and -flto when linking. (See 3.10 
Options That Control Optimization 
<https://gcc.gnu.org/onlinedocs/gcc-9.4.0/gcc/Optimize-Options.html#Optimize-Options> of 
GCC's documentation).  This did reduce the binary size by 2 MB, 
leaving me with almost 3 MB. Almost 90% of that was in the text 
section and a bit over 10% in the data section. I do not have 
sufficient experience with these options to assess how well the 
optimizations worked in this case, I think the resulting binary is 
still pretty large.


I have not tried disabling any of the features when building OpenSSL. 
I suspect that may help a little bit because it may result in a 
decrease in size of (some) objects, but I have seen people reporting 
disappointing results of that on the web. Also, it does not seem to be 
a workable approach in general to have to figure out which build 
options to use and to have to rebuild OpenSSL for every type of 
application that I create.
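
As a hypothetical starting point for such a trimmed static build (the
option list is illustrative only and not verified for this particular use
case):

    ./Configure linux-x86_64 no-shared no-deprecated no-engine no-dso \
        no-comp no-ssl3 no-dtls no-srp no-psk \
        -Os -ffunction-sections -fdata-sections -Wl,--gc-sections
    make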


Did any people here try similar things, with better results? Does 
anybody have any other suggestions as to what I could try? And what is 
the explanation (or justification) for this excessive footprint?


Thanks,
Reinier


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: problems with too many ssl_read and ssl_write errors

2021-08-23 Thread Jakob Bohm via openssl-users

For the below symptoms, I would recommend a watching the application
port with WireShark.

This should show any the TLS protocol deviations and any problems in
handling and establishing the TCP connections.

On 2021-08-19 00:38, David Bowers via openssl-users wrote:


  * We have a server that has around  2025 clients connected at any
instant.
  * Our application creates a Server /Listener socket that then is
converted into a Secure socket using OpenSSL library. This is
compiled and built in a Windows x64 environment.  We also built
the OpenSSL for the Windows. The Listener socket is created with a
default backlog of 500. The Accept socket is non-blocking socket
and waits for connections
  * Every Client makes a regular blocking connection to the Server.
The Server accepts the connection after which the Client socket is
converted to a secure socket using the OpenSSL Library.
  * The connections are coming at a rate of about 10 connections
/second ?  Not sure about this number.
  * We are able to connect to all the clients in a few minutes and it
stays like that for some time.  There constant exchange of
messages between Server(COS) and clients without issues.
  * The application logic is to keep trying to connect every timeout.
  * After maybe a few hours/days we see the clients dropping
connections. The logs indicate the SSL_Read or SSL_Write on the
Server fails for a client with SSL_Error number 5
(SSL_ERROR_SYSCALL) and the equivalent Windows error of
WSATimeOut.  We then observe the WSAECONNRESET as the Client
closed connection.  We see this behavior for multiple sites.
  * The number of Clients disconnected starts increasing and we see
the logs in the Client where the server refuses any more
connections form Clients (10061- WSAECONNREFUSED) There is nothing
to indicate this state in the server logs. Our theory is the
backlog is filled and Server refusing further connections.
  * We are trying to find why we get the SSL_Read/SSL_Write Error as
it a Blocking socket. We cannot use to a non-blocking socket due
to platform and application limitation


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Need some help signing a certificate request

2021-08-23 Thread Jakob Bohm via openssl-users

On 21/08/2021 19:42, Michael Wojcik wrote:

From: rgor...@centerprism.com 
Sent: Saturday, 21 August, 2021 11:26

My openssl.cnf (I have tried `\` and `\\` and `/` directory separators):

Use forward slashes. Backslashes should work on Windows, but forward slashes work 
everywhere. I don't know that "\\" will work anywhere.

\\ works only when invoking a \-expecting program from a Unix-like shell
that requires each \ to be escaped with a second backslash in order to
pass it through.  A typical example is using Cygwin bash to invoke a native
Win32 program.

\\ where neither backslash is an escape (so \\\\ in the above shell situation) is
also used in native Windows programs to access a hypothetical root that
is above the real file system roots, typically the syntax is
"\\machine\share\ordinary\path", where:

machine is either a different computer, a "." for a special higher level
  local namespace or "??" for another special namespace.
share is the first level below machine, in particular it is the exported
  name of a remote file system or object.
ordinary\path is whatever else needs to be added to the path for a
  specific use


--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: libcrypto.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64

2021-08-19 Thread Jakob Bohm via openssl-users
http_tcpip_outbound.c.o
  "_TLS_server_method", referenced from:
      _http_tcpip_inbound_tls_initialize in http_tcpip_inbound.c.o
  "_X509_free", referenced from:
      _http_tcpip_outbound_get_url_using_string_type_tls in http_tcpip_outbound.c.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
gmake[3]: *** [CMakeFiles/test.dir/build.make:680: test] Error 1
gmake[2]: *** [CMakeFiles/Makefile2:83: CMakeFiles/test.dir/all] Error 2
gmake[1]: *** [CMakeFiles/Makefile2:90: CMakeFiles/test.dir/rule] Error 2


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Deciphering a .p7f file

2021-08-11 Thread Jakob Bohm via openssl-users

On 2021-08-11 13:52, Keine Eile wrote:

Hi list members,

I have a .p7f in hands, which seems to be a DER encoded PKCS7 
structure in some way, I can use 'openssl pkcs' to transform it in a 
PEM form, I also can pull a bunch of certificates out of it. But I 
know, there is some encrypted pay load in this file, which I can not 
decipher. What I have tried with openssl's rsautl and smime does not 
seem to work for me.


May be someone of you can push me in the right direction, thanks!

Try the "openssl cms" command, or its older sibling "openssl smime" .

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: query on key usage OIDs

2021-07-16 Thread Jakob Bohm via openssl-users

The question was how to retrieve those lists for any given certificate
using currently supported OpenSSL APIs.

The lists of usage bits and extusage OIDs in any given certificate
are finite, even if the list of values that could be in other
certificates is infinite.
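
A minimal sketch of that retrieval with current APIs, assuming an
already-parsed X509 *cert:

    #include <stdio.h>
    #include <openssl/x509v3.h>

    /* Sketch: list the keyUsage bits and extendedKeyUsage OIDs of one cert. */
    static void show_usages(X509 *cert)
    {
        EXTENDED_KEY_USAGE *eku;
        int i;

        if (X509_get_extension_flags(cert) & EXFLAG_KUSAGE)
            printf("keyUsage bits: 0x%08x\n",
                   (unsigned)X509_get_key_usage(cert));

        /* extendedKeyUsage decodes to a stack of OIDs */
        eku = X509_get_ext_d2i(cert, NID_ext_key_usage, NULL, NULL);
        if (eku != NULL) {
            for (i = 0; i < sk_ASN1_OBJECT_num(eku); i++) {
                char buf[128];

                OBJ_obj2txt(buf, sizeof(buf), sk_ASN1_OBJECT_value(eku, i), 1);
                printf("EKU OID: %s\n", buf);
            }
            sk_ASN1_OBJECT_pop_free(eku, ASN1_OBJECT_free);
        }
    }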

On 2021-07-16 06:44, Kyle Hamilton wrote:

Also, OIDs for extendedKeyUsage can be defined per-application, so
there's no way to compile a full list of them.

-Kyle H

On Fri, Jul 16, 2021 at 4:23 AM Viktor Dukhovni
  wrote:

On 15 Jul 2021, at 11:55 pm, SIMON BABY  wrote:

I am looking for openssl APIs to get all the OIDs associated with user 
certificate Key usage extension. For example my sample Key usage extension from 
the certificate is below:
X509v3 extensions:
 X509v3 Key Usage: critical
 Digital Signature, Key Encipherment

I am looking for the APIs used to get the OIDs associated with  Digital 
Signature and Key Encipherment from the certificate.

There are no keyUsage OIDs, the field is a bitstring:

https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.3

   id-ce-keyUsage OBJECT IDENTIFIER ::=  { id-ce 15 }

   KeyUsage ::= BIT STRING {
digitalSignature(0),
nonRepudiation  (1), -- recent editions of X.509 have
 -- renamed this bit to 
contentCommitment
keyEncipherment (2),
dataEncipherment(3),
keyAgreement(4),
keyCertSign (5),
cRLSign (6),
encipherOnly(7),
decipherOnly(8) }

There are OIDs in the extendedKeyUsage:

 https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.12




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: email notice [was: Not getting some macros for FIPS]

2021-07-01 Thread Jakob Bohm via openssl-users

On 2021-06-25 22:26, Richard Levitte wrote:

On Wed, 23 Jun 2021 10:51:05 +0200,
Tomas Mraz wrote:

On Wed, 2021-06-23 at 08:12 +, Kumar Mishra, Sanjeev wrote:


Notice: This e-mail together with any attachments may contain
information of Ribbon Communications Inc. and its Affiliates that is
confidential and/or proprietary for the sole use of the intended
recipient. Any review, disclosure, reliance or distribution by others
or forwarding without express permission is strictly prohibited. If
you are not the intended recipient, please notify the sender
immediately and then delete all copies, including any attachments.

It's a little bit strange to send e-mails with such notices to public
mailing lists where the intented recipient is _anyone_.

Those notices are a bit amusing, yeah.  Of course, Sanjeev can't be
blamed for this, as we can probably assume that it's a corporate
filter that automagically adds those.

And oh boy!  openssl-users having almost 3000 subscribers, that's
quite a lot of people to chase down and ensure they have destroyed all
copies, I tell ya!  "Good luck" is probably an appropriate response
;-)



Which is why I have set up dedicated e-mail identities for posting to such
public lists, using a different disclaimer in the sig-block.

I hope this can inspire other sysadmins to set up something similar.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: reg: question about SSL server cert verification

2021-06-19 Thread Jakob Bohm via openssl-users

On 2021-06-18 17:07, Viktor Dukhovni wrote:


On Fri, Jun 18, 2021 at 03:09:47PM +0200, Jakob Bohm via openssl-users wrote:


Now the client simply works backwards through that list, checking if
each certificate signed the next one or claims to be signed by a
certificate in /etc/certs.  This lookup is done based on the complete
distinguished name, not just the CN part of it.  At every step, the
certificate may be referenced by a "key identifier" instead of the
distinguished name, and some clients will compare that instead of the
distinguished name.

All extant (non-EOL) OpenSSL releases prioritise the local trust-store
over the remotely provided CA certificate list when building the
certificate chain.  The remote chain is used only when no match is found
in the trust store.  As as a matching issuer is found in the trust store
all further lookups are from the trust store only.

If the local trust store contains only "root CAs", and the remote peer
provides the rest of the chain, with no overlap in the subject
distinguished names, the behaviour is not observably different from
Jakob's description.

Differences are observed once the local trust store contains some
intermediate certificates or the remote chain provides a cross cert for
which the local store instead contains a corresponding (same subject
name and keyid) self-signed root, or the cross cert is in the local
store, but the remote peer sends a root.  In all such cases chain
construction uses the certs from the trust store.  This tends to produce
less surprising (and ideally better, or at least what you implicitly
asked for) results.


Interesting: earlier today I observed the confusing effect of
"openssl verify" treating -trusted_first as always on, while the
documentation wording still suggests it is an actual option rather
than the historical remnant of yet another feature removed by the
new OpenSSL management.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: reg: question about SSL server cert verification

2021-06-18 Thread Jakob Bohm via openssl-users

On 2021-06-18 16:23, Michael Wojcik wrote:


From: openssl-users  On Behalf Of Jakob
Bohm via openssl-users
Sent: Friday, 18 June, 2021 07:10
To: openssl-users@openssl.org
Subject: Re: reg: question about SSL server cert verification

On 2021-06-18 06:38, sami0l via openssl-users wrote:

I'm curious how exactly an SSL client verifies an SSL server's
certificate which is signed by a CA.

No, here is what really happens:

First the owner of the server (or a program they use) find
the chain of intermediary certificates which leads from
their actual certificate to a commonly trusted root
certificate, Lets say the chain is:
RootA->CrossB->IntermediaryC->IntermediaryD->EndCertServer.
This list of certificates is put in a server config file
and the complete list is sent in each SSL handshake and
every CMS signed message.

We hope. But, of course, as Jakob says, there are many misconfigured servers.


Now the client simply works backwards through that list,
checking if each certificate signed the next one or claims
to be signed by a certificate in /etc/certs.  This lookup
is done based on the complete distinguished name, not just
the CN part of it. At every step, the certificate may be
referenced by a "key identifier" instead of the
distinguished name, and some clients will compare that
instead of the distinguished name.

And there are a whole bunch of other checks: signature, validity dates, key 
usage, basic constraints...

Those checks would presumably happen after chain building,
verifying that signatures, dates, key usage and other constraints
are correct.

Also, the correspondence between the peer identity as requested by the client, 
and as represented by the entity certificate, should not be done using the CN 
component of the Subject DN (as OP suggested), but by comparing against the 
Subject Alternative Name extension values. The subject CN should only be used 
as a last resort; some applications may refuse to allow a CN match and insist 
on an X.509v3 certificate with a valid SAN.

(Jakob knows all this.)

Actually, I have heard of nothing at all proposing the use of
SANs on CA certificates or their use in chain building, which is
why I refer only to matching the complete DN and/or matching
the "key identifier" field.


Certificate chain validation is a very complicated topic. I put together an 
internal presentation with an overview of it some years back and it was a dozen 
or more slides, and I only touched on the major points. It's not something that 
can be covered thoroughly in an email discussion.

However it is something that should be documented in OpenSSL
documents such as the "verify(1ssl)" manpage, but somehow isn't.

For example, some versions of that manpage fail to specify which
name restrictions are checked, which are ignored, and which ones
fail even if they shouldn't.




The big complications are:

Numerous. Jakob's list is a good one, but I'm sure we can come up with others. 
Like, say, the enormous mess which is revocation.

Revocation checks would also be part of the post-chain-building
checks.


My advice, for someone who wants to understand the certificate-validation 
process in TLS, is:
[Snipped: List of academic texts for those who want to implement their own 
X.509 code]


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: reg: question about SSL server cert verification

2021-06-18 Thread Jakob Bohm via openssl-users

On 2021-06-18 06:38, sami0l via openssl-users wrote:
I'm curious how exactly an SSL client verifies an SSL server's 
certificate which is signed by a CA. So, during the SSL handshake, 
when the server sends its certificate, will the SSL client first 
checks the `Issuer`'s `CN` field from the x509 SSL certificate that it 
received for example, and compares against all the `CN`s of all the 
certificates stored `/etc/ssl/certs` of that client and if it matches 
any one of them, next it checks the signature of the received 
certificate by parsing the public key from that CA cert located in 
`/etc/ssl/certs/someCA.crt` and performers the decryption and checks 
the signature of the received certificate and if the signature 
matches, the browser accepts the certificate since it just verified 
that it's signed by the CA which is located in `/etc/ssl/certs` and 
uses that cert? Is this how the SSL client verifies the certificate 
when it receives a server's certificate during the handshake process? 
If not, It'd be really helpful if someone could explain me how it's 
exactly done.




No, here is what really happens:

First the owner of the server (or a program they use) find
the chain of intermediary certificates which leads from
their actual certificate to a commonly trusted root
certificate, Lets say the chain is:
RootA->CrossB->IntermediaryC->IntermediaryD->EndCertServer.
This list of certificates is put in a server config file
and the complete list is sent in each SSL handshake and
every CMS signed message.

Now the client simply works backwards through that list,
checking if each certificate signed the next one or claims
to be signed by a certificate in /etc/certs.  This lookup
is done based on the complete distinguished name, not just
the CN part of it.  At every step, the certificate may be
referenced by a "key identifier" instead of the
distinguished name, and some clients will compare that
instead of the distinguished name.

The big complications are:

1. The server owner may have configured the "wrong" list and
clients may or may not work around that.

2. Not all clients trust the same exact list of root CAs,
hence the invention of "cross-signed roots", which are
intermediary certificates with the same name and public
key as a not-known-everywhere root, but signed by an
already-known-everywhere root.

3. Not all clients react the same way when the server
includes a cross certificate in the list.  Some recognize
the cross as the same as a root they trust and declare
success without having to trust the (possibly old)
compatibility root, others check only the compatibility
root and get confused when that old root dies.

4. Some quality checkers (looking at you, Qualys) object
strongly to the server including the root itself in its
list, because the root is supposed to be on all the clients
that trust it.  But experienced human client users can
actually use an included root to make informed decisions
about trust errors.

OpenSSL documentation tends to bury its handling of all
this way too deep inside the programmer documentation
rather than explaining things clearly in the end user
documentation.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: openssl verify question

2021-06-17 Thread Jakob Bohm via openssl-users

On 2021-06-17 15:49, Viktor Dukhovni wrote:

On Sat, Jun 12, 2021 at 10:20:22PM +0200, Gaardiolor wrote:


When I compare those, they are exactly the same. But that's the thing, I
think server.sig.decrypted should be prepended with a sha256 designator
30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20, which is
missing. I do see this designator with working certificates. I suspect
this is the problem.

Is that designator mandatory and likely the cause of my issue ?

Yes, PKCS#1 signatures must have an algorithm OID prefix.


Please beware that a few years ago, I found that a particular Symantec
server signed long-term messages (timestamping countersignatures)
without that prefix, using an implied algorithm of SHA-1.

It may thus be necessary for CMS implementations to accept such
signatures for that special case until they naturally expire,
and maybe a few years past that.

Defining a sufficiently narrow exception is left as an exercise
for implementors.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Best practice for distributions that freeze OpenSSL versions and backports

2021-06-08 Thread Jakob Bohm via openssl-users

Dear team,

It would be nice if there was a user- and security-friendly best 
practice document for distributions (such as Linux distributions) that 
freeze on an OpenSSL release version (such as 1.1.1z) and then backport 
any important fixes.


Perhaps something like the following:

1. The distributor shall seek to backport as many upstream security 
fixes as possible and shall sign up to receive advance confidential 
copies of such code changes to attempt a coordinated release at the same 
time as the upstream release.


1.1. The version number frozen on should be from the upstream branch 
with the latest upstream maintenance end date available at the time of 
freezing the version.


2. Any such backport-patched version (as source, library, shared
library, and/or openssl binary) shall be provided with a document named
README.fixes (with a distribution-appropriate extension for such files,
like .txt or .gz) listing the following:


2.1 The version number of the most recent upstream release version 
considered at the time of last document update.


2.2 The version number of the upstream release version chosen as the 
frozen base, and the date when that choice was made.


2.3 The current differences from that most recent upstream release 
version, specifying any upstream security advisories and public CVEs not 
completely fixed, but still listing any and all non-security 
enhancements not included.


2.4 The current differences from the named frozen base version, with any 
net changes back and forth cancelled out (thus not a changelog).  Any 
change fixing a security issue shall list the upstream security advisory 
and public CVE.


2.5. The distribution maintainers that did the backporting and writing 
of the document, and (if different) the contact point for reporting 
issues/bugs in the backport work.


3. The README.fixes document should, if possible, be made available to 
the upstream project



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 



Re: FW: X509_verify_cert() rejects all trusted certs with "default" X509_VERIFY_PARAM

2021-06-01 Thread Jakob Bohm via openssl-users

On 2021-05-28 22:50, Michael Wojcik wrote:


Just realized I sent this directly to Graham instead of to the list.

-Original Message-
From: Michael Wojcik
Sent: Friday, 28 May, 2021 09:37
To: 'Graham Leggett' 
Subject: RE: X509_verify_cert() rejects all trusted certs with "default" 
X509_VERIFY_PARAM


From: openssl-users  On Behalf Of Graham
Leggett via openssl-users
Sent: Friday, 28 May, 2021 06:30

I am lost - I can fully understand what the code is doing, but I can’t see
why openssl only trusts certs with “anyExtendedKeyUsage”.

Interesting. I wondered if this might be enforcing some RFC 5280 or CA / 
Browser Forum Baseline Requirements rule.

5280 4.2.1.12 says:

In general, this
extension will appear only in end entity certificates.

and

If the extension is present, then the certificate MUST only be used
for one of the purposes indicated.

Your certificate has serverAuth and emailProtection, yes? So it cannot be used to sign 
other certificates, and OpenSSL is correct as far as that goes. 5280 doesn't define an 
EKU for signing certificates; so perhaps the intent of the OpenSSL code is "if EKU 
is present, this probably can't be used as a CA cert without violating 5280, but I'll 
look for this 'any' usage just in case and allow that".

The errata for 5280 and the RFCs which update it do not appear to affect this 
section.

There is a very common extension to the validation of X.509
certificates (which should ideally be available as an option
parameter to OpenSSL validation APIs): The EKU in a CA:True
certificate limits the end cert EKU values that are acceptable.
The rule is NOT applied to ocspSigning due to a conflict with
that EKU authorizing the CA public key to sign OCSP responses
for the parent CA.

For example a CA with EKU=emailProtection,clientAuth cannot be
used to issue valid EKU=serverAuth certificates, however it can
still issue a delegated EKU=ocspSigning delegated OCSP signing
certificate.

In this filtering, anyExtendedKeyUsage acts as a wildcard
indicating a universal CA; in practice, the complete
absence of the EKU extension acts as an equivalent wildcard.
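
A rough sketch of that filtering rule as an application-level check over an
already-built chain (a hypothetical helper using the XKU_* flags from
x509v3.h, not an existing OpenSSL API):

    #include <stdint.h>
    #include <openssl/x509v3.h>

    /* Hypothetical helper: does every CA in the chain permit wanted_xku
     * (e.g. XKU_SSL_SERVER)?  A missing EKU or anyExtendedKeyUsage acts
     * as a wildcard, and ocspSigning is exempted as described above. */
    static int chain_allows_xku(STACK_OF(X509) *chain, uint32_t wanted_xku)
    {
        int i;

        if (wanted_xku == XKU_OCSP_SIGN)
            return 1;                          /* rule not applied to ocspSigning */

        for (i = 1; i < sk_X509_num(chain); i++) {   /* index 0 is the end cert */
            X509 *ca = sk_X509_value(chain, i);

            if (!(X509_get_extension_flags(ca) & EXFLAG_XKUSAGE))
                continue;                      /* no EKU: implicit wildcard */
            if (X509_get_extended_key_usage(ca) & XKU_ANYEKU)
                continue;                      /* explicit wildcard */
            if (!(X509_get_extended_key_usage(ca) & wanted_xku))
                return 0;                      /* this CA does not permit the usage */
        }
        return 1;
    }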

The OpenSSL 3 code discussed, as described by Graham, appears
to incorrectly apply the wildcard check without ORing it with
the normal check for inclusion of the usage for which the chain
is built and validated.  (I recommend that where such filtering
is done, it be part of chain building, as different chains may
succeed for different usages.)


The CA/BF BR 7.1.2.1, the part of the certificate profile that covers root 
certificates, says:

d. extKeyUsage
   This extension MUST NOT be present.

Now, there's no particular reason for OpenSSL to enforce CA/BF BR, and good reason for it 
not to (the "CA" part refers to commercial CAs, and not all clients are 
browsers). But it's more evidence that root certificates, at least, should not have 
extKeyUsage because browsers can correctly reject those.

The CA/BF profile is more complicated regarding what it calls "subordinate" certificates, 
aka intermediates, so for non-root trust anchors there are cases where you can get away with 
extKeyUsage. But a good rule is "only put extKeyUsage on entity [leaf] certificates".


So that really leaves us with the question "do we want OpenSSL enforcing the 
extKeyUsage rules of RFC 5280?". And I'm tempted to say yes. In principle, the 
basicConstraints CA flag and the keyUsage keyCertSign option should suffice for this, but 
defense in depth, and in cryptographic protocols consistency is extremely important.

The CAB/F "guidelines" tend to include arbitrary restrictions above and 
beyond what good X.509 software libraries should do, such as limiting 
validity to 1 year, requiring end certificate holders to be magically 
able to respond to sudden revocations for bureaucratic reasons etc.  Or 
as quoted by Michael, a rule that all roots must be universal roots with 
the no-EKU implicit wildcard.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: I installed Openssl 1.1.1k and Ubuntu 20.04 did an upgrade and reverted it back to 1.1.1f. Usually Ubuntu upgrades don’t break it.

2021-05-21 Thread Jakob Bohm via openssl-users
:~$

$ ls -alF /usr/lib/x86_64-linux-gnu/libssl*

-rw-r--r-- 1 root root 598104 Apr 27 20:37 
/usr/lib/x86_64-linux-gnu/libssl.so.1.1


This shows that the Ubuntu-installed OpenSSL was built by Ubuntu on
2021-04-27 at 20:37 in your timezone.




michael@ubuntuwpmm1tb:~$

$ ls -alF /usr/locallib/libssl*

ls -alF /usr/locallib/libssl*

ls: cannot access '/usr/locallib/libssl*': No such file or directory

$ ls -alF /usr/local/bin/openssl

ls -alF /usr/local/bin/openssl

ls: cannot access '/usr/local/bin/openssl': No such file or directory

$ /usr/local/bin/openssl version -a

/usr/local/bin/openssl version -a

-bash: /usr/local/bin/openssl: No such file or directory



From: openssl-users  On Behalf Of 
Jakob Bohm via openssl-users

Sent: Friday, May 21, 2021 10:03 AM
To: openssl-users@openssl.org
Subject: Re: I installed Openssl 1.1.1k and Ubuntu 20.04 did an 
upgrade and reverted it back to 1.1.1f. Usually Ubuntu upgrades don’t 
break it.


On 2021-05-19 19:56, Michael McKenney wrote:

I installed Openssl 1.1.1k and Ubuntu 20.04 did an upgrade and
reverted it back to 1.1.1f.   Usually Ubuntu upgrades don’t break it.

OpenSSL 1.1.1f  31 Mar 2020 (Library: OpenSSL 1.1.1k  25 Mar 2021)

built on: Thu Apr 29 14:11:04 2021 UTC

platform: linux-x86_64

options:  bn(64,64) rc4(16x,int) des(int) blowfish(ptr)

compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3
-DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ
-DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5
-DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM
-DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM
-DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DZLIB
-DNDEBUG

OPENSSLDIR: "/usr/local/ssl"

ENGINESDIR: "/usr/local/ssl/lib/engines-1.1"

Seeding source: os-specific

How do I change it back to 1.1.1k?  I tried a reinstall.  Didn’t work.

This is the directions I use to install

sudo apt-get update && sudo apt-get upgrade

openssl version -a

sudo apt install build-essential checkinstall zlib1g-dev -y

cd /usr/local/src/

sudo wget https://www.openssl.org/source/openssl-1.1.1k.tar.gz

sudo tar -xf openssl-1.1.1k.tar.gz

cd openssl-1.1.1k

sudo ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl
shared zlib

sudo make

sudo make test

sudo make install

cd /etc/ld.so.conf.d/

sudo vim openssl-1.1.1k.conf

     add    /usr/local/ssl/lib

sudo ldconfig -v

sudo mv /usr/bin/c_rehash /usr/bin/c_rehash.backup

sudo mv /usr/bin/openssl /usr/bin/openssl.backup

sudo vim /etc/environment

add
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games


    :/usr/local/games:/usr/local/ssl/bin"

source /etc/environment

echo $PATH

which openssl

openssl version -a

Sorry, but you did not state which command and output indicate
that Ubuntu undid your upgrade. What is the output of each of
the following diagnostic commands (after Ubuntu apparently
undid your upgrade)?

$ dpkg --status libssl1.1
$ dpkg --status libssl-dev
$ dpkg --status openssl
$ type openssl
$ openssl version -a
$ ls -alF /usr/lib/x86_64-linux-gnu/libssl*
$ ls -alF /usr/locallib/libssl*
$ ls -alF /usr/local/bin/openssl
$ /usr/local/bin/openssl version -a





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: I installed Openssl 1.1.1k and Ubuntu 20.04 did an upgrade and reverted it back to 1.1.1f. Usually Ubuntu upgrades don’t break it.

2021-05-21 Thread Jakob Bohm via openssl-users

On 2021-05-19 19:56, Michael McKenney wrote:


I installed Openssl 1.1.1k and Ubuntu 20.04 did an upgrade and 
reverted it back to 1.1.1f.   Usually Ubuntu upgrades don’t break it.


OpenSSL 1.1.1f  31 Mar 2020 (Library: OpenSSL 1.1.1k  25 Mar 2021)

built on: Thu Apr 29 14:11:04 2021 UTC

platform: linux-x86_64

options:  bn(64,64) rc4(16x,int) des(int) blowfish(ptr)

compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 
-DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ 
-DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 
-DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM 
-DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM 
-DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DZLIB -DNDEBUG


OPENSSLDIR: "/usr/local/ssl"

ENGINESDIR: "/usr/local/ssl/lib/engines-1.1"

Seeding source: os-specific

How do I change it back to 1.1.1k?  I tried a reinstall.  Didn’t work.

This is the directions I use to install
sudo apt-get update && sudo apt-get upgrade

openssl version -a

sudo apt install build-essential checkinstall zlib1g-dev -y

cd /usr/local/src/

sudo wget https://www.openssl.org/source/openssl-1.1.1k.tar.gz

sudo tar -xf openssl-1.1.1k.tar.gz

cd openssl-1.1.1k

sudo ./config --prefix=/usr/local/ssl --openssldir=/usr/local/ssl 
shared zlib


sudo make

sudo make test

sudo make install

cd /etc/ld.so.conf.d/

sudo vim openssl-1.1.1k.conf

add    /usr/local/ssl/lib

sudo ldconfig -v

sudo mv /usr/bin/c_rehash /usr/bin/c_rehash.backup

sudo mv /usr/bin/openssl /usr/bin/openssl.backup

sudo vim /etc/environment

add 
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games


:/usr/local/games:/usr/local/ssl/bin"

source /etc/environment

echo $PATH

which openssl

openssl version -a



Sorry, but you did not state which command and output indicate
that Ubuntu undid your upgrade. What is the output of each of
the following diagnostic commands (after Ubuntu apparently
undid your upgrade)?

$ dpkg --status libssl1.1
$ dpkg --status libssl-dev
$ dpkg --status openssl
$ type openssl
$ openssl version -a
$ ls -alF /usr/lib/x86_64-linux-gnu/libssl*
$ ls -alF /usr/locallib/libssl*
$ ls -alF /usr/local/bin/openssl
$ /usr/local/bin/openssl version -a


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: PKCS7_decrypt vs RSA OAEP padding

2021-04-15 Thread Jakob Bohm via openssl-users

On 2021-04-15 12:57, Michal Moravec wrote:


Follow-up on my previous email:

I modified my proof-of-problem program to load PKCS7 file into PKCS7 
and convert it to CMS_ContentInfo using the BIO (See convert.c in the 
attachment). It is similar to this:


handle_encrypted_content(SCEP *handle, SCEP_DATA *data, PKCS7 *p7env, 
X509 *dec_cert, EVP_PKEY *dec_key) {

...
CMS_ContentInfo *cmsEnv = NULL;
BIO *conversion = NULL;
conversion = BIO_new(BIO_s_mem());
PEM_write_bio_PKCS7(conversion, p7env);
cmsEnv = PEM_read_bio_CMS(conversion, NULL, NULL, NULL);
CMS_decrypt(cmsEnv, dec_key, dec_cert, NULL, decData, 0);


convert.c works well with my test data and CMS_decrypt successfully 
decrypts the CMS_ContentInfo.


When I put this code into practice = using it in the actual library -> 
https://github.com/EtneteraLogicworks/libscep/commit/d94a24b28fcf3a1c1f0dc5e48e274627eed2b3f6

Calling CMS_decrypt results in segfault inside libcrypto library:
Apr 15 12:08:36 scepdev kernel: openxpkid (main[759]: segfault at 
ac6d8cd0 ip 7f6b4d3040a0 sp 7ffde9477738 error 5 in 
libcrypto.so.1.1[7f6b4d29c000+19e000]


I have no idea how to debug this :-( Way out of my league here.



Try linking libcrypto.so.1.1 with debug symbols included (not
stripped).  This should make the error message point to the
function, maybe even show the call stack.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Query on SSL Mutual Authentication on Server

2021-03-02 Thread Jakob Bohm via openssl-users

On 2021-03-01 17:28, Viktor Dukhovni wrote:

On Mon, Mar 01, 2021 at 09:21:29PM +0530, Archana wrote:


I am new to SSL programming. On our SSL Server implementation, we are
trying to enforce Mutual Authentication. Is it Mandatory to provide a user
defined Callback using SSL_ctx_setverify()

No callback is required (callbacks are primarily useful for logging,
though they can also, with care, be used to make chain verification
more "permissive", but there be dragons).  However, you must then
still call:

 int mode = SSL_VERIFY_PEER
  | SSL_VERIFY_FAIL_IF_NO_PEER_CERT
  | SSL_VERIFY_CLIENT_ONCE;

 SSL_CTX_set_verify(ctx, mode, NULL);

to set the verification mode to request (and enforce) the presence of a
client certificate.  Depending on the client, you may also need to make
sure to provide a non-empty list of client CA hints that includes all
the trust-anchor CAs from which you'll accept client certificate chains.
(Clients using Java SSL APIs typically require that to be the case).

This can be done via:

 const char *CAfile = "/your/CA/file";
 STACK_OF(X509_NAME) *calist = SSL_load_client_CA_file(CAfile);

 if (calist == NULL) {
 /* log error loading client CA names */
 }
 SSL_CTX_set_client_CA_list(server_ctx, calist);


If yes, Is it expected to do the IP or hostname validation?

Neither, authorization of the client is up to you.  OpenSSL will check
the dates, validity of the signatures, ... in the clients certificate
chain, but checking whether any of the subject names in the client
certificate are allowed to access your server is up to you.

There is no prior expectation that the client's certificate is
specifically related to its IP address or hostname.

You may in fact, depending on the structure of your code, be able to
configure the expected client name prior to the SSL handshake
with SSL_accept(3), but after accepting the client TCP connection.

To set the expected hostname(s), see the documentation of:

 int SSL_set1_host(SSL *s, const char *hostname);
 int SSL_add1_host(SSL *s, const char *hostname);

For IP addresses, there's a slightly lower-level interface:

 X509_VERIFY_PARAM *SSL_CTX_get0_param(SSL_CTX *ctx);

 int X509_VERIFY_PARAM_set1_ip(X509_VERIFY_PARAM *param,
   const unsigned char *ip, size_t iplen);
 int X509_VERIFY_PARAM_set1_ip_asc(X509_VERIFY_PARAM *param, const char 
*ipasc);

or after the handshake completes, you can call one of:

 int X509_check_host(X509 *, const char *name, size_t namelen,
 unsigned int flags, char **peername);
 int X509_check_email(X509 *, const char *address, size_t addresslen,
  unsigned int flags);
 int X509_check_ip(X509 *, const unsigned char *address, size_t 
addresslen,
   unsigned int flags);
 int X509_check_ip_asc(X509 *, const char *address, unsigned int flags);


Just out of curiosity:  What is the recommended way to check
the authenticated e-mail and/or DN of the client certificate,
given that those are the most common identities in such
certificates (except in server-to-server scenarios).
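
For concreteness, something along the lines of the following sketch -
a hypothetical helper built only from the functions documented above,
not necessarily the recommended way:

    #include <openssl/ssl.h>
    #include <openssl/x509v3.h>

    /* Hypothetical post-handshake check: log the client subject DN and
     * test whether the certificate carries the expected e-mail identity.
     * Assumes the handshake already required a client certificate. */
    static int client_is_allowed(SSL *ssl, const char *expected_email)
    {
        X509 *peer = SSL_get_peer_certificate(ssl);  /* ref-counted copy */
        char subject[256];
        int ok = 0;

        if (peer == NULL)
            return 0;                                /* no client cert */
        X509_NAME_oneline(X509_get_subject_name(peer), subject, sizeof(subject));
        /* subject[] now holds the DN for logging or authorization lookups */
        if (X509_check_email(peer, expected_email, 0, 0) == 1)
            ok = 1;
        X509_free(peer);
        return ok;
    }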


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Encoding of AlgorithmIdentifier with NULL parameters

2021-01-28 Thread Jakob Bohm via openssl-users

If only one or a few parsers are broken, they need to be fixed.

If many broken parsers have proliferated due to generators
semi-violating DER by not omitting the empty field, that has become the
new reality that generators must deal with.

PKIX arbitrarily limiting serial numbers to 159 bits has created a 
similar unfortunate reality.


On 2021-01-29 03:19, Blumenthal, Uri - 0553 - MITLL wrote:
“OPTIONAL” means the parser _must_ deal with complete absence, not only 
encoded as ASN.1 NULL.


Broken parsers should be fixed.

--

Regards,

Uri

//

/There are two ways to design a system. One is to make is so simple 
there are obviously no deficiencies./


/The other is to make it so complex there are no obvious deficiencies./

/  
    -  C. A. R. Hoare/


From: openssl-users-bounce  on 
behalf of openssl-users 

Organization: WiseMo A/S
Reply-To: Jakob Bohm 
Date: Thursday, January 28, 2021 at 21:10
To: openssl-users 
Subject: Re: Encoding of AlgorithmIdentifier with NULL parameters

Also note that the official ASN.1 declaration for
AlgorithmIdentifier (from X.509 (2012), section 7.2) marks
the parameters field as OPTIONAL, so parsers really should
accept its absence.

However if broken parsers are common (this thread
only found one such parser), maybe it would be
good practice to include the NULL value for compatibility.

AlgorithmIdentifier{ALGORITHM:SupportedAlgorithms} ::= SEQUENCE {
     algorithm ALGORITHM.({SupportedAlgorithms}),
     parameters ALGORITHM.({SupportedAlgorithms}{@algorithm}) OPTIONAL,
... }





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Encoding of AlgorithmIdentifier with NULL parameters

2021-01-28 Thread Jakob Bohm via openssl-users

Also note that the official ASN.1 declaration for
AlgorithmIdentifier (from X.509 (2012), section 7.2) marks
the parameters field as OPTIONAL, so parsers really should
accept its absence.

However if broken parsers are common (this thread
only found one such parser), maybe it would be
good practice to include the NULL value for compatibility.

AlgorithmIdentifier{ALGORITHM:SupportedAlgorithms} ::= SEQUENCE {
    algorithm ALGORITHM.({SupportedAlgorithms}),
    parameters ALGORITHM.({SupportedAlgorithms}{@algorithm}) OPTIONAL,
... }
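
For illustration, the two encodings being compared below differ only in
the outer SEQUENCE length and the trailing 05 00; both are valid DER for
the declaration above (cf. the asn1parse dumps quoted below):

    With NULL parameters (as emitted by 1.1.1g):
       30 0d 06 09 2a 86 48 86 f7 0d 01 01 0b 05 00
    With the parameters field absent (as emitted by 3.0.0-alpha10):
       30 0b 06 09 2a 86 48 86 f7 0d 01 01 0b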

On 2021-01-28 20:07, Thulasi Goriparthi wrote:
I am trying to provide a test certificate generated by 
openssl-3.0.0-alpha10 to a third party certificate parser/manager. 
This software expects AlgorithmIdentifier to either have parameters or 
to have null encoded (05 00) parameters which seems to be missing in 
the certificate.


Certificate generated by openssl-3.0.0-alpha10

    0:d=0  hl=4 l=1030 cons: SEQUENCE

    4:d=1  hl=4 l= 752 cons: SEQUENCE

    8:d=2  hl=2 l=   3 cons: cont [ 0 ]

   10:d=3  hl=2 l=   1 prim: INTEGER           :02

   13:d=2  hl=2 l=   1 prim: INTEGER           :01

*   16:d=2  hl=2 l=  11 cons: SEQUENCE *

*   18:d=3  hl=2 l=   9 prim: OBJECT            :sha256WithRSAEncryption*

*   29:d=2  hl=3 l= 143 cons: *SEQUENCE

   32:d=3  hl=2 l=  11 cons: SET

   34:d=4  hl=2 l=   9 cons: SEQUENCE

   36:d=5  hl=2 l=   3 prim: OBJECT            :countryName


Certificate generated by openssl-1.1.1g

    0:d=0  hl=4 l= 988 cons: SEQUENCE

    4:d=1  hl=4 l= 708 cons: SEQUENCE

    8:d=2  hl=2 l=   3 cons: cont [ 0 ]

   10:d=3  hl=2 l=   1 prim: INTEGER           :02

   13:d=2  hl=2 l=   1 prim: INTEGER           :01

*   16:d=2  hl=2 l=  13 cons: SEQUENCE *

*   18:d=3  hl=2 l=   9 prim: OBJECT            :sha256WithRSAEncryption*

*   29:d=3  hl=2 l=   0 prim: NULL *

   31:d=2  hl=3 l= 143 cons: SEQUENCE

   34:d=3  hl=2 l=  11 cons: SET

   36:d=4  hl=2 l=   9 cons: SEQUENCE

   38:d=5  hl=2 l=   3 prim: OBJECT            :countryName


From https://tools.ietf.org/html/rfc5280#section-4.1.1.2, It isn't 
clear if NULL parameters can be completely omitted or if it should 
still have NULL encoding.


Is this a too stringent check in the third-party s/w or a miss in 
openss-3.0.0-alpha10?


Thanks,
Thulasi.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: PKCS12 APIs with fips 3.0

2021-01-28 Thread Jakob Bohm via openssl-users
If that is a hypothetical context, which context is the official design 
goal of the OpenSSL Foundation's validation effort?


On 2021-01-28 11:26, Tomas Mraz wrote:

This is a purely hypothetical context. Besides, as I said below - the
PKCS12KDF should not be used with modern PKCS12 files. Because it can
be used only with obsolete encryption algorithms anyway - the best one
being 3DES for the encryption and SHA1 for the KDF.

Tomas

On Thu, 2021-01-28 at 11:08 +0100, Jakob Bohm via openssl-users wrote:

If the context does not limit the use of higher level compositions,
then
OpenSSL 3.0 provides no way to satisfy the usual requirement that a
product can be set into "FIPS mode" and not invoke the non-validated
lower level algorithms in the "default" provider.

The usual context is to "sell" (give) products to the US Government
or
its contractors that have a "FIPS" box-checking procurement
requirement.

On 2021-01-28 10:46, Tomas Mraz wrote:

There is unfortunately no simple straightforward answer to this
question. It really depends on the context.

Anyway OpenSSL 3.0 gives you all the flexibility needed.

Tomas

On Thu, 2021-01-28 at 10:24 +0100, Jakob Bohm via openssl-users
wrote:

Does FIPS 140 or the related legal requirements limit the use of
higher
level compositions such as PKCS12KDF, when using only validated
cryptography for the underlying operations?

On 2021-01-28 09:36, Tomas Mraz wrote:

I do not get how you came to this conclusion. The "true" FIPS
mode
can
be easily achieved with OpenSSL 3.0 - either by loading just
the
fips
and base provider, or by loading both default and fips
providers
but
using the "fips=yes" default property (without the "?").

The PKCS12KDF does not work because it is not an FIPS approved
KDF
algorithm so it cannot really work in the "true" FIPS mode. But
IMO
this does not mean that PKCS12 keys do not work at all - if you
use
right (more modern) algorithm based on PBKDF2 to do the password
based
key derivation, they should work.

That in 1.0.x the PKCS12 worked with the FIPS module with
legacy
algorithms it only shows that the "true" FIPS mode was not as
"true" as
you might think. There were some crypto algorithms like the
KDFs
outside of the FIPS module boundary.

Tomas Mraz






Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: PKCS12 APIs with fips 3.0

2021-01-28 Thread Jakob Bohm via openssl-users

If the context does not limit the use of higher level compositions, then
OpenSSL 3.0 provides no way to satisfy the usual requirement that a
product can be set into "FIPS mode" and not invoke the non-validated
lower level algorithms in the "default" provider.

The usual context is to "sell" (give) products to the US Government or
its contractors that have a "FIPS" box-checking procurement requirement.

On 2021-01-28 10:46, Tomas Mraz wrote:

There is unfortunately no simple straightforward answer to this
question. It really depends on the context.

Anyway OpenSSL 3.0 gives you all the flexibility needed.

Tomas

On Thu, 2021-01-28 at 10:24 +0100, Jakob Bohm via openssl-users wrote:

Does FIPS 140 or the related legal requirements limit the use of
higher
level compositions such as PKCS12KDF, when using only validated
cryptography for the underlying operations?

On 2021-01-28 09:36, Tomas Mraz wrote:

I do not get how you came to this conclusion. The "true" FIPS mode
can
be easily achieved with OpenSSL 3.0 - either by loading just the
fips
and base provider, or by loading both default and fips providers
but
using the "fips=yes" default property (without the "?").

The PKCS12KDF does not work because it is not an FIPS approved KDF
algorithm so it cannot really work in the "true" FIPS mode. But IMO
this does not mean that PKCS12 keys do not work at all - if you use
right (more modern) algorithm based on PBKDF2 to do the password
based
key derivation, they should work.

That in 1.0.x the PKCS12 worked with the FIPS module with legacy
algorithms it only shows that the "true" FIPS mode was not as
"true" as
you might think. There were some crypto algorithms like the KDFs
outside of the FIPS module boundary.

Tomas Mraz

On Thu, 2021-01-28 at 09:26 +0100, Jakob Bohm via openssl-users
wrote:

Does that mean that OpenSSL 3.0 will not have a true "FIPS mode"
where
all the non-FIPS algorithms are disabled, but the FIPS-
independent
schemes/protocols in the "default" provider remains available?

Remember that in other software systems, such as OpenSSL 1.0.x
and
MS
CryptoAPI, FIPS mode causes all non-validated algorithms to fail
hard,
so all higher level operations are guaranteed to use only FIPS-
validated
crypto.

On 2021-01-27 02:01, Dr Paul Dale wrote:

You could set the default property query to "?fips=yes".  This
will
prefer FIPS algorithms over any others but will not prevent
other
algorithms from being fetched.

Pauli

On 27/1/21 10:47 am, Zeke Evans wrote:

I understand that PKCS12 cannot be implemented in the fips
provider
but I'm looking for a suitable workaround, particularly
something
that is close to the same behavior as 1.0.2 with the fips 2.0
module.

In my case, the default provider is loaded but I am calling
EVP_set_default_properties(NULL, "fips=yes").  I can wrap
calls
to
the PKCS12 APIs and momentarily allow non-fips algorithms
(ie:
"fips=no" or "provider=default") but that prevents the PKCS12
implementation from using the crypto implementations in the
fips
provider.  Is there a property string or some other way to
allow
PKCS12KDF in the default provider as well as the crypto
methods
in
the fips provider?  I have tried "provider=default,fips=yes"
but
that
doesn't seem to work.

Using the default provider is probably a reasonable
workaround
for
reading in PKCS12 files in order to maintain backwards
compatibility.  Is there a recommended method going forward
that
would allow reading and writing to a key store while only
using
the
fips provider?

Thanks,
Zeke Evans
Micro Focus

-Original Message-
From: openssl-users  On
Behalf
Of
Dr Paul Dale
Sent: Tuesday, January 26, 2021 5:22 PM
To: openssl-users@openssl.org
Subject: Re: PKCS12 APIs with fips 3.0

I'm not even sure that NIST can validate the PKCS#12 KDF.
If it can't be validated, it doesn't belong in the FIPS
provider.


Pauli

On 26/1/21 10:48 pm, Tomas Mraz wrote:

On Tue, 2021-01-26 at 11:45 +, Matt Caswell wrote:

On 26/01/2021 11:05, Jakob Bohm via openssl-users wrote:

On 2021-01-25 17:53, Zeke Evans wrote:

Hi,


Many of the PKCS12 APIs (ie: PKCS12_create,
PKCS12_parse,
PKCS12_verify_mac) do not work in OpenSSL 3.0 when
using
the fips
provider.  It looks like that is because they try to
load
PKCS12KDF
which is not implemented in the fips provider.  These
were all
working in 1.0.2 with the fips 2.0 module.  Will they
be
supported
in 3.0 with fips?  If not, is there a way for
applications running
in fips approved mode to support the same
functionality
and use
existing stores/files that contain PKCS12 objects?



This is an even larger issue: Is OpenSSL 3.x so badly
designed that
the "providers" need to separately implement every
standard
or
non-standard combination of algorithm invocations?

In a properly abstracted design PKCS12KDF would 

Re: PKCS12 APIs with fips 3.0

2021-01-28 Thread Jakob Bohm via openssl-users

Does FIPS 140 or the related legal requirements limit the use of higher
level compositions such as PKCS12KDF, when using only validated
cryptography for the underlying operations?

On 2021-01-28 09:36, Tomas Mraz wrote:

I do not get how you came to this conclusion. The "true" FIPS mode can
be easily achieved with OpenSSL 3.0 - either by loading just the fips
and base provider, or by loading both default and fips providers but
using the "fips=yes" default property (without the "?").

The PKCS12KDF does not work because it is not an FIPS approved KDF
algorithm so it cannot really work in the "true" FIPS mode. But IMO
this does not mean that PKCS12 keys do not work at all - if you use
right (more modern) algorithm based on PBKDF2 to do the password based
key derivation, they should work.

That in 1.0.x the PKCS12 worked with the FIPS module with legacy
algorithms it only shows that the "true" FIPS mode was not as "true" as
you might think. There were some crypto algorithms like the KDFs
outside of the FIPS module boundary.

Tomas Mraz

On Thu, 2021-01-28 at 09:26 +0100, Jakob Bohm via openssl-users wrote:

Does that mean that OpenSSL 3.0 will not have a true "FIPS mode"
where
all the non-FIPS algorithms are disabled, but the FIPS-independent
schemes/protocols in the "default" provider remains available?

Remember that in other software systems, such as OpenSSL 1.0.x and
MS
CryptoAPI, FIPS mode causes all non-validated algorithms to fail
hard,
so all higher level operations are guaranteed to use only FIPS-
validated
crypto.

On 2021-01-27 02:01, Dr Paul Dale wrote:

You could set the default property query to "?fips=yes".  This
will
prefer FIPS algorithms over any others but will not prevent other
algorithms from being fetched.

Pauli

On 27/1/21 10:47 am, Zeke Evans wrote:

I understand that PKCS12 cannot be implemented in the fips
provider
but I'm looking for a suitable workaround, particularly
something
that is close to the same behavior as 1.0.2 with the fips 2.0
module.

In my case, the default provider is loaded but I am calling
EVP_set_default_properties(NULL, "fips=yes").  I can wrap calls
to
the PKCS12 APIs and momentarily allow non-fips algorithms (ie:
"fips=no" or "provider=default") but that prevents the PKCS12
implementation from using the crypto implementations in the fips
provider.  Is there a property string or some other way to allow
PKCS12KDF in the default provider as well as the crypto methods
in
the fips provider?  I have tried "provider=default,fips=yes" but
that
doesn't seem to work.

Using the default provider is probably a reasonable workaround
for
reading in PKCS12 files in order to maintain backwards
compatibility.  Is there a recommended method going forward that
would allow reading and writing to a key store while only using
the
fips provider?

Thanks,
Zeke Evans
Micro Focus

-Original Message-
From: openssl-users  On Behalf
Of
Dr Paul Dale
Sent: Tuesday, January 26, 2021 5:22 PM
To: openssl-users@openssl.org
Subject: Re: PKCS12 APIs with fips 3.0

I'm not even sure that NIST can validate the PKCS#12 KDF.
If it can't be validated, it doesn't belong in the FIPS provider.


Pauli

On 26/1/21 10:48 pm, Tomas Mraz wrote:

On Tue, 2021-01-26 at 11:45 +, Matt Caswell wrote:

On 26/01/2021 11:05, Jakob Bohm via openssl-users wrote:

On 2021-01-25 17:53, Zeke Evans wrote:

Hi,


Many of the PKCS12 APIs (ie: PKCS12_create, PKCS12_parse,
PKCS12_verify_mac) do not work in OpenSSL 3.0 when using
the fips
provider.  It looks like that is because they try to load
PKCS12KDF
which is not implemented in the fips provider.  These
were all
working in 1.0.2 with the fips 2.0 module.  Will they be
supported
in 3.0 with fips?  If not, is there a way for
applications running
in fips approved mode to support the same functionality
and use
existing stores/files that contain PKCS12 objects?



This is an even larger issue: Is OpenSSL 3.x so badly
designed that
the "providers" need to separately implement every standard
or
non-standard combination of algorithm invocations?

In a properly abstracted design PKCS12KDF would be
implemented by
invoking general EVP functions for underlying algorithms,
which
would in turn invoke the provider versions of those
algorithms.


This is exactly the way it works. The implementation of
PKCS12KDF
fetches the underlying digest algorithm using whatever
providers it
has available. So, for example, if the PKCS12KDF
implementation needs
to use SHA256, then it will fetch an available implementation
for it
- and that implementation may come from the FIPS provider (or
any
other provider).

However, in 3.0, KDFs are themselves fetchable cryptographic
algorithms implemented by providers. The FIPS module
implements a set
of KDFs - but PKCS12KDF is not one of them. It's only
available from
the default provider.

So, the s

Re: PKCS12 APIs with fips 3.0

2021-01-28 Thread Jakob Bohm via openssl-users
Does that mean that OpenSSL 3.0 will not have a true "FIPS mode" where 
all the non-FIPS algorithms are disabled, but the FIPS-independent 
schemes/protocols in the "default" provider remains available?


Remember that in other software systems, such as OpenSSL 1.0.x and MS 
CryptoAPI, FIPS mode causes all non-validated algorithms to fail hard, 
so all higher level operations are guaranteed to use only FIPS-validated 
crypto.
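
For context, the closest 3.0 equivalent that has been suggested is
provider plumbing along these lines - a minimal sketch, assuming a build
with the fips module installed and configured; whether this satisfies
the "fail hard" requirement is exactly the open question:

    #include <openssl/crypto.h>
    #include <openssl/provider.h>
    #include <openssl/evp.h>

    /* Option 1: a library context holding only fips + base, so no
     * non-validated algorithm implementations exist in it at all. */
    static OSSL_LIB_CTX *make_fips_only_ctx(void)
    {
        OSSL_LIB_CTX *libctx = OSSL_LIB_CTX_new();

        if (libctx == NULL)
            return NULL;
        if (OSSL_PROVIDER_load(libctx, "fips") == NULL
                || OSSL_PROVIDER_load(libctx, "base") == NULL) {
            OSSL_LIB_CTX_free(libctx);
            return NULL;
        }
        /* Option 2 (alternative): load default + fips instead, and require
         * validated implementations for every fetch:
         *     EVP_set_default_properties(libctx, "fips=yes");
         */
        return libctx;
    }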


On 2021-01-27 02:01, Dr Paul Dale wrote:
You could set the default property query to "?fips=yes".  This will 
prefer FIPS algorithms over any others but will not prevent other 
algorithms from being fetched.


Pauli

On 27/1/21 10:47 am, Zeke Evans wrote:
I understand that PKCS12 cannot be implemented in the fips provider 
but I'm looking for a suitable workaround, particularly something 
that is close to the same behavior as 1.0.2 with the fips 2.0 module.


In my case, the default provider is loaded but I am calling 
EVP_set_default_properties(NULL, "fips=yes").  I can wrap calls to 
the PKCS12 APIs and momentarily allow non-fips algorithms (ie: 
"fips=no" or "provider=default") but that prevents the PKCS12 
implementation from using the crypto implementations in the fips 
provider.  Is there a property string or some other way to allow 
PKCS12KDF in the default provider as well as the crypto methods in 
the fips provider?  I have tried "provider=default,fips=yes" but that 
doesn't seem to work.


Using the default provider is probably a reasonable workaround for 
reading in PKCS12 files in order to maintain backwards 
compatibility.  Is there a recommended method going forward that 
would allow reading and writing to a key store while only using the 
fips provider?


Thanks,
Zeke Evans
Micro Focus

-Original Message-
From: openssl-users  On Behalf Of 
Dr Paul Dale

Sent: Tuesday, January 26, 2021 5:22 PM
To: openssl-users@openssl.org
Subject: Re: PKCS12 APIs with fips 3.0

I'm not even sure that NIST can validate the PKCS#12 KDF.
If it can't be validated, it doesn't belong in the FIPS provider.


Pauli

On 26/1/21 10:48 pm, Tomas Mraz wrote:

On Tue, 2021-01-26 at 11:45 +, Matt Caswell wrote:


On 26/01/2021 11:05, Jakob Bohm via openssl-users wrote:

On 2021-01-25 17:53, Zeke Evans wrote:

Hi,


Many of the PKCS12 APIs (ie: PKCS12_create, PKCS12_parse,
PKCS12_verify_mac) do not work in OpenSSL 3.0 when using the fips
provider.  It looks like that is because they try to load PKCS12KDF
which is not implemented in the fips provider.  These were all
working in 1.0.2 with the fips 2.0 module.  Will they be supported
in 3.0 with fips?  If not, is there a way for applications running
in fips approved mode to support the same functionality and use
existing stores/files that contain PKCS12 objects?



This is an even larger issue: Is OpenSSL 3.x so badly designed that
the "providers" need to separately implement every standard or
non-standard combination of algorithm invocations?

In a properly abstracted design PKCS12KDF would be implemented by
invoking general EVP functions for underlying algorithms, which
would in turn invoke the provider versions of those algorithms.


This is exactly the way it works. The implementation of PKCS12KDF
fetches the underlying digest algorithm using whatever providers it
has available. So, for example, if the PKCS12KDF implementation needs
to use SHA256, then it will fetch an available implementation for it
- and that implementation may come from the FIPS provider (or any
other provider).

However, in 3.0, KDFs are themselves fetchable cryptographic
algorithms implemented by providers. The FIPS module implements a set
of KDFs - but PKCS12KDF is not one of them. It's only available from
the default provider.

So, the summary is, while you can set things up so that all your
crypto, including any digests used by the PKCS12KDF, all come from
the FIPS provider, there is no getting away from the fact that you
still need to have the default provider loaded in order to have an
implementation of the PKCS12KDF itself - which will obviously be
outside the module boundary.

There aren't any current plans to bring the implementation of
PKCS12KDF inside the FIPS module. I don't know whether that is
feasible or not.


IMO PKCS12KDF should not be in the FIPS module as this is not a FIPS
approved KDF algorithm. Besides that KDF should not IMO be needed for
"modern" PKCS12 files. I need to test that though.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: PKCS12 APIs with fips 3.0

2021-01-26 Thread Jakob Bohm via openssl-users

On 2021-01-25 17:53, Zeke Evans wrote:


Hi,

Many of the PKCS12 APIs (ie: PKCS12_create, PKCS12_parse, 
PKCS12_verify_mac) do not work in OpenSSL 3.0 when using the fips 
provider.  It looks like that is because they try to load PKCS12KDF 
which is not implemented in the fips provider.  These were all working 
in 1.0.2 with the fips 2.0 module.  Will they be supported in 3.0 with 
fips?  If not, is there a way for applications running in fips 
approved mode to support the same functionality and use existing 
stores/files that contain PKCS12 objects?



This is an even larger issue: Is OpenSSL 3.x so badly designed
that the "providers" need to separately implement every standard
or non-standard combination of algorithm invocations?

In a properly abstracted design PKCS12KDF would be implemented by
invoking general EVP functions for underlying algorithms, which
would in turn invoke the provider versions of those algorithms.

The only exception would be if FIPS allowed implementing PKCS12KDF
using an otherwise unapproved algorithm such as SHA1.  In that
particular case, it would make sense to check if a provider offered
such a PKCS12KDF variant before trying (and failing) to run
provider-independent code that invokes the provider implementation
of a FIPS-unapproved algorithm.
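
For reference, what such provider-independent invocation looks like at
the application level in 3.0 - a rough sketch assuming the EVP_KDF API
as eventually documented, with illustrative property strings and
parameter values:

    #include <openssl/kdf.h>
    #include <openssl/params.h>

    /* The KDF itself is fetched from the default provider; its underlying
     * digest is fetched separately and may come from the fips provider. */
    static int derive_pkcs12_key(unsigned char *out, size_t outlen,
                                 char *pass, size_t passlen,
                                 unsigned char *salt, size_t saltlen)
    {
        EVP_KDF *kdf = EVP_KDF_fetch(NULL, "PKCS12KDF", "provider=default");
        EVP_KDF_CTX *kctx = kdf ? EVP_KDF_CTX_new(kdf) : NULL;
        char digest[] = "SHA256";
        int id = 1, iter = 2048, ok = 0;
        OSSL_PARAM params[] = {
            OSSL_PARAM_construct_utf8_string("digest", digest, 0),
            OSSL_PARAM_construct_octet_string("pass", pass, passlen),
            OSSL_PARAM_construct_octet_string("salt", salt, saltlen),
            OSSL_PARAM_construct_int("id", &id),
            OSSL_PARAM_construct_int("iter", &iter),
            OSSL_PARAM_construct_end()
        };

        if (kctx != NULL && EVP_KDF_derive(kctx, out, outlen, params) > 0)
            ok = 1;
        EVP_KDF_CTX_free(kctx);
        EVP_KDF_free(kdf);
        return ok;
    }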

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Random and rare Seg faults at openssl library level

2021-01-08 Thread Jakob Bohm via openssl-users

On 2021-01-07 18:05, Ken Goldman wrote:

On 1/7/2021 10:11 AM, Michael Wojcik wrote:


$ cat /etc/redhat-release && openssl version
CentOS Linux release 7.9.2009 (Core)
OpenSSL 1.0.2k-fips  26 Jan 2017


Ugh. Well, OP should have made that clear in the original message.

And this is one of the problems with using an OpenSSL supplied by the 
OS vendor.


In defense of "the OS vendor", meaning the distro, it's a big task to
upgrade to a new openssl major release.  Because there is often not ABI
compatibility, every package has to be ported, built, and tested.
A distro release that is in long term support doesn't do that often.




In defense of long term support distros, until a few years ago, no one 
suspected that OpenSSL would come under a new leadership that actively 
did everything to make it near-impossible to maintain backported 
security patches for a typical 5+ year distro lifecycle (with 
OpenSSL-independent start date).


Until 1.0.2, all OpenSSL releases were incremental patch-steps from the 
old 0.9.x series, allowing distro maintainers to manually cherry pick 
changes for doing ABI-compatible patches for whichever 1.0.x or 0.9.x 
was current at the start of their lifecycle.  Then the new leadership 
started to restructure the code even in supposedly patch-level releases.


A lot of long term support distros are now firmly stuck with unsupported 
OpenSSL 1.0.2 and/or short life cycle 1.1.1.


Not all long term distros are run by rich companies like IBM/RedHat that 
can purchase support plans, resulting in further popularity of OpenSSL 
forks.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Regarding #def for 'SSL_R_PEER_ERROR_NO_CIPHER' and 'SSL_R_NO_CERTIFICATE_RETURNED' in openssl3.0

2020-12-07 Thread Jakob Bohm via openssl-users

On 07/12/2020 12:39, Matt Caswell wrote:


On 04/12/2020 13:28, Narayana, Sunil Kumar wrote:

Hi,

     We are trying to upgrade our application from OpenSSL
1.0.2 to OpenSSL 3.0, during which we observe the following errors.

It looks like the #defines below were removed from 1.1 onwards. Should
the application also drop its usage of them, or is there an
alternative to be used in the application?

1.0.x -> 1.1.x is a breaking change, and so is 1.1.x to 3.0. Return
codes are liable to change in these upgrades.


error: 'SSL_R_PEER_ERROR_NO_CIPHER' was not declared in this scope

This one was only ever used in the SSLv2 implementation. Since no one
uses SSLv2 any more and it is considered highly insecure its
implementation was removed some while ago. So the reason code was also
deleted.

So what error is returned by SSL3/TLS1.x when the client (erroneously)
offers an empty cipher list?

error: 'SSL_R_NO_CERTIFICATE_RETURNED' was not declared in this scope

This reason code existed in 1.0.2 but was never used by anything.

Matt
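
Returning to the porting question above: one application-side workaround
(a hypothetical shim, not an OpenSSL recommendation) is to guard the
disappeared reason codes so the same source builds against 1.0.2, 1.1.x
and 3.0:

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    /* Maps a few reason codes to log strings; codes removed in newer
     * releases are compiled in only when the macro still exists. */
    static const char *reason_hint(unsigned long err)
    {
        switch (ERR_GET_REASON(err)) {
    #ifdef SSL_R_PEER_ERROR_NO_CIPHER
        case SSL_R_PEER_ERROR_NO_CIPHER:
            return "peer offered no acceptable cipher (SSLv2-era code)";
    #endif
        case SSL_R_NO_SHARED_CIPHER:
            return "no shared cipher";
        default:
            return NULL;
        }
    }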




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Server application hangs on SS_read, even when client disconnects

2020-11-16 Thread Jakob Bohm via openssl-users

(Top posting to match what Mr. André does):

TCP without keepalive will time out the connection a few minutes after
sending any data that doesn't get a response.

TCP without keepalive with no outstanding send (so only a blocking
recv) and nothing outstanding at the other end will probably hang
almost forever as there is nothing indicating that there is actual
data lost in transit.
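
A minimal sketch of enabling keepalive on the conversation socket before
handing it to OpenSSL (the interval values are purely illustrative, and
the TCP_KEEP* knobs are Linux-specific):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    static int enable_keepalive(int fd)
    {
        int on = 1, idle = 60, intvl = 10, probes = 5;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
    #ifdef TCP_KEEPIDLE
        /* Without these, Linux falls back to the (very long) sysctl defaults. */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &probes, sizeof(probes));
    #endif
        return 0;
    }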

On 2020-11-13 17:13, Brice André wrote:

Hello,

And many thanks for the answer.

"Does the server parent process close its copy of the conversation 
socket?" : I checked in my code, but it seems that no. Is it needed  ? 
May it explain my problem ?


" Do you have keepalives enabled?" To be honest, I did not know it was 
possible to not enable them. I checked with command "netstat -tnope" 
and it tells me that it is not enabled.


I suppose that, if for some reason, the communication with the client 
is lost (crash of client, loss of network, etc.) and keepalive is not 
enabled, this may fully explain my problem ?


If yes, do you have an idea of why keepalive is not enabled ? I 
thought that by default on linux it was ?


Many thanks,
Brice


On Fri, 13 Nov 2020 at 15:43, Michael Wojcik
<michael.woj...@microfocus.com> wrote:


> From: openssl-users <openssl-users-boun...@openssl.org> On Behalf Of Brice André
> Sent: Friday, 13 November, 2020 05:06

> ... it seems that in some rare execution cases, the server
performs a SSL_read,
> the client disconnects in the meantime, and the server never
detects the
> disconnection and remains stuck in the SSL_read operation.

...

> #0  0x7f836575d210 in __read_nocancel () from
/lib/x86_64-linux-gnu/libpthread.so.0
> #1  0x7f8365c8ccec in ?? () from
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
> #2  0x7f8365c8772b in BIO_read () from
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1

So OpenSSL is in a blocking read of the socket descriptor.

> tcp        0      0 5.196.111.132:5413      85.27.92.8:25856       ESTABLISHED 19218/./MabeeServer
> tcp        0      0 5.196.111.132:5412      85.27.92.8:26305       ESTABLISHED 19218/./MabeeServer

> From this log, I can see that I have two established connections
with remote
> client machine on IP 109.133.193.70. Note that it's normal to
have two connexions
> because my client-server protocol relies on two distinct TCP
connexions.

So the client has not, in fact, disconnected.

When a system closes one end of a TCP connection, the stack will
send a TCP packet
with either the FIN or the RST flag set. (Which one you get
depends on whether the
stack on the closing side was holding data for the conversation
which the application
hadn't read.)

The sockets are still in ESTABLISHED state; therefore, no FIN or
RST has been
received by the local stack.

There are various possibilities:

- The client system has not in fact closed its end of the
conversation. Sometimes
this happens for reasons that aren't immediately apparent; for
example, if the
client forked and allowed the descriptor for the conversation
socket to be inherited
by the child, and the child still has it open.

- The client system shut down suddenly (crashed) and so couldn't
send the FIN/RST.

- There was a failure in network connectivity between the two
systems, and consequently
the FIN/RST couldn't be received by the local system.

- The connection is in a state where the peer can't send the
FIN/RST, for example
because the local side's receive window is zero. That shouldn't be
the case, since
OpenSSL is (apparently) blocked in a receive on the connection.
but as I don't have
the complete picture I can't rule it out.

> This let me think that the connexion on which the SSL_read is
listening is
> definitively dead (no more TCP keepalive)

"definitely dead" doesn't have any meaning in TCP. That's not one
of the TCP states,
or part of the other TCP or IP metadata associated with the local
port (which is
what matters).

Do you have keepalives enabled?

> and that, for a reason I do not understand, the SSL_read keeps
blocked into it.

The reason is simple: The connection is still established, but
there's no data to
receive. The question isn't why SSL_read is blocking; it's why you
think the
connection is gone, but the stack thinks otherwise.

> Note that the normal behavior of my application is : client
connects, server
> daemon forks a new instance,

Does the server parent process close its copy of the conversation
socket?





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: How to make ocsp responder busy

2020-11-09 Thread Jakob Bohm via openssl-users

On 2020-11-09 09:58, Venkata Mallikarjunarao Kosuri via openssl-users wrote:


Hi

We are trying to test a scenario where the OpenSSL OCSP responder is 
busy, but we are not sure how to make the OCSP responder busy. Could 
you please give some pointers to work from?


Ref https://www.openssl.org/docs/man1.0.2/man1/ocsp.html


Thanks

Malli



An OCSP responder is not supposed to be busy.  Ever.

CAs that are trusted by the big web browsers are contractually
required to keep theirs available 24x7.

The man page you reference doesn't contain the word "busy"

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Fencepost errors in certificate and OCSP validity

2020-10-28 Thread Jakob Bohm via openssl-users

Recently, the EJBCA developers publicly warned (via the Mozilla root store
policy mailing list) other CA vendors that they had incorrectly implemented
the handling of the "notAfter" X509 field, resulting in certificates that
lasted 1 second longer than intended.

Prompted by this warning, I checked what the OpenSSL code does, and it 
seems to be a bit more buggy:

x509_vfy.c seems to be a bit ambivalent if certificate validity should be
inclusive or exclusive of the time values in the certificate.

apps.c seems to convert the validity duration in days as if the notAfter
field is exclusive, but the notBefore field is inclusive.

PKIX (RFC5280) says that both timestamps are inclusive, X.509 (10/2012) 
says nothing about this aspect of the interpretation of the validity 
structure.
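
To make the fencepost concrete, a sketch assuming RFC 5280's inclusive
reading: for a validity of N whole days, notAfter must be set one second
before notBefore + N days, not at notBefore + N days:

    #include <time.h>

    /* Inclusive notAfter for a validity of `days` whole days. */
    static time_t inclusive_not_after(time_t not_before, int days)
    {
        return not_before + (time_t)days * 86400 - 1;
    }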

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 



Re: CAPI engine seems to break server validation

2020-10-26 Thread Jakob Bohm via openssl-users

On 2020-10-24 16:09, Brett Stahlman wrote:

Jakob,
I don't really understand why the engine *needs* to do PSS. Neither of 
the badssl certificates seem to use it for signatures. (I'm assuming the 
fact that a cert was signed with RSA-PSS would show up in the Windows 
certificate viewer...) If you could give a short summary of the problem 
as you understand it, perhaps it would help me narrow in on a 
workaround. I'd be happy with even an ugly patch at this point. Given 
that server verification works fine with a ca-bundle file, I wonder 
whether it would be possible to have the capi engine handle only the 
client authentication. As you understand it, would the problem breaking 
server verification also preclude client authentication with the capi 
engine?




From the content of your mails, I inferred that whatever you tried to 
do caused OpenSSL to attempt to generate PSS signatures, but failed to 
pass that job to the CAPI engine.  I was commenting on how that might 
be made to work.


On Fri, Oct 23, 2020 at 11:34 AM Jakob Bohm via openssl-users 
<openssl-users@openssl.org> wrote:


On 2020-10-23 15:45, Matt Caswell wrote:
 >
 > On 23/10/2020 14:10, Brett Stahlman wrote:
 >> It seems that the CAPI engine is breaking the server
verification somehow.
 >> Note that the only reason I'm using the ca-bundle.crt is that I
couldn't
 >> figure out how to get CAPI to load the Windows "ROOT" certificate
 >> store, which contains the requisite CA certs. Ideally, server
 >> authentication would use the CA certs in the Windows "ROOT"
store, and
 >> client authentication would use the certs in the Windows "MY"
store, but
 >> CAPI doesn't appear to be loading either one.
 > This is probably the following issue:
 >
 > https://github.com/openssl/openssl/issues/8872
 >
 > Matt
Looking at the brutal wontfixing of that bug, maybe reconsider if the
existing engine interface can do PSS by simply having the CAPI/CAPIng
engine export the generic PKEY type for PSS-capable RSA keys.  Also,
maybe use a compatible stronger CAPI "provider" (their engines) to do
stronger hashes etc.

  



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: CAPI engine seems to break server validation

2020-10-23 Thread Jakob Bohm via openssl-users

On 2020-10-23 15:45, Matt Caswell wrote:


On 23/10/2020 14:10, Brett Stahlman wrote:

It seems that the CAPI engine is breaking the server verification somehow.
Note that the only reason I'm using the ca-bundle.crt is that I couldn't
figure out how to get CAPI to load the Windows "ROOT" certificate
store, which contains the requisite CA certs. Ideally, server
authentication would use the CA certs in the Windows "ROOT" store, and
client authentication would use the certs in the Windows "MY" store, but
CAPI doesn't appear to be loading either one.

This is probably the following issue:

https://github.com/openssl/openssl/issues/8872

Matt

Looking at the brutal wontfixing of that bug, maybe reconsider if the
existing engine interface can do PSS by simply having the CAPI/CAPIng
engine export the generic PKEY type for PSS-capable RSA keys.  Also,
maybe use a compatible stronger CAPI "provider" (their engines) to do
stronger hashes etc.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: OpenSSL Security Advisory

2020-09-10 Thread Jakob Bohm via openssl-users

On 2020-09-10 09:03, Tomas Mraz wrote:

On Wed, 2020-09-09 at 22:26 +0200, Jakob Bohm via openssl-users wrote:

Wouldn't a more reasonable response for 1.0.2 users have been to force on
SSL_OP_SINGLE_DH_USE rather than recklessly deprecating affected cipher
suites and telling affected people to recompile with the fix off?


You seem to be mixing two different affected things. One is the static
DH ciphersuites. There is no remediation for these except for not using
them. Fortunately they are not really used by anyone. This can be
achieved on the server side by simply not providing the DH certificate.
On the client side they can be dropped from the ciphers string. This is
the "deprecating affected cipher suites" change part.

On the other hand the reuse of DH key for ephemeral DH can be only
disabled by setting SSL_OP_SINGLE_DH_USE by the calling server application. 
This is the part relevant for wider audience.

So yes, both issues can be remediated by application calling the
OpenSSL library. On the other hand it is not always possible to change
the application so we also provide fix to premium support customers in
terms of changing the openssl code.




The advisory didn't include this clarification, and didn't state if 
1.0.2w fixes the DHE case by doing what 1.1.x does and acting as if 
SSL_OP_SINGLE_DH_USE is always set.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: OpenSSL Security Advisory

2020-09-09 Thread Jakob Bohm via openssl-users

On 2020-09-09 14:39, OpenSSL wrote:

OpenSSL Security Advisory [09 September 2020]
=

Raccoon Attack (CVE-2020-1968)
==

Severity: Low

The Raccoon attack exploits a flaw in the TLS specification which can lead to
an attacker being able to compute the pre-master secret in connections which
have used a Diffie-Hellman (DH) based ciphersuite. In such a case this would
result in the attacker being able to eavesdrop on all encrypted communications
sent over that TLS connection. The attack can only be exploited if an
implementation re-uses a DH secret across multiple TLS connections. Note that
this issue only impacts DH ciphersuites and not ECDH ciphersuites.

OpenSSL 1.1.1 is not vulnerable to this issue: it never reuses a DH secret and
does not implement any "static" DH ciphersuites.

OpenSSL 1.0.2f and above will only reuse a DH secret if a "static" DH
ciphersuite is used. These static "DH" ciphersuites are ones that start with the
text "DH-" (for example "DH-RSA-AES256-SHA"). The standard IANA names for these
ciphersuites all start with "TLS_DH_" but excludes those that start with
"TLS_DH_anon_".

OpenSSL 1.0.2e and below would reuse the DH secret across multiple TLS
connections in server processes unless the SSL_OP_SINGLE_DH_USE option was
explicitly configured. Therefore all ciphersuites that use DH in servers
(including ephemeral DH) are vulnerable in these versions. In OpenSSL 1.0.2f
SSL_OP_SINGLE_DH_USE was made the default and it could not be turned off as a
response to CVE-2016-0701.

Since the vulnerability lies in the TLS specification, fixing the affected
ciphersuites is not viable. For this reason 1.0.2w moves the affected
ciphersuites into the "weak-ssl-ciphers" list. Support for the
"weak-ssl-ciphers" is not compiled in by default. This is unlikely to cause
interoperability problems in most cases since use of these ciphersuites is rare.
Support for the "weak-ssl-ciphers" can be added back by configuring OpenSSL at
compile time with the "enable-weak-ssl-ciphers" option. This is not recommended.

OpenSSL 1.0.2 is out of support and no longer receiving public updates.

Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2w.  If
upgrading is not viable then users of OpenSSL 1.0.2v or below should ensure
that affected ciphersuites are disabled through runtime configuration. Also
note that the affected ciphersuites are only available on the server side if a
DH certificate has been configured. These certificates are very rarely used and
for this reason this issue has been classified as LOW severity.

This issue was found by Robert Merget, Marcus Brinkmann, Nimrod Aviram and Juraj
Somorovsky and reported to OpenSSL on 28th May 2020 under embargo in order to
allow co-ordinated disclosure with other implementations.

Note


OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended
support is available for premium support customers:
https://www.openssl.org/support/contracts.html

OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind.
The impact of this issue on OpenSSL 1.1.0 has not been analysed.

Users of these versions should upgrade to OpenSSL 1.1.1.

References
==

URL for this Security Advisory:
https://www.openssl.org/news/secadv/20200909.txt

Note: the online version of the advisory may be updated with additional details
over time.

For details of OpenSSL severity classifications please see:
https://www.openssl.org/policies/secpolicy.html


Wouldn't a more reasonable response for 1.0.2 users have been to force on
SSL_OP_SINGLE_DH_USE rather than recklessly deprecating affected cipher suites
and telling affected people to recompile with the fix off?
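
For reference, what "forcing on" the option would amount to in a server
stuck on 1.0.2e or older - a sketch only; 1.0.2f and later already
behave this way:

    #include <openssl/ssl.h>

    static void harden_dh(SSL_CTX *ctx)
    {
        /* Generate a fresh ephemeral DH key for every handshake. */
        SSL_CTX_set_options(ctx, SSL_OP_SINGLE_DH_USE);
    }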

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Testing

2020-09-03 Thread Jakob Bohm via openssl-users

On 2020-09-03 12:25, Marc Roos wrote:


Why are you defending amazon? Everyone processing significant mail and
http traffic is complaining about them. They were even listed in
spamhaus's top 10 abuse networks (until they started contributing to
them?)



Because we are sending non-spam mail from an AWS hosted server, and
would be seriously inconvenienced if they got generally banned by mail
recipients.

And we did check that they were not in bad standing at spamhaus.org
before choosing them to host that server.  Some of their competitors
failed those checks.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Testing

2020-09-03 Thread Jakob Bohm via openssl-users

On 2020-09-03 09:42, Marc Roos wrote:


PTR record, SPF, DKIM and DMARC are also set by spammers, and sometimes
even just before a spam run. It is either choosing to do amazons work or
not having any work. If more and more are blocking the amazon cloud it
would make their clients leave and this finally migth have them spend
more on their abuse department.




For your information, AWS apparently blocks TCP port 25 unless the
customer (not someone hacking an AWS instance) explicitly requests a
custom PTR record using a form where the customer promises not to Spam.
Custom PTR records don't look like
ec2-184-72-79-140.compute-1.amazonaws.com .

I am unsure how Richard's example, which obviously tricked a server into
sending an HTTP request to the OpenSSL mail server, got past the port 25
block (this appears to be a common form of server-side request forgery).





-Original Message-
To: openssl-users@openssl.org
Subject: Re: Testing

On 2020-08-31 16:28, Marc Roos wrote:

Why don't you block the whole compute cloud of amazon?
ec2-3-21-30-127.us-east-2.compute.amazonaws.com

Please note, that at least our company hosts a secondary MX in the EC2
cloud, with the option to direct my posts to the list through that
server.  However proper PTR record, SPF, DKIM and DMARC checks should
all pass for such posts.

Thus rather than blindly blacklisting the Amazon hosting service, maybe
make the OpenSSL mail server check those things to catch erroneous
transmissions from web servers.




-Original Message-

To: openssl-users@openssl.org
Subject: Testing













Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Cert hot-reloading

2020-09-01 Thread Jakob Bohm via openssl-users

On 2020-09-01 06:57, Viktor Dukhovni wrote:

On Mon, Aug 31, 2020 at 11:00:31PM -0500, David Arnold wrote:


1. Construe symlinks to current certs in a folder (old or new / file by file)
2. Symlink that folder
3. Rename the current symlink to that new symlink atomically.

This is fine, but does not provide atomicity of access across files in
that directory.  It just lets you prepare the new directory with
non-atomic operations on the list of published files or file content.

But if clients need to see consistent content across files, this does
not solve the problem, a client might read one file before the symlink
is updated and another file after.  To get actual atomicity, the client
would need to be sure to open a directory file descriptor, and then
openat(2) to read each file relative to the directory in question.

Most application code is not written that way, but conceivably OpenSSL
could have an interface for loading a key and certchain from two (or
perhaps even more for the cert chain) files relative to a given
directory.  I know how to do this on modern Unix systems, no idea
whether something similar is possible on Windows.
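
A minimal sketch of that Unix-only pattern, assuming hypothetical file and
directory names and omitting most error handling:

#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <unistd.h>

int load_key_and_chain(void)
{
    /* Open the directory (or the symlink pointing at the current
     * generation of it) once. */
    int dirfd = open("/etc/myapp/tls", O_RDONLY | O_DIRECTORY);
    if (dirfd < 0)
        return -1;

    /* Both opens resolve against the directory captured in dirfd, so
     * re-pointing the symlink at a new directory between the two calls
     * cannot mix files from two different generations. */
    int keyfd  = openat(dirfd, "key.pem", O_RDONLY);
    int certfd = openat(dirfd, "chain.pem", O_RDONLY);

    /* ... read and parse both descriptors here ... */

    if (certfd >= 0) close(certfd);
    if (keyfd >= 0)  close(keyfd);
    close(dirfd);
    return 0;
}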

On NT-based Windows, the undocumented Zw family of file I/O syscalls
would do what you call "openat()".  The "current dir" is in fact a directory
handle plus a string equivalent stored in a user-mode variable in one
of the core shared objects, which is why rmdir fails if it is the current
directory of any process.


The above is *complicated*.  Requiring a single file for both key and
cert is far simpler.  Either PEM with key + cert or perhaps (under
duress) even PKCS#12.



Does it look like we are actually getting somewhere here?

So far, not much, just some rough notes on the obvious obstacles.
There's a lot more to do to design a usable framework for always fresh
keys.  Keeping it portable between Windows and Unix (assuming MacOS will
be sufficiently Unix-like) and gracefully handling processes that drop
privs will be challenging.

Not all applications will want the same approach, so there'd need to be
various knobs to set to choose one of the supported modes.  Perhaps
the sanest approach (but one that does nothing for legacy applications)
is to provide an API that returns the *latest* SSL_CTX via some new
handle that under the covers constructs a new SSL_CTX as needed.

 SSL_CTX *SSL_Factory_get1_CTX(SSL_CTX_FACTORY *);

This would yield a reference-counted SSL_CTX that each caller must
ultimately release via SSL_CTX_free() to avoid a leak.

 ... factory construction API calls ...
     ctx = SSL_Factory_get1_CTX(factory);  -- ctx ref count >= 1
     SSL *ssl = SSL_CTX_new(ctx);          -- ctx ref count >= 2
     ...
     SSL_free(ssl);                        -- ctx ref count >= 1
     SSL_CTX_free(ctx);                    -- ctx may be freed here

To address the needs of legacy clients is harder, because they
expect an SSL_CTX "in hand" to be valid indefinitely, but now
we want to be able age out and free old contexts, so we want
some mechanism by which it becomes safe to free old contexts
that we're sure no thread is still using.  This is difficult
to do right, because some thread may be blocked for a long
time, before becoming active again and using an already known
SSL_CTX pointer.

It is not exactly clear how multi-threaded unmodified legacy software
can be ensured crash free without memory leaks while behind the scenes
we're constantly mutating the SSL_CTX.  Once a pointer to an SSL_CTX
has been read, it might be squirreled away in all kinds of places, and
there's just no way to know that it won't be used indefinitely.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Cert hot-reloading

2020-09-01 Thread Jakob Bohm via openssl-users

On 2020-09-01 04:26, Viktor Dukhovni wrote:

On Aug 31, 2020, at 10:57 PM, Jakob Bohm via openssl-users 
 wrote:

Given the practical impossibility of managing atomic changes to a single
POSIX file of variable-length data, it will often be more practical to
create a complete replacement file, then replace the filename with the
"mv -f" command or rename(3) function.  This would obviously only work
if the directory remains accessible to the application, after it drops
privileges and/or enters a chroot jail, as will already be the case
for hashed certificate/crl directories.

There is no such "impossibility", indeed that's what the rename(2) system
call is for.  It atomically replaces files.  Note that mv(1) can hide
non-atomic copies across file-system boundaries and should be used with
care.

Note that rename(3) and link(2) do replace the file name, by making the
replaced name point to a new inode, thus it would not work with calls
that monitor an inode for content or status change.

There is no basic series of I/O calls that completely replaces file contents
in one step; in particular write(2) doesn't shorten the file if the new
contents are smaller than the old contents.

And this is why I mentioned retaining an open directory handle, openat(2),
...

There's room here to design a robust process, if one is willing to impose
reasonable constraints on the external agents that orchestrate new cert
chains.

As for updating two files in a particular order, and reacting only to
changes in the one that's updated second, this behaves poorly when
updates are racing an application cold start.  The single file approach,
by being more restrictive, is in fact more robust in ways that are not
easy to emulate with multiple files.

What exactly is that "cold start" race you are talking about?

Obviously, coding the logic to react badly to only one of two
files being present would not work with rules that one of those
two needs to arrive/change after the other.



If someone implements a robust design with multiple files, great.  I for
one don't know of an in principle decent way to do that without various
races, other than somewhat kludgey retry loops in the application (or
library) when it finds a mismatch between the cert and the key.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Cert hot-reloading

2020-08-31 Thread Jakob Bohm via openssl-users

On 2020-09-01 01:52, Viktor Dukhovni wrote:

On Sun, Aug 30, 2020 at 07:54:34PM -0500, Kyle Hamilton wrote:

I'm not sure I can follow the "in all cases it's important to keep the key
and cert in the same file" argument, particularly in line with openat()
usage on the cert file after privilege to open the key file has been
dropped. I agree that key/cert staleness is important to address in some
manner, but I don't think it's necessarily appropriate here.

Well, the OP had in mind very frequent certificate chain rollover, where
presumably, in at least some deployments also the key would roll over
frequently along with the cert.

If the form of the key/cert rollover is to place new keys and certs into
files, then *atomicity* of these updates becomes important, so that
applications loading a new key+chain pair see a matching key and
certificate and not some cert unrelated to the key.

Thus, e.g., Postfix now supports loading both the key and the cert
directly from the same open file, reading both sequentially, without
racing atomic file replacements when reopening the file separately
to reach keys and certs.

If we're going to automate things more, and exercise them with much
higher frequency, the automation needs to be robust!

Another synchronization method would be for the application to decree a
specific order of changing the two files, such that triggering reload on
the second file would correctly load the matching contents of the other.

If a future OpenSSL version includes an option to detect such change,
documentation as to which file it watches for changes would guide
applications in choosing which order to specify for changing the files.



Note that nothing prevents applications that have separate configuration
for the key and cert locations from opening the same file twice. If
they're using the normal OpenSSL PEM read key/cert routines, the key
is ignored when reading certs and the certs are ignored when reading
the key.

Therefore, the single-file model is unconditionally superior in this
context. Yes, some tools (e.g. certbot) don't yet do the right
thing and atomically update a single file with both the key and the
obtained certs. This problem can be solved. We're talking about
new capabilities here, and don't need to adhere to outdated process
models.


Given the practical impossibility of managing atomic changes to a single
POSIX file of variable-length data, it will often be more practical to
create a complete replacement file, then replace the filename with the
"mv -f" command or rename(3) function.  This would obviously only work
if the directory remains accessible to the application, after it drops
privileges and/or enters a chroot jail, as will already be the case
for hashed certificate/crl directories.
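
For illustration, a minimal sketch of that create-then-rename update on POSIX;
the function and file names are made up for the example and error handling is
kept short:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int replace_pem(const char *dir, const char *name, const char *data, size_t len)
{
    char tmp[4096], dst[4096];
    snprintf(tmp, sizeof(tmp), "%s/%s.XXXXXX", dir, name);
    snprintf(dst, sizeof(dst), "%s/%s", dir, name);

    int fd = mkstemp(tmp);        /* unique temporary in the same directory */
    if (fd < 0)
        return -1;
    FILE *f = fdopen(fd, "w");
    if (f == NULL) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    if (fwrite(data, 1, len, f) != len || fflush(f) != 0 || fsync(fd) != 0) {
        fclose(f);
        unlink(tmp);
        return -1;
    }
    if (fclose(f) != 0 || rename(tmp, dst) != 0) {  /* rename is the atomic step */
        unlink(tmp);
        return -1;
    }
    return 0;
}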



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Testing

2020-08-31 Thread Jakob Bohm via openssl-users

On 2020-08-31 16:28, Marc Roos wrote:

Why don't you block the whole compute cloud of amazon?
ec2-3-21-30-127.us-east-2.compute.amazonaws.com

Please note, that at least our company hosts a secondary MX in the EC2
cloud, with the option to direct my posts to the list through that
server.  However proper PTR record, SPF, DKIM and DMARC checks should
all pass for such posts.

Thus rather than blindly blacklisting the Amazon hosting service, maybe
make the OpenSSL mail server check those things to catch erroneous
transmissions from web servers.




-Original Message-

To: openssl-users@openssl.org
Subject: Testing



--
-BEGIN EMAIL SIGNATURE-

The Gospel for all Targeted Individuals (TIs):

[The New York Times] Microwave Weapons Are Prime Suspect in Ills of U.S.
Embassy Workers

Link:
https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html




Singaporean Mr. Turritopsis Dohrnii Teo En Ming's Academic
Qualifications as at 14 Feb 2019 and refugee seeking attempts at the
United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan (5 Aug
2019) and Australia (25 Dec 2019 to 9 Jan 2020):

[1] https://tdtemcerts.wordpress.com/

[2] https://tdtemcerts.blogspot.sg/

[3] https://www.scribd.com/user/270125049/Teo-En-Ming

-END EMAIL SIGNATURE-





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: OpenSSL compliance with Linux distributions

2020-08-17 Thread Jakob Bohm via openssl-users
The key thing to do is to make those client applications not request the
ssl23-method from OpenSSL 0.9.x.  ssl23 explicitly requests this
backward-compatibility feature, while OpenSSL 3.x.x apparently deleted the
ability to respond to this "historic" TLS hello format, which is also sent
by some not-that-old web browsers.
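
A minimal sketch of what that means in client code built against 0.9.x-era
headers (function names as in that OpenSSL generation, error handling omitted;
on 0.9.x this also pins the protocol to TLS 1.0, which is all that library
supports anyway):

#include <openssl/ssl.h>

SSL_CTX *make_client_ctx(void)
{
    SSL_library_init();
    /* SSLv23_client_method() is what emits the backward-compatible
     * v2-format hello; TLSv1_client_method() sends a plain TLS hello. */
    return SSL_CTX_new(TLSv1_client_method());
}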



On 05/08/2020 22:19, Skip Carter wrote:

Patrick,

I am also supporting servers running very old Linux systems and I can
tell you that YES you can upgrade from source. I have built
   openssl-1.1.1 from source on such systems with no problems.

On Wed, 2020-08-05 at 21:49 +0200, Patrick Mooc wrote:

Hello,

I'm using an old version of OpenSSL (0.9.8g) on an old Linux Debian
distribution (Lenny).

Is it possible to upgrade OpenSSL version without upgrading Linux
Debian
distribution ?
If yes, up to which version of OpenSSL ?

Are all versions of OpenSSL compliant with all Linux Debian
distribution ?


Thank you in advance for your answer.

Best Regards,




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Software that uses OpenSSL

2020-08-17 Thread Jakob Bohm via openssl-users

On 06/08/2020 22:17, Quanah Gibson-Mount wrote:



--On Thursday, August 6, 2020 1:21 PM -0700 Dan Kegel  
wrote:


lists 861 packages, belonging to something like 400 projects, that depend
on openssl


Unfortunately, due to Debian's odd take on the OpenSSL license, many 
projects that can use OpenSSL are compiled against alternative SSL 
libraries, so this can miss a lot of potential applications (OpenLDAP, 
for example).


It's not an odd take.  The SSLeay license explicitly bans releasing 
OpenSSL code under the GPL (as part of SSLeay's own copyleft provisions).


GPL version 2 explicitly prohibits OS bundled GPL code from linking to 
OS-bundled non-GPL code, so this can be done only by violating the 
SSLeay license.


So no OS distribution can include GPL 2 code using OpenSSL 1.x.x

GPL version 2 explicitly allows independently distributed copies of GPL 2
programs to link to any OS-bundled libs, including OS-bundled OpenSSL
(this clause was intended to allow linking to stuff like the Microsoft
or Sun OS libraries).


Some GPL version 2 programs include an extra license permission to link 
against OpenSSL even when those GPL version 2 programs are bundled with 
the OS.



Hopefully with OpenSSL 3.0 and later, this won't be as much of an issue.

Does the Apache 2.0 license allow redistributing code under GPL 2 ?


--Quanah

--

Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Lack of documentation for OPENSSL_ia32cap_P

2020-07-28 Thread Jakob Bohm via openssl-users

On 2020-07-26 01:56, Jan Just Keijser wrote:

On 23/07/20 02:35, Jakob Bohm via openssl-users wrote:

The OPENSSL_ia32cap_P variable, its bitfields and the code that sets
it (in assembler) seem to have no clear documentation.



Thanks, I somehow missed that document as I was grepping the code.

Looking at x86_64cpuid.pl, I see jumps to ".Lintel" etc. being conditional
on stuff other than the CPU being an Intel CPU, while the code in there is
generally unreadable due to the backwards SCO assembler format and the lack
of clear comments about register usage such as "Here, EDX holds XXX and ESI
holds " or even the code rationale "P50 microarchitecture stepping A
incorrectly implements FDIV, so clear out private bit for using that in
bignum implementations"

As there is an external interface for changing the variable via an environment
var, the lack of documentation makes that useless except for "cargo-cult"
copying of values from old mailing list posts.
in the openssl 1.1.1g tree there's a file 'doc/man3/OPENSSL_ia32cap.pod' 
which documents it a little - not sure if that is still up-to-date 
though...


HTH,

JJK




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Lack of documentation for OPENSSL_ia32cap_P

2020-07-22 Thread Jakob Bohm via openssl-users

The OPENSSL_ia32cap_P variable, its bitfields and the code that sets
it (in assembler) seem to have no clear documentation.

Looking at x86_64cpuid.pl, I see jumps to ".Lintel" etc. being conditional
on stuff other than the CPU being an Intel CPU, while the code in there is
generally unreadable due to the backwards SCO assembler format and the lack
of clear comments about register usage such as "Here, EDX holds XXX and ESI
holds " or even the code rationale "P50 microarchitecture stepping A
incorrectly implements FDIV, so clear out private bit for using that in
bignum implementations"

As there is an external interface for changing the variable via an environment
var, the lack of documentation makes that useless except for "cargo-cult"
copying of values from old mailing list posts.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: server key exchange signature behavior

2020-06-25 Thread Jakob Bohm via openssl-users

You may also check out the results of the popular ssllabs.com test here:

https://www.ssllabs.com/ssltest/analyze.html?d=jnior.com=on

Note however that in recent years they have become quite aggressive in
labeling things as "weak" when they are simply "slightly less than the
best that the latest big-brand browsers support" with no consideration
for servers that try to provide compatibility for older clients in
addition to the latest hype.

As for the signature on the key exchange in SSL3/TLS1.0/TLS1.1/TLS 1.2
and the final signature in TLS1.3, those are the one signature that
causes the certificates to do anything meaningful, so I would expect all
but the most crappy clients to check it and make a very serious error
message "SOMEONE IS HACKING YOUR CONNECTION, PULL THE PLUG NOW!" or
something equally serious.

On 2020-06-25 19:09, Bruce Cloutier wrote:

Sorry,

By "If OpenSSL fails to validate this particular digital signature that
would be the case." I meant to question whether or not OpenSSL is in
fact doing the validation? In the case that the signature is being
ignored then clients wouldn't complain. They wouldn't notice.

Bruce

On 6/25/20 1:04 PM, Bruce Cloutier wrote:

Yeah. I doubt it is an OpenSSL issue directly as Apache might be feeding
the wrong key. Just need confirmation that there isn't a default key
configuration setting for OpenSSL that might be taking precedence for
who knows why.

I can connect successfully with the browser so I cannot rule out that my
TLS implementation is faulty. However, it validates with every other
site and it validates with the default install of this bitnami stack.
Once we reconfigure for the new key and certificate, this signature in
the server_key_exchange message fails. Nothing else seems to complain.
My code does, well, because I know that I actually do verify that
signature against the supplied certificate.

So to everyone else it appears that we have configured the new
certificates properly (manually achieved Let's Encrypt cert). If OpenSSL
fails to validate this particular digital signature that would be the
case. If in my TLS implementation I skip this check (actually now I just
post a warning) everything negotiates and proceeds just fine.

Obviously, THAT signature is there for a reason. I should expect it to
validate. Just don't know what key it is using?

I am not sure how to get to the Apache people or, might be, the Bitnami
folks?

Bruce

On 6/25/20 12:07 PM, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of
Bruce Cloutier
Sent: Thursday, June 25, 2020 10:11

Has anyone thought about this question?

 From your description, it sounds like an Apache issue, not an OpenSSL one. I 
don't know enough about Apache configuration to comment. (I've configured a few 
Apache instances in my day, but never had any real issues with it, so I've 
never done more than search the docs for what I needed and implemented it.)


The site is https://jnior.com if
anyone wants to hit it. For me the digital signature in the
server_key_exchange does not verify.

I just tried openssl s_client, and it didn't complain about anything. 
Negotiated a TLSv1.2 session with ECDHE-RSA-AES256-GCM-SHA384 and verified the 
chain.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Unusual certificates

2020-06-25 Thread Jakob Bohm via openssl-users

On 2020-06-25 13:25, Hubert Kario wrote:
On Thursday, 25 June 2020 12:15:00 CEST, Angus Robertson - Magenta 
Systems Ltd wrote:

A client is having problems reading Polish Centum issued personal
certificates with OpenSSL 1.1.1, which read OK with 1.1.0 and earlier,
mostly.
Using PEM_read_bio_X509 with some of these certificates says
error::lib(0):func(0):reason(0), while the X509 command line
tool says 'unable to load certificate'.  Some certificates work with
both methods.

Using the asn1parse command from any version of OpenSSL says 'Error:
offset out of range', while a Javascript based web tool is able to
decode the ASN1, but is perhaps more tolerant of errors.
So it seems there is something in the creation of these certificates
that OpenSSL has never liked, but until 1.1.1 was tolerated
sufficiently to allow them to be read.
This certificate reads OK in 1.1.1 but fails asn1parse:


works just fine for me with 1.1.1g


This certificate can not be read in 1.1.1 but is OK in 1.1.0.


but this one fails parsing



Is there a more tolerant way to read ASN1 than the asn1parse command? 


asn1parse expects BER encoding, that already is the most lenient, while
still standards-compliant, encoding that is supported.

Given that it errors out with
139628293990208:error:0D07209B:asn1 encoding 
routines:ASN1_get_object:too long:crypto/asn1/asn1_lib.c:91:

I'm guessing a mismatch between utf-8 and string encoding that makes
the lengths inconsistent. Some tools may just ignore them, but that doesn't
make the certificate well-formed.
I have tried examining these two certificates with Peter Gutmann's dumpasn1.c
and a generic hex dumper.

For the second certificate, dumpasn1.c complains about a badly encoded SET at
file offset 0x8E (after Base64 decoding):

0008E   31 19
00090 05 c1 80 d5 41 18 43 04 15 90 55 14 13 0b 4d 4c
000A0 8d 4c 0c 0c 0c 4c 0e 4c 0c
000A9                            07 85 c3 4c 4e 4c 0c

My manual attempt to recover from this results in the following
further failures:

Attempt1: Straight BER/DER:
SET {
  NULL-object of a very huge length: The number of bytes is
    0x80 d5 41 18 43 04 15 90 55 14 13 0b 4d 4c 8d 4c
      0c 0c 0c 4c 0e 4c 0c 07 85 c3 4c 4e 4c 0c 8c 8d
      0c 8c cc 0c 0c 0c 16 85 c3 4c 8c 4c 0c 8c 8c cc
      8c cc 0c 0c 0c 16 8c 1d 4c 42 cc 02 41 80 d5 41
      01
    -- ERROR: This runs beyond end-of-file
    -- ERROR: This runs beyond end-of-SET (0x19 bytes)
}

Attempt2: Assume length byte for zero length NULL object omitted:
SET { NULL-object with missing length-encoding of its zero length
  private-tag-1 object with indefinite length
    -- ERROR: This runs beyond end-of-SET (0x19 bytes)
}

Attempt3: Treat SET as an opaque blob
SET { -- Contents ignored }
ObjectDescriptor of length 0x151CC bytes
  -- ERROR: This runs beyond end-of-file

Attempt4: Treat preceding string as encoded with length 1 too small
SET {
  SEQUENCE {
    OBJECT IDENTIFIER commonName -- 2.5.4.3
    UTF8String 'CUZ Sigillum - QCA11'
      -- WARNING: One byte beyond declared length of containing SEQUENCE
  }
    -- WARNING: One byte beyond declared length of containing SET
}
GraphicString '\c1\80\d5\41\18'
Application-tag-3 object of length 4 bytes: 0x15 90 55 14
PrintableString 4d 4c 8d 4c 0c 0c 0c 4c 0e 4c 0c
  -- WARNING: Bad characters
ObjectDescriptor of length 0x151CC bytes
  -- ERROR: This runs beyond end-of-file
  -- WARNING: This runs beyond length of containing DN (0x80 bytes)

Attempt5: Treat preceding string as encoded with length 2 too small
SET {
  SEQUENCE {
    OBJECT IDENTIFIER commonName -- 2.5.4.3
    UTF8String 'CUZ Sigillum - QCA11\19'
      -- WARNING: 2 bytes beyond declared length of containing SEQUENCE
  }
    -- WARNING: 2 bytes beyond declared length of containing SET
 }
NULL-object of a very huge length: The number of bytes is
  0x80 d5 41 18 43 04 15 90 55 14 13 0b 4d 4c 8d 4c
    0c 0c 0c 4c 0e 4c 0c 07 85 c3 4c 4e 4c 0c 8c 8d
    0c 8c cc 0c 0c 0c 16 85 c3 4c 8c 4c 0c 8c 8c cc
    8c cc 0c 0c 0c 16 8c 1d 4c 42 cc 02 41 80 d5 41
    01
  -- ERROR: This runs beyond end-of-file
  -- WARNING: This runs beyond length of containing DN (0x80 bytes)

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: endless loop in probable_prime

2020-06-21 Thread Jakob Bohm via openssl-users

On 2020-06-18 18:13, Salz, Rich via openssl-users wrote:

BN_bin2bn assumes that the size of a BN_ULONG (the type of a bn->d) is
 BN_BYTES. You have already told us that sizeof(*d) is 4. So BN_BYTES
 should also be 4. If BN_BYTES is being incorrectly set to 8 on your
 platform then that would explain the discrepancy. Can you check?

This seems HIGHLY likely since Ronny said earlier that the same 
config/toolchain is used for 32bit userspace and 64bit kernel, right?

Maybe the internal headers should contain lines that abort compilation if
inconsistencies are found in the values provided by the (public or private)
headers.

For example, if BN_BYTES > sizeof(BN_ULONG), compilation should stop via
an abstraction over the presence/absence of the _Static_assert, static_assert
or macro emulation of same in any given compiler.

/* Something like this, but via a macro abstraction: */
#if (some C++ compilers)
  /* Works if  defined(__cplusplus) && __cplusplus >= 201103l */
  /* Works for clang++ if has_feature(cxx_static_assert) */
  /* Works for g++ >= 4.3.x if defined(__GXX_EXPERIMENTAL_CXX0X__) */
  /* Works for MSC++ >= 16.00 */
  /* Fails for g++ 4.7.x specifically */
  /* Fails for some versions of Apple XCode */
  static_assert(
    (BN_BYTES <= sizeof(BN_ULONG)),
    "Failed static assert: " "BN_BYTES <= sizeof(BN_ULONG)");
#elif (some C compilers)
  /* Works for clang with has_feature(c_static_assert) */
  /* Works for gcc >= 4.6.x */
  /* Fails for some versions of Apple XCode */
  _Static_assert(
    (BN_BYTES <= sizeof(BN_ULONG)),
    "Failed static assert: " "BN_BYTES <= sizeof(BN_ULONG)");
#else
  /* Portable fallback, but some fudging may be needed for compilers
   *    without __COUNTER__ */
  /* If assertion fails, compiler will complain about invalid array size */
  /* If assertion is not a const expression, compiler will complain about that */
  typedef char OSSL_const_assert_##fudge##__LINE__##_##__COUNTER__[
    (BN_BYTES <= sizeof(BN_ULONG)) ? 1 : -1];
#endif
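
One possible concrete shape of such an abstraction, as a sketch (the macro
names are invented here, not existing OpenSSL identifiers):

#if defined(__cplusplus) && __cplusplus >= 201103L
# define OSSL_STATIC_ASSERT(cond, msg) static_assert(cond, msg)
#elif defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
# define OSSL_STATIC_ASSERT(cond, msg) _Static_assert(cond, msg)
#else
  /* Fallback: an array type with negative size if the condition is false. */
# define OSSL_CONCAT2(a, b) a##b
# define OSSL_CONCAT(a, b)  OSSL_CONCAT2(a, b)
# define OSSL_STATIC_ASSERT(cond, msg) \
    typedef char OSSL_CONCAT(ossl_static_assert_, __LINE__)[(cond) ? 1 : -1]
#endif

/* Usage, e.g. in an internal bignum header: */
OSSL_STATIC_ASSERT(BN_BYTES <= sizeof(BN_ULONG),
                   "BN_BYTES must not exceed sizeof(BN_ULONG)");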


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: freefunc - name clash with Python.h

2020-06-21 Thread Jakob Bohm via openssl-users

On 2020-06-15 09:37, Viktor Dukhovni wrote:

On Mon, Jun 15, 2020 at 06:07:20AM +, Jordan Brown wrote:

Supplying names for the arguments in function prototypes makes them
easier to read, but risks namespace problems.

Yes, which is why, some time back, I argued unsuccessfully that we SHOULD
NOT use parameter names in public headers in OpenSSL, but sadly was not
able to persuade a majority of the team.

If this is ever reconsidered, my views have not changed.  OpenSSL SHOULD
NOT include parameter names in public headers.

No sane compiler should complain about name clashes between unrelated
namespaces, such as between global type names and formal parameter names
in header function declarations (used exclusively for readable compiler
error messages about incorrect invocations).

Syntactically, the only case where there could be any overlap between
those two namespaces would be if the formal parameter names were not
preceded by type names, as might happen in K C.  The warnings leading
to this thread should be treated as a compiler bug, that should be easily
reproduced with short standalone (1 to 3 files) test samples submitted to
the relevant compiler bug tracker.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Which 1.1.1 config options set OPENSSL_NO_TESTS ?

2020-05-12 Thread Jakob Bohm via openssl-users

On 12/05/2020 16:01, Matt Caswell wrote:


On 12/05/2020 14:50, Jakob Bohm via openssl-users wrote:

When running Configure in OpenSSL 1.1.1g with various options, it sometimes
silently sets OPENSSL_NO_TESTS as reported by "perl configdata.pm -d" .

Looking at the code here:

https://github.com/openssl/openssl/blob/69296e264e58334620f541d09a4e381ee45542d4/Configure#L470-L510

It seems that "no-tests" will happen automatically if you specify
"no-apps". "no-apps" will be automatically turned on by "no-autoalginit".

i.e. these 3:

no-tests
no-apps
no-autoalginit

It strikes me as a bit over-zealous to disable all the tests because the
apps are disabled (quite a few tests use the apps, but quite a few do
not - we could at least run the ones that don't use them).
Similarly, I would expect many of the apps to work fine with no-autoalginit,
while the remainder could do the equivalent from the bin/openssl code without
imposing it upon library users.

These option cascades really ought to be documented in INSTALL, but I
can see that they are not.

This obviously causes "make test" to do nothing with the message "Tests are
not supported with your chosen Configure options" .

Unfortunately, neither the message nor "perl configdata.pm -d" gives any
clue which of the used Configure options triggered this, and neither do
INSTALL nor Configurations/README* nor Configurations/INTERNALS.Configure .

So how should one go about finding the offending Configure options (other
than endless trial and error)?



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Which 1.1.1 config options set OPENSSL_NO_TESTS ?

2020-05-12 Thread Jakob Bohm via openssl-users

When running Configure in OpenSSL 1.1.1g with various options, it sometimes
silently sets OPENSSL_NO_TESTS as reported by "perl configdata.pm -d" .

This obviously causes "make test" to do nothing with the message "Tests are
not supported with your chosen Configure options" .

Unfortunately, neither the message nor "perl configdata.pm -d" gives any
clue which of the used Configure options triggered this, and neither do
INSTALL nor Configurations/README* nor Configurations/INTERNALS.Configure .

So how should one go about finding the offending Configure options (other
than endless trial and error)?

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: TLSv1 on CentOS-8

2020-04-22 Thread Jakob Bohm via openssl-users



On 2020-04-22 15:22, Hubert Kario wrote:
On Tuesday, 21 April 2020 21:29:58 CEST, Jakob Bohm via openssl-users 
wrote:
That link shows whatever anyone's browser is configured to handle when
clicking the link.

The important thing is which browsers you need to support, like the ones on
https://www.ssllabs.com/ssltest/clients.html

Beware that the list I just linked is woefully incomplete for those of us who
actively target "any browser" support, especially when including old stuff
like Windows Mobile 5 and Windows XP.


what good is supporting connections from Windows XP when no browser that can
run on it will be able to display the web page?


Making the web page itself compatible is another part of that task.
For backward browser compatibility, some pages will have a higher
priority.

Did you by chance encounter a technical issue on our web pages?  If
so, please report to me, webmaster or support.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: CMS in openssl

2020-04-21 Thread Jakob Bohm via openssl-users

A few corrections:

OpenSSL has included CMS (RFC3369) support since 1.0.0 (see the CHANGES
file), though for a long time there was an arbitrary disconnect between
functions named CMS and functions named PKCS#7 even though it should 
have been a continuum.


The PKCS#7 and CMS standards equally and fully support any 
non-interactive algorithm that has been assigned an OID, from 
RSA+MD2+DES to HSS/LMS+STREEBOG+CAMELLIA, with no artificial version
dependencies like in the OpenSSL interpretation of TLS.


On 2020-04-22 03:46, Michael Richardson wrote:

Michael Mueller  wrote:
 > We've implemented what I gather can be called a CMS on Linux and Windows
 > using openssl evp functions.

I'm not sure why you say it this way.
OpenSSL includes CMS (RFC3369) support, but I think not until 1.1.0.
Did you implement RFC3369, or something else?

You don't say if this is email or something else.

 > We need to expand this CMS to other systems, on which we have not been 
able
 > to build openssl. These other systems have a vendor supplied security
 > application. This application supports PKCS7.

 > We are being asked if our evp CMS is interoperable with PKCS7.

CMS (RFC3369/2630) is an upward revision to PKCS7 (RFC2315) 1.5.
CMS can read PKCS7 messages, but converse is not true.

I think it is possible to configure the CMS routines to produce PKCS7
messages, but I didn't do this in my RFC8366 support. I just forklift
upgraded to CMS.

 > If it is possible and more information is required to answer this 
question,
 > I'll provide such information.

 > If not, advice on how to present that argument to management would be
 > appreciated.

You will understand them, but they won't understand you.

You may be able to configure your end to generate PKCS7 easily, and it may
have little effect.  This might degenerate until just using PKCS7 everywhere.

The major difference is the eContentType that is lacking in PKCS7.
And algorithms: I think that there are few modern algorithms defined for PKCS7.

You could easily run in PKCS7 mode until you receive a CMS message from the
peer, and then upgrade to CMS.  But this winds up in a bid-down attack if
both parties run this algorithm, so you'd want to insert some extension that
said: "I can do CMS" into your PKCS7 messages.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: opensssl 1.1.1g test failure(s)

2020-04-21 Thread Jakob Bohm via openssl-users

Summary: The OpenSSL 1.1.1g test suite contains at least two bugs:

TestBug#1: Test suite fails if local network has no IPv6, and the (non-verbose)
  error message doesn't say that's the issue.  [ Testing IPv6 makes sense, and
  rejecting regression tests on inadequate machines is important to avoid silent
  failures, but not telling testers that a test precondition failed, or which
  one, is bad. ]

TestBug#2: Test suite uses ambiguous wording to report the index of the failed
  test.  Should have said "Failed test indexes:  2" or "Failed test: #2" (the
  latter needs to repeat "#" for each index listed).

On 21/04/2020 19:34, Claus Assmann wrote:

On Tue, Apr 21, 2020, Benjamin Kaduk via openssl-users wrote:

On Tue, Apr 21, 2020 at 07:22:38PM +0200, Claus Assmann wrote:

../test/recipes/80-test_ssl_old.t ..
Dubious, test returned 1 (wstat 256, 0x100)

Please run again with `make V=1 TESTS=test_ssl_old test` and post the relevant 
parts of the output?

Thanks for the reply, below is the output. It seems it only fails
because the host doesn't support IPv6?

make depend && make _tests
( cd test;  mkdir -p test-runs;  SRCTOP=../.  BLDTOP=../.  RESULT_D=test-runs  
PERL="/usr/bin/perl"  EXE_EXT=  OPENSSL_ENGINES=`cd .././engines 2>/dev/null && 
pwd`  OPENSSL_DEBUG_MEMORY=on  /usr/bin/perl .././test/run_tests.pl test_ssl_old )
../test/recipes/80-test_ssl_old.t ..
1..6
# Subtest: test_ss
 1..17

...

0:error:0200E016:system library:setsockopt:Invalid 
argument:crypto/bio/b_sock2.c:255:
0:error:2008B088:BIO routines:BIO_listen:listen v6 
only:crypto/bio/b_sock2.c:256:
Doing handshakes=1 bytes=256
TLSv1.3, cipher (NONE) (NONE)
../../util/shlib_wrap.sh ../ssltest_old -s_key keyU.ss -s_cert certU.ss -c_key 
keyU.ss -c_cert certU.ss -ipv6 => 1
 not ok 13 - test TLS via IPv6
 #   Failed test 'test TLS via IPv6'
 #   at ../test/recipes/80-test_ssl_old.t line 390.
 # Looks like you failed 1 test of 13.
not ok 2 - standard SSL tests
#   Failed test 'standard SSL tests'
#   at /home/ca/pd/security/openssl-1.1.1g/test/../util/perl/OpenSSL/Test.pm 
line 1212.

...

# Looks like you failed 1 test of 6.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/6 subtests

Test Summary Report
---
../test/recipes/80-test_ssl_old.t (Wstat: 256 Tests: 6 Failed: 1)
   Failed test:  2
   Non-zero exit status: 1
Files=1, Tests=6, 12 wallclock secs ( 0.04 usr  0.06 sys +  1.77 cusr  9.78 
csys = 11.65 CPU)
Result: FAIL
*** Error 1 in . (Makefile:217 '_tests')
*** Error 1 in /home/ca/pd/security/openssl-1.1.1g (Makefile:205 'tests')



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: TLSv1 on CentOS-8

2020-04-21 Thread Jakob Bohm via openssl-users
That link shows whatever anyone's browser is configured to handle when
clicking the link.

The important thing is which browsers you need to support, like the ones on
https://www.ssllabs.com/ssltest/clients.html

Beware that the list I just linked is woefully incomplete for those of us who
actively target "any browser" support, especially when including old stuff
like Windows Mobile 5 and Windows XP.

On 21/04/2020 17:06, Junaid Mukhtar wrote:

Hi Tomas/Team

I have managed to block the RC4 and enable tlsv1 as per our requirements.

We have a requirement for the cipher list on the internal server to
match the native browser cipher list as shown by
https://clienttest.ssllabs.com:8443/ssltest/viewMyClient.html


I have tried setting up different combinations on the CipherString but 
none helped. Do you have any suggestions as to how to do achieve this?



On Fri, Apr 17, 2020 at 6:22 PM Tomas Mraz <tm...@redhat.com> wrote:


On Fri, 2020-04-17 at 13:03 -0400, Viktor Dukhovni wrote:
> On Fri, Apr 17, 2020 at 05:17:47PM +0200, Tomas Mraz wrote:
>
> > Or you could modify the /etc/pki/tls/openssl.cnf:
> > Find the .include /etc/crypto-policies/back-ends/opensslcnf.config
> > line in it and insert something like:
> >
> > CipherString =
> > @SECLEVEL=1:kEECDH:kRSA:kEDH:kPSK:kDHEPSK:kECDHEPSK:!DES:!RC2:!RC4:
> > !IDEA:-SEED:!eNULL:!aNULL:!MD5:-SHA384:-CAMELLIA:-ARIA:-AESCCM8
>
> How did this particular contraption become a recommended cipherlist?

To explain - this is basically autogenerated value from the crypto
policy definiton of the LEGACY crypto policy with just added the
!RC4.


> What's wrong with "DEFAULT"?  In OpenSSL 1.1.1 it already excludes
> RC4 (if RC4 is at all enabled at compile time):

Nothing wrong with DEFAULT. For manual configuration. This is however
something that is autogenerated.

>     $ openssl ciphers -v 'COMPLEMENTOFDEFAULT+RC4'
>     ECDHE-ECDSA-RC4-SHA     TLSv1 Kx=ECDH     Au=ECDSA Enc=RC4(128) Mac=SHA1
>     ECDHE-RSA-RC4-SHA       TLSv1 Kx=ECDH     Au=RSA   Enc=RC4(128) Mac=SHA1
>     RC4-SHA                 SSLv3 Kx=RSA      Au=RSA   Enc=RC4(128) Mac=SHA1
>
> I find too many people cargo-culting poorly thought cipher lists from
> some random HOWTO.  Over optimising your cipherlist is subject to
> rapid bitrot, resist the temptation...

Yeah, I should have probably suggested just: CipherString = DEFAULT

There is not much point in being as close to the autogenerated policy
as possible for this particular user's use-case.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: certificate verification error OpenSSL 1.1.1

2020-03-03 Thread Jakob Bohm via openssl-users

On 2020-03-03 08:19, Viktor Dukhovni wrote:

On Mon, Mar 02, 2020 at 01:48:20PM +0530, shiva kumar wrote:


when I tried to verify the self signed certificate in OpenSSL 1.0.2 it
is giving error 18 and gives OK as o/p, when I tried the same with OpenSSL
1.1.1 there is slight change in the behavior it also gives the same error,
but instead of OK it gives different error as "*ca.crt: verification failed*"
as follows.

The 1.1.1 behaviour is correct.  But you also don't seem to have a clear
idea of what it means to "verify" a self-signed certificate.  Indeed
most likely you don't actually want to verify it at all, and are really
trying to solve other problem, which you've decided involves verifying
the certificate in question.  So it is likely best to describe the
*actual* issue you're trying to solve.

That depends heavily on whether you formally interpret a self-signed and
self-issued end cert as a CA issuing itself (thus requiring CA:TRUE and
making it invalid as an end cert) or as an end cert with no separate CA
chain (thus requiring CA:FALSE and making it not trusted as a CA for any
other certificate).

Either way, the typical case is to use such a self-signed and self-issued
cert in the various OpenSSL supported protocols (SSL, TLS, CMS etc.)

However, that said:


openssl verify ./ca.crt

This command verifies the certificate in question by trying to find in
the default store a chain of issuers leading up to a trust anchor
(typically a self-signed root CA).

But a self-signed certificate is self-issued, so unless it is itself
present in the trust store, no possible issuer can be found there.  So
verification must always fail, and so it does.
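
For illustration (assuming the ca.crt file from the question and a certificate
that is otherwise valid), telling verify to trust the certificate itself makes
the chain complete:

$ openssl verify ca.crt                  # fails: no issuer in the default store
$ openssl verify -CAfile ca.crt ca.crt   # the cert acts as its own trust anchor
ca.crt: OK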


why I'm getting this error?

Well ultimately because you don't know what you're trying to do,
but specifically because the certificate is not issued by an
already trusted issuer.


is this an expected behavior in OpenSSL 1.1.1?

Yes.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Which RFC explains how the mac-then-encrypt needs to be implemented

2020-03-02 Thread Jakob Bohm via openssl-users

On 2020-03-03 07:46, Phani 2004 wrote:

Hi Team,

I am trying to implement mac-then-encrypt for aes_cbc_hmac_sha1 
combined cipher. From the code I could understand that the first 16
bytes were being used as explicit IV while decrypting and the HMAC is
done for the 13 byte AAD and the 16 byte Fin record in the Finished message.


Which RFC/section explains this in detail?



For TLS 1.2, this is RFC5246 Section 6.2.3.2

Note that each version of TLS makes arbitrary changes to the record
encryption.
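
As a rough sketch (pseudocode, not OpenSSL API; sizes assume AES-128-CBC with
HMAC-SHA1 as in the cipher mentioned above), the RFC 5246 layout is:

/* MAC input: 13-byte pseudo-header followed by the plaintext fragment
 *   seq_num (8) || type (1) || version (2) || length (2) || fragment      */
mac = HMAC_SHA1(mac_key, seq_num || type || version || length || fragment);

/* What goes on the wire for one record:
 *   explicit IV (16) || CBC-encrypt(fragment || mac || padding || pad_len) */
record = iv || AES_128_CBC(enc_key, iv, fragment || mac || padding || pad_len);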

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Support FFDHE?

2020-02-28 Thread Jakob Bohm via openssl-users

On 2020-02-28 03:37, Salz, Rich via openssl-users wrote:


> Per section Supported Groups in RFC 8446 [1], FFDHE groups could be
supported.


I was wrong, sorry for the distraction.

As others have pointed out, it will be in the next (3.0) release.


Note that the group identifiers for the hardwired DH groups were also
present in TLS 1.2, though it is generally safer to use random groups
not shared with other hosts.

The RFC that introduced these groups also added crazy rules that
signaling support for those groups should disable general FFDH
support, making implementation for TLS 1.2 inadvisable.

With the removal of general FFDH from TLS 1.3, it has now become
advisable to implement for TLS 1.3 session but ignore for TLS 1.2
and below sessions, as if not implemented for those, at least as a
default-on compatibility option.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Static linking libssl.a and libcrypto.a on Linux x64 fails

2019-11-13 Thread Jakob Bohm via openssl-users

On 13/11/2019 15:23, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of 
Aijaz Baig
Sent: Wednesday, November 13, 2019 01:45
I am trying to statically link libssl.a and libcrypto.a into a static library 
of my own
which I will be using in an application (Linux).

You can't link anything into a Linux static library, technically.

ELF static libraries, like the older UNIX static libraries they're descended 
from, are just collections of object files, possibly with some additional 
metadata. (In BSD 4.x, for example, libraries often had an index member added 
using the ranlib utility, so that the linker didn't have to search the entire 
library for each symbol.)

Actually, that is also the format and mechanism with Microsoft Win32 tools,
they just use the DOS-like file name "foo.lib" instead of "libfoo.a" to
maintain makefile compatibility with their older Intel OMF-based toolchains.

The object files inside the archive are in COFF format, as they seem to
have used Unix tools to bring up the initial "NT" operating system
internally back before the initial 1993 release.


On some platforms, where objects can be relinked, the constituent object files 
produced by compiling source files are sometimes combined into a single large 
object. This is most often seen on AIX, which uses IBM's XCOFF object format 
(an enhanced COFF); XCOFF supports relinking objects, so you can bundle objects 
up this way and save some time in symbol resolution when you link against the 
library later. But even on AIX this is commonly seen with dynamic libraries and 
relatively rare for static ones.

Normally the linker isn't even involved in creating a static library. You 
compile sources to objects, and then use ar(1) to create the static library. 
The makefile you posted to StackOverflow doesn't include this step, so it's 
hard to tell what exactly you're doing.

But in any case, linking a static library against another static library is 
essentially a no-op.

What you *can* do, if you don't want to have to list your library and the 
OpenSSL libraries when linking your application, is combine multiple static 
libraries into a single one - provided the object names don't conflict. This is 
straightforward:

$ mkdir tmp; cd tmp
$ ar x /path/to/libssl.a
$ ar x /path/to/libcrypto.a
$ cp /path/to/your/objects/*.o .
$ ar rc ../your-library.a *.o
$ cd ..
$ rm -rf tmp

(Untested, but see the ar manpage if you run into issues.)

That should create a single archive library containing all the objects from the 
three input libraries. Again, it relies on there being no filename clashes 
among the objects; if there are, you'll have to rename some of them.

Note: I seem to recall from a long time ago that GNU ar can combine
static libraries directly (without all those temporary file names).

In BinUtils 2.25 this was apparently done by invoking ar in "MRI
compatibility mode" and using the script command "ADDLIB" inside
the provided MRI-style linker script.  For more details see the
"ar scripts" part of the full GNU BinUtils TexInfo manual.

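For illustration, a small MRI-style script of that kind (library names are
just examples), which GNU ar would run as "ar -M < combine.mri":

CREATE libcombined.a
ADDLIB libssl.a
ADDLIB libcrypto.a
SAVE
END
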
Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: static linking libssl and libcrypto

2019-11-06 Thread Jakob Bohm via openssl-users

Regarding #1: Using libSSL.a instead of libSSL.so should avoid using
libSSL.so by definition.  Otherwise something went seriously wrong
with the linking.  Same for any other library.
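
For example (paths and names are illustrative), listing the static archives
directly on the final link line avoids any reference to libssl.so; the -ldl
and -lpthread libraries are typically needed by libcrypto.a on Linux:

$ cc -o myapp main.o libapp.a /path/to/libssl.a /path/to/libcrypto.a -ldl -lpthread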

On 05/11/2019 18:22, Aijaz Baig wrote:

Thank you for the information.

I will address your points here:
1. I was not aware of the fact that only those symbols that have been 
used get imported when linking a library statically. So that very well 
could be the case. I didn't get what you mentioned about the static 
linking preventing the program from requiring libSSL.so. I mean the 
way I am linking my library should be of no concern to the source code 
right? Or so I think.


2. when I downloaded and compiled the openssl library (from source), I 
followed the INSTALL read me. All it resulted in was libssl.a and
libcrypto.a. I didn't find any file named libSSL.so. So how will this
static library (archive) have references to libSSL.so on the system?? 
I am kind of confused here now.



On Mon, Nov 4, 2019 at 4:59 PM Brice André <br...@famille-andre.be> wrote:


Hello,

It's not an open-ssl issue, but more a compiler specific one.

With info you provided, I cannot tell you what you get as results,
but two points that may help:

 1. regarding the 87 ssl symbols : when you link with a library,
only the useful symbols are imported. So, if the code in you
libAPP library only uses some sparse functions of libSSL, it's
normal you only have corresponding symbols in your final
image. I don't know what you plan to do, but note that
statically linking your dll with open-ssl will prevent this
dll from needing the openssl dynamic library. But it will not
prevent your main program to require the open-ssl library to
run properly if some part of it is dynamically linked with
open-ssl !
 2. depending on how you compiled your libssl.a, it can be either
a static library containing the full openssl binary code, or a
static library that just makes the "link" between you code and
the ssl dynamic library. In the second case, even if you
properly statically link with this lib, you will still need
the dll to execute your program.




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: SHA_CTX h0, h1, h2, h3, h4

2019-10-30 Thread Jakob Bohm via openssl-users

On 30/10/2019 04:04, ratheesh kannoth wrote:

Hi,

1. what are these h0..h4 ?

2. How are they generated ?

3. Could you help to locate code in openssl ?

typedef struct SHAstate_st {
 SHA_LONG h0, h1, h2, h3, h4;
 SHA_LONG Nl, Nh;
 SHA_LONG data[SHA_LBLOCK];
 unsigned int num;
} SHA_CTX;

Thanks,,


Read the specification of the SHA-1 algorithm (either in the FIPS 180-1
standard or in a textbook).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Questions about secure curves

2019-10-16 Thread Jakob Bohm via openssl-users

To clarify, Mozilla (the organization behind Firefox) enforces an unexplained
policy of prohibiting all included CAs from issuing any P-521
certificate, thus effectively banning its use on public servers
regardless of technical abilities.

On 15/10/2019 19:02, Mark Hack wrote:

I believe that Firefox does still support P-521 but Chrome does not.
Also be aware that if you set server-side cipher selection and use
default curves, OpenSSL orders the curves weakest to strongest
(even with @STRENGTH) so you will end up forcing P-256.


On Tue, 2019-10-15 at 17:24 +0200, Jakob Bohm via openssl-users wrote:

On 15/10/2019 15:43, Stephan Seitz wrote:

Hi!

I was looking at the output of „openssl ecparam -list_curves” and
trying to choose a curve for the web server together with
letsencrypt.

It seems, letsencrypt supports prime256v1, secp256r1, and
secp384r1.

Then I found the site https://safecurves.cr.yp.to/.
I have problems mapping the openssl curves with the curve names
from
the web site, but I have the feeling that none of the choices
above
are safe.


safecurves.cr.yp.to lists some curves that Daniel J. Bernstein
(who runs the cr.yp.to domain) wants to promote, and emphasizes
problems with many other popular curves.

prime256v1 = secp256r1 = P-256 and secp384r1 = P-384 are two curves
that the US government (NIST in cooperation with NSA) wants to
promote.

It so happens that the CA/Browser forum has mysteriously decided
that the big (US made) web browsers should only trust CAs that
only accept curves that the US government promotes.  So if you
want your SSL/TLS implementation to work with widely distributed
US Browsers (Chrome, Safari, Firefox, IE, Edge etc.) you have to
use the US government curves P-256 and P-384.  The third US
government curve P-521 is banned by Firefox, so no trusted CA can
support it.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Questions about secure curves

2019-10-15 Thread Jakob Bohm via openssl-users

On 15/10/2019 15:43, Stephan Seitz wrote:

Hi!

I was looking at the output of „openssl ecparam -list_curves” and 
trying to choose a curve for the web server together with letsencrypt.


It seems, letsencrypt supports prime256v1, secp256r1, and secp384r1.

Then I found the site https://safecurves.cr.yp.to/.
I have problems mapping the openssl curves with the curve names from 
the web site, but I have the feeling that none of the choices above 
are safe.




safecurves.cr.yp.to lists some curves that Daniel J. Bernstein
(who runs the cr.yp.to domain) wants to promote, and emphasizes
problems with many other popular curves.

prime256v1 = secp256r1 = P-256 and secp384r1 = P-384 are two curves
that the US government (NIST in cooperation with NSA) wants to
promote.

It so happens that the CA/Browser forum has mysteriously decided
that the big (US made) web browsers should only trust CAs that
only accept curves that the US government promotes.  So if you
want your SSL/TLS implementation to work with widely distributed
US Browsers (Chrome, Safari, Firefox, IE, Edge etc.) you have to
use the US government curves P-256 and P-384.  The third US
government curve P-521 is banned by Firefox, so no trusted CA can
support it.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: error 114

2019-10-03 Thread Jakob Bohm via openssl-users

On 03/10/2019 14:32, russellb...@gmail.com wrote:

fetchmail fails when openssl reports an error 114 (I think)

stat("/etc/ssl/certs/4a6481c9.0", {st_mode=S_IFREG|0644, st_size=1354, ...}) = 0
openat(AT_FDCWD, "/etc/ssl/certs/4a6481c9.0", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=1354, ...}) = 0
read(4, "-BEGIN CERTIFICATE-\nMIID"..., 4096) = 1354
read(4, "", 4096)   = 0
close(4)= 0
stat("/etc/ssl/certs/4a6481c9.1", 0x7ffefc274100) = -1 ENOENT (No such file or 
directory)
write(1, "fetchmail: SSL verify callback d"..., 71) = 71
write(1, "fetchmail: Certificate chain, fr"..., 70) = 70
write(1, "fetchmail: Issuer Organization: "..., 43) = 43
write(1, "fetchmail: Issuer CommonName: Gl"..., 41) = 41
write(1, "fetchmail: Subject CommonName: G"..., 42) = 42
write(1, "fetchmail: SSL verify callback d"..., 71) = 71
write(1, "fetchmail: Certificate at depth "..., 35) = 35
write(1, "fetchmail: Issuer Organization: "..., 43) = 43
write(1, "fetchmail: Issuer CommonName: Gl"..., 41) = 41
write(1, "fetchmail: Subject CommonName: G"..., 42) = 42
write(1, "fetchmail: SSL verify callback d"..., 71) = 71
write(1, "fetchmail: Server certificate:\n", 31) = 31
write(1, "fetchmail: Issuer Organization: "..., 54) = 54
write(1, "fetchmail: Issuer CommonName: GT"..., 41) = 41
write(1, "fetchmail: Subject CommonName: p"..., 45) = 45
write(1, "fetchmail: Subject Alternative N"..., 51) = 51
write(1, "fetchmail: pop.gmail.com key fin"..., 90) = 90
fstat(2, {st_mode=S_IFREG|0644, st_size=6732357, ...}) = 0
write(2, "fetchmail: pop.gmail.com fingerp"..., 52) = 52
write(3, "\25\3\3\0\2\2P", 7)   = 7
write(2, "fetchmail: OpenSSL reported: err"..., 114) = 114


What is an error 114?  Why does openssl look for
/etc/ssl/certs/4a6481c9.1 ?  All the hashes for my certs end in .0

Linux kernel 5.3.2, Slackware latest, fetchmail 6.4.1, OpenSSL 1.1.1d  
10 Sep 2019


This looks like the output of running strace on fetchmail.

114 in the last line is just the number of characters in the error
message printed by fetchmail, the first 33 of those 114 characters
are "fetchmail: OpenSSL reported: err", the remaining 81 are not
shown above.

The hashed name ending in ".1" is OpenSSL looking to see if you
have more than one cert with the hash value 4a6481c9, which does
happen for some users.  If you had such a second cert, OpenSSL
would also load 4a6481c9.2, then 4a6481c9.3 and so on until it
reaches a name you don't have.
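
For reference, the hashed names are derived from the subject hash of each
CA file, and the links can be regenerated with the usual commands.  A sketch,
where SomeRootCA.pem is a placeholder and the printed hash is only an example:

$ openssl x509 -noout -hash -in /etc/ssl/certs/SomeRootCA.pem
4a6481c9
$ openssl rehash /etc/ssl/certs    # "c_rehash" on older installations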

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Proposed change to linux kernel about random numbers

2019-09-18 Thread Jakob Bohm via openssl-users

On 18/09/2019 20:58, Salz, Rich via openssl-users wrote:


Please take a look at 
https://lore.kernel.org/lkml/CAHk-=wiGg-G8JFJ=r7qf0b+utqa_weouk6v+mcmfsljlrq6...@mail.gmail.com/ 
and consider giving your comments.


TL;DR:  see the comment below.

+ * Hacky workaround for the fact that some processes

+ * ask for truly secure random numbers and absolutely want

+ * to wait for the entropy pool to fill, and others just

+ * do "getrandom(0)" to get some ad-hoc random numbers.

+ *

+ * If you're generating a secure key, you'd better ask for

+ * more than 128 bits of randomness. Otherwise it's not

+ * really all that secure by definition.

+ *

+ * We should add a GRND_SECURE flag so that people can state

+ * this "I want secure random numbers" explicitly.


Well, I guess that comes from library authors suddenly ignoring
proper usage of the original *random* API definitions, as well
as all related compatibility needs.

Until recently, the rules were clear:

1. If a program or library wanted seeding or bits for generating long
  term keys and was willing to wait, it would use /dev/random or (if
  in both running kernel and loaded libc) getrandom(2) with GRND_RANDOM
  (and soon GRND_SECURE).  This includes waiting for the OS to say it
  has actually gathered entropy etc.

2. If a program or library wanted to set up an internal RNG that can
  be reseeded later it would use /dev/urandom or (if in both running
  kernel and loaded libc) getrandom(2) with neither GRND_RANDOM or
  GRND_SECURE, nor waiting for the kernel to estimate having entropy.
  Then reseed later when OS has more entropy, but not so often as to
  keep the system dry.

3. If a multipurpose library or tool (such as OpenSSL and the openssl
  command line tool) uses the random bits in both ways, it needs to
  pass the choice onto the caller, like OpenSSL 1.0.x did with the
  difference between RAND_pseudo_bytes and RAND_bytes.

  For example, a TLS or SSH implementation can use the weaker entropy
  source to start handling incoming calls (with session keys) soon
  after boot, while a tool to set up initial private keys at first
  boot would need to wait for the stronger entropy source (which may
  in fact get initial randomness over such an encrypted early
  connection!).
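
In getrandom(2) terms, rules 1 and 2 look roughly like this (a sketch only;
GRND_SECURE is merely the flag proposed in the quoted patch and does not
exist in released kernels, and <sys/random.h> needs glibc 2.25 or later):

#include <sys/types.h>
#include <sys/random.h>

/* Rule 1: long term key material, wait for the blocking pool. */
static ssize_t get_keygen_bytes(void *buf, size_t len)
{
    return getrandom(buf, len, GRND_RANDOM);
}

/* Rule 2: seeding an internal RNG that will be reseeded later. */
static ssize_t get_seed_bytes(void *buf, size_t len)
{
    return getrandom(buf, len, 0);  /* add GRND_NONBLOCK to avoid waiting */
}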


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Openssl-1.0.2t availability

2019-09-09 Thread Jakob Bohm via openssl-users

On 09/09/2019 20:56, Nikki D'Ambra wrote:

Hello,

I was wondering when the latest version openssl, version 1.0.2t will 
be available for public download?


Announcement is 2019-09-10 between 12:00 and 16:00 UTC approximately.  
That's about 17 to 21 hours after your question.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Subject: SSL_connect returned=1 errno=0 state=error: dh key too small

2019-08-29 Thread Jakob Bohm via openssl-users

On 29/08/2019 17:05, Hubert Kario wrote:

On Wednesday, 28 August 2019 23:20:49 CEST Marcelo Lauxen wrote:

...

that server is willing to negotiate ECDHE_RSA ciphers, you'd be better off
disabling ciphers that use DHE and RSA key exchange and using ECDHE_RSA
instead of trying to make 1024 bit work – it really is weak and should not be
used (see also: LOGJAM)



Where in the LOGJAM papers does it say that 1024 bit DH is too little,
provided the group is not shared among millions of servers?

Where does it reliably say that ECDH with a choice of very few published
groups is more secure than DH with random group parameters shared among
a much smaller number of connections and servers?

Also note that the following factors make it necessary to support
traditional DHE for compatibility:

1. Red Hat OpenSSL builds until a few years ago disabled EC support.

2. Microsoft (and the TLS protocol specs themselves) until fairly
  recently allowed ECDHE only with (EC)DSA server certificates, which
  are not as easily available as RSA certs.

3. The "supported groups" TLS extension cannot be used without jamming
  the TLS clients into a short list of fixed DH groups.  Thus servers
  have to ignore that extension and use heuristic guesses to choose the
  DH strength.
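
For servers that do keep traditional DHE enabled for such clients, the
practical middle ground is a locally generated 2048-bit group, created with
something like "openssl dhparam -out dh2048.pem 2048" and loaded along these
lines (a sketch in OpenSSL 1.0.2/1.1.x style; the file name is an example):

#include <stdio.h>
#include <openssl/dh.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>

static int use_local_dh_group(SSL_CTX *ctx, const char *path)
{
    FILE *fp = fopen(path, "r");
    DH *dh;
    int ok;

    if (fp == NULL)
        return 0;
    dh = PEM_read_DHparams(fp, NULL, NULL, NULL);
    fclose(fp);
    if (dh == NULL)
        return 0;
    ok = SSL_CTX_set_tmp_dh(ctx, dh) == 1;  /* the parameters are copied */
    DH_free(dh);
    return ok;
}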


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Acquire Entropy for embedded platform

2019-08-16 Thread Jakob Bohm via openssl-users

Not just dedicated black box RNGs in various chips (such as
multifunction I/O chips or the old Intel BIOS chip), but also
general hardware that happens to have plenty of inherent
randomness due to its design or implementation.

The simple easy to add RNG circuits include some completely
open discrete designs if that's desired.
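
For the rand_pool_acquire_entropy() port quoted further below, such a source
could be hooked in roughly like this.  This is a sketch only: the internal
pool header was internal/rand_int.h in 1.1.1b and moved to crypto/rand.h
later in the 1.1.1 series, read_hw_rng_byte() is a stand-in for the
platform's own TRNG access, and claiming 8 bits of entropy per byte is only
valid if the hardware actually justifies it.

#include <stddef.h>
#include "internal/rand_int.h"  /* crypto/rand.h in later 1.1.1 releases */

extern unsigned char read_hw_rng_byte(void);  /* platform TRNG, placeholder */

size_t rand_pool_acquire_entropy(RAND_POOL *pool)
{
    size_t bytes_needed = rand_pool_bytes_needed(pool, 1 /* entropy_factor */);
    unsigned char buf[64];
    size_t i, n;

    while (bytes_needed > 0) {
        n = bytes_needed > sizeof(buf) ? sizeof(buf) : bytes_needed;
        for (i = 0; i < n; i++)
            buf[i] = read_hw_rng_byte();
        /* Claim full entropy (8 bits per byte) only for a true TRNG. */
        if (rand_pool_add(pool, buf, n, n * 8) == 0)
            break;
        bytes_needed -= n;
    }
    return rand_pool_entropy_available(pool);
}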

On 16/08/2019 12:53, Dr Paul Dale wrote:
I agree.  Using internal hardware in the processor for entropy depends 
on everything.  Each processor needs to be independently quantified 
and not doing so becomes a risk assessment.


As for hardware sources, they are essentially black boxes and could 
contain anything.  It is extremely difficult, if not impossible, to 
tell if the hardware RNG is good or not.  This doesn’t mean that they 
should not be used, it just means that using them involves another 
risk assessment.




On 16 Aug 2019, at 8:42 pm, Jakob Bohm via openssl-users
<openssl-users@openssl.org> wrote:


[Top posting for consistency]

More than OS dependency, this depends on the exact hardware on the platform:
CPU, support chips, peripheral chips.  Usually some of these can provide
much more randomness than the highly predictable time of day/year RTC clock.
And if none do, there are simple RNG hardware designs that could be added
in a corner of the circuit, either on a plugin board or as part of a board
already customized to the application.


On 16/08/2019 11:33, Dr Paul Dale wrote:
Two bits of RTC is nowhere near enough entropy.  I could break two 
bits by hand in a few seconds — there are only four possibilities.


The best outcome is a hardware random number generator.  These are
often not readily available.


Next would be waiting for enough entropy from interrupts, timers and 
the like.


You didn’t specify what operating system/kernel you are using so 
further advice is less than useful.



On 16 Aug 2019, at 7:26 pm, Chitrang Srivastava
<chitrang.srivast...@gmail.com> wrote:


Hi,

I am working on an embedded platform and now ported openssl 1.1.1b
TLS 1.2/1.3 is working fine.
While analysing random numbers, the rand pool initialization call is
where I am returning like this:

size_t *rand_pool_acquire_entropy*(RAND_POOL *pool)
{
        return rand_pool_entropy_available(pool);
}
I noticed that *rand_unix.c* has an implementation which samples 2
bits of RTC; would that give enough entropy, or is there any other
recommendation to have enough entropy for embedded platforms?





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Acquire Entropy for embedded platform

2019-08-16 Thread Jakob Bohm via openssl-users

[Top posting for consistency]

More than OS dependency, this depends on the exact hardware on the platform:
CPU, support chips, peripheral chips.   Usually some of these can provide
much more randomness than the highly predictable time of day/year RTC clock.
 And if none do, there are simple RNG hardware designs that could be added
in a corner of the circuit, either on a plugin board or as part of a board
already customized to the application.


On 16/08/2019 11:33, Dr Paul Dale wrote:
Two bits of RTC is nowhere near enough entropy.  I could break two 
bits by hand in a few seconds — there are only four possibilities.


The best outcome is a hardware random number generator.  These are
often not readily available.


Next would be waiting for enough entropy from interrupts, timers and 
the like.


You didn’t specify what operating system/kernel you are using so 
further advice is less than useful.



On 16 Aug 2019, at 7:26 pm, Chitrang Srivastava
<chitrang.srivast...@gmail.com> wrote:


Hi,

I am working on an embedded platform and now ported openssl 1.1.1b
TLS 1.2/1.3 is working fine.
While analysing random numbers, the rand pool initialization call is
where I am returning like this:

size_t *rand_pool_acquire_entropy*(RAND_POOL *pool)
{
        return rand_pool_entropy_available(pool);
}
I noticed that *rand_unix.c* has an implementation which samples 2
bits of RTC; would that give enough entropy, or is there any other
recommendation to have enough entropy for embedded platforms?


Thanks,










Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: IPv6 address encoding in commonName

2019-08-15 Thread Jakob Bohm via openssl-users

[Top posting to match]

Note that the actual DC name element is still used for actual domains 
when interacting with Microsoft Active Directory authentication, 
including associated X.509 certificates.  So it shouldn't be used for 
something contrary.


The shortest useful form in terms of certificate size would probably be:

Put an informal (but fixed format) description of the address scope in 
the user readable CN in certificates at all levels (rootCA, itemCA and 
end cert).  Put appropriate human readable organization name or 
equivalent in the O name element in rootCA and itemCA.  Make the end 
cert DN as short as possible.


For example "CN=HIT CA 2 example corp,O=example corp,C=TH" -> "CN=HIT 
factory CA xy,O=Example Chon Buri plant,C=TH" -> "CN=HIT CA for 
[...],O=In your device,C=XX" -> "CN=[2001:db8:a:b::],C=XX" (Using "XX" 
to represent the device might be in any country).


Put the actual address in the appropriate SAN in the end cert (this will 
be a binary address).


Put name restrictions in the all the CAs (intermediary and special 
purpose root), these will be a binary address and length for the allowed 
type and the appropriate "nothing" notation for all the other defined 
name restriction types except the distinguished name type.


Do not include ID number fields except the certificate serial number, 
which also protects the signature hash algorithm via randomization 
(since SHA-1 phase out began, but potentially useful for modern algorithms).


Use a short offline-compatible revocation URL such as "ex.th/xy.crl" for 
hierarchies run by the hypothetical EXample conglomerate in THailand, 
where the xy part is a very short name assigned by that conglomerate to 
the issuing central CA or factory intermCA.
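
For the end cert itself, the binary IPv6 SAN can be requested directly from
an OpenSSL 1.1.1 command line.  A sketch, where the key/CSR file names and
addresses are placeholders, and where the name restrictions on the rootCA
and itemCA still have to be configured as nameConstraints when those CA
certificates are issued:

$ openssl req -new -key device.key -out device.csr \
      -subj "/C=XX/CN=[2001:db8:a:b::]" \
      -addext "subjectAltName = IP:2001:db8:a:b::1"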


On 15/08/2019 18:49, Robert Moskowitz wrote:



On 8/14/19 6:47 PM, Michael Richardson wrote:

Robert Moskowitz  wrote:
 > I am fiddling around with an intermediate CA signing cert that 
the CA's
 > 'name' is it HIP (RFC 7401) HIT which is a valid IPv6 address. 
Actually a
 > Hierarchical HIT as in draft-moskowitz-hierarchical-hip (to be 
revised soon).


 > For a client cert, it would be easy to put the HIT in 
subjectAltName per RFC
 > 8002 (with a null subjectName), but a CA cert MUST have a 
non-empty

 > subjectName.

 > Thus all I want in this subjectName is commonName with the HIT.
 > I am looking for examples of IPv6 addresses in commonName.

I thought that RFC3779 did exactly what you want, but it does not 
define new

Subject DN, but rather a new extension that will be bound to the Subject.
(I was surprised that RFC3779 was not in the SIDR WG's list of 
documents,but

I guess it preceeded the SIDR working group, and occured in PKIX)

In ANIMA's ACP document, we have an abomination that leverages 
rfc822Name,

mostly because we figure the odds of getting anything else through
off-the-shelf CAs is nil.
Note to consumed with things in your stomach:
https://tools.ietf.org/html/draft-ietf-anima-autonomic-control-plane-20#section-6.1.2

Jakob Bohm via openssl-users  wrote:
 > As the author of a proposal in this area, could you define a 
notation
 > for IPv6 DNs, perhaps one that actually reflects the 
hierarchical nature

 > of IPv6 addresses?

RFC3779 does some of that, but not in the DN itself.

 > You could take inspiration from the (unfortunately rarely used)
 > hierarchical DN representation of DNS names (this used the DNS
 > specific DC name components).  Overall the goal is to allow X.500
 > distinguished name restrictions to work correctly.

Yes, we could abuse the DC component.
Were you thinking about:
  DC=2001/DC=0db8


This looks closest to what is needed here, as the prefix for HHITs is 
currently proposed at /64.


So it would be DC=2001/DC=0024/DC=0028/DC=0014

But the OID for DC is big: 0.9.2342.19200300.100.1.25

Ouch.

So I will research this more, but for this early stage in the 
development I will use:


CN=2001:24:28:14::/64

Thanks for all the comments here.



 > In practice you could follow the nibble notation as already used
 > for delegation of IPv6 reverse lookups in DNS.

so more correctly:
  DC=2/DC=0/DC=0/DC=1/DC=d/DC=b/DC=8

 > However for the CN in the end cert you could perhaps use the full
 > DNS reverse IPv6 name
 > 
"x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.ip6.arpa"
 > or the URL/Mail notation 
"[:::::::]"

 > where the hex notation shall be the shortest form permitted by the
 > IPv6 notation spec.

Bob, this seems like the best immediate hack to me.



Enjoy

Jakob

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: openssl req error with DN having a / in it

2019-08-14 Thread Jakob Bohm via openssl-users

On 15/08/2019 00:33, Jordan Brown wrote:

On 8/14/2019 2:11 PM, Robert Moskowitz wrote:

[...]
   commonName="/CN=IPv6::2001:24:28:24/64"
[...]
req: Hit end of string before finding the equals.
problems making Certificate Request 


Some systems present distinguished names using slashes as separators.  
I assume that that's what you're running into here, that your string 
is being processed as a valid RDN "CN=IPv6::2001:db8:28:24" and an 
invalid RDN "64".


You'll need to quote the slash.  I don't happen to know how, but my 
bet would be either \/ or %2F.



This is why my mail proposed CN=[2001:24:28:24::9] with no
slashes for an end cert with a specific IP and a human readable
name that would sort with related names in the CA's CN element.
Also note that the "IPv6:" notation might confuse OpenSSL or
OpenSSL derived string parsing code.

Certificates for Bluetooth MAC addresses would be a different
notation such as CN=DC-BA-98-76-54-32 for a 48-bit MAC address,
or (to reuse name restrictions on via IPv6 SANs), the equivalent
[fe80::dcba:98ff:fe76:5432].

I don't understand what use case Moskowitz wants for a subnet
mask length such as /64 in an end cert.

P.S. 2001:db8::/32 is the official prefix for use in examples.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: IPv6 address encoding in commonName

2019-08-14 Thread Jakob Bohm via openssl-users

On 14/08/2019 04:55, Robert Moskowitz wrote:
I am fiddling around with an intermediate CA signing cert where the
CA's 'name' is its HIP (RFC 7401) HIT, which is a valid IPv6 address.
Actually a Hierarchical HIT as in draft-moskowitz-hierarchical-hip (to 
be revised soon).


For a client cert, it would be easy to put the HIT in subjectAltName 
per RFC 8002 (with a null subjectName), but a CA cert MUST have a 
non-empty subjectName.


Thus all I want in this subjectName is commonName with the HIT.

I am looking for examples of IPv6 addresses in commonName.

My searches today have come up empty.


If no one comes up with good established practice examples, here are
some ideas you may work on.

For CA certificates that are not self-signed end certs, it would be
practical to use a CN that is intentionally different from the end
certs, such as "Example corp HIP CA for 2001:db8::/48" .

As the author of a proposal in this area, could you define a notation
for IPv6 DNs, perhaps one that actually reflects the hierarchical nature
of IPv6 addresses?

You could take inspiration from the (unfortunately rarely used)
hierarchical DN representation of DNS names (this used the DNS
specific DC name components).  Overall the goal is to allow X.500
distinguished name restrictions to work correctly.

In practice you could follow the nibble notation as already used
for delegation of IPv6 reverse lookups in DNS.

However for the CN in the end cert you could perhaps use the full
DNS reverse IPv6 name
"x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.ip6.arpa"
or the URL/Mail notation "[:::::::]"
where the hex notation shall be the shortest form permitted by the
IPv6 notation spec.

Examples of boundaries where hierarchical divisions would be practical
(if making a new design, it should be useful outside the HIT/HIP
standards):

1. After the 1st nibble to cater for IANA design assignments (0 is
  special, 2 and 3 used for current live assignments, f used for
  special transmission modes such as multicast and local segment).

2. After the 2nd to 4th nibble to reflect assignments to continents
  (RIRs).  Different continents may operate under conflicting legal
  regimes for internal purposes, such as certificate privacy.

3. After the 4th to 6th nibble to reflect typical operator (LIR)
  assignments.

4. After the 6th to 8th nibble to reflect customer or other specific
  net assignments.

5. After the 14th nibble to reflect a single IEEE assigned MAC prefix
  (For example fe80:0:0:0:3a94:ed00::/88 would match the link-local
  addresses of NETGEAR hardware using the 38-94-ED OUI block).

6. After the 18th nibble to reflect a single IEEE assigned MAC
  prefix excluding similar-looking non-MAC addresses (For example
  fe80:0:0:0:3a94:edff:fe00:0/104 for that same block).

7. Even later nibbles to reflect assignment of part of an OUI block
  to a factory or production line that generates certificates for
  devices as they are manufactured.





Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Serialize/Deserialize SSL state

2019-08-10 Thread Jakob Bohm via openssl-users

On 09/08/2019 23:21, Felipe Gasper wrote:

On Aug 9, 2019, at 3:42 PM, Osama Mazahir via openssl-users 
 wrote:

Is there a way to serialize and deserialize the ssl_st state (i.e. including 
any child objects)?
  
Background: I would like to hand off all the SSL state (along with my own
managed state, file descriptors, etc.) to another running Linux process (I
will handle the IPC handoff).  The connection already had its handshake
completed, and app data flow had already occurred (i.e. it is not a new or
early'ish context).  So, trying to see if it is possible to serialize the
openssl state, shove it through a unix domain socket to the target process
and then have the target process unpack the openssl state and resume IO.

For what it’s worth, I have also wished for something like this, where I could 
pass a file descriptor as well as the OpenSSL state over a socket to a separate 
process.


A possible workaround is to run the SSL code in a dedicated process
and hand around a pipe or unix domain socket carrying the plaintext.

If this is server side, the SSL process could be run under a
dedicated UID which has exclusive access to load the private key etc.,
but no access to the stored application data.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: OpenSSL Security Advisory

2019-07-30 Thread Jakob Bohm via openssl-users

Having reviewed the git commit for 1.1.1 I notice the following issue:

The environment variables that usually point to the secure administrator
directories (such as "Program Files") are not themselves secured, and not
intended as a secure means of obtaining these directory locations, which
are (by definition) subject to change via system configuration (initial
or later!).

There are official system library calls to obtain the actual locations
as follows:

1. If looking for the location where a program is itself installed, use
  the GetModuleFileNameW(own-hinstance) call to obtain the path to one's
  own DLL or EXE.  This automatically adapts to wherever the DLL or EXE
  is copied or moved.  This is a kernel32.dll API and returns a location
  with security very close to that of the binary itself.  The name
  returned is from the in-process instance of the dynamic linker.

2. If looking for the location where the running program's top level file
  (such as openssl.exe or some-program-loading-an-openssl-using-plugin.exe)
  is installed, use that same call but pass NULL for the hinstance parameter.

3. If looking for the system-wide secured "/etc" directory, use the
  GetSystemDirectoryW() call and append the fixed string "\\Drivers\\etc" .
  This location is permanently restricted to the system administrators and
  already contains a few traditional unix files such as "hosts". This too
  is a kernel32.dll API.  The name returned is from a system internal value
  set during OS boot.

4. If looking for the directory intended to hold system-wide configuration
  and data files, use the SHGetFolderPathW(CSIDL_COMMON_APPDATA) API from
  shfolder.dll or shell32.dll (fallback) to ask for the "all-users data
  directory", append a company/project name (such as "\\OpenSSL") and
  specify an appropriate ACL in the security argument to CreateDirectoryW()
  (if the directory doesn't already exist with a user-modified ACL,
  CreateDirectoryW will atomically detect this and return a specific error
  code in the per thread GetLastError() variable).  Note that mkdir()
  only creates one level of directories per invocation and you may want
  different ACLs when creating missing parent directories.  The values
  returned by SHGetFolderPathW() are typically from one or more
  Administrator controlled registry keys.

Some of the above APIs may require their return value to be canonicalized
via the GetFullPathNameW() API in corner cases, retaining the result in
a global variable is advisable.
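
A minimal sketch of points 1 and 4 (error handling trimmed, and the
"OpenSSL" subdirectory name is just an example):

#include <windows.h>
#include <shlobj.h>   /* SHGetFolderPathW, CSIDL_COMMON_APPDATA; link shell32 */

/* Point 1: directory containing this DLL or EXE (pass NULL for point 2). */
static DWORD get_own_module_dir(HMODULE self, wchar_t *buf, DWORD len)
{
    DWORD n = GetModuleFileNameW(self, buf, len);

    while (n > 0 && buf[n - 1] != L'\\')
        n--;                        /* strip the file name component */
    if (n > 0)
        buf[n - 1] = L'\0';
    return n;
}

/* Point 4: all-users data directory, e.g. C:\ProgramData\OpenSSL . */
static BOOL get_common_data_dir(wchar_t buf[MAX_PATH + 16])
{
    if (SHGetFolderPathW(NULL, CSIDL_COMMON_APPDATA, NULL,
                         SHGFP_TYPE_CURRENT, buf) != S_OK)
        return FALSE;
    lstrcatW(buf, L"\\OpenSSL");    /* caller creates it with a chosen ACL */
    return TRUE;
}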

On 30/07/2019 16:27, OpenSSL wrote:

OpenSSL Security Advisory [30 July 2019]


Windows builds with insecure path defaults (CVE-2019-1552)
======



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Will my application be FIPS 140-2 Certified under following conditions?

2019-07-08 Thread Jakob Bohm via openssl-users

On 08/07/2019 10:12, Dr Paul Dale wrote:
I have to disagree with the “decision not to make a FIPS module for 
the current 1.1.x series” comment.  Technically, this is true.  More 
practically, 3.0 is intended to be source compatible with 1.1.x.  Thus 
far, nothing should be broken in this respect.



The key word is "intended".

If support for 1.0.2 is required beyond the end of this year, it is 
available: https://www.openssl.org/support/contracts.html



I am unsure if this is an affordable route for all affected users
and distributions (especially non-profit OS distributions).



I’d also be interested to know what is wrong with the policy page?



Only that it states the policy of stopping 1.0.2 support at end of
2019, which would be fine if a FIPS-capable replacement had been
ready by now (as is fortunately the case for non-FIPS).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Will my application be FIPS 140-2 Certified under following conditions?

2019-07-07 Thread Jakob Bohm via openssl-users

On 06/07/2019 16:30, Salz, Rich wrote:


 >> They would have to get their own validation, their own lab to verify, 
etc., etc.

That seems to contradict the other answer, which is that legally, the
FIPS cannister (properly built) can be used with any software outside
the cryptographic boundary, the soon-to-be-deprecated OpenSSL 1.0.2
library just being the normal default.
   
You are correct.  My statement, which was technically incorrect, is more likely to be realistic :)
   

The point is that some people may soon be in a desperate need to find a

 FIPS-capable replacement for OpenSSL 1.0.x.
   
It seems to me that the easiest thing to do is maintain that release of OpenSSL by themselves.


Which would be another variation of such unofficial work.



If someone is thinking of fitting OpenSSL 1.1.x to become a user of the 
existing FOM, then they will probably find it easier to, well, just maintain 
what currently works.

Just because something is past "end of life" does not mean that anyone's 
ability to use it is revoked.  It just means that keeping it working is their 
responsibility.  Anyone can use the FOM until it expires (sunsets is the term used), 
which lasts one year beyond 1.0.2 as I recall.  See 
https://www.openssl.org/blog/blog/2018/05/18/new-lts/ for some more information on this.




That policy page is half the problem, the other half being the decision
not to make a FIPS module for the current 1.1.x series.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Will my application be FIPS 140-2 Certified under following conditions?

2019-07-06 Thread Jakob Bohm

On 04/07/2019 16:44, Salz, Rich wrote:

Is the use of OpenSSL an actual legal requirement of the certification of

 the FIPS object module, or just the easiest way to use it?
   
I'm not sure who you are asking this.


The exiting FIPS validations for OpenSSL only cover the 1.0.2 based source code.
   

Difference would be particularly significant in case someone created code

 to use the validated FOM 2.0 module with the OpenSSL 1.1.x feature
 enhancements (as the project itself has indicated no desire to do so).
   
They would have to get their own validation, their own lab to verify, etc., etc.





That seems to contradict the other answer, which is that legally, the
FIPS cannister (properly built) can be used with any software outside
the cryptographic boundary, the soon-to-be-deprecated OpenSSL 1.0.2
library just being the normal default.

If the other answer is correct, it should be perfectly OK (legally) for
someone to modify OpenSSL 1.1.1 source code to call the FIPS canister
for everything, and the result should be an application that is as FIPS
"compliant" as an application that runs something unrelated (such as
Apache mod_ssl) on top of OpenSSL-1.0.2 on top of FOM 2.x , thus no new
validation required.

The point is that some people may soon be in a desperate need to find a
FIPS-capable replacement for OpenSSL 1.0.x.



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: Will my application be FIPS 140-2 Certified under following conditions?

2019-07-04 Thread Jakob Bohm via openssl-users

Is the use of OpenSSL an actual legal requirement of the certification of
the FIPS object module, or just the easiest way to use it?

Difference would be particularly significant in case someone created code
to use the validated FOM 2.0 module with the OpenSSL 1.1.x feature
enhancements (as the project itself has indicated no desire to do so).

On 04/07/2019 04:09, Kyle Hamilton wrote:
Also, on question b: No.  You need to build a compatible version of 
openssl as specified in the User Guide, and link that version.  
FIPS_mode_set() tells the library to always and only use the 
implementations in the FIPS canister; the canister does not replace 
the library entirely.


-Kyle H

On Wed, Jul 3, 2019, 11:55 Dipak B wrote:


Dear Experts,

Can you please help me with the following question?

My win32 desktop application uses 'libcurl' to interact with web
service, in order to get my application FIPS 140-2 certified,
following is the plan which I arrived at after going through the
'User Guide' and 'Security Policy' pdfs.

Plan:
a. After verifying HMAC-SHA1 of openssl-fips-2.0.16.tar.gz, build
it to generate fipscanister.lib (FOM) as windows static library.
b. Build libcurl as windows static library using above
fipscanister.lib
c. Link my desktop application with above libcurl.lib after adding
FIPS_mode_set()

Questions:
a. On following points a, b,c, can I confirm that my application
is FIPS 140-2 certified?
b.  fipscanister.lib is always static library and it can be
substituted for libssl.lib / ssleay.lib?





Re: openssl-fips configure parameters to force IANA cipher suite compliance

2019-07-03 Thread Jakob Bohm via openssl-users

On 02/07/2019 22:13, Larry Jordan via openssl-users wrote:


I want to build an openssl-fips canister to force IANA cipher suite 
compliance.


With the help of an openssl-iana mapping 
(https://testssl.sh/openssl-iana.mapping.html) I can identify the 
corresponding OpenSSL cipher suites.



Not sure what you mean?  To my knowledge IANA doesn't (and has no authority
to) define TLS compliance requirements.  They merely keep a database of
various numbers and names assigned in Internet standards ("Internet Assigned
Numbers Authority").

And the openssl-fips canister is a very specific, legally defined exact binary
that has gone through expensive US-government tests to allow use by said US
government, with absolutely no changes permitted, even to fix security bugs!

Now it so happens that since long before there were any Internet standards
for SSL/TLS, the OpenSSL/SSLeay family of libraries have used slightly
different names for the numbered cipher suites, especially the ones that
existed before official IETF standards were established.

The key spelling differences obviously being:

1. OpenSSL doesn't put TLS_ in front and _WITH_ in the middle of all the
  names, because that just gets in the way when administrators type in
  configuration changes on their servers.

2. OpenSSL uses dash, not underscore.

3. Because it was the historic default, OpenSSL lets you omit the "RSA_" and
  CBC_ in those cipher suite names.

4. OpenSSL omits the _ or - between AES and the bit count, just like IETF
  already does with SHA.

5. For triple-DES (once the strongest common algorithm, and thus still
  needed to talk to older systems), OpenSSL historically considered it a
  variant of the CBC mode for DES, not a variant of DES for CBC mode.
   Thus the oldest 3DES_CBC cipher suites use DES-CBC3 in their names,
  while the new ones follow IETF naming.  Similarly, DHE_ is spelled EDH-
  in older suites.

6. Whatever user interface a program runs on top of OpenSSL can display the
  names however it wants.

7. If a program wants to map the IETF names to the OpenSSL names, it can
  probably start by doing the string substitutions in differences 1 to 4
  above, then add some special cases for things like
 TLS_DHE_RSA_WITH_3DES_CBC_SHA maps via rules to
     DHE-RSA-3DES-CBC-SHA which maps via special case lookup to
 EDH-RSA-DES-CBC3-SHA

8. To be absolutely sure you handle all known cases, you have to find the
  OpenSSL source file that defines the names and check it against the IANA
  database of cipher suite numbers from IETF standards and non-IETF
  extensions (such as Camellia and GOST cipher suites).
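
A quick way to cross check such a mapping on an OpenSSL 1.1.1 build is the
ciphers command (a sketch; -stdname prints the IETF/IANA spelling next to
the OpenSSL one, -V prints the two-byte code points, and the suite names
here are just examples):

$ openssl ciphers -stdname 'ECDHE-RSA-AES256-GCM-SHA384'
$ openssl ciphers -V 'AES128-SHA:ECDHE-RSA-AES128-GCM-SHA256'

The resulting OpenSSL spellings can then be forced as the only enabled
suites via SSL_CTX_set_cipher_list() or the application's cipher string.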



IANA (defining RFC, code point)                              OpenSSL

TLS_RSA_WITH_AES_128_CBC_SHA            (RFC 5246, 0x2f)     AES128-SHA
TLS_RSA_WITH_AES_128_CBC_SHA256         (RFC 5246, 0x3c)     AES128-SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256         (RFC 5246, 0x3d)     AES256-SHA256
TLS_RSA_WITH_AES_256_GCM_SHA384         (RFC 5288, 0x9d)     AES256-GCM-SHA384
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256     (RFC 5246, 0x67)     DHE-RSA-AES128-SHA256
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256     (RFC 5246, 0x6b)     DHE-RSA-AES256-SHA256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384     (RFC 5288, 0x9f)     DHE-RSA-AES256-GCM-SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 (RFC 5289, 0xc023)   ECDHE-ECDSA-AES128-SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (RFC 5289, 0xc02b)   ECDHE-ECDSA-AES128-GCM-SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 (RFC 5289, 0xc024)   ECDHE-ECDSA-AES256-SHA384
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (RFC 5289, 0xc02c)   ECDHE-ECDSA-AES256-GCM-SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256   (RFC 5289, 0xc027)   ECDHE-RSA-AES128-SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   (RFC 5289, 0xc02f)   ECDHE-RSA-AES128-GCM-SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384   (RFC 5289, 0xc028)   ECDHE-RSA-AES256-SHA384
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   (RFC 5289, 0xc030)   ECDHE-RSA-AES256-GCM-SHA384


How would I configure openssl-fips to force this precise compliance, 
eliminating all other cipher suites?


Thank you.

--Larry

C++ Developer




Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: TLSv12 Client Certificate Selection Behavior !!

2019-06-11 Thread Jakob Bohm via openssl-users

On 11/06/2019 19:21, Viktor Dukhovni wrote:

On Jun 11, 2019, at 1:02 PM, Michael Wojcik  
wrote:

And, of course, there are no doubt still people out there running internal CAs 
that generate X.509v1 certs, which won't have any extensions at all. No KU, no 
EKU, no SAN, no SKID/AKID ... Presumably a check for proper KU on the client 
certificate would be bypassed if the client cert is v1 - but then using a v1 
certificate is another violation of RFC 5246 (7.4.2) that OpenSSL probably 
should not enforce.

Yes, v1 certs would get a free ride.  The reason to enforce KU
in client certs would be that client certs are not infrequently
(though not always) optional, and it can be better to not send
any client cert, than to send one the server will reject.

RSA client certs without digital signature in KU are increasingly
not interoperable as more server implementations are checking the
keyUsage these days.  So at some point it makes sense to consider
not offering such (client) certs to the peer server.

But at the end of the day, the user should not have configured
such a client cert in the first place, so it may also make sense
to just leave the responsibility with the user.


Note that the most common variant of encrypt-only RSA client certs
is probably encrypt-only e-mail client certs with other client uses
tacked on.

Such certificates are typically paired with a "same logical
identity" sign-only e-mail/client certificate, with the key
difference being that the encrypt-only private key is kept around
for a lot longer in order to decrypt stored e-mails that are
(wisely) stored only in their original encrypted form.

In that /specific/ case, attempting to use the encrypt-only cert
as a TLS client cert is typically some kind of certificate selection
logic error, such as a Web client blindly using the locally
stored long-term decryption key instead of the signing key stored
on a removable, but also losable, smart card.  However, there may
be company-internal reasons to do so deliberately in order for
background activities to operate when the user (and smartcard) is
"away from terminal".

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: TLSv12 Client Certificate Selection Behavior !!

2019-06-11 Thread Jakob Bohm via openssl-users

On 11/06/2019 12:50, Hareesh D wrote:
A TLSv12 client is sending an RSA certificate even when it doesn't have
the digitalSignature bit in the keyUsage extension. But RFC5246 section 7.4.6
says it is a MUST condition for a client to send an RSA certificate with
the digitalSignature bit set in the keyUsage extension.


1. Though the server is rejecting such certificates, not sure why the client
sends such certificates even when there is a MUST condition on this
point. Should the client send an empty certificate list instead of sending
a wrong one? The client has the provision of sending an empty certificate
list when it doesn't have a suitable certificate according to the
certificate request.


2. Also, the client is not checking the certificate_types requested in the
certificate request message, and the server is not validating whether the
response matches the requested type. Consider a server requesting only a
DSA certificate: the client sends an RSA certificate and the server accepts it.


Is this behavior valid and according to RFC ?

There's an overarching OpenSSL policy that certificate checks are
done exclusively by the relying end (for client certs, that's the
server), except when certified end is trying to choose from
multiple certificates.

Thus with only one certificate available, the OpenSSL sends the
(untrusted, and in this case inappropriate) certificate, just in
case the server was somehow configured to make a special exception
for this particular case.
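
An application that wants the stricter behaviour can do the check itself
before configuring the certificate.  A sketch, assuming OpenSSL 1.1.x where
X509_get_key_usage() is available:

#include <openssl/ssl.h>
#include <openssl/x509v3.h>

/* Only offer the client cert if its keyUsage permits digital signatures.
 * X509_get_key_usage() returns all bits set when no keyUsage extension is
 * present, so v1 and KU-less certs still get their free ride. */
static int use_client_cert_if_suitable(SSL_CTX *ctx, X509 *cert, EVP_PKEY *key)
{
    if ((X509_get_key_usage(cert) & KU_DIGITAL_SIGNATURE) == 0)
        return 0;  /* better to send no cert than one the server rejects */
    if (SSL_CTX_use_certificate(ctx, cert) != 1)
        return 0;
    return SSL_CTX_use_PrivateKey(ctx, key) == 1;
}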

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Compile EC(Elliptic Curve) crypto

2019-06-03 Thread Jakob Bohm via openssl-users

On 03/06/2019 14:35, Chitrang Srivastava wrote:

Hi,

I am porting Openssl 1.1.1b for an embedded platform.
I see that the EC folder generates some of its functions in assembly;
these functions are generated based on the target environment, e.g.
x86-64/ppc/armv8 etc.

Is there any C version of these functions to use directly?
Thanks,


All algorithms etc. are available as C code; the assembler optimizations
are used if they exist for a compilation target and have not been
explicitly disabled with the configure option "no-asm".

Because embedded platforms often have slow CPUs, keeping the assembler
optimizations enabled is especially advantageous on such systems.
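
For completeness, a port that cannot use the assembler files at all would
configure along these lines (a sketch; the target, cross prefix and install
prefix are only examples for an ARM Linux toolchain):

$ ./Configure linux-generic32 no-asm \
      --cross-compile-prefix=arm-linux-gnueabi- --prefix=/opt/openssl-arm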

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Reg missing rc4-ia64.pl in openssl 1.1.1

2019-05-31 Thread Jakob Bohm via openssl-users

On 30/05/2019 02:10, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of J. 
J. Farrell
Sent: Wednesday, May 29, 2019 15:02
On 29/05/2019 18:39, ramakrushna mishra wrote:

In OpenSSL 1.1.1, the file "rc4-ia64.pl" is missing. This causes a degradation of
performance on AIX. ...

The AIX port to Itanium was never released as a product, and was abandoned
altogether in 2002; I'm surprised that a degradation of performance on it
matters to anyone.

What, no love for unobtainable archaic platforms?

Note that while the OP may actually be running AIX (old beta or special
contract), IA64 had publicly released OS versions of Linux, HP-UX, Windows
(NT 5.1 to NT 6.1) and possibly others. Linux IA64 may or may not be in
some supported distributions (in addition to site OS builds), I don't know
the status of HP-UX IA64, and Windows NT 6.1 (== Server 2008 R2) is still
publicly supported by Microsoft with options to buy private support years
on top.


Personally, I'm bemoaning the lack of a rc4-romp.pl for AIX 2 on my RT PC. And 
the shocking lack of assembly modules for the PDP-11.

In all seriousness: It's pretty cool that OpenSSL still includes assembly 
modules for what are now rather niche architectures such as MIPS and PA-RISC. 
And in case all this is too convoluted for the OP, rc4-ia64.pl doesn't apply to 
extant AIX systems, which are all some variant of POWER, not IA64.

MIPS is still a common platform in Linux-based routers, which generally
use OpenSSL as their main cryptographic library for everything from WiFi
security to OpenVPN and browser configuration.  As these are often speed
constrained chips, assembler optimizations are important.  While ARM was
making inroads in this market, RISC-V or an Asian design are more likely
successors for low cost low power router hardware.

(OK, somewhere someone probably has one of the other AIX variants running - 
AIX/390 might be the last non-POWER AIX to die, if I had to bet. But probably 
not AIX IA64.)



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Performance Issue With OpenSSL 1.1.1c

2019-05-29 Thread Jakob Bohm via openssl-users

On 28/05/2019 23:48, Steffen Nurpmeso wrote:

Jay Foster wrote in <84571f12-68b3-f7ee-7896-c891a2e25...@roadrunner.com>:
  |On 5/28/2019 10:39 AM, Jay Foster wrote:
  |> I built OpenSSL 1.1.1c from the recent release, but have noticed what
  |> seems like a significant performance drop compared with 1.1.1b.  I
  |> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
  |> few seconds, but with 1.1.1c, it takes several minutes.
  |>
  |> I also noticed that with 1.1.1b, the CFLAGS automatically included
  |> '-Wall -O3', but with 1.1.1c, '-Wall -O3' is no longer included in the
  |> CFLAGS.  was this dropped?  I  added '-Wall -O3' to the CFLAGS, but
  |> this did not seem to have any affect on the performance issue
  |> (unrelated?).
  |>
  |> This is for a 32-bit ARM build.
  |>
  |> Jay
  |>
  |I think I have tracked down the change in 1.1.1c that is causing this.
  |It is the addition of the DEVRANDOM_WAIT functionality for linux in
  |e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
  |a select() call on /dev/random.  After this eventually wakes up, it then
  |reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but instead
  |just read from /dev/urandom.  Is there more information about this
  |change (i.e., a rationale)?  I did not see anything in the CHANGES file
  |about it.

I do not know why lighttpd ends up on /dev/random for you, but in
my opinion the Linux random stuff is both sophisticated and sucks.
The latter because (it seems that many) people end up using
haveged or similar to pimp up their entropy artificially, whereas
on the other side the initial OS seeding is no longer truly
supported.  Writing some seed to /dev/urandom does not bring any
entropy to the "real" pool.

Something equivalent to your program (but not storing a bitcount field)
used to be standard in Linux boot scripts before systemd.  But it
typically used the old method of just writing the saved random bits
into /dev/{u,}random.
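
Roughly this, in the traditional init scripts (a sketch; the seed file path
varies by distribution):

# at boot: mix the stored seed back in (this credits no entropy by itself)
cat /var/lib/urandom/random-seed > /dev/urandom
# afterwards, and again at shutdown: store a fresh seed for the next boot
dd if=/dev/urandom of=/var/lib/urandom/random-seed bs=512 count=1 2>/dev/null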

This makes me very surprised that they removed such a widely used
interface; can you point out when that was removed from the Linux
kernel?

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: why does RAND_add() take "randomness" as a "double"?

2019-05-22 Thread Jakob Bohm via openssl-users

On 22/05/2019 19:32, Dennis Clarke wrote:



Good options inspired by other cryptographic libraries include:

- Number of bits of entropy passed in call (For example, a
  perfectly balanced coin flipper could provide the 4 byte
  values "head" or "tail" with an entropy of 1 bit).


Let's drop the coin flipper. It was an off hand remark and by now we
all know there ain't no such thing as a good coin flip for rng.

    See Professor Persi Diaconis at Stanford for that :
    https://www.youtube.com/watch?v=AYnJv68T3MM

Bell's theorem and kolmogorov aside get a radiation decay source as
that is really the *only* real rng that we know of.
Or that I know of.   http://www.fourmilab.ch/hotbits/hardware.html

The coin flipper, even if theoretically problematic, is the standard
statistical example used to describe a 1-bit-at-a-time hardware RNG.

It includes a nice conceptual model to discuss hardware bias (using
Shannon's entropy formula etc.).  Actual 1-bit sources include the
classic semiconductor shot noise fed to a comparator and some primitive
implementations of radioactive RNGs.

Also, radioactive sources are an unacceptable danger in many of the
embedded and portable applications most likely to lack floating point
support.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: why does RAND_add() take "randomness" as a "double"?

2019-05-22 Thread Jakob Bohm via openssl-users

On 21/05/2019 16:44, Salz, Rich via openssl-users wrote:

When I overhauled the RAND mechanism, I tried to deprecate this use of floating 
point, in favor of just a number from 0 to 100 but was voted down.

It *is* stupid.  Luckily, on a modern system with system-provided randomness to 
seed the RNG, you never need this call.




Perhaps it would have been more acceptable to use a binary base,
instead of a decimal percentage, as there is nothing inherently
decimal about this value.

Good options inspired by other cryptographic libraries include:

- Number of bits of entropy passed in call (For example, a
 perfectly balanced coin flipper could provide the 4 byte
 values "head" or "tail" with an entropy of 1 bit).

- 256th of bits ditto (for example a coin flipper with a known
 slight imbalance could report 252/256th of a bit for each flip
 included in the buffer).

- 32th of bits ditto (makes the "100%" case equal to
 (bytecount << 8)).

In each of those 3 cases, the internal measure of "entropy
left" would be in that same unit, and a compatibility mapping
for the old API would do the conversion of the double as a
pure inline macro that doesn't trigger "float used" compiler
magics in programs that don't use it.
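
For comparison, with the current API the coin-flipper example has to be
expressed in (fractional) bytes, because RAND_add() measures its estimate
in bytes of entropy.  A sketch, with a hypothetical coin-flip source:

#include <openssl/rand.h>

/* Each 4-byte "head"/"tail" sample carries 1 bit, i.e. 1/8 byte, of
 * entropy in RAND_add()'s byte-based units. */
static void add_coin_flips(const unsigned char *samples, int nsamples)
{
    RAND_add(samples, nsamples * 4, nsamples / 8.0);
}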

Clarifying notes:

- The current limit of considering only 32 bytes of entropy
 is an artifact of the current set of RNG algorithms, and
 should not be exposed in the API design.  For example
 future addition of post-quantum algorithms may necessitate
 having an RNG with an internal state entropy larger than
 256 bits.

- Future RNG implementations may include logic to safely
 accumulate obtained entropy into "batches" before updating
 the RNG state, as this may have cryptographic benefits.

- The use of a dummy double to force the alignment of
 structures and unions to the "highest known" value can
 be trivially replaced by another type where it is not
 already treated as "not actually floating point
 operations" by the compiler.  For example by passing
 "-Ddouble=long long" as a compiler option.

- The use of floating point registers in CPU-specific
 vector unit optimizations can be readily avoided by
 a no-asm compile.

- Floating point calculations in test programs such as
 "openssl speed" is not relevant to actual library use.

- On Linux x86, test programs that avoid all floating
 point can be checked via the PF_USED_MATH flag or its
 upcoming Linux 5.x replacement.  This may be useful
 in the test suite.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Build the FIPS Object Module issue on Ubuntu 18.04

2019-05-16 Thread Jakob Bohm via openssl-users

On 16/05/2019 02:11, Paul Dale wrote:

Just noting that any module built in this manner is *not* FIPS compliant.

The distribution must be unmodified and build exactly as per the documentation. 
 Any change to the files or the build process renders the result invalid from a 
FIPS perspective.


Only deviations from the official process in creating the
fipscanister invalidate the FIPS validation.

The FIPS-capable OpenSSL is "outside the boundary" of the
FIPS module and can be changed at will.  This is why a new
FIPS validation is not needed every time OpenSSL releases
a bugfix to OpenSSL 1.0.x .  1.1.x will not have FIPS
support, and 4.y.x may lack this agility.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: Crashes when generating certificate

2019-05-15 Thread Jakob Bohm via openssl-users

On 14/05/2019 18:39, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf Of 
Karl Denninger
Sent: Tuesday, May 14, 2019 09:22
On 5/14/2019 09:48, Michael Wojcik wrote:

I can't think of what remnant of the old certificate would be there,
except the certificate itself, in whatever the configuration file
specifies for the new_certs_dir. And I've never seen that cause this problem.

There's a directory (by default "newcerts" but can be changed in the config
file) that has a copy of the certs that OpenSSL generates.

Yeah, that's the new_certs_dir that I referred to. (That's the name of the 
config
file setting.)


  If there's a collision in there (which could happen if the serial number is
reused) "bad things" could happen.

Right, the filename is taken from the serial number. So if the ca app does 
something
like an open(..., O_CREAT | O_EXCL), and that fails, it might quit.


  I've not looked at the code to see if that would cause a bomb-out but the
risk with playing in the database file, although it's just a flat file,
and/or the serial number index is that you can wind up with conflicts.

Agreed.

Let's see... In 1.1.1b, the ca app does a BIO_new_file for the new
certificate file, and if that returns null, it does a perror followed by a goto 
end.

I don't know what version the OP is running, though, and that perror may be 
missing in older OpenSSL releases. (Why do people post questions without 
identifying their OpenSSL version, platform, and so on?)

Interestingly, right before that the ca app does a bio_open_default on outfile, 
which is the argument of the -out option (if any) or null for the default 
(stdout, I think); and if *that* fails it does a goto end without a perror. So 
if OP's command line has a -out and that file can't be open for output, ca will 
exit silently.


The "ca" function in openssl lacks the sort of robustness and "don't do that"
sort of protections that one would expect in a "production" setting.  That's not
say it can't be used that way but quite a bit of care is required to do so
successfully, and toying around in the database structure by hand is rather
removed from that degree of care.

Oh, definitely. I wouldn't recommend using openssl ca for any sort of 
production use unless you're confident you understand how openssl ca works, how 
PKIX works, how production CAs are supposed to work, and any details particular 
to your use, such as CA/BF requirements if you want to generate certificates 
for HTTPS servers. And then you need a lot of infrastructure around the ca app, 
including at least partial automation for all the CA operations, a mechanism 
for key hygiene, backups, auditing, and so on.

Unfortunately, the CA function isn't really suitable for a free turnkey 
implementation (too many variables, too many infrastructure requirements), but 
customers who don't already have some kind of organizational CA need some way 
to get started with TLS. For many years we've shipped a demonstration CA based 
on openssl ca plus some scripts with some of our products, and some customers 
insist on using it in production, despite our warnings against it. I'm not 
happy about it, but we haven't found a good alternative.

Despite its obvious shortcomings, I have yet to find another ca program
suitable for offline use on a small, command-line-only machine. Everything
I have found has been bloated GUI stuff with built-in web servers and other
unwanted garbage.

It would be nice if a good command-line offline CA product existed, but
until then, disciplined use of the OpenSSL ca "sample" command seems to be
the best there is.
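
For what it's worth, most of that discipline amounts to letting the ca app own
its database and never editing index.txt, serial or newcerts/ by hand.  A
minimal offline sketch, with illustrative paths and a config file that would
still need real policy, extension and backup arrangements:

# one-time setup of the CA directory referenced by [ CA_default ] in openssl.cnf
$ mkdir -p demoCA/newcerts demoCA/private
$ touch demoCA/index.txt
$ echo 01 > demoCA/serial
$ openssl req -new -x509 -days 3650 \
      -keyout demoCA/private/cakey.pem -out demoCA/cacert.pem

# day-to-day issuance and revocation, always through the ca app
$ openssl ca -config openssl.cnf -in server.csr -out server.pem
$ openssl ca -config openssl.cnf -revoke server.pem
$ openssl ca -config openssl.cnf -gencrl -out crl.pem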

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: OpenSSL 1.1.1b tests fail on Solaris - solution and possible fix

2019-05-15 Thread Jakob Bohm via openssl-users
ld.so.1: openssl: fatal: relocation error: file /vobs_tools/prgs/src/openssl/solaris64/bin/openssl: symbol OPENSSL_sk_new_null: referenced symbol not found
Killed

The same machine works fine for openssl 1.1.1.
So, in 1.1.1b I can observe that OPENSSL_sk_new_null has been defined as below
in safestack.h:
#pragma weak OPENSSL_sk_new_null
Can this be related? Am I missing anything in my Configure options?
<<<<<<<<<<<<<<<

Regards,
John.

-Original Message-
From: openssl-users  On Behalf Of John 
Unsworth
Sent: 09 May 2019 10:13
To: openssl-users@openssl.org
Subject: RE: OpenSSL 1.1.1b tests fail on Solaris

CAUTION: This email originated from outside of Synchronoss.


This looks like the problem:

ld.so.1: sanitytest: fatal: relocation error: file ../../test/sanitytest: symbol OPENSSL_sk_new_null: referenced symbol not found
../../util/shlib_wrap.sh ../../test/sanitytest => 137
not ok 1 - running sanitytest

#   Failed test 'running sanitytest'
#   at 
/home/metabld/OpenSSL/openssl-1.1.1b/test/../util/perl/OpenSSL/Test/Simple.pm 
line 77.
# Looks like you failed 1 test of 1.
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests

This results in the same error:
sol-mds-build-01 $ cd apps
sol-mds-build-01 $ ./openssl version
ld.so.1: openssl: fatal: relocation error: file openssl: symbol 
OPENSSL_sk_new_null: referenced symbol not found

I have built static libraries.

John
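
One thing worth checking with a no-shared build like this is whether
OPENSSL_sk_new_null is actually defined in the libcrypto.a that openssl and the
tests were linked against, or only ends up as an unresolved weak reference.
For example (a diagnostic sketch, run from the build tree):

$ nm -P libcrypto.a | grep OPENSSL_sk_new_null       # expect a 'T' (defined) entry
$ nm -P apps/openssl | grep OPENSSL_sk_new_null      # a lone 'U' or weak entry means it was never linked in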

-Original Message-
From: openssl-users  On Behalf Of Matt 
Caswell
Sent: 09 May 2019 09:38
To: openssl-users@openssl.org
Subject: Re: OpenSSL 1.1.1b tests fail on Solaris

CAUTION: This email originated from outside of Synchronoss.


What is the output from:

$ make V=1 TESTS=test_sanity test

Matt

On 08/05/2019 19:22, John Unsworth wrote:

I have built OpenSSL 1.1.1b 64-bit on Solaris SunOS 5.10
Generic_Virtual sun4v sparc SUNW,T5140.



./Configure -lrt solaris64-sparcv9-cc no-shared -m64 -xcode=pic32
-xldscope=hidden



It builds fine but all the tests fail, with or without no-asm. Can
anyone help please? Here is the start of the test run:



$ make test
make depend && make _tests
( cd test; \
    mkdir -p test-runs; \
    SRCTOP=../. \
    BLDTOP=../. \
    RESULT_D=test-runs \
    PERL="/opt/perl-5.26.1/bin/perl" \
    EXE_EXT= \
    OPENSSL_ENGINES=`cd .././engines 2>/dev/null && pwd` \
    OPENSSL_DEBUG_MEMORY=on \
    /opt/perl-5.26.1/bin/perl .././test/run_tests.pl  )
../test/recipes/01-test_abort.t  ok
../test/recipes/01-test_sanity.t ... Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



Re: openssl failed to connect to MS Exchange Server (Office365) on RHEL 7.x

2019-05-11 Thread Jakob Bohm via openssl-users

Your transcript below seems to show a successful connection to Microsoft's
cloud mail, then Microsoft rejecting the password and closing the 
connection.


You are not connecting to your own Exchange server, but to a central Microsoft
service that also handles their consumer mail accounts (hotmail.com, live.com,
outlook.com etc.).  This service load-balances connections between many
servers, which can give different results for each try.
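
If you want to rule the load balancing out, you can resolve the pool and point
s_client at one specific address per try - a sketch, the address below is only
a placeholder:

$ getent hosts outlook.office365.com     # list the addresses currently behind the name
$ openssl s_client -crlf -servername outlook.office365.com -connect 52.97.xxx.xxx:995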

On 10/05/2019 17:01, Chandu Gangireddy wrote:

Dear OpenSSL Users,

In my corporate environment, I'm experiencing a problem using the openssl
s_client utility. I'd really appreciate it if someone could help me narrow
down the issue.

Here are the details -

Platform: RHEL 7.x
*Openssl version:*
OpenSSL 1.0.2k-fips  26 Jan 2017
built on: reproducible build, date unspecified
platform: linux-x86_64
options:  bn(64,64) md2(int) rc4(16x,int) des(idx,cisc,16,int) 
idea(int) blowfish(idx)
compiler: gcc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DZLIB 
-DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DKRB5_MIT 
-m64 -DL_ENDIAN -Wall -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 
-grecord-gcc-switches   -m64 -mtune=generic -Wa,--noexecstack -DPURIFY 
-DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 
-DOPENSSL_BN_ASM_GF2m -DRC4_ASM -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM 
-DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM 
-DGHASH_ASM -DECP_NISTZ256_ASM

OPENSSLDIR: "/etc/pki/tls"
engines:  rdrand dynamic

Command I tried to test the connectivity from my Linux client server to the
remote Office 365 Exchange server over the POP3 port -


$ openssl s_client -crlf -connect outlook.office365.com:995

...
...
subject=/C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=outlook.com

issuer=/C=US/O=DigiCert Inc/CN=DigiCert Cloud Services CA-1
---
No client certificate CA names sent
Peer signing digest: SHA256
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3952 bytes and written 415 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 
072FFFDC6177DE9CAB2B59EA06E486A25AD8A2882A9B82F16678BAD74E79

    Session-ID-ctx:
    Master-Key: 
DD7B59F38867FEAB9656B519FBCD743158E528C63FF9A96CE758120424159F26967F9F6FE57A9B5E7CAD806798322278

    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1557500061
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---
+OK The Microsoft Exchange POP3 service is ready. 
[QgBOADYAUABSADEANABDAEEAMAAwADQAMgAuAG4AYQBtAHAAcgBkADEANAAuAHAAcgBvAGQALgBvAHUAdABsAG8AbwBrAC4AYwBvAG0A]

*USER netco...@cox.com*
*+OK*
*PASS *
*-ERR Logon failure: unknown user name or bad password.*
*quit*
*+OK Microsoft Exchange Server POP3 server signing off.*
*read:errno=0*

Operating System:
Red Hat Enterprise Linux Server release 7.2 (Maipo)

When I did the same from a different server, it worked as expected.
Following are the two differences which I noticed between the working
server and the non-working server.

*Working server details:*
1. Red Hat Enterprise Linux Server release 6.9 (Santiago)
2. openssl version
OpenSSL 1.0.1e-fips 11 Feb 2013
built on: Mon Jan 30 07:47:24 EST 2017
platform: linux-x86_64
options:  bn(64,64) md2(int) rc4(16x,int) des(idx,cisc,16,int) 
idea(int) blowfish(idx)
compiler: gcc -fPIC -DOPENSSL_PIC -DZLIB -DOPENSSL_THREADS 
-D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DKRB5_MIT -m64 -DL_ENDIAN 
-DTERMIO -Wall -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic 
-Wa,--noexecstack -DPURIFY -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT 
-DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM 
-DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM 
-DWHIRLPOOL_ASM -DGHASH_ASM

OPENSSLDIR: "/etc/pki/tls"
engines:  dynamic

Please let me know if you need any further details from my end.

Thanks, in advance.
Chandu



--
Jakob Bohm, CIO, partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Soborg, Denmark. direct: +45 31 13 16 10 


This message is only for its intended recipient, delete if misaddressed.
WiseMo - Remote Service Management for PCs, Phones and Embedded


  1   2   3   4   5   6   7   8   9   10   >