Re: [openssl-users] The default cipher of executable 'openssl'

2015-06-19 Thread Dave Thompson
 From: openssl-users On Behalf Of Viktor Dukhovni
 Sent: Friday, June 12, 2015 02:47

  1) 1.0.1l
  ./apps/openssl s_server -ssl3 -cert certdb/ssl_server.pem -WWW -CAfile
  certdb/cafile.pem
  Using default temp DH parameters
  Using default temp ECDH parameters
  ACCEPT
 
 With SSL 3.0, no extension support, thus no supported curves
 extension, thus ideally no ECDHE support.  If ECDHE happened anyway
 with earlier releases, that was a bug that is perhaps now fixed.
 
That is it.

I'm not sure it's a bug, but I'd agree it's not ideal. RFC 4492 says the client
SHOULD send the curves and pointformats extensions, but if it doesn't, the
server is free to choose any one of [the 4492 named curves] (no BCP 14 verb).
OpenSSL's old behavior of using a particular curve is permitted.

I'm not sure it was an intentional change. <=1.0.1 had all the logic
in ssl3_choose_cipher, with (large, clumsy) code blocks of the form:
if an ECC suite is in the intersection of the client and server lists and we
have an ECC key/cert, but the client specified curves and our curve isn't
among them, don't use the ECC suite; and similarly for pointformats. If the
client didn't send the extensions, the "don't use" branch wasn't taken.
1.0.2 has new APIs for both client and server apps to restrict curves,
and ssl3_choose_cipher is rearranged into several new routines,
using I think some new data, with the result that if the client doesn't
send the extensions, ECC is NOT selected (and in the OP's case DHE is).
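
For illustration, a minimal sketch of those 1.0.2-era curve APIs (the SSL_CTX
and the curve list are just examples, not the OP's configuration):

    #include <openssl/ssl.h>

    /* sketch: 1.0.2 lets an app restrict/select the curves it negotiates */
    void restrict_curves(SSL_CTX *ctx)
    {
        SSL_CTX_set1_curves_list(ctx, "P-256:P-384");  /* client or server */
        SSL_CTX_set_ecdh_auto(ctx, 1);      /* server: pick a shared curve */
    }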

  2) 1.0.2
  ./apps/openssl s_server -ssl3 -cert certdb/ssl_server.pem -WWW -CAfile
  certdb/cafile.pem
  Using default temp DH parameters
  ACCEPT
 
  Note that, in 1.0.2, openssl doesn't print out 'Using default temp ECDH
  parameters'.
 
That's a red herring. That code was also refactored; s_server still
defaults to P-256, it just doesn't say so. If I run a 1.0.2* s_server -ssl3
and then an s_client allowing at least TLS 1.0, the client sends a ClientHello
containing ECC suites in its cipherlist (by default), with the applicable
extensions including the two for ECC; receiving this, the server negotiates
version 3.0 but DOES select ECDHE-RSA (given an RSA cert/key) and the client
agrees.





Re: [openssl-users] The default cipher of executable 'openssl'

2015-06-11 Thread Dave Thompson
 From: openssl-users On Behalf Of Aaron
 Sent: Wednesday, June 10, 2015 03:47

 We are using executable 'apps/openssl' in our test cases. We upgraded from
 OpenSSL 1.0.1l to OpenSSL 1.0.2a recently. Since then one of our test
cases
 started to fail. After checking, I noticed that the default cipher of
 'openssl' was changed from ECDHE-RSA-AES256-SHA to DHE-RSA-AES256-SHA

'openssl' doesn't have a default cipher; it implements over 40 subcommands,
which use different kinds of ciphers with different defaults or none. You
appear to be talking about the 's_client' or 's_server' subcommand, which use
the library's SSL/TLS default cipherLIST, containing about 100 ciphersuites
in preference order. The only differences in this list between 1.0.1l and
1.0.2a are that 1.0.2a (also 1.0.1m and 1.0.0r) removes the long-obsolete
EXPORT suites (finally, perhaps due to the FREAK and Logjam attacks exploiting
them) and adds newly-implemented static-DH suites, which are ignored unless
your server has a certificate for a DH key, which in practice nobody does, so
they don't affect you (other than further bloating the ClientHello message).

Both 1.0.1 and 1.0.2 have ECDHE-RSA-AES256-SHA ordered before 
DHE-RSA-AES256-SHA, so s_client talking to a server that honors client 
preference should still get the same result, and s_server listening to a 
client that has the same preference should still get the same result.
Whatever changed in your test this wasn't it.

 OpenSSL 1.0.2. The related description in OpenSSL 1.0.2 change log is as
 follows. snip
 My question is how to enable automatic EC temporary key parameter
 selection?

The commandline doesn't use that feature (yet?), only updated app code
using the library. Both 1.0.1 and 1.0.2 s_server default to a fixed curve,
P-256, and allow you to specify any (fixed) named curve; see -named_curve.
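
If you control server code that uses the library (rather than the commandline),
a minimal sketch of the automatic-selection knob the 1.0.2 changelog refers to
(ctx is assumed to be your server SSL_CTX):

    #include <openssl/ssl.h>

    /* sketch: 1.0.2 automatic temporary-ECDH curve selection for a server */
    void enable_auto_ecdh(SSL_CTX *ctx)
    {
        SSL_CTX_set_ecdh_auto(ctx, 1);  /* choose a curve the client offers */
    }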

 Is it possible to change the default cipher back to ECDHE-RSA-AES256-SHA?

There's no change to be changed back.

All the above assumes that when you identify versions of OpenSSL you 
mean executables compiled from those version source releases without 
modification. If either or both of your executables was built with any 
source changes or any configuration options that alter the release 
behavior, all bets are off; you'll have to look at your specific builds.
E.g. RedHat builds used to nobble all ECC (but that was fixed by 1.0.2a).
If you ARE using release versions, try getting traces (either externally 
with something like wireshark or tcpdump, or internally with -msg and/or 
-debug in either s_client or s_server) to see if anything is materially 
different on the wire (and what).





Re: [openssl-users] [openssl-dev] Is there openssl API to verify certificate content is DER or PEM format ?

2015-06-11 Thread Dave Thompson
 From: openssl-dev On Behalf Of Nayna Jain
 Sent: Wednesday, June 10, 2015 20:31

 If I have a pem file with private key in that, how do I check if that is
RSA/DSA ?

If it uses a legacy format, the BEGIN line specifies the algorithm:
-----BEGIN RSA PRIVATE KEY-----
-----BEGIN DSA PRIVATE KEY-----
-----BEGIN EC PRIVATE KEY-----

If it uses either PKCS#8 format: if unencrypted there is an
AlgorithmIdentifier 
field near the beginning that specifies the type of the key; if encrypted,
you 
must first decrypt and the decrypted value contains the AlgorithmIdentifier.

It's usually easier to let PEM_read_PrivateKey figure it out for you. It reads
all the formats (encrypted only if you provide the correct passphrase) and
returns an EVP_PKEY object whose type you can check with EVP_PKEY_type,
following the instructions on the manpage for EVP_PKEY_type.
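
A minimal sketch of that approach (the filename and the 1.0.x-era pkey->type
access are illustrative):

    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    /* sketch: detect the algorithm of a PEM private key file */
    int main(void)
    {
        FILE *fp = fopen("key.pem", "r");     /* example filename */
        EVP_PKEY *pkey = fp ? PEM_read_PrivateKey(fp, NULL, NULL, NULL) : NULL;
        if (fp) fclose(fp);
        if (pkey == NULL) return 1;
        switch (EVP_PKEY_type(pkey->type)) {  /* 1.0.x; 1.1.0+ would use EVP_PKEY_base_id() */
        case EVP_PKEY_RSA: puts("RSA"); break;
        case EVP_PKEY_DSA: puts("DSA"); break;
        case EVP_PKEY_EC:  puts("EC");  break;
        default:           puts("other"); break;
        }
        EVP_PKEY_free(pkey);
        return 0;
    }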





Re: [openssl-users] Testing OpenSSL based solution

2015-05-13 Thread Dave Thompson
 From: openssl-users On Behalf Of Marcus Vinicius do Nascimento
 Sent: Tuesday, May 12, 2015 16:50

 I did some quick research and found this:
http://en.wikipedia.org/wiki/Digital_Signature_Algorithm
 If my understanding is correct, the public key is (p, q, g, y).

You might want to look at the actual standard, FIPS 186, free from NIST 
and referred to by wikipedia as well as easily searchable. The current 
version is revision -4, but the basic logic of DSA hasn't changed since 
-0 (although the sizes used have increased).

As standardized, a DSA public key is (parameters, y) where parameters is
(p, q, g {, seed, counter}); the optional fields in the parameters
allow verification of the parameter generation process. OpenSSL does
not use that option, so it uses only p, q, g and y. See below.

 The private key would be x, such that y = g^x mod p.
 Is there some way to generate both public and private keys using OpenSSL, 
 based on p, q, g and y?

You cannot recover the private key from the public key for any 
secure PKC scheme used with appropriate sizes. DSA is a secure 
scheme, and DSS and these test cases use appropriate sizes.

 From: openssl-users On Behalf Of Marcus Vinicius do Nascimento
 Sent: Tuesday, May 12, 2015 17:06

 I tried using Y as the public key, but ssl seems not to accept that.
 From the FIPS file: snip
 So I tried reformatting Y to pass it to PEM_read_bio_DSAPrivateKey.
 Converting Y to Base64 = snip
 Reformatting in PEM format = -----BEGIN DSA PRIVATE KEY----- snip
[doesn't work]

As above, the public key requires all of p,q,g and y, not just y. 
The private key would require x as well, and you don't have x.

For public keys for _all_ algorithms in files including PEM 
OpenSSL uses the format standardized by X.509 called 
SubjectPublicKeyInfo or SPKI for short, which is an ASN.1 
sequence containing an AlgorithmIdentifier which is a(nother) 
sequence containing an OID identifying the algorithm and an 
optional parameters field whose type depends on the algorithm,
followed by a BITSTRING containing a nested encoding of the 
public key value relative to the parameters for that algorithm.

For DSA, the OID identifies DSA, the parameters are a sequence 
of three INTEGERs for p,g,q, and the nested key encoding is 
just an INTEGER. All elements in ASN.1 use a TLV (tag, length,
value) encoding, and INTEGER (thus) consists of a tag octet of 02 
specifying integer, a length whose length itself varies depending 
on the length it encodes, and a value field which for INTEGER is 
a _signed_ big-endian binary number. Since the particular y
you tried to encode below happens to have a magnitude size of 
1024 bits, a multiple of 8, it requires a leading sign octet of 00.
So does g in this case, and p and q by design (they are specified with 
magnitude sizes which are multiples of 8, and indeed of 32).

See rfc 5280 for the generic SPKI format, and rfc 3279 (referenced there) 
for the specifics for several algorithms including DSA.

Note that the PEM type is just BEGIN/END "PUBLIC KEY" (no "DSA")
because, as above, the format handles all algorithms.
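
If the goal is just to get OpenSSL to accept (p, q, g, y), a hedged sketch of
assembling them into that SPKI PEM (1.0.x-era direct struct access; the
hex-string inputs are an assumption about how you hold the test-vector values):

    #include <openssl/bn.h>
    #include <openssl/dsa.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    /* sketch: build a DSA *public* key from p, q, g, y and write it as a
     * "BEGIN PUBLIC KEY" (SPKI) PEM */
    int write_dsa_pub(const char *p_hex, const char *q_hex,
                      const char *g_hex, const char *y_hex, FILE *out)
    {
        DSA *dsa = DSA_new();
        EVP_PKEY *pkey = EVP_PKEY_new();
        int ok = dsa && pkey
            && BN_hex2bn(&dsa->p, p_hex)
            && BN_hex2bn(&dsa->q, q_hex)
            && BN_hex2bn(&dsa->g, g_hex)
            && BN_hex2bn(&dsa->pub_key, y_hex)
            && EVP_PKEY_set1_DSA(pkey, dsa)
            && PEM_write_PUBKEY(out, pkey);
        EVP_PKEY_free(pkey);
        DSA_free(dsa);
        return ok;
    }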




Re: [openssl-users] PEM_read_bio_PrivateKey(..) or PEM_read_bio_RSAPrivateKey(..) both returns NULL

2015-05-02 Thread Dave Thompson
 From: openssl-users On Behalf Of Nayna Jain
 Sent: Friday, May 01, 2015 22:37

 I have a privatekey file written using the call
PEM_write_bio_RSAPrivateKey(...)

 The file write operation has been successful.

Do you mean the PEM_write_ call returned 1, or do you mean the file contains
correct (or at least reasonable) contents? Can you read it with the
commandline 'openssl pkey -in file'?

 However, when I am trying to read it via calls PEM_read_bio_PrivateKey(..) or
 PEM_read_bio_RSAPrivateKey(..), it is always returning NULL.
 There is no encryption done either; it is an unencrypted private key.
 Can someone help me to understand what might be wrong?

http://www.openssl.org/support/faq.html#PROG6 
http://www.openssl.org/support/faq.html#PROG7
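
When one of those reads does return NULL, the error stack usually says exactly
why (bad start line, wrong password, text/binary mode, BIO already at EOF,
...). A small sketch:

    #include <openssl/pem.h>
    #include <openssl/err.h>

    /* sketch: dump the OpenSSL error queue when a PEM read fails */
    EVP_PKEY *read_key_verbose(BIO *bp)
    {
        EVP_PKEY *pkey = PEM_read_bio_PrivateKey(bp, NULL, NULL, NULL);
        if (pkey == NULL) {
            ERR_load_crypto_strings();   /* human-readable messages */
            ERR_print_errors_fp(stderr);
        }
        return pkey;
    }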





Re: [openssl-users] SHA256() to EVP_* ?

2015-04-30 Thread Dave Thompson
 From: openssl-users On Behalf Of jonetsu
 Sent: Wednesday, April 29, 2015 10:07
snip
 The man page (the one online from OpenSSL project - SHA256.html)
 gives a description using SHA1() which computes a message digest.

Note this is the same page for
SHA{1,224,256,384,512}{,_Init,_Update,_Final}.html 
and is the same content that is provided as 'man' pages on a Unix install of
OpenSSL.
On Unix systems a man page for several related routines (or types/structures
etc) 
can actually be one file with multiple links to it, but the website doesn't
bother.

 Being generally new to OpenSSL at that level, what is then the
 difference between using, say, SHA1() vs. using SHA1_Init,
 SHA1_Update and SHA1_Final ?  Is it only that the latter allows
 for continuously add data until _Final is called ?
 
Very nearly. The 'all-in-one' routine SHA1() consists of:
- declare (thus implicitly allocate) a CTX
- if the output pointer is NULL, supply a static internal buffer (a legacy
convenience, but a bad idea: it is unsafe for threads or recursion, and
should not be used today)
- do SHA1_Init and test for error (an error won't actually occur, but this
preserves a consistent structure with other algorithms where it might)
- do EXACTLY ONE SHA1_Update
- do SHA1_Final
- cleanse the CTX to prevent leakage of data that might be sensitive
(whether it actually is sensitive depends on what the data is, but to be
on the safe side always cleanse) and implicitly deallocate it

and similarly for the other algorithms.

So the difference using separate calls is: you can do multiple _Update 
steps/buffers, and you must handle the CTX and output buffer.
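
For example (the two-buffer split is arbitrary; any number of _Update calls
works):

    #include <openssl/sha.h>
    #include <openssl/crypto.h>

    /* sketch: the streaming form of what SHA256() does in one call */
    void digest_two_parts(const unsigned char *a, size_t alen,
                          const unsigned char *b, size_t blen,
                          unsigned char out[SHA256_DIGEST_LENGTH])
    {
        SHA256_CTX ctx;
        SHA256_Init(&ctx);             /* you own the CTX ...            */
        SHA256_Update(&ctx, a, alen);  /* ... and may Update repeatedly  */
        SHA256_Update(&ctx, b, blen);
        SHA256_Final(out, &ctx);
        OPENSSL_cleanse(&ctx, sizeof(ctx));  /* what SHA256() does for you */
    }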

And you can do more flexible things like compute both SHA1 and MD5 
for the same data concurrently, without needing to buffer all the data 
(which in some applications might exceed your memory) or reread it 
(which may be impossible in some applications like streaming video).

You may be thinking: this is just a small convenience, it's not hard to 
do the separate routines. You're right, it's not. But if it happens 10 
or 20 or 50 places in your code, saving 10 lines 50 times is 500 lines 
you don't have to write, read, keep in source control, compile every 
build, cover in your test strategy and coverage reports, etc.
Even a small convenience is still a convenience.




Re: [openssl-users] Error signing document

2015-04-30 Thread Dave Thompson
 From: openssl-users On Behalf Of m.de.groot
 Sent: Thursday, April 30, 2015 14:46

 I converted the pfx file to a pem file using the following command
 openssl pkcs12 -in CustKeyIcBD001.pfx -out CustKeyIcBD001.pem -nodes
 
 After this I trying to sign a file using this key with the following
command
 
 openssl cms -sign -in TestfileIN.txt -out TestfileSign.tmp -outform DER -
 binary -nodetach -md SHA1 -signer CustKeyIcBD001.pem
 
 However I keep getting the message
 
 No signer certificate specified
 
If you have accurately copied your command to the email, you are using
a Windows cp1252 dash character (hex code 96, an en dash) rather than a
hyphen (2D) for the -signer option. Use the classic traditional ASCII hyphen.





Re: [openssl-users] SHA256() to EVP_* ?

2015-04-28 Thread Dave Thompson
 From: openssl-users On Behalf Of jonetsu
 Sent: Tuesday, April 28, 2015 13:53

 What would be the equivalent of the SHA256() function in the EVP
 class of methods ?  EVP_sha256() could be it, although from the
 short description in manual page it does not seemingly fit in,
 returning a EVP_MD which is, if not mistaken, a env_md_st
 structure.
 
The LOWLEVEL modules use separate routines. There are routines 
for SHA1, and *separate* routines for SHA256, and separate routines 
for SHA384, and separate routines for MD5, and separate routines for 
RIPEMD160. There are routines for AES, and separate routines for 
RC4, and separate routines for Blowfish, and routines for DES and 
tripleDES aka DESede that overlap *some* because of the very 
close relationship but separate from all other symmetric ciphers. 
There are routines for RSA, and separate routines for DSA, and 
separate routines for DH, and separate routines for ECDSA, 
and separate routines for ECDH. 

EVP DOES NOT. EVP has *one* set of digest routines used for ALL 
digest algorithms, but with a data object specifying *which* digest.
EVP has *one* set of Cipher routines used for all symmetric ciphers,
with a data object specifying which. EVP has due to history *two* 
sets of asymmetric signature routines, which apply to three (and 
possibly more) asymmetric algorithms specified by data objects.

Thus the EVP equivalent to the SHA256*() lowlevel calls is 
to call the EVP_Digest* routines with a data object specifying 
SHA256, which is exactly what the value of EVP_sha256() is.
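
A minimal sketch of that equivalence (one-shot form; the names are
illustrative):

    #include <openssl/evp.h>

    /* sketch: EVP one-shot digest equivalent to the low-level SHA256() */
    int evp_sha256(const unsigned char *msg, size_t len,
                   unsigned char out[EVP_MAX_MD_SIZE], unsigned int *outlen)
    {
        /* EVP_Digest() wraps Init/Update/Final; EVP_sha256() names the
         * algorithm */
        return EVP_Digest(msg, len, out, outlen, EVP_sha256(), NULL);
    }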

The man page named for EVP_DigestInit, which also covers
EVP_DigestInit_ex, EVP_DigestUpdate, EVP_DigestFinal,
EVP_DigestFinal_ex, and some related routines (although
the link for the original EVP_DigestFinal seems to be missing),
tells you how to do digests with EVP in general. Apparently
it wasn't updated to list the SHA-2 digests, but that variation
should be obvious from the documented pattern.
 
 The code I'm adapting to EVP has a first pass of shortening the
 key if too long:
 
 /* Change key if longer than 64 bytes */
 if (klen > HMAC_INT_LEN) {
   SHA256(key, klen, nkey);
   key = nkey;
   klen = SHA256_DIGEST_LENGTH;
 }
 
 Before proceeding with the usual SHA256_Init(),
 SHA256_Update() (twice), and SHA256_Final.  All of which I have
 tested with the corresponding EVP_* methods.  For the use of
 SHA256() above, though, I'm puzzled regarding its EVP_*
 counterpart.
 
If you are implementing HMAC, perhaps for PBKDF2 (which does
that prehash-if-too-long), I hope you mean the code does
one hash of ipad+data, which can consist of Init, 2 Updates,
and Final (although there are other ways), and then a SECOND
ENTIRE HASH of opad+innerhash, similarly. If that's not what
you're doing, you're not doing standard HMAC, so it definitely
won't be interoperable and may well not be secure, because
HMAC was defined the way it is precisely because it was found
that the naïve way of merely hashing key+data is not reliably secure.

Although if what you want is PBKDF2-HMAC, there are already
two OpenSSL routines for that (again due to history).
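
A hedged sketch of the generic one (the iteration count, salt handling and
output length here are placeholders you must choose for your application):

    #include <string.h>
    #include <openssl/evp.h>

    /* sketch: PBKDF2-HMAC-SHA256 via PKCS5_PBKDF2_HMAC (the other, older
     * routine, PKCS5_PBKDF2_HMAC_SHA1, is hardwired to SHA-1) */
    int derive_key(const char *pass, const unsigned char *salt, int saltlen,
                   unsigned char key[32])
    {
        return PKCS5_PBKDF2_HMAC(pass, strlen(pass), salt, saltlen,
                                 10000 /* iterations: example value */,
                                 EVP_sha256(), 32, key);
    }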




Re: [openssl-users] base64 decode in C

2015-03-18 Thread Dave Thompson
 From: openssl-users On Behalf Of Prashant Bapat
 Sent: Wednesday, March 18, 2015 03:37

 I'm trying to use the base64 decode function in C. snip

 This works well for simple b64 encoded strings like "hello world!" etc. 
 But when I want to b64 decode the contents of a SSH public key, it fails. 
 Returns nothing. 

It returns a pointer to a buffer containing the pubkey, but no indication of
the length. You don't show the caller, but if the caller tries to treat it as
a C string with strlen()/fputs() etc. that won't work, because it isn't a C
string. A C string can't contain any zero byte (NUL), and an SSH pubkey blob
contains many of them; it even starts with *three* consecutive nulls, which
makes it appear to be empty if treated as a C string.

Use the length returned from (successful) BIO_read(b64bio,...).
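
A minimal sketch of that (assumes the base64 input is a single line with no
embedded newlines, as in an authorized_keys entry):

    #include <openssl/bio.h>
    #include <openssl/evp.h>

    /* sketch: base64-decode and KEEP THE LENGTH -- the output is binary
     * and may contain NUL bytes, so it is not a C string */
    int b64_decode(const char *b64, unsigned char *out, int outmax)
    {
        BIO *mem = BIO_new_mem_buf((void *)b64, -1);
        BIO *chain = BIO_push(BIO_new(BIO_f_base64()), mem);
        int n;
        BIO_set_flags(chain, BIO_FLAGS_BASE64_NO_NL); /* no newline handling */
        n = BIO_read(chain, out, outmax);   /* decoded length, or <= 0 */
        BIO_free_all(chain);
        return n;
    }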

 This decodes fine. 
 dGhpcyBpcyBhd2Vzb21lCg==  : this is awesome

Actually that decodes as "this is awesome" PLUS A NEWLINE, or in
C source notation "this is awesome\n". Yes, that matters.




Re: [openssl-users] How to disable all EXPORT Ciphers?

2015-03-10 Thread Dave Thompson
 From: openssl-users On Behalf Of Viktor Dukhovni
 Sent: Monday, March 09, 2015 12:47

 On Mon, Mar 09, 2015 at 02:23:53PM +0530, Deepak wrote:
  kEDH:ALL:!ADH:!DES:!LOW:!EXPORT:+SSLv2:@STRENGTH
  with SSL_CTX_set_cipher_list() be good enough to disable EXPORT40, 56
 and 1024?
 
You only need to worry about the original exports, retronymed EXPORT40.
EXPORT56 was a draft RFC that was not adopted, and the SSL_CIPHER
blocks still in the source are disabled by a macro hardcoded in tls1.h (q.v.).
EXP1024-* would have been the names of the nonexistent EXPORT56 ciphers.

 Note that doing so does not address the FREAK CVE in SSL clients.  Even
 with EXPORT ciphers disabled they are still vulnerable, unless patched!
 
Yes.

 As for your proposed cipherlist it is too exotic.
 
 * ALL:!ADH is simply DEFAULT.  DEFAULT already prefers PFS (including
   ECDHE) and is sorted by strength.
 
For 1.0.0+ DEFAULT is ALL:!aNULL:!eNULL:!SSLv2; !aNULL disables both 
ADH and AECDH. (0.9.8 excludes all ECC, including AECDH, unless ECCdraft.)
!eNULL actually has no effect because ALL already excludes it; if you want 
eNULL (you shouldn't) you need the absurd-looking COMPLEMENTOFALL.

 * DES is a subset of LOW
 
In fact DES is the only algorithm in LOW. (In math a set is a subset of
itself
and also a superset of itself but laypeople often don't expect that.)

 * I would also disable SSLv2, which is a subset of MD5, so I generally
   disable that instead, which also drops SSLv3's RC4-MD5, leaving RC4-SHA
   for interop.  Note for many applications RC4 is no longer supposed to be
   used; consider whether disabling RC4 is appropriate for you.
 
And disabling the SSLv2 *ciphers* has the good effect of disabling the SSLv2
*protocol* even if old or poor code calls SSLv23 and doesn't explicitly set
OP_NO_SSLv2.

 Therefore, I'd suggest:
 
   DEFAULT:!EXPORT:!LOW:!MD5
 
 Which keeps things simple by starting with DEFAULT and removing
 what you want to disable.
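
For illustration, applying that suggested string from C (ctx is assumed to be
your SSL_CTX; the return value matters, since 0 means nothing survived the
filter):

    #include <openssl/ssl.h>

    /* sketch: install the suggested cipherlist */
    int apply_ciphers(SSL_CTX *ctx)
    {
        return SSL_CTX_set_cipher_list(ctx, "DEFAULT:!EXPORT:!LOW:!MD5");
    }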
 




Re: [openssl-users] Max size on ASN1_item_d2i_bio()?

2015-02-21 Thread Dave Thompson
 From: openssl-users On Behalf Of Dr. Stephen Henson
 Sent: Friday, February 20, 2015 17:24

 On Fri, Feb 20, 2015, Nathaniel McCallum wrote:
 
  I'd like to use ASN1_item_d2i_bio() (or something similar) to parse an
  incoming message. However, given that types like ASN1_OCTET_STRING
  have (essentially) unbounded length, how do I prevent an attacker from
  DOS'ing via OOM?
 
  Is there some way to set a max packet size?
 
 
 No there isn't but if the input is in DER form you can peek the first few
 bytes and get the tag+length fields to determine the size of the
structure. If
 the input uses indefinite length encoding that isn't possible however.
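
A minimal sketch of that header-peek idea (definite-length DER only; it
assumes a single-octet tag, which holds for the outermost SEQUENCE, and leaves
it to the caller to decide what limit to enforce on the result):

    #include <stddef.h>

    /* sketch: from the first few bytes of a DER encoding, compute the
     * total encoded size = header octets + declared content length */
    long der_total_length(const unsigned char *buf, size_t buflen)
    {
        size_t i = 1;                     /* past the (single) tag octet */
        unsigned long len = 0;
        if (buflen < 2 || (buf[0] & 0x1f) == 0x1f)  /* multi-octet tag: punt */
            return -1;
        if (buf[i] < 0x80) {              /* short form */
            len = buf[i++];
        } else if (buf[i] == 0x80) {      /* indefinite length (BER) */
            return -1;
        } else {                          /* long form: next N octets */
            int n = buf[i++] & 0x7f;
            if (n > 4 || i + n > buflen) return -1;
            while (n--) len = (len << 8) | buf[i++];
        }
        if (len > 0x7fffffffUL - i) return -1;      /* avoid overflow */
        return (long)(i + len);
    }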
 
Some other possibilities:

If the bio is memBIO or fileBIO its input size is known before you start,
at least if it contains only one root item. More generally you could layer 
a simple filter BIO that limits total reads to a chosen amount like 1M, 
probably measured from a CTRL operation  -- or a more complex one 
that looks dynamically at your memory-used and/or memory-available 
and chooses whether/when to force EOF, but that would be dependent 
on your particular platform and not portable.

Alternatively or in addition, OpenSSL allows you to provide your own 
malloc/realloc/free implementations used instead of the standard ones. 
But these are used for *all* OpenSSL heap allocations, so you might need 
some care to count the space used for or at least during a d2i 
as opposed to other purposes and times.




Re: [openssl-users] genpkey usage for openssl-1.0.1k on openSUSE-13.2

2015-02-19 Thread Dave Thompson
 From: openssl-users On Behalf Of open...@lists.killian.com
 Sent: Wednesday, February 18, 2015 13:26

 I noticed that openssl(1) says that various things have been superseded by
 genpkey, so I tried changing my scripts to use it. It works fine for RSA,
but the
 man page is not very helpful on EC. I tried
 
 openssl genpkey -out key.new -algorithm EC -pkeyopt
 ec_paramgen_curve:secp384r1
 
 and got
 
 parameter setting error
 139638314907280:error:06089094:digital envelope
 routines:EVP_PKEY_CTX_ctrl:invalid operation:pmeth_lib.c:404:
snip

genpkey has a standard idea, across all algorithms that have parameters 
(which RSA does not), to generate parameters and key(s) as separate 
steps with a file in between. For DSA and DH this is good; you may want 
to generate your own params, or you may want to use existing ones 
(in an existing file) e.g. Oakley or SSH-non-GEX. For EC it makes less sense,
as generating your own curve is complicated (OpenSSL certainly doesn't do it)
and in practice everyone* uses the named curves. Nonetheless you still do:

openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:x -out pfile
openssl genpkey -paramfile pfile -out keyfile

Depending on your OS and shell you may be able to combine these, like
openssl genpkey -genparam ... | openssl genpkey -paramfile /dev/fd/0
or, with process substitution, openssl genpkey -paramfile <(openssl genpkey -genparam ...)

* Well, everybody except the crowd around Dan Bernstein, and they use
non-Weierstrass curves that OpenSSL can't even represent (now?).
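
At the library level the same two steps look roughly like this (error handling
abbreviated; NID_secp384r1 is just an example curve):

    #include <openssl/evp.h>
    #include <openssl/ec.h>
    #include <openssl/objects.h>

    /* sketch: paramgen for a named curve, then keygen from those params */
    EVP_PKEY *gen_ec_key(int curve_nid)     /* e.g. NID_secp384r1 */
    {
        EVP_PKEY *params = NULL, *key = NULL;
        EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_EC, NULL);
        EVP_PKEY_CTX *kctx = NULL;
        if (pctx == NULL
            || EVP_PKEY_paramgen_init(pctx) <= 0
            || EVP_PKEY_CTX_set_ec_paramgen_curve_nid(pctx, curve_nid) <= 0
            || EVP_PKEY_paramgen(pctx, &params) <= 0)
            goto done;
        kctx = EVP_PKEY_CTX_new(params, NULL);
        if (kctx == NULL
            || EVP_PKEY_keygen_init(kctx) <= 0
            || EVP_PKEY_keygen(kctx, &key) <= 0)
            key = NULL;
    done:
        if (pctx) EVP_PKEY_CTX_free(pctx);
        if (kctx) EVP_PKEY_CTX_free(kctx);
        EVP_PKEY_free(params);
        return key;
    }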




Re: [openssl-users] i2d and d2i fucntions

2015-02-16 Thread Dave Thompson
 From: openssl-users On Behalf Of Rajeswari K
 Sent: Monday, February 16, 2015 03:05

 Our current signature and verification logics are working just fine 
 with TLS1.0 and TLS1.1 for ECDHE_ECDSA cipher suite.

 But, when tested the same cipher suite with TLS1.2, SSL handshake 
 always failing with bad signature.

 Do we need to take care of anything specific for TLS1.2 handshake?

Not as such. But you do need to correctly handle truncating a hash
to be signed/verified when it is longer than the key size (both measured
in bits), as shown in OpenSSL's implementation in ecs_ossl.c.

That case will occur for TLS1.2 if SHA512 is offered and chosen for the
hash and the key in use is a 384-bit key, which your previous questions
have suggested. That case will occur for 1.0 and 1.1 only if using
a key too small to be secure, which obviously you shouldn't do.
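
The rule itself is small; a sketch of it (mirroring the approach in
ecs_ossl.c, with names invented here):

    #include <openssl/bn.h>

    /* sketch: convert the digest to an integer and, if the digest is longer
     * than the group order, keep only its leftmost order-bits bits */
    int digest_to_bn(BIGNUM *m, const unsigned char *dgst, int dgst_len,
                     const BIGNUM *order)
    {
        int order_bits = BN_num_bits(order);
        if (BN_bin2bn(dgst, dgst_len, m) == NULL)
            return 0;
        if (8 * dgst_len > order_bits
            && !BN_rshift(m, m, 8 * dgst_len - order_bits))
            return 0;
        return 1;
    }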





Re: [openssl-users] i2d and d2i fucntions

2015-02-16 Thread Dave Thompson
 From: openssl-users On Behalf Of Rajeswari K
 Sent: Friday, February 13, 2015 23:50
 Hello Dave,
 Based on your input, have stopped calling i2d_ECDSA_SIG() 
 and used BN_bn2bin() to overcome the der headers. 
 And now, my verification is working fine.

ECDSA_verify in ecs_vrf.c only uses i2d to *check* that the 
input was canonical, to block certain possible attacks. It's 
the d2i that parsed the signature, and the internal form 
(ECDSA_SIG structure) is used for the actual verification.
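
Related to the BN_bn2bin approach mentioned above, a hedged sketch of the
verify side without any DER at all (assumes 1.0.x, where the ECDSA_SIG members
are directly accessible; 'n' is the field size in bytes):

    #include <openssl/ecdsa.h>
    #include <openssl/bn.h>

    /* sketch: verify a raw r||s signature by filling an ECDSA_SIG directly */
    int verify_raw(const unsigned char *dgst, int dlen,
                   const unsigned char *rs, int n, EC_KEY *key)
    {
        int ok = -1;
        ECDSA_SIG *sig = ECDSA_SIG_new();
        if (sig
            && BN_bin2bn(rs, n, sig->r)
            && BN_bin2bn(rs + n, n, sig->s))
            ok = ECDSA_do_verify(dgst, dlen, sig, key); /* 1 ok, 0 bad, -1 err */
        ECDSA_SIG_free(sig);
        return ok;
    }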

 Is there any function at openssl, to get the HASH used for 
 the digest at ECDSA_verify()?
 I see that, for ECDSA_verify(), first argument is type. But 
 when its calling the function pointer, ECDSA_verify() is not 
 passing the type of the hash. 
 So, would like to get the hash type from digest data. 

ECDSA (and DSA) signatures do not care about the hash 
algorithm, only the length of the hash *value*. Notice 
that ECDSA_verify does not pass type to ECDSA_do_verify, 
which does the actual dispatch to a possible engine.
(This differs from RSA, at least PKCS#1 as used by SSL/TLS, 
where the hash algorithm identifier is included in padding.)

 I can understand that for TLS1.2, openssl uses SHA512. 
 But the same information i would like to get from digest data. 
 Is there any way to get this? Please share. 

For the ServerKeyExchange message (the case you said 
you cared about) in TLS1.2, it appears OpenSSL server uses 
the client's preference as stated in the sigalgs extension,
except in 1.0.2 a new SuiteB option forces SuiteB choices.
If the client offers all current hashes for ECDSA in strength 
order, which is very reasonable, SHA512 will be the choice.





Re: [openssl-users] i2d and d2i fucntions

2015-02-13 Thread Dave Thompson
 From: openssl-users On Behalf Of Rajeswari K
 Sent: Friday, February 13, 2015 09:48
snip
 As part of [ECDSA] signature verification, we first take length_of_signature 
 received 
 and compare with double the size of number_of_bytes from curve parameter. 
 Have converted the ECDSA_SIG to unsigned char * using the function 
 i2d_ECDSA_SIG().
 Length returned by i2d_ECDSA_SIG() is 103.
 Whereas, the number_of_bytes value from curve parameter is 48. 

An ECDSA signature, like a DSA signature, and as the 'i2d' should clue you in,
is an ASN.1 DER-encoded value. Specifically it is a SEQUENCE of two INTEGERs.
That means it consists of:

2 octets tag and length for the SEQUENCE -- or 3 if the components together
exceed 127 octets, which will occur almost always if the curve size exceeds
496 bits and sometimes for slightly smaller curves, see below.

For each INTEGER, 2 octets tag and length, then N octets value, as long as the
curve size does not exceed 1015 bits (and none currently come even close).
Remember DER INTEGERs are two's complement, and the R and S values
are positive numbers that are for practical purposes uniformly random up to
the curve order, which is usually chosen to be nearly a power of two that
is a multiple of 8 (like 192, 256, 384); each value therefore needs an extra
leading 00 sign octet about half the time.

Thus for a 384-bit curve, the encoded signature will be 6+2*48=102 octets
roughly 25% of the time, 6+48+49=103 about 50%, and 6+2*49=104 about 25%.
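
So when sizing a buffer for the DER form, don't assume 2*N; a sketch of the
safe pattern (names illustrative):

    #include <openssl/ecdsa.h>
    #include <openssl/crypto.h>

    /* sketch: allocate the worst-case DER size and use the returned length */
    unsigned char *sign_der(EC_KEY *key, const unsigned char *dgst, int dlen,
                            unsigned int *siglen)
    {
        unsigned char *sig = OPENSSL_malloc(ECDSA_size(key)); /* max DER size */
        if (sig != NULL && !ECDSA_sign(0, dgst, dlen, sig, siglen, key)) {
            OPENSSL_free(sig);
            sig = NULL;
        }
        return sig;
    }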





Re: [openssl-users] i2d and d2i fucntions

2015-02-12 Thread Dave Thompson
 From: openssl-users On Behalf Of Rajeswari K
 Sent: Thursday, February 12, 2015 00:40

 I have a query on d2i_PUBKEY() and i2d_PUBKEY().

 i have a EC public key in form of character buffer. 
 Have inputted this character buffer to d2i_PUBKEY() and got EVP_PKEY format 
 EC key. 

To be exact, a byte (or even more exactly octet) buffer. In C
(and C++ and ObjC) its type is 'char[]' or better 'unsigned char[]',
but the values do not, and often cannot, represent *characters*.

 Now i tried to input this EVP_PKEY to i2d_PUBKEY() to compare will i get 
 exactly same data which i gave as input to d2i_PUBKEY().

 But i see that the outputs are completely different.

 i2d_PUBKEY() is leaving lots of 0's at the o/p buffer. 

 0
 0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
 0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
 FD  1   10  DF  AB  12  34  CD  0   6F  0   0   0   0   1   83
 F   8B  AF  D8  D  

You must be doing something wrong. Probably the most common is,
are you looking at the beginning of the buffer? Remember that after 
calling i2d_whatever, the pointer you give it is moved to point 
*after* the encoded data, at unused and often junk memory.
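
The usual pattern, for reference (note the temporary pointer):

    #include <openssl/x509.h>
    #include <openssl/crypto.h>

    /* sketch: i2d advances the pointer it is given, so keep the original */
    unsigned char *encode_pubkey(EVP_PKEY *pkey, int *outlen)
    {
        int len = i2d_PUBKEY(pkey, NULL);   /* first call: length only */
        unsigned char *buf, *p;
        if (len <= 0 || (buf = OPENSSL_malloc(len)) == NULL)
            return NULL;
        p = buf;
        i2d_PUBKEY(pkey, &p);     /* p now points past the end; buf doesn't */
        *outlen = len;
        return buf;
    }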

If that's not it, reduce your code to the minimum that shows the 
problem, post it, and identify the version and build you are using.

 My goal is, to get complete EC public key in form of asn1 der 
 encoded from EC_KEY structure.

 I tried to use i2d_EC_PUBKEY() and i20_ECPublickey(). snip

Note that PUBKEY is the X.509 SPKI format: it contains an 
AlgorithmIdentifier identifying the algorithm and the curve,
*and* the public key value (a point) embedded in a bitstring,
all combined into an ASN.1 structure and DER encoded.

i2o_ECPublicKey (letter o not zero) uses a special non-ASN1 
non-DER encoding that contains *only* the point. 




Re: [openssl-users] OpenSSL 1.0.1l: X509_NAME_add_entry_by_txt broken?

2015-02-11 Thread Dave Thompson
 From: openssl-users On Behalf Of Jörg Eyring
 Sent: Wednesday, February 11, 2015 03:44

 I'm generating a certificate request and the necessary entries are added
 with:
 ...
 if(!X509_NAME_add_entry_by_txt(subj, "C", MBSTRING_ASC, (unsigned
 char *) CountryName, -1, -1, 0)) snip
 X509_NAME_add_entry_by_txt does only respect the given encoding
 MBSTRING_ASC for the first entry, the subsequent entries are encoded with
 MBSTRING_UTF8 (seen with a BER Viewer). The certificate request is
 declined by the authority with an error: ...doesn't contain five
 PRINTABLESTRING elements...
 
 The most recent version of OpenSSL we've been using was 1.0.1c where
 everything worked fine.
 
ASN1 strings set with the generic MBSTRING_ types, for the known/standard
OID-value pairs, are constrained by tbl_standard in asn1/a_strnid.c. A few,
like Country, are forced to Printable as per the standard.

For those standardized as DirectoryString, the allowed types are ANDed with a
default mask and then a_mbstr.c chooses the lowest type supporting the
characters in the value. That mask used to allow *two* of the eight
single-byte types (Teletex and Printable). This is mentioned, very briefly,
in the manpage for X509_NAME_add_entry.

1.0.1h in 2014 and later changed this mask to force UTF8 only, I believe
to implement the MUST-use-UTF8 rule for DirectoryStrings in RFC 2459 and 3280,
even though RFC 5280 in 2008 had relaxed it to MUST use UTF8 OR Printable,
I suspect to be safe for implementations of the older standards.

req and ca override this by calling ASN1_STRING_set_default_mask_asc
with the (string) value of string_mask in the configuration if specified, and
the supplied openssl.cnf back to 1.0.0 in 2009 set utf8only for those utils.
There is also a numeric version, ASN1_STRING_set_default_mask.
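
So if you need the old behaviour in your own code (rather than via req/ca and
openssl.cnf), a one-line sketch, called once before building names:

    #include <openssl/asn1.h>

    /* sketch: put the global mask back so PrintableString can be chosen
     * again ("default"; "pkix" and "nombstr" are other accepted values) */
    void use_old_string_mask(void)
    {
        ASN1_STRING_set_default_mask_asc("default");
    }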

HTH.




Re: [openssl-users] How to load local certificate folder on windows

2015-02-06 Thread Dave Thompson
 From: openssl-users On Behalf Of Jerry OELoo
 Sent: Wednesday, February 04, 2015 21:54

 I am using openssl 1.0.2 on windows 7 OS.
 
 I have put some root certificate files into a folder certs. when I
 using X509_STORE_load_locations() to load this folder into store, it
 returns 1 means success,
 but when I using X509_verify_cert(), it will return 0, and error shows
 19(self signed certificate in certificate chain).

Nitpick: STORE_load_locations (and CTX_load_verify_locations which uses it) 
actually loads the contents of a CAfile into memory, but it only stores the 
*name* of a CApath and *later* dynamically loads files from that directory.

Did you use filenames, or possibly* linknames, based on subject hash 
as described in https://www.openssl.org/docs/apps/verify.html ?

* Windows beginning (as I recall) with XP, or maybe NT, does support links on
NTFS, but they're not easy to use and not well known, and I think I saw a
recent bug report that they don't even work for OpenSSL, at least sometimes.

Less likely but possible if these files were prepared on another system:
did you use hashnames created with OpenSSL 1.0.0 or higher?
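
For reference, a sketch of computing the filename the CApath lookup expects
(the subject-name hash as computed by 1.0.0+, plus a .0 suffix; the suffix is
bumped for collisions):

    #include <stdio.h>
    #include <openssl/x509.h>

    /* sketch: print the hash-based filename for one CA certificate */
    void print_capath_name(X509 *ca)
    {
        printf("%08lx.0\n", X509_NAME_hash(X509_get_subject_name(ca)));
    }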

Is this a FAQ yet?




Re: [openssl-users] ECDHE-ECDSA certificate returning with no shared cipher error

2015-02-04 Thread Dave Thompson
 From: openssl-users On Behalf Of Rajeswari K
 Sent: Monday, February 02, 2015 22:17

 Thanks for responding. Following is the output printed by openssl

 ./openssl req -in csr.csr -noout -text 
snip
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
   
ASN1 OID: prime256v1

Yes, that is named form. Then I don't know what the problem is.

Generic debugging advice, if you haven't tried these already:

Does the problem occur with s_client to your server?

Does the problem occur with s_client to s_server using the same 
certkey, cipherlist (if not default) and same or reasonable tmp-ECDH?

Actually, that's a thought. You said your server uses a tmp-ECDH callback;
does that (always) provide a curve/parameters object that *has* an OID
which maps to one of the TLS standard curves in RFC 4492 (and one specified
in the client hello, though your earlier trace looked like the client
specified them all)?
s_server *only* supports named curves (and defaults to P-256).
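
For comparison, a hedged sketch of a callback that always returns a named
curve (P-256 here is just an example):

    #include <openssl/ssl.h>
    #include <openssl/ec.h>

    /* sketch: tmp-ECDH callback returning a *named* curve, so the group
     * has an OID the TLS code can match against the client's list */
    static EC_KEY *tmp_ecdh_cb(SSL *ssl, int is_export, int keylength)
    {
        static EC_KEY *ecdh;     /* callback keeps ownership; reused */
        if (ecdh == NULL)
            ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        return ecdh;
    }

    /* at init: SSL_CTX_set_tmp_ecdh_callback(ctx, tmp_ecdh_cb); */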





Re: [openssl-users] ECDHE-ECDSA certificate returning with no shared cipher error

2015-02-02 Thread Dave Thompson
 From: openssl-users On Behalf Of Rajeswari K
 Sent: Sunday, February 01, 2015 21:18

 Am facing an issue of no shared cipher error during SSL Handshake, 
 when tried to negotiate ECDHE cipher suite. 
snip
 *Feb  2 01:00:47.894: SSL_accept:error in SSLv3 read client hello C
 *Feb  2 01:00:47.894: 3854049196:error:1408A0C1:SSL routines:
 SSL3_GET_CLIENT_HELLO:no shared cipher  s3_srvr.c:1381:

 Have updated with temporary ECDH callback during SSL Server initialization. 

 ECDSA certificate is being signed using openssl commands. 

How was the keypair and CSR generated? In particular, check that the
public key in the CSR, and thus in the cert, has the curve encoded in
named form (as an OID), not explicit form (with all the details of the
prime or polynomial, equation coefficients, base point, and cofactor).
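
If the key was generated through the low-level EC API, a sketch of forcing the
named (OID) encoding regardless of defaults:

    #include <openssl/ec.h>
    #include <openssl/objects.h>

    /* sketch: generate a P-256 key whose parameters encode as a named OID */
    EC_KEY *gen_named_p256(void)
    {
        EC_KEY *key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        if (key != NULL) {
            EC_KEY_set_asn1_flag(key, OPENSSL_EC_NAMED_CURVE);
            if (!EC_KEY_generate_key(key)) {
                EC_KEY_free(key);
                key = NULL;
            }
        }
        return key;
    }

(With the ecparam commandline, -param_enc named_curve selects the same
named-OID encoding.)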





Re: [openssl-users] Intermediate certificates

2015-01-27 Thread Dave Thompson
 From: openssl-users On Behalf Of Kurt Roeckx
 Sent: Tuesday, January 27, 2015 17:14

 On Tue, Jan 27, 2015 at 11:42:51PM +0300, Serj wrote:
snip
 What browsers do is cache the intermediate certificates.  snip

That's one possibility. Another is that it uses AuthorityInfoAccess 
to fetch the cert automatically, which OpenSSL currently does not
(unless you figure out a custom X509_LOOKUP to do so).





Re: [openssl-users] HMAC-MD5 OpenSSL 1.0.1e and FIPS 2.0.7

2015-01-21 Thread Dave Thompson
 From: openssl-users On Behalf Of Dr. Stephen Henson
 Sent: Wednesday, January 21, 2015 09:28

 On Wed, Jan 21, 2015, John Laundree wrote:
 
  Ok, so I will naively ask the question How does one do TLS 1.0/1.1 in
FIPS
 mode? Or is this no longer allowed, i.e. TLS 1.2 only?
 
 The use of MD5 for TLS 1.0/1.1 is treated as an exception which is allowed
in
 FIPS mode but general MD5 use is not.
 
To be exact, as I read it, the TLS1.0/1.1 *PRF* *combines* MD5+SHA1 for 
handshake/keyexchange, and is Approved on the basis that the combination 
is secure even if MD5 is not. The SSL3 PRF combines them more weakly and
isn't Approved so SSL3 protocol is prohibited. Suites using (pure) HMAC-MD5 
for data are not Approved, in any protocol version. 

And as you say MD5 as such is not allowed anywhere.






Re: [openssl-users] Read cer file failed

2015-01-20 Thread Dave Thompson
 From: openssl-users On Behalf Of Jerry OELoo
 Sent: Tuesday, January 20, 2015 00:34

 I am reading cer file into X509 object,
 http://SVRSecure-G3-aia.verisign.com/SVRSecureG3.cer
 
 cert = d2i_X509_fp(fp, NULL);
 it will return fail, as below
 
 Error: error:0D07207B:asn1 encoding routines:ASN1_get_object:header too
 long

Worked for me, although I observe the server is labelling it
content-type: text/plain when RFC 2585 (confirmed by 5280)
says application/pkix-cert.  (I resolved 23.13.165.163
after CNAMEing through edgekey and akamaiedge, but
another ISP I can look at got 23.61.69.163. YMMV.)

I note this certificate contains a control-Z byte (hex 1A).
Are you possibly running on Windows with the Microsoft 
C runtime and opening the file in text mode? Windows C
treats 1A as terminating a text file, to be compatible with 
MS-DOS and before that CP/M. Windows C also tries to 
use MS-DOS line ending CRLF instead of LF in text files.
To read and write the exact bytes of a file in Windows C,
as is needed for DER objects, use binary mode. 
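
A minimal sketch of the safe pattern (the path is illustrative):

    #include <stdio.h>
    #include <openssl/x509.h>

    /* sketch: DER files must be opened in binary mode ("rb") on Windows;
     * text mode stops at 0x1A and translates CRLF, corrupting the data */
    X509 *load_der_cert(const char *path)
    {
        X509 *cert = NULL;
        FILE *fp = fopen(path, "rb");
        if (fp != NULL) {
            cert = d2i_X509_fp(fp, NULL);
            fclose(fp);
        }
        return cert;
    }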





Re: [openssl-users] OpenSSL AES encryption using AES_* functions and EVP_* functions

2014-12-31 Thread Dave Thompson
 From: openssl-users On Behalf Of Purushotham Nayak
 Sent: Wednesday, December 31, 2014 12:22

 I have some data that was encrypted using the openssl (`AES_*`) functions. 
 I want update this code to use the newer (EVP_*) functions which are 
 FIPS compliant. But I should be able to decrypt data that was encrypted 
 using the old code.

 I've pasted below both the old and the new code. The encrypted/decrypted 
 contents are different. i.e. I can't use them interchangeably. This means 
 I can't upgrade the code without having to decrypt using the old code 
 and then re-encrypt.

 Are there any values for the parameters to EVP_BytesToKey so that 
 aes_key derived is the same in both cases. Or is there any other way 
 to accomplish the same using the (EVP_*) functions? I've tried snip

What on earth makes you think EVP_BytesToKey is appropriate? 
EVP_BytesToKey is used for *password-based* encryption, where 
the key (and sometimes IV) is derived in a complicated way from 
a character string intended to be remembered by humans. 
(Even for that it is outdated and PKCS5_PBKDF2* is better.)

If you have an actual key, as you did in your old code, you simply pass 
that same actual key to EVP_EncryptInit as the actual key.

 What algorithm is being used in AES_set_encrypt/decrypt_key function?

The key scheduling defined in FIPS 197 (and before that Rijndael).
And so is EVP_EncryptInit* when used on an AES cipher.

 The code using the `AES_*` functions
snip
int main()
{
unsigned char p_text[] = "plain text";
unsigned char c_text[16];
snip set_encrypt_key
  AES_encrypt(p_text, c_text, aes_key);

This old code allocates a buffer of 11 bytes to contain the plaintext and
passes it to AES_encrypt which reads 16 bytes (always). The remaining 5 bytes
are undefined by the C standard -- in fact the *behavior* is undefined by the
standard, although in practice nearly all implementations will read *some*
values. *What* values they read may depend on your compiler, version, options,
computer, operating system, and the phase of the moon, which means you
probably can't duplicate it in the new code. You can still decrypt old values
with the fixes described below, but you can't faithfully recreate them.

 The code using the `EVP_*` functions. (Encryption code is below and the 
 decryption code is similar).
snip
int main()
{
EVP_CIPHER_CTX *ctx = (EVP_CIPHER_CTX*)malloc(sizeof(EVP_CIPHER_CTX));
EVP_CIPHER_CTX_init(ctx);

EVP_CIPHER_CTX_new() is simpler.
  
const EVP_CIPHER *cipher = EVP_aes_128_ecb(); // key size 128, mode 
 ecb (not FIPS compliant?)

FIPS140 covers only the algorithm not the mode, so all modes are equally
compliant. However ECB is frequently insecure and a Very Bad Idea if used for
anything other than random data. See Wikipedia, or practically any good crypto
reference.

snip EVP_BytesToKey and related: just remove that

EVP_EncryptInit(ctx, cipher, aes_key, aes_iv);

Pass the actual key, of the correct size. ECB does not use an IV, so you can
pass NULL or any value of the correct size, which will be ignored. Not using
an IV is one of the (several) things that makes ECB frequently insecure,
see above.

EVP will by default add and remove PKCS5 (or more exactly PKCS7) padding. 
To avoid that, and particularly to decrypt values from your old code which used 
some garbage data (see above) rather than valid padding, 
EVP_CIPHER_CTX_set_padding (ctx, 0);

unsigned char p_text[] = "plain text"; int p_len = sizeof(p_text);
unsigned char c_text[16]; int c_len = 16;
int t_len;
EVP_EncryptUpdate(ctx, c_text, &c_len, p_text, p_len);
EVP_EncryptFinal(ctx, (c_text + c_len), &t_len);
c_len += t_len;
 
Happy New Year.




Re: [openssl-users] Differences in openssl 0.9.8 and 1.0.1x for private pem key file

2014-12-30 Thread Dave Thompson
 From: openssl-users On Behalf Of Jaya Nageswar
 Sent: Tuesday, December 30, 2014 02:36

 ... the output [is] different between openssl 0.9.8 and 1.0.1x versions as 
 the following methods 
 are being used in the code flow for the method PEM_write_bio_PrivateKey.
 1.0.1x - PEM_write_bio_PKCS8PrivateKey
 0.9.8 - PEM_ASN1_write_bio((i2d_of_void *)i2d_PrivateKey,...)

Yes. To be complete, it's 0.9.8anything versus 1.0.0anything OR 1.0.1anything.

 1. As I mentioned earlier, We have a sample application where we try to read 
 a  sample pem key file, create an EVP_PKEY indirectly using 
 PEM_read_bio_PrivateKey 
 and try to create pem key files encrypted using different ciphers like (RC2, 
 RC4etc.) 
 using the method PEM_write_bio_PrivateKey. I am getting a different output in 
 1.0.1x 
 while using the cipher RC2 by using the method PEM_write_bio_PrivateKey.That 
 is 
 understandable as we use PKCS8 in 1.0.1x. However if I try to use the cipher 
 RC4 
 for encyrption,PEM_write_bio_PKCS8PrivateKey is failing.Is there a known 
 issue or a bug for RC4.  

I don't see anything in RT (the bug tracker) but yes, privatekey encryption
doesn't work for RC4, apparently because it's a stream cipher with no IV.
The symptoms vary:

- writing PKCS8 encrypted gives an error, in either DER or PEM (PKCS8 is
encrypted in the DER; the PEM just base64's it). In 1.0.0+
PEM_write_PrivateKey maps to PEM_write_PKCS8PrivateKey and therefore gets this.

- writing traditional RSA/etc encrypted PEM (which encrypts at the PEM level)
writes a file and returns success, but that file can't be decrypted because
it has no IV. In 0.9.8 PEM_write_PrivateKey maps to
PEM_write_{RSA/etc}PrivateKey and gets this.

- for completeness, remember there is no traditional encrypted DER format.

 2. Also Can I use the method PEM_ASN1_write_bio((i2d_of_void 
 *)i2d_PrivateKey,...) 
 in 1.0.1x instead of the method PEM_write_bio_PrivateKey if I want to have 
 the same output similar to 0.9.8.

It looks like you can, but it's not documented that I can see and it looks a
bit fragile.

The long-documented way that works on all versions (so far!) is to call the
correct per-algorithm routine, PEM_write_{RSA,DSA,EC}PrivateKey.
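
A sketch of that, for RSA (the password argument is only used if you pass a
cipher; a NULL cipher writes unencrypted):

    #include <openssl/pem.h>
    #include <openssl/rsa.h>
    #include <openssl/evp.h>

    /* sketch: write the 0.9.8-style traditional PEM under 1.0.x by using
     * the per-algorithm routine with the RSA struct from the EVP_PKEY */
    int write_traditional_rsa(BIO *out, EVP_PKEY *pkey,
                              const EVP_CIPHER *enc, const char *pass)
    {
        int ok = 0;
        RSA *rsa = EVP_PKEY_get1_RSA(pkey);    /* NULL if not an RSA key */
        if (rsa != NULL) {
            ok = PEM_write_bio_RSAPrivateKey(out, rsa, enc, NULL, 0,
                                             NULL, (void *)pass);
            RSA_free(rsa);
        }
        return ok;
    }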




Re: [openssl-users] can I parse PKCS8 file and decrypt it later?

2014-12-30 Thread Dave Thompson
 From: openssl-users On Behalf Of Bear Giles
 Sent: Tuesday, December 30, 2014 16:53

 I've been able to read and write most objects using both the PEM bio 
 and i2d/d2i functions. I know I can write an encrypted PKCS8 file with 
 PEM_write_bio_PKCS8PrivateKey().
 How do I read encrypted PKCS8 files? I can read unencrypted files with 
 PKCS8_PRIV_KEY_INFO but have been stumped by the encrypted file. 
 
For PKCS8 encrypted DER: d2i_PKCS8PrivateKey following the usual pattern.

All of the PEM_read_*PrivateKey routines can read *any* privatekey 
as long as the key type is satisfactory (and if encrypted the correct 
password is supplied, of course). Thus 
- PEM_read_RSAPrivateKey can read traditional-RSA or PKCS8-RSA 
- PEM_read_DSAPrivateKey can read traditional-DSA or PKCS8-DSA 
- PEM_read_ECPrivateKey can read traditional-EC or PKCS8-EC 
and the slightly less obvious one
- PEM_read_PrivateKey can read any traditional or any PKCS8

On the _write side you have to specify what file format you want, 
but on the _read side the BEGIN line says what file format it is
and you only need to specify what *key* you want from it.
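
A one-call sketch (the password is only consulted if the file turns out to be
encrypted):

    #include <openssl/pem.h>

    /* sketch: read any PEM private key -- traditional or PKCS#8,
     * encrypted or not -- into an EVP_PKEY */
    EVP_PKEY *load_key(BIO *in, const char *pass)
    {
        return PEM_read_bio_PrivateKey(in, NULL, NULL, (void *)pass);
    }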

 Obviously 'openssl pkcs8 ...' can do it but maybe I'm overlooking 
 a source of documentation. Otherwise it's a dive into the source code.

 Second question - can I parse encrypted PKCS8 files without decrypting it? 
 I know the traditional keys have to be decrypted (and thus parameter-less 
 readers can't use encrypted files) but I thought PKCS8 was a container and 
 it was possible to parse the object without the password. Does this involve 
 X509_SIG?  
 I noticed that the i2d/d2i PKCS8 functions work with X509_SIG objects.

Using X509_SIG is kind of a crock; it is because outer (encrypted) PKCS8
is just AlgorithmIdentifier plus opaque encrypted data while an X.509 signature 
is just AlgorithmIdentifier plus opaque signature data, and this saved one 
struct!

Yes you can read in a PKCS8-encrypted file with PEM_read_PKCS8 or d2i_PKCS8 
without decrypting, and that's actually the first step of what 
_read_*PrivateKey 
or d2i_PKCS8PrivateKey does, but what good does that do you? There is SOME key, 
but you can't use it for anything. You don't even know its algorithm or size or 
anything that might be of use in deciding whether or when to use it.

If you just want to read the file because it might become inaccessible,
read the file into memory as-is and then PEM_read_bio_x or d2i_x_bio 
from a memory BIO that reads that memory.




Re: [openssl-users] Differences in openssl 0.9.8 and 1.0.1x for private pem key file

2014-12-22 Thread Dave Thompson
 From: openssl-users On Behalf Of Jaya Nageswar
 Sent: Monday, December 22, 2014 05:51

 In our application, we have been using openssl 0.9.8 and trying to move to 
 openssl 1.0.1x as 0.9.8 is going to be EOS by December 2015. We have a 
 sample application where we try to read a  sample pem key file, create an 
 EVP_PKEY indirectly using PEM_read_bio_PrivateKey [and] try to create 
 pem key files encrypted using different ciphers like (RC2, RC4 etc.). 
 
snip lots of mechanism

The mechanism was refactored some, but the visible change is deliberate.

There have long been routines for the algorithm-specific traditional 
formats PEM_read/write_RSAPrivateKey/DSAPrivateKey/ECPrivateKey 
AND for the newer standard and algorithm-generic PKCS8 format
PEM_read/write_PKCS8PrivateKey.

Through 0.9.8 PEM_write_PrivateKey used (the appropriate one of) 
traditional formats; in 1.0.0 and later it changed to use PKCS8. 
If you want to continue writing traditional formats in 1.0.0+ call 
specifically _write_RSAPrivateKey, _write_DSAPrivateKey, etc.
using the algorithm-specific struct from (instead of) EVP_PKEY.

At least for now; there is another thread started just a few days ago 
about all PEM formats used by OpenSSL suggesting the traditional
privatekey forms are obsolete and maybe should be deleted!

Note all PEM_read_xyzPrivateKey routines can read *either*
format, legacy or PKCS8, distinguished by the BEGIN line, although
if e.g. you _read_RSAPrivateKey and the file is PKCS8 for *another*
algorithm that's an error; if you use the generic _read_PrivateKey it
accepts any algorithm into an EVP_PKEY.

If you are writing differently-encrypted privatekey files because 
you are concerned with key security, note one reason PKCS8
encrypted is preferred over traditional encrypted formats is
that PKCS8 allows and OpenSSL uses a much stronger PBE 
key derivation compared to the older and weaker but 
now set in stone and unchangeable one for traditional.
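
For example (AES-256-CBC here is just a reasonable choice; given a cipher,
this routine uses the PKCS#5 v2.0 PBES2/PBKDF2 encryption):

    #include <openssl/pem.h>
    #include <openssl/evp.h>

    /* sketch: write a PKCS#8 encrypted PEM with the stronger PBE */
    int write_pkcs8(BIO *out, EVP_PKEY *pkey, const char *pass)
    {
        return PEM_write_bio_PKCS8PrivateKey(out, pkey, EVP_aes_256_cbc(),
                                             NULL, 0, NULL, (void *)pass);
    }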

On checking I see the PEM manpage has not
been updated for this change.




Re: [openssl-users] OpenSSL performance issue

2014-12-19 Thread Dave Thompson
 From: openssl-users On Behalf Of Michael Wojcik
 Sent: Thursday, December 18, 2014 21:27

  From: openssl-users [mailto:openssl-users-boun...@openssl.org] On
 Behalf
  Of Kurt Roeckx
  Sent: Thursday, December 18, 2014 16:36
  To: openssl-users@openssl.org
  Subject: Re: [openssl-users] OpenSSL performance issue
 
  So the differnce here is that jave picks a DHE ciphersuite while
otherwise
 you
  didn't.  DHE gives you forward secrecy but is slower.
 
 And if DH parameters have not been set, OpenSSL will have to generate
 them on the fly, which can be *very* slow (relative to normal conversation
 establishment).
 
I think this is new in trunk; in all released versions an OpenSSL server
won't use DHE (or anon DH) or ECDHE (or anon ECDH) if parameters have not
been set.
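
For reference, a sketch of providing DH parameters to an OpenSSL server so
that DHE is actually offered (the file would come from something like
'openssl dhparam'):

    #include <openssl/ssl.h>
    #include <openssl/dh.h>
    #include <openssl/pem.h>

    /* sketch: load PEM DH parameters and install them on the SSL_CTX */
    int enable_dhe(SSL_CTX *ctx, const char *dhfile)
    {
        int ok = 0;
        BIO *in = BIO_new_file(dhfile, "r");
        DH *dh = in ? PEM_read_bio_DHparams(in, NULL, NULL, NULL) : NULL;
        if (dh != NULL)
            ok = SSL_CTX_set_tmp_dh(ctx, dh) == 1;
        DH_free(dh);
        BIO_free(in);
        return ok;
    }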

And the case here is OpenSSL client to Java proxy acting as server.
JSSE server uses hardcoded parameters, from some standard -- 
I vaguely recall it being Oakley but don't remember details.




Re: [openssl-users] OpenSSL performance issue

2014-12-19 Thread Dave Thompson
 From: openssl-users On Behalf Of Kurt Roeckx
 Sent: Thursday, December 18, 2014 16:36

 On Fri, Dec 19, 2014 at 02:30:07AM +0530, Prabhat Puroshottam wrote:
  ***
  This is for *Client - Agent*
  ***
 [...]
      Version 3.1
 [...]
      cipherSuite TLS_RSA_WITH_AES_256_CBC_SHA
 [...]
  ***
  This is for *Client - Proxy Server*
  ***
      cipherSuite TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 
 So the differnce here is that jave picks a DHE ciphersuite while
 otherwise you didn't.  DHE gives you forward secrecy but is
 slower.
 
Good catch, I missed that. But, it shouldn't be many *seconds* 
unless this is very poor hardware. Especially since Java 7 
(and IIRC 6) uses, as you can see later in the trace, 768 bits.
(Except export suites use 512 per RFC. Java 8 defaults DHE 
to 1024 and offers some new options for better.)

Although that reminds me, on the *first* session in a process, 
there might be delay to initialize SecureRandom, depending on 
the platform and options/environment. But not for all sessions.

To OP: assuming this delay happens on non-initial sessions 
more than rarely, can you try putting jconsole or the newer 
(but more complicated) Java Mission Control tools on 
the JVM running the proxy server while driving it with 
as many requests as you can, to get some (rough) idea 
what's going on: is it CPU bound? which threads? if you can 
capture stacks, which methods? Is it swapping?

One other thought: normally JSSE server uses a key manager 
that is preloaded from a JKS. Are you using an unusual 
key manager like a PKCS#11 token, or even a custom one 
that does something costly like fetching from LDAP?

 You're also not using session resumption which might speed up the
 whole process.  It at least looks like that proxy server might
 support that.
 
I assumed OP's traces are the first session. Besides OpenSSL 
client doesn't cache by default; you must code to enable it.
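
On the OpenSSL side that looks roughly like this (a sketch; a real client
would also check SSL_session_reused() and refresh the saved session):

    #include <openssl/ssl.h>

    /* sketch: explicit client-side session reuse */
    static SSL_SESSION *saved;

    void after_first_handshake(SSL *ssl)
    {
        saved = SSL_get1_session(ssl);        /* take a reference */
    }

    void before_next_handshake(SSL *ssl)
    {
        if (saved != NULL)
            SSL_set_session(ssl, saved);      /* offer it for resumption */
    }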

 You also seem to be using an old version of openssl that only
 supports TLSv1, I suggest you upgrade.
 
Good in general, but very unlikely to change JSSE-server performance.




Re: [openssl-users] Strange SSL_read behavior: 1/N-1

2014-12-08 Thread Dave Thompson
 From: openssl-users On Behalf Of Hooman Fazaeli
 Sent: Monday, December 08, 2014 09:36

 1. The SSL_read in my http server app always reads the first byte of 
   http request, instead of the whole. To read the rest, I should do 
   further SSL_reads: snip
   I have seen this pattern with firefox, IE and opera as client. 

This is/was the consensus mitigation for BEAST in 2011. 
Senders (browsers etc) break a record of N bytes into 
two SSL records, 1 byte then N-1; this makes the IV for 
the second record (e.g. session cookies) unpredictable.
Although SSL/TLS is defined as a stream service and an 
implementation could recombine these, OpenSSL doesn’t.

This mitigation is only needed for CBC ciphers in protocols 
below TLS1.1. I don't know if (all? some?) browsers only 
implement when needed, but you could try to make sure 
your server supports at least 1.1 (and OpenSSL 1.0.1* 
can support 1.2 as well), and supports and possibly prefers
RC4 (which is the other mitigation for IV, but now itself 
vulnerable to other attacks e.g. Paterson et al at RHUL).

But given that SSL/TLS is a stream service and any implementation 
*must* fragment a record over 16K and *may* choose to fragment 
a smaller record for any reason it likes, your receiver should be 
doing the read-until-complete or in some cases read-until-timeout 
loop anyway. Note TCP (and plaintext HTTP) has this same feature 
and HTTP/TCP does actually fragment in numerous real cases 
including at sizes of realistic HTTP requests. HTTP is carefully 
designed so that both requests and responses are delimited 
either by a distinct close or length header(s) precisely so that 
it works robustly and reliably over such stream channels.
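
As a rough illustration (mine, not from the thread), a receive loop along 
those lines might look like this; request_complete() is a placeholder for 
whatever delimiting logic your protocol uses, and 'ssl' is an established SSL*:

  /* Sketch only: keep calling SSL_read until the HTTP request is complete.
   * Error handling abbreviated. */
  char buf[8192];
  int off = 0;
  for (;;) {
      int n = SSL_read(ssl, buf + off, (int)sizeof(buf) - off);
      if (n <= 0) {
          int e = SSL_get_error(ssl, n);
          if (e == SSL_ERROR_WANT_READ || e == SSL_ERROR_WANT_WRITE)
              continue;                  /* nonblocking socket: try again */
          break;                         /* clean shutdown or real error */
      }
      off += n;
      if (request_complete(buf, off))    /* e.g. blank line ending the headers */
          break;
  }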




RE: OpenSSL performance issue

2014-12-04 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Prabhat Puroshottam
 Sent: Tuesday, December 02, 2014 07:04

 We have a product which uses OpenSSL to connect and transfer
 application level data. There are two ways to connect, and get the
 application level data from *Agent* to *Client*
 
 1. Client (C/C++) - Agent (C/C++)
 2. Client (C/C++) - Proxy Server (Java) - Agent (C/C++)
 
 *Client* and *Agent* are implemented in C, while *Proxy Server* uses Java
 code (This shouldn't really matter). But might be helpful for you to know.
 The issue is, connecting *Client* to *Agent* is very fast (that is
relatively).
 While connecting *Client* to *Proxy Server* is very slow - that is orders
of
 magnitudes slow.
 
 I was trying to determine the root cause. From my analysis is appears
that,
 maximum time on the *Client* side is taken by SSL_Connect during
 connection
 establishment, while the actual application level data transfer takes very
 small time. Similarly, on the *Proxy Server* side (Java Code), maximum
 time
 is taken in the first read/write whichever happens first. Further, I don't

Both the OpenSSL and Java (SSLSocket) APIs have the feature that you 
can do the handshake explicitly (SSL_connect/accept or .startHandshake) 
or implicitly on the first read or write (set_connect/accept_state,
default).
It appears you use explicit in Client but implicit in Proxy. This is valid 
but may be confusing if you don't keep it firmly in mind.
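
For the OpenSSL half, the two styles look roughly like this (my sketch, 
not the OP's code; 'ssl' and 'sock' are assumed set up already):

  /* Explicit: do the handshake now and fail fast if it doesn't complete. */
  SSL_set_fd(ssl, sock);
  if (SSL_connect(ssl) != 1) { /* check SSL_get_error / ERR_get_error */ }

  /* Implicit: just set the role; the first SSL_read/SSL_write handshakes. */
  SSL_set_fd(ssl, sock);
  SSL_set_connect_state(ssl);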

 think this is a network latency issue, as the problem is very pronounced
 and all the three boxes are on the same network. Also, the *Client* code
 seems to be similar, whether we connect to *Agent* (method 1 above) or
 *Proxy Server* (method 2 above). So, the issue is with *Proxy Server*,
 IMHO.
 
It does look like that, see below. And Proxy is Java not OpenSSL,
which somewhat reduces the suitability of this group to help.

 To further locate the issue, I did some tests using ssldump command,
snip long data

 As you can see the big time difference between the two executions - which
 actually involve the same application level data. The largest chunk of

On the trace of Client-Proxy connection, once the (delayed) handshake 
completes Proxy sends some initial data to Client without waiting for 
data from Client, but Client-Server doesn't. That is not quite the same.

 time is spent waiting for handshake from *Proxy Server*. The response time
 of *Proxy Server* in replying back with ServerHello, varies greatly
 between 1.5 to 11 seconds across different runs. In the present case it is
 nearly 3.3 seconds - which IMO is not acceptable.
 
Yes ServerHello is nearly all of your delay, and that is in Proxy. What 
does Proxy code do between accepting on the [SSL]ServerSocket 
(which is Java's way of representing the listening socket) and the 
first read or write -- apparently write? One particular thing, is it 
using BufferedReader/Writer on the socket streams? Java code 
often does buffered because it is more efficient than going to OS 
each time (at least for real I/O rather than ByteArray Streams).
In that case for output you need .flush() to actually send. 
Or you could try doing .startHandshake() explicitly as soon as 
convenient after the .accept() and see what difference that 
makes (although it may only move the problem elsewhere).





RE: How to disallow openssl to pick up local openssl settings?

2014-12-04 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Jeffrey Walton
 Sent: Monday, December 01, 2014 16:18
(reordered)
 On Mon, Dec 1, 2014 at 3:47 PM, Tanel Lebedev tanel.lebe...@gmail.com wrote:
  I'm building and packaging OpenSSL as a third party library in my app. I
  also include a certificate bundle with it.
 
  Now it seems that the OpenSSL library that is packaged with my app, tries to
  pick up users local OpenSSL settings (/some/path/openssl.cnf).
 
  Is there any way to turn this off, when building OpenSSL? I'd like the
  OpenSSL library not to poke around on users machine, only use the
  certificate bundle I've specified etc.
 
 I'm not sure if there is a configuration switch like no-conf.
 `Configure` is not much help here since it silently consumes bad
 options.
 
 If interested, I believe you can change the behavior at runtime with
 `OPENSSL_no_config`. See
 https://www.openssl.org/docs/crypto/OPENSSL_config.html.
 
If your app actually calls OPENSSL_config presumably you want config.

The hidden one is that OPENSSL_add_all_algorithms, also known as 
SSLeay_add_all_algorithms, which many apps call as part of a 
standard initialization, can be compiled to a _conf or a _no_conf 
variant and _conf calls OPENSSL_config(NULL).  But only if you 
set macro OPENSSL_LOAD_CONF at *app* compile time, which shouldn't 
happen unintentionally. See evp.h.

But unless your app makes calls to look at specific config items 
and sections, the only thing configured automatically is
modules and engines. In particular config does not alter app
trusted CAs. That is controlled only and explicitly by whether 
your app calls _load_verify_locations or _default_verify_paths.
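
A minimal initialization sketch along those lines (the bundle filename is a 
placeholder):

  /* Skip any openssl.cnf on the user's machine, then initialize as usual. */
  OPENSSL_no_config();
  SSL_library_init();
  SSL_load_error_strings();
  OpenSSL_add_all_algorithms();

  SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
  /* Trust only the CA bundle shipped with the app. */
  SSL_CTX_load_verify_locations(ctx, "my-ca-bundle.pem", NULL);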




RE: SSL alert number 51

2014-11-23 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Charles Mills
 Sent: Friday, November 21, 2014 12:30

 Thanks. I guess I may have to open a problem with IBM. The IBM
 documentation
 clearly lists a number of cipher suites (at they call them) that use
SHA1
 (including the one we (IBM+OpenSSL) default to as being FIPS 140-2
 compliant.
 
cipher suite(s) is the official term in the TLS standards,
mostly two words but sometimes hyphenated or run together,
so not surprisingly most implementations use it or a variant.
The SHA at the end of a suite name defined before TLS1.2 
is actually SHA1 used within HMAC for integrity check.
(HMAC is a generic MAC-from-hash construction.)
The new suites defined in or after TLS1.2 use SHA256 or SHA384 
for HMAC, or are authenticated-encryption with *no* HMAC,
although they still vary the hash used in the PRF for key derivation.

 GSK appears to only support SHA1 and MD5, and MD4 is pretty clearly not
 FIP 140-2 compliant.
 
(That's a typo. SSL/TLS never used MD4, or MD2. It did use RC4 and RC2.)

Not quite, the picture is more nuanced. Although if you *can* 
go to TLS1.2 and a SHA256 or SHA384 suite that is Best Practice.

800-131A (Jan 2011) codified in 800-57 part1 rev3 (July 2013)
prohibits SHA1 *for signature and hash-only* (which are assumed 
subject to collision attack) after 2013. It is still allowed for HMAC 
and some other uses that protect against collision. (Even after 2030 
when 3TDEA, SHA-224, IFC/FFC 2048, and ECC 224 are scheduled 
to go away, although they may well re-think before then.)

In particular, draft 800-52 rev1 (Jan 2013) allows the TLS1.0/1.1
PRF (key derivation) with SHA1-xor-MD5; MD5 is not Approved 
at all but this construction doesn't rely on it and SHA1 *for KDF* 
is okay. However TLS1.0 is disallowed for another reason.

Similarly in non-FIPS situations the two (HMAC-)MD5 suites 
that are not SSLv2-only and not export-weakened are still  
mostly considered acceptable, though at the same time 
certs *signed* with MD5 are not, and certs signed with SHA1 
won't be within a year or two. Not that this really matters,
since you practically always have a better option.






new c_rehash, was RE: differing outputs using cli utility and c interface

2014-11-20 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Malatesh Ankasapur
 Sent: Tuesday, November 18, 2014 23:17
Note: you should post a new topic as a new message, not a reply. subject fixed

 citrix reciever using the symbolic link .pem certificate so i did c_rehash 
 for my ceritficate
 1. openssl-0.9.8e created 1 hash value for my certificate. but i dont know 
 why openssl-1.0.0 is creating 2 hash value
 suppose i am going to update openssl in my linux, its depends on the libc, 
 its difficult to update i think.
 is any other solution to create 2 hash values for same file with 
 openssl-0.9.8e

I don't see any situation where 1.0.0 (or any other) c_rehash would 
*create* two hash values.

1.0.0 or later does use a *different* hash algorithm (for cert and CRL names)
than 0.9.8 and earlier, and a different hash algorithm produces different 
values.
If you do 0.9.8 c_rehash AND 1.0.0+ c_rehash, TOGETHER they will produce 
2 different hash values and resulting links (or copies) for each cert or CRL.
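
If you want to see both values for one cert (1.0.0 and later added the _old options):

  openssl x509 -noout -subject_hash -subject_hash_old -in cert.pem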




RE: Unable to sign a certificate: for Java codesigning

2014-11-20 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Joerg Schmitz
 Sent: Saturday, November 15, 2014 12:16

 I hope you can help me. I'm about to sign jar-files with a self created 
 certificate 
 using OpenSSL. The jar-File contains an old Java-Applet which Java is 
 blocking 
 (as long as it is not signed) in the browser since version 7.51. Once it is 
 signed, 
 I just have to install the certificate (in the system / browser / JRE).

 Right now I have a problem signing the certification request (see below Step 
 7): 
 unable to load certificate. What do I have to change to pass this step? 

That's already answered.

 In addition I am not sure about the further steps (which I also added below). 
 Could you pls also tell me if these are right?

But as to the others:

 1.) Create folder structure cd test mkdir private certs newcerts conf export 
 csr 
 echo '01'  serial touch index.txt export 
 OPENSSL_CONF=/home/joerg/cacerts/myca/openssl.cnf

Those are all run together and need to be separated, but are then reasonable.

 2.) Create the Certificate Authority
 openssl req -new -x509 -days 3650 -keyform PEM -outform PEM 
 -keyout test/private/cakey.pem -out test/cacert.pem

By default req -new -x509 is only valid for 30 days. If you want your apps 
to last longer than that, choose a suitably longer period in days. You use 
365 for the child cert below, and the CA needs to be at least that long.
Since you need to install any reissued root cert in each client, 
you probably want to make it longer like 5 or 10 years as long as 
you're confident you will keep your privatekey secure and no one 
else can get at it to create unauthorized certs and thus apps.

 3.) Copy the CA into a format which can be managed by the Java-keystore:
 openssl x509 -outform der -in test/cacert.pem -out test/cacert.crt

Not needed. keytool has been able to read cert in PEM a long time.
(The API for a *program* doesn't, or not easily, but keytool does.)
 
 4.) Generate Keystore
 keytool -genkey -keystore javakeystore.jks -alias test

Create keystore *and generate privatekey*. 

 5.) Check Keystore
 keytool -list -keystore javakeystore.jks -storepass whatever
snip

 6.) Create certification request
 keytool -certreq -v -file test/certs/caRequest.csr -alias test -keystore 
 javakeystore.jks -storepass whatever

 7.) Sign the certificate with the CA
 openssl ca -days 365 -in test/certs/caRequest.csr -out 
 test/newcerts/caRequest.pem -policy policy_anything
snip error, see other answer

 My plan is to continue like this:

 8.)
 openssl x509 -in test/newcerts/caRequest.pem -out test/newcerts/caRequest.pem 
 -outform PEM

 9.)
 openssl x509 -outform der -in test/newcerts/caRequest.pem -out 
 test/newcerts/caRequest.crt

Not needed. Current Java (7 or 8) keytool can read PEM even with the comments 
'ca' adds.

 10.) Concatenate the certificate chain
 cat test/newcerts/caRequest.pem test/cacert.pem  
 test/newcerts/caRequest.chain

Not needed if you separately load the one-and-only CA cert as you do.

 11.) Indicate that I trust this CA
 keytool -import -trustcacerts -file test/cacert.pem -alias test -keystore 
 javakeystore.jks -storepass whatever

On this step the -alias must NOT MATCH your privatekey entry which is 'test'. 
Maybe 'myroot'.  
-trustcacerts is not relevant here, only when importing *some* child certs.

 12.) Import it into your keystore
 keytool -import -file test\newcerts\caRequest.chain -alias test1 -keystore 
 javakeystore.jks -storepass whatever

This step must be -alias test to  MATCH the privatekey entry.

 13.) Sign jar file
 jarsigner -keystore javakeystore.jks TestApplet.jar test





RE: SSL alert number 51

2014-11-19 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Charles Mills
 Sent: Wednesday, November 19, 2014 14:08

 10280:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt
error:.\ssl\s3_pkt.c:1275:SSL alert number 51

http://tools.ietf.org/html/rfc5246.html#section-7.2
   decrypt_error
  A handshake cryptographic operation failed, including being unable
  to correctly verify a signature or validate a Finished message.
  This message is always fatal.

Either there's a bug somewhere or you are being attacked (MitM'ed).

 OpenSSL 1.01h is the server, running on Windows 7 Pro 64 bit. 

Do you mean the server, running 1.0.1h on Win7, produced this error message,
or some client talking *to* such a server produced the error?
In either case, what is in the error output or log of the opposite peer?

If you try to connect s_client to the server, or the client to s_server,
respectively,
does it work or what error info does it give you?
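
For example, something like (host and port are placeholders):

  openssl s_client -connect yourserver:443 -state -msg

shows each handshake message and where the handshake stops.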





RE: openSSL equivalent of RSA/ECB/PKCS1Padding

2014-11-19 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Dan Si Atat
 Sent: Wednesday, November 19, 2014 14:32

 I am trying to emulate in OpenSSL java encryption algorithm.  
 When using RSA_public_encrypt are there parameters to emulate any of
these 
 combinations of parameters in Java?
 RSA/ECB/OAEPWITHMD5ANDMGF1PADDING or RSA/ECB/PKCS1Padding? 
 I tried using RSA_PKCS1_PADDING as a padding parameter but when I
decrypt 
 the encrypted text in Java I get a BadPadding exception.
 
Then you're doing something wrong. I can encrypt in openssl with
PKCS1_PADDING or OAEP_PADDING 
and decrypt in Java with RSA/ECB/PKCS1PADDING (or just RSA, because Java
defaults to PKCS1 padding)
or RSA/ECB/OAEPWITHSHA1ANDMGF1PADDING (SHA1 not MD5; openssl doesn't do
OAEP-MD5, nor the SHA-2s)
and vice versa. Tested with openssl 0.9.8za 1.0.0j,m 1.0.1c,h versus Java
7u15 7u72 8u05 8u25
which are the versions I currently have on my throwaway test environment.
(FWIW I do have 
unlimited crypto policy in all Javas, although I don't think that makes a
difference for RSA.)

Are you sure you're using (halves of) the same keypair on both sides? 
Key mismatch will definitely manifest as bad padding.

Are you treating the data as binary (C/C++ "b" open mode, Java Stream bytes not
Reader/Writer chars) 
or encoding/decoding to a form that exactly preserves bits (usually base64
or hex)?
(Although if not I would usually expect length errors rather than padding.)
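
For reference, the openssl side of the PKCS#1 v1.5 case is roughly this 
(my sketch; error handling trimmed, and 'rsa' must hold the public half of 
the same keypair Java decrypts with):

  unsigned char in[32];                    /* the small secret to encrypt */
  unsigned char out[512];                  /* must be >= RSA_size(rsa)    */
  int outlen = RSA_public_encrypt(sizeof(in), in, out, rsa, RSA_PKCS1_PADDING);
  if (outlen < 0)
      ERR_print_errors_fp(stderr);
  /* For OAEP instead, pass RSA_PKCS1_OAEP_PADDING here and use
   * RSA/ECB/OAEPWithSHA1AndMGF1Padding on the Java side. */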





RE: Query regarding SSLv23 methods

2014-11-15 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Kyle Hamilton
 Sent: Friday, November 14, 2014 22:03

 SSL_OP_* are bitmasks.

 SSL_CTX_set_options(conn-ssl_ctx, SSL_OP_NO_SSLv2|SSL_OP_NO_SSLv3);

 On 11/14/2014 12:37 AM, Vaghasiya, Nimesh wrote:
conn-ssl_ctx = SSL_CTX_new(SSLv23_server_method());
  SSL_CTX_set_options(conn-ssl_ctx, SSL_OP_NO_SSLv2);
SSL_CTX_set_options(conn-ssl_ctx, SSL_OP_NO_SSLv3);

Although _set_options is additive, i.e. it only turns bits on -- so 
the sequence of two calls does work, although it is a bit less clear.




RE: sign issue

2014-11-15 Thread Dave Thompson
Your questions are confused and I don’t have time to read through a lot of 
code, but:

 

In OpenSSL, type RSA (typedef struct rsa_st) is used for both/all RSA keys.

When you generate a new keypair, the RSA structure is filled with fields for 
both the private key and the public key. If you use the routines to write and read 
RSAPrivateKey format, or [PKCS8]PrivateKey for an EVP_PKEY “holding” RSA,
the key written and read back is usable as either private key or public key.

If you pass a “both” RSA to a routine that needs the private key,
like _private_decrypt or Open or _private_encrypt or [Digest]Sign, it uses 
the private fields. If you pass it to a routine that needs the public key,
like _public_encrypt or Seal or _public_decrypt or [Digest]Verify, it uses 
only the public fields and ignores the private ones even though present.

If you put the public key in a cert (directly, or via a CSR) and then fetch 
the RSA from the cert, it contains only the public fields. If you pass that
to a public-key operation it works, because it ignores the private fields.
If you pass it to a private-key operation it fails, because the private fields 
are missing and needed.
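
So, as a sketch of the OP's situation (names are placeholders, error checks 
omitted): sign with the full RSA you generated, verify with the public-only 
RSA pulled from the cert:

  /* Client: sign a SHA-256 digest with the "both" RSA (needs <openssl/sha.h>). */
  unsigned char digest[SHA256_DIGEST_LENGTH], sig[512];   /* sig >= RSA_size() */
  unsigned int siglen;
  SHA256(data, datalen, digest);
  RSA_sign(NID_sha256, digest, sizeof(digest), sig, &siglen, client_rsa);

  /* Server: the cert yields a public-only RSA, which is all verify needs. */
  EVP_PKEY *pk = X509_get_pubkey(client_cert);
  RSA *pub = EVP_PKEY_get1_RSA(pk);
  int ok = RSA_verify(NID_sha256, digest, sizeof(digest), sig, siglen, pub);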

 

HTH 

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Amir Reda
Sent: Saturday, November 15, 2014 06:37
To: openssl-users@openssl.org
Subject: sign issue

 

dear all

i'm a Msc student that uses NS3 simulator to do some researches. my target for 
right now is to make a sample code for a client and a server then add it to the 
simulator 

as a brief 
1-the client send a certificate request and the server send the certificate to 
the client 

2- the client create a shared key and encrypt it using function 
RSA_public_encrypt and create some data and sign the data and encrypted shared 
key and send (client certificate and the data and the encrypted shared key and 
the sign (of both encrypted shared key and the data)) to the server side

3- the server will verify the certificate and decrypt the encrypted shared key 
using its private key. and verify the sign using the public key extracted from 
the client certificate

i have created the certificate and its working well and verified and the 
encrypted shared key is done

 my problem is 
1- how to sign both the data and encrypted shared key with the private key of 
the client even i have only RSA structure 

2- the encrypted shared key should be encrypted by the public key of the server 
which can be extracted from the server certificate but the method it self 
RSA_public_encrypt got RSA structure as an argument 

3-how can i verify the sign

 do i need to make all of the data and encrypted shared key to digest then sign 
it  even i don't separated private and public key i have only RSA structure 
and how to do that

thanks allot for help  






Re: Openssl IPv6 Support

2014-11-05 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Marcus Meissner
 Sent: Wednesday, November 05, 2014 04:10

 On Wed, Nov 05, 2014 at 08:28:40AM +, Mody, Darshan (Darshan)
 wrote:
  Hi,
 
  Does Openssl support IPv6 officially?.
 
 AFAIK the libssl and libcrypto libraries do not use sockets at all,
 these are left to the applications/libraries using them.
 
libssl requires something it can send and receive on using the BIO API
that represents the connection to the peer and is normally a socket,
although in principle you could write your own module to substitute 
something crazy like IP-over-carrier-pigeon.

The BIO module in libcrypto provides a BIO_sock instance that 
does I/O on an OS socket and provides the BIO API to libssl 
(or to code that wants to use plain non-SSL sockets, FTM).

BIO_sock can send and receive on any opened socket, IP4 or IP6.
So if the application 'connect's or 'accept's the sockets, 
and then passes them to SSL_set_fd (or equivalent) it works.
But last I looked, BIO_sock cannot do IP6 *connect*, and 
only does IP6 *accept* if you give it an already IP6 listen socket.
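
So the working pattern is: the application makes the IPv6 connection, then 
hands the fd to OpenSSL. A bare sketch (no error handling; the address is an 
example and ctx is set up elsewhere):

  /* Needs <sys/socket.h>, <netinet/in.h>, <arpa/inet.h>, <openssl/ssl.h>. */
  int fd = socket(AF_INET6, SOCK_STREAM, 0);
  struct sockaddr_in6 sa = {0};
  sa.sin6_family = AF_INET6;
  sa.sin6_port   = htons(443);
  inet_pton(AF_INET6, "2001:db8::1", &sa.sin6_addr);
  connect(fd, (struct sockaddr *)&sa, sizeof(sa));

  SSL *ssl = SSL_new(ctx);
  SSL_set_fd(ssl, fd);
  SSL_connect(ssl);        /* libssl never sees an address, only the fd */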





RE: Why public key SHA1 is not same as Subject key Identifier

2014-11-05 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Jerry OELoo
 Sent: Wednesday, November 05, 2014 03:11

 But when I go to www.google.com website, I find the leaf certificate
 and intermediate certificate is ok, but root CA certificate (GeoTrust
 Global CA) is not.
snip
 Public Key SHA1:
 00:f9:2a:c3:41:91:b6:c9:c2:b8:3e:55:f2:c0:97:11:13:a0:07:20
 
 Subject Key Identifier: c0 7a 98 68 8d 89 fb ab 05 64 0c 11 7d aa 7d
 65 b8 ca cc 4e
 
http://tools.ietf.org/html/rfc5280.html#section-4.2.1.2

notice the difference between MUST and SHOULD.
See the referenced RFC 2119 if necessary.




RE: sign data and verify it

2014-11-05 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Amir Reda
 Sent: Wednesday, November 05, 2014 02:42

 1- i generate rsa key pairs and try to print it in a pem file but when i open 
 the file it was empty

You never close or even flush the file. openssl uses C I/O and C I/O by default 
is usually buffered and not actually written until the file is closed, flushed, 
repositioned, direction changed on an update (read/write) file, or the buffer is filled.
Details vary depending on your C implementation which you don't identify.
For file-BIO, the generic BIO_free does the close, otherwise see the manpage.

Also, you tell BIO_new_file to open in mode wb. PEM data is text not binary, 
and on implementations where these are different (mostly Windows) writing 
PEM as binary will produce a file that other tools may not handle correctly 
(Notepad is particularly bad) although other programs using C including those 
using openssl file-BIO will probably read okay and that may be enough.
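
A minimal write sketch under those rules (illustrative only; check returns in 
real code):

  /* "w" (text) mode is friendlier for PEM on Windows; BIO_free closes the file. */
  BIO *out = BIO_new_file("key.pem", "w");
  PEM_write_bio_RSAPrivateKey(out, rsa, NULL, NULL, 0, NULL, NULL);
  BIO_free(out);   /* without this (or a BIO_flush) the file can stay empty */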

 2- when i use function RSA_public_encrypt () to encrypt some data it does 
 nothing because 
 i print the data using cout before encryption then print it after 
 encryption it was the same

You generate a key of 2048 *bits* and then try to encrypt 256 *bytes* of data. 
You can’t do that much; the data you encrypt plus some overhead determined 
by the padding must be smaller than the modulus. For RSA PKCS1 padding 
(actually retronymed PKCS1-v1.5 or some variant) this is 11 bytes; see rsa.h.

If you checked the return code from RSA_public_encrypt you would know 
it had an error. When any openssl routine returns an error indication, 
you should call the ERR_ routines to get and usually display details about 
the error, usually after loading error strings; except that for some SSL_ routines 
you should first check SSL_get_error to see if it's a real openssl error, 
a system call (I/O) error, or a nonblocking case like WANT_READ.
See https://www.openssl.org/support/faq.html#PROG6
and https://www.openssl.org/support/faq.html#PROG7

Most real systems use hybrid encryption: the bulk data is encrypted by 
a symmetric cipher using a newly generated symmetric key (and usually IV 
if applicable), and the symmetric key which is a fixed size always small enough 
is encrypted with RSA. See the PKCS7_ and CMS_ routines as one example, 
although these also protect the publickey with a certificate so that the 
encrypted data has a decent chance of actually being safe against attacks,
which is usually the desired result of using cryptography.

 - the sign function RSA_sign () has a problem 

Similarly you try to sign 256 bytes, which won't work. Again real systems 
generate a *hash* of the data, which is a small fixed size, and RSA-sign 
the hash with padding, except that here the padding also includes adding 
(and removing/checking) an ASN.1 header that identifies the hash algorithm.

The EVP_Digest{Sign,Verify} and EVP_{Seal,Open} series of routines handle 
these details for you and are usually better than rolling your own crypto.
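
A rough EVP_DigestSign sketch for the signing half (mine, not a complete 
program; 'pkey' is an EVP_PKEY holding the signer's private key, error checks 
omitted):

  EVP_MD_CTX *md = EVP_MD_CTX_create();
  EVP_DigestSignInit(md, NULL, EVP_sha256(), NULL, pkey);
  EVP_DigestSignUpdate(md, data, datalen);
  size_t siglen;
  EVP_DigestSignFinal(md, NULL, &siglen);      /* ask for the length first */
  unsigned char *sig = OPENSSL_malloc(siglen);
  EVP_DigestSignFinal(md, sig, &siglen);       /* then the signature itself */
  EVP_MD_CTX_destroy(md);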





RE: certificate verification problem

2014-10-31 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of tho...@koeller.dyndns.org
 Sent: Thursday, October 30, 2014 14:50

 I have... root_ca.pem ... self-signed ... issued host_ca.pem ...
 I would expect the two to form a valid chain. And indeed,
 verification succeeds:

 ... openssl verify -CAfile root_ca.pem host_ca.pem
 host_ca.pem: OK

 However, if I add -issuer_checks to the command line, I get errors:

 openssl verify -CAfile root_ca.pem -issuer_checks host_ca.pem
 host_ca.pem: C = DE, ST = Hamburg, L = Hamburg, O = K\C3\B6ller Family, 
 OU = Network Administration, CN = K\C3\B6ller Family Host Signing Certificate
 error 29 at 0 depth lookup:subject issuer mismatch
 C = DE, ST = Hamburg, L = Hamburg, O = K\C3\B6ller Family, OU = Network 
 Administration, CN = K\C3\B6ller Family Host Signing Certificate
 error 29 at 0 depth lookup:subject issuer mismatch
 C = DE, ST = Hamburg, L = Hamburg, O = K\C3\B6ller Family, OU = Network 
 Administration, CN = K\C3\B6ller Family Host Signing Certificate
 error 29 at 0 depth lookup:subject issuer mismatch
 OK

 Next, I look at the subject and issuer fields of both certificates, and 
 find them to be matching: snip
 Am I wrong to expect the verify command to succeed without errors in 
 this case, even with -issuer_checks? I am attaching the two certificates,
 in case someone wants to investigate the problem.

As the manpage says:
Print out diagnostics relating to searches for the issuer certificate of the 
current certificate. 
This shows why each candidate issuer certificate was rejected. The presence of 
rejection messages does not itself imply that anything is wrong; during 
the normal verification process, several rejections may take place.

In particular, although the manpage doesn't say so, X509_verify_cert 
checks several(!) times whether your cert is self-issued, only to find it isn't,
causing the errors you see in this case.

The result is OK; the errors should be ignored.




RE: How to get https web site certificate public key

2014-10-30 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Jerry OELoo
 Sent: Tuesday, October 28, 2014 04:20
snip
 Now I use i2d_RSAPublicKey() to encode on RSA* from EVP_PKEY which
 will show same as [Chrome]
  
 One more thing, I find use i2d_RSAPublicKey() will be get same public
 between openssl API and browser for some sites (twitter.com,
 developer.apple.com), but for www.google.com, I find that is not
 exactly same (just has same begin 30 82 01 0a 02 82 01 01 and others
 are not same).
 so why google is not same?
 
RSA public key is a (default-tagged) SEQUENCE of two INTEGERs. 
Some of the len bytes in DER depend on the key size and pubexpt.
At the moment most servers including the three sites you name 
are using 2048-bit keys, although that was different in the past 
and may change again in the future, and the conventional 
pubexpt 65537 aka F4. For those parameters the encoding is
  30 82 01 0a                            # SEQUENCE
     02 82 01 01 00 (256 bytes modulus)  # INTEGER modulus, varies
     02 03 01 00 01                      # INTEGER pubexpt = 65537

Big websites like google, yahoo, twitter are not one machine.
They are maybe hundreds of machines to share the load,
often spread in locations around the world to reduce latency.
Usually they try to use the same certkey for all of them or 
at least big chunks, but depending on who is managing what 
from where and when there is sometimes variation.

As of Tue from my network location, over about 15 minutes,
www.google.com resolves to 16 different IP addresses. 
Of these 11 are using a cert with
 - serial 04:29:2e:de:7a:09:f6:10
- validity starting 2014 oct 15 10:57:04 Z
- modulus beginning bb:cb:8a:0e
and 5 are using a cert with
- serial 1b:a9:d1:40:05:83:5c:00
- validity starting 2014 oct 22 12:57:51 Z
- modulus beginning c1:52:36:91

For twitter.com I get 11 IPs, all using the same cert.

It may be different at your location or different times.




RE: Know Extended Key Usage

2014-10-13 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Lewis Rosenthal
 Sent: Wednesday, October 08, 2014 10:57

 Actually, Jakob, I think it's the second one (the first one after the
 pipe) which can come out, i.e.:
 
Yes.

 openssl s_client -showcerts -connect google.com:443 </dev/null | \
 openssl x509 -noout -text | grep -A1 "X509v3 Extended Key Usage"
 
 which seems to produce a little less noise, but it's still not down to a
 single line of output. Still, it's more elegant than what I cited, I think.
 
The remaining noise is a few lines s_client writes to stderr.
Add 2>/dev/null, or 2>&1 and let the next stage discard it.
(I prefer the latter because it's the same Unix/Windows;
one less on the list of adjustments I must remember.)

Also the -showcerts is useless and misleading. x509 -noout -text 
only decodes and displays the *first* cert in the s_client output,
so including and then ignoring the CA certs is just wasted.
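
Putting those together, a tidier form of the pipeline is roughly:

  openssl s_client -connect google.com:443 </dev/null 2>&1 |
  openssl x509 -noout -text | grep -A1 "X509v3 Extended Key Usage"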




RE: Openssl err 18

2014-10-07 Thread Dave Thompson
verify status 18 (not strictly an openssl error) means that you (usually as a 
client) received a cert chain (usually from the server) with a root cert that 
is not in your truststore. Yes, this is a slightly confusing error description 
for this case.

If the root cert used should be trusted, fix to use a truststore that contains 
it. This has the following subcases:

- If you are currently not using any truststore, fix to use a good one.

- If you are currently using the wrong truststore, fix to use the right one.

- If you are currently using the right truststore but it doesn’t contain this 
  root cert, add this root cert to the truststore.

If the root cert being used should NOT be trusted, fix the server to use a 
chain from a CA that should be trusted. If openssl does not recognize THAT 
root, return to the cases above.

To decide whether the root cert should be trusted, you may need to look at it.
This can be accomplished using openssl s_client -showcerts with a little work,
but if you are able to connect HTTPS to this server from a browser like IE or FF,
(1) that means the *browser* truststore *does* contain this root; if the browser 
store has not been modified this means MS or Mozilla respectively thinks 
this root is trustworthy, which is a pretty good reason to think you should;
(2) from the browser you can extract a copy of the cert to use with openssl.
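
Once you do have the root in a local file, a quick check is (filename and host 
are placeholders):

  openssl s_client -connect yourserver:443 -CAfile root.pem

which should end with Verify return code: 0 (ok) rather than error 18.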

 

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of sandhya reddy
Sent: Monday, October 06, 2014 04:19
To: openssl-users@openssl.org
Subject: Openssl err 18

 

I'm getting an openssl error 

Err:18 self signed certificate because of which not able to succeed with TLS 
handshake completion.

 

Any idea on what is to be done to get it fixed ?



RE: Certificate chain

2014-10-02 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of salih ahi
 Sent: Thursday, October 02, 2014 04:03

 I wrote an openssl server, which uses an on-the-fly created certificate 
 and signs it with the private key of another already created self-signed 
 certificate file. I am adding them both to the ctx:

 X509 cert = X509_new();
 .
 X509_set_pubkey(cert, base_pkey)
 X509_sign(cert, base_pkey, EVP_sha1());
 
 SSL_CTX_use_certificate(ctx, cert);      
//cert = just created
 SSL_CTX_add_extra_chain_cert(ctx, base_cert);    //base_cert =
read from file

A keycert used to issue other (child) certs is called a CA keycert, and a 
CA cert that is selfsigned is called a CA root cert or just root cert.

What are you using for _use_PrivateKey? If you are using a new or different 
keypair for protocol then the pubkey *in* the new cert(s) should be that 
key, not the 'base' key. If you are sharing the same key for both CA and 
protocol (and new cert(s)), you are okay here. 

 When I connect to this server from a browser while tracing client traffic 
 from wireshark, I see both certificates being received in Certificate
record, 
 but if I want to see the certificates in the certificication path of
current page 
 I only see ‘cert’, not both. I set the following fields as shown in both
certificates

 cert.subject.commonname = servername
 cert.issuer.commonname = salih 
 base_cert.subject.commonname = salih
 base_cert.issuer.commonname = salih

To be clear, the *entire* issuer field in the child cert must equal 
the subject field in the CA cert, and for the CA cert to properly 
be a root the entire subject field must equal the issuer field.
Are you saying the commonname fields are set as you show 
and the other fields are something else, or are you saying the 
commonname fields are set and there are no other fields?

Also, the string types should be the same; you can see this in 
wireshark if you look at the underlying bytes not just the 
decoded display, or you can display files (for base you already 
have a file; for on-the-fly child cert if your server doesn't/can't 
save it somewhere you can save it from the browser as a cert 
or wireshark as raw bytes) with openssl asn1parse or 
x509 -noout -issuer -subject -name_opt multiline,show_type
to check. ASN.1 has about six different string types/encodings.
If you *copy* parent.subject to child.issuer it will be correct, but 
if you just set child.issuer to a value that *looks like* the value 
of parent.subject it might be wrong.
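
In code, copying rather than re-building the name is one call (sketch, reusing 
the variable names from your snippet):

  /* Make the child's issuer byte-for-byte equal to the CA cert's subject,
   * including the ASN.1 string types. */
  X509_set_issuer_name(cert, X509_get_subject_name(base_cert));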

 What I want to do is, add base_cert to trusted certificate list of client 
 and any certificate signed with base_cert to show up without any 
 certificate warnings. And I need the certificate chain tree to be 
 parsed correctly by the browser for this. 

You aren't clear, but I guess you *are* getting a browser warning 
because the browser does *not* correctly chain your cert to 'base'?

Did you successfully put the 'base' cert in your Windows store 
(aka InternetOptions / Content / Certificates) in TrustedRoots?
If that gave (or gives) any error, provide details. 

 Am I  missing something during the certificate creation process?

In addition to above, are you using any extension(s) in the 'base' cert?
You don't mention one way or the other.
If you do, they must be suitable for a CA cert. If BasicConstraints is 
present it must have ca=true. If KeyUsage is present, it must have 
keyCertSign enabled (and preferably should not have anything more 
than keyCertSign and crlSign).





RE: Generate DH parameters on the fly

2014-09-26 Thread Dave Thompson
(Sorry, got stuck in my outbox and I didn't notice for a while)

 From: owner-openssl-us...@openssl.org On Behalf Of Marco Bambini
 Sent: Monday, September 22, 2014 02:44

 Thanks a lot for the explanation, so instead of generating new parameters
on
 the fly I could just create them once and then load on requests via the
 SSL_CTX_set_tmp_dh_callback?
 
 Like in the example listed on:
 https://www.openssl.org/docs/ssl/SSL_CTX_set_tmp_dh_callback.html
 
If you generate one set of parameters you can just set them in set_tmp_dh,
which is specified on the same manpage and is just called before connecting.
The _callback variant is only needed if you want to select different
parameters 
for different connections. That example is to support old export
ciphersuites 
where you are/were required to use DH-512 because of legal restrictions 
that no longer apply since about 1999. You should never use export suites 
unless you are dealing with very old systems that cannot be upgraded,
in which case it's probably a waste to bother with DHE at all. Even though 
OpenSSL does still permit them by default (although based on discussions 
here that will probably change in the next release or two).
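
A sketch of the simple non-callback path, assuming the parameters live in a 
PEM file such as dh2048.pem and ctx is your SSL_CTX:

  /* Load DH parameters once at startup and hand them to the context. */
  BIO *bio = BIO_new_file("dh2048.pem", "r");
  DH *dh = PEM_read_bio_DHparams(bio, NULL, NULL, NULL);
  BIO_free(bio);
  SSL_CTX_set_tmp_dh(ctx, dh);
  DH_free(dh);     /* the context keeps its own copy */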

 Should I provide just 4 files: dh512.pem, dh1024.pem, dh2048.pem, and
 dh4096.pem?
 
You should use any DH group of size 512 (the supplied file or one you generate)
only if required for export suites (see above). 512 is now practical to break. 
1024 is adequate for now, although >=2048 provides a better safety margin and 
is specified by standards like NIST SP800-57. However, you should test with your 
clients first; the SSL implementation (JSSE) in Sun-now-Oracle Java before v8 
does not support DH > 1024, and there may be others. If you use 1024 now,
you should have a plan to switch to 2048 or maybe more in a few years.




RE: Change in default behavior from 1.0.1g to 1.0.1h: string global_mask

2014-09-20 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Andy Schmidt
 Sent: Wednesday, September 17, 2014 18:28

 I just tracked down an obscure bug in our certificate authentication
 code to a change in in the global mask for ASN.1 strings in
 crypto/asn1/a_strnid.c.
 (https://github.com/openssl/openssl/commit/3009244da47b989c4cc59ba02c
 f81a4e9d8f8431)
 I have a couple of questions about this:
 
 1. Was this change made for a security related reason?
 That is, by changing global_mask back to the 1.0.1g initialized value,
 are we introducing a security vulnerability?
 
Going back (probably, depending on the actual string values you use) 
may encode differently than standards call for. AFAICS there is no direct 
security impact, but if and to the extent it causes compliance or 
interop problems, those may indirectly affect security. (Canonical 
example: browser displays a dialog box about this certificate may 
be invalid because $technical_details. 99.999% of users click on 
the box that says I don't want this computer gibberish, just 
connect me to the website even if it is run by thieves so that 
I can have my money and personal data stolen QUICKLY.)

 2. Is there a changelist somewhere in the source tarball that lists
 the 1.0.1g to 1.0.1h revisions? Or a list that outlines changes in the
 default settings?
 This would be extremely helpful to incorporating newly released 1.0.1
 subversions. The file CHANGES appears to only list security
 vulnerabilities.
 
IME CHANGES generally lists visible (i.e. commandline or API) changes,
and internal ones (like refactoring) if they are considered important.
You are not the only one visibly unhappy this change was made unlisted.
It was apparently made for http://rt.openssl.org/Ticket/Display.html?id=3371
then affirmed by http://rt.openssl.org/Ticket/Display.html?id=3402
and http://rt.openssl.org/Ticket/Display.html?id=3469 .
AFAICT rt ticket creations are published on openssl-dev,
and these two were definitely discussed there.





RE: Generate DH parameters on the fly

2014-09-20 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Marco Bambini
 Sent: Friday, September 19, 2014 12:04

 my server needs to accept DHE ciphers from clients so I think I would need
to
 be able to load static dh512.pem, dh1024.pem, dh2048.pem and dh4096.pem
 certificates on server side. In order to increase security I would like to
skip
 the pem file loading step and generate these dh certificates on the fly.
 
Those aren't certificates, they are parameters. For DHE (and also DH-anon) 
server and client each generates a new (ephemeral) keypair for each
handshake 
using the same parameters. Having many keypairs under the same parameters 
is secure, this is how Diffie-Hellman works. Generating a new keypair is 
nearly instantaneous; generating new parameters takes a minute or 
several, which would be unacceptable per connection on most servers.
Generating them on server startup, or now and then such as monthly,
would give you the same extremely tiny increase in security.

If you really want that, generate parameters using the DH_ specific 
routine or the EVP_PKEY_ wrapper and pass that to set_tmp_dh or 
use it (or maybe them) in the callback set by set_tmp_dh_callback,
instead of the one(s) read from file(s).

The protocol does define static DH suites which use DH certificates.
(SSLv3 through TLSv1.1 distinguished DH certs signed by RSA or DSS 
in the ciphersuite; 1.2 essentially merges them and uses the new 
sigalgs extension instead.) OpenSSL did not implement these in any 
release yet; 1.0.2 is planned to. DH certificates cannot be requested 
using the standard PKCS#10 CSR (because DH can't sign) and I've never 
seen nor heard of any CA that issues a DH cert nor any system wanting 
to use static-DH. (OpenSSL *does* implement the static *EC*DH suites,
although I haven't seen them used in anger either.)





RE: TLS handshake error : No shared cipher (SSL error 40)

2014-09-17 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Francis GASCHET
 Sent: Wednesday, September 17, 2014 13:35

 We use openSSL in OFTP2 implementation. The OFTP2 working group
 decided
 to strongly recommend to use preferably the cipher suites including PFS
 (ephemeral Diffie Hellman).
snip

To date*, in order to agree a DH-ephemeral or ECDH-ephemeral suite, 
the server must be configured with temporary DH/ECDH parameters:
https://www.openssl.org/docs/ssl/SSL_CTX_set_tmp_dh_callback.html
tmp_ecdh* is similar but has no manpage. Is it?

For ECDHE, the temporary parameters must be a curve allowed by the 
client's list of supported curves. For openssl clients (except RedHat) 
all standard named curves are allowed, but other clients may differ. 
P-256 and P-384, and maybe P-521, seem to be most widely supported,
and therefore probably the best choices in general.

* 1.0.2 is expected to have some more convenient options in this area.
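
For the ECDHE side on 1.0.0/1.0.1, a minimal server-side sketch using P-256 
(ctx is your SSL_CTX):

  /* Ephemeral-ECDH parameters: naming a curve is all that's needed. */
  EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
  SSL_CTX_set_tmp_ecdh(ctx, ecdh);
  EC_KEY_free(ecdh);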





RE: Certificate pass phrase brute force...

2014-09-16 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Gregory Sloop
 Sent: Monday, September 15, 2014 22:50

 And, one more question: 
 How can I tell what format/encryption my pkcs12 files are in? 
 [I believe for Android platform use, I need p12 certs/keys - so I'm working 
 on the export/conversion part too.]
 I export my cert+key like so:
 [openssl pkcs12 -export -aes256 -in somecert.crt  -inkey somekey.key -out 
 somep12.p12]
 An openssl pkcs12 -info -in somecert.p12 gives something like this:
snip
 Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 2048
 My take on what that says [which may well be wrong.]
 The cert is protected with what appears to be quite a weak chpher, 
 but we don't care, since it's public anyway.

Right. 

 However this looks like the key is encrypted with 3DES, but I exported it 
 from the Cert+Key with -aes256 - so I'm puzzled why I'd have a 3DES 
 encrypted p12.

You thought you did but you didn't.

The doc is a bit subtle, but the -$cipher option is listed under PARSING. 
It applies when *reading* a PKCS#12 and extracting the cert(s) and key(s?) 
to separate files or sections, for (most) other OpenSSL operations, and 
specifically to encrypting the extracted privatekey section.

To specify the PBE algorithm for the key when exporting *to PKCS12*, 
use -keypbe, as listed on the man page under EXPORTING.

And yes, it isn't very helpful that commandline doesn't warn when you 
specify a combination of options that doesn't make sense. This is true 
for most of the commandline functions historically, although a few that 
have been (re)written recently are better.
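
For example, an export that really does use AES for the key bag looks roughly 
like this (if your build accepts a plain cipher name for -keypbe it is used 
with PKCS#5 v2.0; otherwise fall back to one of the listed PBE algorithm names):

  openssl pkcs12 -export -in somecert.crt -inkey somekey.key \
      -keypbe AES-256-CBC -out somep12.p12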

snip earlier




RE: cannot read PEM key file - no start line

2014-09-13 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Dave Thompson
 Sent: Friday, September 12, 2014 04:31

 *If* you are now using a legacy-format encrypted private-key (and your 
 original 
 error message suggested you might need some form of private key, which does 
 necessarily mean legacy-format encrypted) yes 76 chars is a problem.
 The example(s) I saw earlier were certificates, where 76 chars works okay.

Argh! private key does NOT necessarily mean legacy-format encrypted.
If you need an encrypted PEM private key (and that remains a separate question) 
you can use a PKCS#8 PEM private key with any width base64 up to 76.
On general principles PKCS#8 is preferable to legacy anyway; it's more 
standard/interoperable, more flexible, and the encryption is better.
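
A conversion along those lines (filenames are placeholders; use -v2 des3 if 
your build doesn't take an AES name):

  openssl pkcs8 -topk8 -v2 aes-256-cbc -in legacy-key.pem -out key-pkcs8.pem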




RE: issuer_hash

2014-09-12 Thread Dave Thompson
-fingerprint is the hash of the whole cert. The question was hash of issuer
name.

 

If you’re satisfied with hash of the issuer name as encoded, which should not 
but can differ from the canonicalized form OpenSSL uses for lookup, you can:

- use asn1parse to find the byte position of the issuer DN
- use asn1parse -strparse to extract issuer in DER to a separate file
  (or more clumsily use something general like dd or perl to extract issuer in DER)
- hash that file
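
Roughly (the offset 33 is only an example; read the real one off your own 
asn1parse output):

  openssl asn1parse -in cert.pem                  # note the offset of the issuer SEQUENCE
  openssl asn1parse -in cert.pem -strparse 33 -out issuer.der -noout
  openssl dgst -sha1 issuer.der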

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Jakob Bohm
Sent: Thursday, September 11, 2014 06:25
To: openssl-users@openssl.org
Subject: Re: issuer_hash

 

On 11/09/2014 09:40, Steven Madwin wrote:


I see that the x509 command used with the –issuer_hash option returns a four
byte digest value. Is there any method using OpenSSL to procure the 20-byte
SHA-1 digest value of the issuer name?

 

use -fingerprint

(-subject_hash and -issuer_hash are used to look up CAs in a disk-based
 database, as used by the -CAdir option to various other OpenSSL commands.
 Basically, each CA is listed under its own -subject_hash, and calling
 -issuer_hash on a certificate then tells where to look for the CA
 certificate).




Enjoy
 
Jakob
-- 
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded 


RE: cannot read PEM key file - no start line

2014-09-12 Thread Dave Thompson
*If* you are now using a legacy-format encrypted private-key (and your original 
error message suggested you might need some form of private key, which does 
necessarily mean legacy-format encrypted) yes 76 chars is a problem.
The example(s) I saw earlier were certificates, where 76 chars works okay.

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Liz Fall
Sent: Wednesday, September 10, 2014 11:20
To: openssl-users@openssl.org
Subject: RE: cannot read PEM key file - no start line

 

Hi Dave,

 

Are you saying that the 76 characters per line is causing the problem with 
openSSL?

 

Thank you,

Liz

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Dave Thompson
Sent: Tuesday, September 09, 2014 5:49 PM
To: openssl-users@openssl.org
Subject: RE: cannot read PEM key file - no start line

 

I was half wrong before. 

 

The base64 read in EVP_Decode* allows 76. But the PEM parser in PEM_read_bio 

enforces exactly 64 only for input files that have PEM-encrypt headers 

which in practice is only encrypted legacy-format privatekey files.

(Nonprivate things like cert, CSR, publickey, params, etc. aren’t encrypted at 
all.

PKCS8 privatekey or PKCS12 key-plus-cert is encrypted within the ASN1, not as 
PEM.)

 

I have and know of no software to create encrypted legacy-format privatekeys

other than OpenSSL itself which always writes 64, so I never encountered this 
before.

(Other sw does do PKCS8-e or PKCS12 but see above.)

 

(As seen elsethread, OP apparently had PEM certs where PEM key was expected.)

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Jeffrey Walton
Sent: Tuesday, September 09, 2014 08:09
To: OpenSSL Users List
Subject: Re: cannot read PEM key file - no start line

 

 

 

On Sun, Sep 7, 2014 at 10:26 PM, Liz Fall f...@sbcglobal.net wrote:

All,

 

I am getting the following with my client cert when trying to connect to an 
SSL-enabled MongoDB:

 

2014-09-03T13:37:56.881-0500 ERROR: cannot read PEM key file: 
/users/apps/tstlrn/u019807/DTCD9C3B2F42757.ent.wfb.bank.corp_mongo_wells.pem 
error:0906D06C:PEM routines:PEM_read_bio:no start line

I just tried to duplicate with a key (not a certificate) that uses line breaks 
at 76 characters. I don't have a certificate because my routines don't support 
certificates. But it should reveal a little about the OpenSSL parser.

Reading the public and private keys were OK when the line size was 76 (see 
below). So the OpenSSL parser is lenient during a read. This seems very 
reasonable to me.

Reading an encrypted private key resulted in an error PEM_read_bio:bad end 
line:pem_lib.c:802 when the line size was 76 (see below). This kind of 
surprised me.

Since you are receiving the no start line error (and not another error), I 
would suspect you are reading an ASN.1/DER encoded certificate; and not a PEM 
encoded certificate. The error occured before anything related to line lengths.

snip rest

 


 



RE: Certificate pass phrase brute force...

2014-09-09 Thread Dave Thompson
(Sorry not inline, my Outlook can’t do that for HTML.)

 

That’s actually a subvariant I forgot to describe: PKCS#8 *version 2*.
It has “BEGIN ENCRYPTED PRIVATE KEY” (not specifying RSA etc.) like version 1,
but instead of a single PBE algorithm-id PBE-with-$kdf-and-$cipher it has a 
structure PBES2 with {$kdf-alg using $params} and {$cipher-alg using $params}.
So yes you read right, the cipher part is TDEA aka [3]DES[3]-EDE[3] in CBC mode.

Yes, req -newkey can only encrypt with TDEA. You can do that and then 
re-encrypt as you did; or you generate the key separately with genpkey,
encrypting with any algo, and then use req -new on that key.
Either way is two steps.
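
The two-step version with a non-3DES cipher would look something like this 
(names illustrative; genpkey takes any cipher name it knows, e.g. -des3 if 
-aes-256-cbc isn't accepted by your build):

  openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
      -aes-256-cbc -out somekey.key
  openssl req -new -key somekey.key -out somereq.csr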

 

However, your conversion apparently produced a legacy-format file 
“BEGIN RSA PRIVATE KEY” with DEK-Info. You/the script probably used 
rsa -$cipher, which does this. This is MUCH LESS SECURE.

As I believe was mentioned, no one will bruteforce the data cipher, 
neither TDEA nor AES-anything. Even 112 would take basically all the 
computers on Earth for many many years, and 128 millions or more.
Even NSA can’t do that. What can be attacked is the password-based 
derivation, especially if the password is something a human can remember.
And for backward compatibility the legacy-format files use a poor PBKDF -- 
based on PBKDF1 (slightly poor) WITH ITERATIONS=1 (AWFUL!!!).

If you want decent security at all, much less anything even approaching the 
strength AES-256 appears to promise, use pkcs8 -topk8 -v2 $cipher
(which unobviously works for input that is already pkcs8) or pkey -$cipher.

 

Cheers.

 

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Gregory Sloop
Sent: Tuesday, September 09, 2014 01:19
To: openssl-users@openssl.org
Subject: Re: Certificate pass phrase brute force...

 

I used the asn1parse command [thanks Dave!] and while the key looks old style 
it parses as follows:

50:d=4  hl=2 l=   8 prim: OBJECT:des-ede3-cbc

Which appears to equate to: des-ede3-cbc   Three key triple DES EDE in CBC 
mode

The full asn parse is:
---
 0:d=0  hl=4 l=2446 cons: SEQUENCE
   4:d=1  hl=2 l=  64 cons: SEQUENCE
   6:d=2  hl=2 l=   9 prim: OBJECT:PBES2
  17:d=2  hl=2 l=  51 cons: SEQUENCE
  19:d=3  hl=2 l=  27 cons: SEQUENCE
  21:d=4  hl=2 l=   9 prim: OBJECT:PBKDF2
  32:d=4  hl=2 l=  14 cons: SEQUENCE
  34:d=5  hl=2 l=   8 prim: OCTET STRING  [HEX DUMP]:ABCABCABCABCABCA 
(REDACTED)
  44:d=5  hl=2 l=   2 prim: INTEGER   :0800
  48:d=3  hl=2 l=  20 cons: SEQUENCE
  50:d=4  hl=2 l=   8 prim: OBJECT:des-ede3-cbc
  60:d=4  hl=2 l=   8 prim: OCTET STRING  [HEX DUMP]:ABCABCABCABCABCA 
(REDACTED)
---
[I don't know if I needed to redact those fields at all, but I don't think it 
matters.)

So, if I've read that properly, the encryption method is 3DES.

---
While this isn't really relevant to OpenSSL, and more relevant to the EasyRSA 
script from OpenVPN - I thought I'd share a solution that appears to work and 
do what I want.

It doesn't appear easy to modify the EasyRSA script to use aes-256 [or any non 
3DES cypher] in the script. From my look at the syntax of a openssl req -new 
-newkey ... command, you don't get to specify the cypher it will use in 
encrypting the private key. This appears to be a result of generating both the 
key and the signing request in a single step - in this case you don't appear to 
get to choose what crypto is used to encrypt the private key. [I'd be glad to 
be shown a way you can specify it - it doesn't appear possible from the 
command-line options at least.] 

However, as I pointed out there is code in the EasyRSA tool to re-encrypt the 
private key with a new password, or remove the password.
You can edit the script to use aes256 as follows: [or any of the other cyphers 
here: https://www.openssl.org/docs/apps/rsa.html]
In the easyrsa bash script:
Look for the line: [ local crypto=-des3 ] (It's line 861 in the current 
EasyRSA version)
Change it to: [ local crypto=-aes256 ]

Now when you issue the command easyrsa set-rsa-pass, and issue the old 
encryption key, along with a new one [you can certainly use the same key for 
the old and new] it will re-encrypt it with aes-256.

Looking at the key file it does appear to indeed work and re-encrypts it with 
AES-256.

#cat somekey.key
-BEGIN RSA PRIVATE KEY-
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-256-CBC ...

---
Thus, this is the best work-around for the tool I can find. Unfortunately it 
requires a redundant step unless someone can show me a way to put the 
encryption type for private keys in a config file or specify it as part of a 
openssl -req ... command

But at least it works the way I want it to, and makes the task of setting up 
keys and certs a bit easier than 

RE: cannot read PEM key file - no start line

2014-09-09 Thread Dave Thompson
I was half wrong before. 

 

The base64 read in EVP_Decode* allows 76. But the PEM parser in PEM_read_bio 
enforces exactly 64 only for input files that have PEM-encrypt headers,
which in practice is only encrypted legacy-format privatekey files.
(Nonprivate things like cert, CSR, publickey, params, etc. aren’t encrypted at all.
PKCS8 privatekey or PKCS12 key-plus-cert is encrypted within the ASN1, not as PEM.)

I have and know of no software to create encrypted legacy-format privatekeys
other than OpenSSL itself, which always writes 64, so I never encountered this 
before. (Other sw does do PKCS8-e or PKCS12 but see above.)

(As seen elsethread, OP apparently had PEM certs where PEM key was expected.)

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Jeffrey Walton
Sent: Tuesday, September 09, 2014 08:09
To: OpenSSL Users List
Subject: Re: cannot read PEM key file - no start line

 

 

 

On Sun, Sep 7, 2014 at 10:26 PM, Liz Fall f...@sbcglobal.net wrote:

All,

 

I am getting the following with my client cert when trying to connect to an 
SSL-enabled MongoDB:

 

2014-09-03T13:37:56.881-0500 ERROR: cannot read PEM key file: 
/users/apps/tstlrn/u019807/DTCD9C3B2F42757.ent.wfb.bank.corp_mongo_wells.pem 
error:0906D06C:PEM routines:PEM_read_bio:no start line

I just tried to duplicate with a key (not a certificate) that uses line breaks 
at 76 characters. I don't have a certificate because my routines don't support 
certificates. But it should reveal a little about the OpenSSL parser.

Reading the public and private keys were OK when the line size was 76 (see 
below). So the OpenSSL parser is lenient during a read. This seems very 
reasonable to me.

Reading an encrypted private key resulted in an error PEM_read_bio:bad end 
line:pem_lib.c:802 when the line size was 76 (see below). This kind of 
surprised me.

Since you are receiving the no start line error (and not another error), I 
would suspect you are reading an ASN.1/DER encoded certificate; and not a PEM 
encoded certificate. The error occurred before anything related to line lengths.

snip rest



RE: cannot read PEM key file - no start line

2014-09-08 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Viktor Dukhovni
 Sent: Monday, September 08, 2014 08:42

 On Sun, Sep 07, 2014 at 07:26:05PM -0700, Liz Fall wrote:
 
  I have checked and verified that there is no whitespace.  Also, the
BEGIN
  and END statements look correct.  However, each line in the cert is 76
 chars
  in length, except for the last line.  Should the lines be 64-characters
  long?
 
 Yes.  The OpenSSL base64 decoder limits input lines to 64 characters.
 
Nope. The encoder writes 64 (the original PEM spec), but the decoder 
will accept up to 76 (the less-old MIME spec). As one case I hit often,
Java keytool -exportcert writes 76 and openssl reads it just fine.

And the error here is no start line. *On Windows* that often occurs 
when Windows editors treat text files as Unicode/UTF-8 with  an
invisible BOM (Byte Order Mark) at the beginning of the first line.
Try prepending a semantically-meaningless comment line like:

Hello! This is my Key!! Rah Rah Go Key Go!!
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIAqD7NQvpg74v7Pik4rAIfk/BIQlQa1fbM9BKkHOkKJBoAoGCCqGSM49
AwEHoUQDQgAE/BR1oMSfz4WgklW7t83E0xClrBh0md1Ata8rsPq8VAsB1WDXPXwk
T7WbcXlsyxuyOb7ok8F544xmr+pKreWbHw==
-----END EC PRIVATE KEY-----




RE: Certificate pass phrase brute force...

2014-09-08 Thread Dave Thompson
For the legacy formats (-----BEGIN RSA PRIVATE KEY----- or -----BEGIN EC PRIVATE KEY-----)

just look on the DEK-Info: header line.

 

For PKCS#8 format (-----BEGIN ENCRYPTED PRIVATE KEY-----) do

  openssl asn1parse -in key.pem

and the third line will be an OBJECT (really an OID) of the form 
pbeWithHashAndCipher.

 

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Gregory Sloop
Sent: Monday, September 08, 2014 20:58
snip

--On that note: Is there a way to determine from an encrypted key-file what 
encryption was used to encrypt it? [I have the password, so it doesn't need to 
be a blind test.]





RE: design clarification using openssl

2014-09-07 Thread Dave Thompson
1) That doesn't make sense. Maybe you mean the socket comes from a (TCP-level)
accept and you give it to SSL_set_fd? 

That does make sense and should work for one connection=socket at a time
i.e. accept #3, connect SSL to #3,

do send and receive until connection closed, close socket and SSL_clear,
accept #7, ditto.
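
A minimal sketch of that pattern (error handling omitted; names like listen_fd
are placeholders):

  SSL *ssl = SSL_new(ctx);
  for (;;) {
      int fd = accept(listen_fd, NULL, NULL);   /* plain TCP accept */
      SSL_set_fd(ssl, fd);                      /* attach this connection's socket */
      if (SSL_accept(ssl) == 1) {               /* TLS handshake as server */
          /* ... SSL_read()/SSL_write() until the connection is done ... */
      }
      SSL_shutdown(ssl);
      close(fd);
      SSL_clear(ssl);                           /* reset state so the same SSL can be reused */
  }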

 

2) 1 is not a real error code. If SSL_get_error(ssl) returns 1 ==
SSL_ERROR_SSL, you should call 

ERR_get_error or its variants, or just ERR_print_errors[_fp] for the
simplest handling. Note ERR not SSL.

If SSL_get_error() returns 5 == SSL_ERROR_SYSCALL you must also look at your
*OS* error: errno on Unix 

or [WSA]GetLastError() on Windows. On Unix perror or strerror gives a nice
decode; Windows is harder.
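
A sketch of the minimal handling (ssl and buf are placeholders; error paths trimmed):

  int n = SSL_read(ssl, buf, sizeof buf);
  if (n <= 0) {
      int e = SSL_get_error(ssl, n);
      if (e == SSL_ERROR_SSL)
          ERR_print_errors_fp(stderr);   /* library/protocol error: details are in the ERR queue */
      else if (e == SSL_ERROR_SYSCALL)
          perror("SSL_read");            /* OS-level error: errno on Unix */
  }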

 

However if your keys or certs were bad you would get the error on loading
them or at the latest at handshake,

which if you don't do it explicitly would be on the first SSL_read or
SSL_write. Not the second.

This error is almost certainly something else and the ERR_* details above
should help spot it.

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of kasthurirangan balaji
Sent: Friday, September 05, 2014 13:49
To: openssl-users@openssl.org
Subject: design clarification using openssl

 

Hi,

 

After searching the web, I am writing to this address as my questions are
still un-answered. 

 

1) Can a SSL structure, allocated memory once via SSL_CTX be used with
various socket descriptors just

by changing the descriptors using SSL_set_fd? The socket descriptor used
would have been passed thru SSL_accept before reaching SSL_set_fd. The
socket is in blocking mode only.

 

2) I generated key and certificate files locally using the openssl commands.
Is anything else needs to be done before loading them? I ask this because,
the first read via SSL_read is always success and subsequent reads fail with
error:0001:lib(0): func(0): reason 1. 

 

If this is not the right place to ask, pls direct me to the right place so
that I can get my queries cleared.

 

Thanks,

Balaji.



RE: Performance related queries for SSL based client server model

2014-09-07 Thread Dave Thompson
This is not a -dev question, and there’s no need to send three times.

 

scp uses the SSH protocol. OpenSSL does not implement SSH.

OpenSSH, which is a different product from a different source, implements 

SSH, although in their design the scp program doesn’t do any comms at all, 

it just pipes to the ssh program which does.

 

What kind of network(s) are you transiting, and what are your endpoints? 

On my dev LAN, which is one uncongested reliable 100Mbps switch, I get 

plain TCP at nearly the hardware limit 8sec per 100MB, and within 10% of 

that for SCP/SSH or trivial-app/SSL. These do 700MB in barely a minute.

 

SSL and SSH differ significantly in connection setup/handshake, and slightly 

in multiplexing the data, but once actually sending application data they use 

mostly the same range of ciphers and MAC, with openssh actually calling 

libcrypto, and use TCP pretty much the same way, so unless you’re doing or 

(perhaps unintentionally) invoking something wrong, you should get roughly 

the same speed for both.

 

Try netcat to measure only the network (and disk) with almost no CPU; 

that gives you an upper bound on any protocol – except one that can and does 

compress well: I believe openssh can and openssl definitely can depending 

on how it’s built, but many people disable it post-CRIME, and it certainly 

depends very much on your data. You might try gzip on your data and 

if that makes much difference send the gzipped form.
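
A rough sketch of such a measurement (netcat option syntax varies between versions;
host, port and file names are placeholders):

  receiver:  nc -l 5001 > /dev/null
  sender:    time nc receiver.example 5001 < bigfile.bin

That gives the network-plus-disk baseline to compare the SSL and SSH transfers against.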

 

 

From: owner-openssl-...@openssl.org [mailto:owner-openssl-...@openssl.org] On 
Behalf Of Alok Sharma
Sent: Sunday, September 07, 2014 03:30
To: openssl-...@openssl.org; openssl-users@openssl.org
Subject: Performance related queries for SSL based client server model

 

Hi,

   I am writing one sample SSL-based client server model which uses the SSL_read and 
SSL_write APIs provided by openssl. But I found that my application is very slow: 
it takes around 40 mins to copy a 700MB file, while the same file using scp finishes 
in 10 mins.

   So my query is: is there an alternative way to use the openssl read or 
write calls to improve performance? I searched the scp code and found it does not use 
SSL_read/SSL_write. So is there another set of APIs which I can use, or any 
idea how I can meet the same performance as scp?

Regards,
Alok



RE: [SPAM?] Re: ECDSA Certificate

2014-08-13 Thread Dave Thompson
 and how do I generate an ECDSA certificate?

To generate a selfsigned ECDSA cert the same ways you do RSA, 
except use EC instead of RSA.

- use req -new with EC key or -newkey with EC parms and -x509 
to generate selfsigned cert directly.

- use req -new with key or -newkey to generate CSR,
then x509 -req -signkey to create selfsigned cert

Set other attributes as appropriate. If you set KeyUsage,
it must include digSign to use this cert for ECDHE-ECDSA.
(KU for RSA should include digSign or encrypt depending 
on the suites to be used, but sometimes isn't enforced.)

Use a curve supported by the peers you will communicate with.
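
A minimal sketch of the first approach (generate an EC key, then a selfsigned cert
directly), assuming the P-256 curve and a placeholder subject name:

  openssl ecparam -name prime256v1 -genkey -out eckey.pem
  openssl req -new -x509 -key eckey.pem -out eccert.pem -days 365 -subj "/CN=test"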

To obtain a CA-signed ECDSA cert the same ways as RSA,
except EC instead of RSA, and harder.

- generate CSR for EC key as above, for suitable curve

- find a CA that issues EC certs, with usage allowing 
at least digSign=ECDSA. I haven't found any yet.

- submit CSR to CA, prove your identity, pay fees.

- receive cert and any chain cert(s) from CA. 

snip



RE: ECDSA Certificate

2014-08-10 Thread Dave Thompson
Both of those are using an RSA certificate; DHE or ECDHE is key-exchange
only 

not authentication. However the servers must configure *parameters* for 

temp DH and temp ECDH respectively; do they? For ECDHE the parameters 

must use one of the (named) curves specified by the client; openssl client 

supports all named curves, but other clients like browsers might not.
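
For reference, a quick way to see which suite a given server actually negotiates
with the openssl client (host and port are placeholders):

  openssl s_client -connect server.example:443 < /dev/null | grep Cipher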

 

Is the second server on not-very-recent RedHat or CentOS?

Until late 2013, RedHat openssl packages disabled all elliptic curve crypto 

due to what they called legal concerns. Everyone believes this meant 

the Certicom patents, although I don't think they ever confirmed it.

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Walter H.
Sent: Sunday, August 10, 2014 02:39
To: openssl-users@openssl.org
Cc: Dr. Stephen Henson
Subject: ECDSA Certificate

 

On 08.08.2014 02:11, Dr. Stephen Henson wrote: 

 

Well maybe, maybe not. Just because a ciphersuite is included in the
cipherlist doesn't mean it is included or could be selected. For example if
you set a ciphersuite which uses ECDSA authentication it wont be selected if
the server doesn't include an ECDSA certificate.

can you please give an example of an ECDSA certificate, Thanks

I'm asking this, because
one Web-Server connects with
SSL_CIPHER=ECDHE-RSA-AES256-GCM-SHA384
and one with
SSL_CIPHER=DHE-RSA-AES256-GCM-SHA384
both with the same client;

and both Web-Server (Apache) have this
SSLCipherSuite RC4-SHA:RC4-MD5:HIGH:MEDIUM:!ADH:!DSS:!SSLv2:+3DES



-- 
Greetings,
Walter
 


RE: Help diagnosing SSL connection problem needed

2014-08-07 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Kyle Hamilton
 Sent: Thursday, August 07, 2014 16:48

 Your client is saying that it's failing the certificate verification of
 the server certificate.  It's probably not using the CAfile that you
 passed to openssl s_client.
 
 -Kyle H
 
 On 8/5/2014 12:19 PM, Ted Byers wrote:
  I have Perl code, which uses a library that in turn uses openssl for
  HTTPS connections.  I have been trying to use Wireshark to diagnose
  this, but I have yet to find a way to have it tell me what steps in
  the SSL handshaking are happening at a given time (client hello,
  server hello, c.).  Thus, I am having trouble seeing whether the
  problem is in my client not doing something right or the server not
  doing something right.  I have not yet figured out how to have it
  export everything in a capture file in plain text so that I could
  copy/paste it in a note like this so you could see for yourself what
  is happening.
 
About Wireshark: first make sure you have only the desired packets 
displayed: filter the display unless you previously filtered the capture 
or you were (very) lucky and nothing else happened during the capture.

For everything, which is a lot (usually too much), File / Print: 
plain text, to file (and specify filename), range: All packets / Displayed, 
format: summary on, details on (all expanded), bytes off.

For just which handshake steps have occurred, same except details off.
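
Roughly the same can be done non-interactively with tshark, something along these
lines (option and filter names vary across Wireshark versions, so treat this as a
sketch only):

  tshark -r capture.pcap -Y ssl.handshake -O ssl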

  I did get openssl s_client to connect properly, and here is the output
  from that (sanitized of the server operator's ID):
 
  ted@linux-jp04:~/Work/Projects/FirstData openssl s_client -CAfile
  server-test.pem -cert client_test.pem -key client_test.key -connect
  n.n.n.n:8443
snip except
  Server certificate
  -----BEGIN CERTIFICATE-----
  DELETED
  -----END CERTIFICATE-----
  subject=/C=LV/O=FDL/CN=lv-rtps-proxy-test.ne.1dc.com

  Now, here is the output I get from my Perl client (also sanitized):
 
  $url = https://n.n.n.n:8443/
  $scheme = https
  $self-{ssl_set} = 0
  $self-{ca_cert_dir} = .
  $self-{ca_cert_file} = server-test.pem
  $LWP::VERSION = 6.05
  Setting cert dir and file if available
  $self-{ssl_set} = 1

Are you setting the client keycert/chain? This doesn't indicate it.
The s_client command did provide them, and the server did request 
a client cert; if the server *requires* client cert and client doesn't 
provide one, the server will normally reject the connection.

  DEBUG: .../IO/Socket/SSL.pm:2503: new ctx 26349088
  DEBUG: .../IO/Socket/SSL.pm:526: socket not yet connected
  DEBUG: .../IO/Socket/SSL.pm:528: socket connected
  DEBUG: .../IO/Socket/SSL.pm:550: ssl handshake not started
  DEBUG: .../IO/Socket/SSL.pm:586: not using SNI because hostname is
 unknown
  DEBUG: .../IO/Socket/SSL.pm:634: set socket to non-blocking to enforce
  timeout=180
  DEBUG: .../IO/Socket/SSL.pm:647: Net::SSLeay::connect - -1
  DEBUG: .../IO/Socket/SSL.pm:657: ssl handshake in progress
  DEBUG: .../IO/Socket/SSL.pm:667: waiting for fd to become ready: SSL
  wants a read first

This appears to be sent CHello, nonblocking expect SHello,Cert...SDone

  DEBUG: .../IO/Socket/SSL.pm:687: socket ready, retrying connect
  DEBUG: .../IO/Socket/SSL.pm:647: Net::SSLeay::connect - -1
  DEBUG: .../IO/Socket/SSL.pm:657: ssl handshake in progress
  DEBUG: .../IO/Socket/SSL.pm:667: waiting for fd to become ready: SSL
  wants a read first
  DEBUG: .../IO/Socket/SSL.pm:687: socket ready, retrying connect
  DEBUG: .../IO/Socket/SSL.pm:647: Net::SSLeay::connect - -1
  DEBUG: .../IO/Socket/SSL.pm:657: ssl handshake in progress
  DEBUG: .../IO/Socket/SSL.pm:667: waiting for fd to become ready: SSL
  wants a read first

These two look like partial reads therefore expecting more

  DEBUG: .../IO/Socket/SSL.pm:687: socket ready, retrying connect
  DEBUG: .../IO/Socket/SSL.pm:2384: ok=1 cert=26317968
  DEBUG: .../IO/Socket/SSL.pm:2384: ok=1 cert=26323136

Those look like verify-callback; verify of the cert looks okay.

  DEBUG: .../IO/Socket/SSL.pm:1539: scheme=www cert=26323136
  DEBUG: .../IO/Socket/SSL.pm:1549: identity=n.n.n.n
  cn=lv-rtps-proxy-test.ne.1dc.com alt=

Those look like hostname check, which openssl doesn't do yet 
(1.0.2 will) but your perl library probably adds. If you used an IP form 
URL as indicated above https://a.b.c.d:8443/ and the cert has only 
domainname as is common and indicated here, then this should fail.
It is a requirement of HTTPS (and some other *S protocols but not 
SSL/TLS by itself) that the authority field in the URL matches *a* 
name in the cert. That doesn't necessarily mean only one name;
the/a cert name can be a wildcard that matches multiple hostnames 
but yours isn't; there can be multiple names in the cert using Subject 
Alternative Names, but I take alt= here to mean yours doesn't.
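
A quick way to see exactly which name(s) a cert does carry (the file name is a
placeholder):

  openssl x509 -in servercert.pem -noout -subject -text | grep -A1 'Subject Alternative Name'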

  DEBUG: .../IO/Socket/SSL.pm:647: Net::SSLeay::connect - -1
  DEBUG: .../IO/Socket/SSL.pm:1757: SSL 

RE: Query on X509 certificate validation- EVP_VerifyUpdate EVP_VerifyFinal

2014-08-07 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Viktor Dukhovni
 Sent: Monday, August 04, 2014 11:21

 On Mon, Aug 04, 2014 at 05:43:47AM +, Mitra, Rituparna (STSD) wrote:
 
  1.   app1: sends a CGI POST request to app2 ? the POST request has
the
 UN (username). 
 
  2.   app2: does a CGI GET to receive the UN within app1?s POST
request.
 
  3.   app2: has app1?s x509 certificate already stored, since it has
to allow
 SSO from app1 ? gets verification ctx from here.
 
  4.   app2: uses the UN (containing ! character) to form a hashdata,
 
  5.   app2: passes hashdata to EVP_VerifyUpdate(ctx, .. )
 
If you mean app2 hashes UN and passes that hash to VerifyUpdate, that's
wrong.
If you mean it passes the data *to be hashed*, that's good.

EVP_Verify{Init,Update,Final} does the hash of the data as part of verifying
a signature 
just as EVP_Sign{Init,Update,Final} does the hash of the data to be signed.
In fact {Sign,Verify}{Init,Update} are just macros for Digest{Init,Update},
the PK operations are done only in Final.
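
A sketch of the old-style call sequence (error checks omitted; data, sig and pkey
are placeholders):

  EVP_MD_CTX ctx;
  EVP_MD_CTX_init(&ctx);
  EVP_VerifyInit(&ctx, EVP_sha256());
  EVP_VerifyUpdate(&ctx, data, datalen);             /* pass the data itself, not a hash of it */
  int ok = EVP_VerifyFinal(&ctx, sig, siglen, pkey); /* 1 = signature verifies */
  EVP_MD_CTX_cleanup(&ctx);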

  6.   app2: calls EVP_VerifyFinal -- this eventually fails during
public key
 check (EVP_PKEY_verify), due to the ! character in UN
 
snip broader points




RE: found half of it: EC key gen

2014-08-07 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of dave
 Sent: Monday, August 04, 2014 15:50

 I have it that the elliptic multiply is not standard.  So I have been
 skip tracing though the code.
 It starts with ec_key.c, with   EC_KEY_generate_key.  This grabs the
 group or or the particular curves prime field size.  It then uses this

No, it uses the order of the generator or equivalently the subgroup 
generated by the generator and used for operations. For a curve
over a prime field Zp the subgroup order is either slightly less than p
or slightly less than p divided by a small integer called the cofactor 
(small meaning usually 2 or 4). For a curve over a binary (m-bit) field 
the order is somewhat less than 2^m, or that divided by a small cofactor.

 as the range for   bn_rand_range.  This is in bn_rand.c.  In that it
 uses the first half which is bnrand.  That grabs the time and shifts it
 around to start the process.  Since the order or range is a large number

It logically adds the current time (assuming available) to the entropy pool.
Adding entropy is done by mixing bits in a fashion that should depend on 
both/all inputs in a complicated way, but I haven't looked recently. Using 
*only* the current time to seed random generation would not be secure,
and is a common mistake by inexperienced people.

 in hex it looks like the output of the private key is also in hex.

The private key is a large (integer) number. There are many ways of 
representing integers. Hex is a common way of representing large integers,
because it can easily be broken up into, or formed from, 8-bit bytes or
other 
power-of-2 size units that are common on modern computers. In particular 
when an EC private key is stored in the standard ASN.1 format defined in
X9.62 
and used in among others PKCS#8 and PKCS#12, the privatekey is stored as 
ASN.1 integer which some tools including openssl asn1parse show in hex.

 After that the generate key does the point multiply to make the public.
   Is there some other variable used here that I am missing?
 
Doesn't sound like it.




RE: found half of it

2014-08-01 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of dave paxton
 Sent: Thursday, July 31, 2014 20:12

   In looking at this today I found what the new ec key is doing.  It
 does a BN_rand_range operation.  That does have the rand.h include.  It
 looks like it is using from the random area pseudorand, pseudo,
 RAND_pseudo_bytes and RAND_bytes.  So I guess it is a matter of putting
 this together with the various rand subroutines to get a handle on a
 logical flow chart.
 
I don't understand what your question is, but generating an EC keypair
consists of
- choosing a random number from 1 to the order of the curve (actually
subgroup) = private
- computing the point which is the multiplication of the generator and
private = public
(Note elliptic curve multiplication is very different from ordinary
multiplication.)

This is exactly what the EC_KEY_generate_key routine does.
It uses the random number module to generate the random number,
and the EC point computation routines to do the point computation.
The numbers for EC cryptography are bigger than fit in a computer word
so the BN (bignum) routines are used for computations on those numbers.
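
A sketch of the same thing through the public API (error checks omitted):

  EC_KEY *key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
  EC_KEY_generate_key(key);     /* random private scalar, then public = private * generator */
  const BIGNUM   *priv = EC_KEY_get0_private_key(key);
  const EC_POINT *pub  = EC_KEY_get0_public_key(key);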




RE: Use of parity bits on DES

2014-08-01 Thread Dave Thompson
If by heavy bit you mean the most significant bit, that's backwards.

DES (and 3DES) keys put the parity bits in the least significant bit.

 

The low-level DES_* API in OpenSSL has options to set a key with 

checking for parity and weak and semi-weak keys, or without,

and also routines to check or set parity by itself. But the normal API 

by default does not check, and that's probably what you're using.

 

In that case, yes, you get the same results regardless of parity.

 

$ od -tx1 y1

000 54 45 53 54 44 41 54 41

010

$ (path)/openssl version

OpenSSL 1.0.1h 5 Jun 2014

# odd parity

$ (path)/openssl des-ecb -K 0123456789ABCDEF -nopad y1 | od -tx1

000 55 51 bb 7e 93 c0 d9 03

010

# wrong parity

$ (path)/openssl des-ecb -K 0022446688AACCEE -nopad y1 | od -tx1

000 55 51 bb 7e 93 c0 d9 03

010

 

(Older versions give the same results, but in some older versions 

the enc utility with -K requires -iv even for ECB, which was silly.)
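
For completeness, the low-level DES_* calls mentioned above look roughly like this
(a sketch, error handling omitted; the key bytes are just an example):

  DES_cblock key = {0x01,0x23,0x45,0x67,0x89,0xAB,0xCD,0xEF};
  DES_key_schedule ks;
  DES_set_key_unchecked(&key, &ks);           /* ignores parity, accepts weak keys */
  if (DES_set_key_checked(&key, &ks) != 0)
      /* -1 = wrong parity, -2 = weak key: reject */;
  DES_set_odd_parity(&key);                   /* force odd parity in the low bit of each byte */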

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Laurent Broussy
Sent: Thursday, July 31, 2014 07:41
To: openssl-users@openssl.org
Subject: Use of parity bits on DES

 

Hi,

 

As described in FIPS 46-3, a DES key must have its heavy bit as the parity
bit. I tried to encipher the same message using openssl with a key without
correct parity bits and with that key after I set the correct parity bits.
I obtained two different enciphered messages. My questions are:

 

1) Is it normal that OpenSSL can use a DES key with
incorrect parity bits?

 

2) Why is the result with the two different keys not the same
(normally only 56 bits are used, and they are the same in the two keys)?

 

Thank you for your response.

 

Regards.

 

L. Broussy



RE: SSL connection broken after upgrading from 0.9.8a to 1.0.1e version of openssl

2014-08-01 Thread Dave Thompson
This almost certainly belongs in -users only, but if I restrict the reply it
looks unanswered.

 From: owner-openssl-us...@openssl.org On Behalf Of Nayna Jain
 Sent: Thursday, July 31, 2014 17:37

 We got one of our openssl version  upgraded to openssl 1.0.1e version.
 But after that I am facing this error at client side.
 
 error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
 
 But I am not sure why is it giving wrong version number as both client and
 server has SSLv3 connection.  Below are the details:
 
Client is 0.9.8a and calls SSLv3_method()   for ivSMethod()
Server is upgraded to 1.0.1e and calls SSLv3_method() for ivSMethod()
Client when tries to connect to server , I get the error
 error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Logically I thought, it will work as both are SSLv3 and nothing changed
there, but still it fails with wrong version number ..
When I tried using openssl s_client it fails as below with similar
error
message
 testsystem:~ # openssl s_client -connect ip:port -msg
 CONNECTED(0003)
  SSL 2.0 [length 008f], CLIENT-HELLO
snip
0.9.8 s_client by default sends SSLv2 hello, as this shows.
Either use 1.0.0 or higher s_client, or use s_client -ssl3.
Or at least s_client -no_ssl2.

 Can someone help to debug this please ? There is no more further
 information could be traced on why it failed. If someone have idea on
 debugging tools for tracking openssl connection, do let me know.

See above. 

Does the server start immediately in SSL, or does it require any kind of
STARTTLS?
If the latter, s_client supports a few forms of STARTTLS but not all, and
only if 
you specify which one explicitly. Otherwise you'll need a custom program.

If neither of those helps, the usual best debugging method, if you have
access 
on at least one end system or another system on the same network segment 
(typically LAN hub, but may vary greatly depending on your network hardware)
is to run a network capture like Wireshark (for Windows or MacOSX or Linux),
tcpdump (for most Unix), etc. and look at it. 





RE: Adding client peer verification to my server

2014-07-28 Thread Dave Thompson
Did you successfully load the root cert into the SERVER truststore?

 

The requirements are not quite symmetric:

 

Almost always (except for anon and non-PK):

server MUST set privatekey and matching cert, and preferably any chain
cert(s) (you have none) 

client MUST set truststore containing root FOR SERVER, and any chain cert(s)
server does NOT send

 

For client auth:

client MUST set privatekey and matching cert, and preferably any chain
cert(s) (you have none)

server MUST set truststore containing root FOR CLIENT, and any chain cert(s)
client does NOT send

server MAY set client-ca-list

 

Note that setting client-ca-list does not set truststore, and setting
truststore does not set client-ca-list.

Although they usually should be the same (you should request the certs you
will trust) they are separate.

s_server partially conceals this because it uses -CAfile both to load
truststore and to set client-ca-list.
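
A sketch of doing both explicitly in server code (the file name is a placeholder,
error handling omitted):

  /* truststore: used to actually verify the client's cert chain */
  SSL_CTX_load_verify_locations(ctx, "root.pem", NULL);
  /* client-CA list: the names advertised in the CertificateRequest message */
  SSL_CTX_set_client_CA_list(ctx, SSL_load_client_CA_file("root.pem"));
  SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT, NULL);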

 

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Marco Bambini
Sent: Sunday, July 27, 2014 13:33
To: openssl-users@openssl.org
Subject: Re: Adding client peer verification to my server

 

Hello,

thanks to your help I made some progress (I also removed the intermediate
CA file).

 

Using the command line:

openssl s_client -connect localhost:4430 -cert /Users/test/client.pem -state

snip
4722:error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown
ca:/SourceCache/OpenSSL098/OpenSSL098-50/src/ssl/s3_pkt.c:1106:SSL alert
number 48
4722:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake
failure:/SourceCache/OpenSSL098/OpenSSL098-50/src/ssl/s23_lib.c:182:

so it seems that the real error is unknown CA.

 

Please note that on server side I successfully call:

list = SSL_load_client_CA_file(root_certificate);
if (list != NULL) SSL_CTX_set_client_CA_list(CTX, list);

 

In my opinion the real issue is the way certificate files are generated
starting from root CA certificates.

Here you go the up to date command line I use to generate root.pem,
server.pem, client.pem:

snip

 



RE: Program to convert private key from pem to der format

2014-07-28 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Viktor Dukhovni
 Sent: Thursday, July 24, 2014 14:18

 On Thu, Jul 24, 2014 at 08:07:01AM -0700, phildoch wrote:
 
  The key format needed by the system is algorithm-specific DER format.
 
 I am not aware of any standard formats for keys other than PKCS#8
 or PKCS#12.  In particular, the algorithm-specific PEM encodings
 output by openssl rsa|ec are I believe non-standard, and their
 DER forms are even more so supported on an ad-hoc basis.
 
RSA is PKCS1, and EC is X9.62 (according to SECG1 and rfc3279).

PKCS8 only defines the wrapper, basically AlgId and optional PKCS5 PBE.
It leaves the chocolatey inside to other standards, including 
but not limited to those above.

PKCS12 defines an outer wrapper and uses PKCS8 for privatekey bag.

AFAICS the more flexible PKCS12 and to some extent PKCS8 are 
indeed more widely used/supported.

 Note that the pkey(1) utility included with OpenSSL 1.0.0, reads
 any of the various ad-hoc formats in either PEM or DER encoding,
 but outputs PKCS#8.  Thus:
 
   openssl ec -outform DER | openssl pkey -inform DER
 
 is not an identity transformation, as can be seen by looking at
 the ASN.1 with asn1parse(1).
 
(And even in 0.9 pkcs8 -topk8 could do that, but the OP is 
obviously on 1.0.0 or higher.)
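
For concreteness, a sketch of the two output forms being discussed (file names are
placeholders):

  openssl ec   -in eckey.pem -outform DER -out eckey.der      # algorithm-specific (SEC1/X9.62)
  openssl pkey -in eckey.pem -outform DER -out eckey-p8.der   # unencrypted PKCS#8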

  It is received from the user in the same algorithm-specific in PEM
format.
  The algorithm can be:
 
  1) secp384r1  (i.e. created by openssl ecparam -out ec_key.pem -name
  secp384r1 -genkey)
 
 This outputs an ad-hoc algorithm-specific PEM encoding.
 
  2) rsa:2048(i.e. created by openssl genrsa -out rsa2048_key.pem
2048)
  3) rsa:4096(i.e. created by openssl genrsa -out rsa4096_key.pem
4096)
 
 As do these.  What software could possibly want to consume these in
 DER encoding, rather than as a DER-encoded PKCS#8 object?
 
Any that uses the d2i_{alg}PrivateKey routine(s) which handle 
only the matching alg-specific format, unlike d2i_PrivateKey which 
handles all algorithms using a really ugly guessing game.

In contrast PEM_read_{alg,PKCS8,}PrivateKey can all recognize 
and parse both PKCS8 and alg-spec formats from the BEGIN line; 
the only difference is that {RSA,DSA,EC}PrivateKey then gives an 
error if the key is not the correct algorithm (in either format).

  I tried to create a program based on the code of the command openssl
 pkey
  -in key.pem -outform DER -out keyout.der in file /apps/pkey.c in
openssl
  project.
 
 This reads any of the various legacy formats and outputs DER-encoded
 PKCS#8.
 
(In 1.0.0+. In 0.9.8 i2d_PrivateKey, or PEM_write_PrivateKey, 
dispatches to the various alg-specific formats; only the PKCS8PrivateKey or 
PKCS8_PRIV_KEY_INFO versions do PKCS8.)





RE: Adding client peer verification to my server

2014-07-28 Thread Dave Thompson
It's a good idea for server to set client-CA list, but not required. If it
isn't set,

libssl server will send CertReq with an empty list, which the RFCs permit,

and the browsers I have to hand (IE9, FF31, Chrome36.something) all handle.

The OP's problem is more likely on the client side.

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Michael Wojcik
Sent: Friday, July 25, 2014 12:44
To: openssl-users@openssl.org
Subject: [SPAM?] RE: Adding client peer verification to my server

 

Unless I've overlooked it, you don't appear to be calling
SSL_CTX_set_client_CA_list or SSL_CTX_add_client_CA anywhere.

 

When an SSL/TLS server wants to request a peer certificate, it has to send a
list of the CAs it recognizes to the client, so the client knows which
certificate to send. (The client may have a number of certificates, issued
by various CAs; for example, the client might be a browser running on behalf
of a user who has an internally-issued company certificate and a personal
certificate issued by a well-known commercial CA.)

 

The simplest API to set that up in OpenSSL is SSL_load_client_CA_file:

 

SSL_CTX_set_client_CA_list(CTX,
SSL_load_client_CA_file("/path/to/CAcerts.pem"));

 

(or with, you know, error handling, if you want to be fancy). See
http://www.openssl.org/docs/ssl/SSL_load_client_CA_file.html.

 

Michael Wojcik 
Technology Specialist, Micro Focus 

 

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Marco Bambini
Sent: Friday, 25 July, 2014 03:36
To: openssl-users@openssl.org
Subject: Adding client peer verification to my server

 

Hello,

I am adding client peer verification to my own server but I continue to
receive an error:

SSL3_GET_CLIENT_CERTIFICATE:no certificate
returned:/SourceCache/OpenSSL098/OpenSSL098-50/src/ssl/s3_srvr.c:2631:

 

Here you go some relevant code:

 

SERVER:

ssl_initialize called at startup

 

int ssl_initialize (void)

{

  SSL_CTX *CTX = NULL;

  char ssl_certificate[MAXPATH];

  char root_certificate[MAXPATH];

  inti, size;

  

  // initialize SSL crap

  SSL_library_init();

  SSL_load_error_strings();

  

  // allocate CTX opaque datatype

  if ((CTX = SSL_CTX_new(SSLv23_server_method())) == NULL)

  goto initialize_ssl_abort;

  

  // check if a root CA file is present

  if (get_path_to_root_ca_file(root_certificate)) {

  settings.ssl_verify_peer = kTRUE;

  if (SSL_CTX_load_verify_locations(CTX,
root_certificate, NULL) != 1) {

  goto initialize_ssl_abort;

  }

  if (SSL_CTX_set_default_verify_paths(CTX) !=
1) {

  goto initialize_ssl_abort;

  }

  }

  

  // try to set up SSL certificate

  if (get_path_to_ssl_certificate(ssl_certificate)) {

  if (SSL_CTX_use_certificate_file(CTX,
ssl_certificate, SSL_FILETYPE_PEM) == 0) {

  goto initialize_ssl_abort;

  }

  else if (CTX != NULL &&
SSL_CTX_use_PrivateKey_file(CTX, ssl_certificate, SSL_FILETYPE_PEM) == 0) {

  goto initialize_ssl_abort;

  }

  } else {

  log_system("SSL certificate not found. SSL
server could refuse connections from clients.");

  }

  

  // try to set up SSL chain file

  if (get_path_to_ssl_chain_file(ssl_certificate)) {

  if (SSL_CTX_use_certificate_chain_file(CTX,
ssl_certificate) == 0) {

  goto initialize_ssl_abort;

  }

  }

  

  if (settings.ssl_verify_peer) {

  log_system("SSL peer verification
activated.");

  SSL_CTX_set_verify(CTX,
SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT, verify_callback);

  SSL_CTX_set_verify_depth(CTX, 4);

  }

 

  // initialize locking callbacks, needed for thread safety.

  //  http://www.openssl.org/support/faq.html#PROG1

  size = sizeof(pthread_mutex_t) * CRYPTO_num_locks();

  if ((settings.ssl_mutexes = (pthread_mutex_t *)
cubesql_malloc((size_t)size)) == NULL) {

  

RE: Openssl SSL3_GET_RECORD:block cipher pad is wrong: on Ubuntu

2014-07-23 Thread Dave Thompson
Then there’s two approaches and you can try either or both:

 

- get someone who can look at the Debian/Ubuntu version, which clearly differs 

from upstream. Maybe the Debian and/or Ubuntu packagers can help you. Maybe 

some other developer (though none has stepped forward here). Maybe you can 

get source and work on it yourself.

 

- get “standard” source from www.openssl.org and build it yourself. (DON’T

overwrite the package-managed version if at all possible; put yours somewhere 
else 

like your home dir or /var/mystuff. The config or Configure script has options 
for this.)

If you can reproduce the problem with a standard version, this list and/or the 

OpenSSL devs can help. If the problem occurs only in the Debian/Ubuntu version,

then you need someone who can look there specifically.  Which isn’t me, sorry.

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of davidsnt
Sent: Tuesday, July 22, 2014 05:59
To: openssl-users
Subject: Re: Openssl SSL3_GET_RECORD:block cipher pad is wrong

 

Hello Dave,

Thank you for your response, yes I am using Ubuntu 12.0 and recently did a 
ubuntu openssl package upgrade and got ubuntu 1.0.1-4ubuntu5.14 installed

OpenSSL 1.0.1 14 Mar 2012
built on: Fri Jun 20 18:54:15 UTC 2014
platform: debian-amd64

As you pointed yes the server preference is set on the origin side.




--David

 

On Tue, Jul 22, 2014 at 9:17 AM, Dave Thompson dthomp...@prinpay.com wrote:

You can’t be running 1.0.1 as released; it doesn’t have 
BLOCK_CIPHER_PAD_IS_WRONG 

in s3_pkt at all (instead in s3_enc and t1_enc) and doesn’t have 
UNKNOWN_ALERT_TYPE 

at that line number. BLOCK_CIPHER_PAD is at 419 in 1.0.1e through g, and 

UNKNOWN_ALERT_TYPE shortly before (but not at) 1270 in 1.0.1 (original) through 
g.

 

Google finds https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=742152 

reporting (in March through May) 1408F081 and some other “shouldn’t happen” 
errors
but without source/line#s, against several Debian-patched builds of 1.0.1e. 
Are you using a Debian or Debian-derived build? If not, did you build it 
yourself,
and how, or who did?
 
Also BTW: with HIGH (and nothing else added) !MD5 and !EXP are redundant.
And moving to end exactly one of the several dozen (new) SHA2 suites 
doesn’t make particular sense. (+3DES makes some sense, because on 
many CPUs now 3DES is slower than AES and possibly less secure.
Although this makes a difference only if server preference is set.)
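
An easy way to sanity-check a cipher string is to expand it and look at the resulting
list and order, e.g.:

  openssl ciphers -v 'HIGH:!aNULL:!RC4:+3DES'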

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of davidsnt
Sent: Monday, July 21, 2014 07:03
To: openssl-users
Subject: Openssl SSL3_GET_RECORD:block cipher pad is wrong

 

Hi,

I recently changed my cipher ordering on my web server to drop RC4 support and 
currently I have  
HIGH:!RC4:!MD5:!aNULL:!EDH:!EXP:+ECDHE-RSA-AES128-SHA256:+3DES on my Origin.

On the other side my proxy load balancer which acts as the reverse proxy 
supports the following cipher suites RC4:HIGH:!aNULL:!MD5

 

Both the origin server and proxy runs the same openssl version

OpenSSL 1.0.1 14 Mar 2012

I see the following errors on my origin server logs from when I changed the 
cipher suite to HIGH:!RC4:!MD5:!aNULL:!EDH:!EXP:+ECDHE-RSA-AES128-SHA256:+3DES 


07/16 08:29:23.712888 ssl_support.c:158 ssl[31473] ERR 
(76:accept:[xxx.xxx.xxx.xx]:60004:443): OpenSSL Error 336130177 in s3_pkt.c:410 
is 'error:1408F081:SSL routines:SSL3_GET_RECORD:block cipher pad is wrong' 


07/16 13:06:51.721824 ssl_support.c:158 ssl[16812] ERR 
(105:accept:[xxx.xxx.xxx.xx]:44048:443): OpenSSL Error 336150774 in 
s3_pkt.c:1270 is 'error:140940F6:SSL routines:SSL3_READ_BYTES:unknown alert 
type' 

I couldn't find why these errors are triggered, can you please help me with some 
information on the errors and let me know the best way to fix it.



--David

 



RE: Openssl SSL3_GET_RECORD:block cipher pad is wrong

2014-07-21 Thread Dave Thompson
You can’t be running 1.0.1 as released; it doesn’t have 
BLOCK_CIPHER_PAD_IS_WRONG 

in s3_pkt at all (instead in s3_enc and t1_enc) and doesn’t have 
UNKNOWN_ALERT_TYPE 

at that line number. BLOCK_CIPHER_PAD is at 419 in 1.0.1e through g, and 

UNKNOWN_ALERT_TYPE shortly before (but not at) 1270 in 1.0.1 (original) through 
g.

 

Google finds https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=742152 

reporting (in March through May) 1408F081 and some other “shouldn’t happen” 
errors
but without source/line#s, against several Debian-patched builds of 1.0.1e. 
Are you using a Debian or Debian-derived build? If not, did you build it 
yourself,
and how, or who did?
 
Also BTW: with HIGH (and nothing else added) !MD5 and !EXP are redundant.
And moving to end exactly one of the several dozen (new) SHA2 suites 
doesn’t make particular sense. (+3DES makes some sense, because on 
many CPUs now 3DES is slower than AES and possibly less secure.
Although this makes a difference only if server preference is set.)

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of davidsnt
Sent: Monday, July 21, 2014 07:03
To: openssl-users
Subject: Openssl SSL3_GET_RECORD:block cipher pad is wrong

 

Hi,

I recently changed my cipher ordering on my web server to drop RC4 support and 
currently I have  
HIGH:!RC4:!MD5:!aNULL:!EDH:!EXP:+ECDHE-RSA-AES128-SHA256:+3DES on my Origin.

On the other side my proxy load balancer which acts as the reverse proxy 
supports the following cipher suites RC4:HIGH:!aNULL:!MD5

 

Both the origin server and proxy runs the same openssl version

OpenSSL 1.0.1 14 Mar 2012

I see the following errors on my origin server logs from when I changed the 
cipher suite to HIGH:!RC4:!MD5:!aNULL:!EDH:!EXP:+ECDHE-RSA-AES128-SHA256:+3DES 


07/16 08:29:23.712888 ssl_support.c:158 ssl[31473] ERR 
(76:accept:[xxx.xxx.xxx.xx]:60004:443): OpenSSL Error 336130177 in s3_pkt.c:410 
is 'error:1408F081:SSL routines:SSL3_GET_RECORD:block cipher pad is wrong' 


07/16 13:06:51.721824 ssl_support.c:158 ssl[16812] ERR 
(105:accept:[xxx.xxx.xxx.xx]:44048:443): OpenSSL Error 336150774 in 
s3_pkt.c:1270 is 'error:140940F6:SSL routines:SSL3_READ_BYTES:unknown alert 
type' 

I couldn't find why these errors are triggered, can you please help me with some 
information on the errors and let me know the best way to fix it.



--David



RE: [SPAM?] x509v3 Extension: X509v3 Name Constraints?

2014-07-18 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Walter H.
 Sent: Thursday, July 17, 2014 13:58

 does anybody know what to write in the extension config to get this
 X509v3 Name Constraints as the attached certificate (intel-ca.pem,
 intel-ca.text)?
 
http://www.openssl.org/docs/apps/x509v3_config.html#Name_Constraints

(or man x509v3_config on a Unix with-doc installation)
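
A sketch of what such an extension section can look like (section and domain names
are placeholders; see the page above for the full syntax):

  [ v3_constrained_ca ]
  basicConstraints = critical,CA:TRUE
  nameConstraints  = critical,permitted;DNS:example.com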




Re: Certificate problem - SOLVED

2014-07-10 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Jeffrey Walton
 Sent: Tuesday, July 08, 2014 20:33

 On Tue, Jul 8, 2014 at 7:00 PM, Dave Thompson dthomp...@prinpay.com
 wrote:
  From: owner-openssl-us...@openssl.org On Behalf Of Jeffrey Walton
  Sent: Tuesday, July 08, 2014 16:20
  ...
  Not sure if this is any consolation, but countryName is a
  DirectoryString, and PrintableString is OK per RFC 5280
  (http://tools.ietf.org/html/rfc5280#section-4.1.2.6):
 
  Actually it's not. 4.1.2.4 Issuer says Name.RDN.AVA values are
  'generally' DirectoryString, but see appendix A on p115:
  countryName is PrintableString size(2), presumably because its
  allowed values are from ISO 3166 which in turn uses ASCII letters.
 So countryName is not PrintableString?
 
countryName IS PrintableString. countryName is specified as 
exactly PrintableString, unlike other fields which are specified as 
DirectoryString where DirectoryString is CHOICE that includes 
PrintableString as one option so those fields MAY BE PrintableString.




Re: Certificate problem - SOLVED

2014-07-08 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Jeffrey Walton
 Sent: Tuesday, July 08, 2014 16:20

 On Tue, Jul 8, 2014 at 3:39 PM, Barbe, Charles
 charles.ba...@allworx.com wrote:
  I figured it out and am now wondering if there is a defect in the openssl
 verify command. This suggestion from Dave Thompson:
  I would first try x509 -noout -subject|issuer -nameopt multiline,show_type
  and see if that helps.
  Pointed me in the right direction. What i found was that Issuer for
 certificate A, which was the one that was NOT working, looked like this:
  [cbarbe@localhost foropensslusers]$  openssl x509 -noout -issuer -
 nameopt multiline,show_type -in CertA.pem
  issuer=
  countryName   = UTF8STRING:US
snip
  While the issuer for certificate B and subject for my CA looked like this:
  [cbarbe@localhost foropensslusers]$ openssl x509 -noout -issuer -
 nameopt multiline,show_type -in CertB.pem
  issuer=
  countryName   = PRINTABLESTRING:US
snip
  So it looks like openssl verify is not taking the type of countryName into
 account while the browsers are. Is this 
expected behavior or a defect?
 
 Not sure if this is any consolation, but countryName is a
 DirectoryString, and PrintableString is OK per RFC 5280
 (http://tools.ietf.org/html/rfc5280#section-4.1.2.6):

Actually it's not. 4.1.2.4 Issuer says Name.RDN.AVA values are 
'generally' DirectoryString, but see appendix A on p115:
countryName is PrintableString size(2), presumably because its 
allowed values are from ISO 3166 which in turn uses ASCII letters. 

Similarly dnQualifier is PrintableString and emailAddress is IA5String.

 
DirectoryString ::= CHOICE {
  teletexString   TeletexString (SIZE (1..MAX)),
  printableString PrintableString (SIZE (1..MAX)),
  universalString UniversalString (SIZE (1..MAX)),
  utf8String  UTF8String (SIZE (1..MAX)),
  bmpString   BMPString (SIZE (1..MAX)) }
 
 However, there is the following on page 23:
 
When encoding attribute values of type DirectoryString, conforming
CAs MUST use PrintableString or UTF8String encoding, with the
following exceptions:
 
   (a)  When the subject of the certificate is a CA, the subject
field MUST be encoded in the same way as it is encoded in the
issuer field (Section 4.1.2.4) in all certificates issued by
the subject CA.  Thus, if the subject CA encodes attributes
in the issuer fields of certificates that it issues using the
TeletexString, BMPString, or UniversalString encodings, then
the subject field of certificates issued to that CA MUST use
the same encoding.
 
 So the DirectoryString must be the same type. You can't make it
 utf8String in the server certificate's issuer and PrintableString in
 the CA's subject.
 
4.1.2.4 says name matching should use the rules in 7.1, which allow 
'insignificant' variations in the string values, and doesn't say anything 
specific I can find about the encoding.

I'm not sure if X.509/X.has quite the same rules, or if CA's have 
historically done so (for certs that might still be usable).

OpenSSL is generally pretty Postelian in accepting slightly 'broken' 
protocols and data to maximize interoperability. But 1.0.2 is slated 
to enhance chain validation, and checks like this might fit in there 
better than in the past 'flag bits that always run out' approach.




RE: Certificate problem

2014-07-07 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Barbe, Charles
 Sent: Sunday, July 06, 2014 22:42

 I have the following certificates and associated private keys:
 
 A - certificate A generated with one version of my software not using
openssl
 B - certificate B generated with a new version of my software that does
use
 openssl
 CA - a local certificate authority whose private key is used to sign both
A and
 B
 
 I can verify both A and B using openssl verify using CA as the cafile
argument.
 
 However, when I install CA on a client and try to connect a web browser to
 my server running the two different versions of software, they complain
that
 they cannot find the issuer with A but not with B.
 
 I have examined both certificates and cannot find anything different about
 them. As far as I can tell, the only difference is that B used openssl to
 generate the certificate and A used our own custom software. The odd thing
 to me is that openssl verify can verify both just fine. What are the web
 browsers doing different? I've tried chrome, Firefox and opera and all
 behave the same... Accepting B and rejecting A.
 
 Does anybody have any suggestions on where to look to figure this out? A
 tool to use?
 
You are installing in the correct place(s), which can be different per browser,
right?

The only thing that springs to mind that could be invisible is string types
and 
some options of the cert Issuer fields vs the CA Subject. RFC 5280 requires
a 
fairly complicated Unicode-aware comparison algorithm which I believe
openssl 
does (it definitely canonicalizes before comparison, but I haven't gone
through 
the canonicalization to make sure it exactly matches the RFC); browsers
might 
not do the same (perhaps indirectly) although I'd be surprised if NONE do. 

I would first try x509 -noout -subject|issuer -nameopt multiline,show_type 
and see if that helps.





RE: Certificate problem

2014-07-07 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Barbe, Charles
 Sent: Monday, July 07, 2014 21:59

 I will try an ASN.1 decoder tomorrow. Thanks for the suggestion!
 
 One thing I did try today was to have both servers generate their
certificates
 using the same private key. Theoretically I would expect the two certs to
 then be exactly the same to the bit... I am not providing any domain or ip
 specific fields just so that I can do this comparison and made sure all
other
 variable fields would be static. The only variable left should be my
signing

If these certs are (intended to be) for the same server(s), then the server 
identity (usually name, rarely IP) can be the same, but it should not be
omitted.
SSL clients are supposed to, and at least browsers do, enforce it.

Every cert should have a serial# which should be unique, and for real CA
certs 
normally isn't even serial, it's random. OpenSSL still normally does serial.
Did you do something to fake the serial, or did you ignore that (small)
difference?

 algorithm vs the one used my openssl's code. What I think I found was that
 the two certs were identical except for 4 bytes. There was a 0x05 and 0x00
 following two fields in the open ssl generated cert. Each occurrence of
these
 2 bytes was following the signature algorithm identifier (in two places I
 think). These 4 bytes were not in the non-open ssl cert. could this be my
 problem? Is there a significance to the 0x05 and 0x00? They seemed to be
 part of the enclosing structure that contained the signature alg id but
not part
 of the id itself. At least according to wireshark. Are they necessary
padding
 that I'm missing in my custom cert generation?
 
They aren't necessary. Yes, the AlgorithmIdentifier occurs in two places;
X.509 was designed in early days when there was great concern over 
the possibility of algorithm substitution attacks on publickey crypto as 
there historically had been on some symmetric crypto, so it occurs in the 
signed content (near the beginning) as well as after. 

The AlgorithmIdentifier is a general structure used in numerous places 
for numerous purposes. In some of these uses it includes 'parameters'
which basically specialize the algorithm. In this use, the parameters 
are not needed, and ASN.1 allows two ways of handling this: the parameters 
can be omitted entirely, or they can be encoded as an ASN.1 NULL, which 
is the bytes 05 00. A robust parser/verifier should accept both.
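
For example, the AlgorithmIdentifier for sha256WithRSAEncryption can legally be
encoded either way (a sketch, hex DER bytes):

  30 0d 06 09 2a 86 48 86 f7 0d 01 01 0b 05 00    (with the NULL parameters, as OpenSSL writes it)
  30 0b 06 09 2a 86 48 86 f7 0d 01 01 0b          (parameters omitted entirely)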

 Like I said earlier, I'll try to attach the certs tomorrow. I really
appreciate
 everybody's help!
 
FYI ASN.1 decode can also be done by openssl commandline 'asn1parse', 
not as flexibly as some but it's already right to hand.




RE: 2 Server certificates

2014-06-16 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of nicolas@free.fr
 Sent: Friday, June 13, 2014 06:15

 the fact is a server can only send a single certificate, however this one can 
 be
 signed by multiple CAs

Kind of. There's a difference between what we humans perceive as a CA 
(somebody that is trusted by some group of users for the purpose of 
validating applicants and issuing certs to them) and what the protocol sees as 
a CA -- always an issuing privatekey and one or more matching certs; usually 
some combination of CRL distribution points and/or OCSP responders; often 
some online-published policy(ies); and sometimes name or policy constraints 
in the cert(s). (Even if not in the cert there are practically always policies 
and 
constraints in the real world.) Most real CA organizations actually consist of 
several techincal CAs: usually one root CA with 2 or 3 or 10 subordinate CAs.

On a given handshake the server can send only one server cert, which is 
signed by exactly one issuing/parent CA (or is self-signed in which case 
effectively it is its own CA). But if that issuing CA is not a root, and for 
public/wellknown CAs it never is, it can obtain and publish chain certs 
signed by several different roots and/or higher-level CAs, especially 
for old and new generations or mergers etc. Then there exist multiple 
trust chains leading to (or from?) different roots/anchors, and 
different validators may use different chains depending on which 
roots/anchors each trusts and which intermediates it has or gets.

 on the other side, a client have (in general) a list of trusted CAs, not a 
 single
 one
 
yes. Usually (but not mandatorily) in the form of a list of trusted CA certs.

 so there are two options :
 - either each client knows the two CAs, then the server can send a certificate
 signed by any of them
 - or each client knows only about its own CA, then the server must send a
 certificate signed by both CAs
 (note that this is symmetrical, the server verify client certificate the same
 way)
 
when client auth aka client cert is used, which is rare and apparently not 
the case here, although the OPs posts have been rather unclear.

 I've never heard about a server with multiple certificates, at least not with
 SSL/TLS protocols...
 
It can and must when supporting certain different key-exchange methods.
In particular a cert for an RSA key can support plain-RSA and [EC]DHE-RSA 
key-exchange, but only a DSA cert can support DHE-DSS and only an ECC 
cert (perhaps restricted to signature) can support ECDHE-ECDSA. (There 
are also rules for static-[EC]DH but practically nobody uses those.) These 
certs usually (should) have the same subject name, but must have 
different subject keys and may well be issued by different CAs.
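
For example, the s_server test tool can be given a second cert/key pair precisely
for this purpose (a sketch; file names are placeholders):

  openssl s_server -accept 4433 -cert rsa.pem -key rsa.key -dcert ecdsa.pem -dkey ecdsa.key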

In addition, a single physical server can implement multiple virtual 
or logical servers by responding on multiple IP addresses and/or ports 
(the old way which always works) or using ServerNameIndication in the 
ClientHello (requires client support, but now pretty widespread).
These logical servers can have different certs, and often must because 
they usually are seen by the users/clients at different domain names 
(which map or forward to the one physical server) and the cert used 
must match the requested domain name -- although one cert *can* 
use wildcards or SubjectAlternativeNames to match multiple names.

 
 concerning the list of trusted CAs sent by the server to the client, it comes
 from the fact that a client can have multiple certificates, for different 
 servers
 that can use their own CA
 so it allows a client to choose the good certificate to send to a specific 
 server
 
Usually yes, although some clients notably including openssl commandline 
s_client are dumb and select client cert ignoring server's CertReq.

 concerning the server, if it's in public access it uses a certificate issued 
 by a
 well-known CA (for example one included in your browser)
 if it's private, it can use its own CA or even a self-signed certificate, 
 and the
 client has to recover the trusted certificates by itself (this happens the 
 first
 time you connect to a SSH server for which you have no certificate, or on
 some websites)
 
A more-or-less private server *can* use a private CA -- possibly its own,
possibly private but separate e.g. the corporate security department 
might run one CA that certifies machines at all 100 branch offices.
But a private server can get a public CA cert as long as you use domain(s) 
you control, and using that may be more convenient. (I don't think this 
disagrees with what you said, just spells it out explicitly.)

SSH doesn't use certificates at all; client learns the server *publickey* 
on first connection (which *should* be manually checked, but probably 
.0001% of users actually do) and remembers it thereafter. For SSH client 
auth there are several options only one of which uses publickey, and for 
that option the publickey must be explicitly 

Re: ECDSA - Signature verify

2014-06-12 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Anant Rao
 Sent: Wednesday, June 11, 2014 09:45

 The signature is generated by a client program (also a 'c' program). What is 
 the format of a signature? How do I find out?

The format for an ECDSA or DSA signature is an ASN.1 SEQUENCE of two INTEGERs.
In practice I've always seen DER, but I don't know if that's required; the two 
reasons 
that commonly require DER (hashed and byte-compared) don't apply.
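
In ASN.1 terms (as in RFC 3279; the DSA structure has the same shape):

  ECDSA-Sig-Value ::= SEQUENCE {
      r  INTEGER,
      s  INTEGER }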

 Just to confirm - whether it's ECDSA or RSA, for verification, we just get 
 the EVP_PKEY data structure filled with 
 the public key correctly and call in a sequence ending up with a call to 
 EVP_VerifyFinal. Is that correct?

Either the old way with EVP_Verify{Init,Update,Final} and the key on the Final,
or the new way with EVP_DigestVerify{Init,Update,Final} and the key on the Init.
Either way the sequence is the same regardless of the keytype = PKalgorithm.
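
As a minimal sketch of the new way (assuming the 1.0.x-era API and SHA-256 as
the digest; pkey, msg and sig are supplied by the caller):

    #include <openssl/evp.h>

    int verify_sig(EVP_PKEY *pkey, const unsigned char *msg, size_t msglen,
                   unsigned char *sig, size_t siglen)
    {
        EVP_MD_CTX *ctx = EVP_MD_CTX_create();
        int ok = 0;

        if (ctx != NULL
            && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
            && EVP_DigestVerifyUpdate(ctx, msg, msglen) == 1
            && EVP_DigestVerifyFinal(ctx, sig, siglen) == 1)
            ok = 1;   /* for ECDSA/DSA, sig is the DER SEQUENCE of two INTEGERs */
        if (ctx != NULL)
            EVP_MD_CTX_destroy(ctx);
        return ok;
    }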




RE: No OPENSSL_Applink

2014-06-10 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of open...@comaxis.com
 Sent: Saturday, June 07, 2014 09:35

 I am attempting to use the d2i_PKCS12_fp() API call in a Windows DLL
 compiled with the multi-threaded (/MT) runtime library.  On this call I
 get the runtime error OPENSSL_Uplink(03CE1000,08): no
 OPENSSL_Applink.
 From discussions I have seen about this error, I thought I could fix it by
 adding applink.c to my project, and calling CRYPTO_malloc_init().
 However this has no effect.  Is use of /MT causing this?  It will be
 difficult to change that, due to other components of the project.  I have

applink.c (and OpenSSL_Applink) only works in an EXE, not a DLL.

 used the HMAC and SHA256 APIs in this project with no problem.  If it is
 just file I/O causing the problem, is there a way that I can
 read in the .p12 file myself, and just pass a buffer to OpenSSL in order
 to initialize the PKCS12 structure?
 
Yes, uplink is for file access (and malloc_init is for memory allocation).

You can:

- read the file contents into memory and call d2i_PKCS12 to parse from memory
(pass a temporary *copy* pointer because it gets changed, which isn't possible 
for an array and is wrong for a malloc/etc pointer that you need to free later)

- call BIO_new_file to open the file *in OpenSSL NOT your code* and use 
d2i_PKCS12_bio.
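
A minimal sketch of both options (fname, buf and len are only illustrative,
and error handling is trimmed):

    #include <openssl/bio.h>
    #include <openssl/pkcs12.h>

    /* option 1: you read the bytes (buf/len), OpenSSL parses from memory;
     * note the temporary copy of the pointer because d2i_PKCS12 advances it */
    PKCS12 *load_p12_mem(const unsigned char *buf, long len)
    {
        const unsigned char *p = buf;
        return d2i_PKCS12(NULL, &p, len);
    }

    /* option 2: OpenSSL opens the file itself, so no Applink is needed */
    PKCS12 *load_p12_bio(const char *fname)
    {
        BIO *bio = BIO_new_file(fname, "rb");
        PKCS12 *p12 = (bio != NULL) ? d2i_PKCS12_bio(bio, NULL) : NULL;
        BIO_free(bio);
        return p12;
    }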





RE: OpenSSL: build my version

2014-06-02 Thread Dave Thompson
On platforms where shared-lib is supported at all it is usually the default
build and the conventional packaging. Are you sure you don't already have it?
Or do you mean you want to build a different and/or modified version, as shared?

What almost(?) everybody does, and the build process is set up to support, is:
put a modified/personal version in a different DIRECTORY e.g.
/usr/myopenssl/lib{ssl,crypto}. See --prefix. If you really want, add symlinks
from a standard location as libmy* (but that's going to complicate the build
procedures for your applications).

 

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Rohit Goel
Sent: Saturday, May 31, 2014 03:28
To: openssl-users@openssl.org
Subject: *** Spam *** OpenSSL

 

Hi, 

 

I am trying to build OpenSSL on Linux as a shared library.

I want the libraries to be renamed as libmycrypto.so and libmyssl.so so 
that it doesn't conflict with the libraries available on the system.

 

Can some please guide on how to do it ?

Any help would be greatly appreciated.

 

-Rohit

 



RE: Verification of a certificate chain

2014-05-29 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Sven Reissmann
 Sent: Thursday, May 29, 2014 12:24
snip
 What I did was:
 
 - I generated a newRootCA (new keypair, selfsigned certificate).
 
 - I generated another selfsigned certificate (bridgeCert) from the
   newRootCA's private key. From this cert, I used the -x509toreq
   option to generate a csr (bridgeCSR).
 
(You didn't really need that, you could just -x509toreq from newroot,
but no harm done as long as the DN is the same and per below it is.
Or you can req -new directly from the key, but that's less convenient.)

 - I took the bridgeCSR and issued a certificate using the oldRootCA.
   The certificate issued has AKI = oldRootCA and SKI = newRootCA
 
 - I generated a CSR for a new subCA and issued a certificate using
   newRootCA. The AKI of the subCA cert is the same as the SKI of
   newRootCA and bridgeCert. The AKI does not include any issuer or
   serial information
 
 What I am able to do now is:
 
 - As long as I trust oldRoot and my server handshake send subCA and
 bridge, the trustchain verifies: Verify return code: 0 (ok)
 
 - As long as I trust newRoot any my server handshake send subCA but NOT
 bridge, the trustchain verifies: Verify return code: 0 (ok)
 
As expected and desired.

 - If I only trust newRoot but send the bridgeCert, I get an error. Also,
 if I only trust oldRoot and do not send bridge, I get an error.
 
If the client trusts only oldroot and server doesn't send bridge, that 
should fail. The server isn't providing a chain that reaches something 
the client trusts, so the client can't verify the server is authentic.

If the client trusts only newroot and the server does send bridge, 
are you talking about a client using OpenSSL or something else?
OpenSSL client, at least through 1.0.1, doesn't handle that well,
but in my experience other clients (notably browsers and Java) do.

As I said in part:
snip
  A stale client that has only oldroot will chain bridge to oldroot and succeed
  as long as oldroot doesn't expire. A clever fresh client has newroot and will
  chain subCA to newroot, ignoring bridge -- while a dumb client will ignore
  available newroot and insist on chaining bridge to oldroot. Every time I've
  looked (not systematically) major browsers and Java are clever, but OpenSSL
  (client/relier) through 1.0.1 is not. I know 1.0.2 will change verification
  but don't know about this particular point.
snip
In my simplified description, "not clever" is dumb. An OpenSSL relier (here the
client) through 1.0.1, when it receives a chain that *could* reach a trust
anchor from the middle (i.e. subCA-newroot), doesn't actually look for that; it
looks only at the end (i.e. bridge-oldroot?) and when that isn't found it
returns unverified.

As indicated I haven't looked at whether 1.0.2 will fix this; if it does, I
don't know when it will be released and how long it will take your systems to
be upgraded, if they even can be (e.g. if there is not another dependency on
older OpenSSL).




RE: Re?: How to make a secure tcp connection without using certificate

2014-05-29 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Jakob Bohm
 Sent: Wednesday, May 28, 2014 13:04

 On 5/25/2014 2:22 PM, Hanno Böck wrote:

  Some clients (e.g. all common browsers) do fallbacks that in fact
  can invalidate all improvements of later tls versions.
 
  These fallbacks also can happen by accident (e.g. bad connections) and
  sometimes disable features like SNI.
 
  That's why I recommend to everyone that we need at least to deprecate
  SSLv3.
 
 
 
 There is also the very real issue that a few platforms which no longer
 receive feature updates (such as new TLS protocol versions) are stuck
 at SSLv3.  Permanently.  So until those platforms become truly extinct,
 a lot of servers need to cater to their continued existence by allowing
 ancient TLS versions.
 
 At that point the problem is how to do the best defense against
 man-in-the-middle downgrade-to-SSLv3 attacks.  For instance is there a
 way to
 ensure that the server certificate validation done by an SSLv3
 (compatible) client will fail if both server and client were capable of
 TLS v1.0, but a man in the middle tampered with the version negotiation?
 
I don't think you want it on the cert. The cert only asserts identity and 
ownership of the key, it isn't specific to the server implementation or 
features and making it so would I bet actually discourage people from 
upgrading to the latest and most complete protocol (not a benefit).
And of course very few TLS connections use a client cert, so the 
server would almost never be able to detect/report the problem,
even though a decent server operator would like to know about an attack 
targeted (substantially) at them and their users, and I'd bet is more likely 
to try to do something about it than most web users at least.

The Finished exchange protects against *tampering* in a handshake,
and has since SSLv3 (inclusive). The problem is clients that fall back 
at the application level if the (good) handshake is *blocked* (denied).
Remember we had a fair number of legit cases of this when TLSv1.2 
in 1.0.1 added numerous suites by default plus one extension, and the 
ClientHello growing beyond 256 bytes broke some servers -- even though 
they claimed to implement specs that implicitly required it. In those cases 
it was actually reasonable for a client to fall back to 1.1.
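
For a server operator who simply wants to stop negotiating SSLv3 at all, as
suggested above, a minimal sketch (ctx being whatever SSL_CTX the server
already creates) is just the standard option bits:

    #include <openssl/ssl.h>

    void disable_old_ssl(SSL_CTX *ctx)
    {
        /* SSLv2 is already prohibited (RFC 6176); refusing SSLv3 as well
         * means a downgraded handshake cannot complete at all */
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);
    }

That of course does nothing for the stuck-at-SSLv3 platforms discussed above;
it is exactly the trade-off this thread is about.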

 Failing that, is this something that could be added to the TLS v1.3
 standard (i.e. some signed portion of the SSLv3 exchange being
 unnaturally different if the parties could and should have negotiated
 something better).
 
I see no reason to tie this to a TLSv1.3 document, when and if there is one.
This is a proposed change to SSL, which is not TLS (only technically similar).
The prohibition on SSLv2 is a standalone document: 6176, which updates
2246 4346 5246 to retroactively remove the SSLv2 compatibility.
(Of course an IETF prohibition has no legal force and doesn't actually 
prevent or even deter people from continuing to use SSLv2, it just lets us 
wag our fingers at them.) Since SSLv3 was belatedly retroactively published 
as 6101, this could even be labelled as an update to that, FWIW.

 Not remembering the SSLv3 spec details, one option could be to announce
 support for a special we also support TLS v1.0 cipher suite, which no
 one can really implement (by definition), but whose presence in a
 cipher suite list from the other end indicates that said other end
 announced SSLv3.1 (TLS v1.0) support in an unsigned part of the
 exchange.  This could even be specified in an UPDATE RFC for the
 existing TLS v1.0..v1.2 versions, and a CVE number assigned to the
 common bug of its non-implementation (after library implementations
 become available).
 
In other words like the Signaling CipherSuite Value (SCSV) used for 
5746 Secure Renegotiation (aka the Apache bug) in cases where the 
extension didn't work (or might not work reliably). I'd say experience 
confirmed that worked well enough to be considered an option.

But many users, especially web users, want to connect to the server 
even if it isn't truly secure. When we make it harder for https to 
work, they *will* use http instead, or else complain very loudly.





RE: Verification of a certificate chain

2014-05-27 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Eisenacher, Patrick
 Sent: Tuesday, May 27, 2014 12:41

  From: Sven Reissmann
 
  What I want to achieve is having a new rootCA, which replaces an
  oldRootCA, which I am using until now.
 
  So far the trust chain is: oldRoot - oldServerCert.
 
  What I thought should be possible is building this trust chain:
  oldRoot - newRoot - newSubCA - newServerCert
 
  As Users are trusting oldRoot, changing the oldServerCert to
  newServerCert is no problem. After some time, users would move trust to
  newRoot and I can disable oldRoot.
 
  This doesn't seem possible, if I understand your answers correct.
 
  Is there another/better/default way of smoothly changing a trust anchor?
  I.e. by cross-signing the newRoot by itself and the oldRoot?
 
 Just add the new root-CA certificate to all relevant truststores. Afterwards 
 you
 can start issuing certificates that are trusted by all parties with updated
 truststores.
 
If you can get (all) the clients to update, yes. Sometimes that's hard.
Sometimes it's hard even *locating* the clients, especially programs.

You shouldn't cross-sign the new root itself, then it isn't a true root
and (more importantly) won't consistently be accepted as an anchor.

What you can do, and in my experience real public CAs actually do, is:

- create new root key, and new (selfsigned) root cert for it.

- also create a 'bridge' cert for the new root key, and the new root DN if 
different (which it usually is, e.g. Joe's Clam, Oyster and Cert Emporium Gen3 
supercedes Joe's Clam and Cert Shack Gen2), (cross)signed by the old root
(thus with issuer and AKI if present identifying oldroot, but SKI same as 
newroot).

- issue the subCA cert(s) with AKI keyid for the new root key or omitted, and
issuer matching both newroot and bridge, but NOT using AKI issuerserial 

- have server(s) handshake send subCA and bridge certs, at least for a period 
of time. 
A stale client that has only oldroot will chain bridge to oldroot and succeed
as long as oldroot doesn't expire. A clever fresh client has newroot and will
chain subCA to newroot, ignoring bridge -- while a dumb client will ignore
available newroot and insist on chaining bridge to oldroot. Every time I've
looked (not systematically) major browsers and Java are clever, but OpenSSL
(client/relier) through 1.0.1 is not. I know 1.0.2 will change verification
but don't know about this particular point.
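
For reference, a minimal sketch (cert objects assumed loaded elsewhere, error
handling trimmed) of how a relying client verifies that layout -- newroot as
the trust anchor, subCA and bridge as the untrusted certs the server sends:

    #include <openssl/x509.h>
    #include <openssl/x509_vfy.h>

    int verify_leaf(X509 *leaf, X509 *subca, X509 *bridge, X509 *newroot)
    {
        X509_STORE *store = X509_STORE_new();
        STACK_OF(X509) *untrusted = sk_X509_new_null();
        X509_STORE_CTX *csc = X509_STORE_CTX_new();
        int ok;

        X509_STORE_add_cert(store, newroot);   /* the anchor the client trusts */
        sk_X509_push(untrusted, subca);        /* what the server sends */
        sk_X509_push(untrusted, bridge);

        X509_STORE_CTX_init(csc, store, leaf, untrusted);
        ok = X509_verify_cert(csc);            /* 1 = chain verified */

        X509_STORE_CTX_free(csc);
        sk_X509_free(untrusted);
        X509_STORE_free(store);
        return ok == 1;
    }

A clever verifier succeeds as soon as subCA chains to newroot in the store; the
dumb behaviour instead insists on extending the chain through bridge toward
oldroot and fails when oldroot isn't there.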





RE: PKCS7_sign & PKCS7_verify

2014-05-27 Thread Dave Thompson
The third arg of PKCS7_verify (indata) should only be used for an 'external' or
'detached' signature, where the PKCS#7 does not contain the data. In your case
it should be null.

Also note that the _BINARY flag isn't actually used for "plain" PKCS#7, only
for SMIME. And I don't think it really works right for SMIME. Avoid it.
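
A minimal sketch of the attached (non-detached) case (the store is assumed to
be set up elsewhere; flags left at 0):

    #include <openssl/pkcs7.h>
    #include <openssl/x509.h>

    int check_p7(PKCS7 *p7, X509_STORE *store)
    {
        BIO *content = BIO_new(BIO_s_mem()); /* receives the signed data if wanted */
        int ok = PKCS7_verify(p7, NULL, store,
                              NULL,          /* indata: only for detached sigs */
                              content, 0);
        BIO_free(content);
        return ok == 1;
    }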

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Dikarev Evgeniy
Sent: Tuesday, May 27, 2014 03:45
To: openssl-users@openssl.org
Subject: PKCS7_sign & PKCS7_verify

 

Hey, guys. 

I have a small problem when using PKCS7_sign and PKCS7_verify. The signature 
does not verify in my example program, but it does verify using openssl on the 
command line. What am I doing wrong?

code is attached.



Dikarev Evgeniy



RE: Openssl crashed when loading certificates

2014-05-20 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org 
 [mailto:owner-openssl-us...@openssl.org] On Behalf Of David Li
 Sent: Tuesday, May 20, 2014 13:05
snip
 I am using SSL_CTX_use_certificate_chain_file() to load my server certificate 
 files at initialization. 
 The PEM file is created by concatenating server cert, server key and CA cert 
 together.  
 I used the following command line to check its format and it seemed OK.

 $ openssl s_server -cert servercert.pem -www
 Using default temp DH parameters
 Using default temp ECDH parameters
 ACCEPT

Note s_server does use_certificate_file (and also use_PrivateKey_file) not the 
_chain_ version,
so that really only checks that the server (first) cert and privatekey are 
good, not the rest 
of the file. However, even if the CA cert is somehow bad it should at worst 
give an error return
(and maybe just discard it) not a SEGV. However, if the CA cert is a root 
(possibly your own 
DIY root) it doesn't matter if it's in the file and good or not, because 
servers aren't required 
to send the root of their chain -- because clients can never trust a root (or 
in general anchor) 
sent by the server and must already have it local anyway. 
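
For reference, a minimal sketch (file name illustrative) of loading such a
concatenated PEM file in a real server, with each return checked and the
OpenSSL error stack printed on failure:

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    int load_creds(SSL_CTX *ctx)
    {
        if (SSL_CTX_use_certificate_chain_file(ctx, "servercert.pem") != 1
            || SSL_CTX_use_PrivateKey_file(ctx, "servercert.pem", SSL_FILETYPE_PEM) != 1
            || SSL_CTX_check_private_key(ctx) != 1) {
            ERR_print_errors_fp(stderr);   /* says which part of the file was bad */
            return 0;
        }
        return 1;
    }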

 And I can use openssl s_client command line to connect to the above server 
 without any issues.

What did you use for s_client's trust store (-CAfile and/or -CApath)?

 Now when I started my server, the code crashed inside the 
 SSL_CTX_use_certificate_chain_file():
snip
 There wasn't any detailed errors printed out but only:Segmentation fault 
 (core dumped)

When you get an unhandled signal -- and SEGV usually isn't and often can't be 
handled -- 
a C program aborts without outputting anything that wasn't output (and where 
applicable 
flushed) before the signal. This is unlike 'voluntary' error handling where the 
code gets 
a return value indicating an error (such as -1 from SSL_connect or NULL from 
fopen) 
and can -- and should -- print information about the problem. And unlike some 
other 
languages that (more or less reliably) catch exceptions and give details for 
them.

 Can anyone suggest how to debug this issue? 

The same way you debug SEGV in any C program. In this case you got a core dump 
file;
open it with the debugger of your choice -- gdb is common and popular -- and 
try to 
look at the stack (bt in gdb). Sometimes the stack is clobbered by the same bug 
that 
caused the SEGV, but usually it shows where -- or nearly where -- the code was 
executing and called from and sometimes (often?) the function arguments at each 
level.

Alternatively, (re)run the program under control of a debugger like gdb to 
start with.
Set breakpoints before or at the call that fails, and look to make sure the 
arguments 
are good -- for use_cert_chain, ctx points to a validly allocated and 
initialized SSL_CTX 
(to a first approximation if p *ctx doesn't give a gdb error and isn't all zero 
or obvious 
garbage, it's likely okay) and *file is the correct filename (and null 
terminated).
If they look okay and you either built openssl from source or have the source 
from which 
it was built installed, step in and see where it fails; but that's only needed 
if the bug is 
in the openssl code which is unlikely as thousands or millions of other people 
use it 
without problem. (Though not completely impossible.)

Or if you don't like the debugger, try taking out parts of your code that don't 
appear 
to be related to the problem to see if it still occurs. While it does, keep 
reducing until 
you either find the problem or get to a small self-contained example that 
exhibits 
the problem and post it. Unless you are using a good revision control system, 
it's 
usually best to 'remove' code by putting #if 0 and #endif lines around it 
instead of 
actually deleting it, so that you can easily put it back correctly if necessary.

Unfortunately if the symptom stops when you remove some code, that doesn't 
reliably 
prove that code was the problem (or the only problem); problems at the machine 
level 
are usually due to 'undefined behavior' in C where your code is wrong in a way 
that 
isn't required to be caught, like using an invalid pointer, and the actual 
results vary 
depending on seemingly irrelevant factors like the size of code before and 
after the 
location of the actual bug in a complicated way that won't make any sense 
unless 
you understand in detail the machine code generated for your source code -- 
and to be frank if you knew that you wouldn't be asking a question like this.




RE: Openssl crashed when loading certificates

2014-05-20 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org 
 [mailto:owner-openssl-us...@openssl.org] On Behalf Of Dustin Oprea
 Sent: Tuesday, May 20, 2014 14:07

 On Tue, May 20, 2014 at 1:04 PM, David Li dlipub...@gmail.com wrote:
snip

 The code that you cited doesn't use SSL_CTX_use_certificate_chain_file.

You're right; I missed that in my answer. But use_cert does nearly the same 
things 
use_cchain does, so a SEGV in either is pretty likely the same bug.

 I'm new to this arena, too. However, I don't think the public-key should be 
 in the trust chain. 
 Make sure that's correct, and that you're only sending the one certificate 
 into SSL_CTX_use_certificate_file.

The publickey is in the cert which is in the trust chain. But what the OP 
called server key
is undoubtedly the privatekey, which is treated as an object in its own right 
(unlike the publickey) 
and which must (also) be configured in the server; openssl's treatment of PEM 
input allows you 
to use one file for both the cert (or chain) and the privatekey, and this is 
often convenient.
That's exactly what 's_server -cert file1' without a separate '-key file2' does.
Similarly if you call use_cert (not use_cchain) on a file that contains 
multiple certs,
it takes the first one and ignores the rest. That may or may not be what you 
want 
in a particular case, but it is definitely not a SEGV.





RE: SSL_CTX_use_PrivateKey_file does not work with Elliptic Curve Private Key

2014-05-19 Thread Dave Thompson
 

http://www.openssl.org/support/faq.html#PROG6

and if you haven't loaded error strings

http://www.openssl.org/support/faq.html#PROG7
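
As a minimal sketch of what those FAQ entries amount to for the snippet below
(1.0.x-era API assumed): call SSL_load_error_strings() once at startup, then
dump the error stack when the load fails instead of only printing "failed":

    #include <openssl/ssl.h>
    #include <openssl/err.h>

    int load_key_or_report(SSL_CTX *ctx, const char *keyfile)
    {
        if (SSL_CTX_use_PrivateKey_file(ctx, keyfile, SSL_FILETYPE_PEM) != 1) {
            ERR_print_errors_fp(stderr);  /* the real reason, e.g. "no start line" */
            return 0;
        }
        return 1;
    }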

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Darshan Mody
Sent: Monday, May 19, 2014 09:13
To: openssl-users@openssl.org
Subject: SSL_CTX_use_PrivateKey_file does not work with Elliptic Curve
Private Key

 

Hi,

 

I am new to openssl APIs. However I am using the current code from SIPp.
Below is the code snippet for the Private Key

 

  if ( SSL_CTX_use_PrivateKey_file(sip_trp_ssl_ctx_client,
                                   tls_key_name,
                                   SSL_FILETYPE_PEM ) != 1 ) {
    ERROR("FI_init_ssl_context: SSL_CTX_use_PrivateKey_file (client) failed");
    return SSL_INIT_ERROR;
  }

 

When I provide the Elliptic Private Key it always returns an Error.

 

-----BEGIN EC PARAMETERS-----

 

-----END EC PARAMETERS-----

-----BEGIN EC PRIVATE KEY-----

 

-----END EC PRIVATE KEY-----

 

My Private key looks as above

 

Thanks

Darshan

 



RE: encrypt - salt

2014-05-15 Thread Dave Thompson
EVP_BytesToKey implements (a tweak on) the original PKCS#5, which derived a key
and IV by iterated hashing of a (reusable but secret) password with random
(i.e. unique) salt. Given random salt this gives effectively random IV, but is
unnecessarily complicated.

This was recognized as a not terribly good plan, and the improved PBKDF2 in
PKCS#5v2 derives only the key in a slightly different way (iterated *HMAC* with
salt *cumulated*), leaving the IV, if any, as plain random outside the scope of
the PBKDF2 primitive. OpenSSL does implement PBKDF2, and can use it for PKCS#8
and PKCS#12 etc., but kept BytesToKey for compatibility with existing 'enc'
files and 'legacy' (pre PKCS#8) keys. (Which don't even really use the
iteration feature; they are hardcoded 1!)

Using BytesToKey with random salt to generate the IV is a waste of time, and
using it with fixed salt violates its specification. Just use random IV. Unless
you don't trust your RNG -- but in that case you're better off fixing or
replacing the RNG, not trying weird things to prop it up.

BytesToKey (like PBKDF1) uses the one iteration count to produce data which is
returned for both key and IV. It does additional round(s) if and only if
necessary, a PBKDF2-like tweak not in standard PBKDF1, but still using the same
count.
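
A minimal sketch of that advice (1.0.x-era EVP API assumed; the key is supplied
externally and assumed strong, as in the question):

    #include <openssl/evp.h>
    #include <openssl/rand.h>

    /* out must have room for inlen + one block; iv is returned to the caller
     * and sent along with the ciphertext */
    int encrypt_buf(const unsigned char key[32],
                    const unsigned char *in, int inlen,
                    unsigned char *out, int *outlen,
                    unsigned char iv[16])
    {
        EVP_CIPHER_CTX ctx;
        int len1 = 0, len2 = 0, ok = 0;

        if (RAND_bytes(iv, 16) != 1)       /* random IV, no KDF involved */
            return 0;
        EVP_CIPHER_CTX_init(&ctx);
        if (EVP_EncryptInit_ex(&ctx, EVP_aes_256_cbc(), NULL, key, iv) == 1
            && EVP_EncryptUpdate(&ctx, out, &len1, in, inlen) == 1
            && EVP_EncryptFinal_ex(&ctx, out + len1, &len2) == 1) {
            *outlen = len1 + len2;
            ok = 1;
        }
        EVP_CIPHER_CTX_cleanup(&ctx);
        return ok;
    }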

 

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Anant Rao
Sent: Saturday, May 10, 2014 21:58
To: openssl-users@openssl.org
Subject: *** Spam *** encrypt - salt

 

Hi, 

 

I'm trying to encrypt non-password data with EVP_aes_256_cbc algo.

 

Here's what I'm currently doing:

I have the key already generated by some other means outside of my program - 
assume it's cryptographically strong. I'm, however, generating the IV with 
RAND_bytes within my program.

 

When I looked at an example of AES encryption on the page 
http://saju.net.in/code/misc/openssl_aes.c.txt , I see that there is a call to 
EVP_BytesToKey to generate the key and the IV.

 

My first question is if generating the IV this way is any stronger than calling 
RAND_bytes. Just looking at the signature of the function, I tend to think it 
is as it has an extra param salt. If the answer is affirmative, then I plan 
to call the function (with some fixed salt) and use only IV out of it and 
ignore the key generated (as I already have the key from some external source 
as mentioned before). Is this a good/workable idea?

 

My second question is if EVP_BytesToKey's count param is used (by OpenSSL) in 
the key generation, IV generation or both.

 

Thanks!

 

 



RE: How to include intermediate in pkcs12?

2014-04-28 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of Edward Ned Harvey
(openssl)
 Sent: Thursday, April 24, 2014 16:15

 openssl pkcs12 -export -out mypkcs12.pfx -inkey my.private.key -in
  mycert.crt -certfile intermediate.crt -CAfile ca.crt
 (Correct?)
 
 So ...  I just tried this, and confirmed, that it doesn't work...  The
root CA cert is
 not included in the pfx.
 
Works for me.

Are you sure you used the correct root? Note that you can put a mismatching
root in the pkcs12 using the other ways (infile or -certfile) and the pkcs12
will often still work correctly -- at least with IE+Chrome, Firefox, and Java
using JKS.

   Alternatively, I could
 cat mycert.crt intermediate.crt ca.crt > mychain.crt
 openssl pkcs12 -export -out mypkcs12.pfx -inkey my.private.key -in
  mychain.crt
 
 It seems the easiest thing to do is...
 
 cat intermediate.crt ca.crt > chain.crt
 openssl pkcs12 -export -out mypkcs12.pfx -inkey my.private.key -in
mycert.crt -
 certfile chain.crt
 
Both of those will always include the (putative) root.




RE: How to include intermediate in pkcs12?

2014-04-24 Thread Dave Thompson
A lot of things on the Internet are wrong. The OpenSSL man page does not say
multiple occurrences work and I'm pretty sure it never did, nor did the code.
In general OpenSSL commandlines don't handle repeated options; the few
exceptions are noted. pkcs12 -caname (NOT -cafile) IS one of the few that can
be repeated, and possibly some things on the Internet got that confused.
However, the commandlines (at least usually?) don't *diagnose* repeated (and
overridden) options.

pkcs12 -export gets certs from up to three places:

- the input file (-in if specified else stdin redirected or piped)

- -certfile if specified (once, as you saw)

- the truststore if -CAfile and/or -CApath specified IF NEEDED

In other words, any cert in infile or certfile is always in the output, needed
or not. If that set does not provide a complete chain, pkcs12 will try to
complete it using the truststore if specified, but will produce output even if
it remains incomplete. Like other commandlines, and many programs using the
library, the truststore can be a single file with -CAfile (NOT -cafile) or a
directory of hashnamed links or files with -CApath, or both.

If the cert you are putting in the pkcs12 is under a CA that you trust other
peers to use, and thus have in your truststore, it's easiest to use it from
there. Similarly if your cert is under an intermediate (or several) that you
have in your truststore to allow peers to use even if the peers don't send it
(as they should), it's easiest to use it from there. Otherwise IMO it's easiest
to just put it in infile or -certfile (or a combination), although the option
of temporarily creating or modifying a truststore works.

Whether to do your truststore with CAfile or CApath or both is a more general
question and depends partly on whether you use somebody's package. For example
the curl website supplies the Mozilla truststore in CAfile format; when I want
to use that I don't bother converting to CApath format.

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Edward Ned Harvey
(openssl)
Sent: Tuesday, April 22, 2014 15:31
To: openssl-users@openssl.org
Subject: *** Spam *** How to include intermediate in pkcs12?

 

A bunch of things on the internet say to do -cafile intermediate.pem
-cafile root.pem or -certfile intermediate.pem -certfile root.pem and
they explicitly say that calling these command-line options more than once
is ok and will result in both the certs being included in the final
pkcs12...  But I have found this to be untrue.

 

I have found that if I concatenate intermediate & root into a single glom
file, and then I specify -certfile once for the glom, then my pfx file will
include the complete chain.  But if I use -certfile twice, I get no
intermediate in my pfx.  And I just wasted more time than I care to
describe, figuring this out.

 

So...  While concatenation/glom is a viable workaround, I'd like to know,
what's supposed to work?  And was it a new feature introduced after a
certain rev or something?   I have OpenSSL 0.9.8y command-line on Mac OSX,
and OpenSSL 1.0.1e command-line on cygwin.  I believe I've seen the same
behavior in both.



RE: Verify Two Way SSL Certificates.

2014-04-22 Thread Dave Thompson
What exactly do you include in "correctly"?

As that entry (rightly) explains, the (or each) server must have a key & cert
from a CA trusted by the client, and the (or each) client must have a key &
cert from a CA trusted by the server. Most clients trust the "well-known" CAs
like Verisign and GoDaddy and maybe 10-100 more depending on the client and OS.
Some servers similarly trust well-known CAs, but sometimes the organization
operating the server also operates or links to a particular CA to issue certs
to its clients, and the SSL server trusts that CA. Most if not all clients and
servers can be configured to change which CAs they trust.

As it says, the server must "request" the client cert; this is often a separate
option. E.g. you must set "request client auth" AND "trust these client CAs: X,
Y, Z". Often there are several options, like request but proceed if a client
doesn't agree, or request and refuse to proceed if the client doesn't agree.

It isn't said explicitly, but for most SSL/TLS applications and particularly
HTTPS the server cert must correctly name the server, and for most (sane)
servers using client auth the client cert must correctly name the client.

For both one-way (server) auth and two-way (server+client) auth, if the cert is
issued by a CA using an "intermediate" or "chain" cert -- and certs from
well-known CAs do -- the server or client respectively should be configured
with both the entity cert AND the correct intermediate cert (or sometimes a few
of them). The CAs usually provide the needed intermediate(s) and instructions
for use with common servers, but you have to pay attention to the instructions
and follow them. (Although if you want to test/debug with commandline s_server,
it does NOT directly support own-chain certs and you must sneak them in via the
truststore.)

And last, but rarely important, the server cert and the client cert when used
must be for keys using the same public key algorithm: RSA, ECDSA, DSA, ECDH, or
DH. In practice almost everybody uses RSA and this is not a problem.

You can check these points directly, or you can try making a connection and if
it doesn't work look at the error(s) or other results that you get (such as
selection of a different client cert than you expected).

Do you have a specific problem you want to diagnose?

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org] 
On Behalf Of Kaushal Shriyan
Sent: Monday, April 21, 2014 10:14
To: openssl-users@openssl.org
Subject: *** Spam *** Verify Two Way SSL Certificates.

 

Hi,

 

Is there a way to test if 2 way ssl certs are installed correctly?

 

More Info :- 
http://stackoverflow.com/questions/10725572/two-way-ssl-clarification

 

Regards,

 

Kaushal



RE: Help me for ECDHE algorithm

2014-04-15 Thread Dave Thompson
 From: owner-openssl-us...@openssl.org On Behalf Of chetan
 Sent: Monday, April 14, 2014 00:42

 xxx.c is my program file.
 So, i'm compile simply like cc xxx.c .
 I am Gettting [undefined reference]

This is basic C programming. Whenever you link (not just compile) a C program
that uses a library (or several) other than the standard C lib(s), you must
specify it (them) to the linker, or to the compiler when it runs the linker as
here.

The exact syntax depends on what compiler and/or linker you are using, which
you don't say, but AFAIK the component 'collect2' indicates GCC/binutils. The
syntax for that (and some others) is -lxxx where l is lowercase ell and xxx is
the 'short' name of the library; the actual filename is usually libxxx.so or
libxxx.a. For OpenSSL EVP* routines (and more generally everything but actual
SSL/TLS) the library you need is -lcrypto.

If the library(s) you want isn't placed in the compiler's (or platform's)
default location for libs, you also need to specify -L (uppercase ell) with the
directory. If you are using an OS-vendor-provided or packaged version, as on
Linux, it will almost certainly be in the default location, whatever that is
for a given distro.





Re: Heart bleed with 0.9.8 and 1.0.1

2014-04-15 Thread Dave Thompson
Possibly too Postelian, OpenSSL answers a received heartbeat request (and thus
before the fix answers a malicious request with leaked data) even if the
heartbeat extension was negotiated off. Only the build option to exclude the
code stops it. OpenSSL will *send* hb request only if/after negotiating on.

The first OpenSSL version with heartbeat is 1.0.1 (base). The extension RFC is
written against current 5246 TLSv1.2, but like most extensions the logic can
apply to any version that supports extensions, which is since TLSv1(.0), and
that's what OpenSSL implements. The only exception I see is sigalgs, which only
makes sense for D/TLS1.2.

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of cvishnuid
Sent: Sunday, April 13, 2014 12:24
To: openssl-users@openssl.org
Subject: *** Spam *** Re: Heart bleed with 0.9.8 and 1.0.1

 

Will client respond for heart beat request even if server doesn't support
heart beat . ?

 

Which version of ssl this heart beat in introduced ? 

 

I am assuming as the client know that the session establish with sever
doesn't support heart beat it will not respond am I correct ?

 

 


On Sunday, April 13, 2014, Jin Jiang [via OpenSSL] wrote:

Hi,

I think your client is vulnerable, if the attacker can touch your client.

 

Regards,

Jin 

 

On Fri, Apr 11, 2014 at 5:32 PM, cvishnuid wrote:

Hi, I am having 0.9.8 openssl libraries in my server and 1.0.1 in my client.
Am I vulnerable to the heartbleed attack? Regards, Vishnu. 




Re: Enabling s_server to use a local CRL file

2014-04-03 Thread Dave Thompson
In order to validate a client cert at all, with or without CRL(s), yes the
server must request the client cert, and s_server does that only if you specify
-verify or -Verify. The client must also agree to provide the cert, which it
might not; if it does not and you use -verify the handshake proceeds without
client auth; if you use -Verify the handshake fails.

As the usage and man says, -crl_check_all checks the whole chain (except the
root) vs -crl_check checks only the leaf cert.

If you use CAfile only, which I think is simplest for testing, it should
contain at least the root cert, and any intermediate certs needed that aren't
supplied by the peer (client); and: the CRL for the leaf-issuer i.e. the lowest
and possibly only CA if -crl_check, or CRL(s) for all issuer(s) i.e. all CA
cert(s) in the chain if -crl_check_all. I'm not sure it works (and can't easily
test) to have more than one full CRL for the same issuer; if you don't seem to
be finding the correct CRL(s) make sure you have no more than one per issuer.
From skimming the code I'm pretty sure you can have full + delta, but I don't
do deltas so I didn't test that.

Remember s_server, like s_client, prints a message when cert validation fails
for ANY reason: no issuer, signature bad, expired, wrong type, OR revoked; but
continues with the connection, unlike a real app which should abort the
connection. Look closely at the lines before
-----BEGIN SSL SESSION PARAMETERS----- to see the error if any.
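
For an application doing in code what -CAfile plus -crl_check_all do in
s_server, a minimal sketch (CRL file name illustrative; the CA certs are
assumed already in the SSL_CTX's store):

    #include <openssl/ssl.h>
    #include <openssl/pem.h>
    #include <openssl/x509_vfy.h>

    int enable_crl_check(SSL_CTX *ctx, const char *crlfile)
    {
        X509_STORE *store = SSL_CTX_get_cert_store(ctx);
        BIO *bio = BIO_new_file(crlfile, "r");
        X509_CRL *crl =
            (bio != NULL) ? PEM_read_bio_X509_CRL(bio, NULL, NULL, NULL) : NULL;

        BIO_free(bio);
        if (crl == NULL)
            return 0;
        X509_STORE_add_crl(store, crl);     /* the store keeps its own reference */
        X509_CRL_free(crl);
        X509_STORE_set_flags(store, X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
        return 1;
    }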

 

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Lakshmi Reguna
Sent: Monday, March 31, 2014 18:33
To: openssl-users@openssl.org
Subject: *** Spam *** Re: Enabling s_server to use a local CRL file

 

Thanks for the response Dave. Would you also know how -Verify option
interacts with the -crl_check_all. This what I gather from the Openssl
s_server help documentation. Is the entire certificate chain checked against
CRLs issued by each intermediate CA in the chain. Would you have a use case
example of how the CAfile is expected to look like.

 

-verify depth, -Verify depth

The verify depth to use. This specifies the maximum length of the client
certificate chain and makes the server request a certificate from the
client. With the -verify option a certificate is requested but the client
does not have to send one, with the -Verify option the client must supply a
certificate or an error occurs.

-crl_check, -crl_check_all

Check the peer certificate has not been revoked by its CA. The CRL(s) are
appended to the certificate file. With the -crl_check_all option all CRLs of
all CAs in the chain are checked.

-CApath directory

The directory to use for client certificate verification. This directory
must be in ``hash format'', see verify for more information. These are also
used when building the server certificate chain.

-CAfile file

A file containing trusted certificates to use during client authentication
and to use when attempting to build the server certificate chain. The list
is also used in the list of acceptable client CAs passed to the client when
a certificate is requested.

 

Thanks,

Lakshmi.

 

From: Dave Thompson dthomp...@prinpay.com
Reply-To: openssl-users@openssl.org
Date: Monday, March 31, 2014 at 2:54 PM
To: openssl-users@openssl.org
Subject: RE: Enabling s_server to use a local CRL file

 

Through 1.0.1, put the CRL in PEM format in CAfile (specified or defaulted) 

or in CApath (ditto) named or linked as $hash.r$num (c_rehash can do for
you).

I've never seen a CA distribute PEM so you almost certainly need to convert.

And specify -crl_check or -crl_check_all (see the man page or -?).

 

1.0.2 apparently has new capabilities in this area but I haven't looked yet.

 

From: owner-openssl-us...@openssl.org [mailto:owner-openssl-us...@openssl.org]
On Behalf Of Lakshmi Reguna
Sent: Friday, March 28, 2014 14:16
To: openssl-users@openssl.org
Subject: *** Spam *** Enabling s_server to use a local CRL file

 

Hi,

 

 I would like to know how I can specify s_server to use a local CRL file. Do
I need to specify a LDAP CRL distribution field in the certificate which is
being checked against the CRL ? 

 

Thanks,

Lakshmi. 

 


RE: no OPENSSL_Applink in my DLL

2014-04-03 Thread Dave Thompson
1. Modify the uplink logic to hardcode your DLL, and make sure your users'
programs never call this modified openssl, probably by using a nonstandard
filename(s), and then stand ready to provide updates every few months.

2. Rewrite the uplink logic to figure out which DLL is providing the
troublesome arguments, i.e. FILE pointers. You probably have to decompile and
reverse engineer the calling code, and in some cases it may not be possible and
you have to fail the operation with an explanation that no user will understand.

3. Don't call OpenSSL routines that need uplink, namely those that use FILE
pointers. E.g. instead of fp = fopen(certfile, "r"); PEM_read_X509(fp, ...)
use BIO *bio = BIO_new_file(certfile, "r"); PEM_read_bio_X509(bio, ...).

I recommend 3. It's a bit tedious, but it works, and will continue to work.
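
A minimal sketch of option 3 (file name illustrative): all file I/O happens
inside OpenSSL via a BIO, so no FILE* crosses the DLL boundary and no Applink
is needed:

    #include <openssl/bio.h>
    #include <openssl/pem.h>

    X509 *load_cert(const char *certfile)
    {
        BIO *bio = BIO_new_file(certfile, "r");
        X509 *crt = (bio != NULL) ? PEM_read_bio_X509(bio, NULL, NULL, NULL) : NULL;
        BIO_free(bio);
        return crt;
    }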

 

From: owner-openssl-us...@openssl.org
[mailto:owner-openssl-us...@openssl.org] On Behalf Of Mohan Kumar
Sent: Wednesday, April 02, 2014 14:46
To: openssl-users@openssl.org
Subject: *** Spam *** no OPENSSL_Applink in my DLL

 

Hi,

 

I am writing a DLL plugin which works with a third party plugin. The DLL
uses open ssl. I was able to successfully connect to a ssl server from a
console application (.exe). But when I added the same code to my dll, it is
not working. Discussions point that i should include applink.c in my code
which has main function, but sadly DLL is all I got. Please point me to a
soln.


 

-- 
Thanks,

Mohan



  1   2   3   4   5   6   7   8   9   10   >