[openssl-users] sign sub CA issue

2015-12-07 Thread Mohammad Jebran
I have to sign a sub-CA with my current root CA using OpenSSL. Everything
is configured as per the instructions, but I am still getting the error
"stateOrProvinceName field needed to be the same", as shown below.





root@machine:~/ImportantCACerts/intermediate# openssl ca -config openssl.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/subca2.csr -out certs/subca2.crt

Using configuration from openssl.cnf
Check that the request matches the signature
Signature ok
The stateOrProvinceName field needed to be the same in the
CA certificate (HK) and the request (HK)
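
For reference, that check comes from the policy section of openssl.cnf that
[ CA_default ] points at. With the stock example config the relevant part
looks roughly like the sketch below (section and option names follow the
sample file; yours may differ). When stateOrProvinceName is set to 'match',
the field in the CSR must compare equal to the one in the CA certificate,
and in some OpenSSL versions the comparison also fails when the two values
are encoded as different ASN.1 string types (PrintableString vs. UTF8String),
which would explain an error that shows "(HK)" on both sides. Relaxing the
policy to 'supplied' or 'optional', or regenerating the CSR with
string_mask = utf8only so the encodings agree, are the usual ways around it.

    # Hypothetical excerpt from openssl.cnf -- adjust to your own file
    [ CA_default ]
    policy = policy_match               # policy section applied by 'openssl ca'

    [ policy_match ]
    countryName             = match
    stateOrProvinceName     = supplied  # was 'match'
    organizationName        = match
    organizationalUnitName  = optional
    commonName              = supplied
    emailAddress            = optional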






Thanks & Regards, Jebran


Re: [openssl-users] explicitly including other ciphers.

2015-12-07 Thread Ron Croonenberg

Yes, I think that would probably be the case.

On EDR, HTTPS vs. HTTP loses me about 15-20 GB/s, almost half, which is why
I am trying to use HTTPS for the authentication only.
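
For what it's worth, "HTTPS for authentication only" maps onto the
NULL-encryption cipher suites, which OpenSSL ships but leaves out of
ALL/DEFAULT, so they have to be included explicitly. A minimal sketch of a
server context doing that (a sketch only, not our actual setup; whether
Apache/mod_ssl exposes this is a separate question):

    #include <openssl/ssl.h>

    /* Sketch: keep the normal TLS handshake (so the server is still
     * authenticated) but negotiate NULL-encryption suites so application
     * data is sent in the clear.  "eNULL" is excluded from ALL/DEFAULT and
     * must be named explicitly; "!aNULL" keeps anonymous suites out.
     * Assumes SSL_library_init() has already been called. */
    static SSL_CTX *make_auth_only_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
        if (ctx == NULL)
            return NULL;
        if (SSL_CTX_set_cipher_list(ctx, "eNULL:!aNULL") != 1) {
            SSL_CTX_free(ctx);
            return NULL;
        }
        /* certificate and key loading omitted */
        return ctx;
    }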


On 12/03/2015 07:10 PM, Jakob Bohm wrote:

On 04/12/2015 03:03, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf
Of Ron Croonenberg
Sent: Thursday, December 03, 2015 18:35
To: openssl-users@openssl.org
Subject: Re: [openssl-users] explicitly including other ciphers.

The network is isolated from the outside world, BUT we still need
authentication because different users are using it.

So what I preferably want is a setup where authentication is done the
"standard way" and after that we just use the https connection without the
overhead of actually encrypting anything (and the less modification and
recompiling, the better).

So rather than connecting directly to Apache, how about connecting to
a TLS proxy like stunnel, which would then connect to Apache over
vanilla HTTP. Configure Apache to only bind to loopback addresses
(127/8 and/or ::1), so no one can bypass the proxy.

That's assuming stunnel doesn't also play silly buggers with the
cipher suite list.


Wouldn't that extra hop via stunnel cost performance
(noting that Ron is apparently running at faster-than-gigabit speed)?

Enjoy

Jakob



Re: [openssl-users] explicitly including other ciphers.

2015-12-07 Thread Ron Croonenberg

If the proxy is another host, I'd probably loose too much bandwidth.


On 12/03/2015 07:03 PM, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf
Of Ron Croonenberg
Sent: Thursday, December 03, 2015 18:35
To: openssl-users@openssl.org
Subject: Re: [openssl-users] explicitly including other ciphers.

The network is isolated from the outside world, BUT we still need
authentication because different users are using it.

So what I preferably want is a setup where authentication is done the
"standard way" and after that we just use the https connection without the
overhead of actually encrypting anything (and the less modification and
recompiling, the better).


So rather than connecting directly to Apache, how about connecting to a TLS 
proxy like stunnel, which would then connect to Apache over vanilla HTTP. 
Configure Apache to only bind to loopback addresses (127/8 and/or ::1), so no 
one can bypass the proxy.

That's assuming stunnel doesn't also play silly buggers with the cipher suite 
list.




Re: [openssl-users] explicitly including other ciphers.

2015-12-07 Thread Ron Croonenberg
That is something we have been considering, but someone is going to bring up
the fact that passwords would be in the clear. It would be an option to have
some sort of encrypted authentication 'thing' over HTTP.


No, it is strictly for having users on the front ends authenticate so that
they only have access to their own data/objects.


On 12/03/2015 07:11 PM, Jakob Bohm wrote:

Since the network is (as I understand it) physically secure
against wiretapping, how about using plain http with http auth?

Or are you trying to protect against TCP connection hijacks by
other computers/processes on the "secure" network?

On 04/12/2015 00:35, Ron Croonenberg wrote:

The network is isolated from the outside world, BUT we still need
authentication because different users are using it.

So what I preferably want is a setup where authentication is done the
"standard way" and after that we just use the https connection without the
overhead of actually encrypting anything (and the less modification and
recompiling, the better).

thanks,

Ron


On 12/03/2015 02:50 PM, Richard Moore wrote:



On 2 December 2015 at 17:53, Ron Croonenberg wrote:

So the idea is to use an object store on an isolated network and
push and get objects out of it using https.


If the network is fully isolated you could use plain text. Using 'https'
and null encryption is basically just pretending to do security.




Enjoy

Jakob



Re: [openssl-users] explicitly including other ciphers.

2015-12-07 Thread Michael Wojcik
> From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf
> Of Ron Croonenberg
> Sent: Monday, December 07, 2015 14:24
> To: openssl-users@openssl.org
> Subject: Re: [openssl-users] explicitly including other ciphers.
> 
> If the proxy is another host, I'd probably loose too much bandwidth.

As I described it, it wouldn't be on another host. From my previous message: 
"Configure Apache to only bind to loopback addresses (127/8 and/or ::1), so no 
one can bypass the proxy." If the proxy is connecting to Apache over the 
loopback interface, by definition it's running on the same system.
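
A hedged sketch of that layout (ports and paths here are made up, not taken
from your setup):

    # httpd.conf -- Apache listens on loopback only
    Listen 127.0.0.1:8080

    # stunnel.conf -- terminate TLS and forward to Apache on the same host
    [https]
    accept  = 443
    connect = 127.0.0.1:8080
    cert    = /etc/stunnel/server.pem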

There might still be an unacceptable performance hit, of course. It wouldn't be 
due to an additional physical network leg (because there wouldn't be any), but 
you'd have some processing overhead, an extra set of copies for every packet, 
and some time spent in the proxy connecting to Apache - though depending on the 
requirements of the application and the capabilities of the proxy, that might 
be amortized over long-running connections.

Conversely, if your application can benefit from caching, you might gain some 
performance in actually serving content. It's impossible to guess without 
knowing more about the application and its behavior.

(And you mean "lose", not "loose".)

-- 
Michael Wojcik
Technology Specialist, Micro Focus




Re: [openssl-users] explicitly including other ciphers.

2015-12-07 Thread Ron Croonenberg
Well...  the performance loss would be high; I know that from other
applications.


Also, each server (there are 50) would need its own 'proxy', which is
probably a little impractical.


We're moving a lot of data...  machines that cache run out of memory to
actually cache in no time. Caching would only work for little bits of data.



On 12/03/2015 10:32 PM, Michael Wojcik wrote:

From: openssl-users [mailto:openssl-users-boun...@openssl.org] On Behalf
Of Jakob Bohm
Sent: Thursday, December 03, 2015 21:11
To: openssl-users@openssl.org
Subject: Re: [openssl-users] explicitly including other ciphers.

On 04/12/2015 03:03, Michael Wojcik wrote:

So rather than connecting directly to Apache, how about connecting to a

TLS proxy like stunnel, which would then connect to Apache over vanilla
HTTP. Configure Apache to only bind to loopback addresses (127/8 and/or
::1), so no one can bypass the proxy.



Wouldn't that extra hop via stunnel cost performance
(noting that Ron is apparently running at faster-than-gigabit speed)?


Yes, but depending on the actual application behavior, it might be negligible 
compared to the cost of certificate validation and the like. I don't know 
enough about the situation to guess whether the impact would be an issue, so I 
thought I'd propose this as one possible alternative.

The application might even be such that a caching proxy could be used in front 
of Apache for a performance gain - for example if the same content is re-read 
frequently and the HTTP cache control mechanisms allow it to be usefully cached.




[openssl-users] Question about TLS record length limitations

2015-12-07 Thread Software Engineer 979
Hello,

I'm currently developing a data transfer application using OpenSSL. The
application is required to securely transfer large amounts of data over a
low-latency/high-bandwidth network. The data being transferred lives in a
third-party application that uses a 1 MB buffer to transfer data to my
application. When I hook OpenSSL into my application I notice an
appreciable decline in network throughput. I've traced the issue to the
default TLS record size of 16 KB. The smaller record size causes the
third-party application's buffer to be segmented into four 16 KB buffers
per write, and the resulting overhead considerably slows things down. I've
since modified the version of OpenSSL that I'm using to support an
arbitrary TLS record size, allowing records of 1 MB or larger. Since this
change, my network throughput has improved dramatically (from 187%
degradation down to 33%).

I subsequently checked the TLS RFC to determine why a 16 KB record size was
being used, and all I could find was the following:

length
  The length (in bytes) of the following TLSCompressed.fragment.

  The length MUST NOT exceed 2^14 + 1024.

The language here is pretty explicit, stating that the length must not
exceed 16 KB (+ some change). Does anyone know the reason for this? Is there
a cryptographic reason why we shouldn't exceed this message size? Based on
my limited experiment, it would appear that a larger record size would
benefit low-latency/high-bandwidth networks.
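
For illustration, here is a rough sketch (not my production code) of the
write path: OpenSSL caps each record at 16 KB of plaintext
(SSL3_RT_MAX_PLAIN_LENGTH), so a 1 MB buffer ends up in roughly 64 records,
each with its own header, MAC and padding, regardless of how the application
chunks its writes:

    #include <openssl/ssl.h>

    /* Sketch: send a large application buffer over TLS.  Each SSL_write()
     * below hands OpenSSL one record's worth of plaintext (16 KB max). */
    static int send_all(SSL *ssl, const unsigned char *buf, size_t len)
    {
        size_t off = 0;
        while (off < len) {
            size_t chunk = len - off;
            if (chunk > 16384)
                chunk = 16384;          /* one record's worth of plaintext */
            int n = SSL_write(ssl, buf + off, (int)chunk);
            if (n <= 0)
                return -1;              /* consult SSL_get_error() in real code */
            off += (size_t)n;
        }
        return 0;
    }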

Thanks!


Re: [openssl-users] Question about TLS record length limitations

2015-12-07 Thread Salz, Rich
I suggest you ask on the TLS mailing list, t...@ietf.org
/r$
--
Senior Architect, Akamai Technologies
IM: richs...@jabber.at Twitter: RichSalz


Re: [openssl-users] Question about TLS record length limitations

2015-12-07 Thread Benjamin Kaduk
On 12/07/2015 02:43 PM, Software Engineer 979 wrote:
> Hello,
>
> I'm currently developing a data transfer application using OpenSSL.
> The application is required to securely transfer large amounts of data
> over a low-latency/high-bandwidth network. The data being transferred
> lives in a third-party application that uses a 1 MB buffer to transfer
> data to my application. When I hook OpenSSL into my application I
> notice an appreciable decline in network throughput. I've traced the
> issue to the default TLS record size of 16 KB. The smaller record size
> causes the third-party application's buffer to be segmented into four
> 16 KB buffers per write, and the resulting overhead considerably slows
> things down. I've since modified the version of OpenSSL that I'm using
> to support an arbitrary TLS record size, allowing records of 1 MB or
> larger. Since this change, my network throughput has improved
> dramatically (from 187% degradation down to 33%).
>
> I subsequently checked the TLS RFC to determine why a 16 KB record size
> was being used, and all I could find was the following:
>
> length
>   The length (in bytes) of the following TLSCompressed.fragment.
>   The length MUST NOT exceed 2^14 + 1024.
>
> The language here is pretty explicit, stating that the length must not
> exceed 16 KB (+ some change). Does anyone know the reason for this? Is
> there a cryptographic reason why we shouldn't exceed this message size?
> Based on my limited experiment, it would appear that a larger record
> size would benefit low-latency/high-bandwidth networks.
>

The peer is required to buffer the entire record before processing it,
and at that point the data could be from an untrusted party/attacker.  So
the limit is for protection against denial of service via resource
exhaustion.

-Ben


[openssl-users] Failed TLSv1.2 handshake

2015-12-07 Thread Nounou Dadoun
Hi folks, running into a failed handshake problem -

Although we upgraded to OpenSSL 1.0.2d last summer, we had never changed our
context setup, which accepted only TLSv1, i.e. (in Boost):
m_context(pIoService->GetNative(), boost::asio::ssl::context::tlsv1)


When we recently changed it to accept other versions (it didn't matter
whether we disabled SSLv2 and SSLv3), i.e. (in Boost):
m_context(pIoService->GetNative(), boost::asio::ssl::context::sslv23)

our SSL handshakes started failing with "decryption failed or bad record mac".

I've attached a packet capture; the client sends a TLSv1.2 CLIENT HELLO and
offers up 72 cipher suites.

The server responds with the SERVER HELLO, CERTIFICATE, SERVER HELLO DONE and 
appears to select 
Cipher Suite: TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d)

The client then sends the CLIENT KEY EXCHANGE, CHANGE CIPHER SPEC and
ENCRYPTED HANDSHAKE MESSAGE, and the exchange appears to finish with the
above error in the server log.

The cipher setting on the server is:
SSL_CTX_set_cipher_list(pSslContext->GetNativeRef().impl(),  
"ALL:SEED:!EXPORT:!LOW:!DES:!RC4");

Any suggestions?  Is it possible that we've selected a cipher setting which is 
not compiled in?

Thanks in advance for any help ... N


Nou Dadoun
Senior Firmware Developer, Security Specialist


Office: 604.629.5182 ext 2632 
Support: 888.281.5182  |  avigilon.com




[Attachment: failed_tls1.2_handshake.pcapng]


Re: [openssl-users] Failed TLSv1.2 handshake

2015-12-07 Thread Viktor Dukhovni
On Mon, Dec 07, 2015 at 10:46:26PM +, Nounou Dadoun wrote:

> The cipher setting on the server is:
> SSL_CTX_set_cipher_list(pSslContext->GetNativeRef().impl(),  
> "ALL:SEED:!EXPORT:!LOW:!DES:!RC4");

Note: your cipher setting is likely not what you intend it to be; instead
try:

"DEFAULT:!EXPORT:!LOW:!RC4:+SEED"

Unless you know what you're doing in enabling anonymous ciphers.
Also note the difference between ":SEED" and ":+SEED".

You're also using a version 1 server certificate with a public
exponent of "3".  This is a really bad idea.  You've not defined
any DH or ECDH parameters, so the selected cipher uses RSA key
transport; not a good idea, but it should work barring bugs on
either side.
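
A minimal sketch of both points against the 1.0.2 API (the DH parameter
file name is a placeholder; running
openssl ciphers -v "DEFAULT:!EXPORT:!LOW:!RC4:+SEED" shows exactly what the
string expands to):

    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/dh.h>
    #include <openssl/pem.h>

    /* Sketch only: tighten the cipher list and provide ECDH/DH parameters
     * so forward-secret suites can be negotiated instead of RSA key
     * transport. */
    static int harden_ctx(SSL_CTX *ctx)
    {
        if (SSL_CTX_set_cipher_list(ctx, "DEFAULT:!EXPORT:!LOW:!RC4:+SEED") != 1)
            return 0;

        /* Let OpenSSL pick an ECDH curve automatically (1.0.2 API). */
        SSL_CTX_set_ecdh_auto(ctx, 1);

        /* Optional DH parameters for DHE suites; generate the placeholder
         * file with e.g. openssl dhparam -out dhparam.pem 2048 */
        FILE *fp = fopen("dhparam.pem", "r");
        if (fp != NULL) {
            DH *dh = PEM_read_DHparams(fp, NULL, NULL, NULL);
            fclose(fp);
            if (dh != NULL) {
                SSL_CTX_set_tmp_dh(ctx, dh);
                DH_free(dh);
            }
        }
        return 1;
    }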

> Any suggestions?  Is it possible that we've selected a cipher setting which 
> is not compiled in?

No, that gives you plenty of ciphers (more than you need).  Perhaps
the client is buggy.  Have you tried OpenSSL 1.0.2e?  What software
is the client running?

In any case, there are enough red flags all over the place that
make it likely that other mistakes are being made.

-- 
Viktor.