Re: [TLS] Last Call: (Deprecating MD5 and SHA-1 signature hashes in TLS 1.2) to Proposed Standard

2020-10-15 Thread Martin Rex
The IESG  wrote:
> 
> The IESG has received a request from the Transport Layer Security WG (tls) to
> consider the following document: - 'Deprecating MD5 and SHA-1 signature
> hashes in TLS 1.2'
>
>as Proposed Standard
> 
> The IESG plans to make a decision in the next few weeks, and solicits final
> comments on this action. Please send substantive comments to the
> last-c...@ietf.org mailing lists by 2020-10-28. Exceptionally, comments may
> be sent to i...@ietf.org instead. In either case, please retain the beginning
> of the Subject line to allow automated sorting.


The new, backwards-incompatible and interop-fatal behaviour proposed in
section 2 of the current draft must be changed to reflect the updated
rationale from section 6 of the very same document, and to promote
safe and secure interoperability instead of a needless total interop failure.

Requesting interop failure where safe and secure interop can instead be
easily obtained, as the current draft does, would be a serious violation
of section 6 of RFC 2119, which limits an imperative MUST to situations
where it is actually required for interoperation or to limit behaviour
with potential for causing harm.



Section 6 of the current draft says:

   6.  Updates to RFC5246

   [RFC5246], The Transport Layer Security (TLS) Protocol Version 1.2,
   suggests that implementations can assume support for MD5 and SHA-1 by
   their peer.  This update changes the suggestion to assume support for
   SHA-256 instead, due to MD5 and SHA-1 being deprecated.

   In Section 7.4.1.4.1: the text should be revised from:

   OLD:

   "Note: this is a change from TLS 1.1 where there are no explicit
   rules, but as a practical matter one can assume that the peer
   supports MD5 and SHA- 1."

   NEW:

   "Note: This is a change from TLS 1.1 where there are no explicit
   rules, but as a practical matter one can assume that the peer
   supports SHA-256."


and therefore the behaviour in section 2 about the "Signature Algorithms"
extension ought to say:

   2.  Signature Algorithms

   Clients MUST NOT include MD5 and SHA-1 in the signature_algorithms
   extension.  If a client does not send a signature_algorithms
   extension, then the server MUST use (sha256,rsa) for
   digitally_signed on the ServerKeyExchange handshake message for
   TLS cipher suites using RSA authentication, and the server MUST use
   (sha256,ecdsa) for TLS cipher suites using ECDSA authentication.

   The server behaviour ought to be consistent both upon receipt of an
   extension-less SSLv3+ ClientHello handshake message and upon receipt
   of a backwards-compatible SSL VERSION 2 CLIENT-HELLO (which cannot
   convey any TLS extensions), as described in Appendix E.2 of rfc5246
   and as permitted by bullet 2 in section 3 of RFC6176.
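The proposed server-side default can be sketched in a few lines. This is a hypothetical illustration, not code from any real TLS stack; the constant values follow the TLS SignatureAndHashAlgorithm registry, but the function name and the suite-to-auth mapping are mine.

```python
# Hypothetical sketch of the server default proposed above: when the
# ClientHello carries no signature_algorithms extension, fall back to
# SHA-256 paired with the cipher suite's authentication algorithm
# instead of failing the handshake.

SHA256 = 0x04                 # HashAlgorithm.sha256 (rfc5246)
RSA, ECDSA = 0x01, 0x03       # SignatureAlgorithm.rsa / .ecdsa

def default_sig_alg(cipher_suite_auth, client_sig_algs=None):
    """Return the (hash, signature) pair to use for digitally_signed."""
    if client_sig_algs:                  # extension present: honour it
        return client_sig_algs[0]
    if cipher_suite_auth == "RSA":       # extension absent: assume SHA-256
        return (SHA256, RSA)
    if cipher_suite_auth == "ECDSA":
        return (SHA256, ECDSA)
    raise ValueError("unsupported authentication algorithm")

print(default_sig_alg("RSA"))            # (4, 1)  i.e. (sha256, rsa)
print(default_sig_alg("ECDSA"))          # (4, 3)  i.e. (sha256, ecdsa)
```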



As described in RFC6151, collision attacks on MD5 were already appearing
in publications in 2004, 2005, 2006 and 2007, so the TLSv1.2 spec
(rfc5246) should never have *NEWLY* added support for (md5,rsa) in
TLSv1.2 digitally_signed.

Similarly, since the sunset date for SHA1-signatures had been
announced by NIST *before* TLSv1.2 (rfc5246) was published,
TLSv1.2 (rfc5246) should never have *NEWLY* added support for
(sha1,rsa) in TLSv1.2 digitally_signed, but should have used
(sha256,rsa) from the beginning.  sha256 was already required for the
TLSv1.2 PRF and for HMAC-SHA256 of several cipher suites anyway, so
there was no excuse for not using sha256 in TLSv1.2 digitally_signed
in the first place.


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] TLS 1.3 Problem?

2020-09-29 Thread Martin Rex
Michael D'Errico  wrote:
> 
> Since RFC 8446 obsoletes RFC 5246, this is a serious problem.
> 
> How is this supposed to work?   Sorry but I did not follow the
> development of TLS 1.3.  I felt that I was unwelcome in this
> group by some of the "angry cryptographers" as I call them.

The "Obsoletes" markers used in TLS documents (rfc4346, rfc5246, rfc8446)
are pure crypto-political bullshit; they are completely nonsensical
with respect to the editorial meaning of "Obsoletes:" in the RFC
document series, as it was explained in rfc2223:

   Obsoletes

  To be used to refer to an earlier document that is replaced by
  this document.  This document contains either revised information,
  or else all of the same information plus some new information,
  however extensive or brief that new information is; i.e., this
  document can be used alone, without reference to the older
  document.

IPv6 specification(s) can not possibly obsolete IPv4 specification(s),
and the HTTP/2 spec does not obsolete the HTTP/1.1 spec (and does not try to).

Example for *correct* obsoletion:

If you want to implement the Simple Mail Transfer Protocol (SMTP),
it is sufficient to only ever read and refer to rfc5321 and _never_
look at rfc2821 or rfc821 at all -- and you can still expect your
implementation of rfc5321 to interop fine with older implementations
that were created based on rfc821 or rfc2821.


The same is impossible for TLSv1.1 (rfc4346), TLSv1.2 (rfc5246)
and TLSv1.3 (rfc8446). An implementor reading only rfc8446 will
be completely unable to interop with TLSv1.0, TLSv1.1 and TLSv1.2
implementations -- and I am not even sure whether TLSv1.3 can
be implemented with rfc8446 alone and never looking at rfc5246.


-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-08-02 Thread Martin Rex
Hubert Kario  wrote:
> On Wednesday, 1 May 2019 01:49:52 CEST Martin Rex wrote:
>> 
>> It is formally provable that from the three protocol versions:
>> 
>>  TLSv1.0, TLSv1.1, TLSv1.2
>> 
>> the weakest one is TLSv1.2, because of the royally stupid downgrade
>> in the strength of digitally signed.
>> 
>> 
>> Disabling TLSv1.0 will only result in lots of interop failures
>> and pain, but no improvement in security.
> 
> We've been over this Martin, the theoretical research shows that for Merkle-
> Damgård functions, combining them doesn't increase their security 
> significantly.
> 
> And the practical research:
> https://eprint.iacr.org/2016/131.pdf
> https://www.iacr.org/archive/asiacrypt2009/59120136/59120136.pdf
> only confirms that.
> 
> So, please, use a bit less inflammatory language when you have no factual 
> arguments behind your assertions.


I recently looked into the practical research paper you referenced,
to see what it **REALLY** says.  Maybe you should have read it first,
to check the *true* prerequisites of the attack and what it really
means for the security of digitally_signed using SHA1||MD5 in
TLSv1.0+TLSv1.1.

From what I can read in that paper, it actually confirms that the
security of digitally_signed using SHA1||MD5 for the TLSv1.0
ServerKeyExchange handshake message for TLS_ECDHE_RSA_WITH_* is in the
same ballpark as the security of SHA256, because of the small size of
the data that gets signed.



from page 3 of this paper:

   https://eprint.iacr.org/2016/131.pdf

   In this paper, we devise the first second preimage attack on the
   concatenation combiner of Merkle-Damgård hash functions which is
   faster than 2^n.  As in related attacks (and in particular, [23])
   we obtain a tradeoff between the complexity of the attack and the
   length of the target message. In particular, our second preimage
   attack is faster than 2^n only for input messages of length at
   least[*2]  2^(2*n/7).  The optimal complexity[*3] of our attack is
   2^(3*n/4), and is obtained for (very) long messages of length 2^(3*n/4).
   Due to these constraints, the practical impact of our second preimage
   attack is limited and its main significance is theoretical.
   Namely, it shows that the concatenation of two Merkle-Damgård hash
   functions is not as strong as a single ideal hash function.


  [*2]  For example, for n=160 and message block length 512 bits (as in SHA-1),
the attack is faster than 2^160 only for messages containing at least
2^48 blocks, or 2^52 bytes.

  [*3]  The complexity formulas do not take into account (small) constant
factors, which are generally ignored throughout this paper.


FYI   2^52 bytes == 4,503,599,627,370,496 bytes ==  4 PETA-Bytes

i.e. you need an AWFULLY HUGE amount of "wiggle room" in order to
efficiently construct a multi-collision, and the paper mentions that
if you have just tiny wiggle-room available, you are out-of-luck,
there is no "fast" algorithm for an attack.  More so if you also
have to account for a leading length field in the message.
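The numbers quoted above are easy to sanity-check (n = 160 for SHA-1; the variable names are mine, the formulas are from the paper):

```python
# Quick check of the figures quoted from the paper and the FYI line.

n = 160
optimal_effort_exponent = 3 * n // 4      # optimal attack cost is 2^(3n/4)
min_length_exponent = 2 * n / 7           # attack needs >= 2^(2n/7)-block messages

bytes_2_52 = 2 ** 52                      # the "wiggle room" from the footnote
print(optimal_effort_exponent)            # 120  -> effort 2^120, still huge
print(bytes_2_52)                         # 4503599627370496
print(bytes_2_52 / 2 ** 50)               # 4.0  -> 4 pebibytes
```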


For comparison size of digitally-signed data in ServerKeyExchange:

   struct {
       select (KeyExchangeAlgorithm) {
           case dh_anon:
               ServerDHParams params;
           case dhe_dss:
           case dhe_rsa:
               ServerDHParams params;
               digitally-signed struct {
                   opaque client_random[32];
                   opaque server_random[32];
                   ServerDHParams params;
               } signed_params;
           case rsa:
           case dh_dss:
           case dh_rsa:
               struct {} ;
               /* message is omitted for rsa, dh_dss, and dh_rsa */
           /* may be extended, e.g., for ECDH -- see [TLSECC] */
       };
   } ServerKeyExchange;

for a DHE-2048 bit keypair, the size of signed data is typically

32 + 32 + 2 + 256 + 2 + 1 + 2 + 256  = 583 bytes

for ECDHE P-384, the size of signed data is typically

32 + 32 + 1 + 2 + 1 + 97  = 165 bytes 
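The byte counts above can be verified quickly; the per-field breakdown in the comments is my reading of the rfc5246 ServerDHParams and rfc4492 ServerECDHParams encodings, so treat it as illustrative:

```python
# Recomputing the signed-data sizes: the two 32-byte randoms plus the
# encoded key-exchange parameters (with their length prefixes).

randoms = 32 + 32

# DHE-2048: 2-byte len + p (256), 2-byte len + g (1 byte here),
# 2-byte len + Ys (256)
dhe_2048 = randoms + 2 + 256 + 2 + 1 + 2 + 256

# ECDHE P-384: curve_type (1) + named_curve id (2) + point length (1)
# + uncompressed point 0x04 || x || y (1 + 48 + 48 = 97 bytes)
ecdhe_p384 = randoms + 1 + 2 + 1 + 97

print(dhe_2048)    # 583
print(ecdhe_p384)  # 165
```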


So for TLS_ECDHE_RSA_WITH_* cipher suites, you have a *MUCH* stronger
baseline security with TLSv1.0 + TLSv1.1 (SHA1||MD5) than with TLSv1.2
(SHA1-only).


-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-05-14 Thread Martin Rex
Hubert Kario  wrote:
> 
> there are attacks, like BEAST, that TLS 1.0 is vulnerable to that
> TLS 1.1 and TLS 1.2 are not - that's a fact there are ciphersuites
> that are invulnerable to Lucky13 and similar style of attacks that
> can not be used with TLS 1.0 or TLS 1.1 - that's a fact

BEAST is an attack against Web Browsers (and the abuse known as SSL-VPNs),
it is *NO* attack against TLS -- whose design properties are described
in appendix F of rfc5246, and there is a trivial workaround for those
few apps that were affected.  Continued mentioning of BEAST really
only means one thing: severe crypto-cluelessness.

  http://www.educatedguesswork.org/2011/11/rizzoduong_beast_countermeasur.html

There are two things that BEAST showed:
   Running arbitrary attacker-supplied active content is a bad idea!
   Performing protocol version downgrade dances is a bad idea.

Lucky Thirteen applies equally to all three: TLSv1.0, TLSv1.1 and TLSv1.2,
but was a real-world issue only for borked implementations of DTLS (those
implementations that were providing a no-limits guessing oracle).

> 
> that doesn't sound to me like "ZERO security benefit",

You seem to be conflating
  (1) ensuring that TLSv1.2 support is enabled
with
  (2) disabling TLSv1.0 + TLSv1.1 support.

If you do (1), then (2) does not add security benefits.
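The distinction can be made concrete with Python's ssl module (the module and attributes below are real; which of the two policies to pick is the point of the argument):

```python
# (1) vs (2) as TLS client configuration policies.
import ssl

# (1) ensure TLSv1.2 support is enabled: negotiate the best common
#     version, which is >= TLSv1.2 whenever the peer supports it.
ctx1 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# (2) additionally refuse TLSv1.0/TLSv1.1 peers outright -- the step
#     that causes interop failures without changing what two
#     TLSv1.2-capable peers negotiate between themselves.
ctx2 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx2.minimum_version = ssl.TLSVersion.TLSv1_2
```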

> 
>> On digitally_signed, is proven that TLSv1.2 as defined by rfc5246
>> is the weakest of them all.
> 
> yes, provided that:
>  - MD5 is actually in use
>  - or Joux does not hold and MD5+SHA1 is _meaningfully_ stronger[1]
> than SHA-1 alone *and* SHA-1 is actually in use

MD5 || SHA-1  is **ALWAYS** meaningfully stronger than SHA-1 alone, *NO* if!

>  
>> The POODLE paper
>>https://www.openssl.org/~bodo/ssl-poodle.pdf
>> 
>> asserts that many clients doing downgrade dances exist, and at the
>> time of publication, this includes Mozilla Firefox, Google Chrome and
>> Microsoft Internet Explorer.
> 
> either we consider clients that haven't been updated for half a decade now to 
> be of importance, then disabling support for old protocol versions has 
> meaningful security benefit, or we ignore them as they include insignificant 
> percentage of users and are vulnerable to much easier attacks anyway
> 
> so, which way is it?

MSIE seems to still be doing downgrade dances _today_, btw.


-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-05-14 Thread Martin Rex
Hubert Kario  wrote:
> Martin Rex wrote:
>> Hubert Kario  wrote:
>>> MD5 was deprecated and removed by basically every library
>>> and can't be used in TLS 1.2, I specifically meant SHA1
>> 
>> MD5 deprecated ?  Nope, glaringly empty:
>>   https://www.rfc-editor.org/errata_search.php?rfc=5246
>> 
>> MD5 removed ? Mostly, but several implementors had to be prodded
>>   with CVE-2015-7575 (SLOTH) to remove it.
> 
> I meant in practice
> 
>> The real issue at hand is:
>> 
>>   Prohibiting TLSv1.0 and TLSv1.1 is going to result in lots of
>>   interop problems, while at the same time providing *ZERO*
>>   security benefit.
> 
> that's your opinion, not an established fact

You got this backwards.

There is a bold assertion that disabling TLSv1.0 and TLSv1.1 (alone)
would provide security benefits, but a complete lack of proof.
For digitally_signed, it is proven that TLSv1.2 as defined by rfc5246
is the weakest of them all.

>> 
>>   What *WOULD* provide *HUGE* benefit, would be to remove the
>>   dangerous "protocol version downgrade dance" from careless applications,
>>   that is the actual problem known as POODLE, because this subverts the
>>   cryptographic procection of the TLS handshake protocol.
>>  
>>   We've known this downgrade dance to be a problem since the discussion
>>   of what became rfc5746.  Prohibiting automatic protoocol version
>>   downgrade dances is going to ensure that two communication peers
>>   that support TLSv1.2 will not negotiate a lower TLS protocol version.
> 
> which exact piece of popular software actually still does that?
> It ain't curl, it ain't Chrome, it ain't Firefox.

It definitely was implemented in Chrome and Firefox, which is how this
poor document got onto standards track:
   
   https://tools.ietf.org/html/rfc7507

 TLS Fallback Signaling Cipher Suite Value (SCSV)
for Preventing Protocol Downgrade Attacks

>
> It also isn't something done automatically 
> by any TLS implementation that's even remotely popular:
> OpenSSL, NSS, GnuTLS, schannel, Secure Transport, go...

It is impossible to do this transparently, because a connection is
unusable after a fatal TLS handshake failure (or unexpected socket
closure).

Any application-level cleartext negotiation (such as HTTP CONNECT or
STARTTLS) will have to be repeated as well, and the TLS implementation
typically does not know about it.

The POODLE paper
   https://www.openssl.org/~bodo/ssl-poodle.pdf

asserts that many clients doing downgrade dances exist, and at the
time of publication, this includes Mozilla Firefox, Google Chrome and
Microsoft Internet Explorer.


-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-05-09 Thread Martin Rex
Hubert Kario  wrote:
>On Wednesday, 8 May 2019 02:31:57 CEST Martin Rex wrote:
>> Hubert Kario  wrote:
>>>> Thanks to Peter Gutmann for the summary:
>>>> https://mailarchive.ietf.org/arch/msg/tls/g0MDCdZcHsvZefv4V8fssXMeEHs
>>>> 
>>>> which you may have missed.
>>> 
>>> yes, Joux paper also shows that attacking MD5||SHA1 is harder than
>>> attacking SHA1 alone
>>> 
>>> but that doesn't matter, what matters is _how much harder it is_ and Joux
>>> paper says that it's less than a work factor of two, something also knows
>>> as a "rounding error" for cryptographic attacks
>> 
>> collision attacks and real-time 2nd preimage attacks on randomly keyed
>> hashes are substantially different things.
>> 
>> simple math seems hard.
>> 
>> 
>> TLSv1.0 + TLSv1.1 both use   (rsa, MD5||SHA1)
>> 
>> TLSv1.2 (rfc5246) permitted (rsa, MD5) and allows (rsa,SHA1)
> 
> side note on that, with ECDSA, all three versions use (ecdsa, sha1) so 
> everything we are discussing applies to RSA and RSA only

(EC)DSA is fatally flawed (a design flaw); no-one should be using it.
EdDSA, once it becomes available, might be OK.

I guess that all existing TLS implementations with ECDSA support might
be leaking (enough info to compute) the ECDSA private key to a mere passive
observer of a few thousand full TLS_ECDHE_ECDSA_* handshakes.



> 
> MD5 was deprecated and removed by basically every library
> and can't be used in TLS 1.2, I specifically meant SHA1

MD5 deprecated ?  Nope, glaringly empty:
  https://www.rfc-editor.org/errata_search.php?rfc=5246

MD5 removed ? Mostly, but several implementors had to be prodded
  with CVE-2015-7575 (SLOTH) to remove it.


The real issue at hand is:

  Prohibiting TLSv1.0 and TLSv1.1 is going to result in lots of
  interop problems, while at the same time providing *ZERO*
  security benefit.

  The installed base of software which is limited to TLSv1.0
  for outgoing TLS-protected communication is huge.


  What *WOULD* provide *HUGE* benefit would be to remove the
  dangerous "protocol version downgrade dance" from careless applications;
  that is the actual problem known as POODLE, because it subverts the
  cryptographic protection of the TLS handshake protocol.

  We've known this downgrade dance to be a problem since the discussion
  of what became rfc5746.  Prohibiting automatic protocol version
  downgrade dances is going to ensure that two communication peers
  that support TLSv1.2 will not negotiate a lower TLS protocol version.

  If applications doing downgrade dances had at least a basic amount
  of risk management, and would refuse to perform an unlimited amount
  of downgrades automatically and secretly, then everyone would be
  much better off.

  I've seen web browsers doing this entirely without risk management,
  and wasn't there some Java class which also did this?



And PLEASE stop unconditionally bashing SHA-1

I am constantly seeing crypto-clueless folks, including some national
governmental agencies, giving out their own TLS recommendations that
are typically sterile of scientific rationale, and it's pretty obvious
those folks either haven't read US NIST SP 800-57 part 1 rev.4 or have
not understood it.  In particular Table 3 on top of page 54, about the
significant difference between sha1WithRsaEncryption and HMAC-SHA1,
e.g. when used for integrity protection by TLS cipher suites such
as the TLSv1.2 MTI cipher suite TLS_RSA_WITH_AES_128_CBC_SHA.


Security  Digital Signatures and            HMAC, Key Derivation Functions,
Strength  hash-only applications            Random Number Generation

  <=80    SHA-1

   112    SHA-224, SHA-512/224, SHA3-224

   128    SHA-256, SHA-512/256, SHA3-256    SHA-1

   192    SHA-384, SHA3-384                 SHA-224, SHA-512/224

  >=256   SHA-512, SHA3-512                 SHA-256, SHA-512/256, SHA-384,
                                            SHA-512, SHA3-512


In particular, compare HMAC-SHA1 to the shorter GMAC integrity
protection afforded by AES-GCM cipher suites (rfc5288, rfc5289),
or to the even shorter integrity protection afforded
by AES-CCM cipher suites (rfc6655).


Lots of folks erroneously believe that
   TLS_RSA_WITH_AES_128_GCM_SHA256
and more so
   TLS_RSA_WITH_AES_256_GCM_SHA384

would provide stronger integrity protection than

   TLS_RSA_WITH_AES_128_CBC_SHA

while in reality, it is just the opposite.
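The tag sizes behind this comparison are easy to exhibit. A minimal sketch (the HMAC computation is real; the GCM/CCM tag sizes are stated as constants from the cited RFCs rather than computed):

```python
# HMAC-SHA1 (as in TLS_RSA_WITH_AES_128_CBC_SHA) produces a 160-bit
# record MAC, while the AES-GCM suites of rfc5288/rfc5289 carry a
# 128-bit GHASH tag, and the *_CCM_8 suites of rfc6655 truncate the
# tag to 64 bits.
import hashlib
import hmac

key, record = b"k" * 20, b"application data"

hmac_sha1_tag = hmac.new(key, record, hashlib.sha1).digest()
print(len(hmac_sha1_tag) * 8)   # 160

GCM_TAG_BITS = 128              # fixed tag size in the AES-GCM suites
CCM_8_TAG_BITS = 64             # truncated tag in the AES-CCM_8 suites
```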


-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-05-07 Thread Martin Rex
Hubert Kario  wrote:
>> 
>> Thanks to Peter Gutmann for the summary:
>> 
>> https://mailarchive.ietf.org/arch/msg/tls/g0MDCdZcHsvZefv4V8fssXMeEHs
>> 
>> which you may have missed.
> 
> yes, Joux paper also shows that attacking MD5||SHA1 is harder than attacking  
> SHA1 alone
> 
> but that doesn't matter, what matters is _how much harder it is_ and Joux 
> paper says that it's less than a work factor of two, something also knows
> as a "rounding error" for cryptographic attacks

collision attacks and real-time 2nd preimage attacks on randomly keyed
hashes are substantially different things.

simple math seems hard.


TLSv1.0 + TLSv1.1 both use   (rsa, MD5||SHA1)

TLSv1.2 (rfc5246) permitted (rsa, MD5) and allows (rsa,SHA1)

if we assumed that there *existed* (it currently doesn't, mind you)

a successful preimage attack on MD5  with effort  2^20
a successful preimage attack on SHA1 with effort  2^56

then, if Joux applied not just to multicollisions but also to
2nd preimages, the efforts would be:

  TLSv1.2 (rsa,MD5)  2^20
  TLSv1.2 (rsa,SHA1) 2^56

  TLSv1.0 (rsa, MD5||SHA1) >= 2^57 (slightly more than the stronger of the two)


Comparing  TLSv1.0 (rsa,MD5||SHA1) 2^57  with TLSv1.2 (rsa,MD5) 2^20

A factor 2^37 is significantly more than "marginally stronger".
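Spelling the comparison out (these are the *assumed*, hypothetical attack costs from the argument above, not real attack complexities):

```python
# Hypothetical preimage-attack efforts assumed in the message.
md5_effort = 2 ** 20
sha1_effort = 2 ** 56

tls12_md5 = md5_effort               # TLSv1.2 (rsa, MD5)
tls12_sha1 = sha1_effort             # TLSv1.2 (rsa, SHA1)
tls10_combined = 2 ** 57             # TLSv1.0 (rsa, MD5||SHA1), slightly
                                     # more than the stronger of the two

advantage = tls10_combined // tls12_md5
print(advantage == 2 ** 37)          # True
```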


If you are aware of successful 2nd preimage attacks on
either MD5 or SHA1, please provide references.

-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-05-06 Thread Martin Rex
Hubert Kario  wrote:
> On Friday, 3 May 2019 16:56:54 CEST Martin Rex wrote:
>> Hubert Kario  wrote:
>> > We've been over this Martin, the theoretical research shows that for
>> > Merkle- Damgård functions, combining them doesn't increase their security
>> > significantly.
>> 
>> You are completely misunderstanding the results.
>> 
>> The security is greatly increased!
> 
> like I said, that were the follow up papers
> 
> the original is still Joux:
> https://www.iacr.org/archive/crypto2004/31520306/multicollisions.pdf

Thanks to Peter Gutmann for the summary:

https://mailarchive.ietf.org/arch/msg/tls/g0MDCdZcHsvZefv4V8fssXMeEHs

which you may have missed.

> 
>>   TLSv1.2 (rsa,MD5) *cough* -- which a depressingly high number of clueless
>>   implementers actually implemented, see SLOTH
> 
> SLOTH?

SLOTH is a brainfart in the TLSv1.2 spec which is blatantly obvious.

If (md5,rsa) was actually shipped in a TLSv1.2 implementation, it indicates
a dysfunctional (or crypto-clueless) QA for the project.


The erroneous implementation of (md5,rsa) was silently removed from openssl
*without* CVE, after I privately complained about this brainfart having
been added to openssl.


I ranted about the TLSv1.2 digitally_signed brainfart in rfc5246 on
the IETF TLS WG mailing list here (01-Oct-2013):

https://mailarchive.ietf.org/arch/msg/tls/l_R94xX7myvL9x8I_7L7NiDjV9w

assuming that crypto clue and common sense should work.

Looking at what was still affected by the problem end of 2014,
it seems that you *MUST* hit TLS implementors with a CVE

https://www.mitls.org/pages/attacks/SLOTH#disclosure

and can not rely on crypto-clue and common sense.


-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-05-03 Thread Martin Rex
Hubert Kario  wrote:
> 
> We've been over this Martin, the theoretical research shows that for Merkle-
> Damgård functions, combining them doesn't increase their security 
> significantly.

You are completely misunderstanding the results.

The security is greatly increased!

Nobody is afraid of the exhaustive search preimage attacks.

What folks with a little crypto clue are afraid of is
significantly-faster-than-exhaustive-search real-time preimage attacks.
And this is where

  TLSv1.0 + TLSv1.1 (rsa,SHA1+MD5)

is *significantly* stronger than

  TLSv1.2 (rsa,MD5) *cough* -- which a depressingly high number of clueless
  implementers actually implemented, see SLOTH
  TLSv1.2 (rsa,SHA1)


That is also trivially formally provable.

Assume that a real-time preimage attack for *one* of the functions is
discovered, and compare the resulting efforts.

 
-Martin



Re: [TLS] WGLC for "Deprecating TLSv1.0 and TLSv1.1"

2019-04-30 Thread Martin Rex
Martin Thomson  wrote:
> On Sat, Apr 27, 2019, at 07:29, Viktor Dukhovni wrote:
>> The sound-bite version is: first raise the ceiling, *then* the floor.
> 
> Yep.  We've done the ceiling bit twice now.
> Once in 2008 when we published TLS 1.2 and then in 2018
> with the publication of TLS 1.3.  I'd say we're overdue for the floor bit.

Just that this rationale is a blatant lie.

It is formally provable that from the three protocol versions:

 TLSv1.0, TLSv1.1, TLSv1.2

the weakest one is TLSv1.2, because of the royally stupid downgrade
in the strength of digitally signed.


Disabling TLSv1.0 will only result in lots of interop failures
and pain, but no improvement in security.


-Martin



Re: [TLS] Further TLS 1.3 deployment updates

2018-12-14 Thread Martin Rex
Nico Williams  wrote:
> On Wed, Dec 12, 2018 at 04:21:43PM -0600, David Benjamin wrote:
>> We have one more update for you all on TLS 1.3 deployment issues. Over the
>> course of deploying TLS 1.3 to Google servers, we found that JDK 11
>> unfortunately implemented TLS 1.3 incorrectly. On resumption, it fails to
>> send the SNI extension. This means that the first connection from a JDK 11
>> client will work, but subsequent ones fail.
>> https://bugs.openjdk.java.net/browse/JDK-8211806
> 
> I'm told that OpenSSL accidentally takes the SNI from the initial
> connection on resumption if there's no SNI in the resumption.  This
> seems like a very good workaround for the buggy JDK 11 TLS 1.3 client,
> as it has no fingerprinting nor downgrade considerations.

Just that this workaround is a no-go for any layered approach
to SNI, where server-side processing of SNI is outside of the TLS stack.

-Martin



Re: [TLS] Certificate keyUsage enforcement question (new in RFC8446 Appendix E.8)

2018-11-07 Thread Martin Rex
Geoffrey Keating  wrote:
> Viktor Dukhovni  writes:
>> 
>> TL;DR:  Should TLS client abort DHE-RSA handshakes with a peer
>> certificate that *only* lists:
>> 
>> X509v3 Key Usage: 
>> Key Encipherment, Data Encipherment
> 
> Yes, because in DHE-RSA, the RSA key is used for signing, and this is
> an encryption-only key.


There is *ZERO* security problem associated with a TLS client allowing
a TLS server to do this, but it makes it harder to catch defective
CA software and bogus CA issuing practices when clients do not complain
here -- and the TLS specification says the KeyUsage DigitalSignature
bit is a MUST for DHE/ECDHE key exchange:

  TLSv1.2:  https://tools.ietf.org/html/rfc5246#page-49

      DHE_RSA         RSA public key; the certificate MUST allow the
      ECDHE_RSA       key to be used for signing (the
                      digitalSignature bit MUST be set if the key
                      usage extension is present) with the signature
                      scheme and hash algorithm that will be employed
                      in the server key exchange message.
                      Note: ECDHE_RSA is defined in [TLSECC].

  TLSv1.0:  https://tools.ietf.org/html/rfc2246#page-38
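The rule quoted above can be sketched as a small check. This is an illustrative model of the rfc5246 requirement, not code from a real certificate validator; the flag names follow X.509 KeyUsage, and the function name is mine:

```python
# rfc5246 keyUsage rule: for DHE_RSA/ECDHE_RSA the server certificate
# must assert digitalSignature if a KeyUsage extension is present;
# for static-RSA key exchange, keyEncipherment is what matters.

def key_usage_ok(key_exchange, key_usage_flags):
    """key_usage_flags is None when the KeyUsage extension is absent."""
    if key_usage_flags is None:
        return True                      # no extension: nothing to enforce
    if key_exchange in ("DHE_RSA", "ECDHE_RSA"):
        return "digitalSignature" in key_usage_flags
    if key_exchange == "RSA":            # static-RSA key exchange
        return "keyEncipherment" in key_usage_flags
    return True

# The case from the question: encryption-only cert used with DHE-RSA.
print(key_usage_ok("DHE_RSA", {"keyEncipherment", "dataEncipherment"}))  # False
```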


CAs and CA software that issues certificates as TLS server certificates
(i.e. with ExtKeyUsage  id-kp-serverAuth, id-kp-clientAuth or both) and
forgets to assert DigitalSignature, prove their own royal brokenness.


Using an RSA key for PKCS#1 v1.5 signatures is *NO* security problem.

Do not get confused by the FUD and snake-oil that resulted in the
needless additional complexity of RSA-PSS in TLSv1.3, that adds ZERO
security value.

   https://www.schneier.com/blog/archives/2018/09/evidence_for_th.html

   https://eprint.iacr.org/2018/855


There is some security risk in using an RSA signing-only key
for PKCS#1 v1.5 encryption, i.e. the equivalent of using a keyUsage
without keyEncipherment for static-RSA key exchange.

-Martin



Re: [TLS] WGLC for draft-ietf-tls-sni-encryption

2018-10-17 Thread Martin Rex
Eric Rescorla  wrote:
> Martin Rex  wrote:
> 
> > Sean Turner  wrote:
> > >
> > > This is the working group last call for the
> > > "Issues and Requirements for SNI Encryption in TLS"
> > > draft available at
> > > http://datatracker.ietf.org/doc/draft-ietf-tls-sni-encryption/.
> > > Please review the document and send your comments to the list
> > > by 2359 UTC on 31 October 2018.
> >
> >
> > I think the idea of encrypted SNI is inherently flawed in its concept.
> >
> 
> It's pretty late to raise this point

Nope, I've raised this *EVERY* time on the list when the dead horse was
newly beaten.


> 
>> As it is, there are a number of servers which desperately require
>> the presence of TLS extension SNI, or will fail TLS handshakes either
>> by choking and dropping connections (Microsoft IIS 8.5+) or by
>> very unhelpful alerts (several others), and also HTTP/2.0 requires
>> unconditional cleartext presence of TLS extension SNI.  Any kind of
>> heuristics-based approach for clients to guess whether or not to
>> send TLS extension SNI is flawed from the start.  If a network
>> middlebox can make a client present a cleartext TLS extension SNI
>> by refusing connections without cleartext TLS extension SNI,
>> the entire effort becomes pretty useless.
> 
> Yes, clients must not fall back to cleartext SNI in this case.

Please give a clear deterministic algorithm how a client can
tell apart a server that requires cleartext SNI from a server
that does not want cleartext SNI.


>
>   It is necessary
> > that the client knows reliably that a hostname must not be sent
> > in the clear, including when the connection fails for unknown reasons,
> > and only a new URI method will reliably provide such a clear distinction.
> >
> 
> I don't agree with this claim, given that we have a number of other proposed
> mechanisms for the client to know when ESNI is allowed, including DNS.

DNS is a non-starter for several reasons.

Ever heard of firewalled networks, private DNS universes and HTTP CONNECT
proxies?

Then the TLS implementation itself may be completely free of blocking
network IO.  


> 
>> By sending TLS extension SNI in the clear to a server, the client
>> tells that server:  I am going to perform an rfc2818 "HTTP over TLS"
>> section 3.1 "Server Endpoint Identification" matching
> 
> I don't know where you get this from, given that RFC 6066 doesn't
> even cite 2818.

Simply to avoid a downref.  I should not have to explain this to you.

rfc2818, section 3.1:

   3.1.  Server Identity

   In general, HTTP/TLS requests are generated by dereferencing a URI.
   As a consequence, the hostname for the server is known to the client.
   If the hostname is available, the client MUST check it against the
   server's identity as presented in the server's Certificate message,
   in order to prevent man-in-the-middle attacks.

rfc6066, section 3:

   3.  Server Name Indication

   TLS does not provide a mechanism for a client to tell a server the
   name of the server it is contacting.  It may be desirable for clients
   to provide this information to facilitate secure connections to
   servers that host multiple 'virtual' servers at a single underlying
   network address.


It looks blatantly obvious to me that
  rfc2818:  "check the hostname of the server against the server's identity
 as presented in the server's Certificate message"
and
  rfc6066:  "a mechanism for a client to tell a server the name of the server
 it is contacting"

refers to the VERY SAME THING, and that the check urged by rfc2818 section 3.1
is the reason why the server responding with the _wrong_ certificate is
a problem.


> 
> In protocol version SSLv3->TLSv1.2, encryption keys are only established
>
>> *AFTER* successful authentication of the server through its server
>> certificate. So it was obviously impossible to encrypt the information
>> whose only purpose it was to allow the server to decide *which* TLS Server
>> certificate to use for authentication (hen-and-egg).
> 
> This isn't really correct: the mechanism for encrypting SNI itself would
> actually work fine in previous versions of TLS as well.

Actually, no, it will not work at all in TLSv1.2
(this would not be TLSv1.2 anymore, or an entirely different TLS extension)

My server-side implementation of TLS extension SNI is entirely outside
of the TLS protocol stack.  My middleware selects the Server certificate,
and my middleware also provides the convenience function for rfc2818
section 3.1 server endpoint identification as well as the client-side
SSL session cache management, because you essentially can not do this
within TLS, and rfc5

Re: [TLS] WGLC for draft-ietf-tls-sni-encryption

2018-10-17 Thread Martin Rex
Sean Turner  wrote:
>
> This is the working group last call for the
> "Issues and Requirements for SNI Encryption in TLS"
> draft available at
> http://datatracker.ietf.org/doc/draft-ietf-tls-sni-encryption/.
> Please review the document and send your comments to the list
> by 2359 UTC on 31 October 2018.


I think the idea of encrypted SNI is inherently flawed in its concept.


If anyone really thinks that there should be a scheme where a server's
hostname is no longer transferred in cleartext (including TLS extension SNI),
then first of all a *NEW* distinct URI scheme should be defined for that
purpose,  e.g. "httph://"  as a reliable indicator to the client processing
this URI, that the hostname from this URI is not supposed to be sent
over the wire in the clear *anywhere*.

As it is, there are a number of servers which desperately require
the presence of TLS extension SNI, or will fail TLS handshakes either
by choking and dropping connections (Microsoft IIS 8.5+) or by
very unhelpful alerts (several others), and also HTTP/2 requires
unconditional cleartext presence of TLS extension SNI.  Any kind of
heuristics-based approach for clients to guess whether or not to
send TLS extension SNI is flawed from the start.  If a network
middlebox can make a client present a cleartext TLS extension SNI
by refusing connections without cleartext TLS extension SNI,
the entire effort becomes pretty useless.  It is necessary
that the client knows reliably that a hostname must not be sent
in the clear, including when the connection fails for unknown reasons,
and only a new URI scheme will reliably provide such a clear distinction.
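
A client-side decision rule keyed off such a new URI scheme could be as
simple as the following sketch (the "httph" scheme name is the hypothetical
example above, not a registered scheme):

```python
from urllib.parse import urlsplit

# Schemes whose hostname must never appear on the wire in the clear
# (no cleartext SNI, no heuristic fallback on connection failure).
# "httph" is the hypothetical scheme suggested above.
CLEARTEXT_HOSTNAME_FORBIDDEN = {"httph"}

def may_send_cleartext_sni(url: str) -> bool:
    """Reliable, non-heuristic decision based solely on the URI scheme."""
    return urlsplit(url).scheme.lower() not in CLEARTEXT_HOSTNAME_FORBIDDEN

print(may_send_cleartext_sni("https://www.example.com/"))  # True
print(may_send_cleartext_sni("httph://hidden.example/"))   # False
```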



I also believe the draft contains flawed assumptions and
misleading descriptions of facts and history.



e.g. Section 2.2

   "The common name component of the server certificate generally exposes
the same name as the SNI.  In TLS versions 1.0 [RFC2246], 1.1
[RFC4346], and 1.2 [RFC5246]"

Server certificate name matching has long been based on the
SubjectAltName attributes, unless you were hiding under some stone with
no internet access for more than a decade...

rfc2818 "HTTP over TLS" section 3.1 "Server Endpoint Identification"
described retroactively what kind of name matching TLS clients (browsers)
were doing -- and what all TLS clients are supposed to be doing on a
TLS server certificate.

 https://tools.ietf.org/html/rfc2818#section-3

This approach was recommended for use in other protocols besides
HTTP over TLS by RFC 6125.


By sending TLS extension SNI in the clear to a server, the client
tells that server:  I am going to perform an rfc2818 "HTTP over TLS"
section 3.1 "Server Endpoint Identification" matching -- and if you
have several different TLS server certificates to choose from,
you better send me one that is going to succeed this specific matching,
which I am going to perform on your TLS ServerCertificate response.



The TLS server certificate could only be conveyed *IN*THE*CLEAR* in SSLv3,
still only in the clear in TLSv1.0, when TLS extension SNI was
proposed in rfc3546 to allow the virtual hosting known from HTTP to work
with HTTPS, and still only in the clear in TLSv1.2, when SNI was rev'ed
by rfc6066.

In protocol version SSLv3->TLSv1.2, encryption keys are only established
*AFTER* successful authentication of the server through its server
certificate. So it was obviously impossible to encrypt the information
whose only purpose it was to allow the server to decide *which* TLS Server
certificate to use for authentication (hen-and-egg).

DH_anon cipher suites do not have a server certificate handshake message,
and they are well-known to be completely insecure to man-in-the-middle
attacks anyways, which is why TLSv1.2 (rfc5246) says this about them:

   The following cipher suites are used for completely anonymous
   Diffie-Hellman communications in which neither party is
   authenticated.  Note that this mode is vulnerable to man-in-the-
   middle attacks.  Using this mode therefore is of limited use: These
   cipher suites MUST NOT be used by TLS 1.2 implementations unless the
   application layer has specifically requested to allow anonymous key
   exchange.


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [CAUTION] Re: Fwd: New Version Notification for draft-moriarty-tls-oldversions-diediedie-00.txt

2018-07-10 Thread Martin Rex
m...@sap.com (Martin Rex) wrote:
> Andrei Popov  wrote:
>>
>> On the recent Windows versions, TLS 1.0 is negotiated more than 10%
>> of the time on the client side (this includes non-browser connections
>> from all sorts of apps, some hard-coding TLS versions),
>> and TLS 1.1 accounts for ~0.3% of client connections.
> 
> "On recent Windows versions" sounds like the figure might not account
> for Windows 7 and Windows Server 2008R2, about half of the installed
> base of Windows, and where the numbers are likely *MUCH* higher.
> 
> When troubleshooting TLS handshake failures, I sometimes try
> alternative SSL/TLS clients on customer machines through remote support,
> and it seems when I run this command on a Windows 2012R2 server:
> 
> powershell "$web=New-Object System.Net.WebClient ; 
> $web.DownloadString('https://www.example.com/')" 2>&1
> 
> it connects with TLSv1.0 only, and this is a client-side limitation.
> 
> To make it use TLSv1.2, I would have to use
> 
> powershell "[Net.ServicePointManager]::SecurityProtocol = 
> [Net.SecurityProtocolType]::Tls12 ; $web=New-Object System.Net.WebClient ; 
> $web.DownloadString('https://www.example.com/')" 2>&1
> 
> i.e. explicit opt-in.


btw. I checked this on a Windows 10 (1709) machine, and its PowerShell
also tries connecting with TLSv1.0 only.

To me, it looks more like 100% of the Microsoft Windows installed
base is not ready for a TLSv1.2-only world.


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Fwd: New Version Notification for draft-moriarty-tls-oldversions-diediedie-00.txt

2018-07-09 Thread Martin Rex
Andrei Popov  wrote:
>
> On the recent Windows versions, TLS 1.0 is negotiated more than 10%
> of the time on the client side (this includes non-browser connections
> from all sorts of apps, some hard-coding TLS versions),
> and TLS 1.1 accounts for ~0.3% of client connections.

"On recent Windows versions" sounds like the figure might not account
for Windows 7 and Windows Server 2008R2, about half of the installed
base of Windows, and where the numbers are likely *MUCH* higher.

When troubleshooting TLS handshake failures, I sometimes try
alternative SSL/TLS clients on customer machines through remote support,
and it seems when I run this command on a Windows 2012R2 server:

powershell "$web=New-Object System.Net.WebClient ; 
$web.DownloadString('https://www.example.com/')" 2>&1

it connects with TLSv1.0 only, and this is a client-side limitation.

To make it use TLSv1.2, I would have to use

powershell "[Net.ServicePointManager]::SecurityProtocol = 
[Net.SecurityProtocolType]::Tls12 ; $web=New-Object System.Net.WebClient ; 
$web.DownloadString('https://www.example.com/')" 2>&1

i.e. explicit opt-in.
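
A comparable explicit opt-in exists in other stacks as well; a sketch using
Python's standard ssl module (illustrative only, not part of the Windows
tooling discussed above):

```python
import ssl

# Explicitly opt in to TLSv1.2 as the protocol floor for client
# connections, analogous to setting
# [Net.SecurityProtocolType]::Tls12 in .NET above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```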


I already have a long list of stuff that uses TLSv1.0 for outgoing
communication by default, and that list is constantly growing, and that
is not just stuff running with JavaSE 1.6 or JavaSE 1.7.
Btw. lots of J2EE Servers are still on JavaSE 1.6, without the
non-public TLSv1.2-capable update.


We also had customer incidents about hardware stuff, they called it "RF Gun",
probably some RFID scanner, that seems to be limited to TLSv1.0 and
TLSv1.0 cipher suites (either 3DES-EDE-CBC-SHA or RC4-128).


I would really hate to see the IETF enter the "planned/forced obsolescence"
market, growing the dumpsters of electronic equipment that would still work
just fine, but has to be retired for the sole purpose of economic growth.


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Broken browser behaviour with SCADA TLS

2018-07-06 Thread Martin Rex
Peter Gutmann  wrote:
> 
> (I have no idea about the prevalence of IE vs. others, but since it's the
> default browser for Windows I assume this will be the easiest/recommended path
> forwards, and until Windows 10 MS had a long history of being excruciatingly
> careful about backwards compatibility so it seems the safest bet).


The last time that I've seen a Microsoft Windows TLS implementation
show a reasonable level of interoperability was Windows2003sp2
with Hotfix 948963 installed.

Windows 2008R2 has severe interop problems with TLSv1.2,
and goofed the RSA premaster secret version check.
   https://www.ietf.org/mail-archive/web/tls/current/msg08139.html

Windows2012R2 made things worse, same TLSv1.2 interop failures,
plus default lack of backwards compatibility to non-SNI clients.


I was surprised when I just recently saw responses from Windows 2016
   Microsoft IIS/10.0 = presumably Windows 2016
   Microsoft IIS/8.5  = Windows 2012R2
   Microsoft IIS/7.5  = Windows 2008R2

While 2016 seems to have fixed two annoying TLSv1.2 interop failures that
affect 2008R2 and 2012R2 (about the TLSv1.2 signature_algorithms extension),
Win2016 also added two _new_ handshake failures (compared to 2008R2).

Comparing Win2016 behaviour for SSLv2Hello vs. extension-less SSLv3 Hello,
the new behaviour is just crazy: the server requires TLS extension SNI
for SSLv3 ClientHellos with client_version=(3,1) and (3,2),
but *NOT* for client_version=(3,3) and *NOT* for SSLv2Hellos (any version)!

I would expect that someone who cares about backwards compatibility
would test stuff at least once before shipping.

Windows 2008R2 and later looks like entirely untested to me
or maybe "tested only with MSIE and only in default config"
-- which is equivalent to "untested" on my scorecard.
Windows 2008R2 in default config had TLSv1.2 disabled,
and no one seem to have thought of testing with a sha256WithRsaEncryption
signed server certificate.


-Martin

All cells marked with ** are SChannel Bugs.


                               2003      2008R2    2012R2    2016

SSLv2Hello offering (3,1)      TLSv1.0   TLSv1.0   **FAIL    TLSv1.0

SSLv2Hello offering (3,2)      TLSv1.0   TLSv1.1   **FAIL    TLSv1.1

SSLv2Hello offering (3,3)      TLSv1.0   **TLSv1.1 **FAIL    **TLSv1.1

ClientHello (no extensions)    TLSv1.0   TLSv1.0   **FAIL    **FAIL
  client_version=(3,1)

ClientHello (SNI)              TLSv1.0   TLSv1.0   TLSv1.0   TLSv1.0
  client_version=(3,1)

ClientHello (no extensions)    TLSv1.0   TLSv1.1   **FAIL    **FAIL
  client_version=(3,2)

ClientHello (SNI)              TLSv1.0   TLSv1.1   TLSv1.1   TLSv1.1
  client_version=(3,2)

ClientHello (no extensions)    TLSv1.0   **FAIL    **FAIL    TLSv1.2
  client_version=(3,3)

ClientHello (SNI)              TLSv1.0   **FAIL    **FAIL    TLSv1.2
  client_version=(3,3)

ClientHello (SNI+sig_algs)     TLSv1.0   TLSv1.2   TLSv1.2   TLSv1.2
  client_version=(3,3)


The strange SChannel behaviour for SSLv2Hello offering (3,3) affects
MSIE 11 Win 7/2008R2 (and probably Win8/8.1/2012/2012R2 as well).
When "SSL Version 2" is enabled in "Internet Options" together with TLSv1.2
then interop with an TLSv1.2-enabled Microsoft IIS (SChannel) is still
possible, because of the server-side bug to negotiate only TLSv1.1.

But when MSIE offers TLSv1.2 in SSLv2Hello to a server with a correct
TLS implementation, and that server responds with protocol version TLSv1.2,
then MSIE chokes and dies.

  https://support.microsoft.com/en-us/kb/2851628

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Broken browser behaviour with SCADA TLS

2018-07-05 Thread Martin Rex
Martin Thomson  wrote:
> 
> The problem with DHE of course being that it uses the TLS 1.0 suites
> with the SHA1 MAC and with the MAC and encrypt in the wrong order.

I'm confused about what you are thinking here.

In TLSv1.0 through TLSv1.2 inclusive, all of the TLS handshake messages,
including the *KeyExchange handshake messages (with the exception of Finished)
are in the clear and neither MACed nor encrypted, so the ordering
MtE vs. EtM for the GenericBlockCipher record PDU seems quite irrelevant.

-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Broken browser behaviour with SCADA TLS

2018-07-05 Thread Martin Rex
Peter Gutmann  wrote:
> David Benjamin ? writes:
> 
>>The bad feedback was not even at a 2048-bit minimum, but a mere 1024-bit
>>minimum. (Chrome enabled far more DHE ciphers than others, so we encountered
>>a lot of this.) 2048-bit was completely hopeless. At the time of removal, 95%
>>of DHE negotiations made by Chrome used a 1024-bit minimum.
> 
> How does Google, rather than the people running the systems being connected
> to, know that 1024-bit DH isn't secure enough for a given environment?  The
> majority of this stuff is running on isolated, private networks or inside VPN
> tunnels for which pretty much anything, including 512-bit keys, are fine.

Silently removing stuff (support for TLS_DHE) is just as bad as silently
adding stuff (automatic, silent and unlimited-times protocol downgrade
dance in the browser which became famously known as POODLE).


> 
> It's also somewhat disturbing that Chrome now walks straight past a perfectly
> good PFS DHE suite and instead goes to a problematic pure-RSA one instead.

Cough, what?  TLS_DHE_* was known to be a security disaster beyond fixing
a good decade ago (14-May-2007):

   https://www.ietf.org/mail-archive/web/tls/current/msg01647.html

but it needed a LOGJAM demonstration to have folks look at the
crappy implementations in the installed base.

We didn't have support for TLS_DHE in our SSL implementation back then,
and I decided I never want to have it added.  We've added support for
TLS_ECDHE a few years ago, but TLS_DHE remains unsupported.  Probably
most usage scenarios that still offer TLS_DHE today, provide less security
than with static-RSA 2048-bit.


How often do your SCADA devices (= servers) regenerate their DHE params?
If they are using DH params for several weeks, months or even forever, then
there is essentially no PFS, and static RSA will be equal or better than DHE.
Static RSA-2048 will always be better than DHE-1024.  Simply regenerate
and roll your RSA key & cert if it makes you feel good.


btw. which kind of "problematic pure-RSA" are you talking about?
I'm not actually aware of any problem, and I'm much more concerned
about TLS servers with longterm DHE params or longterm ECDHE keys
and equally about servers using session tickets with a longterm session ticket
encryption key, because of the illusion of PFS, which they create but
do not offer.


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Mail regarding draft-ietf-tls-tls13

2018-06-19 Thread Martin Rex
Ben Personick  wrote:
>
> (My apology for the long email, I did not have time to write a shorter one)
>  We are currently evaluating when to begin offering ECC Certificates
>  based cypto on our websites.
> 
> Despite the advantages to doing this in TLS 1.2, there is a lot of
> push-back to wait until we "have to support it" once the TLS 1.3 draft
> is published, and the option to use it becomes available.

Honestly, why would you want to do this?

ECC/RSA dual-cert setups are cryptographically a bad idea, and a real
nuisance for interoperability.

Elliptic Curve Crypto, when used with the design-flawed ECDSA digital
signature algorithm, might leak the private key within a few thousand
TLS full handshakes to a mere passive observer.

Support for EdDSA is somewhere between thin and non-existent still.

And for programmatic TLS clients, which take security seriously, and
do not come with hundreds of public CA certificates preconfigured
as trusted, a sudden change of the TLS server certificate when
rearranging TLS cipher suites or when the underlying TLS implementation
starts to include support for ECDSA certificates, can easily result
in a sudden unexpected loss of interop (missing trust).

Testing that you have the required trust properly configured for
*BOTH* TLS server certs is a royal pita, and _preparing_ for a TLS client
software update that adds support for ECDSA cipher suites is pretty
much impossible to test (unless you already have that implementation,
but that is not what I meant with preparing).


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] Network middlebox corrupting TLS session resumes

2018-02-09 Thread Martin Rex
Hi,

During the analysis of a recent customer support call, I determined
from a wireshark/network trace that unexpected failures of TLS session
resumption handshakes were caused by some broken network middlebox,
which allegedly was configured for "SSL inspection".

I would like to know whether this problem is visible on the telemetry
data collections of any browser folks.


The network middlebox was seemingly *NOT* doing MitM-Attacks on the
connection, because the full handshakes with (remote 3rd-party) servers
succeeded just fine with genuine TLS server certs.


I noticed what looked like three subtly different kinds of failures,
but only got the wireshark trace of one of these for analysis.

TLS session resumes were corrupted by that "SSL inspecting"
network middlebox in the following fashion:

  (1) non-critical (but negligent) corruption:
  the protocol version number of the TLS record that carries
  ServerHello was changed from (3,3) to (3,1) by that
  network middlebox on TLS session resume attempts.
  This change did not occur on TLS full handshakes, and the
  genuine server sends (3,3) at the record layer both times.

  (2) fatal corruption:
  the 5-byte TLS record header of the ChangeCipherSpec handshake
  message was missing (removed from the network stream), and
  the remaining content byte (01) is obviously not a valid start
  of a new TLS record, causing our TLS client to abort the handshake
  with a fatal TLS illegal_parameter alert (although it seems
  that the appropriate alert would be decode_error).

  The genuine server was sending the TLS record with ServerHello
  and the TLS record with ChangeCipherSpec in the same
  TCP segment and IP datagram (152 bytes on the wire),
  and the filtered response that went through that network
  middlebox was just 147 bytes on the wire, with the
  5-byte TLS record header of ChangeCipherSpec missing.

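The fatal corruption in (2) is mechanical to detect when walking the TLS
record headers in a byte stream; a minimal sketch (illustrative, not our
actual implementation):

```python
def walk_tls_records(stream: bytes):
    """Walk 5-byte TLS record headers (ContentType, version, length).
    Raises ValueError when a header is invalid or the stream ends
    mid-record, as happens when a middlebox strips the CCS record
    header and leaves its lone content byte (0x01) in the stream."""
    offset, records = 0, []
    while offset + 5 <= len(stream):
        ctype = stream[offset]
        major = stream[offset + 1]
        length = int.from_bytes(stream[offset + 3:offset + 5], "big")
        if ctype not in (20, 21, 22, 23) or major != 3:
            raise ValueError(f"invalid record header at offset {offset}")
        records.append((ctype, length))
        offset += 5 + length
    if offset != len(stream):
        raise ValueError(f"truncated/invalid record at offset {offset}")
    return records

# Genuine server flight: Handshake record + ChangeCipherSpec record.
server_hello = b"\x16\x03\x03\x00\x04" + b"\x02\x00\x00\x00"  # dummy body
ccs          = b"\x14\x03\x03\x00\x01" + b"\x01"
print(walk_tls_records(server_hello + ccs))  # [(22, 4), (20, 1)]

# Middlebox-filtered flight: CCS record header stripped, stray 0x01 left.
try:
    walk_tls_records(server_hello + b"\x01")
except ValueError as e:
    print("corrupted:", e)
```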
-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2017-12-15 Thread Martin Rex
Ilari Liusvaara  wrote:
> On Fri, Dec 15, 2017 at 07:33:44PM +, Tim Hollebeek wrote:
>> 
>> However, servers are easier to upgrade than clients, which is why you see
>> some of the server side support you mention.  I know CloudFlare in
>> particular helped a lot of people cope with communicating with clients who
>> had different certificate capabilities.  It isn't a bad thing that both
>> approaches exist.
> 
> Also, it should be noted that the past two migrations needed to be
> compatible with TLS 1.0 and 1.1, which have much less advanced
> signature negotiation than TLS 1.2 (and 1.3).

There is an awfully large installed base of borked TLSv1.2 servers.

If those servers are equipped with a sha256WithRsaEncryption server cert,
the handshake results are:

  - TLSv1.0 for SSLv3 ClientHello w/ client_version = (3,1) 
  - TLSv1.1 for SSLv3 ClientHello w/ client_version = (3,2) 
  - TLSv1.1 for SSL VERSION 2 CLIENT-HELLO offering (3,3)
  - chokes and drops network connection
   for SSLv3 ClientHello w/ client_version = (3,3)

i.e. there exists a serious interop problem for TLSv1.2 with such servers,
but there is no problem interoperating with TLSv1.0 or TLSv1.1
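
The distinction these results turn on (the record-layer version bytes vs.
ClientHello.client_version inside the handshake body) can be sketched in
Python; this is an illustrative byte layout only, not a complete ClientHello:

```python
import struct

def clienthello_prefix(record_version, client_version, body_len=0):
    """Illustrative first bytes of an SSLv3/TLS ClientHello: a 5-byte
    record header (ContentType 22 = handshake) whose version field is
    independent of the client_version carried inside the client_hello
    handshake message (type 1)."""
    hs_body = bytes(client_version) + b"\x00" * body_len
    hs_msg = b"\x01" + len(hs_body).to_bytes(3, "big") + hs_body
    return struct.pack("!B2sH", 22, bytes(record_version), len(hs_msg)) + hs_msg

# A client offering TLSv1.2 (3,3) inside a record tagged (3,1) -- the
# record version and client_version are two independent fields:
rec = clienthello_prefix((3, 1), (3, 3))
assert rec[1:3] == b"\x03\x01"    # record-layer version
assert rec[9:11] == b"\x03\x03"   # ClientHello.client_version
```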

-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] A closer look at ROBOT, BB Attacks, timing attacks in general, and what we can do in TLS

2017-12-15 Thread Martin Rex
Tim Hollebeek  wrote:
> Because it's easier for the client to decide what the client understands
> than it is for the server to decide what the client understands.  Less
> complexity = less failures.  
> 
> Note that this is how XP was handled for code signing.  The Authenticode
> spec actually made it so if you did things in the right order, XP would only
> see the SHA-1 signature, while more recent operating systems would see both
> the SHA-1 and SHA-2 signatures, ignore the SHA-1 signature, and use the
> SHA-2 signature.  This allowed doubly-signed binaries that worked both on XP
> and non-XP systems.  Unfortunately the technical steps to do so weren't
> widely publicized, but I know some companies took advantage of it.

Now that sounds weird.

If I look at the code signatures on my Windows 7 machine,
e.g.
C:\windows\ccm\CcmExec.exe

it carries one single digital signature & timestamp _from_Microsoft_ 
created 01-November-2017 and both with sha1RSA.

So it seems some vendors haven't really started migrating away from SHA-1.

-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


[TLS] editorial error in draft-ietf-tls-rfc4492bis-17

2017-10-24 Thread Martin Rex
I just noticed a strange inconsistency in section 6 of
draft-ietf-tls-rfc4492bis-17

https://tools.ietf.org/html/draft-ietf-tls-rfc4492bis-17#section-6

The last of the "must implement 1 of these 4" list of cipher suites at
the end of section 6 is not contained in the table at the beginning of
section 6 above it (instead, it appears in rfc5289 only).

I believe that the last cipher suite should be changed (which will
provide consistency with the second list entry, the TLSv1.2 MTI cipher suite).


-Martin


   +-++
   | CipherSuite | Identifier |
   +-++
   | TLS_ECDHE_ECDSA_WITH_NULL_SHA   | { 0xC0, 0x06 } |
   | TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA   | { 0xC0, 0x08 } |
   | TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA| { 0xC0, 0x09 } |
   | TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA| { 0xC0, 0x0A } |
   | TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 | { 0xC0, 0x2B } |
   | TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 | { 0xC0, 0x2C } |
   | ||
   | TLS_ECDHE_RSA_WITH_NULL_SHA | { 0xC0, 0x10 } |
   | TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA | { 0xC0, 0x12 } |
   | TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA  | { 0xC0, 0x13 } |
   | TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA  | { 0xC0, 0x14 } |
   | TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256   | { 0xC0, 0x2F } |
   | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384   | { 0xC0, 0x30 } |
   | ||
   | TLS_ECDH_anon_WITH_NULL_SHA | { 0xC0, 0x15 } |
   | TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA | { 0xC0, 0x17 } |
   | TLS_ECDH_anon_WITH_AES_128_CBC_SHA  | { 0xC0, 0x18 } |
   | TLS_ECDH_anon_WITH_AES_256_CBC_SHA  | { 0xC0, 0x19 } |
   +-++


   Server implementations SHOULD support all of the following cipher
   suites, and client implementations SHOULD support at least one of
   them:

   o  TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
   o  TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
   o  TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+  o  TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
-  o  TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Update on TLS 1.3 Middlebox Issues

2017-10-10 Thread Martin Rex
Hannes Tschofenig <hannes.tschofe...@gmx.net> wrote:
> 
> On 10/10/2017 10:52 AM, Martin Rex wrote:
>> Nope, none at all.  I'm _not_ asking for protocol changes, just that
>> the TLS handshake continues to end with CCS + HS, and ContentTypes
>> remain visible.  Contents of all handshake messages, and whether
>> and how that content is protected, remains subject to negotiated
>> protocol version which may vary significantly.
> 
> FWIW: Making the ContentType visible is a protocol change since the
> current version of the TLS / DTLS 1.3 protocol encrypts them.

I haven't looked at DTLS 1.3, but from what I remember TLS 1.3
has _two_ ContentTypes, one in the clear in the original TLS record
structure, and one encrypted, and the cleartext ContentTypes is
IIRC specified to contain bogus/misleading information.

Since hiding of the ContentType provides ZERO[*] security value,
fixing the cleartext ContentType to carry the true value is not
really a protocol change.

Conceptually, for the TLS *ENDPOINTS*, I prefer a code layering approach
with a transport-free TLS implementation by a huge margin.
Falling up and down huge callstacks with arbitrarily incomplete TLS records
results in huge amounts of complex, poor and inefficient code.

And the IO middleware layer should not have to bother with TLS protocol
versions.


-Martin

 [*] the security value of the hidden ContentType is zero, because
when capturing an entire TLS session, one will be able to
identify the real content types context-free heuristically in 99.5% of
the time looking at all records, and when knowing the server, one
can determine all the content types in 99,% of the time.

The hiding of the content type is only sufficiently awkward
to break streaming IO of communication layers above TLS, as well
as efficient connection state management.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Update on TLS 1.3 Middlebox Issues

2017-10-10 Thread Martin Rex
Ilari Liusvaara <ilariliusva...@welho.com> wrote:
> On Mon, Oct 09, 2017 at 07:21:01PM +0200, Martin Rex wrote:
>> 
>> Fixing the backwards-incompatibilities in the TLS record layer
>> would be terribly useful for streaming-optimized IO layers as well,
>> i.e. ensure that the TLS record properly identifies ContentType,
>> and that a TLSv1.3 handshake ends with CCS followed by 1 Handshake message.
> 
> Unfortunately, doing that would do really bad things to security.

Nope, none at all.  I'm _not_ asking for protocol changes, just that
the TLS handshake continues to end with CCS + HS, and ContentTypes
remain visible.  Contents of all handshake messages, and whether
and how that content is protected, remains subject to negotiated
protocol version which may vary significantly.


> 
> And the middleboxes I am talking about actually parse every cleartext
> handshake message. Change anything in any message and they fail. And
> fixing some known vulnerabilities in TLS 1.2 is not possible without
> changing the structures around.

Changing the contents of TLS handshake messages _other_ than
ClientHello+ServerHello is fine with me.  I also don't care which
of the handshake messages are clear vs. encrypted.

What I'm mainly asking for is keeping TLS record ContentType visible
(Handshake, AppData, CCS, Alert), and having a CCS before the final
Handshake record of a TLS handshake.  I'm really looking *ONLY* at
the TLS record layer semantics.

I have an issue with the borked TLS record layer protocol at the *ENDPOINT*,
because TLSv1.3 is never going to work as a drop-in replacement for us with
the current TLS record layer breakage.  This is about
(a) streaming-optimized IO for the handshake phase
(b) CCS to recognize the final step of the handshake phase
(c) and Content-Type visible Alerts to distinguish
End-of-Connection alerts (both fatal error or warning-level
close_notify) from next AppData record -- so that the body
of an AppData record can be left in the network receive buffers
and be visible through "network readable" socket event(s).

Having to redesign the entire application network read event model
in order to juggle around with an unprotected-but-not-yet-processible
AppData record would be a royal PITA, as much as not being able to
recognize premature client-side termination of a longrunning request
(which Web Browser navigation and complex page designs cause all the time).


> 
> In fact, I think the record layer changes in TLS 1.3 actually _reduce_
> intolerance, not _increase_ it. If your middlebox is not as anal as I
> described above, it probably falls into copying data back and forth
> when it loses the handshake. However, the changes into ServerHello
> could easily cause trouble even with such middleboxes.

I personally hate network middleboxes other than plain NAT, and I'm
violently opposed to MITMs (aka TLS-inspecting network middleboxes).
I will certainly not mind if those latter break.  Broken non-malicious
middleboxes are obnoxious, too, and create a significant & needless
support load.  A lot of our customers use some kind of totally broken
transparent internet proxies, which let TCP connect through, but
silently close the network connection after TLS ClientHello was sent.

Such behaviour is indistinguishable from a choking TLS implementation,
such as Microsoft IIS with SChannel (receiving SSLv3 ClientHello with
ClientHello.client_version=(3,3) or a Win2012+ IIS receiving ClientHello
without the optional TLS extension SNI).

And when telling customers to check their firewall rules, they often
come back saying: "but telnet connects".  Yup, braindead firewall.


> 
> Here what might work getting around those really annoying middleboxes
> (and this is pretty nasty):
> 
> - Add back the session field, echo field from client
>
> - Add dummy zero into place of compression method, so TLS 1.2 parsers
>   can parse the message.
>
> - Add two zeros into ServerHello so the message can be parsed the same
>   way as TLS 1.2.
 

You mean for ServerHello?  Yes, it would be highly preferable to
make ServerHello fully backwards compatible (with respect to the PDU parser)
so that you don't have to change horses midway while parsing ServerHello.


> - Fix ServerVersion at TLS 1.2, send true version in supported_versions
>   extension.

wfm.


> - If the version is TLS 1.3, the session id is non-empty and 0-RTT was
>   not accepted, insert fake ChangeCipherSpec message immediately after
>   ServerHello and change outer content-type of the next record to 22
>   (instead of 23). The client can do the same.

fake CCS before the final HS of a TLS handshake would make me happy. :)
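
Such a fake ChangeCipherSpec is a complete, fixed six-byte TLS record; as a
sketch of the bytes on the wire:

```python
# The fake ChangeCipherSpec discussed above, as a full TLS record:
# ContentType 20 (change_cipher_spec), record version (3,3),
# length 1, body 0x01.
FAKE_CCS = bytes([20, 3, 3, 0, 1]) + bytes([1])

assert FAKE_CCS == b"\x14\x03\x03\x00\x01\x01"
```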


-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Update on TLS 1.3 Middlebox Issues

2017-10-09 Thread Martin Rex
Ilari Liusvaara  wrote:
>
> And even if the changes might not be directly consequential to
> security, the changes to get through some more annoying middleboxes
> might be quite annoying to implement.
> 
> E.g. there probably are several different middeboxes that have a
> configuration that actually checks that the handshake looks valid,
> which includes checks for things like ChangeCipherSpec being
> present in both directions, even for resumption; while the non-
> resumption mode might even verify the authentication signatures in
> the handshake and not letting server send non-handshake messages
> before sending its 2nd flight. Ugh, getting around those would be
> pretty nasty.


Fixing the backwards-incompatibilities in the TLS record layer
would be terribly useful for streaming-optimized IO layers as well,
i.e. ensure that the TLS record properly identifies ContentType,
and that a TLSv1.3 handshake ends with CCS followed by 1 Handshake message.

-Martin

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] Update on TLS 1.3 Middlebox Issues

2017-10-09 Thread Martin Rex
Eric Rescorla  wrote:
>
> two options:
> 
> - Try to make small adaptations to TLS 1.3 to make it work better with
> middleboxes.

Return to the proper TLSv1.2 record format with true ContentTypes
(hiding them doesn't add any security anyways).

With the needlessly broken ContentTypes, we will be unable to support
TLSv1.3 in our current apps.

The needless changes break streaming of layered IO and end-of-communication
discovery for long-running requests, because it is not possible to
reliably distinguish a warning-level closure alert from a pipelined
continuation of app data.
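To illustrate what gets lost: with the TLSv1.2 record format, a streaming IO layer can classify a record from its 5-byte header alone; with the TLSv1.3 outer content-type, it cannot. A sketch of the header parse (Python, illustration only):

```python
import struct

CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert",
                 22: "handshake", 23: "application_data"}

def parse_record_header(hdr: bytes):
    """Parse the 5-byte TLS record header: type(1), version(2), length(2).
    Under TLSv1.2 the type field is truthful, so an alert (e.g. a
    warning-level close_notify) is distinguishable from pipelined
    application data without decrypting anything.  Under TLSv1.3 the
    outer type of encrypted records is 23, and the real type is hidden
    at the end of the encrypted payload."""
    ctype, version, length = struct.unpack("!BHH", hdr)
    return CONTENT_TYPES.get(ctype, "unknown"), version, length

print(parse_record_header(b"\x15\x03\x03\x00\x1a"))  # ('alert', 771, 26)
print(parse_record_header(b"\x17\x03\x03\x00\x1a"))  # ('application_data', 771, 26)
```

The two example headers differ only in the first byte, which is exactly the field TLSv1.3 stops exposing for encrypted records.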


-Martin



Re: [TLS] RSASSA-PSS in certificates and "signature_algorithms"

2017-09-12 Thread Martin Rex
Ilari Liusvaara wrote:
> On Tue, Sep 12, 2017 at 01:02:19PM +0200, Martin Rex wrote:
>> 
>> A new TLS extension to convey acceptable signatures on certs would be
>> needed, and for this, it would be preferable to pass along ASN.1
>> OIDs from AlgIds (not full AlgIds, this would be too messy with RSA-PSS).  
> 
> With RSA-PSS, there is also the parametrization signature by hashes.
> 
> The implementation I have supports the following six sets of parameters
> for RSA-PSS:
> 
> - hash=SHA-256, mgf=MGF1[SHA-256], salt=256, trailer=1
> - hash=SHA-384, mgf=MGF1[SHA-384], salt=384, trailer=1
> - hash=SHA-512, mgf=MGF1[SHA-512], salt=512, trailer=1
> - hash=SHA3-256, mgf=MGF1[SHA3-256], salt=256, trailer=1
> - hash=SHA3-384, mgf=MGF1[SHA3-384], salt=384, trailer=1
> - hash=SHA3-512, mgf=MGF1[SHA3-512], salt=512, trailer=1


The salt length is supposed to be in bytes, not bits, and for SHA-1
defaults to 20.  The salt length suggested in the standard is the
output size of the underlying hash, counted in bytes.

But it seems that some existing implementations use weird/unusual salt
values for RSA-PSS signature with hash algs other than SHA-1.
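As a quick cross-check of the byte-counted defaults described above (a sketch; `digest_size` is the hash output size in bytes):

```python
import hashlib

def default_pss_salt_len(hash_name: str) -> int:
    """Conventional RSA-PSS salt length: the output size of the
    underlying hash, counted in BYTES (not bits)."""
    return hashlib.new(hash_name).digest_size

for name in ("sha1", "sha256", "sha384", "sha512"):
    print(name, default_pss_salt_len(name))
# sha1 20, sha256 32, sha384 48, sha512 64
```

So a "salt=256" parameter, read as bits, corresponds to the conventional 32-byte salt for SHA-256.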

-Martin



Re: [TLS] RSASSA-PSS in certificates and "signature_algorithms"

2017-09-12 Thread Martin Rex
Eric Rescorla wrote:
> 
> Generally, the idea with signature schemes is that they are supposed to
> reflect both the capabilities of your TLS stack and the capabilities of
> your PKI verifier [0]. So, yes, if you advertise this it is supposed to
> work. So, ideally we would just say that it's as Ilari says in his followup.
> 
> It seems like there are really two choices:
> 
> 1. Only advertise algorithm X if you support algorithm X in both places
> 2. Invent a new extension whose semantics are "if present, this tells you
> what the PKI validator accepts, otherwise look at signature schemes"
> 
> I hate to be suggesting #2 at this stage in the process, but maybe it's
> right anyway...
> 
> -Ekr
> 
> [0] I recognize that there are people who think that TLS shouldn't say
> anything about the PKI verifier,
> but I don't think that this is viable, for exactly the reason shown here:
> it's possibly to have an algorithm which is widely supported in TLS stacks
> but not in PKI verifiers.

I believe you got it backwards.

A new TLS extension to convey acceptable signatures on certs would be
needed, and for this, it would be preferable to pass along ASN.1
OIDs from AlgIds (not full AlgIds, this would be too messy with RSA-PSS).  

The text in rfc5246 which suggests that the contents of the "signature_algorithms"
extension should be applied to certificates is a defect of rfc5246;
fortunately, only a small fraction of implementors seem to have gotten this
wrong and created implementations that erroneously abort after making flawed
assumptions about what the peer might (or might not) support.

The signature_algorithms extension must only be applied to transforms
used within the TLS protocol itself (digitally_signed), because only this
transform can be negotiated and produced at will by communication peers.

Certificates will, in most real-world usage cases, be created out-of-band
by third-party entities (CAs), and typically cannot be created by the
TLS communication endpoints.  It is OK to use "signature_algorithms"
as a selection hint.  It is a dumb idea for a TLS communication peer
to abort a TLS handshake by making flawed assumptions about whether
a communication peer does or does not support a particular signature
scheme on certificates.

TLS handshake failures based on flawed assumptions are unconditionally BAD.
If the peer has a certain policy, it is up to that peer to apply this
policy during _verification_.  TLS handshake failures are unprotected;
they cannot be recovered from at the TLS level (transparently), but require
app-level acrobatics, and may require repeated proxy traversals.  And
they create downgrade vulnerabilities, such as the silly "downgrade dance"
implemented by a number of browsers, which creates the POODLE vulnerability.


btw. some PKIs are currently switching to using RSA-PSS signatures
on certificate chains, but they still use rsaEncryption for the key
in the end-entity certificate, because they want to use end-entity
certs both for PKCS#7/CMS (and maybe SOAP) signatures and as
SSL/TLS client certificates.  For use with TLSv1.2 as client certs,
they're hardwired to RSA PKCS#1 v1.5 signatures, which precludes tagging
the keys in client end-entity certs as RSA-PSS.

TLSv1.3 needs to avoid more "MUST" mistakes around algorithms that
work just fine with earlier versions of TLS.  It is perfectly OK to
create RSA-PSS signatures with keys that are tagged as rsaEncryption,
and adding words or semantics to TLSv1.3 suggesting this is not
OK would be terribly bad.


-Martin



Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-26 Thread Martin Rex
Colm MacCárthaigh wrote:
> Martin Rex <m...@sap.com> wrote:
>> 
>> With RDRAND, you would use e.g. SHA-256 to compress 10*256 = 2560 Bits of
>> a black-box CPRNG output into a 256-bit _new_ output that you
>> actually use in communication protocols.
> 
> If the relation between the RDRAND input and the output of your function is
> fixed, then your attacker than just do the same thing. It doesn't help at
> all really. You have to mix RDRAND with something else that is unknowable
> to the attacker as part of the process.

Through the 10x compression of the RDRAND output, which will provably
create an incredibly huge amount of collisions, the attacker will be
unable to identify any particular output values of RDRAND.

Your conceived attack could only work under the condition that
10 consecutive RDRAND outputs are always fully deterministic, and
that also the seed used by RDRAND will be fully deterministic to
the attacker -- or can otherwise be learned out-of-band by the attacker
-- while at the same time this property will remain invisible to
all external randomness tests.

Can you shed any light on how you believe an attacker could meet
such preconditions?  I'm not seeing the problem yet.


-Martin



Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-26 Thread Martin Rex
Colm MacCárthaigh wrote:
> Martin Rex <m...@sap.com> wrote:
>> 
>> Since you also have no idea whether and how the internal hardware design
>> behind Intel RDRAND is backdoored, you should not be using any of its
>> output without an at least 10x cryptographic compression in any case.
> 
> Obviously your CPU can fully compromise you locally, so that's not very
> interesting to think about. But for remote attacks, like the one you
> describe here, where an adversary may use predictable PRNG output, it is
> probably better to mix RDRAND output with something else. There are a few
> layers of defense here, such as multi-source NRBGs, or personalization
> strings. Those significantly distance the output from the RDRAND. The kind
> of compression you mention here can be easily precomputed and tables
> generated by someone with a large amount of resources, since it's a pure
> function.

We're either talking about different things, or I fail to understand
what you're talking about.

The predictable failure of Dual_EC_DRBG was based on the fact that the
internal state and the output had (almost) the _same_ size.

With RDRAND, you would use e.g. SHA-256 to compress 10*256 = 2560 Bits of
a black-box CPRNG output into a 256-bit _new_ output that you
actually use in communication protocols.

Should the creator of a backdoored black-box CPRNG be able to recompute
the internal state from a few leaked _new_ (post-compression) outputs,
then you _will_ be able to notice a real (non-randomness) problem
with the outputs of the black-box CPRNG.


> 
> In BoringSSL, and s2n, we mix RDRAND in as part of the reseeding. But the
> initial seed came from urandom (which is not pure RDRAND). In s2n, we also
> use personalization strings to provide another degree of defense.

Any half-way portable crypto library will have code for collecting
entropy on pre-RDRAND Intel CPUs and on non-Intel CPUs; it will probably
use an entropy pool of >=512 bits with outputs of at most half the pool
size, use compressed RDRAND output only for additional entropy gathering
and at most for nonces, and never use RDRAND alone to generate secret
keying material.


-Martin



Re: [TLS] 32 byte randoms in TLS1.3 hello's

2017-07-26 Thread Martin Rex
Peter Gutmann wrote:
> Christian Huitema  writes:
>  
>>On 7/25/2017 4:57 PM, Peter Gutmann wrote:
>>> Are we talking about the same thing here?
>>
>>Not sure. 
> 
> OK, I'd say we are :-).  This isn't about which PRNG or randomness source to
> use but the need for a separation between PRNG-for-secret-values and PRNG-for-
> public-values.  In particular you don't want to feed large amounts of output
> from your PRNG-for-secret-values to an attacker for analysis via nonces,
> client/server randoms, and other things, in case they can use that to attack
> your PRNG state.
> 
> The prime example of this is the EC-DRBG fiasco, which relied on the attacker
> seeing the client/server random just before the same PRNG was used to generate
> the master secret.  If the client/server random had come from PRNG instance #1
> and the master secret from PRNG instance #2, it would have been... well, still
> a bad PRNG, but not as catastrophic a failure.
> 
> So the advice was to seed the public-PRNG from the private-PRNG and use the
> public-PRNG for nonces, sequence numbers, and so on, and the private-PRNG for
> encryption keys and the like.


Whenever you're using "some" CPRNG as a black box, you should very probably
take 4x to 16x of the CPRNG's native output size and run it through a
cryptographic compression function (such as a secure hash or PRF) to produce
1x output size, (a) as a safety precaution and (b) to obtain reasonable entropy.
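A sketch of that compression step (Python; `os.urandom` stands in for the black-box CPRNG such as RDRAND, and the 10x factor follows the ratio used in this thread; an illustration of the idea, not a reviewed CSPRNG design):

```python
import hashlib
import os

def compressed_random(n_out: int = 32, factor: int = 10,
                      raw_source=os.urandom) -> bytes:
    """Pull factor * n_out bytes from a black-box CPRNG and compress
    them with SHA-256 into the n_out bytes actually exposed in a
    communication protocol.  The many-to-one compression makes it
    infeasible to recover the raw CPRNG output from what is sent."""
    assert n_out <= hashlib.sha256().digest_size
    raw = raw_source(factor * n_out)   # e.g. 320 raw bytes for a 32-byte nonce
    return hashlib.sha256(raw).digest()[:n_out]

print(len(compressed_random()))  # 32
```

The output depends only on the raw bytes pulled, so the same hedging applies as in the thread: the safety comes from the compression ratio, not from the hash alone.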

Just look at the Intel RDRAND fiasco.  It isn't just a bad design, it
is a clearly documented bad design.  They openly admit that they're
artificially inflating the output by a factor of 2x, i.e. that an output
of 256 bits is based on entropy of _at_best_ 128 bits.

Since you also have no idea whether and how the internal hardware design
behind Intel RDRAND is backdoored, you should not be using any of its
output without an at least 10x cryptographic compression in any case.


-Martin



Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-20 Thread Martin Rex
Colm MacCárthaigh wrote:
> 
> If you maintain that draft-green is Wiretapping, then that is also the
> correct term for what is happening today. Today, operators are using a
> static key, used on the endpoints they control, to decrypt traffic
> passively. The only difference is DH vs RSA, and I know you'll agree that's
> not material.


What I'm currently seeing in the installed base for getting at the plaintext
of TLS-protected traffic are these two approaches:

(1) Server-side:  Reverse-proxy

(2) Client-side:  TLS-intercepting proxy

and neither of these needs to break TLS and neither needs to break forward
secrecy of the SSL/TLS-protected(!) communication.


While server-side reverse proxies (1) _might_ be used to "inspect/monitor"
traffic, they are more often used for scaling / load-balancing
across multiple backend servers of a server farm.  We ship such functionality
for the backend specifically for scaling/load-balancing, and for placing
the SSL termination point into a DMZ (firewalled backend servers).
We use a proprietary scheme to forward SSL/TLS client certs from the
reverse proxy to the backend servers.


(2) is often used by so-called "AV-Software" of the "Internet Security"
kind, and studies have shown over and over again just how badly many of
that stuff breaks security (by botching the TLS server certificate
validation and/or server endpoint identification).


I also see (2) being used by some of our customers in the form of
TLS intercepting internet proxies, mostly in countries that lack
strong constitutional privacy protections (US, UK, Middle East).
It regularly breaks outgoing communication scenarios and confuses
application admins, because the fake TLS server certificates will be
rejected by our app client unless the trust is explicitly configured.
Something that IT/networking departments occasionally fail to properly
explain to their colleagues, the application admins.

Whenever outgoing communication requires the use of SSL/TLS client certs,
existing TLS intercepting proxies don't work (and I'm really glad that they
break).  A growing number of legal/governmental data exchange scenarios
use SSL/TLS client certs (something I really appreciate, because the use
of client certs fixes a number of problems).  :-)



Personally, I consider server-side reverse proxies primarily as an
implementation detail of the backend (how workload is distributed
within the backend).

The client-side TLS intercepting proxies, however, are a constant security
problem, unless their use is a ***TRULY*** voluntary, consenting opt-in,
they run on the same machine as the user, and the user is in full
control of whether and how that intercepting proxy is used.

If presence & use of a TLS intercepting proxy is the result of anything
along the lines of "compliance", "policy" or "conformance", then it is
wire-tapping, with a probability near certainty.


-Martin



Re: [TLS] Is there a way forward after today's hum?

2017-07-20 Thread Martin Rex
I'm sorry, Russ, but I think this would be seriously deceptive.


Russ Housley wrote:
> 
> If a specification were available that used an extension that involved
> both the client and the server, would the working group adopt it, work
> on it, and publish it as an RFC?
> 
> I was listening very carefully to the comments made by people in line.
> Clearly some people would hum for "no" to the above question, but it
> sounded like many felt that this would be a significant difference.
> It would ensure that both server and client explicitly opt-in, and any
> party observing the handshake could see the extension was included or not.

Any party observing the handshake (read: a monitoring middlebox) would
see, in the clear part of the TLS handshake, whether the client proposed
the extension and the server confirmed it, and that very same monitoring
middlebox very, very probably would kill/prevent the completion of any
TLS handshake in which the server did not confirm that extension...

... at which point this is no longer a "rare and occasional voluntary
opt-in for debugging broken apps" but rather a policy enforcement known
as "coercion".

I am violently opposed to standardizing enforced wire-tapping for TLS.

-Martin



Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Martin Rex
Martin Rex wrote:
> 
> There were a few issues with F5 loadbalancers (that were just forwarding
> traffic, _not_ acting as reverse proxy) related to a borked F5 option called
> "FastL4forward", which occasionally caused the F5 box to truncate TCP streams
> (the box received 5 MByte of TCP data towards the TLS Server, but
> forwarded only 74 KBytes to the client before closing the connection with
> a TCP FIN towards the TLS client).
> 
> And once I saw another strange TCP-level data corruption caused by
> some Riverbed WAN accelerator.

I forgot to mention how I analyzed the breakage created by the middleboxes:

network capture (tcpdump) with an IP-address capture filter for a dedicated
client machine was *perfectly* sufficient to determine the TCP-level breakage.

For the F5 cockup, we used a concurrent tcpdump capture on the box in both
directions, again with IP-address capture filter, in order to prove to the
vendor that his box is corrupting/truncating the TCP stream between
TLS client and TLS server.


-Martin



Re: [TLS] datacenter TLS decryption as a three-party protocol

2017-07-19 Thread Martin Rex
I just watched the presentation on static DH / TLS decryption in Enterprise
settings of todays TLS session

  https://youtu.be/fv1AIgRdKkY?t=4473

and I'm seriously appalled by the amount of cluelessness in the
Networking & IT Departments on the one hand, and by what seems like
woefully inadequate apps on the other hand.

I've been doing middleware development and customer support of
SSL/TLS-protected communication for our company's (legacy) app
as well as maintenance & customer support for the TLS stack we're
shipping with it for the past 17 years, and I can't stop shaking
my head at the perceived problems, which *NEVER*EVER* occurred
to me, nor to our IT & Hosting, nor to any of our customers using our app
(and I would definitely know about it).

Although we do ship our own TLS implementation, we don't have any APIs
to export cryptographic keys, simply because it's completely unnecessary.


With extremely few exceptions, an API-level trace at the endpoints
is totally sufficient for finding app-level problems, such as
unexpected "expensive" requests.  App-level traces on *EITHER* side
should provide *ALL* information that is necessary to determine _which_
particular requests are taking longer than expected.  If your Apps do
not provide meaningful traces, then you have an *APPS* problem, and
should be fixing or replacing apps, rather than mess around with TLS.


There were a few issues with F5 loadbalancers (that were just forwarding
traffic, _not_ acting as reverse proxy) related to a borked F5 option called
"FastL4forward", which occasionally caused the F5 box to truncate TCP streams
(the box received 5 MByte of TCP data towards the TLS Server, but
forwarded only 74 KBytes to the client before closing the connection with
a TCP FIN towards the TLS client).

And once I saw another strange TCP-level data corruption caused by some 
Riverbed WAN accelerator.


I remember exactly _two_ occasions during that 17 years when I produced
a special instrumented version of our library for the server endpoint,
which dumped stuff into a local trace file, but I never ever thought
about exporting crypto keys (because it wouldn't help, and those
weren't _my_ keys (but those of a customer):

   (1) it dumped the decrypted RSA block from the ClientKeyExchange
   handshake message when encountering a PKCS#1 BT02 padding
   encoding failure

   (2) it dumped the final decrypted block of a TLS record for
   when the CBC-padding-verification failed the check.

(1) was caused by a bug in the long integer arithmetic of an F5 load balancer
which occasionally produced bogus (un-decryptable) PreMasterSecrets.

(2) was caused by a design flaw in Java SE 1.6's SSL implementation when it
was used with GenericBlockCipher over native-IO interfaces.


I firmly believe that if you have a desire to decrypt TLS-protected
traffic to debug APPS issues, then your APPS must be crap, and seriously
lack capabilities to trace/analyze endpoint behaviour at the app level.


With respect to "monitoring" SSL/TLS-protected traffic in the
enterprise environment:

At least here in Germany, we're in the lucky position that wiretapping
network traffic was made a criminal offense in 2004.  If IT/Networking
folks in the enterprise settings don't want to spend up to 4 years behind
bars, they don't even try to decrypt SSL/TLS-protected traffic.

 
-Martin



Re: [TLS] adopted: draft-ghedini-tls-certificate-compression

2017-06-07 Thread Martin Rex
Ilari Liusvaara wrote:
>On Wed, Jun 07, 2017 at 05:38:59AM +, Raja ashok wrote:
>> Hi Victor & Alessandro,
>> 
>> I have gone through the draft and I am having a doubt. 
>> 
>>>   The extension only affects the Certificate message from the server.
>>>   It does not change the format of the Certificate message sent by the
>>>   client.
>> 
>> This draft provides a mechanism to compress only the server certificate
>> message, not the client certificate message. I feel client authentication
>> is not performed in HTTPS of web application. But in all other applications
>> (eg. Wireless sensor network) certificate based client authentication is
>> more important. 
>> 
>> So I suggest we should consider compression on client certificate message
>> also.
> 
> Doing client certificate compression would add some complexity, because
> the compression indication currently needs to be external to certificates,
> and there is no place to stick such indication for client certificate.

A TLS extension could do this indication just fine.


ASN.1 DER encoded X.509v3 certificates all have the same first 12 bits
(a SEQUENCE tag followed by a long-form length octet):

0x30 0x8*

So sending an indication inband should also be possible.
But a negotiated TLS extension (proposed by client in ClientHello,
confirmed by server in ServerHello) could also change the Certificate PDU
to provide room for a separate indicator.
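The in-band sniff suggested above could look like this (Python sketch; the zstd magic used in the negative example is just one plausible compressed-blob prefix, not something the draft mandates):

```python
def looks_like_der_certificate(data: bytes) -> bool:
    """In-band sniff based on the observation above: a DER-encoded X.509
    certificate starts with 0x30 (SEQUENCE) followed by a long-form
    length octet 0x8x, so its first 12 bits are fixed.  Anything else,
    e.g. a compressed blob, can be told apart by this prefix alone."""
    return len(data) >= 2 and data[0] == 0x30 and (data[1] & 0xF0) == 0x80

print(looks_like_der_certificate(bytes([0x30, 0x82, 0x04, 0x1f])))  # True
print(looks_like_der_certificate(b"\x28\xb5\x2f\xfd"))  # False (zstd magic)
```

This works because any real certificate is longer than 127 bytes, forcing the long-form length encoding; an explicit negotiated indicator would still be the more robust design.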


-Martin



Re: [TLS] Eric Rescorla's Discuss on draft-ietf-tls-ecdhe-psk-aead-04: (with DISCUSS and COMMENT)

2017-06-01 Thread Martin Rex
Watson Ladd wrote:
>Martin Rex <m...@sap.com> wrote:
>>
>> The suggestion to accept a recognized TLSv1.2 cipher suite code point
>> as an alternative indicator for the highest client-supported protocol
>> version is not really a "mechanism".  It's efficient (with 0-bytes on
>> the wire), intuitive and extremely backwards-compatible (will not upset
>> old servers, neither version-intolerant as the Win2008/2012 servers,
> nor extension-intolerant servers).
> 
> It's a substantial change made after WG last call. That alone makes it
> improper. If you want to get WG consensus for such a change, go ahead.
> But don't try making this in the dead of night.

The proposed small addition about when these TLS cipher suites can be
negotiated is clearly *NOT* a change, and certainly not a substantial one.

Implementors that want to completely ignore this small addition
can do so and will remain fully compliant, they will not have to
change a single line of code.

For those implementing the proposed addition there will be two
very desirable effects:

  1) make more TLS handshakes succeed

  2) make more TLS handshakes use TLS protocol version TLSv1.2 rather
 than TLSv1.1 or TLSv1.0

Both come at an extremely low cost, and this addition has ZERO downsides.
The IETF is about promoting interoperability.

You seem to have a problem with either or both of the above outcomes,
but I fail to understand which and why.


> 
>> It's worse -- there are still TLS servers out there which choke on
>> TLS extensions (and TLS server which choke on extension ordering).
> 
> TLS 1.2 demands extensions work. Sending a TLS 1.2 hello without
> extensions is going to make it impossible to implement many features
> TLS 1.2 security relies on.

Actually, it does not.  TLSv1.2 works just fine without TLS extensions,
although there are a few implementations in the installed base which
got this wrong.  rfc5246 appendix E.2 shows that TLSv1.2 interop with
extension-less ClientHellos was desired and assumed to be possible;
some implementors got it wrong.



> 
>> It seems that there are others facing the same issue:
>>
>> https://support.microsoft.com/en-us/help/3140245/update-to-enable-tls-1.1-and-tls-1.2-as-a-default-secure-protocols-in-winhttp-in-windows
>>
>> and defer enabling to explicit customer opt-in.
>>
>>
>> Really, a very compatible and extremely robust and useful approach would
>> be to allow implied client protocol version indication through presence of
>> TLSv1.2-only cipher suite codepoints and this would allow large parts
>> of the installed base to quickly start using TLSv1.2--without breaking
>> existing usage scenarios and without the hazzle for users having to opt-in
>> and test stuff.
> 
> The people who have these problems are not "large parts" of the
> install base. They are large parts of *your* install base. Don't
> confuse these two.

The above WinHTTP issue alone applies to Win7, which is about 50% of
the installed base of desktop PCs.

Referring to ~50% of the installed base as "large parts" seems OK to me. YMMV.


-Martin



Re: [TLS] Eric Rescorla's Discuss on draft-ietf-tls-ecdhe-psk-aead-04: (with DISCUSS and COMMENT)

2017-05-30 Thread Martin Rex
Eric Rescorla wrote:
> On Tue, May 23, 2017 at 9:34 PM, Martin Rex <m...@sap.com> wrote:
>>
>> This change _still_ prohibits the server from negotiating these algorithms
>> with TLSv1.1 and below.
>>
>> Could you elaborate a little on where and why you see a problem with this?
>>
> 
> For starters, TLS 1.3 has already designed a completely independent
> mechanism for doing version negotiation outside of ClientHello.version,
> so doing another seems pretty odd. In any case, it's not something you
> do between IETF-LC and IESG approval.

The suggestion to accept a recognized TLSv1.2 cipher suite code point
as an alternative indicator for the highest client-supported protocol
version is not really a "mechanism".  It's efficient (with 0-bytes on
the wire), intuitive and extremely backwards-compatible (will not upset
old servers, neither version-intolerant ones such as the Win2008/2012 servers,
nor extension-intolerant servers).


> 
>> As this changes tries to explain, had such a text been used for all
>> TLSv1.2 AEAD cipher suite code points, then browsers would have never
>> needed any "downgrade dance" fallbacks, POODLE would have never
>> existed as a browser problem, and the TLS_FALLBACK_SCSV band-aid
>> would not been needed, either.
> 
> I'm not sure this is true, because there were also servers which did
> not understand extensions.


It's worse -- there are still TLS servers out there which choke on
TLS extensions (and TLS servers which choke on extension ordering).

Sending TLS extensions is therefore a negotiation scheme that we
can not ship as patch into the installed base, because we *KNOW*
that it will break a few existing usage scenarios.  Stuff that needs
TLS extensions is therefore an opt-in only scheme -- and even when
making it opt-in, we may have to additionally provide a TLS extension
exclusion list of hostnames.

It seems that there are others facing the same issue:

https://support.microsoft.com/en-us/help/3140245/update-to-enable-tls-1.1-and-tls-1.2-as-a-default-secure-protocols-in-winhttp-in-windows

and defer enabling to explicit customer opt-in.


Really, a very compatible and extremely robust and useful approach would
be to allow implied client protocol version indication through presence of
TLSv1.2-only cipher suite codepoints and this would allow large parts
of the installed base to quickly start using TLSv1.2--without breaking
existing usage scenarios and without the hassle for users having to opt-in
and test stuff.


-Martin



Re: [TLS] Adam Roach's No Objection on draft-ietf-tls-ecdhe-psk-aead-04: (with COMMENT)

2017-05-23 Thread Martin Rex
Adam Roach wrote:
> draft-ietf-tls-ecdhe-psk-aead-04: No Objection
> 
> --
> COMMENT:
> --
> 
> I agree with EKR's discuss -- specifying semantics for these ciphersuites
> with TLS 1.0 and 1.1 is a material change, and the proposed mechanism (in
> which servers are encouraged to infer 1.2 support even in the absence of
> explicit indication) is a bit baffling.

It encourages (but does not require) servers to infer 1.2 support
from _very_explicit_ information: the offering of TLSv1.2-only TLS
ciphersuites in the very same TLS ClientHello handshake message.

We know since rfc5746 that the most reliable scheme to indicate support
for certain TLS protocol features is a cipher suite value.  It is far
from rocket science to infer support for 1.2 from 1.2-only cipher
suite codepoints in ClientHello.cipher_suites.
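A sketch of that server-side inference (Python; the two codepoint values stand in for the draft's TLS-1.2-only AEAD suites and are assumptions for illustration, not authoritative registry values):

```python
# Placeholder codepoints for TLS-1.2-only AEAD cipher suites, e.g. the
# ECDHE-PSK-AEAD suites discussed in this thread (values assumed here).
TLS12_ONLY_SUITES = {0xD001, 0xD002}

def implied_version(client_version: tuple, cipher_suites: set) -> tuple:
    """If the ClientHello offers any TLS-1.2-only suite, treat the client
    as TLS 1.2 capable even when client_version says (3,1) or (3,2)."""
    if cipher_suites & TLS12_ONLY_SUITES:
        return max(client_version, (3, 3))   # (3, 3) == TLS 1.2
    return client_version

print(implied_version((3, 1), {0x002F, 0xD001}))  # (3, 3)
print(implied_version((3, 1), {0x002F}))          # (3, 1)
```

The point of the sketch: the inference is zero bytes on the wire and purely additive, so a server that ignores it behaves exactly as before.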



I just realized that I suggested removal of a description of client
behaviour that can, and should, remain in the document (I'm sorry):

A client MUST treat the selection of these cipher
  suites in combination with a version of TLS that does not support
  AEAD (i.e., TLS 1.1 or earlier) as an error and generate a fatal
  'illegal_parameter' TLS alert.


-Martin



Re: [TLS] Eric Rescorla's Discuss on draft-ietf-tls-ecdhe-psk-aead-04: (with DISCUSS and COMMENT)

2017-05-23 Thread Martin Rex
It seems I had a typo in the new text.

Martin Rex wrote:
> Eric Rescorla wrote:
>> draft-ietf-tls-ecdhe-psk-aead-04: Discuss
>> 
>> --
>> DISCUSS:
>> --
>> 
>> The following text appears to have been added in -04
>> 
>>A server receiving a ClientHello and a client_version indicating
>>(3,1) "TLS 1.0" or (3,2) "TLS 1.1" and any of the cipher suites from
>>this document in ClientHello.cipher_suites can safely assume that
>> the
>>client supports TLS 1.2 and is willing to use it.  The server MUST
>>NOT negotiate these cipher suites with TLS protocol versions earlier
>>than TLS 1.2.  Not requiring clients to indicate their support for
>>TLS 1.2 cipher suites exclusively through ClientHello.client_hello

That line should say
 through ClientHello.client_version

>>improves the interoperability in the installed base and use of TLS
>>1.2 AEAD cipher suites without upsetting the installed base of
>>version-intolerant TLS servers, results in more TLS handshakes
>>succeeding and obviates fallback mechanisms.
>> 
>> This is a major technical change from -03, which, AFAIK, prohibited
>> the server from negotiating these algorithms with TLS 1.1 and below
>> and maintained the usual TLS version 1.2 negotiation rules.
> 
> This change _still_ prohibits the server from negotiating these algorithms
> with TLSv1.1 and below.
> 
> Could you elaborate a little on where and why you see a problem with this?
> 
> As this change tries to explain, had such a text been used for all
> TLSv1.2 AEAD cipher suite code points, then browsers would have never
> needed any "downgrade dance" fallbacks, POODLE would have never
> existed as a browser problem, and the TLS_FALLBACK_SCSV band-aid
> would not have been needed, either.



Re: [TLS] Eric Rescorla's Discuss on draft-ietf-tls-ecdhe-psk-aead-04: (with DISCUSS and COMMENT)

2017-05-23 Thread Martin Rex
Eric Rescorla wrote:
> draft-ietf-tls-ecdhe-psk-aead-04: Discuss
> 
> --
> DISCUSS:
> --
> 
> The following text appears to have been added in -04
> 
>A server receiving a ClientHello and a client_version indicating
>(3,1) "TLS 1.0" or (3,2) "TLS 1.1" and any of the cipher suites from
>this document in ClientHello.cipher_suites can safely assume that
> the
>client supports TLS 1.2 and is willing to use it.  The server MUST
>NOT negotiate these cipher suites with TLS protocol versions earlier
>than TLS 1.2.  Not requiring clients to indicate their support for
>TLS 1.2 cipher suites exclusively through ClientHello.client_hello
>improves the interoperability in the installed base and use of TLS
>1.2 AEAD cipher suites without upsetting the installed base of
>version-intolerant TLS servers, results in more TLS handshakes
>succeeding and obviates fallback mechanisms.
> 
> This is a major technical change from -03, which, AFAIK, prohibited
> the server from negotiating these algorithms with TLS 1.1 and below
> and maintained the usual TLS version 1.2 negotiation rules.

This change _still_ prohibits the server from negotiating these algorithms
with TLSv1.1 and below.

Could you elaborate a little on where and why you see a problem with this?

As this change tries to explain, had such a text been used for all
TLSv1.2 AEAD cipher suite code points, then browsers would never have
needed any "downgrade dance" fallbacks, POODLE would never have
existed as a browser problem, and the TLS_FALLBACK_SCSV band-aid
would not have been needed, either.

-Martin



Re: [TLS] secdir review of draft-ietf-tls-ecdhe-psk-aead-03

2017-05-19 Thread Martin Rex
Benjamin Kaduk wrote:
> 
> Some other editorial nits follow.
> 
> In section 4, "these cipher suites MUST NOT be negotiated in TLS
> versions prior to 1.2" should probably clarify that "these" cipher
> suites are the new ones specified by this document.


This reminds me of the specification goofs in several TLSv1.2-related
documents about AEAD cipher suites which are responsible for the viability
of the POODLE attack and other exploitable fallback hacks.

It would be much preferable to avoid/fix those problems and facilitate
the migration to and use of TLSv1.2 without failing TLS handshakes and
band-aids such as TLS_FALLBACK_SCSV.


Suggested improvement:

   The cipher suites defined in this document make use of the
   authenticated encryption with additional data (AEAD) defined in TLS
   1.2 [RFC5246] and DTLS 1.2 [RFC6347].  Earlier versions of TLS do not
   have support for AEAD and consequently, these cipher suites MUST NOT
-  be negotiated in TLS versions prior to 1.2.  Clients MUST NOT offer
-  these cipher suites if they do not offer TLS 1.2 or later.  Servers,
-  which select an earlier version of TLS MUST NOT select one of these
-  cipher suites.  A client MUST treat the selection of these cipher
-  suites in combination with a version of TLS that does not support
-  AEAD (i.e., TLS 1.1 or earlier) as an error and generate a fatal
-  'illegal_parameter' TLS alert.
+   A client that offers
+  the cipher suites from this document in ClientHello.cipher_suites
+  in combination with (3,1) "TLSv1.0" or (3,2) "TLSv1.1" in
+  ClientHello.client_version MUST support TLSv1.2 and MUST accept
+  the server to negotiate TLSv1.2 for the current session.  If the
+  client does not support TLSv1.2 or is not willing to negotiate TLSv1.2,
+  then this client MUST NOT offer any of these cipher suites with a
+  lower protocol version than (3,3) "TLSv1.2" in ClientHello.client_version.
+  A server receiving a ClientHello and a client_version indicating
+  (3,1) "TLSv1.0" or (3,2) "TLSv1.1" and any of the cipher suites from
+  this document in ClientHello.cipher_suites can safely assume that the
+  client supports TLSv1.2 and is willing to use it.  The server MUST
+  NOT negotiate these cipher suites with TLS protocol versions earlier
+  than TLSv1.2.
+
+  Not requiring clients to indicate their support for TLSv1.2 cipher
+  suites exclusively through ClientHello.client_hello improves the
+  interoperability in the installed base and use of TLSv1.2 AEAD
+  cipher suites without upsetting the installed base of version-intolerant
+  TLS servers, results in more TLS handshakes succeeding and obviates
+  fallback mechanisms.
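The suggested negotiation rule above can be sketched in a few lines of
Python. This is an illustrative sketch only: the AEAD suite code points
and the single-suite selection policy are hypothetical placeholders, not
values from the draft.

```python
# Sketch of the proposed server-side rule: a client that offers one of the
# new AEAD suites thereby signals TLS 1.2 support, so the server may
# negotiate TLS 1.2 even when ClientHello.client_version is (3,1) or (3,2),
# but MUST NOT negotiate these suites below TLS 1.2.

TLS_1_2 = (3, 3)
AEAD_SUITES = {0xD001, 0xD002}  # placeholder code points, not real registry values

def select_version_and_suite(client_version, offered_suites, server_max=TLS_1_2):
    offered_aead = AEAD_SUITES & set(offered_suites)
    # Offering these suites implies the client supports and accepts TLS 1.2.
    if offered_aead and server_max >= TLS_1_2:
        return TLS_1_2, min(offered_aead)
    # Otherwise fall back to ordinary version negotiation; the new AEAD
    # suites are excluded for any negotiated version below (3,3).
    version = min(client_version, server_max)
    legacy = [s for s in offered_suites if s not in AEAD_SUITES]
    return version, (legacy[0] if legacy else None)

# A client_version of (3,2) plus an offered AEAD suite still yields TLS 1.2:
assert select_version_and_suite((3, 2), [0xD001, 0x002F]) == ((3, 3), 0xD001)
```

No "downgrade dance" is needed with this rule: version-intolerant servers
never see a client_version above what they tolerate, yet TLS 1.2 is still
reached whenever both sides support it.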


-Martin



Re: [TLS] trusted_ca_keys

2017-05-04 Thread Martin Rex
Salz, Rich wrote:
> > The organization info (O, L, ST, C, etc...) is supposed to differ in that 
> > case (CN
> > is just one field of DN), rendering the full DNs distinct.
> 
> But where and how is that enforced, or enforceable?  Again, any links to show 
> I'm wrong?
 

In theory: using Directory Name SubjectNameConstraints to enforce
hierarchical naming on all Subject Names and a single hierarchy
of CAs.

(Just that this doesn't work for dozens of independent PKI environments
 like the SSLiverse...)

-Martin



Re: [TLS] trusted_ca_keys

2017-05-04 Thread Martin Rex
Salz, Rich wrote:
> > There is some wording in PKIX and X.509 which creates the impression that a
> > CA could be re-using the same Subject DName with different keys, but such
> > an interpretation is a formally provable defect of the PKIX specification.
> 
> Any links you can point to?
> 
> I don't see how CA1 issuing a sub-ca for "... CN=fred" can globally prevent 
> CA2 from issuing a sub-ca with the exact same DN.  Can you explain what I am 
> missing?
 
Such an action will create two mutually exclusive PKIs, PKIs that
are *NOT* allowed to ever be bridged.  Bridging them would open a
security problem in the design of the CRL processing rules for a
collision of distinct subCA names, because those rules say that a
signature on a CRL is valid if the CRL signer cert can be verified
under the same root as the CA.


PKIX (rfc5280) about AuthorityKeyIdentifier X.509v3 extension:

https://tools.ietf.org/html/rfc5280#section-4.2.1.1

   The keyIdentifier field of the authorityKeyIdentifier extension MUST
   be included in all certificates generated by conforming CAs to
   facilitate certification path construction.

While it is a requirement for conforming CAs to place AuthorityKeyIdentifiers
into issued certificates, using them for building or verifying certificate
chains is, for RPs, purely optional ("facilitate").

If re-using the same CA DName for certs with different keys would be allowed,
then chain building and chain verifying would become *DESPERATELY* dependent
on support *AND* use of AuthorityKeyIdentifier->SubjectKeyIdentifier.
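The dependency described above can be illustrated with a toy
path-construction routine that chains on names alone, treating
keyIdentifier matching as the optional aid it is. The certificate
records here are simplified placeholder dicts, not real X.509
structures:

```python
# Sketch: path construction by issuer-name -> subject-name chaining only.
# This works deterministically only if CA subject DNames are unique.

def build_chain(leaf, cert_pool):
    """Walk issuer->subject links up to a self-signed root."""
    chain, current = [leaf], leaf
    while current["issuer"] != current["subject"]:   # stop at self-signed root
        candidates = [c for c in cert_pool
                      if c["subject"] == current["issuer"]]
        # If two CA certs shared one subject DName with different keys,
        # this name-only lookup would become ambiguous -- the breakage
        # described in the text above.
        if len(candidates) != 1:
            raise ValueError("ambiguous or missing issuer: %r" % current["issuer"])
        current = candidates[0]
        chain.append(current)
    return chain

pool = [
    {"subject": "CN=Root CA", "issuer": "CN=Root CA"},
    {"subject": "CN=Sub CA",  "issuer": "CN=Root CA"},
]
leaf = {"subject": "CN=server", "issuer": "CN=Sub CA"}
assert [c["subject"] for c in build_chain(leaf, pool)] == \
    ["CN=server", "CN=Sub CA", "CN=Root CA"]
```

An RP that implements only this minimum (no SubjectKeyIdentifier
support) is exactly the kind of conforming verifier that duplicate CA
DNames would break.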

-Martin


PS:

Coincidentally, this also implies that "self-issued" (rather than self-signed)
certificates are a "myth".  While they can be created technically, they are
*ALWAYS* in violation of requirements of the specification(s).  



Re: [TLS] trusted_ca_keys

2017-05-04 Thread Martin Rex
Salz, Rich wrote:
>
>> The certificate should have its own DN, use that.
> 
> She's saying that it *doesn't.*
> 
> SubjectDN is not unique.  IssuerDN/Serial is unique, but this extension
> doesn't use that.

SubjectDN of a *Certificate Authority* **MUST** be unique.

There is some wording in PKIX and X.509 which creates the impression
that a CA could be re-using the same Subject DName with different keys,
but such an interpretation is a formally provable defect of the PKIX
specification.

Creating a certificate chain *MUST* be possible on the issuer name ->
subject name chaining alone.  Support for chaining by subjectKeyId
is truly *OPTIONAL* based on the requirements in the specification,
so reuse of the same DName with a different key would break certificate
chaining (and verification) that is performed on the issuer->subject
chain alone, and is therefore prohibited.

Support for the SubjectKeyIdentifier and AuthorityKeyIdentifier
X.509v3 extensions is explicitly just *RECOMMENDED* and can therefore
be absent from a perfectly PKIX-conforming minimum requirements RP.
A PKIX-conforming CA MUST NOT break minimum-requirements RPs, e.g.
by creating/issuing CA certificates with identical subject names but
different public keys.

-Martin



Re: [TLS] TLS RSA-PSS and various versions of TLS

2017-04-26 Thread Martin Rex
Dr Stephen Henson wrote:
> On 25/04/2017 15:36, Benjamin Kaduk wrote:
>> 
>>    RSASSA-PSS algorithms  Indicates a signature algorithm using RSASSA-
>>       PSS [RFC3447] with mask generation function 1.  The digest used in
>>       the mask generation function and the digest being signed are both
>>       the corresponding hash algorithm as defined in [SHS].  When used
>>       in signed TLS handshake messages, the length of the salt MUST be
>>       equal to the length of the digest output.  This codepoint is also
>>       defined for use with TLS 1.2.
>> 
>> 
>> Is the concern that this is insufficiently clearly indicated as placing
>> requirements on signatures of certificates as opposed to signatures of
>> TLS data structures?
> 
> Yes that's my concern. Supporting PSS signatures on certificates is
> a mandatory requirement and I think we should be very clear about the
> parameters we permit.
> 
> The above paragraph says nothing about salt length limitations on
> signatures on certificates. We could have a situation where one
> implementation enforces the salt length to be equal to the digest length
> (and rejects everything else) and another will allow any valid length.


It has always been a terribly stupid bug that TLS started talking about
signatures on certificates when negotiating TLS protocol properties,
and it resulted in a few painfully broken TLSv1.2 implementations getting
shipped (such as Microsoft Windows7/2008R2 through 8.1/2012R2).

Please ensure that TLSv1.3 is cleaned of bogus references to
requirements on signature algorithms for certificates.

Signatures on certificates are created by CAs, rather than TLS endpoints,
so any implementation that uses TLS protocol parameters (about TLS signature
algorithms) for more than a mere cert selection hint, is actively creating
interop problems while providing *NO* value.  Many TLS peers will have
just one suitable certificate anyway, so not even looking at certificate
signatures will be the easiest to implement, most robust and perfectly secure
behaviour anyway.


The issue with RSA-PSS digital signatures is that they were defined
with additional (unnecessary) parameters that are encoded (=hidden) in the
ASN.1 AlgorithmIdentifier, and that are therefore unspecified when
RSA-PSS is requested as (rsa-pss,sha-256) rather than with an ASN.1
AlgorithmIdentifier.

The additional, unnecessary parameters are "saltLen" and
"MaskGenerationFunction" (MGF), and the commonly-used MaskGenerationFunction
(mgf1) adds yet another additional, unnecessary parameter (MGF-internal hash).


In theory there is another additional, unnecessary parameter "TrailerField",
which appears in the ASN.1 AlgorithmIdentifier parameter list (and in the
XMLdsig encoding for RSA-PSS), but PKCS#1 v2.1 (rfc3447) essentially
hardwires the Trailerfield to option TrailerfieldBC(1), internal value 0xbc.
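As a rough sketch of the parameter set being discussed, the following
shows which otherwise-free PSS parameters TLS's digitally-signed
convention pins down from the hash alone. Field names loosely follow
the PKCS#1 ASN.1 structure; the function and its dict representation
are illustrative, not a real codec:

```python
# Sketch of the "implied" RSA-PSS parameters for TLS digitally-signed blobs,
# in contrast to the explicit ASN.1 parameters a certificate carries.

def implied_tls_pss_params(hash_name, hash_len):
    # TLS fixes every free PSS parameter from the negotiated hash alone:
    return {
        "hashAlgorithm": hash_name,
        "maskGenAlgorithm": ("mgf1", hash_name),  # MGF1 using the same hash
        "saltLength": hash_len,                   # salt length == digest length
        "trailerField": 1,                        # trailerFieldBC, byte 0xbc
    }

params = implied_tls_pss_params("sha256", 32)
assert params["saltLength"] == 32
assert params["maskGenAlgorithm"] == ("mgf1", "sha256")
```

A certificate's signatureAlgorithm, by contrast, spells all four of
these out explicitly in its AlgorithmIdentifier, which is why the
implied TLS values say nothing about certificate signatures.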


The definition of "implied" RSA-PSS parameters applies only to the
"digitally-signed" signature blobs used inside the TLS protocol,
because these do not come with an ASN.1 AlgorithmIdentifier tag.
The implied RSA-PSS parameters for TLS' digitally-signed are unrelated
to RSA-PSS signatures on certificates (certificates come with explicit
RSA-PSS parameters encoded in an ASN.1 AlgorithmIdentifier):

   Certificate  ::=  SEQUENCE  {
        tbsCertificate       TBSCertificate,
        signatureAlgorithm   AlgorithmIdentifier,
        signatureValue       BIT STRING  }

The original RSA-PSS AlgorithmIdentifier specification also defines
a hierarchical policy concept that is supposed to limit the kinds
of signatures that can be created (verified) with a so-tagged public
RSA key, and this policy is supposed to work/apply from the RootCA cert
*downwards* to the leaf / end-entity cert.

It seems silly trying to apply implied RSA-PSS parameter selections
from the digitally-signed TLS protocol transform to the signature
on the TLS end-entity cert (or worse, even to certs up the cert chain),
because that would be the wrong/invalid direction.


What should be spelled out is whether and how any RSA-PSS policy in the
subjectPublicKeyInfo AlgorithmIdentifier of the end-entity certificate
interacts with the implied RSA-PSS parameters used by the TLS
digitally-signed transform.  In any case, the decision whether to accept
a certificate should rest _with_the_receiver_ (verifier / RP), and
*NEVER* with the sender.


-Martin



Re: [TLS] Alert after sending ServerHello

2017-04-26 Thread Martin Rex
Ilari Liusvaara wrote:
>> In effect, we assume that the entire flight is processed atomically
>> and generate errors based on that.  Only when the entire flight is
>> processed cleanly do we install keys and respond.
> 
> My implementation processes message-by-message. So it installs the
> client handshake keys after ServerHello.
>  
>> This is a pain for us, we don't have the code that Ilari talks about,
>> so some of our tests end up hitting decode errors on the server, but
>> it's been manageable thus far.
> 
> The code I was talking about was handling the special case that the
> server might receive either encrypted or unencrypted alert in response
> to its flight. And the difference it makes is just what error is
> declared as abort reason.

Up to TLSv1.2 there was no confusion about whether a TLS record
was encrypted or not: everything before "ChangeCipherSpec" is cleartext,
everything thereafter is encrypted.

An easy and straightforward solution would be to add the (meta-)information
whether the contents of a TLSv1.3 record are encrypted or not to the
outer ContentType field, i.e. define an additional "encrypted alert"
content type and ensure that Alert ContentTypes are always visible
in the traditional TLS record header (rather than the bogus/futile
attempt to hide this information).

Having to do heuristics on something that can be easily deterministically
tagged as one or the other, seems a little odd.
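A minimal sketch of the deterministic tagging suggested here, assuming
a hypothetical "encrypted alert" code point. The value 26 is invented
for illustration; no such ContentType exists in the TLS registry:

```python
# Sketch: with an explicit outer ContentType for encrypted alerts, the
# receiver classifies every record deterministically -- no heuristic
# needed to decide whether an alert arriving in response to the server's
# flight is protected or cleartext.

ALERT, HANDSHAKE, APPLICATION_DATA = 21, 22, 23
ENCRYPTED_ALERT = 26  # hypothetical code point, not a real registry value

def classify_record(content_type):
    if content_type == ALERT:
        return ("alert", "cleartext")
    if content_type == ENCRYPTED_ALERT:
        return ("alert", "encrypted")
    return ("other", None)

assert classify_record(21) == ("alert", "cleartext")
assert classify_record(26) == ("alert", "encrypted")
```

Up to TLSv1.2 the ChangeCipherSpec boundary served the same purpose:
the record's protection status was always knowable before parsing its
contents.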

-Martin



Re: [TLS] Enforcing stronger server side signature/hash combinations in TLS 1.2

2017-03-24 Thread Martin Rex
Michael StJohns wrote:
> Martin Rex wrote:
>> oops, typo:
>>
>> Martin Rex wrote:
>>> Actually, looking at the DigiCert issued ECC cert for www.cloudflare.com
>>> I'm a little confused.
>>>
>>> This is the cert chain (as visualized by Microsoft CryptoAPI):
>>>
>>>server-cert:  CN=cloudflare.com, ...
>>>  contains ECDSA P-256 public key
>>>  is allegedly signed with sha256ECDSA
>>>
>>>intermediate CA:  CN=DigiCert ECC Extended Validation Server CA
>>>  contains ECDSA P-384 public key
>>>  is allegedly signed with sha384RSA
>>>
>>>root CA:  CN=DigiCert High Assurance EV Root CA
>>>  contains RSA 2048-bit public key
>>>  is self-signed with sha1WithRsaEncryption
>>>
>>> For those who insist on reading rfc5246 verbatim, this chain requires
>>>
>>> ECDSA+SHA384:RSA+SHA384:RSA+SHA1
>>   ECDSA+SHA256:RSA+SHA384:RSA+SHA1
> 
> I don't think RSA + SHA 1 is actually required.   The Signature over the 
> trust anchor (root CA) is basically a no-op - assuming the certificate 
> is in the browser(client) trust store.  The trust is traced to the 
> public key regardless of the form in which it's provided.  We use 
> self-signed certs a lot to carry the public keys and names (and 
> sometimes constraints), but that's not required by PKIX.

A server TLS implementation is *ALLOWED* to unconditionally include the
RootCA cert, and in that case a client not sending RSA+SHA1 clearly
indicates DO NOT SEND ME THAT PATH, if the bogus words in rfc5246 about
signature_algorithms are taken literally.

Remember that we're talking about *SERVERS* preempting the client's
decision (like Microsoft SChannel implemented it), not clients ignoring
certs sent by the server for policy reasons.

A server implementation refusing to send such a cert chain to a client
that doesn't include RSA+SHA1 (and blaming DigiCert for it)
is just as justified as Microsoft SChannel choking on absent
signature_algorithms extensions.


The only reasonable interpretation of that part of rfc5246,
is to completely ignore it.


-Martin



Re: [TLS] Enforcing stronger server side signature/hash combinations in TLS 1.2

2017-03-24 Thread Martin Rex
oops, typo:

Martin Rex wrote:
> 
> Actually, looking at the DigiCert issued ECC cert for www.cloudflare.com
> I'm a little confused.
> 
> This is the cert chain (as visualized by Microsoft CryptoAPI):
> 
>   server-cert:  CN=cloudflare.com, ...
> contains ECDSA P-256 public key
> is allegedly signed with sha256ECDSA
> 
>   intermediate CA:  CN=DigiCert ECC Extended Validation Server CA
> contains ECDSA P-384 public key
> is allegedly signed with sha384RSA
> 
>   root CA:  CN=DigiCert High Assurance EV Root CA
> contains RSA 2048-bit public key
> is self-signed with sha1WithRsaEncryption
> 
> For those who insist on reading rfc5246 verbatim, this chain requires
> 
>ECDSA+SHA384:RSA+SHA384:RSA+SHA1

 ECDSA+SHA256:RSA+SHA384:RSA+SHA1

> 
> The digital signature on the server certificate looks bogus to me,
> that should be a sha384ECDSA signature according to NIST, because
> it uses a P-384 signing key.
> 
> The signature on the intermediate CA is imbalanced, and
> should be sha256RSA rather than sha384RSA. (that is only an interop issue,
> not a security issue).



Re: [TLS] Enforcing stronger server side signature/hash combinations in TLS 1.2

2017-03-24 Thread Martin Rex
Viktor Dukhovni wrote:
> 
> > On Mar 24, 2017, at 1:08 AM, Martin Thomson  
> > wrote:
> > 
> >> I've never seen
> >> a TLS server that has multiple chains to choose from for the same
> >> server identity.
>
  [ https://www.cloudflare.com/  ]
> 
> Both chains of course use SHA256.

Actually, looking at the DigiCert issued ECC cert for www.cloudflare.com
I'm a little confused.

This is the cert chain (as visualized by Microsoft CryptoAPI):

  server-cert:  CN=cloudflare.com, ...
contains ECDSA P-256 public key
is allegedly signed with sha256ECDSA

  intermediate CA:  CN=DigiCert ECC Extended Validation Server CA
contains ECDSA P-384 public key
is allegedly signed with sha384RSA

  root CA:  CN=DigiCert High Assurance EV Root CA
contains RSA 2048-bit public key
is self-signed with sha1WithRsaEncryption

For those who insist on reading rfc5246 verbatim, this chain requires

   ECDSA+SHA384:RSA+SHA384:RSA+SHA1


The digital signature on the server certificate looks bogus to me,
that should be a sha384ECDSA signature according to NIST, because
it uses a P-384 signing key.

The signature on the intermediate CA is imbalanced, and
should be sha256RSA rather than sha384RSA. (that is only an interop issue,
not a security issue).


-Martin



Re: [TLS] Enforcing stronger server side signature/hash combinations in TLS 1.2

2017-03-24 Thread Martin Rex
Viktor Dukhovni wrote:
> 
> The net effect is that in practice you simply ignore the signature
> algorithms when it comes to the certificate chain.

Essentially correct.  This is the only reasonable, highly
backwards-compatible and perfectly secure choice.

>
> I've never seen a TLS server that has multiple chains to choose from
> for the same server identity.  This applies also to TLS 1.2, despite
> RFC 5246.

Servers with multiple chains for the same server identity have been
around for a while, and their number might be higher than you expect.

More often they choose cert by offered cipher suites (RSA vs. ECDSA),
rather than by signature_algorithms.

I have recently encountered three public CAs that issue such dual certs;
two of them either thought about the potential consequences (or got it
right by chance): VeriSign and DigiCert.

* The RSA and ECDSA dual server certs from VeriSign were issued under the same
(RSA) RootCA cert "VeriSign PCA3 - G5",

* The RSA and ECDSA dual server certs from DigiCert were issued under the same
(RSA) RootCA cert "DigiCert High Assurance EV Root CA"

However, the RSA and ECDSA dual server certs from Comodo were issued
under _distinct_ RootCAs.  I don't know whether Comodo or the purchaser
of the dual server certs goofed this, but unless you negligently dump an
overbroad "blindly trust everyone" list into your client software in
web-browser fashion (which gives _everyone_ the creeps; see Certificate
Transparency and the HPKP band-aids), those Comodo-issued RSA+ECDSA dual
server certs from distinct PKIs are a severe interop nightmare.

If you want to roll out support for ECDSA cipher suites into an installed
base of TLS that is currently limited to RSA, dual certs from distinct
PKIs may result in servers unexpectedly picking and responding with
an untrusted server cert in existing usage scenarios,
after a software update of the TLS client (which adds support for ECDSA
cipher suites).


Two Cloudflare (CDN) examples for dual cert servers:

  Comodo:https://regmedia.co.uk/

  DigiCert:  https://www.cloudflare.com/

Actually, the latter seems to be a triple-cert server (try -sigalgs RSA+SHA1)



Things could be so much easier if folks would spend a little more
time thinking about backwards compatibility, i.e. what is necessary
so that a new feature can be rolled out into an installed base of
TLS client and servers, without causing interop failures for existing
usage scenarios.

TLSv1.3 has a few problems, including the pointless hiding of ContentType
in the outer TLS record, which is non-interoperable with some of the
installed base and precludes efficient end-of-communication discovery.


-Martin



Re: [TLS] Enforcing stronger server side signature/hash combinations in TLS 1.2

2017-03-23 Thread Martin Rex
Eric Rescorla wrote:
>>
>> based on your reply my conclusion is that
>>
>> -  there is no (standard compliant) way for a server to use a
>> SHA256 based certificate for server side authentication in cases where the
>> client does not provide the signature_algorithm extension
>
> Not quite. If the client offers TLS 1.1 or below, then you simply don't
> know if it
> will accept SHA-256 and you should send whatever you have. If the client
> offers
> TLS 1.2 and no signature_algorithm extension, then you technically are
> forbidden
> from sending it a SHA-256 certificate. Note that any client which in fact
> supports
> SHA-256 with TLS 1.2 but doesn't send signature_algorithms containing it,
> is noncomformant. It's not clear to me how many such clients in fact exist.

rfc5246 makes it perfectly compliant for a TLSv1.2 client to support
sha256-based signature algorithms, to be willing to use them,
and to *NOT* send the TLS signature_algorithms extension.

https://tools.ietf.org/html/rfc5246#appendix-E.2


> 
> -clients should always use the signature algorithm extension to
>> ensure the server can apply a certificate with the appropriate crypt
>> algorithms

Except that the vast majority of servers have only a single certificate,
and will have to do "the right thing" in most situations anyway.
The definition of the "signature_algorithms" extension is
sufficiently dense to say that you MUST NOT send it except when
ClientHello.client_version = (3,3), and it is not possible to send it
when using a backwards-compatible SSL version 2 CLIENT-HELLO.
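The sending rule described above reduces to a tiny predicate. A hedged
sketch (the function name is ours, not from any spec):

```python
# Sketch: a TLS 1.2 client includes the signature_algorithms extension
# only when ClientHello.client_version is (3,3), and a backwards-compatible
# SSL 2.0 CLIENT-HELLO cannot carry extensions at all.

def may_send_signature_algorithms(client_version, sslv2_hello=False):
    if sslv2_hello:
        return False            # SSL 2.0 CLIENT-HELLO has no extension fields
    return client_version >= (3, 3)

assert may_send_signature_algorithms((3, 3)) is True
assert may_send_signature_algorithms((3, 2)) is False
assert may_send_signature_algorithms((3, 3), sslv2_hello=True) is False
```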


-Martin



Re: [TLS] Enforcing stronger server side signature/hash combinations in TLS 1.2

2017-03-23 Thread Martin Rex
Fries, Steffen wrote:
> 
> based on your reply my conclusion is that
> 
> -  there is no (standard compliant) way for a server to use a SHA256
> based certificate for server side authentication in cases where the
> client does not provide the signature_algorithm extension

The statement quoted by Eric is an obvious and silly defect in the spec,
which the entire installed base of TLSv1.0 and TLSv1.1 completely ignored
--even Microsoft's implementation of TLSv1.0 and TLSv1.1 properly ignores it.


The only defective TLS implementation that I am aware of which put this
silly, obviously backwards-incompatible and defective requirement
from rfc5246 into code seems to be Microsoft's TLSv1.2 implementation
in Windows 7 through Windows 8.1.
One can interop with Windows 7 through Windows 8.1 just fine without
signature_algorithms when offering at most TLSv1.1, or when offering
TLSv1.2 in an SSL version 2 CLIENT-HELLO (in which case Microsoft
SChannel will negotiate TLSv1.1).


> 
> -  clients should always use the signature algorithm extension to ensure
> the server can apply a certificate with the appropriate crypt algorithms

Unless the server is creating its own certificate on the fly, the
signature algorithm on the server certificate is something the
server has no control over, and which is therefore quite obviously
not up for negotiation in the TLS protocol handshake.

signature_algorithms can only be normative for signatures created as part
of the TLS handshake.  For signature algorithms on certificates, it is
a simple selection hint (similar to TLS extension server_name_indication
only being a selection hint).  That is a thoroughly intuitive requirement
for backwards compatibility.


Btw. there are a number of defects in the TLS spec (and the TLSv1.2 spec
rfc5246 in particular); one is introducing (rsa,md5) as a permissible
signature algorithm in TLSv1.2 handshakes.  While obvious to anyone
with a clue, it took a few implementors several years to understand
that blindly following the wording of a "proposed standard" specification
is a terribly bad idea and a poor excuse for a lack of common sense
and a lack of interop testing.  For those who missed the defect on
reading, a simple interop test will make this specification defect
stand out like a sore thumb.


-Martin



Re: [TLS] RFC 6066 - Max fragment length negotiation

2017-03-22 Thread Martin Rex
Peter Gutmann wrote:
> Thomas Pornin  writes:
>> 
>>TLS 1.3 is moving away from the IoT/embedded world, and more toward a Web
>>world. This is not necessarily _bad_, but it is likely to leave some people
>>unsatisfied (and, in practice, people clinging to TLS 1.2).
> 
> I would go slightly further and say that TLS 1.3 could end up forking TLS in
> the same way that HTTP/2 has forked HTTP.  There's HTTP/2 for web content
> providers and HTTP 1.1 for the rest of us/them (depending on your point of
> view).  Similarly, there are sizeable groups of users who will take a decade
> or more to get to TLS 1.3 (they're still years away from 1.2 at the moment),
> or who may never move to TLS 1.3 because too much of their existing
> infrastructure is dependent on how TLS 1.x, x = 0...2, works.  So as with
> HTTP/2 we may end up with TLS 1.3 for web content providers and TLS 1.0/1.2
> for everything else.

I expect that in a decade, TLSv1.3 will be where IPv6 is today.

Not supporting IPv4 is a non-starter, because you can not reach
95% of the internet, and not even get internet connectivity in a
lot of places.

Not supporting IPv6 is paradise: less code, fewer headaches, fewer
interop problems, fewer security issues, and you will _not_ miss anything
at all, because everything that is even remotely interesting
is accessible via IPv4.

-Martin



Re: [TLS] New Draft: Using DNS to set the SNI explicitly

2017-03-10 Thread Martin Rex
Ilari Liusvaara wrote:
> On Fri, Mar 10, 2017 at 09:25:41AM +0100, Martin Rex wrote:
>> 
>> You don't understand the purpose of SNI and how the (already weak)
>> rfc2818 section 3.1 server endpoint identification and CABrowser Forum
>> public CA Domain validation has been designed to work.
> 
> SNI has extremely little to do with public CA domain validation,
> except for special validation certificate selection in some
> methods.

SNI is the TLS-standard for clients to tell the server

"This is the DNS-Hostname, which I will use for rfc2818 section 3.1
 server endpoint identification. If you have multiple server certificates
 to choose from, you may want to consider this SNI value for choosing
 the server certificate to use for this TLS handshake".

CABrowser-Forum defines the rules which browsers implement on
top of the rfc2818 section 3.1 server endpoint identity checks
of server certificates.

btw. SNI explicitly excludes the IPv4 and IPv6 address matching that
is defined in rfc2818 section 3.1 as an alternative to DNS hostname
matching.


-Martin



Re: [TLS] New Draft: Using DNS to set the SNI explicitly

2017-03-10 Thread Martin Rex
Ben Schwartz wrote:
> Martin Rex <m...@sap.com> wrote:
> 
>>Ben Schwartz wrote:
>>>
>>> Like a lot of people here, I'm very interested in ways to reduce the
>>> leakage of users' destinations in the ClientHello's cleartext SNI.  It
>>> seems like the past and current proposals to fix the leak are pretty
>>> difficult, involving a lot of careful cryptography and changes to clients
>>> and servers.
>>
>> It is formally provable that there is no solution to the problem
>> that you're describing.
> 
> Perhaps I'm not trying to solve the problem that you're thinking of?
> 
> Here's an example:
> Wordpress.com uses HTTPS, with a wildcard certificate (*.wordpress.com) for
> all its hosted blogs, which have domains of the form
> myblogname.wordpress.com.  A passive adversary watching traffic to
> Wordpress.com can currently determine which blog each client IP address is
> accessing by observing the IP source address and the TLS SNI in the
> ClientHello message.
> 
> With this proposal, if Wordpress were to set an SNI DNS record on each
> subdomain, with empty RDATA, compliant clients would omit SNI when
> contacting the Wordpress server.  Connections would still work fine, but
> the passive adversary would no longer know which client is accessing which
> blog.
> 
> Is there something wrong with this example that I am missing?

You don't understand the purpose of SNI and how the (already weak)
rfc2818 section 3.1 server endpoint identification and CABrowser Forum
public CA Domain validation has been designed to work.


Wordpress.com isn't using SNI at all, so the ultimate solution
would be for the client to entirely omit SNI from ClientHello.

wordpress itself could achieve just the same by using URLs
of the kind

   blogs.wordpress.com/blogname with a cert issued to blogs.wordpress.com

rather than

   blogname.wordpress.com with a cert issued to *.wordpress.com


You might eventually want to check with the logging functionality
of ad blockers (such as uBlock) or browser plugins like "Collusion" to see
how many different servers & domains a typical site (including
*.wordpress.com) tells (via HTTP-Referer) where the user just went.


The decision to register a distinct & separate name in DNS is an explicit
and obvious desire to **PUBLISH** this information.  If you do not want
to publish information, DO NOT REGISTER it in the DNS, and it will
never appear in SNI, in DNS lookups, or in a DV-validation
request for obtaining a TLS server cert from a public CA.



Btw. your adversary will see the cleartext DNS lookup prior to the
TLS handshake, and can tell accesses to multiple different blogs apart
by looking at the size of the responses.

-Martin



Re: [TLS] New Draft: Using DNS to set the SNI explicitly

2017-03-09 Thread Martin Rex
Ben Schwartz wrote:
> 
> Like a lot of people here, I'm very interested in ways to reduce the
> leakage of users' destinations in the ClientHello's cleartext SNI.  It
> seems like the past and current proposals to fix the leak are pretty
> difficult, involving a lot of careful cryptography and changes to clients
> and servers.

It is formally provable that there is no solution to the problem
that you're describing.

While you can come up with all kinds of fancy and complicated schemes
sufficient to provide the illusion that you're looking for, the best
you can come up with will still *be* an illusion.  And some of those
illusions will cause lots of pain for implementors, make the whole
thing fragile, and cause interop problems.

The situation is pretty similar for the hiding of the ContentType
in TLSv1.3 records.  It is formally provable that this cannot provide
value, but it makes implementations harder and reliably breaks some
existing stuff.

-Martin



Re: [TLS] Awkward Handshake: Possible mismatch of client/server view on client authentication in post-handshake mode in Revision 18

2017-02-16 Thread Martin Rex
David Benjamin wrote:
> 
> Post-handshake auth exists basically entirely to service HTTP/1.1 reactive
> client certificate which was previously hacked on via renegotiation. I
> think we should not make this feature any more complicated than absolutely
> necessary to support this mode, and we should not add more bells and
> whistles to it to encourage further use.
> 
> For the HTTP/1.1 use case, this is not necessary because it's reasonable
> for client/server to agree that the server will not send any more data for
> that request until it has processed the client's authentication messages.

Well, the funny thing here is how HTTP/1.1 POST works when the server
asks for a certificate through renegotiation.  The client-side data,
which may be a multi-megabyte upload, will be transmitted entirely
before the renegotiation, and any renegotiation requested by the server
might be sitting in the incoming network buffer, ignored by the client.
Depending on how much data the server expects, and the upstream
bandwidth available to the client, the server might not even want to
start the renegotiation handshake before the client has completed
transmission, because this might result in a connection termination
(when the client needs more than 2*MSL for the upload and doesn't
perform any recv() / SSL_read on the socket while uploading).

-Martin



Re: [TLS] Awkward Handshake: Possible mismatch of client/server view on client authentication in post-handshake mode in Revision 18

2017-02-15 Thread Martin Rex
I think the issue is more about the "principle of least surprise".

The client needs to know whether it offered authentication by client cert
and which client cert it offered.  Whether the server accepted it, and
caches the accepted cert in the session or in a session ticket, is
the business of the server (it may help in troubleshooting to know,
but should not be necessary for application flow control).

Clients asserting multiple identities sounds extremely awkward to me.

We do have clients in possession of multiple client certs, but our
application must specify _before_ each TLS handshake which client
identity to use (and this information is necessary for client-side
session caching and client-side session lookup).

There is a significant difference between cached sessions where a
specific client cert was used (or at least offered by the client),
and cached sessions where no client cert was offered.

If a client tries to access a resource through a session that is
authenticated with SSL client cert A, and the server-side authorization
decision denies access to client cert A, then this will typically
result in an access failure, _without_ the server asking for a different cert.

When no client cert has been used in a session then access to a resource
that requires a (particular) client cert may result in a request of the
server for a client cert (renegotiation up to TLSv1.2) after seeing the
request--provided that the server supports renegotiation.


-Martin



Andrei Popov wrote:
>
> Is it important for the client to know, from the TLS stack itself,
> whether client authentication succeeded or not? My assumption
> (perhaps incorrect) has been that it is the server that cares about
> the client's identity (or identities) in order to make an
> authorization decision.
> 
> This thread also seems to consider client authentication a binary state:
> the client is either authenticated, or not. In practice, the client may
> assert multiple identities, and the server may grant various levels
> of access.
> 
> Also, why should the client care whether session ticket X includes client
> identity Y? If a client resumes a TLS session with a session ticket that
> does not include (or point to) a client authenticator needed to satisfy
> the client's request, the server can initiate client auth (or deny the
> request).
> 
> Cheers,
> 
> Andrei



Re: [TLS] Requiring that (EC)DHE public values be fresh

2016-12-29 Thread Martin Rex
Adam Langley wrote:
>
> https://github.com/tlswg/tls13-spec/pull/840 is a pull request that
> specifies that (EC)DH values must be fresh for both parties in TLS
> 1.3.
> 
> For clients, this is standard practice (as far as I'm aware) so should
> make no difference. For servers, this is not always the case:
> 
> Springall, Durumeric & Halderman note[1] that with TLS 1.2:
>   - 4.4% of the Alexa Top 1M reuse DHE values and 1.3% do so for more
> than a day.
>   - 14.4% of the Top 1M reuse ECDHE values, 3.4% for more than a day.
> 
> Since this defeats forward security, and is clearly something that
> implementations of previous versions have done, this change
> specifically calls it out as a MUST NOT. Implementations would then be
> free to detect and reject violations of this.

While you may have good intentions, the idea "and reject violations of this"
sounds like a bad idea to me.

First of all, forward secrecy is equally defeated by TLS session caching
(traditional as well as session tickets), and the effect of rfc5077
TLS session tickets is likely at least an order of magnitude worse--and
cannot be "fixed" by clients purging the tickets earlier.

Reuse of DHE values should not come as a surprise, PFS with DHE is
prohibitively expensive with 2048+ bit ephemeral keys.  DHE in TLS has
been well known to be fatally flawed (publicly known since the issue
was raised by Yngve on the TLS ietf mailing list in 2007).
  https://www.ietf.org/mail-archive/web/tls/current/msg01647.html

Since Sun/Oracle hardwired DHE to at most 1024 bits up to JavaSE 7,
i.e. on billions of devices out there, DHE in TLS is a very dead horse.


Ephemeral ECDHE values should be much less painful performance-wise,
but may happen to accidentally share the same properties as DHE values.
Potentially, the (EC)DHE keypairs are (mis-)managed above the TLS stack,
instead of entirely within--in which case this would not be the fault of the
TLS implementation code, but of the TLS implementation API (design).


Now what happens when a _client_ that connects to one of these 14.4%
of Alexa Top 1M sites that reuse ECDHE values notices such a reuse on
a repeated full handshake (which will not happen immediately, due to
session caching)?  This would result in random handshake failures
(the client aborting the TLS handshake).  The server doesn't know why
the client chokes; only the client can decide to retry, and this is
unlikely to affect the server's approach to reusing the (EC)DHE value
at all.
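
To make the objection concrete, here is a minimal sketch (all names
invented for illustration, not from the draft or the pull request) of
the client-side check that a "detect and reject" policy implies -- a
check whose only possible outcome on reuse is exactly the sporadic
client-side abort described above:

```python
# Sketch of a client remembering the server's (EC)DHE public value from
# the previous full handshake so it can flag reuse.  Rejecting on this
# flag aborts the handshake on the client side only; the server never
# learns why, so its key-reuse policy is unaffected.

_last_share = {}  # hostname -> server key-share bytes from last full handshake


def server_share_reused(host, share):
    """Return True if this server presented the same (EC)DHE public
    value as on the previous full handshake with it."""
    reused = _last_share.get(host) == share
    _last_share[host] = share
    return reused
```

A client enforcing the proposed MUST NOT would abort whenever this
returns True -- which, per the Alexa numbers quoted above, would hit a
double-digit percentage of popular servers.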


So the only thing this will cause is headaches to users and support
folks.  It will *NOT* improve the security by one iota.
 

If you want to improve something, then specify how to obtain the
desired behaviour, not how to panic and choke.



-Martin



Re: [TLS] Confirming consensus: TLS1.3->TLS*

2016-11-18 Thread Martin Rex
Christian Huitema wrote:
>
> I prefer TLS 1.3, because it signals continuity with the
> ongoing TLS deployment efforts.

As long as the awful hiding of the ContentType information in TLS Records
remains in this protocol, it will *NOT* easily deploy as a replacement
of TLSv1.2.

I'm OK with TLS 4,

but my real concern is the (current) lack of backwards compatibility
of the TLS record format.

-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-11-10 Thread Martin Rex
Benjamin Kaduk wrote:
> On 11/09/2016 11:42 AM, Martin Rex wrote:
> > Nobody so far has provided a single example of *REAL* value.
> > For the hiding of ContentType to provide real value, the prerequisites are:
> >
> >   (1) this value will be _unconditionally_ provided in TLSv1.3
> >
> >   (2) this value can be demonstrated to be a real security issue in TLSv1.2,
> >   for existing usage scenarios, where hiding of ContentType is not
> >   available
> >
> > Anything less is no value, just an illusion of value.
> 
> Thanks for clarifying your position.  I don't think many of the other
> people in the thread are using the same definition of "value", which has
> led to a lot of confusion.
> 
> However, I'm not convinced that the concrete benefit needs to be
> mandatory-to-use in TLS 1.3 to be considered to provide value.


There is a concept called "provable correctness", and folks (such as
those from the miTLS implementation) are using this approach to check/prove
whether TLS provides certain security properties (rather than just
assuming that these properties are provided).

If hiding of ContentType has *real* value, then this property will be
formally provable.  If the properties that someone asserts as value
can be proven to not exist (one counterexample is sufficient),
then the value is an illusion / obscurity, and definitely not real value.


-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-11-09 Thread Martin Rex
Eric Rescorla wrote:
> 
> I'm not quite following who's who in this scenario, so some potentially
> stupid
> questions below.
> 
> As I understand it, you have the following situation:
> 
> - A Web application server
> - Some middleware, which comes in two pieces
>   - A crypto-unaware network component
>   - The TLS stack (you control this piece as well, right?)
> - The client

This is about any conceivable scenario involving at least one of our
components, just client, just server, or at both peers.

The "middleware" is part of our clients and servers. The middleware
performs all the network I/O and necessary calls into the TLS stack,
offers an appdata streaming convenience option for reading,
variable blocking I/O (non-blocking (0ms) up to infinite timeout).

TLS records of type Handshake, Alert and CCS are always processed
in batch, as many as the network buffers have already received.
TLS records of type AppData are processed based on read strategy
desired by the application caller.  The desired reading strategy
(trickling, improved, streaming) is a parameter of the middleware
read API call, so the app could change it for every call.

 - Trickling means the traditional TLS record reading, i.e. two network
   read calls for every TLS record.

 - Improved means on average one network read call per TLS record.

 - Streaming means reading and decoding as many TLS appdata record
   as there are present in the network read buffers and can be
   received non-blocking.  Only TLS AppData records will be passed
   to the TLS Stack for decoding, Alerts and Handshake records
   will be left in the middlewares network read buffers until all
   appdata as been properly received by the application caller.
   Only upon the next read call from the application, the alert
   or handshake records will be passed to the TLS stack.
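
A rough sketch of the streaming strategy, assuming TLS records up to
v1.2 where the ContentType is the first byte of the 5-byte record
header (function and constant names are illustrative, not the actual
middleware API):

```python
# Split the already-buffered bytes into (complete leading AppData
# records to hand to the TLS stack, remaining buffer).  The scan stops
# at the first Alert/Handshake/CCS record or partial record, which
# stays in the network buffer until the application asks for more.

APPDATA = 23  # TLS ContentType application_data


def split_streaming(buf):
    records, pos = [], 0
    while pos + 5 <= len(buf):
        ctype = buf[pos]
        length = (buf[pos + 3] << 8) | buf[pos + 4]
        end = pos + 5 + length
        if ctype != APPDATA or end > len(buf):
            break  # non-AppData or incomplete record: leave it buffered
        records.append(buf[pos:end])
        pos = end
    return records, buf[pos:]
```

This record-type scan is precisely the step that becomes impossible
once the ContentType is encrypted.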

Example: GET https://www.google.de/ HTTP/1.0
(where my call to www.google.com is redirected to...)

currently returns an HTTP response with a 657-byte header and a
44879-byte body.  The response would easily fit into 3 TLS appdata
records (4 records if the header is sent in its own TLS record);
in reality, however, the Google servers perform pathological
fragmentation of the AppData and use a whopping 36 TLS records for
the response (curiously, no TLS AppData record is split between
header and body).


Processing overhead for _receiving_ such a badly fragmented response:

Trickling:
  34 Read calls into the middleware
  73 recv() calls to read the TLS AppData records from the socket
  34 calls into the TLS stack

Improved:
  34 Read calls into the middleware
  42 recv() calls to read the TLS AppData records from the socket
  34 calls into the TLS stack

Streaming:
   5 Read calls into the middleware
  14 recv() calls to read the TLS AppData records from the socket
  34 calls into the TLS stack
   



>
> When do you deliver the close_notify and how do you know how?

When the application calls for read, and has already seen all
prior AppData, then the close_notify will be processed and the
result (connection closure, no more data) reported to the calling app.

There is no magic involved here.

Apps *know* when to perform reads, and when not to.  The middleware
doesn't, because it is protocol-ignorant (it is used by SMTP, ldaps,
HTTPS, HTTP/2.0, websockets, etc.).

Think of a simple HTTP-based server app receiving a HTTP/1.0 request
from a TLS-stack on top of a blocking network socket.  If the App would
keep calling SSL_read() (for an OpenSSL-style API), it would get stuck
(blocked on read), and at some point the client would give up and
close the connection (with close_notify or TCP RST), and the server
waking from either the processing of the close_notify or the TCP RST
would be unable to deliver a response (which the client would not read
anyway).

Whether or not the calling App wants to shutdown a communication
at different times in both directions depends on the existing semantics
of that application (which has just added TLS protection around its
communication).  Reading and processing a close_notify in the TLS stack
(e.g. OpenSSL) will tear down *BOTH* directions immediately, and preclude
any further of sending of responses by the application, so the middleware
really will want to hold of processing of close_notify alerts unless
_explicitly_ asked to read further AppData by the application.

With TLS up to TLSv1.2 streaming is no problem, the middleware can
easily recognize non-AppData records and avoid passing them to
the TLS stack for processing unless the application explicitly asks
the middleware to do so.  When TLSv1.3 hides the ContentType,
the fact that a close_notify was received & processed can only be
determined after the fact, when the shattered pieces are on the floor.
Communication in the other direction will be impossible, and it will
not be possible to prevent this from happening.

While it is conceivable to jump through hoops and implement new APIs and
callbacks for the TLS stack 

Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-11-09 Thread Martin Rex
Daniel Kahn Gillmor wrote:
>
> Martin Rex wrote:
>>
>> The problem here is that this breaks (network) flow control, existing
>> (network socket) event management, and direction-independent connection
>> closure, and does so completely without value.
> 
> Martin, you keep saying things like "without value", while other people
> on this thread (Rich, Ilari, Yoav) have given you examples of the value
> it provides.  You don't seem to be trying to understand those positions.

Nobody so far has provided a single example of *REAL* value.
For the hiding of ContentType to provide real value, the prerequisites are:

  (1) this value will be _unconditionally_ provided in TLSv1.3

  (2) this value can be demonstrated to be a real security issue in TLSv1.2,
  for existing usage scenarios, where hiding of ContentType is not
  available

Anything less is no value, just an illusion of value.


> 
> This WG isn't chartered to defend the engineering optimizations made by
> any particular middlebox vendor.  It's chartered to improve the privacy
> and security guarantees offered to users of TLS.

You are confusing _middlebox_ with _middleware_at_the_endpoint_,
which is a huge difference, because the middleboxes are performing
man-in-the-middle attacks, whereas the _middleware_at_the_endpoint_
has regular access to the entire plaintext of the communication.

The problem with hiding of TLS record ContentTypes is that it severely
interferes with efficient streaming network I/O--which is preferably
performed outside/above the TLS implementation and async non-blocking
whenever you get into thousands of parallel connections.


-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-11-03 Thread Martin Rex
Yoav Nir wrote:
> 
> On 3 Nov 2016, at 16:31, Martin Rex <m...@sap.com> wrote:
>> 
>> Since then, I've seen exactly ZERO rationale why the cleartext contenttype,
>> which has existed through SSLv3->TLSv1.2 would be a problem.  With the
>> removal of renegotiation from TLSv1.3, it is even less of a problem to
>> keep the contenttype in the clear.
> 
> Here's some to get this to somewhat >0:
> 
> Most TLS 1.2 connections will have a few handshake records,
> followed by a couple of CCS records followed by a whole bunch of
> application records, followed possibly by a single Alert.
> 
> You only see more handshake records in two cases:
>1. The client decided to re-negotiate. That is exceedingly rare.
>2. The server decided a renegotiation is needed
>   so it sent a HelloRequest followed by a handshake.
> 
> With visible content type, you can tell these two flows apart.

 (a) so what?  for those interested, one can tell such flows apart
 pretty reliably by traffic analysis.  So there is exactly ZERO
 protection against bad guys, while breaking the good guys.

 (b) but TLSv1.2 remains unchanged, and this flow does not seem to
 exist in TLSv1.3, since renegotiation no longer exists in TLSv1.3.
  -- so why would we need a backwards-incompatible change to
  protect something that no longer exists, a change which severely
  breaks existing middleware and makes it impossible to drop-in
  replace a TLSv1.2 implementation with a TLSv1.3 implementation?

-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-11-03 Thread Martin Rex
Salz, Rich wrote:
>> Since then, I've seen exactly ZERO rationale why the cleartext contenttype,
>> which has existed through SSLv3->TLSv1.2 would be a problem.  
> 
> Because it's kind of implied in the charter, about making as much private as 
> possible.
> 
>> years), because it is actively being used to signal state of the 
>> communication
>> channel to the application and to *NOT* break application architecture that
>> relies on (new) application data remaining visible on network sockets as
>> "network readable" events.
> 
> One app's data is another adversary's oracle.
> Or is it that "signals have no morals"?

If you look at the TLS records exchanged between two peers,
and in particular if you perform a TLS handshake with the same server
yourself and compare, you can easily (heuristically) determine
which TLS records of the original stream are handshake records,
and which are application data records.

So there is exactly ZERO benefit of concealing the ContentTypes.

But this concealing reliably breaks existing application middleware
at the endpoints, which needs to reliably & quickly tell the difference
between handshake records and application data, so that it can
leave application data records in the network buffer, keeping the
socket readable event visible for the event-based application logic.
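
The heuristic alluded to above can be sketched as follows (names and
the tolerance value are invented for illustration): record the
record-length pattern of one's own handshake against the same server,
then label the observed stream positionally:

```python
# Label observed TLS record lengths by comparing them against the
# record-length pattern of a reference handshake performed against the
# same server: leading records matching the pattern (within a small
# slack) are classified as handshake, everything after as appdata.

def label_records(observed_lens, reference_handshake_lens, tolerance=64):
    labels = []
    for i, n in enumerate(observed_lens):
        if (i < len(reference_handshake_lens)
                and abs(n - reference_handshake_lens[i]) <= tolerance):
            labels.append("handshake")
        else:
            labels.append("appdata")
    return labels
```

Crude as it is, a passive observer needs nothing more -- which is the
point being made about the "benefit" of concealing the ContentType.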

-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-11-03 Thread Martin Rex
Ilari Liusvaara wrote:
>>> 
>>> Hiding the types does have its benefits (and it is also used for
>>> zero-overhead padding scheme).
>> 
>> Nope, ZERO benefits.  But it totally breaks the middleware
>> _at_the_endpoints_!
> 
> Also, things like this should have been discussed like year or two
> ago. Right now it is too late for major changes like this without good
> cryptographic justifications (which AFAICT don't exist).

They WERE brought up back then, and several times in between.
But the TLSv1.3 proposal has still not been fixed so far.
Ignorance does not make problems go away.  Instead, it means that
one will have to fix it later.


Since then, I've seen exactly ZERO rationale why the cleartext contenttype,
which has existed through SSLv3->TLSv1.2 would be a problem.  With the
removal of renegotiation from TLSv1.3, it is even less of a problem to
keep the contenttype in the clear.

The removal of visibility of ContentType in TLSv1.3 will be a complete
non-starter for TLSv1.3 as a drop-in replacement to TLSv1.2 for certain
software architectures (including a lot of stuff we've been shipping for the
last 5 years), because it is actively being used to signal state of
the communication channel to the application and to *NOT* break application
architecture that relies on (new) application data remaining visible on
network sockets as "network readable" events.



https://www.ietf.org/mail-archive/web/tls/current/msg13085.html

https://www.ietf.org/mail-archive/web/tls/current/msg13106.html


A similar issue exists for the visibility of Alert ContentTypes
for efficiently detecting client-side connection closure.
This has been described here:

https://www.ietf.org/mail-archive/web/tls/current/msg21123.html


The IETF technical leadership is supposed to prevent backwards-incompatible
changes to be adopted and standardized that provide *ZERO* benefit,
but severely impair interop and consumption.


-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-10-28 Thread Martin Rex
Ilari Liusvaara wrote:
> Martin Rex wrote:
>> Joseph Salowey wrote:
>> 
>> There are two seriously backwards-incompatible changes in the
>> current proposal that provide zero value, but completely break
>> backwards-compatibility with existing middleware infrastructure.
>> 
>> 
>> (1) hiding of the TLS record content types.
>> Please leave the TLS record types (handshake/AppData/Alert/CCS)
>> clearly visible on the outside of the TLS records, so that
>> middleware protocol parsers (which interface to transport-free
>> TLS protocol stacks) can continue to work, and continue to
>> work efficiently.
> 
> Hiding the types does have its benefits (and it is also used for
> zero-overhead padding scheme).

Nope, ZERO benefits.  But it totally breaks the middleware
_at_the_endpoints_!


> 
> And also, TLS 1.3 handshake is so darn different from TLS 1.2, that
> you couldn't do anything sane even if you had record types.

Wrong.

If one is using an architecture where the TLS protocol stack is
transportless, so that the network communication can be performed
efficiently (coalescing TLS records that are trickling in), then
the *REAL* content type is quite important for knowing whether
the TLS handshake is still ongoing, or whether it is already
complete.

The way I've built this is that the middleware has a timeout for
the TLS handshake in its entirety (independent of the number of
roundtrips), and at the same time promises the application a
network readable event for every incoming TLS record with
application data.  This only works if I can leave TLS appdata
records partially in the incoming network buffer, and for this
I must be able to recognize them.

For processing TLS records with Handshake messages, pre-reading and
passing multiple of them is preferable and much more efficient
(if TLS handshake messages come in separate TLS records each, which
some implementations do).  Pre-reading TLS records with handshake messages,
but not pre-reading TLS records with AppData (so that network readable
events will remain visible for app data) is only possible if I see the
contents on the outside of the record by just reading the TLS record header.
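
For TLS up to v1.2 this amounts to inspecting five plaintext bytes,
sketched here with invented names:

```python
# Peek at the first buffered TLS record without decrypting anything:
# byte 0 is the ContentType (20=CCS, 21=Alert, 22=Handshake,
# 23=AppData), bytes 3-4 the record body length.

RECORD_HEADER_LEN = 5
CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert",
                 22: "handshake", 23: "application_data"}


def peek_record(buf):
    """Return (content_type, total_record_len) for the first complete
    record in buf, or None while the header or body is still partial."""
    if len(buf) < RECORD_HEADER_LEN:
        return None
    length = (buf[3] << 8) | buf[4]
    if len(buf) < RECORD_HEADER_LEN + length:
        return None
    return CONTENT_TYPES.get(buf[0], "unknown"), RECORD_HEADER_LEN + length
```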


-Martin



Re: [TLS] Working Group Last Call for draft-ietf-tls-tls13-18

2016-10-28 Thread Martin Rex
If the server_name remains in plaintext and full sight in ClientHello
(where it needs to be for TLSv1.2 backwards compatibility anyway),
then I don't have an issue.  (I'm sorry for not reading the draft in full).


Eric Rescorla wrote:
> 
>> (2) hiding of the TLS extension SNI.
>> Right now it is perferctly fine to implement TLS extensions SNI
>> on the server completely outside the TLS protocol stack to route
>> to single-cert SNI-unaware backends.  The current proposal
>> suggest to move TLS extension SNI into the encrypted part, if
>> my superficial reading of the draft is correct, so TLSv1.3
>> will not fly with existing architectures where spreading of
>> TLS requests on the server-side based on TLS extension SNI
>> is done outside of the TLS protocol stack (i.e. bottleneck-less
>> without having to open TLS).
> 
> 
> This isn't quite right. In RFC 6066, the client sends its server_name
> extension in ClientHello and the server responds with an empty
> server_name in its ServerHello to indicate that it accepted SNI.

Yes, I know that rfc6066 suggests the server respond with an empty SNI
extension.  This kind of server response is a complete waste,
and server-side SNI works just fine with the server not returning
an empty SNI extension.


> 
>A server that receives a client hello containing the "server_name"
>extension MAY use the information contained in the extension to guide
>its selection of an appropriate certificate to return to the client,
>and/or other aspects of security policy.  In this event, the server
>SHALL include an extension of type "server_name" in the (extended)
>server hello.  The "extension_data" field of this extension SHALL be
>empty.
> 
> In TLS 1.3, the client's extension remains where it is, but the server's
> extension is in EncryptedExtensions. This shouldn't interfere with
> configurations such as the one you describe, as the server already
> needed to insert the SNI field itself and hash it into Finished.

Nope, the server doesn't need to insert anything at all.  The empty
TLS extension SNI in ServerHello is completely superfluous.

If this is really about "hiding the empty TLS extension SNI response",
while leaving the actual server_name in full sight in the cleartext
ClientHello, why not just drop the ServerHello TLS extension SNI
response and be done with it?  It really has no functional value other
than information discovery for scanners (the bad guys).
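
For reference, the rfc6066 acknowledgement under discussion is just
four bytes on the wire -- extension type 0 (server_name) with a
zero-length body -- sketched here:

```python
# Build and recognize the empty server_name extension that rfc6066 has
# the server place in its (extended) ServerHello: two bytes of
# extension type (0x0000) and two bytes of length (0x0000).

SERVER_NAME_TYPE = 0x0000


def empty_sni_extension():
    return SERVER_NAME_TYPE.to_bytes(2, "big") + (0).to_bytes(2, "big")


def is_empty_sni_extension(ext):
    etype = int.from_bytes(ext[0:2], "big")
    elen = int.from_bytes(ext[2:4], "big")
    return etype == SERVER_NAME_TYPE and elen == 0
```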


-Martin



Re: [TLS] SNI and Resumption/0-RTT

2016-10-25 Thread Martin Rex
Kyle Nekritz wrote:
>
> I do think this should be allowed if the client is satisfied with the
> previous identities presented. We currently allow resumption across
> domains supported by our wildcard certificate (I believe this is fairly
> common practice), and our clients take advantage of this to improve their
> resumption rate. In regards to the referenced paper, I don't think this
> is any more dangerous than the wildcard certificate itself as a full
> handshake would succeed anyway. I don't think it's entirely necessary
> for the server to opt-in to this either, if the server wants it can
> simply reject the resumption attempt.

I think it is a bad idea to look at this purely from the perspective
of whether this represents an obvious attack vector.

And there are two entirely *independent* decisions involved.

  (1) whether the TLS client proposes resumption for a session
  (i.e. client-side cache management)

  (2) whether the TLS server agrees to a proposed resumption
  or whether it performs a full handshake instead

And there are _different_ security trade-offs in these two distinct
decisions.

As I previously described my position, I'm perfectly OK with a server
performing a resumption if the full handshake would cause the server
to send/present the very same TLS server certificate as in the full
handshake that created the session that is proposed for resumption
... and that is actually the behaviour which I implemented, and
what comes out naturally if you implement TLS extension SNI support
_outside_ of the TLS stack on the server side.
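
That rule can be sketched as follows (hypothetical names; the
certificate-selection helper stands in for whatever the server would
normally do on a full handshake):

```python
# Agree to resume a session only if a full handshake for the proposed
# SNI hostname would select the very same server certificate that was
# used in the full handshake which created the cached session.

def select_cert(sni, certs_by_name, default_cert):
    """The server's ordinary certificate selection: exact-name match
    first, otherwise the default (e.g. wildcard) certificate."""
    return certs_by_name.get(sni, default_cert)


def may_resume(sni, session_cert, certs_by_name, default_cert):
    return select_cert(sni, certs_by_name, default_cert) == session_cert
```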

However, I believe that a server agreeing to resumption with a
different SNI hostname is a use case that, with a sensible generic
TLS client, should never actually occur in practice--except maybe
for bugs or design flaws in the client-side session cache management.

The client does _not_ know which TLS server certificates the server has
available, and what criteria it will apply for selecting one or the other.
The existence of a wildcard certificate does not unconditionally preclude
existence of host-specific certificates for specific services that are
technically covered by the wildcard.  I really dislike seemingly
non-deterministic behaviour, and therefore try to avoid it as much
as possible in whatever I implement.

The decision to accept a particular server certificate for one specific
hostname/target does not (should not) necessarily apply to *each* other
possible servername covered/conveyed by that server certificate.

Special-casing stuff makes the behaviour also difficult to comprehend
for end-users / consumers (and implementers get it wrong more easily).
What if the server certificate is "manually" confirmed by the end-user
(for whatever reason: it's self-signed/untrusted, or from DANE rather
than PKIX)--should that "acceptance" still/also transcend to all other
hostnames (and why, or why not)?

My client-side TLS session cache management (which I implemented above
the TLS stack), uses the target hostname as one of the session cache
lookup parameters, and I don't think it would be sensible to propose
arbitrary sessions for resumption.
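
A cache along those lines is little more than the following
(illustrative names; port added for completeness):

```python
# Client-side session cache keyed by the target hostname (and port),
# so a cached session is only ever proposed for resumption against the
# host it was originally established with.

_session_cache = {}


def cache_session(host, port, session):
    _session_cache[(host, port)] = session


def propose_session(host, port):
    """Return a cached session for exactly this (host, port), if any;
    never propose a session created for a different hostname."""
    return _session_cache.get((host, port))
```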

The performance overhead for a full handshake per hostname is completely
negligible (and if the server operator cares, he could simply avoid
spreading content over distinct server hostnames).

What I found painful instead is the server-side behaviour implemented
by Microsoft IIS / SChannel in the past: when configured for optional
client certificates, the server exhibits amnesia towards
certificate-less clients (or at least it did so in the past),
forcing each client without a client cert through a full renegotiation
handshake after resumption on every new connection, failing to memorize
that the resumed session was created by a renegotiation in which the
server asked for a client cert and the client turned down that request.


-Martin



Re: [TLS] SNI and Resumption/0-RTT

2016-10-21 Thread Martin Rex
Ilari Liusvaara wrote:
> On Fri, Oct 21, 2016 at 11:41:59PM +1100, Martin Thomson wrote:
>> On 21 October 2016 at 19:55, Ilari Liusvaara  
>> wrote:
>>> Of course, defining the "same certificate" is
>>> way trickier than it initially seems
>> 
>> Not if you think simplistically: same octets in EE ASN1Cert
>> in both handshakes.
> 
> Such behaviour would run into problems with certificate renewal.

Just the opposite.  You definitely want full handshake on
certificate renewal.

I don't know how common it is in TLS servers (and TLS clients) to
allow replacing of TLS certificates in "full flight".  I implemented
this in ours about 10 years ago, and I'm flushing the session cache
after loading of the new/updated cert, so that every new handshake
will result in a full handshake rather than session resume (ongoing
connections continue to use the old/previous certificate until
closed by the application).
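
The cache-flush-on-reload behaviour described above might be sketched
like this (hypothetical API, assuming a simple in-memory session cache):

```python
# Sketch: flushing the server-side session cache when a new certificate
# is loaded, so every subsequent handshake is a full handshake.
# Illustrative names; not from a specific TLS stack.
class ServerTLSConfig:
    def __init__(self, cert):
        self.cert = cert
        self.session_cache = {}  # session_id -> opaque session object

    def resume(self, session_id):
        return self.session_cache.get(session_id)

    def reload_certificate(self, new_cert):
        self.cert = new_cert
        # Invalidate all resumable sessions tied to the old certificate;
        # ongoing connections keep using the old cert until closed.
        self.session_cache.clear()
```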

-Martin



Re: [TLS] SNI and Resumption/0-RTT

2016-10-21 Thread Martin Rex
Andrei Popov wrote:
>
> Perhaps it's OK to resume a session with a different SNI if in this
> session the server has proved an identity that matches the new SNI.
> In order to enforce this, the server would have to cache (or save in
> the ticket) a list of identities it presented in each resumable session?

The current wording in rfc6066 may be slightly confusing about what is
actually important and why.

On a session-resumption attempt, the server ought to perform a full
handshake whenever a full handshake would result in the selection & use
of a _different_ TLS server certificate than the one used for the
original full handshake.
This is a direct consequence of the principle of least surprise.

This is also the most backwards-compatible behaviour when upgrading the
server from a does-not-support-SNI to a supports-SNI state/implementation.

You do *NOT* want to have session caching interfere with the
server certificate that a client gets to see, because that would
essentially result in not-quite-deterministic server behaviour.

Sometimes there are bugs in client-side session caching that lead
clients to propose the wrong session for resumption; the server
falling back to a full handshake then results in interoperable,
deterministic and secure behaviour.
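
The decision rule argued for here can be sketched as (illustrative
Python; the certificate-selection table and session representation are
assumptions):

```python
# Sketch: resume only if a full handshake would select the same server
# certificate; otherwise fall back to a full handshake, so the client
# always sees a deterministic certificate.  Illustrative only.
def handshake_kind(proposed_session, sni_name, cert_for_name, default_cert):
    # Certificate a full handshake would pick for this SNI name.
    selected = cert_for_name.get(sni_name, default_cert)
    if proposed_session is not None and proposed_session["cert"] == selected:
        return "resume"
    return "full"
```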


-Martin



Re: [TLS] Deprecating alert levels

2016-10-19 Thread Martin Rex
Kyle Nekritz wrote:
> 
>> This list is already missing the warning-level "unrecognized_name" alert,
>> and such a change would imply that all new/unrecognized alerts are going
>> to be treated as fatal forever (i.e. that no new warning-level alerts
>> can ever be defined).
> 
> That alert is currently defined as a fatal alert (see section 6.2 in the
> current draft).  RFC 6066 also states "It is NOT RECOMMENDED to send a
> warning-level unrecognized_name(112) alert, because the client's behavior
> in response to warning-level alerts is unpredictable.", which I think
> illustrates the problem. Allowing new non-fatal alerts to be added later
> would require that existing clients ignore unknown warning alerts,
> which I think is somewhat dangerous.

It seems that rfc6066 is not clear enough in explaining the situation
with the two WELL-DEFINED (but poorly implemented) variants of the
TLS alert:

  (1)  unrecognized_name(112)  level WARNING
  (2)  unrecognized_name(112)  level FATAL

See the *ORIGINAL* specification which created *BOTH* of these alert variants:

https://tools.ietf.org/html/rfc3546#page-10


   If the server understood the client hello extension but does not
   recognize the server name, it SHOULD send an "unrecognized_name"
   alert (which MAY be fatal).


-Martin



Re: [TLS] CertficateRequest extension encoding

2016-10-10 Thread Martin Rex
Geoffrey Keating wrote:
> 
> A typical macOS system will have many issued certs, typically with at
> most one that will work for any particular web site or web API.  So
> the filter is somewhat important for client certs to work there in any
> kind of user-friendly way.  In particular if the server provides no
> guidance, the UI will ask the user, presenting a dialog containing
> many certificates the user is not aware they have, leading to complete
> user confusion.

In the past, Safari on macOS entirely ignored the server-asserted contents of
certificate_authorities in the TLS CertificateRequest handshake message,
and would offer *all* possible client certs to the user.  Has this
bug been fixed in Safari?  I remember customer messages where clients
were refused that were erroneously sending AppleID client certs...


-Martin



Re: [TLS] Industry Concerns about TLS 1.3

2016-09-28 Thread Martin Rex
Martin Rex wrote:
> Stephen Farrell wrote:
> > 
> > On 28/09/16 01:17, Seth David Schoen wrote:
> > > People with audit authority can then know all of the secrets,
> > 
> > How well does that whole audit thing work in the financial services
> > industry?  (Sorry, couldn't resist:-)
> 
> I am actually having serious doubts that it works at all.
> 
> Consider a scenario that uses TLSv1.2 with static-RSA key exchange,
> plain old session caching and Microsoft style renego-client-cert-auth
> on a subset of the urlspace.
> 
> (1) first TLS session, full handshake, request to public area.
> 
> (2) TLS session resume, request to non-public area -> renego
> 
> (3) TLS session resume for renego'ed session to non-public area.
> 
> 
> To obtain the cleartext of session (3), you'll need the master secret
> of the renego'ed session from (2), for which you'll first have to locate
> and decrypt (2), for which you need the master secret from (1), so you'll
> have to locate (1), and only at (1) you can start opening the encryption
> with the longterm private RSA key of the server.
> 
> It is impossible to open (3) directly, and the ClientKeyExchange
> handshake message (and client randoms) that created the master secret
> of session (3) is encrypted during renegotiation, so one can not
> directly recover that with the longterm private RSA key of the server,
> but has to open (2) first.

And it might even be more difficult than that.

Because the Server Hello (with the server-issued new session_id)
is encrypted during renegotiation, one can not see which renegotiation
created the session that is resumed in (3), and may have to decrypt
several renegotiation handshakes in order to find the correct one
which created the session_id (3).

-Martin



Re: [TLS] Industry Concerns about TLS 1.3

2016-09-28 Thread Martin Rex
Stephen Farrell wrote:
> 
> On 28/09/16 01:17, Seth David Schoen wrote:
> > People with audit authority can then know all of the secrets,
> 
> How well does that whole audit thing work in the financial services
> industry?  (Sorry, couldn't resist:-)

I am actually having serious doubts that it works at all.

Consider a scenario that uses TLSv1.2 with static-RSA key exchange,
plain old session caching and Microsoft style renego-client-cert-auth
on a subset of the urlspace.

(1) first TLS session, full handshake, request to public area.

(2) TLS session resume, request to non-public area -> renego

(3) TLS session resume for renego'ed session to non-public area.


To obtain the cleartext of session (3), you'll need the master secret
of the renego'ed session from (2), for which you'll first have to locate
and decrypt (2), for which you need the master secret from (1), so you'll
have to locate (1), and only at (1) you can start opening the encryption
with the longterm private RSA key of the server.

It is impossible to open (3) directly, and the ClientKeyExchange
handshake message (and client randoms) that created the master secret
of session (3) is encrypted during renegotiation, so one can not
directly recover that with the longterm private RSA key of the server,
but has to open (2) first.


-Martin



Re: [TLS] Industry Concerns about TLS 1.3

2016-09-28 Thread Martin Rex
Judson Wilson wrote:
> 
> I think this challenge is best solved by putting the information on the
> wire in some way, possibly as a special industry-specific extension (used
> only by those who are bent on shooting themselves in the foot). The benefit
> being that if the TLS channel is alive, the session information is
> available to the monitor.  Just as a strawman, the client could transmit
> session info in special records, encrypted by a public key, and the
> monitoring equipment could scoop these up. For compatibility with servers
> outside the network, a middlebox could somehow filter out these records.
> 
> It sounds like the need is large enough that such an effort is feasible,
> and it would be good to keep normal TLS 1.3 unambiguously forward secure.
> (There IS still the question of how to make sure that the extension is not
> enabled in endpoints it shouldn't be.)


Whoa there.  What you're describing is essentially the
Clipper-Chip & Skipjack key-escrow scheme:

https://en.wikipedia.org/wiki/Skipjack_(cipher)


I'm sorry, but the IETF decided back then that it doesn't want
to standardize such technology:

https://tools.ietf.org/html/rfc1984


I'm sorry, but I'm still violently opposed to the IETF endorsing
backdooring of security protocols.


-Martin



Re: [TLS] Industry Concerns about TLS 1.3

2016-09-26 Thread Martin Rex
Pawel Jakub Dawidek wrote:
> 
> Because of that, every corporate network needs visibility inside TLS
> traffic not only incoming, but also outgoing, so they can not only
> debug, but also look for data leaks, malware, etc.

There may be some countries with poor civil-liberty protections
where such activities (employee communication surveillance) have
not been criminalized yet, but at least in the European Union,
there is EU Directive 2002/58/EC, which requires member states to
criminalize such surveillance.  In Germany, this was criminalized
with the 2004 update of the TKG (Telekommunikationsgesetz) and
can get an employer up to a 5-year prison term.

And no, there cannot be any valid regulations requiring such
monitoring, because _every_ exception to the secrecy provisions and
their criminalization requires an explicit law from the parliamentary
legislator.

"regulations" are issued by parts of the government (executive power),
and the German national law (TKG) and the German constitution (GG)
formally excludes the executive power from defining/creating exceptions
to telecommunication secrecy.


-Martin



Re: [TLS] Industry Concerns about TLS 1.3

2016-09-26 Thread Martin Rex
Thijs van Dijk wrote:
> 
> Regular clients, no.
> But this would be a useful addition to debugging / scanning suites (e.g.
> Qualys), or browser extensions for the security conscious (e.g. CertPatrol).

With the FREAK and LOGJAM attacks, there is a significant difference in
attack effort between servers using a static private (DH or temporary RSA)
key vs. a truly ephemeral key.  But the security checks of "vulnerability
scanners" do not seem to test whether the server presents the
same public key on multiple handshakes.

Generation of truly ephemeral DH keys for every full handshake is IMO
quite expensive for 2048+ bit DH.  The reason I like Curve25519
is that generation of ephemeral keys is cheap.

-Martin



Re: [TLS] Suspicious behaviour of TLS server implementations

2016-09-09 Thread Martin Rex
My personal take on your questions:


Andreas Walz wrote:
> 
> (1) Several server implementations seem to ignore the list of proposed
> compression methods in a ClientHello and simply select null compression
> even if that has not been in the ClientHello's list.

Sounds like reasonable behaviour (improving interop) which does not cause
any security issues.

>
> The specification is rather clear that null compression MUST be part of
> the list.  However, I'm not aware of any clear statement about what a
> compliant server should do in case it receives a ClientHello without
> null compression. My best guess would have been that in such cases the
> server should abort the handshake (at least if it does not support
> whatever the client proposed).

The requirement is on the client; the server behaviour is un(der)specified.
The server choosing null compression anyway and continuing looks pretty
reasonable to me (least amount of code, so the safest to implement).

Aborting would be permitted by the specification, but is not
necessarily reasonable.  Consider that there was a time when compression
within TLS was available and widely used.  A server that wanted and would
use TLS compression when offered would behave somewhat irrationally in
complaining about the absence of a compression method it does not intend
to use (in the presence of the compression method it wants to use).

The actual problem is the design flaw in TLS that the availability of
null compression is not implied, but rather given a separate codepoint;
a server that chooses null compression and continues even when it
is not explicitly asserted by the client is silently making up
for that design flaw in the TLS spec.
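
A lenient server's compression-method selection, per the reasoning
above, might look like this (sketch; the strict variant is shown only
for contrast, and the alert name appears as exception text):

```python
NULL_COMPRESSION = 0x00

def select_compression(offered, strict=False):
    # RFC 5246 requires the client to offer null compression.
    if NULL_COMPRESSION in offered:
        return NULL_COMPRESSION
    if strict:
        # A spec-pedantic server could abort on the missing codepoint.
        raise ValueError("illegal_parameter: null compression not offered")
    # Lenient path: choose null anyway, silently compensating for the
    # design flaw that null's availability is not implied.
    return NULL_COMPRESSION
```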


> 
> (2) In a ClientHello several server implementations don't ignore data
> following the extension list. That is, they somehow seem to ignore the
> length field of the extension list and simply consider everything
> following the list of compression methods as extensions.  Aside from this
> certainly being a deviation from the specification, I was wondering
> whether a server should silently ignore data following the extension
> list (e.g. for the sake of upward compatibility) or (as one could infer
> from RFC5246, p. 42) send e.g. a "decode_error" alert.

Up to TLSv1.2, TLS extensions were purely optional, so an implementation
that unconditionally ignores everything following compression methods is
at least fully conforming to SSLv3, TLSv1.0 and TLSv1.1.

For a TLS implementation that parses TLS extensions, the behaviour of
what to do about trailing garbage is a different matter.  Personally
I prefer aborting in case of garbage trailing the TLS extensions.

When I looked at OpenSSL's implementation of the extension parser a few
years ago, I noticed that it was ignoring (I believe up to 3) trailing bytes.
(the shortest possible TLS extension is 4 bytes).
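
A strict extensions parser of the kind preferred here -- aborting on
trailing garbage rather than ignoring it -- could be sketched as
(simplified Python; real stacks parse out of the full ClientHello, and
alert names are carried here as exception text):

```python
import struct

def parse_extensions(blob):
    """Parse a TLS extensions block: a 2-byte list length followed by
    type(2) + length(2) + data entries.  Aborts on trailing garbage."""
    if len(blob) < 2:
        raise ValueError("decode_error: truncated extensions length")
    (list_len,) = struct.unpack(">H", blob[:2])
    if 2 + list_len != len(blob):
        raise ValueError("decode_error: trailing garbage after extensions")
    exts, off, end = [], 2, 2 + list_len
    while off < end:
        if off + 4 > end:
            raise ValueError("decode_error: truncated extension header")
        ext_type, ext_len = struct.unpack(">HH", blob[off:off + 4])
        off += 4
        if off + ext_len > end:
            raise ValueError("decode_error: truncated extension body")
        exts.append((ext_type, blob[off:off + ext_len]))
        off += ext_len
    return exts
```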


>
> (3) If a ClientHello contains multiple extensions of the same type,
> several server implementations proceed with the handshake (even if they
> parse these specific extensions). The specification again is clear that
> "there MUST NOT be more than one extension of the same type".
> However, what should a server do in case there are? Again, my guess
> would be that it should abort the handshake. Should this also be the
> case for extensions that a server simply ignores (as it e.g. doesn't
> know them)?

What a server does in the presence of multiple TLS extensions of the same
type is implementation-defined.  I think it would be extremely reasonable
for a server to perform a simple plausibility check while decoding, detect
when it is decoding the same TLS extension more than once, and abort with
a decode_error alert, in order to catch ambiguities early.
Recognizing duplicate TLS extensions that the server does not
support/implement does not (yet) create ambiguities (for that server)
and requires more complex code, so I would not implement that check.
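
The plausibility check suggested above -- abort on a duplicate of a
supported extension type while deliberately not tracking unknown
types -- might be sketched as (illustrative Python, operating on an
already-parsed list of (type, data) tuples):

```python
def check_no_duplicates(extensions, known_types):
    """Raise on a duplicate of a supported extension type (decode_error
    as exception text); duplicates of unknown types are deliberately
    not tracked, matching the simpler implementation argued for above."""
    seen = set()
    for ext_type, _data in extensions:
        if ext_type in known_types:
            if ext_type in seen:
                raise ValueError(
                    "decode_error: duplicate extension %d" % ext_type)
            seen.add(ext_type)
```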


-Martin



Re: [TLS] PR#625: Change alert requirements

2016-09-07 Thread Martin Rex
Salz, Rich wrote:
>
> I've been reading this.
> 
> I think we should get rid of the "abort" concept.  There's a clean
> shutdown and there's everything else which is an abrupt or unclean
> closing of the connection.  The "send alert" and "close connection"
> concepts are separable and I think we should do that.
> 
> I think writing things this way will make it more clear.
> And then we can bikeshed over which alerts are MAY MUST SHOULD,
> knowing all along that ECONNRESET means the other side gave up.

For TLS handshake failures, the presence of Alerts on-the-wire can
significantly facilitate troubleshooting.

For terminating application data flows after successful TLS handshakes
an ECONNRESET may actually be preferable to graceful TLS closure alerts
in certain scenarios (but this means that applications will have to
perform proper application-level end-of-data signaling).

If the backend uses a multi-tier architecture (such as a reverse proxy
in a DMZ), then it's easier to notice (and process) an ECONNRESET than
a TLS closure alert, and to cancel processing of the active request
in the backend.

This becomes even more important when the stupid idea of hiding the
content type is not removed from TLSv1.3/TLSv2.0.  Recognizing
ECONNRESET is trivial.  Peeking and recognizing a pending TLS Alert
is doable in TLS up to v1.2, but this is going to be a royal PITA
in an Alert-ContentType-concealing TLSv1.3/TLSv2.0.


-Martin



Re: [TLS] PR#625: Change alert requirements

2016-09-07 Thread Martin Rex
Andrei Popov wrote:
>>> the only popular stack I found that does not seem to send alerts is 
>>> the schannel from Microsoft
> 
> To clarify, schannel does generate alerts per RFC, but the HTTP stack
> (which actually owns the socket) sees no value in sending them.

"Pillows don't hit people, people do." ;-)

When operating with a transport-less (i.e. opaque-PDUs-only) API,
it is somewhat unusual to have an API call _fail_ *and* return output
parameters & transport requests at the same time.  Higher-layer programmers
(and exception-style programming) often do not expect having to
deal with both.

I just now noticed that Microsoft Win32 SSPI DeleteSecurityContext()
(Microsoft's incarnation of GSS-API) does not have a final/context-deletion
token output parameter.

In the original (DEC given to IETF) GSS-API design, the "final token"
was not meant to be emitted by a failing context iterator call along with
a fatal error code, but by a _successful_ call to gss_delete_sec_context().

I'm actually confused--where does SChannel return that TLS alert PDU
(and with what kind of API return code)?



Admittedly, for applications on top of GSS-API (rfc2743/rfc2744) it is
quite common to _not_ bother conveying the optional(!) context deletion
token that may be produced by GSS-API's gss_delete_sec_context(), and
the GSS-APIv2 spec acknowledged this being a common app behaviour,
and deprecated the context deletion token.

-Martin



Re: [TLS] TLS 1.3: Deterministic RSA-PSS and ECDSA

2016-08-10 Thread Martin Rex
Tony Arcieri wrote:
>
> It's also worth noting that BERserk is one of many such incidents of this
> coming up in practice:
> https://cryptosense.com/why-pkcs1v1-5-signature-should-also-be-put-out-of-our-misery/

With the PKCS#1 v1.5 signature verification operation,
as described in PKCS#1 v2.0 (rfc2437, Oct-1998, Section 8.1.2)

https://tools.ietf.org/html/rfc2437#section-8.1.2

it is *IMPOSSIBLE* to create an implementation with a bug such
as BERserk, because there is (on purpose) *NO* ASN.1 decoding step
defined for this signature verification.
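
The encode-and-compare verification of RFC 2437 Section 8.1.2 can be
sketched as follows (Python, assuming SHA-256 for concreteness and
omitting the RSAVP1 public-key operation itself): the verifier
re-encodes the expected EMSA-PKCS1-v1_5 message and compares octet
strings, so no ASN.1 *decoding* -- and hence no BERserk-style parser
laxness -- is involved.

```python
import hashlib

# DER prefix of the SHA-256 DigestInfo structure (a fixed constant).
SHA256_DIGESTINFO = bytes.fromhex("3031300d060960864801650304020105000420")

def emsa_pkcs1_v15_encode(message, em_len):
    h = hashlib.sha256(message).digest()
    t = SHA256_DIGESTINFO + h
    if em_len < len(t) + 11:
        raise ValueError("intended encoded message length too short")
    ps = b"\xff" * (em_len - len(t) - 3)
    return b"\x00\x01" + ps + b"\x00" + t

def verify(em_from_rsavp1, message, em_len):
    # Encode-and-compare: build the one valid encoding and compare
    # octet strings, instead of parsing the recovered message.
    return em_from_rsavp1 == emsa_pkcs1_v15_encode(message, em_len)
```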


A useful specification that is almost 2 decades old does not
protect from clueless implementors, however.

Heartbleed was likewise not part of the underlying specification.
Nevertheless, some very seriously broken code, for a completely useless
feature (within TLS, not DTLS), was created and shipped into
large parts of the installed base...


-Martin



Re: [TLS] TLS 1.3: Deterministic RSA-PSS and ECDSA

2016-08-09 Thread Martin Rex
Tony Arcieri wrote:
> On Monday, August 8, 2016, Martin Rex <m...@sap.com> wrote:
> >
> > The urban myth about the advantages of the RSA-PSS signature scheme
> > over PKCS#1 v1.5 keeps coming up.
> 
> Do you think we'll see real-world MitM attacks against RSA-PSS in TLS
> similar to those we've seen with PKCS#1v1.5 signature forgery, such as
> BERserk?

BERserk is an implementation defect, not a crypto weakness.

-Martin



Re: [TLS] TLS 1.3: Deterministic RSA-PSS and ECDSA

2016-08-08 Thread Martin Rex
Hanno Böck wrote:
> 
> Actually there is some info on that in the PSS spec [1]. What I write
> here is my limited understanding, but roughly I'd interpret it as this:
> It says that if you use a non-random salt the security gets reduced to
> the security of full domain hashing, which was kinda the predecessor of
> PSS.
> I'd conclude from that that even in a situation where the salt
> generation is a non-random value nothing really bad happens. The
> security of a PSS scheme without randomness is still better than that
> of a PKCS #1 1.5 signature.

The urban myth about the advantages of the RSA-PSS signature scheme
over PKCS#1 v1.5 keeps coming up.

It has been mentioned here before:

Fedor Brunner wrote on 4 Mar 2016 17:45:19:
> 
> Please see the paper "Another Look at ``Provable Security''" from Neal
> Koblitz and Alfred Menezes.
> 
> https://eprint.iacr.org/2004/152
> 
> Section 7: Conclusion
> 
> "There is no need for the PSS or Katz-Wang versions of RSA;
> one might as well use just the basic "hash and exponentiate" signature
> scheme (with a full-domain hash function)."


The advantages of the RSA-PSS signature scheme are limited to situations
where the rightful owner of the private signing key is not supposed
to have access to the bits of the private key (i.e. key kept in hardware).

-Martin



Re: [TLS] weird ECDSA interop problem with cloudflare/nginx

2016-07-26 Thread Martin Rex
Viktor Dukhovni wrote:
> 
>> On Jul 25, 2016, at 3:08 PM, Martin Rex <m...@sap.com> wrote:
>> 
>> specifically, after the FF update, this new TLS ciphersuite:
>> 
>>   security.ssl3.ecdhe_ecdsa_aes_128_gcm_sha256  (0xcc, 0xa9)
>> 
>> was the only ECDSA cipher suite enabled in my Firefox 47.0.1, and this
>> kills connectivity (TLS handshake_failure alert) with regmedia.co.uk.
> 
> OpenSSL lists "CC, A9" as:
> 
> 0xCC,0xA9 - ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA 
> Enc=CHACHA20/POLY1305(256) Mac=AEAD
> 
> Which is not AES_128_GCM.  The IANA registry seems to agree:
> 
> https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4
> 
>   0xCC,0xA9   TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256   Y   
> [RFC7905]


Sorry for the confusion about the cipher suite.

The issue seems a little weirder than what I thought, because the
failure seems to happen only for a particular cipher suite combo
(which happens to be the combo produced by my own Firefox config):

I can repro the handshake failure with openssl-1.1.0-pre5 with this
command line:

Failure:
openssl s_client -connect regmedia.co.uk:443 -cipher ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305

Success:
openssl s_client -connect regmedia.co.uk:443 -cipher ECDHE-RSA-AES128-GCM-SHA256

Success:
openssl s_client -connect regmedia.co.uk:443 -cipher ECDHE-ECDSA-CHACHA20-POLY1305



-Martin



Re: [TLS] weird ECDSA interop problem with cloudflare/nginx

2016-07-26 Thread Martin Rex
Correction--

I'm sorry, I mistyped the Firefox config; this should have said that
the chacha20_poly1305 (0xcc 0xa9) cipher suite was the only one enabled.


 
Martin Rex wrote:
> I've just run into a weird interoperability problem with an (alleged)
> cloudflare/nginx TLS server and my personal Firefox settings.
> 
> https://regmedia.co.uk/2015/07/14/giant_weta_mike_locke_flicker_cc_20.jpg
> 
> 
> Traditionally I have all TLS ciphersuites with ECDSA disabled through
> about:config, but it seems that recently two new TLS ciphersuites were
> added to FF, which caused complete loss of interop with regmedia.co.uk
> for me with my existing configuration. (Loss of pictures on the
> www.theregister.co.uk news site).
> 
> specifically, after the FF update, this new TLS ciphersuite:
> 
>security.ssl3.ecdhe_ecdsa_aes_128_gcm_sha256  (0xcc, 0xa9)

security.ssl3.ecdhe_ecdsa_chacha20_poly1305_sha256  (0xcc, 0xa9)
 
> was the only ECDSA cipher suite enabled in my Firefox 47.0.1, and this
> kills connectivity (TLS handshake_failure alert) with regmedia.co.uk.
> 
> It looks like a bug in the cloudflare/nginx cipher suites selection
> algorithm, which appears to blindly go for ECDSA, even though there is
> no actual ECDSA cipher suite available which the server supports.
> 
> 
> -Martin



[TLS] weird ECDSA interop problem with cloudflare/nginx

2016-07-25 Thread Martin Rex
I've just run into a weird interoperability problem with an (alleged)
cloudflare/nginx TLS server and my personal Firefox settings.

https://regmedia.co.uk/2015/07/14/giant_weta_mike_locke_flicker_cc_20.jpg


Traditionally I have all TLS ciphersuites with ECDSA disabled through
about:config, but it seems that recently two new TLS ciphersuites were
added to FF, which caused complete loss of interop with regmedia.co.uk
for me with my existing configuration. (Loss of pictures on the
www.theregister.co.uk news site).

specifically, after the FF update, this new TLS ciphersuite:

   security.ssl3.ecdhe_ecdsa_aes_128_gcm_sha256  (0xcc, 0xa9)

was the only ECDSA cipher suite enabled in my Firefox 47.0.1, and this
kills connectivity (TLS handshake_failure alert) with regmedia.co.uk.

It looks like a bug in the cloudflare/nginx cipher suites selection
algorithm, which appears to blindly go for ECDSA, even though there is
no actual ECDSA cipher suite available which the server supports.


-Martin



Re: [TLS] HTTP, Certificates, and TLS

2016-07-21 Thread Martin Rex
Martin Thomson wrote:
> On 21 July 2016 at 18:41, Martin Rex <m...@sap.com> wrote:
>>A server that implements this extension MUST NOT accept the request
>>to resume the session if the server_name extension contains a
>>different name.  Instead, it proceeds with a full handshake to
>>establish a new session.
> 
> If that's the only barrier to doing this, I'd be surprised.  The
> prospect of having to overcome this is not at all daunting.

No, that is only the tip of an iceberg, and you're going Titanic here.

Really, this is about TLS session cache management (which is something
very close to TLS) vs. Endpoint identification, i.e. interpreting
end-entity certificates -- which is something that is explicitly
outside of the scope of TLS (e.g. rfc2818 and rfc6125).


Could you please describe the approach to session cache management that
you're conceiving here?  In the original TLS architecture (up to TLSv1.2)
TLS sessions are read-only after creation, and identities (certificates)
are locked down.  Forgetting to cryptographically bind the communication
identities into the session properties allowed the triple-handshake-attack.


If you want to change any session properties (including certificates),
you MUST perform a new full handshake, which creates a new session with
new properties and a new session ID / session cache entry.

Session resumption (and session resumption proposal) should operate
based on requested properties (is there an existing session with the
requested properties?) and this is conceptually independent from the
app-level endpoint identification (such as rfc2818/rfc6125).


The wording in rfc6066 is not optimal.  It would have been better to say:
whenever a full handshake would result in the selection of a different
server certificate, the server MUST perform a full handshake,
in order to produce predictable/deterministic behaviour that is
not side-effected by session-cache management / session-cache lifetime
effects.  The principle of least surprise.


-Martin



Re: [TLS] HTTP, Certificates, and TLS

2016-07-21 Thread Martin Rex
Mike Bishop wrote:
> 
> That means we now have a proposal for carrying both client and server
> certificates above TLS, found at
> https://tools.ietf.org/html/draft-bishop-httpbis-http2-additional-certs.
> 
> We have also discussed that it might be preferable to pull part of this
> capability back into TLS,

You are facing a MUST NOT in rfc6066 for this particularly bad idea.

I'm currently wondering what kind of (weird) TLS session caching strategy
would actually allow you to create such client or server behaviour.
You're definitely in severe conflict with the "principle of least surprise"
in respect to deterministic behaviour of your TLS clients and TLS servers.

-Martin



Re: [TLS] Thoughts on Version Intolerance

2016-07-20 Thread Martin Rex
Hubert Kario wrote:
> Martin Rex wrote:
>>
>> Forget TLS extensions, forget ClientHello.client_version.
>> Both are fundamentally broken, and led to web browsers coming up
>> with the "downgrade dance" that is target of the POODLE attack.
>> 
>> We know fairly reliably what kind of negotiation works just fine:
>> TLS cipher suite codepoints.
> 
> please re-read my mail, they don't:
> 
> 49% (6240) are intolerant to a Client Hello with no extensions but
> big number of ciphers that bring its size to 16388 bytes)
> 91.5% (11539) are intolerant to a Client Hello with no extensions
> but a number of ciphers that bring it well above single record layer limit
> (16.5KiB)

You're seriously confusing things here.

Any ClientHello with > 200 Cipher suite code points indicates fairly insane
Client behaviour, so rejecting it is _perfectly_sane_ server behaviour.

Trying to support theoretical encoding size limits is a stupid idea,
because it leads to endless security problems.  Imposing sane sizes
plus a safety margin is solid implementation advice.

Large stuff that doesn't need to be exchanged in abbreviated handshakes
should *NEVER* be included in ClientHello, because of the performance
penalties this creates (Network bandwidth for TLS handshake,
and TCP slow start).


> 
>>> I'm now also collecting some data and have some preliminary
>>> suspicion on affected devices. My numbers roughly match yours that we
>>> are in the more or less 3% area of 1.3 intolerance.
>> 
>> The TLSv1.2 version intolerance is already a huge problem,
>> and I'm not seeing it go away.  Actually Microsoft created an
>> awfully large installed base of TLSv1.2-intolerant servers
>> (the entire installed base of Win7 through Win8.1 aka 2008R2, 2012, 2012R2).

Please recheck with a vanilla (aka extension-free) ClientHello that
has ClientHello.client_version = (3,3), to recognize all TLSv1.2-intolerant
implementations in your counts.


>> 
>> I would really like to see the TLS WG improving the situation
>> rather than keep sitting on its hands.  The problem has been well-known
>> since 2005.  And the "downgrade dance" was a predictably lame approach
>> to deal with the situation, because it completely subverts/evades the
>> cryptographic protection of the TLS handshake.
> 
> it's not IETF's fault that the implementers add unspecified by IETF
> restrictions and limitations to parsers of Client Hello messages or that
> they can't handle handshake messages split over multiple record layer
> messages, despite the standard being very explicit in that they MUST
> support this.

Nope, not really.  Limiting PDU sizes to reasonably sane values is
perfectly valid behaviour.  X.509v3 certificates can theoretically include
cat MPEGs and amount to megabytes.  A TLS implementation that limits
the certificate chain (i.e. the TLS Certificate handshake message) to
a reasonably sane size with a safety margin, say 32 KBytes in total,
is acting totally reasonably.  Anyone who creates an insane PKI deserves
to lose, and deserves to lose quite badly.


-Martin



Re: [TLS] Thoughts on Version Intolerance

2016-07-20 Thread Martin Rex
Hanno Böck wrote:

> Hubert Kario  wrote:
> 
>> so it looks to me like while we may gain a bit of compatibility by
>> using extension based mechanism to indicate TLSv1.3,

Forget TLS extensions, forget ClientHello.client_version.
Both are fundamentally broken, and they led to Web browsers coming up
with the "downgrade dance" that is the target of the POODLE attack.

We know fairly reliably what kind of negotiation works just fine:
TLS cipher suite codepoints.

Please define *ALL* TLSv1.3-specific cipher suites to
  a) indicate that the client offering it supports (at least) TLSv1.3
  b) that indication (a) will override any lower ClientHello.client_version
 that may have been used for backwards compatibility.
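Server-side, the proposed rule is a simple intersection test; a minimal sketch, assuming the final TLSv1.3 suite codepoints (0x1301-0x1305 from RFC 8446; the proposal predates their assignment) and a hypothetical function name:

```python
# Illustrative codepoints: the TLSv1.3 suites eventually assigned in
# RFC 8446.  At the time of this proposal the values were still TBD.
TLS13_SUITES = {0x1301, 0x1302, 0x1303, 0x1304, 0x1305}

def negotiated_version(client_version, offered_suites):
    """Sketch of the proposal: any TLSv1.3 cipher suite in the offer
    overrides a lower ClientHello.client_version."""
    if TLS13_SUITES.intersection(offered_suites):
        return (3, 4)        # TLSv1.3, regardless of client_version
    return client_version    # otherwise classic version negotiation
```

A client could then keep client_version at (3,3) or lower for backwards compatibility while still reliably signalling TLSv1.3 support.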

> 
> I'm now also collecting some data and have some preliminary
> suspicion on affected devices. My numbers roughly match yours that we
> are in the more or less 3% area of 1.3 intolerance.

The TLSv1.2 version intolerance is already a huge problem,
and I'm not seeing it go away.  Actually, Microsoft created an
awfully large installed base of TLSv1.2-intolerant servers
(the entire installed base of Win7 through Win8.1 aka 2008R2, 2012, 2012R2).


I would really like to see the TLS WG improving the situation
rather than keep sitting on its hands.  The problem has been well-known
since 2005.  And the "downgrade dance" was a predictably lame approach
to deal with the situation, because it completely subverts/evades the
cryptographic protection of the TLS handshake.


-Martin



Re: [TLS] Consensus call for keys used in handshake and data messages

2016-06-17 Thread Martin Rex
Daniel Kahn Gillmor wrote:
> On Thu 2016-06-16 11:26:14 -0400, Hubert Kario wrote:
>> wasn't that rejected because it breaks boxes that do passive monitoring 
>> of connections? (and so expect TLS packets on specific ports, killing 
>> connection if they don't look like TLS packets)
> 
> We're talking about the possibility of changing the TLS record framing
> anyway, which would kill the simplest of those boxes.  One theory is if
> you're going to make such a break, you might as well pull the band aid
> off in one fell swoop.

While I dislike monitoring boxes and hate intercepting proxies,
changing of the TLS record framing (and hiding the ContentType)
is going to break _the_endpoints_.  If TLSv1.3 does that, its
adoption curve will make IPv6 adoption appear fast by comparison.

Please stop messing with the TLS record format.

-Martin



Re: [TLS] Downgrade protection, fallbacks, and server time

2016-06-06 Thread Martin Rex
The IMO most reasonable way forward would be to side-step the
TLS version negotiation through ClientHello.client_version
entirely, because of the well-known interop problems.

Simply use the presence of *ANY* TLSv1.2+ TLS cipher suite in the
offered list of TLS cipher suites as an indication that a TLS client is
capable and willing to actually _use_ TLSv1.2, even when
ClientHello.client_version might indicate (3,2) or (3,1) for
better interop.  The same approach will work for TLSv1.3,
where the presence of a TLSv1.3 cipher suite could be used to tell the server
that the client is capable and willing to talk TLSv1.3, even when
ClientHello.client_version might indicate a lower version of the
protocol.

Only the client is in the position to decide whether aborting
the TLS handshake and retrying with a more feature-creeping ClientHello
could provide additional benefits to the client (that it previously
didn't offer, for whatever reason), and only the client is in the
position to know how many TLS handshake attempts with which properties
it previously attempted, and when to better stop retrying automatically
and silently and to perform risk management (such as warning user/admin
that something odd is going on).
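The client-side bookkeeping this implies can be sketched as a small retry policy; the class name, attempt limit, and warning hook are all hypothetical choices, not from any specification:

```python
class HandshakeRetryPolicy:
    """Illustrative client-side retry bookkeeping.

    Only the client knows how many handshake attempts it has made
    and with which properties, so only the client can decide when
    to stop retrying silently and raise an alarm instead.
    """

    def __init__(self, max_attempts=2):
        self.attempts = []          # versions offered in failed attempts
        self.max_attempts = max_attempts

    def record_failure(self, offered_version):
        self.attempts.append(offered_version)

    def should_retry(self):
        if len(self.attempts) >= self.max_attempts:
            self.warn_admin()       # perform risk management instead
            return False
        return True

    def warn_admin(self):
        print("warning: repeated TLS handshake failures; "
              "something odd may be going on")
```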


To get a smooth migration to using newer TLS protocol versions, we
first need to define a scheme that lets new implementations recognize
each other without upsetting the installed base.  We know what works,
the code points already exist, we will just have to align the documented
semantics to provide a more forward-interoperable and more reasonable
behaviour.



Viktor Dukhovni wrote:
> 
>> David Benjamin  wrote:
>> 
>> I'm not sure I follow. The specification certainly spells out how
>> version negotiation is supposed to work. That hasn't stopped servers
>> from getting it wrong. Fundamentally this is the sort of thing where
>> bugs don't get noticed until we make a new TLS version, and we don't
>> do that often enough to keep rust from gathering.
> 
> A better way to keep rust from gathering is to not institutionalize fallback,
> and force the broken sites to deal with the issue.

It's not the sites, but rather the software vendor providing the
underlying TLS implementation.  Sometimes you don't actually have
a choice or an alternative to using what exists.


>
> While 2% is noticeable, you can probably drive 1.3 version intolerance
> out of the ecosystem relatively quickly if Chrome implements fallback
> for a limited time (say 6 months after TLS 1.3 RFC is done) and with
> a diminishing probability (60% first month, 10% less each month
> thereafter), season to taste.

There exist various different flavours of TLS version intolerance,
and the number of defective servers out there is probably much larger.

The entire installed base of Windows 2008R2 and Windows 2012R2
is TLS version intolerant with respect to TLSv1.2.

If you propose TLSv1.2 inside an SSLv2Hello to such a server,
it will negotiate TLSv1.1 (rather than TLSv1.2).  If you send
an extensionless SSLv3 Hello proposing TLSv1.2, these Windows
servers will choke and close the network connection.  An extension-less
SSLv3 Hello with a client_version of TLSv1.1 or TLSv1.0 will succeed.


-Martin



Re: [TLS] TLS weakness in Forward Secrecy compared to QUIC Crypto

2016-04-11 Thread Martin Rex
Salz, Rich wrote:
>>  In MinimaLT, the current ephemeral key for the server is added to
>> the DNS record fetched during the DNS lookup.  These entries expire fairly
>> quickly, ensuring that old keys are never used.
> 
> Can you compare the TTL of the ephemeral key record with the
> A/AAAA record TTL?  Are they related?  If someone can get phony
> records into DNS, can they then become the real MLT server?  For how long?


Admittedly I don't know anything about MLT, but your question indicates
what might be a serious misunderstanding about DNSSEC.

The TTL of a DNS record is *NOT* protected by DNSSEC: it can be
regenerated at will by an attacker, and will be regenerated by intermediate
DNS servers; its purpose is purely cache management, *NOT* security.

Only the "Signature Expiration" information in the RRSIG
is protected by DNSSEC, and only that ensures expiry of information
from DNS.

https://tools.ietf.org/html/rfc4034#section-3.1
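The expiry check this implies can be sketched in a few lines; a minimal sketch assuming the RFC 4034 field encoding (a 32-bit count of seconds since 1970-01-01, compared with serial-number arithmetic per RFC 1982 because it wraps):

```python
import time

def rrsig_expired(sig_expiration: int, now: int = None) -> bool:
    """Check an RRSIG Signature Expiration field (RFC 4034, 3.1.5).

    The field is a 32-bit unsigned count of seconds since the epoch;
    because it wraps, the comparison uses serial-number arithmetic
    (RFC 1982) rather than a plain integer comparison.
    """
    if now is None:
        now = int(time.time())
    # Serial-number comparison modulo 2^32: the signature has expired
    # when the expiration time is "before" now in serial arithmetic.
    diff = (sig_expiration - now) & 0xFFFFFFFF
    return diff > 0x7FFFFFFF
```

This is the check that actually bounds the lifetime of DNSSEC-signed data; the TTL plays no part in it.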

-Martin



Re: [TLS] Are the AEAD cipher suites a security trade-off win with TLS1.2?

2016-03-19 Thread Martin Rex
Alexandre Anzala-Yamajako wrote:
>
> IMO, the layer creating the plaintext shouldn't have to pad it for security
> that's the job of the TLS layer.

Yep.  And retrofitting random padding into TLS (all protocol versions, all
PDUs) could be actually pretty simple and straightforward.

http://www.ietf.org/mail-archive/web/tls/current/msg11626.html
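One way to retrofit this into the existing GenericBlockCipher PDU is to randomize the already-permitted CBC padding length instead of always using the minimum; a sketch under that assumption (the distribution chosen here is illustrative, not the scheme from the linked proposal):

```python
import secrets

def padding_length(plaintext_len: int, block_size: int = 16) -> int:
    """Pick a randomized CBC padding length for a GenericBlockCipher PDU.

    TLS already permits up to 255 padding bytes (plus the
    padding_length byte); choosing the amount at random rather than
    the minimum hides exact plaintext lengths from an observer.
    """
    # Minimum padding so that content + padding + padding_length byte
    # lands on a block boundary.
    min_pad = (block_size - ((plaintext_len + 1) % block_size)) % block_size
    # Add a random number of extra whole blocks, staying <= 255 bytes.
    max_extra_blocks = (255 - min_pad) // block_size
    extra = secrets.randbelow(max_extra_blocks + 1)
    return min_pad + extra * block_size
```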

-Martin



Re: [TLS] Are the AEAD cipher suites a security trade-off win with TLS1.2?

2016-03-18 Thread Martin Rex
Colm MacCárthaigh wrote:
> 
> But I take the point that AEAD modes are harder for programmers to screw
> up; and that does have value.

Though it is a pretty flawed assumption.

I've seen an AEAD cipher implementation fail badly just recently (resulting
in corrupted plaintext that went unnoticed within TLS -- MACing the
ciphertext is obviously a pretty dumb idea), something that is *MUCH* more
unlikely to happen with any cipher suite using the GenericBlockCipher PDU.

Pretty much all of the known crypto attacks are highly theoretical and
meaningless in practice, whereas corrupted plaintext is an immediate
real pain in the ass.

I'm glad that the problem was spotted before the affected code was shipped.

-Martin



Re: [TLS] RSA-PSS in TLS 1.3

2016-03-04 Thread Martin Rex
Fedor Brunner wrote:
> 
> Please see the paper "Another Look at ``Provable Security''" from Neal
> Koblitz and Alfred Menezes.
> 
> https://eprint.iacr.org/2004/152
> 
> Section 7: Conclusion
> 
> "There is no need for the PSS or Katz-Wang versions of RSA;
> one might as well use just the basic 'hash and exponentiate' signature
> scheme (with a full-domain hash function)."


Thanks a million for adding some clue to this discussion!

-Martin


