[TLS] Delegated Credentials and Lawful Intercept

2019-11-01 Thread Florian Weimer
Would it be possible to use delegated credentials to address lawful
intercept concerns, similar to eTLS?

Basically, the server operator would issue a delegated credential to
someone who has to decrypt or modify the traffic after intercepting
it, without having to disclose that backdoor in certificate
transparency logs.

And in a data center scenario, perhaps people feel more comfortable
loading those short-term credentials into their monitoring equipment.

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] ESNIKeys over complex

2018-11-21 Thread Florian Weimer
* Paul Wouters:

> On Wed, 21 Nov 2018, Stephen Farrell wrote:
>
>>> We currently permit >1 RR, but
>>> actually
>>> I suspect that it would be better to try to restrict this.
>>
>> Not sure we can and I suspect that'd raise DNS-folks' hackles,
>> but maybe I'm wrong.
>
> I think the SOA record is the only exception allowed (and there
> is an exception to that when doing AXFR I believe)
>
> Usually these things are defined as "pick the first DNS RRTYPE
> that satisfies you".

Not sure what you mean by that (RRTYPE?).

The DNAME algorithm (RFC 6672) only works if there is a single DNAME
record for an owner name.  RFC 1034 is also pretty clear that only one
CNAME record is permitted per owner name.

To be honest, I don't expect much opposition from DNS people, as long as
the DNS layer itself is not expected to reject multiple records.  If the
higher-level protocol treats non-singleton RRsets as a hard error, I
expect that would be fine.

DNS treats RRsets as an atomic unit, so there is no risk here that a
zone file change ends up producing a multi-record RRset due to caching.
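If it helps, the enforcement I have in mind is tiny.  A minimal sketch
(the names are mine, and the RRset is assumed to arrive as an
already-parsed list of records):

```python
class NonSingletonRRset(ValueError):
    """Raised when a singleton record type unexpectedly has >1 record."""


def require_singleton(rrset):
    # The DNS layer hands over the RRset atomically; it is the
    # higher-level protocol, not DNS itself, that treats a
    # non-singleton RRset as a hard error.
    if len(rrset) != 1:
        raise NonSingletonRRset(
            "expected exactly 1 record, got %d" % len(rrset))
    return rrset[0]
```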

Thanks,
Florian



Re: [TLS] network-based security solution use cases

2017-11-05 Thread Florian Weimer
* Nancy Cam-Winget:

> @IETF99, awareness was raised to some of the security WGs (thanks
> Kathleen ☺) that TLS 1.3 will obscure visibility currently afforded in
> TLS 1.2 and asked what the implications would be for the security
> solutions today.
> https://tools.ietf.org/html/draft-camwinget-tls-use-cases-00 is an
> initial draft to describe some of the impacts relating to current
> network security solutions.  The goal of the draft is NOT to propose
> any solution as a few have been proposed, but rather to raise
> awareness to how current network-based security solutions work today
> and their impact on them based on the current TLS 1.3 specification.

I'm not sure if this approach is useful, I'm afraid.  The draft is
basically a collection of man-in-the-middle attacks many people would
consider benign.  It's unclear where the line is drawn: traffic
optimization/compression and ad suppression/replacement aren't
mentioned, for example, and I would expect both to be rather low on
the scale of offensiveness.

What the draft is essentially arguing is that many users cannot afford
end-to-end encryption for various reasons, some legal, some technical,
some political.  But it seems to me that this is currently not a
viewpoint shared by the IETF.



Re: [TLS] Publication of draft-rhrd-tls-tls13-visibility-00

2017-10-17 Thread Florian Weimer

On 10/13/2017 02:45 PM, Stephen Farrell wrote:

So the problems with that are numerous but include:

- there can be >1 carol, (and maybe all the carols also need to
   "approve" of one another), if we were crazy enough to try do
   this we'd have at least:
   - corporate outbound snooper
   - data-centre snooper (if you buy those supposed use-cases)
   - government snooper(s) in places where they don't care about
 doing that openly
   ...port 80 would suddenly be quicker than 443 again;-(


And any authorized eavesdropper must not be able to infer whether they
are the only ones listening in.


I don't understand why this complicated approach is needed.  Why can't 
the server provide an OOB interface to look up session keys, or maybe 
export them proactively?  The proposed draft needs a protocol like this 
anyway, because SSWrapDH1 keys need to be distributed, and periodic key 
regeneration is needed because it is the only way to implement 
revocation of access privileges without revealing the existence of other 
authorized parties.


I don't buy the argument that there are too many session keys for 
proactive export.  Obviously, you already have sufficient capacity to 
send these keys (or an equivalent) over the wire once, so sending 
another copy or two shouldn't be a problem.
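As a sketch of what I mean by proactive export (purely illustrative;
the helper functions are mine, but the line format is the NSS key log
format that TLS stacks can already emit, e.g. via SSLKEYLOGFILE):
monitoring equipment would index exported secrets by client_random and
look up each observed session, instead of holding a long-term key.

```python
def parse_keylog(lines):
    """Build a {(label, client_random): secret} index from
    NSS key log lines of the form: LABEL CLIENT_RANDOM SECRET."""
    index = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        label, client_random, secret = line.split()
        index[(label, client_random)] = secret
    return index


def lookup_secret(index, client_random, label="CLIENT_TRAFFIC_SECRET_0"):
    """Resolve the traffic secret for one observed session, if exported."""
    return index.get((label, client_random))
```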


Thanks,
Florian



Re: [TLS] Industry Concerns about TLS 1.3

2016-10-05 Thread Florian Weimer
* BITS Security:

> Deprecation of the RSA key exchange in TLS 1.3 will cause significant
> problems for financial institutions, almost all of whom are running
> TLS internally and have significant, security-critical investments in
> out-of-band TLS decryption.
>  
> Like many enterprises, financial institutions depend upon the ability
> to decrypt TLS traffic to implement data loss protection, intrusion
> detection and prevention, malware detection, packet capture and
> analysis, and DDoS mitigation.

We should have already seen this with changing defaults in crypto
libraries as part of security updates.  That should have broken
passive monitoring infrastructure, too.

Maybe some of the vendors can shed some light on this problem and tell
us if they ever have received pushback for rolling out
ECDHE-by-default.  (I know that some products have few capabilities
for centralized policy management, which is why defaults matter a lot
there.)



Re: [TLS] debugging tools [was: Industry Concerns about TLS 1.3]

2016-10-05 Thread Florian Weimer
* Hubert Kario:

> those secret keys are on the client machines and they will stay on client 
> machines
>
> making it hard to extract master key from process memory is just security 
> through obscurity, not something that will stop a determined attacker

I think extracting the master key is probably not what you want to do
anyway, just adding Systemtap probes to get cleartext copies
(preferably along with connection detail information) should be
sufficient.



Re: [TLS] Data volume limits

2016-01-04 Thread Florian Weimer
On 01/04/2016 12:59 PM, Hubert Kario wrote:
> On Monday 28 December 2015 21:08:10 Florian Weimer wrote:
>> On 12/21/2015 01:41 PM, Hubert Kario wrote:
>>> if the rekey doesn't allow the application to change authentication
>>> tokens (as it now stands), then rekey is much more secure than
>>> renegotiation was in TLS <= 1.2
>>
>> You still have the added complexity that during rekey, you need to
>> temporarily switch from mere sending or receiving to at least
>> half-duplex interaction.
> 
> this situation already happens in initial handshake so the 
> implementation needs to support that

But after the handshake, and without real re-key, sending and
receiving operations exactly match what the application requests.  If
you need to switch directions against the application's wishes, you end
up with an API like OpenJDK's SSLEngine (or a callback variant of
equivalent complexity).

Dealing with this during the initial handshake is fine.  But supporting
direction-switching after that is *really* difficult.

Florian



Re: [TLS] Data volume limits

2016-01-04 Thread Florian Weimer
On 12/28/2015 10:09 PM, Salz, Rich wrote:
>> When the key is changed, the change procedure should involve new randomness. 
> 
> I don't think this is necessary, and I don't think the common crypto 
> expertise agrees with you, either. But I am not a cryptographer, maybe one of 
> the ones on this list can chime in.
> 
> "Crank the KDF" suffices.

The attacks against GCM are at the stage where even “periodically
increment the key by one” would thwart them, right?

The risk is that without real re-key (introducing additional
randomness), someone might come up with a better attack that reduces the
security level below the design target, and which requires similar
effort as the existing GCM attack (four years of traffic at terabit
speed, it seems).

Real re-key is difficult to introduce as an afterthought (see my recent
response to Hubert), and I'd rather see such issues fixed at the cipher
level if at all possible.  The current update-key mechanism doesn't have
the complexity issue of real re-key, but it's ambiguous if it's a design
goal to paper over cipher deficiencies in the rest of the protocol.
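For reference, the update-key mechanism is just the RFC 8446
derivation application_traffic_secret_N+1 =
HKDF-Expand-Label(secret_N, "traffic upd", "", Hash.length); no fresh
randomness enters.  A standard-library sketch (helper names are mine):

```python
import hashlib
import hmac
import struct


def hkdf_expand(prk, info, length, hash_name="sha256"):
    """HKDF-Expand (RFC 5869), using only the standard library."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hash_name).digest()
        okm += block
        counter += 1
    return okm[:length]


def hkdf_expand_label(secret, label, context, length):
    """HKDF-Expand-Label as in RFC 8446, section 7.1."""
    full_label = b"tls13 " + label
    info = (struct.pack("!H", length)
            + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)


def next_traffic_secret(current):
    """One KeyUpdate step: deterministic, no new randomness."""
    return hkdf_expand_label(current, b"traffic upd", b"", len(current))
```

Anyone observing secret_N can compute every later secret, which is
exactly why I call it cheating from a forward-looking perspective.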

Florian



Re: [TLS] Data volume limits

2016-01-04 Thread Florian Weimer
On 01/04/2016 01:19 PM, Hubert Kario wrote:

>> Dealing with this during the initial handshake is fine.  But
>> supporting direction-switching after that is *really* difficult.
> 
> yes, this is a bit more problematic, especially for one-sided transfers. 
> For example, when one side is just sending a multi-gigabyte transfer as 
> a reply to a single command - there may be megabytes transferred before 
> the other side reads our request for rekey and then our "CCS" message

Yes, this is the issue I meant.  I simply don't see a way to re-inject
new randomness without a round-trip.  (Key update without new randomness
doesn't face this challenge, but then it's mostly cheating.)

Florian



Re: [TLS] Data volume limits

2015-12-28 Thread Florian Weimer
On 12/28/2015 09:11 PM, Eric Rescorla wrote:

>> You still have the added complexity that during rekey, you need to
>> temporarily switch from mere sending or receiving to at least
>> half-duplex interaction.
>>
> 
> That's not intended. Indeed, you need to be able to handle the old key
> in order to send/receive the KeyUpdate. Can you elaborate on your concern?

Ah, so you want to keep the current mechanism and not inject fresh
randomness?  Isn't this fairly risky?

Florian



Re: [TLS] Data volume limits

2015-12-28 Thread Florian Weimer
On 12/21/2015 01:41 PM, Hubert Kario wrote:

> if the rekey doesn't allow the application to change authentication 
> tokens (as it now stands), then rekey is much more secure than 
> renegotiation was in TLS <= 1.2

You still have the added complexity that during rekey, you need to
temporarily switch from mere sending or receiving to at least
half-duplex interaction.

Florian



Re: [TLS] Should we require implementations to send alerts?

2015-09-17 Thread Florian Weimer
On 09/16/2015 09:53 PM, Brian Smith wrote:

> Assume the client and the server implement the mandatory-to-implement
> parameters and that both the client and the server are otherwise
> conformant. In this scenerio, when would an alert other than the non-fatal
> close_notify be sent?

I have been told that mandatory-to-implement does not mean
mandatory-to-enable, and that it is possible to run a nominally
RFC-conforming client or server in a mode which is not interoperable
with anything else.  Under such a scenario, fatal alerts happen without
an attack.

Most fatal alerts in the wild appear to be harmless in the sense that
they result not from attacks but from interoperability failures
(mandatory-to-implement cipher suites left disabled, self-signed
certificates, incomplete certificate chains, or plain bugs).

-- 
Florian Weimer / Red Hat Product Security
