On Sep 10, 2013, at 1:43 PM, Hannes Tschofenig <[email protected]> 
wrote:

> Hi Dean,
> 
> 
> I wanted to comment on your suggestions:
> 
> > 1) Everything SHOULD be encrypted, unless there is an absolute operational 
> > requirement not to. This means "encryption by default" in new protocols, 
> > and not even specifying unencrypted operations modes unless necessary. 
> > Older protocol specs still in use should be revised to require encryption. 
> > Deprecate the non "s" versions of protocols.
> 
> 
> I guess there are two issues here, namely:
> 
> * End-to-end vs. Hop-by-hop (or stuff in between)
> 
> * Encryption itself is often not the problem but rather the key management
> 
> 
> As you have seen from my post about the VoIP stuff, it is actually not so 
> easy to say what exactly has to be done in what situations since our 
> protocols are a bit more complex...
> 
> So, you will have to expand a bit. Maybe you also want to explain the SHOULD 
> vs. a MUST.
> 

There's no final answer here, just a trend-reversal that points us, I think, in 
a better direction.

So for example, IMAP, POP, SMTP, HTTP, SIP, RTP, all those protocols for which 
there is an "S" variant? Deprecate the non-S variants to Historic status ASAP. 
Don't write any more like them.


> >
> > 2) Well-known ports should be avoided. Or overloaded to the point where the 
> > port number is no longer a significant indicator to the application. This 
> > gives rise to the "everything over 443" mentality, which we need to find a 
> > way to handle gracefully. Demuxing the service within the channel is a 
> > better idea than I used to think.
> 
> That does not make sense. We are heading in the direction of everything 
> running on 443 with TLS (from a standardization point of view) -- not 
> necessarily on the deployment side (since otherwise we wouldn't need efforts 
> like those from EFF).

Having only one port for everything carries the same information density as 
having random ports, so that's acceptable.
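Demuxing the service within the channel can be sketched as routing on content 
rather than port. The toy classifier below is entirely my own illustration 
(the magic values are real protocol signatures, but the function is not from 
any spec):

```python
def classify(first_bytes: bytes) -> str:
    # Toy content-based demux: decide where to route a connection by what
    # the first bytes look like, rather than by the port it arrived on.
    if first_bytes.startswith(b"\x16\x03"):   # TLS record: handshake, SSL3+/TLS
        return "tls"
    if first_bytes.startswith(b"SSH-"):       # SSH identification banner
        return "ssh"
    return "unknown"
```

A front-end doing this can sit on 443 and fan out to any number of services, 
which is exactly why the port number stops carrying information.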

> 
> You might also find it interesting to hear that demultiplexing HTTP 2.0 from 
> earlier versions will be done based on information in the TLS handshake, and 
> that the TLS group had decided that they prefer a solution that reveals the 
> type of application and rejected a proposal for hiding it.

Bad decision. Question the motivations of the people who made it. Remember, 
we've been told that the security of our network is being systematically 
crippled, by design, from within. I believe that to be true, in the general 
(rather than specific) sense. On the other hand, maybe they just didn't think 
about the problem from this perspective.

> 
> Here are the two proposals:
> http://datatracker.ietf.org/doc/draft-ietf-tls-applayerprotoneg/
> http://tools.ietf.org/html/draft-agl-tls-nextprotoneg-04
> 
> Maybe it would be worthwhile to revisit the decision?
> 

I haven't followed the technical details; they MIGHT have good reasons. But if 
security/privacy/whatever really IS more important than efficiency (which 
Stephen has done me the honor of disputing), then a rethink is probably in 
order. That sounds like an IAB question that should lead to community review of 
fundamental principles.

> (A side remark: I was at that meeting and pointed out that this is a privacy 
> decision and folks in the room said that this has nothing to do with privacy….

Perhaps they didn't understand. I've found that most people don't, and that 
much of what could be explained by malice can also be explained by 
misinformation.


> )
> 

> >
> > 3) Packet sizes should be variable, preferably random. This is the opposite 
> > of the "discover the MTU and fill every packet" model of efficiency. Or, we 
> > could make all packets the same fixed size by padding small ones. I like 
> > random better, but there might well be some hardware optimizations around 
> > fixed packet sizes.
> 
> Ok. Sounds reasonable to have that option. I know that IPsec has the ability 
> to add padding.

Again, is privacy more important than efficiency?
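A minimal sketch of the fixed-size option (my own illustration; the 1024-byte 
frame and the 2-byte length prefix are arbitrary choices, not from any spec):

```python
import os

FRAME_SIZE = 1024  # hypothetical fixed frame size

def pad_fixed(payload: bytes, size: int = FRAME_SIZE) -> bytes:
    # Frame layout: 2-byte big-endian length, payload, random fill.
    # Every frame on the wire is exactly `size` bytes, so observed
    # packet lengths carry no information about the payload.
    if len(payload) > size - 2:
        raise ValueError("payload too large for fixed-size frame")
    fill = os.urandom(size - 2 - len(payload))
    return len(payload).to_bytes(2, "big") + payload + fill

def unpad(frame: bytes) -> bytes:
    n = int.from_bytes(frame[:2], "big")
    return frame[2:2 + n]
```

The random-size variant is the same idea with `size` drawn per-frame from a 
distribution, which avoids hardware assumptions about one magic number.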

> 
> >
> > 4) Every protocol spec needs to include a pseudonymous usage model, and 
> > most should include an anonymous usage model.
> 
> Makes sense to me (at least for protocols that are potentially run by end 
> devices). For some protocols I guess it is less useful (thinking about 
> routing protocols).
> 

Even with routing you might want to know that a given route update came from 
the same source as another, even if you don't know who that source REALLY is.
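As a toy illustration of that linkability-without-identity idea (entirely my 
own sketch; a real design would use asymmetric signatures such as Ed25519 
rather than the HMAC stand-in here):

```python
import hashlib
import hmac
import secrets

class PseudonymousRouter:
    # Toy stand-in: an HMAC key plays the role of the router's long-term
    # credential; its hash serves as a stable pseudonym.
    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the router
        self.pseudonym = hashlib.sha256(self._key).hexdigest()[:16]

    def announce(self, route: bytes):
        tag = hmac.new(self._key, route, hashlib.sha256).digest()
        return (self.pseudonym, route, tag)

def same_source(update_a, update_b) -> bool:
    # A receiver can link two updates to the same (unidentified) source
    # purely by pseudonym, without learning who operates the router.
    return update_a[0] == update_b[0]
```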


> Here is the challenge: If I look at SIP then we certainly have that option but
> a) you will have to get providers to implement it, and
> b) the functionality often conflicts with other privacy features.
> 
> For example, you may not want to get interrupted with a phone call when you 
> do not know the person on the other end.
> 

True, but there may be models where "I" know the person on the other end, but 
the network provider doesn't know them -- or know that I know them. That's an 
aspect of pseudonymity.

> >
> > 5) New protocols should be built around end-to-end crypto rather than 
> > relying on transport-level wrappers for everything. It's too easy to use a 
> > compromised CA-cert to dynamically build a TLS proxy cert. Some level of 
> > key delivery out-of-band, coupled to in-band footprint verification, is 
> > probably needed. zRTP is a good model.
> 
> I think they should have both since the functions provided are actually 
> different.
> 
> ZRTP is not a good model if you don't know the voice of the other person. Not 
> all communication is (a) between persons, (b) between persons who use voice 
> communication, and (c) between persons who know each other.

Other out-of-band verification models can be used. For example, if a bot's 
fingerprint is on its advertisement flyer, I can verify I'm communicating with 
the bot mentioned in the flyer rather than an impostor.
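That flyer scenario can be sketched as publishing a short digest of the bot's 
public key out of band and checking it in band (the fingerprint format below 
is made up for illustration; ZRTP itself uses short authentication strings 
spoken over the media path):

```python
import hashlib
import hmac

def fingerprint(pubkey: bytes) -> str:
    # Short, human-comparable digest of a public key (hypothetical format:
    # first 16 hex chars of SHA-256, grouped for readability).
    h = hashlib.sha256(pubkey).hexdigest()
    return ":".join(h[i:i + 4] for i in range(0, 16, 4))

def matches_flyer(pubkey_seen_in_band: bytes, printed_fp: str) -> bool:
    # Compare the key presented in the channel against the out-of-band value.
    return hmac.compare_digest(fingerprint(pubkey_seen_in_band), printed_fp)
```

An impostor's key hashes to a different fingerprint, so the check fails even 
if the impostor controls the channel and a CA.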


> >
> > 6) Randomizing interpacket timing is useful. This does all sorts of 
> > horrible things to both TCP optimization and the jitter buffers in 
> > real-time communications. But it's worth it. Remember, 
> > surveillance-resistance is MORE IMPORTANT than efficiency.
> 
> Need to think about that.

Yes, I think we all do. My opinion is that we need to internalize this 
principle right next to the hourglass model and Postel's law.
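The idea can be sketched in a few lines (my own toy; `max_delay` is an 
arbitrary knob, and a real stack would jitter at the pacing/queue layer rather 
than with sleep):

```python
import random
import time

def send_with_jitter(packets, send, max_delay=0.05):
    # Decouple wire timing from application timing: each packet waits a
    # random interval before transmission, trading latency (and TCP/jitter-
    # buffer friendliness) for resistance to traffic-timing correlation.
    for pkt in packets:
        time.sleep(random.uniform(0.0, max_delay))
        send(pkt)
```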

> 
> >
> > 7) Peer-to-peer, DTN, and peer-relay (TURN, for example) all have lessons 
> > we should learn. So does TOR.
> 
> In case of Tor we could certainly learn something about fingerprinting 
> avoidance. I am not sure about the lessons you have learned from the other 
> efforts. Our IETF p2p efforts are still in a dying state, since the entire 
> industry has unfortunately changed its preferred communication model in the 
> meantime from p2p to client-to-server for pretty much everything.

We're in the process of distributing those servers very widely, so it'll make a 
comeback.

> 
> In my VoIP blog post I argued that TURN doesn't actually give you any 
> additional privacy protection if the adversary is a powerful eavesdropper or 
> the VoIP provider itself. It only helps when you want to hide your IP address 
> towards the communication party, as Shida explained in his SIP privacy RFC.
> 

True. But if you do something like Tor to a TURN exit, and those other factors 
(variable inter-packet timing, variable packet sizes, random port numbers, 
etc.) are in place, it takes a VERY powerful attacker to correlate.

> >
> > 8) Every piece of crypto-advice needs serious, multiparty, international, 
> > and aggressive review. No more documents authored by NSA shills (which 
> > Schneier says we seem to have).
> 
> 
> I agree with you about the standardization aspects (regarding openness and 
> transparency). The problem is that with the Web world we are unfortunately 
> heading in a different direction, as we (=IAB) tried to explain some time 
> ago with the 'post standardization' plenary (+document). I am not sure yet 
> how best to tackle that story (and I am unfortunately not entirely alone with 
> my lack of suggestions).
> 
> On the second suggestion I don't think you are serious. We obviously have 
> documents co-authored by NSA employees (see 
> http://www.arkko.com/tools/allstats/c_nsa.html) but first I dislike to 
> exclude people (since that's the whole point of having an open standards 
> process) and second where do you stop excluding? We have people who are 
> contracting for the NSA, we have people who work at government organizations 
> (like NIST), we have companies who work on government contracts (like BBN).

The reality is, we'll have people with counter-privacy secret biases doing the 
work, because state security apparatuses fund them. Many are true believers in 
the missions of their apparatus.

So there is no single trusted reviewer.

The best we can hope for is large-scale consensus from multiple reviewers who 
hopefully have conflicting agendas. This is hard. But it's VITAL, to eliminate 
the taint of distrust now smeared on our systems.

Frankly, if I were a state purchasing agent outside the USA, I wouldn't want to 
buy systems that had been specced for me by NSA employees. But I might buy 
systems that had been specced by a combination of NSA, EFF, MSS, FSB, and 
Anonymous activists.

--
|Dean


_______________________________________________
perpass mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/perpass
