Hi Dean,
I wanted to comment on your suggestions:
> 1) Everything SHOULD be encrypted, unless there is an absolute
> operational requirement not to. This means "encryption by default" in
> new protocols, and not even specifying unencrypted operations modes
> unless necessary. Older protocol specs still in use should be revised to
> require encryption. Deprecate the non "s" versions of protocols.
I guess there are two issues here, namely:
* End-to-end vs. Hop-by-hop (or stuff in between)
* Encryption itself is often not the problem but rather the key management
As you have seen from my post about the VoIP stuff, it is actually not
so easy to say what exactly has to be done in which situations, since
our protocols are a bit more complex...
So, you will have to expand a bit. Maybe you also want to explain the
choice of a SHOULD vs. a MUST.
>
> 2) Well-known ports should be avoided. Or overloaded to the point
> where the port number is no longer a significant indicator to the
> application. This gives rise to the "everything over 443" mentality,
> which we need to find a way to handle gracefully. Demuxing the service
> within the channel is a better idea than I used to think.
That does not make sense to me. We are already heading in the direction
of everything running on 443 with TLS (from a standardization point of
view), though not necessarily on the deployment side (since otherwise we
wouldn't need efforts like those from the EFF).
You might also find it interesting to hear that demultiplexing HTTP 2.0
from earlier versions will be done based on information in the TLS
handshake, and that the TLS group decided it prefers a solution that
reveals the type of application and rejected a proposal for hiding it.
Here are the two proposals:
http://datatracker.ietf.org/doc/draft-ietf-tls-applayerprotoneg/
http://tools.ietf.org/html/draft-agl-tls-nextprotoneg-04
Maybe it would be worthwhile to revisit the decision?
(A side remark: I was at that meeting and pointed out that this is a
privacy decision and folks in the room said that this has nothing to do
with privacy....)
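To make the demultiplexing point concrete, here is a rough sketch in
Python (purely illustrative; the host name and protocol list are just
placeholders). The ALPN extension rides in the ClientHello, which is
sent in the clear, so an on-path observer can read the offered
application protocol names even though the application data itself is
encrypted.

import socket
import ssl

# Offer two application protocols via ALPN; the identifiers are
# visible in the cleartext ClientHello.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print("negotiated application protocol:",
              tls.selected_alpn_protocol())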
>
> 3) Packet sizes should be variable, preferably random. This is the
> opposite of the "discover the MTU and fill every packet" model of
> efficiency. Or, we could make all packets the same fixed size by padding
> small ones. I like random better, but there might well be some hardware
> optimizations around fixed packet sizes.
Ok. Sounds reasonable to have that option. I know that IPsec has the
ability to add padding.
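For what it's worth, here is a toy sketch in Python (illustrative only,
not tied to IPsec or any particular protocol; the bucket size and the
length-prefix framing are made up) of the two padding strategies: pad
every message up to a fixed size, or append a random amount of padding.

import os
import secrets

BUCKET = 1024  # hypothetical fixed payload size

def pad_fixed(payload: bytes) -> bytes:
    """Pad every message up to the same fixed bucket size."""
    if len(payload) > BUCKET:
        raise ValueError("payload larger than bucket")
    padding = os.urandom(BUCKET - len(payload))
    return len(payload).to_bytes(4, "big") + payload + padding

def pad_random(payload: bytes, max_extra: int = 512) -> bytes:
    """Append a random amount of padding instead."""
    padding = os.urandom(secrets.randbelow(max_extra + 1))
    return len(payload).to_bytes(4, "big") + payload + padding

def unpad(message: bytes) -> bytes:
    """Strip the padding again using the length prefix."""
    length = int.from_bytes(message[:4], "big")
    return message[4:4 + length]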
>
> 4) Every protocol spec needs to include a pseudonymous usage model,
> and most should include an anonymous usage model.
Makes sense to me (at least for protocols that are potentially run by
end devices). For some protocols I guess it is less useful (thinking
about routing protocols).
Here is the challenge: if I look at SIP, then we certainly have that
option, but
a) you will have to get providers to implement it, and
b) the functionality often conflicts with other privacy features.
For example, you may not want to be interrupted by a phone call when
you do not know the person on the other end.
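Just to illustrate what "that option" looks like in SIP, a rough sketch
(the header values are made up; the conventions come from RFC 3323,
with the "id" privacy value from RFC 3325):

# An anonymized SIP request: the From header carries no user identity,
# and the Privacy header asks the provider's privacy service to
# withhold identity information.
anonymous_headers = {
    "From": '"Anonymous" <sip:[email protected]>;tag=1928301774',
    "Privacy": "id",
}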
>
> 5) New protocols should be built around end-to-end crypto rather than
> relying on transport-level wrappers for everything. It's too easy to use
> a compromised CA-cert to dynamically build a TLS proxy cert. Some level
> of key delivery out-of-band, coupled to in-band footprint verification,
> is probably needed. zRTP is a good model.
I think they should have both since the functions provided are actually
different.
ZRTP is not a good model if you don't know the voice of the other
person. Not all communication is (a) between persons, (b) between
persons who use voice communication, and (c) between persons who know
each other.
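To spell out why ZRTP leans on the voice channel: both ends derive a
short authentication string from the key agreement and the users read
it to each other, so the check only works if you can recognize the
other person's voice. A toy illustration in Python (not the actual SAS
derivation from RFC 6189; the wordlist and label are invented):

import hashlib

# Tiny stand-in wordlist; real ZRTP renders the SAS as base32 or
# PGP word-list values.
SAS_WORDS = ["alpha", "bravo", "charlie", "delta",
             "echo", "foxtrot", "golf", "hotel"]

def short_auth_string(shared_secret: bytes) -> str:
    """Derive a short human-comparable string from the key agreement.

    Both endpoints compute this locally and the users compare the
    result over the voice channel; a man in the middle would cause a
    mismatch, but the check assumes the users can recognize each
    other's voice."""
    digest = hashlib.sha256(b"sas" + shared_secret).digest()
    return " ".join(SAS_WORDS[b % len(SAS_WORDS)] for b in digest[:2])

print(short_auth_string(b"example shared secret"))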
>
> 6) Randomizing interpacket timing is useful. This does all sorts of
> horrible things to both TCP optimization and the jitter buffers in
> real-time communications. But it's worth it. Remember,
> surveillance-resistance is MORE IMPORTANT than efficiency.
Need to think about that.
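The mechanism itself is trivial; the hard part is the cost you mention.
A toy sketch in Python (the send callable and the delay bound are made
up) of inserting random inter-packet delays:

import random
import time

def send_with_random_timing(send, packets, max_delay_s=0.05):
    """Insert a random delay before each packet to blur timing patterns.

    This is exactly the trade-off above: the added delay and jitter
    hurt TCP pacing and real-time media, in exchange for making
    traffic analysis based on packet timing harder."""
    for packet in packets:
        time.sleep(random.uniform(0.0, max_delay_s))
        send(packet)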
>
> 7) Peer-to-peer, DTN, and peer-relay (TURN, for example) all have
> lessons we should learn. So does TOR.
In the case of Tor we could certainly learn something about
fingerprinting avoidance. I am not sure what lessons you have learned
from the other efforts. Our IETF p2p efforts are still in a dying state,
since the entire industry has unfortunately changed its preferred
communication model in the meantime from p2p to client-to-server for
pretty much everything.
In my VoIP blog post I argued that TURN doesn't actually give you any
additional privacy protection if the adversary is a powerful
eavesdropper or the VoIP provider itself. It only helps when you want to
hide your IP address from the other communicating party, as Shida
explained in his SIP privacy RFC.
>
> 8) Every piece of crypto-advice needs serious, multiparty,
> international, and aggressive review. No more documents authored by NSA
> shills (which Schneier says we seem to have).
I agree with you about the standardization aspects (regarding openness
and transparency). The problem is that in the Web world we are
unfortunately heading in a different direction, as we (= the IAB) tried
to explain some time ago with the 'post-standardization' plenary
(+ document). I am not sure yet how best to tackle that story (and
unfortunately I am not the only one lacking suggestions).
On the second suggestion I don't think you are serious. We obviously
have documents co-authored by NSA employees (see
http://www.arkko.com/tools/allstats/c_nsa.html), but first, I dislike
excluding people (since openness is the whole point of having an open
standards process), and second, where do you stop excluding? We have
people who contract for the NSA, we have people who work at government
organizations (like NIST), and we have companies that work on government
contracts (like BBN).
Ciao
Hannes