Greetings, all,

There has been much discussion on this list about increasing the
resistance of messaging to pervasive surveillance. This is not my area
of expertise, but it seems to me there are probably a few holes to plug
here on the specification side, and a long, long way to go on the
deployment side. Looking further, it would be nice if we could extend
these efforts to post-SMTP/IMAP messaging, but most of the deployed
technology here seems to involve rows stored in plaintext in proprietary
databases, which is even less resistant to pervasive surveillance than
anything we're talking about. That's the subject of another rant, though.

More generally, increasing the deployment of opportunistic encryption
would bring content protection and security-usability benefits to
messaging as well as to other traffic.

Both of these continue trends already underway -- we as a community have
long recognized that protecting the content of communications from
eavesdropping by third parties is, on balance, a generally desirable
goal, regardless of the motive of the eavesdropper. Beyond that, it is a
clear-cut engineering problem, and we're pretty good at solving those.
The network is apathetic to motive. What one sees as evil, another may
treasure as an essential service of the state, but a third party is a
third party no matter how you look at it.

We have had less productive conversation to date on the intersection
between surveillance and management. The tools of pervasive surveillance
are the same tools we use for passive network observation for
measurement, management, and troubleshooting. These tools are also
apathetic to motive. At the low end of the spectrum I can use tcpdump to
debug your wireless router or to read your email (if you're unfortunate
enough not to have encrypted it, that is). The network measurement and
management community has, for as long as there has been an Internet,
invested significant effort in making it as easy as possible to observe
as much of the network as possible, so this spectrum goes well beyond
tcpdump.

Objections that more content encryption will hurt the practice of
network management and network security are to some extent true, but
pointless. Simply put, if your strategy for managing or defending your
network involves observing content on the wire in plaintext, you were
already going to need a new strategy. Deep packet inspection (DPI) is
dead. HTTPS Everywhere killed it, and a cute little napkin sketch
showing every data center operator in the world where "SSL gets removed"
buried it.

Metadata is a different story. Indeed, we've just published IPFIX as
STD 77, a very nice protocol (in my humble opinion) for efficiently
moving network traffic metadata around. This is a vital tool in network
measurement, management, and accounting. Behavioral security monitoring
based thereon is part of how we're going to deal with the fact that DPI
is dead. IPFIX and similar technologies are of course just as easily
applicable to pervasive surveillance as they are to any of these endeavors.
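To make that concrete, here's a toy sketch of the kind of behavioral
monitoring flow metadata supports -- flagging a horizontal scan from
per-flow records alone, no payload required. The field names and
thresholds are invented for illustration; they are not actual IPFIX
Information Elements:

```python
from collections import defaultdict

# Toy flow records in the spirit of IPFIX: per-flow metadata only,
# no payload. Field names here are illustrative, not real IPFIX
# Information Elements.
flows = [
    {"src": "10.0.0.5", "dst": "192.0.2.1", "dport": 22, "packets": 2},
    {"src": "10.0.0.5", "dst": "192.0.2.2", "dport": 22, "packets": 2},
    {"src": "10.0.0.5", "dst": "192.0.2.3", "dport": 22, "packets": 1},
    {"src": "10.0.0.9", "dst": "192.0.2.1", "dport": 443, "packets": 140},
]

def suspected_scanners(flows, min_targets=3, max_packets=3):
    """Flag sources touching many destinations with tiny flows --
    a crude behavioral signature of horizontal scanning."""
    targets = defaultdict(set)
    for f in flows:
        if f["packets"] <= max_packets:
            targets[f["src"]].add(f["dst"])
    return {src for src, dsts in targets.items() if len(dsts) >= min_targets}

print(suspected_scanners(flows))  # -> {'10.0.0.5'}
```

The same aggregation, pointed at a different question ("who talked to
whom, when"), is of course exactly what a pervasive surveillor wants.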

Protecting communications from metadata analysis is much harder than
protecting content, since the power of inference and association
increases with the amount of metadata collected, while the space of
possible solutions is rather restricted, and involves costs in bandwidth
and latency. Onion routing is probably the most practical of these, and
has the advantage that it is already deployed and doesn't require
architectural changes. It's also not particularly resistant to pervasive
surveillance undertaken by a dedicated adversary willing to operate
middling-large portions of the overlay network, if the adversary isn't
awfully picky about who it identifies.
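For the curious, the layering that gives onion routing its name can be
sketched structurally. This toy uses plain JSON nesting where a real
onion router uses per-hop public-key encryption, so it illustrates only
the structure -- each relay peels one layer and learns the next hop,
never the full source-destination pair -- not any actual security
property:

```python
import json

def wrap(message, path):
    """Build an 'onion': the innermost layer is the message; each
    outer layer names the relay allowed to peel it. (Layering only --
    a real implementation encrypts each layer to that relay's key.)"""
    onion = {"payload": message}
    for relay in reversed(path):
        onion = {"for": relay, "inner": json.dumps(onion)}
    return onion

def peel(onion, relay):
    """A relay opens only the layer addressed to it."""
    assert onion["for"] == relay
    return json.loads(onion["inner"])

path = ["guard", "middle", "exit"]
o = wrap("hello", path)
for relay in path:
    o = peel(o, relay)
print(o["payload"])  # -> hello
```

The structure also shows why the threat above works: an adversary
operating both the first and last relays on a path can correlate what
goes in with what comes out, layers or no layers.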

I'd intended to go into more detail on the possibilities of metadata
analysis and the cost of resistance in draft-trammell-perpass-ppa-01,
but haven't found the time to do so. In thinking about it, though, I
realized we've already published guidelines for network data
anonymization -- in these terms, the practice of attempting to preserve
the utility of metadata for some analyses while reducing or eliminating
its utility for identifying individuals -- as RFC 6235. Viewed from the
right angle it's the same problem, and though the document is somewhat
IPFIX-specific, it might be of interest to those considering the problem
of metadata analysis resistance.
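As a flavor of what's in there, here's a minimal sketch of one of the
simpler techniques discussed, address truncation: zero the host bits so
that subnet-level utility survives while the individual host does not.
(The function and defaults are my own illustration, not from the RFC,
which also covers the attacks that make naive schemes insufficient.)

```python
import ipaddress

def truncate(addr, prefix_len=24):
    """Anonymize an IP address by zeroing its host bits, keeping
    only the first prefix_len bits (subnet-level information)."""
    net = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
    return str(net.network_address)

print(truncate("198.51.100.37"))       # -> 198.51.100.0
print(truncate("203.0.113.200", 16))   # -> 203.0.0.0
```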

In general, I don't think there's much win here for network- and
transport-layer metadata, simply because there's no practical universal
technique for analysis resistance (beyond the trivial solution: simply
not using the network).

There's higher-level "metadata", of course -- application-specific
identifiers that can be bound to individuals or to associations of
individuals. But the problem here isn't so much large-scale metadata
analysis as simple information leakage. This seems to be one of the
directions draft-cooper-ietf-privacy-requirements is headed, and it's
worth the effort to survey these identifiers, take a long, hard look at
what we really need to expose for manageability, and consider the
tradeoffs involved in reengineering each protocol to leak less.

Cheers,

Brian

_______________________________________________
perpass mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/perpass