Hi Stephen, Tony, a few further points inline...
On Oct 14, 2013, at 6:29 PM, Stephen Farrell <[email protected]> wrote:

> Hi Tony,
>
> (Subject lines are cheap and helpful, let's try to use
> 'em a bit better please.)
>
> On 10/14/2013 04:35 PM, Tony Rutkowski wrote:
>> Steve,
>>
>> Brian's draft defines "pervasive surveillance" as
>>> the practice of
>>> surveillance at widespread observation points, without any
>>> modification of network traffic, and without any particular
>>> surveillance target in mind.
>> There are a couple of obvious deficiencies here.
>>
>> As a starter, the definition is self-contradictory.
>> The first sentence in the introduction uses RFC 6973's
>> definition of surveillance, which is aimed at an individual,
>> and concatenates it with "pervasive" to come up with
>> something that says there "is no particular surveillance
>> target in mind." Which is it? You cannot logically
>> concatenate the two notions together.
>
> I don't see the contradiction. I'm sure wordsmithing will
> be needed of course, and we'll want a better definition
> of pervasive monitoring for sure. (See earlier comments
> on the draft in the list archive.)

Although I don't represent the clarity or quality of the draft as anything other than -00, I also don't understand what's not clear here. For those who don't have RFC 6973 open in front of them: "Surveillance is the observation or monitoring of an individual's communications or activities... [and] can be conducted by observers or eavesdroppers at any point along the communications path."

The argument is that this definition is deficient, in that it presumes an individual target. The whole conceptual framework of surveillance as an activity presumes a target. Legal surveillance requires one in order to get the necessary documents signed by the necessary oversight authority. Illegal surveillance generally has one in mind because it's cheaper that way. (One could make a case that there are indiscriminate attacks by criminal networks, e.g.
skimming keystrokes from compromised machines to search for credit-card numbers... while these are untargeted with respect to individuals, they're also not really surveillance per RFC 6973, in that specific types of data are the goal of the eavesdropping, not the communication or the activity in general.)

"Pervasive surveillance" (to mangle the RFC 6973 definition) is "the observation or monitoring of all individuals' communications or activities." Removing the concept of targeting (even if targeting is done after the fact) changes the character of the activity, both in terms of its impact on the monitored individual(s) (and -- at the risk of getting too far from the engineering -- its impact on the civil society of which the monitored individuals are presumed to be members) and in terms of the impact it has on protocol design. Specifically, in targetless surveillance, attempts not to become a target are meaningless. (Which goes back to someone's... I think it was Yoav's... stated desire to increase the cost of pervasive surveillance to the point that he dropped out of the target set, which captures nicely the level of sensitivity we have to infinite versus finite target sets.)

>> You
>
> s/You/the draft/ I guess.
>
>> also don't deal with timeframes. Most Big Data
>> implementations for all kinds of purposes acquire
>> observations and sort out the metadata. That's how
>> particular targets (e.g., purveyors of cheap
>> nuclear devices) are found, and it's mandated by law
>> under the E.U. Data Retention Directive.
>
> Timeframes are an interesting aspect to consider, I agree.

+1. I think we should assume they are for all intents and purposes infinite.

>> Additionally, all this is context dependent as there are all
>> kinds of bases for exactly this kind of activity that are
>> operational, commercial, and legal. It would also be
>> interesting to see a definition of "network."
>> Radio
>> networks have been subject to constant monitoring
>> for many decades. Fast-forwarding to SDNs and Cloud
>> Computing services renders most of these efforts
>> irrelevant.
>
> Huh? I don't get what you mean.

I'm also confused, but I'm going to take a guess. If by "cloud computing" you mean that protocols are being replaced by services, then yes, this is a problem. Nothing we can do on the network can protect against "PRISM"-class surveillance activities, by which I mean activities in which at least one endpoint of the communication (in this case, your email provider) cooperates with the observer. One could advocate the use of messaging protocols with end-to-end encryption of everything, as discussed in another thread on this list I've had to only halfway follow, as a workaround here. But then you'd have to come up with another business model to pay for email than the one we've arrived at to date.

>> Then after proffering a definition, the religious statement
>> appears: "we presume a priori that communications systems
>> should aim to provide appropriate privacy guarantees to
>> their users, and that such pervasive surveillance is therefore
>> a bad thing." "Presume a priori?"
>
> Yep. For this draft, such an a-priori assumption is ok
> I think - the point is so that when it's done (and it's a
> -00) it'll be useful for protocol designers who need to
> consider this threat model.

The "limiting assumptions" of the threat model need, I think, to be more explicitly stated as such.

> So it's not at all "religious" here (and incidentally,
> I figure such terms are purely pejorative, and not
> generally helpful).
>
>> There are innumerable
>> contexts where privacy - which is itself a socio-political-
>> legal abstraction - is not relevant or applicable.
>
> Can you enumerate some real cases where someone might
> be designing an Internet protocol and where privacy
> is irrelevant or not applicable?
>
> That kind of scoping could be useful, if such cases
> exist.
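To illustrate the end-to-end encryption workaround mentioned above: when only the endpoints hold the key, a cooperating provider can hand the observer nothing but ciphertext. This is a toy sketch only -- the SHA-256 counter-mode keystream, the key, and the message are all invented for illustration and must not be mistaken for real cryptography (a real deployment would use vetted primitives such as OpenPGP or TLS libraries):

```python
# Toy sketch (NOT real cryptography): with end-to-end encryption the
# provider stores and forwards only ciphertext; without the endpoint
# key, a cooperating provider has nothing readable to hand over.
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (illustrative only).

    The same call encrypts and decrypts, since XOR is its own inverse.
    """
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

# Hypothetical key, shared only by the two endpoints; the provider never sees it.
endpoint_key = b"shared-between-the-two-endpoints-only"
message = b"meet at noon"

ciphertext = toy_stream_cipher(endpoint_key, message)    # sender encrypts
# ... provider relays ciphertext; it observes only opaque bytes ...
recovered = toy_stream_cipher(endpoint_key, ciphertext)  # recipient decrypts

assert recovered == message
assert ciphertext != message  # the provider-visible form is not the plaintext
```

Note that even here the provider still sees who is mailing whom and when -- end-to-end encryption protects content, not metadata, which is exactly why it is only a partial answer to PRISM-class collection.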
>> Similarly, the "perfect passive adversary" definition is a
>> self-contradiction. If the observer is taking no action,
>> there is no threat by definition.

As Björn pointed out, "passive" here is measurement jargon, opposed to "active": a passive adversary acts only as an observer, and cannot modify any traffic along the path. All they can do is observe. The question is: given that, what can they know? An observer that wanted to do much more and was positioned to do so could, of course -- there's a tradeoff here in terms of risk of detection of the observer. (As an aside, I'll note that "perfect passive adversary" collides with a related-enough-to-be-dangerous term in the anonymity literature, such that it will be renamed in a future revision.)

>>> We explicitly assume the PPA does not have the ability to compromise
>>> trusted systems at either the initiator or a recipient of a
>>> communication.
>> Give me a break.
>
> Ok, take a break :-)
>
>> Here again, an assertion is made that
>> is simply not credible. Essentially all systems are
>> capable of compromise - either technically, lawfully,
>> or through insider threats (which is generally regarded
>> as the greatest threat).
>
> As it happens I also commented on the definitions, and I
> agree they do need work - that's what'll happen as the
> draft progresses.

And this is another limiting assumption. We are explicitly not treating these compromises _in this threat model_. Why? First, because if the observer owns your terminal or the terminal at the other endpoint, there is nothing you can do as a protocol designer. The game is over. What to do in that case is best handled at layer 9. We also make the explicit assumption that crypto works as advertised. That's also a risky assumption. Why do we make it? Because it's best treated somewhere else (and by someone else -- I learned years ago, when I was young and didn't know any better, with a home-spun Blowfish implementation, that I'm no cryptographer).
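To make the "what can a passive observer know?" question above concrete: even when every payload is encrypted, an observer who only watches packets still recovers who talked to whom, when, and how much. A minimal sketch, with the packet records invented for the example:

```python
# Sketch of passive observation: the adversary never modifies traffic,
# yet per-flow metadata falls out of nothing more than packet headers.
# The (timestamp, src, dst, size) tuples below are hypothetical.
from collections import defaultdict

packets = [
    (1000.0, "10.0.0.5", "192.0.2.80", 512),
    (1000.2, "192.0.2.80", "10.0.0.5", 1400),
    (1001.1, "10.0.0.5", "192.0.2.80", 64),
    (1050.0, "10.0.0.9", "198.51.100.7", 256),
]

# Aggregate observations into per-(src, dst) flow summaries.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})
for ts, src, dst, size in packets:
    f = flows[(src, dst)]
    f["packets"] += 1
    f["bytes"] += size
    f["first"] = ts if f["first"] is None else min(f["first"], ts)
    f["last"] = ts if f["last"] is None else max(f["last"], ts)

for (src, dst), f in sorted(flows.items()):
    print(f"{src} -> {dst}: {f['packets']} pkts, {f['bytes']} bytes, "
          f"active {f['first']}..{f['last']}")
```

With no target in mind, records like these can simply be retained and mined later -- which is why the infinite-timeframe and targetless properties discussed earlier change the character of the threat.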
Best regards,

Brian
_______________________________________________
perpass mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/perpass
