On Feb 26, 2013, at 14:42 , Claudia Diaz <[email protected]> wrote:

> 
> 
> On 26 Feb 2013, at 23:19:15, David Singer wrote:
>> On Feb 26, 2013, at 14:11 , Claudia Diaz <[email protected]> 
>> wrote:
>>> 
>>> On 26 Feb 2013, at 09:45:38, SM wrote:
>>>> At 13:15 25-02-2013, Claudia Diaz wrote:
>>>>> If that entity is a gov/commercial organization, then "security" is the 
>>>>> term likely to be used for the properties you want to achieve, while for 
>>>>> those same properties "privacy" is the usual term when the entity is a 
>>>>> private individual.
>>>> 
> >>>> There is currently a security considerations section in every IETF RFC.  
> >>>> The draft recommends having a privacy considerations section too.  The 
> >>>> question that can arise is which section should cover a given 
> >>>> perspective.  In other words, it is about how to disambiguate between 
> >>>> security and privacy.
>>> 
>>> 
>>> It's a tough one: I am not sure you can fully disambiguate the two terms if 
>>> you are considering general-purpose protocols. 
>> 
>> For the purposes of debate, I am going to try.
>> 
>> Security problem: something unintended happened which gave an 
>> attacker/opponent access to data, systems, or capability which was not an 
>> expected part of the identified system/protocol.
>> 
>> Privacy problem: operation of the system/protocol gives undesirable exposure 
>> of private information not strictly needed for the operation desired. 
>> 
>> If you combine them, then indeed the privacy problem may well get worse.
>> 
>> 
>> So, for example, the fact that on the internet your IP address is exposed as 
>> part of the protocol also gives your respondent probable knowledge of your 
>> location, and hence time of day.  No rules were broken to see your IP 
>> address or draw conclusions from it - there was no 'break-in' or security 
>> hole that was taken advantage of.
> 
> 
> That's an interesting distinction. Translating it to concrete scenarios 
> would, however, require us to change how we usually use the terms. This can 
> be counterintuitive in some cases: 
> 
> - If I browse to a website and my IP is exposed, then it is a privacy 
> problem. If I browse to the same website over Tor and my IP is exposed 
> because Tor is attacked, then it is a security problem. 

To be precise, there was a security breach (your IP address was exposed when 
the design of the protocol and your expectations said it would not be), and 
that resulted in a privacy problem.  Many security problems indeed result in 
privacy issues (not surprisingly).

> - If the passwords to access the confidential information at the embassy are 
> sent in the clear (because nobody bothered to encrypt them), it is a privacy 
> problem, and not a security problem.

Agreed.  The protocol was being operated in its expected way, and no protocol 
violation or break-in was needed. (It was the wrong protocol to use.)
 
> - If someone manages to get my facebook password, then it is a security 
> problem, and not a privacy problem (because it was not exposed by default). 

If they broke in, they took advantage of a security issue of some sort. The 
result was, again, a privacy violation.

> - If the gov listens to my encrypted conversations (eg, by reconstructing the 
> conversation from the traffic), it is a security problem. If the minister of 
> interior talks over an unencrypted line about his plans to catch terrorists, 
> then it is a privacy problem. 


I think we are on the same page.  Security breaches often result in privacy 
problems, but not always, and privacy issues don't always happen as a result of 
security breaches -- and it is the ones that don't that I think we should focus 
on.  (Once the 'lock is broken' we can't say much about privacy, I fear -- 
we're into limiting damage).

The famous CSS link-visited issue is a privacy issue, pure and simple.  It's 
possible to write a script that finds out what links a user has recently 
visited, using public APIs in their intended way.
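To make the point concrete, here is a minimal sketch of how such a history-sniffing script worked. All names here are hypothetical, and modern browsers have since mitigated this by making getComputedStyle lie about :visited-dependent styles, so this is illustrative of the historical issue, not a working exploit:

```javascript
// Historical CSS :visited sniffing sketch (hypothetical names).
// Assumes the page carries a stylesheet rule like:
//   a:visited { color: rgb(255, 0, 0); }
// so that a visited link's computed color differs from an unvisited one's.

// Pure helper: classify a computed color string as "visited" styling.
function looksVisited(computedColor) {
  return computedColor === "rgb(255, 0, 0)";
}

// Browser-side probe: create an <a> for each candidate URL and read its
// computed color via the public getComputedStyle API -- no break-in needed.
// Returns an empty list when run outside a browser.
function probe(urls) {
  if (typeof document === "undefined") return [];
  return urls.filter((url) => {
    const a = document.createElement("a");
    a.href = url;
    document.body.appendChild(a);
    const color = getComputedStyle(a).color;
    document.body.removeChild(a);
    return looksVisited(color);
  });
}
```

Nothing here violates any protocol or API contract: every call is a public, documented browser API used as designed, which is exactly why this is a privacy issue rather than a security hole.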

Exploring the privacy implications of using a specification "in its normal 
way" is, I think, long overdue.

We can *also* think about specifying prudent steps to take such that if there 
is *also* a security break-in, the resulting privacy impact is minimized (e.g. 
"keep as little private data as you need, for only as long as you need it, so 
if you have a data loss/leak/break-in, the impact is minimized").  But that's 
secondary.
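That damage-limitation advice can be sketched in a few lines. This is a hypothetical illustration (the names TTL_MS, anonymizeIp, and purgeExpired are my own, not from any standard), showing the two habits the advice names: truncating identifying fields before storage, and expiring records once they are no longer needed:

```javascript
// Data-minimization sketch (hypothetical names): if a breach happens,
// only coarse, recent data is there to be exposed.
const TTL_MS = 7 * 24 * 60 * 60 * 1000; // retain records for one week

// Drop the last octet of an IPv4 address before storing it, so the
// stored value identifies a network rather than an individual host.
function anonymizeIp(ip) {
  return ip.replace(/\.\d+$/, ".0");
}

// Keep only records younger than the retention window.
function purgeExpired(records, now) {
  return records.filter((r) => now - r.timestamp < TTL_MS);
}
```

The point of the sketch is that minimization is applied at write time and purge time, independently of any security mechanism: even a total break-in then yields little of privacy value.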

David Singer
Multimedia and Software Standards, Apple Inc.

_______________________________________________
ietf-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/ietf-privacy
