There are clearly lots of issues here, at several levels from the procedural to 
the technical… I'm not going to fan the flames, though - I just want to make 
one observation about the threat model/risk assessment, in response to the 
snippet I have retained below.

There is a general problem with user perceptions of privacy risk: behaviour 
that erodes privacy rarely has any noticeable, immediate impact. The human 
brain is not well adapted to this kind of threat (it is well evolved to deal 
with immediate dangers, like a tiger running at you…). When it comes to 
privacy, the deleterious effects of 'bad' behaviour are so remote from the 
behaviour that caused them that we tend not to draw the connection between the 
two in any way that would cause us to change our behaviour.

Similar issues are evident in our attitudes towards the risks of smoking, lack 
of exercise, poor posture, fatty foods, etc.: the risk (and the damage) is 
incremental, and often not apparent until the habit is too well ingrained to 
change.

The threat to privacy from intrusive surveillance technologies may be remote, 
and the impact may not be noticeable to the average person, but that doesn't 
mean it should be ignored… nor does it mean that user perception of the problem 
is a reliable guide to what should be done about it.

Best wishes,

Robin 


Robin Wilton
Technical Outreach Director - Identity and Privacy
Internet Society

email: [email protected]
Phone: +44 705 005 2931
Twitter: @futureidentity




On 11 Dec 2012, at 08:24, SM wrote:

> 
>   I would describe it a distant threat as there is no immediacy to it, i.e. 
> for the average person, there isn't any noticeable impact.

_______________________________________________
ietf-privacy mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/ietf-privacy
