Yes, I agree with all that - ultimately it's about autonomy, in a way. As we 
become integrated in the system, we lose that autonomy.

Sent from my iPhone

On 10 Jul 2013, at 19:25, "Raven Jiang CX" <j...@stanford.edu> wrote:

I think privacy is just a small part of a larger issue when it comes to Google 
Glass and its future descendants.

The larger issue is how increasing network connectivity changes what it means to
be an individual, or even to be human. As our access to the Internet becomes
more immediate (from huge desktops to HUDs) and more persistent, I think we will
stop seeing ourselves as individuals and start seeing ourselves as a collective.
Think of how groupthink works online, and then imagine a future where you can
never be offline.

And when we grow reliant on Glass constantly prompting us with information 
about the real world, will we still bother to remember things? I feel that 
there is a natural tendency for those of us who are highly connected (myself 
included) to offload cognitive functions onto our web-enabled devices. We stop 
remembering certain information and instead remember what keywords to Google 
for to retrieve that information.

I wonder if the hivemind will eventually become literal as technology progresses
and binds itself ever more closely to our mental processes.

Sorry for the digression, but that's how I perceive the privacy issues when it
comes to Google Glass. Much like how karma and upvotes lead to groupthink,
greater connectivity and sharing can subject our lives to constant peer
approval. I think the wisdom of the crowd only works when the individuals in the
crowd are not subject to the same biases.

Raven Jiang

Stanford University
Computer Science
soraven.com <http://www.soraven.com/>

On 10 July 2013 11:08, Paul Bernal (LAW) <paul.ber...@uea.ac.uk> wrote:
I wrote a blog piece on Glass a month or two back:

http://paulbernal.wordpress.com/2013/05/07/google-glass-just-because-you-can/

Here's the text:

Google Glass: just because you can…

As a bit of a geek, and a sometime game player, it’s hard not to like the look
of Google Glass. Sure, it makes you look a little dorky in its current
incarnation (even if you’re Sergey Brin), but people like me are used to looking
dorky, and don’t really care that much about it. What it does, however, is cool,
and cool in a big way. We get heads-up displays that would have been
unimaginable even a few years ago, a chance to feel like Arnie in The
Terminator, with information about everything we can see immediately available.
It’s cool – in a dorky, sci-fi kind of way – and for those of us brought up on a
diet of SF it’s close to irresistible.

And yet, there’s something in the back of my mind – well, OK, pretty close to
the front of my mind now – that says we should be thinking twice about pushing
forward with developments like this. Just because we can make something as cool
as Google Glass doesn’t mean that we should make it. There are implications to
developments like this, and risks attached to them, both direct and indirect.

Risks to the wearer’s privacy

First we need to be clear what Google Glass does – and how it’s intended to be 
used. The idea is that the little camera on the headset essentially ‘sees’ what 
you see. It then analyses what it can see, and provides the information about 
what you see – or information related to it. In one of the promotional videos 
for it, for example, as the wearer looks at a subway station, the Glass alerts 
the wearer to the fact that there’s a delay on the subway, so he’d better walk. 
Then he looks at a poster for a concert – it analyses the poster, then links 
directly to a ticket agency that lets him buy a ticket for the concert.

Cool? Sure, but think about what’s going on in the background – because there’s 
a lot. First of all, and almost without saying, the Google Glass headset is 
tracking the wearer: what we call ‘geolocation’. It knows exactly where you are,
whenever you’re using it. There are implications to that – I’ve written about 
them before – and this is yet another step towards making geolocation the 
‘norm’. The idea is that Google (and others) want to know exactly where you are 
at all times – and of course that means that others could find out, whether for 
good purposes or bad.

Secondly, it means that Google are able to analyse what you are looking at – and
profile you, with huge accuracy, in the real world, the way they already do, to
a certain extent, in the online world. And, again, if Google can profile you,
others can get access to that profile – either through legal means or illegal
ones. You might have consented to giving others access in one of those long
Terms and Conditions documents you scrolled down without reading and clicked 
‘OK’ to. The government might ask Google for access to your feed, in the course 
of some investigation or other. A hacker might even hack into your system to 
take a look…

…and this last risk, the risk of hacking, is a very real one. Weaknesses in 
Google Glass have already surfaced. As the Guardian reported a few days ago:

“Augmented reality glasses could be compromised by a hacker who would be able 
to see and hear everything the wearer does”

This particular weakness may or may not turn out to be a real risk – but the 
potential is there. Where data exists, and where systems exist, they are 
hackable – and Google Glass, by its nature, could be a clear target. What
hackers could get, as a result, could be seriously dangerous and damaging.

Risks to others’ privacy

Equally worrying are the risks to those the wearer looks at. There are specific
risks – anyone who knows about the concept of ‘creepshots’ – surreptitiously
taken photographs, usually of young women and girls, up skirts, down blouses
etc, posted on the internet – should see the possibilities immediately. As
Gizmodo put it:

“Once these things stop being a rich-guy novelty and start actually hitting the 
streets, the rise in creepshots is going to be worse than any we’ve ever seen 
before”

They’re right – and the makers of Google Glass should be aware of the 
possibilities. Some people are even working on an app that would let you take a
picture using Google Glass just by winking, which would take the possibilities
of creepshots one creepy step further – at the moment, at least, voice commands
are needed to take shots, alerting the victim, but with winking or other
surreptitious command systems even that protection would be gone.

Creepshots are just one extreme – the other opportunities for invasions of 
privacy are huge. In mitigation, some say ‘Oh, at least you can see that people 
are wearing Google Glass, so you know they’re filming you’. Well, yes, but 
there are lots of problems with that. Firstly, should we really need to check 
the glasses of everyone who can see us? Secondly, this is just the first 
generation of Google Glass. What will the next one look like? Cooler, less like 
something out of Star Trek? And the technology could be used in ways that are 
much less obvious – hack and disguise your own Google Glass to make it look like
a pair of ordinary sunglasses? Not hard for a hacker, and such hacked versions
will be available on the net within a pretty short time.

Normalising surveillance

All these, however, are just details. The real risk is at a much higher level – 
but it may be a danger that’s already been discounted. It’s the risk that our 
society goes down a route where surveillance is the norm. Where we expect to be 
filmed, to have our every movement, our every action, our every word followed, 
analysed, compiled, and aggregated for the service of companies that want to 
make money out of us and governments that want to control us. Sure, Google 
Glass is cool, and sure it does some really cool stuff, but is it really worth 
that?

Now there may be ways to mitigate all these risks, and there may be ways we can
find to help overcome some of the issues. I’d like it to be so, because I love
the coolness of the technology. Right now, though, I’m not convinced that we
have found them – or even that we necessarily will be able to. For me, it means
we need to remember that just because we can do things like this, it doesn’t
mean that we should.



Dr Paul Bernal
Lecturer
UEA Law School
University of East Anglia
Norwich Research Park
Norwich NR4 7TJ

email: paul.ber...@uea.ac.uk
Web: http://www.paulbernal.co.uk/
Blog: http://paulbernal.wordpress.com/
Twitter: @paulbernalUK

On 10 Jul 2013, at 17:52, Yosem Companys <compa...@stanford.edu> wrote:

From: Bruno Fortugno <brunofortu...@sympatico.ca>

I am a student writing a paper on the potential privacy issues raised by
Google's upcoming product, Google Glass. I was wondering if anyone could
recommend some good resources for my research.

Thanks,

Bruno Fortugno