This very digestible short talk (22:00) on the emerging threat of
algorithmic/biometric governmentality from Zeynep Tufekci may be of
interest to those who research control societies, etc.:
https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads

The transcript is below:

So when people voice fears of artificial intelligence, very often, they
invoke images of humanoid robots run amok. You know? Terminator? You know,
that might be something to consider, but that's a distant threat. Or, we
fret about digital surveillance with metaphors from the past. "1984,"
George Orwell's "1984," it's hitting the bestseller lists again. It's a
great book, but it's not the correct dystopia for the 21st century. What we
need to fear most is not what artificial intelligence will do to us on its
own, but how the people in power will use artificial intelligence to
control us and to manipulate us in novel, sometimes hidden, subtle and
unexpected ways. Much of the technology that threatens our freedom and our
dignity in the near-term future is being developed by companies in the
business of capturing and selling our data and our attention to advertisers
and others: Facebook, Google, Amazon, Alibaba, Tencent.

Now, artificial intelligence has started bolstering their business as well. And
it may seem like artificial intelligence is just the next thing after
online ads. It's not. It's a jump in category. It's a whole different
world, and
it has great potential. It could accelerate our understanding of many areas
of study and research. But to paraphrase a famous Hollywood philosopher, "With
prodigious potential comes prodigious risk."

Now let's look at a basic fact of our digital lives, online ads. Right? We
kind of dismiss them. They seem crude, ineffective. We've all had the
experience of being followed on the web by an ad based on something we
searched or read. You know, you look up a pair of boots and for a week,
those boots are following you around everywhere you go. Even after you
succumb and buy them, they're still following you around. We're kind of
inured to that kind of basic, cheap manipulation. We roll our eyes and we
think, "You know what? These things don't work." Except, online, the
digital technologies are not just ads. Now, to understand that, let's think
of a physical world example. You know how, at the checkout counters at
supermarkets, near the cashier, there's candy and gum at the eye level of
kids? That's designed to make them whine at their parents just as the
parents are about to sort of check out. Now, that's a persuasion
architecture. It's not nice, but it kind of works. That's why you see it in
every supermarket. Now, in the physical world, such persuasion
architectures are kind of limited, because you can only put so many things
by the cashier. Right? And the candy and gum, it's the same for everyone, even
though it mostly works only for people who have whiny little humans beside
them. In the physical world, we live with those limitations.

In the digital world, though, persuasion architectures can be built at the
scale of billions and they can target, infer, understand and be deployed at
individuals one by one by figuring out your weaknesses, and they can be
sent to each person's private phone screen, so they're not visible to the rest of us. And
that's different. And that's just one of the basic things that artificial
intelligence can do.

Now, let's take an example. Let's say you want to sell plane tickets to
Vegas. Right? So in the old world, you could think of some demographics to
target based on experience and what you can guess. You might try to
advertise to, oh, men between the ages of 25 and 35, or people who have a
high limit on their credit card, or retired couples. Right? That's what you
would do in the past.

With big data and machine learning, that's not how it works anymore. So to
imagine that, think of all the data that Facebook has on you: every status
update you ever typed, every Messenger conversation, every place you logged
in from, all your photographs that you uploaded there. If you start typing
something and change your mind and delete it, Facebook keeps those and
analyzes them, too. Increasingly, it tries to match you with your offline
data. It also purchases a lot of data from data brokers. It could be
everything from your financial records to a good chunk of your browsing
history. Right? In the US, such data is routinely collected, collated and
sold. In Europe, they have tougher rules.

So what happens then is, by churning through all that data, these
machine-learning algorithms -- that's why they're called learning
algorithms -- they learn to understand the characteristics of people who
purchased tickets to Vegas before. When they learn this from existing
data, they
also learn how to apply this to new people. So if they're presented with a
new person, they can classify whether that person is likely to buy a ticket
to Vegas or not. Fine. You're thinking, an offer to buy tickets to Vegas. I
can ignore that. But the problem isn't that. The problem is, we no longer
really understand how these complex algorithms work. We don't understand
how they're doing this categorization. It's giant matrices, thousands of
rows and columns, maybe millions of rows and columns, and neither the
programmers nor anybody who looks at it, even with all the data, understands
anymore how exactly it's operating, any more than you'd know what I was
thinking right now if you were shown a cross section of my brain. It's like
we're not programming anymore, we're growing intelligence that we don't
truly understand.
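
To make that concrete, here is a minimal sketch of a generic binary
classifier in Python with scikit-learn. The features and data are
entirely synthetic and the code is not Facebook's actual system; it only
illustrates the general pattern of learning from past buyers and scoring
new people.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical behavioral features (all made up): each row is a
    # person, each column some signal the platform happens to log.
    X_past = rng.normal(size=(10_000, 3))
    # Synthetic labels standing in for "bought a Vegas ticket before".
    y_past = (X_past @ np.array([1.5, 0.8, -0.4])
              + rng.normal(size=10_000) > 0).astype(int)

    # Learn the characteristics of people who purchased tickets before.
    model = LogisticRegression().fit(X_past, y_past)

    # Apply what was learned to people the model has never seen.
    X_new = rng.normal(size=(5, 3))
    print(model.predict_proba(X_new)[:, 1])  # estimated chance each one buys

The point of the sketch is that the model latches onto whatever
correlations predict the label; nobody has to specify, or even notice,
what those correlations are.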

And these things only work if there's an enormous amount of data, so they
also encourage deep surveillance on all of us so that the machine learning
algorithms can work. That's why Facebook wants to collect all the data it
can about you. The algorithms work better.

So let's push that Vegas example a bit. What if the system that we do not
understand was picking up that it's easier to sell Vegas tickets to people
who are bipolar and about to enter the manic phase? Such people tend to
become overspenders, compulsive gamblers. They could do this, and you'd
have no clue that's what they were picking up on. I gave this example to a
bunch of computer scientists once and afterwards, one of them came up to me. He
was troubled and he said, "That's why I couldn't publish it." I was like,
"Couldn't publish what?" He had tried to see whether you can indeed figure
out the onset of mania from social media posts before clinical symptoms, and
it had worked, and it had worked very well, and he had no idea how it
worked or what it was picking up on.

Now, the problem isn't solved if he doesn't publish it, because there are
already companies that are developing this kind of technology, and a lot of
the stuff is just off the shelf. This is not very difficult anymore.

Do you ever go on YouTube meaning to watch one video and an hour later
you've watched 27? You know how YouTube has this column on the right that
says, "Up next" and it autoplays something? It's an algorithm picking what
it thinks that you might be interested in and maybe not find on your own. It's
not a human editor. It's what algorithms do. It picks up on what you have
watched and what people like you have watched, and infers that that must be
what you're interested in, what you want more of, and just shows you more. It
sounds like a benign and useful feature, except when it isn't.
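
As a rough illustration of "what you have watched and what people like
you have watched", here is a toy collaborative-filtering sketch in
Python. The users, videos and similarity measure are assumptions made
for the example; this is not YouTube's proprietary algorithm.

    import numpy as np

    # Rows = users, columns = videos; 1 means the user watched the video.
    watches = np.array([
        [1, 1, 0, 0, 1],
        [1, 1, 1, 0, 0],
        [0, 0, 1, 1, 0],
        [1, 0, 0, 0, 1],
    ])

    def recommend(user, watches, k=1):
        # Cosine similarity between this user and every other user.
        norms = np.linalg.norm(watches, axis=1)
        sims = watches @ watches[user] / (norms * norms[user] + 1e-9)
        sims[user] = 0                        # ignore self-similarity
        # Score unseen videos by how much similar users watched them.
        scores = sims @ watches
        scores[watches[user] == 1] = -np.inf  # skip already-watched videos
        return np.argsort(scores)[::-1][:k]   # indices of top-k "up next" picks

    print(recommend(0, watches))              # suggests video index 2 here

Even in this toy version, the objective is simply "more of what similar
users watched"; the system has no notion of where that chain of
recommendations leads.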

So in 2016, I attended rallies of then-candidate Donald Trump to study as a
scholar the movement supporting him. I study social movements, so I was
studying it, too. And then I wanted to write something about one of his
rallies, so I watched it a few times on YouTube. YouTube started
recommending to me and autoplaying to me white supremacist videos in
increasing order of extremism. If I watched one, it served up one even more
extreme and autoplayed that one, too. If you watch Hillary Clinton or
Bernie Sanders content, YouTube recommends and autoplays left-leaning conspiracy videos, and
it goes downhill from there.

Well, you might be thinking, this is politics, but it's not. This isn't
about politics. This is just the algorithm figuring out human behavior. I
once watched a video about vegetarianism on YouTube and YouTube recommended
and autoplayed a video about being vegan. It's like you're never hardcore
enough for YouTube.

(Laughter)

So what's going on? Now, YouTube's algorithm is proprietary, but here's
what I think is going on. The algorithm has figured out that if you can
entice people into thinking that you can show them something more
hardcore, they're
more likely to stay on the site watching video after video going down that
rabbit hole while Google serves them ads. Now, with nobody minding the
ethics of the store, these sites can profile people who are Jew haters, who
think that Jews are parasites and who have such explicit anti-Semitic
content, and let you target them with ads. They can also mobilize algorithms to
find for you look-alike audiences, people who do not have such explicit
anti-Semitic content on their profile but who the algorithm detects may be
susceptible to such messages, and lets you target them with ads, too. Now,
this may sound like an implausible example, but this is real. ProPublica
investigated this and found that you can indeed do this on Facebook, and
Facebook helpfully offered up suggestions on how to broaden that
audience. BuzzFeed
tried it for Google, and very quickly they found, yep, you can do it on
Google, too. And it wasn't even expensive. The ProPublica reporter spent
about 30 dollars to target this category.
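
For what a "look-alike audience" can mean mechanically, here is a rough
sketch under generic assumptions: synthetic feature vectors and cosine
similarity to the average profile of a seed set. It is not Facebook's or
Google's actual tooling, only the general shape of the technique.

    import numpy as np

    rng = np.random.default_rng(1)
    all_users = rng.normal(size=(1_000, 8))  # hypothetical feature vectors
    seed_ids = [3, 17, 42, 99]               # the small seed audience

    # Average profile of the seed, then similarity of everyone else to it.
    centroid = all_users[seed_ids].mean(axis=0)
    sims = all_users @ centroid / (
        np.linalg.norm(all_users, axis=1) * np.linalg.norm(centroid) + 1e-9)
    sims[seed_ids] = -np.inf                 # exclude the seed itself

    lookalikes = np.argsort(sims)[::-1][:50] # the 50 most similar users
    print(lookalikes[:10])

The step the talk is worried about is exactly this one: the expansion
finds people who never expressed the seed trait themselves but resemble
those who did.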

So last year, Donald Trump's social media manager disclosed that they were
using Facebook dark posts to demobilize people, not to persuade them, but
to convince them not to vote at all. And to do that, they targeted
specifically, for example, African-American men in key cities like
Philadelphia, and I'm going to read exactly what he said. I'm quoting.

They were using "nonpublic posts whose viewership the campaign controls so
that only the people we want to see it see it. We modeled this. It will
dramatically affect her ability to turn these people out."

What's in those dark posts? We have no idea. Facebook won't tell us.

So Facebook also algorithmically arranges the posts that your friends put
on Facebook, or the pages you follow. It doesn't show you everything
chronologically. It puts the order in the way that the algorithm thinks
will entice you to stay on the site longer.

Now, so this has a lot of consequences. You may be thinking somebody is
snubbing you on Facebook. The algorithm may never be showing your post to
them. The algorithm is prioritizing some of them and burying the others.

Experiments show that what the algorithm picks to show you can affect your
emotions. But that's not all. It also affects political behavior. So in
2010, in the midterm elections, Facebook did an experiment on 61 million
people in the US that was disclosed after the fact. So some people were
shown, "Today is election day," the simpler one, and some people were shown
the one with that tiny tweak with those little thumbnails of your friends
who clicked on "I voted." This simple tweak. OK? So the pictures were the
only change, and that post shown just once turned out an additional 340,000
voters in that election, according to this research as confirmed by the
voter rolls. A fluke? No. Because in 2012, they repeated the same
experiment. And that time, that civic message shown just once turned out an
additional 270,000 voters. For reference, the 2016 US presidential election was
decided by about 100,000 votes. Now, Facebook can also very easily infer
what your politics are, even if you've never disclosed them on the site. Right?
These algorithms can do that quite easily. What if a platform with that
kind of power decides to turn out supporters of one candidate over the
other? How would we even know about it?

Now, we started from someplace seemingly innocuous -- online ads following
us around -- and we've landed someplace else. As a public and as citizens, we
no longer know if we're seeing the same information or what anybody else is
seeing, and without a common basis of information, little by little, public
debate is becoming impossible, and we're just at the beginning stages of
this. These algorithms can quite easily infer things like people's
ethnicity, religious and political views, personality traits, intelligence,
happiness, use of addictive substances, parental separation, age and
gender, just from Facebook likes. These algorithms can identify
protesters even
if their faces are partially concealed. These algorithms may be able to
detect people's sexual orientation just from their dating profile pictures.
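
The "just from Facebook likes" claim refers to published research on
predicting traits from like-data. The following is a minimal synthetic
illustration of that general approach -- a linear classifier on a binary
like-matrix -- with made-up pages and labels rather than real data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n_users, n_pages = 5_000, 300
    # Sparse binary matrix: did user i like page j? (synthetic)
    likes = (rng.random((n_users, n_pages)) < 0.05).astype(int)
    # Toy ground truth: a hidden trait linearly related to the likes.
    trait = (likes @ rng.normal(size=n_pages) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))  # well above chance

With real like-data the signal is noisier, but the studies this passage
alludes to likewise report predictions well above chance for many of the
traits listed.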

Now, these are probabilistic guesses, so they're not going to be 100
percent right, but I don't see the powerful resisting the temptation to use
these technologies just because there are some false positives, which will
of course create a whole other layer of problems. Imagine what a state can
do with the immense amount of data it has on its citizens. China is already
using face detection technology to identify and arrest people. And here's
the tragedy: we're building this infrastructure of surveillance
authoritarianism merely to get people to click on ads. And this won't be
Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is
using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll
hate it and we'll resist it. But if the people in power are using these
algorithms to quietly watch us, to judge us and to nudge us, to predict and
identify the troublemakers and the rebels, to deploy persuasion
architectures at scale and to manipulate individuals one by one using their
personal, individual weaknesses and vulnerabilities, and if they're doing
it at scale through our private screens so that we don't even know what our
fellow citizens and neighbors are seeing, that authoritarianism will
envelop us like a spider's web and we may not even know we're in it.

So Facebook's market capitalization is approaching half a trillion
dollars. It's
because it works great as a persuasion architecture. But the structure of
that architecture is the same whether you're selling shoes or whether
you're selling politics. The algorithms do not know the difference. The
same algorithms set loose upon us to make us more pliable for ads are also
organizing our political, personal and social information flows, and that's
what's got to change.

Now, don't get me wrong, we use digital platforms because they provide us
with great value. I use Facebook to keep in touch with friends and family
around the world. I've written about how crucial social media is for social
movements. I have studied how these technologies can be used to circumvent
censorship around the world. But it's not that the people who run, you
know, Facebook or Google are maliciously and deliberately trying to make
the country or the world more polarized and encourage extremism. I read the
many well-intentioned statements that these people put out. But it's not
the intent or the statements people in technology make that matter, it's
the structures and business models they're building. And that's the core of
the problem. Either Facebook is a giant con of half a trillion dollars and
ads don't work on the site, it doesn't work as a persuasion architecture, or
its power of influence is of great concern. It's either one or the other. It's
similar for Google, too.

So what can we do? This needs to change. Now, I can't offer a simple
recipe, because we need to restructure the whole way our digital technology
operates. Everything from the way technology is developed to the way the
incentives, economic and otherwise, are built into the system. We have to
face and try to deal with the lack of transparency created by the
proprietary algorithms, the structural challenge of machine learning's
opacity, all this indiscriminate data that's being collected about us. We
have a big task in front of us. We have to mobilize our technology, our
creativity and yes, our politics so that we can build artificial
intelligence that supports us in our human goals but that is also
constrained by our human values. And I understand this won't be easy. We
might not even easily agree on what those terms mean. But if we take
seriously how these systems that we depend on for so much operate, I don't
see how we can postpone this conversation anymore. These structures are
organizing how we function and they're controlling what we can and we
cannot do. And many of these ad-financed platforms, they boast that they're
free. In this context, it means that we are the product that's being sold.
We need a digital economy where our data and our attention is not for sale
to the highest-bidding authoritarian or demagogue.

(Applause)

So to go back to that Hollywood paraphrase, we do want the prodigious
potential of artificial intelligence and digital technology to blossom, but
for that, we must face this prodigious menace, open-eyed and now.

Thank you.


_____________________________________

Dr. Ian Alan Paul
www.ianalanpaul.com
Assistant Professor of Emerging Media
Art Department, Stony Brook University

“What can I do?
One must begin somewhere.
Begin what?
The only thing in the world worth beginning:
The End of the world of course.”

           -Aimé Césaire
