----------empyre- soft-skinned space----------------------
Hello All,

Thanks to Renate and Tim for pulling this timely topic together. I'm very
happy to be in conversation with you all.

First, a tiny bit of background. I've been focused on social media
as a primary topic of investigation and a site for artistic action for more
than a decade now. Facebook has been a frequent (and ongoing) target, while
other projects of mine take on Twitter, Instagram, and TikTok, as well as
platforms that aren't explicitly thought of as social media, such as Google
Search.

For some in the world, the current crisis of disinformation---and the scale
of off-platform violent action it can create (e.g., Jan 6th)---has been a
surprise. Others, including those leading discussion over the next month,
have spent years talking, writing, and creating artworks and other projects
to investigate and communicate how the actions of social media and other
technology platforms have been leading the world towards a moment like
this. Quite frankly, it's pretty easy to name any specific ill produced by
platforms today and to identify an artist, designer, scholar, theorist, or
other thinker who has been essentially yelling about that ill for years,
predicting it, warning about it, and in various ways offering up
alternatives to it.

Given this assessment, and thinking about the initial questions that Renate
posed regarding analysis of the present landscape and imaginings for future
alternatives to it, I thought a good way to start would be to outline and
assess specific platform characteristics of concern and to catalog our
current tactics, all towards eventually imagining aspirational paths
forward. I'd love to talk about both the realistic, we-can-do-it-today kind
of ideas and the perhaps-impossible-today-but-maybe-possible-tomorrow kind
of plans. I think some consideration of individual vs. collective action is
probably appropriate here as well.

From my vantage point, a primary area of concern is the surveillance-based,
engagement- and profit-motivated, monopolistic-platform-enabled algorithmic
feed. Algorithmic feeds (e.g., the Facebook News Feed, Twitter's "top
tweets" feed, etc.) are not designed to produce an informed citizenry
through discussion and debate; they are a tool created to produce
*engagement*. More user engagement (through the prescribed paths of
liking/"reacting"/sharing/commenting) produces more data from those users,
ultimately leading the platforms to what they most desire: more profit.
Engagement-motivated algorithmic feeds are a major player in the
disinformation ecosystem, not only because they enable the sharing of
misleading content, but also because they create incentives to craft,
share, and emotionally activate users through inflammatory lies and
manipulations.
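
To make that incentive structure concrete, here's a rough sketch (my own
illustrative TypeScript, not any platform's actual code; the fields and
weights are invented) of what an engagement-optimized ranker reduces to:
score each candidate post by its predicted likelihood of provoking a
reaction, share, or comment, then sort. Nothing in that objective asks
whether a post is true.

    // Illustrative only: a toy engagement-optimized feed ranker.
    // The Post shape and the weights below are assumptions for the sketch.
    interface Post {
      id: string;
      predictedReactions: number; // model's guess at reactions this post will draw
      predictedShares: number;
      predictedComments: number;
    }

    function engagementScore(p: Post): number {
      // Weight the behaviors the platform can monetize; accuracy or
      // truthfulness appears nowhere in this objective.
      return (
        1.0 * p.predictedReactions +
        2.0 * p.predictedShares +
        1.5 * p.predictedComments
      );
    }

    function rankFeed(candidates: Post[]): Post[] {
      // The "feed" is just the candidate set sorted by predicted engagement.
      return [...candidates].sort((a, b) => engagementScore(b) - engagementScore(a));
    }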

One tactic I've engaged in response to the threats posed by algorithmic
feeds is obfuscation. Through works such as Go Rando (an extension that
obfuscates a user's emotions on Facebook) [1] or Not For You (an automated
confusion system for TikTok that reveals how that platform feels when its
feed is no longer made "for you") [2], I've worked not only to create
systems for individual protection, but also to activate broader discussion
about how the designs of software platforms are always in service of
someone---and that this someone is rarely, if ever, the user. Obfuscation
techniques can and do produce interesting aesthetic and information
experiences, can pollute big data stores with nonsense (potentially
confusing surveillance systems), and can generate conversation between
users and in the media. But from my perspective, perhaps the most useful
effect of obfuscation tactics (in the world of algorithmic social media
feeds) is the tension their presence can create for users when they
consider whether to use them at all. Often users are reluctant to use a
work like Not For You because they're worried it will confuse their
carefully constructed platform profile, leading the feed to start showing
them posts that don't conform to what they hope to see. This provokes some
users to start thinking more critically about how the system sees them and
their interests, who that vision most benefits, and who it makes most
vulnerable.
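
For anyone curious about the mechanics, the core obfuscation gesture in a
work like Go Rando can be reduced to something very small. The sketch below
is my own illustrative TypeScript, not the extension's actual code, and
sendReaction() is a hypothetical stand-in for whatever call transmits a
reaction to the platform:

    // Illustrative only: substitute a randomly chosen reaction for the one
    // the user actually picked, polluting the emotional profile the
    // platform builds from reaction data.
    const REACTIONS = ["like", "love", "haha", "wow", "sad", "angry"] as const;
    type Reaction = typeof REACTIONS[number];

    function obfuscatedReaction(_chosen: Reaction): Reaction {
      // Ignore the user's actual choice and return a random one instead.
      const i = Math.floor(Math.random() * REACTIONS.length);
      return REACTIONS[i];
    }

    // Hypothetical usage:
    //   sendReaction(postId, obfuscatedReaction(userChoice));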

I'd be interested to hear others speak about their tactics, what those
tactics can accomplish, and where they fall short. Further, what areas of
the platform landscape---as they relate to a crisis of disinformation (or
elsewhere?)---are you thinking about in this moment?

best,
ben

[1] https://bengrosser.com/projects/go-rando/
[2] https://bengrosser.com/projects/not-for-you/

-- 
gros...@bengrosser.com
http://bengrosser.com
@bengrosser <http://twitter.com/bengrosser>

PGP public key <https://bengrosser.com/share/pgp/Benjamin_Grosser_pub.txt>
_______________________________________________
empyre forum
empyre@lists.artdesign.unsw.edu.au
http://empyre.library.cornell.edu
