Re: [-empyre-] Critical considerations linger.

2021-02-14 Thread Brian Holmes
--empyre- soft-skinned space--
Derek, thanks for your response on February 11, which was searching and
completely to the point. You said that existing ideology critique:

"doesn’t begin to explain the way content recognition algorithms can
radicalize individuals to the point where they storm the US Capitol. These
actions were the result of algorithms connecting people to other people and
content that reinforces an ideological worldview. What I’m asking is: is there
a way that artists can reveal this process to a person, to show them how
their worldview may be partially constructed by algorithms?"

It's the great question of "woke" tactical media. Which, I agree with you,
remains to be invented.

My impression is that answering the question requires a shift of the basic
coordinates in which tactical media is supposed to operate - what you might
call the referential frame. In most of the work that has been described
(including the fascinating CSIA device) what's imagined is a confrontation
between an individual and the surveillance apparatus of the state, with the
presumption that the state will mistakenly identify you and wrongly target
you, oppress you. However, online radicalization is quite different and
cannot be confronted with the same assumptions.

Here the presumption is that quantifiable traits from your online behavior
will be enough to associate you with people who share a specific,
historically rooted, partially unconscious form of culture - i.e. a world
view, an ideology. Radicalization is the online discovery and adoption of
that culture/ideology, which may be reinforced by a thousand offline cues
as well (racism and nationalism are deep-rooted, even ubiquitous). So we
are talking about a becoming-collective, a process that moves between self
and society. The specific power of the computational tool - whether a
social media app or an artistic intervention - is to activate that kind of
cultural transmission and communication, either simply by reinforcing it,
in the case of the app, or through some kind of exploratory,
consciousness-raising process, in the case of the art.

The big difference is that in the first referential framework we are
asking, What's wrong with the state? How can I defend myself from state or
perhaps corporate algorithms? Why are these surveillance techniques legally
permissible and technologically possible?

Whereas in the second case we are asking, What's wrong with us? What
aspects of a racist and nationalist culture do we unconsciously share? Why
am I susceptible to radicalization? What kinds of associations and ties do
I form through the mediation of corporate algorithms?

This second set of questions tends to destroy the modernist focus on the
characteristics of a specific medium. Instead we have to confront widely
dispersed yet very intimate cultural motifs that shape our identity as
participants in a social process. The difficulty for pedagogical or
consciousness-raising art is to keep the attention focused on specific
algorithms while at the same time exploring and working through the
uncomfortable psychological associations activated by those algorithms. The
question here is not, What intangible rights do I have as an individual?
Instead the question is, What collective wrongs do I commit when I allow
myself to adopt a mediated ideology?

Woke tactical media is going to be a very challenging thing to create.
Because it has to grapple with the big picture: the world-picture that each
individual internalizes. As Marx wrote, and as Benjamin repeated after him:
"The reformation of consciousness lies solely in our waking the world...
from its dreams about itself."

all the best, Brian

On Thu, Feb 11, 2021 at 10:18 AM Curry, Derek 
wrote:

> --empyre- soft-skinned space--
> Brian brings up a good point about the capacity of art to teach users
> about what happens when they engage with social media. How to represent the
> algorithmic processes that happen on the back end is something Jennifer and
> I have been wrestling with for a little while, usually with what feels like
> qualified or limited success (when there is any success at all).
>
> The Crowd-Source Intelligence Agency (that Jennifer mentioned in her post)
> worked best when people were presented with their own Twitter posts after
> they had been processed by our machine learning classifiers, including one
> trained on posts made by accounts of known terrorist organizations like
> ISIS and Boko Haram (before they were banned). When we were invited to
> speak about the project by a specific group, we would typically surveil the
> Twitter accounts of people we knew would be present. Most people will have
> a few posts that our terrorist classifier would flag. We can then look at
> those individual posts as a group and try to see why the classifier may
> have flagged the post.  One notable interaction was when a young woman saw
> that a significant number of her 

Re: [-empyre-] Critical considerations linger.

2021-02-14 Thread Curry, Derek
--empyre- soft-skinned space--
I’ve really been enjoying the conversations this month. In thinking about Ben’s 
use of obfuscation, Leo’s URME, as well as older tactics (Jon McKenzie 
mentioned our friends CAE), I also try to imagine how tactics could be 
repurposed or ultimately serve to strengthen power structures. Admittedly, this 
question is always in the back of my mind (when it is not front and center). 
But, to what extent has tactical media served as a form of penetration testing 
for the ruling elite? For example, is it ever irresponsible for an artwork to 
reveal an exploit or effective tactic when it means that exploit will be 
patched, or the tactic will be guarded against in the future? And while I 
completely agree with Geert’s proposal that there should be a culture of 
refusal and a collective exodus from the dominant social media platforms, 
my fear is that this won’t necessarily lead to a better situation given that 
there was a mass collective exodus from major social media platforms and a 
migration to smaller alternatives in the wake of the January 6th 
insurrection—only the discontent with the platforms was due to censorship of 
right-wing claims of election fraud, and the migration was to platforms that 
cater to right-wing ideologies. AWS’s suspension of the right-wing Parler 
microblog from its services has bolstered calls from some conservatives 
to break up the big tech monopolies, and may result in a viable alternative—at 
least for right-wing social media outlets. Interestingly, what is being enacted 
by discontented cultural warriors on the right is very similar to what I have 
heard liberal friends of mine propose for years—and for much the 
same reasons, they don’t want a major corporation to control their speech or 
what content they are able to see.

At the individual level, there are tactics and choices for how to engage with 
social media, at least for the near future. But increasingly the danger is not 
only the effects it may have on an individual, but what it can trigger others 
to do. Web 2.0 has resulted in a new type of control that operates via the 
ability to transform vast quantities of aggregated data into predictions and 
actions. Zuboff has recently described the process as data exhaust being 
rendered as behavior, but it is also similar to what Deleuze called 
“ultra-rapid forms of free floating control” (Postscript on the Societies of 
Control). Like nuclear technology, this power structure can’t simply be 
dismantled now that it exists, and it has been demonstrated that it can be 
weaponized by technocrats with an understanding of analytic mass psychology and 
access to large datasets. Deleuze says that there is no need to hope or fear, 
only to look for new weapons. I tend to think that this is the future of social 
media—but this could just be me projecting what happened with networked stock 
trading onto social media.

In the transition to networked stock trading, there was a moment when 
technology emerged as a liberating force that helped to counter the power 
imbalance between retail investors and established market makers. A community 
of day traders emerged who leveraged new technology to trade effectively 
against large investment banks. They were dubbed “SOES Bandits” by the market 
makers because they primarily used NASDAQ’s SOES trading platform at first. The 
tactics used by the SOES Bandits were similar to tactics used by hacktivists 
and sometimes tactical media practitioners at the time. For example, the sign 
for one of the largest trading platforms for SOES Bandits, known as Island, was 
actually carved foam that was spray painted and glued onto the building to make 
it look like it was an old company (link to image posted below). Recognizing 
that networked technology was creating opportunities for more people to 
participate in the markets in ways that were more fair, courts in the US 
largely sided with the SOES Bandits in litigation over how networked stock 
trading should function, and regulators in the US passed rules that favored 
networked stock trading. Among other things, these rules required that all 
prices be made publicly available and that if a trader placed an order for a 
stock at one exchange, and that stock was available for a better price at 
another, the order must be either filled for the better price, or routed to the 
exchange with the best price. These rules, combined with new network technology 
(like microwave and laser data transmissions) and high-powered GPUs, are what 
resulted in trading practices known as high-frequency trading. High-frequency 
traders (HFTs) are able to exploit network latency to detect big moves in the 
market and buy large quantities of a stock and sell them microseconds later. So 
when an institutional trader like your pension fund or a university 
endowment attempts to buy or sell a block of stock, the HFT can buy that stock 
first at a 
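
The best-price routing rule described above can be sketched roughly as
follows (a toy illustration only; venue names and quotes are made up, and
real order routing under these rules is far more involved):

```python
def route_buy_order(venue_asks: dict) -> tuple:
    """Route a buy order per a best-price rule: if another venue quotes a
    better (lower) ask, the order must be filled at that price or routed
    to the venue quoting it. Prices here are hypothetical."""
    best_venue = min(venue_asks, key=venue_asks.get)  # lowest ask wins for a buy
    return best_venue, venue_asks[best_venue]

# The order arrives at NASDAQ, but NYSE quotes a better price, so it routes there.
venue, price = route_buy_order({"NASDAQ": 10.05, "NYSE": 10.03, "BATS": 10.04})
```

The latency exploit follows from this rule: an HFT that sees the routed order
coming can trade at the quoted venue before the slower order arrives.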

Re: [-empyre-] Critical considerations linger.

2021-02-14 Thread Amanda McDonald Crowley
--empyre- soft-skinned space--
Jon: your post is haunting and Veronica's tribute song is playing over and
over and over in my head.

It is worth noting that former US president DT's administration ordered the
murder of more death row inmates than any American president since the
1800s.

I'm reminded of Graham Harwood's Rehearsal of Memory from 1995 (which I
serendipitously shared with my students in class this week): in the UK the
people who committed heinous crimes were committed to mental institutions,
not a prison system per se (although their rights were unsurprisingly
seriously curtailed).

There is not a single example that I'm aware of, of a person who has
committed a crime of this nature who did not have obscene crimes committed
against them as children.

What strategies are there to get these messages into mainstream media?
Letting kids rule ASAP is a noble objective, Jon: but which kids, we might
ask?

Amanda

On Fri, Feb 12, 2021 at 6:00 PM Jon McKenzie  wrote:

> --empyre- soft-skinned space--
> I totally agree that young activists are already inventing new forms of
> tactical media, operating across social media platforms and using different
> genres to form coalitions that, as CAE and DnG taught long ago, can
> intersect with different long-standing communities in patterns resonant
> with animal swarms. At Cornell, we have critical design teams
> 
> collaborating with community partners in NYS, Golan, and Uganda, NGOs and
> NPOs doing human rights, environmental, and public health work. Critical
> thinking + tactical media + design thinking is the mix. The student-partner
> work mimes art activist groups, and we use advanced design thinking
> research based on Latour's media cascade to situate cultural performances
> within flows of technical and org'l performances. It's rapid-response design
> justice.
>
> The #Save Lisa 
> campaign was produced by a broad coalition of 80+ women organized by
> Cornell's Center on Death Penalty Worldwide's Prof Sandra Babcock and Zohra
> Ahmed. The Her Whole Truth team included second-year law student Veronica
> Cinibulk, whose tremendously powerful "Lisa's Song"
>  music video, created during
> months of tweets, Instagrams, FaceBook posts, and op-eds in mainstream
> media, still could not prevent the tragic execution of Lisa Montgomery by
> the Trump administration in early January.
>
> This work continues and over break we began reflecting and researching
> emerging work on retraumatization, platformism, entrepreneurial activism,
> collaborative data analytics, and what Ricardo Dominguez is calling
> un-design un-making. We design and the world, laughing, worlds. Let kids
> reign asap.
>
>
>
>
> On Thu, Feb 11, 2021 at 11:18 AM Curry, Derek 
> wrote:
>
>> --empyre- soft-skinned space--
>> Brian brings up a good point about the capacity of art to teach users
>> about what happens when they engage with social media. How to represent the
>> algorithmic processes that happen on the back end is something Jennifer and
>> I have been wrestling with for a little while, usually with what feels like
>> qualified or limited success (when there is any success at all).
>>
>> The Crowd-Source Intelligence Agency (that Jennifer mentioned in her
>> post) worked best when people were presented with their own Twitter posts
>> after they had been processed by our machine learning classifiers,
>> including one trained on posts made by accounts of known terrorist
>> organizations like ISIS and Boko Haram (before they were banned). When we
>> were invited to speak about the project by a specific group, we would
>> typically surveil the Twitter accounts of people we knew would be present.
>> Most people will have a few posts that our terrorist classifier would flag.
>> We can then look at those individual posts as a group and try to see why
>> the classifier may have flagged the post.  One notable interaction was when
>> a young woman saw that a significant number of her posts had an extremely
>> high statistical similarity (greater than 90%) to posts made by terrorist
>> groups. When seen in comparison to other members of the group, this seemed
>> really funny, especially to the woman whom our classifier deemed to be a
>> terrorist—people in the group all knew her, so this seemed absurd. But,
>> when we looked at the individual posts that had been flagged, she realized
>> that she had been retweeting a lot of posts by Palestinian activists—which
>> really is something that we know from our research that intelligence
>> agencies do look for. A look of horror came over this participants face and
>> her entire posture changed as she realized how her posts were interpreted
>> by an algorithm. She explained that she had been 

Re: [-empyre-] Critical considerations linger.

2021-02-12 Thread Jon McKenzie
--empyre- soft-skinned space--
I totally agree that young activists are already inventing new forms of
tactical media, operating across social media platforms and using different
genres to form coalitions that, as CAE and DnG taught long ago, can
intersect with different long-standing communities in patterns resonant
with animal swarms. At Cornell, we have critical design teams

collaborating with community partners in NYS, Golan, and Uganda, NGOs and
NPOs doing human rights, environmental, and public health work. Critical
thinking + tactical media + design thinking is the mix. The student-partner
work mimes art activist groups, and we use advanced design thinking
research based on Latour's media cascade to situate cultural performances
within flows of technical and org'l performances. It's rapid-response design
justice.

The #Save Lisa 
campaign was produced by a broad coalition of 80+ women organized by
Cornell's Center on Death Penalty Worldwide's Prof Sandra Babcock and Zohra
Ahmed. The Her Whole Truth team included second-year law student Veronica
Cinibulk, whose tremendously powerful "Lisa's Song"
 music video, created during
months of tweets, Instagrams, FaceBook posts, and op-eds in mainstream
media, still could not prevent the tragic execution of Lisa Montgomery by
the Trump administration in early January.

This work continues and over break we began reflecting and researching
emerging work on retraumatization, platformism, entrepreneurial activism,
collaborative data analytics, and what Ricardo Dominguez is calling
un-design un-making. We design and the world, laughing, worlds. Let kids
reign asap.

On Thu, Feb 11, 2021 at 11:18 AM Curry, Derek 
wrote:

> --empyre- soft-skinned space--
> Brian brings up a good point about the capacity of art to teach users
> about what happens when they engage with social media. How to represent the
> algorithmic processes that happen on the back end is something Jennifer and
> I have been wrestling with for a little while, usually with what feels like
> qualified or limited success (when there is any success at all).
>
> The Crowd-Source Intelligence Agency (that Jennifer mentioned in her post)
> worked best when people were presented with their own Twitter posts after
> they had been processed by our machine learning classifiers, including one
> trained on posts made by accounts of known terrorist organizations like
> ISIS and Boko Haram (before they were banned). When we were invited to
> speak about the project by a specific group, we would typically surveil the
> Twitter accounts of people we knew would be present. Most people will have
> a few posts that our terrorist classifier would flag. We can then look at
> those individual posts as a group and try to see why the classifier may
> have flagged the post.  One notable interaction was when a young woman saw
> that a significant number of her posts had an extremely high statistical
> similarity (greater than 90%) to posts made by terrorist groups. When seen
> in comparison to other members of the group, this seemed really funny,
> especially to the woman whom our classifier deemed to be a terrorist—people
> in the group all knew her, so this seemed absurd. But, when we looked at
> the individual posts that had been flagged, she realized that she had been
> retweeting a lot of posts by Palestinian activists—which really is
> something that we know from our research that intelligence agencies do look
> for. A look of horror came over this participant's face and her entire
> posture changed as she realized how her posts were interpreted by an
> algorithm. She explained that she had been reading news stories and was
> angry when she made those posts and had completely forgotten that she had even
> made them. Jennifer and I have written about this type of response as a
> “visceral heuristic,” which she mentioned in her post the other day.
> Whereas many projects that focus on explainable AI try to teach people
> technical aspects of machine learning or some other technology, we have
> been looking for ways that people can simply experience it.
>
> But it is much easier to show people machine bias than to show them how
> their own ideology is produced through algorithmically designed echo
> chambers. For example, what would an algorithmic form of ideology critique
> look like? And I don’t mean ideology in the sense that some software
> studies theorists have combined Althusser’s psychoanalytic conception of
> ideology with the assumption that a computer is a brain (promulgated by
> some proponents of strong AI) to conclude that software itself must be
> ideology. This doesn’t begin to explain the way content recognition
> algorithms can radicalize individuals to the point where they storm the US

Re: [-empyre-] Critical considerations linger.

2021-02-12 Thread Curry, Derek
--empyre- soft-skinned space--
Great post from Jon McKenzie:

From: Jon McKenzie 
Date: Thursday, February 11, 2021 at 6:20 PM
To: "Curry, Derek" 
Cc: soft_skinned_space 
Subject: Re: [-empyre-] Critical considerations linger.

I totally agree that young activists are already inventing new forms of 
tactical media, operating across social media platforms and using different 
genres to form coalitions that, as CAE and DnG taught long ago, can intersect 
with different long-standing communities in patterns resonant with animal 
swarms. At Cornell, we have critical design 
teams<https://blogs.cornell.edu/designthinkingcommunity/women-on-death-row/>
 collaborating with community partners in NYS, Golan, and Uganda, NGOs and NPOs 
doing human rights, environmental, and public health work. Critical thinking + 
tactical media + design thinking is the mix. The student-partner work mimes art 
activist groups, and we use advanced design thinking research based on Latour's 
media cascade to situate cultural performances within flows of technical and 
org'l performances. It's rapid-response design justice.

The #Save 
Lisa<https://deathpenaltyworldwide.org/project/savelisa/>
 campaign was produced by a broad coalition of 80+ women organized by Cornell's 
Center on Death Penalty Worldwide's Prof Sandra Babcock and Zohra Ahmed. The 
Her Whole Truth team included second-year law student Veronica Cinibulk, whose 
tremendously powerful "Lisa's 
Song"<https://www.youtube.com/watch?v=okWfnh3pDcA>
 music video, created during months of tweets, Instagrams, FaceBook posts, and 
op-eds in mainstream media, still could not prevent the tragic execution of 
Lisa Montgomery by the Trump administration in early January.

This work continues and over break we began reflecting and researching emerging 
work on retraumatization, platformism, entrepreneurial activism, collaborative 
data analytics, and what Ricardo Dominguez is calling un-design un-making. We 
design and the world, laughing, worlds. Let kids reign asap.



On Thu, Feb 11, 2021 at 11:18 AM Curry, Derek 
mailto:d.cu...@northeastern.edu>> wrote:
--empyre- soft-skinned space--
Brian brings up a good point about the capacity of art to teach users about 
what happens when they engage with social media. How to represent the 
algorithmic processes that happen on the back end is something Jennifer and I 
have been wrestling with for a little while, usually with what feels like 
qualified or limited success (when there is any success at all).

The Crowd-Source Intelligence Agency (that Jennifer mentioned in her post) 
worked best when people were presented with their own Twitter posts after they 
had been processed by our machine learning classifiers, including one trained 
on posts made by accounts of known terrorist organizations like ISIS and Boko 
Haram (before they were banned). When we were invited to speak about the 
project by a specific group, we would typically surveil the Twitter accounts of 
people we knew would be present. Most people will have a few posts that our 
terrorist classifier would flag. We can then look at those individual posts as 
a group and try to see why the classifier may have flagged the post.  One 
notable interaction was when a young woman saw that a significant number of her 
posts had an extremely high statistical similarity (greater than 90%) to posts 
made by terrorist groups. When seen in comparison to other members of the 
group, this seemed really funny, especially to the woman whom our classifier 
deemed to be a terrorist—people in the group all knew her, so this seemed 
absurd. But, when we looked at the individual posts that had been flagged, she 
realized that she had been retweeting a lot of posts by Palestinian 
activists—which really is something that we know from our research that 
intelligence agen

Re: [-empyre-] Critical considerations linger.

2021-02-11 Thread Curry, Derek
--empyre- soft-skinned space--
Brian brings up a good point about the capacity of art to teach users about 
what happens when they engage with social media. How to represent the 
algorithmic processes that happen on the back end is something Jennifer and I 
have been wrestling with for a little while, usually with what feels like 
qualified or limited success (when there is any success at all).

The Crowd-Source Intelligence Agency (that Jennifer mentioned in her post) 
worked best when people were presented with their own Twitter posts after they 
had been processed by our machine learning classifiers, including one trained 
on posts made by accounts of known terrorist organizations like ISIS and Boko 
Haram (before they were banned). When we were invited to speak about the 
project by a specific group, we would typically surveil the Twitter accounts of 
people we knew would be present. Most people will have a few posts that our 
terrorist classifier would flag. We can then look at those individual posts as 
a group and try to see why the classifier may have flagged the post.  One 
notable interaction was when a young woman saw that a significant number of her 
posts had an extremely high statistical similarity (greater than 90%) to posts 
made by terrorist groups. When seen in comparison to other members of the 
group, this seemed really funny, especially to the woman whom our classifier 
deemed to be a terrorist—people in the group all knew her, so this seemed 
absurd. But, when we looked at the individual posts that had been flagged, she 
realized that she had been retweeting a lot of posts by Palestinian 
activists—which really is something that we know from our research that 
intelligence agencies do look for. A look of horror came over this participant's 
face and her entire posture changed as she realized how her posts were 
interpreted by an algorithm. She explained that she had been reading news 
stories and was angry when she made those posts and had completely forgotten 
that she had even made them. Jennifer and I have written about this type of response 
as a “visceral heuristic,” which she mentioned in her post the other day. 
Whereas many projects that focus on explainable AI try to teach people 
technical aspects of machine learning or some other technology, we have been 
looking for ways that people can simply experience it.
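
The kind of similarity scoring described above can be sketched in miniature
(this assumes a simple bag-of-words cosine measure for illustration; the
CSIA's actual classifiers are not detailed here, and the function names are
hypothetical):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(va) * norm(vb)) if va and vb else 0.0

def flag_post(post: str, corpus: list, threshold: float = 0.9):
    """Flag a post whose best similarity to any corpus document exceeds the
    threshold, mirroring the 'greater than 90%' figure mentioned above."""
    score = max(cosine_similarity(post, doc) for doc in corpus)
    return score, score > threshold
```

On a measure like this, a retweet that reproduces a corpus text verbatim
scores near 1.0, which is one way innocuous sharing can look "terrorist-like"
to a classifier.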

But it is much easier to show people machine bias than to show them how their 
own ideology is produced through algorithmically designed echo chambers. For 
example, what would an algorithmic form of ideology critique look like? And I 
don’t mean ideology in the sense that some software studies theorists have 
combined Althusser’s psychoanalytic conception of ideology with the assumption 
that a computer is a brain (promulgated by some proponents of strong AI) to 
conclude that software itself must be ideology. This doesn’t begin to explain 
the way content recognition algorithms can radicalize individuals to the point 
where they storm the US Capitol. These actions were the result of algorithms 
connecting people to other people and content that reinforces an ideological 
worldview. What I’m asking is: is there a way that artists can reveal this process 
to a person, to show them how their worldview may be partially constructed by 
algorithms? Like Jennifer mentioned in her post, some pro-Trump supporters who 
played our game WarTweets actually thought the game was in support of Trump. 
Brian asked how tactical media practitioners can reveal the affective and 
psychological effects on individuals, and the philosophical issues involved. I 
agree that a new generation of tactical media practitioners has begun to take 
up these questions, but I also think that an effective critique is still in its 
nascent stages. I think Zuboff’s framing of social media and content 
aggregation platforms as surveillance capitalism is a good framework for a 
post-Marxist critique—though most artists I know who are engaged with 
social media have been discussing these issues for a few years without the 
terminology she coined. 

For anyone who is interested, Jennifer and I have written about a visceral 
heuristic in “Crowd-Sourced Intelligence Agency: Prototyping counterveillance” 
published in Big Data and Society, “Qualculative Poetics: An Artistic Critique 
of Rational Judgement” in Shifting Interfaces: An Anthology of Presence, 
Empathy, and Agency in 21st Century Media Arts, and “Artistic Research and 
Technocratic Consciousness” in Retracing Political Dimensions: Strategies in 
Contemporary New Media Art.

https://journals.sagepub.com/doi/full/10.1177/2053951717693259
https://www.cornellpress.cornell.edu/book/9789462702257/shifting-interfaces/ 
https://www.degruyter.com/document/doi/10.1515/9783110670981/html

Looking forward to reading the continued conversation,

Derek


-- 
Derek Curry, PhD.
Assistant Professor Art + Design
Office: 211 Lake Hall

Re: [-empyre-] Critical considerations linger.

2021-02-09 Thread Curry, Derek
--empyre- soft-skinned space--
Thank you everyone for starting this discussion. There is a lot to respond to, 
but perhaps I will start with Renate’s questions/prompts related to social 
media being used in recent BLM, Navalny, and anti-lockdown protests and the 
role of artists.

For me, the use of networked communication for political organization seems 
like an inevitable outcome. If we think of social media as a progeny of 
Gutenberg’s printing press, the distribution of political ideologies, 
organization, and radicalization, should have been one of the first concerns 
rather than something that is often talked about as a ‘side effect.’ I have 
made the argument that one of the reasons social media like Twitter works so 
well for activists is that the original code for Twitter is based on Tad Hirsch 
and the Institute for Applied Autonomy’s TXTmob, which was designed 
specifically as a tool for activists to communicate and organize during 
protests. TXTmob was a service that allowed users to send SMS, or ‘text’, 
messages to other users on a list for real-time coordination in a changing 
situation. TXTmob was used during the 2004 Democratic and Republican National 
Conventions and the 2006 Mayday Immigrant Rights protests in San Francisco. In 
2004, Tad shared his project (and code) at a weekend-long meeting of hackers 
and activists in Oakland. A couple of the programmers at the event worked as 
developers at Odeo, a podcasting startup that would eventually transform into 
Twitter. Like TXTmob, the first version of Twitter (when it was still called 
Twttr) functioned through SMS messaging. (Tad’s version of the story is 
available here: 
https://medium.com/@tadhirsch/txtmob-and-twitter-a-reply-to-nick-bilton-eedbde2abbcd)
 In light of these origins, it should perhaps not be surprising that Twitter 
has become a useful tool during the Arab Spring, the Occupy movements, the 2009 
G-20 Summit, and for organization by activist groups such as BLM, the Spanish 
Indignados, and now right-wing militias and QAnon. 
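
The broadcast pattern TXTmob implemented (one inbound SMS fanned out to
everyone on a list) can be sketched roughly as follows; the class and method
names are hypothetical, not taken from Hirsch's actual code:

```python
class BroadcastList:
    """Toy model of an SMS broadcast list for real-time coordination."""

    def __init__(self):
        self.subscribers = set()
        self.outbox = []  # stands in for an SMS gateway's send queue

    def subscribe(self, number: str):
        self.subscribers.add(number)

    def broadcast(self, sender: str, text: str):
        # One inbound message is fanned out to every subscriber but the sender.
        for number in self.subscribers:
            if number != sender:
                self.outbox.append((number, text))

group = BroadcastList()
for n in ("555-0101", "555-0102", "555-0103"):
    group.subscribe(n)
group.broadcast("555-0101", "police moving toward 5th ave")
```

The same one-to-list fan-out is recognizable in early SMS-based Twttr, where a
post to one's account was pushed to every follower's phone.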

One way that I have tried to respond as an artist is by harnessing the 
organizational potential of social media in a way that reveals the power 
structure it creates. For my PhD dissertation project, I focused on how social 
media can impact finance, something that is now topical in the wake of Reddit 
users inflating various stock prices. For the practice-based component of my 
degree, I created Public Dissentiment, a web application that helps people 
protesting a publicly traded company gain the attention of that company’s 
board members and shareholders by generating social media posts designed to 
negatively impact the price of the company’s stock when they are read by 
algorithmic trading bots. The inspiration for the project came from tactical 
media projects such as TXTmob and FloodNet, but also from tactics used by 
stock traders and terrorist organizations. 

One example is from April 23, 2013, when the Syrian Electronic Army hacked the 
@AP Twitter account and posted a tweet that said, “Breaking: Two Explosions in 
the White House and Barack Obama is injured.” Less than three minutes after 
the tweet was posted at 1:07pm, the Dow had lost more than $236.5 billion and 
the S&P 500 had lost almost the same amount. The market rebounded almost as 
quickly as it dropped. This type of event is known as a “flash crash” and is 
the result of high-frequency trading bots cancelling their stock orders 
because of perceived volatility. 

There are a number of factors, both technological and regulatory, that have 
led to the situation where a tweet can trigger a market crash, the biggest of 
which is that the vast majority of stock trades are now made by computer 
programs. High-Frequency Trading (HFT) is a type of algorithmic trading that 
involves the automated buying and selling of securities and other financial 
products using high-powered computers, in very large volumes, and at very 
high speeds. HFTs are not investors; they do not hold positions for long 
periods of time. Rather, when an HFT buys a security it will sell it within a 
few hours, often within milliseconds. HFTs don’t depend on price fluctuations 
to make money; instead they collect ‘rebates’ from stock trading platforms 
that use a ‘maker-taker’ pricing system. In a maker-taker system, traders are 
paid a small rebate or kickback (typically $0.002 per share) for helping to 
“make” a transaction possible. Since they only make 20 cents for every 100 
shares they buy or sell, they must deal in enormous volumes to make a profit, 
and they will stop trading at the first sign of volatility. HFTs look for 
volatility by monitoring price fluctuations and by scanning news and social 
media feeds for any information that could indicate a swift change in a 
stock’s price. Public Dissentiment uses a reverse-engineered financial 
sentiment analysis engine that algorithmically 
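
The maker-taker arithmetic above is easy to verify with a back-of-the-envelope 
sketch. The $0.002-per-share rebate is the figure quoted above; the $10,000 
daily target is a purely hypothetical number for illustration.

```python
# Back-of-the-envelope check on the maker-taker figures quoted above.
# $0.002 per share is the rebate mentioned in the post; the daily
# income target below is illustrative, not a model of any real HFT.

REBATE_PER_SHARE = 0.002  # dollars of rebate per share "made"

def rebate(shares: int) -> float:
    """Rebate income, in dollars, for a given number of shares traded."""
    return shares * REBATE_PER_SHARE

# 100 shares -> 20 cents, as stated above
print(f"rebate on 100 shares: ${rebate(100):.2f}")

# Volume needed to clear a hypothetical $10,000/day on rebates alone
shares_needed = round(10_000 / REBATE_PER_SHARE)
print(f"shares/day for $10,000 in rebates: {shares_needed:,}")
```

At a fifth of a cent per share, rebate income only adds up in the millions of 
shares per day, which is exactly why these traders depend on volume and flee 
at the first sign of volatility.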

[-empyre-] Critical considerations linger.

2021-02-08 Thread Renate Ferro
--empyre- soft-skinned space--
Thank you Jennifer for introducing Derek and your collaborative work.  I am 
thankful you listed links to these and I am hoping that any of our subscribers 
who do have work that resonates with the ideas of data, information, and 
surveillance capitalism please share them with our listserv.  

What I am trying to wrap my head around today relates to the collapse of the 
virtual networks of social media into networks of physical political 
engagement, resistance, and protest in the streets.  Certainly, we witnessed 
this in 2020 during the Black Lives Matter movement and then quite negatively 
during the Capitol uprising.  Global examples include the storming of 
Germany's parliament in 2020 in response to Covid restrictions and, just 
within the last couple of weeks, crowds gathering in Russia in support of 
Aleksei Navalny, who was poisoned and then imprisoned. Where do we go from 
here individually and collectively?  
How does art help to engage social media users to understand the underpinnings? 
What may have negative impacts? Or positive ones?  What can we learn from 
comparisons? Critical considerations linger. 

Domenica mentioned digital literacy for the young. Geert called for an 
overthrow of the system from the ground up. Thoughts from our guests and 
subscribers? 

Best, 
Renate  

___
empyre forum
empyre@lists.artdesign.unsw.edu.au
http://empyre.library.cornell.edu