Out of these unprecedented ownership claims over the means of digital
production, a new system of class relations could now arise. This
article analyzes the social divisions associated with the ascendancy of
the tech industry and the reorganization of social processes through
algorithms (sets of instructions written as code and run on computers).
In the next section, The Rise of the Coding Elite and the
Pre-Automation of Everyone Else, we argue that the core divide in
digital capitalism opposes what we call the coding elite, who hold and
control the data and software, and the cybertariat, who must produce,
refine, and work the data that feed or train the algorithms, sometimes
to the point of automating their own jobs and making themselves
redundant. We also show that claims of technical and economic
efficiency, as well as fairness, are an important component of the
coding elite's societal power. [...]

In the section titled Classifiers and Their Discontents, we show that
algorithmic processes also structure how people come to know and
associate with one another, and how technical mediations intersect with
the perception and production of self and community. [...]

# The Coding Elite: Power at Scale

A new elite occupies the upper echelons of the digitized society—a
class or proto-class that, in a self-conscious nod to Mills (2000), we
call the coding elite. The coding elite is a nebula of software
developers, tech CEOs, investors, and computer science and engineering
professors, among others, often circulating effortlessly between these
influential roles. [...]

Professors circulate between their own start-ups, key positions in
large firms, government-sponsored research labs, and classrooms. Most
valued in this world are those people who touch and understand computer
code. Most powerful are those who own the code and can employ others to
deploy it as they see fit.

Mastery of computational techniques bestows special kinds of powers.
These powers are at once cultural, political, and economic. [...]

no profession, no matter how prestigious or how high the barriers to
entry, is exempt from having its judgment subject to a second
(algorithmic) opinion, if not wholly supplanted by it. Legitimacy has
been displaced from the professional to the coder-king—and,
increasingly, to the algorithm. [...]

# Cybertarians of the World, Disunited

If industrial capitalism concealed labor's existence through the
fetishism of commodities, digital capitalism intentionally conceals it
through the fetish of artificial intelligence (AI) and feigned
automation. 

The smooth functioning of on-demand apps, search engines, mapping
sites, social media websites, and even autonomous vehicles, among many
other products, depends on the collective intelligence of armies of
humans performing ghost work. [...] What stands beneath the fetish of AI
is a global digital assembly line of silent, invisible men and women,
often laboring in precarious conditions, many in postcolonies of the
Global South. A new class of workers stands opposite the coding elite:
the cybertariat. [...]

One of the distinguishing features of digital capitalism is its
reliance on free labor. [...]

It may be more difficult for the cybertariat to resist the extraction
of their labor by undertaking shop-floor organizing, as did the
proletarians of the past. The material basis of their work situation
precludes it. Resistance is actively underway nonetheless, for example,
in online forums organized off-site. Coders allied to cyber workers'
rights also play a role, creating tools to aid cybertarians in
information sharing and self-organizing. [...]

Fairness may be the most hotly debated topic in machine learning today,
which often leads to complex arguments about which statistical
criterion best fits the situation: false negatives versus false
positives, or demographic parity versus predictive rate parity
(Weinberger 2019, Narayanan 2019). Some critics reject outright the
claim that mathematical objectivity is inherently better at guarding
against social inequities than human judgment, however subjective the
latter may be. Eubanks (2017, p. 168) insists on the fundamental role
of empathy in the delivery of social services: “the assumption that
human decision-making is opaque and inaccessible is an admission that
we have abandoned a social commitment to try to understand each other.”
Echoing this sentiment (and turning Max Weber on his head), Pasquale
(2019) concludes that a rule of persons is better able to guarantee
legal due process than a rule of machines.
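To make the statistical contrast concrete, here is a minimal illustration of my own (not from the article; the data and numbers are entirely hypothetical). It shows how a classifier can satisfy predictive rate parity (equal precision across groups) while violating demographic parity (equal rates of positive predictions):

```python
# Toy illustration of two fairness criteria that can conflict.
# All data, group compositions, and numbers are hypothetical.

def rates(preds, labels):
    """Return (positive prediction rate, precision) for one group."""
    n = len(preds)
    pos = sum(preds)
    true_pos = sum(p and y for p, y in zip(preds, labels))
    ppr = pos / n                                # share predicted positive
    precision = true_pos / pos if pos else 0.0   # predictive rate
    return ppr, precision

# Hypothetical predictions (1 = "positive") and true outcomes per group.
group_a = ([1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
group_b = ([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

ppr_a, prec_a = rates(*group_a)  # 0.2, 0.5
ppr_b, prec_b = rates(*group_b)  # 0.4, 0.5

# Demographic parity asks ppr_a == ppr_b: violated (0.2 vs 0.4).
# Predictive rate parity asks prec_a == prec_b: satisfied (0.5 vs 0.5).
```

Which of the two criteria "best fits the situation" is exactly the kind of judgment the statistics alone cannot settle, which is the article's point.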

# The Algorithmic Dominion

Inclusion into some identification database has long been a
prerequisite of modern citizenship [...]

In India, Aadhaar, an integrated identification system that stores
fingerprint and iris scans along with demographic data for each
citizen, was originally publicized as a tool for graft elimination and
the efficient delivery of welfare services. It has swiftly become
required for interactions with both public and private institutions,
anchoring an emergent mass surveillance infrastructure (Rao & Nair
2019).

In South Africa, the postapartheid government similarly sought
to implement a nationwide biometric identification system to improve
the uniformity of social welfare grant disbursement. In typical
Weberian fashion, the government claims that the system's universalism
and standardization guarantee equal treatment (Donovan 2015). Despite
the country's history of oppressive information infrastructures, most
notoriously its passbook system, this new citizen database was embraced
by postapartheid leaders.

In China, both municipalities and the central government have partnered
with private sector firms to develop social credit systems oriented to
improving the financial behavior and civic-mindedness of individuals
and organizations (Ahmed 2019, Liu 2019, Ohlberg et al. 2017). By
linking algorithmically produced social credit scores to tangible
outcomes (conveniences and perks, public praising or shaming), these
systems foster rule compliance (e.g., using crosswalks to cross the
street) and obedience to social expectations (e.g., taking care of
one's parents, doing volunteer work). While Western commentators have
often interpreted the development of social credit through the lens of
China's political authoritarianism, it is useful to remember that
private data infrastructures elsewhere can feel similarly oppressive
and inescapable. O'Neil (2016) describes, for example, how a job
applicant was shut out of work in a sizeable portion of the American
retail industry when he failed a hiring prescreening test designed by a
software company with contracts throughout the sector. Other data
systems operate ubiquitously across national borders. Euro-centric
assumptions built into cybersecurity tools that automate the
identification of fraud, for example, have become a ubiquitous part of
the global infrastructure (Jonas & Burrell 2019). [...]

In these examples, fair allocation is not the only issue. The inability
of those so forcefully governed to shape the terms of the algorithmic
dominion, or to evade the rule of the code, raises fundamental
questions about democracy and human autonomy (Amoore 2020, Aneesh 2009).
[...]

A proper critique must thus begin with the recognition that algorithms
are ethico-political entities that generate their own “ideas of
goodness, transgression and what society ought to be” (Amoore 2020, p.
7). In other words, algorithms are transforming the very nature of our
moral intuitions—that is, the very nature of our relations to self and
others—and what it means to exist in the social world. The next section
examines this shifting terrain. [...]

the digital infrastructure operates in increasingly totalizing,
continuous, and dynamic ways. Not only do digital data traces allow for
intrusive probing by institutions far afield from the data's original
collection site (e.g., credit data matter to landlords and to
prospective romantic partners, and police departments are hungry for
social media data), but they also enable the guiding or control of
behavior through reactive, cybernetic feedback loops that operate in
real time. The more one interacts with digital systems, the more the
course of one's personal and social life becomes dependent on
algorithmic operations and choices. This is true not only for the kinds
of big decisions mentioned above but also for mundane, moment-by-moment
actions: For instance, each online click potentially reveals some
underlying tendency or signals a departure from a previous baseline. As
new data flow in, categories and classifications get dynamically
readjusted, and so do the actions that computing systems take on the
basis of those categories. 
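The cybernetic loop described above can be sketched schematically (this is my own minimal illustration, not the article's; the class name, the update rule, and every threshold are hypothetical). Each new interaction shifts a running estimate of a user's "underlying tendency", and the category, hence the system's action, is re-derived on every update:

```python
# Minimal sketch of a real-time classification feedback loop.
# All names, the smoothing rule, and the thresholds are hypothetical.

class ProfileLoop:
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # how quickly new clicks outweigh the baseline
        self.score = 0.5     # running estimate of an underlying tendency

    def observe(self, click):
        """Fold one interaction (0 or 1) into the running estimate
        via exponential smoothing, then reclassify immediately."""
        self.score = (1 - self.alpha) * self.score + self.alpha * click
        return self.classify()

    def classify(self):
        """The category, and hence the system's action, is re-derived
        on every single update."""
        return "promote" if self.score > 0.6 else "demote"

loop = ProfileLoop()
# A short run of clicks shifts the category in real time:
actions = [loop.observe(c) for c in [1, 1, 1, 0, 0, 0]]
# three "promote" decisions, then three "demote" decisions
```

The point of the sketch is only structural: there is no stable category here, just a moving estimate and an action re-taken at each step, which is what makes the infrastructure "continuous and dynamic" rather than a one-time sorting.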
This has important implications for how people ultimately perceive
themselves and for how social identities are formed. [...]

Inferences made about us are often fed back to us as visualizations,
assessments, scores, or recommendations and, in turn, reconfigure how
we understand ourselves in real time. [...]

the metrics, calculators, and visualization tools that presumably tell
our personal truth are not of our own making. However inaccurate, daily
step counts, menstrual cycles, heart rate, emotional states, social
networks, and spending patterns are reflected back to us, to
institutional others (e.g., doctors, insurance companies, welfare
agencies), and to the world as undeniable evidence of who we are over
and above subjective self-assessment or the old techniques of analog
self-presentation [...] Monitoring and investigating our sleep patterns,
eating habits, and social relations this way is slowly becoming second
nature. What may sometimes feel like playful self-diagnosis is really
no play at all, however—rather, it is a permanently self-probing
condition, powered by incessant feedback loops between human and
machine. [...]

The endgame of the coding elite, the ultimate goal of their
professional project, like the algorithms they build, remains
opaque. [...]
AI's trajectory in society, however, is not simply a question of
whether humanity will benefit or not but, rather, who will benefit. A
new division of learning opposes the knowers against the known (Zuboff
2019); those who make AI work face those who make AI work for
themselves. Unlike the mass of those surveilled, those misrepresented
and alienated, the data capitalists may be able to correct, control, or
improve their personal data representation; to buy themselves entirely
out of surveillance regimes; or to benefit from AI in new ways. [...]

For now, dominant industry talk promises a gentler, more acceptable,
less biased kind of AI, compliant with best practices and ethically
infused (Crawford & Calo 2016). But for all the great chatter about
equity and value alignment (Gabriel 2020), the established
technological trajectory has remained secure: Venture capital offices
and start-up firms continue to roll out a world of ubiquitous
computing, located in everything from human bodies and mundane objects
to city infrastructures and other large works of engineering. [...]

____


Taken from
https://www.annualreviews.org/doi/full/10.1146/annurev-soc-090820-020800

This is a very clear and well-structured analysis, although it does
not consider essential technical and political aspects.

It conflates algorithms with software, and it misunderstands how they
work ("a new class of algorithms (deep learning, an evolution of
neural network models) exploits this abundance by drawing direct
inferences from the data").

It notes how heavily this new "coding elite" depends on unpaid labor,
yet makes no mention of open source.

The hacker movement is never set in opposition to this "coding elite"
(on the contrary, the article repeats the Silicon Valley propaganda
that justifies and ennobles itself by claiming continuity with that
movement), and our political action is reduced to "Coders allied to
cyber workers' rights".

Nor is any mention made of the geopolitical implications, or of the
imbalances and tensions that characterize today's global cybernetic
society.


In short, it is an excellent analysis, but one that proceeds without
an understanding of the technology whose impact it seeks to describe.


Even so, relying solely on exogenous observation of the social
dynamics at play, it draws conclusions that I believe all the
technical people here will share (which in itself reconfirms those
conclusions):

```
It would be a mistake to uncritically embrace the fever dreams of the
coding elite's most fervent boosters, to treat computing “theologically
rather than scientifically or culturally”.

In that respect, sociology offers a useful reality check. Ethnographers
who observe digital technology in action have brilliantly tackled the
unglamorous everyday realities of algorithms. They have documented
considerable resistance to algorithmic systems, frequent errors and
breakdowns, and variations in meaning and effect, both across and
within societies.

We can both reject magical thinking about machine intelligence and
acknowledge the enormous economic, political, and cultural power of the
tech industry to transform the world we live in. Beyond futurism and
hype, existing AI is actually quite mundane.
It is designed by the coding elite, sustained by the cybertariat,
fueled by personal data extracted by (mainly) large digital firms,
frequently optimized for profit maximization, and supported by a
contingent set of legal institutions that authorize (at the time of
this writing) continuous data flows into corporate as well as state
servers.
Like prior control innovations, AI surveils, sorts, parses, assembles,
and automates. And like prior forms of social surveillance and
discipline, it weighs differently and more prejudicially on poor
and minority populations.

Far from being purely mechanistic, it is deeply, inescapably human. 
```


Giacomo
_______________________________________________
nexa mailing list
[email protected]
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa