Folks,
thank you Felix for the synopsis and clarification. (In a message that
slipped by a few days ago, the question of the ontological status of
technology already came up; here's an occasion to add the following
remark to the thread.)
I cannot give a sufficiently informed explanation, but I believe that
the status ascribed to technical things is crucial in this debate
(Simondon, Stiegler, Hui, can you help me?). Are technical things
products of the human mind, or is the human mind itself a product of our
existence as technical beings? Can technical things gain independence in
the systems/environments in which they "exist"?
(I cannot help but think of the main switch or the electricity plug,
which is, as far as I'm concerned, the ultimate sign of inevitable
dependency; of course animal bodies, including humans, are also
dependent on their environments, but under Earth-surface conditions
they can go on for days, some even for weeks or months, without food or
water...)
Is the hypothetical 'granting of rights' to, as Francis calls them,
systems of Enhanced Pattern Recognition, an ontological challenge (for
all of us), or rather an ethical challenge (for those of us who ascribe
a form of potential subjecthood to such technical things)?
I agree with Felix et al. that the claim for 'granting such rights'
smacks of a distraction from human power politics.
Regards,
-a
On 05.10.23 at 11:38, Felix Stalder via nettime-l wrote:
On 10/2/23 21:12, John Hopkins via nettime-l wrote:
Humans are neither autonomous (as in 'closed systems'), nor is any
technology 'completely external' to any particular human if you
consider the nature of reality as a completely connected and
continuous field of flows.
<snip>
What I meant was a relational sense of autonomy of living entities
vis-à-vis humans, meaning that there is something in living entities
that goes beyond, is prior to, and cannot be expressed by, this
relationship. To reduce this relationship to human utilitarianism, to
turn them into things, is an act of violence and a defining operation of
colonialism. That was what the reference to Cesaire implied.
Technologies clearly do not possess this quality. That doesn't mean that
they don't shape human culture (and human beings) or are not themselves
part of the geo-biological cycles (they clearly are, as any mine, data
center or landfill can attest), but the relation between them and humans
is a different one. It is, beneath forms of fetishization, a
utilitarian one. A means to an end. And in capitalism, we know what the
end is.
On 10/2/23 12:42, Joseph Rabie wrote:
At the beginning of this discussion was a post by Edward Welbourne,
for whom the question of AI rights is contingent on the eventuality
of it becoming sentient. This aspect appears to have slipped from the
discussion.
I dropped that quite deliberately, because I don't know what to do with
it. Given the state of technology, ML systems are clearly not
intelligent (though they are powerful and astonishingly capable).
I fully agree with Matteo Pasquinelli that the history of AI is better
read as labor-encoding technology than as a quest for intelligence or
sentience.
https://mediatheoryjournal.org/review-matteo-pasquinellis-the-eye-of-the-master-reviewed-by-alex-levant/
So, to bring up sentience and all the things that this would imply
(world domination, as recent open letters argued) feels either
misguided or disingenuous (in the case of the open letters): a way of
distracting (intentionally or not) from the actual problems at hand,
and of setting up shell companies for laundering power or bias, as Ted
referred to.
Felix
--
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: https://www.nettime.org
# contact: [email protected]