----------empyre- soft-skinned space----------------------

Borders are simultaneously demarcated and controlled in physical and virtual 
space. When we attempt to cross borders, the data and information that have 
been collected about us become part of the assessment process. Individuals are 
profiled in various ways: their country of origin, the social media networks 
they belong to, GPS data from their cellphones, their research interests, 
suspicious bodily movements as they wait in line for security. All of this 
captured data, and more, becomes part of a data body that shadows us as we 
travel.

Data are central to the work of intelligence agencies and border patrol agents. 
The massive scale of surveillance in both virtual and physical space produces 
enormous amounts of data and information that agents feel they need to collect, 
share, and process. Leaked documents and ethnographic reports show that 
intelligence agents are afraid that they are not sharing enough information 
with one another, yet at the same time they feel they are drowning in too much 
information, struggling to make meaning out of noise. This “collect it all” 
and “share it all” approach has resulted in the accumulation of more 
information than human agents can process, leading to a perceived need for 
automated processing: so-called “next generation information access” (NGIA) 
systems that algorithmically work through the massive troves of collected 
data, in the belief that software will find patterns that human analysts 
cannot perceive. This fear of not collecting or sharing enough data emerged in 
the wake of the intelligence failures of 9/11.

Data are understood by intelligence agents to be raw facts, and meaning is 
thought to be mechanically and objectively found by the analyst or algorithm. 
While these assumptions reflect an empiricist epistemology, I have found that 
intelligence analysts generally find it hard to articulate the epistemology of 
their own practice, even as they disavow the deduction and intuition that are 
in fact central to it. This may leave agents susceptible to dominant 
epistemological shifts and arguments coming from other fields, such as data 
science and Artificial Intelligence (AI), which bring their own sets of 
assumptions as they promise technological solutions to ease the difficulties 
of mass surveillance. Companies like IBM promote their black-boxed “smart 
algorithms” to analysts who do not understand how these technologies work, 
even as they rely on those technologies to make judgments.

It is with all of this in mind that I turn to a specific instance of automated 
judgment and border surveillance. Palmer Luckey, the founder of Oculus, and 
former executives from the CIA-funded tech company Palantir are currently 
developing Virtual Reality augmented by Artificial Intelligence to automate 
judgments in the surveillance of the border between the US and Mexico. 
Luckey’s defense tech company, Anduril, has pitched this cybernetic 
surveillance agencement to the Department of Homeland Security (DHS) as the 
technological version of Trump’s border wall. According to Luckey, the 
technology desired by the Department of Defense (DOD) can be described as 
“Call of Duty goggles,” where “you put on the glasses, and the headset display 
tells you where the good guys are, where the bad guys are, where your air 
support is, where you’re going, where you were.” Far beyond a cybernetic aid 
for improving the perception of movement, the ideal version of this technology 
would employ Artificial Intelligence to automate judgments at the border, to 
help determine the “bad guys.” Because the technology is proprietary, it is 
not clear what kinds of data or algorithms will be used to determine who is 
supposedly good or bad. This is yet another black-boxed “smart algorithm” 
being sold as a technological solution to the problems produced by the massive 
scale of surveillance that US agencies are attempting to undertake.

There are several assumptions built into this virtual (not to mention 
gamified) border security: that data can provide evidence of a threat to 
national security, that judgments at the border can and should be automated, 
and, especially, that someone who risks their life to cross the border must be 
a threat. Trump’s Great Wall is founded on both xenophobia and ignorance about 
the broader conditions that prompt people to risk their lives to cross the 
border. The DHS and DOD appear to be taken in by the data-science rhetoric 
used by companies like Anduril. Border agents should not use black-boxed 
“smart algorithms” to automate judgments, especially judgments that rest on so 
many assumptions. This leaves me with one looming question: how can we, the 
public, have meaningful oversight of proprietary public-private technological 
solutions to border surveillance?

_______________________________________________
empyre forum
empyre@lists.artdesign.unsw.edu.au
http://empyre.library.cornell.edu
