More on Google and Military Drones

A bit more of my thoughts on Google's military drone AI effort.

One issue that often comes up in such discussions is the difference
between defensive vs. offensive technologies. I remember having
discussions about topics like this at RAND many, many years ago (not
drones of course, but tech efforts that ostensibly aimed at troop
defense rather than offense, for example). The upshot was that it was
ultimately impossible to "wall off" one from the other.
That is, tech designed for the former always ended up contributing to
the latter, either directly or indirectly (I've had Pentagon types say
this to me explicitly, explaining that this is part of why they fund
what seem to be purely defensive efforts -- they know there will be an
offensive side payoff).

With image analysis and target identification, this connection seems
even more direct.

A counter-argument is that better target analysis could in theory help
avoid civilian collateral damage. But I don't believe that is generally
true in practice, given the nature of the targets that drones are used
against. These targets tend to be deep in civilian
areas and travel with civilians (including children, other family
members, etc., who typically have no choice about such matters).  No
drone-based image analysis can separate these. Pentagon planners for
years have used drones for attacks with the explicit understanding
that significant civilian losses are part and parcel of such attacks,
and any tech that increases the viability of drone-based attacks will
increase such losses.

Lauren Weinstein
Lauren's Blog:
Google Issues Mailing List:
Founder: Network Neutrality Squad: 
         PRIVACY Forum:
Co-Founder: People For Internet Responsibility:
Member: ACM Committee on Computers and Public Policy
Tel: +1 (818) 225-2800