Thoughtful critique of the tech industry from the Pentagon.  

Remember when Billy mocked the idea that Facebook was a threat?



America’s Innovation Culture Should Be About Public Purpose
https://www.theatlantic.com/ideas/archive/2018/11/mark-zuckerberg-missed-opportunity/576088/
(via Instapaper)


Mark Zuckerberg at the joint Senate Judiciary and Commerce Committees hearing 
on the company’s protection of user data (Leah Millis / Reuters)
The arc of innovative progress has reached an inflection point. Recent 
technological change that has brought immeasurable improvements to billions 
around the globe now threatens to overwhelm us. Making this disruption positive 
for all is the chief challenge of our time. We ourselves—not only market 
forces—should bend the arc of change toward human good. To do so, we must 
reinvigorate an ethos of public purpose that has become dangerously decoupled 
from many of today’s leading tech endeavors.

Public purpose was once central to innovation. My mentors in the field of 
subatomic physics hailed from the Manhattan Project. This generation stressed 
that along with the ability to make great change came great responsibility. 
They were proud to have created nuclear weapons that helped end World War II 
and deterred a third world war. But this disruptive technology posed an 
existential danger, so many of these scientists also devoted themselves to arms 
control, missile defense, nonproliferation, and other efforts to make the 
nuclear revolution safer. This is the ethic that drew me to work in national 
security.

Today we face similarly game-changing technological advances in three big 
categories: digital, biotech, and jobs and training. But it’s not clear that 
today’s tech leaders share the same fierce commitment to aligning technology 
with public purpose. How, then, can we set the conditions for today’s 
disruptive changes to redound to the overall good of humankind?

The right conditions and decisions will not emerge without strong input from 
technologists themselves. I valued this input greatly when I was secretary of 
defense. It’s why I founded the Defense Digital Service, the Defense 
Innovation Unit Experimental (DIUx) in Silicon Valley, and the Defense Innovation Board, 
which included senior leaders such as Eric Schmidt, Jeff Bezos, Reid Hoffman, 
and Jen Pahlka. Despite the Snowden hangover, I found a hunger among most 
technologists to be part of something bigger than themselves and their firms.

Unfortunately, there’s another ethos that’s hostile to this line of thinking. 
It is pervasive in digital tech and more prevalent perhaps among the generation 
that followed my mentors. This libertarian ethos is inherently distrustful of 
government and believes that public good and public purpose will somehow emerge 
through a popular and supposedly freer mechanism. This philosophy assumes that 
past major technological disruptions were weathered without accompanying 
changes in governance. But that’s not the case.

Take the farm-to-factory migration. Hundreds of millions of people altered 
their way of life when collective mechanized effort became the norm. Their 
lives were generally much better in the end, but this transition took decades 
to sort out. It fueled the rise of communism, exacerbated urban poverty, and in 
some places led to failed states. It was rocky.

The success of this first technical revolution was not automatic. In the United 
States, its sharp edges were rounded not by laws of technology or economics 
alone, but by the sweeping, deliberate changes brought about by what we now 
call the Progressive movement, which created the Food and Drug Administration, 
child-labor laws, compulsory public education, boards of public health, the 
Sherman Antitrust Act of 1890, muckraking journalism, and labor unions, among 
other innovations. By establishing minimum standards of safety and trust, these 
reforms made impersonal, large-scale commerce possible. Our charge today is to 
create an analogous effort to leaven today’s disruptive change so we get the 
good with less of the bad. Nowhere is this need more acute than with regard to 
social media, artificial intelligence, and the biotech revolution.

Social media are wonderful enablers of commerce and community, but also of 
darkness, hatred, lies, and isolation; invasion of privacy; even attack. That’s 
why the congressional hearings with Facebook’s CEO, Mark Zuckerberg, earlier 
this year were so important. The public understood the stakes: Ninety-one 
percent of Americans, according to Pew, feel they’ve lost control of how their 
personal data are collected and used. The hearings were a chance to uncover the 
dilemmas and pave the road for solutions.

Instead, they laid an egg. They missed a historic opportunity to devise what 
everyone agreed is needed: a mix of self-regulation by tech companies and 
informed regulation by government. Zuckerberg gave an account of his company’s 
ethical conduct that sufficed for one news cycle, but will not suffice for the 
arc of history. As for the quality of the congressional questioning, all I can 
say is that I wish members had been as poorly prepared to question me on war 
and peace in the scores of testimonies I gave as they were when asking Facebook 
about the public duties of tech companies.

Managing today’s tech dilemmas will also take renewed government efforts to 
step in judiciously when the common good is at stake. The United States has a 
long history of communication and information system regulation, including 
through antitrust. Some economists argue that since Facebook and Google are 
free, consumers face no economic harm and thus the government has no antitrust 
authority, or that antitrust means breaking up companies. These views would be 
alien to Senator John Sherman, of the Sherman Antitrust Act, and Justices Louis 
Brandeis and William Douglas, who wrote early opinions concerning its 
enforcement. They repeatedly stressed that the government’s interest was in the 
general public good, and was not confined to price gouging.

The second major digital dilemma concerns artificial intelligence. At the 
Pentagon, I promulgated a directive on that subject. It stated that for every 
system capable of carrying out or assisting in the use of lethal force, a human 
must be involved in the decision. In the Pentagon, we cannot avoid 
responsibility by declaring, “The machine made a mistake.” The same goes for 
the designers of a driverless vehicle that kills a pedestrian. AI designers 
must enable the tracing of decision methods for accountability in things that 
matter.

I am well aware of the concerns of some Google employees about working on 
Project Maven, an AI effort for the U.S. Department of Defense. These concerns 
are misplaced. First, the Pentagon is governed by the memorandum I wrote, and 
Maven is required to abide by it; our nation takes its values to the 
battlefield. Second, who better than tech-savvy Google employees to steer the 
Pentagon in the right direction? Third, are Google employees really more 
comfortable working in and for Communist China, where there is no separation 
from the People’s Liberation Army? Fourth, we are, after all, defending our 
fellow citizens and common values.

The skepticism over Project Maven reflects a larger wariness among the rising 
generation of technologists. While they hold a sincere desire to advance the 
public interest, they are sometimes uncertain whether partnering with 
government is consistent with that conviction. We can’t afford to be lukewarm 
or of mixed mind in this arena. But even as we rebuild long-term trust between 
the private and public sectors, more and more young innovators are recognizing 
that if they don’t step up, they might not like who does.

As transformative as digital disruption has been, the looming biosciences 
revolution—driven by recent dramatic breakthroughs in biological science and a 
new, higher-velocity investment climate—will be at least as consequential in 
coming decades.

One major avenue is Clustered Regularly Interspaced Short Palindromic Repeats 
(CRISPR) and the possibility of editing even the human genome. In addition to the 
obvious moral issues associated with tampering with life itself without any 
hope of the consent of the unborn, the wealthy could soon purchase a new kind 
of unequal opportunity that makes any previous form of discrimination pale in 
comparison.

A different innovative avenue is the growing capacity to create new kinds of 
designer cells. This could include novel pathogens with high lethality and 
flu-like ability to spread. But it extends to organisms and tissues custom-made 
for a wide range of purposes, which may be more or less benign.

All these innovations lead us to another avenue of disruptive potential: the 
union of the information revolution and the biological revolution. It is 
becoming quite possible, for example, to do a “big data” collection of a cell’s 
DNA, RNA, and protein inventory, not just on a sample basis from a single 
organism, but cell by cell within the organism.

Until recently, these biotech innovations sprang from laboratory techniques 
requiring Ph.D.-level talent and institutional scale. Today, however, they are 
becoming platforms on top of which amateurs can innovate. It is already 
possible to send off a DNA sample and get an entire sequence returned overnight 
by email (something that only recently took billions of dollars and years of 
effort). Forwarding that email to another overnight service returns an 
“interpretation” of the sample’s identity or health. Someone who knows nothing 
about the underlying science can exploit this platform to create novel 
applications. This potential is being turbocharged by the accessibility of bio 
incubators where young people can—at trivial cost—make use of laboratory 
equipment that costs millions of dollars. The scale and cost of meaningful 
innovation will go way down, and the speed of socially consequential innovation 
will go way up.

Meanwhile, the multibillion-dollar, decade-long investment cycle of traditional 
pharma will be supplemented by fast venture-capital money. These new investors 
may not have the culture or values of research scientists. And they may not 
share the norms and regulations that come with, for example, National 
Institutes of Health and Food and Drug Administration funding and approvals. 
Like the early digital era, this Wild West climate portends both boom and bust.

The third tech-driven revolution of our time is in the future of work and 
training. Put starkly, unless our fellow citizens can see that in all this 
disruptive change there is a path for them and their children to the American 
dream or its equivalent, we will not have cohesive societies.

There are a lot of smart people at MIT and around Boston working on 
technologies such as lidar that make driverless cars possible. I always say to 
these people, “Save a little bit of your innovative energy for the following 
challenge: How about the carless driver? What is to become of the tens of 
thousands of truck, taxi, and car drivers whose jobs are disrupted?” For these 
drivers, this unstoppable transition will be like the farm-to-factory 
transition. We owe it to them to make sure it all comes out well.

From life-saving medicines to traffic-beating algorithms, the accelerating 
pace of innovation is already bringing great progress. But it would be foolish 
to let inertia set the agenda. We cannot have a functioning society if a 
substantial fraction of Americans see innovation as passing them by. What’s 
needed is a mix of tech-community expertise and public spirit working 
together. Such collaboration can create the conditions that make the coming 
disruptions not just tolerable but desirable.

This story was adapted from a Belfer Center report, “Shaping Disruptive 
Technological Change for Public Good.”


Ash Carter, former U.S. Secretary of Defense, is the Director of Harvard 
Kennedy School’s Belfer Center for Science and International Affairs, where he 
directs a new project, Technology and Public Purpose. He is also an MIT 
Innovation Fellow and corporation member.


