(((Wired has sunk so low, it can't even tell kooks from pioneers.  And Minsky
+ Gates as inspirations, in 2008, WTF?)))

http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

WIRED MAGAZINE: ISSUE 16.02

Two AI Pioneers. Two Bizarre Suicides. What Really Happened?

By David Kushner | 01.18.08 | 6:00 PM

Illustration: Justin Wood

Using the Internet to Build Their Case for Artificial Intelligence

On the morning of June 12, 1990, Chris McKinstry went looking for a gun. At
11 am, he walked into Nick's Sport Shop on a busy street in downtown Toronto
and approached the saleswoman behind the counter. "I'll take a Winchester
Defender," he said, referring to a 12-gauge shotgun in the display. She
eyeballed the skinny 23-year-old and told him he'd need a certificate to buy
it.

Two and a half hours later, McKinstry returned, claiming to have the required
document. The clerk showed him the gun, and he handled the pistol grip
admiringly. Then, as she returned it to its place, he grabbed another shotgun
from the case, yanked a shell out of his pocket, and jammed it into the
chamber.

"He's got a gun! He's got a gun!" a woman screamed, as she ran out the front
door. The store emptied. He didn't try to stop anyone.

Soon McKinstry heard sirens. A police truck screeched up, and men in black
boots and body armor took up positions around the shop.

The police caught glimpses of him through the store windows with the gun
jammed under his chin. They tried to negotiate by phone. They brought in his
girlfriend, with whom he'd just had a fight, to plead with him. They brought
in a psychiatrist — McKinstry had a history of mental problems and had tried
to institutionalize himself the day before. After five hours, McKinstry
ripped the telephone from the wall and retreated into the basement, where he
spent two hours listening to radio coverage of the standoff. Eventually, a
reporter announced that the cops had decided on their next move:

Send in the robot.

McKinstry had stolen the gun because he wanted to end his own life, but now
he was intrigued. He'd always been obsessed with robots and artificial
intelligence. At 4, he had asked his mother to sew a sleeping bag for his toy
robot so it wouldn't get cold. "Robots have feelings," he insisted. Despite
growing up poor with a single mom, he had taught himself to code. At 12, he
wrote a chess-playing program on his RadioShack TRS-80 Model 1.

As McKinstry cowered in the basement, he could hear the robot rumbling
overhead, making what he called "Terminator" noises. It must be enormous, he
thought, as it knocked over shelves. Then everything went eerily quiet.
McKinstry saw a long white plume of smoke arc over the stairs. The robot had
fired a tear gas canister, but it ricocheted off something and flew back the
way it came. Another tear gas canister fired, and McKinstry watched it trace
the same "perfectly incorrect trajectory." He realized the machine had no
idea where he was hiding.

But the cops had had enough. They burst through the front door in gas masks,
screaming, "Put the gun down!" McKinstry had been eager to die a few hours
before, but now something in him obeyed. The gas burned his eyes and lungs as
he climbed from the basement. At the top of the steps, he saw the robot
through the haze. It looked like an "armored golf cart" with a tangle of
cables and a lone camera eye mounted on top. It wasn't like the Terminator at
all. It was a clunky remote-controlled toy. Dumb.

Three hundred miles away in a suburb of Montreal, Pushpinder Singh was
preparing to devote his life to the study of smart machines. The high
schooler built a robot that won him the top prize in a province-wide science
contest. His creation had a small black frame with wheels, a makeshift
circuit board, and a pincer claw. As the prodigy worked its controller, the
robot rolled across the floor of his parents' comfortable home and picked up
a small cup. The project landed Singh in the Montreal Gazette.

Push, as everyone called him, had also taught himself to code — first on a
VIC-20, then by making computer games for an Amiga and an Apple IIe. His
father, Mahender, a topographer and mapmaker who had studied advanced
mathematics, encouraged the wunderkind. Singh was brilliant, ambitious, and
strong-willed. In ninth grade, he had created his own sound digitizer and
taught it to play a song he was supposed to be practicing for his piano
lessons. "I don't want to learn piano anymore, I want to learn this," he
said.

Singh's lifelong friend Rajiv Rawat describes an idyllic geek childhood full
of Legos, D&D, and Star Trek. One of his favorite films was 2001: A Space
Odyssey — Singh was fascinated by the idea of HAL 9000, the artificial
intelligence that thought and acted in ways its creators had not predicted.

To create the character of HAL, the makers of 2001 had consulted with the
pioneering AI researcher Marvin Minsky. (In the novel, Arthur C. Clarke
predicted that Minsky's research would lead to the creation of HAL.) Singh
devoured Minsky's 1985 book, The Society of Mind. It presented the high
schooler with a compelling metaphor: the notion of mind as essentially a
complex community of unintelligent agents. "Each mental agent by itself can
only do some simple thing that needs no mind or thought at all," Minsky
wrote. "Yet when we join these agents in societies — in certain very special
ways — this leads to true intelligence." Singh later said that it was Minsky
who taught him to think about thinking.

In 1991, Singh went to MIT to study artificial intelligence with his idol and
soon attracted notice for his passion and mental stamina. Word was that he
had read every single one of the dauntingly complex books on the shelves in
Minsky's office. A casual conversation with the smiling young researcher in
the hallway or at a favorite restaurant like Kebab-N-Kurry could turn into an
intense hour-long debate. As one fellow student put it, Singh had a way of
"taking your idea and showing you what it looks like from about 50 miles up."

The field of AI research that Singh was joining had a history of bipolar
behavior, swinging from wild overoptimism to despair. When 2001 came out in
the late '60s, many believed that a thinking machine like HAL would exist
well before the end of the 20th century, and researchers were flush with
government grants. Within a few years, it had become apparent that these
predictions were absurdly unrealistic, and the funding soon dried up.

In the mid-'90s, researchers could point to some modest successes, at least
in narrow applications like optical character recognition. But Minsky refused
to abandon the grand Promethean dream of re-creating the human mind. He
dismissed Deep Blue, which beat chess grand-master Garry Kasparov in 1997,
because it had such a limited mission. "We have collections of dumb
specialists in small domains; the true majesty of general intelligence still
awaits our attack," Minsky is quoted as saying in a book called HAL's Legacy:
2001's Computer as Dream and Reality. "No one has tried to make a thinking
machine and then teach it chess."

Singh quickly established himself as Minsky's protégé. In 1996, he wrote a
widely read paper titled "Why AI Failed," which rejected a piecemeal approach
to research: "To solve the hard problems in AI — natural language
understanding, general vision, completely trustworthy speech and handwriting
recognition — we need systems with commonsense knowledge and flexible ways to
use it. The trouble is that building such systems amounts to 'solving AI.'
This notion is difficult to accept, but it seems that we have no choice but
to face it head on."

June 9, 1996: Singh's manifesto, titled "Why AI Failed" (with Bill Gates' response).

Singh's ambitious manifesto prompted an encouraging note from Bill Gates. "I
think your observations about the AI field are correct," he wrote. "As you
are writing papers about your progress, I would appreciate being sent
copies."

While Singh was climbing the academic ladder at MIT, McKinstry was trying to
put his life back together after spending two and a half months in jail. But
the suicidal standoff had given him a new sense of purpose. He liked to think
that the police robot had deliberately misfired its tear gas canisters in an
effort to save him. "Maybe robots do have feelings," he later mused. By 1992,
McKinstry had enrolled at the University of Winnipeg and immersed himself in
the study of artificial intelligence. While pursuing a degree in psychology,
he began posting on AI newsgroups and became enamored with the writings of
the late Alan Turing.

A cryptographer and mathematician, Turing famously proposed the Turing test —
the proposition that a machine had achieved intelligence if it could carry on
a conversation that was indistinguishable from human conversation. In late
1994, McKinstry coded his own chatbot with the goal of winning the $100,000
Loebner Prize for Artificial Intelligence, which used a variation of the
Turing test.

After a few months, however, McKinstry abandoned the bot, insisting that the
premise of the test was flawed. He developed an alternative yardstick for AI,
which he called the Minimum Intelligent Signal Test. The idea was to limit
human-computer dialog to questions that required yes/no answers. (Is Earth
round? Is the sky blue?) If a machine could correctly answer as many
questions as a human, then that machine was intelligent. "Intelligence didn't
depend on the bandwidth of the communication channel; intelligence could be
communicated with one bit!" he later wrote.
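
The scoring protocol McKinstry describes is simple enough to sketch in a few lines. The toy scorer below is illustrative only (the corpus and function names are not from Mindpixel): both a human and a machine answer the same yes/no propositions, and whichever respondent's accuracy is indistinguishable from the human baseline passes.

```python
# A minimal sketch (not McKinstry's code) of Minimum Intelligent Signal
# Test scoring: every proposition demands a one-bit yes/no answer, so
# intelligence is measured purely by accuracy, not conversational fluency.

PROPOSITIONS = [
    ("Is Earth round?", True),
    ("Is the sky blue?", True),
    ("Do fish have hair?", False),
    ("Can blue tits fly?", True),
]

def mist_score(answer_fn):
    """Fraction of propositions answered correctly.

    `answer_fn` maps a question string to True (yes) or False (no).
    """
    correct = sum(answer_fn(q) == truth for q, truth in PROPOSITIONS)
    return correct / len(PROPOSITIONS)

# A trivial "machine" that answers yes to everything does well only
# because this tiny sample is skewed toward true statements -- the test
# is meaningful only over a large, validated, balanced corpus.
print(mist_score(lambda question: True))  # 0.75
```

On a properly balanced corpus, any fixed-answer strategy collapses to chance, which is why McKinstry believed the hard part was amassing and validating an enormous set of propositions rather than designing the test itself.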

On July 5, 1996, McKinstry logged on to comp.ai to announce the "Internet
Wide Effort to Create an Artificial Consciousness." He would amass a database
of simple factual assertions from people across the Web. "I would store my
model of the human mind in binary propositions," he said in a Slashdot Q&A in
2000. "A giant database of these propositions could be used to train a neural
net to mimic a conscious, thinking, feeling human being!"

July 1996: McKinstry announces "Internet Wide Effort to Create an Artificial Consciousness."

The idea wasn't new. Doug Lenat, a former Stanford researcher, had been
feeding information into a database called Cyc (pronounced "psych") since
1984. "We're now in a position to specify the steps required to bring a
HAL-like being into existence," Lenat wrote in 1997. Step one was to "prime
the pump with the millions of everyday terms, concepts, facts, and rules of
thumb that comprise human consensus reality — that is, common sense." But the
process of adding data to Cyc was laborious and costly, requiring a special
programming language and trained data-entry workers.

Cyc was a decent start, McKinstry thought, but why not just get volunteers to
input all that commonsense data in plain English? The statements could then
be translated into a machine-readable format at some later date. McKinstry's
pitch drew plenty of ridicule, but he "wasn't afraid to make a fool of
himself in the process," recalls his ex-wife. And there was one important
person who McKinstry said treated him
with respect: Marvin Minsky. McKinstry claimed to have emailed Minsky in the
mid-'90s, asking if it were possible "to train a neural network into
something resembling human using a database of binary propositions."

"Yes, it is possible," Minsky is supposed to have replied, "but the training
corpus would have to be enormous."

That was apparently all the encouragement McKinstry needed. "The moment I
finished reading that email," he later recalled, "I knew I would spend the
rest of my life building and validating the most enormous corpus I could."

On July 6, 2000, McKinstry retooled his pitch for a collaborative AI
database. He had a business model this time, one that seemed well suited to
the heady days of the dotcom boom. His Generic Artificial Consciousness, or
GAC (pronounced "Jack"), would cull true/false propositions from people
online. For each submission, participants would be awarded 20 shares in
McKinstry's company, the Mindpixel Digital Mind Modeling Project.

Mindpixel was a term McKinstry invented to describe the individual
user-submitted propositions. Pixels, short for "picture elements," are the
tiny, simple components that combine to create a digital image. McKinstry saw
mindpixels as mental agents that could be combined to create a society of
mind. Gather enough of them — roughly a billion, he estimated — and the
mindpixels would combine to create a functioning digital brain.

The criticisms and flames never let up. But McKinstry's clever stock offer
managed to generate mainstream press coverage and hundreds of thousands of
mindpixel submissions. He posted regular messages to his "shareholders" and
talked up the enormous potential value that the Mindpixel project could have
if it achieved its lofty goals. "It's like inventing teleportation," he told
Wired News in September 2000. "How could you put a value on that?"

Do fish have hair? Can blue tits fly? Did Alan Turing theorize that machines
could, one day, think? — Questions submitted to the Mindpixel database

Like McKinstry, Singh was convinced that the potential of artificial
intelligence was enormous. "I believe that AI will succeed where philosophy
failed," he had written on his MIT homepage. "It will provide us with the
ideas we need to understand, once and for all, what emotions are." According
to Bo Morgan, a fellow student at MIT, Singh suggested that giving common
sense to computers would solve all the world's problems.

"Even starvation in Africa?" Morgan asked.

Singh paused. "Yeah, I think so."

But Singh's ambitions were modest and grounded compared with McKinstry's. The
man behind Mindpixel was certain that his database would become a thinking
machine in the near future. The father of a son from his brief marriage in
the '90s, he sometimes referred to GAC as his second child. He believed that
he would be recognized as one of the great scientific minds in history. "He
thought he deserved a Nobel Prize," says a friend who blogs under the handle
Alphabet Soup. "He compared himself to Einstein and Turing. He said GAC would
make him immortal."

McKinstry meant that part about immortality literally. "The only difference
between you and me is the same as the difference between any two MP3s —
bits," he wrote in an Amazon.com review of How We Became Posthuman: Virtual
Bodies in Cybernetics, Literature, and Informatics. (He gave the book three
stars.) McKinstry often told friends that he intended to upload his
consciousness into a machine: He would never die.

Do teenagers think they know everything? Is MIT the best tech school in the
world? Did HAL 9000 ... go nuts and try to kill everyone? Does me got bad
grammar? Does Wired magazine mostly write about different types of wire? Is
death inevitable? — Questions submitted to the Mindpixel database

McKinstry's hopes for a partnership with the MIT project were soon dashed.
"McKinstry was fundamentally different than us," Singh's collaborator David
Stork recalled. "We thought people wouldn't participate in the project if they were
making some guy in Chile rich."

McKinstry didn't let it go. On July 16, 2002, he tried to reconnect with
Singh, emailing him a link to a paper on language models. It suggested a way
that statements submitted to Open Mind and Mindpixel could be understood by
machines. "This is what I've been babbling inarticulately about all these
years. It just needs to be trained on a corpus of validated propositions," he
wrote. The paper's author was Canadian. "Another coincidence," McKinstry
noted.

Four days later, Singh sent an unenthusiastic reply. "Current statistical
approaches are still too weak to learn complex things," he wrote. "We need
some really new ideas in machine learning that go beyond what people are
doing today. It helps to have the large datasets like mindpixel or openmind,
but we're still missing the right learning component."

Open Mind, which would eventually garner more than 700,000 submissions in
five-plus years, was now part of a Commonsense Computing division at the MIT
Media Lab. Singh was pursuing another research project for his PhD. He was
also coauthoring papers with Minsky and presenting his ideas at conferences
and symposia around the world.

Privately, McKinstry began speaking of his resentment of Open Mind. Singh's
project, he felt, had gotten all the attention simply because it was
affiliated with MIT. He complained that Singh had copied his statistical
model for collecting data and claimed that he had contacted a dean at MIT
asking that Singh's work be taken down. (There is no evidence to support this
allegation.)

Mindpixel would eventually receive roughly 1.5 million submissions, but
McKinstry's lack of business skills had become apparent. He had lined up no
commercial partners or applications and apparently had no intention of
honoring any of the promises he'd made to his "shareholders." All he had was
an enormous collection of questions ranging from "Does Britney Spears know a
lot about semiconductor physics?" to "Is McKinstry a media whore with no real
credentials or expertise?"

McKinstry, who said he was diagnosed as bipolar, went into decline. A fight
with his latest girlfriend led to a few nights in a Chilean mental hospital.
His mood was briefly buoyed when an article he'd written, entitled "Mind as
Space," was chosen to run in a 2003 anthology that would feature
contributions from many of the luminaries in the AI field. But as the
publication of the book was repeatedly postponed, he grew more frustrated and
despairing. He started wondering about his old rival again.

On January 12, 2006, McKinstry hit Singh's personal blog. "It has been hard
to give this blog any attention while finishing my dissertation," Singh had
written some six months earlier. "I am now Dr. Singh!" Singh also wrote about
"some new ideas [Minsky] has been developing about how minds grow. The basic
idea is called 'interior grounding,' and it is about how minds might develop
certain simple ideas before they begin building articulate connections to the
outside world."

New ideas? McKinstry commented on Singh's blog that it sounded similar to a
1993 paper in the journal Cognition, and he provided a link to the PDF. On
his own blog, he wrote, "The idea reminded me strongly of some neural network
experiments that I replicated in 1997." Singh never replied.

"So what exactly does a web suicide note look like?" McKinstry wrote on
January 20, 2006, a week after he posted to Singh's blog. "Exactly like this."

He was sitting in a café near his home in Santiago, pounding the keys on his
Mac laptop. He posted the message on his blog and a slightly different
version on a forum at Joel on Software, a popular geek hangout.

McKinstry's rant was florid and melodramatic. "This Luis Vuitton, Parada,
Mont Blanc commercial universe is not for me," he wrote. He talked about his
history of suicidal feelings and botched attempts, and he insisted that this
time things would be different. "I am certain I will not survive the
afternoon," he wrote. "I have already taken enough drugs that my alreadt
weakened liver will shut down very soon and I am off to find a place to hide
and die."

The online forum members were understandably skeptical. McChimp was flinging
bananas again, they figured. "Have a nice trip! Let us know if there's
anything beyond the 7th dimension!" read the first comment. "Typical of his
forum," McKinstry replied. "I am having more trouble than usual typing due to
the drugs. I have to go die not. bye." Then, "It is too late. I will leave
this cafe soon and curl up somewhere." A few minutes later: "I am leaving
now. People are strating to notice I canot type and I am about to vomit. Take
to go. Last post." Later still: "I am leave now. Permanently."

January 20, 2006: McKinstry's suicide note on Joel on Software forum.

"I don't buy this for a minute," replied a familiar detractor named Mark
Warner. It was enough to pull McKinstry back into the fray for one last flame
war. "Warner, you were alway an ass," he replied. "I have to go vomit now and
take more pills." His final post continued the theme: "I am feeling really
impaired. And yes, time will tell what happens to me. I really have to get
out of here. I cannot type. and want to vomit. Time to go hide."

Three days later, on January 23, after calls from panicked friends, the
police checked McKinstry's apartment and found his body. He had unhooked the
gas line from his stove and connected it to a bag sealed around his head. He
was dead at age 38.

McKinstry's few friends say he occasionally spoke of suicide, but no one knew
why he had gone through with it this time. Carlos Gaona, a younger hacker who
had become his protégé, raced over to the apartment and convinced McKinstry's
girlfriend to give him his laptop, his journal, the dog-eared books. And, of
course, the Web was full of his thoughts, rants, dreams, and nightmares. He
never got to upload his consciousness into a thinking machine, but in a sense
he had been uploading himself his entire adult life. Before he died, he had
replaced the home page of chrismckinstry.com with the words "Catch you
later."

One blogger wondered, "If not for his belief in the permanence of the
internet, that his suicidal proclamation would remain on the World Wide Web
for posterity — would Chris McKinstry be alive today?"

Others wondered how this would affect the idea of collaborative AI databases.
On January 28, Bob Mottram, who had once been offered the unpaid position of
chief software developer at Mindpixel, wrote in a post memorializing
McKinstry: "For the present, the last man standing in this game is Push
Singh."

After completing his dissertation, Singh was offered a job as professor at
the MIT Media Lab. He would be teaching alongside his mentor, Minsky, who
credited him with helping to develop many of the ideas in his new book, The
Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the
Future of the Human Mind. He would have the resources to pursue his dream of
"solving AI." Before assuming his position, though, he decided to take time
off, as he told a friend, "to think."

Everything in Singh's life seemed to be going well. He was enjoying a
relationship with a girlfriend who worked at the lab. The IEEE Intelligent
Systems Advisory Board, a consortium of top AI figures around the world, had
selected him as one of the top 10 researchers representing the future of the
field.

But privately, Singh was suffering. He had severely injured his back while
moving furniture, and though he did his best to stay engaged on campus,
colleagues noticed that he was distracted. He told a friend, Eyal Amir, that
there were times when he was incapable of doing anything because of the
excruciating pain. Some thought it was clinical depression. Colleague Dustin
Smith asked, "How much of your attention is on the pain at a given moment?"

Singh replied, "More than half."

In The Emotion Machine, Minsky suggests that chronic pain is a kind of
"programming bug." He writes that "the cascades that we call 'Suffering' must
have evolved from earlier schemes that helped us to limit our injuries — by
providing the goal of escaping from pain. Evolution never had any sense of
how a species might evolve next — so it did not anticipate how pain might
disrupt our future high-level abilities. We came to evolve a design that
protects our bodies but ruins our minds."

Four weeks after Chris McKinstry committed suicide, the police were
dispatched to an apartment at 1010 Massachusetts Avenue near MIT. Inside,
they found the 33-year-old Singh. He had connected a hose from a tank of
helium gas to a bag taped around his head. He was dead.

Mahender Singh still has the robot that his son created in high school. "He
thought that computers should think as you and I think," he says. "He thought
it would change the world. I was so proud of him, and now I don't know what
to do without him. His mother cries every day."

"If anyone was the future of the Media Lab, it was Push," wrote the director
of the lab, Frank Moss, in a mass email on March 4, 2006. A memorial wiki
page was set up, and friends and colleagues posted dozens of testimonials as
well as pictures of the young researcher. "His loss is indescribable," Minsky
wrote. "We could communicate so much and so quickly in so very few words, as
though we were parts of a single mind."

Singh's childhood friend Rawat, with whom he had watched 2001 when they were
kids in the '80s, posted too. "This might sound corny," he wrote, "but I felt at the
funeral that they should play 'Amazing Grace' [as in] Spock's death scene in
Star Trek II, where Kirk eulogized him as being the 'most human' being he had
ever met in his travels." It would have been appropriate to Push, he said,
"who was at once intellectually curious and logical (or as he put it,
sensible) and deeply human."

Privately, Rawat cites a different movie. "Sometimes I think this totally
ridiculous thought," he says, "that he was bumped off like the end of
Terminator 2." He refers to the fate of the character Dr. Miles Dyson, who
creates a neural network processor that eventually achieves sentience and
turns against mankind. When a cyborg from the future warns of what's to come,
an attempt is made to kill Dyson before he can complete his work. Ultimately,
the scientist nobly sacrifices himself while destroying his research to
prevent the machines from taking over the world. "That's a fantasy [Push]
would have gotten a kick out of," Rawat says.

Amid the grieving, there were whispers about the striking parallels between
Singh's and McKinstry's lives and deaths. Some wondered whether there could
have been a suicide pact or, at the very least, copycat behavior. Tim
Chklovski, a collaborator with Singh on Open Mind, suggests that perhaps
McKinstry's suicide had inspired Singh. "It's possible that he gave Push some
bad ideas," he says. (The rumors are likely to begin again: The fact that
Singh committed suicide in the same manner as McKinstry has not been widely
reported until now.)

Meanwhile, a robotics lab in California began using Open Mind data to imbue
its robots with common sense. "There is a nice resurgence of interest in
commonsense knowledge," Amir says. "It's sad that Push didn't live to see
it."

After McKinstry's long struggle for academic legitimacy and recognition, his
"Mind as Space" article will finally appear in the book Parsing the Turing
Test, whose publication was delayed from mid-2003 to this February.
"McKinstry himself was a troubled soul who had mixed luck professionally,"
the book's coeditor, Robert Epstein, says. "But this particular concept is as
good as many others."

In his acknowledgments, McKinstry credits Marvin Minsky for his
"encouragement of my heretical ideas"; his colleagues at the European
Southern Observatory's Paranal facility, "who tolerated my near insanity as I
wrote this article"; and "of course the nearly fifty thousand people that
have worked so hard to build the Mindpixel Corpus."

McKinstry and Singh were both cremated. Singh's sister scattered his ashes in
the Atlantic, not far from MIT. McKinstry's remains are said to be under his
son's bed in the UK. Meanwhile, someone is posting to newsgroups under
McKinstry's name. "I have always been and will always be," one message read.
"I am forever."

Contributing editor David Kushner ([EMAIL PROTECTED]) wrote about the
Linkin Park cyberstalker in issue 15.06.
