Thanks Matt, very nice post! We're on the same wavelength, it seems. --
Linas

On Thu, Jan 31, 2019 at 3:17 PM Matt Mahoney <[email protected]>
wrote:

> When I asked Linas Vepstas, one of the original developers of OpenCog
> led by Ben Goertzel, about its future, he responded with a blog post.
> He compared research in AGI to astronomy. Anyone can do amateur
> astronomy with a pair of binoculars. But to make important
> discoveries, you need expensive equipment like the Hubble telescope.
> https://blog.opencog.org/2019/01/27/the-status-of-agi-and-opencog/
> 
> OpenCog began 10 years ago in 2009 with high hopes of solving AGI,
> building on the lessons learned from the prior 12 years of experience
> with WebMind and Novamente. At the time, its major components were
> DeSTIN, a neural vision system that could recognize handwritten
> digits, MOSES, an evolutionary learner that output simple programs to
> fit its training data, RelEx, a rule-based language model, and
> AtomSpace, a hypergraph-based knowledge representation for both
> structured knowledge and neural networks, intended to tie together the
> other components. Initial progress was rapid. There were chatbots,
> virtual environments for training AI agents, and dabbling in robotics.
> The timeline in 2011 had OpenCog progressing through a series of
> developmental stages leading up to "full-on human level AGI" in
> 2019-2021, and consulting with the Singularity Institute for AI (now
> MIRI) on the safety and ethics of recursive self improvement.
> 
> Of course this did not happen. DeSTIN and MOSES never ran on hardware
> powerful enough to solve anything beyond toy problems. RelEx had all
> the usual problems of rule-based systems like brittleness, parse
> ambiguity, and the lack of an effective learning mechanism from
> unstructured text. AtomSpace scaled poorly across distributed systems
> and was never integrated. There is no knowledge base. Investors and
> developers lost interest.
> 
> Meanwhile the last decade transformed our lives with smart phones,
> social networks, and online maps. Big companies like Apple, Google,
> Facebook, and Amazon powered it with AI: voice recognition, face
> recognition, natural language understanding, and language translation
> that actually works. It is easy to forget that none of this existed 10
> years ago. Just those four companies now have a combined market cap of
> $3 trillion, enough to launch hundreds of Hubble telescopes if
> they wanted to.
> 
> Of course we have not yet solved AGI. We still do not have vision
> systems as good as the human eye and brain. We do not have systems
> that can tell when a song sounds good or what makes a video funny. We
> still pay people $87 trillion per year worldwide to do work that
> machines are not smart enough to do. And in spite of dire predictions
> that AGI will take our jobs, that figure is increasing at 3-4% per
> year, continuing a trend that has lasted centuries.
> 
> Over a lifetime your brain processes 10^19 bits of input, performing
> 10^25 operations on 10^14 synapses at a cost of 10^-15 joule per
> operation. This level of efficiency is a million times better than we
> can do with transistors, and Moore's Law is not going to help. Clock
> speeds stalled at 2-3 GHz a decade ago. We can't make transistors
> smaller than about 10 nm, the spacing between P or N dopant atoms, and
> we are almost there now. If you want to solve AGI, then figure out how
> to compute by moving atoms instead of electrons. Otherwise Moore's Law
> is dead.
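
[As a quick sanity check on those figures (my own sketch, not Matt's: the
80-year lifetime and the ~1 nJ per conventional CPU operation are my
assumptions, while the lifetime operation count and joules per synaptic
operation come from his post):]

```python
# Back-of-envelope check of the brain-efficiency figures above.
# Assumed (not from the post): an 80-year lifetime and roughly
# 1e-9 J per operation for conventional CPU hardware.

LIFETIME_S = 80 * 365.25 * 24 * 3600   # ~2.5e9 seconds
BRAIN_OPS = 1e25                        # lifetime operations (from the post)
J_PER_OP_BRAIN = 1e-15                  # joules per synaptic op (from the post)
J_PER_OP_CPU = 1e-9                     # joules per CPU op (assumed)

ops_per_sec = BRAIN_OPS / LIFETIME_S    # sustained rate, ~4e15 ops/s
watts = ops_per_sec * J_PER_OP_BRAIN    # implied power draw, ~4 W

print(f"{ops_per_sec:.1e} ops/s at about {watts:.1f} W")
print(f"efficiency ratio vs CPU: {J_PER_OP_CPU / J_PER_OP_BRAIN:.0e}x")
```

[The implied power draw of a few watts is consistent with the brain's
roughly 20 W budget, and the energy-per-operation ratio reproduces the
"million times better" figure.]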
> 
> Even if we can extend Moore's Law using nanotechnology and biological
> computing (and I believe we will), there are other obstacles to the
> coming Singularity.
> 
> First, the threshold for recursive self improvement is not human level
> intelligence, but human civilization level intelligence. That's higher
> by a factor of 7 billion. But that's already happening. It's the
> reason our economy and population are both growing at a
> faster-than-exponential rate.
> 
> Second is Eroom's Law: the cost of developing a new drug doubles
> every 9 years.
> Global life expectancy has been increasing 0.2 years per year since
> the early 1900's, but that rate has slowed a bit since 1990. Testing
> new medical treatments is expensive because trials require human
> subjects and the value of human life is increasing as the economy
> grows.
> 
> Third, Moore's Law doesn't cover software or knowledge collection, two
> of the three components of AGI (the other being hardware). Human
> knowledge collection is limited to how fast you can communicate, about
> 150 words per minute per person. Software productivity has remained
> constant at 10 lines per day since 1950. If you were hoping for an
> automated method to develop software, keep in mind that the 6 x 10^9
> bits of DNA that is you (equivalent to 300 million lines of code)
> required 10^50 copy and transcription operations on 10^37 bits of DNA
> to write over the last 3.5 billion years.
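
[Those figures imply a striking gap, which is easy to make concrete (my
sketch; the 250 working days per year is my assumption, the rest are the
numbers quoted above):]

```python
# How long would human programmers take to write a "genome's worth"
# of code at the historical productivity rate quoted above?

GENOME_BITS = 6e9          # bits of DNA (from the post)
LINES_EQUIV = 300e6        # equivalent lines of code (from the post)
LINES_PER_DAY = 10         # software productivity (from the post)
WORKDAYS_PER_YEAR = 250    # assumed

bits_per_line = GENOME_BITS / LINES_EQUIV                 # 20 bits per line
person_years = LINES_EQUIV / LINES_PER_DAY / WORKDAYS_PER_YEAR

print(f"{bits_per_line:.0f} bits/line, {person_years:.0f} person-years")
```

[At 10 lines per day, a genome-sized codebase works out to on the order of
a hundred thousand person-years of effort.]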
> 
> Comments?
> 
> --
> -- Matt Mahoney, [email protected]


-- 
cassette tapes - analog TV - film cameras - you

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-M074e47437b4dda937bf4a3e2
Delivery options: https://agi.topicbox.com/groups/agi/subscription
