***
> This will be busted when we get sufficiently accurate systems biology
> simulation models.  But in any case it's an obstacle to bio research not
> to AGI...

We do not have any programs that input a chemical formula (like H2O)
and compute chemical properties (like the freezing point of water) by
modeling the interactions of atoms. The reason is that the computation
requires solving Schrödinger's equation for n particles, which takes
time exponential in n on a classical (non-quantum) computer. I suppose it is
possible in theory to model the 10^28 atoms in a human body to predict
the effects of new medical interventions.
***
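To make the quoted exponential-cost point concrete: exact simulation of n quantum particles means storing one amplitude per basis state, and the number of basis states grows exponentially in n. A minimal sketch, assuming (for illustration only) two basis states per particle and 16 bytes per complex amplitude:

```python
# Exact quantum simulation stores one complex amplitude per basis state.
# With 2 basis states per particle, n particles need 2**n amplitudes,
# each 16 bytes as a double-precision complex number.

def memory_bytes(n_particles: int) -> int:
    """Bytes needed for the full state vector of n two-level particles."""
    return 2 ** n_particles * 16

for n in (10, 50, 100):
    print(n, memory_bytes(n))
```

Even n = 100 particles already needs about 2 * 10^31 bytes, beyond any classical machine, and a human body has on the order of 10^28 atoms. This is why exact ab initio chemistry does not scale, though it says nothing about coarser simulation models.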

No reason to assume we need to predict chemistry from physics in
order to run far more useful medical-systems-biology simulations than
we have now...

This is a red herring.

On Sun, Feb 10, 2019 at 2:46 AM Matt Mahoney <[email protected]> wrote:
>
> On Sat, Feb 9, 2019 at 5:31 AM Ben Goertzel <[email protected]> wrote:
> > ***
> > First, the threshold for recursive self-improvement is not human-level
> > intelligence, but human-civilization-level intelligence. That's higher
> > by a factor of 7 billion.
> > ***
> >
> > Obviously this is an upper bound... an AGI engineered for recursive
> > self-improvement could potentially do it with far fewer resources than
> > this...
>
> Imagine that a developed country like China or the USA or Singapore
> (for example) closed its borders, cut off all international trade and
> internet traffic and then tried to implement AGI. How much would this
> slow it down?
>
> Suppose you assembled 1000 of the smartest people in the world into a
> village and cut it off from the rest of the world. No travel in or
> out. Disconnected from the power grid and internet except internally.
> How fast could this group implement AGI, having to build its own
> computers using only materials on hand, as well as grow their own food
> and supply their basic needs?
>
> Suppose you (an expert on AGI) were the only living human on Earth.
> All products of civilization like buildings, roads, vehicles,
> machinery, books, tools, etc did not exist. You had to hunt and forage
> for food and find shelter in the wild. How fast could you develop AGI?
>
> Do you see why human-level intelligence is insufficient for recursive
> self-improvement?
>
> > ***
> > Second is Eroom's Law. The cost of developing a new drug doubles every 9 years.
> > Global life expectancy has been increasing 0.2 years per year since
> > the early 1900's, but that rate has slowed a bit since 1990. Testing
> > new medical treatment is expensive because testing requires human
> > subjects and the value of human life is increasing as the economy
> > grows.
> > ***
> >
> > This will be busted when we get sufficiently accurate systems biology
> > simulation models.  But in any case it's an obstacle to bio research not
> > to AGI...
>
> We do not have any programs that input a chemical formula (like H2O)
> and compute chemical properties (like the freezing point of water) by
> modeling the interactions of atoms. The reason is that the computation
> requires solving Schrödinger's equation for n particles, which takes
> time exponential in n on a classical (non-quantum) computer. I suppose it is
> possible in theory to model the 10^28 atoms in a human body to predict
> the effects of new medical interventions. But that technology is far
> away, and even then we can't expect a quantum computer to run faster
> than the process it is modeling. For now we can't even answer basic
> questions, like whether calorie restriction extends life in humans,
> because for one thing the experiments take so long to run.
>
> > ***
> > Third, Moore's Law doesn't cover software or knowledge collection, two
> > of the three components of AGI (the other being hardware). Human
> > knowledge collection is limited to how fast you can communicate, about
> > 150 words per minute per person.
> > ***
> >
> > This obviously makes no sense.  E.g. modern face recognition AI
> > gained knowledge much faster than this, by sucking up a lot of
> > photos all at once.  Once NLP is sufficiently solved, AI will be
> > able to suck up a lot of knowledge by reading the Web.
> > It won't need knowledge to be explicitly typed in for it.
> 
> All of the written knowledge on the internet was either typed in or
> spoken at some point. It still makes up less than 1% of the human
> knowledge that an AGI would need to model the economy, to know what
> you want without having to explicitly ask for it. You don't have a
> robot that will clean your house because it wouldn't know whether a
> magazine on the floor belongs on the table or in the trash. In the
> time it takes you to tell it, you could have picked it up yourself. It
> doesn't matter how smart it is. What matters is how fast you can
> communicate the 10^7 bits of human knowledge in your brain that
> nobody knows except you.
> 
> --
> -- Matt Mahoney, [email protected]
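The bandwidth argument above can be sanity-checked with a quick back-of-envelope calculation. Assuming a rough, commonly cited figure of about 10 bits of information per word of English text (an assumption, not a number from the email):

```python
# How long to communicate the ~10^7 bits of personal knowledge cited
# above, at a speaking/typing rate of 150 words per minute?

BITS_PER_WORD = 10       # rough information content of an English word
WORDS_PER_MIN = 150      # typical human communication rate
KNOWLEDGE_BITS = 10**7   # estimated unshared knowledge per person

minutes = KNOWLEDGE_BITS / (WORDS_PER_MIN * BITS_PER_WORD)
print(minutes / 60, "hours")   # ~111 hours of nonstop talking
```

Roughly 111 hours of continuous speech per person, which is the sense in which knowledge transfer, not raw intelligence, is the bottleneck being claimed.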



-- 
Ben Goertzel, PhD
http://goertzel.org

"The dewdrop world / Is the dewdrop world / And yet, and yet …" --
Kobayashi Issa

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-Ma14e1a30cb10b44bd0d055a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription