Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Wed, Feb 09, 2005 at 07:15:51PM -0500, Brad Wyble wrote:
 
 Hardware advancements are necessary, but I think you guys spend a lot of 
 time chasing white elephants.  AGIs are not going to magically appear 
 just because hardware gets fast enough to run them, a myth that is 
 strongly implied by some of the singularity sites I've read.

There are several major stepping stones with hardware speed. One is when you
have enough for a nontrivial AI (the price tag can be quite astronomical).
Second, enough in an *affordable* installation. Third, enough crunch to map
the parameter space/design by evolutionary algorithms. Fourth, the previous
item in an affordable (arbitrarily put, $50-100k) package.

Arguably, we're approaching the region where a very large, very expensive
installation could, in theory, support a nontrivial AI.
 
 The hardware is a moot point.  If a time traveler from the year 2022 were 
 to arrive tomorrow and give us self-powered uber CPU fabrication plants, 
 we'd be barely a mouse fart closer to AGI.

I disagree. The equivalent of a 10^6-CPU Blue Gene under everybody's desk
would make AI happen quite quickly.
 
 Spend your time learning how to use what we have now, that's what 
 evolution did, starting from the primitive processing capabilities of 
 single celled organisms.

The Cell is barely enough for a ~realtime physics simulator. I need a largish
cluster of them for superrealtime, and about the same to run an ANN to
control a critter.



Re: [agi] Cell

2005-02-10 Thread Brad Wyble
On Wed, 9 Feb 2005, Martin Striz wrote:
--- Brad Wyble [EMAIL PROTECTED] wrote:
Hardware advancements are necessary, but I think you guys spend a lot of
time chasing white elephants.  AGIs are not going to magically appear
just because hardware gets fast enough to run them, a myth that is
strongly implied by some of the singularity sites I've read.
Really?  Someone may just artificially evolve them (it happened once already
on wetware), and evolution in silico could move 10, nay 20, orders of
magnitude faster.

No, never.  Evolution in silico will never move faster than real matter
interacting.

But yes it's true, there are stupidly insane amounts of CPU power that 
would give us AI instantly (although it would be so alien to us that we'd 
have no idea how to communicate with it). However, nothing that we'll get 
in the next 100 centuries will be so vast.  You'd need a computer many 
times the size of the earth to generate AI through evolution in a 
reasonable time frame.





Re: [agi] Cell

2005-02-10 Thread Brad Wyble



There are several major stepping stones with hardware speed. One is when you
have enough for a nontrivial AI (the price tag can be quite astronomical).
Second, enough in an *affordable* installation. Third, enough crunch to map
the parameter space/design by evolutionary algorithms. Fourth, the previous
item in an affordable (arbitrarily put, $50-100k) package.
Arguably, we're approaching the region where a very large, very expensive
installation could, in theory, support a nontrivial AI.

Yes, *in theory*, but you still have to engineer it.  That's the hard 
part.

Maybe I'm overstating my case to make a point, but it's a point that 
dearly needs to be made: the control architecture is everything.

Let's do a very crude thought experiment, and for the moment not consider 
evolving AI, because the hardware requirements for that are a bit silly.

So imagine it like this: you've got your 10^6 CPUs and you want to make 
an AI.  You have to devote some percentage of those CPUs to thinking 
(i.e. analyzing and representing information) and the remainder to 
restricting that thinking to some useful task.  No one would argue, I 
hope, that it's useful to blindly analyze all available information.

The part that's directing your resources is the control architecture, and 
it requires meticulous engineering and difficult design decisions. 
What percentage do you allocate?

5%? 20%?  The more you allocate, the more efficiently the remaining CPU 
power is spent.  There's got to be a point at which you achieve maximum 
efficiency for your blob of silicon.
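As a toy numerical sketch of that tradeoff (the efficiency curve and the
constant k below are illustrative assumptions, not measurements of anything):

# Toy model of the control-vs-thinking split: a fraction f of the CPUs
# does control, the rest does raw thinking, and control improves how
# efficiently the thinking CPUs are used, with diminishing returns.
def useful_work(f, total_cpus=1_000_000, k=0.05):
    """Useful computation as a function of the control fraction f."""
    thinking_cpus = (1.0 - f) * total_cpus
    efficiency = f / (f + k)              # saturating benefit of control
    return thinking_cpus * efficiency

# Scan control fractions for the maximum-efficiency point.
best = max((f / 100 for f in range(1, 100)), key=useful_work)
print(f"optimum control fraction ~ {best:.2f}")   # ~0.18 under these assumptions

Under these made-up numbers the optimum lands between the 5% and 20% guesses
above; the point is only that such a maximum exists, not where it is.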

The brain is thoroughly riddled with such control architecture: starting 
at the retina and moving back, it's a constant process of throwing out 
information and compressing what's left into a more compact form.  That's 
really all your brain is doing from the moment a photon hits your eye, 
determining whether or not you should ignore that photon.  And it is a 
Very Hard problem.

-Brad


Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 04:46:39AM -0500, Brad Wyble wrote:

 No, never.  Evolution in silico will never move faster than real matter 
 interacting.

Where do you get this strong certainty? I can easily make a superrealtime
Newtonian physics simulator by spatial tessellation over a large number of
off-the-shelf components (DSPs would do). The biological chronon is some
10 Hz; that's not so fast.

Depending on scenery complexity, an FPS is already faster than realtime. It's
mostly faked physics, but then, it runs on a desktop machine.

With dedicated circuitry a speedup of a million is quite achievable. Maybe
even up to a billion.
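The back-of-envelope behind those numbers, using only figures asserted in this
thread plus one labeled assumption (~1 k serial gate delays to emulate one
biological step):

# Back-of-envelope speedup estimate; inputs are the thread's own asserted
# figures, not measurements of any real system.
biological_chronon_hz = 10         # ~10 Hz basic biological event rate
gate_delay_s = 1e-12               # ~ps gate delays in fast electronics
gate_delays_per_step = 1_000       # assumed serial gate delays per step

electronic_step_s = gate_delay_s * gate_delays_per_step   # 1 ns
biological_step_s = 1.0 / biological_chronon_hz           # 100 ms

print(f"speedup ~ {biological_step_s / electronic_step_s:.0e}")   # ~1e+08

which lands inside the 10^6..10^9 bracket claimed above.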

Oh, and I'd use "in machina". "In silico" will soon sound as quaint as "in
relais" or "in vacuum tubus".
 
 But yes it's true, there are stupidly insane amounts of CPU power that 
 would give us AI instantly (although it would be so alien to us that we'd 

It doesn't need to be alien to us, if the simworld is realistic.

 have no idea how to communicate with it). However, nothing that we'll get 
 in the next 100 centuries will be so vast.  You'd need a computer many 

You will see individual installations with a mole of switches within the
next 40-50 years. Maybe sooner.

 times the size of the earth to generate AI through evolution in a 
 reasonable time frame.

Show me the numbers behind this assumption. 



Re: [agi] Cell

2005-02-10 Thread Brad Wyble
The brain is thoroughly riddled with such control architecture: starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form.  That's
really all your brain is doing from the moment a photon hits your eye,
determining whether or not you should ignore that photon.  And it is a
Very Hard problem.
Yes, but it's a solved problem. Biology is rife with useful blueprints to
seed your system with. The substrate is different, though, so some things are
harder and others are easier, so you have to coevolve both.
This is where you need to sink moles of crunch.

I don't think you and I will ever see eye to eye here, because we have 
different conceptions in our heads of how big this parameter space is.

Instead, I'll just say in parting that, like you, I used to think AGI was 
practically a done deal.  I figured we were 20 years out.

7 years in Neuroscience boot-camp changed that for good.  I think anyone 
who's truly serious about AI should spend some time studying at least one 
system of the brain.  And I mean really drill down into the primary 
literature; don't just settle for the stuff on the surface, which paints 
nice rosy pictures.

Delve down to network anatomy, let your mind be blown by the precision and 
complexity of the connectivity patterns.

Then delve down to cellular anatomy; come to understand how tightly 
compact and well-engineered our 300 billion CPUs are.  Layers and layers 
of feedback regulation interwoven with an exquisite perfection, both 
within cells and between cells.  What we don't know yet is truly 
staggering.

I guarantee this research will permanently expand your mind.
Your idea of what a Hard problem is will ratchet up a few notches, and 
you will never again look upon any significant slice of the AGI pie as 
something simple enough that it can be done by a GA running on a few kg 
of molecular switches.


-Brad


Re: [agi] Cell

2005-02-10 Thread Martin Striz

--- Brad Wyble [EMAIL PROTECTED] wrote:

 On Wed, 9 Feb 2005, Martin Striz wrote:
 
  --- Brad Wyble [EMAIL PROTECTED] wrote:
 
   Hardware advancements are necessary, but I think you guys spend a lot of
   time chasing white elephants.  AGIs are not going to magically appear
   just because hardware gets fast enough to run them, a myth that is
   strongly implied by some of the singularity sites I've read.
 
  Really?  Someone may just artificially evolve them (it happened once
  already on wetware), and evolution in silico could move 10, nay 20,
  orders of magnitude faster.
 
 No, never.  Evolution in silico will never move faster than real matter
 interacting.

Evolution is limited by mutation rates and generation times.  Mammals need
from 1 to 15 years before they reach reproductive age.  Generation times are
long and evolution is slow.  A computer could eventually simulate 10^9 (or
10^20, or whatever) generations per second, and multiple mutation rates (to
find optimal evolutionary methodologies).  It can already do as many
operations per second; it just needs to be able to do them for billions of
agents.
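A minimal sketch of that kind of run, with a toy bitstring genome and a sweep
over mutation rates (the fitness function is purely illustrative):

import random

def evolve(pop_size=100, genome_len=64, mut_rate=0.01, generations=1000):
    """Toy evolutionary loop: truncation selection plus point mutation."""
    def fitness(g):
        return sum(g)                          # toy fitness: count of 1-bits
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        pop = [[bit ^ (random.random() < mut_rate)   # point mutations
                for bit in random.choice(parents)]
               for _ in range(pop_size)]
    return max(map(fitness, pop))

# Sweeping mutation rates, as suggested above, to find a good regime:
for rate in (0.001, 0.01, 0.1):
    print(rate, evolve(mut_rate=rate))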

 
 But yes it's true, there are stupidly insane amounts of CPU power that 
 would give us AI instantly (although it would be so alien to us that we'd 
 have no idea how to communicate with it). However, nothing that we'll get 
 in the next 100 centuries will be so vast.  You'd need a computer many 
 times the size of the earth to generate AI through evolution in a 
 reasonable time frame.

That's not a question that I'm equipped to answer, but my educated opinion is
that when we can do 10^20 flops, it'll happen.  Of course, rationally designed
AI could happen under far, far less computing power, if we know how to do it.

Martin



Re: [agi] Cell

2005-02-10 Thread Brad Wyble
I'd like to start off by saying that I have officially made the transition 
into old crank.  It's a shame it's happened so early in my life, but it 
had to happen sometime.  So take my comments in that context.  If I've 
ever had a defined role on this list, it's in trying to keep the pies from 
flying into the sky.


Evolution is limited by mutation rates and generation times.  Mammals 
need from 1 to 15 years before they reach reproductive age.  Generation
That time is not useless or wasted.  Their brains are acquiring 
information, molding themselves.  I don't think you can just skip it.

times are long and evolution is slow.  A computer could eventually
simulate 10^9 (or 10^20, or whatever) generations per second, and multiple
mutation rates (to find optimal evolutionary methodologies).  It can
already do as many operations per second; it just needs to be able to do
them for billions of agents.

10^9 generations per second?  This rate depends (inversely) on the 
complexity of your organism.

And while fitness functions for simple ant AIs are (relatively) simple to 
write and evaluate, when you start talking about human-level AI, you need 
a very thorough competition, involving much social interaction.  This takes 
*time*; whether simulated time or realtime, it will add up.

A simple model of interaction between AIs will give you simple AIs.  We 
didn't start getting really smart until we could exchange meaningful 
ideas.


But yes it's true, there are stupidly insane amounts of CPU power that
would give us AI instantly (although it would be so alien to us that we'd
have no idea how to communicate with it). However, nothing that we'll get
in the next 100 centuries will be so vast.  You'd need a computer many
times the size of the earth to generate AI through evolution in a
reasonable time frame.

That's not a question that I'm equipped to answer, but my educated opinion
is that when we can do 10^20 flops, it'll happen.  Of course, rationally
designed AI could happen under far, far less computing power, if we know
how to do it.
I'd be careful throwing around guesses like that.  You're dealing with so 
many layers of unknown.

Before the accusation comes, I'm not saying these problems are unsolvable. 
I'm just saying that (barring planetoid computers) sufficient hardware is 
a tiny fraction of the problem.  But I'm hearing a disconcerting level of 
optimism here that if we just wait long enough, it'll happen on all of our 
desktops with off-the-shelf AI building kits.

Let me defuse another criticism of my perspective: I'm not saying we need 
to copy the brain.  However, the brain is an excellent lesson in how Hard 
this problem is and should certainly be embraced as such.

-Brad


Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 08:42:59AM -0500, Brad Wyble wrote:

 I don't think you and I will ever see eye to eye here, because we have 
 different conceptions in our heads of how big this parameter space is.

It depends on the system. The one I talked about (automata networks) is not
very large. I.e. doable with a mole of switches. Is it a sufficiently
flexible framework? I suspect so, but the only way to find out would be to
try.
 
 Instead, I'll just say in parting that, like you, I used to think AGI was 
 practically a done deal.  I figured we were 20 years out.

Where did I say that AI is a done deal? Have you ever tried ordering a
mole of buckytronium from Dell? Try it sometime.
 
 7 years in Neuroscience boot-camp changed that for good.  I think anyone 
 who's truly serious about AI should spend some time studying at least one 
 system of the brain.  And I mean really drill down into the primary 
 literature; don't just settle for the stuff on the surface, which paints 
 nice rosy pictures.

Extremely relevant for whole-body emulation, rather less relevant for AI.
(Don't assume that my background is computer science.) This is getting
off-topic, but this is precisely why WBE needs a molecular-level scan, and
machine learning to climb up the simulation layer ladder. Humans can't do it.
 
 Delve down to network anatomy, let your mind be blown by the precision and 
 complexity of the connectivity patterns.

It's a heterogeneous excitable medium, a spiking high-connectivity network
that works with gradients and neurotransmitter packets. Some thousands of
ion channel types, some hundreds to thousands of neuron cell types.

This is about enough detail to seed your simulation with. Don't forget:
we're only using this as an educated guess to prime the co-evolution on a
different substrate (you can emulate automata networks on 3D packet-switched
systems very efficiently).
 
 Then delve down to cellular anatomy; come to understand how tightly 
 compact and well-engineered our 300 billion CPUs are.  Layers and layers 
 of feedback regulation interwoven with an exquisite perfection, both 
 within cells and between cells.  What we don't know yet is truly 
 staggering.

Agreed. Fortunately, all of this is irrelevant for AI, because the hardware
artifacts are different.
 
 I guarantee this research will permanently expand your mind.

It did. Unfortunately, I didn't go beyond monographs.
 
 Your idea of what a Hard problem is will ratchet up a few notches, and 
 you will never again look upon any significant slice of the AGI pie as 
 something simple enough that it can be done by a GA running on a few kg 

Evolutionary algorithms, not GA.

 of molecular switches.

Do you think anyone is smart enough to code a seed? If not, what is your idea
of an AI bootstrap?




Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 10:15:25AM -0500, Brad Wyble wrote:

 Evolution is limited by mutation rates and generation times.  Mammals 
 need from 1 to 15 years before they reach reproductive age.  Generation
 
 That time is not useless or wasted.  Their brains are acquiring 
 information, molding themselves.  I don't think you can just skip it.

Most lower organisms are genetically determined. Even so, at a speedup rate
of 1:10^6 a wall-clock day is worth ~3 kiloyears of simulation time.
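Checking that arithmetic:

# One wall-clock day at a 1:10^6 speedup:
speedup = 1_000_000
sim_years = speedup / 365.25          # one day is 1/365.25 of a year
print(f"{sim_years:,.0f} simulated years per day")   # ~2,700, i.e. ~3 kiloyears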
 
 10^9 generations per second?  This rate depends (inversely) on the 

10^9 generations/second is absurdly high. 10^9 is about the top event rate
in the simulation you could hope to achieve, given what we know of
computational physics. Fitness testing in seconds to minutes on very large
populations looks very doable, though. Some complex behaviour can be
evaluated in some 10-100 ms with massively parallel molecular hardware.

Of course, the current state of the art is pathetic: http://darwin2k.com.
People would laugh if you said plausible fake-physics simulators could 
scale O(1).

Then again, would Sutherland have expected Nalu, or Dawn?
http://www.nzone.com/object/nzone_downloads_nvidia.html

 complexity of your organism.

No. The simulation handles virtual substrate, and that's O(1) if you match
organism size with volume of dedicated hardware, assuming local signalling
(which is ~ms constrained in biology, and ~ps..~fs constrained
relativistically).
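A sketch of where that O(1) comes from: tile the simulated volume into
voxels, one per processing element, with signalling only between neighbours.
The update rule below is a toy stand-in; in this serial Python the point is
the locality of data access, not the speed:

import itertools
import random

def step(grid):
    """One synchronous update; each voxel reads only its 6 neighbours."""
    n = len(grid)
    new = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x, y, z in itertools.product(range(n), repeat=3):
        nbrs = [grid[(x + dx) % n][(y + dy) % n][(z + dz) % n]
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1))]
        new[x][y][z] = sum(nbrs) / 6.0         # purely local signalling
    return new

# A 10x10x10 toy volume; on hardware with one element per voxel, each
# step costs constant wall-clock time no matter how many voxels exist.
n = 10
grid = [[[random.random() for _ in range(n)] for _ in range(n)]
        for _ in range(n)]
grid = step(grid)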
 
 And while fitness functions for simple ant AIs are (relatively) simple to 
 write and evaluate, when you start talking about human-level AI, you need 

People can be paid, or volunteer, to judge organism performance from
interactive simulation. Co-evolution has a built-in drive and no intrinsic
fitness function but the naturally emergent one.
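In sketch form, with the contest rule as an arbitrary stand-in for whatever
the interactive simulation or the human judges would supply:

import random

def contest(a, b):
    """True if genome a beats genome b (illustrative stand-in rule)."""
    return sum(a) > sum(b)

def coevolve_step(pop, mut_rate=0.02):
    """Fitness is emergent: wins against randomly drawn rivals."""
    scores = [sum(contest(g, random.choice(pop)) for _ in range(10))
              for g in pop]
    ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
    survivors = ranked[:len(pop) // 2]
    children = [[bit ^ (random.random() < mut_rate) for bit in g]
                for g in random.choices(survivors,
                                        k=len(pop) - len(survivors))]
    return survivors + children

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for _ in range(100):
    pop = coevolve_step(pop)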

 a very thorough competition, involving much social interaction.  This takes 
 *time*; whether simulated time or realtime, it will add up.
 
 A simple model of interaction between AIs will give you simple AIs.  We 
 didn't start getting really smart until we could exchange meaningful 
 ideas.

What I'm interested in is an efficient, robustly evolvable framework. It
doesn't take more than insect-equivalent complexity to achieve that. This
implies full genetic determinism and simple fitness testing.
 
 I'd be careful throwing around guesses like that.  You're dealing with so 
 many layers of unknown.
 
 Before the accusation comes, I'm not saying these problems are unsolvable. 
 I'm just saying that (barring planetoid computers) sufficient hardware is 

Are you seeing any specific physical limits in building systems hundreds of
km^3 large? And why do you think you need systems of nontrivial size for
evolutionary bootstrap of intelligence? Buckytronics are just molecules.

 a tiny fraction of the problem.  But I'm hearing a disconcerting level of 
 optimism here that if we just wait long enough, it'll happen on all of our 
 desktops with off-the-shelf AI building kits.
 
 Let me defuse another criticism of my perspective: I'm not saying we need 
 to copy the brain.  However, the brain is an excellent lesson of how Hard 
 this problem is and should certainly be embraced as such.

Constraints on biological tissue are very different from constraints of
electron or electron spin distributions in solid state circuits switching in
GHz to THz range.

While the overall architecture definitely contains lots of components
necessary for hitting a fertile region in problem space, slavishly 
copying the microarchitecture is likely to only lead you astray.



Re: [agi] Cell

2005-02-10 Thread Shane
 --- Brad Wyble [EMAIL PROTECTED] wrote: 
 
  Evolution is limited by mutation rates and generation times.  Mammals 
  need from 1 to 15 years before they reach reproductive age.  Generation
 
 That time is not useless or wasted.  Their brains are acquiring 
 information, molding themselves.  I don't think you can just skip it.

I think a key point is that evolution isn't really trying to produce
organisms of higher intelligence.  It is just something that sometimes
occurs in some evolutionary niches as a byproduct of the process.
Often a dumb organism is better than a larger, more resource-expensive
but intelligent one.

In comparison, if we were to try and evolve intelligence, using an
appropriate computable measure of intelligence of course, we could
use this measure to direct the evolution, i.e. by using it as the
objective function of a GA.  In terms of resource consumption, this
should be exponentially faster at achieving intelligence than what
occurred in nature.
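In sketch form (the scoring function is a hypothetical stand-in; defining a
principled, computable intelligence measure is exactly the open problem
mentioned below):

import random

def toy_intelligence_score(agent):
    """Stand-in for a computable intelligence measure (the hard part)."""
    return sum(agent)              # placeholder, obviously not intelligence

def mutate(agent, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in agent]

def directed_evolution(pop, score, generations=1000):
    """A GA whose objective function is the intelligence measure itself."""
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[:len(pop) // 2]
        pop = elite + [mutate(a) for a in elite]
    return pop[0]

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
best = directed_evolution(pop, toy_intelligence_score)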


 And while fitness functions for simple ant AIs are (relatively) simple to 
 write and evaluate, when you start talking about human-level AI, you need 
 a very thorough competition, involving much social interaction.  This takes 
 *time*; whether simulated time or realtime, it will add up.
 
 A simple model of interaction between AIs will give you simple AIs.  We 
 didn't start getting really smart until we could exchange meaningful 
 ideas.

Yes, simple AIs need relatively little to evaluate their intelligence.
So the first stages of evolving intelligence would be much easier in this
respect.  However, once you get to something like a rat's brain, you
already have much of the structure needed for a human brain worked out.
I think something similar will be the case with an AI.  That is, the
design changes needed to go from a medium-level intelligence to a
high-level intelligence are not great; much of it is just a problem of
scale.
 
Of course I can't really prove much of this without actually doing it.
Firstly, I need to create a proper measure of machine intelligence,
which is what I am currently working on...

Cheers
Shane




Re: [agi] Cell-DG

2005-02-10 Thread Danny G. Goe
I think this begs the question of how you factor in the learning cost.
CPU-time?
Resources?
Memory?
Total instruction counts?
If you can arrive at the same answer with fewer instructions executed and/or
fewer resources, isn't that a better model?

Weighing the cost will be based on the availability of those resources.
What resources give the highest rate of learning? CPU-time, memory?
Is an Intelligence Quotient a good way to find the learning curve?
Or some other method(s) to find the learning rate?
I am sure that this will be processed on clusters.
Some neural nets might run in the background while different mutations are
run.  Computational states arrived at previously may either continue to
evolve or become fixed at some point in time.

If any configuration creates a learning system in which the next generation
of mutations scores better than the previous generation (a ratio greater
than 1), you can then start to determine the evolution rates.
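In sketch form, with hypothetical scores:

def evolution_rates(scores):
    """Generation-over-generation ratios of a learning score."""
    return [b / a for a, b in zip(scores, scores[1:])]

gen_scores = [1.0, 1.8, 2.9, 3.3, 3.4, 3.41]   # hypothetical: fast start, then plateau
print(evolution_rates(gen_scores))   # ratios fall toward 1 as knowledge is absorbed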

When you start your process you will have to run a large number of
test-generated methods and determine if any show promise for learning;
while some might work better early on, others will mutate into higher
learning curves as the evolution continues.  You will have to run a large
number of permutations of all the learning methods to find the optimal mix
to obtain a high learning curve.  If you decide to add any other learning
methods, the new method will have to be tested with all the others.


First-time runs will generate high learning rates, which level off as the
known knowledge gets absorbed by any given configuration.


Comments?

Dan Goe



- Original Message - 
From: Brad Wyble [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, February 10, 2005 10:15 AM
Subject: Re: [agi] Cell




Re: [agi] Cell

2005-02-10 Thread Martin Striz

--- Brad Wyble [EMAIL PROTECTED] wrote:

  Evolution is limited by mutation rates and generation times.  Mammals 
  need from 1 to 15 years before they reach reproductive age.  Generation
 
 That time is not useless or wasted.  Their brains are acquiring 
 information, molding themselves.  I don't think you can just skip it.

You're confusing ontogeny with phylogeny.  It's the latter that I'm speaking
of.  Generation times are long because developmental pathways in biological
systems are slow.  Developmental pathways exist because there's an unfortunate
disconnect between phenotypes and genotypes.  Brains are not capable of
recursively self-improving their own wetware, and improvements have to waste
time by feeding back through the genome.  A recursively self-improving machine
intelligence (or pre-intelligent artificial lifeform) won't have the burden of
developmental pathways and excessive generation times.

 
  times are long and evolution is slow.  A computer could eventually 
  simulate 10^9 (or 10^20, or whatever) generations per second, and 
  multiple mutation rates (to find optimal evolutionary methodologies). 
  It can already do as many operations per second; it just needs to be 
  able to do them for billions of agents.
 
 
 10^9 generations per second?  This rate depends (inversely) on the 
 complexity of your organism.

Defined properly, complexity is the inverse of entropy, and entropy is the
number of equivalent states that a system can obtain.  Given this, I would
not be remiss in suggesting that the brain has more complexity than the
molecular architecture of the cell (due to its exquisite specification).
Yet it took 3 billion years to refine the cell, and only a few hundred
million to catapult ganglionic masses into human-level intelligence.  So
while complexity defines a proportional relationship with computational
needs, selection conditions can profoundly invert those needs.
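In symbols, taking the working definitions above (Omega = number of
equivalent states, Boltzmann's log form for the entropy; note this inverse
relation is the argument's usage, not the standard thermodynamic one):

% Working definitions used in the paragraph above.
\[
  S = k \ln \Omega, \qquad C \propto \frac{1}{S}
\]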

It may not take seconds; it may take years, but I think it will be possible
within a few decades.

Martin Striz






Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 12:07:57PM -0500, Brad Wyble wrote:

 You guys are throwing around orders of magnitude like ping pong balls 
 based on very little practical evidence.  Sometimes no estimate is less 
 misleading than one that is arbitrary.

What makes you think it's arbitrary? Minimal switching times, e.g. for an
MRAM-based logic cell, a NEMS buckling buckytube element, or a ballistic
buckytube transistor, are not exactly guesswork. Recent literature is full
of pretty solid data.

Do you think that equivalents of ~ms processes in biological tissue
information processing can't occur within ~ns? ~ps gate delays are
achievable with current electronics. I don't see why running the equivalent
of a piece of neuroanatomy should accrue more than ~1 k parallel gate
delays. It's a conservative guess, actually. (Which is why I'm saying 10^6,
and not 10^9.)
 
 No. The simulation handles virtual substrate, and that's O(1) if you match
 organism size with volume of dedicated hardware, assuming local signalling
 (which is ~ms constrained in biology, and ~ps..~fs constrained
 relativistically).
 
 
 I was referring to the complexity of the organism's mind.  Surely you are 
 not going to tell me that as the evolving brains increase in complexity, 
 there is no effect on the simulation speed?

Yes. I'm going to tell you that a propagating spike doesn't care (much)
about whether it's running in a mouse or a human. Processing takes longer in
humans than in rodents, but not dramatically so. The reason is more hops,
and the capability to process far more complex stimuli with a minimally
modified substrate. The processing unit sees signals passing along virtual
wires, or packets passing through nodes. The higher organization levels are
transparent; what matters is processing volume/number of nodes, and whether
your average (virtual) connectivity at the processing-element level can
handle the higher connectivity in a human vs. rodent cortex. It is not
obvious that a mouse brain voxel is doing significantly less work than a
human brain voxel, as far as operation complexity is concerned.
 
 But in order for interesting things to happen, organisms have to be able 
 to interact with one another for quite some time before the grim reaper 
 does his grim business.

Do you think that a simulation rate of 10^6 years per year can't address
that?
 
 I'm confused, all you want are Ants?
 Or did you mean AGI in  ant-bodies?

Social insects are a good model, actually. Yes, all I want is a framework
flexible and efficient enough to produce social-insect-level intelligence
on hardware of the next decades.

If you can come that far, the rest is relatively trivial, especially if you
have continuous accretion of data from wet and computational neuroscience.

 The idea of bootstrapping intelligence is interesting, but far from 
 proven.  That too will require much engineering.

The idea is not exactly new, and it is fully validated by the fact that you
can read this sentence. It is an engineering problem, not a projection of
fundamental-science milestones.
 


Re: [agi] Cell

2005-02-10 Thread Brad Wyble

I'm confused, all you want are Ants?
Or did you mean AGI in  ant-bodies?
Social insects are a good model, actually. Yes, all I want is a framework
flexible and efficient enough to produce social-insect-level intelligence
on hardware of the next decades.
If you can come that far, the rest is relatively trivial, especially if you
have continuous accretion of data from wet and computational neuroscience.

I'm going to have to stop on this note.  You and I live in different 
worlds.

-Brad


RE: [agi] Cell

2005-02-10 Thread Ben Goertzel

  Social insects are a good model, actually. Yes, all I want is a
  framework flexible and efficient enough to produce social-insect-level
  intelligence on hardware of the next decades.
 
  If you can come that far, the rest is relatively trivial, especially
  if you have continuous accretion of data from wet and computational
  neuroscience.


 I'm going to have to stop on this note.  You and I live in different
 worlds.


 -Brad

Hmmm... IMO, there is a damn big leap between bugs and humans!!!

I'm not sure why you think that the step from one to the next is trivial?

Clearly from here to a simulated bug is a big leap, but the leap from a sim
bug to a sim human is ALSO really big, no?

ben g

