[agi] Cell simulator is out

2005-11-10 Thread Eugen Leitl

/. is slow to pick it up, but

http://www-128.ibm.com/developerworks/power/library/pa-cellstartsim/

FC4 (x86/x86_64 will do) and a 2 GHz machine are required.



Re: [agi] Cell

2005-02-14 Thread Philip Sutton
On 10 Feb 05 Steve Reed said:  

 In 2014, according to trend, the semiconductor manufacturers may reach
 the 16 nanometer lithography node, with 32 CPU cores per chip, perhaps
 150+ times more capable than today's x86 chip. 

I raised this issue with a colleague, who wondered whether this extrapolation 
would hold because of the economics involved.  He argued that CPUs have been 
getting more expensive in absolute terms (not relative to performance) as their 
capacity has increased, and he expected that trend of CPU price increases to 
continue.  He also thought that the factors that have made whole computer 
systems cheaper have nearly run their course, leaving the rising price of the 
CPUs as the dominant trend.  He therefore thought that Moore's Law might run 
out of puff - not because of technology limits but because of cost escalation.

Since I had no idea whether he was right (my subjective impression had been 
that the long-run trajectory for the price of computers was a long-run decline), 
I thought I should ask whether anyone has a view on my colleague's argument.

Cheers, Philip



Re: [agi] Cell

2005-02-14 Thread Eugen Leitl
On Mon, Feb 14, 2005 at 10:04:25PM +1100, Philip Sutton wrote:

 I raised this issue with a colleague who said that he wondered whether this 
 extrapolation would work because of the dynamics of economic cost.  He 

There are several developments which will terminate Moore's Law in semiconductor
photolithography fairly soon (within a decade?). Each new fab generation is
considerably more expensive than the previous one. The current bane of
chips is power density, which is a function of leakage currents.

However, there are alternative computation paradigms and fabrication methods,
as well as architecture tweaks, which do not have such hard limits. Whether
these technologies will arrive in time to prevent a discontinuity in affordable
integration density (which is what Moore's Law is about) is not yet obvious.

Some interesting technologies are SWNT wires and transistors, other
nanowire types, spintronics (MRAM and spintronic logic), reversible
computing, quantum dot self assembly, and multilayer organic electronics.

This is computing, but not as we know it, Jim.

 argued that CPUs have been getting more expensive in absolute terms (not 
 relative to performance) as their capacity has increased and he thought that 
 this trend of CPU price increases would continue.  He said he thought that 
 the 
 reasons that computers have been getting cheaper as whole systems has 
 come close to running its course leaving the rising price of the CPUs as the 
 dominant trend.  He therefore thought that Moore's Law might run out of puff 
 - 
 not because of technology limits but because of cost escalations.

Cost escalation is a technology limit. Moore's Law is: "The complexity for
minimum component costs has increased at a rate of roughly a factor of two per
year."
See moorespaper.pdf in ftp://download.intel.com/research/silicon/ .
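
As a rough illustration (a sketch with assumed values, not Moore's actual data),
the doubling law quoted above can be written as N(t) = N0 * 2^((t - t0)/T):

    # Toy projection of the doubling law quoted above. The 1971 baseline of
    # ~2300 transistors (Intel 4004) and the 2-year doubling period are
    # assumptions for illustration, not figures from the paper.
    def components(year, n0=2300, t0=1971, doubling_period=2.0):
        return n0 * 2 ** ((year - t0) / doubling_period)

    for year in (1971, 1981, 1991, 2001, 2005):
        print(year, int(components(year)))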
 
 Since I had no idea whether he was right (my subjective impression had been 
 that the long run trajectory for the price of computers was a long run 
 decline) I 
 thought I should ask whether anyone has a view on my colleague's argument.



Re: [agi] Cell

2005-02-11 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 04:58:51PM -0500, Ben Goertzel wrote:

 Hmmm... IMO, there is a damn big leap between bugs and humans!!!

Sure, but the leap between nothing at all and bugs is far greater still.
As another example, the step from a mouse to a man in terms of added
functionality at the genome and morphome level is almost insignificant (which
makes the results all the more astonishing). It's mostly a matter of scale,
plus the supercell architecture (also more neuron types and ion channels, but
not that many more).

And we can tell where the differences are from the genetic diffs, from how the
gene activity patterns change over time, and from which shape changes they
produce - and which functionality they change, by in vitro and in vivo recording.

So I think it makes sense to lay out a roadmap from Drosophila to Mus and then
to primates.
 
 I'm not sure why you think that the step from one to the next is trivial?
 
 Clearly from here to a simulated bug is a big leap, but the leap from a sim
 bug to a sim human is ALSO really big, no?

Yes, but we have a map: input from wet and computational neuroscience.
Working blueprints are crawling, flying and walking everywhere.

I realize it's the wrong approach to talk about on this list.



RE: [agi] Cell

2005-02-11 Thread Ben Goertzel
  Clearly from here to a simulated bug is a big leap, but the
 leap from a sim
  bug to a sim human is ALSO really big, no?

 Yes, but we have a map: input from wet and computational neuroscience.
 Working blueprints are crawling, flying and walking everywhere.

 I realize it's the wrong approach to talk about on this list.

Yes, I agree, if you're talking about the leap from a precisely physically
simulated bug to a precisely physically simulated human.

I was thinking about the leap from an engineered artificial system with the
cognitive, social and perceptual-motor capability of a bug to an engineered
artificial system which has these capacities at human level.

IMO, the former gap is much smaller than the latter.

-- Ben G




Re: [agi] Cell

2005-02-11 Thread Brad Wyble
On Fri, 11 Feb 2005, Eugen Leitl wrote:
Just want to be clear Eugen, when you talk about evolutionary simulations, 
you are talking about simulating the physical world, down to a 
cellular and perhaps even molecular level?

-B


Re: [agi] Cell

2005-02-11 Thread Eugen Leitl
On Fri, Feb 11, 2005 at 10:03:33AM -0500, Brad Wyble wrote:

 Just want to be clear Eugen, when you talk about evolutionary simulations, 
 you are talking about simulating the physical world, down to a 
 cellular and perhaps even molecular level?

Whole critters? Heavens forbid.

Fake physics not far behind your typical first-person shooter would
suffice for a crude virtual world. You can just skip actuators and make 
your virtual neurons apply forces directly to parts of the body geometry.
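
A minimal sketch of that idea (all names and dynamics hypothetical, pure
Python, nothing Cell-specific): virtual "neurons" produce activations that are
applied directly as forces to body segments, with no actuator model in between.

    import random

    # Hypothetical minimal critter: a few body segments on a line, each
    # driven directly by one virtual neuron's output used as a force.
    class Critter:
        def __init__(self, n_segments=3):
            self.pos = [float(i) for i in range(n_segments)]   # segment positions
            self.vel = [0.0] * n_segments                      # segment velocities
            # random "genome": one weight per (sensor, neuron) pair
            self.weights = [[random.uniform(-1, 1) for _ in range(n_segments)]
                            for _ in range(n_segments)]

        def step(self, dt=0.01):
            # crude sensors: segment positions relative to the first segment
            sensors = [p - self.pos[0] for p in self.pos]
            for i in range(len(self.pos)):
                act = sum(w * s for w, s in zip(self.weights[i], sensors))
                force = max(-1.0, min(1.0, act))   # neuron output applied as a force
                self.vel[i] += force * dt
                self.pos[i] += self.vel[i] * dt

    c = Critter()
    for _ in range(1000):
        c.step()
    print("segment positions after 1000 steps:", [round(p, 2) for p in c.pos])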

Mapping signalling networks to molecular computational cell crystals is
reserved for the maximum achievable speed. It is not a prerequisite for
effective de novo design of artificial critters by evolutionary algorithms.

It's just that on today's hardware your critters are very, very primitive.



Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Wed, Feb 09, 2005 at 07:15:51PM -0500, Brad Wyble wrote:
 
 Hardware advancements are necessary, but I think you guys spend alot of 
 time chasing white elephants.  AGI's are not going to magically appear 
 just because hardware gets fast enough to run them, a myth that is 
 strongly implied by some of the singularity sites I've read.

There are several major stepping stones in hardware speed. One is when you have
enough for a nontrivial AI (the price tag can be quite astronomical). Second,
enough in an *affordable* installation. Third, enough crunch to map the
parameter space/design by evolutionary algorithms. Fourth, the previous item
in an affordable (arbitrarily, 50-100 k$) package.

Arguably, we're approaching the region where a very large, very expensive
installation could, in theory, support a nontrivial AI.
 
 The hardware is a moot point.  If a time traveler from the year 2022 were 
 to arrive tomorrow and give us self-powered uber CPU fabrication plants, 
 we'd be barely a mouse fart closer to AGI.

I disagree. The equivalent of 10^6 CPU Blue Gene under everybody's desktop
would make AI happen quite quickly.
 
 Spend your time learning how to use what we have now, that's what 
 evolution did, starting from the primitive processing capabilities of 
 single celled organisms.

The Cell is barely enough for a ~realtime physics simulator. I need a largish
cluster of them for superrealtime, and about the same to run an ANN to
control a critter.



Re: [agi] Cell

2005-02-10 Thread Brad Wyble
On Wed, 9 Feb 2005, Martin Striz wrote:
--- Brad Wyble [EMAIL PROTECTED] wrote:
Hardware advancements are necessary, but I think you guys spend alot of
time chasing white elephants.  AGI's are not going to magically appear
just because hardware gets fast enough to run them, a myth that is
strongly implied by some of the singularity sites I've read.
Really?  Someone may just artificially evolve them (it happened once 
already on
wetware), and evolution in silico could move 10, nay 20, orders of magnitude
faster.

No, never.  Evolution in silico will never move faster than real matter 
interacting.

But yes, it's true that there are stupidly insane amounts of CPU power that 
would give us AI instantly (although it would be so alien to us that we'd 
have no idea how to communicate with it). However, nothing we'll get 
in the next century will be so vast.  You'd need a computer many 
times the size of the Earth to generate AI through evolution in a 
reasonable time frame.





Re: [agi] Cell

2005-02-10 Thread Brad Wyble



There are several major stepping stones with hardware speed. One, is when you 
have
enough for a nontrivial AI (price tag can be quite astronomic). Second,
enough in an *affordable* installation. Third, enough crunch to map the
parameter space/design by evolutionary algorithms. Fourth, the previous item
in an affordable (arbitrarily put, 50-100 k$) package.
Arguably, we're approaching the region where a very large, very expensive
installation could, in theory, support a nontrivial AI.

Yes, *in theory*, but you still have to engineer it.  That's the hard 
part.

Maybe I'm overstating my case to make a point, but it's a point that 
dearly needs to be made: the control architecture is everything.

Let's do a very crude thought experiment, and for the moment not consider 
evolving AI, because the hardware requirements for that are a bit silly.

So imagine it like this: you've got your 10^6 CPUs and you want to make 
an AI.  You have to devote some percentage of those CPUs to thinking 
(i.e. analyzing and representing information) and the remainder to 
restricting that thinking to some useful task.  No one would argue, I 
hope, that it's useful to blindly analyze all available information.

The part that's directing your resources is the control architecture, and 
it requires meticulous engineering and difficult design decisions. 
What percentage do you allocate?

5%? 20%?   The more you spend, the more efficiently the remaining CPU 
power is spent.  There's got to be a point at which you achieve a maximum 
efficiency for your blob of silicon.

The brain is thoroughly riddled with such control architecture. Starting 
at the retina and moving back, it's a constant process of throwing out 
information and compressing what's left into a more compact form.  That's 
really all your brain is doing from the moment a photon hits your eye: 
determining whether or not you should ignore that photon.  And it is a 
Very Hard problem.

-Brad


Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 04:46:39AM -0500, Brad Wyble wrote:

 No never.  Evolution in silico will never move faster than real matter 
 interacting.

Where does this strong certainty come from? I can easily make a superrealtime
Newtonian physics simulator by spatial tessellation over a large number of
off-the-shelf components (DSPs would do). The biological chronon is some 10 Hz;
that's not so fast.
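
A toy illustration of the spatial-tessellation point (hypothetical, 1-D
diffusion rather than full Newtonian physics): the space is cut into cells
that only exchange boundary values with their neighbours, which is the
property that lets each cell be farmed out to its own off-the-shelf processor.

    import random

    # Toy 1-D diffusion on a ring, decomposed into cells that only exchange
    # halo values with their neighbours. Parameters are arbitrary.
    N_CELLS, PTS, DT, K = 8, 16, 0.1, 0.5
    cells = [[random.random() for _ in range(PTS)] for _ in range(N_CELLS)]

    def step(cells):
        new = []
        for c, u in enumerate(cells):
            left = cells[(c - 1) % N_CELLS][-1]     # halo value from left neighbour
            right = cells[(c + 1) % N_CELLS][0]     # halo value from right neighbour
            padded = [left] + u + [right]
            new.append([u[i] + DT * K * (padded[i] - 2 * u[i] + padded[i + 2])
                        for i in range(PTS)])
        return new

    for _ in range(200):
        cells = step(cells)
    print("cell 0 now:", [round(x, 3) for x in cells[0][:4]])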

Depending on scenery complexity, an FPS is already faster than realtime. It's
mostly faked physics, but then it runs on a desktop machine.

With dedicated circuitry a speedup of a million is quite achievable. Maybe
even up to a billion.

Oh, and I'd use in machina. In silico will soon sound as quaint as in
relais or in vacuum tubus.
 
 But yes it's true, there are stupidly insane emounts of CPU power that 
 would give us AI instantly (although it would be so alien to us that we'd 

It doesn't need to be alien to us, if the simworld is realistic.

 have no idea how to communicate with it). However nothing that we'll get 
 in the next 100 century will be so vast.  You'd need a computer many 

You will see individual installations with a mole of switches within the
next 40-50 years. Maybe sooner.

 times the size of the earth to generate AI through evolution in a 
 reasonable time frame.

Show me the numbers behind this assumption. 



Re: [agi] Cell

2005-02-10 Thread Brad Wyble
The brain is thoroughly riddled with such control architechture, starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form.  That's
really all your brain is doing from the moment a photon hits your eye,
determining whether or not you should ignore that photon.  And it is a
Very Hard problem.
Yes, but it's a solved problem. Biology is rife with useful blueprints to
seed your system with. The substrate is different, though, so some things are
harder and others are easier, so you have to coevolve both.
This is where you need to sink moles of crunch.

I don't think you and I will ever see eye to eye here, because we have 
different conceptions in our heads of how big this parameter space is.

Instead, I'll just say in parting that, like you, I used to think AGI was 
practically a done deal.  I figured we were 20 years out.

7 years in Neuroscience boot-camp changed that for good.  I think anyone 
who's truly serious about AI should spend some time studying at least one 
system of the brain.  And I mean really drill down into the primary 
literature, don't just settle for the stuff on the surface which paints 
nice rosy pictures.

Delve down to network anatomy, let your mind be blown by the precision and 
complexity of the connectivity patterns.

Then delve down to cellular anatomy, come to understand how tightly 
compact and well engineered our 300 billion CPUs are.  Layers and layers 
of feedback regulation interwoven with an exquisite perfection, both 
within cells and between cells.  What we don't know yet is truly 
staggering.

I guarantee this research will permanently expand your mind.
Your idea of what a Hard problem is will ratchet up a few notches, and 
you will never again look upon any significant slice of the AGI pie as 
something simple enough that it can be done by a GA running on a few kg 
of molecular switches.


-Brad


Re: [agi] Cell

2005-02-10 Thread Martin Striz

--- Brad Wyble [EMAIL PROTECTED] wrote:

 On Wed, 9 Feb 2005, Martin Striz wrote:
 
 
  --- Brad Wyble [EMAIL PROTECTED] wrote:
 
 
  Hardware advancements are necessary, but I think you guys spend alot of
  time chasing white elephants.  AGI's are not going to magically appear
  just because hardware gets fast enough to run them, a myth that is
  strongly implied by some of the singularity sites I've read.
 
  Really?  Someone may just artificially evolve them (it happened once
 already on
  wetware), and evolution in silico could move 10, nay 20, orders of
 magnitude
  faster.
 
 
 No never.  Evolution in silico will never move faster than real matter 
 interacting.

Evolution is limited by mutation rates and generation times.  Mammals need from
1 to 15 years before they reach reproductive age.  Generation times are long
and evolution is slow.  A computer could eventually simulate 10^9 (or 10^20, or
whatever) generations per second, and multiple mutation rates (to find optimal
evolutionary methodologies).  It can already do as many operations per second,
it just needs to be able to do them for billions of agents.

 
 But yes it's true, there are stupidly insane emounts of CPU power that 
 would give us AI instantly (although it would be so alien to us that we'd 
 have no idea how to communicate with it). However nothing that we'll get 
 in the next 100 century will be so vast.  You'd need a computer many 
 times the size of the earth to generate AI through evolution in a 
 reasonable time frame.

That's not a question that I'm equipped to answer, but my educated opinion is
that when we can do 10^20 flops, it'll happen.  Of course, rationally designed
AI could happen under far, far less computing power, if we know how to do it.

Martin



Re: [agi] Cell

2005-02-10 Thread Brad Wyble
I'd like to start off by saying that I have officially made the transition 
into old crank.  It's a shame it's happened so early in my life, but it 
had to happen sometime.  So take my comments in that context.  If I've 
ever had a defined role on this list, it's in trying to keep the pies from 
flying into the sky.


Evolution is limited by mutation rates and generation times.  Mammals 
need from 1 to 15 years before they reach reproductive age.  Generation
That time is not useless or wasted.  Their brains are acquiring 
information, molding themselves.  I don't think you can just skip it.

times are long and evolution is slow.  A computer could eventually 
simulate 10^9 (or 10^20, or
whatever) generations per second, and multiple mutation rates (to find optimal
evolutionary methodologies).  It can already do as many operations per second,
it just needs to be able to do them for billions of agents.

10^9 generations per second?  This rate depends (inversely) on the 
complexity of your organism.

And while fitness functions for simple ant AIs are (relatively) simple to 
write and evaluate, when you start talking about human-level AI, you need 
a very thorough competition, involving much social interaction.  This takes 
*time*; whether simulated time or realtime, it will add up.

A simple model of interaction between AIs will give you simple AIs.  We 
didn't start getting really smart until we could exchange meaningful 
ideas.


But yes it's true, there are stupidly insane emounts of CPU power that
would give us AI instantly (although it would be so alien to us that we'd
have no idea how to communicate with it). However nothing that we'll get
in the next 100 century will be so vast.  You'd need a computer many
times the size of the earth to generate AI through evolution in a
reasonable time frame.
That's not a question that I'm equipped to answer, but my educated opinion 
is
that when we can do 10^20 flops, it'll happen.  Of course, rationally designed
AI could happen under far, far less computing power, if we know how to do it.
I'd be careful throwing around guesses like that.  You're dealing with so 
many layers of unknown.

Before the accusation comes, I'm not saying these problems are unsolvable. 
I'm just saying that (barring planetoid computers) sufficient hardware is 
a tiny fraction of the problem.  But I'm hearing a disconcerting level of 
optimism here that if we just wait long enough, it'll happen on all of our 
desktops with off-the-shelf AI building kits.

Let me defuse another criticism of my perspective: I'm not saying we need 
to copy the brain.  However, the brain is an excellent lesson in how Hard 
this problem is and should certainly be embraced as such.

-Brad


Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 08:42:59AM -0500, Brad Wyble wrote:

 I don't think you and I will ever see eye to eye here, because we have 
 different conceptions in our heads of how big this parameter space is.

It depends on the system. The one I talked about (automata networks) is not
very large - i.e. doable with a mole of switches. Is it a sufficiently
flexible framework? I suspect so, but the only way to find out would be to
try.
 
 Instead, I'll just say in parting that, like you, I used to think AGI was 
 practically a done deal.  I figured we were 20 years out.

Where did I say that AI is a done deal? Have you ever tried ordering a
mole of buckytronium from Dell? Try it sometime.
 
 7 years in Neuroscience boot-camp changed that for good.  I think anyone 
 who's truly serious about AI should spend some time studying at least one 
 system of the brain.  And I mean really drill down into the primary 
 literature, don't just settle for the stuff on the surface which paints 
 nice rosy pictures.

Extremely relevant for whole-body emulation, not so relevant for AI.
(Don't assume that my background is computer science.) This is getting
off-topic, but this is precisely why WBE needs a molecular-level scan, and
machine learning to climb up the simulation layer ladder. Humans can't do it.
 
 Delve down to network anatomy, let your mind be blown by the precision and 
 complexity of the connectivity patterns.

It's a heterogeneous excitable medium, a spiking high-connectivity network
that works with gradients and neurotransmitter packets - some thousands of ion
channel types, some hundreds to thousands of neuron cell types.

This is about enough detail to seed your simulation with. Don't forget: we're
only using this as an educated guess to prime the co-evolution, on a
different substrate (you can emulate automata networks on 3D packet-switched
systems very efficiently).
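
For illustration, a toy spiking network at roughly the level of abstraction
described above: one generic leaky integrate-and-fire neuron type, sparse
random connectivity, spikes delivered as instantaneous "packets". All
parameters are arbitrary placeholders, not biological constants.

    import random

    N, P_CONN, DT = 200, 0.05, 1.0            # neurons, connection prob, ms per step
    TAU, V_TH, V_RESET, W, DRIVE = 20.0, 1.0, 0.0, 0.15, 0.06

    # sparse random connectivity: targets[i] = indices neuron i projects to
    targets = [[j for j in range(N) if j != i and random.random() < P_CONN]
               for i in range(N)]
    v = [random.uniform(0.0, V_TH) for _ in range(N)]   # membrane potentials

    for step in range(200):
        spikes = []
        for i in range(N):
            v[i] += DT * (-v[i] / TAU + DRIVE)   # leaky integration + constant drive
            if v[i] >= V_TH:
                spikes.append(i)
                v[i] = V_RESET
        for i in spikes:                          # deliver spikes to targets
            for j in targets[i]:
                v[j] += W
        if step % 50 == 0:
            print("step", step, "spikes", len(spikes))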
 
 Then delve down to cellular anatomy, come to understand how tightly 
 compact and well engineered our 300 billion CPUs are.  Layers and layers 
 of feedback regulation interwoven with an exquisite perfection, both 
 within cells and between cells.  What we don't know yet is truly 
 staggering.

Agreed. Fortunately, all of this is irrelevant for AI, because the hardware
artifacts are different.
 
 I guarantee this research will permanently expand your mind.

It did. Unfortunately, I didn't go beyond monographs.
 
 Your idea of what a Hard problem is will ratchet up a few notches, and 
 you will never again look upon any significant slice of the AGI pie as 
 something simple enough that it can can be done by GA running on a few kg 

Evolutionary algorithms, not GA.

 of molecular switches.

Do you think anyone is smart enough to code a seed? If not, what is your idea
of an AI bootstrap?




Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 10:15:25AM -0500, Brad Wyble wrote:

 Evolution is limited by mutation rates and generation times.  Mammals 
 need from 1 to 15 years before they reach reproductive age.  Generation
 
 That time is not useless or wasted.  Their brains are acquiring 
 information, molding themselves.  I don't think you can just skip it.

Most lower organisms are genetically determined. Even so, at a speedup rate
of 1:10^6, a wall-clock day is worth some 3 kiloyears of simulation time.
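
The arithmetic behind that figure, worked out (a sketch with the round numbers
above):

    # One wall-clock day at a 10^6 speedup, in simulated years.
    speedup = 1e6
    seconds_per_day = 24 * 3600
    sim_years = speedup * seconds_per_day / (365 * 24 * 3600)
    print(round(sim_years), "simulated years per wall-clock day")   # ~2740, i.e. ~3 kiloyears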
 
 10^ 9 generations per second?  This rate depends(inversely) on the 

10^9 generations/second is absurdly high; 10^9 is about the top event
rate in the simulation you could hope to achieve, given what we know of
computational physics. Fitness testing in seconds to minutes
on very large populations looks very doable, though. Some complex behaviour 
can be evaluated in some 10-100 ms with massively parallel molecular hardware.

Of course, the current state of the art is pathetic: http://darwin2k.com.
People would laugh if you said plausible fake-physics simulators could 
scale O(1).

Then again, would Sutherland have expected Nalu, or Dawn?
http://www.nzone.com/object/nzone_downloads_nvidia.html

 complexity of your organism.

No. The simulation handles virtual substrate, and that's O(1) if you match
organism size with volume of dedicated hardware, assuming local signalling
(which is ~ms constrained in biology, and ~ps..~fs constrained
relativistically).
 
 And while fitness functions for simple ant AI's are (relatvely) simple to 
 write and evaluate, when you start talking about human level AI, you need 

People can be paid, or can volunteer, to judge organism performance from
interactive simulation. Co-evolution has a built-in drive and no
intrinsic fitness function other than the naturally emergent one.

 a very thorugh competition, involving much scoial interaction.  This takes 
 *time* whether simulated time or realtime, it will add up.
 
 A simple model of interaction between AI's will give you simple AI's.  We 
 didn't start getting really smart until we could exchange meaningful 
 ideas.

What I'm interested in is an efficient, robustly evolvable framework. It doesn't
take more than insect-equivalent complexity to achieve that. This implies
full genetic determinism and simple fitness testing.
 
 I'd be careful throwing around guesses like that.  You're dealing with so 
 many layers of unknown.
 
 Before the accusation comes, I'm not saying these problems are unsolvable. 
 I'm just saying that (barring planetoid computers) sufficient hardware is 

Do you see any specific physical limits to building systems hundreds of
km^3 in size? And why do you think you need systems of nontrivial size for an
evolutionary bootstrap of intelligence? Buckytronics are just molecules.

 a tiny fraction of the problem.  But I'm hearing a disconcerting level of 
 optimism here that if we just wait long enough, it'll happen on all of our 
 desktops with off-the shelf AI building kits.
 
 Let me defuse another criticism of my perspective,  I'm not saying we need 
 to copy the brain.  However, the brain is an excellent lesson of how Hard 
 this problem is and should certainly be embraced as such.

Constraints on biological tissue are very different from constraints on
electron or electron-spin distributions in solid-state circuits switching in
the GHz to THz range.

While the overall architecture definitely contains lots of components
necessary for hitting a fertile region in problem space, slavishly 
copying the microarchitecture is likely to only lead you astray.



Re: [agi] Cell

2005-02-10 Thread Shane
 --- Brad Wyble [EMAIL PROTECTED] wrote: 
 
  Evolution is limited by mutation rates and generation times.  Mammals 
  need from 1 to 15 years before they reach reproductive age.  Generation
 
 That time is not useless or wasted.  Their brains are acquiring 
 information, molding themselves.  I don't think you can just skip it.

I think a key point is that evolution isn't really trying to produce
organisms of higher intelligence.  It is just something that sometimes
occurs in some evolutionary niches as a byproduct of the process.  
Often a dumb organism does better than a larger, more resource-expensive
but intelligent one.

In comparison, if we were to try and evolve intelligence, using an
appropriate computable measure of intelligence of course, we could
use this measure to direct the evolution, i.e. by using it as the
objective function of a GA.  In terms of resource consumption, this
should be exponentially faster at achieving intelligence than what
occurred in nature.
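
A minimal sketch of that setup (hypothetical throughout: `intelligence_score`
is just a placeholder standing in for the computable measure Shane mentions,
so the skeleton runs end to end):

    import random

    GENOME_LEN, POP, GENERATIONS, MUT_RATE = 32, 50, 100, 0.05

    def intelligence_score(genome):
        # Placeholder objective function; a real computable intelligence
        # measure would go here. This toy just rewards matching a pattern.
        target = [i % 2 for i in range(GENOME_LEN)]
        return sum(1 for g, t in zip(genome, target) if g == t)

    def mutate(genome):
        return [1 - g if random.random() < MUT_RATE else g for g in genome]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=intelligence_score, reverse=True)   # evolution directed by the measure
        survivors = pop[:POP // 2]                       # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    print("best score:", max(intelligence_score(g) for g in pop))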


 And while fitness functions for simple ant AI's are (relatvely) simple to 
 write and evaluate, when you start talking about human level AI, you need 
 a very thorugh competition, involving much scoial interaction.  This takes 
 *time* whether simulated time or realtime, it will add up.
 
 A simple model of interaction between AI's will give you simple AI's.  We 
 didn't start getting really smart until we could exchange meaningful 
 ideas.

Yes, simple AIs need relatively little to evaluate their intelligence.
So the first stages of evolving intelligence would be much easier in this
respect.  However, once you get to something like a rat's brain, you
already have much of the structure needed for a human's brain
worked out.  I think something similar will be the case with an AI.
That is, the design changes needed to go from a medium-level intelligence
to a high-level intelligence are not great; much of it is just a problem
of scale.
 
Of course I can't really prove much of this without actually doing it.
Firstly, I need to create a proper measure of machine intelligence,
which is what I am currently working on...

Cheers
Shane




Re: [agi] Cell-DG

2005-02-10 Thread Danny G. Goe
I think this raises the question of how you factor in the cost of learning.
CPU time?
Resources?
Memory?
Total instruction counts?
If you can arrive at the same answer with fewer instructions executed and/or 
fewer resources, isn't that a better model?

Weighing the cost will depend on the availability of those resources.
Which resources give the highest rate of learning - CPU time, memory?
Is an intelligence quotient a good way to measure the learning curve,
or is there some other method for finding the learning rate?
I am sure that this will be processed on clusters.
Some neural nets might run in the background while different mutations are run.
Previously reached computational states may either continue or become 
fixed at some point in time.

If any configuration creates a learning system in which the next generation of 
mutations scores a ratio greater than 1 relative to the previous generation, 
you can then start to determine the rate of evolution.

When you start the process you will have to run a large number of generated 
test methods and determine whether any show promise for learning; some might 
work better early on, while others will mutate into higher learning curves as 
the evolution continues. You will have to run a large number of permutations 
of all the learning methods to find the optimal mix for a high learning curve. 
If you decide to add any other learning method, the new method will have to be 
tested with all the others.
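
One way to make the cost-weighting concrete (a sketch; the function name and
weights are hypothetical, not an established metric): score each configuration
by learning gained per unit of resources spent.

    # Hypothetical cost-weighted learning score: reward learning progress,
    # penalize the CPU time and memory spent obtaining it.
    def learning_rate_score(accuracy_gain, cpu_seconds, mem_bytes,
                            cpu_weight=1.0, mem_weight=1e-9):
        cost = cpu_weight * cpu_seconds + mem_weight * mem_bytes
        return accuracy_gain / cost if cost > 0 else 0.0

    # Two hypothetical configurations reaching the same accuracy gain:
    a = learning_rate_score(accuracy_gain=0.10, cpu_seconds=120, mem_bytes=2e9)
    b = learning_rate_score(accuracy_gain=0.10, cpu_seconds=300, mem_bytes=1e9)
    print("config A:", a, "config B:", b)   # the cheaper run (A) scores higher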


First-time runs will show high learning rates, but these will level off as the 
available knowledge gets absorbed by any given configuration.


Comments?

Dan Goe





Re: [agi] Cell

2005-02-10 Thread Martin Striz

--- Brad Wyble [EMAIL PROTECTED] wrote:

  Evolution is limited by mutation rates and generation times.  Mammals 
  need from 1 to 15 years before they reach reproductive age.  Generation
 
 That time is not useless or wasted.  Their brains are acquiring 
 information, molding themselves.  I don't think you can just skip it.

You're confusing ontogeny with phylogeny.  It's the latter that I'm speaking
of.  Generation times are long because developmental pathways in biological
systems are slow.  Developmental pathways exist because there's an unfortunate
disconnect between phenotypes and genotypes.  Brains are not capable of
recursively self-improving their own wetware, and improvements have to waste
time by feeding back through the genome.  A recursively self-improving machine
intelligence (or pre-intelligent artificial lifeform) won't have the burden of
developmental pathways and excessive generation times.

 
  times are long and evolution is slow.  A computer could eventually 
  simulate 10^9 (or 10^20, or whatever) generations per second, and 
  multiple mutation rates (to find optimal evolutionary methodologies). 
  It can already do as many operations per second, it just needs to be 
  able to do them for billions of agents.
 
 
 10^ 9 generations per second?  This rate depends(inversely) on the 
 complexity of your organism.

Defined properly, complexity is the inverse of entropy, and entropy is the
number of equivalent states that a system can obtain.  Given this, I would not
be remiss in suggesting that the brain has more complexity than the molecular
architecture of the cell (due to its exquisite specification).  Yet it took 3
billion years to refine the cell, and only a few hundred million to catapult
ganglionic masses into human-level intelligence.  So while complexity implies a
proportional relationship with computational needs, selection conditions can
profoundly invert those needs.

It may not take seconds; it may take years, but I think it will be possible
within a few decades.

Martin Striz






Re: [agi] Cell

2005-02-10 Thread Eugen Leitl
On Thu, Feb 10, 2005 at 12:07:57PM -0500, Brad Wyble wrote:

 You guys are throwing around orders of magnitude like ping pong balls 
 based on very little practical evidence.  Sometimes no estimate is less 
 misleading than one that is arbitrary.

What makes you think it's arbitrary? Minimal switching times, e.g. for an
MRAM-based logic cell, a NEMS buckling buckytube element, or a 
ballistic buckytube transistor, are not exactly guesswork. Recent literature is
full of pretty solid data.

Do you think that the equivalents of ~ms processes in biological tissue
information processing can't occur within ~ns? ~ps gate delays are achievable with
current electronics. I don't see why running the equivalent of a piece of
neuroanatomy should accrue more than ~1 k parallel gate delays. That's a 
conservative guess, actually (which is why I'm saying 10^6, and not 10^9).
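
The arithmetic behind the 10^6 figure, spelled out (assumed round values from
the paragraph above):

    bio_step = 1e-3               # ~ms-scale process in biological tissue
    gate_delay = 1e-12            # ~ps gate delay in current electronics
    depth = 1000                  # ~1 k parallel gate delays per emulated step
    print("speedup ~", bio_step / (gate_delay * depth))   # ~1e6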
 
 No. The simulation handles virtual substrate, and that's O(1) if you match
 organism size with volume of dedicated hardware, assuming local signalling
 (which is ~ms constrained in biology, and ~ps..~fs constrained
 relativistically).
 
 
 I was referring to the complexity of the organism's mind.  Surely you are 
 not going to tell me that as the evolving brains increase in complexity, 
 there is no effect on the simulation speed?

Yes. I'm going to tell you that a spike propagating doesn't care (much) about
whether it's running in a mouse or a human. Processing takes longer in human
primates than in rodents, but not dramatically so; the reason is more hops,
and the capability to process far more complex stimuli with a minimally modified
substrate. The processing unit sees signals passing along virtual wires, or 
packets passing through nodes. The higher organization levels are transparent; 
what matters is processing volume/number of nodes, and whether your average 
(virtual) connectivity at the processing-element level can handle the higher 
connectivity in a human vs. rodent cortex. It is not obvious that a mouse 
brain voxel is doing significantly less work than a human brain voxel, as 
far as operation complexity is concerned.
 
 But in order for interesting things to happen, organisms have to be able 
 to interact with one another for quite some time before the grim reaper 
 does his grim business.

Do you think that a simulation rate of a million years per year can't address that? 
 
 I'm confused, all you want are Ants?
 Or did you mean AGI in  ant-bodies?

Social insects are a good model, actually. Yes, all I want is a framework
flexible and efficient enough to produce social-insect-level intelligence
on the hardware of the next decades.

If you can get that far, the rest is relatively trivial, especially if you
have continuous accretion of data from wet and computational neuroscience.

 The idea of bootstrapping intelligence is interesting, but far from 
 proven.  That too will require much engineering.

The idea is not exactly new, and it is fully validated by the fact that you can
read this sentence. It is an engineering problem, not a projection of
fundamental science milestones.
 


Re: [agi] Cell

2005-02-10 Thread Brad Wyble

I'm confused, all you want are Ants?
Or did you mean AGI in  ant-bodies?
Social insects are a good model, actually. Yes, all I want is a framework
flexible and efficient enough to produce social insect level on intelligence
on hardware of the next decades.
If you can come that far, the rest is relatively trivial, especially if you
have continous accretion of data from wet and computational neuroscience.

I'm going to have to stop on this note.  You and I live in different 
worlds.

-Brad


RE: [agi] Cell

2005-02-10 Thread Ben Goertzel

  Social insects are a good model, actually. Yes, all I want is a
 framework
  flexible and efficient enough to produce social insect level on
 intelligence
  on hardware of the next decades.
 
  If you can come that far, the rest is relatively trivial,
 especially if you
  have continous accretion of data from wet and computational
 neuroscience.


 I'm going to have to stop on this note.  You and I live in different
 worlds.


 -Brad

Hmmm... IMO, there is a damn big leap between bugs and humans!!!

I'm not sure why you think that the step from one to the next is trivial?

Clearly from here to a simulated bug is a big leap, but the leap from a sim
bug to a sim human is ALSO really big, no?

ben g




Re: [agi] Cell

2005-02-09 Thread Eugen Leitl
On Tue, Feb 08, 2005 at 08:26:02AM -0600, Stephen Reed wrote:

 The published hardware description of the Cell SPUs: 128 bit vector 
 engines, 128 registers each, matches the published Freescale AltiVec 
 processor architecture.  I've looked over the programmer's documentation 

It's eight 4x32 bit engines, with dedicated on-die memory and
explicit-addressing shared memory, apart from the Power5-ish main CPU.

I don't see any documentation of inter-Cell clustering yet. It'd be trivial
to put SCI on-die. I'd be very surprised if they did, though.

 for that processor and believe that vector processing is of limited 
 usefulness for the typical Cyc knowledge base instruction trace.  As you 

No disagreement there.

 know, vector computations are well suited for fine-grained parallelism in 
 which a single operation is applied simultaneously to multiple operands.  

Notice that you've got eight asynchronous cores of these. It's both a SIMD and
a MIMD system. Granted, it's numerics, but then most problems are (AI
especially, or at least AI can be written that way).

 In Cyc, there are more opportunities for large-grained 
 inference parallelism as opposed to fine-grained parallelism.
 
 As the Cell programming model unfolds, it will be interesting to see just 
 how much entertainment (game) AI programming will use the Cell SPUs as 
 compared to using the Cell's conventional Power-derived GPU. I predict 

Current game AI is AI by label only. The SPUs would be great for
nonalgorithmic/neuronal creature control -- but given that they will
be absorbed by increased game-world complexity (physics engines), they will be
unavailable for that in typical console settings.

 that no game AI algorithm will use the SPUs.  This could be verified by 
 examining the marketing claims of the game development code libraries that 
 are sure to appear in the next couple of years.

Yes, I expect very detailed game physics, not much beyond that.
 
 Generally, I find that the Cell architecture is further evidence that 
 Moore's Law performance expectations will hold for several more 
 lithography nodes (process technology generations).  In particular, the 
 use of chip area for multiple cores as opposed to simply more cache 
 memory is a step in the right direction.  A spreadsheet I maintain 

Absolutely -- and they did the right thing by ditching the core for the SPUs.

 predicts that the x86 architecture will be 256 cores per chip at the 3.76 
 nanometer node, in the year 2022, which is nine lithography generations 

I don't expect conventional lithography electronics to go beyond 2015. Also,
the x86 architecture and power dissipation densities break down before then.

They have to go to full-scale nanoelectronics by 2022, which will most likely
be bucky electronics, spintronics and MRAM. 

 from now.  My assumption is that the number of cores will double with 
 each lithography generation, and that Intel will continue to migrate to a 
 new generation every two years.  It would suit Cyc-style AI processing 

Intel is having big problems with their x86 approach. Multicores will only
take them so far, so they have to prepare a successor technology.

 best, if multi-core CPUs evolved in the direction of high performance 
 MIMD (multiple instruction, multiple data) integer processing, as 
 compared to the Cell SIMD (single instruction, multiple data) floating 
 point processing.

What I don't like about the Cell is the lack of 8-bit and 16-bit integer data
types in the SPU SIMD. I'm also missing discussion of whether the SPUs are
connected by a crossbar (there might be no need for one, if the internal bus is
really fast and wide), and of which signalling interconnect Cell clusters will
use. I'm really hoping they will ship a flavor with a torus network a la Blue
Gene (will there be a Cell Blue Gene?)
 



Re: [agi] Cell

2005-02-09 Thread Stephen Reed
On Wed, 9 Feb 2005, Eugen Leitl wrote:
 What I don't like about Cell is lack of 8 bit and 16 bit integer data types
 in SPU SIMD. I'm also missing discussion on whether the SPUs are connected by
 a crossbar (there might be no need for it, if the internal bus is really fast
 and wide), and which signalling interconnect Cell clusters will use. I'm 
 really
 hoping they will ship a flavor with a torus network a la Blue Gene (will
 there be Cell Blue Gene?)

If in fact the Cell SPU is derived from the Freescale AltiVec 
architecture, then 16-bit integer data types are supported, according to the 
summary at:

http://www.freescale.com/files/32bit/doc/fact_sheet/ALTIVECGLANCE.pdf

-Steve




Re: [agi] Cell

2005-02-09 Thread Stephen Reed
On Wed, 9 Feb 2005, Stephen Reed wrote:

 On Wed, 9 Feb 2005, Eugen Leitl wrote:
  What I don't like about Cell is lack of 8 bit and 16 bit integer data types
  in SPU SIMD. I'm also missing discussion on whether the SPUs are connected 
  by
  a crossbar (there might be no need for it, if the internal bus is really 
  fast
  and wide), and which signalling interconnect Cell clusters will use. I'm 
  really
  hoping they will ship a flavor with a torus network a la Blue Gene (will
  there be Cell Blue Gene?)
 

Here are links to Cell articles written by an attendee at the ongoing 
semiconductor conference:

http://arstechnica.com/articles/paedia/cpu/cell-1.ars
http://arstechnica.com/articles/paedia/cpu/cell-2.ars

-Steve




Re: [agi] Cell

2005-02-09 Thread Yan King Yin
I guess one problem (I'm doing neural network stuff) is
whether the *main* memory access rate can be increased
by using the Cell.  If each subprocessor can access the
main memory independently that'd be a huge performance
boost.

The 256K local memory is not entirely ideal because,
as in the brain, most useful neural networks need to
address the entire set of weights within relatively
short periods, i.e. you can't just play with a local
set of weights for a long time.  It has to be global.

Still, there'll be at least 8x more processors in each
cell, and that certainly helps in terms of concurrency.

YKY



Re: [agi] Cell

2005-02-09 Thread Eugen Leitl
On Wed, Feb 09, 2005 at 11:13:18PM +0800, Yan King Yin wrote:

 I guess one problem (I'm doing neural network stuff) is
 whether the *main* memory access rate can be increased
 by using the Cell.  If each subprocessor can access the
 main memory independently that'd be a huge performance
 boost.

There could be several models for using the architecture effectively. 
One would be streaming through main memory and doing lots of operations on
the local on-die window; 2 MBytes on-die is a lot of core. All of this would
have to be done manually.
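
For illustration only (plain Python/NumPy, not Cell or SPU code): streaming a
large array from "main memory" through a small working set sized to the 256 KB
local store mentioned in this thread.

    import numpy as np

    LOCAL_STORE = 256 * 1024                  # bytes per SPU local store (from the thread)
    TILE = LOCAL_STORE // (2 * 4)             # float32 elements per tile, double-buffered

    data = np.random.rand(10_000_000).astype(np.float32)   # stand-in for main memory
    out = np.empty_like(data)

    for start in range(0, data.size, TILE):
        tile = data[start:start + TILE].copy()        # stand-in for a DMA into local store
        out[start:start + TILE] = tile * 2.0 + 1.0    # do lots of work on the local window
    print(out[:4])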

The network could very well be locally constrained and/or simulate long-range
connections through interim hops. This would limit array dimensionality, and
thus allow you to arrange your prefetches from main memory with a good duty
cycle with regard to streaming vs. random access.

Another approach would be to cluster individual Cell boxes in a 3D torus and
largely disregard off-die memory. This would absolutely require a signalling
interconnect with very large bandwidth and ~us latency.
 
 The 256K local memory is not entirely ideal because,

It's 8x256 KB, explicitly addressed.

 like the brain, most useful neural networks need to
 address the entire set of weights within relatively
 short periods, ie you can't just play with a local
 set of weights for a long time.  It has to be global.

I think you have to work with relative offsets (a tighter encoding, too) on a 3D
or 4D grid, with connection density rapidly decaying with distance. This maps
very well to a torus.
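
A small sketch of that kind of connectivity (hypothetical parameters): units
on a 3D grid with periodic (torus) wrap-around, connection probability
decaying exponentially with torus distance.

    import math, random

    L = 8                          # 8x8x8 grid of units with periodic wrap
    DECAY = 1.0                    # hypothetical length scale of the decay

    def torus_dist(a, b):
        # shortest per-axis distance on a periodic grid, summed (Manhattan)
        return sum(min(abs(x - y), L - abs(x - y)) for x, y in zip(a, b))

    units = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]
    connections = []
    for src in units:
        for dst in units:
            if src != dst and random.random() < math.exp(-torus_dist(src, dst) / DECAY):
                connections.append((src, dst))
    print(len(connections), "connections, mostly short-range")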
 
 Still, there'll be at least 8x more processors in each
 cell, and that certainly helps in terms of concurrency.

Yes. Still, so far this is vaporware. This might ship in quantities in two
years, or not at all.



Re: [agi] Cell

2005-02-09 Thread Brad Wyble
Hardware advancements are necessary, but I think you guys spend a lot of 
time chasing white elephants.  AGIs are not going to magically appear 
just because hardware gets fast enough to run them, a myth that is 
strongly implied by some of the singularity sites I've read.

The hardware is a moot point.  If a time traveler from the year 2022 were 
to arrive tomorrow and give us self-powered uber CPU fabrication plants, 
we'd be barely a mouse fart closer to AGI.

Spend your time learning how to use what we have now; that's what 
evolution did, starting from the primitive processing capabilities of 
single-celled organisms.



Re: [agi] Cell

2005-02-09 Thread Martin Striz

--- Brad Wyble [EMAIL PROTECTED] wrote:

 
 Hardware advancements are necessary, but I think you guys spend alot of 
 time chasing white elephants.  AGI's are not going to magically appear 
 just because hardware gets fast enough to run them, a myth that is 
 strongly implied by some of the singularity sites I've read.

Really?  Someone may just artificially evolve them (it happened once already on
wetware), and evolution in silico could move 10, nay 20, orders of magnitude
faster.

Martin Striz



Re: [agi] Cell

2005-02-08 Thread Stephen Reed
The published hardware description of the Cell SPUs - 128-bit vector 
engines with 128 registers each - matches the published Freescale AltiVec 
processor architecture.  I've looked over the programmer's documentation 
for that processor and believe that vector processing is of limited 
usefulness for the typical Cyc knowledge base instruction trace.  As you 
know, vector computations are well suited for fine-grained parallelism in 
which a single operation is applied simultaneously to multiple operands.  
In Cyc, there are more opportunities for large-grained 
inference parallelism as opposed to fine-grained parallelism.
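
To make the distinction concrete, a rough sketch in ordinary Python/NumPy --
nothing Cell- or Cyc-specific, and answer_query is a hypothetical stand-in for
an independent inference task:

import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Fine-grained (SIMD-style): one operation applied to many operands at once.
weights = np.random.rand(1024).astype(np.float32)
scaled = weights * 0.5                    # a single op over 1024 floats

# Coarse-grained (MIMD-style): many independent tasks, each with its own
# control flow. (A process pool would be needed for true CPU parallelism in
# Python; this only shows the task-per-query shape.)
def answer_query(q):
    return "answer(%s)" % q

queries = ["(isa Plato Person)", "(genls Dog Mammal)", "(likes Sue Cake)"]
with ThreadPoolExecutor() as pool:
    answers = list(pool.map(answer_query, queries))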

As the Cell programming model unfolds, it will be interesting to see just 
how much entertainment (game) AI programming will use the Cell SPUs as 
compared to using the Cell's conventional Power-derived CPU core. I predict 
that no game AI algorithm will use the SPUs.  This could be verified by 
examining the marketing claims of the game development code libraries that 
are sure to appear in the next couple of years.

Generally, I find that the Cell architecture is further evidence that 
Moore's Law performance expectations will hold for several more 
lithography nodes (process technology generations).  In particular, the 
use of chip area for multiple cores as opposed to simply more cache 
memory is a step in the right direction.  A spreadsheet I maintain 
predicts that the x86 architecture will have 256 cores per chip at the 3.76 
nanometer node, in the year 2022, which is nine lithography generations 
from now.  My assumption is that the number of cores will double with 
each lithography generation, and that Intel will continue to migrate to a 
new generation every two years.  It would suit Cyc-style AI processing 
best if multi-core CPUs evolved in the direction of high-performance 
MIMD (multiple instruction, multiple data) integer processing, as 
compared to the Cell SIMD (single instruction, multiple data) floating 
point processing.
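
A quick tabulation of that doubling model -- the 2004 / 90 nm single-core
baseline below is an illustrative assumption, so the exact figures will differ
somewhat from the spreadsheet:

# Cores double and the linear feature size shrinks by ~1/sqrt(2) at each
# two-year lithography node.
BASE_YEAR, BASE_NODE_NM, BASE_CORES = 2004, 90.0, 1   # assumed baseline

for gen in range(10):
    year = BASE_YEAR + 2 * gen
    node_nm = BASE_NODE_NM / (2 ** (gen / 2.0))
    cores = BASE_CORES * 2 ** gen
    print("%d: ~%.1f nm, %d cores" % (year, node_nm, cores))
# gen 9 -> 2022: ~4.0 nm, 512 cores (close to, but not exactly, the quoted
# 3.76 nm / 256-core figure, which evidently starts from a different baseline)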

Cheers.
-Steve


On Tue, 8 Feb 2005, Eugen Leitl wrote:

 
 I presume everyone here is aware that the Cell architecture has been
 officially announced. Technical details (as opposed to speculations gleaned
 off patents) are yet scarce, but there's definitely some promise this
 architecture becomes mainstream sometime within next two years.
 
 What are you going to do with it?
 
 

-- 
===
Stephen L. Reed  phone:  512.342.4036
Cycorp, Suite 100  fax:  512.342.4040
3721 Executive Center Drive  email:  [EMAIL PROTECTED]
Austin, TX 78731   web:  http://www.cyc.com
 download OpenCyc at http://www.opencyc.org
===



RE: [agi] Cell

2005-02-08 Thread Ben Goertzel

Hmmm...

It seems to me that the Cell is of no use for Novamente cognition, but could
be of great use for a sense-perception front-end for Novamente.

Novamente cognition would make better use of efficient MIMD parallelism,
rather than this kind of SIMD parallelism...

ben







[agi] Cell draws closer

2004-11-29 Thread Eugen Leitl

http://www.eet.com/article/printableArticle.jhtml?articleID=54200580&url_prefix=semi/news&sub_taxonomyID=

  Details trickle out on Cell processor
By Brian Fuller and Ron Wilson, EE Times
November 24, 2004 (3:54 PM EST)
URL: http://www.eet.com/article/showArticle.jhtml?articleId=54200580

SAN FRANCISCO -- The eagerly anticipated Cell processor from IBM, Toshiba and
Sony leverages a multicore 64-bit Power architecture with an embedded
streaming processor, high-speed I/O, SRAM and dynamic multiplier in an
effort, the partners hope, to revolutionize distributed computing
architectures.

Although the technical aspects of the design, which has been in the works for
nearly four years, are tightly held, details are emerging in excerpts from
papers to be released today for the 2005 International Solid-State Circuits
Conference (see story, page 94), as well as in patent filings.

The highly integrated Cell device has been billed as a beefy engine for
Sony's Playstation 3, due to be demonstrated in May. But the architecture
also addresses many other applications, including set-top boxes and mobile
communications. Workstations fitted with the Cell architecture -- a $2 billion
endeavor -- are already in the hands of game developers.

Five ISSCC papers from members of the 400-strong Cell processor team (see
related story, Best Development Teams, page 64) open peepholes onto a
highly modular and hierarchical first-generation device implemented in
90-nanometer silicon-on-insulator (SOI) technology.

At root, the Cell architecture rests on two concepts: the apulet, a bundle
comprising a data object and the code necessary to perform an action upon it;
and the processing element, a hierarchical bundle of control and streaming
processor resources that can execute any apulet at any time.

The apulets appear to be completely portable among the processing elements in
a system, so that tasks can be doled out dynamically by assigning a waiting
apulet to an available processing element. Scalability can be achieved by
adding processing elements.
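
To make the apulet idea concrete, a toy sketch -- plain Python with a thread
pool standing in for the processing elements; none of this reflects the actual
Cell hardware or the patent filings:

from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Apulet:
    code: Callable[[Any], Any]   # the action to perform
    data: Any                    # the object it is performed on

def run(apulet):
    return apulet.code(apulet.data)

apulets = [Apulet(code=sum, data=[1, 2, 3]),
           Apulet(code=str.upper, data="cell")]

# Dole out waiting apulets dynamically to whichever "element" is free.
with ThreadPoolExecutor(max_workers=4) as elements:
    results = list(elements.map(run, apulets))      # -> [6, 'CELL']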

These ideas are not easily achieved. According to data from Paul Zimmons, a
PhD graduate in computer science from the University of North Carolina at
Chapel Hill, they require a highly intelligent way of dividing memory into
protected regions called bricks, careful attention to memory bandwidth and
local storage, and massive bandwidth between processing elements -- even those
lying on separate chips.

At the top level, the architecture appears to be a pool of cells, or
clusters of perhaps four identical processing elements. All of the cells in a
system -- or for that matter, a network of systems -- are apparently peers.
According to one of the ISSCC papers on the Cell design, a single chip
implements a single processing element. The initial chips are being built in
90-nm SOI technology, with 65-nm devices reportedly sampling.

Each processing element comprises a Power-architecture 64-bit RISC CPU, a
highly sophisticated direct-memory access controller and up to eight
identical streaming processors. The Power CPU, DMA engine and streaming
processors all reside on a very fast local bus. And each processing element
is connected to its neighbors in the cell by high-speed highways. Designed
by Rambus Inc. with a team from Stanford University, these highways -- or
parallel bundles of serial I/O links -- operate at 6.4 GHz per link. One of
the ISSCC papers describes the link characteristics, as well as the
difficulties of developing high-speed analog transceiver circuits in SOI
technology.

The streaming processors, described in another paper, are self-contained SIMD
units that operate autonomously once they are launched.

They include a 128-kbyte local pipe-lined SRAM that goes between the stream
processor and the local bus, a bank of one hundred twenty-eight 128-bit
registers and a bank of four floating-point and four integer execution units,
which appear to operate in single-instruction, multiple-data mode from one
instruction stream. Software controls data and instruction flow through the
processor.

Another ISSCC paper describes a dynamic Booth double-precision multiplier
designed in 90-nm SOI technology.

Performance estimates

The processing element's DMA controller is so designed, it appears, that any
chip in a system can access any bank of DRAM in the cell through a
band-switching arrangement. This would make all the processing resources
appear to be a single pool under control of the system software.

Giving scale to the performance targets for the project, one of the ISSCC
papers puts the performance of the streaming-processor SRAM at 4.8 GHz. This
suggests the data transfer rate for 128-bit words across the local bus within
the processing element. When the Cell alliance was announced in 2001, Sony
Computer Entertainment CEO Ken Kutaragi estimated the performance of each
Cell processor -- a collection of apparently four processing elements in the
first implementation -- at 1 teraflops.
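
A rough sanity check of that figure, using the article's counts (four
processing elements per Cell, eight streaming processors per element, four FP
units per streaming processor); the ~4 GHz clock and the assumption of a fused
multiply-add (2 flops per cycle) per FP unit are illustrative guesses, not from
the article:

pes, spus_per_pe, fp_units_per_spu = 4, 8, 4
flops_per_unit_per_cycle = 2        # assumed FMA
clock_hz = 4.0e9                    # assumed, in line with the 4.8 GHz SRAM figure
peak = pes * spus_per_pe * fp_units_per_spu * flops_per_unit_per_cycle * clock_hz
print("%.2f TFLOPS" % (peak / 1e12))   # ~1.02 TFLOPS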

But UNC's