On Thu, Feb 10, 2005 at 08:42:59AM -0500, Brad Wyble wrote:

> I don't think you and I will ever see eye to eye here, because we have 
> different conceptions in our heads of how big this parameter space is.

It depends on the system. The one I talked about (automata networks) is not
very large, i.e., doable with a mole of switches. Is it a sufficiently
flexible framework? I suspect so, but the only way to find out would be to
try.
 
> Instead, I'll just say in parting that, like you, I used to think AGI was 
> practically a done deal.  I figured we were 20 years out.

Where did I say that AI is a done deal? Have you ever tried ordering a
mole of buckytronium from Dell? Try it sometime.
 
> 7 years in Neuroscience boot-camp changed that for good.  I think anyone 
> who's truly serious about AI should spend some time studying at least one 
> system of the brain.  And I mean really drill down into the primary 
> literature, don't just settle for the stuff on the surface which paints 
> nice rosy pictures.

Extremely relevant for whole body emulation, rather less relevant for AI.
(Don't assume that my background is computer science.) This is getting
off-topic, but this is precisely why WBE needs a molecular-level scan, and
machine learning to climb up the simulation layer ladder. Humans can't do it.
 
> Delve down to network anatomy, let your mind be blown by the precision and 
> complexity of the connectivity patterns.

It's a heterogeneous excitable medium: a spiking, high-connectivity network
that works with gradients and neurotransmitter packets. Some thousands of ion
channel types, some hundreds to thousands of neuron cell types.

This is about enough detail to seed your simulation with. Don't forget: we're
only using this as an educated guess to prime the co-evolution, on a
different substrate (you can emulate automata networks on 3d packet-switched
systems very efficiently).
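To make "excitable medium" concrete, a toy sketch in Python: a threshold automaton on a random directed graph, with firing and refractory states. The node count, fan-in, and threshold here are arbitrary illustration values, not a proposal for the actual seed.

```python
import random

# Toy excitable-medium automaton on a random directed graph.
# States: 0 = resting, 1 = firing, 2 = refractory.
# N, FANIN, and THRESHOLD are arbitrary illustration values.
random.seed(0)
N, FANIN, THRESHOLD = 200, 8, 2
inputs = [random.sample(range(N), FANIN) for _ in range(N)]
state = [1 if random.random() < 0.1 else 0 for _ in range(N)]

def step(state):
    nxt = []
    for i in range(N):
        if state[i] == 1:            # firing -> refractory
            nxt.append(2)
        elif state[i] == 2:          # refractory -> resting
            nxt.append(0)
        else:                        # resting: fire if enough inputs fired
            excitation = sum(1 for j in inputs[i] if state[j] == 1)
            nxt.append(1 if excitation >= THRESHOLD else 0)
    return nxt

for _ in range(20):
    state = step(state)
print(sum(1 for s in state if s == 1), "nodes firing after 20 steps")
```

The point of the sketch is only the update rule; the real object would have heterogeneous node types and packet-switched signaling instead of a fixed threshold.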
 
> Then delve down to cellular anatomy, come to understand how tightly 
> compact and well engineered our 300 billion CPUs are.  Layers and layers 
> of feedback regulation interwoven with an exquisite perfection, both 
> within cells and between cells.  What we don't know yet is truly 
> staggering.

Agreed. Fortunately, all of this is irrelevant for AI, because the hardware
artifacts are different.
 
> I guarantee this research will permanently expand your mind.

It did. Unfortunately, I didn't get beyond monographs.
 
> Your idea of what a "Hard" problem is will ratchet up a few notches, and 
> you will never again look upon any significant slice of the AGI pie as 
> something simple enough that it can be done by GA running on a few kg 

Evolutionary algorithms, not GA.

> of molecular switches.

Do you think anyone is smart enough to code a seed? If not, what is your idea
of an AI bootstrap?
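The distinction matters: evolutionary algorithms are a broader family than bitstring GAs, e.g. evolution strategies that mutate real-valued parameters directly. A minimal (1+1) evolution strategy in Python, with a stand-in sphere fitness (obviously not a fitness measure for an AI bootstrap):

```python
import random

# Minimal (1+1) evolution strategy: keep a single parent, mutate it,
# replace the parent only if the child is at least as fit.
# The sphere function is a stand-in fitness, nothing more.
random.seed(1)

def fitness(x):
    return -sum(v * v for v in x)   # maximize => minimize sum of squares

parent = [random.uniform(-5, 5) for _ in range(10)]
sigma = 0.5                          # fixed mutation step size
for _ in range(2000):
    child = [v + random.gauss(0, sigma) for v in parent]
    if fitness(child) >= fitness(parent):
        parent = child
print("best fitness:", fitness(parent))
```

A real run would adapt sigma (e.g. the 1/5 success rule) and co-evolve the fitness function itself; the sketch only shows why "GA" undersells the family.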


-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net
