Thanks for the detailed response.

----- Original Message ----- 
From: "Eugen Leitl" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, April 04, 2007 8:52 AM
Subject: Re: [agi] Growing a Brain in Switzerland


> The learning part is not really relevant, because typically you would
> plug in more or less mature brain tissue. (Of course, by looking at
> structure/function diffs in less mature systems you can see how
> the knowledge extraction from the environment is done (e.g. synapse
> pruning is a hint).

The brain has been found to store its data in a very distributed fashion.
How could a computer that tried to simulate a brain work if exact
knowledge of the whole brain was required before anything could work at
all?  It hadn't occurred to me that you or the Blue Brain Project would be
coding the exact adult result of decades of human learning.  The learning
algorithms and experience would then just be some of this data encoded in
the neurons and connections, created by a human, I presume?

> The wiring is not determined by the genome, it's only a facility envelope.
> Getting the genome into the picture will be necessary at some point, but
> the current simulations are sore pressed by looking at function of static
> hardware (s vs. minutes/hours/days).

It would seem to me that the wiring of a baby's brain must be either random
or determined by information encoded in the genome.  If you have an
alternative suggestion, I would be happy to hear it.  Are they (the Blue
Brain Project) just modeling a specific neuron array taken directly from a
human, without any knowledge of why or how it works?

> I would say that if you'd be able to mimick the human infant development
> for a few years, then you'd get one damn useful AI. If you can make this
> scale to an adult, the AI problem is solved.

My point wasn't whether you could teach the AI like a child but rather
whether a child-level intelligence would be considered intelligent.  My
feeling is that we already have 6 billion people, and if all we want is
just another human-equivalent being, then why create an AGI when the real
thing is easily available?

> Our computers are pretty pathetic, and our programmers are even more so.

Compared to what?  "Pathetic" is a pretty strong word!  Good software takes
time and great software is on the way IMHO.

> The models are not complex. The emulation part is a standard numerics
> package. The complexity comes directly from scans of neurons. The
> resulting behaviour is complex, but IMHO not hopelessly so. I'm
> interested in automatic optimization, which is based on feature and
> function abstraction, and co-evolution of machine/representation. This
> is a much harder task than "mere" brute-force simulation -- however,
> much easier than classical AI.

I would be interested in your reasons for claiming it is "however, much
easier than classical AI".  Everyone should try to work on the projects
they feel have the best chance of success, but the relative merits of
differing approaches are certainly still open for debate.

> I'm not religious about this, it's just this appears to me to be barely
> doable, whereas classical AI is quite beyond what mere human designers
> and programmers can do. Just because you're intelligent, it doesn't
> mean you're intelligent enough to understand how you're intelligent.

It also doesn't mean I don't "understand how you're (I'm) intelligent".
Whether I am intelligent or not says nothing either way about my ideas on
intelligence :)

> It's the people. Humans can't handle complexity very well.

At any instant in time, I would agree with you.  However, a human can use
tools like paper and computers to juggle any amount of complexity to solve
problems.  I see no limit to the level of complexity a single human can
work intelligently with.  If problems or tasks are sub-divided among a
number of people, then the speed and scope of the complexity can be even
greater.  Our eye gathers information mostly from a very small area of the
retina with a very high cone density.  This spot is quickly (many times
per second) shifted around whatever we happen to be looking at, and our
brains create the images we think we see out of these montages.  Our
brains work on complex projects in a similar manner, creating a virtual
rather than real complex picture.

> For some
> reason (no idea why) there was a school that thought that human experts
> knew just what they were solving, and could externalize that knowledge
> into a rule-based design that computers could execute. That approach was
> pretty much a complete debacle (the experts neither knew how they
> were doing it, nor could they externalise that knowledge in a
> representation that was useful for encoding it in a classical machine).

I have worked with many very smart professional people who knew their
work but had no clue about systems or the precise analytical thought
required for programming or working with computers.  I don't think expert
systems or rule-based systems are of any value for creating an AGI.
There are other ways to build intelligent systems, IMO, that work at a
language-type level and don't have the drawbacks you mention.

-- David Clark


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303