On Wed, Apr 04, 2007 at 08:23:37AM -0700, David Clark wrote:

> Although some emergent and self-organization surely occurs in our brains,

What I meant is that the trouble with biology is that it's difficult to
analyse without access to the whole picture, unlike human designs,
which can be understood analytically as the sum of their parts.

The learning part is not really relevant, because typically you would
plug in more or less mature brain tissue. (Of course, by looking at
structure/function diffs in less mature systems you can see how
the knowledge extraction from the environment is done -- synapse pruning
is a hint.)
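To make the pruning hint concrete, here is a toy sketch (purely illustrative, not part of any real simulation): an overconnected "infant" network matures by discarding synapses whose weights fall below a threshold, a crude stand-in for activity-dependent pruning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "overconnected infant" network: 100 neurons, dense random synapses.
weights = rng.random((100, 100))

def prune(weights, threshold):
    """Zero out synapses weaker than the threshold, crudely mimicking
    activity-dependent pruning during maturation."""
    pruned = weights.copy()
    pruned[pruned < threshold] = 0.0
    return pruned

mature = prune(weights, threshold=0.8)
survivors = np.count_nonzero(mature)
print(f"synapses kept: {survivors} of {weights.size}")
```

In a real system the threshold would of course be driven by use, i.e. by the structure of the environment, which is the point: the connectivity diff between young and mature tissue tells you something about what was learned.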

> how do you reconcile the fact that babies are very stupid compared to
> adults?  Babies have no less genetic hardware than adults but the difference

The wiring is not determined by the genome; the genome only provides a
facility envelope. Getting the genome into the picture will be necessary at
some point, but the current simulations are hard pressed just by looking at
the function of static hardware (seconds vs. minutes/hours/days).

> in intelligence is gigantic.  I contend the difference is that adults have
> 20+ years of learning from other intelligent people and babies do not.  I
> don't see any evidence that would support a claim that adult level
> intelligence "emerges" by itself in adults.  This evidence seems to be
> lacking for both humans and computer AIs.

Babies come with enough functionality onboard to be able to
extract knowledge from a suitably structured/supportive environment.
As such they're a good model for an artificial infant, but that is not
the scope of the Blue Brain project, as far as I understand.
 
> If a baby never got any more intelligent than just after it was born, would
> you call it intelligent?  I am not saying babies, just born, exhibit NO
> intelligence but would you say that baby level intelligence is good enough
> for an AGI to be called intelligent?

I would say that if you could mimic human infant development
for a few years, you'd get one damn useful AI. If you can make this
scale to an adult, the AI problem is solved.
 
> I know your project (or AGI idea) is based on some form of brain simulation
> but making blanket statements about currently unknown (unproven) issues
> doesn't seem warranted by the facts.

I don't have an AI project, I'm interested in individually accurate numerical
models of animals, including people. It's about removing some of the limits
to the human condition. The AI part is only a side effect of that.
 
> Why can't "database design" level intelligence be modeled and studied at a
> level that our computers can efficiently do, without the need to model the

Our computers are pretty pathetic, and our programmers are even more so.

> incredible complexity of a human brain?  If I was creating an accounting

The models are not complex. The emulation part is a standard numerics
package. The complexity comes directly from scans of neurons. The resulting
behaviour is complex, but IMHO not hopelessly so. I'm interested in
automatic optimization, which is based on feature and function abstraction,
and co-evolution of machine/representation. This is a much harder task
than "mere" brute-force simulation -- however, much easier than classical
AI.
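As a toy illustration of the "standard numerics" point (a minimal sketch of my own, nothing to do with Blue Brain's actual code, which uses detailed compartmental models): even a crude leaky integrate-and-fire neuron reduces to stepping a simple ODE with forward Euler. The parameters below are made up for illustration.

```python
# Toy leaky integrate-and-fire neuron, forward-Euler integration.
# Illustrative only; real compartmental simulators solve far larger
# coupled systems, but the numerics are the same idea.

def simulate_lif(i_input, t_max=100.0, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return the membrane-potential trace and spike times (ms)
    for a constant input current."""
    v = v_rest
    trace, spikes = [], []
    for step in range(int(t_max / dt)):
        # dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += (-(v - v_rest) + r_m * i_input) / tau * dt
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif(i_input=2.0)
print(f"{len(spikes)} spikes in 100 ms")
```

The model itself is trivial; all the complexity would come from the measured morphology and connectivity you feed into such an integrator, which is exactly the scan-driven part.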

I'm not religious about this; it's just that this appears to me to be barely
doable, whereas classical AI is quite beyond what mere human designers and
programmers can do. Just because you're intelligent, it doesn't mean you're
intelligent enough to understand how you're intelligent.

> program, I wouldn't need a model of the human brain to make sure that the
> debits equal the credits.  What makes intelligence ineligible for a solution
> using existing computer techniques?

It's the people. Humans can't handle complexity very well. For some
reason (no idea why) there was a school that thought that human experts
knew just what they were solving, and could externalize that knowledge
into a rule-based design that computers could execute. That approach was
pretty much a complete debacle: the experts neither knew how they
were doing it, nor could they externalise that knowledge in a representation
that was useful for encoding it in a classical machine.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
