On Jan 7, 2008 9:12 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>
> Robert,
>
> Look, the basic reality is that computers have NOT yet been creative in any
> significant way, and have NOT yet achieved AGI - general intelligence, - or
> indeed any significant rulebreaking adaptivity; (If you disagree, please
> provide examples. Ben keeps claiming/implying he's solved them or made
> significant advances, but when pressed never provides any indication of
> how).

We all agree that AGI is not yet achieved.

Space travel to Proxima Centauri is also not yet achieved, nor is human
cloning ... there is a big difference in science between

-- "not yet achieved, but seems possible based on available knowledge"

and

-- "doesn't seem possible based on available knowledge"

> If you are truly serious about solving these problems, I suggest, you should
> be prepared to be "hurt" - you should be ready to consider truly radical ideas
> - for the ground on which you stand to be questioned - and be seriously
> shaken up. You should WELCOME any and all of your assumptions being
> questioned. Even if, let's say, what I or someone else suggests is in the
> end nutty, drastic ideas are good for you to contemplate at least for a
> while.

Most of us on this list are already aware of the possibility that it is not
possible to achieve high levels of intelligence using digital computer
programs, given realistic space and time constraints.

It is scientifically possible that Penrose is right, and that to achieve
human-like levels of intelligence in a machine, one needs a machine that
exploits weird, as-yet poorly understood quantum gravity effects.

However, at present, that Penrose-ean hypothesis does not seem that likely
to most of us on this list; and given the current state of science, it's not
a hypothesis that we really can explore in detail.  Quantum gravity is in a
confused state, and quantum computing (let alone quantum gravity computing)
is in its infancy.

There is also always the possibility that the whole modern scientific
world-view is deeply flawed in a way that is relevant to AGI.  Maybe digital
computers are unable to lead to human-level AI, for some reason totally
unrelated to computability theory and quantum gravity and all that.  There
is plenty in the world that we don't understand -- I recommend Damien
Broderick's recent and excellent book "Outside the Gates of Science" for
anyone who doesn't agree...

But, this list is devoted to exploring the hypothesis that AGI **can** be
achieved via creating intelligent machines -- and mainly, at the moment, to
the hypothesis that it can be achieved via creating intelligent digital
computer programs.

We realize this hypothesis may be wrong, but it seems likely enough to us to
merit a lot of attention and effort aimed at validation.

Your supposed arguments against the hypothesis are nowhere near as original
as you seem to think, and nearly everyone on this list has heard them before
and not found them convincing.  I read "What Computers Can't Do" by Hubert
Dreyfus as a child in the 1970s, and your diatribes don't seem to add
anything to what he said there.

If you think the whole digital-computer-AGI pursuit is a wrong direction and
a waste of time, that's fine.  But why do you feel the need to keep
informing us of this, over and over?

For instance, I think string theory is probably wrong.  But I don't see any
point in spending my time trolling on string theory email lists and harping
on this point repeatedly and confusingly.  Let them explore their
hypothesis...

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=82594941-c3bbc7