Fine. Got your point. Don't think it's worth worrying about such obscure
definitions, but gotcha.
But, by the way, I did offer you a truly objective definition. My working
title for it is a "psychosystems" definition (although I'm open to other
suggestions). It focusses on how problems arise from a system's (or
agent's) interactions with its environment.
Where philosophers and scientists of all kinds go wrong here is in
dissociating the "psycho" - what's happening in the head - from its causes
(or objects) in the environment. You have to look at the two together for
things fully to make sense. The kinds of definitions you're talking about
seem very "psycho".
(The same happens in psychology, when scientists dissociate emotions from
the system's (i.e. person's) interactions with its environment - when
depression and anxiety, for example, are treated as purely internal
matters rather than as responses to the person's lifestyle in their
environment.)
To a great extent, you see the same dissociative tendencies in AI, when
philosophers and programmers focus more on internal programming
methodologies than on the external problems they are designed to solve.
P.S. Formal definitions are, at the moment, worthless by definition. The
only real examples of intelligence, problem-solving and goals at the
moment are living creatures, and we can't be sure how their brains and
nervous systems work, so we can only validly talk in the "informal" terms,
as you call them. Computers are still only extensions of human beings -
their problem-solving is only an extension of ours. They have no
independent existence without us as yet. So intelligence in all its forms
must objectively be defined first in terms of humans and animals.
----- Original Message -----
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Friday, April 27, 2007 12:34 AM
Subject: Re: [agi] Circular definitions of intelligence
Mike Tintner wrote:
It's driving me nuts because it's basically simple.
Mike, you are getting the wrong end of the stick so badly that you are
actually placing me on the *opposite* end of the argument to where I am,
and then trying to argue against me! ;-)
Here is the thing that is NOT simple: there are some people who claim
that they can give hard, objective definitions for terms like
'intelligence' 'goal' 'solve' etc .... but if you look carefully at their
supposedly "hard, objective definitions" they do not match up with
everyone's commonsense use of those terms. Their definitions do things
like classifying weird optimization procedures as 'intelligent', even
though they don't look intelligent to you or me, and they sometimes define
intelligence in terms of abstract 'algorithms' that require infinite
computing power, so their actual behavior can never be checked in the real
world.
It is those people I am arguing against. The trouble is, disentangling
their definitions in order to SHOW that they are bankrupt is hard work. It
is not simple.
I don't give a hoot about informal, loose definitions, which are a dime a
dozen anyhow (so your offering, below, although interesting, has nothing
to do with the argument), I only care about attempts to produce a strictly
formal, objective definition that can be cashed out without any reference
to things that are subjective ...... and even then, I only care about
those because I am attacking them, not because I am defending them.
So please, don't put me on a train bound for the nether end of Derrida's
ass: that is what I have been accusing the other side of doing. ;-)
Part of the reason I am doing this, BTW, is that some of those people are
using their 'definitions' to claim superior scientific status for their
work, and then using that supposedly superior scientific status as a stick
to beat down people who are accusing them of wasting vast amounts of money
pursuing an approach to AI that could well be a complete waste of time.
So this question has real consequences: 90% of the AI research
person-hours might be getting poured down the toilet because some people
are claiming scientific validity for what they do, when in fact they are
just pissing into the wind. This is no joke.
Richard Loosemore.
My definition covers your point:

"If I drop this pencil, does it have the 'goal' of reaching the floor? If
'goal' is the wrong word here (and clearly it is), then what exactly is a
real goal?"
No - only living creatures (and agents with some kind of mind) have
general drives and the power to set specific goals for themselves, and
then solve the problems of how to reach those goals.
Inanimate matter can only move in basically straight lines, and so can
only be said to have "destinations." Put a forked road or an obstacle in
its path and it crashes into the obstruction. It has neither the mental
nor the physical capacity to solve problems.
Living creatures have "goals" - if you put obstacles in the path to their
goals, they have the capacity, both cognitively and physically, to solve
the problem of how to get around those obstacles and still reach their
goal.
All problem-solving comes down to that.
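(The contrast above - a "destination" that a forked road defeats, versus
a goal that survives obstacles - could be illustrated with a toy sketch.
The grid and function here are my own illustration, not anything from the
thread: a goal-directed agent searches for a detour where a ballistic
object would simply crash into the wall.)

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search: return a shortest path from start to goal
    around blocked cells ('#'), or None if the goal is unreachable.
    Cells are (row, col) pairs."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])   # each queue entry is a partial path
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # the obstacle cannot be circumvented

# A wall blocks the direct diagonal route; the agent detours around it.
grid = ["...",
        ".#.",
        "..."]
path = find_path(grid, (0, 0), (2, 2))
```

An object moving "in a straight line" toward (2, 2) would hit the wall at
(1, 1); the search instead returns a five-cell route around it - a crude
stand-in for what the email means by solving the problem of reaching a
goal.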
If you keep redefining the basic terms - goal, problem, etc. - you achieve
nothing but disappearing up your rectum in company with Derrida and a
million other philosophers. Language can never be precise.
The big deal in terms of AGI, for me, is polishing up a little the
definition of the second kind of intelligent problem-solving, i.e.
open-ended problem-solving, which is central to AGI - and then finding a
few choice examples of what is entailed, with the memorability of the
Turing Test example, but without its impossible vagueness.
----- Original Message -----
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Thursday, April 26, 2007 9:58 PM
Subject: Re: [agi] Circular definitions of intelligence
Mike Tintner wrote:
You guys are driving me nuts.
Jumping in at the middle, here goes:
"Intelligence is the capacity to solve problems.
(An intelligent agent solves problems in order to reach its goals)
Problems occur when an agent must select between two or more paths to
reach its goals."
Sorry to hear it's driving you nuts, but....
Jumping in at the middle means you missed the original point,
unfortunately: the original point is whether you can build a
definition without begging any questions, and the terms 'solve',
'problem', 'agent' and 'goal' in the above definition all require
definitions of their own.
When you really push hard on it, it turns out that these terms cannot be
defined without implicitly leaving it up to an 'intelligence' (i.e. us)
to make a judgment call about what constitutes 'solve', 'problem',
'agent' and 'goal'.
Try it: what counts as a 'goal'? If I drop this pencil, does it have
the 'goal' of reaching the floor? If 'goal' is the wrong word here (and
clearly it is), then what exactly is a real goal?
You may have to go back to the beginning of the thread and read exactly
what I was arguing, to know why I said what I did a few hours ago.
Why is it important? Well, I did say why, too..... :-)
Eric B. Ramsay wrote:
> Several emails ago, both Ben and Richard said they were no longer
> going to continue this argument, yet here they are - still arguing.
> Will the definition of intelligence be able to accommodate this
> behavior by these gentlemen?
Well...... actually I said "Unless you or someone else comes up with a
definition that does not fall into one of these traps, I am not going to
waste any more time arguing the point."
But Ben did try to come up with one, so I continued.
Okay, so at a meta level it was brainless of me to continue ;-)
Richard Loosemore.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&
--
No virus found in this incoming message.
Checked by AVG Free Edition. Version: 7.5.463 / Virus Database:
269.6.1/776 - Release Date: 25/04/2007 12:19