Hi,
On Mon, May 14, 2007 4:57 pm, David Clark wrote:
Some people take Mathematics and their so-called proofs as gospel
when it comes to programming and AGI. Even though I have a Math minor
from University, I have used next to no Mathematics in my 30-year
programming/design career. I
Pei,
necessary to spend some time on this issue, since the definition of
intelligence one accepts directly determines one's research goal and
criteria in evaluating other people's work. Nobody can do or even talk
about AI or AGI without an idea about what it means.
This is exactly why I am
P.S.
I should have added one comment in my previous remarks: part of my
attack against those who try to make formal definitions of intelligence
is that I have a specific, technical argument that says that such formal
definitions are strictly impossible: that is what my AGIRI 2006 paper
Pei,
Thank you for that.
I like everything you say about trying to define intelligence. In
essence, you and I are in perfect agreement at that level of the discussion.
However, there was a slight confusion, in that previous 'challenge' of
mine, about the exact target of my remarks.
I was
But there is a second type of definition that tries to *formalize* what
the subject is, and that is where my challenge was really directed.
I believe that Gödel's Incompleteness Theorem basically renders this form of
your challenge impossible.
- Original Message -
From: Richard
Mark Waser wrote:
But there is a second type of definition that tries to *formalize*
what the subject is, and that is where my challenge was really directed.
I believe that Gödel's Incompleteness Theorem basically renders this
form of your challenge impossible.
Okay, now I have to figure
For any formal definition of intelligence, there will exist a form of
intelligence that is not covered by that definition because intelligence is
non-trivial/complex enough to invoke Gödel's theorem.
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To:
Mark,
Gödel's theorem does not say that something is not true, but rather that
it cannot be proven to be true even though it is true.
Thus I think that the analogue of Gödel's theorem here would be something
more like: For any formal definition of intelligence there will exist a
form of
I didn't say anything about "true". I said "not covered by that definition."
While I have no problem with your definition, and even accept that it may be
clearer -- I think that it is exactly analogous to mine, since "cannot be
proven" and "not covered" are the same.
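For what it's worth, the two formulations can be put side by side. This is a
hedged formalization of mine, not anyone's exact words: write Int(s) for "s is
intelligent" and T for the formal theory in which a candidate definition D is
stated. Then:

  Mark:   $\forall D \, \exists s :\ \mathrm{Int}(s) \land \lnot D(s)$        ("not covered")
  Shane:  $\forall D \, \exists s :\ \mathrm{Int}(s) \land T \nvdash D(s)$    ("not provably covered")

Whether the two coincide depends on whether "covered" is read extensionally
(D(s) is false) or proof-theoretically (D(s) is unprovable); Gödel's theorem
bears directly only on the second reading.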
- Original Message -
Shane Legg wrote:
Mark,
Gödel's theorem does not say that something is not true, but rather that
it cannot be proven to be true even though it is true.
Thus I think that the analogue of Gödel's theorem here would be something
more like: For any formal definition of intelligence there will
Shane Legg wrote:
Thus I think that the analogue of Gödel's theorem here would be
something more like: For any formal definition of intelligence
there will exist a form of intelligence that cannot be proven to be
intelligent even though it is intelligent.
With unlimited computing power this
Richard,
I was distinguishing between two different attitudes that people take to
the problem of making a definition. One attitude (the one you adopt
here, and the one I would also wholeheartedly adopt) is to look for a
useful *descriptive* definition: something that takes the commonsense
On 5/15/07, Richard Loosemore [EMAIL PROTECTED] wrote:
I will try to see if I can extract NARS and Novamente as special cases
of the framework at some point. I believe I have a chance of doing this
(I have actually thought about it, believe it or not), but it's not going
to happen soon. :-)
Hmmm. If Goldbach's conjecture is true (and provable), the program will loop
forever and is provably non-intelligent. If it's false, there's a
counterexample and it's intelligent. (Assuming you mean by "halt" to go on to
the AIXItl part.) The overall program is only a stumper if Goldbach is
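To make the program concrete, here is a minimal sketch of my reading of the
thread (the function names are my own invention; the real construction would
hand control to the AIXItl part where this sketch returns):

def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def has_goldbach_split(n):
    # True if even n >= 4 is the sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_search():
    # Loops forever iff Goldbach's conjecture is true; halts (and would
    # then go on to the AIXItl part) at the first counterexample.
    n = 4
    while True:
        if not has_goldbach_split(n):
            return n
        n += 2

Nobody can decide by inspection whether this program halts without settling
the conjecture -- which is exactly what makes its "intelligence" hard to
certify.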
It would be nice to have a universal definition of general intelligence, but I
don't think we even share enough common intuition about what is intelligent or
what is general.
Instead what we seem to have is, for example, a definition based on uncertain
reasoning from somebody building an
On 5/15/07, Derek Zahn [EMAIL PROTECTED] wrote:
The point is that maybe we don't need a definition of intelligence; all we
need is a vision of an endpoint and (the really interesting bit) the steps
we'll take to get there.
In that case, the vision of an endpoint is exactly your working
I too very largely and strongly agree with what Pei says below.
But in all this discussion, it looks like one thing is being missed (please
correct me).
The task is to define TWO kinds of intelligence, not just one - you need a
dual, not just a single, definition of intelligence. Everyone seems
AGI is a race where everyone has drawn their own finish line.
My goal is to have a machine predict natural language text as well as the
average adult human. Why?
1. It is a hard AI problem. A solution might lead to a better understanding
of human learning.
2. Language modeling has useful
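To see what a measurable version of that goal could look like - this is a
minimal sketch of mine, not Matt's actual setup - a predictor can be scored by
average bits per character on held-out text, here with a smoothed character
bigram model:

import math
from collections import defaultdict

def bits_per_char(train, test):
    # Gather character-bigram counts from the training text.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(train, train[1:]):
        counts[a][b] += 1
    # Score the test text; fewer bits per character means better prediction.
    total_bits = 0.0
    for a, b in zip(test, test[1:]):
        ctx = counts[a]
        p = (ctx[b] + 1) / (sum(ctx.values()) + 256)  # add-one smoothing
        total_bits += -math.log2(p)
    return total_bits / max(1, len(test) - 1)

# e.g. bits_per_char(open("train.txt").read(), open("test.txt").read())

On this yardstick, a model predicts "as well as the average adult human" when
its score matches human performance (Shannon's classic prediction experiments
put people at very roughly one bit per character).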
Mike,
If you take a look at my papers, you'll see that I distinguish not 2,
but 5 different types of goals currently associated with the label
"AI". Your first type, to simulate the human mind, is also included.
Since a working definition is used to guide one's research, it doesn't
need to cover other
On 5/15/07, Derek Zahn [EMAIL PROTECTED] wrote:
Rather than try to come up with universally accepted definitions for a
concept that we all view differently, perhaps any proposed AGI
(or AGI-like) path could put forward its perceived endpoint: that is,
imagine the system you'd like to build...
Pei,
Fully agree. The situation in mainstream AI is even worse on this
topic, compared to the new AGI community. Will you write something for
AGI-08 on this?
Marcus suggested that I submit something to AGI-08. However, I'm not
sure what I could submit at the moment. I'll have a think about
I am suggesting that there are two main types of intelligence - and humans
have both.
Simulating the human mind isn't a definition of either of those types, or
intelligence, period.
The two main types of intelligence have long been given names by mainstream
psychology -
convergent or
On 5/15/07, Shane Legg [EMAIL PROTECTED] wrote:
Hmmm. Ok, imagine that you have two optimization algorithms
X and Y and they both solve some problem equally well. The
difference is that Y uses twice as many resources as X to do it.
As I understand your notion of intelligence, X would be
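As a toy illustration of that X-versus-Y point (my own example, not Shane's):
two routines can return identical answers while consuming very different
resources, and the question is whether a definition of intelligence should
count that difference against Y.

import time

def solve_x(data):
    # Same answer, fewer resources: built-in O(n log n) sort.
    return sorted(data)

def solve_y(data):
    # Same answer via selection sort: far more comparisons.
    out = list(data)
    for i in range(len(out)):
        j = min(range(i, len(out)), key=out.__getitem__)
        out[i], out[j] = out[j], out[i]
    return out

data = list(range(3000, 0, -1))
assert solve_x(data) == solve_y(data)
for solve in (solve_x, solve_y):
    t0 = time.perf_counter()
    solve(data)
    print(solve.__name__, round(time.perf_counter() - t0, 4), "seconds")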
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
I am suggesting that there are two main types of intelligence - and humans
have both.
Simulating the human mind isn't a definition of either of those types, or
intelligence, period.
Sorry for the misunderstanding.
The two main types of
For the philosophy of AI - and this IS a discussion of philosophy - to
ignore Psychology and human intelligence, and the very extensive work
already done here, including on creativity, doesn't seem very wise, given
that AI/AGI still hasn't got to square one in the attempt either to
emulate
or to
Pei,
Here are some references. You can Google "divergent vs convergent."
Do note that I am NOT suggesting these definitions are adequate, merely that
Psychology has long identified two different kinds of intelligence; broadly
I think that's right, and yes, it conforms fairly neatly with the
Ben,
Am a little confused here - not that we aren't talking very roughly along the
same lines and about the same areas. It's just that, for me, conceptual blending
is simply a form of analogy, which we've just discussed (and one that works by
sensory/imaginative rather than symbolic analogy).
Mike,
If the difference is just innate ability vs. acquired ability, we
don't need two types of intelligence. Many AGI models, including NARS,
can handle both consistently.
Pei
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei,
Here are some references. You can Google divergent vs