Mike Tintner wrote:
Sounds a little confusing. Sounds like you plan to evolve a system
through testing thousands of candidate mechanisms. So one way or
another you too are taking a view - even if it's an evolutionary
"I'm not taking a view" view - on, and making a lot of assumptions about
Linas Vepstas wrote:
On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75). The (p=0.75)
Hi,
No: the real concept of lack of grounding is nothing so simple as the
way you are using the word grounding.
Lack of grounding makes an AGI fall flat on its face and not work.
I can't summarize the grounding literature in one post. (Though, heck,
I have actually tried to do that in
Benjamin Goertzel wrote:
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben,
Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness
Benjamin Goertzel wrote:
Hi,
No: the real concept of lack of grounding is nothing so simple as the
way you are using the word grounding.
Lack of grounding makes an AGI fall flat on its face and not work.
I can't summarize the grounding literature in one post. (Though,
Richard,
So here I am, looking at this situation, and I see:
AGI system interpretation (implicit in the system's use of it)
Human programmer interpretation
and I ask myself which one of these is the real interpretation?
It matters, because they do not necessarily match up.
That
RL: In order to completely ground the system, you need to let the system
build its own symbols
V. much agree with your whole argument. But - I may well have missed some
vital posts - I have yet to get the slightest inkling of how you yourself
propose to do this.
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:
RL: In order to completely ground the system, you need to let the system
build its own symbols
Correct. Novamente is designed to be able to build its own symbols.
What is built in are mechanisms for building symbols, and for
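As an illustration of the distinction Ben is drawing here (the mechanisms for building symbols are built in, the symbols themselves are not), the following is a minimal, generic sketch of one such mechanism: mint a new symbol whenever an incoming pattern is not close to any symbol the system has already formed. All names below are invented for the example; this is not Novamente's actual code or algorithm.

    # Generic sketch, not Novamente's mechanism: the system starts with no
    # symbols at all, only a rule for creating them from raw observations.

    def build_symbols(observations, threshold=1.0):
        """Cluster observation vectors; each cluster becomes a new symbol."""
        symbols = []  # each entry: {"name", "centroid", "members"}
        for obs in observations:
            best, best_dist = None, threshold
            for sym in symbols:
                dist = sum((a - b) ** 2 for a, b in zip(obs, sym["centroid"])) ** 0.5
                if dist < best_dist:
                    best, best_dist = sym, dist
            if best is None:
                # Nothing close enough exists yet: create a brand-new symbol.
                symbols.append({"name": "sym_%d" % len(symbols),
                                "centroid": list(obs), "members": [obs]})
            else:
                # Assimilate the observation and re-center the symbol.
                best["members"].append(obs)
                n = len(best["members"])
                best["centroid"] = [sum(v) / n for v in zip(*best["members"])]
        return symbols

    print(build_symbols([[0.1, 0.2], [0.12, 0.19], [5.0, 5.1], [4.9, 5.2]]))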
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
The complaint is not "your symbols are not connected to experience."
Everyone and their mother has an AI system that could be connected to
real-world input. The simple act of connecting to the real world is
NOT the core problem.
Are
Bryan Bishop wrote:
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
The complaint is not "your symbols are not connected to experience."
Everyone and their mother has an AI system that could be connected to
real-world input. The simple act of connecting to the real world is
NOT the
On Nov 14, 2007 11:58 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
Are we sure? How much of the real world are we able to get into our AGI
models anyway? Bandwidth is limited, much more limited than in humans
and other animals. In fact, it might be the equivalent of worm tech.
To do the
Sounds a little confusing. Sounds like you plan to evolve a system through
testing thousands of candidate mechanisms. So one way or another you too
are taking a view - even if it's an evolutionary "I'm not taking a view"
view - on, and making a lot of assumptions about
- how systems evolve
Mike Tintner wrote:
RL: In order to completely ground the system, you need to let the system
build its own symbols
V. much agree with your whole argument. But - I may well have missed
some vital posts - I have yet to get the slightest inkling of how you
yourself propose to do this.
Well,
Mark Waser wrote:
I'm going to try to put some words into Richard's mouth here since
I'm curious to see how close I am . . . . (while radically changing the
words).
I think that Richard is not arguing about the possibility of
Novamente-type solutions as much as he is arguing about
Richard,
The idea of the PLN semantics underlying Novamente's probabilistic
truth values is that we can have **both**
-- simple probabilistic truth values without highly specific interpretation
-- more complex, logically refined truth values, when this level of
precision is necessary
To make
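For concreteness, here is a rough sketch of the two levels Ben describes: a bare probability as the simple truth value, and a probability plus an evidence count (yielding an approximate interval) as the more refined one. The classes are invented for this example and are not PLN's actual truth-value types or formulas.

    # Invented illustration, not PLN's actual types: the same assertion can
    # carry either a loose number or a more refined, interpretable value.

    import math

    class SimpleTV:
        """Just a strength, with its interpretation deliberately left loose."""
        def __init__(self, strength):
            self.strength = strength
        def __repr__(self):
            return "(p=%.2f)" % self.strength

    class CountedTV(SimpleTV):
        """Strength plus evidence count; more evidence gives a tighter interval."""
        def __init__(self, strength, count):
            SimpleTV.__init__(self, strength)
            self.count = count
        def interval(self, z=1.96):
            half = z * math.sqrt(self.strength * (1 - self.strength) / max(self.count, 1))
            return (max(0.0, self.strength - half), min(1.0, self.strength + half))
        def __repr__(self):
            lo, hi = self.interval()
            return "(p=%.2f, n=%d, ~[%.2f, %.2f])" % (self.strength, self.count, lo, hi)

    print("I like cats", SimpleTV(0.75))        # vague: just a number
    print("I like cats", CountedTV(0.75, 400))  # refined: number plus evidence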
Mike Tintner wrote:
RL: Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75). The (p=0.75)
is supposed to express the idea that the statement [I like cats] is
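For readers following along, a minimal sketch of what such a probability-annotated fact could look like as a data structure; the structure is made up for illustration and says nothing about Novamente's actual internals, which is exactly the open question in this thread about what the number means.

    # Illustrative only: a "fact" is just a statement paired with a number,
    # and nothing in the structure itself says what that number means.

    from dataclasses import dataclass

    @dataclass
    class Assertion:
        statement: str   # e.g. "I like cats"
        strength: float  # 0.75 -- frequency? degree of liking? confidence?

    kb = [Assertion("I like cats", 0.75),
          Assertion("cats are mammals", 0.99)]

    for a in kb:
        print("[%s](p=%.2f)" % (a.statement, a.strength))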
Ben,
Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness or subjective interpretation of the qualifiers, because you
have to force the system to do its own grounding, and hence its own
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben,
Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness or subjective interpretation of the qualifiers, because you
have to
On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75). The (p=0.75)
is supposed to
But as a human, asking Wen out on a date, I don't really know what
"Wen likes cats" ever really meant. It neither prevents me from talking
to Wen, nor from telling my best buddy that ...well, I know, for
instance, that she likes cats...
yes, exactly...
The NLP statement "Wen likes cats" is
So, vagueness can not only be imported into an AI system from natural
language, but also propagated around the AI system via inference.
This is NOT one of the trickier things about building probabilistic AGI;
it's really kind of elementary...
-- Ben G
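Ben's point that vagueness is both imported from natural language and then propagated by inference can be made concrete with a toy deduction. The combination rule below is a deliberately naive independence assumption chosen only to show the numbers spreading; it is not PLN's deduction formula, and the facts are invented.

    # Toy illustration of vagueness propagating through inference.
    # The combination rule is naive and NOT PLN's deduction rule.

    facts = {
        ("Wen", "likes", "cats"): 0.75,                 # imported from vague NL
        ("cats", "are", "low-maintenance pets"): 0.60,  # also vague
    }

    def deduce(p_ab, p_bc):
        """Chain two uncertain links, (naively) assuming independence."""
        return p_ab * p_bc

    p = deduce(facts[("Wen", "likes", "cats")],
               facts[("cats", "are", "low-maintenance pets")])
    print("[Wen likes low-maintenance pets](p=%.2f)" % p)
    # The inputs' unclear semantics are now baked into a derived number
    # whose interpretation is even less clear.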