On Monday 15 October 2007 04:45:22 pm, Edward W. Porter wrote:
I misunderstood you, Josh. I thought you were saying semantics could be
a type of grounding. It appears you were saying that grounding requires
direct experience, but that grounding is only one (although perhaps the
best)
Edward W. Porter wrote:
This is in response to Josh Storrs Monday, October 15, 2007 3:02 PM
post and Richard Loosemore’s Mon 10/15/2007 1:57 PM post.
RL: Just because System 2 did
not acquire its own knowledge from its own personal experience would not
be good grounds [sorry] for saying it is not grounded.
How can it test its knowledge and ongoing inferences? AGI - human and
animal GI - is continual self-questioning and testing. What IS the
On Tuesday 16 October 2007 09:24:34 am, Richard Loosemore wrote:
If I may interject: a lot of confusion in this field occurs when the
term semantics is introduced in a way that implies that it has a clear
meaning [sic].
Semantics does have a clear meaning, particularly in linguistics and
Just a quick note: Sex - that's a narrow AI, but Levy reportedly also
forecasts legalization of marriages with robots by 2050. That would
probably take AGI, and I guess not just an AGI but, in *many* ways, a
very human-like AGI. It seems to me that most AGI researchers don't
really target such
From: Jiri Jelinek [mailto:[EMAIL PROTECTED]]
Subject: Re: [agi] Why roboticists have more fun
On 16/10/2007, John G. Rose [EMAIL PROTECTED] wrote:
Part of the reason AI has so much damaged credibility is that over the past
decades there have always been these predictions that by some year robots
will be doing this or robots will be doing that. Any idiot can make
predictions for 2050.
Josh, your Tue 10/16/2007 8:58 AM post was a very good one. I have just
a few comments in all-caps.
The view I suggest instead is that it's not the symbols per se, but the
machinery that manipulates them, that provides semantics.
MACHINERY WITHOUT REPRESENTATION TO COMPUTE FROM IS OF AS
RICHARD LOOSEMORE WROTE IN HIS Tue 10/16/2007 9:25 AM POST.
So if someone tries to talk about what the grounding problem is by
defining it in terms of semantics, I start to wonder what they're
putting on their cornflakes in the morning. The trivial senses of
semantics don't apply, and the deeper senses are so vague that they
are almost synonymous with grounding.
Completely wrong. Grounding is a fairly shallow concept that falls apart
as an explanation of meaning under fairly moderate scrutiny. Semantics is, by
On Tuesday 16 October 2007 03:24:07 pm, Edward W. Porter wrote:
AS I SAID ABOVE, I AM THINKING OF LARGE COMPLEX WEBS OF COMPOSITIONAL AND
GENERALIZATIONAL HIERARCHIES, ASSOCIATIONS, EPISODIC EXPERIENCES, ETC, OF
SUFFICIENT COMPLEXITY AND DEPTH TO REPRESENT THE EQUIVALENT OF HUMAN WORLD