2007-10-16 Thread Edward W. Porter
In response to below post from Josh Hall: I am using "Holonic" as Eliezer S. Yudkowsky used it in his LEVELS OF ORGANIZATION IN GENERAL INTELLIGENCE, in which he said "'Holonic' is a useful word to describe the simultaneous application of reductionism and holism, in which a single quality is simul…

2007-10-16 Thread J Storrs Hall, PhD
On Tuesday 16 October 2007 08:43:23 pm, Edward W. Porter wrote: > ... holonic pattern matching, ... Now there's a word you don't hear every day :-) I've always thought of it as a feature of Arthur Koestler's somewhat poetic ontology of hierarchy. And it appears to enjoy a minor vogue as a subsp…

2007-10-16 Thread Edward W. Porter
Josh, you asked "What do these webs of associations *do*?" They are the knowledge base over which massively parallel hardware computes: massive search, holonic pattern matching, spreading activation, rippling confabulation-like relaxations, thresholding, etc. Ed Porter …
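Porter's list of mechanisms is compact, so a concrete illustration of one of them may help: below is a minimal sketch of spreading activation with thresholding over a toy weighted association graph. The function name and the decay/threshold parameter values are assumptions made for illustration; nothing here is taken from Porter's actual design.

# Minimal illustrative sketch (not from the thread): spreading activation
# with thresholding over a toy weighted association graph.
from collections import defaultdict

def spread_activation(edges, seeds, decay=0.5, threshold=0.1, max_hops=3):
    """edges: node -> list of (neighbor, weight); seeds: node -> activation."""
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in edges.get(node, []):
                next_frontier[neighbor] += act * weight * decay
        # Thresholding: discard weak activations so the spread stays sparse.
        frontier = {n: a for n, a in next_frontier.items() if a >= threshold}
        for n, a in frontier.items():
            activation[n] += a
    return dict(activation)

edges = {
    "dog": [("animal", 0.9), ("bark", 0.7)],
    "animal": [("living thing", 0.8)],
    "bark": [("sound", 0.6)],
}
print(spread_activation(edges, {"dog": 1.0}))
# -> {'dog': 1.0, 'animal': 0.45, 'bark': 0.35, 'living thing': 0.18, 'sound': 0.105}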

2007-10-16 Thread J Storrs Hall, PhD
On Tuesday 16 October 2007 03:24:07 pm, Edward W. Porter wrote: > AS I SAID ABOVE, I AM THINKING OF LARGE COMPLEX WEBS OF COMPOSITIONAL AND GENERALIZATIONAL HIERARCHIES, ASSOCIATIONS, EPISODIC EXPERIENCES, ETC., OF SUFFICIENT COMPLEXITY AND DEPTH TO REPRESENT THE EQUIVALENT OF HUMAN WORLD KNO…
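For readers trying to picture the kind of web Porter describes, a minimal, assumed data-structure sketch follows: nodes carrying compositional (part-of) links, generalizational (is-a) links, weighted associations, and episodic records. Every class and field name here is hypothetical; the thread specifies no such format.

# Hypothetical sketch of the link types Porter names: compositional
# (part-of), generalizational (is-a), associative, and episodic.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    parts: list = field(default_factory=list)            # compositional hierarchy (part-of)
    generalizations: list = field(default_factory=list)  # generalizational hierarchy (is-a)
    associations: list = field(default_factory=list)     # weighted (Node, strength) links
    episodes: list = field(default_factory=list)         # episodic experiences involving this node

dog, animal, tail = Node("dog"), Node("animal"), Node("tail")
dog.generalizations.append(animal)        # a dog is-a animal
dog.parts.append(tail)                    # a tail is part-of a dog
dog.associations.append((animal, 0.9))    # strong association
dog.episodes.append("2007-10-16: saw a dog chase a ball")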

2007-10-16 Thread Benjamin Goertzel
> The trivial sense of "semantics" don't apply, and the deeper senses are so vague that they are almost synonymous with grounding. Completely wrong. Grounding is a fairly shallow concept that falls apart as an explanation of meaning under fairly moderate scrutiny. Semantics is…

2007-10-16 Thread Edward W. Porter
RICHARD LOOSEMORE WROTE IN HIS Tue 10/16/2007 9:25 AM POST: “So if someone tries to talk about what the grounding problem is by defining it in terms of semantics, I start to wonder what they're putting on their cornflakes in the morning. The trivial sense of "semantics" don't apply, and the deepe…

2007-10-16 Thread Edward W. Porter
Josh, your Tue 10/16/2007 8:58 AM post was a very good one. I have just a few comments in all-caps. “The view I suggest instead is that it's not the symbols per se, but the machinery that manipulates them, that provides semantics.” MACHINERY WITHOUT REPRESENTATION TO COMPUTE FROM IS OF AS LITTL…

Re: [agi] Why roboticists have more fun

2007-10-16 Thread Bob Mottram
On 16/10/2007, John G. Rose <[EMAIL PROTECTED]> wrote: > Part of the reason AI has so much damaged credibility is that over the past decades there have always been these predictions that by some year robots will be doing this or robots will be doing that. Any idiot can make predictions for 20…

RE: [agi] Why roboticists have more fun

2007-10-16 Thread John G. Rose
> From: Jiri Jelinek [mailto:[EMAIL PROTECTED]] > Subject: Re: [agi] Why roboticists have more fun > Just a quick note: Sex - that's a narrow AI, but Levy reportedly also forecasts legalization of marriages with robots by 2050. That would probably take AGI and I guess not just "an AGI", but, i…

Re: [agi] Why roboticists have more fun

2007-10-16 Thread Jiri Jelinek
Just a quick note: Sex - that's a narrow AI, but Levy reportedly also forecasts legalization of marriages with robots by 2050. That would probably take AGI, and I guess not just "an AGI" but, in *many* ways, a very human-like AGI. It seems to me that most AGI researchers don't really target such overa…

2007-10-16 Thread J Storrs Hall, PhD
On Tuesday 16 October 2007 09:24:34 am, Richard Loosemore wrote: > If I may interject: a lot of confusion in this field occurs when the term "semantics" is introduced in a way that implies that it has a clear meaning [sic]. "Semantics" does have a clear meaning, particularly in linguisti…

2007-10-16 Thread Mike Tintner
RL: Just because System 2 did not acquire its own knowledge from its own personal experience would not be good grounds [sorry] for saying it is not grounded. How can it test its knowledge and ongoing inferences? AGI - human and animal GI - is continual self-questioning and testing. What IS the m…

2007-10-16 Thread Richard Loosemore
Edward W. Porter wrote: This is in response to Josh Storrs Hall's Monday, October 15, 2007 3:02 PM post and Richard Loosemore's Mon 10/15/2007 1:57 PM post. I misunderstood you, Josh. I thought you were saying semantics could be a type of grounding. It appears you were saying that grounding requ…

2007-10-16 Thread J Storrs Hall, PhD
On Monday 15 October 2007 04:45:22 pm, Edward W. Porter wrote: > I misunderstood you, Josh. I thought you were saying semantics could be a type of grounding. It appears you were saying that grounding requires direct experience, but that grounding is only one (although perhaps the best) pos…