From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
On Tuesday 12 June 2007 11:24:16 am David Clark wrote:
... What if models of how the world works
could be coded by "symbol grounded" humans so that, as the AGI learned, it
could test its theories and assumptions on these models without necessarily
... with symbols but that can generate knowledge by statistical processing,
induction, analogical mapping of structures, categorization, etc.
Sergio Navega.
----- Original Message -----
From: Mike Tintner
To: agi@v2.listbox.com
Sent: Tuesday, June 12, 2007 2:15 PM
Subject: Re: [agi] Symbol Grounding
On Tuesday 12 June 2007 12:49:12 pm Derek Zahn wrote:
> Often I see AGI types referring to physical embodiment as a costly sideshow
or as something that would be nice if a team of roboticists were available.
But really, a simple robot is trivial to build, and even a camera on a
pan/tilt base po
On Tuesday 12 June 2007 11:24:16 am David Clark wrote:
> ... What if models of how the world works
> could be coded by "symbol grounded" humans so that, as the AGI learned, it
> could test its theories and assumptions on these models without necessarily
> actually having a direct connection to the
One last bit of rambling in addition to my last post:
When I assert that almost everything important gets discarded while merely
distilling an array of rod and cone firings into a symbol for "chair", it's
fair to ask exactly what that "other stuff" is. Alas, I believe it is
fundamentally impo
Sergio: This is because in order to *create* knowledge
(and it's all about self-creation, not "external insertion"), it
is imperative to use statistical (inductive) methods of some sort.
In my way of seeing things, any architecture based solely on logical
(deductive) grounds is doomed to fail.
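Sergio's point about inductive, statistics-driven knowledge creation can be sketched in a few lines. This is purely an illustrative toy (the data, the [height, seat_area] features, and the function names are all invented here, not anything from the thread): categories are induced as per-dimension means of raw samples, with nothing hand-inserted.

```python
import statistics

def learn_prototype(samples):
    """Induce a category prototype as the per-dimension mean of raw samples."""
    return [statistics.fmean(dim) for dim in zip(*samples)]

def classify(x, prototypes):
    """Assign x to the nearest learned prototype (squared Euclidean distance)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(x, p))
    return min(prototypes, key=lambda name: dist(prototypes[name]))

# Invented "chair-like" and "table-like" feature vectors: [height, seat_area]
chairs = [[0.9, 0.2], [1.0, 0.25], [0.85, 0.18]]
tables = [[0.75, 1.1], [0.8, 1.3], [0.7, 1.2]]
prototypes = {"chair": learn_prototype(chairs), "table": learn_prototype(tables)}
print(classify([0.95, 0.22], prototypes))  # prints: chair
```

The choice of distance measure is arbitrary; the only point is that the "chair" category exists because the samples clustered, not because a deductive rule was inserted from outside.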
On 6/12/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
Some people, especially those espousing a modular software-engineering type
of approach, seem to think that a perceptual system basically should spit out
a token for "chair" when it sees a chair, and then a reasoning system can
take over to reaso
I think probably every "AGI-curious" person has intuitions about this subject. Here
are mine:
Some people, especially those espousing a modular software-engineering type of
approach, seem to think that a perceptual system basically should spit out a
token for "chair" when it sees a chair, and then a
David Clark <[EMAIL PROTECTED]> wrote:
----- Original Message -----
From: "J Storrs Hall, PhD"
To:
Sent: Tuesday, June 12, 2007 4:48 AM
Subject: Re: [agi] Symbol Grounding
> Here's how Harnad defines it in his original paper:
>
> " My own example of t
...not "external insertion"), it
is imperative to use statistical (inductive) methods of some sort.
In my way of seeing things, any architecture based solely on logical
(deductive) grounds is doomed to fail.
Sergio Navega.
----- Original Message -----
From: Mark Waser
To: agi@v2.list
----- Original Message -----
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, June 12, 2007 4:48 AM
Subject: Re: [agi] Symbol Grounding
> Here's how Harnad defines it in his original paper:
>
> " My own example of the symbol grounding proble
On 6/12/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>> a question is whether a software program could tractably learn language
without such associations, by relying solely on statistical associations
within texts.
Isn't there an alternative (or middle ground) of starting the software
program wit
>> a question is whether a software program could tractably learn language
>> without such associations, by relying solely on statistical associations
>> within texts.
Isn't there an alternative (or middle ground) of starting the software program
with a seed of initial structure and then letti
On Monday 11 June 2007 09:47:38 pm James Ratcliff wrote:
>
> "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007
> 08:12:08 pm James Ratcliff wrote:
> > 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
>
> You'll have to go into that a bit more for me
"Symbol grounding" basically means the association of linguistic tokens
(words, linguistic concepts, etc.) with nonlinguistic (e.g. perceptual-motor)
patterns.
E.g. associating the word "apple" with a set of visual images of apples, or
associating (some sense of) the word "from" with a set of rem
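That definition can be made concrete with a toy sketch (entirely invented here; the feature vectors and names come from neither Harnad nor this thread): grounding as a table from linguistic tokens to stored perceptual exemplars, usable in both directions.

```python
# Toy grounding table: each token maps to perceptual exemplars (feature vectors).
grounding = {
    "apple": [[1.0, 0.1, 0.1], [0.9, 0.2, 0.1]],   # reddish image features
    "banana": [[0.9, 0.9, 0.1], [1.0, 0.8, 0.2]],  # yellowish image features
}

def ground(token):
    """A token is 'grounded' if it is tied to at least one percept."""
    return grounding.get(token, [])

def name_percept(percept):
    """Inverse lookup: the token whose exemplars lie nearest the percept."""
    def nearest(exemplars):
        return min(sum((a - b) ** 2 for a, b in zip(percept, e)) for e in exemplars)
    return min(grounding, key=lambda tok: nearest(grounding[tok]))

print(name_percept([0.95, 0.15, 0.1]))  # prints: apple
```

A real system would of course learn these associations rather than ship with them; the table only shows what "association of tokens with nonlinguistic patterns" means structurally.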
"J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007
08:12:08 pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
You'll have to go into that a bit more for me please.
Symbol grounding is something of a red herring. There's a
On Monday 11 June 2007 08:12:08 pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
Symbol grounding is something of a red herring. There's a whole raft of
philosophical conundrums (qualia among them) that simply evaporate if you
take the syste
1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
Or is that intrinsic in everyone's approaches at this stage?
(short of some Neural Network approaches)
2. How do you describe Symbol Grounding for an AGI?
What do you consider the best ways to have the system get Symbol Grounding?
> > > My guess at a good basis for KR is simply the cleanest, most powerful, and
> > > most general programming language I can come up with. That's because to learn
> > > new concepts and really understand them, the AI will have to do the
> > > equivalent of writing recognizers, simulators, experime
> My guess at a good basis for KR is simply the cleanest, most powerful, and
> most general programming language I can come up with. That's because to learn
> new concepts and really understand them, the AI will have to do the
> equivalent of writing recognizers, simulators, experiment generators,
Alas, the poor AI will only have the internet, the grid, and anything accessible
therefrom, much like an alien studying Earth from 50 light years away will only
have radio and TV signals.
To really learn, an AI must have some direct covert connection to human
neural systems and
On Monday 25 September 2006 21:11, Ben Goertzel wrote:
> [Harnad]
> Suppose you had to learn Chinese as a first language and the only
> source of information you had was a Chinese/Chinese dictionary![8]
> ...
> The standard reply of the symbolist (e.g., Fodor 1980, 1985) is that
> the meaning of th