On 6/12/07, Mark Waser [EMAIL PROTECTED] wrote:
a question is whether a software program could tractably learn language
without such associations, by relying solely on statistical associations
within texts.
Isn't there an alternative (or middle ground) of starting the software
program with a
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 12, 2007 4:48 AM
Subject: Re: [agi] Symbol Grounding
Here's how Harnad defines it in his original paper:
My own example of the symbol grounding problem has two versions, one
Harnad's symbol grounding paper has been criticized at times,
but it remains a seminal idea. The problem faced
by many traditional artificial cognition systems is their exclusive reliance
on arbitrary symbols, such as linguistic inputs. That approach
is appealing, and has fooled (it still fools) many
David Clark [EMAIL PROTECTED] wrote: - Original Message -
From: J Storrs Hall, PhD
To:
Sent: Tuesday, June 12, 2007 4:48 AM
Subject: Re: [agi] Symbol Grounding
Here's how Harnad defines it in his original paper:
My own example of the symbol grounding problem has two versions,
I think probably every AGI-curious person has intuitions about this subject. Here
are mine:
Some people, especially those espousing a modular, software-engineering type of
approach, seem to think that a perceptual system basically should spit out a
token for "chair" when it sees a chair, and then a
On 6/12/07, Derek Zahn [EMAIL PROTECTED] wrote:
Some people, especially those espousing a modular software-engineering type
of approach seem to think that a perceptual system basically should spit out
a token for chair when it sees a chair, and then a reasoning system can
take over to reason
[Further to the symbol grounding discussion, you might like to look at (pass
on) this trendsetting science video-journal, which is just one of many signs of
the new multimedia [vs the old literate, symbolic] culture]
Dear Scientist,
The 4th issue of JoVE, a video-based publication on
Sergio: This is because in order to *create* knowledge
(and it's all about self-creation, not external insertion), it
is imperative to use statistical (inductive) methods of some sort.
In my way of seeing things, any architecture based solely on logical
(deductive) grounds is doomed to fail.
One last bit of rambling in addition to my last post:
When I assert that almost everything important gets discarded while merely
distilling an array of rod and cone firings into a symbol for chair, it's
fair to ask exactly what that other stuff is. Alas, I believe it is
fundamentally
Matt,
Here is a program that feels pain.
I got the logic, but no pain when processing the code in my mind.
Maybe you should mention in the pain.cpp description that it needs to
be processed for long enough - so whatever is gonna process it, it
will eventually get to the 'I don't feel like
On 6/12/07, Mark Waser [EMAIL PROTECTED] wrote:
If you think my scheme cannot be fair, then the alternative of
traditional management can only be worse (in terms of fairness, which in
turn affects the quality of work being done). The situation is quite
analogous to that between a state-command
On Tuesday 12 June 2007 11:24:16 am David Clark wrote:
... What if models of how the world works
could be coded by symbol grounded humans so that, as the AGI learned, it
could test its theories and assumptions on these models without necessarily
actually having a direct connection to the real
On Tuesday 12 June 2007 12:49:12 pm Derek Zahn wrote:
Often I see AGI types referring to physical embodiment as a costly sideshow
or as something that would be nice if a team of roboticists were available.
But really, a simple robot is trivial to build, and even a camera on a
pan/tilt base
Mike, we think alike, but there's a small point in which
our thoughts diverge. We agree that entirely symbolic architectures
will fail, possibly sooner than predicted by their creators.
But we've got to be careful regarding our notion of symbol.
If symbol is understood in a large enough context,
From: J Storrs Hall, PhD [EMAIL PROTECTED]
On Tuesday 12 June 2007 11:24:16 am David Clark wrote:
... What if models of how the world works
could be coded by symbol grounded humans so that, as the AGI learned, it
could test its theories and assumptions on these models without necessarily
There are always difficulties in creating AGI in software written by
people. Maybe it would be easier to create an application that writes the
AGI software. This is similar to software that modifies its own source
code, yet different in that the generator is a separate entity not integrated
Board members will be nominated and elected by the entire group, and
hopefully we can find some academics who have reputations in certain areas of
AI, and are not contributors themselves. I tend to think that they will be
more judicious than other types of people.
Again, how is that
Go ahead :).. If you have *enough* time, almost no thinking is needed:
Many think a smart AGI can fit into as little as 100KB. Have ~1K of data
sufficient for solving a couple of very tricky problems, generate all
possible combinations of bits for a 100KB file, run each instance in some
kind of
No, seriously. A good mathematical model of AGI, say a lowest common
denominator, created in category theory, where all real AGIs are isomorphic
to it. Make it as abstract and minimalistic as possible. In the generator you
have to include numerous mappings between math and source code so the thing
On 6/13/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
A successful AI could do a superior job of dividing up the credit from
available historical records. (Anyone who doesn't spot this is not
thinking recursively.)
During the pre-AGI interim, people have got to make money and to enjoy
On 6/13/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I wouldn't bother working with anyone who was seriously worried over
who got the credit for building a Singularity-class AI - no other
kind matters. There are two reasons for this, not just the obvious one.
Come on, there're no obvious
I keep getting the following message whenever I post to [agi].
It looks like spam. Can we get rid of it? Or is it just me?
YKY
-- Forwarded message --
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: Jun 13, 2007 12:19 PM
Subject: Re: Re: [agi] AGI Consortium [ZONEALARM