Ben, et al.,
A proposed solution:
How about some rules on the composition of the first lines of postings? E.g.,
if a posting has to do with OpenCog, then OpenCog should be in the subject
line; if someone is disparaging someone else, then put "disparage" on the
first line; etc. My postings are often about
Thanks a lot, I really appreciate your message. It is good to get
contributions on these META themes from individuals who are *not*
among the 5-10% of list members who frequently post.
If any other lurkers or semi-lurkers have opinions on these META
issues, I and others would be interested
AGI list,
What I see in most of these e-mail list discussions is people with very
diverse backgrounds, cultures, and ideas failing to understand each other.
What people should remember is that e-mail is not even close to a complete
communication medium. By its nature, you are going to miss
Eric Burton wrote:
I apologize: 1/16. Which, to be fair, is half as many, and somewhat
diminishes the point I was trying to make. ,_,
Eric, *please* read the whole of the post before you comment! Of the
58 matches, in all but two of the cases the word was used by someone
else and I just
This thread has been killed. Let's end this discussion, please. Continue it
via private email, start another meta-list for statistical analysis of
postings on this list, or whatever ;-p
thanks!
ben
On Mon, Aug 4, 2008 at 9:41 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Eric Burton wrote:
Ben,
The thread that you just killed was a response to a serious allegation
that *you* made on this list.
You accused one person on the list of engaging in extremely disruptive
behavior. Then, other people joined in and repeated the charge.
The victim of your initial allegation
Richard,
I truly did **not** kill that thread because some of the comments in it were
critical of me personally; I killed it because it was irrelevant to AI, and
in parts (without saying anything about any particular poster; there were
many involved) rambling, childish and unpleasant.
I don't
As I've come out of the closet over the list tone issues, I guess I
should post something AI-related as well -- at least that will make me
net neutral between relevant and irrelevant postings. :-)
One of the classic current AI issues is grounding, the argument being
that a dictionary cannot
I mentioned earlier that I'd forward a private email I'd previously sent to
YKY, on the topic of probabilistic inductive logic programming.
Here it is.
As noted there, my impression is that PILP could be implemented within
OpenCog's PLN backward chainer (currently being ported to OpenCog by Joel
Pitt, from the
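For anyone on the list who hasn't run into PILP before, the core step is easy
to sketch. Below is a minimal illustrative Python fragment (my own names and
scoring choice, not the PLN/OpenCog API): a candidate Horn clause is scored by
a smoothed probability, the m-estimate from the ILP literature, that the
examples it covers are positive.

# Minimal illustrative sketch of PILP clause scoring (hypothetical
# names; not the PLN/OpenCog API). Background facts plus positive and
# negative examples of the target predicate grandparent/2.

FACTS = {"parent": {("tom", "bob"), ("bob", "ann"), ("bob", "liz")}}

POS = {("tom", "ann"), ("tom", "liz")}   # true grandparent pairs
NEG = {("bob", "tom"), ("ann", "liz")}   # non-grandparent pairs

def covers(example):
    # Does the fixed candidate clause
    #   grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
    # derive this example? (One hard-coded clause, for brevity.)
    x, z = example
    mids = {b for (_, b) in FACTS["parent"]}
    return any((x, y) in FACTS["parent"] and (y, z) in FACTS["parent"]
               for y in mids)

def m_estimate(pos, neg, m=2.0, prior=0.5):
    # Probabilistic clause quality: smoothed precision over coverage.
    p = sum(covers(e) for e in pos)
    n = sum(covers(e) for e in neg)
    return (p + m * prior) / (p + n + m)

print(m_estimate(POS, NEG))   # 0.75: both positives covered, no negatives

A real PILP system searches over many candidate clauses, using the chainer to
test coverage; the sketch hard-codes one clause to keep it short.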
Harry Chesley [EMAIL PROTECTED] wrote:
One of the classic current AI issues is grounding, the argument being that a
dictionary cannot be complete because it is only self-referential, and *has*
to be grounded at some point to be truly meaningful. This argument is used
to claim that abstract AI
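The self-referential-dictionary point can be made literal in a few lines.
Here is a toy sketch (purely illustrative, not from Harry's post) of a
dictionary in which every word is defined only by other words; each chain of
definitions ends in a cycle, never in anything outside the symbol system.

# Toy sketch (mine, purely illustrative): a dictionary in which every
# word is defined only by other words. Following any chain of
# definitions, we always revisit a word we have already seen -- a
# cycle -- and never reach anything outside the symbol system.

TOY_DICT = {
    "water":  ["liquid", "clear"],
    "liquid": ["matter", "flows"],
    "flows":  ["liquid", "moves"],
    "clear":  ["light", "passes"],
    "light":  ["clear", "energy"],
    "matter": ["energy"],
    "energy": ["matter"],
    "moves":  ["matter"],
    "passes": ["moves"],
}

def chain_ends(word, seen=()):
    # Follow definitions depth-first until a word repeats.
    if word in seen:
        return {word}          # cycled: no ground was ever reached
    ends = set()
    for w in TOY_DICT.get(word, []):
        ends |= chain_ends(w, seen + (word,))
    return ends

print(chain_ends("water"))     # every path loops back into the symbols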
On Mon, Aug 4, 2008 at 10:55 PM, Harry Chesley [EMAIL PROTECTED] wrote:
As I've come out of the closet over the list tone issues, I guess I should
post something AI-related as well -- at least that will make me net neutral
between relevant and irrelevant postings. :-)
One of the classic
Harry,
Count me in the camp that views grounding as the essential problem of
traditional AI approaches, at least as it relates to AGI. An embodied AI [*],
in which the only informational inputs to the AI come via so-called sensory
modalities, is the only way I can see for an AI to arrive at
Harry: I have never bought this line of reasoning. It seems to me that
meaning is a layered thing, and that you can do perfectly good reasoning at
one (or two or three) levels in the layering, without having to go all the
way down. And if that layering turns out to be circular (as it is in a
This topic has been discussed on this list several times.
A previous post of mine can be found at
http://www.listbox.com/member/archive/303/2007/10/sort/time_rev/page/13/entry/22
Pei
On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote:
As I've come out of the closet over
Harry,
In what way do you think your approach is not grounded?
--Abram
On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote:
As I've come out of the closet over the list tone issues, I guess I should
post something AI-related as well -- at least that will make me net neutral
On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote:
the argument being that a dictionary cannot be complete because it is only
self-referential, and *has* to be grounded at some point to be truly
meaningful. This argument is used to claim that abstract AI can never
On Mon, Aug 4, 2008 at 6:10 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On 8/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:
As noted there, my impression is that PILP could be implemented within
OpenCog's PLN backward chainer (currently being ported to OpenCog by Joel
Pitt, from the
----- Original Message -----
From: Ben Goertzel [EMAIL PROTECTED]
My perspective on grounding is partially summarized here:
www.goertzel.org/papers/PostEmbodiedAI_June7.htm
I agree that AGI should ideally have multiple sources of knowledge as you
describe: explicitly taught, learned from
Terren Suydam wrote:
...
Without an internal
sense of meaning, symbols passed to the AI are simply arbitrary data
to be manipulated. John Searle's Chinese Room (see Wikipedia)
argument effectively shows why manipulation of ungrounded symbols is
nothing but raw computation with no
Vladimir Nesov wrote:
It's too fuzzy an argument.
You're right, of course. I'm not being precise, and though I'll try to
improve on that here, I probably still won't be. But here's my attempt:
There are essentially three types of grounding: embodiment, hierarchy
base nodes, and
Terren Suydam wrote:
I don't know, how do you do it? :-] A human baby that grows up with virtual
reality hardware surgically implanted (never to experience anything but a
virtual reality) will have the same issues, right?
There is no difference in principle between real reality and virtual
Hi Harry,
All the Chinese Room argument shows, if you accept it, is that approaches to
AI in which symbols are *given* cannot manifest understanding (aka an internal
sense of meaning) from the perspective of the AI. By given, I mean simply that
symbols are incorporated into the
As I understand it, FOL is only Turing complete when
predicates/relations/functions beyond the ones in the data are
allowed. Would PLN naturally invent predicates, or would it need to be
specifically told to? Is this what concept creation does? More
concretely: if I gave PLN a series of data, and
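To make that concrete in toy form, here is a Python sketch (the names are
mine and have nothing to do with PLN's actual concept-creation code): with
only the data predicate parent/2, each fixed FOL formula reaches only a fixed
relational depth, but inventing a new predicate symbol with a recursive
definition captures unbounded depth.

# Toy sketch (hypothetical names; not PLN's actual concept-creation
# code). With only parent/2, each fixed FOL formula reaches a fixed
# depth; an invented predicate with a recursive definition,
#   ancestor(X,Y) :- parent(X,Y)
#   ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z)
# captures unbounded depth.

parent = {("a", "b"), ("b", "c"), ("c", "d")}

def invent_ancestor(parent_facts):
    # Least fixed point of the recursive definition above.
    ancestor = set(parent_facts)
    while True:
        new = {(x, z)
               for (x, y) in parent_facts
               for (y2, z) in ancestor if y == y2} - ancestor
        if not new:
            return ancestor
        ancestor |= new

print(sorted(invent_ancestor(parent)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]

That extra symbol is exactly what the Turing-completeness caveat above is
about.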
Well, having an intuitive understanding of human language will be useful for
an AGI even if its architecture is profoundly nonhumanlike. And, human
language is intended to be interpreted based on social, spatiotemporal
experience. So the easiest way to make an AGI grok human language is very
The Chinese Room concept became more palatable to me when I started
putting the emphasis on Chinese and not on Room. /Chinese/ Room, not
Chinese /Room/. I don't know why this is.
I think it changes the implied meaning from a room where Chinese
happens to be spoken, to a room for the
On Tue, Aug 5, 2008 at 12:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
The problem is that writing stories in a formal language, with enough nuance
and volume to really contain the needed commonsense info, would require a
Cyc-scale effort at formalized story entry. While possible in principle,
When do you think Novamente will be ready to go out and effectively
learn from (/interact with) environments not fully controlled by the
dev team?
I wish I could say tomorrow, but realistically it looks like it's gonna be
2009 ... hopefully earlier rather than later in the year but I'm not