Matthias,
You've presented a straw man argument to criticize embodiment. As a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
On Wed, Oct 22, 2008 at 3:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
It seems to me that many people think that embodiment is very important for
AGI.
I'm not one of these people, but I'd at least like to learn what their
arguments are. You seem to have made up an argument which you've then
knocked down.
I see no argument in your text against my main claim, that an AGI
should be able to learn chess from playing chess alone. This is what I
call a straw-man reply.
My main point against embodiment is simply the huge effort it requires. You
could work for years with this approach and a certain
The restriction is not at all arbitrary. If your AGI is in a spaceship or
on a distant planet and has to solve the problems in this domain, then it has
no chance to leave this domain.
If this domain contains all the information necessary to solve the
problem, then an AGI *must* be able to solve it within that domain.
I do not claim that an AGI cannot have a bias equivalent to the genes in
your example. The point is that AGI is the union of all AI sets. If I
have a certain domain d and a problem p, and I know that p can be solved
using nothing other than d, then AGI must be able to solve problem p in d.
On Wed, Oct 22, 2008 at 6:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I see no argument in your text against my main claim, that an AGI
should be able to learn chess from playing chess alone. This is what I
call a straw-man reply.
No-one can learn chess from playing chess alone.
Chess is necessarily a social activity.
On Wed, Oct 22, 2008 at 2:10 PM, Trent Waddington
[EMAIL PROTECTED] wrote:
No-one can learn chess from playing chess alone.
Chess is necessarily a social activity.
As such, your suggestion isn't even sensible, let alone reasonable.
Current AIs learn chess without engaging in social activity.
If you give the system the rules of chess, then it has everything it needs
to know to become a good chess player.
It may play against itself or against a common chess program or against
humans.
- Matthias
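As a minimal sketch of the self-play setup Matthias describes -- the system
is given only the legal-move generator and reinforces positions that lead to
wins. This assumes the third-party python-chess package; the value table and
update rule are illustrative placeholders, not a serious chess learner.

```python
import random
from collections import defaultdict

import chess  # third-party "python-chess" package

values = defaultdict(float)  # crude position-value memory, keyed by FEN

def move_value(board, move):
    # value of the position reached by playing `move`
    board.push(move)
    v = values[board.fen()]
    board.pop()
    return v

def self_play_game(epsilon=0.2):
    board = chess.Board()
    visited = []
    while not board.is_game_over():
        moves = list(board.legal_moves)
        if random.random() < epsilon:
            move = random.choice(moves)  # explore
        else:
            move = max(moves, key=lambda m: move_value(board, m))  # exploit
        board.push(move)
        visited.append(board.fen())
    return board.result(), visited

def update(result, visited):
    # nudge every visited position toward the game outcome
    reward = {"1-0": 1.0, "0-1": -1.0}.get(result, 0.0)
    for i, fen in enumerate(visited):
        sign = 1.0 if i % 2 == 0 else -1.0  # even indices follow White's moves
        values[fen] += 0.1 * (sign * reward - values[fen])

for _ in range(100):
    update(*self_play_game())
```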
Trent Waddington [mailto:[EMAIL PROTECTED]] wrote:
No-one can learn chess from playing
I do not regard chess as being as important as a drosophila for AI. It would
just be a first milestone where we can make a fast proof of concept for an AGI
approach. The faster we can sort out bad AGI approaches, the sooner we will
obtain a successful one.
Chess has the advantage of being an easy
I agree that chess is far from sufficient for AGI. But I have mentioned this
already at the beginning of this thread.
The important role of chess for AGI could be to rule out bad AGI approaches
as fast as possible.
Before you go to more complex domains, you should consider chess as a first
You may not like "Therefore, we cannot understand the math needed to define
our own intelligence," but I'm rather convinced that it's correct.
Do you mean to say that there are parts that we can't understand, or that the
totality is too large to fit and that it can't be cleanly and completely
I had not noticed this before, though it was posted earlier this year.
Finally Josef Urban translated Mizar into a standard first-order logic
format:
http://www.cs.miami.edu/~tptp/MizarTPTP/
Note that there are hyperlinks pointing to the TPTP-ized
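For readers who haven't seen it, TPTP's first-order (FOF) syntax looks
roughly like the following hand-written sample; this is a generic
illustration, not an excerpt from the Mizar translation:

```
fof(human_socrates, axiom, human(socrates)).
fof(humans_mortal, axiom, ![X]: (human(X) => mortal(X))).
fof(socrates_mortal, conjecture, mortal(socrates)).
```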
However, the point I took issue with was your claim that a stupid person
could be taught to effectively "do science" ... or (your later modification)
"evaluation of scientific results".
At the time I originally took exception to your claim, I had not read the
earlier portion of the thread, and
I don't agree at all.
The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in
Not everything that is a necessary capability of a completed
Matthias Heger:
If chess is so easy because it is completely described, has complete
information about the state available, is fully deterministic, etc., then it
is all the more important that your AGI can learn such an easy task before
you try something more difficult.
Chess is not easy. Becoming
In brief -- You've agreed that even a stupid person is a general
intelligence. By "do science", I (originally and still) meant the
amalgamation that is probably best expressed as a combination of critical
thinking and/or the scientific method. My point was a combination of both
a) to be a
It doesn't, because **I see no evidence that humans can
understand the semantics of a formal system X in any sense that
a digital computer program cannot**
I just argued that humans can't understand the totality of any formal system X
due to Godel's Incompleteness Theorem, but the rest of
I don't want to diss the personal value of logically inconsistent thoughts.
But I doubt their scientific and engineering value.
It doesn't seem to make sense that something would have personal value and
then not have scientific or engineering value.
I can sort of understand science if you're
Very useful link. Thanks.
-Matthias
From: Ben Goertzel [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 22 October 2008 15:40
To: agi@v2.listbox.com
Subject: [agi] A huge amount of math now in standard first-order predicate
logic format!
logic format!
I had not noticed this before, though it was
I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with Down's syndrome would do poorly in one of these
largely practical
(1) We humans understand the semantics of formal system X.
No. This is the root of your problem. For example, replace "formal system
X" with "XML". Saying that "we humans understand the semantics of XML"
certainly doesn't work, which is why I would argue that natural language
understanding is
On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote:
I don't want to diss the personal value of logically inconsistent
thoughts. But I doubt their scientific and engineering value.
It doesn't seem to make sense that something would have personal value and
then not have
Well, if you are a computable system, and if by "think" you mean "represent
accurately and internally", then you can only think that odd thought via
being logically inconsistent... ;-)
True -- but why are we assuming *internally*? Drop that assumption, as
Charles clearly did, and there is no
http://brainwave.opencog.org/
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall,
I disagree, and believe that I can think X: "This is a thought (T) that is
way too complex for me to ever have."
Obviously, I can't think T and then think X, but I might represent T as a
combination of myself plus a notebook or some other external media. Even if
I only observe part of T at
You have not convinced me that you can do anything a computer can't do.
And, using language or math, you never will -- because any finite set of
symbols you can utter could also be uttered by some computational system.
-- Ben G
Can we pin this somewhere?
(Maybe on Penrose? ;-)
The problem is to gradually improve the overall causal model of the
environment (and its application to control), including language and the
dynamics of the world. A better model allows more detailed experience,
and so through having a better inbuilt model of an aspect of the
environment, such as language,
Ben wrote:
The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in
These narrow, closed, deterministic domains are *subsets* of what AGI is
IMHO that is an almost hopeless approach; ambiguity is too integral to
English or any natural language ... e.g. preposition ambiguity
Actually, I've been making pretty good progress. You just always use big
words, never use small words, and/or you use a specific phrase as a single
word.
(joke)
What? You don't love me any more?
/thread
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 11:11 AM
Subject: Re: [agi] constructivist issues
(joke)
On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel [EMAIL
Come by the house, we'll drop some acid together and you'll be convinced ;-)
Been there, done that. Just because some logically inconsistent thoughts have
no value doesn't mean that all logically inconsistent thoughts have no value.
Not to mention the fact that hallucinogens, if not the
--- On Tue, 10/21/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Sorry, but this was no proof that a natural language understanding system is
necessarily able to solve the equation x*3 = y for arbitrary y.
1) You have not shown that a language understanding system must
necessarily(!)
On Wed, Oct 22, 2008 at 7:47 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
The problem is to gradually improve the overall causal model of the
environment (and its application to control), including language and the
dynamics of the world. A better model allows more detailed experience,
and so through having a
Too many responses for me to comment on everything! So, sorry to those
I don't address...
Ben,
When I claim a mathematical entity "exists", I'm saying loosely that
meaningful statements can be made using it. So, I think meaning is
more basic. I already mentioned what my current definition of
This is the standard Lojban dictionary
http://jbovlaste.lojban.org/
I am not so worried about word meanings, they can always be handled via
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by
using rarer, less ambiguous words
Prepositions are more worrisome, however, I
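A small sketch of how the subscript convention mentioned above might be
resolved mechanically, using NLTK's WordNet interface; the rule that "run_2"
means the second WordNet synset listed for "run" is my assumption about the
intended convention, not a defined standard.

```python
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet corpus

def resolve(subscripted_word):
    # "run_2" -> the second WordNet synset listed for "run" (assumed convention)
    word, index = subscripted_word.rsplit("_", 1)
    synset = wn.synsets(word)[int(index) - 1]
    return synset.name(), synset.definition()

print(resolve("run_2"))
```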
So, a statement is meaningful if it has procedural deductive meaning.
We *understand* a statement if we are capable of carrying out the
corresponding deductive procedure. A statement is *true* if carrying
out that deductive procedure only produces more true statements. We
*believe* a
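One toy way to read these definitions in code -- everything here (the names,
the single modus-ponens rule) is illustrative, not Abram's actual formalism.
A statement's "procedural deductive meaning" is a procedure that derives
further statements, and understanding it means being able to run that
procedure.

```python
def modus_ponens(known):
    # the deductive procedure for implication: derive q whenever both
    # "p" and ("p", "->", "q") are among the known statements
    derived = set()
    for fact in known:
        if isinstance(fact, tuple) and fact[1] == "->" and fact[0] in known:
            derived.add(fact[2])
    return derived

known = {"p", ("p", "->", "q")}
print(modus_ponens(known))  # {'q'}
```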
Not everything that is a necessary capability of a completed human-level,
roughly human-like AGI is a sensible first step toward a human-level,
roughly human-like AGI
This is surely true. But let's say someone wants to develop a car. Doesn't
it make sense first to develop and test
Mark,
The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.
--Abram
On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED] wrote:
It looks like all this disambiguation by
What I meant was, it seems like humans are
logically complete in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose
You make the implicit assumption that a natural language understanding
system will pass the Turing test. Can you prove this?
Furthermore, it is just an assumption that the ability to have and to apply
the rules is really necessary to pass the Turing test.
For these two reasons, you still
I think this would be a relatively pain-free way to communicate with an AI
that lacks the common sense to carry out disambiguation and reference
resolution reliably. Also, the log of communication would provide a nice
training DB for it to use in studying disambiguation.
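A toy illustration of how such subscripts remove the need for common sense
in reference resolution: a pronoun carries the index of its antecedent, so
the reader resolves "He" by lookup alone. The surface format ("He_2") is my
invention for this sketch, not the actual RelEx convention.

```python
import re

PRONOUNS = {"he", "she", "it", "they"}

def resolve_references(text):
    entities = {}   # subscript index -> first-mentioned name
    resolved = []
    for token in text.split():
        m = re.fullmatch(r"([A-Za-z]+)_(\d+)", token)
        if not m:
            resolved.append(token)
            continue
        word, idx = m.groups()
        if word.lower() in PRONOUNS:
            resolved.append(entities.get(idx, word))  # substitute antecedent
        else:
            entities[idx] = word
            resolved.append(word)
    return " ".join(resolved)

print(resolve_references("John_1 saw Bob_2 . He_2 waved ."))
# -> John saw Bob . Bob waved .
```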
Awesome. Like I
All theorems in the same formal system are equivalent anyways ;-)
On Wed, Oct 22, 2008 at 1:03 PM, Abram Demski [EMAIL PROTECTED] wrote:
Ben,
What, then, do you make of my definition? Do you think deductive
consequence is insufficient for meaningfulness?
I am not sure exactly where you
Also, I don't prefer to define meaning the way you do ... so clarifying
issues with your definition is your problem, not mine!!
Douglas Hofstadter's newest book "I Am A Strange Loop" (currently available
from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) has
an excellent chapter showing Godel in syntax and semantics. I highly
recommend it.
The upshot is that while it is
Vlad,
Thanks for your below reply to my prior email of Tue 10/21/2008 7:08 PM
I agreed with most of your reply. There are only two major issues upon
which I wanted further confirmation, clarification, or comment.
1. WHY C(N,S) IS DIVIDED BY T(N,S,O) TO FORM A LOWER BOUND FOR
Well, I am confident my approach with subscripts to handle disambiguation
and reference resolution would work, in conjunction with the existing
link-parser/RelEx framework...
If anyone wants to implement it, it seems like just some hacking with the
open-source Java RelEx code...
Like what
It depends on what "to play chess poorly" means. No one would expect that a
general AGI architecture can outperform specialized chess programs with the
same computational resources. I think you could convince a lot of people if
you demonstrated that your approach, which is obviously completely different
A couple of distinctions that I think would be really helpful for this
discussion ...
There is a profound difference between learning to play chess legally and
learning to play chess well.
There is an equally profound difference between discovering how to play chess
well and being taught
Mathematics, though, is interesting in other ways. I don't believe that
much of mathematics involves the logical transformations performed in
proof steps. A system that invents new fields of mathematics, new terms,
new mathematical ideas -- that is truly interesting. Inference control
--- On Wed, 10/22/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
You make the implicit assumption that a natural language
understanding system will pass the Turing test. Can you prove this?
If you accept that a language model is a probability distribution over text,
then I have already
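A minimal concrete instance of "a language model is a probability
distribution over text": a bigram model with add-one smoothing. The toy
corpus is a placeholder; nothing here comes from Mahoney's actual
compression-based work.

```python
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def p_next(word, prev):
    # P(word | prev) with Laplace (add-one) smoothing
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

def p_text(tokens):
    # probability of a token sequence under the bigram model
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= p_next(word, prev)
    return p

print(p_text("the dog sat on the mat .".split()))
```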
Mark,
I own and have read the book -- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, "Godel, Escher, Bach".
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest the chapter 10 you refer to
was a little
[Usual disclaimer: this is not the approach I'm taking, but I don't find it
stupid]
The idea is that by teaching an AI in a minimally-ambiguous language, one
can build up its commonsense understanding such that it can then deal with
the ambiguities of natural language better, using this
On Thu, Oct 23, 2008 at 11:23 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
So how does yet another formal language processing system help us understand
natural language? This route has been a dead end for 50 years, in spite of
the ability to always make some initial progress before getting stuck.