AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
The goal of chess is well defined: avoid being checkmated and try to
checkmate your opponent.

What checkmate means can be specified formally.

Humans mainly learn chess by playing chess. Obviously, knowledge of other
domains is not sufficient for most beginners to be good chess players right
away. This can be shown empirically.

Thus an AGI would not learn chess in a way completely different from
everything we know. It would learn from experience, which is one of the most
common kinds of learning.

I am sure that everyone who learns chess by playing against chess computers
and is able to learn to play good chess (which is not certain, just as not
everyone can learn to be a good mathematician) will be able to be a good
chess player against humans.

My first posting in this thread shows the very weak point in the argument
of those who say that social and other experiences are needed to play
chess.

You suppose that knowledge from another domain must be available to solve
problems in the domain of chess.
But everything about chess is on the chessboard itself. If you are not able to
solve chess problems from chess alone, then you are not able to solve certain
solvable problems, and thus you cannot call your AI an AGI.

If you give an AGI all the facts which are sufficient to solve a problem, then
your AGI must be able to solve the problem using nothing other than these
facts.

If you do not agree with this, then how should an AGI know which experiences
in which other domains are necessary to solve the problem? 

The magic you use is the overestimation of real-world experience. It sounds
as if the ability to solve arbitrary problems in arbitrary domains depends
essentially on your AGI playing in virtual gardens and often speaking with
other people. But this is complete nonsense. No one can play good chess from
those experiences alone, so such experiences are not sufficient. On the other
hand, there are programs which definitely do not have such experiences and
yet outperform humans in chess. Thus those experiences are neither sufficient
nor necessary to play good chess, and emphasizing such experiences mystifies
AGI, much as the doubters of AGI do when they always argue with Goedel or
quantum physics, which in fact have no relevance for practical AGI at all.

- Matthias





Trent Waddington [mailto:[EMAIL PROTECTED]] wrote:

Sent: Thursday, 23 October 2008 07:42
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I do not think that it is essential for the quality of my chess who
 taught me to play chess.
 I could have learned the rules from a book alone.
 Of course these rules are written in a language. But this is not important
 for the quality of my chess.

 If a system is in state x, then it is not essential for the future how x
 was generated.
 Thus a programmer can hardcode the rules of chess in his AGI and then,
 concerning chess, the AGI would be in the same state as if someone had taught
 the AGI the chess rules via language.

 The social aspect of learning chess is of no relevance.

Sigh.

Ok, let's say I grant you the stipulation that you can hard code the
rules of chess somehow.  My next question is, in a goal-based AGI
system, what goal are you going to set and how are you going to set
it?  You've ruled out language, so you're going to have to hard code
the goal too, so excuse my use of language:

Play good chess

Oh.. that sounds implementable.  Maybe you'll give it a copy of
GNUChess and let it go at it.. but I've known *humans* who learnt to
play chess that way and they get trounced by the first human they play
against.  How are you going to go about making an AGI that can learn
chess in a completely different way to all the known ways of learning
chess?

Or is the AGI supposed to figure that out?

I don't understand why so many of the people on this list seem to
think AGI = magic.

Trent







Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Vladimir Nesov
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 If you consider programming an AI a social activity, you have
 generalized this term very unnaturally, confusing other people. Chess
 programs do learn (certainly some of them, and I guess most of them);
 not everything is hardcoded.

 They may learn tactics or even how to prune their tree better, but I
 know of no chess AI that learns how to play the same way you would
 say a person learns how to play.

Of course.

 And that's the whole point of this
 general AI thing we're trying to get across.. learning how to do a
 task given appropriate instruction and feedback by a teacher is the
 golden goose here..


Not necessarily. The ultimate teacher is our real environment in general.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread BillK
On Thu, Oct 23, 2008 at 12:55 AM, Matt Mahoney wrote:


 I suppose you are right. Instead of encoding mathematical rules as a grammar, 
 with enough training
 data you can just code all possible instances that are likely to be 
 encountered. For example, instead
 of a grammar rule to encode the commutative law of addition,

  5 + 3 = a + b = b + a = 3 + 5
 a model with a much larger training data set could just encode instances with 
 no generalization:

  12 + 7 = 7 + 12
  92 + 0.5 = 0.5 + 92
  etc.

 I believe this is how Google gets away with brute force n-gram statistics 
 instead of more sophisticated grammars. Its language model is probably 
 10^5 times larger than a human model (10^14 bits vs
 10^9 bits). Shannon observed in 1949 that random strings generated by n-gram 
 models of English
 (where n is the number of either letters or words) look like natural language 
 up to length 2n. For a
 typical human sized model (1 GB text), n is about 3 words. To model strings 
 longer than 6 words we
 would need more sophisticated grammar rules. Google can model 5-grams (see
 http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
  ), so it is able to
 generate and recognize (thus appear to understand) sentences up to about 10 
 words.



Gigantic databases are indeed Google's secret sauce.
See:
http://googleresearch.blogspot.com/2008/09/doubling-up.html

Quote:
Monday, September 29, 2008   Posted by Franz Josef Och

Machine translation is hard. Natural languages are so complex and have
so many ambiguities and exceptions that teaching a computer to
translate between them turned out to be a much harder problem than
people thought when the field of machine translation was born over 50
years ago. At Google Research, our approach is to have the machines
learn to translate by using learning algorithms on gigantic amounts of
monolingual and translated data. Another knowledge source is user
suggestions. This approach allows us to constantly improve the
quality of machine translations as we mine more data and
get more and more feedback from users.

A nice property of the learning algorithms that we use is that they
are largely language independent -- we use the same set of core
algorithms for all languages. So this means if we find a lot of
translated data for a new language, we can just run our algorithms and
build a new translation system for that language.

As a result, we were recently able to significantly increase the number of
languages on translate.google.com. Last week, we launched eleven new
languages: Catalan, Filipino, Hebrew, Indonesian, Latvian, Lithuanian, Serbian,
Slovak, Slovenian, Ukrainian, Vietnamese. This increases the
total number of languages from 23 to 34.  Since we offer translation
between any of those languages this increases the number of language
pairs from 506 to 1122 (well, depending on how you count simplified
and traditional Chinese you might get even larger numbers).
-


BillK




Re: [agi] Understanding and Problem Solving

2008-10-23 Thread Terren Suydam

Once again, there is a depth to understanding - it's not simply a binary 
proposition.

Don't you agree that a grandmaster understands chess better than you do, even 
if his moves are understandable to you in hindsight?

If I'm not good at math, I might not be able to solve y=3x+4 for x, but I might 
understand that y equals 3 times x plus four. My understanding is superficial 
compared to someone who can solve for x. 

Finally, don't you agree that understanding natural language requires solving 
problems? If not, how would you account for an AI's ability to understand novel 
metaphor? 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 1:47 AM




 
 






Terren Suydam wrote:

Understanding goes far beyond mere knowledge - understanding *is* the
ability to solve problems. One's understanding of a situation or problem is
only as deep as one's (theoretical) ability to act in such a way as to
achieve a desired outcome.

I disagree. A grandmaster of chess can explain his decisions and I will
understand them. Einstein could explain his theory to other physicists (at
least a subset) and they could understand it.

I can read a proof in mathematics and I will understand it – because I only
have to understand (= check) every step of the proof.

Problem solving is much much more than only understanding.

Problem solving is the ability to *create* a sequence of actions to change a
system's state from A to a desired state B.

For example: find a path from A to B within a graph.

An algorithm which can check a solution and can answer details about the
solution is not necessarily able to find a solution.

-Matthias




Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread Mark Waser

I have already proved something stronger


What would you consider your best reference/paper outlining your arguments? 
Thanks in advance.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 8:55 PM
Subject: Re: AW: AW: [agi] Language learning (was Re: Defining AGI)



--- On Wed, 10/22/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:


You make the implicit assumption that a natural language
understanding system will pass the turing test. Can you prove this?


If you accept that a language model is a probability distribution over 
text, then I have already proved something stronger. A language model 
exactly duplicates the distribution of answers that a human would give. 
The output is indistinguishable by any test. In fact a judge would have 
some uncertainty about other people's language models. A judge could be 
expected to attribute some errors in the model to normal human variation.



Furthermore, it is just an assumption that the ability to
have and to apply
the rules is really necessary to pass the Turing test.

For these two reasons, you still haven't shown 3a and
3b.


I suppose you are right. Instead of encoding mathematical rules as a 
grammar, with enough training data you can just code all possible 
instances that are likely to be encountered. For example, instead of a 
grammar rule to encode the commutative law of addition,


 5 + 3 = a + b = b + a = 3 + 5

a model with a much larger training data set could just encode instances 
with no generalization:


 12 + 7 = 7 + 12
 92 + 0.5 = 0.5 + 92
 etc.

I believe this is how Google gets away with brute force n-gram statistics 
instead of more sophisticated grammars. Its language model is probably 
10^5 times larger than a human model (10^14 bits vs 10^9 bits). Shannon 
observed in 1949 that random strings generated by n-gram models of English 
(where n is the number of either letters or words) look like natural 
language up to length 2n. For a typical human sized model (1 GB text), n 
is about 3 words. To model strings longer than 6 words we would need more 
sophisticated grammar rules. Google can model 5-grams (see 
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html ) 
, so it is able to generate and recognize (thus appear to understand) 
sentences up to about 10 words.
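
To make the n-gram idea concrete, here is a minimal sketch of a word n-gram
model of the kind described above (Python; the toy corpus, the function names
and the simple sampling scheme are illustrative assumptions, not Google's or
Shannon's actual setup):

import random
from collections import defaultdict

def train_ngram(tokens, n):
    # Map each (n-1)-word context to the list of words observed after it.
    model = defaultdict(list)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        model[context].append(tokens[i + n - 1])
    return model

def generate(model, seed, n, length):
    # Extend the seed by repeatedly sampling a continuation of the last n-1 words.
    out = list(seed)
    for _ in range(length):
        continuations = model.get(tuple(out[-(n - 1):]))
        if not continuations:
            break
        out.append(random.choice(continuations))
    return " ".join(out)

tokens = "the dog chased the cat and the cat chased the mouse".split()
model = train_ngram(tokens, 3)
print(generate(model, tokens[:2], 3, 10))

Locally the output looks like the training text, but coherence falls off
quickly beyond a few words, which is the point being made about string length.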



By the way:
The Turing test must convince 30% of the people.
Today there is a system which can already convince 25%:

http://www.sciencedaily.com/releases/2008/10/081013112148.htm


It would be interesting to see a version of the Turing test where the 
human confederate, machine, and judge all have access to a computer with 
an internet connection. I wonder if this intelligence augmentation would 
make the test easier or harder to pass?




-Matthias


 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5
 but you have not shown
 3a) that a language understanding system necessarily(!) has
 these rules
 3b) that a language understanding system necessarily(!) can
 apply such rules

It must have the rules and apply them to pass the Turing
test.

-- Matt Mahoney, [EMAIL PROTECTED]



-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser
But, I still do not agree with the way you are using the incompleteness 
theorem.


Um.  OK.  Could you point to a specific example where you disagree?  I'm a 
little at a loss here . . . .


It is important to distinguish between two different types of 
incompleteness.
1. Normal Incompleteness-- a logical theory fails to completely specify 
something.
2. Godelian Incompleteness-- a logical theory fails to completely specify 
something, even though we want it to.


I'm also not getting this.  If I read the words, it looks like the 
difference between Normal and Godelian incompleteness is based upon our 
desires.  I think I'm having a complete disconnect with your intended 
meaning.



However, it seems like all you need is type 1 completeness for what

you are saying.

So, Godel's theorem is way overkill here in my opinion.


Um.  OK.  So I used a bazooka on a fly?  If it was a really pesky fly and I 
didn't destroy anything else, is that wrong?  :-)


It seems as if you're not arguing with my conclusion but saying that my 
arguments were way better than they needed to be (like I'm being 
over-efficient?) . . . .


= = = = =

Seriously though, I'm having a complete disconnect here.  Maybe I'm just 
having a bad morning but . . . huh?   :-)
If I read the words, all I'm getting is that you disagree with the way that 
I am using the theory because the theory is overkill for what is necessary.


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 9:05 PM
Subject: Re: [agi] constructivist issues


Mark,

I own and have read the book-- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel Escher Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest the chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness 
theorem.


It is important to distinguish between two different types of 
incompleteness.


1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
semantics is used. If a logic's provably-true statements don't match
up to its semantically-true statements, it is incomplete.
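
For reference, the standard textbook formulations behind this distinction
(stated here in LaTeX notation; this is the usual logic-book wording, not
Abram's own):

A logic is (semantically) complete when every semantically valid sentence is
provable: for every sentence $\varphi$, if $\models \varphi$ then $\vdash \varphi$.

Godel's first incompleteness theorem: for any consistent, recursively
axiomatizable theory $T$ that interprets basic arithmetic, there is a sentence
$G_T$ with $T \nvdash G_T$ and $T \nvdash \neg G_T$.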

However, it seems like all you need is type 1 completeness for what
you are saying. Nobody claims that there is a complete, well-defined
semantics for natural language against which we could measure the
provably-true (whatever THAT would mean).

So, Godel's theorem is way overkill here in my opinion.

--Abram

On Wed, Oct 22, 2008 at 7:48 PM, Mark Waser [EMAIL PROTECTED] wrote:

Most of what I was thinking of and referring to is in Chapter 10, Gödel's
Quintessential Strange Loop (pages 125-145 in my version), but I would
suggest that you really need to read the shorter Chapter 9, Pattern and
Provability (pages 113-122), first.

I actually had them conflated into a single chapter in my memory.

I think that you'll enjoy them tremendously.

- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 4:19 PM
Subject: Re: [agi] constructivist issues



Mark,

Chapter number please?

--Abram

On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser [EMAIL PROTECTED] wrote:


Douglas Hofstadter's newest book I Am A Strange Loop (currently available
from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM)
has an excellent chapter showing Godel in syntax and semantics.  I highly
recommend it.

The upshot is that while it is easily possible to define a complete formal
system of syntax, that formal system can always be used to convey something
(some semantics) that is (are) outside/beyond the system -- OR, to
paraphrase -- meaning is always incomplete because it can always be added to
even inside a formal system of syntax.

This is why I contend that language translation ends up being AGI-complete
(although bounded subsets clearly don't need to be -- the question is
whether you get a usable/useful subset more easily with or without first
creating a seed AGI).

- Original Message - From: Abram Demski 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED]
wrote:


It looks like all this disambiguation by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface 

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-23 Thread Mark Waser
Hi.  I don't understand the following statements.  Could you explain it some 
more?

- Natural language can be learned from examples. Formal language can not.

I think that you're basing this upon the methods that *you* would apply to each 
of the types of language.  It makes sense to me that because of the 
regularities of a formal language you would be able to use more effective 
methods -- but it doesn't mean that the methods used on natural language 
wouldn't work (just that they would be as inefficient as they are on natural 
languages).

I would also argue that the same argument applies to the first of the 
following two statements.

- Formal language must be parsed before it can be understood. Natural language 
must be understood before it can be parsed.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 9:23 PM
  Subject: Lojban (was Re: [agi] constructivist issues)


Why would anyone use a simplified or formalized English (with regular 
grammar and no ambiguities) as a path to natural language understanding? Formal 
language processing has nothing to do with natural language processing other 
than sharing a common lexicon that makes them appear superficially similar.

- Natural language can be learned from examples. Formal language can 
not.
- Formal language has an exact grammar and semantics. Natural language 
does not.
- Formal language must be parsed before it can be understood. Natural 
language must be understood before it can be parsed.
- Formal language is designed to be processed efficiently on a fast, 
reliable, sequential computer that neither makes nor tolerates errors, between 
systems that have identical, fixed language models. Natural language evolved to 
be processed efficiently by a slow, unreliable, massively parallel computer 
with enormous memory in a noisy environment between systems that have different 
but adaptive language models.

So how does yet another formal language processing system help us 
understand natural language? This route has been a dead end for 50 years, in 
spite of the ability to always make some initial progress before getting stuck.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 10/22/08, Ben Goertzel [EMAIL PROTECTED] wrote:

  From: Ben Goertzel [EMAIL PROTECTED]
  Subject: Re: [agi] constructivist issues
  To: agi@v2.listbox.com
  Cc: [EMAIL PROTECTED]
  Date: Wednesday, October 22, 2008, 12:27 PM



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled 
via reference to WordNet via usages like run_1, run_2, etc. ... or as you say 
by using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be 
handled in a similar way, e.g. by defining an ontology of preposition meanings 
like with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts 
of subscripts, and in this way to recognize a highly controlled English that 
would be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple 
sentences, so the only real hassle to deal with is disambiguation.   We could 
use similar hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with 
an AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.
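
A minimal sketch of how such subscripted tokens could be read back into
(word, sense) pairs (Python; the helper name and the default-to-sense-1
convention mirroring the suppressed _1's are my own assumptions, and RelEx
itself is not involved):

def read_controlled_english(sentence):
    # Split tokens like "with_2" into (word, sense); unsubscripted words default to sense 1.
    parsed = []
    for token in sentence.split():
        word, _, sense = token.partition("_")
        parsed.append((word, int(sense) if sense.isdigit() else 1))
    return parsed

# read_controlled_english("I ate dinner with_2 my fork")
# -> [('I', 1), ('ate', 1), ('dinner', 1), ('with', 2), ('my', 1), ('fork', 1)]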

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] 
wrote:

 IMHO that is an almost hopeless approach, ambiguity is too 
integral to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always 
use big words and never use small words and/or you use a specific phrase as a 
word.  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, 
Basic 

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-23 Thread Matt Mahoney
--- On Thu, 10/23/08, Mark Waser [EMAIL PROTECTED] wrote:

 Hi.  I don't understand the following 
 statements.  Could you explain it some more?
  
 - Natural language can be learned from examples. Formal language
 can not.

I really mean that formal languages like C++ and HTML are not designed to be 
learned by the machines that implement them. We write a formal specification of 
their syntax and semantics. Obviously they are learnable by humans in the same 
way that humans learn natural languages -- by generalizing from lots of 
examples. Formal languages serve as a bridge between humans and machines. As 
such, a language is designed as a compromise between ease of machine 
specification and ease of human learnability.

 - Formal language must be parsed before it can be understood. Natural
 language must be understood before it can be parsed.

In formal languages, the meaning of a sentence depends heavily on its parse, for 
example:

a = b - c; // a comment
b = c - a; // a comment
// a - b = c; a comment

In natural language, a parse depends greatly on the meanings of the words. For 
example:

- I ate pizza with chopsticks.
- I ate pizza with pepperoni.
- I ate pizza with Bob.

But word order has only a small effect on meaning:

- With Bob I ate pizza.
- I with Bob ate pizza.
- Pizza Bob I ate with.

This is my objection to using formal languages to train AGI in a childhood 
development model like OpenCog (artificial toddler, child, adult, scientist). A 
child would be trained on single words with semantic content like pizza. Then 
an adult would learn increasingly complex grammatical structures. Only at the 
scientist level would an AGI be capable of learning formal languages. There 
really isn't any stage where a clean language like Lojban or Esperanto seems 
to help much with knowledge acquisition. If it did, then we would be teaching 
it in our schools.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: AW: [agi] Understanding and Problem Solving

2008-10-23 Thread Mike Tintner
Guys,

A slightly weird conversation. *Everything* cognitive involves problem-solving. 
Perception (is it a bird or a plane?) involves problem-solving.

Perhaps what you really mean is ...involves *deliberate/conscious* 
problem-solving as opposed to *automatic/unconscious* problem-solving ?


Matthias, 

I say understanding natural language requires the ability to solve 
problems. Do you disagree?  If so, then you must have an explanation for how an 
AI that could understand language would be able to understand novel metaphors 
or analogies without doing any active problem-solving. What is your explanation 
for that?

If on the other hand you agree that NLU entails problem-solving, then 
that is a start. From there we can argue whether the problem-solving abilities 
necessary for NLU are sufficient to allow problem-solving to occur in any 
domain (as I have argued). 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  From: Dr. Matthias Heger [EMAIL PROTECTED]
  Subject: AW: [agi] Understanding and Problem Solving
  To: agi@v2.listbox.com
  Date: Thursday, October 23, 2008, 10:12 AM


  I do not agree. Understanding a domain does not imply the ability to 
solve problems in that domain.

  And the ability to solve problems in a domain does not even imply having a 
generally deeper understanding of that domain.



  Once again my example of the problem to find a path within a graph 
from node A to node B:

  Program p1 (= problem solver) can find a path.

  Program p2  (= expert in understanding) can verify and analyze paths.



  For instance, p2 could be able to compare the length of the path for the 
first half of the nodes with the length of the path for the second half of the 
nodes. It is not necessary that p1 can do this as well.



  P2 can not necessarily find a path. But p1 can not necessarily 
analyze its solution.



  Understanding and problem solving are different things which might 
have a common subset, but it is wrong that one implies the other or vice 
versa.



  And that’s the main reason why natural language understanding is not 
necessarily AGI-complete.



  -Matthias





  Terren Suydam [mailto:[EMAIL PROTECTED]  wrote:






Once again, there is a depth to understanding - it's not simply 
a binary proposition.

Don't you agree that a grandmaster understands chess better 
than you do, even if his moves are understandable to you in hindsight?

If I'm not good at math, I might not be able to solve y=3x+4 
for x, but I might understand that y equals 3 times x plus four. My 
understanding is superficial compared to someone who can solve for x. 

Finally, don't you agree that understanding natural language 
requires solving problems? If not, how would you account for an AI's ability to 
understand novel metaphor? 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] 
wrote:

From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 1:47 AM

Terren Suydam wrote:

  

Understanding goes far beyond mere knowledge - understanding 
*is* the ability to solve problems. One's understanding of a situation or 
problem is only as deep as one's (theoretical) ability to act in such a way as 
to achieve a desired outcome. 

  



I disagree. A grandmaster of chess can explain his decisions 
and I will understand them. Einstein could explain his theory to other 
physicists (at least a subset) and they could understand it.



I can read a proof in mathematics and I will understand it – 
because I only have to understand (= check) every step of the proof.



Problem solving is much much more than only understanding.

Problem solving is the ability to *create* a sequence of 
actions to change a system’s state from A to a desired state B.



For example: find a path from A to B within a graph.

An algorithm which can check a solution and can answer details 
about the solution is not necessarily able to find a solution.



-Matthias









AW: AW: [agi] Understanding and Problem Solving

2008-10-23 Thread Dr. Matthias Heger
Natural language understanding is a problem. And a system with the ability
to understand natural language is obviously able to solve *this* problem.

But the ability to talk about a certain domain does not imply the ability to
solve the problems in that domain.

I have argued this point with my example of the two programs for the domain
of graphs.
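
A minimal sketch of those two programs (Python; the adjacency-dict
representation and the names find_path for p1 and verify_path for p2 are my
own illustration of the example, not code from this thread):

from collections import deque

def find_path(graph, a, b):
    # p1, the problem solver: breadth-first search for some path from a to b.
    frontier, seen = deque([[a]]), {a}
    while frontier:
        path = frontier.popleft()
        if path[-1] == b:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def verify_path(graph, a, b, path):
    # p2, the "understander": checks a proposed path without being able to construct one.
    if not path or path[0] != a or path[-1] != b:
        return False
    return all(v in graph.get(u, []) for u, v in zip(path, path[1:]))

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_path(graph, "A", "D"))                      # e.g. ['A', 'B', 'D']
print(verify_path(graph, "A", "D", ["A", "C", "D"]))   # True

The verifier knows nothing about how to search, yet it can check and analyze
any path it is given, which is the asymmetry being argued here.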

 

As Ben has said, it essentially depends on definitions. Probably, you have a
different understanding of the meaning of understanding ;-)

But for me there is a difference between understanding a domain and the
ability to solve problems in a domain.

 

I can understand a car, but this does not imply that I can drive a car.

I can understand a proof but this does not imply that I can create it.

My computer understands my programs because it executes every step correctly
but it cannot create a single statement in the language it understands.

 

Have you never experienced a situation where you could not solve a problem,
but when another person showed you the solution you understood it at
once?

You could not create it, but you did not need to learn anything to understand it.

Of course, often when you see a solution for a problem then you learn to
solve it at the same time. But this is exactly the reason why you have the
illusion that understanding and problem solving are the same.

 

Think about a very difficult proof. You can understand every step. But when
you get just an empty piece of paper to write it down again then you cannot
remember the whole proof and thus you cannot create it. But you can
understand it, if you read it. Obviously there is a difference between
understanding and problem solving.


 

I am sure you want to define understanding differently. But I do not
agree, because then the term understanding would be overloaded and too much
mystified.

And we already have too many terms which are unnecessarily mystified in AI.

 

- Matthias

 

 

Terren Suydam [mailto:[EMAIL PROTECTED] wrote





Matthias, 

I say understanding natural language requires the ability to solve problems.
Do you disagree?  If so, then you must have an explanation for how an AI
that could understand language would be able to understand novel metaphors
or analogies without doing any active problem-solving. What is your
explanation for that?

If on the other hand you agree that NLU entails problem-solving, then that
is a start. From there we can argue whether the problem-solving abilities
necessary for NLU are sufficient to allow problem-solving to occur in any
domain (as I have argued). 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: AW: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 10:12 AM

I do not agree. Understanding a domain does not imply the ability to solve
problems in that domain.

And the ability to solve problems in a domain does not even imply having a
generally deeper understanding of that domain.

 

Once again my example of the problem to find a path within a graph from node
A to node B:

Program p1 (= problem solver) can find a path.

Program p2  (= expert in understanding) can verify and analyze paths.

 

For instance, p2 could be able to compare the length of the path for the first
half of the nodes with the length of the path for the second half of the
nodes. It is not necessary that p1 can do this as well.

 

P2 can not necessarily find a path. But p1 can not necessarily analyze its
solution.

 

Understanding and problem solving are different things which might have a
common subset, but it is wrong that one implies the other or vice versa.

 

And that's the main reason why natural language understanding is not
necessarily AGI-complete.

 

-Matthias

 

 

Terren Suydam [mailto:[EMAIL PROTECTED]  wrote:

 



Once again, there is a depth to understanding - it's not simply a binary
proposition.

Don't you agree that a grandmaster understands chess better than you do,
even if his moves are understandable to you in hindsight?

If I'm not good at math, I might not be able to solve y=3x+4 for x, but I
might understand that y equals 3 times x plus four. My understanding is
superficial compared to someone who can solve for x. 

Finally, don't you agree that understanding natural language requires
solving problems? If not, how would you account for an AI's ability to
understand novel metaphor? 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 1:47 AM

Terren Suydam wrote:

  

Understanding goes far beyond mere knowledge - understanding *is* the
ability to solve problems. One's understanding of a situation or problem is
only as deep as one's (theoretical) ability to act in such a way as to
achieve a desired outcome. 

  

 

I disagree. A 

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Trent Waddington
On Fri, Oct 24, 2008 at 8:41 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Yes ... at the moment the styles of human and computer chess players are
 different enough that doing well against computer players does not imply
 doing nearly equally well against human players ... though it certainly
 helps a lot ...

Does it?  I've heard many chess instructors say that playing against a
computer hinders young players more than it helps.

Trent




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Ben Goertzel
On Thu, Oct 23, 2008 at 6:46 PM, Trent Waddington 
[EMAIL PROTECTED] wrote:

 On Fri, Oct 24, 2008 at 8:41 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  Yes ... at the moment the styles of human and computer chess players are
  different enough that doing well against computer players does not imply
  doing nearly equally well against human players ... though it certainly
  helps a lot ...

 Does it?  I've heard many chess instructors say that playing against a
 computer hinders young players more than it helps.



I suspect that's a half-truth...





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Trent Waddington
On Fri, Oct 24, 2008 at 8:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 I suspect that's a half-truth...

Well as a somewhat good chess instructor myself, I have to say I
completely agree with it.  People who play well against computers
rarely rank above first time players.. in fact, most of them tend to
not even know the rules of the game.. having had the computer there to
coddle them at every move.

Trent




AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
Right now there is a world championship in chess. My chess programs (e.g.
Fritz 11) can give a ranking of all moves for an arbitrary chess
position.

The program agrees with the grandmasters on which moves are in the top 5. In
most situations it even agrees on which move is the best one.

Thus, the human-style chess of top grandmasters and computer chess are quite
the same today.

 

- Matthias

 

 

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED]]
Sent: Friday, 24 October 2008 00:41
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

 

On Thu, Oct 23, 2008 at 5:38 PM, Trent Waddington
[EMAIL PROTECTED] wrote:

On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I am sure that everyone who learns chess by playing against chess
computers
 and is able to learn good chess playing (which is not sure as also not

 everyone can learn to be a good mathematician) will be able to be a good
 chess player against humans.

And you're wrong.


Trent



Yes ... at the moment the styles of human and computer chess players are
different enough that doing well against computer players does not imply
doing nearly equally well against human players ... though it certainly
helps a lot ...

ben g 

 


 






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Ben Goertzel
Yeah, but these programs did not learn to play via playing other computer
players or studying the rules of the game ... they use alpha-beta pruning
combined with heuristic evaluation functions carefully crafted by human
chess experts ... i.e. they are created based on human knowledge about
playing human players...
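
For readers who have not seen it, a minimal sketch of the alpha-beta search
mentioned above (Python; evaluate, moves and apply_move are hypothetical
callbacks standing in for the hand-crafted evaluation function and move
generator, not part of any actual engine):

def alphabeta(state, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    # Depth-limited minimax with alpha-beta cutoffs.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False, evaluate, moves, apply_move))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizing opponent will avoid this branch
        return value
    value = float("inf")
    for m in legal:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True, evaluate, moves, apply_move))
        beta = min(beta, value)
        if beta <= alpha:
            break  # alpha cutoff: the maximizing player already has a better option
    return value

All of the chess knowledge lives in evaluate and in move ordering, which is
why such programs reflect their designers' knowledge rather than learning.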

I do think that a sufficiently clever AGI should be able to learn to play
chess very well based on just studying the rules.  However, it's notable
that **either no, or almost no, humans have ever done this** ... so it would
require a quite high level of intelligence in this domain...

ben g

On Thu, Oct 23, 2008 at 7:25 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  Just now there is a world championship in chess. My chess programs (e.g.
 Fritz 11) can give a ranking for all moves given an arbitrary chess
 position.

 The program agrees with the grandmasters which moves are in the top 5. In
 most situations it even agrees which move is the best one.

 Thus, human style chess of top grandmasters and computer chess are quite
 the same today.



 - Matthias







 *From:* Ben Goertzel [mailto:[EMAIL PROTECTED]]
 *Sent:* Friday, 24 October 2008 00:41
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] If your AGI can't learn to play chess it is no AGI





 On Thu, Oct 23, 2008 at 5:38 PM, Trent Waddington 
 [EMAIL PROTECTED] wrote:

 On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger [EMAIL PROTECTED]
 wrote:
  I am sure that everyone who learns chess by playing against chess
 computers
  and is able to learn good chess playing (which is not sure as also not

  everyone can learn to be a good mathematician) will be able to be a good
  chess player against humans.

 And you're wrong.


 Trent



 Yes ... at the moment the styles of human and computer chess players are
 different enough that doing well against computer players does not imply
 doing nearly equally well against human players ... though it certainly
 helps a lot ...

 ben g






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
I am very impressed by the performance of humans in chess compared to
computer chess.

The computer steps through millions(!) of positions per second. And even if
the best chess players say they only evaluate at most 3 positions per second, I
am sure that this cannot be true, because there are so many traps in chess
which must be considered.

 

I think humans represent chess by a huge number of *visual* patterns. The
chessboard is 8x8 squares. Probably, a human considers all 2x2, 3x3, 4x4 and
even larger subsets of the chessboard at once, besides the possible moves. We
see if a pawn is alone or if a knight is at the edge of the board. We see if
the pawns are on a diagonal, and much more. I would guess that the human
brain observes many thousands of visual patterns in a single position.

This is the only explanation I have for why the best chess players still have
a little chance of winning against computers.

 

Even a beginner who has never played chess would see some patterns in the
initial position. All pieces of the same color are together on different
sides. All pawns of the same color are in the same row, and so on. The
interesting question is why the beginner can already see regularities. I
think the human has a lot of visual bias which is also useful for seeing
patterns in chess. On the other hand, visual embodied experience is of course
important too. In my opinion, sophisticated vision is much more important
for an artificial human than natural language understanding.

 

-Matthias

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED]]
Sent: Friday, 24 October 2008 01:53
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 


Yeah, but these programs did not learn to play via playing other computer
players or studying the rules of the game ... they use alpha-beta pruning
combined with heuristic evaluation functions carefully crafted by human
chess experts ... i.e. they are created based on human knowledge about
playing human players...

I do think that a sufficiently clever AGI should be able to learn to play
chess very well based on just studying the rules.  However, it's notable
that **either no, or almost no, humans have ever done this** ... so it would
require a quite high level of intelligence in this domain...

ben g






RE: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Benjamin Johnston

 Within the domain of chess there is everything to know about chess.
 So when it comes to being a good chess player, learning chess from playing
 chess must be sufficient. Thus, an AGI which is not able to enhance its
 abilities in chess from playing chess alone is no AGI.

I'm jumping into this conversation a little late, but I think chess is
something that should be avoided in the context of AGI. I have three reasons
for this, but as far as I can see, it has primarily been the first of these
that has been discussed.

1. Intelligence is too easy to fake in chess
2. Chess is too hard to learn from scratch
3. Chess is tainted with the failed ambitions of early GOFAI.

The success of Deep Blue and more modern chess playing systems like Fritz
and Rybka, with hand-coded search algorithms and heuristics demonstrates how
easy it is to fake real intelligence. I'm not a good chess player, but it
wouldn't be too hard for me to implement a search algorithm over a simple
heuristic evaluation function, resulting in a chess system that can outplay
me. In contrast, the seemingly less intelligent problem of walking is
very hard to get right by hand-coding my own knowledge (the nicest walks are
almost always discovered by machine learning). That is, in chess it is too
easy to fake intelligence by hand coding your own knowledge into the
heuristics. Less structured problems, in which expert knowledge is little
assistance, are a better challenge for early AGI systems.

Chess is also too difficult a problem to learn in a general way from zero
knowledge. The difficulty of chess in the general game playing competitions
confirms this (last time I heard from one of the teams, even though the best
systems do pretty well on simpler games, in chess they can't do much more
than simply play legal moves). When playing chess, we draw on knowledge
of space and time and concepts like control and domination. We quickly
realize by ourselves that it is good to control the centre of the board, and
that the queen is often worth defending, and that even though you can win
with just two pieces it is generally bad to lose pieces. But a real AGI
would have to discover concepts like "center" and "more powerful" by itself
("center" is a difficult concept to express if you only know about 64 squares
and which ones are next to each other). The chess board itself is too large,
the moves are too complicated, and the rewards come far far too late to
expect a system to automatically discover how to play good chess with no
prior knowledge of simpler games or of the larger world. I suspect that the
complexity of the problem is such that a system that learns chess without
prior knowledge would discover quirky rules that provide local maxima: for
example, it might unintentionally learn to sacrifice many of its own pieces
because doing so makes the search space smaller, so that the system can
think more moves ahead (rather than, say, developing heuristics to simply
disregard some of its pieces).

And finally, while I did not personally experience the early days of AI,
there seems to be an implication in some of the early literature that if
only we could create a chess playing robot, then we'll have solved the
problem of AI. I think AI has moved on from this simple attitude, and even
mentioning chess seems to - at least in my mind - sound like forgetting all
the mistakes and lessons from the past. Even with a good argument for
resurrecting chess, and an explanation for the past failure of the game to
generate real progress in strong AI, I still suspect that mentioning chess
is bad marketing for a young field.

-Ben






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Trent Waddington
On Fri, Oct 24, 2008 at 10:38 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I think humans represent chess by a huge number of *visual* patterns.

http://www.eyeway.org/inform/sp-chess.htm

Trent





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Mike Tintner


Trent:

On Fri, Oct 24, 2008 at 10:38 AM, Dr. Matthias Heger [EMAIL PROTECTED] 
wrote:

I think humans represent chess by a huge number of *visual* patterns.


http://www.eyeway.org/inform/sp-chess.htm



 We've been over this one several times in the past (perhaps you haven't been 
 here). Blind people can see - they can draw the shapes of objects. They 
 create their visual shapes out of touch. Touch comes prior to vision in 
 evolution.


 All living creatures are common sense intelligences. IOW the senses are 
 integrated and information is shared between them. It is only at an 
 intellectual level that we can think we can function with only one sense in 
 isolation. It's actually impossible in practice. [See Michael Tye.] (And 
 there is much, much food for thought in that reality.)


 So yes, Matthias is correct. How, other than visually (and 
 common-sensically), do you think people play chess?


 P.S. Matthias seems to be cheerfully cutting his own throat here. The idea 
 of a single-domain AGI or pre-AGI is a contradiction in terms every which 
 way - not just in terms of domains/subjects or fields, but also sensory 
 domains.







Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Trent Waddington
On Fri, Oct 24, 2008 at 1:04 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 We've been over this one several times in the past (perhaps you haven't been
 here). Blind people can see - they can draw the shapes of objects. . They
 create their visual shapes out of touch.Touch comes prior to vision in
 evolution

Just cause you've repeated yourself several times doesn't mean you've
convinced anyone.

If you redefine visual to mean adjacency then maybe you've got a
nice workable theory there.. Objects exist in the world and the brain
has to have a good model of this..

Trent




Re: [agi] constructivist issues

2008-10-23 Thread Abram Demski
Mark,

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of provably true and
semantically true for natural language). Does that make sense, or am
I still confusing?

Matthias,

I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram

On Thu, Oct 23, 2008 at 4:07 PM, Mark Waser [EMAIL PROTECTED] wrote:
 So to sum up, while you think linguistic vagueness comes from Godelian
 incompleteness, I think Godelian incompleteness can't even be defined
 in this context, due to linguistic vagueness.

 OK.  Personally, I think that you did a good job of defining Godelian
 Incompleteness this time but arguably you did it by reference and by
 building a new semantic structure as you went along.

 On the other hand, you now seem to be arguing that my thinking that
 linguistic vagueness comes from Godelian incompleteness is wrong because
 Godelian incompleteness can't be defined . . . .

 I'm sort of at a loss as to how to proceed from here.  If Godelian
 Incompleteness can't be defined, then by definition I can't prove anything
 but you can't disprove anything.

 This is nicely Escheresque and very Hofstadterian but . . . .


 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, October 23, 2008 11:54 AM
 Subject: Re: [agi] constructivist issues



