> Manipulating patterns requires both read and write operations: data 
> structures are changed. Translation requires only read operations on the 
> patterns of the internal model.



So translation is a pattern manipulation where the result isn't stored?
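If I follow the claim, it could be sketched like this (a toy in Python; every name and structure here is mine, purely illustrative, not anyone's actual design):

```python
# Toy sketch of the claimed read/write distinction. The "internal model"
# is just a dict here; nothing about this reflects a real AGI design.

def translate(model, concept):
    """Translation: read-only access to the internal patterns."""
    return f"{concept} is {model[concept]}"

def manipulate(model, concept, new_value):
    """Manipulation: the data structure itself is changed."""
    model[concept] = new_value

model = {"methane": "CH4"}
sentence = translate(model, "methane")        # model is left untouched
manipulate(model, "methane", "CH4, a gas")    # model is rewritten
```

If that is the intended distinction, then translation is just a manipulation whose result is emitted rather than stored back.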



> I disagree that an AGI must have some process for learning language. If we 
> concentrate just on the domain of mathematics, we could give the AGI all the 
> rules for a sufficient language to express its results and to understand our 
> questions.



The domain of mathematics is complete and unambiguous.  A mathematics AI is not 
a GI in my book.  It won't generalize to the real world until it handles 
incompleteness and ambiguity (which is my objection to your main analogy).



(Note: I'm not saying that it might not be a good first step... but I don't 
believe that it is on the shortest path to GI.)



> New definitions make communication more convenient, but they are not 
> necessary.

 

Wrong.  Methane is not a new definition; it is a new label.  New definitions 
that combine lots of raw data into much more manipulable knowledge are exactly 
as necessary as a third-, fourth-, or fifth-generation language is in place of 
machine language.
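To make the point concrete, here is a rough sketch (mine, and only illustrative) of the difference between a label and a definition that packages structure:

```python
# A label is one name for one raw datum; a definition combines raw data
# into manipulable knowledge. (Illustrative only; "alkane" is my example,
# not anything from the discussion.)

label = {"methane": "CH4"}  # a new label: just renames a known expression

def alkane(n):
    """A definition: captures the whole family CnH(2n+2) at once."""
    carbons = "C" if n == 1 else f"C{n}"
    return f"{carbons}H{2 * n + 2}"
```

Like a higher-level language construct, the definition can be composed, iterated, and reasoned about in a way that a pile of raw formulas cannot.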



> I don't know the telephone game. The details are essential. It is not 
> essential where the data comes from and where it ends up. Only the process of 
> translating internal data into a certain language, and vice versa, is 
> important.



Start with a circle of people.  Tell the first person a phrase of reasonable 
length, have them tell the next, and so on.  The end result is fascinating and 
very similar to what happens when an incompetent pair of translators attempts 
to translate from one language to another and back again.
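If it helps, the game can even be simulated crudely (a toy sketch; the noise model is entirely made up by me):

```python
import random

def repeat_imperfectly(phrase, rng, drop_rate=0.15):
    """One speaker passes the phrase on, occasionally losing a word --
    a crude stand-in for mishearing and paraphrase."""
    return " ".join(w for w in phrase.split() if rng.random() > drop_rate)

def telephone_game(phrase, people=10, seed=0):
    """Pass the phrase around the circle, one lossy hop per person."""
    rng = random.Random(seed)
    for _ in range(people):
        phrase = repeat_imperfectly(phrase, rng)
    return phrase
```

After ten repeaters, what survives is typically a garbled subset of the original, which is the point: each hop is lossy even though every participant is "just translating."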



> It is clear that an AGI needs an interface for human beings. But the question 
> in this discussion is whether the language interface is a key point in AGI or 
> not. In my opinion it is not a key point. It is just a communication protocol. 
> The real intelligence has nothing to do with language understanding. 
> Therefore we should use a simple, formal, hard-coded language for the first AGI.



The communication protocol needs to be extensible to handle output after 
learning or a transition into a new domain.  How do you ground new concepts?  
More importantly, it needs to be extensible to support teaching the AGI.  As I 
keep saying, how are you going to make your communication protocol extensible?  
Real GENERAL intelligence has EVERYTHING to do with extensibility.
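To be concrete about what I mean by extensible (a sketch of mine, not a proposal): a hard-coded protocol rejects anything outside its fixed vocabulary, so teaching a new domain requires some mechanism for the vocabulary itself to grow:

```python
# Minimal sketch of protocol extensibility (illustrative names only):
# a fixed protocol fails on unknown messages; an extension hook lets
# the vocabulary grow, which teaching a new domain requires.

handlers = {"ask": lambda payload: f"answering: {payload}"}

def receive(kind, payload):
    """A hard-coded protocol: anything unregistered is simply rejected."""
    if kind not in handlers:
        raise ValueError(f"unknown message kind: {kind!r}")
    return handlers[kind](payload)

def extend_protocol(kind, handler):
    """The extension hook that a hard-coded language lacks."""
    handlers[kind] = handler

extend_protocol("teach", lambda payload: f"learned: {payload}")
```

The open question is where `extend_protocol` comes from in a "simple formal hard coded language," since without it every new concept is an unknown message.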



> I don't see any problems with my model, and I do not see any flaws which I 
> haven't answered.

> I haven't seen any point where my analogy comes up short.



I keep pointing out that your model, which separates communication from 
database updating, depends upon a fully specified model and does not tolerate 
ambiguity (in other words, it lacks extensibility).  You continue not to 
answer these points.



Unless you can handle valid objections by showing why they aren't valid, your 
model is disproven by counter-example.



  ----- Original Message ----- 
  From: Dr. Matthias Heger 
  To: [email protected] 
  Sent: Sunday, October 19, 2008 4:53 PM
  Subject: AW: AW: [agi] Re: Defining AGI


  Mark Waser wrote:

   

  > How is translating patterns into language different from manipulating 
  > patterns? It seems to me that they are *exactly* the same thing. How do 
  > you believe that they differ?

   

  Manipulating patterns requires both read and write operations: data 
  structures are changed. Translation requires only read operations on the 
  patterns of the internal model.

   

   

  > Do you really believe that if A is easier than B then that makes A easy? 
  > How about if A is leaping a tall building in a single bound and B is 
  > jumping to the moon?

   

  The word *easy*  is not exactly definable.

   

   

  > Do you believe that language is fully specified? That we can program 
  > English into an AGI by hand?

   

  No. That's the reason why I would not use human language for the first AGI.

   

  > Yes, I imagine that an AGI must have some process for learning language, 
  > because language is necessary for learning knowledge, and knowledge is 
  > necessary for intelligence. What part of that do you disagree with? 
  > Please be specific.

   

  I disagree that an AGI must have some process for learning language. If we 
  concentrate just on the domain of mathematics, we could give the AGI all the 
  rules for a sufficient language to express its results and to understand our 
  questions.

   

   

   

   >>>

  > And this is where we are not communicating. Since language is not fully 
  > specified, the participants in many conversations are *constantly* 
  > creating and learning language as a part of the process of communication. 
  > This is where Gödel's incompleteness comes in. To be a General 
  > Intelligence, you must be able to extend beyond what is currently known 
  > and specified into new domains. Any time that we are teaching or learning 
  > (i.e., modifying our model of the world), we are also necessarily 
  > extending our models of each other and of language. The computer database 
  > analogy you are basing your entire argument upon does not have the 
  > necessary features/complexity to be an accurate or useful analogy.

  <<< 

   

  Language only has to grow if you make new definitions and want to 
  communicate a definition to another agent. But new definitions are not 
  necessary for general intelligence. If you define

  Methane := CH4

  then it is your choice whether to say the new word "methane" or to use the 
  known expression CH4. New definitions make communication more convenient, 
  but they are not necessary.

   

  ***Even if you change your model and your language at the same time, there 
  is still a strict distinction between them. Language would still be used 
  only for communication, not for the data structures of the world model's 
  patterns or for the algorithms which manipulate those patterns.***

   

   

  > Again, I disagree. You added internal details, but the end result after 
  > the details are hidden is that e-mail programs are just point-to-point 
  > repeaters. That is why I used the examples (the telephone game and 
  > round-trip (mis)translations) that I did, which you did not address.

   

  I don't know the telephone game. The details are essential. It is not 
  essential where the data comes from and where it ends up. Only the process 
  of translating internal data into a certain language, and vice versa, is 
  important.

   

  >> You *believe* that language cannot be separated from intelligence. I 
  >> don't, and I have described a model which has a strict separation. 
  >> Neither of us has a proof.

   

   

   >>>

  Three points. 

  1.       My statement was that intelligence can't be built without 
  language/communication. That is entirely different from claiming that they 
  can't be separated. I also gave reasoning for why this is the case, which 
  you haven't addressed.

  <<< 

   

  The main point in this discussion is whether language/communication can be 
  separated from intelligence.

  It is clear that an AGI needs an interface for human beings. But the 
  question in this discussion is whether the language interface is a key 
  point in AGI or not. In my opinion it is not a key point. It is just a 
  communication protocol. The real intelligence has nothing to do with 
  language understanding. Therefore we should use a simple, formal, 
  hard-coded language for the first AGI.

   

  >>> 

  2.       Your model has serious flaws that you have not answered. You are 
  relying upon an analogy with points that you have not shown you can defend. 
  Until you do so, your model is invalidated.

  <<< 

   

  I don't see any problems with my model, and I do not see any flaws which I 
  haven't answered.

   

  >>> 

  3.  You have not provided a disproof or counter-example to what I am 
  saying. I have clearly specified where your analogy comes up short and have 
  pointed out other inaccuracies in your statements, while you have not done 
  so for any of mine (other than of the "tis too, tis not" variety).

   <<<

  I haven't seen any point where my analogy comes up short.

   

  >>> 

  I have had the courtesy to directly address your points with clear 
  counter-examples. Please return the favor: do not simply drop my examples 
  without replying to them and revert back to global statements. Global 
  statements are great for an initial exposition, but eventually you have to 
  get down to the details and work out the nitty-gritty. Thanks.

  <<< 

   

  I haven't dropped your examples.

   




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
