>> The process of translating patterns into language should be easier than the 
>> process of creating patterns or manipulating patterns. 



How is translating patterns into language different from manipulating patterns? 
 It seems to me that they are *exactly* the same thing.  How do you believe 
that they differ?



>> Therefore I say that language understanding is easy. 



Do you really believe that if A is easier than B then that makes A easy?  How 
about if A is leaping a tall building in a single bound and B is jumping to the 
moon?



>> When you say that language is not fully specified then you probably imagine 
>> an AGI which learns language.



Do you believe that language is fully specified?  That we can program English 
into an AGI by hand?



Yes, I imagine that an AGI must have some process for learning language because 
language is necessary for learning knowledge and knowledge is necessary for 
intelligence.  What part of that do you disagree with?  Please be specific.



>> This is a completely different thing. Learning language is difficult, as I 
>> have already mentioned.



And this is where we are not communicating.  Since language is not fully 
specified, the participants in many conversations are *constantly* creating 
and learning language as part of the process of communication.  This
is where Gödel's incompleteness comes in.  To be a General Intelligence, you 
must be able to extend beyond what is currently known and specified into new 
domains.  Any time that we are teaching or learning (i.e. modifying our model 
of the world), we are also necessarily extending our models of each other and 
language.  The computer database analogy you are basing your entire argument 
upon does not have the necessary features/complexity to be an accurate or 
useful analogy.



>> Email programs are not just point to point repeaters.

>> They receive data in a certain communication protocol. They translate these 
>> data into an internal representation and store the data. And they can 
>> translate their internal data into a linguistic representation to send the 
>> data to another email client. This process  of communication is conceptually 
>> the same as we can observe it with humans.



Again, I disagree.  You added internal details, but once those details are 
hidden, the end result is that e-mail programs are just point-to-point 
repeaters.  That is why I used the examples that I did (the telephone game and 
round-trip (mis)translations), which you did not address.
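A toy simulation makes the contrast concrete.  The confusion table and the 
substitution probability below are invented for illustration (real translation 
is far subtler, though still lossy); the point is only that verbatim repeaters 
compose losslessly while lossy translators compound their errors:

```python
import random

def repeater(msg, rng=None):
    # An e-mail-style repeater: passes the data through verbatim.
    return msg

def lossy_translator(msg, rng):
    # A toy stand-in for translation: each word may be swapped for a
    # (mis)translation drawn from an invented confusion table.
    confusions = {"spirit": "drink", "willing": "strong",
                  "flesh": "meat", "weak": "rotten"}
    return " ".join(confusions.get(w, w) if rng.random() < 0.5 else w
                    for w in msg.split())

def pass_through_chain(msg, hop, hops, rng=None):
    # Route the message through `hops` successive copies of `hop`.
    for _ in range(hops):
        msg = hop(msg, rng)
    return msg

original = "the spirit is willing but the flesh is weak"

# A chain of verbatim repeaters never degrades the message ...
assert pass_through_chain(original, repeater, 10) == original

# ... while a chain of lossy translators drifts toward
# "the drink is strong but the meat is rotten".
print(pass_through_chain(original, lossy_translator, 10, random.Random(0)))
```

Run enough hops and every substitutable word eventually drifts, while the 
repeater chain never changes a single character.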



>> The word "meaning" was badly chosen by me. But brains do not transfer 
>> meaning either. They also just transfer data. Meaning is a mapping. 



As I said, brains *try* to transfer meaning (though they must do it via the 
transfer of data).  If you don't believe that brains try (and most frequently 
succeed) to transfer meaning, then we should just agree to disagree.



>> You *believe* that language cannot be separated from intelligence. I don't 
>> and I have described a model which has a strict separation. We both have no 
>> proof.



Three points. 

1.  My statement was that intelligence can't be built without 
language/communication.  That is entirely different from the claim that they 
can't be separated.  I also gave reasoning for why this is the case, which you 
haven't addressed.

2.  Your model has serious flaws that you have not answered.  You are relying 
upon an analogy with points that you have not shown you can defend.  Until you 
do, those flaws invalidate your model.

3.  You have not provided a disproof or counter-example to what I am saying.  I 
have clearly specified where your analogy comes up short, and other 
inaccuracies in your statements, while you have not done so for any of mine 
(other than objections of the "tis too, tis not" variety).



I have had the courtesy to directly address your points with clear 
counter-examples.  Please return the favor: do not simply drop my examples 
without replying to them and revert to global statements.  Global statements 
are great for an initial exposition, but eventually you have to get down to the 
details and work out the nitty-gritty.  Thanks.



        Mark




----- Original Message ----- 

  From: Dr. Matthias Heger 
  To: [email protected] 
  Sent: Sunday, October 19, 2008 2:19 PM
  Subject: AW: AW: [agi] Re: Defining AGI


  The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy. 

   

  When you say that language is not fully specified then you probably imagine 
an AGI which learns language.

  This is a completely different thing. Learning language is difficult, as I 
have already mentioned.

   

  Language cannot be translated into meaning. Meaning is a mapping from a 
linguistic string to patterns.

   

  Email programs are not just point to point repeaters.

  They receive data in a certain communication protocol. They translate these 
data into an internal representation and store the data. And they can translate 
their internal data into a linguistic representation to send the data to 
another email client. This process of communication is conceptually the same as 
we observe with humans.
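The receive / translate / store / re-emit loop described above can be sketched 
as a round trip through a toy wire format (the "protocol" here is invented for 
illustration; real clients use SMTP/IMAP/MIME):

```python
# Toy round trip: wire format -> internal representation -> wire format.

def parse(wire):
    # Translate protocol data into an internal representation (a dict).
    headers, _, body = wire.partition("\n\n")
    msg = dict(line.split(": ", 1) for line in headers.splitlines())
    msg["Body"] = body
    return msg

def serialize(msg):
    # Translate the internal representation back into the wire format.
    headers = "\n".join(f"{k}: {v}" for k, v in msg.items() if k != "Body")
    return headers + "\n\n" + msg["Body"]

wire = "From: matthias\nTo: mark\n\nLanguage is a protocol."
assert serialize(parse(wire)) == wire   # the round trip is lossless
```

Note that this round trip is lossless precisely because the protocol is fully 
specified, which is the property Mark argues natural language lacks.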

  The word "meaning" was badly chosen by me. But brains do not transfer meaning 
either. They also just transfer data. Meaning is a mapping. 

   

  You *believe* that language cannot be separated from intelligence. I don't 
and I have described a model which has a strict separation. We both have no 
proof.

   

  - Matthias

   

  >>> 

  Mark Waser [mailto:[EMAIL PROTECTED]] wrote:



   

   

  BUT!  This also holds true for language!  Concrete unadorned statements 
convey a lot less information than statements loaded with adjectives, adverbs, 
or even more markedly analogies (or innuendos or . . . ).

  A child cannot pick up the same amount of information that an adult can from 
a sentence that the child thinks it understands (and does understand, to some 
degree).

  Language is a knowledge domain like any other and high intelligences can use 
it far more effectively than lower intelligences.

   

  ** Or, in other words, I am disagreeing with the statement that "the process 
itself needs not much intelligence".

   

  Saying that the understanding of language itself is simple is like saying 
that chess is simple because you understand the rules of the game.

  Gödel's Incompleteness Theorem can be used to show that there is no upper 
bound on the complexity of language, or on the intelligence necessary to pack 
meaning/knowledge into language and extract it back out.

   

  Language is *NOT* just a top-level communications protocol, because it is not 
fully specified and because it is tremendously context-dependent (not to 
mention entirely Gödelian).  These two reasons are why it *IS* inextricably 
tied into intelligence.

   

  I *might* agree that the concrete language of lower primates and young 
children is separate from intelligence, but there is far more going on in adult 
language than a simple communications protocol.

   

  E-mail programs are simply point-to-point repeaters of language (NOT 
meaning!).  Intelligences generally don't exactly repeat language but *try* to 
repeat meaning.  The game of telephone is a tremendous example of why language 
*IS* tied to intelligence (or look at the results of translating simple phrases 
into another language and back -- "The drink is strong but the meat is 
rotten").  Translating language to and from meaning (i.e. your domain model) is 
the essence of intelligence.

   

  How simple is the understanding of the above?  How much are you having to 
fight to relate it to your internal model (assuming that it's even compatible 
:-)?

   

  I don't believe that intelligence inherently depends upon language EXCEPT 
that language is necessary to convey knowledge/meaning (in order to build 
intelligence in a reasonable timeframe) and that language is influenced by and 
influences intelligence, since it is basically the core of the critical 
meta-domains of teaching, learning, discovery, and alteration of your internal 
model (the effectiveness of which *IS* intelligence).  Future AGI and humans 
will undoubtedly not only have a much richer language but also a much richer 
repertoire of second-order (and higher) features expressed via language.

   

  ** Or, in other words, I am strongly disagreeing that "intelligence is 
separated from language understanding".  I believe that language understanding 
is the necessary tool that intelligence is built with, since it is what puts 
the *contents* of intelligence (i.e. the domain model) into intelligence.  
Trying to build an intelligence without language understanding is like trying 
to build software with only machine language, or knowledge with only raw 
observable data points, instead of building up to more complex entities: 
third-, fourth-, and fifth-generation programming languages instead of machine 
language, and knowledge instead of mere data points.

   

  BTW -- Please note, however, that the above does not imply that I believe 
that NLU is the place to start in developing AGI.  Quite the contrary -- NLU 
rests upon such a large domain model that I believe that it is 
counter-productive to start there.  I believe that we need to start with limited 
domains and learn about language, internal models, and grounding without 
brittleness in tractable domains before attempting to extend that knowledge to 
larger domains.

   

    ----- Original Message ----- 

    From: David Hart 

    To: [email protected] 

    Sent: Sunday, October 19, 2008 5:30 AM

    Subject: Re: AW: [agi] Re: Defining AGI

     


    An excellent post, thanks!

    IMO, it raises the bar for discussion of language and AGI, and should be 
carefully considered by the authors of future posts on the topic of language 
and AGI. If the AGI list were a forum, Matthias's post should be pinned!

    -dave

    On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> 
wrote:

    The process of outwardly expressing meaning may be fundamental to any social
    intelligence but the process itself needs not much intelligence.

    Every email program can receive meaning, store meaning, and express it
    outwardly in order to send it to another computer. It can even do so without
    loss of any information. Regarding this point, it already outperforms
    humans, who have no conscious access to the full meaning (information) in
    their brains.

    The only thing which needs much intelligence, from today's point of view,
    is the learning of the process of outwardly expressing meaning, i.e. the
    learning of language. The understanding of language itself is simple.

    To show that intelligence is separated from language understanding, I have
    already given the example that a person could have spoken with Einstein
    without needing to have the same intelligence. Other examples are humans
    who cannot hear or speak but are intelligent. They only have the problem of
    getting knowledge from other humans, since language is the common social
    communication protocol for transferring knowledge from brain to brain.

    In my opinion language is overestimated in AI for the following reason:
    When we think, we believe that we think in our language. From this we
    conclude that our thoughts are inherently structured by linguistic elements.
    And if our thoughts are so deeply connected with language, then it is a
    small step to conclude that our whole intelligence depends inherently on
    language.

    But this is a misconception.
    We do not have conscious control over all of our thoughts. We cannot be
    aware of most of the activity within our brains when we think.
    Nevertheless, it is very useful and even essential for human intelligence
    to be able to observe at least a subset of one's own thoughts. It is this
    subset which we usually identify with the whole set of thoughts. But in
    fact it is just a tiny subset of all that happens in the 10^11 neurons.
    For the top-level observation of its own thoughts, the brain uses the
    learned language.
    But this does not contradict the point that language is just a
    communication protocol and nothing else. The brain translates its patterns
    into language and routes this information to its own input regions.

    The reason why the brain uses language in order to observe its own thoughts
    is probably the following:
    If a person A wants to communicate some of its patterns to a person B, then
    it has to solve two problems:
    1. How to compress the patterns?
    2. How to send the patterns to person B?
    The solution to both problems is language.

    If a brain wants to observe its own thoughts, it has to solve the same
    problems.
    The thoughts have to be compressed. If not, you would observe every element
    of your thoughts and you would end up in an explosion of complexity. So why
    not use the same compression algorithm as is used for communication with
    other people? That is the reason why the brain uses language when it
    observes its own thoughts.
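This two-uses-of-one-compressor idea can be sketched as a toy model (the Agent 
class, the activation numbers, and the crude "keep the strongest patterns" 
compression are all invented for illustration):

```python
def encode(patterns):
    # Crude "language": a lossy compression that keeps only the three
    # strongest patterns and drops everything else.
    strongest = sorted(patterns, key=patterns.get, reverse=True)[:3]
    return " ".join(strongest)

class Agent:
    def __init__(self):
        self.patterns = {}   # pattern -> activation strength
        self.inbox = []      # linguistic input channel

    def hear(self, sentence):
        self.inbox.append(sentence)

    def communicate(self, other):
        # Problems 1 and 2 at once: compress the patterns, send them to B.
        other.hear(encode(self.patterns))

    def observe_own_thoughts(self):
        # The same compressor, routed back to the agent's own input region,
        # so only a tiny summary of the full internal state is ever "seen".
        self.hear(encode(self.patterns))

a, b = Agent(), Agent()
a.patterns = {"hungry": 0.9, "tired": 0.7, "curious": 0.6,
              "itchy-elbow": 0.1, "background-hum": 0.05}
a.communicate(b)
a.observe_own_thoughts()
assert a.inbox[0] == b.inbox[0]   # one compressor serves both channels
```

Both channels carry the same compressed summary; the weak patterns never reach
either listener.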

    This phenomenon leads to the misconception that language is inherently
    connected with thoughts and intelligence. In fact it is just a top-level
    communication protocol between two brains and within a single brain.

    Future AGI will have a much broader bandwidth, and even given current
    technology, human language would be a weak communication protocol for its
    internal observation of its own thoughts.

    - Matthias

     

     


----------------------------------------------------------------------------

          agi | Archives | Modify Your Subscription 
         
         



   





-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
