I said: So if you removed all knowledge except that knowledge that referred to a 
box or a ball then the text-based program would not be able to figure out that 
a "ballbox" was referring to a box shaped like a ball rather than a cube.
-------------------------
Well of course if the program had some 
information on the shapes of cubes and balls then it might be able to guess 
that a "ballbox" was a box shaped like a ball.  However, in a developed AGI 
program we would expect that a lot of significant information about how the 
shapes of things can affect their uses (and definitions) would be kept separate 
from the descriptions of various kinds of objects.  So, yes, of course an AGI 
program could, using only the information about boxes and balls, infer that the 
shape was somehow relevant to the new word, if it guessed that the combination 
of the two words was intended to provide a pointer to possible definitions of what 
the word meant.  I was just trying to make the point that if concepts were 
really primitive and nothing else that could be applied to them had been 
defined, then it would be very difficult to make inferences about the concepts 
outside of the primitive definitions.  It takes a lot of information to 
understand something simple. The real problem is not that these inferences 
cannot possibly be made (Mike is wrong of course) but that there are so many 
possible inferences that could be made that it would easily overwhelm an AGI 
program.  However, if there were a huge amount of information related to boxes, 
balls, and shapes, and there were other cases that the program could analyze where 
two words were conjoined to create a new word based on the features of the two 
objects the words referred to, then a few of the best possible inferences could be 
assembled and many of the less likely cases might be ignored.

Jim Bromer
 From: jimbro...@hotmail.com
To: a...@listbox.com
Subject: RE: [agi] A General O.D. (Operational Definition) for all AGI projects
Date: Sat, 11 May 2013 08:26:44 -0400

Mike Tintner said:
Your project must have an E.M. for how BALL + BOX = BALLBOX, i.e. you have 
to show how with only standard knowledge of two objects, balls & boxes, you 
can a) generate and/or b) understand a new, third object, “ball-box”, that is 
derived from them by non-standard means. In this case, a BALLBOX is a box 
shaped like a ball rather than a cube.
------------------------------------------
 
I am only replying to this as a way to repeat one of my ideas that seems 
obvious or commonsensical to me.
 
It takes many 'statements' about a simple idea to understand it.  So if you 
removed all knowledge except that knowledge that referred to a box or a ball 
then the text-based program would not be able to figure out that a "ballbox" 
was referring to a box shaped like a ball rather than a cube.  And in fact, I 
did not realize what Mike was talking about until he made the statement that 
"a ballbox is a box shaped like a ball rather than a cube."  So no, an AGI 
program would not be able to figure that out without information beyond the 
heavily redacted information about balls and boxes.  However, once the 
definition of a "ballbox" was made, as Mike made it for us, a text-only AGI 
program would be able to figure it out (just as I was able to figure it out 
once I read it) and use the term intelligibly.  And, significantly, if the 
text-based AGI program had many statements about different things it would be 
able to consider ideas like:
 
A box that can hold a ball.  (That was my first guess.)
A sphere that can hold a box. (That is a simple rearrangement of terms.)
A box that was shaped like a ball.
A ball that was shaped like a box.
A metaphor for something else. 
For example, a square outline on the ground where balls are put 
(similar to a "batter's box").
 
It is easy to see that these could all be generated using computational methods 
of rational creativity if the program had general knowledge about many 
different things.
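
To make that concrete, here is a minimal sketch in Python of how such candidate 
readings could be generated and then pruned down to a few best guesses.  The 
feature lists, the pattern weights, and the names (FEATURES, PATTERNS, 
interpret_compound) are all made up for illustration; this is not a claim about 
how any existing AGI program does it.

# A minimal sketch, assuming made-up feature lists and pattern weights.

# Feature lists standing in for the "many statements" the program would
# have about each concept.
FEATURES = {
    "ball": ["round", "can roll", "can be held"],
    "box":  ["cube-shaped", "can hold things", "has flat sides"],
}

# Combination patterns learned from earlier conjoined words, weighted by
# how often each pattern turned out to be the right reading before.
PATTERNS = [
    # (feature keyword that must be present, reading template, prior weight)
    ("can hold", "a {b} that can hold a {a}", 0.4),
    ("can hold", "a {a} that can hold a {b}", 0.2),
    ("round",    "a {b} shaped like a {a}",   0.3),
    ("cube",     "a {a} shaped like a {b}",   0.1),
]

def interpret_compound(word_a, word_b, keep=3):
    """Generate candidate readings of word_a + word_b and keep the best few."""
    features = FEATURES[word_a] + FEATURES[word_b]
    candidates = []
    for keyword, template, weight in PATTERNS:
        # Only propose a reading if some stored feature supports the relation.
        if any(keyword in feature for feature in features):
            candidates.append((weight, template.format(a=word_a, b=word_b)))
    # Assemble the few best inferences and ignore the less likely ones.
    candidates.sort(reverse=True)
    return candidates[:keep]

if __name__ == "__main__":
    for weight, reading in interpret_compound("ball", "box"):
        print(f"{weight:.2f}  {reading}")

Run on "ball" and "box" it prints the container reading first and the shape 
reading second, which is roughly the order of the guesses listed above.  The 
point is only that both the generation and the pruning of candidate readings are 
ordinary computations once enough statements about the two concepts, and about 
earlier conjoined words, are available.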
 
The real question is whether or not a text-based AGI program would ever be able 
to distinguish what kinds of things the words "box" or "ball" referred to without 
ever seeing one.  I can say that there are human beings who are born blind but 
who can use references to things like, "the view of the mountains in the 
distance," intelligibly.  If the program used terms like this intelligibly 
their use would always be removed slightly from our more familiar use of the 
terms. But so what?  The text-based AGI model is just a step that is being made 
to try to discover *how thinking works* in general rather than precise 
subprograms that concern the visual shapes of things (as in Mike's unconsciously 
cherry-picked example).
 
Jim Bromer 
 