I am not interested in making a synthetic human being. Such a goal is absurd, 
and everyone understands that. What I want is to discover the principles that 
would allow a computer program to react in ways more like human thinking. It is 
not realistic to assume that the technology of sensori-robotics contains the 
secrets of human reasoning. When a computer program uses input data, it is just 
that: data. This is not just an abstraction but the reality. (Or rather, it is 
a human description of the reality.) There is no mysterious link between sensor 
representations and the objects being sensed.

The objects that sensors pick up are 'ambiguous', so to speak. They can change 
depending on their interactions with other objects and forces, and on the 
varying conditions of the environment the sensors operate in. An object that 
looks white in one lighting situation can look green in another, grey in a 
third, and be invisible in the dark. Shadows falling across an object can fool 
the algorithms trying to detect it. The idea that a sensor would allow objects 
to be detected easily is absurd, and it is not a conclusion drawn from any 
familiarity with the problem.
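
As a toy sketch of that lighting problem (all numbers are invented for 
illustration; this is a minimal model of my own, not any real vision pipeline): 
the same white surface gives wildly different RGB readings under different 
illuminants, and a detector keyed to a fixed color loses it in most of them.

```python
# Sketch: the same white surface "seen" under different illuminants.
# All numbers are invented for illustration; a real camera model is
# far more complicated than a channel-wise product.

# Surface reflectance per RGB channel (a white towel reflects ~90%).
surface = (0.9, 0.9, 0.9)

# Illuminants as RGB intensities on a 0-255 scale (invented values).
illuminants = {
    "daylight":     (255, 255, 255),
    "green neon":   (60, 255, 60),
    "dim tungsten": (80, 60, 40),
    "darkness":     (0, 0, 0),
}

def sensed_rgb(reflectance, light):
    """Naive sensor model: channel-wise product of light and reflectance."""
    return tuple(round(r * l) for r, l in zip(reflectance, light))

def naive_white_detector(rgb, threshold=200):
    """'Detects' the object only if every channel reads near-white."""
    return all(c >= threshold for c in rgb)

for name, light in illuminants.items():
    rgb = sensed_rgb(surface, light)
    print(f"{name:12s} -> sensed {rgb}, detected: {naive_white_detector(rgb)}")
```

The object never changes; only the light does, yet the fixed-threshold 
detector finds it under only one of the four conditions.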
I agree that a combination of sensors would make an effective AGI program more 
capable in many (but not all) situations. However, the belief that adding 
sensors to a half-baked AGI theory might make it work has been clearly debunked 
by the history of AI/AGI.

Jim Bromer
Subject: Re: [agi] What I Was Trying to Say.
From: wann...@ababian.com
Date: Fri, 10 May 2013 14:58:42 -0500
To: a...@listbox.com

I'm with Mike on this one. But to be a bit more constructive, I wouldn't say 
it's simply a matter of a movie, though it certainly depends on physicality, 
both sensory and motor. Also, I would put a greater emphasis on the role of 
metaphor in understanding. I'm a fan of Hofstadter's work, and would recommend 
his new book. Still, there are many pieces left. And I hate to agree in a 
negative way like this, but I'm afraid even our beloved list founder Ben really 
seems to be missing a few key points as well. His intuition seems to be flawed 
on some of them.

andi

Can I help?
On May 10, 2013, at 12:59 PM, "Mike Tintner" <tint...@blueyonder.co.uk> wrote:

You haven’t the foggiest what you’re talking about – you’re just playing 
(as I more or less indicated you would) desperate logic games. Your examples 
are totally hypothetical – and false.
 
Here is an *actual*, not a hypothetical, trompe l’oeil. Please explain what 
on earth words or symbols have to do with understanding it – or try to explain 
how any non-visual, non-sensory processing is involved.
 
http://www.meridian.net.au/Art/Artists/MCEscher/Gallery/Images/escher-relativity-lithograph-medium.jpg
 
Note I can describe it in words:

ESCHER’S PICTURE SHOWS HUMANS IMPOSSIBLY GOING UP AND DOWN DIFFERENT SIDES 
OF A STAIRCASE – AND STAIRCASES CONNECTING ALTHOUGH ON TOTALLY DIFFERENT 
DIMENSIONAL LEVELS.
 
Do you think my brain produced those words by consulting semantic networks? 
Please explain how – or be honest, think of your God, and acknowledge that you 
don’t have the slightest clue how the brain can produce those words.
 
And here are some actual not hypothetical visual ambiguities:
 
http://brainden.com/images/optical-illusions.gif
https://upload.wikimedia.org/wikipedia/commons/4/45/Duck-Rabbit_illusion.jpg
 
Please explain how the brain can recognize these ambiguities by other than 
visual/sensory means.
 
Or how the brain can understand a verbal description – A DUCK CAN LOOK LIKE 
A RABBIT BACK TO FRONT – by consulting only words/symbols.
 
I would suggest that the obvious way the brain recognizes that these pictures 
are ambiguous is by seeing that the same picture can be physically fitted to 
two very different prototypical figures – the same drawing outline can be 
fitted to the figures of both a duck and a rabbit, or a young girl and an old 
woman.

The brain is physically manipulating and moving around figures, not 
superfluous words.
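
One crude computational reading of this "fitting" idea (a sketch over invented 
toy silhouettes, not a claim about how the brain actually does it): score how 
well a figure's silhouette overlaps each prototype, and call the figure 
ambiguous when it fits both well.

```python
import numpy as np

# Toy 6x6 silhouettes (True = ink). Real outlines would come from the
# actual drawings; these masks are invented so the sketch runs on its own.
body = np.zeros((6, 6), dtype=bool)
body[1:5, 1:5] = True                # the shared outline both readings agree on

duck = body.copy()
duck[0, 1] = True                    # a 'bill' sticking out on one side

rabbit = body.copy()
rabbit[0, 3] = rabbit[0, 4] = True   # 'ears' sticking out on the other side

ambiguous_figure = body              # the drawing itself, committed to neither

def fit(figure, prototype):
    """Intersection-over-union: how well one silhouette fits another."""
    inter = np.logical_and(figure, prototype).sum()
    union = np.logical_or(figure, prototype).sum()
    return inter / union

for name, proto in (("duck", duck), ("rabbit", rabbit)):
    print(f"fit to {name}: {fit(ambiguous_figure, proto):.2f}")

# Call the figure ambiguous when it fits both prototypes well:
print("ambiguous:", all(fit(ambiguous_figure, p) > 0.8 for p in (duck, rabbit)))
```

The point of the toy: one and the same figure scores highly against two 
distinct prototypes, which is exactly the "fits both a duck and a rabbit" 
situation described above.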
 
And more of this another time – but that “figurative thought”, the capacity 
to physically, endlessly reconfigure the figures of objects, both individually 
and jointly, is the basis of language and the basis of AGI.
 
What is totally non-AGI is “mere words” – no matter how many logic games 
you play.

From: Jim Bromer
Sent: Friday, May 10, 2013 6:15 PM
To: AGI
Subject: RE: [agi] What I Was Trying to Say.
 

Suppose that a box was cleverly carved so that it looked like it had a towel 
draped over it. A vision-based AGI program would be unable to detect the 
difference without some kind of additional action to help it discover the 
trompe l'oeil.

And suppose that a word was used to refer to different things. A vision-based 
AGI program would have the same kind of problem understanding that as a 
word-based AGI would, unless some kind of education was available to it, 
pointing out that the word was being used in different ways.
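
That kind of "education", pointing out that a word is used in different ways, 
is essentially what a sense inventory provides. A minimal sketch in the spirit 
of the classic Lesk algorithm (the sense table below is invented for 
illustration; real systems draw on resources such as WordNet):

```python
# Toy word-sense disambiguation in the spirit of the Lesk algorithm.
# The mini sense inventory is invented for illustration only.

SENSES = {
    "bank": {
        "financial institution": {"money", "deposit", "loan", "account"},
        "river edge": {"river", "water", "shore", "fishing"},
    }
}

def disambiguate(word, context_words):
    """Pick the sense whose signature words overlap the context most."""
    context = set(context_words)
    scores = {
        sense: len(signature & context)
        for sense, signature in SENSES[word].items()
    }
    return max(scores, key=scores.get), scores

sentence = "she rowed the boat to the river bank to go fishing"
sense, scores = disambiguate("bank", sentence.split())
print(sense, scores)  # river edge  {'financial institution': 0, 'river edge': 2}
```

The table of senses is exactly the "education" in miniature: without it, the 
program has no way to notice that "bank" is doing two different jobs.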
 
An AGI program has to be able to effectively utilize education. It has to be 
able to meaningfully convert instruction into workable knowledge. The 
distinction between procedural knowledge and declarative knowledge is, for a 
person, not that sharp except when examined in detail. (The decision to call 
certain mental events "procedural" would be somewhat arbitrary.)
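
A small sketch of how arbitrary that line is in a program (all names here are 
invented for illustration): the same rule can live as code, which we would 
call procedural, or as data run by a tiny interpreter, which we would call 
declarative.

```python
# Toy knowledge base illustrating how thin the declarative/procedural
# line is in a program. All names are invented for illustration.

facts = {("towel", "covers", "box")}          # declarative: stored as data

def looks_draped(obj):                        # procedural: stored as code
    return ("towel", "covers", obj) in facts

# ...but the same rule can be re-expressed as data plus a tiny interpreter:
rules = [("looks_draped", ("towel", "covers", "?x"))]

def query(rule_name, x):
    for name, (s, p, o) in rules:
        if name == rule_name:
            return (s, p, x if o == "?x" else o) in facts
    return False

print(looks_draped("box"))           # True, via the procedural encoding
print(query("looks_draped", "box"))  # True, via the declarative encoding
```

Both encodings answer the same question; which one gets called "procedural" 
is a design choice, not a fact about the knowledge itself.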
 
The ability to be educated is one of the hallmarks of intelligence. It should 
not be disregarded. And it can be achieved in text-based AGI; it is just a 
matter of when it is done. Watson may have been long overdue, but it was a 
major milestone in AI/AGI.
 
Jim Bromer

From: tint...@blueyonder.co.uk
To: a...@listbox.com
Subject: Re: [agi] What I Was Trying to Say.
Date: Fri, 10 May 2013 14:59:02 +0100

I’ll gladly put $1000 (or considerably more) down publicly right now that 
neither yours nor any other word-based “so-called AGI” program will generate a 
single thing in 1/2/5 years – generativity, I think we can agree, being a test 
of AGI.