Can one have situations inside situations? 
What's the difference between an object and a situation? 
Kindly advise.
~PM

Date: Mon, 28 Apr 2014 16:50:24 -0600
From: [email protected]
To: [email protected]
Subject: Re: [agi] Situations

Greetings Telmo,

I've responded to your comments below.

Are you working on an ontology-based AGI approach?

Stan

On 04/28/2014 02:30 PM, Telmo Menezes via AGI wrote:

> Hi Stanley,
>
> On Mon, Apr 28, 2014 at 9:23 PM, Stanley Nilsen via AGI
> <[email protected]> wrote:
>
>> Hi PM,
>>
>> A few thoughts -
>>
>> One might try to come up with methods to generalize situations -
>> put them in categories and subcategories and sub-subcategories...
>> This sounds logical, but also terribly tedious.
>>
>> My alternative is to look at the world as sets of triggers.  A
>> trigger initiates an action - maybe to assert a new fact.  The new
>> fact might then trigger something else...
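The trigger-and-fact loop Stan describes above is essentially forward chaining. A minimal sketch, assuming nothing about his actual system - the rule representation and the example facts here are all made up for illustration:

```python
def forward_chain(facts, rules):
    """Fire rules until no new facts are asserted.

    Each rule is a (trigger, action) pair: the trigger inspects the
    current fact set, and the action asserts a new fact, which may in
    turn trigger other rules - the cascade Stan describes.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for trigger, action in rules:
            if trigger(facts):
                new = action(facts)
                if new and new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# Hypothetical rules: smoke triggers a fire fact, which triggers an alarm.
rules = [
    (lambda f: "smoke" in f, lambda f: "fire"),
    (lambda f: "fire" in f, lambda f: "call alarm"),
]
print(forward_chain({"smoke"}, rules))  # derives "fire", then "call alarm"
```

The `new not in facts` check is what keeps the cascade from looping forever once a fact has already been asserted.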
            
> Ok, but I don't see how this removes the need for an ontology.
    
As I understand it, there are several efforts to create massive
ontologies, and we can all see the "value" of them.  The struggle is
in finding the mechanisms that can cash in on that value - the magic
sauce?

I focus on how to become more intelligent when you start from next to
nothing.  What does the bootstrap look like?  At what point does a
computer begin to build its intelligence?  And what do the
construction elements resemble?

> It could be implicit or explicit, but you still have to be able to
> make more and more distinctions between triggers or actions. I tell
> the AI to book me a trip to Cambridge. Which Cambridge, UK or USA?
> And then, to book the ticket I have to know that Cambridge is a
> town, and that I already know something about how to book travel to
> towns, and so on.
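Telmo's Cambridge example could be served by even a small is-a hierarchy. A toy sketch - the `IS_A` table and its entries are invented for illustration, not drawn from any real ontology:

```python
# Toy is-a hierarchy (hypothetical entries).
IS_A = {
    "Cambridge, UK": "town",
    "Cambridge, MA": "town",
    "town": "place",
}

def supertypes(entity):
    """Walk the is-a chain upward, collecting every supertype."""
    chain = []
    t = IS_A.get(entity)
    while t is not None:
        chain.append(t)
        t = IS_A.get(t)
    return chain

def candidates(name):
    """Every known entity an ambiguous name could refer to."""
    return [e for e in IS_A if e.startswith(name)]

# Two readings of "Cambridge" -> the assistant has to ask which one.
print(candidates("Cambridge"))
# Either way it resolves to a town, so town-level booking rules apply.
print(supertypes("Cambridge, UK"))
```

The point of the hierarchy is the second call: once both candidates resolve to `town`, generic rules about booking travel to towns become applicable without encoding anything Cambridge-specific.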
          
Software "assistants" are pretty popular now.  I understand Microsoft
is planning one to compete with Siri.  Maybe this is the way to the
future.  Start out assisting and one day take over :)

> You need some way to generalise, and this leads to some hierarchy
> of types. I bet our brain encodes a huge one. But how does it
> encode it?
             
>> What is triggered depends on what our "understanding" makes of
>> triggers.  Pretty much a Rube Goldberg contraption, but it gets
>> interesting quickly.  Understanding isn't that vague; it's whatever
>> can be coded into rules.

> So you would say that a thermostat understands temperature?
    
No, I would say that whatever is reading and setting the thermostat
needs to understand the effect it wants to achieve.  The "user"
chooses the thermostat based on an understanding of the outcomes that
are expected.

The thermostat is simply a "see" mechanism - it triggers something
else.  If you wrote a rule to act like a thermostat, I would say that
the rule understands an aspect of a thermostat - e.g. numbers change
over time and there is a trigger point.  I don't think the rule needs
to know about atomic vibrations, or the cost of a barrel of oil.
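That narrow "aspect of a thermostat" - a number changing over time plus a trigger point - can be sketched as a single rule. The setpoint and hysteresis values here are illustrative, not from any particular device:

```python
def thermostat_rule(reading, setpoint, hysteresis=0.5):
    """Fire an action when the reading crosses the trigger point.

    The rule "understands" only this one aspect: a value and a
    trigger point - nothing about atomic vibrations or oil prices.
    """
    if reading < setpoint - hysteresis:
        return "heat on"
    if reading > setpoint + hysteresis:
        return "heat off"
    return None  # inside the dead band: no trigger fires

print(thermostat_rule(18.0, 20.0))   # heat on
print(thermostat_rule(21.0, 20.0))   # heat off
```

The hysteresis band is the standard way to keep such a rule from toggling rapidly near the setpoint; it doesn't add any deeper "understanding", just a wider trigger.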

    

I'm not downplaying ontology; it will be useful.  I just don't put it
as first priority in building an AGI.

    

    Stan
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
