I want to write an AGI feasibility program which would respond in real time.
 
I believe that some algorithms are missing from most fundamental AI paradigms, 
and even though they may be considered in more elaborate plans, I think they 
tend to get downplayed when competing against a lot of other detailed 
subprograms.  For instance, some kinds of problems can be solved effectively 
with weighted reasoning (such as Bayesian reasoning), but some cannot.  Now, 
most people who advocate weighted reasoning as a basis for their AGI program 
know that a weighted method can describe a discrete situation, so they wrongly 
conclude that this is not a serious problem for them.  (By the way, this points 
to a more general problem: just because a technique has the potential to 
represent a specific situation does not mean that a particular implementation 
plan will effectively realize that potential.)
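To make the discrete-situation point concrete, here is a minimal sketch (my own toy illustration, with invented sensor numbers, not anyone's actual design): a weighted Bayesian update really can pin down a discrete fact, collapsing a posterior onto one of two discrete states after a few consistent observations.

```python
def bayes_update(prior, likelihoods, evidence):
    """Posterior over hypotheses after observing one piece of evidence."""
    unnorm = {h: prior[h] * likelihoods[h][evidence] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two hypotheses about a discrete switch: it is "on" or "off".
prior = {"on": 0.5, "off": 0.5}
# A noisy sensor that reads the true state correctly 90% of the time.
likelihoods = {
    "on":  {"reads_on": 0.9, "reads_off": 0.1},
    "off": {"reads_on": 0.1, "reads_off": 0.9},
}

belief = prior
for _ in range(5):                      # five consistent sensor readings
    belief = bayes_update(belief, likelihoods, "reads_on")
print(belief["on"])                     # near-certainty about a discrete state
```

The point of the sketch is only that representational potential is cheap; whether a full implementation plan exploits it effectively is the separate question raised above.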
 
The anomalousness of intelligence does not mean that you cannot start with a 
relatively simple program and develop something that will actually be capable 
of general intelligence.  This is somewhat similar to the error of believing 
that, since inflexible programs cannot deal with every kind of situation needed 
for artificial general intelligence, artificial intelligence (using 
contemporary computers) must be impossible.
 
An AGI implementation cannot be like a program in which only active conditions 
are implicitly specified by (user) input.  More advanced programs can use input 
to specify some conditions, some conditionals, and some actions on conditions.  
Another, more advanced programming capability is one in which the actions on 
conditions can be derived from what occurs in IO.  So my implementation would 
not rely only on restricted axioms where all the possible conditional actions 
are completely specified in the implementation.  Of course, most programs have 
some of all the characteristics I described.  My point is that by being aware 
of these theoretical distinctions between elementary programming and more 
advanced programming facilities, the AGI programmer should have more insight 
into the characteristics that seem to me to be necessary for an AGI program, 
and a greater potential to use them effectively.  Yet another advanced form of 
programming can assign (the program can learn to assign) symbols and names to 
the actions it acquires and then use those names in an internal or external 
language.  (This means that the distinctions between conditions, conditional 
operations, and the actions taken on conditions are relative.)
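These layers can be pictured with a toy sketch (the class and names are my own illustration, not a real design): input sets conditions, input also installs conditionals, and an action acquired at runtime is given a name that the program can then use.

```python
class TinyAgent:
    """Toy illustration of the layered capabilities described above:
    (1) input specifies conditions, (2) input installs conditionals,
    (3) acquired actions are *named* and invoked by name."""

    def __init__(self):
        self.conditions = set()   # active conditions       (layer 1)
        self.rules = []           # (condition, action name) (layer 2)
        self.actions = {}         # named acquired actions   (layer 3)

    def observe(self, condition):              # layer 1
        self.conditions.add(condition)

    def add_rule(self, condition, action_name):  # layer 2
        self.rules.append((condition, action_name))

    def learn_action(self, name, fn):          # layer 3: acquire and name
        self.actions[name] = fn

    def step(self):
        """Fire every installed conditional whose condition is active."""
        return [self.actions[name](cond)
                for cond, name in self.rules
                if cond in self.conditions]

agent = TinyAgent()
agent.learn_action("echo", lambda c: f"saw {c}")  # action acquired at runtime
agent.add_rule("light_on", "echo")                # conditional from input
agent.observe("light_on")                         # condition from input
print(agent.step())                               # prints ['saw light_on']
```

Because the rule table, the action table, and the condition set are all ordinary runtime data here, the sketch also illustrates why the distinctions between conditions, conditionals, and actions end up relative rather than absolute.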
 
So my program - which is supposed to be an initial AGI feasibility test - would 
work in real time until it didn't.  But I am hoping that by using a 
trial-and-error method - in particular, as I mentioned in this message, using 
trial and error to detect when weighted reasoning works and when it does not - 
I can get this feasibility test to create a recognizable base for intelligence 
that is (recognizably) more broadly expansive than what you see in contemporary 
AI/AGI programs.
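One way to picture that trial-and-error detection (again just a toy sketch with invented predictors, not the actual program): score a weighted method and a discrete rule on recent data, and answer each step with whichever method has been working lately.

```python
import random

def weighted_predict(history):
    """Weighted guess: predict the majority symbol seen so far."""
    return max(set(history), key=history.count) if history else 0

def rule_predict(history):
    """Discrete rule: predict that the stream alternates."""
    return 1 - history[-1] if history else 0

def trial_and_error(stream, window=20):
    """Keep a sliding-window score for each method and answer with
    whichever one has been working better - trial-and-error arbitration."""
    history = []
    scores = {"weighted": [], "rule": []}
    correct = 0
    for x in stream:
        preds = {"weighted": weighted_predict(history),
                 "rule": rule_predict(history)}
        recent = {m: sum(s[-window:]) for m, s in scores.items()}
        chosen = max(recent, key=recent.get)    # currently better method
        correct += preds[chosen] == x
        for m in preds:
            scores[m].append(int(preds[m] == x))
        history.append(x)
    return correct / len(stream)

random.seed(0)
noisy = [random.choice([0, 0, 0, 1]) for _ in range(200)]  # weighted fits
alternating = [i % 2 for i in range(200)]                  # a discrete rule fits
print(trial_and_error(noisy), trial_and_error(alternating))
```

The arbiter itself never knows in advance which kind of stream it faces; it discovers by trial and error that the weighted method suits the noisy stream and the discrete rule suits the patterned one.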
 
Jim Bromer
 
 
 
From: [email protected]
Date: Wed, 17 Jul 2013 10:12:12 +0200
Subject: Re: [agi] A Very Simple AGI Project
To: [email protected]


On Tue, Jul 16, 2013 at 11:25 PM, Jim Bromer <[email protected]> wrote:


not worrying about writing something that would be scalable to adult human 
level AGI.

That's OK then, Matt is bound to make you an honorary member of the "without 
actually accomplishing anything" club. Just joking.


Of course all kinds of simple learning have been tried for decades, like I said 
mostly without the ambition to solve AGI, very often just to publish a paper 
(and as I've noted before, a multitude of authors of interesting papers and 
dissertations ended up working in more or less unrelated fields). Jan above is 
right that a lot of engineering knowledge does come from simple exercises that 
we eventually discover whether, and how, we can scale - I should point out, 
however, that intelligence is anomalous; it is not like building a hut first 
and a 100-story skyscraper later, it is more like building a 100-dimensional 
skyscraper. But what may have been missed by a collective IQ of a million or a 
billion? I don't know, but in my outline of RiskAI, an intellect that first and 
foremost manages risk in its environment while trying to survive, I proposed a 
rather challenging starting point for AGI: real-time intelligence! The basic 
idea is that risk becomes infinite if you are too slow, and then again you may 
always be too slow for some environments and activities, in which case you stay 
closer to your comfort zone, where your reaction times are not a handicap - but 
still they would have to be relatively fixed and consistent. 


Now, Jim, this is a perspective that at least guarantees that you don't fall 
into your complexity/recursion traps. Instead of coding learning first and 
waiting for the program to respond later, you first make sure the program 
responds, and then build learning around it. I am not going to lie, this can be 
quite an engineering challenge, and frankly I think it is an area that will see 
many breakthroughs, especially if you look at the "real-time ecosystem", for 
example FPGAs and HPC, where you could be guaranteed very "thin" computing 
power like a million agents each running for some milliseconds. You can of 
course arbitrarily choose the response time on your hardware, even 10 minutes 
or whatever, but the idea is to stick to whatever limit you choose. Then you 
can always claim that some hardware engineering can speed your algorithm up 
1000x and make it suitable for ordinary environments.
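[Editorial aside: the "respond first, then learn" discipline described here could be sketched as an anytime loop with a fixed deadline - a toy illustration only, with made-up numbers, not AT's actual RiskAI proposal.]

```python
import time

def respond_within(deadline_s, propose, fallback):
    """Anytime loop: refine an answer until the deadline, then respond with
    the best answer so far - the response time stays fixed even when the
    deliberation is unfinished."""
    start = time.monotonic()
    best = fallback                  # a reflex answer is always available
    for candidate in propose():      # generator yielding improving answers
        best = candidate
        if time.monotonic() - start >= deadline_s:
            break
    return best, time.monotonic() - start

def slow_reasoner():
    """Hypothetical reasoner that keeps improving its estimate of a value."""
    estimate = 0.0
    while True:
        estimate += (1.0 - estimate) * 0.5   # halve the remaining error
        time.sleep(0.01)                     # simulated deliberation cost
        yield estimate

answer, elapsed = respond_within(0.05, slow_reasoner, fallback=0.0)
print(answer, elapsed)   # a partial but usable answer, within the deadline
```

Tightening or loosening `deadline_s` is the "choose your limit and stick to it" move: the answer quality changes, but the response time does not.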


AT



  
    
      


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
