I just glanced at Stanley's linked deconstruction of my paper.  It is obvious 
what he was doing.  He gathered sentences that he thought could help him 
interpret certain key terms I used in my paper; when those definitions 
contained other words that he thought were key to understanding the paper, 
he hyperlinked them to a collection of statements related to that word.  
This linkage, built from his work, showed a typical computational 
relationship between words and statements.  After doing this work, Stanley 
felt that my paper did not go into enough depth to explain how I intended 
the terms to be interpreted.  That is in line with the opinion I expressed 
in the paper that it takes many statements to understand one simple statement.  
As I said, this linked semantic-usage hyper-document formed a typical 
computational structure.  However, while he used a typical computational method 
in his effort to interpret what I said, the simple fact is that present-day 
computer programs cannot do what he did, because he used his mind to decide 
where to place the boundaries of these 'definitions' that he took from the 
paper.  A programming-language interpreter or compiler can transform a system 
of program statements into a computer program, but the programming language 
must fully specify every part of that translation or transformation.  
When Stanley chose sections from the paper, he had to use his judgment about 
what he was reading in order to decide on the boundaries of each clipped 
statement.  He also had to use his judgment to decide which words should be 
treated as keywords.  Programming interpreters and compilers cannot exercise 
this kind of judgment.  So, for example, Stanley could, if he wanted, choose 
some other way to form boundaries around the parts he thinks are relevant to 
his task and then associate them with other bounded sections.
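The mechanical half of what Stanley built — keywords linked to the collections of statements that use them — can be sketched in a few lines once a human has already supplied the boundaries and chosen the keywords; that choosing is exactly the part a program cannot yet do.  A minimal sketch (the statements and keywords below are invented for illustration, not taken from my paper or his document):

```python
# A minimal sketch of the linkage described above: given clipped statements
# (whose boundaries a human has already chosen) and keywords (also
# human-chosen), build an index that "hyperlinks" each keyword to every
# statement that uses it. All sample data here is illustrative.
from collections import defaultdict

def build_usage_index(statements, keywords):
    """Map each keyword to the list of statements that mention it."""
    index = defaultdict(list)
    for statement in statements:
        # Crude tokenization; a human judged the boundaries, not this code.
        words = {w.strip('.,;').lower() for w in statement.split()}
        for kw in keywords:
            if kw.lower() in words:
                index[kw].append(statement)
    return dict(index)

statements = [
    "A definition often depends on other definitions.",
    "Judgment is needed to place the boundaries of a definition.",
    "A compiler requires a fully specified translation.",
]
keywords = ["definition", "judgment", "compiler"]

index = build_usage_index(statements, keywords)
```

The point of the sketch is that everything hard has already happened before `build_usage_index` runs: the clipping and the keyword selection are inputs, supplied by a mind.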
 
An analysis of what he was doing could be useful for an AGI project.  The idea 
that researchers can spend years throwing good abstract ideas around without 
building on extensive analyses of how human beings would do the kinds of 
things they want their AGI programs to do is just nonsense.  And the fact that 
researchers and would-be researchers will dismiss discussions of basics like 
this is evidence of the problem.  I would not be able to imbue an AGI program 
with the kind of insight that an adult can bring to a task, but I can start to 
think about how an AGI program might handle simpler problems by analyzing them 
in an analogous way.  This has to be an ongoing process in which my ideas are 
converted into program statements and tested, providing evidence about the 
best way to refine or redefine them.  I hope to start this development and 
testing next month.
 
During the early stages of developing my test programs, I plan to see how 
I might solve a problem given the situation that I think the computer would be 
in.  From that point I plan to add some feasible approximations that might be 
used to generate a few of the best possibilities for starting to decode a 
situation.  Then I plan to see if there are ways to minimize the effects of 
bad guesses. 
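One way that plan might be prototyped is as a simple score-and-prune loop: generate candidate interpretations of a situation, keep only the few most plausible, and discard candidates below a cutoff so that bad guesses do not propagate.  This is purely illustrative — the scoring function, the cutoff, and the sample candidates are all invented stand-ins, not a design I have settled on:

```python
# An illustrative prototype of the plan above: rank candidate
# interpretations of a situation and prune the implausible ones so that
# bad guesses are limited. The scoring scheme is an invented stand-in.
def decode_situation(candidates, score, keep=3, cutoff=0.0):
    """Rank candidates by score, keep the best few above the cutoff."""
    ranked = sorted(candidates, key=score, reverse=True)
    return [c for c in ranked[:keep] if score(c) > cutoff]

# Invented example: candidates are (interpretation, evidence-count) pairs,
# scored by how much evidence supports each one.
candidates = [("guess-a", 5), ("guess-b", 2), ("guess-c", 0), ("guess-d", 4)]
best = decode_situation(candidates, score=lambda c: c[1], keep=3)
```

Whether anything this simple survives contact with real test programs is exactly the kind of question the development and testing are meant to answer.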
 
For the armchair programmer-researcher, this is as important a description of 
implementation as any.
 
Jim Bromer 


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now