What you may be forgetting is that you have to build functionality.  The
initial function development will be slow, but as the functions accumulate you
may see an exponential increase in capabilities through their combination and
recombination.

Just keep going.  Don't be discouraged.
~PM
Date: Sat, 2 Feb 2013 10:52:50 -0500
Subject: [agi] My Project and Using Prediction to Derive New Insights
From: [email protected]
To: [email protected]


One month has gone by, and the progress I have made on my AGI project is so
slow that it is obvious I will not be able to get it working in a year or two
unless I do a lot better.  Even so, using a predicted date in a theory about
the development of my project has given me a better sense of how I am doing,
even though there are no true benchmark tests for this.

Expectations may be implicitly or explicitly associated with a great deal of
knowledge, but I just do not feel that they are essential objectives of
intelligence.  A thorough philosophical analysis of the role expectation plays
in different theories of intelligence/AI/AGI is probably not the best use of my
time, but let me say that there are some subtleties involved.

It is very useful to find observable objectives that can establish some sense
of the effectiveness and proper usage of a theory.  These observable objectives
can even be associated with more elusive objectives, but the elusive parts of
combined objectives have to be used wisely.  So my *feelings* about how much
progress I have made on my AGI project are subjective "objectives," but as
long as I am honest and willing to put some thought into them, I can interpret
them wisely.  I am 1/12 of the way to my "deadline"; can I say that my program
is 1/12 of the way to being intelligent?  No.

The fact that I am doing more programming, and that I have a better plan than
I did before, are encouraging signs.  And while I haven't discovered anything
about AGI, I have discovered something.  When I wanted to run a simple initial
test of intelligence, I realized that it was beyond me, because intelligence
requires a great deal of integrated knowledge to serve as the background for
even a simple test.  So I will not be able to try exactly what I wanted to
try.  Instead, I will need to create some novel initial tests of AGI by
combining whatever I discover can be done with the simple algorithms I am
using, and then applying my imagination to see how I might use those
algorithms to gain some kind of artificial insight.  This may be a trivial
insight about developing an AGI project, but I believe that having formally
recognized it makes it more likely that I will be able to develop good initial
AGI tests than I might have been otherwise.  I will have a better sense of how
well this new plan will work by the start of the next month.

Jim Bromer

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com