Mark,
I did not mean that I could create a human-level AGI program in a year.  I
was saying that I thought I would be able to write a simple AGI program in
a year (what I called AGI, perhaps in another thread).  I have been trying
to explain that I can use my plans and 'predictions' as a means to
evaluate how well my ideas worked.  But I would have to be willing to make
the commitment to accept the results of my evaluation methods if I wanted
to claim that the effort was scientific.  Although making predicted
schedules is not a hallmark of genuine innovation, I was saying that if you
keep missing your predicted achievements, it is a pretty good
indication that you did not have it all figured out.

One of the methods that has been mentioned in this group is the use of
prediction as a tool of AGI.  However, the people in this group almost
never use the method in their own day-to-day lives, except perhaps as
another name for knowledge.  Used in this way it is a circular reference at
best or a labelling method at worst.  I cannot remember who popularized the
use of prediction as a method of confirmation and validation of a theory,
but that was the original value of prediction in AI.  So I am saying: if
someone actually believes that prediction can be used as a primary method
of validation, then by all means do so with your own predictions about your
projects or the projects you believe in.  But you have to be willing to
accept the results of a test of your predictions if you want to show how
valuable it is as a method of confirmation and validation.  The fact that most
of us are sitting around here year after year, convinced that we have it all
figured out but never quite able to make convincing demonstrations of
our ideas, is enough evidence to show just how weak validation through
prediction can be.

If a prediction came true, and came true reliably, then it would serve as
confirming evidence for the theory that generated the prediction.  But
the other outcomes must also be handled if you seriously want to claim it
as a validation method.  How do you handle that situation?  By creating
contrary theories to handle the case where your prediction did not come
true.  So while I have predicted that I would be able to create a simple
AGI program within the year, I have also predicted that I would not be able
to, and I have started to generate theories to explain that possibility.
The one thing I want to do is find out how to fix my AGI theories, so honestly
keeping track of how well my ideas are working is a natural way to do
that.  To do so, I have to create a number of different alternative
explanations concerning the target and then find which one most closely
matches the actual outcome.
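The bookkeeping described above can be sketched in a few lines of Python.  This is only an illustrative toy, not anything from an actual AGI project: the hypothesis names, the use of a single numeric "predicted months to a working demo" figure, and the distance-based scoring are all my own assumptions for the sake of the example.

```python
# Toy sketch of the method described above: keep several alternative
# explanations, each with its own prediction, then see which one most
# closely matches what actually happened.  All names and numbers here
# are hypothetical.

def best_explanation(hypotheses, outcome):
    """Return the hypothesis whose predicted value is closest to the outcome."""
    return min(hypotheses, key=lambda h: abs(h["predicted"] - outcome))

# Competing explanations, each predicting months until a simple working demo.
hypotheses = [
    {"name": "theory is right, on schedule", "predicted": 12},
    {"name": "theory is right, effort underestimated", "predicted": 24},
    {"name": "theory is missing a key idea", "predicted": 60},
]

# Suppose the demo actually took 30 months; the second explanation wins.
actual_months = 30
print(best_explanation(hypotheses, actual_months)["name"])
```

The point is not the arithmetic but the discipline: by committing to the alternative explanations in advance, the outcome selects among them instead of being rationalized after the fact.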

There is still more to it, but that is a start of the explanation.
On Wed, Jan 2, 2013 at 8:33 PM, Mark Nuzzolilo II <[email protected]> wrote:

>
>
>
> On Fri, Dec 28, 2012 at 5:30 AM, Logan Streondj <[email protected]> wrote:
>
>>
>>
>>
>> On Thu, Dec 27, 2012 at 6:54 PM, Jim Bromer <[email protected]> wrote:
>>
>>> I believe I can write a simple AGI program in a year.
>>>
>>
>> Ha, ya sure you can,
>> k go ahead.
>>
>> I'm sure you'll be more humble after you do ;-),
>> write an AGI program for a year that is.
>>
>>
> I haven't had the time to follow this thread closely; I've been sick with
> bronchitis and not doing too great.  But I am going to respond to this
> statement right here.  If you've been paying any attention to accelerating
> change theories from Kurzweil and others, or if you've just been following
> software technology advancement over time, you may have noticed that it is
> becoming easier to write software each year.  I'm gonna go out on a limb
> and say that it is becoming *exponentially easier* to write software.  The
> number of open-source libraries is increasing at a rapid rate, and there
> are an increasing number of possible ways to combine them.
>
> More combinatorial choices for components *does* mean that a programmer is
> faced with a more complex workspace, but I strongly believe that this only
> negatively impacts the difficulty at a logarithmic rate (due to the ability
> to find what we are looking for using Google, and recommendation systems
> are improving this factor yet even more), while the positive benefits of
> such a large workspace are increasing *roughly linearly* with the number of
> open-source components to choose from.
>
> At some point, these two lines on the graph will diverge at a very fast
> rate, leaving us with exponentially easier software development, leading to
> exponentially more software.  This doesn't guarantee that we will have
> *better* software on any given metric, but we will certainly have more of
> it.
>
> So as time moves forward, it becomes more likely that a single person
> could write an AGI system in one year.  I don't think it can be done today,
> but perhaps in the next decade.
>
> This is how a Singularity will likely happen, and this rapid growth may
> end up spawning an AGI system via easy software development, rather than
> the AGI software spawning the rapid growth.  AGI will accelerate that
> growth even further, but I think that by the time we end up making it,
> we will already be moving at a very fast rate.
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
