I agree that using theories to make predictions about the results of
computational experiments, and then seeing if the predictions are correct
(and if not, in what ways they are incorrect), is a good way of exploring
theories...

However, theories of AGI don't generally give specific guidance about the
number of man-months it will take to implement and test a given capability,
let alone about the number of calendar-months it will take given real-world
uncertainties about team size and allocation, etc. ....  Most commonly an
AI or computer science theory will constrain "number of man-months" (or
similar measures -- setting the "mythical man-month" factor aside) for
implementing an algorithm/structure/system only within an order of
magnitude or so...

So if one is talking about confirming or disconfirming an AGI theory, one
should be talking about predictions of the form "A software system
implementing approach X, when given input A, will probably give output
something like B" ...
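That prediction form can be made mechanically checkable. Below is a minimal sketch (all names and the similarity threshold are hypothetical choices, not anything from a specific AGI system): "output something like B" is operationalized as output whose similarity to the predicted output exceeds a chosen threshold.

```python
# Hedged sketch of testing a prediction of the form:
# "a system implementing approach X, given input A, will probably
#  give output something like B".
# The toy system, threshold, and similarity measure are illustrative
# assumptions, not a real AGI component.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two outputs are."""
    return SequenceMatcher(None, a, b).ratio()

def check_prediction(system, input_a, predicted_b, threshold=0.8):
    """Run the system on input A and ask whether its output is
    'something like' the predicted output B."""
    actual = system(input_a)
    score = similarity(str(actual), str(predicted_b))
    return score >= threshold, score

# Toy stand-in for "a software system implementing approach X"
def toy_system(text):
    return text.upper()

ok, score = check_prediction(toy_system, "hello world", "HELLO WORLD")
print(ok, round(score, 2))  # the prediction holds exactly here
```

The point of the threshold is that such predictions are confirmable or disconfirmable even when the theory only constrains the output approximately, which is all most AGI theories can promise.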

The ultimate problem with most of the AI gurus of the late 1960s and early
1970s is not that their temporal predictions of progress rate were wrong,
but that their underlying ideas about how the mind works and how to build
AGI were woefully incomplete....  Had their basic ideas been right, but
merely taken 40 years rather than 10 to implement -- then we would have AGI
now and those early AI guys would be considered on par with Newton and
Archimedes....  Instead, subsequent work showed that the REASON their early
optimistic AI projections were wrong was NOT that they underestimated the
amount of practical work or system testing/debugging needed in implementing
their ideas in the real world, but rather that their basic ideas were not
adequate... (i.e. stuff like expert systems are just not going to yield AGI
even if you worked on them for centuries with billions of dollars...)

These are very elementary points, and not so interesting to go over in
depth, IMO ;-/ ...

ben







On Wed, Jan 2, 2013 at 11:13 PM, Jim Bromer <[email protected]> wrote:

> I am going to try one more time. Many people in this group have talked
> about ‘prediction’ as a fundamental tool of AGI. There was discussion in
> the old days of using prediction as a method of validation in an AI
> program. My challenge to people in this group is to try using prediction in
> your everyday life if you think it is such a great method. I have. Like
> other AGI methods it is useful once in a while but not often. However, what
> is useful is that it can help you crystallize explanations and additional
> theories around the prediction and some of the theories accreted by the
> effort.
>
>
>
> For example, I have a meeting with someone. I think that the guy does not
> care about the meeting so I predict that he will be really late or not even
> show up. He shows up on time. Therefore my reasoning was wrong. And the
> fact that he did show up on time also provides evidence that the reason I
> predicted that he would be late (because he did not care about the meeting)
> was probably wrong as well. In this case the prediction invalidated the
> theory and I generated additional theories concerning the event
> serendipitously.
>
>
>
> So I am saying that you can use your predictions about your projects or
> about the projects that you believe in to discover new insights about your
> AGI theories. However, you have to be willing to accept the results of the
> trial of your predictions in order to effectively use this as a validation
> tool or as a tool of correcting your mistakes.  And as Karl Popper
> suggested the best predictions should be pretty precise (or determinative)
> and pretty unlikely if the theory is not correct.
>
>
>
> So I predicted that I would be able to write an effective AGI program in
> one year. Now let’s say that two weeks have gone by and I have only just
> started the user interface and haven’t even gotten very far on that. Does
> that mean that I do not know what I am talking about? No. But look at it
> another way. I am 1/26 of the way there and I haven’t even gotten the
> basics of the user interface done. Does this bode well for my predicted
> schedule? No. What should I do? I haven’t gotten far enough to get good
> evidence that my ideas will work or not work, so I have to look at a way to
> get more done so I can start to evaluate my ideas sooner. So I might decide
> to stop writing to this group as often so I can get more done on my
> project. And now I have learned something serendipitously. Real researchers
> do not spend much time on groups like this because they just don’t have the
> time. It may have nothing to do with the crackpots who post on this group
> (We don’t all know who We are) because it is primarily a question of not
> having enough time to waste on it.
>
>
>
> So how can you use predictions to validate a theory when the results are
> not overwhelmingly convincing? You have to be able to create an unlikely
> prediction that would occur if your theory was right but would not occur if
> your theory was wrong.  You have to have alternative theories for the
> case where the prediction does not come true or does not do too well. And
> the alternative theories have to be good ones. But when you use prediction
> in your real life there is still the possibility that some prediction did
> not come true because of side issues. So if a prediction does not come true
> it does not validate your theory but it might not invalidate your theory
> either. So you either need to find alternative methods to improve your
> ability to conduct the experiments or you need to find other ways to gather
> information on your theories. Either path is going to require that you
> generate new theories and some of these new theories have to examine the
> possibility that you were wrong about something. If you can figure out what
> you are doing wrong then you may be able to do something to correct it.
>
>
>
> But since the effort can produce such a capacity for crystallizing new
> insights serendipitously an AGI program should be designed to do this as
> well. (This is something I learned serendipitously from experimenting
> with the prediction method in real life.)
>
>
>
> So I have to spend less time writing to this group, less time watching tv
> and movies and more time programming. If I do that I predict that I will be
> able to start testing my AGI methods in a few months. If I was sure that my
> ideas will be effective enough I would spend more time on my project, so if
> I don’t follow through I am going to have to start realizing that I don’t
> have substantial reasons to believe that I know how to even get started.
>
>
>
> So, use predictions about your projects. If your theory predicts
> something take a look at other theories that can be used to explain why the
> prediction did not come true or why it did not work too well. Look for
> critical and productive predictions. You can look at interpolations of your
> predictions in the early stages to evaluate what you have to do to make the
> experiment more effective. And you may need to work on alternative theories
> to explain why you weren’t able to get the preparatory work done in case
> that is the outcome of your efforts. But the statement that I was not able
> to prove that my theories work because I did not get a chance to test them
> out really is pretty weak. There are some cases when you might have a valid
> reason for not being able to test them, but it is more likely that you did not
> get a chance to test them out because you just had not figured it all out.
>
>
>
> Jim Bromer
>
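Jim's Popper criterion above -- that a good prediction should be precise and unlikely unless the theory is true -- is exactly the logic of a Bayesian likelihood ratio. A minimal sketch, with all probabilities invented purely for illustration:

```python
# Hedged illustration: how much a confirmed prediction should move your
# credence in a theory depends on how unlikely the prediction was if the
# theory were false. All numbers below are made up for illustration.

def posterior(prior, p_given_true, p_given_false):
    """Bayes' rule: P(theory | prediction came true)."""
    evidence = p_given_true * prior + p_given_false * (1 - prior)
    return p_given_true * prior / evidence

prior = 0.5  # start agnostic about the theory

# A vague prediction: likely to come true either way -> little update.
vague = posterior(prior, p_given_true=0.9, p_given_false=0.8)

# A precise, risky prediction: unlikely if the theory is false -> big update.
risky = posterior(prior, p_given_true=0.9, p_given_false=0.1)

print(round(vague, 3), round(risky, 3))  # -> 0.529 0.9
```

The same arithmetic cuts the other way: when a vague prediction fails, it barely counts against the theory, which is why the precise-and-risky predictions are the ones worth making.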



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
