On Sat, Dec 29, 2012 at 1:30 AM, Bruce Williams <[email protected]> wrote:

> Does this have anything to do with AGI or are you asking what we think of
> your development style?
>



It is not easy for me to understand why you would ask a question like
this.  It is as if I were unable to express what I am trying to say.  But to
answer your question: I believe that it has a great deal to do with AGI.
It also has to do with using the scientific method to develop ideas
about AGI programming.

I believe that an empirical, evidence-based method is absolutely
necessary for AGI.  (When I use the term "method" I am not talking about
Object Oriented Programming.)  This means that the program would try
something and then try to learn something from the attempt.  For a
primitive AGI program (which is what I have in mind) it might be enough
just to learn something like: hey, that worked, or, well, that didn't work.
But I doubt it.  I also believe in reason-based reasoning.  I believe that
an AGI program can use reasons to assess why a function worked or why it
didn't work.  This might not be a very powerful method for a limited AGI
program (what I call an AGi program), but another variation of this
reason-based reasoning is to investigate what failed, if an analysis of the
operation can be made.  The flip side of this is the retention of an
action or response that produced an interesting result, so that it might be
used sometime in the future even if the result wasn't useful at that moment.
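The loop described above (try something, assess why it worked or failed, and retain interesting results for later) could be sketched roughly as follows. This is only an illustrative sketch; all of the names and the shape of the `evaluate` function are hypothetical, not part of any actual program described here.

```python
import random

def run_trials(actions, evaluate, n_trials=10):
    """Sketch of a trial-and-learn loop with reason-based assessment.

    evaluate(action) -> (worked: bool, reason: str, interesting: bool)
    """
    history = []   # every attempt, with the assessed reason it worked or failed
    retained = []  # interesting outcomes kept for possible future use
    for _ in range(n_trials):
        action = random.choice(actions)
        worked, reason, interesting = evaluate(action)
        history.append({"action": action, "worked": worked, "reason": reason})
        if interesting and not worked:
            # The "flip side": keep an interesting result around even though
            # it wasn't useful at this moment.
            retained.append(action)
    return history, retained
```

The point of the `reason` field is that the program records not just success or failure but an assessment of why, which is what distinguishes reason-based reasoning from a bare worked/didn't-work signal.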

People in this group have often talked about using "prediction" as a
fundamental part of their AGI models.  However, when you talk to some of
these same people about using the results of their "predictions" in their
real lives, you will find that they are not able to intelligently
discuss using failed predictions as empirical evidence.  (This is an
extraordinary observation, by the way.)  Although there is no precise method
to define, in advance, how much leeway should be given to a bad
result, there is a way to begin thinking about using your expectations and
predictions that repeatedly failed to come true.  If something doesn't work
after trying and trying, or if you never even get started, that is a pretty
good indication that you didn't have it all figured out.  Then you might
start trying to find what was missing in your conjectures.  When working
with actual experiments, your interest will shift toward those
parts of the experiment that seemed to work out and those parts that
did not.  But what would you do if something that you were sure
would work didn't, even after working hard on it?  You might continue to
work on it, but you should probably also begin examining what might have
gone wrong with your implementation, or what might have been wrong with your
ideas.

So here is a simple model of dealing with failed expectations.  And
it is my opinion that this simple model can also be applied to an AGI
program as it tries to gather empirical evidence from a plan or a
"prediction" that did not work (or from one that did work, or from one that
produced something interesting even though it was not immediately
useful).  My AGI model would use reason-based reasoning to try to assess
what went wrong and come up with likely variations that might make it work,
provided it had not already exhausted that line of reasoning and had not
come up with some better ideas in the interim.
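That simple model of failed expectations (retry for a while, then treat repeated failure as evidence that some underlying conjecture was wrong and look for variations) could be sketched as below. Everything here is hypothetical scaffolding for illustration; the threshold and the plan representation are assumptions, not anything specified in the message.

```python
def handle_failed_plan(plan, max_attempts=3):
    """Sketch of the failed-expectation model: retry until a failure
    threshold is reached, then re-examine the plan's conjectures.

    plan is a dict like:
      {"name": str, "conjectures": [str], "failures": int}
    """
    plan["failures"] += 1
    if plan["failures"] < max_attempts:
        # Not yet enough evidence that something is fundamentally missing.
        return {"decision": "retry", "variations": []}
    # Repeated failure is taken as empirical evidence that the plan rested
    # on a flawed conjecture: propose a variation of each one to examine.
    variations = [f"revise: {c}" for c in plan["conjectures"]]
    return {"decision": "re-examine", "variations": variations}
```

For example, a plan with conjectures "A" and "B" would be retried twice and, on the third failure, would yield variations to investigate for both conjectures.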

Jim Bromer


On Sat, Dec 29, 2012 at 1:30 AM, Bruce Williams <[email protected]> wrote:

> Does this have anything to do with AGI or are you asking what we think of
> your development style?
> On Dec 28, 2012 9:45 AM, "Jim Bromer" <[email protected]> wrote:
>
>> Logan,
>> Your comments were inappropriate because you showed a lack of
>> understanding of what I was trying to say. So I will try to restate it in a
>> different way.
>> 1. I described a method by which I defined how my hoped for results might
>> initially be confirmed on the basis of the impressions of a group of
>> enthusiasts who thought it was interesting and showed promise.
>> 2. But, assuming that the initial demonstration was weak (and perhaps
>> resembled a narrow AI method), I would then have to demonstrate that I
>> could make incremental improvements and that it could be applied to
>> different IO modalities. (Different Input Output Modalities refers to
>> different kinds of AI problems, like visual, text, numerical, and special
>> problems which combine different modalities.)
>> 3. But then I pointed out that if after a year I did not have anything
>> that even resembled AGI I would have to concede that my ideas did not work.
>> 4. Finally I pointed out that if after 5 months I hadn't even started the
>> program and I was reasonably healthy and had as much free time as I have
>> now that would be a pretty strong indication that I did not have everything
>> figured out and that my plan must of lacked something.
>>  Of course I would continue to work on my ideas even if I did not have
>> anything after a year. But I would have to concede that there was something
>> seriously lacking in my plans.
>> Logan's remarks, regardless of his attitude, showed how this group does
>> not quite grasp how the scientific method works. Although I wasn't able to
>> carefully describe my ideas in such a short message and I wasn't able to
>> fill in a detailed assessment of predicted results, I did describe a
>> fundamental attitude that I could make a prediction and then accept the
>> results whether I was pleased with them or not.
>>  It is true that there would be some sceptics who would not accept any
>> kind of reasonable achievement just as some people deny that Watson was an
>> important step towards AGI. So there would be some people who just won't
>> get it, as there are some people who just don't get the nature of the
>> modern scientific method.
>>  The one important detail that I left off my brief message was a
>> description of the case where the results could be used to identify a flaw
>> that could be resolved (without waiting for some future breakthrough). That
>> of course is a most important case of being able to learn from your
>> mistakes. However, I tried to let the reader infer that case from my
>> description of being able to improve on weak results and on generalization
>> by adapting the method for different IO modalities. If you can improve your
>> results or adapt the program to different kinds of situations then you
>> would be fixing flaws and making it more general.
>>  Assuming that Logan was trying to be friendly I would say that I
>> believe that I would be able to appreciate weak results even while I was
>> able to recognize that they were weak. So I would probably continue to work
>> on the program even though other enthusiasts were not very interested.
>> However, if I could not make the improvements that would be necessary to
>> convince some enthusiasts then I would have to concede that there was
>> something that I haven't figured out. There is something to the Turing Test
>> even though it is not enough to help us solve the complications that we
>> cannot presently solve. So it might not be a truly compelling demonstration,
>> but other AGI enthusiasts would become interested in what I was doing if I
>> truly have the basics figured out.
>> Jim Bromer
>>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now