No, that is inaccurate.  It is, of course, true that I am externalizing my
impressions about something but that is what everyone is doing in this
discussion group.

However, I am not testing the accuracy of my self-model. I am testing
whether "prediction" can be used to test the effectiveness of the theories
which led me to believe that I might be able to write an AGI program within
two years.  If I were able to write the kind of program that I described,
then it would serve as a kind of verification that I did have a good idea
of how to go about writing the program.  Now suppose that two years go by
and I either have nothing to show or I have a program that is seriously
less than predicted.  That would show that I did not have a good
theoretical basis for the conclusion that I would be able to get my
ideas working in a short time.

There is a slight chance that my ideas might be solid but I still would not
be able to turn them into an actual program.  But the question then is: if
the ideas were so good, why wasn't I able to effectively use them in an
actual program?  What do I do if and/or when I am unable to complete the
program?  Do I discard all of my ideas?  No, of course not.  But one thing
I will have to do is accept the results of my experiment and begin
analyzing why things did not pan out the way I thought they would.  If
anyone had it all figured out, he should be able to get a demo working.
(Again, I am making sense, and frankly I think that is what is bothering you.)

The fact that some of the guys in this group have so much trouble accepting
my reasoning about this is remarkable.  No, I am not saying I have it all
figured out and that you should be exactly like me if you want to succeed.
I am actually aiming (not externalizing) my criticisms at myself, and I am
just amazed that a little warning bell did not go off when you wrote the
kind of comments that you wrote (based, for example, on a dogmatic
definition of what a "reply" means).  I am saying what I have been saying
over and over again: you have to use some kind of verification method for
your implementation of theories about AGI, and I suggest that you try using
some of those methods in your own problem solving.  But rather than trying
to pin that criticism on the rest of you, I have been trying to show you
how it might work on my own assessment of my project.

Your criticism that I am only testing my own self-model and that I am using
the bulletin board as a method for self-feedback makes it look like you
missed the point that I have been repeating for the past few months.  Try
developing a verification method that your AGI program might use, then try
applying it to some real world application in your own life to see if it
really works.

I am in equilibrium, and assimilation feeds on itself?  Where is your
awareness of the challenge that I -just- wrote about?  What has been
keeping you from recognizing what it was that I was saying?  I mean, you
might have said that you have different methods of verification, and then
you could have explained them.  Or you might have chosen to talk about the
problem of crystallization of an idea vs. ideative fluidization, or
something like that.  Instead you chose to make a shallow (although not
unfriendly) dismissal of what I was saying.  Sorry, but I am pretty unmoved
by your comment.  Because you were unable to make a remark on -any- of the
subjects that I was discussing, and because I was pretty explicit about
much of the subject matter, I have to conclude that your comment was based
solely on the soundless image of my use of the reply feature to extend my
comments and not on the content of my messages.


On Tue, Apr 2, 2013 at 1:45 PM, Piaget Modeler <[email protected]> wrote:

> Your prediction is about the accuracy of your self-model.  How well you
> "understand" yourself and your own capabilities.
> Your prediction has nothing to do with testing the notion of "prediction"
> itself.
>
> Your writing here is just to externalize your thought in order to make
> new inferences about them.  You're using this bulletin
> board as a notepad or journal for your own feedback.  Nothing illegal
> about that.
>
> You are in equilibrium. "A system of assimilation tends to feed itself." ~
> Piaget
>
> There is something to be said for ignoring what everyone else has written
> about a subject and coming up with your own ideas.
> That's how paradigm shifts begin.
>
> ~PM
>
> ------------------------------
> Date: Tue, 2 Apr 2013 12:44:24 -0400
> Subject: [agi] Re: Monthly Analysis of My Prediction That I Can Write an
> AGI Program Before 2015
> From: [email protected]
> To: [email protected]
>
>
> My use of the prediction that I would be able to create a working model of
> my theories by a certain time enabled me to create a series of predictions
> of partial achievements which I could use both as benchmarks for the
> development and as seeds for an ongoing analysis of what went wrong.  I
> think the reasoning of how these predictions enabled me to create these
> benchmarks should be familiar to anyone who has tried to finish a major
> undertaking within a certain amount of time.
>
>
>
> However, the question of why my latest theory seems to have given me
> greater hope that I will be able to use incremental steps in the
> development of my AGI project was a little hard to figure out at first.
> My theory, to refine it a little further, is that the ability to learn
> effective specializations is a necessary requirement for the development
> of effective generalizations.  But why has this particular theory given
> me the sense that it may lead to a way to gradually develop my program,
> when my examination of previous efforts to develop AGI seemed to suggest
> that gradual development was methodologically unsound?  There are a number
> of important aspects to the theory.  First, it is a good theory, although
> it might seem a little simplistic; I mean that it makes a lot of sense.
> Secondly, while people may feel that they have already implicitly
> incorporated something like the theory into their own theories about AGI,
> the fact that I highlighted it (in my own mind) is a step that is in some
> ways similar to formalization.  It is a sensible theory, and (I feel that)
> it would be an important part of a formalization of a theory of AGI.  For
> example, a Neural Net enthusiast might claim that Neural Nets are able to
> incorporate both specializations and generalizations, but my criticism
> would be that since this process is locked within the complex processes of
> the Neural Net itself, its implicitness does not make it readily available
> to the programmer.  Because I am more interested in using discernible
> specializations and generalizations, the recognition that these kinds of
> processes are mutually significant, and that one of the challenges in AGI
> is the achievement of greater generalization, provided me with a new means
> to break the program down into more fundamental parts.  And since I knew
> that I could write a program that would let me personally define the
> nature of specializations and generalizations (in a partially automated
> program), I realized that I could test different ideas in a simple
> progression.  So when I realized that I could try applying simulations of
> learned specializations and generalizations, I realized that I could test
> different parts of the theory without having to fully develop the program.
>
>
>
> So there was something about my appreciation of the nature of
> thought-derived generalizations that allowed me to develop this new
> theory.  And there was something about the appreciation of the theory that
> allowed me to break the AGI problem down in a somewhat novel way.  And
> because I realized that I could use these parts as I chose in an ongoing
> development of the program, I realized that I could use different
> strategies to develop and test variations in a controlled way.  But the
> development of these ideas will not go smoothly if the theory is not a
> good one.
>
>
>
> This analysis gives me some more insight into how problems may be
> effectively broken into smaller pieces even when previous efforts to do
> this have run into obstacles.
>
>
>
> Jim Bromer
>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/19999924-5cfde295> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com>
>



