Aaron,
I will try not to make public speculations about your psychological states.
Sorry about that.
Jim Bromer




On Wed, Jan 23, 2013 at 7:49 PM, Aaron Hosford <[email protected]> wrote:

> >> Or perhaps it is just too psychologically threatening for you to take that
>> next step...
>
>
> Just a quick FYI, public speculation about my psychological motivations or
> fears is psychologically annoying. Please stop. I don't need a shrink, and
> if I did, I would pay someone.
>
>
>
> Speaking of needing a shrink:
>
>> Insanity: doing the same thing over and over again and expecting
>> different results.
>> *Albert Einstein <http://www.quotationspage.com/quotes/Albert_Einstein/>*,
>> *(attributed)*
>
>>
>> When a reporter asked, “How did it feel to fail 1,000 times?” Edison
>> replied, “I didn’t fail 1,000 times. The light bulb was an invention with
>> 1,000 steps.”
>
>
>
> We learn things from our failures. Sometimes all that's needed is a minor
> change, and expecting different results is no longer insanity. And besides,
> why can't a project just be so big or difficult that it takes more than a
> year to see progress? So when people tell me that it can't be done because
> it hasn't been done, I roll my eyes and stop listening. Inductive reasoning
> is not applicable to the creative process.
>
>
>
>
> The fact that people can make predictions that do not come true then just
>> rationalize the failure away is powerful evidence that the power of
>> 'prediction' as an AGI tool is nonsense. It is the ability to create new
>> ways of interpreting the evidence that drives human creativity, not
>> verification through prediction or an exotic method of combining possible
>> outcomes given the evidence of the moment into a decision process that
>> determines the next step.
>
>
> I'm no big advocate of prediction as an AGI tool, but I think you
> misunderstand how to use it. If I were going to build prediction into an
> AGI, it wouldn't take the form of wild speculations about years to come. It
> would be things like, "If I turn this handle the right way, water will
> start flowing out of the tap," or, "If I honk the horn, maybe this person
> will get out of my way," or, "If I try some new perspectives, I can
> probably find one that simplifies the problem down to something I
> understand."
>
> Notice the if/then form these take. These are immediate predictions
> directly useful for reaching specific goals, and without the ability to
> make these sorts of simple real-world predictions, we wouldn't be able to
> accomplish anything in our day-to-day lives. I comb my hair in the morning
> because I predict I'll be ridiculed if I don't. I turn the key in the lock
> because I predict it will keep out burglars. I cook my dinner because I
> predict it will satisfy my hunger. I pay my bill because I predict my
> electricity will stay on. We all make these sorts of minor, conditional,
> goal-oriented predictions all day, every day. It's not about creativity.
> It's about getting through the day and doing the (practical!) things that
> have to get done.
>
> And no, none of the predictions I mentioned are 100% reliable. But they
> all make great heuristics, good enough to guide my behavior and improve or
> maintain my quality of life. And if one is wrong, that's when reasons and
> explanations come in. I use them to revise the conditions under which I
> make a particular prediction based on when it fails and when it succeeds,
> so it becomes more accurate, and consequently more useful, over time.
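The revise-on-outcome loop described above — keep a conditional "if I do X, then Y" prediction, record whether it came true, and let its track record guide whether to keep acting on it — could be sketched roughly like this (all names here are hypothetical illustrations, not anyone's actual AGI code):

```python
class Prediction:
    """A small, conditional, goal-oriented prediction of the kind
    described above: "if I do X, then Y will happen". Its reliability
    is revised from actual outcomes rather than assumed."""

    def __init__(self, condition, outcome):
        self.condition = condition   # e.g. "honk the horn"
        self.outcome = outcome       # e.g. "person moves out of the way"
        self.successes = 0
        self.failures = 0

    def record(self, came_true):
        # Revise the track record from the actual result, success or failure.
        if came_true:
            self.successes += 1
        else:
            self.failures += 1

    def reliability(self):
        # Estimated accuracy so far; a heuristic need not be 100%
        # reliable to be worth acting on.
        total = self.successes + self.failures
        return self.successes / total if total else 0.5


p = Prediction("honk the horn", "person moves out of the way")
p.record(True)
p.record(True)
p.record(False)
print(round(p.reliability(), 2))  # 0.67
```

Nothing exotic: the prediction stays useful as a heuristic even while imperfect, and each failure narrows or reweights when it gets used.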
>
>
>
>
>
>
> On Tue, Jan 22, 2013 at 8:49 PM, Jim Bromer <[email protected]> wrote:
>
>> Aaron,
>> I did not want to say anything that might be deemed overly critical after
>> you seemed to be willing to go along with my challenge, but I get the
>> feeling that you did not actually understand what I was doing.
>> I do not actually think that it is likely that I will get my program to
>> work within a year. I was just challenging people who talk about prediction
>> in AGI to try using it in real life, but when they do they have to be
>> willing to accept the actual outcomes of their
>> experiences as compared to their predictions. So while it feels like I
>> should be able to write an AGI program within a year, I do not actually
>> expect that I will be able to do so. So I said that if a year passes (that
>> is if another year passes) and I still am unable to show that I have
>> something interesting, I will have to accept the results of my experiment
>> and recognize that the expectations that I had were not realistic... And
>> the idea that it will just take another year isn't too likely to be realistic
>> either - if I do not have something interesting to show for my efforts
>> after a year (or a year and a half).
>> Perhaps you were not able to totally get what I was actually saying
>> because you are just skimming what I wrote. Or perhaps it is just too
>> psychologically threatening for you to take that next step and recognize
>> that if the programming is just too complicated to get it going in a couple
>> of years then maybe it is just too complicated a problem for us. So if I
>> don't succeed in a year then it will also stand as evidence that it is
>> unlikely that you will succeed. Ok, maybe it is only weak evidence, but if
>> you don't succeed in two years that will stand as evidence that you will be
>> unlikely to succeed - unless you try another tack.
>> I fully realize that people use the concept of prediction in different
>> ways. I just don't see how that changes anything. If it is such a powerful
>> tool then try using it in a way that is consistent with its proclaimed
>> value. The fact that people can make predictions that do not come true then
>> just rationalize the failure away is powerful evidence that the power of
>> 'prediction' as an AGI tool is nonsense. It is the ability to create new
>> ways of interpreting the evidence that drives human creativity, not
>> verification through prediction or an exotic method of combining possible
>> outcomes given the evidence of the moment into a decision process that
>> determines the next step. Those methods are useful but all that I am saying
>> is that the driver of intelligence is the rationally creative process. But
>> if creativity is not used rationally then it can turn into delusion.
>> I just found that the old salvaged program that I discarded years ago was
>> an earlier version of the program. The more recent version was greatly
>> improved but it was unfinished and is too difficult to be useful to me
>> because there are multiple bugs at many points. But I think I might be
>> able to use my reviving memories of what I was trying to do back then to
>> build a new simpler program. And with that simpler program I can begin
>> testing some simple AI / AGI ideas that I have as I go. And if things
>> work out then I could make the program more sophisticated as I go along.
>> So although I do not actually believe that I will be successful, I think
>> that I have a better development plan than I did last time. However, the
>> discovery that the more advanced version is just too complicated and the
>> less advanced program not developed enough is a major setback. It is very
>> negative evidence. This insight comes directly from the schedule and the
>> mature recognition that a month out of a year with nothing to show is a
>> substantial negative indicator. That is an example of how rationalism can
>> be combined with creative insight to produce an insight of value - even
>> though it is not an encouraging insight. How can I use this setback? I have
>> a lot of functions that do work, and I have a lot of plans that can be
>> implemented rather quickly. And I have the most serious mistakes that I
>> made in the past to work with now. But I have to get the basic program
>> going pretty quickly so I can begin some early testing of my AI / AGI ideas.
>> Jim Bromer
>>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
