PM: Jim, your prior e-mail reads like you are either a chatbot or are
attempting NLP (Neuro-Linguistic Programming) or DHE.
Just ask, my answer may be yes or no. My own reason for assisting is that
I'd like you to understand my approach.
-----------------------------------------------

I will try the experiment on Aaron.
Jim Bromer





On Wed, Dec 5, 2012 at 12:18 PM, Piaget Modeler
<[email protected]> wrote:

>
> Jim, your prior e-mail reads like you are either a chatbot or are
> attempting NLP (Neuro-Linguistic Programming) or DHE.
>
> Just ask, my answer may be yes or no.  My own reason for assisting is that
> I'd like you to understand my approach.
>
> Differentiation IS conditional branching.  Observation is receiving
> sensory stimuli.  Coordination means making inferences.
> Integration is combining different concepts via their attributes, akin to
> crossover or memetic recombination.
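As a rough sketch of that last definition (an illustration only, not PM's actual mechanism), one could treat concepts as attribute sets and integration as crossover over those attributes:

```python
import random

def crossover(concept_a, concept_b, seed=0):
    """Blend two concepts by recombining their attributes,
    roughly analogous to genetic crossover."""
    rng = random.Random(seed)
    blended = {}
    for attr in set(concept_a) | set(concept_b):
        if attr in concept_a and attr in concept_b:
            # Shared attribute: pick one parent's value at random.
            blended[attr] = rng.choice([concept_a[attr], concept_b[attr]])
        else:
            # Unique attribute: inherit it directly.
            blended[attr] = concept_a.get(attr, concept_b.get(attr))
    return blended

bird = {"locomotion": "flies", "covering": "feathers", "sings": True}
fish = {"locomotion": "swims", "covering": "scales", "gills": True}
print(crossover(bird, fish))
```

The names and the attribute-dict representation are my own hypothetical choices; the point is only that recombining attributes from two parent concepts yields a new, blended concept.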
>
> Please define verification.  It may be what I call correlation.
>
> Cheers!
>
> Imagine, NLP via e-mail.  Whooda thunk it?
>
>
> ------------------------------
> Date: Wed, 5 Dec 2012 07:54:16 -0500
>
> Subject: Re: [agi] Internal Representation
> From: [email protected]
> To: [email protected]
>
> I agree with Piaget's remark.
>
> I am going to conduct an experiment.  I want to see if I can get you to
> solve a problem for me.  So I am going to keep track of our conversation by
> keeping notes on particular issues related to this experiment.  It is
> unlikely that you would be able to solve a particular problem that is of
> interest to me, so I am going to be looking for an unexpected solution to
> some related problem that I will pick up somewhat serendipitously from our
> conversation.  The best way to get you to cooperate with me on this is to
> get you to talk about the thing you are interested in. However, the
> solutions to the problems of your projects probably will not be the
> solutions to the problems of my projects, so I have to find a way to get
> you to talk about
> something that is common to both of our projects.
> So I have gotten you to describe some ways that your program can apply
> imagination to problem solving.  You seem to acknowledge that integration
> is a part of the process, but you haven't acknowledged that complexity is a
> problem.  So now, in order to get you to continue discussing this I have to
> back off from talking about complexity and emphasize the problems of
> 'verifying' and integrating internal projections.  I will review your
> message in response to my question of how your program will use
> imagination, and I will copy that response into my notes.  Now that I have
> reviewed some of your previous messages I see that you mentioned Piaget's
> comments on coordination before.  Coordination seems to be very similar to
> conceptual integration.  I also found that you had told me that Michalski
> had a fast inferencing method so that must be important to you for some
> reason.
>
> So, to repeat myself for clarity.  I am going to run a
> subjective experiment for a couple of weeks. The goal is to get you to
> solve a problem for me and I want to be able to note how I personally
> integrate subject related serendipity into my knowledge structures
> concerning the subject. It is unlikely that you would be able to solve a
> problem that I specified in advance so I am going to look for an unexpected
> serendipitous solution to some problem that I haven't yet
> completely identified.  In order to get you to participate in this
> experiment I need to encourage you to talk about your project using terms
> that are relevant to both of us.  Since I will be keeping notes I have
> started by reviewing and collecting some of the comments you made in this
> thread.  I can then use this knowledge to get you to continue talking about
> things that interest you.  I noted that you have not acknowledged that
> complexity is a problem so I will back off that particular problem and try
> to shift to integration (coordination) issues that seem challenging for an
> automated AGI program to use effectively.  Now that I have explained this
> 'experiment' to you I will stop talking about it and get back to the
> subject.
>
>
> On the list of mental coordination methods, internal simulation methods
> and inferencing you did not specifically mention conditional branching so
> there is a chance that you (or Piaget) left that off the list. I would say
> that is a pretty important concept! On the other hand, running different
> methods to use in a comparison with perceived events seems to imply
> conditional branching.
>
>
> Anyway, the next question I have for you concerns 'verification' and
> integration (coordination).  Without strong verification, coordination is
> essentially going to tie weak inferences together. If you accept that this
> could be a problem then how would your program use the products of
> coordination reliably?
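One way to make that worry concrete (a toy model of my own, not anything from either project): if each inference in a coordinated chain carries an independent confidence, the chain's confidence is the product of the links, so unverified weak inferences compound quickly:

```python
def chain_confidence(confidences):
    """Confidence of a chain of inferences, treating each link's
    confidence as an independent probability."""
    result = 1.0
    for c in confidences:
        result *= c
    return result

weak = [0.7] * 5       # five unverified, weakly supported inferences
verified = [0.95] * 5  # the same chain after strong verification

print(round(chain_confidence(weak), 3))      # → 0.168
print(round(chain_confidence(verified), 3))  # → 0.774
```

On this (admittedly simplistic) reading, "strong verification" matters because it raises the per-link confidence, which is the only thing that keeps a long coordinated chain usable.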
>
> Jim Bromer
>
>
> On Tue, Dec 4, 2012 at 11:45 PM, Piaget Modeler <[email protected]> wrote:
>
> "The central idea is that knowledge proceeds neither solely from the
> experience of objects nor from an innate programming performed in the
> subject, but from successive constructions, the result of constant
> development of new structures."  ~ Jean Piaget
>
>
> So I think we knit together these insights, piecemeal, until they recur
> and strengthen, and become
> more predictable and forceful in our minds.  Then they integrate and form
> a larger structure, and
> eventually they become a subsystem, integrating with other subsystems,
> until they finally integrate
> with the totality.
>
>
> Or at least that's how I interpreted it in "The Development of Thought" by
> J.Piaget.
>
>
> Cheers.
>
>
> ~PM.
>
>
> ------------------------------
> Date: Tue, 4 Dec 2012 23:12:06 -0500
>
> Subject: Re: [agi] Internal Representation
> From: [email protected]
> To: [email protected]
>
> Well, I would look at Ryszard Michalski's work on dynamically interlaced
> hierarchies if it was convenient for me to do so. Nothing about this is
> mentioned on his home page and the first reference I looked at did not seem
> like a breakthrough paper.
>
> I want to finish something that I was thinking about.
>
> We (or a machine) would be able to build strong knowledge if the knowledge
> that was gained could be used to reliably predict, explain or produce a
> specific outcome.  But often, the outcomes are weak or unreliable
> indicators of much of value.  So instead we are left with a lot of weakly
> related situation-action-reaction insights that are inexplicably
> conditional and variant.
>
> This is a lot like serendipitous learning.  If I try to learn something, I
> probably won't be able to figure out what I wanted to figure out (unless it
> is something that other people had already figured out and it was within my
> field of knowledge).  But I would probably learn something new
> serendipitously.  Now can we patch a lot of weak unexpected insights
> together?  Yes, but in order to build something reliable out of a lot of
> weak structural pieces, they have to be integrated pretty thoroughly. The
> integration does not have to be perfect, but the matrix of these things has
> to be strong enough to serve as a foundation for greater insights.
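A toy sketch of that patching process (my own hypothetical illustration, echoing Piaget's "recur and strengthen"): weak serendipitous insights accumulate weight as they recur, and only the sufficiently reinforced ones count as foundation:

```python
class InsightMatrix:
    """Toy model: weak insights gain weight each time they recur;
    only sufficiently reinforced ones enter the foundation."""
    def __init__(self, threshold=3):
        self.counts = {}
        self.threshold = threshold

    def observe(self, insight):
        # Each recurrence of an insight strengthens it.
        self.counts[insight] = self.counts.get(insight, 0) + 1

    def foundation(self):
        # Keep only insights reinforced past the threshold.
        return {i for i, n in self.counts.items() if n >= self.threshold}

m = InsightMatrix()
for insight in ["A", "B", "A", "A", "C", "B", "A"]:
    m.observe(insight)
print(m.foundation())  # → {'A'}
```

The class name, threshold, and counting scheme are assumptions for illustration; the thorough integration Jim describes would presumably also check consistency between insights, not just recurrence.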
>
> Jim Bromer
>
> On Tue, Dec 4, 2012 at 9:31 PM, Piaget Modeler 
> <[email protected]> wrote:
>
>
> I would agree that you also need multi-strategy reasoning in addition
> to correlations.
>
> Look at Ryszard Michalski's work on dynamically interlaced hierarchies.
> He has a fast and efficient mechanism for inference.  He inspired me.
>
> Cheers,
>
> ~PM.
>
>
> ------------------------------
> Date: Tue, 4 Dec 2012 18:36:20 -0500
>
> Subject: Re: [agi] Internal Representation
> From: [email protected]
> To: [email protected]
>
> I discovered something about logic that I never knew before.  It is
> something that I have thought about for 40 years, but I never stopped to
> explore the application.  Now, shouldn't this new insight give me greater
> understanding?  Well, yeah, but it doesn't work that way.  I have a new
> insight but I haven't got any use for it.  So now I have to try to find
> some practical use for it.  Well even though I don't have any use for it, I
> might pick up some street creds by telling other people about it right?
> Well no, not really.  It is really a turn-the-crank kind of thing and the
> fact that I thought about it for so long without ever once examining its
> application is kind of embarrassing.  So now, before I can talk about it I
> have to search for some way to use the idea effectively.  If I found some
> utility for it then I could pick up some credit for it, but until then it
> is just going to make my work with logic more complicated.
>
> The insight was a turn-the-crank kind of insight so it represented the
> application of a familiar idea onto another familiar idea in a way that was
> very familiar to me.  The only thing I did differently was to actually see
> how it worked in a few examples.  When I did that I realized that the
> effects were not exactly what I expected.  However, logic is an artificial
> field which is well formed so that other logic-based ideas, like something
> from mathematics, can sometimes be easily integrated into it.  In real
> world examples of ideative projection, the analysis of turn-the-crank
> imagination cannot easily be achieved just by using other (integrated or
> related) methods of internal ideative projection.  And as I just explained,
> simple correlation methods are not an easy substitute for insightful
> methods.
>
> Jim Bromer
>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
