Well, I did not want to get caught up in the question of the value of a
superficial co-occurrence between a black-box observation and the behavior
of your program; correlation derived from trial and comparison is a valid
method.  However, the point that I forgot to make is that this viewpoint,
taken dogmatically, can lead to insipid internal representations (and
processes) which may discover superficial correlations between observed
events and internally projected events (aka "predictions") without
providing any insight into the deeper nature of the observed event.  At
this point someone can (dogmatically) trot out the functional identity
argument (as I have called it) and say: well, if my AGI program were able
to explain more and more observed events using its predictions (or
internal projections such as predictions, explanations, theories about the
observed events and so on), then eventually the system would be able to
predict anything!  It would be true AGI!  The singularity would occur!
Well, yeah, not the singularity part, but yeah, it would be true AGI.
Only this is the dreaded hollow functional identity argument all over
again.  Wherever my criticisms have taken me, the other guy can proudly
show his functional identity argument again, even though it is clearly a
hollow argument.  The real problem is: how might a program effectively do
this?  There is obviously more to it than mere correlation prediction and
simple refinement iteration.  If the program does not have something that
other programs lack, then it will never get beyond a few superficially
accurate "predictions" (or other shallow ideative explanatory projections),
and those would be overwhelmed by the many explanatory projections that
are off the mark.

As I have said over and over again, I really believe that it is a
complexity problem.

So, without a unique situational analysis for each situation (or situation
component) that can occur, how do you refine a response (like recognition)
so that the complexity problems can be reliably avoided?  Can we
programmers simulate this using highly structured configurations of events,
to see whether we can discover models that would allow computers to
overcome the complexity of non-unique component analysis?  Or would that be
a waste of time, because the artificiality of the experiment would just
make it easier for us to avoid the true nature of the problem?
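
To make that question concrete, here is a toy sketch of the kind of
artificial experiment I have in mind.  The event structure and the naive
learner are illustrative assumptions only, not a proposal for how the real
problem should be solved:

import itertools
import random

# Toy "world": an event is a set of component features drawn from small
# alphabets.  There is nothing unique about any one situation.
FEATURES = {
    "shape": ["circle", "square", "triangle"],
    "motion": ["rising", "falling", "still"],
    "context": ["indoor", "outdoor"],
}

def random_event():
    return {k: random.choice(v) for k, v in FEATURES.items()}

def candidate_interpretations(event):
    """A learner with no unique analysis per situation must treat every
    subset of the event's components as a possible 'meaning', so the
    candidate space doubles with each additional component."""
    items = sorted(event.items())
    subsets = []
    for r in range(1, len(items) + 1):
        subsets.extend(itertools.combinations(items, r))
    return subsets

if __name__ == "__main__":
    random.seed(0)
    event = random_event()
    print("event:", event)
    print("candidate interpretations:", len(candidate_interpretations(event)))
    # Add a few more component types and this count explodes, which is
    # the complexity problem in miniature.

Even a toy like this would at least let us measure how fast the candidate
space grows as we add structure, though it may tell us nothing about the
real problem if the structure is too artificial.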

Jim Bromer


On Tue, Dec 4, 2012 at 4:58 PM, Piaget Modeler <[email protected]> wrote:

>  It's probably more "trial and error", but consistent trial and
> error--which can be iterative (akin to a search algorithm).
>
> So I view imagination as a combination of coordination (adding inferences)
> and mental simulation (supporting diverging
> viewpoints).   According to Piaget, there is observation (direct sensory
> perception) and coordination (drawing inferences
> from perception as well as from other inferences).
>
> So what inferences can we have during coordination?  Certainly instance
> induction (forming cases), type induction (forming types), concurrence
> association, sequence association, similarity creation, difference
> creation, equality creation, and analogy, which I view as "idea
> substitution", as well as other processes.  I would lump planning into
> the coordination bucket as well.
>
> All these things can occur during coordination.  And all these
> coordination processes are always running in PAM-P2.
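>
> As a minimal sketch (not the actual PAM-P2 code; the process names and
> registry here are illustrative assumptions), the always-running
> coordination processes could be organized like this:
>
> from typing import Callable, Dict, List
>
> # Each coordination process takes the current percepts/inferences and
> # returns any new inferences it can draw from them.
> CoordinationProcess = Callable[[List[str]], List[str]]
>
> def concurrence_association(items: List[str]) -> List[str]:
>     # Toy rule: anything observed together in the same cycle is associated.
>     return [f"concurrent({a},{b})" for a in items for b in items if a < b]
>
> def similarity_creation(items: List[str]) -> List[str]:
>     # Toy rule: items sharing a name prefix are marked as similar.
>     return [f"similar({a},{b})" for a in items for b in items
>             if a < b and a.split("_")[0] == b.split("_")[0]]
>
> # All registered processes run on every coordination cycle, mirroring
> # the idea that they are "always running".
> PROCESSES: Dict[str, CoordinationProcess] = {
>     "concurrence": concurrence_association,
>     "similarity": similarity_creation,
> }
>
> def coordination_cycle(percepts: List[str]) -> List[str]:
>     inferences: List[str] = []
>     for proc in PROCESSES.values():
>         inferences.extend(proc(percepts))
>     return inferences
>
> if __name__ == "__main__":
>     print(coordination_cycle(["cup_red", "cup_blue", "ball_red"]))
>
> The registry is only meant to show every process firing on every cycle;
> the real inference rules are of course far richer than these toy ones.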
>
>
> Finally, to answer Mike Tinter's question about mental simulation.
> Daydreaming is already accounted for in the PAM-P2 architecture.
>
>      http://piagetmodeler.tumblr.com
>
>
> PAM-P2 uses a current model of the world and forward models of the world
> (called "viewpoints").  The forward models are
> for mental simulation / daydreaming. Daydreaming is always occurring. The
> Simulation Supervisor component controls
> daydreaming and the Reaction subsystem can interrupt it.
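>
> A rough structural sketch of that arrangement (the class names and
> methods here are illustrative assumptions, not the actual PAM-P2
> implementation):
>
> import copy
>
> class WorldModel:
>     """Current model of the world: a set of believed facts."""
>     def __init__(self, facts=None):
>         self.facts = set(facts or [])
>
> class Viewpoint(WorldModel):
>     """A forward model: a hypothetical world used for mental simulation."""
>     def __init__(self, base, assumption):
>         super().__init__(copy.deepcopy(base.facts))
>         self.facts.add(assumption)
>
> class SimulationSupervisor:
>     """Controls daydreaming: keeps spawning forward models (viewpoints)."""
>     def __init__(self, current):
>         self.current = current
>         self.viewpoints = []
>         self.interrupted = False
>
>     def daydream_step(self, assumption):
>         if self.interrupted:
>             return None
>         vp = Viewpoint(self.current, assumption)
>         self.viewpoints.append(vp)
>         return vp
>
> class ReactionSubsystem:
>     """Can interrupt daydreaming when something demands a response."""
>     def __init__(self, supervisor):
>         self.supervisor = supervisor
>
>     def urgent_event(self, fact):
>         self.supervisor.interrupted = True
>         self.supervisor.current.facts.add(fact)
>
> if __name__ == "__main__":
>     world = WorldModel({"cup_on_table"})
>     supervisor = SimulationSupervisor(world)
>     supervisor.daydream_step("cup_falls")      # daydreaming is always occurring
>     ReactionSubsystem(supervisor).urgent_event("loud_noise")  # reaction interrupts
>     print(len(supervisor.viewpoints), supervisor.interrupted)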
>
> We don't know exactly how the mind works, but we can define requirements.
> We know what behavior we want a Cognitive System to exhibit, and we can
> build a system that meets our requirements, however far short it falls.
> Then we refine our requirements to close the gaps with [human] exemplars.
> A basic spiral prototyping approach.
>
>
> ~PM.
>
> ------------------------------
> Date: Tue, 4 Dec 2012 15:08:08 -0500
> Subject: Re: [agi] Internal Representation
> From: [email protected]
> To: [email protected]
>
>
> We do not need to know exactly how the brain (mind) works. But to say
> that 'all we really need are some observable learning events to work
> from' is too simplistic.  If the program is intelligent, then it is
> intelligent. Yes of course.  But the interesting questions concern the
> problem of overcoming those challenges that we haven't figured out yet.  So
> yes, of course, if your program produces thought-acts just like a child
> then you can say that you don't need to know the details of how a human
> child's mind is able to work with general intelligence in order to get your
> program to work.  I agree with that, but the chance to have that
> conversation is not why I have been posting in these groups.  That view
> is actually a functional identity hypothesis.  I was never truly
> interested in the functional identity issues that these discussion groups
> get caught up in; I only got caught up in them while trying to get other
> people to move on to more interesting discussions.  Since a computer is
> not a living brain, the matter is a priori settled regardless of any
> divergence of opinion.
>
> The real issue is figuring out the internal representations and processes
> which could get a program to work. Your experimental methods are
> commendable, but to declare that the method is a "simple iterative
> process," is not an accurate description of what you actually do.  It is
> like saying that life is just a simple iterative process. I might use a
> line like that in poetry or fiction (if I ever wrote poetry or fiction) but
> I would not want that to be remembered as my philosophy of life!
>
> Here is a question I am interested in:
> How do you or how would you integrate imagination into the analysis of
> some simple recognition problem?
> Jim Bromer
>
>
>
> On Tue, Dec 4, 2012 at 12:21 PM, Piaget Modeler <[email protected]> wrote:
>
> Jim: "If you are curious about my opinions on this I would try to explain
> it,"
> Sure Jim, I'd like to know your thoughts on the subject. Perhaps I'm
> missing something.
> My point is that we don't really need to know what's under the hood from
> an architectural
> perspective. The internal representation is an implementation detail, if
> you think of the
> larger functional processes as black boxes with specific inputs and
> outputs and well-defined behavior.
>
> I have a straw man representation which I am experimenting with. If it's
> adequate, then
> that's all that is required. Basic experimentation will prove it out. If
> it fails, then we
> ascribe causes to the failure, modify the representation to avoid the
> failure, and try again.
> Simple iterative process. Call me naive.
>
> The internal representation has to support certain requirements,
> assumptions, dependencies,
> and constraints.  For me, the main criteria are as follows:
>
> 1. The representation needs to support activation.
> 2. The representation needs to support relationships (patterns among
> elements).
> 3. The representation needs to support reification.
>
> As long as the representation does that, I'm satisfied.
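>
> As a minimal illustrative sketch (the names here are assumptions, not my
> actual straw man representation), an element meeting those three criteria
> might look like:
>
> from dataclasses import dataclass, field
> from typing import Dict, List
>
> @dataclass
> class Element:
>     name: str
>     activation: float = 0.0   # 1. supports activation
>     # 2. supports relationships (patterns among elements)
>     relations: Dict[str, List["Element"]] = field(default_factory=dict)
>
>     def relate(self, label: str, other: "Element") -> None:
>         self.relations.setdefault(label, []).append(other)
>
>     def reify(self, pattern_name: str) -> "Element":
>         # 3. supports reification: a pattern over elements becomes a
>         # first-class element that can itself be activated and related.
>         new = Element(pattern_name)
>         new.relate("part", self)
>         return new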
>
> ~PM
>
>
> ------------------------------
> Date: Tue, 4 Dec 2012 09:51:11 -0500
> Subject: Re: [agi] Deb Roy: The Birth of a Word
> From: [email protected]
> To: [email protected]
>
> PM: "For me knowing the brain's internal representation would be helpful,
> but is not necessary,
> as long as a program can mimic the output using its own internal
> representation. I can
> use my own straw man representation and see if that works. Any
> representation would
> do for me actually, as long as it gets results."
> -----------------------------------------------------------
> I have no idea why you would make a remark like this, but as I was trying
> to explain why it was wrong I realized that argument was a side issue, at
> least partly based on semantics, which is not very important. If you are
> curious about my opinions on this I would try to explain it, but since you
> probably aren't I am just going to get back on track as quickly as I can.
> We certainly could write programs that could learn individual words using
> an observe-interact-and-compare strategy. The problem is that as knowledge
> grows, the possibilities of finding meaning and relevant actions for a
> particular IO event increase to the point that it becomes impossible to
> search through them all.
>  In other words, all evidence (or my intuition about the evidence that I
> have seen) points to the necessity of using an extensive (not exhaustive
> but extensive) comparative method to look at possibilities for meaning and
> finding good reactions to an IO event. An AGI program cannot note every
> detail of an ongoing event and use that information to perfectly denote the
> meaning of the event, so it must rely on an extensive search of
> possibilities. When you have extensive knowledge about uncountable
> combinations of possibilities that might be relevant to a situation, then
> the program just cannot search through them all in a reasonable amount of
> time. And remember, the program has to be using some creativity as it
> searches through the possibilities, so some of the possibilities that it
> has to consider would be functionally imaginative.
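>
> To illustrate the difference between an exhaustive search and an
> extensive (but bounded) comparative search, here is a toy sketch; the
> scoring rule and the beam idea are my own illustrative assumptions, not a
> proposed solution:
>
> import itertools
>
> # Toy knowledge base: every known concept could be part of a meaning.
> KNOWLEDGE = [f"concept_{i}" for i in range(30)]
>
> def exhaustive_count(max_size=4):
>     # Every combination of known concepts up to max_size is a candidate
>     # meaning; this count explodes as knowledge grows.
>     return sum(1 for r in range(1, max_size + 1)
>                for _ in itertools.combinations(KNOWLEDGE, r))
>
> def extensive_search(event_digits, beam=5, max_size=4):
>     # Keep only the few best partial interpretations at each step:
>     # an extensive, comparative, but not exhaustive search.
>     def score(combo):
>         return sum(1 for c in combo if c.split("_")[1][-1] in event_digits)
>     frontier = [()]
>     for _ in range(max_size):
>         expanded = [combo + (k,) for combo in frontier for k in KNOWLEDGE
>                     if k not in combo]
>         frontier = sorted(expanded, key=score, reverse=True)[:beam]
>     return frontier
>
> if __name__ == "__main__":
>     print("exhaustive candidates:", exhaustive_count())
>     print("beam survivors:", len(extensive_search("137")))
>
> The beam keeps the search tractable, but nothing in the toy says the
> surviving candidates are the imaginative or insightful ones, which is the
> hard part.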
>  Your (would-be) AGI program can learn first words much faster than a
> baby.  The problem is that we don't have any good strategies for producing
> more complex levels of recognition and reaction that can be used
> effectively. Perhaps I am wrong about this and perhaps I do have a good
> strategy in mind that might actually work to some degree. It is just that I
> don't feel that is too likely. But maybe I should try some of my ideas out
> just to see what happens.
>  Jim
> On Tue, Dec 4, 2012 at 2:50 AM, Piaget Modeler <[email protected]> wrote:
>
> The way I view it these days is that a particular set of schemes (or
> solutions as I call them)
> are activated and differentiated over this time period: the period it
> takes for "gaa" to
> transform into "water" during sessions of primary circular reactions (the
> infant hearing
> his own voice and deciding to have it match his caregiver's
> pronunciation) or secondary
> circular reactions (the infant getting the caregiver to say "water").
>  For me knowing the brain's internal representation would be helpful, but
> is not necessary,
> as long as a program can mimic the output using its own internal
> representation. I can
> use my own straw man representation and see if that works. Any
> representation would
> do for me actually, as long as it gets results.
>
> ~PM
>



