We do not need to know exactly how the brain (mind) works. But to say that
'all we really need are some observable learning events to work from' is
too simplistic. If the program is intelligent, then it is intelligent; yes,
of course. But the interesting questions concern how to overcome the
challenges that we haven't figured out yet. So yes, of course, if your
program produces thought-acts just like a child, then you can say that you
don't need to know the details of how a human child's mind is able to work
with general intelligence in order to get your program to work. I agree
with that, but the chance to have that conversation is not why I have been
posting in these groups. That claim is actually a functional identity
hypothesis. I was never truly interested in the functional identity issues
that these discussion groups get caught up in; I only got caught up in them
while trying to get other people to move on to more interesting
discussions. Since a computer is not a living brain, the matter is a priori
settled regardless of any divergence of opinion.

The real issue is figuring out the internal representations and processes
that could get a program to work. Your experimental methods are
commendable, but declaring that the method is a "simple iterative process"
is not an accurate description of what you actually do. It is like saying
that life is just a simple iterative process. I might use a line like that
in poetry or fiction (if I ever wrote poetry or fiction), but I would not
want it to be remembered as my philosophy of life!
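
To make that concrete, here is a rough Python sketch of the loop you
describe (all the names here are mine and purely hypothetical). The loop
itself really is simple; the point is that the two stub functions, which
nobody knows how to write in general, are where all the difficulty hides:

    # A sketch of the "simple iterative process", with hypothetical names.
    # The loop is trivial; the stubs are where the hard problems live.

    def ascribe_causes(failures):
        # Open problem: explain WHY the representation failed on these.
        return failures

    def modify_representation(rep, causes):
        # Open problem: change the representation to avoid those causes.
        return rep

    def iterate(rep, test_cases, handles, max_rounds=100):
        for _ in range(max_rounds):
            failures = [c for c in test_cases if not handles(rep, c)]
            if not failures:
                return rep  # adequate, relative to this test suite
            rep = modify_representation(rep, ascribe_causes(failures))
        return None  # the loop by itself guarantees nothing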

Here is a question I am interested in: how do you, or how would you,
integrate imagination into the analysis of some simple recognition
problem?
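
To give the question a concrete toy form (this is only a hypothetical
sketch, not a proposal): imagine a recognizer that, before committing to a
label, generates imagined variants of its input and answers only when
recognition is stable across them.

    # Toy sketch: "imagination" as generating variants of a stimulus and
    # checking that recognition survives them. Everything here is
    # hypothetical and deliberately simple.

    import random

    def imagine_variants(stimulus, n=10):
        # Toy imagination: random perturbations. A serious version would
        # generate structured transformations, not noise.
        return [[x + random.gauss(0, 0.1) for x in stimulus]
                for _ in range(n)]

    def recognize(stimulus, prototypes):
        # Nearest-prototype recognition by squared distance.
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(prototypes,
                   key=lambda lab: dist(stimulus, prototypes[lab]))

    def recognize_with_imagination(stimulus, prototypes):
        answers = [recognize(v, prototypes)
                   for v in [stimulus] + imagine_variants(stimulus)]
        # Commit only when every imagined variant agrees with the original.
        return answers[0] if len(set(answers)) == 1 else None

With prototypes = {'A': [0.0, 0.0], 'B': [1.0, 1.0]}, an input near [0, 0]
would be labeled 'A' only if its imagined variants are labeled 'A' as
well. The interesting question, of course, is what should replace the
random perturbations.
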
Jim Bromer



On Tue, Dec 4, 2012 at 12:21 PM, Piaget Modeler
<[email protected]> wrote:

> Jim: "If you are curious about my opinions on this I would try to explain
> it,"
> Sure Jim, I'd like to know your thoughts on the subject. Perhaps I'm
> missing something.
> My point is that we don't really need to know what's under the hood from
> an architectural perspective. The internal representation is an
> implementation detail, if you think of the larger functional processes as
> black boxes with specific inputs and outputs and well-defined behavior.
>
> I have a straw man representation which I am experimenting with. If it's
> adequate, then that's all that is required. Basic experimentation will
> prove it out. If it fails, then we ascribe causes to the failure, modify
> the representation to avoid the failure, and try again. Simple iterative
> process. Call me naive.
>
> The internal representation has to support certain requirements,
> assumptions, dependencies, and constraints. For me, the main criteria are
> as follows:
>
> 1. The representation needs to support activation.
> 2. The representation needs to support relationships (patterns among
> elements).
> 3. The representation needs to support reification.
>
> As long as the representation does that, I'm satisfied.
>
> ~PM
>
>
> ------------------------------
> Date: Tue, 4 Dec 2012 09:51:11 -0500
> Subject: Re: [agi] Deb Roy: The Birth of a Word
> From: [email protected]
> To: [email protected]
>
> PM: "For me knowing the brain's internal representation would be helpful,
> but is not necessary,
> as long as a program can mimic the output using its own internal
> representation. I can
> use my own straw man representation and see if that works. Any
> representation would
> do for me actually, as long as it gets results."
> -----------------------------------------------------------
> I have no idea why you would make a remark like this, but as I was trying
> to explain why it was wrong I realized that the argument was a side
> issue, at least partly based on semantics, and not very important. If you
> are curious about my opinions on this I would try to explain them, but
> since you probably aren't, I am just going to get back on track as
> quickly as I can.
> We certainly could write programs that could learn individual words using
> an observe-interact-and-compare strategy. The problem is that as
> knowledge grows, the number of possible meanings and relevant actions for
> a particular IO event increases to the point that it becomes impossible
> to search through them all.
> In other words, all evidence (or my intuition about the evidence that I
> have seen) points to the necessity of using an extensive (not exhaustive
> but extensive) comparative method to look at possibilities for meaning
> and to find good reactions to an IO event. An AGI program cannot note
> every detail of an ongoing event and use that information to perfectly
> denote the meaning of the event, so it must rely on a search through
> possibilities. But when you have extensive knowledge about uncountable
> combinations of possibilities that might be relevant to a situation, the
> program simply cannot search through them all in a reasonable amount of
> time. And remember, the program has to be using some creativity as it
> searches through the possibilities, so some of the possibilities it has
> to consider would be functionally imaginative.
> Your (would-be) AGI program can learn first words much faster than a
> baby. The problem is that we don't have any good strategies for producing
> more complex levels of recognition and reaction that can be used
> effectively. Perhaps I am wrong about this, and perhaps I do have a good
> strategy in mind that might actually work to some degree; I just don't
> feel that is too likely. But maybe I should try some of my ideas out just
> to see what happens.
> Jim
> On Tue, Dec 4, 2012 at 2:50 AM, Piaget Modeler
> <[email protected]> wrote:
>
> The way I view it these days is that a particular set of schemes (or
> solutions, as I call them) are activated and differentiated over this
> time period: the period it takes for "gaa" to transform into "water"
> during sessions of primary circular reactions (the infant hearing his own
> voice and deciding to have it match his caregiver's pronunciation) or
> secondary circular reactions (the infant getting the caregiver to say
> "water").
> For me, knowing the brain's internal representation would be helpful, but
> is not necessary, as long as a program can mimic the output using its own
> internal representation. I can use my own straw man representation and
> see if that works. Any representation would do for me actually, as long
> as it gets results.
>
> ~PM
>
>


