I do understand that we are here at different stages of life, some more
scientific than others, some more brilliant than others, but this thread is
treading on the edge of insanity. Let me see if I can avoid making things
worse:

Concepts are a bit mysterious. They have to arise out of fuzzy data, be
organized in an ontology, and ideally avoid contradictions, circuitous logic
(of infinite length), etc. Then they have to show some elasticity, expand or
shrink, without making a mess. I have previously expressed sympathy and
appreciation for projects like CYC that "annotate" the physical and mental
worlds, which otherwise could be intractable for, let's say, Bayesian
statistics.

Invariance is just a property of formal systems/mathematics. In arithmetic
the numbers 0 and 1 are "invariants for addition and multiplication"; in
geometric projections, certain axes and planes are invariant under rotations,
translations, etc. When you build vision systems, invariants are defined
before you even process the first picture: if you choose to average the
color/frequency/"temperature" of all your pixels into a single number, then
you'd simply collapse all faces into racial categories like white and
black. If you include a couple more metrics you may be able to
distinguish your mother from your father, etc.
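
To make the point concrete, here is a minimal sketch (all names and pixel
values are hypothetical, not from any real vision system): collapsing an
image to a single mean intensity makes two very different "faces"
indistinguishable, while adding just one more summary metric separates them.

```python
# Hypothetical illustration: invariants chosen before seeing any data.
import statistics

def single_metric(pixels):
    """Collapse an image to one number: the mean intensity."""
    return round(statistics.mean(pixels), 2)

def richer_metrics(pixels):
    """Mean plus spread: two numbers distinguish more images."""
    return (round(statistics.mean(pixels), 2),
            round(statistics.pstdev(pixels), 2))

face_a = [100, 110, 90, 100]   # fairly uniform intensities
face_b = [10, 190, 10, 190]    # high contrast, but the same mean

# One metric collapses the two faces into a single category:
assert single_metric(face_a) == single_metric(face_b)
# Two metrics pull them apart again:
assert richer_metrics(face_a) != richer_metrics(face_b)
```

The choice of which summaries to keep is exactly the choice of invariant,
and it is made before the first picture is ever processed.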

Now, if the question is how to generate "ideal" invariances so that the
system will "never" confuse any of the possible and impossible objects of the
world, then you are back to square one: the best solution is the one that
stores all actual annotated images at maximum resolution and hopes for the
best, interpolating and extrapolating for all the images that you do not
have. Training neural nets to do feature recognition will always be
mathematically inferior, even though feature extraction and feature-vector
storage help enormously with the computational demands of "100 million
pixels".

Since we are talking psychology, it is worth noting that behaviorists
collected enough data to suggest that "mental life" and concepts are an
illusion, and that what actually happens is a mapping from stimulus to
action. I.e., there is no line and there is no spoon independently of a
possible action or reaction. Behaviorists may be out of fashion and somewhat
limited in their approach, but a whole lot of roboticists and artificial-life
people stay within the behavioral paradigm, and this lecture can be seen as
following the trend:
http://www.youtube.com/watch?v=7s0CpRfyYp8
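
The "store everything annotated and hope for the best" strategy can be
sketched as nearest-neighbor lookup over the raw stored examples (the store,
labels, and distance here are toy assumptions of mine, not anyone's actual
system):

```python
# Illustrative sketch: classify a new image by finding the closest
# stored, annotated example at "full resolution" (here, raw pixel lists).
def distance(a, b):
    """Squared Euclidean distance between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# The store keeps every annotated image in full, as (pixels, label).
store = [
    ([0, 0, 0, 0], "blank"),
    ([255, 255, 255, 255], "white"),
    ([0, 255, 0, 255], "stripes"),
]

def classify(query):
    """Interpolate/extrapolate by returning the nearest stored label."""
    return min(store, key=lambda entry: distance(entry[0], query))[1]

print(classify([10, 240, 5, 250]))  # -> stripes
```

Feature extraction would replace the raw pixel lists with short feature
vectors: cheaper to store and compare, at the cost of whatever the chosen
features throw away.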

My humble opinion is that we should be pursuing both behavioral and
"conceptual" versions of AGI; eventually we may get somewhere more
enlightening.
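
On the "conceptual" side, the differentiation scheme described in the quoted
message below (A = A1 + A2, ad infinitum) could be sketched like this; the
class and example strings are my own hypothetical stand-ins:

```python
# Minimal sketch of a concept that differentiates into sub-instances
# while preserving the originals, as in the quoted A = A1 + A2 scheme.
class Concept:
    def __init__(self, name, first_example):
        self.name = name
        self.sub_instances = [first_example]  # the concept starts as A1

    def differentiate(self, new_example):
        """Add A2, A3, ...; the concept is the sum of its sub-instances."""
        self.sub_instances.append(new_example)

line = Concept("Line", "straight stroke")  # A = A1
line.differentiate("Pollock drip")         # A = A1 + A2
assert len(line.sub_instances) == 2
```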

AT

On Sun, Jul 22, 2012 at 12:49 AM, Piaget Modeler
<[email protected]> wrote:

>
> Perhaps the appropriate word is "differentiated".
>
> Once we have one instance of a concept A, the instance can be modified
> into
> a new sub-instance A2, while preserving the original instance as A1.  The
> concept
> A can now be characterized as  A = A1 + A2.  This can be done ad
> infinitum.
>
> So initially, we have a concept "Line" which serves as our concept A.  We
> encounter
> a second instance of line which we now call A2, and we differentiate our
> initial concept
> of line so that A = A1 + A2.  And so on, ad infinitum.
>
>
>
>
> > From: [email protected]
> > To: [email protected]
>
> > Subject: RE: [agi] Re: How the Brain Works -- new H+ magazine article,
> by me
> > Date: Sat, 21 Jul 2012 11:49:24 -0500
>
> >
> > Mike,
> >
> > Invariant representations are not adapted. They are *created* each time
> you
> > see something. Then they are compared to determine if the image is
> familiar
> > to you. Some may be kept in storage for all your life.
> >
> > Your challenge is very easy. I already explained how to do each and every
> > detail: automatically, with a camera that looks at the picture and
> applies
> > EI to obtain the invariant representations. I do not anticipate this
> > actually happening for a few years because new hardware would be
> required,
> > which does not exist yet.
> >
> > You must also account for the fact that you can't keep asking me to
> > endlessly explain the same thing.
> >
> > Sergio
> >
> >
> > -----Original Message-----
> > From: Mike Tintner [mailto:[email protected]]
> > Sent: Saturday, July 21, 2012 11:38 AM
> > To: AGI
> > Subject: Re: [agi] Re: How the Brain Works -- new H+ magazine article,
> by me
> >
> > DRAW what you mean.
> >
> > Here are examples of a "line". Explain visually how an existing
> > concept/invariant representation of "line" can be adapted - VISUALLY - to
> > embrace the endless new lines that you may be presented with.
> >
> > http://freethumbs.dreamstime.com/267/big/free_2672831.jpg
> >
> >
> http://media.smithsonianmag.com/images/Jackson-Pollock-1943-Mural-631.jpg
> >
> > Saying there are infinite line representations explains nothing. You
> have to
> > recognize how all the examples you may have in your head classify as a
> > "line" - what they have in common. And to distinguish a "line" from
> another
> > shape - for example, a blob or blot.
> >
> > I am pretty sure, Sergio, that you have v. little idea what you are
> talking
> > about. Show - draw - me wrong.
> >
> > (So far you've always backed out and disappeared when seriously
> challenged).
> >
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
