On Sat, Apr 6, 2013 at 9:47 PM, Piaget Modeler <piagetmode...@hotmail.com> wrote:

> ... just talk about my definition of semiosis:
> computationally linking signs to meanings, and vice versa.
>

Getting a bit desperate, PM? You seem to have asked both how it might be
happening in humans AND how it might happen in some artificial mind design,
but since this is the AGI list I will assume you stayed on topic and meant
the latter. Anyway, the various representation-less models can never be
rejected for either natural or artificial brains: from input to
understanding in one distributed automagic step, or better still an
automagic feedback loop. In this kind of model the "explanations" we are
able to generate on demand have as much validity as the explanations
patients provide in the seminal agnosia experiments, where the experimenter
asks "why can't you see the apple now that I have moved it to your left?"
and gets back "because you distracted me" rather than the more accurate
"half my brain is missing after the accident".

Let me rephrase once again: we just do it. Whatever account we give of the
semiosis stage, or any other stage, is (falsely?) manufactured. Perhaps.

Now in the machine intelligence case, if it is symbolic enough to withstand
scrutiny, I see no mystery. The a-brain works exclusively with meanings,
models, scenarios and other things that de facto "represent themselves
unambiguously", and "hopes" the signing system will work, whether with
another entity or with oneself. I don't know about you, but it has happened
to me that I kept meticulous notes of one thing or another for personal
use, and ended up mystified because I chose wrong, ambiguous, or too few
signs (words and sketches). So a sign is something that you hope encodes
the meaning you want to convey. This system of hope works any way you can
make it work; you may want to subject the other entity, the counterparty,
to years of repetitive education so that they stand a chance of
understanding your more complicated meanings. During education the meanings
of signs will have to be rediscovered/reinvented by the trainees through
trial and error and any other mechanism we know of, none of which is
guaranteed to work; a toy version of that loop is sketched below.
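
To make the trial-and-error part concrete, here is a toy sketch in Python
(entirely my own invention, not any published model): the teacher has a
private sign -> meaning code, the trainee guesses and only gets a yes/no
signal back, reinforcing whatever association happened to work. The sign
inventory is deliberately too small, one meaning has no sign at all, so,
as I said, nothing guarantees full success:

import random

# Hypothetical toy model: a trainee learns a teacher's private
# sign -> meaning code purely from yes/no feedback, by scoring
# every (sign, meaning) pair it has tried.

MEANINGS = ["food", "danger", "follow-me", "stay"]
SIGNS = ["grunt", "whistle", "wave"]          # too few signs on purpose

teacher_code = {"grunt": "food", "whistle": "danger",
                "wave": "follow-me"}          # "stay" has no sign at all

scores = {(s, m): 0.0 for s in SIGNS for m in MEANINGS}

def trainee_guess(sign):
    best = max(MEANINGS, key=lambda m: scores[(sign, m)])
    return best if scores[(sign, best)] > 0 else random.choice(MEANINGS)

for episode in range(1000):
    sign = random.choice(list(teacher_code))
    guess = trainee_guess(sign)
    if guess == teacher_code[sign]:
        scores[(sign, guess)] += 1.0          # reinforce what worked
    else:
        scores[(sign, guess)] -= 0.5          # weaken the wrong link

for sign in teacher_code:
    print(sign, "->", trainee_guess(sign))

After a few hundred episodes the trainee reliably decodes the three signed
meanings; "stay" remains forever out of reach, which is exactly the "too
few signs" failure I described above.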

For whatever reason the "5 senses" are more like 1000 (I am including wide
ranges of feelings and emotions that are universal enough to have
communication and survival value), and they are pre-verbal and non-verbal
to a large extent, which again brings us to a non-representational model:
if some Amazonian tribal leader disapproves of your first visit and first
contact, you don't have to connect his frown or threatening gesture or
scream to a meaning; whatever you see or hear is as good as the meaning.
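
For what it's worth, the contrast fits in a few lines of Python; both
tables below are made-up examples, nothing more:

# Representational: percept -> sign -> meaning -> response (two lookups).
# Non-representational: percept -> response; no intermediate "meaning"
# object ever materializes.

sign_of = {"frown": "disapproval", "smile": "welcome"}
response_to_meaning = {"disapproval": "retreat", "welcome": "approach"}

def representational(percept):
    return response_to_meaning[sign_of[percept]]

direct = {"frown": "retreat", "smile": "approach"}

def non_representational(percept):
    return direct[percept]        # the percept is as good as the meaning

assert representational("frown") == non_representational("frown")

Behaviorally the two are indistinguishable, which is exactly why the
representation-less models can never be rejected from the outside.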

I will briefly remind the reader that language-creation experiments with
embodied agents/robots are a tool of the highest significance in our search
for AGI. In those experiments the sign is the agent's state as observable
from the outside (e.g. the position of its body parts and any sounds,
blinking lights, and TCP/IP traffic it produces), while the meaning is
everything else. I'd risk asserting that ANY genetic programming experiment
involving a multitude of agents, driven either (a) by assigning survival
value to increased signal-mediated cooperation or (b) by the
ever-increasing utility value of a single agent trying to maximize its
control of all the others, would eventually produce AGI, albeit not a
particularly friendly one; a bare-bones sketch of variant (a) follows
below. I would also assert that if fear and pride did not "force" humans to
participate in education we would never have evolved our signaling and
internal states; we'd be more basic, more visceral, than the aforementioned
uncontacted tribe.
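
A bare-bones version of variant (a), again only a sketch under invented
assumptions (genomes are flat lookup tables, there is no embodiment, and
fitness is earned whenever a speaker's signal lets a listener pick the
matching action), might look like this in Python:

import random

# Hypothetical toy run of variant (a): each genome maps
# situation -> signal (first half) and signal -> action (second half);
# a pair of agents scores a point whenever signalling lets the listener
# act correctly for the speaker's situation.

SITUATIONS, SIGNALS, ACTIONS = 4, 4, 4
POP, GENERATIONS = 40, 200

def random_genome():
    return [random.randrange(SIGNALS) for _ in range(SITUATIONS)] + \
           [random.randrange(ACTIONS) for _ in range(SIGNALS)]

def fitness(speaker, listener):
    score = 0
    for situation in range(SITUATIONS):
        signal = speaker[situation]              # speaker emits
        action = listener[SITUATIONS + signal]   # listener acts
        if action == situation:                  # coordinated => reward
            score += 1
    return score

pop = [random_genome() for _ in range(POP)]
for gen in range(GENERATIONS):
    random.shuffle(pop)
    pairs = list(zip(pop[::2], pop[1::2]))
    scored = [(fitness(a, b) + fitness(b, a), a, b) for a, b in pairs]
    scored.sort(key=lambda t: t[0], reverse=True)
    survivors = [g for _, a, b in scored[:POP // 4] for g in (a, b)]
    pop = []
    while len(pop) < POP:                        # clone and mutate winners
        child = list(random.choice(survivors))
        i = random.randrange(len(child))
        child[i] = random.randrange(SIGNALS if i < SITUATIONS else ACTIONS)
        pop.append(child)

best = max(pop, key=lambda g: fitness(g, g))
print("situation -> signal map of one survivor:", best[:SITUATIONS])

Selecting on the pair's joint score is what makes the signals
conventionalize; nothing here remotely approaches AGI, of course, it only
shows the shape of the selection pressure I mean.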

AT


