Quoting [EMAIL PROTECTED]:

On Jan 28, 2008 7:56 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

X: Of course this is a variation on "the grounding problem" in AI.  But
do you think some sort of **absolute** grounding is relevant to
effective interaction between individual agents (assuming you think
any such ultimate grounding could even perform a function within a
limited system), or might it be that systems interact effectively to
the extent their dynamics are based on **relevant** models, regardless
of even proximate grounding in any functional sense?

Er... my body couldn't make any sense of this :). Could you be clearer,
giving examples of the agents/systems and what you mean by
absolute/proximate grounding?

I see that you're talking about interaction between systems considered
to be "minds", and highlighting the question of what is necessary to
form a shared basis for **relevant** interaction.  I agree that a
"mind" without an environment of interaction is meaningless, in the
same way that any statement (or pattern of bits) without context is
meaningless.  However, I would argue that just as context is never
absolute (there is never any need for it to be, and for practical,
functional reasons it never can be), embodiment need not be absolute,
complete, or ultimately grounded.

I use the term "system" to refer as clearly as possible to any
distinct configuration of inter-related objects, with the implication
that the system must be physically realizable, therefore it models
neither infinities or infinitesimals, nor could it model a Cartesian
singularity of Self.

I use the term "agent" to refer as clearly as possible to a system
exhibiting "agency", i.e. behavior recognized as intentional, i.e.
operating on behalf of an entity.  It may be useful here to point out
that recognition of agency inheres in the observer (including the case
of the observer being the agent-system itself), rather than agency
being somehow an objectively measurable property of the system itself.
 Further, the "entity" which is the principal behind any agency is
entirely abstract (independent of any physical instantiation.)
[Understanding this is key to various paradoxes of personal identity.]

I distinguish between "absolute" and "proximate" grounding with regard
to the functional (and information-theoretic) impossibility of a
system modeling its entire chain of connections to "ultimate reality".
In actuality any system interacts only with its proximate environment,
just as to "know" an object is not to know what it "is" but to know
its interface.  To presume to know more would be to presume some
privileged mode of knowledge.
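
To put the "interface" point in programming terms, here is a minimal
sketch (TypeScript; the names Sensor, HardwareThermometer,
SimulatedThermometer, and report are purely illustrative, not anything
from our earlier exchange): a consumer interacts with an object only
through its declared interface, and whatever "ultimately" implements
that interface is invisible to it.

  // A consumer of Sensor only ever "knows" the interface below; whether
  // the implementation is a physical device or a simulation is invisible
  // to it -- its grounding is proximate, not absolute.
  interface Sensor {
    read(): number;
  }

  // Two very different "ultimate realities" behind the same interface.
  class HardwareThermometer implements Sensor {
    read(): number {
      return 21.5;                           // stands in for querying a physical device
    }
  }

  class SimulatedThermometer implements Sensor {
    read(): number {
      return 21.5 + (Math.random() - 0.5);   // stands in for sampling a model
    }
  }

  // The interacting system models only its proximate interface to the
  // environment, never the full chain of connections behind it.
  function report(sensor: Sensor): string {
    return `current reading: ${sensor.read().toFixed(1)}`;
  }

  console.log(report(new HardwareThermometer()));
  console.log(report(new SimulatedThermometer()));

Either implementation is an equally legitimate grounding for report();
all it ever "knows" is the interface.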

So in short, I agree with you that "embodiment" is essential to
meaningful interaction, thus to there being agency, and thus to there
being a "Self" for the mind to know.  But I extend this to emphasize
that such "embodiment" need not be physical, nor logically grounded in
"ultimate reality"; rather, interaction is relevant and meaningful to
the extent that some (necessarily partial and arbitrarily distant from
"reality") context is shared.


Wow, this is well worded, structured in a really nice set of feedback loops.

What is a non-physical embodiment? I would like to know more about this.

If we have a form of embodied AGI (with all the definitions and descriptions
above, even a non-physical one not grounded in an ultimate reality), and
there is space for movement/motion (see other posts and definitions for
movement), has anybody thought about DESIRE? How could desire come into
this? What kind of mind is desirable?


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=90638550-c0e5be
