Terren:> Just wanted to add something, to bring it back to feasibility of
embodied/unembodied approaches. Using the definition of embodiment I
described, it needs to be said that it is impossible to specify the agent's
goals directly, because in doing so you'd be passing it information in an
unembodied way. In other words, a fully-embodied agent must structure its
model of the world, such as it is, entirely internally (self-organize it).
Goals must be structured as well. Evolutionary approaches are the only means
at our disposal for shaping the goal systems of fully-embodied agents, by
providing in-built biases towards modeling the world in a way that is in
alignment with our goals. It follows that Friendly AI is impossible to
guarantee for fully-embodied agents.
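To make that concrete, here is a toy sketch (all names and dynamics are my own invented assumptions, not anything from OpenCog or Terren's definitions) of what evolutionarily shaping a goal system might look like: the goal appears only in an external fitness function, never inside the agent, and selection acts only on an in-built bias that nudges the agent's self-organized behaviour.

```python
import random

random.seed(0)  # reproducibility for this illustration

# Our external goal: behaviour near this value. The agent never sees it.
TARGET = 0.8

def run_agent(bias, steps=50):
    """The agent self-organizes a behaviour value at runtime, nudged by
    its innate (evolved) bias plus noise - a stand-in for embodied
    development."""
    state = random.random()
    for _ in range(steps):
        state += 0.2 * (bias - state) + random.gauss(0, 0.05)
    return state

def fitness(bias):
    """Alignment with our goal, measured only from outside the agent."""
    return -abs(run_agent(bias) - TARGET)

def evolve(pop_size=30, generations=40):
    """Truncation selection plus Gaussian mutation over the bias gene."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
            for _ in range(pop_size)
        ]
    return sorted(population, key=fitness, reverse=True)[0]

best = evolve()
print(round(best, 2))  # evolved biases drift toward the externally chosen goal
```

The point of the sketch is only the information flow: nothing in `run_agent` mentions `TARGET`, so the goal is never specified to the agent; it is shaped indirectly, generation by generation - which is also why the alignment is statistical rather than guaranteed.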
The question then becomes: is it necessary to implement full embodiment,
in the sense I have described, to arrive at AGI? I think most in this
forum would say it is not - that embodiment (at least partial embodiment)
would be useful but not necessary.
OpenCog involves a partially embodied approach, for example, which I
suppose is an attempt to get the best of both worlds - the experiential
aspect of embodied senses combined with the precise specification of goals
and knowledge, not to mention additional components that aim to provide
things like natural language processing.
The part I have difficulty understanding is how a system like OpenCog
could hope to marry the information from each domain - the self-organized,
emergent domain of embodied knowledge, and the externally-organized, given
domain of specified knowledge. These two domains must necessarily involve
different knowledge representations, since one emerges (self-organizes) at
runtime. How does the cognitive architecture that processes the specified
goals and knowledge dovetail with the constructions that emerge from the
embodied senses? Ben, any thoughts on that?
Terren,
You're struggling a bit for definitions - but I don't mean that in the least
critically, because so, it seems, is everyone who interests you - all
struggling to form a new worldview.
The outgoing worldview, to which AGI is still wedded, sees the world as
rationally structured - physically, behaviourally and intelligently.
The new worldview sees living organisms as creatively
self-structuring - again, physically, behaviourally and intelligently - aka
autopoiesis and Kauffman's self-organizing organisms. And as Kauffman
points out, rationally structured algorithms/programs are demonstrably
incapable of producing the kind of creative thinking that is essential for
General Intelligence.
Isn't it clear that if you look at a General Intelligence that works, like
the human kind, the process of learning and becoming intelligent is the same
in every field - from reaching out and grasping, to babbling and talking, to
reading, writing and drawing, and mastering every activity up to and
including, ironically, learning to program? First you creatively flail, and
only then do you (and the unconscious mind) impose structure and
routines/algorithms on the messy results. (Therein lies the General Method
of General Intelligence.) And those routines/algorithms can only ever
deal with the routine parts of intelligent activities. AI here, as
elsewhere, gets things completely back to front, and assumes that structure
and order come first. The whole of evolution, including the
evolution/development of intelligent behaviour, contradicts that (as I
think you're pointing out).
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now