Ben,
I must assume you are being genuine here - and genuinely don't perceive that you
have not, at any point, illustrated how complexity might lead to the solution of
any given general (domain-crossing) problem of AGI.
Your OpenCog design likewise does not illustrate how it is to solve problems -
how, for example, it is to solve the problems of concept formation, especially
speculative concept formation. There are no examples in the relevant passages -
general statements of principle, but no practical examples. [Otherwise, offhand,
I can't see any sections that relate to crossing domains.]
You rarely give examples - i.e. you do not ground your theories, your novel
ideas (as we have discussed before). [You give standard textbook examples of
problems, of course, in other, unrelated discussions.]
You have already provided one very suitable example of a general AGI problem:
how is your pet, having learnt one domain - playing "fetch" - to use that
knowledge to cross into another domain - to learn/discover the game of
"hide-and-seek"? But I have repeatedly asked you to give me your ideas on how
your system will deal with this problem, and you have always avoided it. I
don't think, frankly, you have an idea how it will make the connection in an
AGI way. I am extremely confident you couldn't begin to explain how a complex
approach will make the cross-domain connection between fetching and
hiding/seeking. (What *is* the connection, BTW?)
If it is any consolation - this reluctance to deal with AGI problems is
universal among AGI-ers. Richard. Pei. Minsky...
Check how often in the past few years cross-domain problems have been dealt
with on this group. Masses of programming, logical and mathematical problems,
of course, in great, laudable detail. But virtually none that relate to
crossing domains.
One thing is for sure: if you don't discuss and deal with the problems of AGI -
with lots and lots of examples - you will never get any better at them. The
answers won't magically pop up. No one ever got better at a skill by *not*
practising it.
P.S. As for:
"gather as much money as possible while upsetting as few people as possible
[or as little as possible]" - it is a massively open-ended [and indeed GI]
problem that can be instantiated in a virtual infinity of moneymaking domains
[from stockmarkets to careers, small jobs, prostitution and virtually any area
of the economy], with a virtual infinity of constructions of "upsetting".
Please explain how a complex AGI program, which by definition would not be
pre-prepared for such a problem, would tightly define it - or even *want* to.
And note your first instinct: rather than asking how we can deal with this
open-ended problem in an open-ended AGI way, you immediately talk about trying
to define it in a closed-ended, tightly defined, basically *narrow* AI way.
That, again, is a typical, pretty universal instinct among AGI-ers.
[Remember Levitt's "What people need is not a quarter-inch drill, but
quarter-inch holes" - AGI should be first & foremost not about how you
construct certain logical programs, but about how you solve certain problems -
and then work out what programs you need.]
Ben,
Well, funny perhaps to some. But nothing to do with AGI - which has
nothing to do with "well-defined problems."
I wonder if you are misunderstanding his use of terminology.
How about the problem of gathering as much money as possible while upsetting
people as little as possible?
That could be well defined in various ways, and would require AGI to solve as
far as I can see...
The one algorithm or rule that can be counted on here is that AGI-ers won't
deal with the core problem of AGI: how to cross domains (in ill-defined,
ill-structured problems).
I suggest the OpenCogPrime design can handle this, and it's outlined in
detail at
http://www.opencog.org/wiki/OpenCogPrime:WikiBook
You are not offering any counterarguments to my suggestion, perhaps (I'm not
sure) because you lack the technical expertise or the time to read about the
design in detail.
At least Richard Loosemore did provide a counterargument, which I disagreed
with ... but you provide no counterargument; you just repeat that you don't
believe the design addresses the problem ... and I don't know why you feel
that way, except that it intuitively doesn't seem to "feel right" to you...
-- Ben G
------------------------------------------------------------------------------
agi | Archives | Modify Your Subscription