Mark Waser wrote:
>> How does this relate to what you said above? Well, your statement that
>> you "don't believe that the core of intelligence is complex according to
>> this definition" seems, to me, to indicate that you have compelling
>> reasons to suppose that the above scenario really cannot be correct.
> Your above Choice 2 scenario is clearly correct for humans. I am
> arguing that
> 1. the bootstrap system/core mechanisms (the same basic drivers that
>    are there before the development process kicks off and the
>    operators start getting built and the symbols start being
>    collected) is/are not required to be complex (as long as they are
>    designed top-down instead of bottom-up -- but designed in such a
>    way that they *DO* ground with the bottom)
Ah, but there is the rub. Do we have any *reason* to suppose that they
can be designed in such a way that (a) they produce the same powerful
operators which are responsible for intelligence, and (b) they
nevertheless do that without being complex?
Hey, I'm not saying "no!", I am just saying that we have never
succeeded, we have no theory that says that they can succeed, and it is
also kind of noticeable that whenever we try to make such systems
non-complex, they get out of our control and end up being complex anyway
(plus they don't work!).
The only response I have ever heard to this question is: "I just don't
think it is going to be a problem". I need more than that.
> 2. humans are spaghetti-coded for the same reason *any* evolved
>    system is spaghetti-coded (particularly opaque systems where a
>    programmer can't go back in and clean up)
> 3. a clean OO-like version of this operator system can be built in a
>    reasonable amount of time with the information that we are able to
>    get from a) human and animal studies and b) early failures of this
>    system
Tricky. There may not be a clean version that works. And we do not
find it easy to figure out what the operators are. And, worst of all,
nobody is actually trying to figure out those human-cognitive operators
(they all say they do not need to copy human cognition at all).
> 4. this operator system *MUST* be able to build operators
>    after-the-fact (and preferably, be able to manipulate core
>    operators with sufficient safety precautions)
Nope. It can build new ones, by more of the same sort of kludging. And
it does not have to be able to go back and manipulate old ones.
> 5. with sufficient analysis and clean-up tools, you *can* get a
>    non-complex version of an intelligent system working (assuming
>    that you believe that an F-14 is non-complex, i.e. an adaptive,
>    controllable system working in a complex world)
Why?
Consider: what if every act of cutting-edge creative thought (of the
sort that happens when someone mentions two unrelated phenomena, and all
of a sudden an analogy pops into your head that enables you to see a
commonality between the two things, where before you never thought they
were related) is the result of new operators being spawned from old ones
by those immensely tangled, complex processes that build operators?
After all, this idea of "operators" finds its most obvious expression in
analogy-making.
What if the whole business of analogy-making is the result of building
new operators, and the only way we know of to do that is through the
tangled, complex mechanisms that humans probably do use to build new
operators out of old ones?
(To be more specific, I suggest that operators are not built by an
"operator-builder" mechanism of the sort that you might be able to clean
up and reverse engineer; instead, operators are built by other
operators, so the more operators you have, the more ways you have to
build new operators ... which means the stock of operator-building
machinery always increases.)
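The growth claim in that parenthesis can be caricatured in a toy sketch (purely illustrative; the seed operators and the pairwise-composition rule are my own invented stand-ins, not anyone's proposed cognitive mechanism): if every ordered pair of existing operators can act as the machinery for building a new one, then the stock of builders grows with the stock of operators.

```python
# Toy model: operators built by composing other operators, so the
# pool of potential "operator-builders" expands with every addition.
from itertools import permutations

def compose(f, g):
    """Build a new operator out of two old ones: apply g, then f."""
    return lambda x: f(g(x))

# Two invented seed operators (stand-ins for primitive cognitive ops).
operators = [lambda x: x + 1, lambda x: x * 2]

for generation in range(3):
    # Every ordered pair of existing operators yields a new operator.
    new_ops = [compose(f, g) for f, g in permutations(operators, 2)]
    operators.extend(new_ops)
    print(generation, len(operators))
```

Each generation squares the operator count (n + n(n-1) = n^2): 2 becomes 4, then 16, then 256 -- a crude picture of why, on this view, the operator-building machinery never stays fixed and so cannot be factored out as a small, clean core.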
If real, high-level human-equivalent intelligence is inextricably bound
up with the human ability to build analogies, and if analogy-making is
as complex as I just implied, how could you ever parse the complexity
out of the system?
> 6. you are far more likely to get a clean, non-complex system working
>    than a horrible mess of kludges like the human brain (and it will
>    be *much* safer)
Hmmm.... not necessarily at all. But that is a big argument.
> 7. with sufficient analysis and clean-up tools available to the
>    system, it will *NOT* slowly diverge from stability as it
>    interacts with the world (the F-14 model)
I have to say I reject the F-14 model, since it depends on treating all
complexity as irrelevant noise.
>> I cannot see any reason that COMPELS me to believe that such a clean
>> version can be built.
> I cannot see any reason that COMPELS me to believe that it cannot.
> Humans look to me to be pretty complicated but not all that complex
> (using your definitions).
We have plenty of evidence that complexity exists in the system: the
boot is on the other foot.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now