I had said:
>> I believe that these mysteries of conceptual complexity (or ideological
>> interactions) can be discovered through discussion and experiment so long
>> as that effort is not thwarted by the expression of immature negative
>> emotions and abusive anti-intellectual rants. While some of us who are
>> trying to create constructive dialogues can be annoying and
>> confrontational at times, those who are characteristically angry and
>> hostile toward us are unlikely to make significant advancements in the
>> more subtle studies that are required without first acquiring greater
>> insight into what is really driving their emotional reactions. And those
>> who are ideologically opposed to the study of ideas, by their own
>> ideological biases, are also unlikely to participate constructively in
>> such discussions.
Mark replied:
:-) Hopefully I fall under "annoying and confrontational at times" and not
"ideologically opposed to the study of ideas". :-)
------------------------------------
You definitely fall under the "annoying and confrontational at
times," category!
------------------------------------
I had said:
>> I feel that it is time to examine how ideas interact using
>> simple theories but this kind of effort will only make sense by the
>> recognition that simple theories must be combined and integrated with
>> previously acquired knowledge through some complicated processes of
>> intelligence which are not yet widely appreciated.
Mark replied:
AMEN! So where would you start?
-------------------------------------------
You are one of the few people who actually asked me a
question like that. The irony, given
the confrontational nature these discussions often take, is that it is a true
challenge to say to someone, "OK, so let's hear some of these ideas that you say
you have." Amazingly, most people try to
challenge me by repeating their usual arguments. (I guess I make the same kind
of mistake, but I also make sure
that I occasionally ask people to explain their ideas to me in some detail.)
I believe that by studying the way ideas work, or the way
idea-like relations work, and trying to outline models of programs that might
implement those ideas we (the greater group) stand a better chance of making
progress on advanced AI than just plunging into some method that might look
alluringly objective on the surface, but is as detached from the way people
actually think as it is superficially objective.
To give you an example of how an idea may be examined, look
at the universal generalization that all people have parents. Now imagine the
situation where an unsupervised child is playing too close for comfort, and out
of concern some adult asks (perhaps semi-rhetorically), "Where are that child's
parents?"
To this question comes the reply, "He doesn't have
parents."
The idea that all children have parents is not contradicted
by this response if the child is an orphan. This can be explained by detailing
that he was born of parents, but that his parents have since died, or something
like that. But the problem
is that the original
generalization, the universally true statement that all children have parents,
(or all people have mothers), seems to have somehow been qualified by the
example of the orphan. Even though we
can explain it all by referring to mortality and the passage of time, the
simple truth that children come from parents seems as if it could not meet a
sharply logical test without substantial qualification. And since other true
statements, like those concerning mortality or the passage of time, may
themselves be qualified, the
simple reality that all children have parents may be nearly impossible to
represent in perfect rational form without substantial notations of the
possible qualifications. This means
that it would be practically impossible to build an axiomatic system or a
previously-learned-concept-dependent extensible system to represent fundamental
ideas because each idea can be deeply dependent on other statements that
themselves
require extensive qualification. I
don't know about the other people in this group but I don't work that way. I
can use logic but I need to use it in a
fairly simple and direct manner. That
means that I can consider a simple, universally true statement without
simultaneously considering all the possible qualifications and exceptions that
may exist to the statement. On the other
hand, when I need to deduce an exception to the rule I can often do that as
well. So, in trying to design the data
structure or framework that an AI program would need to implement the facility
to deal with possible counter-examples to universally true statements, it is
clear (or at least it seems clear to me) that a simplistic rational method would
have to be accompanied by some kind of relative boundary method. That is, if you
want to use rational methods, then your assertions
need to be presumptively bounded. But
when you want to examine other information that is directly related to the
rational statement, you will sometimes need to go beyond the boundaries, but
that requires some qualification and handling as well. Even so, the program
still needs to be wary of
actual contradictions.
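To make the "presumptively bounded" notion above a little more concrete, here is a minimal sketch of a default rule that holds unless a more specific, defeating fact is asserted. The class name, the example facts, and the set-based representation are all hypothetical illustrations, not a proposed implementation:

```python
# A minimal sketch of a "presumptively bounded" assertion: the default
# conclusion stands unless a known defeating fact has been asserted.
# All names and facts here are illustrative, not a real design.

class DefaultRule:
    def __init__(self, conclusion, exceptions=()):
        self.conclusion = conclusion       # e.g. "has living parents"
        self.exceptions = set(exceptions)  # facts that defeat the default

    def applies(self, facts):
        # The default holds only while no defeating fact is present.
        return self.exceptions.isdisjoint(facts)

# "All children have parents" -- bounded by the exception of mortality.
has_parents = DefaultRule("has living parents",
                          exceptions={"is an orphan"})

print(has_parents.applies({"is a child"}))                  # True
print(has_parents.applies({"is a child", "is an orphan"}))  # False
```

The point of the sketch is only that the simple universal can be used directly, while the boundary (the exception set) is consulted separately when a counter-example needs to be deduced.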
Next imagine that you are reading a science fiction story
about children synthesized in test tubes. Although you recognize that it is
fiction because you know that children
have parents and are born of mothers, again, you need to be able to deal with
another possibility in order to understand what you are reading. Again this
contradiction can be explained
away by pointing to the nature of fiction (or of possible worlds), but that
does not explain how an AI program might effectively adapt to and learn from
these kinds of situations. And I feel that
these kinds of situations are the rule for most ideas, not the exceptions.
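One way to sketch how a program might keep the fictional assertion from contradicting the real-world default is to tag every belief with a context, and only count a contradiction when both propositions sit in the same context. The names and the tagging scheme below are hypothetical illustrations of that idea, not a proposed system:

```python
# Sketch: beliefs carry a context tag, and contradiction checks only
# compare beliefs within the same context, so a fictional world can
# safely diverge from real-world defaults. Purely illustrative names.

from collections import defaultdict

beliefs = defaultdict(set)  # context -> set of propositions

def assert_in(context, proposition):
    beliefs[context].add(proposition)

def contradicts(context, proposition, negation):
    # A contradiction only counts inside a single context.
    return proposition in beliefs[context] and negation in beliefs[context]

assert_in("real world", "children are born of mothers")
assert_in("story world", "children are synthesized in test tubes")

# The story does not contradict the real-world generalization:
print(contradicts("real world",
                  "children are born of mothers",
                  "children are synthesized in test tubes"))  # False
```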
The solution to this kind of problem is not to wish for improved numerical
evaluation systems, but to develop meta-reference
systems that tolerate the examination of simple direct ideas but are also able
to try to shape some flexible or semi-permeable boundaries around simplistic
statements that seem to make
sense but may not fit perfectly into other related assumptions.
It is interesting that I can define a simple rule of thumb
about contradicting universals: some contradictions are not as devastating to
an assertion as others. (This example shows, by the way, that there is a
relation between declarative and procedural knowledge. No AI program is likely
to work very well without recognizing the significance of this kind of
relation.)
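The rule of thumb that some contradictions are less devastating than others can be sketched without any numerical scoring: a counter-example that some known background idea explains away merely qualifies the universal, while an unexplainable one defeats it. The classification and the example entries below are hypothetical illustrations only:

```python
# Sketch of the rule of thumb above: counter-examples that can be
# explained away only qualify a universal; unexplainable ones defeat
# it. The categories and examples are illustrative, not an algorithm.

QUALIFYING = "qualifying"  # explainable: bounds the rule but leaves it usable
DEFEATING = "defeating"    # unexplainable: the rule itself must be revised

def classify(counter_example, explanations):
    # If some known background idea accounts for the counter-example,
    # it merely qualifies the universal; otherwise it defeats it.
    return QUALIFYING if counter_example in explanations else DEFEATING

explanations = {"orphan": "mortality and the passage of time",
                "test-tube child": "the nature of fiction"}

print(classify("orphan", explanations))                # qualifying
print(classify("adult with no origin", explanations))  # defeating
```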
This idea of trying relative boundaries applies not only to universals; it can
also apply to the way other ideas work together.
I believe that by examining the way other ideas work,
further insights into the problem of designing a more advanced AI program may
also be discovered. Of course, actual experiments with AI prototypes are
necessary as well, but I do feel that there are some significant mysteries
about the way ideas work and interact that are still to be discovered.
Jim Bromer
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/