I often find the discussions in these AI groups to be distracting
because, although they are related to ideas that I feel strongly
about, they are rarely in line with what I think is of central
importance.  But even though the other people in the group are not
interested in exactly the same kinds of ideas that I am, the diversity
of interests does sometimes help me to examine ideas in new ways.

For example, I think that complexity, in the more general sense, is of
great significance to the advancement of AI, but the specifics of the
conversations here on the subject during the past months were not what
I felt should have been the focus of the discussions.  Even so, I
derived some new insights as I considered the comments that were made.

One of those new ideas is that extensibility of complexity is itself
of fundamental importance to solving some of the contemporary problems
of AI.  Although I had thought about something similar before, the
newly placed significance on the centrality of this concept per se has
inspired me to reconsider a simple starting point for a possible AI
program.  One of the problems with AI research during the past 60
years is that controlled models or prototypes of future programs will
sometimes work well up to a point of complexity that seems to lie just
beyond the researchers' horizon.  As a result, the early researchers
often got people excited by their discoveries, but after a few years
it became clear to the skeptics that the new breakthroughs had been
overstated.  From another vantage point, however, it is nearly
impossible to conceive of a truly novel and effective program without
first testing component ideas in controlled environments.

What I am thinking about now is that I should start with a simple
model that can produce extensible complexity of reference.  One of the
most important advantages of a simple model is that you can run
controlled experiments on it.  (It is also simple, of course, which
helps with feasibility.)  But the problem with the simple AI models of
the past has been that they tended to fail as they became more
complicated.  My thinking now is that if the design goal of the
experimentation is to focus the study on extensible complexity (in the
general sense), then perhaps some insight into solving those problems
can be achieved.

For years I have rejected the idea of starting with some overly
simplistic AI method because of the failure of such models to deal
with greater complexity.  But this argument may not be as meaningful
if the simplification is directed toward the central task of devising
methods of extensible complexity.  So I am seriously thinking about
trying some ideas I have had about dealing with complexity and
ambiguity.  By using simplified artificial problems, and by making
complexity extensibility a central goal, I think I may (if I am lucky)
be able to discover something of importance in this field.  Although I
considered doing something very similar years ago, the discussions
about complexity here during the last few months have helped me to
refocus my efforts specifically onto this question of complexity
extensibility and how it relates to ambiguity in controlled natural
language problems.

My point is: I often get something out of these conversations even
though other people's thinking is usually very different from mine.

Jim Bromer


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/