On Tue, Mar 25, 2008 at 11:23 AM, William Pearson <[EMAIL PROTECTED]>
wrote:

>
>
>  On 24/03/2008, Jim Bromer <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> > To try to understand what I am talking about, start by imagining a
> > simulation of some physical operation, like a part of a complex factory in a
> > Sim City kind of game.  In this kind of high-level model no one would
> > ever imagine all of the objects should interact in one stereotypical way,
> > different objects would interact with other objects in different kinds of
> > ways.  And no one would imagine that the machines that operated on other
> > objects in the simulation were not also objects in their own right.  For
> > instance the machines used in production might require the use of other
> > machines to fix or enhance them.  And the machines might produce or
> > operate on objects that were themselves machines.  When you think about
> > a simulation of some complicated physical systems it becomes very obvious
> > that different kinds of objects can have different effects on other objects.
> > And yet, when it comes to AI, people go on and on about systems that
> > totally disregard this seemingly obvious divergence of effect that is so
> > typical of nature.  Instead most theories see insight as if it could be
> > funneled through some narrow rational system or other less rational field
> > operations where the objects of the operations are only seen as the
> > ineffective object of the pre-defined operations of the program.
> >
>
>
> How would this differ from the sorts of computational systems I have been
> muttering about? Where you have an architecture where an active bit of code
> or program is equivalent to an object in the above paragraph. Also have a
> look at Eurisko by Doug Lenat.
>
>    Will Pearson
>


There is no reason to believe that anything I might imagine would be the
same as something that was created 35 years ago!

I have a lot of trouble explaining myself on some days.  The point about the
effect of the application of ideas is that most people do not consciously
think about the subject, and so, just by becoming aware of it, one can change
how one's program works regardless of how automated the program is.  It can
work with strictly defined logical systems, with inductive systems that can
be extended creatively, or with systems that are capable of learning.
However, it is not a complete solution to AI; it is more like something that
you will need to think about if you plan to write a seriously innovative AI
application in the near future.  So, I haven't written such a program, but I
do have something to say.

A system that has heuristics that can modify the heuristics of the system is
important, and such a system does implement what I am talking about.
However, the point is that Lenat never seemed to completely accept the range
that such a thing would have to have to generate true intelligence.  The
reason is that it would become so complicated that it would make any feasible
AI program impossible.  And the reason that a truly intelligent AI program is
still not feasible is just that it would be complicated.

I am saying that the method of recognizing and defining the effect of ideas
on other ideas would not, by itself, make it all work, but rather it would
help us understand how to better automate the kind of extensive
complications of effect that would be necessary.

I am thinking of writing about a simple imaginary model that could be
incrementally extended.  This model would not be useful, because it would be
too simple, but it should give you some idea of what I am thinking about.
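To give a rough flavor of what such a toy model might look like, here is a
bare-bones sketch of the factory idea from earlier in the thread: every
machine is itself an object, and the effect of one object on another depends
on both objects rather than on one stereotyped rule.  All of the names and
classes here are invented purely for illustration; they are not part of any
real design.

```python
# A toy simulation where machines operate on objects, machines are
# themselves objects, and the effect of an interaction diverges
# depending on what kind of object is being acted upon.
# (Illustrative sketch only -- every name here is made up.)

class Thing:
    """Anything in the simulation: raw material, product, or machine."""
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind      # e.g. "material", "machine", "repair-machine"
        self.wear = 0

class Machine(Thing):
    def __init__(self, name, kind="machine"):
        super().__init__(name, kind)

    def operate_on(self, other):
        """The effect depends on what `other` is -- no single rule."""
        if other.kind == "material":
            self.wear += 1                         # production wears the machine
            return Machine(f"made-by-{self.name}") # the product may itself be a machine
        if isinstance(other, Machine) and self.kind == "repair-machine":
            other.wear = 0                         # machines can fix other machines
            return other
        return other                               # no defined interaction

press = Machine("press")
fixer = Machine("fixer", kind="repair-machine")
steel = Thing("steel", "material")

new_machine = press.operate_on(steel)  # producing wears the press...
fixer.operate_on(press)                # ...and another machine repairs it
```

The point of the sketch is only that `operate_on` branches on the kind of
its target: different objects have different effects on different objects,
which is the "divergence of effect" discussed above.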

As any program becomes more and more complicated, the programmer has to
think more and more about how various combinations of data and processes
will interact.  Why would anyone think that an advanced AI program would be
any simpler?

Ideas affect other ideas.  Heuristics that can act on other heuristics are
one basis for this kind of thing, but it has to be much more complicated than
that.  So while I don't have the answers, I can begin to think of hand
crafting a model where such a thing could be examined, by recognizing that
the application of ideas to other ideas will have complicated effects that
need to be defined.  A more automated AI program would have to use some
system to shape these complicated interactions, but the effect of those
heuristics would be modifiable by other learning (to some extent).
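A minimal sketch of what "heuristics acting on heuristics" might look like
in code, with a heuristic represented as a rule with a modifiable parameter
and a meta-heuristic that adjusts that parameter from the rule's track
record.  Everything here is invented for illustration and is far simpler
than what would actually be needed:

```python
# A heuristic is a rule with a modifiable parameter and a track record;
# a meta-heuristic inspects another heuristic's record and adjusts it.
# (Illustrative sketch only -- names and structure are made up.)

class Heuristic:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold   # a parameter other heuristics may change
        self.successes = 0
        self.trials = 0

    def applies(self, score):
        """Fire when the score clears the threshold; record the outcome."""
        self.trials += 1
        fired = score >= self.threshold
        if fired:
            self.successes += 1
        return fired

def meta_adjust(target, min_rate=0.5):
    """A heuristic about heuristics: if `target` fires too rarely,
    loosen its threshold so it gets tried more often."""
    if target.trials and target.successes / target.trials < min_rate:
        target.threshold -= 1
    return target

h = Heuristic("prefer-high-score", threshold=8)
for score in [3, 5, 7, 9]:   # fires once in four trials (rate 0.25)
    h.applies(score)
meta_adjust(h)               # rate is below 0.5, so the threshold is loosened
```

The interesting questions start where this sketch stops: in a real system
the meta-heuristic would itself be modifiable, and the effects of one
heuristic on another would have to be defined with far more care.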

Jim Bromer
