Thanks for asking.  I will try to come up with a simple model during the
next week. I should be able to create an example, because the principle can
be used both in well-defined, constrained models and in more extensible ones.



The theory does not answer all questions about AGI.  I would think that
should be taken as a reasonable expectation of any single theory; however,
I believe that it can help in the discovery of more dynamic and flexible
principles that may be of some use in AI.  The reason I think this is
that the theory explicitly deals with an issue that has not been
abstracted and highlighted in any discussion that I can recall.  So while
the idea of the effect of applying an idea may have been implicitly
invoked in any number of discussions, I don't think it has ever
really emerged as a fundamental subject matter in its own right.



Concept grounding could be taken as an example of the effect of application.
Associating a concept with some data that exemplifies it within the
greater data environment would naturally produce some kinds of knowledge,
and that knowledge could in turn affect other kinds of knowledge.
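To make that concrete, here is a minimal sketch of grounding as an effect of application: associating a concept with exemplar data produces derived knowledge, which then affects other concepts in the environment.  All class and attribute names here are hypothetical illustrations, not a proposed design.

```python
# Sketch: grounding a concept in exemplar data produces knowledge that
# can, in turn, modify other knowledge. All names are hypothetical.

class Concept:
    def __init__(self, name):
        self.name = name
        self.exemplars = []   # data points that ground this concept
        self.derived = {}     # knowledge produced by the act of grounding

    def ground(self, datum, environment):
        """Associate a datum with this concept and let the effect propagate."""
        self.exemplars.append(datum)
        # The act of grounding produces new knowledge...
        self.derived["typical"] = sum(self.exemplars) / len(self.exemplars)
        # ...which can affect other concepts in the greater environment.
        for other in environment:
            if other is not self:
                other.derived.setdefault("neighbors", set()).add(self.name)

env = [Concept("hot"), Concept("cold")]
hot, cold = env
hot.ground(30, env)
hot.ground(40, env)
print(hot.derived["typical"])     # 35.0
print(cold.derived["neighbors"])  # {'hot'}
```

The point of the sketch is only that the application of the grounding operation has side effects beyond the concept it is applied to.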



To understand what I am talking about, start by imagining a
simulation of some physical operation, like part of a complex factory in a
Sim City kind of game.  In this kind of high-level model no one would ever
imagine that all of the objects should interact in one stereotypical way;
different objects would interact with other objects in different kinds of
ways.  And no one would imagine that the machines that operated on other
objects in the simulation were not also objects in their own right.  For
instance, the machines used in production might require the use of other
machines to fix or enhance them.  And the machines might produce, or operate
on, objects that were themselves machines.  When you think about a simulation
of a complicated physical system it becomes very obvious that different
kinds of objects can have different effects on other objects.  And yet, when
it comes to AI, people go on and on about systems that totally disregard this
seemingly obvious divergence of effect that is so typical of nature.  Instead
most theories treat insight as if it could be funneled through some narrow
rational system, or through other less rational field operations, where the
objects of the operations are seen only as the passive targets of the
program's pre-defined operations.
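The factory analogy above can be sketched in a few lines: machines are themselves objects, machines can operate on machines, and the effect of an operation depends on what kind of object it is applied to.  The class and method names are hypothetical, chosen only to illustrate the divergence of effect.

```python
# Sketch of the factory analogy: machines are also objects, and
# different kinds of objects have different effects on one another.
# All names are hypothetical illustrations.

class Part:
    def __init__(self, name):
        self.name = name
        self.quality = 1.0

class Machine(Part):
    """A machine is also a Part, so machines can operate on machines."""
    def __init__(self, name):
        super().__init__(name)
        self.wear = 0.0

    def operate_on(self, obj):
        # The effect of application depends on the kind of object the
        # operation is applied to -- there is no one stereotypical effect.
        self.wear += 0.1
        if isinstance(obj, Machine):
            obj.wear = max(0.0, obj.wear - 0.5)  # repair another machine
        else:
            obj.quality *= 1.2                   # refine an ordinary part

press = Machine("press")
repair_bot = Machine("repair-bot")
widget = Part("widget")

press.operate_on(widget)      # a machine refining a part
press.operate_on(widget)
repair_bot.operate_on(press)  # a machine repairing a machine
```

Nothing in the sketch is funneled through one uniform operation; each interaction has its own effect, which is the obvious property of physical simulations that the email argues AI systems tend to ignore.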



A usage evaluation could be taken as another example of an effect of
application, because the idea of usage, and of statistical evaluation, can
be combined with the object of consideration, along with other theories that
detail how such combinations could be usefully applied to some problem.  But
it is obviously not the only process that would be necessary to understand
complicated systems.  No one would use only statistical models to discuss
the management and operations of a real factory, for example.  It is rather
obvious that such limited methods would be grossly inadequate.  Why would
anyone imagine that a narrow operational system would be adequate for an AI
program?  The theory of the effect of application of an idea tries to
address this inadequacy by challenging the programmer to begin to think
about, and to program, applications that can detail how simple interactive
effects can be combined with novel insights within a feasible, extensible
object.  So while I don't have the solution, I believe I can see a path.
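As a rough illustration of combining a statistical usage evaluation with another kind of evaluation, rather than funneling everything through one narrow method, here is a small sketch.  The evaluators, rules, and weighting are all hypothetical assumptions made for the example.

```python
# Sketch: a statistical usage evaluation combined with a non-statistical
# rule evaluation. The names, rules, and 50/50 weighting are arbitrary
# assumptions for illustration only.

from collections import Counter

class UsageEvaluator:
    """Statistical view: how often has each idea been applied?"""
    def __init__(self):
        self.counts = Counter()

    def record(self, idea):
        self.counts[idea] += 1

    def score(self, idea):
        total = sum(self.counts.values())
        return self.counts[idea] / total if total else 0.0

class RuleEvaluator:
    """Non-statistical view: domain rules about when an idea applies."""
    def __init__(self, rules):
        self.rules = rules  # maps an idea to a predicate over a context

    def score(self, idea, context):
        rule = self.rules.get(idea)
        return 1.0 if rule and rule(context) else 0.0

usage = UsageEvaluator()
rules = RuleEvaluator({"lubricate": lambda ctx: ctx.get("friction", 0) > 0.5})

for _ in range(3):
    usage.record("lubricate")
usage.record("inspect")

context = {"friction": 0.8}
combined = 0.5 * usage.score("lubricate") + 0.5 * rules.score("lubricate", context)
print(combined)  # 0.875
```

Neither evaluator alone captures the situation; the point is only that several kinds of evaluation can be composed, rather than relying on a single narrow operational system.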



I feel that, by using the principle of the effect of the application of
ideas, one could build simple extensible models. The models would start out
simplistic.  But by carefully studying how complicated interactions
interfere or cohere, I believe that some new AI principles may be found.  I
will try to come up with a simple model during the next week.



Jim Bromer



On Sun, Mar 23, 2008 at 4:53 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> Jim,
>
> It sounds like something about concept grounding, but that's all I
> got. Can you give an example that demonstrates the structure of what
> you are talking about?
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
>
> -------------------------------------------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> http://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

