Please define "correlative structure".
~PM
-----------

> Date: Sun, 29 Dec 2013 12:00:21 -0500
> Subject: Re: [agi] Abstract Creativity
> From: jimbro...@gmail.com
> To: a...@listbox.com
> 
> Analogical reasoning involves a kind of creativity but it is not the
> only form of creativity.
> 
> I believe that creative imagination is necessary for higher
> intelligence. Animals do the same kinds of things that human beings do
> when they dream, which tells us that animals have an imagination. And
> since all intelligent activity involves applying some kind of mental
> model to compare against sensory events and to anticipate
> possibilities from that comparison, intelligent understanding can
> itself be thought of as an application of imagination. (We constrain
> the definition of imagination only because the word is usually used
> to refer to a special form of intelligent activity.)
> 
> The idea of 'injecting correlative structure' can be stretched in a
> lot of different ways. There is no question that it goes way beyond
> analogical reasoning.
> Jim Bromer
> 
> On Sun, Dec 29, 2013 at 10:14 AM, John Rose <johnr...@polyplexic.com> wrote:
> > OK, here is another way an abstract creativity could work. I call it
> > abstract because it is a creativity that operates across many domains.
> >
> > The world is full of correlative structure. A simple example is a circle:
> > it's everywhere. A more complex example could be a chunk of BNF, a
> > context-free correlative structure. Another correlative structure would
> > be "symmetry". Many of these omnipresent structures can be cataloged in
> > a database, which is essentially a type of common-sense knowledge base.
> >
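> > For concreteness, here is a minimal sketch of what one entry in such a
> > catalog might look like. The names and fields (CorrelativeStructure,
> > complexity, applicability) are illustrative assumptions on my part, not
> > a fixed design:
> >
> >     # Hypothetical sketch of a correlative-structure catalog.
> >     # Field choices are assumptions made for illustration only.
> >     from dataclasses import dataclass
> >
> >     @dataclass
> >     class CorrelativeStructure:
> >         name: str             # e.g. "circle", "symmetry", "BNF fragment"
> >         complexity: float     # rough structural complexity; higher = more complex
> >         applicability: float  # how broadly it recurs across domains (0..1)
> >
> >     # A tiny common-sense catalog of omnipresent structures.
> >     CATALOG = [
> >         CorrelativeStructure("circle", complexity=1.0, applicability=0.9),
> >         CorrelativeStructure("symmetry", complexity=2.0, applicability=0.8),
> >         CorrelativeStructure("BNF fragment", complexity=5.0, applicability=0.4),
> >     ]
> >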
> > Then creativity is the act of injecting correlative structure into a
> > specific domain and modelling it there, estimating the computational
> > expense of its effect during and after integration, and choosing
> > amongst candidates with confidence. Being more creative would
> > essentially mean successfully using correlative structure that is more
> > complex, less obviously applicable, and more reliant on estimation.
> > This is very simple at a highly conceptual level. Note that correlative
> > structure might be new in a specific instance and might be derived
> > recently from observation. In this model it is very close to what
> > intelligence is, even to the point of being a component of
> > intelligence. It also inherently includes counterfactuality. And even
> > though it includes "analogy", it is not bound by the cognitive concept
> > of what that is. I find it annoying when people say "oh, that's just
> > analogy" or "analogical reasoning" and it then gets pigeonholed into
> > that circle. This might be some form of analogical reasoning, but it is
> > one that is implementable for a specific model of AGI.
> >
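> > One way the inject/estimate/choose loop could be sketched, reusing the
> > CorrelativeStructure records from the catalog above; the scoring rule
> > and the cost estimate here are assumptions, not a specification:
> >
> >     # Illustrative sketch of the selection step; the scoring rule and
> >     # the cost estimate are assumptions, not a fixed proposal.
> >     def integration_cost(structure, domain_budget):
> >         # Placeholder estimate of the computational expense of modelling
> >         # the structure inside the domain, relative to available budget.
> >         return structure.complexity / max(domain_budget, 1e-9)
> >
> >     def choose_structure(catalog, domain_budget):
> >         # Score each candidate: prefer broadly applicable structures
> >         # whose estimated integration cost stays low, then derive a
> >         # crude confidence from the margin over the runner-up.
> >         scored = sorted(
> >             ((s.applicability - integration_cost(s, domain_budget), s)
> >              for s in catalog),
> >             key=lambda pair: pair[0], reverse=True)
> >         best = scored[0]
> >         runner_up = scored[1] if len(scored) > 1 else best
> >         confidence = best[0] - runner_up[0]
> >         return best[1], confidence
> >
> >     # e.g. choose_structure(CATALOG, domain_budget=4.0)
> >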
> > John
> >
> > -----Original Message-----
> > From: Jim Bromer [mailto:jimbro...@gmail.com]
> > Sent: Friday, December 27, 2013 11:16 PM
> > To: AGI
> > Subject: Re: [agi] Abstract Creativity
> >
> > The view that an insight is a system based on observations and a lot of
> > creative explanations is a little problematic.
> >
> > But, just because a part of an insight is imaginative does not mean that it
> > is not a rational bridge in the insight (of course).
> >
> > So when we come up with a creative explanation to fill in a gap in an
> > insight, we would like the explanation to use some observations of
> > effects in a way that gives the explanation more structure. It is then
> > not just an observed correlation but a rational explanation that
> > correlates with some effective observation points. Observation points
> > are often used in definitions, and the rational explanations needed to
> > fill in the gaps are often based on explanations for similar kinds of
> > things.
> >
> > For example:
> > A programming language is based mostly on the use of a context-free
> > grammar. (One of the observation points here is the programmer's
> > recollection of first realizing that he was using syntactic grammars
> > to write programs.)
> >
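> > To make that concrete, here is a tiny context-free fragment of an
> > expression grammar, written as Python data rather than raw BNF so it
> > could feed a learning program; the rule names are illustrative:
> >
> >     # A tiny context-free expression grammar, written as Python data.
> >     # Each nonterminal maps to its alternative right-hand sides.
> >     GRAMMAR = {
> >         "expr":   [["term", "+", "expr"], ["term"]],
> >         "term":   [["factor", "*", "term"], ["factor"]],
> >         "factor": [["(", "expr", ")"], ["NUMBER"]],
> >     }
> >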
> > So a computer program that is designed to learn can be said to be using
> > a syntactic grammar. Even if an AI program that is designed to learn a
> > natural language grammar through trial and error does not start with a
> > base of a natural language grammar, it still cannot be said to use no
> > grammar at all. It is using a computational grammar of some sort even
> > if the programmer does not consciously think of it in that way. (Here,
> > for example, the programmer might recall his recognition that computer
> > programs are inputting, rearranging, and outputting strings of
> > individual values that are similar to, or are, characters in a
> > syntactic string.)
> >
> > A computer could learn a very simple context-free grammar through trial
> > and error alone. (We have all seen programs that were able to 'learn'
> > something incrementally, and most of us are familiar with reinforcement
> > methods, so it does not require a lot of fantasizing to conclude that
> > this may be feasible. And when you realize that what I am talking about
> > is that simple context-free grammars only have to be treated as worded
> > input 'commands' - commands that are followed at least some of the time
> > - then this looks very feasible. In fact, it seems so feasible that
> > almost any experienced programmer who has some sense of what I am
> > talking about could try it.)
> >
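> > As a rough illustration of "worded commands learned by trial and
> > error", here is a minimal reinforcement-style sketch. The command set,
> > reward values, and exploration rate are all assumptions made for the
> > example:
> >
> >     # Minimal trial-and-error sketch: the learner maps worded commands
> >     # to actions purely from reinforcement, with no built-in grammar.
> >     # The command set and reward signal are assumed for illustration.
> >     import random
> >     from collections import defaultdict
> >
> >     ACTIONS = ["move", "stop", "turn"]
> >     # command word -> action -> accumulated score
> >     weights = defaultdict(lambda: defaultdict(float))
> >
> >     def act(command):
> >         # Follow the highest-scoring action for this command,
> >         # exploring randomly some of the time.
> >         if random.random() < 0.1 or not weights[command]:
> >             return random.choice(ACTIONS)
> >         return max(weights[command], key=weights[command].get)
> >
> >     def reinforce(command, action, reward):
> >         weights[command][action] += reward
> >
> >     # Training loop against a 'teacher' that rewards obeyed commands.
> >     teacher = {"go": "move", "halt": "stop", "left": "turn"}
> >     for _ in range(2000):
> >         cmd = random.choice(list(teacher))
> >         a = act(cmd)
> >         reinforce(cmd, a, 1.0 if a == teacher[cmd] else -0.1)
> >
> > After training, the learner follows each command most of the time,
> > which is the "followed at least some of the time" behaviour described
> > above; extending this from single words to grammar rules is where the
> > real experiment would begin.
> >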
> > Finally, the acquired (not pre-programmed) simple context-free grammars
> > (using words) could be used to teach the AI program some simple natural
> > language structures that use context-sensitive and other natural
> > language grammars. (This is the conjecture, which seems feasible if you
> > accept the other steps. But this step absolutely requires
> > experimentation to confirm. The skeptics try to point out that learning
> > to use natural language requires some fundamental knowledge of what the
> > words represent, but that is exactly what can be taught while the
> > program is learning to react to simple worded commands and, later,
> > higher-level explanations.) (There were few observation points in this
> > last part, but it is really the rearrangement of familiar definitions
> > that serves as the rational bridges over the spans that the incredulous
> > skeptics of the conjecture object to. So even though no one has
> > observed an AI/AGI program that can do this, it really does make sense.
> > If there is a problem, it is probably due to the complexity of the
> > knowledge that would be required to make this an effective AGI
> > paradigm.)
> >
> >
> 
> 
> 
> -- 
> Jim Bromer
> 
> 