>RICHARD LOOSEMORE=====> At the cognitive level, on the other hand, there is

a strong possibility that what happens when the mind builds a model of
some situation, it gets a large number of concepts to come together and
try to relax into a stable representation, and that relaxation process
is potentially sensitive to complex effects (some small parameter in the
design of the "concepts" could play a crucial role in ensuring that the
relaxation process goes properly, for example).
ED PORTER=====> Copycat uses a variant of simulated annealing as its
relaxation process, though it is actually a much more chaotic relaxation
process than many others (much more so than, e.g., Hecht-Nielsen's
Confabulation), because it involves millions of separate codelets being
generated to score, decide the value of, and add or remove elements from a
graph.  That graph labels groupings and relationships within the initial
string; between the example initial string and the solution initial string;
between the example initial string and the example changed string; and
between the solution initial string, the example changed string, and the
solution changed string, all while the solution changed string itself is
being constructed during this process.

Each of the labelings and mapping links is made by a separate small program
called a codelet.  Codelets are chosen in a weighted random manner, and one
codelet can clobber the work done by another.  The balance between fitness
weighting and pure randomness in the picking of codelets varies with
temperature, which is a measure of overall labeling, mapping, and solution
fit.  Temperature tends to go down over time as the system moves toward a
coherent solution, but it can go up if the system starts settling into a
solution that creates a mapping or labeling flaw, at which point more random
codelets are created to randomly change the system, with the changes being
most likely in the parts of the graph or labeling that have the worst fit
and thus require the least energy to kick apart.
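The temperature-modulated choice between fitness and randomness can be
sketched roughly as follows.  This is a hypothetical illustration, not
Copycat's actual code: the codelet names, the urgency values, and the
exponent formula are all assumptions made up for the sketch.

```python
import random

def pick_codelet(codelets, temperature):
    """Pick one codelet from a list of (name, urgency) pairs.

    At high temperature the choice is nearly uniform (more random
    exploration); at low temperature urgency dominates, so the
    best-scoring codelets are almost always the ones that run.
    Temperature is assumed to range over [0, 100].
    """
    # Sharpen the urgency weighting as temperature falls: a larger
    # exponent exaggerates differences between urgencies.
    exponent = (100.0 - temperature) / 30.0 + 0.5
    weights = [urgency ** exponent for _, urgency in codelets]
    return random.choices([name for name, _ in codelets], weights=weights)[0]

# Illustrative codelet pool with made-up urgencies.
codelets = [("build-group", 8.0), ("break-bond", 2.0), ("scan-string", 1.0)]
```

At a temperature near 100 the three codelets are picked with nearly equal
probability; near 0, "build-group" wins almost every draw.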

Despite this very chaotic process, and despite the fact that it is sensitive
to complex dynamic effects that allow a slight change of state to cause it
to settle into different solutions, as Richard mentioned above, the
weighting of the system, which varies dynamically in a context-sensitive
way, causes most of the solutions it settles into to be appropriate,
although they may be quite different from one another.

For example, consider the Copycat problem whose goal is to change "ijkk" in
a manner similar to the way "aabc" was changed to produce "aabd".  The
problem can be represented as

ex      aabc --> aabd
        ijkk --> ?
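The two most common readings of the example change can be sketched as small
rules.  This is a hypothetical illustration (the function names and the rule
decomposition are my own, not Copycat's internals); it just shows why the
same example licenses different answers for "ijkk".

```python
def increment(ch):
    """Return the next letter of the alphabet."""
    return chr(ord(ch) + 1)

def rule_last_char(s):
    # Reading: increment the literal last character.
    return s[:-1] + increment(s[-1])

def rule_last_group(s):
    # Reading: increment the final run of identical characters as a unit.
    run = 1
    while run < len(s) and s[-run - 1] == s[-1]:
        run += 1
    return s[:-run] + increment(s[-1]) * run

print(rule_last_char("aabc"))   # aabd  (both rules agree on the example)
print(rule_last_group("aabc"))  # aabd
print(rule_last_char("ijkk"))   # ijkl
print(rule_last_group("ijkk"))  # ijll
```

Both rules reproduce aabc --> aabd, but they diverge on "ijkk", yielding the
two most frequent answers in the run statistics below.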

On one thousand runs the results were:

        # of occurrences      result      temperature
1              612             ijll           29
2              198             ijkl           49
3              121             jjkk           47
4               47             hjkk           19
5                9             jkkk           42
6                6             ijkd           57
7                3             ijdd           46
8                3             ijkk           69
9                1             djkk           58

===EXPLANATION OF ANALOGY IN EACH SOLUTION===
ex - last char in string has alphabet number incremented
1 - last set of the same chars in each string had alphabet number incremented
2 - last char in each string had alphabet number incremented
3 - one end char in each string had alphabet number incremented
4 - one end char in each string had alphabet number changed by one
5 - set of chars in string had alphabet numbers incremented
6 - last char in each string is changed to d
7 - last set of same chars in each initial string was changed to d
8 - last char in each string had alphabet number changed by a value of zero
or one
9 - one char on end of string was changed to d

So you see that every solution except solution 8, which had the worst
(highest) temperature, meaning the system felt it was the worst "fit",
actually captured an analogous change.  If temperature had been used to
filter out the misfits, none of the runs would have produced a non-analogy.
So despite the chaotic nature of the system, it almost always settled on a
labeling, graphing, and solution that was appropriate, and when it didn't,
it knew it didn't, because of the system's measure of analogical fit.
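That temperature-as-filter idea can be checked directly against the table
above.  A minimal sketch, assuming a cutoff of 60 (any threshold between
solution 9's temperature of 58 and solution 8's temperature of 69 would
behave the same on this data):

```python
# Run counts and final temperatures copied from the table above.
runs = [
    ("ijll", 612, 29), ("ijkl", 198, 49), ("jjkk", 121, 47),
    ("hjkk",  47, 19), ("jkkk",   9, 42), ("ijkd",   6, 57),
    ("ijdd",   3, 46), ("ijkk",   3, 69), ("djkk",   1, 58),
]

TEMPERATURE_CUTOFF = 60  # assumed threshold, chosen for this illustration

# Keep only answers whose final temperature signals a good analogical fit.
accepted = [answer for answer, count, temp in runs
            if temp < TEMPERATURE_CUTOFF]
```

The one non-analogy, "ijkk" (solution 8, temperature 69), is the only answer
the filter rejects.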

Although this definitely is a toy problem, it might have as much potential
for "complexity" as the Game of Life, in terms of its number of components
(if you count its codelets), its computations, and its non-linearities.  I
was told by somebody who worked with Hofstadter that individual Copycat
solutions running as unoptimized LISP code on roughly 1990s-era Sun
workstations normally took from about half an hour to a major fraction of a
day.


The difference between this and the Game of Life is that Copycat has been
designed to work.  Despite its somewhat chaotic manner of approaching the
problem, it has weights, many of which are contextual, that guide the
chaotic process, in a very (very) roughly analogous way to that in which
Adam Smith's invisible hand guides the complexities of a market economy.

Its processes of labeling, creating graphs of labels, graph matching, graph
extrapolation, and varying measures of similarity in a context-sensitive way
are all processes that a Novamente-type system would use.

Although this is a very, very small example, it is a positive one.  It
indicates that a collection of relatively free-running non-linear
interactions (i.e., its codelets) can operate relatively reliably in an
intended manner -- despite the computational irreducibility of the operation
and the relatively large number of non-linear components -- given the
guiding hand provided by the system's goals, weights, and measures.

Ed Porter

(Disclaimer: I am writing this late at night when I am very tired, and
largely from my memory of having read the Copycat chapter of "Fluid Concepts
and Creative Analogies" multiple times, though none more recently than
several years ago.  Pei Wang may be able to correct me if I have made any
gross mis-descriptions.)






-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 9:08 PM
To: [email protected]
Subject: Re: [agi] None of you seem to be able ...

Scott Brown wrote:
> Hi Richard,
> 
> On Dec 6, 2007 8:46 AM, Richard Loosemore <[EMAIL PROTECTED] 
> <mailto:[EMAIL PROTECTED]>> wrote:
> 
>     Try to think of some other example where we have tried to build a
>     system
>     that behaves in a certain overall way, but we started out by using
>     components that interacted in a completely funky way, and we succeeded
>     in getting the thing working in the way we set out to.  In all the
>     history of engineering there has never been such a thing.
> 
> 
> I would argue that, just as we don't have to fully understand the 
> complexity posed by the interaction of subatomic particles to make 
> predictions about the way molecular systems behave, we don't have to 
> fully understand the complexity of interactions between neurons to make 
> predictions about how cognitive systems behave.  Many researchers are 
> attempting to create cognitive models that don't necessarily map 
> directly back to low-level neural activity in biological organisms.  
> Doesn't this approach mitigate some of the risk posed by complexity in 
> neural systems?

I completely agree that the neural-level stuff does not have to impact 
cognitive-level stuff:  that is why I work at the cognitive level and do 
not bother too much with exact neural architecture.

The only problem with your statement was the last sentence:  when I say 
that there is a complex systems problem, I only mean complexity at the 
cognitive level, not complexity at the neural level.

I am not too worried about any complexity that might exist down at the 
neural level because as far as I can tell that level is not *dominated* 
by complex effects.  At the cognitive level, on the other hand, there is 
a strong possibility that what happens when the mind builds a model of 
some situation, it gets a large number of concepts to come together and 
try to relax into a stable representation, and that relaxation process 
is potentially sensitive to complex effects (some small parameter in the 
design of the "concepts" could play a crucial role in ensuring that the 
relaxation process goes properly, for example).

I am being rather terse here due to lack of time, but that is the short 
answer.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;
