Free Association, Creativity and AGI (was RE: [agi] The Smushaby of Flatway.)

2009-01-11 Thread Benjamin Johnston

Hi Mike (Tintner),

You've often made bold claims about what all AGIers do or don't do. This
is despite the fact that you haven't met me in person and I haven't revealed
many of my own long-term plans on this list (and I'm sure I'm not the only
one): you're making bold claims about *all* of us without actually knowing
what we're *all* doing. When you first joined the group, I tried to point
out that many AGI and AI people are trying to do the very thing that you
claim we aren't doing (and many others have done likewise). Please
don't talk about all AGIers until you have met every single one.


Anyway, for the point of discussion I thought I'd give your ideas a moment
and actually try your suggestions.

You sometimes talk about the grand goals of AGI and complex human
behaviours as though you're the only one who sees them. This isn't
necessary: most of us share such goals - I can see this in the work of any
researcher in the area. These grand goals, however, are quite difficult, and
it will take time to get there. What we need is a plan for how to get there,
and some kind of easier-to-reach stepping stones along the way. These
stepping stones are what characterize most current research... steps that,
I believe, clearly point towards the long-term objective. You seem to think
that these steps have nothing to do with the real problem.

It appears that you view artistic acts as the crucial problem of
intelligence.

So let us look at free association, as you have recently done. I found your
free association lists silly and nonsensical. For example, in one list you
free associate nose with Afghanistan and oxygen with eyes. They seem silly
because the association depends on some hidden steps (that you only
explained later). I think the lists would be a lot better if you made the
steps explicit. (e.g., replace oxygen-eyes with something like
oxygen-invisible-eyes or oxygen-everywhere-visible-eyes, or whatever it was
that you were thinking).

Okay. So, let's keep it simple. Let's do free association where each step
has to make some kind of sense, and where we only deal with single words.
Surely this is a sensible stepping stone to AGI?


Well, as Matt said, this actually turns out to be quite simple.

I've put together a free association machine. Give it a fairly common
word, and it will free-associate from there to create a list of 15 words.

http://www.comirit.com/freeassoc/

Every single list will be unique. Every single list combines randomness with
structure. Every list crosses domains, and if you compare the first word to
the last word (in light of the words in-between) you see that a very
original kind of analogy has apparently taken place. This is, as far as I
can tell, what you've been talking about.

But this isn't AGI. This isn't even interesting AI. I hope that you're also
ready to give your own argument for why it isn't AGI.

The thing is, however, that something like this free association machine
seems to be the very thing that you're pointing to. A similar approach can
be used to generate somewhat-structured and somewhat-unstructured drawings
like on that imagination3 site that you thought could be the basis of a
mathematical formalization.

(I'm not going to tell you how it works - it uses a very dirty trick. The
point is, though, that it gives the appearance of intelligence and seems to
do what you want it to do.)
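
(If you want to experiment yourself, here is a minimal sketch of *one* way
such a toy could be built - a weighted random walk over a hand-made
association table. To be clear: the table and weights below are invented for
illustration, and this is not necessarily the trick my version uses.)

    import random

    # Tiny hand-made association table (all words and weights invented for
    # the demo; a real toy would mine them from a text corpus).
    ASSOC = {
        "rain":     [("wet", 5), ("cloud", 3), ("umbrella", 2)],
        "wet":      [("water", 5), ("paint", 1)],
        "water":    [("ocean", 4), ("drink", 3)],
        "ocean":    [("ship", 3), ("salt", 2)],
        "ship":     [("sail", 3), ("cargo", 2)],
        "cloud":    [("sky", 4), ("storm", 2)],
        "sky":      [("blue", 3), ("bird", 2)],
        "umbrella": [("handle", 2), ("black", 1)],
    }

    def free_associate(word, steps=15):
        """Weighted random walk: every list is unique, random yet structured."""
        chain = [word]
        for _ in range(steps - 1):
            options = ASSOC.get(chain[-1])
            if not options:                       # dead end: jump anywhere
                options = [(w, 1) for w in ASSOC]
            words, weights = zip(*options)
            chain.append(random.choices(words, weights=weights, k=1)[0])
        return chain

    print(" - ".join(free_associate("rain")))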

It didn't take me long to put this together - in fact, the hardest bit was
just working out how to upload it to my server.

I think creativity and artistic expression are poor choices for exploring
AGI: they are too subjective to be measured, and it seems easier to fool
people into thinking a pattern looks creative and artistic than it is to
create a pattern that solves some measurable problem. However, when I begin
to speculate about what I would need to do to improve the free association
machine into an interesting piece of AI, I find that logics, probabilistic
logics, evolutionary learning, search, complex systems, structure mapping
and neural networks would all jump out as interesting avenues to explore.

In other words, creativity and artistic expression bring us to EXACTLY the
same core problems, but in a domain that seems to me to be less amenable to
productive, measurable or even profit-generating research. It certainly
isn't a given that creativity and art are the only approach to AGI. In fact,
after we get over that very first hurdle of making something appear to be
artistic, we're at exactly the same problems that are encountered in every
other problem domain: problems of learning and of building problem-solving
representations automatically - the very problems that motivate AGI
researchers every day.

So, I tried your suggestion. After getting past that first step of writing
the version-zero free associator, I found myself in exactly the same place
as I was before and facing exactly the same problems, but in a problem
domain that is less useful than what I was 

Re: [agi] The Smushaby of Flatway.

2009-01-10 Thread Jim Bromer
On Thu, Jan 8, 2009 at 10:41 AM, Ed Porter ewpor...@msn.com wrote:
 Ed Porter

 This is certainly not true of a Novamente-type system, at least as I
 conceive of it being built on the type of massively parallel, highly
 interconnected hardware that will be available to AI within 3-7 years.  Such
 a system would be hierarchical in both the compositional and
 generalizational dimensions, and the computation would be taking place by
 importance weighted probabilisitic spreading activation, constraint
 relaxation, and k-winner take all competition across multiple layers of
 these hierarchies, so the decision making would not funnel all reasoning
 through a single narrowly focused process any more than human thought
 processes do.


 If a decision is to be made, it makes computational sense to have some
 selection process that focuses attention on a selected one of multiple
 possible candidate actions or thoughts.  If that is the type of funneling
 that you object to, you are largely objecting to decision making itself.

I have been busy and I just started reading the remarks on this
thread. I want to reply to Ed's comment since his remarks seemed to be
focused in on what I said.  (And I was able to understand what he was
talking about!)
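
(For concreteness, here is a toy sketch of what I take Ed's
importance-weighted spreading activation with k-winner-take-all competition
to mean - my own gloss, with invented nodes and weights, certainly not
Novamente's actual mechanism:)

    # Toy spreading activation with k-winner-take-all. Edge weights are
    # invented "importance" values; GRAPH is a tiny associative network.
    GRAPH = {
        "cow":   {"dog": 0.6, "milk": 0.9},
        "dog":   {"tail": 0.8, "cow": 0.3},
        "milk":  {"drink": 0.7},
        "tail":  {"dog": 0.5},
        "drink": {},
    }

    def spread(activation, k=2, decay=0.5):
        """One round: push weighted activation to neighbours, keep top k."""
        nxt = {}
        for node, act in activation.items():
            for nbr, w in GRAPH[node].items():
                nxt[nbr] = nxt.get(nbr, 0.0) + act * w * decay
        winners = sorted(nxt, key=nxt.get, reverse=True)[:k]   # k-WTA step
        return {n: nxt[n] for n in winners}

    act = {"cow": 1.0}
    for _ in range(3):
        act = spread(act)
        print(act)      # activation narrows to at most k nodes per round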

Parallel methods do not in and of themselves constitute what I call
structural reasoning.

I object to the funneling and flat methods of reasoning themselves.

Although I do not have any new alternatives to add to logic, fuzzy
logic, probability, genetic algorithms and various network decision
processes, my objection is directed toward the narrow focus on the
fundamentals of those decision making processes, or to the creative
(but somewhat dubious) steps taken to force the data to conform to the
inadequacies of (what I called) flat decision processes.

For instance, when it is discovered that probabilistic reasoning isn't
quite good enough for advanced NLP, many hopefuls will rediscover the
creative 'solution' of using orthogonal multidimensional 'measures' of
semantic distance.  Instead of following their intuition and coming up
with ways to make the reasoning seem more natural, they first turn
toward a more fanciful method by which they try to force the corpus of
natural language to conform to their previous decision to use a
simple metric.
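
(To be concrete about the kind of artifice I mean, here is a minimal sketch
of such a metric - cosine distance over co-occurrence vectors is one common
choice; the words, context dimensions and counts below are all invented:)

    import math

    # Invented co-occurrence counts: how often each word appears near the
    # context words (food, pet, engine). Real systems derive these from a
    # large corpus.
    VEC = {
        "dog": [4, 9, 0],
        "cat": [3, 8, 0],
        "car": [0, 1, 7],
    }

    def semantic_distance(a, b):
        """Cosine distance between co-occurrence vectors (0 = identical)."""
        va, vb = VEC[a], VEC[b]
        dot = sum(x * y for x, y in zip(va, vb))
        na = math.sqrt(sum(x * x for x in va))
        nb = math.sqrt(sum(x * x for x in vb))
        return 1.0 - dot / (na * nb)

    print(semantic_distance("dog", "cat"))   # small: 'semantically close'
    print(semantic_distance("dog", "car"))   # larger: 'semantically far'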

My recommendation would be to first try to begin thinking about how
natural reasoning might be better structured to solve those problems
before you start distorting the data.

For example, reasons are often used in natural reasoning. A reason
can be good or bad.  A reason can provide causal information about the
reasoning, but even a good reason may only shed light on information
incidental to the reasoning. The value of a reason can be relative to
both the reasoning and the nature of the supplied reason itself.  My
point here is that the relation of reason to reasoning is significant
(especially when they work), although it can be very complicated.  But
even though the use of a reason is not simple, notice how natural and
familiar it seems.  Example: 'I do this because I want to!'  Not a
good reason to explain why I am doing something, unless you are (for
instance) curious about the emotional issues behind my actions.
Another example: 'I advocate this theory because it seems natural!' A
much better reason for the advocacy.  It tells you something about
what is motivating me to make the advocacy, but it also tells you
something about the theory as it is being advocated.

There are other kinds of structures to reasoning that can be
considered as well.  This was only one.

I realized during the past few days that most reasoning in a
contemporary AGI program would be ongoing, and so yes, the reasoning
would be more structured than I originally thought.  (I wouldn't have
written my original message at all except that I was a little more off
than usual that night for some reason.)  However, even though ongoing
reasoning does represent some additional complexity to the process of
reasoning, the fact that structural reasoning itself is not being
discussed means that it is being downplayed and even ignored.  So you
have the curious situation where the less natural metric of semantic
distance is being enthusiastically offered while a more complete
examination of the potential of using natural reasons in reasoning is
almost totally ignored.

So while I believe that modifications and extensions of logic,
categorical systems, probability, and network decision processes will
be used to eventually create more powerful AGI programs, I don't think
the contemporary efforts to produce such advanced AGI will be
successful without the conscious consideration and use of structural
reasoning.

Jim Bromer




Re: [agi] The Smushaby of Flatway.

2009-01-10 Thread Jim Bromer
On Sat, Jan 10, 2009 at 3:47 PM, Jim Bromer jimbro...@gmail.com wrote:
 For instance, when it is discovered that probabilistic reasoning isn't
 quite good enough for advanced NLP, many hopefuls will rediscover the
 creative 'solution' of using orthogonal multidimensional 'measures' of
 semantic distance.  Instead of following their intuition and coming up
 with ways to make the reasoning seem more natural, they first turn
 toward a more fanciful method by which they try to force the corpus of
 natural language to conform to their previous decision to use a
 simple metric.

 My recommendation would be to first try to begin thinking about how
 natural reasoning might be better structured to solve those problems
 before you start distorting the data.

 For example, reasons are often used in natural reasoning. A reason
 can be good or bad.  A reason can provide causal information about the
 reasoning, but even a good reason may only shed light on information
 incidental to the reasoning. The value of a reason can be relative to
 both the reasoning and the nature of the supplied reason itself.  My
 point here is that the relation of reason to reasoning is significant
 (especially when they work), although it can be very complicated.  But
 even though the use of a reason is not simple, notice how natural and
 familiar it seems.

I realized after I wrote this that the invented metric of semantic
distance can be used to 'solve' a semantic problem using mathematical
means. In my suggestion that more highly structured methods of
reasoning should be considered before distorting the data with some
artifice, I pointed out that reasons that are naturally used in
decision making could be included in the structure of reasoning.  But
the problem is, of course, that examining the reasons for a conclusion
does not immediately -solve- the programming problem the way numerical
metrics and mathematical methods can.  OK, but you can still create
artificial methods to test structural reasoning if you are eager to
start programming.  I am going to try this out because I believe that
a somewhat extensible GOFAI model can be derived from a use of
structured reasoning (and some other ideas I have), even though I would
have to first supply simplistic 'solutions' for the program to use.

I am saying that before you start creating elaborate artifices to jump-start
your project, you should first use your intuition to see whether more
natural ways of dealing with the problem exist.  This might not make
the problem look easier.  But even though I would have to create some
simplistic solutions for my first model, I believe that the concept of
more highly structured reasoning should help me keep these artifices
to a minimum.

Jim Bromer




Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Eric Burton
Ronald: I didn't have to choose 'Display images' to see your attached
picture again. What are you doing? It's fun, but scary.

On 1/9/09, Ronald C. Blue ronb...@u2ai.us wrote:
 But how can it dequark the tachyon antimatter containment field?
 Richard Loosemore

 A model that can answer all questions is defective precisely because it can
 do so.

 But in your case matter does not exist except at certain time phases as a
 standing opponent process informational system from a zero point energy
 point of view.  An example is the negative phase oscillon in the matter
 picture surrounded by electrons oscillating in and out of existence.

 Oscillon pairs with opposite waves form bonds and are very stable.   This is
 like the Pauli exclusion principle.  Only electron pairs with opposite spins
 can be in orbit together.  This is also true for shadow matter or the
 nucleus of an atom.


 Emotionally I like the idea that anti-matter is matter moving into the past.
 But due to the vortex of energy it looks like negative time, but it is just
 like an old wagon wheel in a black and white movie that looks like it is
 colorized and going backwards.  3D is an illusion.

 The above picture supports the anyons and topological charge proposed by
 Frank Wilczek.

 ((Recommended reading from New Scientist.
 Anyons: The breakthrough quantum computing needs?
 01 October 2008 by Don Monroe, New Scientist magazine issue 2676.
 WE SHOULD have known there was something in it when Microsoft got involved.
 Back in 2003, the software giant began sponsoring a small research effort
 with an interest in an abstruse area of physics known as the fractional
 quantum Hall effect. The effect, which has been the subject of two Nobel
 prizes, involves the delicate manipulation of electrons inside semiconductor
 crystals. What could a software company like Microsoft hope to gain from
 funding this research?

 The answer is now clear. This year, we have seen the first indications that
 this strange and abstract phenomenon could bring us a revolution in
 computing. "We have good reason to believe that, if we can do anything [with
 this], we can do a lot," says Michael Freedman of the Microsoft-sponsored
 Station Q research group in Santa Barbara, California.

 Microsoft is interested because an ability to manipulate the fractional
 quantum Hall effect promises a unique and powerful way to process
 information using the resources of the subatomic world. Down at the level of
 photons, electrons, protons and atoms, matter behaves very differently from
 what we are used to. These quantum objects can be in two places at once, for
 example, or spin clockwise and anticlockwise at the same time. This
 phenomenon, known as superposition, is entirely foreign to the way things
 work in the ordinary classical world.

 It was realised years ago that superposition provides an opportunity for
 information processing, and researchers have been working for decades to
 build a quantum computer that exploits it. Encode a "0" as the clockwise
 spin of an electron and a "1" as the anticlockwise spin, for example, and
 superposition gives you a kind of "buy one, get one free" special offer,
 with both of these binary digits appearing on the same particle. Process one
 of these quantum bits, or qubits, and you get two answers. If you could
 create an array of electrons in superposition, it would be possible to use
 this phenomenon for superfast processing. In principle, qubits enable huge
 sequences of binary digits to be encoded and processed with much less
 computational effort than would be needed in the classical world.

 The thing is, while theorists drew up the blueprint for a quantum computer
 more than two decades ago, we still don't have one. That is largely because
 of a problem called decoherence. Quantum superpositions are notoriously
 delicate. If the electron in a superposition state is disturbed - by
 something in its environment such as a little heat or a stray
 electromagnetic field, say - the superposition will collapse and lose the
 double helping of information it was carrying.

 Follow the trail
 This is where the fractional quantum Hall effect can help. Quantum particles
 are conventionally divided into two types: fermions, such as the electron;
 and bosons, such as the photon. Then, about 25 years ago, researchers such
 as Frank Wilczek of the Massachusetts Institute of Technology began to
 realise there might be a third type.

 The idea came from considering whether you can tell two identical particles
 apart from each other. Imagine a quantum version of the magic cup game much
 beloved by dodgy street magicians. Two photons, marked A and B, are hidden
 under two cups sitting on a table. The magician swaps the cups around on the
 table top at a furious pace. When the swaps are finished, would there be any
 way to tell, 

Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Mark Waser

But how can it dequark the tachyon antimatter containment field?


Richard,

   You missed Mike Tintner's explanation . . . .

You're not thinking your argument through. Look carefully at my spontaneous

COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL
THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME ON
THIS etc. etc



It can do this partly because
a) single ideas have multiple, often massively multiple, idea/domain 
connections in the human brain, and allow one to go off in any of multiple 
tangents/directions
b) humans have many things - and therefore multiple domains - on their 
mind at the same time concurrently - and can switch as above from the 
immediate subject to some other pressing subject domain (e.g. from 
economics/politics (local vs global) to the weather (what a nice day)).


So maybe it's worth taking 20 secs. of time - producing your own 
chain-of-free-association starting say with MAHONEY and going on for 
another 10 or so items - and trying to figure out how






- Original Message - 
From: Richard Loosemore r...@lightlink.com

To: agi@v2.listbox.com
Sent: Thursday, January 08, 2009 8:05 PM
Subject: Re: [agi] The Smushaby of Flatway.



Ronald C. Blue wrote:

[snip]
[snip] ... chaos stimulation because ... correlational wavelet opponent 
processing machine ... globally entangled ... Paul rf trap ... parallel

 modulating string pulses ... a relative zero energy value or

opponent process  ...   phase locked ... parallel opponent process
... reciprocal Eigenfunction ...  opponent process ... summation 
interference ... gaussian reference rf trap ...

 oscillon output picture ... locked into the forth harmonic ...
 ... entangled with its Eigenfunction ..

 [snip]
 That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore










Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Richard Loosemore



Ronald C. Blue wrote:

[snip] [snip] ... chaos stimulation because ... correlational
wavelet opponent processing machine ... globally entangled ...
Paul rf trap ... parallel modulating string pulses ... a relative
zero energy value or opponent process  ...   phase locked ...
parallel opponent process ... reciprocal Eigenfunction ...
opponent process ... summation interference ... gaussian
reference rf trap ... oscillon output picture ... locked into the
forth harmonic ... ... entangled with its Eigenfunction .. [snip]
 That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore







Mark Waser wrote:

But how can it dequark the tachyon antimatter containment field?


Richard,

You missed Mike Tintner's explanation . . . .

You're not thinking your argument through. Look carefully at my 
spontaneous COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL 
THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME
ON THIS etc. etc



It can do this partly because a) single ideas have multiple, often
massively multiple, idea/domain connections in the human brain, and
allow one to go off in any of multiple tangents/directions b)
humans have many things - and therefore multiple domains - on their
mind at the same time concurrently - and can switch as above from
the immediate subject to some other pressing subject domain (e.g.
from economics/politics (local vs global) to the weather (what a
nice day)).


So maybe it's worth taking 20 secs. of time - producing your own 
chain-of-free-association starting say with MAHONEY and going on

for another 10 or so items - and trying to figure out how



Mark,

 Right 

So you think maybe what we've got here is a radical influx of globally 
entangled free-association bosons?





Richard Loosemore





Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Mike Tintner
 will think without actually thinking it. It's not just a property 
 of the human brain, but of all Turing machines. No program can 
 non-trivially model itself. (By model, I mean that P models Q if for any 
 input x, P can compute the output Q(x). By non-trivial, I mean that P does 
 something else besides just model Q. (Every program trivially models 
 itself). The proof is that for P to non-trivially model Q requires K(P) > 
 K(Q), where K is Kolmogorov complexity, because P needs a description of Q 
 plus whatever else it does to make it non-trivial. It is obviously not 
 possible for K(P) > K(P)).

 So if you learned the associations A-B and B-C, then A will predict C. 
 That is called reasoning.

 Also, each concept is associated with thousands of other concepts, not 
 just A-B. If you pick the strongest associated concept not previously 
 activated, you get the semi-random thought chain you describe. You can 
 demonstrate this with a word-word matrix M from a large text corpus, where 
 M[i,j] is the degree to which the i'th word in the vocabulary is 
 associated with the j'th word, as measured by the probability of finding 
 both words near each other in the corpus. Thus, M[rain,wet] and 
 M[wet,water] have high values because the words often appear in the same 
 paragraph. Traversing related words in M gives you something similar to 
 your free association chain like rain-wet-water-...

 -- Matt Mahoney, matmaho...@yahoo.com


 --- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:

 From: Mike Tintner tint...@blueyonder.co.uk
 Subject: Re: [agi] The Smushaby of Flatway.
 To: agi@v2.listbox.com
 Date: Thursday, January 8, 2009, 3:54 PM
 Matt:Free association is the basic way of recalling
 memories. If you experience A followed by B, then the next
 time you experience A you will think of (or predict) B.
 Pavlov demonstrated this type of learning in animals in
 1927.

 Matt,

 You're not thinking your argument through. Look
 carefully at my spontaneous

 COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL
 THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME ON
 THIS etc. etc

 that's not A-B association.

 That's 1. A-B-C  then  2. Gamma-Delta then  3.
 Languages  then  4. Number of Lines in Letters.

 IOW the brain is typically not only freely associating
 *ideas* but switching freely across, and connecting,
 radically different *domains* in any given chain of
 association. [e.g above from Animals to Economics/Politics
 to Weather to Personal Timetable]

 It can do this partly because

 a) single ideas have multiple, often massively multiple,
 idea/domain connections in the human brain, and allow one to
 go off in any of multiple tangents/directions
 b) humans have many things - and therefore multiple domains
 - on their mind at the same time concurrently - and can
 switch as above from the immediate subject to some other
 pressing subject domain (e.g. from economics/politics
 (local vs global) to the weather (what a nice day)).

 If your A-B, everything-is-memory-recall thesis
 were true, our chains-of-thought-association would be
 largely repetitive, and the domain switches inevitable.

 In fact, our chains (or networks) of free association and
 domain-switching are highly creative, and each one is
 typically, from a purely technical POV, novel and
 surprising. (I have never connected TAIL and CURRENT CRISIS
 before - though Animals and Politics yes. Nor have I
 connected LOCAL VS GLOBAL THINKING before with WHAT A NICE
 DAY and the weather).

 IOW I'm suggesting, the natural mode of human thought -
 and our continuous streams of association - are creative.
 And achieving such creativity is the principal problem/goal
 of AGI.

 So maybe it's worth taking 20 secs. of time - producing
 your own chain-of-free-association starting say with
 MAHONEY and going on for another 10 or so items
 - and trying to figure out how the result could possibly be
 the narrow kind of memory-recall you're arguing for.
 It's an awful lot to ask for, but could you possibly try
 it, analyse it and report back?

 [Ben claims to have heard every type of argument I make
 before,  (somewhat like your A-B memory claim), so perhaps
 he can tell me where he's read before about the Freely
 Associative, Freely Domain Switching nature of human thought
 - I'd be interested to follow up on it].






Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Matt Mahoney
Mike, after a sequence of free associations, you drift from the original 
domain. How is that incompatible with the model I described? I use A, B, C 
as variables to represent arbitrary thoughts.
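
To make the model concrete, here is a minimal sketch (the corpus is a few
invented lines; at each step it follows the strongest association not
previously activated, as I described):

    from collections import Counter
    from itertools import combinations

    # Toy corpus; a real M would be built from a large text corpus.
    corpus = [
        "rain wet water cold",
        "wet water drink glass",
        "water ocean fish boat",
        "boat ocean storm rain",
    ]

    # M[i][j] ~ how often words i and j appear near each other.
    M = {}
    for line in corpus:
        for a, b in combinations(set(line.split()), 2):
            M.setdefault(a, Counter())[b] += 1
            M.setdefault(b, Counter())[a] += 1

    def chain(word, steps=6):
        """Follow the strongest association not previously activated."""
        activated, out = {word}, [word]
        for _ in range(steps - 1):
            options = [(w, n) for w, n in M.get(out[-1], Counter()).items()
                       if w not in activated]
            if not options:
                break
            best = max(options, key=lambda wn: wn[1])[0]
            activated.add(best)
            out.append(best)
        return out

    print("-".join(chain("rain")))   # e.g. rain-wet-water-...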

-- Matt Mahoney, matmaho...@yahoo.com

--- On Fri, 1/9/09, Mike Tintner tint...@blueyonder.co.uk wrote:
From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: [agi] The Smushaby of Flatway.
To: agi@v2.listbox.com
Date: Friday, January 9, 2009, 10:08 AM



Matt,

I mainly want to lay down a marker here for a future discussion.

What you have done is what all AGI-ers/AI-ers do. Faced with the problem of
domain-switching (I pointed out that the human brain and human thought are
*freely domain-switching*), you have simply ignored it - and, I imagine, are
completely unaware that you have done so. And this, remember, is *the*
problem of AGI - what should be the central focus of all discussion here.

If you look at your examples, you will find that they are all *intra-domain*
and do not address domain-switching at all:

a) "if you learned the associations A-B and B-C, then A will predict C.
That is called reasoning"

b) "a word-word matrix M from a large text corpus ... gives you something
similar to your free association chain like rain-wet-water-..."

No domain-switching there.

Compare these with my domain-switching chain - COW - DOG - TAIL - CURRENT
CRISIS - LOCAL VS GLOBAL THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T
SPEND MUCH MORE TIME ON THIS (switching between the domains of Animals -
Politics/Economics - Weather - Personal Timetable).

Your (extremely limited) idea of (logical) reasoning is also entirely
intra-domain - the domain of the Alphabet (A-B-C).

But my creative and similar creative chains are analogous to switching from,
say, an Alphabet domain (A-B-C) to a Foreign Languages domain (alpha - omega)
to a Semiotics one (symbol - sign - representation) to a Fonts one (Courier -
Times Roman) etc. etc. - i.e. we could all easily and spontaneously form such
a domain-switching chain.

Your programs and all the programs ever written are still incapable of doing
this - switching domains. This, it bears repeating, is *the* problem of AGI.

Because you're ignoring it, you don't see that you're in effect maintaining
an absurdity

Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Matt Mahoney
--- On Thu, 1/8/09, Vladimir Nesov robot...@gmail.com wrote:

  I claim that K(P) > K(Q) because any description of P must include
  a description of Q plus a description of what P does for at least one other 
  input.
 
 
 Even if you somehow must represent P as a concatenation of Q and
 something else (you don't need to), it's not true that always
 K(P) > K(Q). It's only true that length(P) > length(Q), and longer strings
 can easily have smaller programs that output them. If P is
 10^(10^10) symbols X, and Q is some random number of X smaller
 than 10^(10^10), it's probably K(P) < K(Q), even though Q is a
 substring of P.

Well, it is true that you can find |P| < |Q| for some cases of P nontrivially 
simulating Q, depending on the choice of language. However, it is not true on 
average. It is also not possible for P to nontrivially simulate itself, because 
it is a contradiction to say that P does everything that Q does and at least 
one thing that Q doesn't do if P = Q.
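
(A crude, computable illustration of that point, using compressed size as a
rough stand-in for K - a sketch, not a proof:)

    import random, zlib

    p = b"X" * 10**5                                        # long but regular
    q = bytes(random.getrandbits(8) for _ in range(10**3))  # short but random

    print(len(p) > len(q))                                  # True: P is longer
    # Compressed size roughly upper-bounds Kolmogorov complexity:
    print(len(zlib.compress(p)) < len(zlib.compress(q)))    # True: smaller "K(P)"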

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 6:34 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Well, it is true that you can find |P| < |Q| for some cases of P nontrivially
 simulating Q, depending on the choice of language. However, it is not true on
 average. It is also not possible for P to nontrivially simulate itself
 because it is a contradiction to say that P does everything that Q does and
 at least one thing that Q doesn't do if P = Q.


What you write above is a separate note, unrelated to the one about
complexity. P simulating P and doing something else is well-defined
according to your definition of simulation in the previous message
(one that includes a special format for requests for simulation); there are no
contradictions, and you've got an example.

-- 
Vladimir Nesov




Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Mike Tintner

Richard,


You missed Mike Tintner's explanation . . . .


Mark,

 Right 

So you think maybe what we've got here is a radical influx of globally 
entangled free-association bosons?



Richard,

Q.E.D.  Well done.

Now tell me how you connected my ridiculous [or however else you might 
want to style it] argument with your argument re bosons - OTHER than by 
free association? What *prior* set of associations in your mind, or prior, 
preprogrammed set of rules, what logicomathematical thinking enabled you to 
form that connection?  (And it would be a good idea to apply it to your 
previous joke re Blue - because they must be *generally applicable* 
principles)


And what prior principles enabled you to spontaneously and creatively form 
the precise association of a radical influx of globally entangled 
free-association bosons - to connect RADICAL INFLUX with GLOBALLY ENTANGLED 
... and FREE ASSOCIATION and BOSONS?


You were being v. funny, right?  But humour is domain-switching (which you 
do multiple times above) and that's what you/AGI can't do or explain 
computationally.


***

Ironically, before I saw your post I had already written (and shelved) a 
P.S.  Here it is:


P.S. Note BTW - because I'm confident you're probably still thinking 
"what's that weird nutter on about? what's this got to do with AGI?" - the 
very best evidence for my claim. That claim is now that the brain is


* potentially infinitely domain-switching on both

a) a basic level,  and

b) a meta-level -

i.e. capable of forming endless new connections/associations on a higher 
level too and so forming infinite new modes of reasoning (new *ways* of 
associating ideas as well as new associations).


The very best evidence is *logic and mathematics themselves*. For logic and 
mathematics ceaselessly produce new branches of themselves. New logics. New 
numbers. New kinds of geometry. *New modes of reasoning.*


And an absolutely major problem for logic and mathematics (and current 
computation) is that they *cannot explain themselves* - cannot explain how 
these new modes of reasoning are generated. There are no logical, 
mathematical or other formal ways of explaining these new branches.


Rational numbers cannot be used to deduce irrational numbers and thence 
imaginary numbers. Trigonometry cannot be used to deduce calculus. Euclidean 
geometry cannot be used to deduce Riemannian geometry, nor that to deduce 
topology. And so on. Aristotelian logic cannot explain fuzzy logic, which 
cannot explain PLN.


Logicomathematical modes of reasoning are *not* generated 
logicomathematically, but creatively - as both Ben, I think, and certainly 
Franklin have acknowledged.


And clearly the brain is capable of forming infinitely many new logics and 
mathematics - infinite new forms of reasoning - by 
*non-logicomathematical*/*nonformal* means. By, I suggest, free association 
among other means.




It's easy to make cheap, snide comments. But can either of you actually 
engage directly with the problem of domain-switching, and argue 
constructively about particular creative problems and thinking - using 
actual evidence? I've seen literally no instances from either of you (or 
indeed - though this may at first seem surprising and may need a little 
explanation - from anyone in the AI community).


Let's take an actual example of good creative thinking happening on the 
fly - and what I've called directed free association:


It's by one Richard Loosemore. You as well as others thought pretty 
creatively about the problem of the engram a while back. Here's the 
transcript of that thinking - as I said, good creative thinking, really 
trying to have new ideas (as opposed to just being snide here):


Now perhaps you can tell me what prior *logic* or programming produced the 
flow of your own ideas here? How do you get from one to the next?


Richard: Now you're just trying to make me think ;-). 1.

Okay, try this. 2.

[heck, you don't have to:  I am just playing with ideas here...]  3.

The methylation pattern has not necessarily been shown to *only* store
information in a distributed pattern of activation - the jury's out on
that one (correct me if I'm wrong). 4.5

Suppose that the methylation end caps are just being used as a way
station for some mechanism whose *real* goal is to make modifications to
 some patterns in the junk DNA.  6. So, here I am suggesting that the junk
DNA of any particular neuron is being used to code for large numbers of
episodic memories (one memory per DNA strand, say), with each neuron
being used as a redundant store of many episodes. 7.  The same episode is
stored in multiple neurons, but each copy is complete.  8. When we observe
changes in the methylation patterns, perhaps these are just part of the
transit mechanism, not the final destination for the pattern.  9. To put it
in the language that Greg Bear would use, the endcaps were just part of
the radio system. 

RE: [agi] The Smushaby of Flatway.

2009-01-09 Thread Ronald C Blue
In Outlook Express, change the format to HTML and insert the picture.  Generally 
this is safer than an attachment.

-Original Message-
From: Eric Burton brila...@gmail.com
To: agi@v2.listbox.com
Sent: 1/9/09 8:03 AM
Subject: Re: [agi] The Smushaby of Flatway.

Ronald: I didn't have to choose 'Display images' to see your attached
picture again. What are you doing? It's fun, but scary.

On 1/9/09, Ronald C. Blue ronb...@u2ai.us wrote:
 But how can it dequark the tachyon antimatter containment field?
 Richard Loosemore

 A model that can answer all questions is defective precisely because it can
 do so.

 But in your case matter does not exist except at certain time phases as a
 standing opponent process informational system from a zero point energy
 point of view.  An example is the negative phase oscillon in the matter
 picture surrounded by electrons oscillating in and out of existence.

 Oscillon pairs with opposite waves form bonds and are very stable.   This is
 like the Pauli exclusion principle.  Only electron pairs with opposite spins
 can be in orbit together.  This is also true for shadow matter or the
 nucleus of an atom.


 Emotionally I like the idea that anti-matter is matter moving into the past.
 But due to the vortex of energy it looks like negative time, but it is just
 like an old wagon wheel in a black and white movie that looks like it is
 colorized and going backwards.  3D is an illusion.






Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Ed Porter
Mike,

What is the evidence, if any, that it would be difficult for a sophisticated
Novamente-like AGI to switch domains?  

In fact, much valuable AGI thinking would involve patterns and mental
behaviors that extend across different domains. Human natural language
understanding is believed to use multiple domains of knowledge in the brain
when necessary, such as visual imagination, to help in understanding what is
being said or what is to be said. The OpenCog WikiBook describes multiple
procedures for controlling, and automatically learning to control, inference,
activation, and procedure execution.  This could be used to accomplish
sophisticated interaction of knowledge from different domains, much as the
human brain does.

Brain studies conducted by Wolf Singer indicate that brain synchronies
can be used to interconnect portions of the brain with different areas of
expertise when performing a job that requires one of those areas to tune
into information coming from the other.  It would be easy to make an AGI do
something equivalent.  In fact, in routine tasks, the synchronies are often
set up in advance of there being an activation in some of the regions
connected by them, because the brain has learned from prior inferencing
patterns to expect such activations to arise given the task being performed.

Ed Porter






Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Wed, 1/7/09, Ben Goertzel b...@goertzel.org wrote:
if proving Fermat's Last theorem was just a matter of doing math, it would 
have been done 150 years ago ;-p

obviously, all hard problems that can be solved have already been solved...

???

In theory, FLT could be solved by brute force enumeration of proofs until a 
match to Wiles' is found. In theory, AGI could be solved by coding all the 
knowledge in LISP. The difference is that 50 years ago people actually expected 
the latter to work. Some people still believe so.

AGI is an engineering and policy problem. We already have small scale neural 
models of learning, language, vision, and motor control. We currently lack the 
computing power (10^16 OPS, 10^15 bits) to implement these at human levels, but 
Moore's law will take care of that.

But that is not the hard part of the problem. AGI is a system that eliminates 
our need to work, to think, and to function in the real world. Its value is USD 
10^15, the value of the global economy. Once we have the hardware, we still 
need to extract 10^18 bits of knowledge from human brains. That is the 
complexity of the global economy (assuming 10^10 people x 10^9 bits per person 
x 0.1 fraction consisting of unique job skills). This is far bigger than the 
internet. The only way to extract this knowledge without new technology like 
brain scanning is by communication at the rate of 2 bits per second per person. 
The cheapest option is a system of pervasive surveillance where everything you 
say and do is public knowledge.
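
(For the record, the arithmetic behind those numbers - the ~1.6-year figure
at the end is my own back-of-envelope extension:)

    people = 10**10             # world population, order of magnitude
    bits_per_person = 10**9     # long-term memory estimate per person
    unique_fraction = 0.1       # fraction that is unique job skills

    total_bits = people * bits_per_person * unique_fraction
    print(total_bits)           # 1e18 bits of knowledge to extract

    rate = 2 * people           # 2 bits/second/person, everyone in parallel
    print(total_bits / rate / 3.15e7)   # ~1.6 years of continuous communication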

AGI is too expensive for any person or group to build or own. It is a vastly 
improved internet, a communication system so efficient that the world's 
population starts to look like a single entity, and nobody notices or cares as 
silicon gradually replaces carbon.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Mike Tintner


Matt: Logic has not solved AGI because logic is a poor model of the way 
people think.


Neural networks have not solved AGI because you would need about 10^15 
bits of memory and 10^16 OPS to simulate a human brain sized network.


Genetic algorithms have not solved AGI because the computational 
requirements are even worse. You would need 10^36 bits just to model all 
the world's DNA, and even if you could simulate it in real time, it took 3 
billion years to produce human intelligence the first time.


Probabilistic reasoning addresses only one of the many flaws of first 
order logic as a model of AGI. Reasoning under uncertainty is fine, but 
you haven't solved learning by induction, reinforcement learning, complex 
pattern recognition (e.g. vision), and language. If it was just a matter 
of writing the code, then it would have been done 50 years ago.



Matt,

What then do you see as the way people *do* think? You surprise me, Matt, 
because both the details of your answer here and your thinking generally 
strike me as *very* logicomathematical - with lots of emphasis on numbers 
and compression - yet you seem to be acknowledging here, like Jim,  the 
fundamental deficiencies of the logicomathematical form - and it is indeed 
only one form - of thinking. 







Re: [agi] The Smushaby of Flatway...PS

2009-01-08 Thread Mike Tintner
PS I should have said the fundamental deficiencies of the PURELY 
logicomathematical form of thinking. It's not deficient in itself - only if 
you think, like so many AGIers, that it's the only form of thinking, or that 
it's able to accommodate the entirety of human thinking.







Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:

 What then do you see as the way people *do* think? You
 surprise me, Matt, because both the details of your answer
 here and your thinking generally strike me as *very*
 logicomathematical - with lots of emphasis on numbers and
 compression - yet you seem to be acknowledging here, like
 Jim,  the fundamental deficiencies of the logicomathematical
 form - and it is indeed only one form - of thinking. 

Pattern recognition in parallel, and hierarchical learning of increasingly 
complex patterns by classical conditioning (association), clustering in context 
space (feature creation), and reinforcement learning to meet evolved goals.

You can't write a first order logic expression that inputs a picture and tells 
you whether it is a cat or a dog. Yet any child can do it. Logic is great for 
abstract mathematics. We regard it as the highest form of thought, the hardest 
thing that humans can learn, yet it is the easiest problem to solve on a 
computer.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Mike Tintner

Matt,

Thanks. But how do you see these:

Pattern recognition in parallel, and hierarchical learning of increasingly 
complex patterns by classical conditioning (association), clustering in 
context space (feature creation), and reinforcement learning to meet evolved 
goals.


as fundamentally different from logicomathematical thinking? (Reinforcement 
learning strikes me as literally extraneous and not a mode of thinking). 
Perhaps you need to explain why conditioned association is different.


It may help if I set up a pole of comparison. I see the brain, for example, 
as working primarily by free association. I can start right now with a 
thought -


COW - and proceed - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL 
THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME ON 
THIS etc. etc.


and that literally was an ad hoc and ad lib chain and form of reasoning. 
Free association. In no way was the whole programmed. Parts of it certainly 
were - my spelling of different words, use of phrases etc. but not the 
whole - I could have gone off at different points on very different 
tangents. (Try it for yourself).


Also, of course, each association is indeed an *association* with - and not 
a *logical/necessary sequitur* from - the previous idea.


Now free association is clearly antithetical to logicomathematical thinking, 
which does indeed represent forms of routines and programs. I would have 
thought that it is also antithetical to any kind of thinking you would 
advocate.





Matt/MT: [snip]









RE: [agi] The Smushaby of Flatway.

2009-01-08 Thread Ed Porter
 of smushing are over generalized, and that to
the extent they have any validity, the hardware and the software approaches
to AGI that will start dominating within 3 to 10 years will have made their
relevance largely historical.

Ed Porter

-Original Message-
From: Jim Bromer [mailto:jimbro...@gmail.com] 
Sent: Wednesday, January 07, 2009 8:24 PM
To: agi@v2.listbox.com
Subject: [agi] The Smushaby of Flatway.

[snip]






Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Ronald C. Blue
From: Jim Bromer [mailto:jimbro...@gmail.com] 
Sent: Wednesday, January 07, 2009 8:24 PM

  All of the major AI paradigms, including those that are capable of
  learning, are flat according to my definition.  What makes them flat
  is that the method of decision making is minimally-structured and they
  funnel all reasoning through a single narrowly focused process that
  smushes different inputs to produce output that can appear reasonable
  in some cases but is really flat and lacks any structure for complex
  reasoning.

Consider a wave machine made of oil and colored water.  Technically speaking, the 
information is flat even under chaos stimulation, because the sum of the top oil 
and the bottom water remains constant within the limits of the power parameters 
of the system.  This is a pure 3D correlational wavelet opponent processing 
machine.  The system is globally entangled to new information, but the response 
time is globally slow.  Now insert floating needles, locked into a map location, 
that go up and down with the modulations.  Let the needles make contact with a 
parallel processor.  The on and off switching is read with a constant Paul rf 
trap and protected in parallel modulating string pulses.  The information is 
sent to another wave machine, and the floating needles can be made to move up 
and down electromagnetically, duplicating the pure gaussian memory of the first 
wave machine with some loss of noise.  Noise is not noise; it is the declining 
value of previous information.  The system makes calculations in a relative zero 
energy value or opponent process.  Noise is important for creative thought and 
intelligent behavior.



Now let's see this process phase-locked, or as a snapshot picture.  The information 
is globally flat because it is a parallel opponent process, but it contains a 
reciprocal Eigenfunction to produce the opponent process summation interference 
if the picture were reversed and both added together.
The new picture would be a flat gray.  The brain and a good AGI would have 
trouble keeping up with the data flow at a particular location, just like the 
wave machine, but over a slow gaussian reference rf trap the summation would 
approach zero.  A good AGI machine would have a slow integration time or 
oscillon output picture; see http://www.youtube.com/watch?v=hvTzeWXCqXQ .



Now consider a self-programming electronic wave machine with two systems - an 
object map system and an action map system.  Example: 
http://oolong.co.uk/resonata.htm locked into the fourth harmonic.  The two 
systems are Eigenfunctions.  For example, a child loudly says MILK!
Milk as a stimulus, when it hits the dual memory map, means at the same time the 
object milk, and it is entangled with its Eigenfunction on the action map - get 
MILK now.

The program get milk now is turned on.  Because the action program is turned on, 
it does a data input into the gaussian database for milk.  That is what 
entangled memory means.







attachment: students_abstract.jpg
attachment: OSCILLON.JPG


Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread J. Andrew Rogers

On Jan 8, 2009, at 10:29 AM, Ronald C. Blue wrote:

...Noise is not noise...



Speaking of noise, was that ghastly HTML formatting really necessary?   
It made the email nearly unreadable.


J. Andrew Rogers





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:

 Matt,
 
 Thanks. But how do you see these:
 
 Pattern recognition in parallel, and hierarchical
 learning of increasingly complex patterns by classical
 conditioning (association), clustering in context space
 (feature creation), and reinforcement learning to meet
 evolved goals.
 
 as fundamentally different from logicomathematical
 thinking? (Reinforcement learning strikes me as
 literally extraneous and not a mode of thinking). Perhaps
 you need to explain why conditioned association is
 different.

Free association is the basic way of recalling memories. If you experience A 
followed by B, then the next time you experience A you will think of (or 
predict) B. Pavlov demonstrated this type of learning in animals in 1927. Hebb 
proposed a neural model in 1949 which has since been widely accepted. The model 
is unrelated to first order logic. It is a strengthening of the connections 
from neuron A to neuron B.
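
A minimal sketch of that Hebbian picture (illustrative only; the toy 
vocabulary, learning rate, and winner-take-all recall rule below are my 
assumptions, not a claim about anyone's actual model):

  # Toy weight table W[a][b]: strength of the association a -> b.
  W = {}

  def experience(a, b, rate=0.5):
      # Hebbian update: experiencing a followed by b strengthens a -> b.
      W.setdefault(a, {})
      W[a][b] = W[a].get(b, 0.0) + rate

  def recall(a):
      # Free association: return the most strongly connected successor.
      links = W.get(a)
      return max(links, key=links.get) if links else None

  # Pavlov-style conditioning: bell precedes food, food precedes salivation.
  for _ in range(10):
      experience("bell", "food")
      experience("food", "salivate")

  print(recall("bell"))   # -> food
  print(recall("food"))   # -> salivate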

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Mike Tintner
Matt: Free association is the basic way of recalling memories. If you 
experience A followed by B, then the next time you experience A you will 
think of (or predict) B. Pavlov demonstrated this type of learning in 
animals in 1927.


Matt,

You're not thinking your argument through. Look carefully at my spontaneous

COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL
THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME ON
THIS etc. etc.

that's not A-B association.

That's 1. A-B-C  then  2. Gamma-Delta then  3. Languages  then  4. Number of 
Lines in Letters.


IOW the brain is typically not only freely associating *ideas* but switching 
freely across, and connecting, radically different *domains* in any given 
chain of association. [e.g. above, from Animals to Economics/Politics to 
Weather to Personal Timetable]


It can do this partly because

a) single ideas have multiple, often massively multiple, idea/domain 
connections in the human brain, and allow one to go off on any of multiple 
tangents/directions
b) humans have many things - and therefore multiple domains - on their mind 
at the same time, and can switch as above from the immediate subject to 
some other pressing subject domain (e.g. from economics/politics (local vs 
global) to the weather (what a nice day)).


If your A-B, everything-is-memory-recall thesis were true, our 
chains-of-thought-association would be largely repetitive, and the domain 
switches inevitable.


In fact, our chains (or networks) of free association and domain-switching 
are highly creative, and each one is typically, from a purely technical POV, 
novel and surprising. (I have never connected TAIL and CURRENT CRISIS 
before - though Animals and Politics yes. Nor have I connected LOCAL VS 
GLOBAL THINKING before with WHAT A NICE DAY and the weather).


IOW, I'm suggesting that the natural mode of human thought - and our continuous 
streams of association - are creative. And achieving such creativity is the 
principal problem/goal of AGI.


So maybe it's worth taking 20 secs. of time - producing your own 
chain-of-free-association starting say with MAHONEY and going on for 
another 10 or so items - and trying to figure out how the result 
could possibly be the narrow kind of memory-recall you're arguing for. It's 
an awful lot to ask for, but could you possibly try it, analyse it and 
report back?


[Ben claims to have heard every type of argument I make before (somewhat 
like your A-B memory claim), so perhaps he can tell me where he's read 
before about the Freely Associative, Freely Domain Switching nature of human 
thought - I'd be interested to follow up on it]. 







Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Eric Burton
That email had really nice images, but I don't know why gmail viewed
them automatically!

On 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:
 [snip]





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
Mike,

Your own thought processes only seem mysterious because you can't predict what 
you will think without actually thinking it. It's not just a property of the 
human brain, but of all Turing machines. No program can non-trivially model 
itself. (By model, I mean that P models Q if for any input x, P can compute the 
output Q(x). By non-trivial, I mean that P does something else besides just 
model Q; every program trivially models itself. The proof is that for P to 
non-trivially model Q requires K(P) > K(Q), where K is Kolmogorov complexity, 
because P needs a description of Q plus whatever else it does to make it 
non-trivial. It is obviously not possible for K(P) > K(P).)

So if you learned the associations A-B and B-C, then A will predict C. That is 
called reasoning.

Also, each concept is associated with thousands of other concepts, not just 
A-B. If you pick the strongest associated concept not previously activated, you 
get the semi-random thought chain you describe. You can demonstrate this with a 
word-word matrix M from a large text corpus, where M[i,j] is the degree to 
which the i'th word in the vocabulary is associated with the j'th word, as 
measured by the probability of finding both words near each other in the 
corpus. Thus, M[rain,wet] and M[wet,water] have high values because the words 
often appear in the same paragraph. Traversing related words in M gives you 
something similar to your free association chain like rain-wet-water-...
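
A minimal sketch of such a matrix and traversal (illustrative only; the 
three-sentence corpus is made up, and a real version would use a large corpus, 
proximity weighting, and stopword removal):

  from collections import defaultdict

  corpus = [
      "the rain made the street wet",
      "wet clothes dry near water",
      "water and rain flood the river",
  ]

  # M[i][j]: how often word i co-occurs with word j in the same sentence.
  M = defaultdict(lambda: defaultdict(int))
  for sentence in corpus:
      words = sentence.split()
      for i in words:
          for j in words:
              if i != j:
                  M[i][j] += 1

  def free_associate(word, steps=4):
      # Repeatedly pick the strongest associated word not yet activated.
      chain, seen = [word], {word}
      for _ in range(steps):
          candidates = {w: n for w, n in M[word].items() if w not in seen}
          if not candidates:
              break
          word = max(candidates, key=candidates.get)
          chain.append(word)
          seen.add(word)
      return chain

  print(free_associate("rain"))  # a chain like rain-the-street-wet-...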

-- Matt Mahoney, matmaho...@yahoo.com


--- On Thu, 1/8/09, Mike Tintner tint...@blueyonder.co.uk wrote:

 From: Mike Tintner tint...@blueyonder.co.uk
 Subject: Re: [agi] The Smushaby of Flatway.
 To: agi@v2.listbox.com
 Date: Thursday, January 8, 2009, 3:54 PM
 [snip]





RE: [agi] The Smushaby of Flatway.

2009-01-08 Thread Ronald C Blue
A picture is like an instant 1000 words, and you will remember a picture for 
almost 70 years, but not 1000 words. 

-Original Message-
From: J. Andrew Rogers and...@ceruleansystems.com
To: agi@v2.listbox.com
Sent: 1/8/09 1:59 PM
Subject: Re: [agi] The Smushaby of Flatway.

On Jan 8, 2009, at 10:29 AM, Ronald C. Blue wrote:
 ...Noise is not noise...


Speaking of noise, was that ghastly HTML formatting really necessary?   
It made the email nearly unreadable.

J. Andrew Rogers





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney matmaho...@yahoo.com wrote:
 Mike,

 Your own thought processes only seem mysterious because you can't predict
 what you will think without actually thinking it. [snip]


Matt, please stop. I even constructed an explicit counterexample to
this pseudomathematical assertion of yours once. You don't pay enough
attention to formal definitions: what "this has a description" means,
and which reference TMs specific Kolmogorov complexities are measured
in.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Richard Loosemore

Ronald C. Blue wrote:

[snip]
[snip] ... chaos stimulation because ... correlational wavelet opponent 

 processing machine ... globally entangled ... Paul rf trap ... parallel
 modulating string pulses ... a relative zero energy value or

opponent process  ...   phase locked ... parallel opponent process
... reciprocal Eigenfunction ...  opponent process ... 
summation interference ... gaussian reference rf trap ...

 oscillon output picture ... locked into the fourth harmonic ...
 ... entangled with its Eigenfunction ..
 
[snip]
 
That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore





Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Matt Mahoney
--- On Thu, 1/8/09, Vladimir Nesov robot...@gmail.com wrote:

 On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney matmaho...@yahoo.com wrote:
  [snip]
 
 Matt, please stop. I even constructed an explicit counterexample to
 this pseudomathematical assertion of yours once. You don't pay enough
 attention to formal definitions: what "this has a description" means,
 and which reference TMs specific Kolmogorov complexities are measured
 in.

Your earlier counterexample was a trivial simulation. It simulated itself but 
did nothing else. If P did something that Q didn't, then Q would not be 
simulating P.

This applies regardless of your choice of universal TM.

I suppose I need to be more precise. I say P simulates Q if for all x, 
P("what is Q(x)?") = "Q(x)=y" iff Q(x)=y (where x and y are arbitrary strings). 
When I say that P does something else, I mean that it accepts at least one 
input not of the form "what is Q(x)?". I claim that K(P) > K(Q) because any 
description of P must include a description of Q plus a description of what P 
does for at least one other input.
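
A toy sketch of that definition (purely illustrative; the choice of Q, the 
query format, and the extra input below are arbitrary assumptions of mine):

  def Q(x):
      # Some fixed program: here, reverse the input string.
      return x[::-1]

  def P(query):
      # P simulates Q: it answers "what is Q(x)?" queries by running Q...
      if query.startswith("what is Q(") and query.endswith(")?"):
          x = query[len("what is Q("):-2]
          return "Q(%s)=%s" % (x, Q(x))
      # ...and does something else: it accepts at least one other input.
      if query == "hello":
          return "hi"
      return None  # all other inputs are not accepted

  print(P("what is Q(abc)?"))  # -> Q(abc)=cba
  print(P("hello"))            # -> hi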


-- Matt Mahoney, matmaho...@yahoo.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Your earlier counterexample was a trivial simulation. It simulated itself but 
 did
 nothing else. If P did something that Q didn't, then Q would not be 
 simulating P.

My counterexample also bragged, outside the input format that
requested simulation. ;-)


 This applies regardless of your choice of universal TM.

 I suppose I need to be more precise. I say P simulates Q if for all x,
 P("what is Q(x)?") = "Q(x)=y" iff Q(x)=y (where x and y are arbitrary
 strings).
 When I say that P does something else, I mean that it accepts at least one
 input not of the form "what is Q(x)?".

This is a step in the right direction.
What does it mean for P to NOT accept some input? Must it hang? What
if P outputs "I understand you perfectly" for each input not in the
form "what is Q(x)?"? (Which was my counterexample, IIRC.)

 I claim that K(P) > K(Q) because any description of P must include
 a description of Q plus a description of what P does for at least one other 
 input.


Even if you somehow must represent P as a concatenation of Q and
something else (you don't need to), it's not true that always
K(P) > K(Q). It's only true that length(P) > length(Q), and longer strings
can easily have smaller programs that output them. If P is
10^(10^10) symbols "X", and Q is some random number of "X"s smaller
than 10^(10^10), it's probably K(P) < K(Q), even though Q is a
substring of P.
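
That point is easy to check with a real compressor standing in for K (a crude 
upper bound only; and I swap random bytes in for the random-length run, since 
an off-the-shelf compressor stores a run length verbatim rather than as a 
shorter program):

  import os
  import zlib

  # "Longer strings can easily have smaller programs that output them."
  P = b"X" * 10**6        # very long, but highly regular
  Q = os.urandom(10**4)   # much shorter, but incompressible random bytes

  print(len(P) > len(Q))                                # True
  print(len(zlib.compress(P)) < len(zlib.compress(Q)))  # True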


-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




[agi] The Smushaby of Flatway.

2009-01-07 Thread Jim Bromer
All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition.  What makes them flat
is that the method of decision making is minimally-structured and they
funnel all reasoning through a single narrowly focused process that
smushes different inputs to produce output that can appear reasonable
in some cases but is really flat and lacks any structure for complex
reasoning.

The classic example is of course logic.  Every proposition can be
described as being either True or False and any collection of
propositions can be used in the derivation of a conclusion regardless
of whether the input propositions had any significant relational
structure that would actually have made it reasonable to draw the
definitive conclusion that was drawn from them.

But logic didn't do the trick, so along came neural networks and
although the decision making is superficially distributed and can be
thought of as being comprised of a structure of layer-like stages in
some variations, the methodology of the system is really just as flat.
 Again anything can be dumped into the neural network and a single
decision making process works on the input through a
minimally-structured reasoning system and output is produced
regardless of the lack of appropriate relative structure in it.  In
fact, this lack of discernment was seen as a major breakthrough!
Surprise! Neural networks did not work just like the mind works, in
spite of the years and years of hype-work that went into repeating
this slogan in the 1980's.

Then came Genetic Algorithms and finally we had a system that could
truly learn to improve on its previous learning and how did it do
this?  It used another flat reasoning method whereby combinations of
data components were processed according to one simple untiring method
that was used over and over again regardless of any potential to see
input as being structured in more ways than one.  Is anyone else
starting to discern a pattern here?

Finally we reach the next century to find that the future of AI has
already arrived and that future is probabilistic reasoning!  And how
is probabilistic reasoning different?  Well, it can solve problems
that logic, neural networks, genetic algorithms couldn't!  And how
does probabilistic reasoning do this?  It uses a funnel
minimally-structured method of reasoning whereby any input can be
smushed together with other disparate input to produce a conclusion
which is only limited by the human beings who strive to program it!

The very allure of minimally-structured reasoning is that it works
even in some cases where it shouldn't.  It's the hip hooray and
ballyhoo of the smushababies of Flatway.

Jim Bromer




Re: [agi] The Smushaby of Flatway.

2009-01-07 Thread Matt Mahoney
Logic has not solved AGI because logic is a poor model of the way people think.

Neural networks have not solved AGI because you would need about 10^15 bits of 
memory and 10^16 OPS to simulate a human brain sized network.
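
That estimate is just arithmetic over common back-of-envelope brain numbers 
(the figures below are standard rough assumptions of mine, not necessarily the 
exact ones used here):

  # ~10^11 neurons x ~10^4 synapses each, ~1 bit and ~10 Hz per synapse.
  neurons, synapses_per_neuron = 1e11, 1e4
  bits_per_synapse, firing_rate_hz = 1, 10

  memory_bits = neurons * synapses_per_neuron * bits_per_synapse  # ~10^15
  ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz    # ~10^16
  print("%.0e bits, %.0e OPS" % (memory_bits, ops_per_sec))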

Genetic algorithms have not solved AGI because the computational requirements 
are even worse. You would need 10^36 bits just to model all the world's DNA, 
and even if you could simulate it in real time, it took 3 billion years to 
produce human intelligence the first time.

Probabilistic reasoning addresses only one of the many flaws of first order 
logic as a model of AGI. Reasoning under uncertainty is fine, but you haven't 
solved learning by induction, reinforcement learning, complex pattern 
recognition (e.g. vision), and language. If it was just a matter of writing the 
code, then it would have been done 50 years ago.

-- Matt Mahoney, matmaho...@yahoo.com


--- On Wed, 1/7/09, Jim Bromer jimbro...@gmail.com wrote:

 From: Jim Bromer jimbro...@gmail.com
 Subject: [agi] The Smushaby of Flatway.
 To: agi@v2.listbox.com
 Date: Wednesday, January 7, 2009, 8:23 PM
 [snip]





Re: [agi] The Smushaby of Flatway.

2009-01-07 Thread Ben Goertzel
  If it was just a matter of writing the code, then it would have been done
 50 years ago.



if proving Fermat's Last Theorem was just a matter of doing math, it would
have been done 150 years ago ;-p

obviously, all hard problems that can be solved have already been solved...

???


