Mike Tintner wrote:
Richard,
You missed Mike Tintner's explanation . . . .
Mark,
.... Right ....
So you think maybe what we've got here is a radical influx of globally
entangled free-association bosons?
Richard,
Q.E.D. Well done.
Now tell me how you connected my "ridiculous" [or however else you might
want to style it] argument with your argument re "bosons" - OTHER than
by free association? What *prior* set of associations in your mind, what
prior, preprogrammed set of rules, what logicomathematical thinking
enabled you to form that connection? (And it would be a good idea to
apply it to your previous joke re Blue - because they must be *generally
applicable* principles.)
And what prior principles enabled you to spontaneously and creatively
form the precise association of "radical influx of globally entangled
free-association bosons" - to connect RADICAL INFLUX with GLOBALLY
ENTANGLED ... and FREE ASSOCIATION and BOSONS?
You were being v. funny, right? But humour is domain-switching (which
you do multiple times above) and that's what you/AGI can't do or explain
computationally.
***
Ironically, before I saw your post I had already written (and shelved) a
P.S. Here it is:
"P.S. Note BTW - because I'm confident you're probably still thinking
"what's that weird nutter on about? what's this got to do with AGI?" -
the very best evidence for my claim. That claim is now that the brain is
potentially infinitely domain-switching on both
a) a basic level, and
b) a meta-level -
i.e. capable of forming endless new connections/associations on a higher
level too, and so of forming infinite new modes of reasoning (new *ways*
of associating ideas as well as new associations).
The very best evidence is *logic and mathematics themselves*. For logic
and mathematics ceaselessly produce new branches of themselves. New
logics. New numbers. New kinds of geometry. *New modes of reasoning.*
And an absolutely major problem for logic and mathematics (and current
computation) is that they *cannot explain themselves* - cannot explain
how these new modes of reasoning are generated. There are no logical,
mathematical or other formal ways of explaining these new branches.
Rational numbers cannot be used to deduce irrational numbers and thence
imaginary numbers. Trigonometry cannot be used to deduce calculus.
Euclidean geometry cannot be used to deduce Riemannian geometry, nor
Riemannian geometry to deduce topology. And so on. Aristotelian logic
cannot explain fuzzy logic, which cannot explain PLN.
Logicomathematical modes of reasoning are *not* generated
logicomathematically but creatively - as both Ben, I think, and
certainly Franklin have acknowledged.
And clearly the brain is capable of forming infinitely many new logics
and mathematics - infinite new forms of reasoning - by
*non-logicomathematical*/*non-formal* means. By, I suggest, free
association among other means.
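[Editorial aside: as a purely illustrative toy - not anything either correspondent proposed, with an entirely made-up word graph - the kind of "free association" being contrasted with rule-driven reasoning can be caricatured as an unguided walk over an association network:]

```python
import random

# Hypothetical association graph - every entry here is an invented
# example, chosen only to echo the "entangled bosons" joke above.
ASSOCIATIONS = {
    "radical": ["influx", "politics", "root"],
    "influx": ["flood", "particles"],
    "particles": ["bosons", "quantum"],
    "quantum": ["entangled", "bosons"],
    "entangled": ["global", "quantum"],
    "bosons": ["quantum", "particles"],
}

def free_associate(start, steps, rng=None):
    """Hop from word to word by picking a random neighbour at each
    step - no goal, proof, or prior rule selects the path taken."""
    rng = rng or random.Random(0)
    chain = [start]
    for _ in range(steps):
        neighbours = ASSOCIATIONS.get(chain[-1])
        if not neighbours:  # dead end: no stored associations
            break
        chain.append(rng.choice(neighbours))
    return chain

print(free_associate("radical", 5))
```

[The point of the caricature is only that each hop is licensed by *some* stored association, while the overall trajectory follows no general formal principle - which is the property under dispute in this thread.]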
********************************
It's easy to make cheap, snide comments. But can either of you actually
engage directly with the problem of domain-switching, and argue
constructively about particular creative problems and thinking - using
actual evidence? I've seen literally no instances from either of you (or
indeed, though this may at first seem surprising and may need a little
explanation - anyone in the AI community).
Let's take an actual example of good creative thinking happening on the
fly - of what I've called directed free association.
It's by one Richard Loosemore. You as well as others thought pretty
creatively about the problem of the engram a while back. Here's the
transcript of that thinking - as I said, good creative thinking, really
trying to have new ideas (as opposed to just being snide here):
Now perhaps you can tell me what prior *logic* or programming produced
the flow of your own ideas here? How do you get from one to the next?
"Richard: Now you're just trying to make me think ;-). 1.
Okay, try this. 2.
[heck, you don't have to: I am just playing with ideas here...] 3.
The methylation pattern has not necessarily been shown to *only* store
information in a distributed pattern of activation - the jury's out on
that one (correct me if I'm wrong). 4, 5.
Suppose that the methylation end caps are just being used as a way
station for some mechanism whose *real* goal is to make modifications to
some patterns in the junk DNA. 6. So, here I am suggesting that the junk
DNA of any particular neuron is being used to code for large numbers of
episodic memories (one memory per DNA strand, say), with each neuron
being used as a redundant store of many episodes. 7. The same episode is
stored in multiple neurons, but each copy is complete. 8. When we observe
changes in the methylation patterns, perhaps these are just part of the
transit mechanism, not the final destination for the pattern. 9. To put it
in the language that Greg Bear would use, the endcaps were just part of
the "radio" system. (http://www.gregbear.com/books/darwinsradio.cfm) 10.
Now suppose that part of the junk sequences that code for these memories
are actually using a distributed coding scheme *within* the strand 11. (in
the manner of a good old fashioned backprop neural net, shall we say). 12.
That would mean that, contrary to what I said in the above paragraph,
the individual strands were coding a bunch of different episodic memory
traces, not just one. 13.
(It is even possible that the old idea of flashbulb memories may survive
the critiques that have been launched against it ... 14. and in that case,
it could be that what we are talking about here is the mechanism for
storing that particular set of memories. 15. And in that case, perhaps the
system expects so few of them, that all DNA strands everywhere in the
system are dedicated to storing just the individual's store of flashbulb
memories). 16.
Now, finally, suppose that there is some mechanism for "radioing" these
memories to distribute them around the system ... and that the radio
network extends as far as the germ DNA. 17.
Now, the offspring could get the mixed flashbulb memories of its
parents, in perhaps very dilute or noisy form. 18.
This assumes that whatever coding scheme is used to store the
information can somehow transcend the coding schemes used by different
individuals. 19. Since we do not yet know how much common ground there is
between the knowledge storage used by different individuals, this is
still possible. 20.
There: I invented a possible mechanism ... 21.
Does it work?" ... 22.
[Obviously you could break this up into far more sections]
***
Like I said, good creative thinking. Are you prepared to do some more
and analyse a practical example of your own thinking? You say you're a
serious cognitive scientist - why not do some serious, investigative
cognitive science?
You argued a long time back that of course programs could produce
creative thinking. Good. Now apply that argument (or any similar
argument) to your own thinking.
How did you get from sentence one to two - that neat connection from
"trying to make me think" to "okay, try this"? Was that play on "try"
preordained or, as I suggest, free switching?
In fact, take ANY sentence above and show me how it flows *logically,*
or in any inevitable formal way from any other sentence, according to
some *general* principle. (Remember your explanation must be capable of
accounting not just for any given pair of sentences, but other similar
sentence pairs that you have already uttered, or will yet utter)
Do you always conclude, for example, with something like "There: I
invented a possible mechanism" and always follow up with a question like
"Does it work?" Or is it possible that you actually freely associate,
and sometimes in similar situations might make instead a positive
statement like "It just might work...", or a more negative statement
like "Probably won't work, but what the hell?", or not a statement at
all but a single word like "Hmmm...", or instead a justification: "Note
that all that matters here is that it's 'possible' rather than
'probable'"?
Do you always in similar situations connect an idea - 11. "distributed
coding scheme" - with an analogy - 12. "backprop neural net" - or do you
maybe sometimes proceed immediately to further evidence for the idea?
Hey, this shouldn't be difficult at all - you've got a vast body of
sentences in your posts alone to look at. If you're right, you should
be able to come up with loads of examples of
logicomathematically/computationally consistent thinking.
But there is a very considerable body of scientific psychology that will
assert that the creative problem you were tackling is "ill-structured,"
and therefore any attempt to solve it must also be ill-structured - or,
in my terms, free association.
Can you provide a single piece of evidence otherwise?
(Can you even provide a single *normative* argument why any creative,
ill-structured problem *should* be tackled by some pre-existing,
formally well-structured method? Try focussing your argument on the
engram problem.)
I suggest that your entire AI practice - and all the formal,
logicomathematical forms of reasoning you espouse - are quite incapable
of explaining or reproducing your own actual thinking. (And that
includes having thousands of computers going through some process of
selection).
I also suggest that one of the underlying reasons why you guys are
laughing is that you've actually never done any serious analysis of
the stream of human thought and problem-solving - and you just don't
have the metacognitive tools to handle it. You know all about
metacognitively analysing logicomathematical/computational problems
(none of which are creative) but next to nothing about analysing real,
actual human problem-solving, especially the creative variety - the kind
done by general intelligences that actually work - the real business of
AGI.
P.S. The only thing that is predictable about how you will reply here is
that you won't take the opportunity to engage constructively with actual
examples of actual thinking. I'd be delighted to be proved wrong,
because it's a great pity. Your engram thinking really is creative.
I'll bite.
(But since it's suppertime, I have to keep it short.)
It is so *trivially* easy to IN PRINCIPLE make an AGI do something like
the creative idea-hopping that I did above, that I don't understand your
question.
You say the same thing over and over: you say that "logical" AI could
not do it. In a sense that is true, because there is a certain sort of
logical AI that might not be able to do a very good job of it.
But that is not a general fact of any significance.
A question: you have read the description of my molecular framework in
both the Loosemore & Harley paper and in the recent consciousness
paper, right? Can you not see that such a system is utterly unlike
anything that you would call a "logical" system?
I could make a longer attempt to explain, but I am not sure you will
believe me or understand what I say, based on my previous attempts to do
so. I don't mean to be mean, that is just an honest assessment.
Richard Loosemore
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/