Richard,

Your response to Mike below was very good.

The notion that programmed algorithms cannot be creative is contrary to
existing evidence.

First, many programs already display creativity.  ELIZA was able to
produce natural-language chat good enough to make some people believe it
was coming from a human.  Even though this chat operated in a highly
limited, very easy domain, it was nevertheless a form of creativity.  My
understanding is that today's chat bots are even more creative (although
still far from passing the Turing Test).  I have read that there have
been fictional works written by computers that, although far from great,
are probably better than what some human writers would create, and such
writing is a form of creativity.  The visualization program that comes
with Microsoft's Windows Media Player creates visualizations, a certain
percentage of which are more pleasing, stunning, and varied than many
human works of modern art, particularly considering that their
flourishes are often appropriately synchronized with the music.  That is
a form of creativity.  There are even more sophisticated visual-art
programs that produce images that many people (including myself) would
consider absolutely brilliant and very creative if they were made by a
human artist.  That is a further sign of creativity.

Even Doug Hofstadter's relatively simple Copycat program (discussed in my
12/6/2007 11:27 PM post) often displays surprisingly creative solutions
to the little letter-string analogy puzzles it is given.
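
For readers unfamiliar with it, Copycat solves letter-string analogy
puzzles such as "if abc goes to abd, what does ijk go to?".  The toy
Python sketch below illustrates only the puzzle format, with one
hard-coded rule and invented function names; Copycat itself discovers
such rules through a stochastic architecture of codelets and a slipnet,
which is where its creativity comes from.

    # Toy illustration of the kind of puzzle Copycat solves.  The rule
    # "replace the last letter with its successor" is hard-coded here;
    # the real program finds such rules itself.

    def successor(ch):
        """Return the next letter of the alphabet ('z' wraps to 'a')."""
        return chr((ord(ch) - ord('a') + 1) % 26 + ord('a'))

    def apply_rule(s):
        """Apply the rule 'change the last letter to its successor'."""
        return s[:-1] + successor(s[-1])

    print(apply_rule("abc"))  # abd
    print(apply_rule("ijk"))  # ijl

    # Copycat's creativity shows in harder cases such as "xyz", where
    # 'z' has no successor and the program can answer "wyz" by slipping
    # "successor" to "predecessor" -- a genuinely surprising analogy.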

So in many meaningful ways computer programs already are creative.

Second, given the surprising level of creativity that we have reached
with computers that are, in meaningful ways, only 1/10,000 to
1/10,000,000 of the power of the human brain, we have every reason to
expect much greater creativity from human-level computers.

I note that Mike said he was in Penrose's camp.  I think people like
Penrose and Searle suffer from an amazing narrow-mindedness on the
subject of consciousness.

The excerpt below from Wikipedia on Penrose's ideas about consciousness
indicates how lame his ideas are.  It is possible that explaining
consciousness will require some new laws of physics, but it seems highly
probable that it will not, just as new laws of chemistry were not
required to explain biological life (although many new biochemical
details have been).  Anybody who thinks Gödel's theorems prevent
computers from mathematical insight doesn't understand the message of
fuzzy logic, i.e., that logical completeness and consistency are not
required for much useful thought.
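
To make that point concrete, here is a toy fuzzy-logic sketch in Python
(my own illustration, using the standard Zadeh min/max operators; the
variable names are invented for this example).  Useful inference
proceeds with degrees of truth even though the classical law of the
excluded middle fails:

    # Toy fuzzy logic: truth values are degrees in [0, 1] rather than a
    # strict True/False, combined with the standard Zadeh operators.

    def f_and(a, b): return min(a, b)
    def f_or(a, b):  return max(a, b)
    def f_not(a):    return 1.0 - a

    tall = 0.7   # "this person is tall" holds to degree 0.7
    heavy = 0.4  # "this person is heavy" holds to degree 0.4

    print(f_and(tall, heavy))        # 0.4
    print(f_or(tall, f_not(heavy)))  # 0.7
    # The law of the excluded middle fails here:
    print(f_or(tall, f_not(tall)))   # 0.7, not 1.0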

Searle's Chinese Room thought experiment was about as dumb as saying
that because chemistry is just about molecules and atoms it could never
explain biological life.  It fails to grasp what I used to call "the
power of big numbers" in the early '70s, which is now, probably more
appropriately, called "complexity."

Consciousness is just a very complex computation, full of sensory,
bodily, and mental-state memories.  It is aware of itself in the sense
that its own state is one of the major inputs into its computations.
This provides what many consider to be the essential mystery of
consciousness, its self-awareness.  As is true of almost any complex
system whose state forms a complex input into its own computation, it is
very dynamic.  Its states are amazingly complex, since the brain has
roughly one hundred billion neurons, each of which can act somewhat
independently.  To use the theater-of-the-mind analogy, if each neuron
is viewed as an audience member, that is an audience equal to roughly
fifteen times the earth's human population (about 10^11 neurons against
some 6.6 x 10^9 people).  And there is evidence that at least certain
types of brain waves are experienced in common by a large percentage of
the cortex, so the neurons in that large part of the cortex can be
viewed as being in one common audience.  It is therefore not surprising
that consciousness is complex and dynamic, and is capable of creative
synthesis -- under experientially learned probabilistic guidance -- of
behaviors, thoughts, words, images, and music.  This complexity is what
makes its self-awareness so vivid and creative.
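
To make concrete the claim that consciousness's own state is one of the
major inputs into its computation, here is a minimal Python sketch (my
own toy illustration, with invented names such as SelfMonitoringSystem;
it is not a model of any actual cognitive architecture):

    # Minimal sketch of a self-monitoring computation: each step's
    # inputs include the system's own prior state, so the system is
    # "aware" of itself in the limited sense described above.

    class SelfMonitoringSystem:
        def __init__(self):
            self.state = {"arousal": 0.0, "last_percept": None}

        def step(self, percept):
            prior = dict(self.state)  # prior state is itself an input
            novelty = 1.0 if percept != prior["last_percept"] else 0.0
            arousal = 0.9 * prior["arousal"] + novelty
            self.state = {"arousal": arousal, "last_percept": percept}
            return self.state

    system = SelfMonitoringSystem()
    for p in ["tone", "tone", "flash", "flash", "tone"]:
        print(p, system.step(p))
    # Novel percepts raise "arousal"; repetition lets it decay -- a
    # crude caricature of a dynamic computation fed by its own state.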

Ed Porter

===from http://en.wikipedia.org/wiki/Roger_Penrose ===
Physics and consciousness
Penrose has written controversial books
<http://en.wikipedia.org/wiki/Controversial_book>  on the connection between
fundamental physics and human consciousness. In The Emperor's New Mind
<http://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind>  (1989
<http://en.wikipedia.org/wiki/1989> ), he argues that known laws of physics
are inadequate to explain the phenomenon of human consciousness. Penrose
hints at the characteristics this new physics may have and specifies the
requirements for a bridge between classical and quantum mechanics (what he
terms correct quantum gravity
<http://en.wikipedia.org/w/index.php?title=Correct_quantum_gravity&action=edit>
, CQG). He claims that the present computer is unable to have
intelligence because it is a deterministic system that for the most part
simply executes algorithms, as a billiard table where billiard balls act as
message carriers and their interactions act as logical decisions. He argues
against the viewpoint that the rational processes of the human mind are
completely algorithmic <http://en.wikipedia.org/wiki/Algorithm>  and can
thus be duplicated by a sufficiently complex computer -- this is in contrast
to views, e.g., Biological Naturalism
<http://en.wikipedia.org/wiki/Biological_Naturalism> , that human behavior
but not consciousness might be simulated. This is based on claims that human
consciousness transcends formal logic
<http://en.wikipedia.org/wiki/Formal_logic>  systems because things such as
the insolubility of the halting problem
<http://en.wikipedia.org/wiki/Halting_problem>  and Gödel's incompleteness
theorem <http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem>
restrict an algorithmically based logic from traits such as mathematical
insight. These claims were originally made by the philosopher John Lucas
<http://en.wikipedia.org/wiki/John_Lucas_%28philosopher%29>  of Merton
College <http://en.wikipedia.org/wiki/Merton_College%2C_Oxford> , Oxford
<http://en.wikipedia.org/wiki/University_of_Oxford> .
In 1994 <http://en.wikipedia.org/wiki/1994> , Penrose followed up The
Emperor's New Mind with Shadows of the Mind
<http://en.wikipedia.org/wiki/Shadows_of_the_Mind>  and in 1997
<http://en.wikipedia.org/wiki/1997>  with The Large, the Small and the Human
Mind
<http://en.wikipedia.org/w/index.php?title=The_Large%2C_the_Small_and_the_Human_Mind&action=edit>
, further updating and expanding his theories.
Penrose's views on the human thought <http://en.wikipedia.org/wiki/Thought>
process are not widely accepted in scientific circles. According to Marvin
Minsky <http://en.wikipedia.org/wiki/Marvin_Minsky> , because people can
construe false ideas to be factual, the process of thinking is not limited
to formal logic. Furthermore, he says that AI
<http://en.wikipedia.org/wiki/Artificial_intelligence>  programs can also
conclude that false statements are true, so error is not unique to humans.
Penrose and Stuart Hameroff <http://en.wikipedia.org/wiki/Stuart_Hameroff>
have constructed a theory in which human consciousness
<http://en.wikipedia.org/wiki/Consciousness>  is the result of quantum
gravity effects in microtubules <http://en.wikipedia.org/wiki/Microtubule> ,
which they dubbed Orch-OR <http://en.wikipedia.org/wiki/Orch-OR>
(orchestrated object reduction). But Max Tegmark
<http://en.wikipedia.org/wiki/Max_Tegmark> , in a paper in Physical Review
E, calculated that the time scale of neuron firing and excitations in
microtubules is slower than the decoherence
<http://en.wikipedia.org/wiki/Quantum_decoherence>  time by a factor of at
least 10,000,000,000. The paper's reception is summed up by this
statement in support of Tegmark: "Physicists outside the fray, such as IBM's John
Smolin <http://en.wikipedia.org/w/index.php?title=John_Smolin&action=edit> ,
say the calculations confirm what they had suspected all along. 'We're not
working with a brain that's near absolute zero. It's reasonably unlikely
that the brain evolved quantum behavior', he says." The Tegmark paper has
been widely cited by critics of the Penrose-Hameroff proposal. It has been
claimed by Hameroff to be based on a number of incorrect assumptions (see
linked paper below from Hameroff, Hagan
<http://en.wikipedia.org/w/index.php?title=Scott_Hagan&action=edit>  and
Tuszyński
<http://en.wikipedia.org/w/index.php?title=Jack_Tuszy%C5%84ski&action=edit>
), but Tegmark in turn has argued that the critique is invalid (see
rejoinder link below). In particular, Hameroff points out the peculiarity
that Tegmark's formula for the decoherence time includes a factor of the
square root of temperature, sqrt(T), in the numerator, meaning that
higher temperatures would lead to longer decoherence times. Tegmark's
rejoinder keeps the sqrt(T) factor in the formula for the decoherence
time.




-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 07, 2008 10:09 AM
To: agi@v2.listbox.com
Subject: Can Computers Be Creative? [WAS Re: [agi] A Simple Mathematical
Test of Cog Sci.]

Mike,

This discussion is just another repetition of a common fallacy, namely 
that computers cannot be creative (or flexible, adaptive, original etc.) 
because they are "programmed".

The fallacy can be illustrated by considering the following set of 
situations.

1) If I tell a child how to solve a calculus problem by giving them 
explicit steps to manipulate the symbols, they are not really doing the 
problem, they are just blindly doing what I am telling them to.  The 
child is just following a program written by me.

2) If I tell a child some general rules for solving calculus problems, 
but let the child figure out which rules map onto the particular problem 
at hand, then the child is now doing some work, but still they don't 
understand calculus.

3) If I tell the child some of the background behind the general rules 
for solving calculus problems, things start to become a little less 
clear.  If the child simply memorizes the rules and the background, and 
can recite them parrot fashion, do they actually understand?  Probably 
not.  Under those circumstances it might still be true to say that I am 
the one solving the problem, and the child is just following my program 
by rote.

4) If I explicitly teach a child all about mathematics (I am the math 
teacher), so they can see the linkages between all the different aspects 
of math that relate to calculus, and if the child now knows about 
calculus, then surely we would say that they understand and can 
"creatively" solve problems?

5) If I teach a child how to *learn* in a completely general way, and 
then give them a math book, and the child uses their learning skills to 
acquire a comprehensive knowledge of mathematics, including calculus, 
and if they do this so well that they understand the complete 
foundations of the field and can do research of their own, is it the 
case that the child is "just following a program" that I taught it 
(because I taught it everything about *how* to learn)?

The problem is that people who make the claim that computers are not 
creative see the relationship between programmers and computers as like 
situation (1) above, when in fact it is like (5).  For example, you say 
below:

 > A *program* is a prior series or set of instructions that shapes and
 > determines an agent's sequence of actions. A precise itinerary for a
 > journey.......

The crucial thing is that THERE ARE DIFFERENT KINDS OF PROGRAMS, and 
some programs are like (1) above.  But you are mistaking the fact that 
some are like (1) for the fact that all of them are.

It is completely false to assume that "programs" in general have that 
kind of simplistic relationship between [code] and [performance carried 
out by the code].

In particular, my type of AI (and Ben's, and others who are attempting 
full-blown AGI) is at least as complex as the type (5) above.  And for 
just the same reason that it would be false to say that a child that can 
do mathematics is just following the rules of their parents and 
kindergarten teacher (who arguably knew nothing about math, but who 
maybe did teach the child how to be a good learner), so it is completely 
false to say that a "program" is just a sequence of instructions that 
determines a computer's sequence of actions.  The program may simply 
determine how the computer goes about the process of learning about the 
world .... while everything from there on out is not explicitly 
determined by the program, at all.

The relationship between program and actual performance can be 
*incredibly* subtle, and sensitive to enormous numbers of factors ... so 
many factors that, in practice, it is not possible to say exactly why 
the computer did a particular thing.  And when it gets to that level of 
complexity, a naive observer might say "the computer is being creative". 
  Indeed it is being creative .... in just the same way that a few 
billion neurons can also be creative.



Richard Loosemore



Mike Tintner wrote:
> Ben,
> 
> Sounds like you may have missed the whole point of the test - though I 
> mean no negative comment by that - it's all a question of communication.
> 
> A *program* is a prior series or set of instructions that shapes and 
> determines an agent's sequence of actions. A precise itinerary for a 
> journey. Even if the programmer doesn't have a full but only a very 
> partial vision of that eventual sequence or itinerary.  (The agent of 
> course can be either the human mind or a computer).
> 
> If the mind works by *free composition,* then it works v. differently - 
> though this is an idea that has still to be fleshed out, and could take 
> many forms. The first crucial difference is that there is NO PRIOR 
> SERIES OR SET OF INSTRUCTIONS - saves a helluva lot on both space and 
> programming work. Rather the mind works principally by free association 
> - making up that sequence of actions/ journey AS IT GOES ALONG. So my 
> very crude idea of this is you start, say, with a feeling of hunger, 
> which = "go get food".  And immediately you go to the fridge. But only 
> then, when the right food isn't there, do you think: in what other place 
> could food be. And you may end up going various places, and/or asking 
> various people, and/or consulting various sources of information, and/or 
> doing things that you don't normally do like actually cooking/preparing 
> various dishes, or looking under sofas or going to a restaurant    - but 
> there was no initial program in your brain for the actual journey you 
> undertake, which is simply thrown together ad hoc and can take many 
> different courses. Rather like an actual Freudian chain of free word 
> associations, where there cannot possibly be a prior program (or would 
> anyone disagree?)
> 
> (Any given journey, though, may  involve many well-established routines).
> 
> As opposed to an initial AI-style program with complete set of 
> instructions, I suggest, the mind in undertaking activities,  has 
> normally only the roughest of "briefs" outlining a goal, together with a 
> rough, abstract and very, even extremely, incomplete sketch of the 
> journey to be undertaken.
> 
> A program is essentially a detailed blueprint for a house. A free 
> composition is a very rough sketchy outline to begin with, that is 
> freely filled in as you go along . Evolution and development seem to 
> work more on the latter principle - remember Dawkins' idea of them  as 
> like an airplane built in mid-flight - though our physical development, 
> while definitely having considerable degrees of freedom as to possible 
> physiques, is vastly more constrained than our physical and mental 
> activities.
> 
> None of the many activities of writing a program that you have 
> undertaken - as distinct from the programs themselves - was, I suggest, 
> remotely preprogrammed itself. Writing a program like any creative 
> activity - writing a story/musical piece/ drawing a picture or producing 
> a design - is a free composition. A crazy walk.
> 
> Genetic algorithms are indeed programs and function v. differently from 
> human creativity. They proceed along predefined lines. Nothing crazy 
> about them.  If they produce surprising results, it is only because the 
> programmer didn't have the capacity to think through the consequences of 
> his instructions.
> 
> Now note here - heavily underlined several times - I have only gone into 
> free composition, in order to give you something more or less vivid to 
> contrast with the idea of a program. But the point of my test is NOT to 
> elucidate the idea of free composition- I don't have to do that - it is 
> to test & hopefully destroy the idea of the mind being driven by neat 
> prior sets of instructions - even pace Richard or genetic algorithms,  
> v. complex sets of instructions.
> 
> Does that make the program/free composition distinction - & the point of 
> the test - clearer, regardless of how you may agree/disagree?
> 
> 
> 
> Ben: I don't really understand what you mean by "programmed" ... nor by 
> "creative"
>>
>> You say that, according to your definitions, a GA is programmed and
>> ergo cannot be creative...
>>
>> How about, for instance, a computer simulation of a human brain?  That
>> would be operated via program code, hence it would be "programmed" --
>> so would you consider it intrinsically noncreative?
>>
>> Could you please define your terms more clearly?
>>
>> thx
>> ben
>>
>> On Jan 6, 2008 1:21 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>>>
>>> MT: This has huge implications for AGI - you guys believe that an AGI
>>> must be programmed for its activities, I contend that free composition
>>> instead is essential for truly adaptive, general intelligence and is
>>> the basis of all animal and human activities).
>>> >
>>> Ben: Spontaneous, creative self-organized activity is a key aspect of
>>> Novamente and many other AGI designs.
>>>
>>> Ben,
>>>
>>> You are saying that your pet presumably works at times in a 
>>> non-programmed
>>> way - spontaneously and creatively? Can you explain briefly the
>>> computational principle(s) behind this, and give an example of where 
>>> it's
>>> applied, (exploration of an environment, say)? This strikes me as an
>>> extremely significant, even revolutionary claim to make, and it would 
>>> be a
>>> pity if, as with your analogy claim, you simply throw it out again 
>>> without
>>> any explanation.
>>>
>>> And I'm wondering whether you are perhaps confused about this, (or I 
>>> have
>>> confused you) -  in the way you definitely are below. Genetic 
>>> algorithms,
>>> for example, and suchlike classify as programmed and neither truly
>>> spontaneous nor creative.
>>>
>>> Note that Baum asked me a while back what  test I could provide that 
>>> humans
>>> engage in "free thinking."  He, quite rightly, thought it a 
>>> scientifically
>>> significant claim to make, that demanded scientific substantiation.
>>>
>>> My test is not a test, I stress though, of  free will. But have you 
>>> changed
>>> your mind about this? It's hard though not a complete contradiction  to
>>> believe in a mind being spontaneously creative and yet not having 
>>> freedom of
>>> decision.
>>>
>>> MT: I contend that the proper, *ideal* test is to record
>>> humans' actual streams of thought about any problem
>>>
>>> Ben: While introspection is certainly a valid and important tool for
>>> inspiring work in AI and cog sci, it is not a test of anything.
>>>
>>> Ben,
>>>
>>> This is a really major - and very widespread - confusion.  A 
>>> recording of
>>> streams of thought is what it says - a direct or recreated recording 
>>> of a
>>> person's actual thoughts. So, if I remember right, some form of that 
>>> NASA
>>> recording of subvocalisation when someone is immediately thinking 
>>> about a
>>> problem, would classify as a record of their thoughts.
>>>
>>> Introspection is very different - it is a report of thoughts, 
>>> remembered at
>>> a later, often much later time.
>>>
>>> A record(ing) might be me saying "I want to kill you, you bastard " 
>>> in an
>>> internal daydream. Introspection might be me reporting later: "I got 
>>> very
>>> angry with him in my mind/ daydream." Huge difference. An awful lot of
>>> scientists think, quite mistakenly, that the latter is the best 
>>> science can
>>> possibly hope to do.
>>>
>>> Verbal protocols - getting people to think aloud about problems - are 
>>> a sort
>>> of halfway house (or better).
