Mike,

This discussion is just another repetition of a common fallacy, namely that computers cannot be creative (or flexible, adaptive, original, etc.) because they are "programmed".

The fallacy can be illustrated by considering the following set of situations.

1) If I tell a child how to solve a calculus problem by giving them explicit steps for manipulating the symbols, they are not really solving the problem; they are just blindly doing what I tell them to. The child is just following a program written by me.

2) If I tell a child some general rules for solving calculus problems, but let the child figure out which rules map onto the particular problem at hand, then the child is now doing some work, but they still don't understand calculus.

3) If I tell the child some of the background behind the general rules for solving calculus problems, things start to become a little less clear. If the child simply memorizes the rules and the background, and can recite them parrot fashion, do they actually understand? Probably not. Under those circumstances it might still be true to say that I am the one solving the problem, and the child is just following my program by rote.

4) If I explicitly teach a child all about mathematics (I am the math teacher), so they can see the linkages between all the different aspects of math that relate to calculus, and if the child now knows about calculus, then surely we would say that they understand and can "creatively" solve problems?

5) If I teach a child how to *learn* in a completely general way, and then give them a math book, and the child uses their learning skills to acquire a comprehensive knowledge of mathematics, including calculus, and if they do this so well that they understand the complete foundations of the field and can do research of their own, is it the case that the child is "just following a program" that I taught them (because I taught them everything about *how* to learn)?

The problem is that people who claim that computers are not creative see the relationship between programmers and computers as being like situation (1) above, when in fact it is like (5). For example, you say below:

> A *program* is a prior series or set of instructions that shapes and
> determines an agent's sequence of actions. A precise itinerary for a
> journey.......

The crucial point is that THERE ARE DIFFERENT KINDS OF PROGRAMS, and some programs are like (1) above. But you are mistaking the fact that some are like (1) for evidence that all of them are.

It is completely false to assume that "programs" in general have that kind of simplistic relationship between [code] and [performance carried out by the code].

In particular, my type of AI (and Ben's, and that of others who are attempting full-blown AGI) is at least as complex as type (5) above. And for just the same reason that it would be false to say that a child who can do mathematics is just following the rules of their parents and kindergarten teacher (who arguably knew nothing about math, but who maybe did teach the child how to be a good learner), so it is completely false to say that a "program" is just a sequence of instructions that determines a computer's sequence of actions. The program may simply determine how the computer goes about the process of learning about the world .... while everything from there on out is not explicitly determined by the program at all.
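A toy sketch of that last point (my own illustration; the "small"/"big" task and the data are invented, and no real AGI is remotely this simple): the program below fixes only *how* learning happens - store examples, answer by recalling the nearest one - while everything the system ends up "knowing" comes from the data it is fed, not from the program text.

```python
# Sketch: the program specifies HOW to learn, not WHAT is learned.
# A one-nearest-neighbour learner: the code never mentions any
# particular concept; its behaviour is fixed by the examples it sees.

def learn(examples):
    """The entire 'learning rule': just store (input, label) pairs."""
    return list(examples)

def answer(memory, x):
    """Answer a query by recalling the nearest stored example."""
    nearest = min(memory, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Nothing in the code above says anything about 'small' or 'big';
# that knowledge comes entirely from the examples supplied:
memory = learn([(1, "small"), (2, "small"), (9, "big"), (10, "big")])
print(answer(memory, 8))  # -> big (determined by the data, not the program text)
```

Feed the same program a different set of examples and it "knows" something entirely different, with not one line of code changed.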

The relationship between program and actual performance can be *incredibly* subtle, and sensitive to enormous numbers of factors ... so many factors that, in practice, it is not possible to say exactly why the computer did a particular thing. And when it gets to that level of complexity, a naive observer might say "the computer is being creative". Indeed it is being creative .... in just the same way that a few billion neurons can also be creative.



Richard Loosemore



Mike Tintner wrote:
Ben,

Sounds like you may have missed the whole point of the test - though I mean no negative comment by that - it's all a question of communication.

A *program* is a prior series or set of instructions that shapes and determines an agent's sequence of actions. A precise itinerary for a journey. Even if the programmer doesn't have a full but only a very partial vision of that eventual sequence or itinerary. (The agent of course can be either the human mind or a computer).

If the mind works by *free composition,* then it works v. differently - though this is an idea that has still to be fleshed out, and could take many forms. The first crucial difference is that there is NO PRIOR SERIES OR SET OF INSTRUCTIONS - which saves a helluva lot on both space and programming work. Rather, the mind works principally by free association - making up that sequence of actions/journey AS IT GOES ALONG.

So my very crude idea of this is: you start, say, with a feeling of hunger, which = "go get food". And immediately you go to the fridge. But only then, when the right food isn't there, do you think: in what other place could food be? And you may end up going various places, and/or asking various people, and/or consulting various sources of information, and/or doing things that you don't normally do - actually cooking/preparing various dishes, looking under sofas, or going to a restaurant. But there was no initial program in your brain for the actual journey you undertake, which is simply thrown together ad hoc and can take many different courses. Rather like an actual Freudian chain of free word associations, where there cannot possibly be a prior program (or would anyone disagree?).

(Any given journey, though, may involve many well-established routines.)
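The fridge example above can be caricatured in a few lines (my own sketch; the association table and place names are invented for the illustration): one agent follows a route fixed in advance, the other chooses each next step only on arriving at the current one.

```python
# Toy contrast: a fixed 'itinerary' vs a journey improvised step by
# step from whatever the current situation suggests.
import random

random.seed(1)  # for reproducibility of the illustration only

ASSOCIATIONS = {  # hypothetical association links between situations
    "hungry": ["fridge"],
    "fridge": ["shop", "neighbour", "cookbook"],
    "shop": ["meal"], "neighbour": ["meal"], "cookbook": ["meal"],
}

def fixed_itinerary():
    """The whole journey decided before setting out."""
    return ["hungry", "fridge", "shop", "meal"]

def improvised_journey(start="hungry", goal="meal"):
    """Choose each next step only on arrival at the current one."""
    journey, here = [start], start
    while here != goal:
        here = random.choice(ASSOCIATIONS[here])  # ad hoc next step
        journey.append(here)
    return journey

print(improvised_journey())  # route is assembled as it goes along
```

Of course, a determinist would point out that the improvised journey is still produced by a program - which is precisely Loosemore's point above: the program fixes the association mechanism, not the itinerary.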

As opposed to an initial AI-style program with a complete set of instructions, the mind, I suggest, in undertaking activities normally has only the roughest of "briefs" outlining a goal, together with a rough, abstract and very, even extremely, incomplete sketch of the journey to be undertaken.

A program is essentially a detailed blueprint for a house. A free composition is a very rough, sketchy outline to begin with, that is freely filled in as you go along. Evolution and development seem to work more on the latter principle - remember Dawkins' idea of them as like an airplane built in mid-flight - though our physical development, while definitely having considerable degrees of freedom as to possible physiques, is vastly more constrained than our physical and mental activities.

None of the many activities of writing a program that you have undertaken - as distinct from the programs themselves - was, I suggest, remotely preprogrammed itself. Writing a program, like any creative activity - writing a story or musical piece, drawing a picture, or producing a design - is a free composition. A crazy walk.

Genetic algorithms are indeed programs and function v. differently from human creativity. They proceed along predefined lines. Nothing crazy about them. If they produce surprising results, it is only because the programmer didn't have the capacity to think through the consequences of his instructions.
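To make that claim concrete, here is a minimal, generic genetic algorithm (my own illustrative sketch, not any particular system; the bit-counting fitness function and all parameters are arbitrary). Every step - selection, crossover, mutation - is laid down in advance; any "surprise" in the winning genome reflects only that the programmer has not traced the consequences of those fixed steps.

```python
# Minimal genetic algorithm: selection, crossover and mutation are all
# predefined rules; only the resulting genomes are hard to foresee.
import random

random.seed(0)  # fixed seed: even the 'randomness' is a predefined sequence

def fitness(genome):
    """Arbitrary illustrative fitness: count the 1-bits."""
    return sum(genome)

def evolve(pop_size=30, length=20, generations=60):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # predefined selection rule
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # predefined crossover rule
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # predefined mutation rule
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the maximum of 20
```

Nothing here deviates from its predefined lines; whether that disqualifies it from creativity, or merely from a certain *kind* of creativity, is exactly what is in dispute in this thread.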

Now note here - heavily underlined several times - I have only gone into free composition in order to give you something more or less vivid to contrast with the idea of a program. But the point of my test is NOT to elucidate the idea of free composition - I don't have to do that - it is to test & hopefully destroy the idea of the mind being driven by neat prior sets of instructions - even, pace Richard or genetic algorithms, v. complex sets of instructions.

Does that make the program/free composition distinction - & the point of the test - clearer, regardless of how you may agree/disagree?



Ben: I don't really understand what you mean by "programmed" ... nor by "creative"

You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...

How about, for instance, a computer simulation of a human brain?  That
would be operated via program code, hence it would be "programmed" --
so would you consider it intrinsically noncreative?

Could you please define your terms more clearly?

thx
ben

On Jan 6, 2008 1:21 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:

> MT: This has huge implications for AGI - you guys believe that an AGI must be
> programmed for its activities, I contend that free composition instead is
> essential for truly adaptive, general intelligence and is the basis of
> all animal and human activities.
>
> Ben: Spontaneous, creative self-organized activity is a key aspect of
> Novamente and many other AGI designs.

Ben,

You are saying that your pet presumably works at times in a non-programmed
way - spontaneously and creatively? Can you explain briefly the
computational principle(s) behind this, and give an example of where it's
applied, (exploration of an environment, say)? This strikes me as an
extremely significant, even revolutionary claim to make, and it would be a pity if, as with your analogy claim, you simply throw it out again without
any explanation.

And I'm wondering whether you are perhaps confused about this (or I have
confused you) - in the way you definitely are below. Genetic algorithms and
suchlike, for example, count as programmed, and are neither truly
spontaneous nor creative.

Note that Baum asked me a while back what test I could provide that humans
engage in "free thinking." He, quite rightly, thought it a scientifically
significant claim to make, one that demanded scientific substantiation.

My test, I stress, is not a test of free will. But have you changed your
mind about this? It is hard, though not a complete contradiction, to
believe in a mind being spontaneously creative and yet not having freedom
of decision.

> MT: I contend that the proper, *ideal* test is to record
> humans' actual streams of thought about any problem
>
> Ben: While introspection is certainly a valid and important tool for
> inspiring work in AI and cog sci, it is not a test of anything.

Ben,

This is a really major - and very widespread - confusion. A recording of
streams of thought is what it says - a direct or recreated recording of a
person's actual thoughts. So, if I remember right, some form of that NASA
recording of subvocalisation when someone is immediately thinking about a
problem would count as a record of their thoughts.

Introspection is very different - it is a report of thoughts, remembered at
a later, often much later time.

A record(ing) might be me saying "I want to kill you, you bastard" in an
internal daydream. Introspection might be me reporting later: "I got very
angry with him in my mind/daydream." Huge difference. An awful lot of
scientists think, quite mistakenly, that the latter is the best science can
possibly hope to do.

Verbal protocols - getting people to think aloud about problems - are a sort
of halfway house (or better).

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=82603250-95d5f4
