Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 You start v. constructively thinking how to test the non-programmed
 nature of  - or simply record - the actual writing of programs, and
 then IMO fail to keep going.

You could trace their keyboard presses back to the cerebellum and motor 
cortex, yes, this is true, but this isn't going to be like tracing a 
dedicated "programmer pathway" in the brain. You might just end up tracing 
the entire brain [which is another project that I fully support, of 
course]. You can imagine the signals being traced back to their origins in 
the spine and the CNS - the cerebellum and motor cortex - and then to the 
somatosensory cortex that gave them the feedback from the debugger's error 
output (parse error, rawr), etc. You could even spice up the experimental 
scenario by tracking different strategies and how they are executed in 
response to bugs, sure.

 Ask them to use the keyboard for everything - (how much do you guys
 use the keyboard vs say paper or other things?) - and you can
 automatically record key-presses.

Right.
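
The key-press half of that is nearly trivial today. Here's a minimal sketch 
of the sort of time-stamped logger Mike is asking for - Unix terminal only, 
Python standard library, and the file name and CSV layout are just 
assumptions for illustration:

# keylog.py -- a minimal sketch of the key-press recorder described above:
# timestamp every keystroke typed into this terminal session (Unix only).
# Hypothetical filename and CSV format; Python 3 standard library only.
import sys, time, termios, tty

def record(logfile="keystrokes.csv"):
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)          # save terminal settings
    tty.setcbreak(fd)                    # deliver keys one at a time
    try:
        with open(logfile, "w") as log:
            log.write("t_seconds,key\n")
            while True:
                ch = sys.stdin.read(1)   # blocks until a key arrives
                if ch == "\x04":         # Ctrl-D ends the session
                    break
                log.write("%.4f,%r\n" % (time.time(), ch))
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)  # restore terminal

if __name__ == "__main__":
    record()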

 Hasn't anyone done this in any shape or form? It might sound as if it
 would produce terribly complicated results, but my guess is that they
 would be fascinating just to look at (and compare technique) as well
 as analyse.

I don't think it's sufficient to keep it as offline analysis. Here's why: 
http://heybryan.org/humancortex.html 
Basically, wouldn't it be interesting to have an online/real-time/run-time 
system for keeping track of your brain as you program? This would allow for 
neurofeedback and some other possibilities.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, William Pearson wrote:
 2008/9/5 Mike Tintner [EMAIL PROTECTED]:
  By contrast, all deterministic/programmed machines and computers
  are guaranteed to complete any task they begin.

 If only such could be guaranteed! We would never have system hangs or
 deadlocks. Even if it could be made so, computer systems would not
 always want to do so. Have you ever had a programmed computer system
 say to you, "This program is not responding, do you wish to terminate
 it?" There is no reason in principle why the decision to terminate
 the program couldn't be made automatically.

These errors are computed. "Do what I mean, not what I say" is a common 
phrase thrown around in programming circles. The errors are not because 
the ALU suddenly decided not to be present, and they are not because the 
machine suddenly lost its status as a Turing machine (although if you 
drove a rock through it, this is quite likely). Rather, it's because you 
failed to write a good kernel. And yes, the decision to terminate programs 
can be made automatically; I sometimes run scripts on my clusters to kill 
things that haven't been responding for a certain amount of time, but 
usually I prefer to investigate by hand since it's so rare.
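
For concreteness, here is roughly the shape of watchdog I mean - not the 
actual script from my clusters, just a sketch, and the heartbeat-file 
convention (/tmp/heartbeat.<pid>, touched by each job while it makes 
progress) is an assumption made up for the example:

# watchdog.py -- a rough sketch of the "kill it if it stops responding" idea.
# Assumes a hypothetical convention: each job touches a heartbeat file
# (/tmp/heartbeat.<pid>) while it is making progress.
import os, signal, time

TIMEOUT = 600          # seconds of silence before we give up on a job
HEARTBEAT_DIR = "/tmp"

def check_once():
    for name in os.listdir(HEARTBEAT_DIR):
        if not name.startswith("heartbeat."):
            continue
        path = os.path.join(HEARTBEAT_DIR, name)
        pid = int(name.split(".", 1)[1])
        silent_for = time.time() - os.path.getmtime(path)
        if silent_for > TIMEOUT:
            print("pid %d silent for %ds, terminating" % (pid, silent_for))
            try:
                os.kill(pid, signal.SIGTERM)   # ask politely first
            except ProcessLookupError:
                pass                           # already gone
            os.remove(path)

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(60)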

  Very different kinds of machines to us. Very different paradigm.
  (No?)

 We commonly talk about single program systems because they are
 generally interesting, and can be analysed simply. My discussion on
 self-modifying systems ignored the interrupt driven multi-tasking
 nature of the system I want to build, because that makes analysis a
 lot more hard. I will still be building an interrupt driven, multi
 tasking system.

That's an interesting proposal, but I'm wondering about something. 
Suppose you have a cluster of processors, and they are all 
communicating with each other in some way to divide up tasks and 
compute away. Now, given the ability to send interrupts to one 
another, and given the linear nature of each individual unit, is it 
really multitasking? At some point it has to integrate all of the 
results at a single node, writing to a single address on the hdd (or 
something) so that the results end up in one place - either that, or 
whatever reads the results has to do the integration. Is it really 
multi-tasking and parallel, then?
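
A toy version of what I mean, with a made-up task and file name - the work 
happens in parallel, but the results still funnel through one writer:

# scatter_gather.py -- illustration of the point above: work is done in
# parallel, but the results still pass through one process that writes them
# to a single place. Names and the task itself are invented for the example.
from multiprocessing import Pool

def work(chunk):
    # pretend this is the expensive, truly parallel part
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with Pool(processes=4) as pool:
        partials = pool.map(work, chunks)       # parallel section
    # the "single node / single address" step: one writer integrates everything
    with open("results.txt", "w") as out:
        out.write(str(sum(partials)) + "\n")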

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote:
 Were your computer like a human mind, it would have been able to say
 (as you/we all do) - well if that part of the problem is going to be
 difficult, I'll ignore it  or.. I'll just make up an answer... or
 by God I'll keep trying other ways until I do solve this.. or...
 ..  or ... Computers, currently, aren't free thinkers.

I'm pretty sure that compiler optimizers, which go in and look at your 
loops and other computational elements of a program, are able to make 
assessments like that. Of course, they'll just leave the code as it is 
instead of completely ignoring parts of the program you want compiled, 
but it does seem similar. I recently came across an evolutionary 
optimizer for compilers that tests parameters to gcc to figure out the 
best way to compile a program on a certain architecture (learning all of 
the gcc parameters yourself seems impossible sometimes, you see). Perhaps 
there's some evolved laziness in the human brain that could be modeled 
with gcc easily enough.
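
The idea is simple enough to sketch. This is not the tool I came across, 
just a bare-bones mutate-and-keep-the-faster-one loop; the flag pool, 
bench.c, and the timing-based fitness are all assumptions for illustration:

# flag_evolve.py -- a sketch of the "evolutionary optimizer for gcc
# parameters" idea: toggle a flag, keep whichever flag set produces the
# faster benchmark. Flag list, benchmark, and scoring are illustrative only.
import random, subprocess, time

FLAG_POOL = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer",
             "-ffast-math", "-march=native"]

def fitness(flags):
    subprocess.run(["gcc", *flags, "bench.c", "-o", "bench"], check=True)
    start = time.time()
    subprocess.run(["./bench"], check=True)     # hypothetical benchmark binary
    return time.time() - start                  # lower wall time is better

def mutate(flags):
    flags = set(flags)
    flags.symmetric_difference_update({random.choice(FLAG_POOL)})  # toggle one
    return sorted(flags)

best, best_score = ["-O2"], None
for generation in range(30):
    candidate = mutate(best)
    score = fitness(candidate)
    if best_score is None or score < best_score:
        best, best_score = candidate, score
print("best flags:", best, "runtime:", best_score)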

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote:
 fundamental programming problem, right?) A creative free machine,
 like a human, really can follow any of what may be a vast range of
 routes - and you really can't predict what it will do or, at a basic
 level, be surprised by it.

What do you say to the brain simulation projects? There is a biophysical 
basis to the brain, and it's being discovered and hammered out. You can, 
in fact, predict the results of rabbit eye-blink conditioning experiments 
(I'm working with a lab on this - the simulations return results faster 
than the real neurons do in the lab. You can imagine how useful this is 
for hypothesis testing).

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, William Pearson wrote:
 I'm very interested in computers that self-maintain, that is, reduce
 (or eliminate) the need for a human to be in the loop or know much
 about the internal workings of the computer. However, it doesn't need
 a vastly different computing paradigm; it just needs a different way
 of thinking about the systems. E.g. how can you design a system that
 does not need a human around to fix mistakes, upgrade it or maintain
 it in general?

Yes, these systems are interesting. I can easily imagine a system that 
generates systems with low human maintenance costs. But suppose the system 
you make generates such a system (with that low human-maintenance cost), 
and this 2nd-gen system does it again and again. This is the problem of 
clanking replicators too -- you need some way to correct for divergence 
and for errors of replication; and not only that, but as you go into new 
environments there are new things that have to be taken into account for 
maintenance. Bacteria solve this problem by having many billions of cells 
per culture and enough genetic variability to somehow scrounge up a 
partial solution in time -- so that once you get to the Nth generation 
you're not screwed entirely if some change occurs in the environment. 
There was a recent experiment in the news that has been going for 20 
years - the long-term bacterial selection experiment in Michigan (Lenski's 
lab), cultures kept in bottles under selection for the past 20 years - 
only to find that they evolved the ability to metabolize something they 
couldn't metabolize before. That's an example of being able to work in 
new environments, and there's a lot of cost to it (dead bacteria, many 
generations, etc.) that silicon projects can't match simply because of 
resource/cost constraints, if you use traditional approaches. What would 
an alternative approach look like? One where you don't need dead silicon 
projects, and one where you have enough instances of programs that your 
genetic algorithm can find a solution in time? The increasing availability 
of RAM and hdd space might be enough to let us brute-force it, but the 
embodiment of bacteria in their problem domains is something that 
more-memory strategies don't quite address. Thoughts?
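
To make the variability point concrete, here's a toy genetic algorithm - 
the bit-string genomes and the two "environments" are invented for 
illustration - where a large mutating population re-adapts after the 
fitness function flips halfway through the run:

# adapt_ga.py -- toy GA illustrating the variability point: a large, noisy
# population keeps enough spare variation around that it can re-adapt when
# the "environment" (fitness function) changes. Purely illustrative.
import random

GENOME_LEN, POP_SIZE, MUTATION = 32, 200, 0.01

def fitness(genome, environment):
    # environment A rewards 1-bits, environment B rewards 0-bits
    ones = sum(genome)
    return ones if environment == "A" else GENOME_LEN - ones

def step(pop, environment):
    scored = sorted(pop, key=lambda g: fitness(g, environment), reverse=True)
    parents = scored[:POP_SIZE // 2]             # truncation selection
    children = []
    while len(children) < POP_SIZE:
        g = list(random.choice(parents))
        for i in range(GENOME_LEN):              # point mutation keeps variation
            if random.random() < MUTATION:
                g[i] ^= 1
        children.append(g)
    return children

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(200):
    env = "A" if gen < 100 else "B"              # the environment changes midway
    pop = step(pop, env)
    if gen % 50 == 0 or gen == 199:
        best = max(fitness(g, env) for g in pop)
        print("gen %3d env %s best fitness %d" % (gen, env, best))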

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, Mike Tintner wrote:
 Our unreliabilty is the negative flip-side of our positive ability
 to stop an activity at any point, incl. the beginning and completely
 change tack/ course or whole approach, incl. the task itself, and
 even completely contradict ourself.

But this is starting to get into an odd mix of folk psychology. I was 
reading an excellent paper the other day that makes this point very 
plainly, written by Gerhard Werner: 

The Siren Call of metaphor: Subverting the proper task of Neuroscience
http://www.ece.utexas.edu/~werner/siren_call.pdf

 The case of Neuro-Psychological vs. Naturalistic Neuroscience.
 For grounding the argument, let us look at the case of
 ‘deciding to’ [34] in studies of conditioned motor behavior in
 monkeys, on which there is a rich harvest of imaginative experimental
 work on scholarly reviews available. I write this in profound respect
 for the investigators who conduct this work with immense ingenuity
 and sophistication. However, I question the soundness of the
 conceptual framework on which such experiments are predicated,
 observations are interpreted, and conclusions are formulated. I
 contend that current practices tend to disregard genuine issues in
 Neurophysiology with its own definitions of what legitimate
 propositions and criteria of valid statements in this discipline are.

  Here is the typical experimental protocol: the experimenter
 uses some measure of neural activity of his/her choice (usual neural
 spike discharges), recorded from a neural structure (selected by
 him/her on some criterion, and determines relations to behavior that
 he/she created as link between two events: an antecedent stimulus (
 chosen by him/her) and a consequent, arbitrary behavior, induced by
 the training protocol [49]. So far, the experimenter has done all the
 ‘deciding’, except leaving it up to the monkey to assign a “value” to
 complying with the experimental protocol. Different investigators
 summarize their experimental objective in various ways (in the
 interest of brevity, I slightly paraphrase, though being careful to
 preserving the original sense): to characterize neural computations
 representing the formation of perceptual decision [12]; to
 investigate the neural basis of a decision process [37]; to examine
 the coupling of neural processes of stimulus selection with response
 preparation [34], reflecting connections between motor system and
 cognitive processes [38] ; to assess neural activity indicating
 probabilistic reward anticipation [22,27]. In Shadlen and Newsome’s
 [37] evocative analogy “it is a jury’s deliberation in which sensory
 signals are the evidence in open court, and motor signals the jury’s
 verdict”. Helpful as metaphors and analogies can be as interim steps
 for making sense of the observation in familiar terms, they also
 import the conceptual burden of their source domain and lead us to
 attribute to the animal a decision and choice making capacity along
 principles for which Psychology has developed evidential and
 conceptual accounts in humans under entirely different conditions,
 and based on different observational facts. Nevertheless, armed with
 the metaphors of choice and decision, we assert that the observed
 neural activity is a “correlate” [19] of a decision to emit the
 observed behavior. As the preceding citations indicate, the observed
 neural activity is variously attributed to perceptual discrimination
 between competing (or conflicting) stimuli, to motor planning, or to
 reward anticipation; the implication being that the neural activity
 stands for (“represents”) one or the other of these psychological
 categories.

So, Mike, when you write like:
 Our unreliabilty is the negative flip-side of our positive ability
 to stop an activity at any point, incl. the beginning and completely
 change tack/ course or whole approach, incl. the task itself, and
 even completely contradict ourself.

It makes me wonder how you can assert a neurophysical basis for 'task' in 
terms of the *brain*, rather than in terms of the folk psychology and 
collective cultural background that has given these things their names. 
It's hard to talk about the brain from the biology up, yes, that's true, 
but it's also very rewarding in that we don't make top-down 
misunderstandings.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Terren Suydam wrote:
 So, Mike, is free will:

 1) an illusion based on some kind of unpredictable, complex but
    *deterministic* interaction of physical components
 2) the result of probabilistic physics - a *non-deterministic* interaction
    described by something like quantum mechanics
 3) the expression of our god-given spirit, or some other non-physical mover
    of physical things

I've already mentioned an alternative on this mailing list that you 
haven't included in your question; would you consider it?
http://heybryan.org/free_will.html
^ Just so that I don't have to keep rewriting it over and over again.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Terren Suydam

Hey Bryan,

To me, this is indistinguishable from the 1st option I laid out. Deterministic 
but impossible to predict.

Terren


--- On Sun, 9/7/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 From: Bryan Bishop [EMAIL PROTECTED]
 Subject: Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Sunday, September 7, 2008, 11:44 AM
 On Friday 05 September 2008, Terren Suydam wrote:
  So, Mike, is free will:
 
  1) an illusion based on some kind of unpredictable, complex but
     *deterministic* interaction of physical components
  2) the result of probabilistic physics - a *non-deterministic*
     interaction described by something like quantum mechanics
  3) the expression of our god-given spirit, or some other non-physical
     mover of physical things
 
 I've already mentioned an alternative on this mailing list that you
 haven't included in your question; would you consider it?
 http://heybryan.org/free_will.html
 ^ Just so that I don't have to keep rewriting it over and over again.
 
 - Bryan
 
 http://heybryan.org/
 Engineers: http://heybryan.org/exp.html
 irc.freenode.net #hplusroadmap
 
 


  




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]:
 MT:By contrast, all deterministic/programmed machines and computers are

 guaranteed to complete any task they begin.

 Will:If only such could be guaranteed! We would never have system hangs,
 dead locks. Even if it could be made so, computer systems would not
 always want to do so.

 Will,

 That's a legalistic, not a valid objection, (although heartfelt!).In the
 above case, the computer is guaranteed to hang - and it does, strictly,
 complete its task.

Not necessarily: the task could be interrupted at that point and the
process stopped or paused indefinitely.

 What's happened is that you have had imperfect knowledge of the program's
 operations. Had you known more, you would have known that it would hang.

If it hung because of multi-process issues, you would need perfect
knowledge of the environment to know the possible timing issues as
well.
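
A textbook illustration of that kind of timing-dependent hang - two threads 
take the same two locks in opposite order, and whether a given run finishes 
depends purely on scheduling (an illustrative sketch, not anyone's real code):

# deadlock.py -- two threads acquire the same two locks in opposite order;
# whether this hangs depends entirely on thread timing, which is the point:
# you cannot tell from the program text alone whether a given run will finish.
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)          # widen the window so the race is easy to hit
        print(name, "waiting for its second lock...")
        with second:
            print(name, "finished")

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "thread-1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "thread-2"))
t1.start(); t2.start()
t1.join(); t2.join()             # with the sleep above, this usually never returns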

 Were your computer like a human mind, it would have been able to say (as
 you/we all do) - well if that part of the problem is going to be difficult,
 I'll ignore it  or.. I'll just make up an answer... or by God I'll keep
 trying other ways until I do solve this.. or... ..  or ...
 Computers, currently, aren't free thinkers.


Computers aren't free thinkers, but that does not follow from an
inability to switch, cancel, pause and restart, or modify tasks - all
of which they can do admirably. They just don't tend to do so, because
they aren't smart enough (and cannot change themselves to be so) to
know when it might be appropriate for what they are trying to do, so
it is left up to the human operator.

I'm very interested in computers that self-maintain, that is, reduce
(or eliminate) the need for a human to be in the loop or know much
about the internal workings of the computer. However, it doesn't need a
vastly different computing paradigm; it just needs a different way of
thinking about the systems. E.g. how can you design a system that does
not need a human around to fix mistakes, upgrade it or maintain it in
general?

As they change their own system I will not know what they are going to
do, because they can get information from the environment about how to
act. This will make it a 'free thinker' of sorts. Whether it will be
enough to get what you want is an empirical matter, as far as I am
concerned.

 Will




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner

Will,

Yes, humans are manifestly a RADICALLY different machine paradigm- if you 
care to stand back and look at the big picture.


Employ a machine of any kind and in general, you know what you're getting - 
some glitches (esp. with complex programs) etc sure - but basically, in 
general,  it will do its job.


Humans are only human, not a machine. Employ one of those, incl. yourself, 
and, by comparison, you have only a v. limited idea of what you're getting - 
whether they'll do the job at all, to what extent, how well. Employ a 
programmer, a plumber etc etc.. Can you get a good one these days?... 
VAST difference.


And that's the negative side of our positive side - the fact that we're 1) 
supremely adaptable, and 2) can tackle those problems that no machine or 
current AGI  - (actually of course, there is no such thing at the mo, only 
pretenders) - can even *begin* to tackle.


Our unreliability
.

That, I suggest, only comes from having no set structure - no computer 
program - no program of action in the first place. (Hey, good  idea, who 
needs a program?)


Here's a simple, extreme example.

Will,  I want you to take up to an hour, and come up with a dance, called 
the Keyboard Shuffle. (A very ill-structured problem.)


Hey, you can do that. You can tackle a seriously ill-structured problem. You 
can embark on an activity you've never done before, presumably had no 
training for, have no structure for,  yet you will, if cooperative, come up 
with something - cobble together a session of that activity, and 
end-product, an actual dance. May be shit, but it'll be a dance.


And that's only an extreme example of how you approach EVERY activity. You 
similarly don't have a structure for your next hour[s], if you're writing an 
essay, or a program, or spending time watching TV, flipping channels. You may 
quickly *adopt* or *form* certain structures/ routines. But they only go 
part way, and you do have to adopt and/or create them.


Now, I assert,  that's what an AGI is - a machine that has no programs, (no 
preset, complete structures for any activities), designed to tackle 
ill-structured problems by creating and adopting structures, not 
automatically following ones that have been laboured over for ridiculous 
amounts of time by human programmers offstage.


And that in parallel, though in an obviously more constrained way, is what 
every living organism is - an extraordinary machine that builds itself 
adaptively and flexibly, as it goes along  -  Dawkins' famous plane that 
builds itself in mid-air. Just as we construct our activities in mid-air. 
Also a very different machine paradigm to any we have at the mo  (although 
obviously lots of people are currently trying to design/understand such 
self-building machines).


P.S. The irony is that scientists and rational philosophers, faced with the 
extreme nature of human imperfection - our extreme fallibility (in the sense 
described above - i.e. liable to fail/give up/procrastinate at any given 
activity at any point in a myriad of ways) - have dismissed it as, 
essentially, down to bugs in the system. Things that can be fixed.


AGI-ers have the capacity like no one else to see and truly appreciate that 
such fallibility = highly desirable adaptability and that humans/animals 
really are fundamentally different machines.


P.P.S.  BTW that's the proper analogy for constructing an AGI - not 
inventing the plane (easy-peasy), but inventing the plane that builds itself 
in mid-air, (whole new paradigm of machine- and mind- invention).



Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner

Sorry - para Our unreliability .. should have continued..

Our unreliability is the negative flip-side of our positive ability to stop 
an activity at any point, incl. the beginning, and completely change tack/ 
course or whole approach, incl. the task itself, and even completely 
contradict ourselves. 







Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread William Pearson
2008/9/6 Mike Tintner [EMAIL PROTECTED]:
 Will,

 Yes, humans are manifestly a RADICALLY different machine paradigm- if you
 care to stand back and look at the big picture.

 Employ a machine of any kind and in general, you know what you're getting -
 some glitches (esp. with complex programs) etc sure - but basically, in
 general,  it will do its job.

What exactly is a desktop computer's job?

 Humans are only human, not a machine. Employ one of those, incl. yourself,
 and, by comparison, you have only a v. limited idea of what you're getting -
 whether they'll do the job at all, to what extent, how well. Employ a
 programmer, a plumber etc etc.. Can you get a good one these days?... VAST
 difference.

If you come across a new computer and do not know how it has been
programmed (whether it has Linux or Windows, and what version), you also
lack knowledge of what it is going to do. An Aibo is a computer as well!
It follows a program.

 And that's the negative side of our positive side - the fact that we're 1)
 supremely adaptable, and 2) can tackle those problems that no machine or
 current AGI  - (actually of course, there is no such thing at the mo, only
 pretenders) - can even *begin* to tackle.

 Our unreliability
 .

 That, I suggest, only comes from having no set structure - no computer
 program - no program of action in the first place. (Hey, good  idea, who
 needs a program?)

You equate set structure with computer program. A computer program is
not set! There is set structure of some sort in the brain, at the
neural level anyway, so you would have to be more precise about what you
mean by lack of set structure.

Wait, program of action? You don't think computer programs are like
lists of things to do in the real world, do you? That is just
something cooked up by the language writers to make things easier to
deal with; a computer program is really only about memory
manipulation. Some of the memory locations might be hooked up to the
real world, but at the end of the day the computer treats it all as
semanticless memory manipulation. Since the things that control the
memory manipulations are themselves in memory, they too can be
manipulated!
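
A toy illustration of that last point - the "program" below is just a list
in memory, and one of its own instructions overwrites another. The tiny
instruction set is invented purely for the example:

# self_modify.py -- the "program" is just data in memory, and since it is
# data, the running program can rewrite it mid-run.

def run(program, memory):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                      # set <addr> <value>
            memory[args[0]] = args[1]
        elif op == "add":                    # add <addr> <value>
            memory[args[0]] += args[1]
        elif op == "patch":                  # patch <index> <new instruction>
            program[args[0]] = args[1]       # the program rewrites itself
        elif op == "print":
            print("memory:", memory)
        pc += 1

program = [
    ("set", 0, 1),
    ("patch", 3, ("add", 0, 100)),           # overwrite a later instruction
    ("print",),
    ("add", 0, 1),                           # never runs as originally written
    ("print",),
]
run(program, {})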

 Here's a simple, extreme example.

 Will,  I want you to take up to an hour, and come up with a dance, called
 the Keyboard Shuffle. (A very ill-structured problem.)

How about you go learn about self-modifying assembly language,
preferably with real-time interrupts. That would be a better use of
the time, I think.


 Will Pearson




RE: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Derek Zahn
It has been explained many times to Tintner that even though computer hardware 
works with a particular set of primitive operations running in sequence, a 
hardwired set of primitive logical operations operating in sequence is NOT the 
theory of intelligence that any AGI researchers are proposing (to my 
knowledge).  A computer is just a system for holding a theory of intelligence 
which does not look like those primitives (at least not since the view that 
intelligence consists of simple interpretations of atomic tokens representing 
physical objects in small numbers of relationships with other such tokens was 
given up decades ago as insufficient).  As an example, the representational 
mechanisms in Novamente and the dynamics of the mind agents that operate on 
them are probably better thought of as churning masses of probability 
relationships with varying and often non-specific semantic interpretations 
than as Tintner's narrow view of what a computer is -- although I do not yet 
understand Novamente in detail.  He has to ignore all such efforts, though, 
because if he paid attention he would have to stop saying that NONE of us 
understand ANYTHING about REAL intelligence - which, he claims, is actually 
based on line drawings, or keyboards, or other childish notions.
 
Though he's in my killfile I do see his posts when others take the bait.  So 
Mike, please try to finally understand this:  AGI researchers do not think of 
intelligence as what you think of as a computer program -- some rigid sequence 
of logical operations programmed by a designer to mimic intelligent behavior.  
We know it is deeper than that.  This has been clear to just about everybody 
for many many years.  By engaging the field at such a level you do nothing 
worthwhile.



 Date: Sat, 6 Sep 2008 15:38:59 +0100
 From: [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Subject: Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

 2008/9/6 Mike Tintner [EMAIL PROTECTED]:
  Will,

  Yes, humans are manifestly a RADICALLY different machine paradigm- if you
  care to stand back and look at the big picture.

  Employ a machine of any kind and in general, you know what you're getting -
  some glitches (esp. with complex programs) etc sure - but basically, in
  general, it will do its job.

 What exactly is a desktop computer's job?




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-06 Thread Mike Tintner
DZ:AGI researchers do not think of intelligence as what you think of as a 
computer program -- some rigid sequence of logical operations programmed by a 
designer to mimic intelligent behavior.

1. Sequence/Structure. The concept I've been using is not that a program is a 
sequence of operations but a structure - including, as per NARS (as I've read 
Pei), a structure that may change more or less continuously. Techno-idiot 
that I am, I am fairly aware that many modern programs are extremely 
sophisticated and complex structures. I take into account, for example, 
Minsky's idea of a possible society of mind, with many different parts 
perhaps competing - not obviously realised in program form yet. 

But programs are nevertheless manifestly structures. Would you dispute that?

And a central point I've been making is that human life and activities are 
manifestly *unstructured* - that in just about everything we do, we struggle 
to impose structure on our activities - to impose order and organization, 
planning, focus etc.

Especially in AGI's central challenge - creativity. Creative activities are 
outstanding examples of unstructured activities, in which structures have to be 
created - painting scenes, writing stories, designing new machines, writing 
music/pop songs - often starting from an entirely blank page. (What's the 
program equivalent?)

2. A Programmer on Programs.  "I am persuaded on multiple grounds that the 
human mind is not always algorithmic, nor merely computational in the 
syntactic sense of computational."
- S. Kauffman, Reinventing the Sacred

Try Chap 12.  Computationally, he trumps most AGI-ers in terms of most AI 
departments, incl. complexity, bioinformatics and general standing, no? Read 
the whole book in fact - it can be read as being entirely about the creative 
problem/challenge of AGI - you liked Barsalou, you'll like this. 






Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner

OK, I'll bite: what's nondeterministic programming if not a contradiction?

Again - v. briefly - it's a reality - nondeterministic programming is a 
reality, so there's no material, mechanistic, software problem in getting a 
machine to decide either way. The only problem is a logical one of doing it 
for sensible reasons. And that's the long part - there is a continuous 
stream of sensible reasons, as there are for current nondeterministic 
computer choices.
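
For reference, "nondeterministic programming" does name a real technique: the 
program states choice points and constraints, and a search - McCarthy's amb 
operator, Prolog-style backtracking - picks a branch that satisfies them. A 
purely illustrative sketch, with exhaustive search standing in for the 
backtracking an amb implementation would do:

# amb_sketch.py -- the program states choice points and constraints; a search
# decides which branch to take. Toy problem invented for illustration.
from itertools import product

def solve():
    # find digits x, y, z such that x*y*z == 36 and x + y + z == 13
    for x, y, z in product(range(1, 10), repeat=3):   # the "choice points"
        if x * y * z == 36 and x + y + z == 13:
            return x, y, z                            # first branch that works
    return None

print(solve())   # one consistent assignment, e.g. (1, 6, 6)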


Yes, strictly, a nondeterministic *program* can be regarded as a 
contradiction - i.e. a structured *series* of instructions to decide 
freely. The way the human mind is programmed is that we are not only free 
to, and have to, *decide* either way about certain decisions, but we are 
also free to *think* about it - i.e. to decide metacognitively whether and 
how we decide at all - we continually decide, for example, to put off the 
decision till later.


So the simple reality of being as free to decide and think as you are, is 
that when you sit down to engage in any task, like write a post, essay, or 
have a conversation, or almost literally anything, there is no guarantee 
that you will start, or continue to the 2nd, 3rd, 4th step, let alone 
complete it. You may jack in your post more or less immediately.  This is at 
once the bane and the blessing of your life, and why you have such 
extraordinary problems finishing so many things. Procrastination.


By contrast, all deterministic/programmed machines and computers are 
guaranteed to complete any task they begin. (Zero procrastination or 
deviation). Very different kinds of machines to us. Very different paradigm. 
(No?)


I would say then that the human mind is strictly not so much 
nondeterministically programmed as briefed. And that's how an AGI will 
have to function. 







Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread William Pearson
2008/9/5 Mike Tintner [EMAIL PROTECTED]:
 By contrast, all deterministic/programmed machines and computers are
 guaranteed to complete any task they begin.

If only such could be guaranteed! We would never have system hangs or
deadlocks. Even if it could be made so, computer systems would not
always want to do so. Have you ever had a programmed computer system
say to you, "This program is not responding, do you wish to terminate
it?" There is no reason in principle why the decision to terminate the
program couldn't be made automatically.

 (Zero procrastination or
 deviation).

Multi-tasking systems deviate all the time...

 Very different kinds of machines to us. Very different paradigm.
 (No?)

We commonly talk about single-program systems because they are
generally interesting, and can be analysed simply. My discussion of
self-modifying systems ignored the interrupt-driven, multi-tasking
nature of the system I want to build, because that makes analysis a
lot harder. I will still be building an interrupt-driven,
multi-tasking system.

  Will Pearson




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner

MT:By contrast, all deterministic/programmed machines and computers are

guaranteed to complete any task they begin.


Will:If only such could be guaranteed! We would never have system hangs,
dead locks. Even if it could be made so, computer systems would not
always want to do so.

Will,

That's a legalistic, not a valid objection (although heartfelt!). In the 
above case, the computer is guaranteed to hang - and it does, strictly, 
complete its task.


What's happened is that you have had imperfect knowledge of the program's 
operations. Had you known more, you would have known that it would hang.


Were your computer like a human mind, it would have been able to say (as 
you/we all do) - well if that part of the problem is going to be difficult, 
I'll ignore it  or.. I'll just make up an answer... or by God I'll keep 
trying other ways until I do solve this.. or... ..  or ... 
Computers, currently, aren't free thinkers. 







Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Abram Demski
Mike,

Will's objection is not quite so easily dismissed. You need to argue
that there is an alternative, not just that Will's is more of the
same.

--Abram

On Fri, Sep 5, 2008 at 9:34 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 MT:By contrast, all deterministic/programmed machines and computers are

 guaranteed to complete any task they begin.

 Will:If only such could be guaranteed! We would never have system hangs,
 dead locks. Even if it could be made so, computer systems would not
 always want to do so.

 Will,

 That's a legalistic, not a valid objection, (although heartfelt!).In the
 above case, the computer is guaranteed to hang - and it does, strictly,
 complete its task.

 What's happened is that you have had imperfect knowledge of the program's
 operations. Had you known more, you would have known that it would hang.

 Were your computer like a human mind, it would have been able to say (as
 you/we all do) - well if that part of the problem is going to be difficult,
 I'll ignore it  or.. I'll just make up an answer... or by God I'll keep
 trying other ways until I do solve this.. or... ..  or ...
 Computers, currently, aren't free thinkers.








Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Abram Demski
Mike,

The philosophical paradigm I'm assuming is that the only two
alternatives are deterministic and random. Either the next state is
completely determined by the last, or it is only probabilistically
determined.

Deterministic does not mean computable, since physical processes can
be totally well-defined without being computable (take Newton's
physics for example).

So,

1) Is the next action that your creativity machine will take intended
to be uniquely defined, given its experience and inputs?

2) is the next action intended to be computable from the experience
and inputs? Meaning (approximately), could the creativity machine be
implemented on a computer?

--Abram

On Fri, Sep 5, 2008 at 9:26 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram: In that case I do not see how your view differs from simplistic

 dualism, as Terren cautioned. If your goal is to make a creativity
 machine, in what sense would the machine be non-algorithmic? Physical
 random processes?


 Abram,

 You're operating within a philosophical paradigm that says all actions and
 problemsolving must be preprogrammed. Nothing else is possible. That ignores
 the majority of real life problems where no program is possible, period.

 Sometimes the best plan is no plan  If you're confronted with the task of
 finding something in a foreign territory, you simply don't (and couldn't)
 have the luxury of a program.

 All you have is a rough idea, as opposed to an algorithm, of the sort of
 things you can do. You know roughly what you're looking for - an object
 somewhere in that territory. You know roughly how to travel and put one
 foor in front of the other and avoid obstacles and pick things up etc.

 (Let's say - you have to find a key that has been lost somewhere in a
 house).

 Well you certainly don't have an algorithm for finding a lost key in a
 house. In fact, if you or anyone would care to spend 5 mins on this problem,
 you would start to realise that no algorithm is possible. Check out
 Kauffman's interview on edge.com. for similar problems  arguments
 .
 So what do/can you do? Make it up as you go along. Start somewhere and keep
 going, and after a while if that doesn't work, try somewhere and something
 else...

 But there's no algorithm for this. Just as there is, or was,  no algorithm
 for your putting the pieces of a jigsaw puzzle together (a much simpler,
 more tightly defined problem).  You just got stuck in. Somewhere. Anywhere
 reasonable.

 Algorithms, from a human POV, are for literal people who have to do things
 by the book - people with a compulsive obsessional disorder - who can't
 bear to confront a blank page. :).V useful *after* you've solved a problem,
 but not in the beginning

 There are no physical, computational, mechanical reasons why machines can't
 be designed on these principles - to proceed with rough ideas of what to do,
 freely consulting and combining options and looking around for fresh ones,
 as they go along, rather than following a preprogrammed list.

 P.S. Nothing in this is strictly random - as in a narrow AI, randomly,
 blindly. working its way through a preprogrammed list. You only try options
 that are appropriate -  routes that appear likely to lead to your goal. I
 would call this unstructured but not (blindly) random thinking.








Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Mike Tintner


Abram,

I don't understand why/how I need to argue an alternative - please explain. 
If it helps, a deterministic, programmed machine can, at any given point, 
only follow one route through a given territory or problem space or maze - 
even if it is surprising - *appearing* to halt/deviate from the plan - to the 
original, less-than-omniscient-of-what-he-hath-wrought programmer. (A 
fundamental programming problem, right?) A creative free machine, like a 
human, really can follow any of what may be a vast range of routes - you 
really can't predict what it will do and, at a basic level, you can be 
surprised by it.



Mike,

Will's objection is not quite so easily dismissed. You need to argue
that there is an alternative, not just that Will's is more of the
same.

--Abram

On Fri, Sep 5, 2008 at 9:34 AM, Mike Tintner [EMAIL PROTECTED] 
wrote:

MT:By contrast, all deterministic/programmed machines and computers are


guaranteed to complete any task they begin.


Will:If only such could be guaranteed! We would never have system hangs,
dead locks. Even if it could be made so, computer systems would not
always want to do so.

Will,

That's a legalistic, not a valid objection, (although heartfelt!).In the
above case, the computer is guaranteed to hang - and it does, strictly,
complete its task.

What's happened is that you have had imperfect knowledge of the program's
operations. Had you known more, you would have known that it would hang.

Were your computer like a human mind, it would have been able to say (as
you/we all do) - well if that part of the problem is going to be 
difficult,
I'll ignore it  or.. I'll just make up an answer... or by God I'll 
keep

trying other ways until I do solve this.. or... ..  or ...
Computers, currently, aren't free thinkers.















Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Terren Suydam

Hi Mike, comments below...

--- On Fri, 9/5/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Again - v. briefly - it's a reality - nondeterministic
 programming is a 
 reality, so there's no material, mechanistic, software
 problem in getting a 
 machine to decide either way. 

It is inherently dualistic to say this. On one hand you're calling it a 
'reality' and on the other you're denying the influence of material or 
mechanism. What exactly is deciding, then - a soul?  How do you get one of 
those into an AI? 

 Yes, strictly, a nondeterministic *program* can be regarded
 as a 
 contradiction - i.e. a structured *series* of instructions
 to decide freely. 

At some point you will have to explain how this deciding freely works. As of 
now, all you have done is name it. 

 The way the human mind is programmed is that
 we are not only free, and 
 have to, *decide* either way about certain decisions, but
 we are also free 
 to *think* about it - i.e. to decide metacognitively
 whether and how we 
 decide at all - we continually decide. for
 example, to put off the 
 decision till later.

There is an entire school of thought, quite mainstream now, in cognitive 
science that says that what appears to be free will is an illusion. Of 
course, you can say that you are free to choose whatever you like, but that 
only speaks to the strength of the illusion - that in itself is not enough to 
disprove the claim. 

In fact, it is plain to see that if you do not commit yourself to this view 
(free will as illusion), you are either a dualist, or you must invoke some kind 
of probabilistic mechanism (as some like Penrose have done by saying that the 
free-will buck stops at the level of quantum mechanics). 

So, Mike, is free will:

1) an illusion based on some kind of unpredictable, complex but *deterministic* 
interaction of physical components
2) the result of probabilistic physics - a *non-deterministic* interaction 
described by something like quantum mechanics
3) the expression of our god-given spirit, or some other non-physical mover of 
physical things


 By contrast, all deterministic/programmed machines and
 computers are 
 guaranteed to complete any task they begin. (Zero
 procrastination or 
 deviation). Very different kinds of machines to us. Very
 different paradigm. 
 (No?)

I think the difference of paradigm between computers and humans is not that one 
is deterministic and one isn't, but rather that one is a paradigm of top-down, 
serialized control, and the other is bottom-up, massively parallel, and 
emergent. It comes down to design vs. emergence.
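
A standard toy illustration of that contrast: in the sketch below every cell 
follows the same purely local rule (an elementary cellular automaton, rule 
110, chosen only as an example), and the global pattern that appears is 
designed nowhere in the program.

# emergence.py -- a toy contrast with top-down control: every cell follows
# the same purely local rule, and global structure emerges that is not
# written down anywhere in the program.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (mid << 1) | right
        out.append((RULE >> neighborhood) & 1)   # look up the local rule
    return out

cells = [0] * 63 + [1]                           # a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)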

Terren


  




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-05 Thread Abram Demski
Mike,

On Fri, Sep 5, 2008 at 1:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 Abram,

 I don't understand why.how I need to argue an alternative - please explain.

I am not sure what to say, but here is my view of the situation. You
are claiming that there is a broad range of things that algorithmic
systems cannot do. You gave some examples. William took a couple of
these examples and argued that they are routinely done by
multi-tasking systems. You say  that those methods do not really
count, because they reduce to normal computation. I say that that is
not a valid response, because that was exactly Will's point, that they
do reduce to normal computations. To make your objection work, you
need to argue that humans do not do the same sort of thing when we
change our minds about something.

 If it helps, a deterministic, programmed machine can, at any given point,
 only follow one route through a given territory or problem space or maze -
 even if surprising  *appearing* to halt/deviate from the plan -   to the
 original, less-than-omniscient-of-what-he-hath-wrought programmer. (A
 fundamental programming problem, right?) A creative free machine, like a
 human, really can follow any of what may be a vast range of routes - and you
 really can't predict what it will do or, at a basic level, be surprised by
 it.

It still sounds like you are describing physical randomness.

--Abram




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread William Pearson
2008/9/4 Mike Tintner [EMAIL PROTECTED]:
 Terren,

 If you think it's all been said, please point me to the philosophy of AI
 that includes it.

 A programmed machine is an organized structure. A keyboard (and indeed a
 computer with keyboard) are something very different - there is no
 organization to those 26 letters etc.   They can be freely combined and
 sequenced to create an infinity of texts. That is the very essence and
 manifestly, the whole point, of a keyboard.

 Yes, the keyboard is only an instrument. But your body - and your brain -
 which use it,  are themselves keyboards. They consist of parts which also
 have no fundamental behavioural organization - that can be freely combined
 and sequenced to create an infinity of sequences of movements and thought -
 dances, texts, speeches, daydreams, postures etc.

 In abstract logical principle, it could all be preprogrammed. But I doubt
 that it's possible mathematically - a program for selecting from an infinity
 of possibilities? And it would be engineering madness - like trying to
 preprogram a particular way of playing music, when an infinite repertoire is
 possible and the environment, (in this case musical culture), is changing
 and evolving with bewildering and unpredictable speed.

 To look at computers as what they are (are you disputing this?) - machines
 for creating programs first, and following them second,  is a radically
 different way of looking at computers. It also fits with radically different
 approaches to DNA - moving away from the idea of DNA as coded program, to
 something that can be, as it obviously can be, played like a keyboard  - see
 Dennis Noble, The Music of Life. It fits with the fact (otherwise
 inexplicable) that all intelligences have both deliberate (creative) and
 automatic (routine) levels - and are not just automatic, like purely
 programmed computers. And it fits with the way computers are actually used
 and programmed, rather than the essentially fictional notion of them as pure
 turing machines.

 And how to produce creativity is the central problem of AGI - completely
 unsolved.  So maybe a new approach/paradigm is worth at least considering
 rather than more of the same? I'm not aware of a single idea from any AGI-er
 past or present that directly addresses that problem - are you?


You can't create a program out of thin air. So you have to have some
sort of program to start with. You probably want to change the initial
program in some way as well as perhaps adding more programming. This
leads you to recursive self-change and its subset RSI (recursive
self-improvement), which is a very tricky business even if you don't
think it is going to go FOOM and take over the world.

So this very list has been discussing in abstract terms the very thing
you want it to be discussing!

  Will




Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Will: You can't create a program out of thin air. So you have to have some
sort of program to start with

Not out of thin air. Out of a general instruction and desire[s]/emotion[s]. 
"Write me a program that will contradict every statement made to it." "Write 
me a single program that will allow me to write video/multimedia 
articles/journalism fast and simply." That's what you actually DO. You start 
with v. general briefs rather than any detailed list of instructions, and 
fill them in as you go along, in an ad hoc, improvisational way - 
manifestly *creating* rather than *following* organized structures of 
behaviour in an initially disorganized way.


Do you honestly think that you write programs in a programmed way? That it's 
not an *art* pace Matt, full of hesitation, halts, meandering, twists and 
turns, dead ends, detours etc?  If you have to have some sort of program to 
start with, how come there is no sign  of that being true, in the creative 
process of programmers actually writing programs?


Do you think that there's a program for improvising on a piano [or other 
form of keyboard]?  That's what AGI's are supposed to do - improvise. So 
create one that can. Like you. And every other living creature. 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Valentina Poletti
Programming definitely feels like an art to me - I get the same feelings as
when I am painting. I always wondered why.

On the philosophical side: in general, technology is the ability of humans to
adapt the environment to themselves instead of the opposite - adapting
themselves to the environment. The environment acts on us and we act on it - we
absorb information from it and we change it while it changes us.

When we want to go a step further and create an AGI, I think we want to
externalize the very ability to create technology - we want the environment
to start adapting to us by itself, spontaneously, by taking on our goals.

Vale



On 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Will:You can't create a program out of thin air. So you have to have some
 sort of program to start with

 Not out of thin air.Out of a general instruction and desire[s]/emotion[s].
 Write me a program that will contradict every statement made to it. Write
 me a single program that will allow me to write video/multimedia
 articles/journalism fast and simply. That's what you actually DO. You start
 with v. general briefs rather than any detailed list of instructions, and
 fill them  in as you go along, in an ad hoc, improvisational way -
 manifestly *creating* rather than *following* organized structures of
 behaviour in an initially disorganized way.

 Do you honestly think that you write programs in a programmed way? That
 it's not an *art* pace Matt, full of hesitation, halts, meandering, twists
 and turns, dead ends, detours etc?  If you have to have some sort of
 program to start with, how come there is no sign  of that being true, in
 the creative process of programmers actually writing programs?

 Do you think that there's a program for improvising on a piano [or other
 form of keyboard]?  That's what AGI's are supposed to do - improvise. So
 create one that can. Like you. And every other living creature.








Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 12:47 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Terren,

 If you think it's all been said, please point me to the philosophy of AI
 that includes it.

I believe what you are suggesting is best understood as an interaction machine.



General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by inductive Turing machines, but I cannot find a good
single reference for that. (I am not knowledgeable on this subject; I
have just heard the idea before.)

--Abram




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Abram,

Thanks for reply. But I don't understand what you see as the connection. An 
interaction machine from my brief googling is one which has physical organs.


Any factory machine can be thought of as having organs. What I am trying to 
forge is a new paradigm of a creative, free  machine as opposed to that 
exemplified by most actual machines, which are rational, deterministic 
machines. The latter can only engage in any task in set ways - and therefore 
engage and combine their organs in set combinations and sequences. Creative 
machines have a more or less infinite range of possible ways of going about 
things, and can combine their organs in a virtually infinite range of 
combinations, (which gives them a slight advantage, adaptively :) ). 
Organisms *are* creative machines; computers and robots *could* be (and are, 
when combined with humans), AGI's will *have* to be.


(To talk of creative machines, more specifically, as I did, as 
keyboards/organisers is to focus on the mechanics of this infinite 
combinativity of organs).


Interaction machines do not seem in any way then to entail what I'm talking 
about - creative machines - keyboards/ organisers - infinite 
combinativity - or the *creation,* as quite distinct from *following*  of 
programs/algorithms and routines..




Abram/MT: If you think it's all been said, please point me to the 
philosophy of AI

that includes it.


I believe what you are suggesting is best understood as an interaction 
machine.




General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by inductive turing machines, but I cannot find a good
single reference for that. (I am not knowledgable on this subject, I
just have heard the idea before.)

--Abram




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam

Mike,

Thanks for the reference to Denis Noble; he sounds very interesting, and his 
views on Systems Biology as expressed on his Wikipedia page are perfectly in 
line with my own thoughts and biases.

I agree in spirit with your basic criticisms regarding current AI and 
creativity. However, it must be pointed out that if you abandon determinism, 
you find yourself in the world of dualism, or worse. There are several ways out 
of this conundrum: one involves complexity/emergence (global behavior cannot be 
understood in terms of reduction to local behavior); another involves 
algorithmic complexity, or complicatedness (behavior cannot be predicted due to 
the limitations of our inborn abilities to mentally model such complicatedness). 
Either way, the behavior can in principle be predicted with sufficient 
computational resources. This is true of humans as well - and if you think it 
isn't, then once again you're committing yourself to some kind of dualistic 
position (e.g., we are motivated by our spirit).

If you accept the proposition that the appearance of free will in an agent 
comes down to one's ability to predict its behavior, then either of the schemes 
above serves to produce free will (or the illusion of it, if you prefer).
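
One hedged way to see why determinism need not mean practical predictability -
a toy example in Python, not anyone's model of the brain - is a chaotic map:
every step is a fixed deterministic rule, yet two trajectories that start a
billionth apart soon bear no resemblance to one another.

    def logistic(x, r=4.0):
        # Fully deterministic update rule.
        return r * x * (1.0 - x)

    a, b = 0.400000000, 0.400000001          # initial conditions differing by 1e-9
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(step, round(a, 6), round(b, 6), round(abs(a - b), 6))
    # Within a few dozen steps the two runs have diverged completely, so an
    # observer who cannot measure the start perfectly cannot predict the outcome.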

Thus is creativity possible while preserving determinism. Of course, you still 
need to have an explanation for how creativity emerges in either case, but in 
contrast to what you said before, some AI folks have indeed worked on this 
issue. 

Terren

--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Thursday, September 4, 2008, 12:47 AM
 Terren,
 
 If you think it's all been said, please point me to the
 philosophy of AI 
 that includes it.
 
 A programmed machine is an organized structure. A keyboard
 (and indeed a 
 computer with keyboard) are something very different -
 there is no 
 organization to those 26 letters etc.   They can be freely
 combined and 
 sequenced to create an infinity of texts. That is the very
 essence and 
 manifestly, the whole point, of a keyboard.
 
 Yes, the keyboard is only an instrument. But your body -
 and your brain - 
 which use it,  are themselves keyboards. They consist of
 parts which also 
 have no fundamental behavioural organization - that can be
 freely combined 
 and sequenced to create an infinity of sequences of
 movements and thought - 
 dances, texts, speeches, daydreams, postures etc.
 
 In abstract logical principle, it could all be
 preprogrammed. But I doubt 
 that it's possible mathematically - a program for
 selecting from an infinity 
 of possibilities? And it would be engineering madness -
 like trying to 
 preprogram a particular way of playing music, when an
 infinite repertoire is 
 possible and the environment, (in this case musical
 culture), is changing 
 and evolving with bewildering and unpredictable speed.
 
 To look at computers as what they are (are you disputing
 this?) - machines 
 for creating programs first, and following them second,  is
 a radically 
 different way of looking at computers. It also fits with
 radically different 
 approaches to DNA - moving away from the idea of DNA as
 coded program, to 
 something that can be, as it obviously can be, played like
 a keyboard  - see 
 Dennis Noble, The Music of Life. It fits with the fact
 (otherwise 
 inexplicable) that all intelligences have both deliberate
 (creative) and 
 automatic (routine) levels - and are not just automatic,
 like purely 
 programmed computers. And it fits with the way computers
 are actually used 
 and programmed, rather than the essentially fictional
 notion of them as pure 
 turing machines.
 
 And how to produce creativity is the central problem of AGI
 - completely 
 unsolved.  So maybe a new approach/paradigm is worth at
 least considering 
 rather than more of the same? I'm not aware of a single
 idea from any AGI-er 
 past or present that directly addresses that problem - are
 you?
 
 
 
  Mike,
 
  There's nothing particularly creative about
 keyboards. The creativity 
  comes from what uses the keyboard. Maybe that was your
 point, but if so 
  the digression about a keyboard is just confusing.
 
  In terms of a metaphor, I'm not sure I understand
 your point about 
  organizers. It seems to me to refer simply
 to that which we humans do, 
  which in essence says general intelligence is
 what we humans do. 
  Unfortunately, I found this last email to be quite
 muddled. Actually, I am 
  sympathetic to a lot of your ideas, Mike, but I also
 have to say that your 
  tone is quite condescending. There are a lot of smart
 people on this list, 
  as one would expect, and a little humility and respect
 on your part would 
  go a long way. Saying things like You see,
 AI-ers simply don't understand 
  computers, or understand only half of them. 
 More often than not you 
  position 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
Mike,

The reason I decided that what you are arguing for is essentially an
interactive model is this quote:

But that is obviously only the half of it.Computers are obviously
much more than that - and  Turing machines. You just have to look at
them. It's staring you in the face. There's something they have that
Turing machines don't. See it? Terren?

They have -   a keyboard.

A keyboard is precisely what the interaction theorists are trying to
account for! Plus the mouse, the ethernet port, et cetera.

Moreover, your general comments fit into the model if interpreted
judiciously. You make a distinction between rule-based and creative
behavior; rule-based behavior could be thought of as isolated
processing of input (receive input, process without interference,
output result) while creative behavior is behavior resulting from
continual interaction with and exploration of the external world. Your
concept of organisms as organizers only makes sense when I see it in
this light: a human organizes the environment by interaction with it,
while a Turing machine is unable to do this because it cannot
explore/experiment/discover.
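
A rough sketch of that contrast in Python - the environment, agent and reward
here are toy assumptions, not a proposal from this thread: the "isolated"
function computes its answer from its input alone, while the interactive agent
only finds out what works by acting and watching what comes back.

    import random

    def isolated(inputs):
        # Rule-based style: take the whole input, process it, return the result.
        return sorted(inputs)

    class InteractiveAgent:
        # Interactive style: act, observe the environment's response, adapt.
        def __init__(self, actions):
            self.values = {a: 0.0 for a in actions}

        def act(self):
            if random.random() < 0.1:                 # occasionally explore
                return random.choice(list(self.values))
            return max(self.values, key=self.values.get)

        def observe(self, action, reward):
            self.values[action] += 0.1 * (reward - self.values[action])

    def environment(action):
        # Hidden structure the agent can only discover by experiment.
        return {"a": 0.2, "b": 1.0, "c": 0.5}[action] + random.gauss(0, 0.05)

    agent = InteractiveAgent(["a", "b", "c"])
    for _ in range(500):
        action = agent.act()
        agent.observe(action, environment(action))
    print(agent.values)   # "b" ends up valued highest, learned purely by interacting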

-Abram

On Thu, Sep 4, 2008 at 1:07 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 Thanks for reply. But I don't understand what you see as the connection. An
 interaction machine from my brief googling is one which has physical organs.

 Any factory machine can be thought of as having organs. What I am trying to
 forge is a new paradigm of a creative, free  machine as opposed to that
 exemplified by most actual machines, which are rational, deterministic
 machines. The latter can only engage in any task in set ways - and therefore
 engage and combine their organs in set combinations and sequences. Creative
 machines have a more or less infinite range of possible ways of going about
 things, and can combine their organs in a virtually infinite range of
 combinations, (which gives them a slight advantage, adaptively :) ).
 Organisms *are* creative machines; computers and robots *could* be (and are,
 when combined with humans), AGI's will *have* to be.

 (To talk of creative machines, more specifically, as I did, as
 keyboards/organisers is to focus on the mechanics of this infinite
 combinativity of organs).

 Interaction machines do not seem in any way then to entail what I'm talking
 about - creative machines - keyboards/ organisers - infinite combinativity
 - or the *creation,* as quite distinct from *following*  of
 programs/algorithms and routines..



 Abram/MT: If you think it's all been said, please point me to the
 philosophy of AI

 that includes it.

 I believe what you are suggesting is best understood as an interaction
 machine.



 General references:

 http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

 http://www.cs.brown.edu/people/pw/papers/ficacm.ps

 http://www.la-acm.org/Archives/laacm9912.html



 The concept that seems most relevant to AI is the learning theory
 provided by inductive turing machines, but I cannot find a good
 single reference for that. (I am not knowledgable on this subject, I
 just have heard the idea before.)

 --Abram




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 And as a matter of scientific, historical fact, computers are first
 and foremost keyboards - i.e.devices for CREATING programs  on
 keyboards, - and only then following them. [Remember how AI gets
 almost everything about intelligence back to front?] There is not and
 never has been a program that wasn't first created on a keyboard.
 Indisputable fact. Almost everything that happens in computers
 happens via the keyboard.

http://heybryan.org/mediawiki/index.php/Egan_quote

 So what exactly is a keyboard? Well, like all keyboards whether of
 computers, musical instruments or typewriters, it is a creative
 instrument. And what makes it creative is that it is - you could say
 - an organiser.

Then you're starting to get into (some well needed) complexity science.

 A device with certain organs (in this case keys) that are designed
 to be creatively organised - arranged in creative, improvised (rather
 than programmed) sequences of  action/ association./organ play.

Yes, but the genotype isn't the phenotype, and the translation from 
the 'code' - the intentions of the programmer and so on - to its 
expression is 'hard'; people get so caught up in folk psychology that 
it's maddening.

 And an extension of the body. Of the organism. All organisms are
 organisers - devices for creatively sequencing actions/
 associations./organs/ nervous systems first and developing fixed,
 orderly sequences/ routines/ programs second.

Some of us (I, for one) say that neural systems are somewhat like optimizers, 
and optimizers are already heavily used in the compilers that compile your 
programs anyway, so be careful: the difference might not be that broad.
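
For readers unfamiliar with what a compiler's optimizer actually does, here is
a generic constant-folding pass (a sketch using Python's ast module, not
anything specific to this thread): it rewrites the form of an expression
without changing what it computes.

    import ast

    def fold(tree):
        # Replace subexpressions built only from literals, e.g. (2 + 3) * x -> 5 * x.
        class Folder(ast.NodeTransformer):
            def visit_BinOp(self, node):
                self.generic_visit(node)
                if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
                    value = eval(compile(ast.Expression(body=node), "<fold>", "eval"))
                    return ast.copy_location(ast.Constant(value=value), node)
                return node
        return ast.fix_missing_locations(Folder().visit(tree))

    tree = ast.parse("(2 + 3) * x + 4 * 5", mode="eval")
    print(ast.unparse(fold(tree)))   # -> "5 * x + 20"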

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Abram,

Thanks. V. helpful and interesting. Yes, on further examination, these 
interactionist guys seem, as you say, to be trying to take into account  the 
embeddedness of the computer.


But no, there's still a huge divide between them and me. I would liken them, 
in the context of this discussion, to Pei, who tries to argue that NARS is 
non-algorithmic because the program is continuously changing - and that it 
therefore satisfies the objections of classical objectors to AI/AGI.


Well, both these guys and Pei are still v. much algorithmic in any 
reasonable sense of the word - still following *structures,* if v. 
sophisticated (and continuously changing) structures, of thought.


And what I am asserting is a  paradigm of a creative machine, which starts 
as, and is, NON-algorithmic and UNstructured  in all its activities, albeit 
that it acquires and creates a multitude of algorithms, or 
routines/structures, for *parts* of those  activities. For example, when you 
write a post,  nearly every word and a great many phrases and even odd 
sentences, will be automatically, algorithmically produced. But the whole 
post, and most paras will *not* be - and *could not* be.


A creative machine has infinite combinative potential. An algorithmic, 
programmed machine has strictly limited combinativity.


And a keyboard is surely the near perfect symbol of infinite, unstructured 
combinativity. It is being, and has been, used in endlessly creative ways - 
and is, along with the blank page and pencil, the central tool of our 
civilisation's creativity. Those randomly arranged letters - clearly 
designed to be infinitely recombined - are the antithesis of a programmed 
machine.
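
A back-of-the-envelope sense of the scale behind that claim (a toy calculation
in Python, nothing more): the number of distinct letter sequences a keyboard
admits grows exponentially with length, so even modest lengths dwarf anything
that could be enumerated in advance, and the total is unbounded as length
grows.

    for length in (5, 10, 20, 40):
        print(length, 26 ** length)      # 26 letters, sequences of this length
    # 5  ->            11,881,376
    # 10 ->   roughly 1.4 * 10**14
    # 20 ->   roughly 2.0 * 10**28
    # 40 ->   roughly 4.0 * 10**56   - and without bound as length increases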


So however those guys account for that keyboard, I don't see them as in any 
way accounting for it in my sense, or in its true, full usage. But thanks 
for your comments. (Oh and I did understand re Bayes - I was and am still 
arguing he isn't valid in many cases, period).




Mike,

The reason I decided that what you are arguing for is essentially an
interactive model is this quote:

But that is obviously only the half of it.Computers are obviously
much more than that - and  Turing machines. You just have to look at
them. It's staring you in the face. There's something they have that
Turing machines don't. See it? Terren?

They have -   a keyboard.

A keyboard is precisely what the interaction theorists are trying to
account for! Plus the mouse, the ethernet port, et cetera.

Moreover, your general comments fit into the model if interpreted
judiciously. You make a distinction between rule-based and creative
behavior; rule-based behavior could be thought of as isolated
processing of input (receive input, process without interference,
output result) while creative behavior is behavior resulting from
continual interaction with and exploration of the external world. Your
concept of organisms as organizers only makes sense when I see it in
this light: a human organizes the environment by interaction with it,
while a Turing machine is unable to do this because it cannot
explore/experiment/discover.

-Abram

On Thu, Sep 4, 2008 at 1:07 PM, Mike Tintner [EMAIL PROTECTED] 
wrote:

Abram,

Thanks for reply. But I don't understand what you see as the connection. 
An
interaction machine from my brief googling is one which has physical 
organs.


Any factory machine can be thought of as having organs. What I am trying 
to

forge is a new paradigm of a creative, free  machine as opposed to that
exemplified by most actual machines, which are rational, deterministic
machines. The latter can only engage in any task in set ways - and 
therefore
engage and combine their organs in set combinations and sequences. 
Creative
machines have a more or less infinite range of possible ways of going 
about

things, and can combine their organs in a virtually infinite range of
combinations, (which gives them a slight advantage, adaptively :) ).
Organisms *are* creative machines; computers and robots *could* be (and 
are,

when combined with humans), AGI's will *have* to be.

(To talk of creative machines, more specifically, as I did, as
keyboards/organisers is to focus on the mechanics of this infinite
combinativity of organs).

Interaction machines do not seem in any way then to entail what I'm 
talking
about - creative machines - keyboards/ organisers - infinite 
combinativity

- or the *creation,* as quite distinct from *following*  of
programs/algorithms and routines..



Abram/MT: If you think it's all been said, please point me to the
philosophy of AI


that includes it.


I believe what you are suggesting is best understood as an interaction
machine.



General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by inductive turing machines, but I cannot find a good
single 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Terren Suydam wrote:
 Thus is creativity possible while preserving determinism. Of course,
 you still need to have an explanation for how creativity emerges in
 either case, but in contrast to what you said before, some AI folks
 have indeed worked on this issue.

http://heybryan.org/mediawiki/index.php/Egan_quote 

Egan solved that particular problem. It's about creation -- even if you 
have the most advanced mathematical theory of the universe, you have just 
made it slightly more recursive, and so on, simply by shuffling around 
neurotransmitters in your head.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 And what I am asserting is a  paradigm of a creative machine, which
 starts as, and is, NON-algorithmic and UNstructured  in all its
 activities, albeit that it acquires and creates a multitude of
 algorithms, or
 routines/structures, for *parts* of those  activities. For example,
 when you write a post,  nearly every word and a great many phrases
 and even odd sentences, will be automatically, algorithmically
 produced. But the whole post, and most paras will *not* be - and
 *could not* be.

Here's an alternative formulation for you to play with, Mike. I suspect 
it is still possible to consider it a creative machine even with an 
algorithmic basis, *because* it is the nature of reality itself to 
compute these things; nothing can have as much information about the 
moment as the moment itself, which is why there's still this element of 
stochasticity and creativity that we see, even if we say that the brain 
is deterministic and so on.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 And how to produce creativity is the central problem of AGI -
 completely unsolved.  So maybe a new approach/paradigm is worth at
 least considering rather than more of the same? I'm not aware of a
 single idea from any AGI-er past or present that directly addresses
 that problem - are you?

Mike, one of the big problems in computer science is the prediction of 
genotypes from phenotypes in general problem spaces. So far, from what 
I've learned, we have no way to guarantee that a resulting process 
is going to be creative. So it's not going to be solved per se in the 
traditional sense of "hey look, here's a foolproof equivalent of 
creativity". I truly hope I am wrong. This is a good way to be wrong 
about the whole thing, I must admit.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 Do you honestly think that you write programs in a programmed way?
 That it's not an *art* pace Matt, full of hesitation, halts,
 meandering, twists and turns, dead ends, detours etc?  If you have
 to have some sort of program to start with, how come there is no
 sign  of that being true, in the creative process of programmers
 actually writing programs?

Two notes on this one. 

I'd like to see fMRI studies of programmers having at it. I've seen this 
done with authors, but not with programmers per se. It would be interesting. But 
this isn't going to work on its own, because it'll just show you lots of active 
regions of the brain - and what good does that do you?

Another thing I would be interested in showing people is all of those 
dead ends and turns that one makes when traveling down those paths. 
I've sometimes been able to go fully into a recording session where I 
could write about a few minutes of decisions for hours on end 
afterwards, but it's just not an efficient way to get the point across. 
I've sometimes wanted to do this for web crawling, when I do my 
browsing and reading, and at least somewhat track my jumps from page to 
page and so on - or even in my own grammar and writing, so that I can 
make sure I optimize it :-) and so that I can see where I was going or 
not going :-). But any solution that requires me to type even /more/ 
will be a sort of contradiction, since then I will have to type even 
more, and more.

Bah, unused data in the brain should help work with this stuff. Tabletop 
fMRI and EROS and so on. Fun stuff. Neurobiofeedback.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Valentina Poletti wrote:
 When we want to step further and create an AGI I think we want to
 externalize the very ability to create technology - we want the
 environment to start adapting to us by itself, spontaneously by
 gaining our goals.

There is a sense of resilience in the whole scheme of things. It's not 
hard to show how stupid each one of us can be in a single moment; but 
luckily our stupid decisions don't blow us up [often] - it's not so 
much luck as it might be resilience. In an earlier email to which I 
replied today, Mike was looking for a resilient computer that didn't 
need code. 

On another note: goals are an interesting folk-psychology mechanism. 
I've seen other cultures inflict their own goals upon their 
environment - rather as the brain contains a map of the skin for 
sensory representation, they map their own goals and aspirations in 
life onto the environment. What alternatives to goals could you use 
when programming? Otherwise you'll not end up with Mike's 
requested 'resilient computer', as I'm calling it.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Terren:   I agree in spirit with your basic criticisms regarding current AI 
and creativity. However, it must be pointed out that if you abandon 
determinism, you find yourself in the world of dualism, or worse.


Nah. One word (though it would take too long here to explain): 
nondeterministic programming.
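
For what it's worth, "nondeterministic programming" does have a standard,
non-contradictory reading: the program states choice points and constraints,
and some search or random mechanism - not the programmer - settles which
choices get made. A minimal sketch in Python, assuming nothing beyond
brute-force search over the choice points:

    from itertools import product

    def solve(choices, constraint):
        # Try the choice points in some order; return the first combination
        # that satisfies the constraint (classic backtracking would do the
        # same thing more cleverly).
        for assignment in product(*choices):
            if constraint(*assignment):
                return assignment
        return None

    # "Choose x and y from 1..9 such that x * y == 24 and x < y" - the program
    # says what must hold, not the sequence of steps that finds it.
    print(solve([range(1, 10), range(1, 10)],
                lambda x, y: x * y == 24 and x < y))    # -> (3, 8)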


Terren: you still need to have an explanation for how creativity emerges in 
either case, but in contrast to what you said before, some AI folks have 
indeed worked on this issue.


Oh, they've done loads of work, often fine work - i.e. produced impressive 
but 'hack' variations on themes: musical, artistic, scripting etc. But the 
people actually producing those creative/hack variations will agree, when 
pressed, that they are not truly creative. And actual AGI-ers, to repeat, 
AFAIK have not produced a single idea about how machines can be creative. 
Not even a proposal, however wrong. Please point to one.


P.S. Glad to see your evolutionary perspective includes the natural kind - I 
had begun to think, obviously wrongly, that it didn't. 







Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Bryan,

You start v. constructively thinking how to test the non-programmed nature 
of  - or simply record - the actual writing of programs, and then IMO fail 
to keep going.


There have to be endless, more precise ways than trying to look at their 
brain.


Verbal protocols.

Ask them to use the keyboard for everything - (how much do you guys use the 
keyboard vs say paper or other things?) - and you can automatically record 
key-presses.
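
One crude way to get that automatic record (a sketch only, assuming a Unix
terminal and the Python standard library; the file name keylog.jsonl is an
arbitrary choice, and a real study would more likely instrument the editor,
with the subject's consent): log each key with a timestamp so the hesitations
and bursts are preserved along with the text.

    import json
    import sys
    import termios
    import time
    import tty

    def record_keys(log_path="keylog.jsonl"):
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        tty.setcbreak(fd)                            # deliver keys one at a time
        try:
            with open(log_path, "a") as log:
                while True:
                    ch = sys.stdin.read(1)
                    if ch == "\x04":                 # Ctrl-D ends the session
                        break
                    log.write(json.dumps({"t": time.time(), "key": ch}) + "\n")
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    if __name__ == "__main__":
        record_keys()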


If they use paper, find a surface that records the pen strokes.

Combine with a camera recording them.

Come on, you must be able to give me still more ways - there are multiple 
possible recording technologies, no?


Hasn't anyone done this in any shape or form? It might sound as if it would 
produce terribly complicated results, but my guess is that they would be 
fascinating just to look at (and compare technique) as well as analyse.



Bryan/MT: Do you honestly think that you write programs in a programmed 
way?

That it's not an *art* pace Matt, full of hesitation, halts,
meandering, twists and turns, dead ends, detours etc? If you have
to have some sort of program to start with, how come there is no
sign of that being true, in the creative process of programmers
actually writing programs?


Two notes on this one.

I'd like to see fMRI studies of programmers having at it. I've seen this
of authors, but not of programmers per-se. It would be interesting. But
this isn't going to work because it'll just show you lots of active
regions of the brain and what good does that do you?

Another thing I would be interested in showing to people is all of those
dead ends and turns that one makes when traveling down those paths.
I've sometimes been able to go fully into a recording session where I
could write about a few minutes of decisions for hours on end
afterwards, but it's just not efficient to getting the point across.
I've sometimes wanted to do this for web crawling, when I do my
browsing and reading, and at least somewhat track my jumps from page to
page and so on, or even in my own grammar and writing so that I can
make sure I optimize it :-) and so that I can see where I was going or
not going :-) but any solution that requires me to type even /more/
will be a sort of contradiction, since then I will have to type even
more, and more.

Bah, unused data in the brain should help work with this stuff. Tabletop
fMRI and EROS and so on. Fun stuff. Neurobiofeedback.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam

OK, I'll bite: what's nondeterministic programming if not a contradiction?

--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Nah. One word (though it would take too long here to
 explain) ; 
 nondeterministic programming.



  




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
Mike,

In that case I do not see how your view differs from simplistic
dualism, as Terren cautioned. If your goal is to make a creativity
machine, in what sense would the machine be non-algorithmic? Physical
random processes?

--Abram

On Thu, Sep 4, 2008 at 6:59 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 Thanks. V. helpful and interesting. Yes, on further examination, these
 interactionist guys seem, as you say, to be trying to take into account  the
 embeddedness of the computer.

 But no, there's still a huge divide between them and me. I would liken them
 in the context of this discussion, to Pei who tries to argue that NARS is
 non-algorithmic, because the program is continuously changing. - and
 therefore satisfies the objections of classical objectors to AI/AGI.

 Well, both these guys and Pei are still v. much algorithmic in any
 reasonable sense of the word - still following *structures,* if v.
 sophisticated (and continuously changing) structures, of thought.

 And what I am asserting is a  paradigm of a creative machine, which starts
 as, and is, NON-algorithmic and UNstructured  in all its activities, albeit
 that it acquires and creates a multitude of algorithms, or
 routines/structures, for *parts* of those  activities. For example, when you
 write a post,  nearly every word and a great many phrases and even odd
 sentences, will be automatically, algorithmically produced. But the whole
 post, and most paras will *not* be - and *could not* be.

 A creative machine has infinite combinative potential. An algorithmic,
 programmed machine has strictly limited combinativity..

 And a keyboard is surely the near perfect symbol of infinite, unstructured
 combinativity. It is being, and has been, used in endlessly creative ways -
 and is, along with the blank page and pencil, the central tool of our
 civilisation's creativity. Those randomly arranged letters - clearly
 designed to be infinitely recombined - are the antithesis of a programmed
 machine.

 So however those guys account for that keyboard, I don't see them as in any
 way accounting for it in my sense, or in its true, full usage. But thanks
 for your comments. (Oh and I did understand re Bayes - I was and am still
 arguing he isn't valid in many cases, period).


 Mike,

 The reason I decided that what you are arguing for is essentially an
 interactive model is this quote:

 But that is obviously only the half of it.Computers are obviously
 much more than that - and  Turing machines. You just have to look at
 them. It's staring you in the face. There's something they have that
 Turing machines don't. See it? Terren?

 They have -   a keyboard.

 A keyboard is precisely what the interaction theorists are trying to
 account for! Plus the mouse, the ethernet port, et cetera.

 Moreover, your general comments fit into the model if interpreted
 judiciously. You make a distinction between rule-based and creative
 behavior; rule-based behavior could be thought of as isolated
 processing of input (receive input, process without interference,
 output result) while creative behavior is behavior resulting from
 continual interaction with and exploration of the external world. Your
 concept of organisms as organizers only makes sense when I see it in
 this light: a human organizes the environment by interaction with it,
 while a Turing machine is unable to do this because it cannot
 explore/experiment/discover.

 -Abram

 On Thu, Sep 4, 2008 at 1:07 PM, Mike Tintner [EMAIL PROTECTED]
 wrote:

 Abram,

 Thanks for reply. But I don't understand what you see as the connection.
 An
 interaction machine from my brief googling is one which has physical
 organs.

 Any factory machine can be thought of as having organs. What I am trying
 to
 forge is a new paradigm of a creative, free  machine as opposed to that
 exemplified by most actual machines, which are rational, deterministic
 machines. The latter can only engage in any task in set ways - and
 therefore
 engage and combine their organs in set combinations and sequences.
 Creative
 machines have a more or less infinite range of possible ways of going
 about
 things, and can combine their organs in a virtually infinite range of
 combinations, (which gives them a slight advantage, adaptively :) ).
 Organisms *are* creative machines; computers and robots *could* be (and
 are,
 when combined with humans), AGI's will *have* to be.

 (To talk of creative machines, more specifically, as I did, as
 keyboards/organisers is to focus on the mechanics of this infinite
 combinativity of organs).

 Interaction machines do not seem in any way then to entail what I'm
 talking
 about - creative machines - keyboards/ organisers - infinite
 combinativity
 - or the *creation,* as quite distinct from *following*  of
 programs/algorithms and routines..



 Abram/MT: If you think it's all been said, please point me to the
 philosophy of AI

 that includes it.

 I believe what you are suggesting is best understood as 

[agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Mike Tintner
Terren's request for new metaphors/paradigms for intelligence threw me 
temporarily off course. Why a new one - why not the old one? The computer. 
But the whole computer.


You see, AI-ers simply don't understand computers, or understand only half 
of them.


What I'm doing here is what I said philosophers do - outline existing 
paradigms and point out how they lack certain essential dimensions.


When AI-ers look at a computer, the paradigm that they impose on it is that 
of a Turing machine - a programmed machine, a device for following programs.


But that is obviously only the half of it. Computers are obviously much more 
than that - and than Turing machines. You just have to look at them. It's 
staring you in the face. There's something they have that Turing machines 
don't. See it? Terren?


They have -   a keyboard.

And as a matter of scientific, historical fact, computers are first and 
foremost keyboards - i.e. devices for CREATING programs on keyboards - and 
only then following them. [Remember how AI gets almost everything about 
intelligence back to front?] There is not and never has been a program that 
wasn't first created on a keyboard. Indisputable fact. Almost everything 
that happens in computers happens via the keyboard.


So what exactly is a keyboard? Well, like all keyboards whether of 
computers, musical instruments or typewriters, it is a creative instrument. 
And what makes it creative is that it is - you could say - an organiser.


A device with certain organs (in this case keys) that are designed to be 
creatively organised - arranged in creative, improvised (rather than 
programmed) sequences of action/association/organ play.


And an extension of the body. Of the organism. All organisms are 
organisers - devices for creatively sequencing actions/associations/organs/
nervous systems first and developing fixed, orderly 
sequences/routines/programs second.


All organisers are manifestly capable of an infinity of creative, novel 
sequences, both rational and organized, and crazy and disorganized.  The 
idea that organisers (including computers) are only meant to follow 
programs - to be straitjacketed in movement and thought -  is obviously 
untrue. Touch the keyboard. Which key comes first? What's the program for 
creating any program? And there lies the secret of AGI.








Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Terren Suydam

Mike,

There's nothing particularly creative about keyboards. The creativity comes 
from what uses the keyboard. Maybe that was your point, but if so the 
digression about a keyboard is just confusing.

In terms of a metaphor, I'm not sure I understand your point about 
organizers. It seems to me to refer simply to that which we humans do, which 
in essence says general intelligence is what we humans do.  Unfortunately, I 
found this last email to be quite muddled. Actually, I am sympathetic to a lot 
of your ideas, Mike, but I also have to say that your tone is quite 
condescending. There are a lot of smart people on this list, as one would 
expect, and a little humility and respect on your part would go a long way. 
Saying things like You see, AI-ers simply don't understand computers, or 
understand only half of them.  More often than not you position yourself as 
the sole source of enlightened wisdom on AI and other subjects, and that does 
not make me want to get to know your ideas any better.  Sorry to veer off topic 
here, but I say these things because I think some of your ideas are valid and 
could really benefit from an adjustment in your
 presentation of them, and yourself.  If I didn't think you had anything 
worthwhile to say, I wouldn't bother.

Terren

--- On Wed, 9/3/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Wednesday, September 3, 2008, 9:42 PM
 Terren's request for new metaphors/paradigms for
 intelligence threw me 
 temporarily off course.Why a new one - why not the old one?
 The computer. 
 But the whole computer.
 
 You see, AI-ers simply don't understand computers, or
 understand only half 
 of them
 
 What I'm doing here is what I said philosophers do -
 outline existing 
 paradigms and point out how they lack certain essential
 dimensions.
 
 When AI-ers look at a computer, the paradigm that they
 impose on it is that 
 of a Turing machine - a programmed machine, a device for
 following programs.
 
 But that is obviously only the half of it.Computers are
 obviously much more 
 than that - and  Turing machines. You just have to look at
 them. It's 
 staring you in the face. There's something they have
 that Turing machines 
 don't. See it? Terren?
 
 They have -   a keyboard.
 
 And as a matter of scientific, historical fact, computers
 are first and 
 foremost keyboards - i.e.devices for CREATING programs  on
 keyboards, - and 
 only then following them. [Remember how AI gets almost
 everything about 
 intelligence back to front?] There is not and never has
 been a program that 
 wasn't first created on a keyboard. Indisputable fact.
 Almost everything 
 that happens in computers happens via the keyboard.
 
 So what exactly is a keyboard? Well, like all keyboards
 whether of 
 computers, musical instruments or typewriters, it is a
 creative instrument. 
 And what makes it creative is that it is - you could say -
 an organiser.
 
 A device with certain organs (in this case
 keys) that are designed to be 
 creatively organised - arranged in creative, improvised
 (rather than 
 programmed) sequences of  action/ association./organ
 play.
 
 And an extension of the body. Of the organism. All
 organisms are 
 organisers - devices for creatively sequencing
 actions/ 
 associations./organs/ nervous systems first and developing
 fixed, orderly 
 sequences/ routines/ programs second.
 
 All organisers are manifestly capable of an infinity of
 creative, novel 
 sequences, both rational and organized, and crazy and
 disorganized.  The 
 idea that organisers (including computers) are only meant
 to follow 
 programs - to be straitjacketed in movement and thought - 
 is obviously 
 untrue. Touch the keyboard. Which key comes first?
 What's the program for 
 creating any program? And there lies the secret of AGI.
 
 
 
 
 


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-03 Thread Mike Tintner

Terren,

If you think it's all been said, please point me to the philosophy of AI 
that includes it.


A programmed machine is an organized structure. A keyboard (and indeed a 
computer with a keyboard) is something very different - there is no 
organization to those 26 letters etc. They can be freely combined and 
sequenced to create an infinity of texts. That is the very essence and, 
manifestly, the whole point of a keyboard.


Yes, the keyboard is only an instrument. But your body - and your brain - 
which use it,  are themselves keyboards. They consist of parts which also 
have no fundamental behavioural organization - that can be freely combined 
and sequenced to create an infinity of sequences of movements and thought - 
dances, texts, speeches, daydreams, postures etc.


In abstract logical principle, it could all be preprogrammed. But I doubt 
that it's possible mathematically - a program for selecting from an infinity 
of possibilities? And it would be engineering madness - like trying to 
preprogram a particular way of playing music, when an infinite repertoire is 
possible and the environment, (in this case musical culture), is changing 
and evolving with bewildering and unpredictable speed.


To look at computers as what they are (are you disputing this?) - machines 
for creating programs first, and following them second,  is a radically 
different way of looking at computers. It also fits with radically different 
approaches to DNA - moving away from the idea of DNA as coded program, to 
something that can be, as it obviously can be, played like a keyboard  - see 
Denis Noble, The Music of Life. It fits with the fact (otherwise 
inexplicable) that all intelligences have both deliberate (creative) and 
automatic (routine) levels - and are not just automatic, like purely 
programmed computers. And it fits with the way computers are actually used 
and programmed, rather than the essentially fictional notion of them as pure 
Turing machines.


And how to produce creativity is the central problem of AGI - completely 
unsolved.  So maybe a new approach/paradigm is worth at least considering 
rather than more of the same? I'm not aware of a single idea from any AGI-er 
past or present that directly addresses that problem - are you?





Mike,

There's nothing particularly creative about keyboards. The creativity 
comes from what uses the keyboard. Maybe that was your point, but if so 
the digression about a keyboard is just confusing.


In terms of a metaphor, I'm not sure I understand your point about 
organizers. It seems to me to refer simply to that which we humans do, 
which in essence says general intelligence is what we humans do. 
Unfortunately, I found this last email to be quite muddled. Actually, I am 
sympathetic to a lot of your ideas, Mike, but I also have to say that your 
tone is quite condescending. There are a lot of smart people on this list, 
as one would expect, and a little humility and respect on your part would 
go a long way. Saying things like You see, AI-ers simply don't understand 
computers, or understand only half of them.  More often than not you 
position yourself as the sole source of enlightened wisdom on AI and other 
subjects, and that does not make me want to get to know your ideas any 
better.  Sorry to veer off topic here, but I say these things because I 
think some of your ideas are valid and could really benefit from an 
adjustment in your
presentation of them, and yourself.  If I didn't think you had anything 
worthwhile to say, I wouldn't bother.


Terren

--- On Wed, 9/3/08, Mike Tintner [EMAIL PROTECTED] wrote:


From: Mike Tintner [EMAIL PROTECTED]
Subject: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
To: agi@v2.listbox.com
Date: Wednesday, September 3, 2008, 9:42 PM
Terren's request for new metaphors/paradigms for
intelligence threw me
temporarily off course.Why a new one - why not the old one?
The computer.
But the whole computer.

You see, AI-ers simply don't understand computers, or
understand only half
of them

What I'm doing here is what I said philosophers do -
outline existing
paradigms and point out how they lack certain essential
dimensions.

When AI-ers look at a computer, the paradigm that they
impose on it is that
of a Turing machine - a programmed machine, a device for
following programs.

But that is obviously only the half of it.Computers are
obviously much more
than that - and  Turing machines. You just have to look at
them. It's
staring you in the face. There's something they have
that Turing machines
don't. See it? Terren?

They have -   a keyboard.

And as a matter of scientific, historical fact, computers
are first and
foremost keyboards - i.e.devices for CREATING programs  on
keyboards, - and
only then following them. [Remember how AI gets almost
everything about
intelligence back to front?] There is not and never has
been a program that
wasn't first created on a keyboard. Indisputable fact.
Almost everything