Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Hi,


  What I think is that the set of patterns in perceptual and motoric data
 has
  radically different statistical properties than the set of patterns in
  linguistic and mathematical data ... and that the properties of the set
 of
  patterns in perceptual and motoric data is intrinsically better suited to
  the needs of a young, ignorant, developing mind.

 Sure it is. Systems with different sensory channels will never fully
 understand each other. I'm not saying that one channel (verbal) can
 replace another (visual), but that both of them (and many others) can
 give symbol/representation/concept/pattern/whatever-you-call-it
 meaning. No on is more real than others.


True, but some channels may -- due to the statistical properties of the data
coming across them -- be more conducive to the development of AGI than
others...




  All these different domains of pattern display what I've called a dual
  network structure ... a collection of hierarchies (of progressively more
  and more complex, hierarchically nested patterns) overlayed with a
  heterarchy (of overlapping, interrelated patterns).  But the statistics
 of
  the dual networks in the different domains is different.  I haven't fully
  plumbed the difference yet ... but, among the many differences is that in
  perceptual/motoric domains, you have a very richly connected dual network
 at
  a very low level of the overall dual network hierarchy -- i.e., there's a
  richly connected web of relatively simple stuff to understand ... and
 then
  these simple things are related to (hence useful for learning) the more
  complex things, etc.

 True, but can you say that the relations among words, or concepts, are
 simpler?



I think the set of relations among words (considered in isolation, without
their referents) is less rich than the set of relations among perceptions
of a complex world, and far less rich than the set of relations among
{perceptions of a complex world, plus words referring to these
perceptions}

And I think that this lesser richness makes sequences of words a much worse
input stream for a developing AGI

I realize that quantifying less rich in the above is a significant
challenge, but I'm presenting my intuition anyway...

Also, relatedly and just as critically, the set of perceptions regarding the
body and its interactions with the environment, are well-structured to give
the mind a sense of its own self.  This primitive infantile sense of
body-self gives rise to the more sophisticated phenomenal self of the child
and adult mind, which gives rise to reflective consciousness, the feeling of
will, and other characteristic structures of humanlike general
intelligence.  A stream of words doesn't seem to give an AI the same kind of
opportunity for self-development




 In this short paper, I make no attempt to settle all issues, but just
 to point out a simple fact --- a laptop has a body, and is not less
 embodied than Roomba or Mindstorms --- that seems have been ignored in
 the previous discussion.


I agree with your point, but I wonder if it's partially a straw man
argument.  The proponents of embodiment as a key  aspect of AGI don't of
course think that Cyc is disembodied in a maximally strong sense -- they
know it interacts with the world via physical means.  What they mean by
embodied is something different.

I don't have the details at my finger tips, but I know that Maturana, Varela
and Eleanor Rosch took some serious pains to carefully specify the sense in
which they feel embodiment is critical to intelligence, and to distinguish
their sense of embodiment from the trivial sense of communicating via
physical signals.

I suggest your paper should probably include a careful response to the
characterization of embodiment presented in

http://www.*amazon*.com/*Embodied*-*Mind*
-Cognitive-Science-Experience/dp/0262720213

I note that I do not agree with the arguments of Varela, Rosch, Brooks,
etc.  I just think their characterization of embodiment is an interesting
and nontrivial one, and I'm not sure NARS with a text stream as input would
be embodied according to their definition...

-- Ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel

 Also, relatedly and just as critically, the set of perceptions regarding
 the body and its interactions with the environment, are well-structured to
 give the mind a sense of its own self.  This primitive infantile sense of
 body-self gives rise to the more sophisticated phenomenal self of the child
 and adult mind, which gives rise to reflective consciousness, the feeling of
 will, and other characteristic structures of humanlike general
 intelligence.  A stream of words doesn't seem to give an AI the same kind of
 opportunity for self-development



To put it perhaps more clearly: I think that a standard laptop is too
lacking in

-- proprioceptive perception

-- perception of its own relationship to other entities in the world around
it

to form a physical self-image based on its perceptions ... hence a standard
laptop will not likely be driven by its experience to develop a phenomenal
self ... hence, I suspect, no generally intelligent mind...

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread William Pearson
2008/9/4 Mike Tintner [EMAIL PROTECTED]:
 Terren,

 If you think it's all been said, please point me to the philosophy of AI
 that includes it.

 A programmed machine is an organized structure. A keyboard (and indeed a
 computer with keyboard) are something very different - there is no
 organization to those 26 letters etc.   They can be freely combined and
 sequenced to create an infinity of texts. That is the very essence and
 manifestly, the whole point, of a keyboard.

 Yes, the keyboard is only an instrument. But your body - and your brain -
 which use it,  are themselves keyboards. They consist of parts which also
 have no fundamental behavioural organization - that can be freely combined
 and sequenced to create an infinity of sequences of movements and thought -
 dances, texts, speeches, daydreams, postures etc.

 In abstract logical principle, it could all be preprogrammed. But I doubt
 that it's possible mathematically - a program for selecting from an infinity
 of possibilities? And it would be engineering madness - like trying to
 preprogram a particular way of playing music, when an infinite repertoire is
 possible and the environment, (in this case musical culture), is changing
 and evolving with bewildering and unpredictable speed.

 To look at computers as what they are (are you disputing this?) - machines
 for creating programs first, and following them second,  is a radically
 different way of looking at computers. It also fits with radically different
 approaches to DNA - moving away from the idea of DNA as coded program, to
 something that can be, as it obviously can be, played like a keyboard  - see
 Dennis Noble, The Music of Life. It fits with the fact (otherwise
 inexplicable) that all intelligences have both deliberate (creative) and
 automatic (routine) levels - and are not just automatic, like purely
 programmed computers. And it fits with the way computers are actually used
 and programmed, rather than the essentially fictional notion of them as pure
 turing machines.

 And how to produce creativity is the central problem of AGI - completely
 unsolved.  So maybe a new approach/paradigm is worth at least considering
 rather than more of the same? I'm not aware of a single idea from any AGI-er
 past or present that directly addresses that problem - are you?


You can't create a program out of thin air. So you have to have some
sort of program to start with. You probably want to change the initial
program in some way as well as perhaps adding more programming. This
leads you to recursive self-change and its subset RSI, which is a very
tricky business even if you don't think it is going to go FOOM and
take over the world.

So this very list has been discussing in abstract terms the very thing
you want it to be discussing!

  Will


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment.. P.S.

2008-09-04 Thread Valentina Poletti
That's if you aim at getting an AGI that is intelligent in the real world. I
think some people on this list (incl Ben perhaps) might argue that for now -
for safety purposes but also due to costs - it might be better to build an
AGI that is intelligent in a simulated environment.

Ppl like Ben argue that the concept/engineering aspect of intelligence
is *independent
of the type of environment*. That is, given you understand how to make it in
a virtual environment you can then tarnspose that concept into a real
environment more safely.

Some other ppl on the other hand believe intelligence is a property of
humans only. So you have to simulate every detail about humans to get that
intelligence. I'd say that among the two approaches the first one (Ben's) is
safer and more realistic.

I am more concerned with the physics aspect of the whole issue I guess.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-04 Thread Valentina Poletti
That sounds like a useful purpose. Yeh, I don't believe in fast and quick
methods either.. but also humans tend to overestimate their own
capabilities, so it will probably take more time than predicted.

On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:

 2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
  Got ya, thanks for the clarification. That brings up another question.
 Why
  do we want to make an AGI?
 
 

 To understand ourselves as intelligent agents better? It might enable
 us to have decent education policy, rehabilitation of criminals.

 Even if we don't make human like AGIs the principles should help us
 understand ourselves, just as optics of the lens helped us understand
 the eye and aerodynamics of wings helps us understand bird flight.

 It could also gives us more leverage, more brain power on the planet
 to help solve the planets problems.

 This is all predicated on the idea that fast take off is pretty much
 impossible. It is possible then all bets are off.

 Will


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] What is Friendly AI?

2008-09-04 Thread Valentina Poletti
On 8/31/08, Steve Richfield [EMAIL PROTECTED] wrote:


  Protective mechanisms to restrict their thinking and action will only
 make things WORSE.



Vlad, this was my point in the control e-mail, I didn't express it quite as
clearly, partly because coming from a different background I use a slightly
different language.

Also, Steve made another good point here: loads of people at any moment do
whatever they can to block the advancement and progress of human beings as
it is now. How will *those* people react to a progress as advanced as AGI?
That's why I keep stressing the social factor in intelligence as very
important part to consider.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] What is Friendly AI?

2008-09-04 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:02 PM, Valentina Poletti [EMAIL PROTECTED] wrote:

 Vlad, this was my point in the control e-mail, I didn't express it quite as
 clearly, partly because coming from a different background I use a slightly
 different language.

 Also, Steve made another good point here: loads of people at any moment do
 whatever they can to block the advancement and progress of human beings as
 it is now. How will those people react to a progress as advanced as AGI?
 That's why I keep stressing the social factor in intelligence as very
 important part to consider.


No, it's not important, unless these people start to pose a serious
threat to the project. You need to care about what is the correct
answer, not what is a popular one, in the case where popular answer is
dictated by ignorance.

P.S. AGI? I'm again not sure what we are talking about here.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Sure it is. Systems with different sensory channels will never fully
 understand each other. I'm not saying that one channel (verbal) can
 replace another (visual), but that both of them (and many others) can
 give symbol/representation/concept/pattern/whatever-you-call-it
 meaning. No on is more real than others.

 True, but some channels may -- due to the statistical properties of the data
 coming across them -- be more conducive to the development of AGI than
 others...

I haven't seen any evidence for that. For human intelligence, maybe,
but for intelligence in general, I doubt it.

 I think the set of relations among words (considered in isolation, without
 their referents) is less rich than the set of relations among perceptions
 of a complex world, and far less rich than the set of relations among
 {perceptions of a complex world, plus words referring to these
 perceptions}

Not necessarily. Actually some people may even make the opposite
argument: relations among non-linguistic components in experience are
basically temporal or spatial, while the relations among words and
concepts have much more types. I won't go that far, but I guess in
some sense all channels may have the same (potential) richness.

 And I think that this lesser richness makes sequences of words a much worse
 input stream for a developing AGI

 I realize that quantifying less rich in the above is a significant
 challenge, but I'm presenting my intuition anyway...

If your condition is true, then your conclusion follows, but the
problem is in that IF.

 Also, relatedly and just as critically, the set of perceptions regarding the
 body and its interactions with the environment, are well-structured to give
 the mind a sense of its own self.

We can say the same for every input/out operation set of an
intelligent system. SELF is defined by what the system can feel and
do.

 This primitive infantile sense of
 body-self gives rise to the more sophisticated phenomenal self of the child
 and adult mind, which gives rise to reflective consciousness, the feeling of
 will, and other characteristic structures of humanlike general
 intelligence.

Agree.

 A stream of words doesn't seem to give an AI the same kind of
 opportunity for self-development

If the system just sits there and passively accept whatever words come
into it, what you said is true. If the incoming words is causally
related to its outgoing words, will you still say that?

 I agree with your point, but I wonder if it's partially a straw man
 argument.

If you read Brooks or Pfeifer, you'll see that most of their arguments
are explicitly or implicitly based on the myth that only a robot has
a body, have real sensor, live in a real world, ...

 The proponents of embodiment as a key  aspect of AGI don't of
 course think that Cyc is disembodied in a maximally strong sense -- they
 know it interacts with the world via physical means.  What they mean by
 embodied is something different.

Whether a system is embodied does not depends on hardware, but on semantics.

 I don't have the details at my finger tips, but I know that Maturana, Varela
 and Eleanor Rosch took some serious pains to carefully specify the sense in
 which they feel embodiment is critical to intelligence, and to distinguish
 their sense of embodiment from the trivial sense of communicating via
 physical signals.

That is different. The embodiment school in CogSci doesn't focus on
body (they know every human already has one), but on experience.
However, they have their misconception about AI. As I mentioned,
Barsalou and Lakoff both thought strong AI is unlikely because
computer cannot have human experience --- I agree what they said
except their narrow conception of intelligence (CogSci people tend to
take intelligence as human intelligence).

 I suggest your paper should probably include a careful response to the
 characterization of embodiment presented in

 http://www.amazon.com/Embodied-Mind-Cognitive-Science-Experience/dp/0262720213

 I note that I do not agree with the arguments of Varela, Rosch, Brooks,
 etc.  I just think their characterization of embodiment is an interesting
 and nontrivial one, and I'm not sure NARS with a text stream as input would
 be embodied according to their definition...

If I got the time (and motivation) to extend the paper into a journal
paper, I'll double the length by discussing embodiment in CogSci. In
the current version, as a short conference paper, I'd rather focus on
embodiment in AI, and only attack the robot myth.

Pei


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:12 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Also, relatedly and just as critically, the set of perceptions regarding
 the body and its interactions with the environment, are well-structured to
 give the mind a sense of its own self.  This primitive infantile sense of
 body-self gives rise to the more sophisticated phenomenal self of the child
 and adult mind, which gives rise to reflective consciousness, the feeling of
 will, and other characteristic structures of humanlike general
 intelligence.  A stream of words doesn't seem to give an AI the same kind of
 opportunity for self-development

 To put it perhaps more clearly: I think that a standard laptop is too
 lacking in

 -- proprioceptive perception

 -- perception of its own relationship to other entities in the world around
 it

Obviously you didn't consider the potential a laptop has with its
network connection, which in theory can give it all kinds of
perception by connecting it to some input/output device.

Even if we exclude network, your conclusion is still problematic. Why
a touchpad cannot provide proprioceptive perception? I agree it
usually doesn't, because the way it is used, but that doesn't mean it
cannot, under all possible usage. The same is true for keyboard. The
current limitation of the standard computer is more in the way we use
them than in the hardware itself.

 to form a physical self-image based on its perceptions ... hence a standard
 laptop will not likely be driven by its experience to develop a phenomenal
 self ... hence, I suspect, no generally intelligent mind...

Of course it won't have a visual concept of self, but a system like
NARS has the potential to grow into an intelligent operating system,
with a notion of self based on what it can feel and do, as well as
the causal relations among them --- If there is a file in this
folder, then I should have felt it, it cannot be there because I've
deleted the contents.

I know some people won't agree there is a self in such a system,
because it doesn't look like themselves. Too bad human intelligence is
the only known example of intelligence ...

Pei


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Will:You can't create a program out of thin air. So you have to have some
sort of program to start with

Not out of thin air.Out of a general instruction and desire[s]/emotion[s]. 
Write me a program that will contradict every statement made to it. Write 
me a single program that will allow me to write video/multimedia 
articles/journalism fast and simply. That's what you actually DO. You start 
with v. general briefs rather than any detailed list of instructions, and 
fill them  in as you go along, in an ad hoc, improvisational way - 
manifestly *creating* rather than *following* organized structures of 
behaviour in an initially disorganized way.


Do you honestly think that you write programs in a programmed way? That it's 
not an *art* pace Matt, full of hesitation, halts, meandering, twists and 
turns, dead ends, detours etc?  If you have to have some sort of program to 
start with, how come there is no sign  of that being true, in the creative 
process of programmers actually writing programs?


Do you think that there's a program for improvising on a piano [or other 
form of keyboard]?  That's what AGI's are supposed to do - improvise. So 
create one that can. Like you. And every other living creature. 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
I agree with Pei in that a robot's experience is not necessarily more real
than that of a, say, web-embedded agent - if anything it is closer to the *
human* experience of the world. But who knows how limited our own sensory
experience is anyhow. Perhaps a better intelligence would comprehend the
world better through a different emboyment.

However, could you guys be more specific regarding the statistical
differences of different types of data? What kind of differences are you
talking about specifically (mathematically)? And what about the differences
at the various levels of the dual-hierarchy? Has any of your work or
research suggested this hypothesis, if so which?



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] What Time Is It? No. What clock is it?

2008-09-04 Thread Valentina Poletti
Great articles!

On 9/4/08, Brad Paulsen [EMAIL PROTECTED] wrote:

 Hey gang...

 It's Likely That Times Are Changing

 http://www.sciencenews.org/view/feature/id/35992/title/It%E2%80%99s_Likely_That_Times_Are_Changing
 A century ago, mathematician Hermann Minkowski famously merged space with
 time, establishing a new foundation for physics;  today physicists are
 rethinking how the two should fit together.

 A PDF of a paper presented in March of this year, and upon which the
 article is based, can be found at http://arxiv.org/abs/0805.4452.  It's a
 free download.  Lots of equations, graphs, oh my!

 Cheers,
 Brad




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Valentina Poletti
Programming definitely feels like an art to me - I get the same feelings as
when I am painting. I always wondered why.

On the phylosophical side in general technology is the ability of humans to
adapt the environment to themselves instead of the opposite - adapting to
the environment. The environment acts on us and we act on it - we absorb
information from it and we change it while it changes us.

When we want to step further and create an AGI I think we want to
externalize the very ability to create technology - we want the environment
to start adapting to us by itself, spontaneously by gaining our goals.

Vale



On 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Will:You can't create a program out of thin air. So you have to have some
 sort of program to start with

 Not out of thin air.Out of a general instruction and desire[s]/emotion[s].
 Write me a program that will contradict every statement made to it. Write
 me a single program that will allow me to write video/multimedia
 articles/journalism fast and simply. That's what you actually DO. You start
 with v. general briefs rather than any detailed list of instructions, and
 fill them  in as you go along, in an ad hoc, improvisational way -
 manifestly *creating* rather than *following* organized structures of
 behaviour in an initially disorganized way.

 Do you honestly think that you write programs in a programmed way? That
 it's not an *art* pace Matt, full of hesitation, halts, meandering, twists
 and turns, dead ends, detours etc?  If you have to have some sort of
 program to start with, how come there is no sign  of that being true, in
 the creative process of programmers actually writing programs?

 Do you think that there's a program for improvising on a piano [or other
 form of keyboard]?  That's what AGI's are supposed to do - improvise. So
 create one that can. Like you. And every other living creature.






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel

 Obviously you didn't consider the potential a laptop has with its
 network connection, which in theory can give it all kinds of
 perception by connecting it to some input/output device.


yes, that's true ... I was considering the laptop w/ only a power cable as
the AI system in question.  Of course my point does not apply to a laptop
that's being used as an on-board control system for an android robot, or a
laptop that's connected to a network of sensors and actuators via the net,
etc.  Sorry I did not clarify my terms better!

Similarly the human brain lacks much proprioception and control in
isolation, and probably would not be able to achieve a high level of general
intelligence without the right peripherals (such as the rest of the human
body ;-)


Even if we exclude network, your conclusion is still problematic. Why
 a touchpad cannot provide proprioceptive perception? I agree it
 usually doesn't, because the way it is used, but that doesn't mean it
 cannot, under all possible usage. The same is true for keyboard. The
 current limitation of the standard computer is more in the way we use
 them than in the hardware itself.


I understand that a keyboard and touchpad do provide proprioceptive input,
but I think it's too feeble, and too insensitively respondent to changes in
the environment and the relation btw the laptop and the environment, to
serve as the foundation for a robust self-model or a powerful general
intelligence.




  to form a physical self-image based on its perceptions ... hence a
 standard
  laptop will not likely be driven by its experience to develop a
 phenomenal
  self ... hence, I suspect, no generally intelligent mind...

 Of course it won't have a visual concept of self, but a system like
 NARS has the potential to grow into an intelligent operating system,
 with a notion of self based on what it can feel and do, as well as
 the causal relations among them --- If there is a file in this
 folder, then I should have felt it, it cannot be there because I've
 deleted the contents.


My suggestion is that the file system lacks the complexity of structure and
dynamics to support the emergence of a robust self-model, and powerful
general intelligence...

Not in principle ... potentially a file system *could* display the needed
complexity, but I don't think any file systems on laptops now come close...

Whether the Internet as a whole contains the requisite complexity is a
subtler question.



 I know some people won't agree there is a self in such a system,
 because it doesn't look like themselves. Too bad human intelligence is
 the only known example of intelligence ...


I would call a self any internal, explicit model that a system creates
that allows it to predict its own behaviors in a sufficient variety of
contexts  This need not have a visual aspect nor a great similarity to a
human self.

-- Ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
 OK, then the observable universe has a finite description length. We don't 
 need to describe anything else to model it, so by universe I mean only the 
 observable part.


But, what good is it to only have finite description of the observable
part, since new portions of the universe enter the observable portion
continually? Physics cannot then be modeled as a computer program,
because computer programs do not increase in Kolmogorov complexity as
they run (except by a logarithmic term to count how long it has been
running).

 I am saying that the universe *is* deterministic. It has a definite quantum 
 state, but we would need about 10^122 bits of memory to describe it. Since we 
 can't do that, we have to resort to approximate models like quantum mechanics.


Yes, I understood that you were suggesting a deterministic universe.
What I'm saying is that it seems plausible for us to be able to have
an accurate knowledge of that deterministic physics, lacking only the
exact knowledge of particle locations et cetera. We would be forced to
use probabilistic methods as you argue, but they would not necessarily
be built into our physical theories; instead, our physical theories
act as a deterministic function that is given probabilistic input and
therefore yields probabilistic output.

 I believe there is a simpler description. First, the description length is 
 increasing with the square of the age of the universe, since it is 
 proportional to area. So it must have been very small at one time. Second, 
 the most efficient way to enumerate all possible universes would be to run 
 each B-bit machine for 2^B steps, starting with B = 0, 1, 2... until 
 intelligent life is found. For our universe, B ~ 407. You could reasonably 
 argue that the algorithmic complexity of the free parameters of string theory 
 and general relativity is of this magnitude. I believe that Wolfram also 
 argued that the (observable) universe is a few lines of code.


I really do not understand your willingness to restrict universe to
observable universe. The description length of the observable
universe was very small at one time because at that time none of the
basic stuffs of the universe had yet interacted, so by definition the
description length of the observable universe for each basic entity is
just the description length of that entity. As time moves forward, the
entities interact and the description lengths of their observable
universes increase. Similarly, today, one might say that the
observable universe for each person is slightly different, and indeed
the universe observable from my right hand would be slightly different
then the one observable from my left. They could have differing
description lengths.

In short, I think you really want to apply your argument to the
actual universe, not merely observable subsets... or if you don't,
you should, because otherwise it seems like a very strange argument.

 But even if we discover this program it does not mean we could model the 
 universe deterministically. We would need a computer larger than the universe 
 to do so.

Agreed... partly thanks to your argument below.

 There is a simple argument using information theory. Every system S has a 
 Kolmogorov complexity K(S), which is the smallest size that you can compress 
 a description of S to. A model of S must also have complexity K(S). However, 
 this leaves no space for S to model itself. In particular, if all of S's 
 memory is used to describe its model, there is no memory left over to store 
 any results of the simulation.

Point conceded.


--Abram


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Hi Pei,

I think your point is correct that the notion of embodiment presented by
Brooks and some other roboticists is naive.  I'm not sure whether their
actual conceptions are naive, or whether they just aren't presenting their
foundational philosophical ideas clearly in their writings (being ultimately
more engineering-oriented people, and probably not that accustomed to the
philosophical style of discourse in which these sorts of definitional
distinctions need to be more precisely drawn).  I do think (in approximate
concurrence with your paper) that ANY control system physically embodied in
a physical system S, that has an input and output stream, and whose input
and output stream possess correlation with the physical state of S, should
be considered as psychologically embodied.  Clearly, whether it's a robot
or a laptop (w/o network connection if you like), such a system has the
basic property of embodiment.  Furthermore S doesn't need to be a physical
system ... it could be a virtual system inside some virtual world (and
then there's the question of what properties characterize a valid virtual
world ... but let's leave that for another email thread...)

However, I think that not all psychologically-embodied systems possess a
sufficiently rich psychological-embodiment to lead to significantly general
intelligence  My suggestion is that a laptop w/o network connection or
odd sensor-peripherals, probably does not have sufficiently rich
correlations btw its I/O stream and its physical state, to allow it to
develop a robust self-model of its physical self (which can then be used as
a basis for a more general phenomenal self).

I think that Varela and crew understood the value of this rich network of
correlations, but mistakenly assumed it to be a unique property of
biological systems...

I realize that the points you made in your paper do not contradict the
suggestions I've made in this email.  I don't think anything significant in
your paper is wrong, actually.  It just seems to me not to address the most
interesting aspects of the embodiment issue as related to AGI.

-- Ben G

On Thu, Sep 4, 2008 at 7:06 AM, Pei Wang [EMAIL PROTECTED] wrote:

 On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Sure it is. Systems with different sensory channels will never fully
  understand each other. I'm not saying that one channel (verbal) can
  replace another (visual), but that both of them (and many others) can
  give symbol/representation/concept/pattern/whatever-you-call-it
  meaning. No on is more real than others.
 
  True, but some channels may -- due to the statistical properties of the
 data
  coming across them -- be more conducive to the development of AGI than
  others...

 I haven't seen any evidence for that. For human intelligence, maybe,
 but for intelligence in general, I doubt it.

  I think the set of relations among words (considered in isolation,
 without
  their referents) is less rich than the set of relations among
 perceptions
  of a complex world, and far less rich than the set of relations among
  {perceptions of a complex world, plus words referring to these
  perceptions}

 Not necessarily. Actually some people may even make the opposite
 argument: relations among non-linguistic components in experience are
 basically temporal or spatial, while the relations among words and
 concepts have much more types. I won't go that far, but I guess in
 some sense all channels may have the same (potential) richness.

  And I think that this lesser richness makes sequences of words a much
 worse
  input stream for a developing AGI
 
  I realize that quantifying less rich in the above is a significant
  challenge, but I'm presenting my intuition anyway...

 If your condition is true, then your conclusion follows, but the
 problem is in that IF.

  Also, relatedly and just as critically, the set of perceptions regarding
 the
  body and its interactions with the environment, are well-structured to
 give
  the mind a sense of its own self.

 We can say the same for every input/out operation set of an
 intelligent system. SELF is defined by what the system can feel and
 do.

  This primitive infantile sense of
  body-self gives rise to the more sophisticated phenomenal self of the
 child
  and adult mind, which gives rise to reflective consciousness, the feeling
 of
  will, and other characteristic structures of humanlike general
  intelligence.

 Agree.

  A stream of words doesn't seem to give an AI the same kind of
  opportunity for self-development

 If the system just sits there and passively accept whatever words come
 into it, what you said is true. If the incoming words is causally
 related to its outgoing words, will you still say that?

  I agree with your point, but I wonder if it's partially a straw man
  argument.

 If you read Brooks or Pfeifer, you'll see that most of their arguments
 are explicitly or implicitly based on the myth that only a robot has
 a 

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel

 However, could you guys be more specific regarding the statistical
 differences of different types of data? What kind of differences are you
 talking about specifically (mathematically)? And what about the differences
 at the various levels of the dual-hierarchy? Has any of your work or
 research suggested this hypothesis, if so which?



Sorry I've been fuzzy on this ... I'm engaging in this email conversation in
odd moments while at a conference (Virtual Worlds 2008, in Los Angeles...)

Specifically I think that patterns interrelating the I/O stream of system S
with the relation between the system S's embodiment and its environment, are
important.  It is these patterns that let S build a self-model of its
physical embodiment, which then leads S to a more abstract self-model (aka
Metzinger's phenomenal self)

Considering patterns in the above category, it seems critical to have a rich
variety of patterns at varying levels of complexity... so that the patterns
at complexity level L are largely approximable as compositions of patterns
at complexity less than L.  This way a mind can incrementally build up its
self-model via recognizing slightly complex self-related patterns, then
acting based on these patterns, then recognizing somewhat more complex
self-related patterns involving its recent actions, and so forth.

It seems that a human body's sensors and actuators are suited to create and
recognize patterns of the above sort whereas the sensors and actuators of a
laptop w/o network cables or odd peripherals are not...

-- Ben G



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
On 9/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:



 However, could you guys be more specific regarding the statistical
 differences of different types of data? What kind of differences are you
 talking about specifically (mathematically)? And what about the differences
 at the various levels of the dual-hierarchy? Has any of your work or
 research suggested this hypothesis, if so which?



 Sorry I've been fuzzy on this ... I'm engaging in this email conversation
 in odd moments while at a conference (Virtual Worlds 2008, in Los
 Angeles...)

 Specifically I think that patterns interrelating the I/O stream of system S
 with the relation between the system S's embodiment and its environment, are
 important.  It is these patterns that let S build a self-model of its
 physical embodiment, which then leads S to a more abstract self-model (aka
 Metzinger's phenomenal self)

 So in short you are saying that the main difference between I/O data by
a motor embodyed system (such as robot or human) and a laptop is the ability
to interact with the data: make changes in its environment to systematically
change the input?

  Considering patterns in the above category, it seems critical to have a
 rich variety of patterns at varying levels of complexity... so that the
 patterns at complexity level L are largely approximable as compositions of
 patterns at complexity less than L.  This way a mind can incrementally build
 up its self-model via recognizing slightly complex self-related patterns,
 then acting based on these patterns, then recognizing somewhat more complex
 self-related patterns involving its recent actions, and so forth.


Definitely.

  It seems that a human body's sensors and actuators are suited to create
 and recognize patterns of the above sort whereas the sensors and actuators
 of a

 laptop w/o network cables or odd peripherals are not...


Agree.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
To clarify what I mean by observable universe, I am including any part that 
could be observed in the future, and therefore must be modeled to make accurate 
predictions. For example, if our universe is computed by one of an enumeration 
of Turing machines, then the other enumerations are outside our observable 
universe.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote:

 From: Abram Demski [EMAIL PROTECTED]
 Subject: Re: Computation as an explanation of the universe (was Re: [agi] 
 Recursive self-change: some definitions)
 To: agi@v2.listbox.com
 Date: Thursday, September 4, 2008, 9:43 AM
  OK, then the observable universe has a finite
 description length. We don't need to describe anything
 else to model it, so by universe I mean only the
 observable part.
 
 
 But, what good is it to only have finite description of the
 observable
 part, since new portions of the universe enter the
 observable portion
 continually? Physics cannot then be modeled as a computer
 program,
 because computer programs do not increase in Kolmogorov
 complexity as
 they run (except by a logarithmic term to count how long it
 has been
 running).
 
  I am saying that the universe *is* deterministic. It
 has a definite quantum state, but we would need about 10^122
 bits of memory to describe it. Since we can't do that,
 we have to resort to approximate models like quantum
 mechanics.
 
 
 Yes, I understood that you were suggesting a deterministic
 universe.
 What I'm saying is that it seems plausible for us to be
 able to have
 an accurate knowledge of that deterministic physics,
 lacking only the
 exact knowledge of particle locations et cetera. We would
 be forced to
 use probabilistic methods as you argue, but they would not
 necessarily
 be built into our physical theories; instead, our physical
 theories
 act as a deterministic function that is given probabilistic
 input and
 therefore yields probabilistic output.
 
  I believe there is a simpler description. First, the
 description length is increasing with the square of the age
 of the universe, since it is proportional to area. So it
 must have been very small at one time. Second, the most
 efficient way to enumerate all possible universes would be
 to run each B-bit machine for 2^B steps, starting with B =
 0, 1, 2... until intelligent life is found. For our
 universe, B ~ 407. You could reasonably argue that the
 algorithmic complexity of the free parameters of string
 theory and general relativity is of this magnitude. I
 believe that Wolfram also argued that the (observable)
 universe is a few lines of code.
 
 
 I really do not understand your willingness to restrict
 universe to
 observable universe. The description length of
 the observable
 universe was very small at one time because at that time
 none of the
 basic stuffs of the universe had yet interacted, so by
 definition the
 description length of the observable universe for each
 basic entity is
 just the description length of that entity. As time moves
 forward, the
 entities interact and the description lengths of their
 observable
 universes increase. Similarly, today, one might say that
 the
 observable universe for each person is slightly different,
 and indeed
 the universe observable from my right hand would be
 slightly different
 then the one observable from my left. They could have
 differing
 description lengths.
 
 In short, I think you really want to apply your argument to
 the
 actual universe, not merely observable
 subsets... or if you don't,
 you should, because otherwise it seems like a very strange
 argument.
 
  But even if we discover this program it does not mean
 we could model the universe deterministically. We would need
 a computer larger than the universe to do so.
 
 Agreed... partly thanks to your argument below.
 
  There is a simple argument using information theory.
 Every system S has a Kolmogorov complexity K(S), which is
 the smallest size that you can compress a description of S
 to. A model of S must also have complexity K(S). However,
 this leaves no space for S to model itself. In particular,
 if all of S's memory is used to describe its model,
 there is no memory left over to store any results of the
 simulation.
 
 Point conceded.
 
 
 --Abram
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
 So in short you are saying that the main difference between I/O data by
 a motor embodyed system (such as robot or human) and a laptop is the ability
 to interact with the data: make changes in its environment to systematically
 change the input?


Not quite ... but, to interact w/ the data in a way that gives rise to a
hierarchy of nested, progressively more complex patterns that correlate the
system and its environment (and that the system can recognize and act upon)

ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 10:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 To clarify what I mean by observable universe, I am including any part that 
 could be observed in the future, and therefore must be modeled to make 
 accurate predictions. For example, if our universe is computed by one of an 
 enumeration of Turing machines, then the other enumerations are outside our 
 observable universe.

 -- Matt Mahoney, [EMAIL PROTECTED]


OK, that works. But, you cannot invoke current physics to argue that
this sort of observable universe is finite (so far as I know).

Of course, that is not central to your point anyway. The universe
might be spatially infinite while still having a finite description
length.

So, my only remaining objection is that while the universe *could* be
computable, it seems unwise to me to totally rule out the alternative.
As you said, the idea is something that makes testable predictions.
So, it is something to be decided experimentally, not philosophically.

-Abram


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Terren Suydam

Hi Ben,

You may have stated this explicitly in the past, but I just want to clarify - 
you seem to be suggesting that a phenomenological self is important if not 
critical to the actualization of general intelligence. Is this your belief, and 
if so, can you provide a brief justification of that?  (I happen to believe 
this myself.. just trying to understand your philosophy better.)

Terren

--- On Thu, 9/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
However, I think that not all psychologically-embodied systems possess a 
sufficiently rich psychological-embodiment to lead to significantly general 
intelligence  My suggestion is that a laptop w/o network connection or odd 
sensor-peripherals, probably does not have sufficiently rich correlations btw 
its I/O stream and its physical state, to allow it to develop a robust 
self-model of its physical self (which can then be used as a basis for a more 
general phenomenal self).  






  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Valentina Poletti [EMAIL PROTECTED] wrote:
Ppl like Ben argue that the concept/engineering aspect of intelligence is
independent of the type of environment. That is, given you understand how
to make it in a virtual environment you can then tarnspose that concept
into a real environment more safely.

Some other ppl on the other hand believe intelligence is a property of
humans only. So you have to simulate every detail about humans to get
that intelligence. I'd say that among the two approaches the first one
(Ben's) is safer and more realistic.

The issue is not what is intelligence, but what do you want to create? In order 
for machines to do more work for us, they may need language and vision, which 
we associate with human intelligence. But building artificial humans is not 
necessarily useful. We already know how to create humans, and we are doing so 
at an unsustainable rate.

I suggest that instead of the imitation game (Turing test) for AI, we should 
use a preference test. If you prefer to talk to a machine vs. a human, then the 
machine passes the test.

Prediction is central to intelligence. If you can predict a text stream, then 
for any question Q and any answer A, you can compute the probability 
distribution P(A|Q) = P(QA)/P(Q). This passes the Turing test. More 
importantly, it allows you to output max_A P(QA), the most likely answer from a 
group of humans. This passes the preference test because a group is usually 
more accurate than any individual member. (It may fail a Turing test for giving 
too few wrong answers, a problem Turing was aware of in 1950 when he gave an 
example of a computer incorrectly answering an arithmetic problem).

Text compression is equivalent to AI because we have already solved the coding 
problem. Given P(x) for string x, we know how to optimally and efficiently code 
x in log_2(1/P(x)) bits (e.g. arithmetic coding). Text compression has an 
advantage over the Turing or preference tests in that that incremental progress 
in modeling can be measured precisely and the test is repeatable and verifiable.

If I want to test a text compressor, it is important to use real data (human 
generated text) rather than simulated data, i.e. text generated by a program. 
Otherwise, I know there is a concise code for the input data, which is the 
program that generated it. When you don't understand the source distribution 
(i.e. the human brain), the problem is much harder, and you have a legitimate 
test.

I understand that Ben is developing AI for virtual worlds. This might produce 
interesting results, but I wouldn't call it AGI. The value of AGI is on the 
order of US $1 quadrillion. It is a global economic system running on a smarter 
internet. I believe that any attempt to develop AGI on a budget of $1 million 
or $1 billion or $1 trillion is just wishful thinking.

-- Matt Mahoney, [EMAIL PROTECTED]




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 12:47 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Terren,

 If you think it's all been said, please point me to the philosophy of AI
 that includes it.

I believe what you are suggesting is best understood as an interaction machine.



General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by inductive turing machines, but I cannot find a good
single reference for that. (I am not knowledgable on this subject, I
just have heard the idea before.)

--Abram


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote:

 So, my only remaining objection is that while the universe
 *could* be
 computable, it seems unwise to me to totally rule out the
 alternative.

You're right. We cannot prove that the universe is computable. We have evidence 
like Occam's Razor (if the universe is computable, then algorithmically simple 
models are to be preferred), but that is not proof.

At one time our models of physics were not computable. Then we discovered 
atoms, quantization of electric charge, general relativity (which bounds 
density and velocity), the big bang (history is finite) and quantum mechanics. 
Our models would still not be computable (requiring infinite description 
length) if any one of these events did not occur.

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Abram,

Thanks for reply. But I don't understand what you see as the connection. An 
interaction machine from my brief googling is one which has physical organs.


Any factory machine can be thought of as having organs. What I am trying to 
forge is a new paradigm of a creative, free  machine as opposed to that 
exemplified by most actual machines, which are rational, deterministic 
machines. The latter can only engage in any task in set ways - and therefore 
engage and combine their organs in set combinations and sequences. Creative 
machines have a more or less infinite range of possible ways of going about 
things, and can combine their organs in a virtually infinite range of 
combinations, (which gives them a slight advantage, adaptively :) ). 
Organisms *are* creative machines; computers and robots *could* be (and are, 
when combined with humans), AGI's will *have* to be.


(To talk of creative machines, more specifically, as I did, as 
keyboards/organisers is to focus on the mechanics of this infinite 
combinativity of organs).


Interaction machines do not seem in any way then to entail what I'm talking 
about - creative machines - keyboards/ organisers - infinite 
combinativity - or the *creation,* as quite distinct from *following*  of 
programs/algorithms and routines..




Abram/MT: If you think it's all been said, please point me to the 
philosophy of AI

that includes it.


I believe what you are suggesting is best understood as an interaction 
machine.




General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by inductive turing machines, but I cannot find a good
single reference for that. (I am not knowledgable on this subject, I
just have heard the idea before.)

--Abram


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam

Mike,

Thanks for the reference to Dennis Noble, he sounds very interesting and his 
views on Systems Biology as expressed on his Wikipedia page are perfectly in 
line with my own thoughts and biases.

I agree in spirit with your basic criticisms regarding current AI and 
creativity. However, it must be pointed out that if you abandon determinism, 
you find yourself in the world of dualism, or worse. There are several ways out 
of this conundrum, one involves complexity/emergence (global behavior cannot be 
understood in terms of reduction to local behavior), another involves 
algorithmic complexity (or complicatedness, behavior cannot be predicted due to 
limitations of our inborn abilities to mentally model such complicatedness), 
although either can be predicted in principle with sufficient computational 
resources. This is true of humans as well - and if you think it isn't, once 
again, you're committing yourself to some kind of dualistic position (e.g., we 
are motivated by our spirit).

If you accept the proposition that the appearance of free will in an agent 
comes down to one's ability to predict its behavior, then either of the schemes 
above serves to produce free will (or the illusion of it, if you prefer).

Thus is creativity possible while preserving determinism. Of course, you still 
need to have an explanation for how creativity emerges in either case, but in 
contrast to what you said before, some AI folks have indeed worked on this 
issue. 

Terren

--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser
 To: agi@v2.listbox.com
 Date: Thursday, September 4, 2008, 12:47 AM
 Terren,
 
 If you think it's all been said, please point me to the
 philosophy of AI 
 that includes it.
 
 A programmed machine is an organized structure. A keyboard
 (and indeed a 
 computer with keyboard) are something very different -
 there is no 
 organization to those 26 letters etc.   They can be freely
 combined and 
 sequenced to create an infinity of texts. That is the very
 essence and 
 manifestly, the whole point, of a keyboard.
 
 Yes, the keyboard is only an instrument. But your body -
 and your brain - 
 which use it,  are themselves keyboards. They consist of
 parts which also 
 have no fundamental behavioural organization - that can be
 freely combined 
 and sequenced to create an infinity of sequences of
 movements and thought - 
 dances, texts, speeches, daydreams, postures etc.
 
 In abstract logical principle, it could all be
 preprogrammed. But I doubt 
 that it's possible mathematically - a program for
 selecting from an infinity 
 of possibilities? And it would be engineering madness -
 like trying to 
 preprogram a particular way of playing music, when an
 infinite repertoire is 
 possible and the environment, (in this case musical
 culture), is changing 
 and evolving with bewildering and unpredictable speed.
 
 To look at computers as what they are (are you disputing
 this?) - machines 
 for creating programs first, and following them second,  is
 a radically 
 different way of looking at computers. It also fits with
 radically different 
 approaches to DNA - moving away from the idea of DNA as
 coded program, to 
 something that can be, as it obviously can be, played like
 a keyboard  - see 
 Dennis Noble, The Music of Life. It fits with the fact
 (otherwise 
 inexplicable) that all intelligences have both deliberate
 (creative) and 
 automatic (routine) levels - and are not just automatic,
 like purely 
 programmed computers. And it fits with the way computers
 are actually used 
 and programmed, rather than the essentially fictional
 notion of them as pure 
 turing machines.
 
 And how to produce creativity is the central problem of AGI
 - completely 
 unsolved.  So maybe a new approach/paradigm is worth at
 least considering 
 rather than more of the same? I'm not aware of a single
 idea from any AGI-er 
 past or present that directly addresses that problem - are
 you?
 
 
 
  Mike,
 
  There's nothing particularly creative about
 keyboards. The creativity 
  comes from what uses the keyboard. Maybe that was your
 point, but if so 
  the digression about a keyboard is just confusing.
 
  In terms of a metaphor, I'm not sure I understand
 your point about 
  organizers. It seems to me to refer simply
 to that which we humans do, 
  which in essence says general intelligence is
 what we humans do. 
  Unfortunately, I found this last email to be quite
 muddled. Actually, I am 
  sympathetic to a lot of your ideas, Mike, but I also
 have to say that your 
  tone is quite condescending. There are a lot of smart
 people on this list, 
  as one would expect, and a little humility and respect
 on your part would 
  go a long way. Saying things like You see,
 AI-ers simply don't understand 
  computers, or understand only half of them. 
 More often than not you 
  position 

Re: [agi] draft for comment

2008-09-04 Thread Matt Mahoney
--- On Wed, 9/3/08, Pei Wang [EMAIL PROTECTED] wrote:

 TITLE: Embodiment: Who does not have a body?
 
 AUTHOR: Pei Wang
 
 ABSTRACT: In the context of AI, ``embodiment''
 should not be
 interpreted as ``giving the system a body'', but as
 ``adapting to the
 system's experience''. Therefore, being a robot
 is neither a
 sufficient condition nor a necessary condition of being
 embodied. What
 really matters is the assumption about the environment for
 which the
 system is designed.
 
 URL: http://nars.wang.googlepages.com/wang.embodiment.pdf

The paper seems to argue that embodiment applies to any system with inputs and 
outputs, and therefore all AI systems are embodied. However, there are 
important differences between symbolic systems like NARS and systems with 
external sensors such as robots and humans. The latter are analog, e.g. the 
light intensity of a particular point in the visual field, or the position of a 
joint in an arm. In humans, there is a tremendous amount of data reduction from 
the senses, from 137 million rods and cones in each eye each firing up to 300 
pulses per second, down to 2 bits per second by the time our high level visual 
perceptions reach long term memory.

AI systems have traditionally avoided this type of processing because they 
lacked the necessary CPU power. IMHO this has resulted in biologically 
implausible symbolic language models with only a small number of connections 
between concepts, rather than the tens of thousands of connections per neuron.

Another aspect of embodiment (as the term is commonly used), is the false 
appearance of intelligence. We associate intelligence with humans, given that 
there are no other examples. So giving an AI a face or a robotic body modeled 
after a human can bias people to believe there is more intelligence than is 
actually present.


-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
A closed model is one that is interpreted as representing all truths
about that which is modeled. An open model is instead interpreted as
making a specific set of assertions, and leaving the rest undecided.
Formally, we might say that a closed model is interpreted to include
all of the truths, so that any other statements are false. This is
also known as the closed-world assumption.

A typical example of an open model is a set of statements in predicate
logic. This could be changed to a closed model simply by applying the
closed-world assumption. A possibly more typical example of a
closed-world model is a computer program that outputs the data so far
(and predicts specific future output), as in Solomonoff induction.

These two types of model are very different! One important difference
is that we can simply *add* to an open model if we need to account for
new data, while we must always *modify* a closed model if we want to
account for more information.

The key difference I want to ask about here is: a length-based
bayesian prior seems to apply well to closed models, but not so well
to open models.

First, such priors are generally supposed to apply to entire joint
states; in other words, probability theory itself (and in particular
bayesian learning) is built with an assumption of an underlying space
of closed models, not open ones.

Second, an open model always has room for additional stuff somewhere
else in the universe, unobserved by the agent. This suggests that,
made probabilistic, open models would generally predict universes with
infinite description length. Whatever information was known, there
would be an infinite number of chances for other unknown things to be
out there; so it seems as if the probability of *something* more being
there would converge to 1. (This is not, however, mathematically
necessary.) If so, then taking that other thing into account, the same
argument would still suggest something *else* was out there, and so
on; in other words, a probabilistic open-model-learner would seem to
predict a universe with an infinite description length. This does not
make it easy to apply the description length principle.

I am not arguing that open models are a necessity for AI, but I am
curious if anyone has ideas of how to handle this. I know that Pei
Wang suggests abandoning standard probability in order to learn open
models, for example.

--Abram Demski


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
Mike,

The reason I decided that what you are arguing for is essentially an
interactive model is this quote:

But that is obviously only the half of it.Computers are obviously
much more than that - and  Turing machines. You just have to look at
them. It's staring you in the face. There's something they have that
Turing machines don't. See it? Terren?

They have -   a keyboard.

A keyboard is precisely what the interaction theorists are trying to
account for! Plus the mouse, the ethernet port, et cetera.

Moreover, your general comments fit into the model if interpreted
judiciously. You make a distinction between rule-based and creative
behavior; rule-based behavior could be thought of as isolated
processing of input (receive input, process without interference,
output result) while creative behavior is behavior resulting from
continual interaction with and exploration of the external world. Your
concept of organisms as organizers only makes sense when I see it in
this light: a human organizes the environment by interaction with it,
while a Turing machine is unable to do this because it cannot
explore/experiment/discover.

-Abram

On Thu, Sep 4, 2008 at 1:07 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 Thanks for reply. But I don't understand what you see as the connection. An
 interaction machine from my brief googling is one which has physical organs.

 Any factory machine can be thought of as having organs. What I am trying to
 forge is a new paradigm of a creative, free  machine as opposed to that
 exemplified by most actual machines, which are rational, deterministic
 machines. The latter can only engage in any task in set ways - and therefore
 engage and combine their organs in set combinations and sequences. Creative
 machines have a more or less infinite range of possible ways of going about
 things, and can combine their organs in a virtually infinite range of
 combinations, (which gives them a slight advantage, adaptively :) ).
 Organisms *are* creative machines; computers and robots *could* be (and are,
 when combined with humans), AGI's will *have* to be.

 (To talk of creative machines, more specifically, as I did, as
 keyboards/organisers is to focus on the mechanics of this infinite
 combinativity of organs).

 Interaction machines do not seem in any way then to entail what I'm talking
 about - creative machines - keyboards/ organisers - infinite combinativity
 - or the *creation,* as quite distinct from *following*  of
 programs/algorithms and routines..



 Abram/MT: If you think it's all been said, please point me to the
 philosophy of AI

 that includes it.

 I believe what you are suggesting is best understood as an interaction
 machine.



 General references:

 http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

 http://www.cs.brown.edu/people/pw/papers/ficacm.ps

 http://www.la-acm.org/Archives/laacm9912.html



 The concept that seems most relevant to AI is the learning theory
 provided by inductive turing machines, but I cannot find a good
 single reference for that. (I am not knowledgable on this subject, I
 just have heard the idea before.)

 --Abram


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Matt Mahoney
In a closed model, every statement is either true or false. In an open model, 
every statement is either true or uncertain. In reality, all statements are 
uncertain, but we have a means to assign them probabilities (not necessarily 
accurate probabilities).

A closed model is unrealistic, but an open model is even more unrealistic 
because you lack a means of assigning likelihoods to statements like the sun 
will rise tomorrow or the world will end tomorrow. You absolutely must have 
a means of guessing probabilities to do anything at all in the real world.


-- Matt Mahoney, [EMAIL PROTECTED]


--- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote:

 From: Abram Demski [EMAIL PROTECTED]
 Subject: [agi] open models, closed models, priors
 To: agi@v2.listbox.com
 Date: Thursday, September 4, 2008, 2:19 PM
 A closed model is one that is interpreted as representing
 all truths
 about that which is modeled. An open model is instead
 interpreted as
 making a specific set of assertions, and leaving the rest
 undecided.
 Formally, we might say that a closed model is interpreted
 to include
 all of the truths, so that any other statements are false.
 This is
 also known as the closed-world assumption.
 
 A typical example of an open model is a set of statements
 in predicate
 logic. This could be changed to a closed model simply by
 applying the
 closed-world assumption. A possibly more typical example of
 a
 closed-world model is a computer program that outputs the
 data so far
 (and predicts specific future output), as in Solomonoff
 induction.
 
 These two types of model are very different! One important
 difference
 is that we can simply *add* to an open model if we need to
 account for
 new data, while we must always *modify* a closed model if
 we want to
 account for more information.
 
 The key difference I want to ask about here is: a
 length-based
 bayesian prior seems to apply well to closed models, but
 not so well
 to open models.
 
 First, such priors are generally supposed to apply to
 entire joint
 states; in other words, probability theory itself (and in
 particular
 bayesian learning) is built with an assumption of an
 underlying space
 of closed models, not open ones.
 
 Second, an open model always has room for additional stuff
 somewhere
 else in the universe, unobserved by the agent. This
 suggests that,
 made probabilistic, open models would generally predict
 universes with
 infinite description length. Whatever information was
 known, there
 would be an infinite number of chances for other unknown
 things to be
 out there; so it seems as if the probability of *something*
 more being
 there would converge to 1. (This is not, however,
 mathematically
 necessary.) If so, then taking that other thing into
 account, the same
 argument would still suggest something *else* was out
 there, and so
 on; in other words, a probabilistic open-model-learner
 would seem to
 predict a universe with an infinite description length.
 This does not
 make it easy to apply the description length principle.
 
 I am not arguing that open models are a necessity for AI,
 but I am
 curious if anyone has ideas of how to handle this. I know
 that Pei
 Wang suggests abandoning standard probability in order to
 learn open
 models, for example.
 
 --Abram Demski
 



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
Matt,

My intention here is that there is a basic level of well-defined,
crisp models which probabilities act upon; so in actuality the
system will never be using a single model, open or closed...

(in a hurry now, more comments later)

--Abram

On Thu, Sep 4, 2008 at 2:47 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 In a closed model, every statement is either true or false. In an open model, 
 every statement is either true or uncertain. In reality, all statements are 
 uncertain, but we have a means to assign them probabilities (not necessarily 
 accurate probabilities).

 A closed model is unrealistic, but an open model is even more unrealistic 
 because you lack a means of assigning likelihoods to statements like the sun 
 will rise tomorrow or the world will end tomorrow. You absolutely must 
 have a means of guessing probabilities to do anything at all in the real 
 world.


 -- Matt Mahoney, [EMAIL PROTECTED]


 --- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote:

 From: Abram Demski [EMAIL PROTECTED]
 Subject: [agi] open models, closed models, priors
 To: agi@v2.listbox.com
 Date: Thursday, September 4, 2008, 2:19 PM
 A closed model is one that is interpreted as representing
 all truths
 about that which is modeled. An open model is instead
 interpreted as
 making a specific set of assertions, and leaving the rest
 undecided.
 Formally, we might say that a closed model is interpreted
 to include
 all of the truths, so that any other statements are false.
 This is
 also known as the closed-world assumption.

 A typical example of an open model is a set of statements
 in predicate
 logic. This could be changed to a closed model simply by
 applying the
 closed-world assumption. A possibly more typical example of
 a
 closed-world model is a computer program that outputs the
 data so far
 (and predicts specific future output), as in Solomonoff
 induction.

 These two types of model are very different! One important
 difference
 is that we can simply *add* to an open model if we need to
 account for
 new data, while we must always *modify* a closed model if
 we want to
 account for more information.

 The key difference I want to ask about here is: a
 length-based
 bayesian prior seems to apply well to closed models, but
 not so well
 to open models.

 First, such priors are generally supposed to apply to
 entire joint
 states; in other words, probability theory itself (and in
 particular
 bayesian learning) is built with an assumption of an
 underlying space
 of closed models, not open ones.

 Second, an open model always has room for additional stuff
 somewhere
 else in the universe, unobserved by the agent. This
 suggests that,
 made probabilistic, open models would generally predict
 universes with
 infinite description length. Whatever information was
 known, there
 would be an infinite number of chances for other unknown
 things to be
 out there; so it seems as if the probability of *something*
 more being
 there would converge to 1. (This is not, however,
 mathematically
 necessary.) If so, then taking that other thing into
 account, the same
 argument would still suggest something *else* was out
 there, and so
 on; in other words, a probabilistic open-model-learner
 would seem to
 predict a universe with an infinite description length.
 This does not
 make it easy to apply the description length principle.

 I am not arguing that open models are a necessity for AI,
 but I am
 curious if anyone has ideas of how to handle this. I know
 that Pei
 Wang suggests abandoning standard probability in order to
 learn open
 models, for example.

 --Abram Demski




 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Mike Tintner

Matt,

I'm confused here. What I mean is that in real life, the probabilities are 
mathematically incalculable, period, a good deal of the time - you cannot 
go, as you v. helpfully point out, much beyond saying this is fairly 
probable, may happen, there's some chance.. And those words are fairly 
good reflections of how we actually reason and anti-calculate 
probabilities -*without* numbers or any maths... And such non-mathematical 
vagueness seems foundational for AGI.  You can't, for example, calculate 
mathematically the likeness or the truthfulness of metaphorical terms - of 
storms and swirling milk in a teacup. Not even provisionally.


My understanding is that AGI-ers still persist in trying to use numbers, and 
you seem, in your first sentence, to be advocating the same.



Matt: I mean that you have to assign likelihoods to beliefs, even if the 
numbers are wrong. Logic systems where every statement is true or false 
simply are too brittle to scale beyond toy problems. Everything in life is 
uncertain, including the degree of uncertainty. That's why we use terms like 
probably, maybe, etc. instead of numbers.


--

Matt:You absolutely must have a means of guessing
probabilities to do
anything at all in the real world


MT: Do you mean mathematically?  Estimating chances as roughly,

even if
provisionally,  0.70? If so, manifestly, that is untrue.
What are your
chances that you will get lucky tonight?  Will an inability
to guess the
probability stop you trying?  Most of the time, arguably,
we have to and do,
act on the basis of truly vague magnitudes - a
mathematically horrendously
rough sense of probability. Or just: what the heck -
what's the worst that
can happen? Let's do it. And let's just pray it
works out.  How precise a
sense of the probabilities attending his current decisions
does even a
professionally mathematical man like Bernanke have?

Only AGI's in a virtual world can live with cosy,
mathematically calculable
uncertainty. Living in the real world is as
Kauffman points out to a great
extent living with *mystery*. What are the maths of
mystery? Do you think
Ben has the least realistic idea of the probabilities
affecting his AGI
projects? That's not how most creative projects get
done, or life gets
lived.  Quadrillions, Matt, schmazillions.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
Mike,

standard Bayesianism somewhat accounts for this-- exact-number
probabilities are defined by the math, but in no way are they seen as
the real probability values. A subjective prior is chosen, which
defines all further probabilities, but that prior is not believed to
be correct. Subsequent experience tends to outweigh the prior, so the
probabilities after experience are much more accurate than before,
even though they are still not perfectly accurate.

And while we may not be able to articulate our exact belief levels in
English, I could still argue that the level of activation in the brain
is a precise value. So, just because an AI uses probabilities at the
implementation level does not mean it would be able to articulate
exact numbers consciously.

Of course, none of this is an argument *for* the use of probabilities...

--Abram

On Thu, Sep 4, 2008 at 5:29 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Matt,

 I'm confused here. What I mean is that in real life, the probabilities are
 mathematically incalculable, period, a good deal of the time - you cannot
 go, as you v. helpfully point out, much beyond saying this is fairly
 probable, may happen, there's some chance.. And those words are fairly
 good reflections of how we actually reason and anti-calculate
 probabilities -*without* numbers or any maths... And such non-mathematical
 vagueness seems foundational for AGI.  You can't, for example, calculate
 mathematically the likeness or the truthfulness of metaphorical terms - of
 storms and swirling milk in a teacup. Not even provisionally.

 My understanding is that AGI-ers still persist in trying to use numbers, and
 you seem, in your first sentence, to be advocating the same.


 Matt: I mean that you have to assign likelihoods to beliefs, even if the
 numbers are wrong. Logic systems where every statement is true or false
 simply are too brittle to scale beyond toy problems. Everything in life is
 uncertain, including the degree of uncertainty. That's why we use terms like
 probably, maybe, etc. instead of numbers.

 --

 Matt:You absolutely must have a means of guessing
 probabilities to do
 anything at all in the real world

 MT: Do you mean mathematically?  Estimating chances as roughly,

 even if
 provisionally,  0.70? If so, manifestly, that is untrue.
 What are your
 chances that you will get lucky tonight?  Will an inability
 to guess the
 probability stop you trying?  Most of the time, arguably,
 we have to and do,
 act on the basis of truly vague magnitudes - a
 mathematically horrendously
 rough sense of probability. Or just: what the heck -
 what's the worst that
 can happen? Let's do it. And let's just pray it
 works out.  How precise a
 sense of the probabilities attending his current decisions
 does even a
 professionally mathematical man like Bernanke have?

 Only AGI's in a virtual world can live with cosy,
 mathematically calculable
 uncertainty. Living in the real world is as
 Kauffman points out to a great
 extent living with *mystery*. What are the maths of
 mystery? Do you think
 Ben has the least realistic idea of the probabilities
 affecting his AGI
 projects? That's not how most creative projects get
 done, or life gets
 lived.  Quadrillions, Matt, schmazillions.




 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 8:56 AM, Valentina Poletti [EMAIL PROTECTED] wrote:
 I agree with Pei in that a robot's experience is not necessarily more real
 than that of a, say, web-embedded agent - if anything it is closer to the
 human experience of the world. But who knows how limited our own sensory
 experience is anyhow. Perhaps a better intelligence would comprehend the
 world better through a different emboyment.

Exactly, the world to a system is always limited by the system's I/O
channels, and for systems with different I/O channels, their worlds
are different in many aspects, but no one is more real than the
others.

 However, could you guys be more specific regarding the statistical
 differences of different types of data? What kind of differences are you
 talking about specifically (mathematically)? And what about the differences
 at the various levels of the dual-hierarchy? Has any of your work or
 research suggested this hypothesis, if so which?

It is Ben who suggested the statistical differences and the
dual-hierarchy, while I'm still not convinced about their value.

My own constructive work on this topic can be found in
http://nars.wang.googlepages.com/wang.semantics.pdf

Pei


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 And as a matter of scientific, historical fact, computers are first
 and foremost keyboards - i.e.devices for CREATING programs  on
 keyboards, - and only then following them. [Remember how AI gets
 almost everything about intelligence back to front?] There is not and
 never has been a program that wasn't first created on a keyboard.
 Indisputable fact. Almost everything that happens in computers
 happens via the keyboard.

http://heybryan.org/mediawiki/index.php/Egan_quote

 So what exactly is a keyboard? Well, like all keyboards whether of
 computers, musical instruments or typewriters, it is a creative
 instrument. And what makes it creative is that it is - you could say
 - an organiser.

Then you're starting to get into (some well needed) complexity science.

 A device with certain organs (in this case keys) that are designed
 to be creatively organised - arranged in creative, improvised (rather
 than programmed) sequences of  action/ association./organ play.

Yes, but the genotype isn't the phenotype and the translation from 
the 'code', the intentions of the programmer and so on to the 
expressions is 'hard' - people get so caught up in folk psychology that 
it's maddening.

 And an extension of the body. Of the organism. All organisms are
 organisers - devices for creatively sequencing actions/
 associations./organs/ nervous systems first and developing fixed,
 orderly sequences/ routines/ programs second.

Some (I) say that neural systems are somewhat like optimizers, which are 
heavily used in compilers that are compiling your programs anyway, so 
be careful: the difference might not be that broad.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Abram,

Thanks. V. helpful and interesting. Yes, on further examination, these 
interactionist guys seem, as you say, to be trying to take into account  the 
embeddedness of the computer.


But no, there's still a huge divide between them and me. I would liken them 
in the context of this discussion, to Pei who tries to argue that NARS is 
non-algorithmic, because the program is continuously changing. - and 
therefore satisfies the objections of classical objectors to AI/AGI.


Well, both these guys and Pei are still v. much algorithmic in any 
reasonable sense of the word - still following *structures,* if v. 
sophisticated (and continuously changing) structures, of thought.


And what I am asserting is a  paradigm of a creative machine, which starts 
as, and is, NON-algorithmic and UNstructured  in all its activities, albeit 
that it acquires and creates a multitude of algorithms, or 
routines/structures, for *parts* of those  activities. For example, when you 
write a post,  nearly every word and a great many phrases and even odd 
sentences, will be automatically, algorithmically produced. But the whole 
post, and most paras will *not* be - and *could not* be.


A creative machine has infinite combinative potential. An algorithmic, 
programmed machine has strictly limited combinativity..


And a keyboard is surely the near perfect symbol of infinite, unstructured 
combinativity. It is being, and has been, used in endlessly creative ways - 
and is, along with the blank page and pencil, the central tool of our 
civilisation's creativity. Those randomly arranged letters - clearly 
designed to be infinitely recombined - are the antithesis of a programmed 
machine.


So however those guys account for that keyboard, I don't see them as in any 
way accounting for it in my sense, or in its true, full usage. But thanks 
for your comments. (Oh and I did understand re Bayes - I was and am still 
arguing he isn't valid in many cases, period).




Mike,

The reason I decided that what you are arguing for is essentially an
interactive model is this quote:

But that is obviously only the half of it.Computers are obviously
much more than that - and  Turing machines. You just have to look at
them. It's staring you in the face. There's something they have that
Turing machines don't. See it? Terren?

They have -   a keyboard.

A keyboard is precisely what the interaction theorists are trying to
account for! Plus the mouse, the ethernet port, et cetera.

Moreover, your general comments fit into the model if interpreted
judiciously. You make a distinction between rule-based and creative
behavior; rule-based behavior could be thought of as isolated
processing of input (receive input, process without interference,
output result) while creative behavior is behavior resulting from
continual interaction with and exploration of the external world. Your
concept of organisms as organizers only makes sense when I see it in
this light: a human organizes the environment by interaction with it,
while a Turing machine is unable to do this because it cannot
explore/experiment/discover.

-Abram

On Thu, Sep 4, 2008 at 1:07 PM, Mike Tintner [EMAIL PROTECTED] 
wrote:

Abram,

Thanks for reply. But I don't understand what you see as the connection. 
An
interaction machine from my brief googling is one which has physical 
organs.


Any factory machine can be thought of as having organs. What I am trying 
to

forge is a new paradigm of a creative, free  machine as opposed to that
exemplified by most actual machines, which are rational, deterministic
machines. The latter can only engage in any task in set ways - and 
therefore
engage and combine their organs in set combinations and sequences. 
Creative
machines have a more or less infinite range of possible ways of going 
about

things, and can combine their organs in a virtually infinite range of
combinations, (which gives them a slight advantage, adaptively :) ).
Organisms *are* creative machines; computers and robots *could* be (and 
are,

when combined with humans), AGI's will *have* to be.

(To talk of creative machines, more specifically, as I did, as
keyboards/organisers is to focus on the mechanics of this infinite
combinativity of organs).

Interaction machines do not seem in any way then to entail what I'm 
talking
about - creative machines - keyboards/ organisers - infinite 
combinativity

- or the *creation,* as quite distinct from *following*  of
programs/algorithms and routines..



Abram/MT: If you think it's all been said, please point me to the
philosophy of AI


that includes it.


I believe what you are suggesting is best understood as an interaction
machine.



General references:

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

http://www.cs.brown.edu/people/pw/papers/ficacm.ps

http://www.la-acm.org/Archives/laacm9912.html



The concept that seems most relevant to AI is the learning theory
provided by inductive turing machines, but I cannot find a good
single 

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Terren Suydam wrote:
 Thus is creativity possible while preserving determinism. Of course,
 you still need to have an explanation for how creativity emerges in
 either case, but in contrast to what you said before, some AI folks
 have indeed worked on this issue.

http://heybryan.org/mediawiki/index.php/Egan_quote 

Egan solved that particular problem. It's about creation -- even if you 
have the most advanced mathematical theory of the universe, you just 
made it slightly more recursive and so on just by shuffling around 
neurotransmitters in your head.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 I think this is a good important point. I've been groping confusedly
 here. It seems to me computation necessarily involves the idea of
 using a code (?). But the nervous system seems to me something
 capable of functioning without a code - directly being imprinted on
 by the world, and directly forming movements, (even if also involving
 complex hierarchical processes), without any code. I've been
 wondering whether computers couldn't also be designed to function
 without a code in somewhat similar fashion.  Any thoughts or ideas of
 your own?

Hold on there -- the brain most certainly has a code, if you will 
remember the gene expression and the general neurophysical nature of it 
all. I think partly the difference you might be seeing here is how much 
more complex and grown the brain is in comparison to somewhat fragile 
circuits and the ecological differences between the WWW and the 
combined evolutionary history keeping your neurons healthy each day. 

Anyway, because of the quantified nature of energy in general, the brain 
must be doing something physical and operating on a code, or i.e. 
have an actual nature to it. I would like to see alternatives to this 
line of reasoning, of course.

As for computers that don't have to be executing code all of the time. 
I've been wondering about machines that could also imitate the 
biological ability to recover from errors and not spontaneously burst 
into flames when something goes wrong in the Source. Clearly there's 
something of interest here.

- Bryan
who has gone 36 hours without sleep. Why am I here?

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 And what I am asserting is a  paradigm of a creative machine, which
 starts as, and is, NON-algorithmic and UNstructured  in all its
 activities, albeit that it acquires and creates a multitude of
 algorithms, or
 routines/structures, for *parts* of those  activities. For example,
 when you write a post,  nearly every word and a great many phrases
 and even odd sentences, will be automatically, algorithmically
 produced. But the whole post, and most paras will *not* be - and
 *could not* be.

Here's an alternative formulation for you to play with, Mike. I suspect 
it is still possible to consider it a creative machine even with an 
algorithmic basis *because* it is the nature of reality itself to 
compute these things; there is nothing that can have as much 
information about the moment than the moment itself, and thus why 
there's still this element of stochasticity and creativity that we see, 
even if we say that the brain is deterministic and so on.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 And how to produce creativity is the central problem of AGI -
 completely unsolved.  So maybe a new approach/paradigm is worth at
 least considering rather than more of the same? I'm not aware of a
 single idea from any AGI-er past or present that directly addresses
 that problem - are you?

Mike, one of the big problems in computer science is the prediction of 
genotypes from phenotypes in general problem spaces. So far, from what 
I've learned, we haven't a way to guarantee that a resulting process 
is going to be creative. So it's not going to be solved per-se in the 
traditional sense of hey look, here's a foolproof equivalency of 
creativity. I truly hope I am wrong. This is a good way to be wrong 
about the whole thing, I must admit.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 Do you honestly think that you write programs in a programmed way?
 That it's not an *art* pace Matt, full of hesitation, halts,
 meandering, twists and turns, dead ends, detours etc?  If you have
 to have some sort of program to start with, how come there is no
 sign  of that being true, in the creative process of programmers
 actually writing programs?

Two notes on this one. 

I'd like to see fMRI studies of programmers having at it. I've seen this 
of authors, but not of programmers per-se. It would be interesting. But 
this isn't going to work because it'll just show you lots of active 
regions of the brain and what good does that do you?

Another thing I would be interested in showing to people is all of those 
dead ends and turns that one makes when traveling down those paths. 
I've sometimes been able to go fully into a recording session where I 
could write about a few minutes of decisions for hours on end 
afterwards, but it's just not efficient to getting the point across. 
I've sometimes wanted to do this for web crawling, when I do my 
browsing and reading, and at least somewhat track my jumps from page to 
page and so on, or even in my own grammar and writing so that I can 
make sure I optimize it :-) and so that I can see where I was going or 
not going :-) but any solution that requires me to type even /more/ 
will be a sort of contradiction, since then I will have to type even 
more, and more.

Bah, unused data in the brain should help work with this stuff. Tabletop 
fMRI and EROS and so on. Fun stuff. Neurobiofeedback.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 9:35 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I understand that a keyboard and touchpad do provide proprioceptive input,
 but I think it's too feeble, and too insensitively respondent to changes in
 the environment and the relation btw the laptop and the environment, to
 serve as the foundation for a robust self-model or a powerful general
 intelligence.

Compared to what? Of course the human sensors are much more
complicated, but many robot sensors are no better, so why they are
considered as real, while keyboard and touchpad are not?

Of course I'm not really arguing that keyboard and touchpad are all
we'll need for AGI (I plan to play with robots myself), but that there
is no fundamental difference between what we call 'robot' and what we
call 'computer', as far as the 'embodiment' discussion is concerned.
Robot is just special-purpose computer with I/O not designed for human
users.

 Of course it won't have a visual concept of self, but a system like
 NARS has the potential to grow into an intelligent operating system,
 with a notion of self based on what it can feel and do, as well as
 the causal relations among them --- If there is a file in this
 folder, then I should have felt it, it cannot be there because I've
 deleted the contents.

 My suggestion is that the file system lacks the complexity of structure and
 dynamics to support the emergence of a robust self-model, and powerful
 general intelligence...

Sure. I just used file managing as a simple example. What if the AI
have full control of the system's hardware and software, and can use
them in novel ways to solve all kinds of problems unknown to it
previously, without human involvement?

 I would call a self any internal, explicit model that a system creates
 that allows it to predict its own behaviors in a sufficient variety of
 contexts  This need not have a visual aspect nor a great similarity to a
 human self.

I'd rather not call it a 'model', though won't argue on this topic ---
'embodiment' is already confusing enough, so 'self' is better to wait,
otherwise someone will even add 'consciousness' into the discussion.
;-)

Pei


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Valentina Poletti wrote:
 When we want to step further and create an AGI I think we want to
 externalize the very ability to create technology - we want the
 environment to start adapting to us by itself, spontaneously by
 gaining our goals.

There is a sense of resilience in the whole scheme of things. It's not 
hard to show how stupid each one of us can be in a single moment; but 
luckily our stupid decisions don't blow us up [often] - it's not so 
much luck as it might be resilience. In an earlier email to which I 
replied today, Mike was looking for a resilient computer that didn't 
need code. 

On another note: goals are an interesting folk psychology mechanism. 
I've seen other cultures afflict their own goals upon their 
environment, sort of how the brain contains a map of the skin for 
sensory representation, the same with the environment to their own 
goals and aspirations in life. What alternatives to goals could you do 
when doing programming? Otherwise you'll not end up with Mike's 
requested 'resilient computer' as I'm calling it.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote:
 A closed model is unrealistic, but an open model is even more
 unrealistic because you lack a means of assigning likelihoods to
 statements like the sun will rise tomorrow or the world will end
 tomorrow. You absolutely must have a means of guessing probabilities
 to do anything at all in the real world.

I don't assign or guess probabilities and I seem to get things done. 
What gives?

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote:
 Another aspect of embodiment (as the term is commonly used), is the
 false appearance of intelligence. We associate intelligence with
 humans, given that there are no other examples. So giving an AI a
 face or a robotic body modeled after a human can bias people to
 believe there is more intelligence than is actually present.

I'm still waiting until you guys could show me a psychometric test that 
has a one-to-one correlation with the bioinformatics and 
neuroinformatics and then thus could be approached with a physical 
model down at the biophysics. Otherwise the 'false appearance of 
intelligence' is a truism - intelligence is false. What then? (Would 
you give up making brains and such systems? I'm just wondering. It's an 
interesting scenario.)

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Abram Demski wrote:
 My intention here is that there is a basic level of well-defined,
 crisp models which probabilities act upon; so in actuality the
 system will never be using a single model, open or closed...

I think Mike's model is one more of approach, creativity and action 
rather than a formalized system existing in some quasi-state between 
open and closed. I'm not sure if the epistemiologies are meshing here.

Hrm.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 10:04 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Hi Pei,

 I think your point is correct that the notion of embodiment presented by
 Brooks and some other roboticists is naive.  I'm not sure whether their
 actual conceptions are naive, or whether they just aren't presenting their
 foundational philosophical ideas clearly in their writings (being ultimately
 more engineering-oriented people, and probably not that accustomed to the
 philosophical style of discourse in which these sorts of definitional
 distinctions need to be more precisely drawn).

To a large extent, their position is an reaction to the 'disembodied'
symbolic AI, though they get the issue wrong. The symbolic AI is
indeed 'disembodied', but it is not because computers have no body (or
sensorimotor devices), but that the systems are designed to ignore
their body and their experience.

Therefore, the solution should not be to get a (robotic) body, but
to take experience into account.

 I do think (in approximate
 concurrence with your paper) that ANY control system physically embodied in
 a physical system S, that has an input and output stream, and whose input
 and output stream possess correlation with the physical state of S, should
 be considered as psychologically embodied.  Clearly, whether it's a robot
 or a laptop (w/o network connection if you like), such a system has the
 basic property of embodiment.

Yes, though I'd neither say possess correlation with the physical
state (which is the terminology of model-theoretic semantics), nor
psychologically embodied (which still sounds like a second-rate
substitute of physically embodied).

 Furthermore S doesn't need to be a physical
 system ... it could be a virtual system inside some virtual world (and
 then there's the question of what properties characterize a valid virtual
 world ... but let's leave that for another email thread...)

Every system (in this discussion) is a physical system. It is just
that sometimes we can ignore its physical properties.

 However, I think that not all psychologically-embodied systems possess a
 sufficiently rich psychological-embodiment to lead to significantly general
 intelligence  My suggestion is that a laptop w/o network connection or
 odd sensor-peripherals, probably does not have sufficiently rich
 correlations btw its I/O stream and its physical state, to allow it to
 develop a robust self-model of its physical self (which can then be used as
 a basis for a more general phenomenal self).

That is a separate issue.  If a system's I/O devices are very simple,
it cannot produce rich behaviors. However, the problem is not caused
by 'disembodiment'. We cannot say that a body much reach a certain
complexity to be called a 'body'.

 I think that Varela and crew understood the value of this rich network of
 correlations, but mistakenly assumed it to be a unique property of
 biological systems...

Agree.

 I realize that the points you made in your paper do not contradict the
 suggestions I've made in this email.  I don't think anything significant in
 your paper is wrong, actually.  It just seems to me not to address the most
 interesting aspects of the embodiment issue as related to AGI.

Understand.

Pei


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Terren:   I agree in spirit with your basic criticisms regarding current AI 
and creativity. However, it must be pointed out that if you abandon 
determinism, you find yourself in the world of dualism, or worse.


Nah. One word (though it would take too long here to explain) ; 
nondeterministic programming.


Terren: you still need to have an explanation for how creativity emerges in 
either case, but in contrast to what you said before, some AI folks have 
indeed worked on this issue.


Oh, they've done loads of work, often fine work, i.e. produced impressive 
but 'hack' variations on themes, musical, artistic, scripting etc. But the 
people actually producing those creative/hack variations, will agree, when 
pressed that they are not truly creative. And actual AGI-ers, to repeat, 
AFAIK have not produced a single idea about how machines can be creative. 
Not even a proposal, however wrong. Please point to one.


P.S. Glad to see your evolutionary perspective includes the natural kind - I 
had begun to think, obviously wrongly, that it didn't. 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:22 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 The paper seems to argue that embodiment applies to any system with inputs 
 and outputs, and therefore all AI systems are embodied.

No. It argues that since every system has inputs and outputs,
'embodiment', as a non-trivial notion, should be interpreted as
taking experience into account when behaves. Therefore, traditional
symbolic AI systems, like CYC, is still disembodied.

 However, there are important differences between symbolic systems like NARS 
 and systems with external sensors such as robots and humans.

NARS, when implemented, has input/output, and therefore has external sensors.

I guess you still see NARS as using model-theoretic semantics, so you
call it symbolic and contrast it with system with sensors. This is
not correct --- see
http://nars.wang.googlepages.com/wang.semantics.pdf and
http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf

 The latter are analog, e.g. the light intensity of a particular point in the 
 visual field, or the position of a joint in an arm. In humans, there is a 
 tremendous amount of data reduction from the senses, from 137 million rods 
 and cones in each eye each firing up to 300 pulses per second, down to 2 bits 
 per second by the time our high level visual perceptions reach long term 
 memory.

Within a certain accuracy, 'digital' and 'analog' have no fundamental
difference. I hope you are not arguing that only analog system can be
embodied.

 AI systems have traditionally avoided this type of processing because they 
 lacked the necessary CPU power. IMHO this has resulted in biologically 
 implausible symbolic language models with only a small number of connections 
 between concepts, rather than the tens of thousands of connections per neuron.

You have made this point on CPU power several times, and I'm still
not convinced that the bottleneck of AI is hardware capacity. Also,
there is no reason to believe an AGI must be designed in a
biologically plausible way.

 Another aspect of embodiment (as the term is commonly used), is the false 
 appearance of intelligence. We associate intelligence with humans, given that 
 there are no other examples. So giving an AI a face or a robotic body modeled 
 after a human can bias people to believe there is more intelligence than is 
 actually present.

I agree with you on this point, though will not argue so in the paper
--- it is like to call the roboticists cheating, even though it is
indeed the case that works in robotics are much easier to get public
attention.

Pei


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Pei Wang
Abram,

I agree with the spirit of your post, and I even go further to include
being open in my working definition of intelligence --- see
http://nars.wang.googlepages.com/wang.logic_intelligence.pdf

I also agree with your comment on Solomonoff induction and Bayesian prior.

However, I talk about open system, not open model, because I think
model-theoretic semantics is the wrong theory to be used here --- see
http://nars.wang.googlepages.com/wang.semantics.pdf

Pei

On Thu, Sep 4, 2008 at 2:19 PM, Abram Demski [EMAIL PROTECTED] wrote:
 A closed model is one that is interpreted as representing all truths
 about that which is modeled. An open model is instead interpreted as
 making a specific set of assertions, and leaving the rest undecided.
 Formally, we might say that a closed model is interpreted to include
 all of the truths, so that any other statements are false. This is
 also known as the closed-world assumption.

 A typical example of an open model is a set of statements in predicate
 logic. This could be changed to a closed model simply by applying the
 closed-world assumption. A possibly more typical example of a
 closed-world model is a computer program that outputs the data so far
 (and predicts specific future output), as in Solomonoff induction.

 These two types of model are very different! One important difference
 is that we can simply *add* to an open model if we need to account for
 new data, while we must always *modify* a closed model if we want to
 account for more information.

 The key difference I want to ask about here is: a length-based
 bayesian prior seems to apply well to closed models, but not so well
 to open models.

 First, such priors are generally supposed to apply to entire joint
 states; in other words, probability theory itself (and in particular
 bayesian learning) is built with an assumption of an underlying space
 of closed models, not open ones.

 Second, an open model always has room for additional stuff somewhere
 else in the universe, unobserved by the agent. This suggests that,
 made probabilistic, open models would generally predict universes with
 infinite description length. Whatever information was known, there
 would be an infinite number of chances for other unknown things to be
 out there; so it seems as if the probability of *something* more being
 there would converge to 1. (This is not, however, mathematically
 necessary.) If so, then taking that other thing into account, the same
 argument would still suggest something *else* was out there, and so
 on; in other words, a probabilistic open-model-learner would seem to
 predict a universe with an infinite description length. This does not
 make it easy to apply the description length principle.

 I am not arguing that open models are a necessity for AI, but I am
 curious if anyone has ideas of how to handle this. I know that Pei
 Wang suggests abandoning standard probability in order to learn open
 models, for example.

 --Abram Demski


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


[agi] How to Guarantee Creativity...

2008-09-04 Thread Mike Tintner


Mike Tintner wrote:

And how to produce creativity is the central problem of AGI -
completely unsolved. So maybe a new approach/paradigm is worth at
least considering rather than more of the same? I'm not aware of a
single idea from any AGI-er past or present that directly addresses
that problem - are you?


Bryan; Mike, one of the big problems in computer science is the prediction 
of

genotypes from phenotypes in general problem spaces. So far, from what
I've learned, we haven't a way to guarantee that a resulting process
is going to be creative. So it's not going to be solved per-se in the
traditional sense of hey look, here's a foolproof equivalency of
creativity. I truly hope I am wrong. This is a good way to be wrong
about the whole thing, I must admit.

Bryan,

Thanks for comments. First, you definitely sound like you will enjoy and 
benefit from Kauffman's Reinventing the Sacred - v. much extending your 1st 
sentence.


Second, you have posed a fascinating challenge. How can one guarantee 
creativity? I was going to say but of course not, you can only guarantee 
non-creativity by using programs and rational systems. True creativity can 
be extremely laborious and involve literally far-fetched associations.


But actually, yes, I think you may be able to guarantee creativity with a 
high degree of probability. That is, low-level creativity. Not social 
creativity - creative associations that no one in society has thought of 
before. But personal creativity. Novel personal associations that if not 
striking fit the definition. Let's see. Prepare to conduct an experiment. I 
will show you a series of associations - you will quickly grasp the 
underlying principle - you must, *thinking visually*, continue freely 
associating with the last one (or, actually, any one). See what your mind 
comes up with - and let's judge the results. (Everyone else is encouraged to 
try this too -  in the interests of scientific investigation).


http://www.bearskinrug.co.uk/_articles/2005/09/16/doodle/hero.jpg

[Alternatively, simply start with an image of a snake, and freely, visually 
associate with that.]


P.S. You will notice, Bryan, that this test - these metamorphoses - are 
related to the nature of the evolution of new species from old.








---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner

Bryan,

You start v. constructively thinking how to test the non-programmed nature 
of  - or simply record - the actual writing of programs, and then IMO fail 
to keep going.


There have to be endless more precise ways than trying to look at their 
brain.


Verbal protocols.

Ask them to use the keyboard for everything - (how much do you guys use the 
keyboard vs say paper or other things?) - and you can automatically record 
key-presses.


If they use paper, find a surface that records the pen strokes.

Combine with a camera recording them.

Come on, you must be able to give me still more ways - there are multiple 
possible recording technologies, no?


Hasn't anyone done this in any shape or form? It might sound as if it would 
produce terribly complicated results, but my guess is that they would be 
fascinating just to look at (and compare technique) as well as analyse.



Bryan/MT: Do you honestly think that you write programs in a programmed 
way?

That it's not an *art* pace Matt, full of hesitation, halts,
meandering, twists and turns, dead ends, detours etc? If you have
to have some sort of program to start with, how come there is no
sign of that being true, in the creative process of programmers
actually writing programs?


Two notes on this one.

I'd like to see fMRI studies of programmers having at it. I've seen this
of authors, but not of programmers per-se. It would be interesting. But
this isn't going to work because it'll just show you lots of active
regions of the brain and what good does that do you?

Another thing I would be interested in showing to people is all of those
dead ends and turns that one makes when traveling down those paths.
I've sometimes been able to go fully into a recording session where I
could write about a few minutes of decisions for hours on end
afterwards, but it's just not efficient to getting the point across.
I've sometimes wanted to do this for web crawling, when I do my
browsing and reading, and at least somewhat track my jumps from page to
page and so on, or even in my own grammar and writing so that I can
make sure I optimize it :-) and so that I can see where I was going or
not going :-) but any solution that requires me to type even /more/
will be a sort of contradiction, since then I will have to type even
more, and more.

Bah, unused data in the brain should help work with this stuff. Tabletop
fMRI and EROS and so on. Fun stuff. Neurobiofeedback.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;

Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Thursday 04 September 2008, Matt Mahoney wrote:
  A closed model is unrealistic, but an open model is
 even more
  unrealistic because you lack a means of assigning
 likelihoods to
  statements like the sun will rise tomorrow
 or the world will end
  tomorrow. You absolutely must have a means of
 guessing probabilities
  to do anything at all in the real world.
 
 I don't assign or guess probabilities and I seem to get
 things done. 
 What gives?

Yes you do. Every time you make a decision, you are assigning a higher 
probability of a good outcome to your choice than to the alternative.

-- Matt Mahoney, [EMAIL PROTECTED]






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Mike Tintner

Bryan,

How do you know the brain has a code? Why can't it be entirely 
impression-istic - a system for literally forming, storing and associating 
sensory impressions (including abstracted, simplified, hierarchical 
impressions of other impressions)?


1). FWIW some comments from a cortically knowledgeable robotics friend:

The issue mentioned below is a major factor for die-hard card-carrying 
Turing-istas, and to me is also their greatest stumbling-block.


You called it a code, but I see computation basically involves setting up 
a model or description of something, but many people think this is 
actually synonomous with the real-thing. It's not, but many people are in 
denial about this. All models involves tons of simplifying assumptions.


EG, XXX is adamant that the visual cortex performs sparse-coded [whatever 
that means] wavelet transforms, and not edge-detection. To me, a wavelet 
transform is just one possible - and extremely simplistic (meaning subject 
to myriad assumptions) - mathematical description of how some cells in the 
VC appear to operate.


Real biological systems are immensely more complex than our simple models. 
Eg, every single cell in the body contains the entire genome, and genes are 
being turned on+off continually during normal operation, and based upon an 
immense #feedback loops in the cells, and not just during reproduction. On 
and on.


2) I vaguely recall de Bono having a model of an imprintable surface that 
was non-coded:


http://en.wikipedia.org/wiki/The_Mechanism_of_the_Mind

(But I think you may have to read the book. Forgive me if I'm wrong).

3) Do you know anyone who has thought of using or designing some kind of 
computer as an imprintable rather than just a codable medium? Perhaps that 
is somehow possible.


PS Go to bed. :)


Bryan/MT
:

I think this is a good important point. I've been groping confusedly
here. It seems to me computation necessarily involves the idea of
using a code (?). But the nervous system seems to me something
capable of functioning without a code - directly being imprinted on
by the world, and directly forming movements, (even if also involving
complex hierarchical processes), without any code. I've been
wondering whether computers couldn't also be designed to function
without a code in somewhat similar fashion. Any thoughts or ideas of
your own?


Hold on there -- the brain most certainly has a code, if you will
remember the gene expression and the general neurophysical nature of it
all. I think partly the difference you might be seeing here is how much
more complex and grown the brain is in comparison to somewhat fragile
circuits and the ecological differences between the WWW and the
combined evolutionary history keeping your neurons healthy each day.

Anyway, because of the quantified nature of energy in general, the brain
must be doing something physical and operating on a code, or i.e.
have an actual nature to it. I would like to see alternatives to this
line of reasoning, of course.

As for computers that don't have to be executing code all of the time.
I've been wondering about machines that could also imitate the
biological ability to recover from errors and not spontaneously burst
into flames when something goes wrong in the Source. Clearly there's
something of interest here.

- 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam

OK, I'll bite: what's nondeterministic programming if not a contradiction?

--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Nah. One word (though it would take too long here to
 explain) ; 
 nondeterministic programming.



  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
Pei,

I sympathize with your care in wording, because I'm very aware of the
strange meaning that the word model takes on in formal accounts of
semantics. While a cognitive scientist might talk about a person's
model of the world, a logician would say that the world is a model
of a first-order theory. I do want to avoid the second meaning. But,
I don't think I could fare well by saying system instead, because
the models are only a part of the larger system... so I'm not sure
there is a word that is both neutral and sufficiently meaningful.

Do you think it is impossible to apply probability to open
models/theories/systems, or merely undesirable?

On Thu, Sep 4, 2008 at 8:10 PM, Pei Wang [EMAIL PROTECTED] wrote:
 Abram,

 I agree with the spirit of your post, and I even go further to include
 being open in my working definition of intelligence --- see
 http://nars.wang.googlepages.com/wang.logic_intelligence.pdf

 I also agree with your comment on Solomonoff induction and Bayesian prior.

 However, I talk about open system, not open model, because I think
 model-theoretic semantics is the wrong theory to be used here --- see
 http://nars.wang.googlepages.com/wang.semantics.pdf

 Pei

 On Thu, Sep 4, 2008 at 2:19 PM, Abram Demski [EMAIL PROTECTED] wrote:
 A closed model is one that is interpreted as representing all truths
 about that which is modeled. An open model is instead interpreted as
 making a specific set of assertions, and leaving the rest undecided.
 Formally, we might say that a closed model is interpreted to include
 all of the truths, so that any other statements are false. This is
 also known as the closed-world assumption.

 A typical example of an open model is a set of statements in predicate
 logic. This could be changed to a closed model simply by applying the
 closed-world assumption. A possibly more typical example of a
 closed-world model is a computer program that outputs the data so far
 (and predicts specific future output), as in Solomonoff induction.

 These two types of model are very different! One important difference
 is that we can simply *add* to an open model if we need to account for
 new data, while we must always *modify* a closed model if we want to
 account for more information.

 The key difference I want to ask about here is: a length-based
 bayesian prior seems to apply well to closed models, but not so well
 to open models.

 First, such priors are generally supposed to apply to entire joint
 states; in other words, probability theory itself (and in particular
 bayesian learning) is built with an assumption of an underlying space
 of closed models, not open ones.

 Second, an open model always has room for additional stuff somewhere
 else in the universe, unobserved by the agent. This suggests that,
 made probabilistic, open models would generally predict universes with
 infinite description length. Whatever information was known, there
 would be an infinite number of chances for other unknown things to be
 out there; so it seems as if the probability of *something* more being
 there would converge to 1. (This is not, however, mathematically
 necessary.) If so, then taking that other thing into account, the same
 argument would still suggest something *else* was out there, and so
 on; in other words, a probabilistic open-model-learner would seem to
 predict a universe with an infinite description length. This does not
 make it easy to apply the description length principle.

 I am not arguing that open models are a necessity for AI, but I am
 curious if anyone has ideas of how to handle this. I know that Pei
 Wang suggests abandoning standard probability in order to learn open
 models, for example.

 --Abram Demski


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
Mike,

In that case I do not see how your view differs from simplistic
dualism, as Terren cautioned. If your goal is to make a creativity
machine, in what sense would the machine be non-algorithmic? Physical
random processes?

--Abram

On Thu, Sep 4, 2008 at 6:59 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 Thanks. V. helpful and interesting. Yes, on further examination, these
 interactionist guys seem, as you say, to be trying to take into account  the
 embeddedness of the computer.

 But no, there's still a huge divide between them and me. I would liken them
 in the context of this discussion, to Pei who tries to argue that NARS is
 non-algorithmic, because the program is continuously changing. - and
 therefore satisfies the objections of classical objectors to AI/AGI.

 Well, both these guys and Pei are still v. much algorithmic in any
 reasonable sense of the word - still following *structures,* if v.
 sophisticated (and continuously changing) structures, of thought.

 And what I am asserting is a  paradigm of a creative machine, which starts
 as, and is, NON-algorithmic and UNstructured  in all its activities, albeit
 that it acquires and creates a multitude of algorithms, or
 routines/structures, for *parts* of those  activities. For example, when you
 write a post,  nearly every word and a great many phrases and even odd
 sentences, will be automatically, algorithmically produced. But the whole
 post, and most paras will *not* be - and *could not* be.

 A creative machine has infinite combinative potential. An algorithmic,
 programmed machine has strictly limited combinativity..

 And a keyboard is surely the near perfect symbol of infinite, unstructured
 combinativity. It is being, and has been, used in endlessly creative ways -
 and is, along with the blank page and pencil, the central tool of our
 civilisation's creativity. Those randomly arranged letters - clearly
 designed to be infinitely recombined - are the antithesis of a programmed
 machine.

 So however those guys account for that keyboard, I don't see them as in any
 way accounting for it in my sense, or in its true, full usage. But thanks
 for your comments. (Oh and I did understand re Bayes - I was and am still
 arguing he isn't valid in many cases, period).


 Mike,

 The reason I decided that what you are arguing for is essentially an
 interactive model is this quote:

 But that is obviously only the half of it.Computers are obviously
 much more than that - and  Turing machines. You just have to look at
 them. It's staring you in the face. There's something they have that
 Turing machines don't. See it? Terren?

 They have -   a keyboard.

 A keyboard is precisely what the interaction theorists are trying to
 account for! Plus the mouse, the ethernet port, et cetera.

 Moreover, your general comments fit into the model if interpreted
 judiciously. You make a distinction between rule-based and creative
 behavior; rule-based behavior could be thought of as isolated
 processing of input (receive input, process without interference,
 output result) while creative behavior is behavior resulting from
 continual interaction with and exploration of the external world. Your
 concept of organisms as organizers only makes sense when I see it in
 this light: a human organizes the environment by interaction with it,
 while a Turing machine is unable to do this because it cannot
 explore/experiment/discover.

 -Abram

 On Thu, Sep 4, 2008 at 1:07 PM, Mike Tintner [EMAIL PROTECTED]
 wrote:

 Abram,

 Thanks for reply. But I don't understand what you see as the connection.
 An
 interaction machine from my brief googling is one which has physical
 organs.

 Any factory machine can be thought of as having organs. What I am trying
 to
 forge is a new paradigm of a creative, free  machine as opposed to that
 exemplified by most actual machines, which are rational, deterministic
 machines. The latter can only engage in any task in set ways - and
 therefore
 engage and combine their organs in set combinations and sequences.
 Creative
 machines have a more or less infinite range of possible ways of going
 about
 things, and can combine their organs in a virtually infinite range of
 combinations, (which gives them a slight advantage, adaptively :) ).
 Organisms *are* creative machines; computers and robots *could* be (and
 are,
 when combined with humans), AGI's will *have* to be.

 (To talk of creative machines, more specifically, as I did, as
 keyboards/organisers is to focus on the mechanics of this infinite
 combinativity of organs).

 Interaction machines do not seem in any way then to entail what I'm
 talking
 about - creative machines - keyboards/ organisers - infinite
 combinativity
 - or the *creation,* as quite distinct from *following*  of
 programs/algorithms and routines..



 Abram/MT: If you think it's all been said, please point me to the
 philosophy of AI

 that includes it.

 I believe what you are suggesting is best understood as