Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Hi, What I think is that the set of patterns in perceptual and motoric data has radically different statistical properties than the set of patterns in linguistic and mathematical data ... and that the properties of the set of patterns in perceptual and motoric data are intrinsically

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Also, relatedly and just as critically, the set of perceptions regarding the body and its interactions with the environment is well-structured to give the mind a sense of its own self. This primitive infantile sense of body-self gives rise to the more sophisticated phenomenal self of the

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread William Pearson
2008/9/4 Mike Tintner [EMAIL PROTECTED]: Terren, If you think it's all been said, please point me to the philosophy of AI that includes it. A programmed machine is an organized structure. A keyboard (and indeed a computer with keyboard) is something very different - there is no

Re: [agi] draft for comment.. P.S.

2008-09-04 Thread Valentina Poletti
That's if you aim at getting an AGI that is intelligent in the real world. I think some people on this list (including Ben, perhaps) might argue that for now - for safety purposes but also due to costs - it might be better to build an AGI that is intelligent in a simulated environment. People like Ben

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-04 Thread Valentina Poletti
That sounds like a useful purpose. Yeah, I don't believe in fast and quick methods either... but also humans tend to overestimate their own capabilities, so it will probably take more time than predicted. On 9/3/08, William Pearson [EMAIL PROTECTED] wrote: 2008/8/28 Valentina Poletti [EMAIL

Re: [agi] What is Friendly AI?

2008-09-04 Thread Valentina Poletti
On 8/31/08, Steve Richfield [EMAIL PROTECTED] wrote: Protective mechanisms to restrict their thinking and action will only make things WORSE. Vlad, this was my point in the control e-mail, I didn't express it quite as clearly, partly because coming from a different background I use a

Re: [agi] What is Friendly AI?

2008-09-04 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:02 PM, Valentina Poletti [EMAIL PROTECTED] wrote: Vlad, this was my point in the control e-mail, I didn't express it quite as clearly, partly because coming from a different background I use a slightly different language. Also, Steve made another good point here:

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:10 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Sure it is. Systems with different sensory channels will never fully understand each other. I'm not saying that one channel (verbal) can replace another (visual), but that both of them (and many others) can give

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:12 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Also, relatedly and just as critically, the set of perceptions regarding the body and its interactions with the environment is well-structured to give the mind a sense of its own self. This primitive infantile sense of

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Will: You can't create a program out of thin air. So you have to have some sort of program to start with. Not out of thin air. Out of a general instruction and desire[s]/emotion[s]. Write me a program that will contradict every statement made to it. Write me a single program that will allow me to

Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
I agree with Pei in that a robot's experience is not necessarily more real than that of a, say, web-embedded agent - if anything it is closer to the *human* experience of the world. But who knows how limited our own sensory experience is anyhow. Perhaps a better intelligence would comprehend the

Re: [agi] What Time Is It? No. What clock is it?

2008-09-04 Thread Valentina Poletti
Great articles! On 9/4/08, Brad Paulsen [EMAIL PROTECTED] wrote: Hey gang... It's Likely That Times Are Changing http://www.sciencenews.org/view/feature/id/35992/title/It%E2%80%99s_Likely_That_Times_Are_Changing A century ago, mathematician Hermann Minkowski famously merged space with

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Valentina Poletti
Programming definitely feels like an art to me - I get the same feelings as when I am painting. I always wondered why. On the philosophical side, in general, technology is the ability of humans to adapt the environment to themselves instead of the opposite - adapting to the environment. The

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Obviously you didn't consider the potential a laptop has with its network connection, which in theory can give it all kinds of perception by connecting it to some input/output device. yes, that's true ... I was considering the laptop w/ only a power cable as the AI system in question. Of

Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
OK, then the observable universe has a finite description length. We don't need to describe anything else to model it, so by universe I mean only the observable part. But, what good is it to only have finite description of the observable part, since new portions of the universe enter the

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
Hi Pei, I think your point is correct that the notion of embodiment presented by Brooks and some other roboticists is naive. I'm not sure whether their actual conceptions are naive, or whether they just aren't presenting their foundational philosophical ideas clearly in their writings (being

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
However, could you guys be more specific regarding the statistical differences of different types of data? What kind of differences are you talking about specifically (mathematically)? And what about the differences at the various levels of the dual-hierarchy? Has any of your work or

Re: [agi] draft for comment

2008-09-04 Thread Valentina Poletti
On 9/4/08, Ben Goertzel [EMAIL PROTECTED] wrote: However, could you guys be more specific regarding the statistical differences of different types of data? What kind of differences are you talking about specifically (mathematically)? And what about the differences at the various levels of

Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
To clarify what I mean by observable universe, I am including any part that could be observed in the future, and therefore must be modeled to make accurate predictions. For example, if our universe is computed by one of an enumeration of Turing machines, then the other enumerations are outside

Re: [agi] draft for comment

2008-09-04 Thread Ben Goertzel
So in short you are saying that the main difference between I/O data by a motor-embodied system (such as a robot or human) and a laptop is the ability to interact with the data: make changes in its environment to systematically change the input? Not quite ... but, to interact w/ the data in a

Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 10:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote: To clarify what I mean by observable universe, I am including any part that could be observed in the future, and therefore must be modeled to make accurate predictions. For example, if our universe is computed by one of an

Re: [agi] draft for comment

2008-09-04 Thread Terren Suydam
Hi Ben, You may have stated this explicitly in the past, but I just want to clarify - you seem to be suggesting that a phenomenological self is important if not critical to the actualization of general intelligence. Is this your belief, and if so, can you provide a brief justification of

Real vs. simulated environments (was Re: [agi] draft for comment.. P.S.)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Valentina Poletti [EMAIL PROTECTED] wrote: People like Ben argue that the concept/engineering aspect of intelligence is independent of the type of environment. That is, given you understand how to make it in a virtual environment you can then transpose that concept into a real

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
On Thu, Sep 4, 2008 at 12:47 AM, Mike Tintner [EMAIL PROTECTED] wrote: Terren, If you think it's all been said, please point me to the philosophy of AI that includes it. I believe what you are suggesting is best understood as an interaction machine. General references:

Re: Computation as an explanation of the universe (was Re: [agi] Recursive self-change: some definitions)

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote: So, my only remaining objection is that while the universe *could* be computable, it seems unwise to me to totally rule out the alternative. You're right. We cannot prove that the universe is computable. We have evidence like Occam's

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Abram, Thanks for the reply. But I don't understand what you see as the connection. An interaction machine, from my brief googling, is one which has physical organs. Any factory machine can be thought of as having organs. What I am trying to forge is a new paradigm of a creative, free machine as

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam
Mike, Thanks for the reference to Denis Noble, he sounds very interesting and his views on Systems Biology as expressed on his Wikipedia page are perfectly in line with my own thoughts and biases. I agree in spirit with your basic criticisms regarding current AI and creativity. However, it

Re: [agi] draft for comment

2008-09-04 Thread Matt Mahoney
--- On Wed, 9/3/08, Pei Wang [EMAIL PROTECTED] wrote: TITLE: Embodiment: Who does not have a body? AUTHOR: Pei Wang ABSTRACT: In the context of AI, "embodiment" should not be interpreted as "giving the system a body", but as "adapting to the system's experience". Therefore, being

[agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
A closed model is one that is interpreted as representing all truths about that which is modeled. An open model is instead interpreted as making a specific set of assertions, and leaving the rest undecided. Formally, we might say that a closed model is interpreted to include all of the truths, so
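[Editorial illustration, not part of the original post: the open/closed distinction Abram describes can be sketched as two query functions over a set of assertions. The representation and names here are my own, a minimal sketch only.]

```python
def query_closed(model, statement):
    # Closed model: the model is taken to represent ALL truths,
    # so anything not asserted is treated as false
    # (the closed-world assumption).
    return statement in model

def query_open(model, statement):
    # Open model: the model makes a specific set of assertions
    # and leaves everything else undecided.
    if statement in model:
        return True
    if ("not", statement) in model:
        return False
    return None  # undecided, not false

facts = {"sky_is_blue", ("not", "sky_is_green")}

print(query_closed(facts, "grass_is_green"))  # False: absence counts as falsity
print(query_open(facts, "grass_is_green"))    # None: simply not asserted
```

The only difference is how the two readings treat a statement the model is silent about.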

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
Mike, The reason I decided that what you are arguing for is essentially an interactive model is this quote: But that is obviously only the half of it. Computers are obviously much more than that - and Turing machines. You just have to look at them. It's staring you in the face. There's something

Re: [agi] open models, closed models, priors

2008-09-04 Thread Matt Mahoney
In a closed model, every statement is either true or false. In an open model, every statement is either true or uncertain. In reality, all statements are uncertain, but we have a means to assign them probabilities (not necessarily accurate probabilities). A closed model is unrealistic, but an

Re: [agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
Matt, My intention here is that there is a basic level of well-defined, crisp models which probabilities act upon; so in actuality the system will never be using a single model, open or closed... (in a hurry now, more comments later) --Abram On Thu, Sep 4, 2008 at 2:47 PM, Matt Mahoney [EMAIL

Re: [agi] open models, closed models, priors

2008-09-04 Thread Mike Tintner
Matt, I'm confused here. What I mean is that in real life, the probabilities are mathematically incalculable, period, a good deal of the time - you cannot go, as you v. helpfully point out, much beyond saying "this is fairly probable", "may happen", "there's some chance"... And those words are

Re: [agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
Mike, standard Bayesianism somewhat accounts for this-- exact-number probabilities are defined by the math, but in no way are they seen as the real probability values. A subjective prior is chosen, which defines all further probabilities, but that prior is not believed to be correct. Subsequent
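[Editorial illustration, not from the thread: the subjective-prior point can be sketched with a conjugate beta-Bernoulli update. The particular prior values below are arbitrary examples; the point is that the exact numbers in the prior are not claimed to be "real", yet evidence pulls any prior toward the data.]

```python
def posterior_mean(prior_a, prior_b, successes, failures):
    # Beta(a, b) prior over a Bernoulli parameter; the conjugate
    # update just adds observed counts to the pseudo-counts.
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

# Two different subjective priors (mean 0.9 vs mean 0.1)...
optimist = posterior_mean(9.0, 1.0, 70, 30)
pessimist = posterior_mean(1.0, 9.0, 70, 30)

# ...both drift toward the empirical rate 0.7 as data accumulates.
print(round(optimist, 3), round(pessimist, 3))
```

With enough observations the two posteriors become practically indistinguishable, which is the sense in which the chosen prior "is not believed to be correct" yet still does useful work.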

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 8:56 AM, Valentina Poletti [EMAIL PROTECTED] wrote: I agree with Pei in that a robot's experience is not necessarily more real than that of a, say, web-embedded agent - if anything it is closer to the human experience of the world. But who knows how limited our own

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote: And as a matter of scientific, historical fact, computers are first and foremost keyboards - i.e. devices for CREATING programs on keyboards - and only then following them. [Remember how AI gets almost everything about intelligence back to

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Abram, Thanks. V. helpful and interesting. Yes, on further examination, these interactionist guys seem, as you say, to be trying to take into account the embeddedness of the computer. But no, there's still a huge divide between them and me. I would liken them in the context of this

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Terren Suydam wrote: Thus is creativity possible while preserving determinism. Of course, you still need to have an explanation for how creativity emerges in either case, but in contrast to what you said before, some AI folks have indeed worked on this issue.

Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote: I think this is a good important point. I've been groping confusedly here. It seems to me computation necessarily involves the idea of using a code (?). But the nervous system seems to me something capable of functioning without a code -

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: And what I am asserting is a paradigm of a creative machine, which starts as, and is, NON-algorithmic and UNstructured in all its activities, albeit that it acquires and creates a multitude of algorithms, or routines/structures, for *parts*

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote: And how to produce creativity is the central problem of AGI - completely unsolved. So maybe a new approach/paradigm is worth at least considering rather than more of the same? I'm not aware of a single idea from any AGI-er past or present

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: Do you honestly think that you write programs in a programmed way? That it's not an *art*, pace Matt, full of hesitation, halts, meandering, twists and turns, dead ends, detours etc.? If you have to have some sort of program to start with, how

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 9:35 AM, Ben Goertzel [EMAIL PROTECTED] wrote: I understand that a keyboard and touchpad do provide proprioceptive input, but I think it's too feeble, and too insensitively respondent to changes in the environment and the relation btw the laptop and the environment, to

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Valentina Poletti wrote: When we want to step further and create an AGI I think we want to externalize the very ability to create technology - we want the environment to start adapting to us by itself, spontaneously by gaining our goals. There is a sense of

Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: A closed model is unrealistic, but an open model is even more unrealistic because you lack a means of assigning likelihoods to statements like "the sun will rise tomorrow" or "the world will end tomorrow". You absolutely must have a means of

Re: [agi] draft for comment

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: Another aspect of embodiment (as the term is commonly used), is the false appearance of intelligence. We associate intelligence with humans, given that there are no other examples. So giving an AI a face or a robotic body modeled after a human

Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Abram Demski wrote: My intention here is that there is a basic level of well-defined, crisp models which probabilities act upon; so in actuality the system will never be using a single model, open or closed... I think Mike's model is one more of approach,

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 10:04 AM, Ben Goertzel [EMAIL PROTECTED] wrote: Hi Pei, I think your point is correct that the notion of embodiment presented by Brooks and some other roboticists is naive. I'm not sure whether their actual conceptions are naive, or whether they just aren't presenting

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Terren: I agree in spirit with your basic criticisms regarding current AI and creativity. However, it must be pointed out that if you abandon determinism, you find yourself in the world of dualism, or worse. Nah. One word (though it would take too long here to explain): nondeterministic

Re: [agi] draft for comment

2008-09-04 Thread Pei Wang
On Thu, Sep 4, 2008 at 2:22 PM, Matt Mahoney [EMAIL PROTECTED] wrote: The paper seems to argue that embodiment applies to any system with inputs and outputs, and therefore all AI systems are embodied. No. It argues that since every system has inputs and outputs, 'embodiment', as a non-trivial

Re: [agi] open models, closed models, priors

2008-09-04 Thread Pei Wang
Abram, I agree with the spirit of your post, and I even go further to include being open in my working definition of intelligence --- see http://nars.wang.googlepages.com/wang.logic_intelligence.pdf I also agree with your comment on Solomonoff induction and Bayesian prior. However, I talk about

[agi] How to Guarantee Creativity...

2008-09-04 Thread Mike Tintner
Mike Tintner wrote: And how to produce creativity is the central problem of AGI - completely unsolved. So maybe a new approach/paradigm is worth at least considering rather than more of the same? I'm not aware of a single idea from any AGI-er past or present that directly addresses that problem

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Mike Tintner
Bryan, You start v. constructively thinking how to test the non-programmed nature of - or simply record - the actual writing of programs, and then IMO fail to keep going. There have to be endless more precise ways than trying to look at their brain. Verbal protocols. Ask them to use the

Re: [agi] open models, closed models, priors

2008-09-04 Thread Matt Mahoney
--- On Thu, 9/4/08, Bryan Bishop [EMAIL PROTECTED] wrote: On Thursday 04 September 2008, Matt Mahoney wrote: A closed model is unrealistic, but an open model is even more unrealistic because you lack a means of assigning likelihoods to statements like the sun will rise tomorrow or the

Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Mike Tintner
Bryan, How do you know the brain has a code? Why can't it be entirely impressionistic - a system for literally forming, storing and associating sensory impressions (including abstracted, simplified, hierarchical impressions of other impressions)? 1). FWIW some comments from a cortically

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Terren Suydam
OK, I'll bite: what's nondeterministic programming if not a contradiction? --- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote: Nah. One word (though it would take too long here to explain): nondeterministic programming.
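[Editorial note, my own reading rather than anything stated in the thread: in the classic usage, a nondeterministic program specifies *which* choices exist without specifying *how* they are made, as with McCarthy's `amb` operator; a deterministic interpreter can then realize it by exhaustive backtracking, so there need be no contradiction. A minimal sketch:]

```python
from itertools import product

def amb_solve(domains, constraint):
    # The "program" only says: pick one value from each domain such
    # that the constraint holds. The search strategy (here, brute-force
    # enumeration with backtracking-by-iteration) is left unspecified
    # by the program itself.
    for choice in product(*domains):
        if constraint(*choice):
            return choice
    return None  # no assignment satisfies the constraint

# "Pick x and y from 1..5 such that x * y == 12 and x < y."
print(amb_solve([range(1, 6), range(1, 6)],
                lambda x, y: x * y == 12 and x < y))  # → (3, 4)
```

The nondeterminism lives in the specification (any satisfying choice is acceptable); the execution underneath remains perfectly deterministic.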

Re: [agi] open models, closed models, priors

2008-09-04 Thread Abram Demski
Pei, I sympathize with your care in wording, because I'm very aware of the strange meaning that the word model takes on in formal accounts of semantics. While a cognitive scientist might talk about a person's model of the world, a logician would say that the world is a model of a first-order

Re: [agi] A New Metaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Abram Demski
Mike, In that case I do not see how your view differs from simplistic dualism, as Terren cautioned. If your goal is to make a creativity machine, in what sense would the machine be non-algorithmic? Physical random processes? --Abram On Thu, Sep 4, 2008 at 6:59 PM, Mike Tintner [EMAIL PROTECTED]