[singularity] CONJECTURE OR TRUTH

2007-10-26 Thread albert medina
Dear Sir or Madam, The human brain knows that it does not know. This very fact prompts it to act to know, producing the innovation which has brought us to this moment in time. This is because it derives all of its vital energy from a source beyond itself. . .above it and infinitely more

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote: On 10/26/07, Aleksei Riikonen [EMAIL PROTECTED] wrote: I'm quite convinced that what I would want, for example, is not circular. And I find it rather improbable that many of you other humans would end up in a loop either. So CEV is not

Re: [singularity] CONJECTURE OR TRUTH

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, albert medina [EMAIL PROTECTED] wrote: Dear Sir or Madam, The human brain knows that it does not know. This very fact prompts it to act to know, producing the innovation which has brought us to this moment in time. This is because it derives all of its vital energy from a

RE: [singularity] CONJECTURE OR TRUTH

2007-10-26 Thread candice schuster
Albert is referring to the 'Soul life force' if I understand him correctly. It is a theory just like any other and probably coincides with many people's desire to achieve Utopia on an individual level and not with the help of AI's ! Date: Fri, 26 Oct 2007 17:30:26 +1000 From: [EMAIL PROTECTED]

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Aleksei Riikonen [EMAIL PROTECTED] wrote: What you get once the dynamic has run its course, is whatever convergent answers were obtained on the topic of what humans would want. You do not need these answers to set the dynamic in motion. We already know the part that we don't want

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote: My one sentence summary of CEV is: What would a better me/humanity want? Is that in line with your understanding? For an AI to model a 'better' me/humanity it would have to know what 'good' is - a definition of good - and that is the

RE: [singularity] John Searle...

2007-10-26 Thread candice schuster
Yes Matt, I understand what you are saying, more commonly referred to as ''Quantum physics / Quantum Metaphysics''. What you are forgetting however is that based on our own known consciousness we would be creating AI's. So therefore they would have the consciousness we ourselves perceive

RE: [singularity] 14 objections against AI/Friendly AI/The Singularity answered

2007-10-26 Thread candice schuster
Stefan, I plan to read your 112+- page book 'Jame5' this weekend while sitting in taxis / on planes etc to Ireland. PS : I really like the graphics on the cover, interesting that the 'fire ball' is the one that is about to hit the rest of the balls ! PPS : Why the name JAME5 ?

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote: My one sentence summary of CEV is: What would a better me/humanity want? Is that in line with your understanding? No... I'm not sure I fully grok Eliezer's intentions/ideas, but I will summarize here the current idea I have of CEV..

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Aleksei Riikonen [EMAIL PROTECTED] wrote: On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote: In summary one would need to define good first in order to set the CEV dynamic in motion, otherwise the AI would not be able to model a better me/humanity. Present questions to

RE: [singularity] CONJECTURE OR TRUTH

2007-10-26 Thread candice schuster
BillK, Let's define the word 'Theory' for you shall we ? See possible explanations below... you could possibly reference a Dictionary online for the same content...or better yet...pop down to your local bookshop ! 1. A coherent group of general propositions used as principles of

Re: [singularity] CONJECTURE OR TRUTH

2007-10-26 Thread BillK
On 10/26/07, candice schuster wrote: Albert is referring to the 'Soul life force' if I understand him correctly. It is a theory just like any other and probably coincides with many people's desire to achieve Utopia on an individual level and not with the help of AI's On 26/10/2007, albert

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-26 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Why do you say that "Our reign will end in a few decades" when, in fact, one of the most obvious things that would happen in this future is that humans will be able to *choose* what intelligence level to be experiencing, on a day

Re: [singularity] John Searle...

2007-10-26 Thread Richard Loosemore
candice schuster wrote: Richard, Your responses to me seem to go in roundabouts. No insult intended however. You say the AI will in fact reach full consciousness. How on earth would that ever be possible ? I think I recently (last week or so) wrote out a reply to someone on the

Re: [singularity] Re: CEV

2007-10-26 Thread BillK
On 10/26/07, Benjamin Goertzel wrote: My understanding is that it's more like this (taking some liberties) X0 = me Y0 = what X0 thinks is good for the world X1 = what X0 wants to be Y1 = what X1 would think is good for the world X2 = what X1 would want to be Y2 = what X2 would think is
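[The preview above sketches CEV as an iterated extrapolation: each X(n+1) is the person Xn would want to be, and each Y(n+1) is what that improved person would judge good for the world. A minimal illustrative sketch of that loop, in Python, is given below; the functions wants_to_be and thinks_is_good are hypothetical placeholders introduced here for illustration, not anything defined in the thread.]

    # Illustrative sketch of the iterated extrapolation described above.
    # wants_to_be(x) stands for "who person x would want to be"; thinks_is_good(x)
    # stands for "what person x would judge good for the world". Neither is
    # specified anywhere in the thread; they are assumptions of this sketch.
    from typing import Callable, List, Tuple, TypeVar

    Person = TypeVar("Person")
    Values = TypeVar("Values")

    def extrapolate_volition(
        me: Person,
        wants_to_be: Callable[[Person], Person],
        thinks_is_good: Callable[[Person], Values],
        steps: int = 3,
    ) -> List[Tuple[Person, Values]]:
        """Return the sequence (X0, Y0), (X1, Y1), ... up to the given number of steps."""
        x = me                      # X0 = me
        history = []
        for _ in range(steps + 1):
            y = thinks_is_good(x)   # Yn = what Xn would think is good for the world
            history.append((x, y))
            x = wants_to_be(x)      # X(n+1) = what Xn would want to be
        return history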

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Benjamin Goertzel wrote: On 10/26/07, Stefan Pernar [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote: My one sentence summary of CEV is: What would a better me/humanity want? Is that in line with your understanding? No... I'm not sure I fully grok Eliezer's

Re: [singularity] John Searle...

2007-10-26 Thread Allen Majorovic
Pardon me for butting in and apologies if I've missed some crucial point or complex idea but having dug up a synopsis of John Searle's argument: A man is in a room with a book of rules. Chinese sentences are passed under the door to him. The man looks up in his book of rules how to process the

Re: [singularity] John Searle...

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, Allen Majorovic [EMAIL PROTECTED] wrote: It seems to me that Mr. Searle is suggesting that because some people (intelligences) are cooks, i.e. work from a set of rules they don't understand, this somehow proves that chemists, i.e. people who *do* understand the set of rules,

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/26/07, Richard Loosemore [EMAIL PROTECTED] wrote: Stefan can correct me if I am wrong here, but I think that both yourself and Aleksei have misunderstood the sense in which he is pointing to a circularity. If you build an AGI, and it sets out to discover the convergent desires (the

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Stefan Pernar wrote: On 10/26/07, *Richard Loosemore* [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote: Stefan can correct me if I am wrong here, but I think that both yourself and Aleksei have misunderstood the sense in which he is pointing to a circularity. If you build

Re: [singularity] Re: CEV

2007-10-26 Thread Stathis Papaioannou
On 26/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote: If you build an AGI, and it sets out to discover the convergent desires (the CEV) of all humanity, it will be doing this because it has the goal of using this CEV as the basis for the friendly motivations that will henceforth guide it.

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-26 Thread Richard Loosemore
[EMAIL PROTECTED] wrote: I have to applaud this comment, and its general tenor. -- Original message from Mike Tintner [EMAIL PROTECTED]: -- Every speculation on this board about the nature of future AGI's has been pure fantasy. Even those

Re: [singularity] Re: CEV

2007-10-26 Thread Benjamin Goertzel
So a VPOP is defined to be a safe AGI. And its purpose is to solve the problem of building the first safe AGI... No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal of carrying out a certain kind of extrapolation What you are doubting, perhaps, is that it is

Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore
Benjamin Goertzel wrote: So a VPOP is defined to be a safe AGI. And its purpose is to solve the problem of building the first safe AGI... No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal of carrying out a certain kind of extrapolation What you are

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-26 Thread stolzy
I have to applaud this comment, and its general tenor. -- Original message from Mike Tintner [EMAIL PROTECTED]: -- Every speculation on this board about the nature of future AGI's has been pure fantasy. Even those which try to dress themselves up in some semblance

Re: [singularity] John Searle...

2007-10-26 Thread Charles D Hixson
Richard Loosemore wrote: candice schuster wrote: Richard, Your responses to me seem to go in roundabouts. No insult intended however. You say the AI will in fact reach full consciousness. How on earth would that ever be possible ? I think I recently (last week or so) wrote out a

Re: [singularity] John Searle...

2007-10-26 Thread Richard Loosemore
Charles D Hixson wrote: Richard Loosemore wrote: candice schuster wrote: Richard, Your responses to me seem to go in roundabouts. No insult intended however. You say the AI will in fact reach full consciousness. How on earth would that ever be possible ? I think I recently (last

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-26 Thread stolzy
Richard, Kindly just let it go. I'm new to the list and not really familiar with everyone's idiosyncrasies. What I was applauding was what I perceive as his remark's unwillingness to consider as somehow passé and irrelevant the particularly human concerns and real pragmatic human

Re: [singularity] John Searle...(supplement to prior post)

2007-10-26 Thread Charles D Hixson
I noticed in a later read that you differentiate between systems designed to operate via goal stacks and those operating via motivational system. This is not the meaning of goal that I was using. To me, if a motive is a theory to prove, then a goal is a lemma needed to prove the theory. I

Re: [singularity] John Searle...(supplement to prior post)

2007-10-26 Thread Richard Loosemore
Charles D Hixson wrote: I noticed in a later read that you differentiate between systems designed to operate via goal stacks and those operating via motivational system. This is not the meaning of goal that I was using. To me, if a motive is a theory to prove, then a goal is a lemma needed

RE: [singularity] John Searle...

2007-10-26 Thread candice schuster
No Richard, I meant Full Frontal Lobotomy, seeing as that has not been done before, ha, ha ! ...''Either that or this is a great title for a movie about a lab full of scientists trapped in a nudist colony'' God I shudder to think of the thought... mind you, your AI's might have a thing or

Re: [singularity] Re: CEV

2007-10-26 Thread Stefan Pernar
On 10/27/07, Aleksei Riikonen [EMAIL PROTECTED] wrote: On 10/27/07, Stefan Pernar [EMAIL PROTECTED] wrote: On 10/27/07, Aleksei Riikonen [EMAIL PROTECTED] wrote: An AI implementing CEV doesn't question whether the thing that humans express that they ultimately want is 'good' or not. If it is

[singularity] Re: CEV

2007-10-26 Thread Aleksei Riikonen
On 10/27/07, Stefan Pernar [EMAIL PROTECTED] wrote: On 10/27/07, Aleksei Riikonen [EMAIL PROTECTED] wrote: An AI implementing CEV doesn't question whether the thing that humans express that they ultimately want is 'good' or not. If it is what the humans really want, then it is done. No thinking