Dear Sir or Madam,
The human brain knows that it does not know. This very fact prompts it to
act to know, producing the innovation which has brought us to this moment in
time. This is because it derives all of its vital energy from a source beyond
itself... above it and infinitely more
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
On 10/26/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
I'm quite convinced that what I would want, for example, is not
circular. And I find it rather improbable that many of you other
humans would end up in a loop either. So CEV is not
On 26/10/2007, albert medina [EMAIL PROTECTED] wrote:
Dear Sir or Madam,
The human brain knows that it does not know. This very fact prompts it to
act to know, producing the innovation which has brought us to this moment in
time. This is because it derives all of its vital energy from a
Albert is referring to the 'Soul life force', if I understand him correctly. It
is a theory just like any other, and probably coincides with many people's wish
to achieve Utopia on an individual level, and not with the help of AIs!
Date: Fri, 26 Oct 2007 17:30:26 +1000 From: [EMAIL PROTECTED]
On 10/26/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
What you get once the dynamic has run its course is whatever
convergent answers were obtained on the topic of what humans would
want. You do not need these answers to set the dynamic in motion. We
already know the part that we don't want
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
My one sentence summary of CEV is: What would a better me/humanity want?
Is that in line with your understanding? For an AI to model a 'better'
me/humanity it would have to know what 'good' is - a definition of good -
and that is the
Yes Matt, I understand what you are saying; it is more commonly referred to as
'Quantum physics / Quantum Metaphysics'.
What you are forgetting, however, is that we would be creating AIs based on our
own known consciousness. Therefore they would have the consciousness we
ourselves perceive
Stefan,
I plan to read your roughly 112-page book 'Jame5' this weekend while sitting in
taxis / on planes etc. to Ireland. PS: I really like the graphics on the
cover; interesting that the 'fireball' is the one that is about to hit the
rest of the balls! PPS: Why the name JAME5?
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
My one sentence summary of CEV is: What would a better me/humanity
want?
Is that in line with your understanding?
No...
I'm not sure I fully grok Eliezer's intentions/ideas, but I will summarize
here the current idea I have of CEV...
On 10/26/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
In summary, one would need to define 'good' first in order to set the CEV
dynamic in motion; otherwise the AI would not be able to model a better
me/humanity.
Present questions to
BillK,
Let's define the word 'theory' for you, shall we? See possible explanations
below... you could possibly reference a dictionary online for the same
content... or better yet, pop down to your local bookshop!
1. A coherent group of general propositions used as principles of
On 10/26/07, candice schuster wrote:
Albert is referring to the 'Soul life force', if I understand him correctly.
It is a theory just like any other, and probably coincides with many people's
wish to achieve Utopia on an individual level, and not with the help of AIs
On 26/10/2007, albert
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Why do you say that "Our reign will end in a few decades" when, in fact, one
of the most obvious things that would happen in this future is that
humans will be able to *choose* what intelligence level to be
experiencing, on a day
candice schuster wrote:
Richard,
Your responses to me seem to go round in circles. No insult intended,
however.
You say the AI will in fact reach full consciousness. How on earth
would that ever be possible?
I think I recently (last week or so) wrote out a reply to someone on the
On 10/26/07, Benjamin Goertzel wrote:
My understanding is that it's more like this (taking some liberties)
X0 = me
Y0 = what X0 thinks is good for the world
X1 = what X0 wants to be
Y1 = what X1 would think is good for the world
X2 = what X1 would want to be
Y2 = what X2 would think is
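Ben's recursion reads naturally as a fixed-point iteration. Below is a minimal
sketch of that reading in Python; wants_to_be() and thinks_good() are invented
placeholders for the extrapolation step, which CEV itself leaves unspecified.

# Illustrative sketch only: the X/Y recursion above as a fixed-point
# iteration. wants_to_be() and thinks_good() are hypothetical stand-ins
# for the extrapolation step, not anything CEV actually defines.
def extrapolate(me, wants_to_be, thinks_good, max_steps=100):
    x = me                       # X0 = me
    for _ in range(max_steps):
        y = thinks_good(x)       # Yn = what Xn thinks is good for the world
        x_next = wants_to_be(x)  # Xn+1 = what Xn wants to be
        if x_next == x:          # fixed point: a 'better me' who no
            return y             # longer wants to be anyone different
        x = x_next
    return None                  # no convergence within the step budget

Whether the sequence converges at all is, of course, exactly the circularity
worry being debated elsewhere in this thread.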
Benjamin Goertzel wrote:
On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
My one sentence summary of CEV is: What would a better
me/humanity want?
Is that in line with your understanding?
No...
I'm not sure I fully grok Eliezer's
Pardon me for butting in and apologies if I've missed some crucial point or
complex idea, but having dug up a synopsis of John Searle's argument:
A man is in a room with a book of rules. Chinese sentences are passed under
the door to him. The man looks up in his book of rules how to process the
On 26/10/2007, Allen Majorovic [EMAIL PROTECTED] wrote:
It seems to me that Mr. Searle is suggesting that because some people
(intelligences) are cooks, i.e. work from a set of rules they don't
understand, this somehow proves that chemists, i.e. people who *do*
understand the set of rules,
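For what it's worth, the rule book in that synopsis amounts to a pure lookup
procedure. Here is a minimal sketch in Python, assuming the rules reduce to a
pattern-to-response table; the table entries are invented for illustration.

# A Chinese-Room-style responder: the 'man with the book of rules'
# as a lookup table. Symbols go in, symbols come out, and nothing in
# the procedure represents the meaning of either.
RULE_BOOK = {
    '你好吗?': '我很好。',        # invented example rule
    '你是谁?': '我是一个房间。',  # invented example rule
}

def room(sentence: str) -> str:
    # Process the sentence exactly as the rule book dictates.
    return RULE_BOOK.get(sentence, '请再说一遍。')  # default rule

print(room('你好吗?'))  # prints 我很好。 with no understanding anywhere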
On 10/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Stefan can correct me if I am wrong here, but I think that both yourself
and Aleksei have misunderstood the sense in which he is pointing to a
circularity.
If you build an AGI, and it sets out to discover the convergent desires
(the
Stefan Pernar wrote:
On 10/26/07, *Richard Loosemore* [EMAIL PROTECTED] wrote:
Stefan can correct me if I am wrong here, but I think that both yourself
and Aleksei have misunderstood the sense in which he is pointing to a
circularity.
If you build
On 26/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
If you build an AGI, and it sets out to discover the convergent desires
(the CEV) of all humanity, it will be doing this because it has the goal
of using this CEV as the basis for the friendly motivations that will
henceforth guide it.
[EMAIL PROTECTED] wrote:
I have to applaud this comment, and its general tenor.
-- Original message from Mike Tintner
[EMAIL PROTECTED]: --
Every speculation on this board about the nature of future AGIs
has been
pure fantasy. Even those
So a VPOP is defined to be a safe AGI. And its purpose is to solve the
problem of building the first safe AGI...
No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal
of carrying out a certain kind of extrapolation.
What you are doubting, perhaps, is that it is
Benjamin Goertzel wrote:
So a VPOP is defined to be a safe AGI. And its purpose is to solve the
problem of building the first safe AGI...
No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal
of carrying out a certain kind of extrapolation.
What you are
I have to applaud this comment, and its general tenor.
-- Original message from Mike Tintner [EMAIL PROTECTED]:
--
Every speculation on this board about the nature of future AGIs has been
pure fantasy. Even those which try to dress themselves up in some semblance
Richard Loosemore wrote:
candice schuster wrote:
Richard,
Your responses to me seem to go round in circles. No insult intended,
however.
You say the AI will in fact reach full consciousness. How on earth
would that ever be possible?
I think I recently (last week or so) wrote out a
Charles D Hixson wrote:
Richard Loosemore wrote:
candice schuster wrote:
Richard,
Your responses to me seem to go round in circles. No insult intended,
however.
You say the AI will in fact reach full consciousness. How on earth
would that ever be possible?
I think I recently (last
Richard,
Kindly just let it go. I'm new to the list and not really familiar with
everyone's idiosyncrasies. What I was applauding was what I perceive as his
remark's unwillingness to consider as somehow passé and irrelevant the
particularly human concerns and real pragmatic human
I noticed on a later read that you differentiate between systems
designed to operate via goal stacks and those operating via a motivational
system. This is not the meaning of goal that I was using.
To me, if a motive is a theorem to prove, then a goal is a lemma needed
to prove the theorem. I
Charles D Hixson wrote:
I noticed on a later read that you differentiate between systems
designed to operate via goal stacks and those operating via a motivational
system. This is not the meaning of goal that I was using.
To me, if a motive is a theorem to prove, then a goal is a lemma needed
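Read that way, a 'goal stack' is literally a stack: subgoals (lemmas) get
pushed on top of the motive (the theorem) and are discharged first. A minimal
Python sketch of the distinction being drawn, with invented goal names:

# Sketch of the goal-stack reading: a motive ('theorem') sits at the
# bottom; goals ('lemmas') are pushed on top and are worked innermost
# first. Goal names are invented for illustration.
goal_stack = ['prove_theorem']           # the motive
goal_stack.append('prove_lemma_1')       # a goal serving the motive
goal_stack.append('prove_sub_lemma_1a')  # a sub-goal serving that goal

while goal_stack:
    print('working on:', goal_stack.pop())  # sub-lemma, lemma, then theorem

The pop order (sub-lemma, then lemma, then theorem) is what makes the
lemma analogy apt.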
No Richard, I meant Full Frontal Lobotomy, seeing as that has not been
done before, ha ha!
..."Either that or this is a great title for a movie about a lab full of
scientists trapped in a nudist colony." God, I shudder to think of the
thought... mind you, your AIs might have a thing or
On 10/27/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
On 10/27/07, Stefan Pernar [EMAIL PROTECTED] wrote:
On 10/27/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
An AI implementing CEV doesn't question whether the thing that humans
express that they ultimately want is 'good' or not. If it is
On 10/27/07, Stefan Pernar [EMAIL PROTECTED] wrote:
On 10/27/07, Aleksei Riikonen [EMAIL PROTECTED] wrote:
An AI implementing CEV doesn't question whether the thing that humans
express that they ultimately want is 'good' or not. If it is what the
humans really want, then it is done. No thinking