Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Suppose that the collective memories of all the humans make up only one billionth of your total memory, like one second of memory out of your human lifetime. Would it make much difference if it was erased to make room for
--- Richard Loosemore [EMAIL PROTECTED] wrote:
My assumption is friendly AI under the CEV model. Currently, FAI is unsolved. CEV only defines the problem of friendliness, not a solution. As I understand it, CEV defines AI as friendly if on average it gives humans what they want in the
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Why do you say that "Our reign will end in a few decades" when, in fact, one of the most obvious things that would happen in this future is that humans will be able to *choose* what intelligence level to be experiencing, on a day to day basis?
Richard, I have no doubt that the technological wonders you mention will all be possible after a singularity. My question is about what role humans will play in this. For the last 100,000 years, humans have been the most intelligent creatures on Earth. Our reign will end in a few decades. Who
candice schuster wrote (Tue, 23 Oct 2007 20:28:42 -0400, to singularity@v2.listbox.com, Subject: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]):
Hi Richard,
Without getting too technical on you... how do you propose implementing these ideas of yours?
Candice
This is a perfect example of how one person comes up with some positive, constructive ideas and then someone else waltzes right in, pays no attention to the actual arguments, pays no attention to the relative probability of different outcomes, but just sneers at the whole idea with
candice schuster wrote:
Hi Richard,
Without getting too technical on you... how do you propose implementing these ideas of yours?

In what sense?

The point is that implementation would be done by the AGIs, after we produce a blueprint for what we want.

Richard Loosemore
candice schuster wrote:
Richard,
Thank you for your response. I have read your other posts and understand what 'the story' is, so to speak. I understand where you are coming from, and when I talk about evolution theories this is not to throw a 'stick in the wheel', so to speak, it is to