On 26 Jan 2010, at 22:29, Jack Mallah wrote:
--- On Tue, 1/26/10, Bruno Marchal marc...@ulb.ac.be wrote:
On 25 Jan 2010, at 23:16, Jack Mallah wrote:
Killing one man is not OK just because he has a brother.
In our context, the 'brother' has the same consciousness.
The brother most
On 27 Jan 2010, at 01:39, Mark Buda wrote:
Bruno Marchal wrote:
On 25 Jan 2010, at 23:15, Mark Buda wrote:
On 25 Jan 2010, at 04:39, Mark Buda wrote:
Bruno Marchal wrote:
I would suggest the SANE 2004 paper:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.htm
Are you OK
2010/1/27 Jack Mallah jackmal...@yahoo.com:
See above. That would be a measure-conserving process, so it would be OK.
I would be upset at the prospect of someone killing me even if they
filled the world with angelic beings by way of atonement, because it
would not feel as if any of them were
I'm replying to this bit separately, since Bruno touched on a different issue
than the others have. My reply to the main measure again '10 thread will
follow under the original title.
--- On Wed, 1/27/10, Bruno Marchal marc...@ulb.ac.be wrote:
I would also not say yes to a computationalist
On 28 January 2010 05:31, Brent Meeker meeke...@dslextreme.com wrote:
If I understand you correctly, your discussion of copies really refers to
copies that exist in different identical worlds, e.g. like different copies
of the same AI running in identical virtual environments, so that they can
Jack,
You mentioned that ending the existence of a suffering copy can be positive.
I am curious, would you consider ending any observer whose quality of life
was less than the average weighted (by number of copies) quality of life of
all observers everywhere? Consider this example:
Hey There,
I love reading the posts on this group, and I find a lot of the ideas
mind-blowing (and more than occasionally over my head), but I was
wondering if anyone could clarify these questions:
1) Is QI implied by UDA and comp?
2) Is QI implied by ASSA/RSSA?
More generally, what is the