On Oct 27, 2007, at 1:55 AM, Aleksei Riikonen wrote:
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
You seem to have a need to personally give a final answer to "What is 'good'?" -- an answer to what moral rules the universe should be governed by. If you think that your answer is better than what the "surveying" process that CEV is would produce, I think your attitude amounts to delusions of grandeur.
I do not find it very credible to simply claim that the CEV answer will be significantly better. Yes, you can argue it "by construction", simply because the entire thing is defined to be the very best at this particular job. But that it is achievable and would be best is not provable. As long as it is not, calling anyone's opinion that they, or some other human or group of humans, could do better "delusions of grandeur" is not justified.
The first fallacy one runs into there is this: "The question what friendliness means however existed before all of those problems, is a separate one and needs to be answered before the creation of a friendly AI can be attempted."
What is "friendly"? That is a good question. However it is not
exactly at all crisp.
It's not at all necessary to answer this question before the creation
of a friendly AI can be attempted. This problem, "choosing something
nice to do with the AI" (which you here referred to as "what
friendliness means"), number 2 in the enumeration of the separate
problems on the CEV page, can be handled in the way I've repeatedly
described to you. There's no need to try to come up with a final
answer to this while we as humans are as limited in intelligence and
knowledge as we currently are.
I find it obvious that whatever answer you give, it isn't better than the answer the smarter and more knowledgeable humans, who are surveyed in the CEV process, would give. I find it infinitely preferable to find out what they would say ("they" would include wiser versions of you and me), instead of taking whatever the current you says as the final answer to this question, which all the smartest human philosophers have tried to solve for all of human history without coming up with an answer that could be considered to settle the issue.
I hope you would take a look at my suggested theory before dismissing it. Would be more than happy to engage in a discussion with you based on mutual respect.
I have looked at your writing a bit, but actually there isn't much point in my doing so, unless you really claim it is a *smarter* conclusion than the one the "humans surveyed in the CEV process" would arrive at. And how could you claim to be able to present a smarter conclusion than one produced by people smarter than us, who have a lot more knowledge than us, that knowledge including all that you have written?
This is a bit of a long con. These "people smarter than us" are totally hypothetical. Here in the real world, right now, I think we had darn well better come up with the best notion of "friendliness" we can and steer toward that. That very much includes not shutting people down for attempting to make some hopefully relevant suggestions. What we do now with our limited intelligence (but of necessity all the intelligence we can work with) determines whether there ever will be greatly smarter humans, with or without a CEV. It could well determine whether there are any future humans at all. We can't steer our course N years ahead of the bit of road right in front of us, or leave it to our hypothetical betters or to the CEV dream machine.
Even if I accepted that you are the brightest philosopher who has ever
lived, and have come up with a solution that has eluded all that have
come before, don't you see that the humans surveyed by CEV would be
aware of what you have written, and would come to the same conclusion
if it really is that smart? How then could your proposal be better
than CEV, when CEV would result in the exact same thing?
This is mental masturbation. I don't see that it does one whit of good. It doesn't give any real guidance for doing, in the present, what is likely to get us to a better tomorrow.
- samantha