Mitchell Porter wrote:
Richard Loosemore:
In fact, if it knew all about its own design (and it would,
eventually), it would check to see just how possible it might be for
it to accidentally convince itself to disobey its prime directive,
But it doesn't have a prime directive, does it? It has large numbers
of
Ben,
I guess the issue I have with your critique is that you say that I have
given no details, no rigorous argument, just handwaving, etc.
But you are being contradictory: on the one hand you say that the
proposal is vague/underspecified/does not give any arguments, but
then having
Hi Richard,
Let me go back to the start of this dialogue...
Ben Goertzel wrote:
Loosemore wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would be similar to the
Hi,
There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it: I am proposing a general
*class* of architectures for an AI-with-motivational-system. I am not
saying that this is a specific instance (with all the details nailed
down)
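The class-versus-instance distinction being drawn here has a rough software analogue, sketched below with purely hypothetical names (none of this comes from the actual proposal): an abstract base class fixes the outline of a motivational system while leaving every detail open, and any concrete subclass is one fully specified instance of that class.

```python
# Hypothetical illustration of "a general *class* of architectures" versus
# "a specific instance": the names and methods are invented for this sketch
# and are not part of Loosemore's proposal.
from abc import ABC, abstractmethod

class MotivationalSystem(ABC):
    """The class of architectures: shape fixed, details deliberately open."""

    @abstractmethod
    def evaluate(self, action: str) -> float:
        """Score a candidate action against the system's motivations."""

    @abstractmethod
    def update(self, feedback: float) -> None:
        """Adjust internal state in response to feedback."""

class ToyInstance(MotivationalSystem):
    """One specific instance, with all the details nailed down."""

    def __init__(self) -> None:
        self.weight = 1.0

    def evaluate(self, action: str) -> float:
        return self.weight * len(action)  # toy scoring rule, nothing more

    def update(self, feedback: float) -> None:
        self.weight += 0.01 * feedback
```

Arguing about `MotivationalSystem` is arguing about the whole design space; only `ToyInstance` is something you could actually run or formally verify.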
Curious.
A couple of days ago, I responded to demands that I produce arguments to
justify the conclusion that there were ways to build a friendly AI that
was extremely stable and trustworthy, but without having to give a
mathematical proof of its friendliness.
Now, granted, the text was
The last I heard, computers are spied upon because of the language the
computer is generating. Why would the government care about the guy
who picks up garbage?
Richard Loosemore wrote, Wed, Oct 25, 2006:
The word "trapdoor" is a reference to trapdoor algorithms that allow
computers to be spied upon.
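For context on the term itself: a trapdoor algorithm is easy to run forward but infeasible to invert without a secret held by one party. The Python sketch below uses the standard textbook RSA numbers (it is not from the thread, and the parameters are deliberately tiny and insecure) to show the mechanism that makes such one-sided access possible in principle.

```python
# Toy trapdoor function in the cryptographic sense (Python 3.8+).
# Forward computation is easy for everyone; inversion is easy only for
# whoever holds the secret exponent d -- the "trapdoor".

p, q = 61, 53             # secret primes (textbook RSA example)
n = p * q                 # public modulus: 3233
e = 17                    # public exponent
phi = (p - 1) * (q - 1)   # Euler's totient of n: 3120
d = pow(e, -1, phi)       # secret exponent (2753): the trapdoor

message = 42
ciphertext = pow(message, e, n)    # anyone can compute this direction
recovered = pow(ciphertext, d, n)  # feasible only with the trapdoor d
assert recovered == message
```

At realistic key sizes, recovering `d` from the public values alone would require factoring `n`, which is what makes the holder of the trapdoor uniquely able to read the traffic.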