On 04/06/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Suppose you build a human level AGI, and argue
that it is not autonomous no matter what it does, because it is
deterministically executing a program.
I suspect an AGI that executes one fixed unchangeable program is not
physically possible.
Sorry, I noticed that after I posted: it is acting autonomously, given that it is acting
intelligently as well.
I was assuming the existence of an AGI / intelligent machine, and being asked
about the consciousness of that.
An AGI that plans, reasons, and acts autonomously would be conscious.
Where
There is a tendency among people to grant human rights to entities that are
more human-like, more like yourself. For example, if you give an animal a
name, it is likely to get better treatment. (We name dogs and cats, but not
cows or pigs). Among humans, those who speak the same language and
But you haven't answered my question. How do you test if a machine is
conscious, and is therefore (1) dangerous, and (2) deserving of human rights?
Easily: once it acts autonomously, not based on the goals and orders you directly give
it, and begins generating and acting on its own new goals.
OK. I'm confused. You said both
let's say we don't program beliefs in consciousness or free will . . . .
The AGI will look at these concepts rationally. It will conclude that
they do not exist because human behavior can be explained without their
existence.
AND
I do believe in
My approach is to accept the conflicting evidence and not attempt to
resolve it.
Yes, indeed, that does explain much.
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
You are anthropomorphising. Machines are not human. There is nothing wrong
with programming an AGI to behave as a willing slave whose goal is to obey
humans.
I disagree. Programming an AGI to behave as a willing slave is unsafe
unless you can *absolutely* guarantee that it will *always*
On 6/2/07, Mark Waser [EMAIL PROTECTED] wrote:
By some measures Google is more intelligent than any human. Should it have
human rights? If not, then what measure should be used as the criterion?
Google is not conscious. It does not need rights. Sufficiently complex
consciousness (or even
On 6/2/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Google has its rights. No crazy totalitarian government tells Google what to do.
(perhaps it should go: Google struggles for its rights, sometimes
making moral compromises)
Belief in consciousness and belief in free will are parts of the human brain's
programming. If we want an AGI to obey us, then we should not program these
beliefs into it.
Are we positive that we can avoid doing so? Can we prevent others from
doing so?
Would there be technical
But programming a belief in consciousness or free will seems to be a hard
problem that has no practical benefit anyway. It seems easier to build
machines without them. We do it all the time.
But we aren't programming AGI all the time. And you shouldn't be
hard-coding beliefs in
But let's say we don't program beliefs in consciousness or free will (not that
we should). The AGI will look at these concepts rationally. It will conclude
that they do not exist, because human behavior can be explained without their
existence. It will recognize that the human belief in a little
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Friday 01 June 2007 12:40:36 pm YKY (Yan King Yin) wrote:
... And intellectual property seems to be a reasonable way of
rewarding inventors --
Is human property a reasonable way of rewarding slave traders? Remember, the
systems we
That's a very good question... ;)
If you suddenly abolish patent laws, some people will suffer
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Perhaps we should seek a way to make patents beneficial to society as a
whole. But is abolishing software patents the way to go? Then what happens
to the inventors' reward?
That is how it is supposed to work. In some fields, it actually does.
Certainly I'm anthropomorphising, in the sense that the word means putting a
human shape to something. However, that's not a fallacy if such a shape is part of a
design one intends to rigorously impose, as opposed to imagining human
qualities where they are not. It's the difference between seeing a man's