On Thu, Sep 11, 2014 at 3:38 PM, John Rose via AGI <[email protected]> wrote:
>> -----Original Message-----
>> From: Matt Mahoney [mailto:[email protected]]
>>
>> Brier also seems mystified by phenomenal consciousness (qualia). Do you
>> have any comments on my previous post where I explained why we evolved
>> to believe that such a thing exists? Do you understand or agree with my
>> explanation of how this belief works?
>>
>
> Well, I assume some possibility of it being true. There are some issues when 
> you talk about Wolpert's theorem related to self-modelling and 
> self-reflection. These can be done fuzzily or probabilistically or 
> neutrosophically... Also with consciousness really I don't see who's talking 
> about breaking the laws of physics. And your reference to a soul is rather 
> off I would say, I won't elaborate there.

I don't believe in souls. But many people do. Many people also believe
that it is impossible, even in principle, for a computer to do
everything that the human brain can do (for example, Penrose). These
beliefs arise because our models of our own brains are incomplete. We
think it is doing something different than what it is really doing.

Wolpert's theorem proves that all self-models must be incomplete.
Specifically, he proves that two computers cannot mutually predict
each other's actions, even when each can take the source code and
initial state of the other as input. As a corollary, a computer cannot
predict its own actions for the special case where both computers are
the same. We use models to make predictions, and we use prediction to
test understanding. Thus, no agent can completely understand itself.
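The corollary can be illustrated with a toy diagonalization (a sketch of the idea, not Wolpert's construction; the names here are invented):

```python
# Sketch of why self-prediction fails, by diagonalization.
# The names (contrarian, claimed) are invented for illustration.

def contrarian(predict_self):
    """A program that asks a predictor for its own next output
    and then does the opposite."""
    return 1 - predict_self()

# Whatever fixed answer a would-be self-predictor commits to,
# the program contradicts it:
for claimed in (0, 1):
    actual = contrarian(lambda: claimed)
    assert actual != claimed

print("no fixed prediction survives")
```

Any complete self-model would have to get this prediction right, and no model can.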

Wolpert gives a formal proof, but there is also an easy-to-understand
informal one. Suppose two computers play each other at
rock-scissors-paper. If I could take your source code and initial
state as input, then I could run a simulation of you playing the game
up to the last move and predict your next move. If my simulation of
you plays, say, scissors, then I play rock. Likewise, if you could do
the same and predict that I would play rock, then you would play
paper. But you see the problem: we can't both do this, because only
one of us can win.
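The regress can be sketched in a few lines of Python (all names here are invented; this is the informal argument, not Wolpert's proof):

```python
# Two players each try to simulate the other to predict its move.
# But every simulation contains a simulation of the opponent, so the
# recursion never bottoms out. A depth budget makes the failure visible.

COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def predict(player, depth):
    """'Run the opponent's source code' -- here, just call the function."""
    if depth == 0:
        raise RecursionError("simulation never bottoms out")
    return player(depth - 1)

def alice(depth):
    return COUNTER[predict(bob, depth)]    # play what beats Bob's move

def bob(depth):
    return COUNTER[predict(alice, depth)]  # play what beats Alice's move

try:
    alice(depth=50)
except RecursionError as err:
    print("mutual prediction fails:", err)
```

No matter how large the depth budget, each level of simulation spawns another, so neither player's prediction ever completes.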

Newcomb's paradox is another proof. It supposes an impossible
situation in which you and Omega can each predict the other's actions
with certainty.

> I don't think algorithmic information theory alone gives us all the tools. 
> And I don't think we need to wait for a particular hypothesis to be proven or 
> a theory to be popularly accepted to take action. For example if you're going 
> to build consciousness in a virtual world do we need a proven theory of human 
> consciousness? Or even for AGI.

Of course not. There are two important problems for AGI to solve.
1. People don't want to work.
2. People don't want to die.

To solve the first problem, we need to make machines smart enough to
do any kind of work that humans can do. It means solving hard problems
in AI like vision, language, robotics, art, and modeling human
behavior. It does *not* require machines to be human-like by having
weaknesses like poor arithmetic skills or a need to take time off
work. It doesn't require machines to have emotions. However, it does
require machines to recognize and predict human emotions and their
effect on behavior. This is a critical communication skill for almost
every job. You have to know how your words will make someone feel, and
how that will affect their actions. Therefore machines need this skill
too.

Although machines need not be mistaken for humans to do useful work
(Google, for example), solving the second problem does require it. But
this is easy once the first problem is solved. Since effective
communication with you requires a model of your mind (a function that
takes sensory input and returns a prediction of your actions), making
a copy of you simply means programming a robot to carry out its
predictions of your actions in real time.
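In code, the idea might look like this minimal sketch (every name here is hypothetical, and the toy model is a stand-in; a real mind model would be vastly more complex):

```python
# Sketch: a mind model is a function from sensory input to predicted
# action, and the "copy" is a robot that executes those predictions
# in real time. All names and the toy model are invented.

from typing import Callable

MindModel = Callable[[str], str]  # sensory input -> predicted action

def make_robot(model: MindModel) -> Callable[[str], str]:
    """The robot simply carries out whatever the model predicts."""
    def act(percept: str) -> str:
        return model(percept)
    return act

# A trivial stand-in for a model of "you":
def toy_model(percept: str) -> str:
    return {"hello": "wave", "insult": "frown"}.get(percept, "ignore")

robot_copy = make_robot(toy_model)
print(robot_copy("hello"))  # prints: wave
```

The point is that nothing beyond the predictive model is needed: the robot is just the model's output loop attached to a body.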

At this point, a theory of human consciousness might be in order. You
might ask: how will you transfer my soul into the silicon brain of
this robot so that it becomes "me" after disposing of my old
carbon-based body? The answer, of course, is that we don't because
there is no such thing as a soul. It is only an illusion that you have
one, and the robot would be programmed to express this illusion as
well so nothing feels different. People naturally attribute
consciousness (in both senses of the word) to themselves and to other
people. If this robot looks and acts just like you, they will
attribute your consciousness to it as well. So the only thing that has
changed is that you now have a new and improved body and the
capability to back up your memories to the cloud to effectively
achieve immortality.


-- 
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now