On Sun, Sep 7, 2014 at 9:47 PM, Ben Goertzel <[email protected]> wrote:
> If you look at Scott Aaronson's first blog post on Tononi, that I refer to,
>
> http://www.scottaaronson.com/blog/?p=1799
>
> he goes to some effort to make Tononi's definition precise enough that he can 
> evaluate it in specific situations....   He has to kind of "hack" Tononi's 
> definition to make it work...  He works from Tononi's early papers on phi, 
> not from Maguire's mathematical interpretation...

It was Aaronson's blog that pointed me to Maguire's paper. They both
have a problem with Tononi's lack of precision and do their best to
make a reasonable interpretation. But Tononi can always argue that's
not what he meant, while continuing to not say what he means.

The word "consciousness" has several different meanings, often leading
to long philosophical disagreements because participants are using
different interpretations of the same word. But it seems clear to me
that what Tononi is trying (and failing) to quantify is "phenomenal"
consciousness or qualia. He wants to solve Chalmers's "hard problem".
IMHO Aaronson and Maguire have both decisively refuted this nonsense.
Humans attribute (without a precise definition) phenomenal
consciousness to themselves and to other humans. They might attribute
phenomenal consciousness to some animals, and might attribute it to
future machines that pass the Turing test. But there is no simple test
for human-like
behavior. If there were, then AGI could be reduced to an iterative
search problem.

What we should be asking instead about consciousness is why a
collection of neurons that obeys physical laws would model itself as
having some property that exists outside of physics. Of course it is
easy to produce such systems.

#include <stdio.h>

int main() {
  printf("I think, therefore I am.\n");
  return 0;
}

More generally, Wolpert's theorem says that two physical computers
cannot mutually predict each other. As a corollary, a computer cannot
model itself. So it would be surprising if we *could* understand our
own minds.

But why make this particular error? Why do we imagine that the purpose
of our visual cortex is to transmit images to a projector inside our
heads where a little person, homunculus, or soul watches and
experiences the images that we see? We know logically it isn't true,
but why is this view so appealing?

Logically we know that our brains are programmed by evolution to
maximize reproductive fitness. A fear of dying confers a selective
advantage. When we experience something, what really happens is that
memories of the experience are written to episodic memory and are
positively reinforced. In utility theory, positive reinforcement is a
goal signal, that which an agent seeks to maximize. In general, the
optimal strategy (AIXI) is not computable, but a reasonable
approximation is to respond to a positive reinforcement signal by
increasing the frequency of actions taken immediately prior. Thus, our
behavior is shaped by taking actions to ensure that the stream of
perception, what we call phenomenal consciousness, continues by not
dying. Then by the illusion of free will, we take responsibility for
those (deterministic) actions.

I am disappointed but not surprised by the number of intelligent
people who confuse a belief encoded into our brains with reality.

-- 
-- Matt Mahoney, [email protected]

