On Wed, Nov 6, 2019, 4:53 AM John Rose <[email protected]> wrote:

> Question: Why don't the compression experts call near-lossless and
> perceptual-lossless lossy?
> Answer: Because you don't know. They could be either, though admittedly
> lossy with high probability.
>

No. It's marketing.
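
To make the terminology concrete: "lossless" has an objective test, a bit-exact
roundtrip, while anything "near-lossless" fails that test and is therefore lossy
by definition. A minimal sketch, using zlib as a stand-in lossless codec and a
made-up bit-dropping quantizer as a stand-in "near-lossless" scheme:

```python
import zlib

data = bytes(range(256)) * 4

# Lossless: decompress(compress(x)) == x, bit for bit. Objectively checkable.
assert zlib.decompress(zlib.compress(data)) == data

# "Near-lossless" stand-in: drop the 3 low bits of every byte.
# The roundtrip is no longer bit-exact, so the scheme is lossy by definition.
quantized = bytes(b & 0xF8 for b in data)
assert quantized != data
```

There is no analogous objective test for consciousness, which is the asymmetry
the rest of this thread turns on.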

> How do you know something is conscious? It could be perceptually conscious
> but not really conscious.
>

You "know" you are conscious because thinking feels like something. Wanting to
keep thinking motivates you not to die, and that behavior increases your
expected number of offspring.

If something is like you, then you assume it is conscious too. We have to
assume because there is no objective test. Other humans are like you,
therefore conscious. Animals are sort of like you, so less conscious.
Robots are even less like you, so not conscious.

> So let's loop this around and call perceptual-lossless p-lossless. Then I
> would say a p-zombie is p-lossless.
>

The "p" in "p-zombie" means "philosophical". It is philosophical because it
exposes the obvious inconsistency in your belief that you are conscious.
First we define consciousness as the difference between a human and a
zombie. Then we define a zombie as being behaviorally identical to a human.
The logical conclusion is that there is no test, no measurement, no
objective evidence, and thus no rational reason to believe that you are
conscious. The fact that you believe otherwise proves that brains have
irrational beliefs about consciousness.

Anyone doing serious research in AGI knows this. They know that
consciousness is irrelevant to intelligence. The only thing you have to
model is the emotions that cause the belief in it.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-M99313117555910f2a70a1e7a
Delivery options: https://agi.topicbox.com/groups/agi/subscription
