On Tue, Jul 2, 2019 at 10:59 PM Matt Mahoney <[email protected]>
wrote:

> Colin, in case you haven't noticed, Peter has actually produced some AI
> (aigo, which seems to have better language understanding than Amazon's
> Alexa, at least in the demos I've seen), while all you have is a theory
> that AI comes from consciousness, which comes from EM waves or some other
> mysterious physics that can't be modeled in software. You argue for the
> scientific method, yet you argue for a theory which is not testable because
> you defined consciousness (qualia) in such a way that it is not detectable.
> Well, good luck.
>
> Your argument for qualia is to poke me in the eye and say "are you going
> to argue that wasn't real?"  Of course not. Our brains can't just turn off
> pain because if they could, we would become extinct. But I can also write a
> program that claims to feel pain and modifies its behavior to avoid it,
> thus passing the same test that people have argued proves that lobsters
> feel pain when you boil them.
> http://mattmahoney.net/autobliss.txt
>
> Am I missing something?
>

Regrettably, yes. My claims about EM fields or any other basis for an
artificial brain/consciousness are moot.

It doesn't matter what physics I choose. 50 people could choose 50
different aspects of the brain physics they hold as originating
intelligence/consciousness (however interdependent these things are).

I'm saying that the generic framework in which such choices are correctly
evaluated for necessity/sufficiency in creating an artificial brain
involves replicating your chosen physics, and that the location of it as a
science activity is in (e) LEFT. You also examine models of it in (e)
RIGHT, and only together can you scientifically examine the necessity of
the chosen physics.

In doing this, all I am doing is making the science like every other
science of a natural phenomenon. It is merely being normalised.

I just happen to have chosen EM fields because of how deeply they are
involved (they literally originate all the brain's signalling).

You are trying to invalidate a correction to a *general*
(physics-agnostic) framework for the science of AGI by questioning the
validity of a *particular* instance of science done under the framework.

AGI science currently has no (e) LEFT activity at all. I'm trying to get
the science framework corrected. I'd be happy to have my physics choices
invalidated ... scientifically ... but that happens in a science done with
both (e) LEFT and (e) RIGHT, not (e) RIGHT on its own.

As it happens, when you use the complete science framework you have an
empirical option for testing consciousness that you didn't have before,
and it is in that context that consciousness becomes empirically
tractable. But not if you don't do (e) LEFT.

So getting the science framework right is #1. I can't even discuss my chip
approach without it.

Let's say there are 100 possibilities for the necessary physics in real
AGI. Only one of them is NONE. That is the only one that ever gets
explored by (e) RIGHT alone (with computers that throw all brain physics
out). The other 99 get completely lost. They are absent because of a
broken science framework, not because they've been proved unnecessary.

Does that help?

Colin

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T87761d322a3126b1-Mb07f7e65a1281c104aa1890a
Delivery options: https://agi.topicbox.com/groups/agi/subscription
