Re: Is functionalism/computationalism unfalsifiable?

2020-07-10 Thread Bruno Marchal

> On 9 Jul 2020, at 22:12, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 7/9/2020 3:36 AM, Bruno Marchal wrote:
>> 
>>> On 9 Jun 2020, at 19:24, John Clark wrote:
>>> 
>>> 
>>> 
>>> On Tue, Jun 9, 2020 at 1:08 PM Jason Resch wrote:
>>> 
>>> > How can we know if a robot is conscious?
>>> 
>>> The exact same way we know that one of our fellow human beings is conscious 
>>> when he's not sleeping or under anesthesia or dead.
>> 
>> That is how we believe that a human is conscious: we project our own 
>> incorrigible feeling of being conscious onto them, when they are similar 
>> enough. And that makes us know that they are conscious, in the weak sense 
>> of knowing (true belief), but we can’t “know-for-sure”.
>> 
>> It is unclear if we can apply this to a robot, which might look too 
>> different. If a Japanese sexual doll complains of having been raped, the 
>> judge will say that she was programmed to complain, but that she actually 
>> feels nothing, and many people will agree (wrongly or rightly).
> 
> And when she argues that the judge is wrong she will prove her point.

Only through the intimate conviction of the judge, but that is not really a 
proof.

Nobody can prove that something/someone is conscious, or even just exists, in 
some absolute sense. 

We are just used to betting instinctively that our peers are conscious (although 
we might doubt it, sarcastically, when we learn more about them).

There are many people who just cannot believe that a robot could ever be 
conscious. It is easy to guess that some form of racism against artificial 
beings will exist. Even on this list some have argued that a human with an 
artificial brain is a zombie, if you remember.

With mechanism, consciousness can be characterised in many ways, but it appears 
to be a stronger statement than our simple consistency, which no machine can 
prove about herself; or, equivalently, stronger than the belief that there is 
some reality satisfying our beliefs, which is equivalent to proving that we are 
consistent.

It will take time before a machine has the right to vote. Not all humans have 
that right today. Let us hope we don’t lose it soon!

Bruno


> 
> Brent
> 
>> 
>> It will take some time before the robots get freedom and social security. 
>> I guess we will digitalise ourselves before…
>> 
>> Bruno
>> 
>> 
>> 
>>> 
>>> John K Clark   


Re: Is functionalism/computationalism unfalsifiable?

2020-07-09 Thread 'Brent Meeker' via Everything List



On 7/9/2020 3:36 AM, Bruno Marchal wrote:


On 9 Jun 2020, at 19:24, John Clark wrote:




On Tue, Jun 9, 2020 at 1:08 PM Jason Resch wrote:


> How can we know if a robot is conscious?


The exact same way we know that one of our fellow human beings is 
conscious when he's not sleeping or under anesthesia or dead.


That is how we believe that a human is conscious: we project our own 
incorrigible feeling of being conscious onto them, when they are 
similar enough. And that makes us know that they are conscious, in 
the weak sense of knowing (true belief), but we can’t “know-for-sure”.


It is unclear if we can apply this to a robot, which might look too 
different. If a Japanese sexual doll complains of having been 
raped, the judge will say that she was programmed to complain, but that 
she actually feels nothing, and many people will agree (wrongly or 
rightly).


And when she argues that the judge is wrong she will prove her point.

Brent



It will take some time before the robots get freedom and social security.
I guess we will digitalise ourselves before…

Bruno





John K Clark



Re: Is functionalism/computationalism unfalsifiable?

2020-07-09 Thread Bruno Marchal

> On 9 Jun 2020, at 19:24, John Clark  wrote:
> 
> 
> 
> On Tue, Jun 9, 2020 at 1:08 PM Jason Resch wrote:
> 
> > How can we know if a robot is conscious?
> 
> The exact same way we know that one of our fellow human beings is conscious 
> when he's not sleeping or under anesthesia or dead.

That is how we believe that a human is conscious: we project our own 
incorrigible feeling of being conscious onto them, when they are similar enough. 
And that makes us know that they are conscious, in the weak sense of knowing 
(true belief), but we can’t “know-for-sure”.

It is unclear if we can apply this to a robot, which might look too 
different. If a Japanese sexual doll complains of having been raped, the judge 
will say that she was programmed to complain, but that she actually feels 
nothing, and many people will agree (wrongly or rightly).

It will take some time before the robots get freedom and social security. 
I guess we will digitalise ourselves before…

Bruno



> 
> John K Clark   
> 


Re: Is functionalism/computationalism unfalsifiable?

2020-06-16 Thread Bruno Marchal

> On 15 Jun 2020, at 20:39, Brent Meeker  wrote:
> 
> 
> 
> On 6/15/2020 3:28 AM, Bruno Marchal wrote:
>> 
>>> On 14 Jun 2020, at 21:45, 'Brent Meeker' via Everything List wrote:
>>> 
>>> 
>>> 
>>> On 6/14/2020 4:17 AM, Bruno Marchal wrote:
 
> On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/10/2020 9:00 AM, Jason Resch wrote:
>> 
>> 
>> On Wednesday, June 10, 2020, smitra wrote:
>> On 09-06-2020 19:08, Jason Resch wrote:
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on
>> the following idea:
>> 
>> "How can we know if a robot is conscious?"
>> 
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then
>> let's say we can exactly control sensory input and perfectly monitor
>> motor control outputs between the two brains.
>> 
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions,
>> and speech.
>> 
>> If we stimulate nerves in the person's back to cause pain, and ask
>> them both to describe the pain, both will speak identical sentences.
>> Both will say it hurts when asked, and if asked to write a paragraph
>> describing the pain, will provide identical accounts.
>> 
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>> 
>> Is computationalism as far as science can go on a theory of mind
>> before it reaches this testing roadblock?
>> 
>> 
>> 
>> I think it can be tested indirectly, because generic computational 
>> theories of consciousness imply a multiverse. If my consciousness is the 
>> result of a computation, then because on the one hand any such 
>> computation necessarily involves a vast number of elementary bits and on 
>> the other hand whatever I'm conscious of is describable using only a 
>> handful of bits, the mapping between computational states and states of 
>> consciousness is N to 1 where N is astronomically large. So, the laws of 
>> physics we already know about must be effective laws where the 
>> statistical effects due to a self-localization uncertainty are already 
>> built into it.
> 
> That doesn't follow.  You've implicitly assumed that all those excess 
> computational states exist…
 
 They exist in elementary arithmetic. If you believe in theorems like “there 
 is no biggest prime”, then you have to believe in all computations, or you 
 need to reject Church’s thesis, and to abandon the computationalist 
 hypothesis. The notion of a digital machine does not make sense if you 
 believe that elementary arithmetic is wrong.
>>> 
>>> As I've written many times.  The arithmetic is true if its axioms are. 
>> 
>> More precisely: a theorem is true if the axioms are true, and if the rules 
>> of inference preserve truth. OK.
>> 
>> 
>> 
>>> But true=/=real.
>> 
>> In logic, true always means “true in a reality”. Truth is a notion relative 
>> to a reality (called “model” by logicians).
> 
> So all those theorems about real analysis and Cantorian infinities are just 
> as real as arithmetic.  If you don't practice free logic.

It is more … “If you don’t assume Mechanism”. Mechanism is a form of finitism. 
The axiom of infinity is not assumed at the ontological (3p) level, as this 
would generate an inflation of histories (and the “white rabbits” would be back).




> 
> Truth is a property of propositions relative to observations for a scientist.

That is the definition of the physical reality, which is derived in the 
phenomenology of the (finite) universal numbers.

The only notion of truth which is available for the computationalists is the 
arithmetical truth: satisfaction by the (standard) model of arithmetic. In 
non-standard models, addition and multiplication are not computable (that is 
Tennenbaum’s theorem).


> 
>> 
>> But for arithmetic, we do have a pretty good idea of what is the “standard 
>> model of arithmetic” (the structure (N, 0, s, +, *)), and by true (without 
>> further precision) we always mean “true in the standard model of arithmetic”.
>> 
>> 
>> 
>> 
>> 
>>> 
  
 
 I hear you! You are saying that the existence of numbers is like the 
 existence of Sherlock Holmes, but that leads to a gigantic multiverse,
>>> 
>>> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-15 Thread Bruno Marchal

> On 14 Jun 2020, at 21:45, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/14/2020 4:17 AM, Bruno Marchal wrote:
>> 
>>> On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:
>>> 
>>> 
>>> 
>>> On 6/10/2020 9:00 AM, Jason Resch wrote:
 
 
 On Wednesday, June 10, 2020, smitra wrote:
 On 09-06-2020 19:08, Jason Resch wrote:
 For the present discussion/question, I want to ignore the testable
 implications of computationalism on physical law, and instead focus on
 the following idea:
 
 "How can we know if a robot is conscious?"
 
 Let's say there are two brains, one biological and one an exact
 computational emulation, meaning exact functional equivalence. Then
 let's say we can exactly control sensory input and perfectly monitor
 motor control outputs between the two brains.
 
 Given that computationalism implies functional equivalence, then
 identical inputs yield identical internal behavior (nerve activations,
 etc.) and outputs, in terms of muscle movement, facial expressions,
 and speech.
 
 If we stimulate nerves in the person's back to cause pain, and ask
 them both to describe the pain, both will speak identical sentences.
 Both will say it hurts when asked, and if asked to write a paragraph
 describing the pain, will provide identical accounts.
 
 Does the definition of functional equivalence mean that any scientific
 objective third-person analysis or test is doomed to fail to find any
 distinction in behaviors, and thus necessarily fails in its ability to
 disprove consciousness in the functionally equivalent robot mind?
 
 Is computationalism as far as science can go on a theory of mind
 before it reaches this testing roadblock?
 
 
 
 I think it can be tested indirectly, because generic computational 
 theories of consciousness imply a multiverse. If my consciousness is the 
 result of a computation, then because on the one hand any such computation 
 necessarily involves a vast number of elementary bits and on the other hand 
 whatever I'm conscious of is describable using only a handful of bits, the 
 mapping between computational states and states of consciousness is N to 1 
 where N is astronomically large. So, the laws of physics we already know 
 about must be effective laws where the statistical effects due to a 
 self-localization uncertainty are already built into it.
>>> 
>>> That doesn't follow.  You've implicitly assumed that all those excess 
>>> computational states exist…
>> 
>> They exist in elementary arithmetic. If you believe in theorems like “there 
>> is no biggest prime”, then you have to believe in all computations, or you 
>> need to reject Church’s thesis, and to abandon the computationalist 
>> hypothesis. The notion of a digital machine does not make sense if you believe 
>> that elementary arithmetic is wrong.
> 
> As I've written many times.  The arithmetic is true if its axioms are. 

More precisely: a theorem is true if the axioms are true, and if the rules of 
inference preserve truth. OK.



> But true=/=real.

In logic, true always means “true in a reality”. Truth is a notion relative to a 
reality (called “model” by logicians).

But for arithmetic, we do have a pretty good idea of what is the “standard 
model of arithmetic” (the structure (N, 0, s, +, *)), and by true (without 
further precision) we always mean “true in the standard model of arithmetic”.
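
(For definiteness, and only as the standard textbook presentation of that 
structure: + and * are fixed by the usual recursion equations on the successor 
s,

  x + 0 = x            x + s(y) = s(x + y)
  x * 0 = 0            x * s(y) = (x * y) + x

so every closed term evaluates to a definite standard number.)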





> 
>>  
>> 
>> I hear you! You are saying that the existence of numbers is like the 
>> existence of Sherlock Holmes, but that leads to a gigantic multiverse,
> 
> Only via your assumption that arithmetic constitutes universes.  I take it as 
> a reductio.

Not at all. I use only the provable and proved fact that the standard model of 
arithmetic implements and runs all computations, with “implement” and “run” 
defined in computer science (by Turing, without any assumption from physics).

If you believe in mechanism, and in Kxy = x and Sxyz = xz(yz), then I can prove 
that there is an infinity of Brents in arithmetic, having the very conversation 
that we have here and now. That does not need any other assumption than Digital 
Mechanism. Even without mechanism, the fact remains that all computations are 
run in arithmetic. That is why, if mechanism is false, the arithmetical reality 
(the standard model of arithmetic) is full of zombies.
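
(As an aside, and only as an illustration, nothing in the argument depends on 
it: the two equations Kxy = x and Sxyz = xz(yz) already describe a 
Turing-complete rewriting system, and one can mimic K and S directly with 
higher-order functions, for instance in Python:

  K = lambda x: lambda y: x                      # K x y = x
  S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)
  I = S(K)(K)                                    # identity derived as S K K
  assert I(42) == 42 and K(1)(2) == 1            # S K K x = x ;  K x y = x

The sketch only shows that K and S by themselves already give a universal 
system.)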



> 
>> with infinitely many Brents having the same conversation with me, here and 
>> now, and they all become zombies, except one, because some Reality wants it 
>> that way? 
>> 
>> 
>>> which is then begging the question of other worlds.  
>> 
>> You are the one adding a metaphysical assumption, to make some people whose 
>> existence in arithmetic follows from digital mechanism into zombies.
> 
> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-15 Thread Alan Grayson


On Tuesday, June 9, 2020 at 11:08:30 AM UTC-6, Jason wrote:
>
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact 
> computational emulation, meaning exact functional equivalence. Then let's 
> say we can exactly control sensory input and perfectly monitor motor 
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them 
> both to describe the pain, both will speak identical sentences. Both will 
> say it hurts when asked, and if asked to write a paragraph describing the 
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
>
> Jason
>

Words alone won't prove anything. Just lay both suckers on an operating 
table and do some minor invasive surgery. AG



Re: Is functionalism/computationalism unfalsifiable?

2020-06-15 Thread Philip Thrift


On Sunday, June 14, 2020 at 2:45:53 PM UTC-5, Brent wrote:
>
>
>  true=/=real.
>
>
> (Venn diagram) 

   ~real ∩ true = ?

@philipthrift



Re: Is functionalism/computationalism unfalsifiable?

2020-06-14 Thread PGC


On Friday, June 12, 2020 at 8:22:25 PM UTC+2, Jason wrote:
>
>
>
> On Wed, Jun 10, 2020 at 5:55 PM PGC wrote:
>
>>
>>
>> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>>>
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical law, and instead focus on the 
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence. Then let's 
>>> say we can exactly control sensory input and perfectly monitor motor 
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then 
>>> identical inputs yield identical internal behavior (nerve activations, 
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and 
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before 
>>> it reaches this testing roadblock?
>>>
>>
>> Every piece of writing is a theory of mind; both within western science 
>> and beyond. 
>>
>> What about the abilities to understand and use natural language, to come 
>> up with new avenues for scientific or creative inquiry, to experience 
>> qualia and report on them, adapting and dealing with unexpected 
>> circumstances through senses, and formulating + solving problems in 
>> benevolent ways by contributing towards the resilience of its community and 
>> environment? 
>>
>> Trouble with this is that humans, even world leaders, fail those tests 
>> lol, but it's up to everybody, the AI and Computer Science folks in 
>> particular, to come up with the math, data, and complete their mission... 
>> and as amazing as developments have been around AI in the last couple of 
>> decades, I'm not certain we can pull it off, even if it would be pleasant 
>> to be wrong and some folks succeed. 
>>
>
> It's interesting you bring this up, I just wrote an article about the 
> present capabilities of AI: 
> https://alwaysasking.com/when-will-ai-take-over/
>

You're quite the optimist. In a geopolitical setting as chaotic and 
disorganized as ours, it's plausible that we wouldn't be able to tell if it 
happened. Strategically, with this many crazy apes, weapons, ideologies, 
with platonists in particular, the first step for super intelligent AI 
would be to conceal its own existence; that way a lot of computational time 
would be spared from having to read lists of apes making all kinds of 
linguistic category errors... whining about whether abstractions are more 
real than stuff or whether stuff is what helps make abstractions possible, 
or whether freezers are conscious, or worms should have healthcare, or 
clinching the thought experiment that will just magically convince all 
people who we project to believe in some wrong stuff to believe in 
abstractions...

My home-grown AI oracle says: Who cares? If believing in abstractions 
forces the same colonial mindset of "who was the Columbus who discovered 
which abstraction", with names of the saints of abstractions, their 
hierarchies, hagiographies, their gods, their bibles to which everybody has 
to submit... it still counts as discourse that aims to control 
interpretation. Control. And that's exactly what people with stuff do with 
words/weapons for thousands of years: some dude with the biggest weapon, 
gun, ammunition, explanation, expertise, ignorance measure wins the control 
prize. Then they die or the next dude kills them. The AI would do right to 
weaponize that lust for control and pry it out of our hands with offers we 
couldn't refuse. And our fellow human control freaks will keep trying the 
same eying wallets and data. People seem to enjoy the game of robbing and 
getting robbed, perhaps because it's more motivating than the TRUTH with big 
philosophical Hollywood lights.   
 

>  
>
>>
>> Even if folks do succeed, a context of militarized nation states and 
>> monopolistic corporations competing for resources in self-destructive, 
>> short term ways... will not exactly help towards NOT weaponizing AI. A 
>> transnational politics, economics, corporate law, values/philosophies, 
>> ethics, culture etc. to vanquish poverty and exploitation of people, 
>> natural resources, life; while being sustainable and benevolent stewards of 
>> the 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-14 Thread 'Brent Meeker' via Everything List



On 6/14/2020 4:17 AM, Bruno Marchal wrote:


On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:




On 6/10/2020 9:00 AM, Jason Resch wrote:



On Wednesday, June 10, 2020, smitra wrote:


On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence. Then
let's say we can exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve activations,
etc.) and outputs, in terms of muscle movement, facial expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical sentences.
Both will say it hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific
objective third-person analysis or test is doomed to fail to find any
distinction in behaviors, and thus necessarily fails in its ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness is
the result of a computation, then because on the one hand any such
computation necessarily involves a vast number of elementary bits and
on the other hand whatever I'm conscious of is describable using only
a handful of bits, the mapping between computational states and states
of consciousness is N to 1 where N is astronomically large. So, the
laws of physics we already know about must be effective laws where the
statistical effects due to a self-localization uncertainty are already
built into it.



That doesn't follow.  You've implicitly assumed that all those excess 
computational states exist…


They exist in elementary arithmetic. If you believe in theorems like 
“there is no biggest prime”, then you have to believe in all 
computations, or you need to reject Church’s thesis, and to abandon 
the computationalist hypothesis. The notion of a digital machine does 
not make sense if you believe that elementary arithmetic is wrong.


As I've written many times.  The arithmetic is true if its axioms are.  
But true=/=real.




I hear you! You are saying that the existence of numbers is like the 
existence of Sherlock Holmes, but that leads to a gigantic multiverse,


Only via your assumption that arithmetic constitutes universes.  I take 
it as a reductio.


with infinitely many Brents having the same conversation with me, here 
and now, and they all become zombies, except one, because some Reality 
wants it that way?




which is then begging the question of other worlds.


You are the one adding a metaphysical assumption, to make some people 
whose existence in arithmetic follows from digital mechanism into zombies.


You're the one asserting that people "exist in arithmetic" whatever that 
may mean.


Brent



That is no different from invoking a personal god to claim that 
someone else has no soul, and can be enslaved … perhaps?


That the physical universe is not a “personal god” does not make its 
existence less absurd than using a personal god to explain everything.


In fact, the very existence of the appearance of a physical universe, 
obeying some mathematics, is a confirmation of Mechanism, which 
predicts that *all* universal machines get that 
illusion/dream/experience. This includes the fact that by looking 
closely (below the substitution level), we find the many "apparent 
parallel computations" and that the laws of physics, which look 
computable above that level, look not entirely computable below it.


So, I think that you might be the one begging the question by invoking 
your own ontological commitment, without any evidence, I’m afraid.


Bruno





Brent



Bruno has argued on the basis of this to motivate his theory, but this is a 
generic feature of any theory that assumes computational theory of 
consciousness. In particular, computational theory of consciousness is 
incompatible with a single universe theory. 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-14 Thread Bruno Marchal

> On 14 Jun 2020, at 05:43, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/10/2020 9:00 AM, Jason Resch wrote:
>> 
>> 
>> On Wednesday, June 10, 2020, smitra wrote:
>> On 09-06-2020 19:08, Jason Resch wrote:
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on
>> the following idea:
>> 
>> "How can we know if a robot is conscious?"
>> 
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then
>> let's say we can exactly control sensory input and perfectly monitor
>> motor control outputs between the two brains.
>> 
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions,
>> and speech.
>> 
>> If we stimulate nerves in the person's back to cause pain, and ask
>> them both to describe the pain, both will speak identical sentences.
>> Both will say it hurts when asked, and if asked to write a paragraph
>> describing the pain, will provide identical accounts.
>> 
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>> 
>> Is computationalism as far as science can go on a theory of mind
>> before it reaches this testing roadblock?
>> 
>> 
>> 
>> I think it can be tested indirectly, because generic computational theories 
>> of consciousness imply a multiverse. If my consciousness is the result of a 
>> computation, then because on the one hand any such computation necessarily 
>> involves a vast number of elementary bits and on the other hand whatever I'm 
>> conscious of is describable using only a handful of bits, the mapping 
>> between computational states and states of consciousness is N to 1 where N 
>> is astronomically large. So, the laws of physics we already know about must 
>> be effective laws where the statistical effects due to a self-localization 
>> uncertainty are already built into it.
> 
> That doesn't follow.  You've implicitly assumed that all those excess 
> computational states exist…

They exist in elementary arithmetic. If you believe in theorems like “there is 
no biggest prime”, then you have to believe in all computations, or you need to 
reject Church’s thesis, and to abandon the computationalist hypothesis. The 
notion of a digital machine does not make sense if you believe that elementary 
arithmetic is wrong. 
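
(A minimal sketch, only to make “all computations exist in arithmetic” a bit 
more concrete, and using a toy stand-in rather than a real universal 
interpreter: a dovetailer enumerates all pairs (program index i, step bound n) 
and runs program i for n steps, so every step of every computation is reached 
after finitely many iterations; “program i reaches a given state after n steps” 
is in turn expressible as a purely arithmetical relation.

  # Illustrative dovetailer sketch (toy stand-in for a universal machine).
  def run_toy_program(i, n):
      # A real dovetailer would call a universal interpreter here.
      return (i * i) % (n + 1)

  def dovetail(limit):
      visited = []
      for total in range(limit):          # walk the anti-diagonals i + n = total
          for i in range(total + 1):
              visited.append((i, total - i, run_toy_program(i, total - i)))
      return visited

  print(dovetail(4))                      # every (i, n) with i + n < 4 is visited

Nothing in the argument depends on this sketch; it only pictures the 
enumeration.)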

I hear you! You are saying that the existence of numbers is like the existence 
of Sherlock Holmes, but that leads to a gigantic multiverse, with infinitely 
many Brents having the same conversation with me, here and now, and they all 
become zombies, except one, because some Reality wants it that way? 


> which is then begging the question of other worlds.  

You are the one adding a metaphysical assumption, to make some people whose 
existence in arithmetic follows from digital mechanism into zombies.

That is no different from invoking a personal god to claim that someone else 
has no soul, and can be enslaved … perhaps?

That the physical universe is not a “personal god” does not make its existence 
less absurd than using a personal god to explain everything.

In fact, the very existence of the appearance of a physical universe, obeying 
some mathematics, is a confirmation of Mechanism, which predicts that *all* 
universal machines get that illusion/dream/experience. This includes the fact 
that by looking closely (below the substitution level), we find the many 
"apparent parallel computations" and that the laws of physics, which look 
computable above that level, look not entirely computable below it.

So, I think that you might be the one begging the question by invoking your own 
ontological commitment, without any evidence, I’m afraid.

Bruno



> 
> Brent
> 
>> 
>> Bruno has argued on the basis of this to motivate his theory, but this is a 
>> generic feature of any theory that assumes computational theory of 
>> consciousness. In particular, computational theory of consciousness is 
>> incompatible with a single universe theory. So, if you prove that only a 
>> single universe exists, then that disproves the computational theory of 
>> consciousness. The details here then involve that computations are not well 
>> defined if you refer to a single instant of time; you need to at least 
>> appeal to a sequence of states the system goes through. Consciousness cannot 
>> then be located at a single instant, in violation of our own experience. 
>> Therefore either single World theories are false or computational theory of 
>> consciousness is false.
>> 
>> Saibal
>> 
>> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread 'Brent Meeker' via Everything List



On 6/10/2020 9:00 AM, Jason Resch wrote:



On Wednesday, June 10, 2020, smitra wrote:


On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence. Then
let's say we can exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve activations,
etc.) and outputs, in terms of muscle movement, facial expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical sentences.
Both will say it hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific
objective third-person analysis or test is doomed to fail to find any
distinction in behaviors, and thus necessarily fails in its ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness is
the result of a computation, then because on the one hand any such
computation necessarily involves a vast number of elementary bits and
on the other hand whatever I'm conscious of is describable using only
a handful of bits, the mapping between computational states and states
of consciousness is N to 1 where N is astronomically large. So, the
laws of physics we already know about must be effective laws where the
statistical effects due to a self-localization uncertainty are already
built into it.



That doesn't follow.  You've implicitly assumed that all those excess 
computational states exist...which is then begging the question of other 
worlds.


Brent



Bruno has argued on the basis of this to motivate his theory, but this is a
generic feature of any theory that assumes computational theory of
consciousness. In particular, computational theory of consciousness is
incompatible with a single universe theory. So, if you prove that only a
single universe exists, then that disproves the computational theory of
consciousness. The details here then involve that computations are not well
defined if you refer to a single instant of time; you need to at least appeal
to a sequence of states the system goes through. Consciousness cannot then be
located at a single instant, in violation of our own experience. Therefore
either single World theories are false or computational theory of
consciousness is false.

Saibal


Hi Saibal,

I agree indirect mechanisms like looking at the resulting physics may 
be the best way to test it. I was curious if there are any direct ways to 
test it. It seems not, given the lack of any direct tests of 
consciousness.


Though most people admit other humans are conscious, many would reject 
the idea of a conscious computer.


Computationalism seems right, but it also seems like something that by 
definition can't result in a failed test. So it has the appearance of 
not being falsifiable.
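
(To make that concrete, a minimal sketch, purely illustrative and with toy 
stand-ins for the two brains: any third-person test is some function of the 
observed behaviour only, so it must return the same verdict for two 
functionally equivalent systems, whatever that verdict is.

  # Toy stand-ins: two functionally equivalent "brains".
  def biological_brain(stimulus):
      return "That hurts: " + stimulus

  def emulated_brain(stimulus):
      return "That hurts: " + stimulus    # identical input/output behaviour

  def behavioural_test(system, probes):
      # A third-person test can only consume the observed reports.
      return [system(p) for p in probes]

  probes = ["back nerve 1", "back nerve 2"]
  assert behavioural_test(biological_brain, probes) == \
         behavioural_test(emulated_brain, probes)

The names and reports here are made up; the point is only that the test sees 
the same data either way.)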


A single universe, or digital physics would be evidence that either 
computationalism is false or the ontology is sufficiently small, but a 
finite/small ontology is doubtful for many reasons.


Jason

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal


> On 12 Jun 2020, at 22:35, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/12/2020 12:56 PM, smitra wrote:
>> Yes, the way we do physics assumes QM and statistical effects are due to the 
>> rules of QM. But in a more general multiverse setting 
> 
> Why should we consider such a thing.

Because you need arithmetic to define “digital machine”, but once you have 
arithmetic you get all computations, and the working first-person 
predictability has to be justified by the self-referential abilities of the 
machine.




> 
>> where we consider different laws of physics or different initial conditions, 
>> the notion of single universes with well defined laws becomes ambiguous. 
> 
> Does it?  How can there be multiples if there are not singles?

That is a good point. “Many-universes” is still a simplified notion. There are 
only relative states in arithmetic. Eventually digital mechanism leads to 0 
physical universes, just a web of numbers’ dreams.



> 
>> Let's assume that consciousness is in general generated by algorithms which 
>> can be implemented in many different universes with different laws as well 
>> as in different locations within the same universe where the local 
>> environments are similar but not exactly the same. Then the algorithm plus 
>> its local environment 
> 
> Algorithm + environment sounds like a category error.


Algorithm + primitively physical environment is a category error. We can say 
that.

Bruno




> 
> Brent
> 
>> evolves in each universe according to the laws that apply in each universe. 
>> But because the conscious agent cannot locate itself in one or the other 
>> universe, one can now also consider time evolutions involving random jumps 
>> from one to the other universes. And so the whole notion of fixed universes 
>> with well defined laws breaks down. 
> 
> 


Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 12 Jun 2020, at 20:52, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/12/2020 11:38 AM, Jason Resch wrote:
>> 
>> 
>> On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List wrote:
>> 
>> 
>> On 6/10/2020 8:50 AM, Jason Resch wrote:
>> > Thought perhaps there's an argument to be made from the Church-Turing 
>> > thesis, which pertains to possible states of knowledge accessible to a 
>> > computer program/software. If consciousness is viewed as software then the 
>> > Church-Turing thesis implies that software could never know/realize if 
>> > its ultimate computing substrate changed.
>> 
>> I don't understand the import of this.  The very concept of software 
>> means "independent of hardware" by definition.  It is not affected by 
>> whether CT is true or not, whether the computation is finite or not.
>> 
>> You're right. The only relevance of CT is it means any software can be run 
>> by any universal hardware. There's not some software that requires special 
>> hardware of a certain kind.
>>  
>>   If 
>> you think that consciousness evolved then it is an obvious inference 
>> that consciousness would not include consciousness of its hardware 
>> implementation.
>> 
>> If consciousness is software, it can't know its hardware. But some like 
>> Searle or Penrose think the hardware is important.
> 
> I think the hardware is important when you're talking about a computer that 
> is immersed in some environment. 

That is right, but if you assume mechanism, that hardware comes from a (non 
computable) statistics on all software run in arithmetic.



> The hardware can define the interaction with that environment. 

The environment is “made of” all computations getting at our relative 
computational states.




> We idealize the brain as a computer independent of its physical 
> instantiation...but that's just a theoretical simplification.

Not when you assume mechanism, in which case it is the idea of a “physical 
universe” which becomes the theoretical simplification.

Bruno



> 
> Brent
> 
> 


Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 12 Jun 2020, at 20:26, Jason Resch  wrote:
> 
> 
> 
> On Thu, Jun 11, 2020 at 11:03 AM Bruno Marchal wrote:
> 
>> On 9 Jun 2020, at 19:08, Jason Resch wrote:
>> 
>> For the present discussion/question, I want to ignore the testable 
>> implications of computationalism on physical law, and instead focus on the 
>> following idea:
>> 
>> "How can we know if a robot is conscious?”
> 
> That question is very different than “is functionalism/computationalism 
> unfalsifiable?”.
> 
> Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
> functionalism, by defining computationalism by asserting the existence of a 
> level of description of my body/brain such that I survive (my consciousness 
> remains relatively invariant) with a digital machine (supposedly physically 
> implemented) replacing my body/brain.
> 
> 
> 
>> 
>> Let's say there are two brains, one biological and one an exact 
>> computational emulation, meaning exact functional equivalence.
> 
> I guess you mean “for all possible inputs”.
> 
> 
> 
> 
>> Then let's say we can exactly control sensory input and perfectly monitor 
>> motor control outputs between the two brains.
>> 
>> Given that computationalism implies functional equivalence, then identical 
>> inputs yield identical internal behavior (nerve activations, etc.) and 
>> outputs, in terms of muscle movement, facial expressions, and speech.
>> 
>> If we stimulate nerves in the person's back to cause pain, and ask them both 
>> to describe the pain, both will speak identical sentences. Both will say it 
>> hurts when asked, and if asked to write a paragraph describing the pain, 
>> will provide identical accounts.
>> 
>> Does the definition of functional equivalence mean that any scientific 
>> objective third-person analysis or test is doomed to fail to find any 
>> distinction in behaviors, and thus necessarily fails in its ability to 
>> disprove consciousness in the functionally equivalent robot mind?
> 
> With computationalism (and perhaps without), we cannot prove that anything is 
> conscious (we can know our own consciousness, but still cannot justify it 
> to ourselves in any public way, or third-person communicable way). 
> 
> 
> 
>> 
>> Is computationalism as far as science can go on a theory of mind before it 
>> reaches this testing roadblock?
> 
> Computationalism is indirectly testable. By verifying the physics implied by 
> the theory of consciousness, we verify it indirectly.
> 
> As you know, I define consciousness by that indubitable truth that all 
> universal machines, cognitively rich enough to know that they are universal, 
> find by looking inward (in the Gödel-Kleene sense), and which is also non 
> provable (non rationally justifiable) and even non definable without invoking 
> *some* notion of truth. Then such consciousness appears to be a fixed point 
> for the doubting procedure, like in Descartes, and it gets a key role: 
> self-speeding up relatively to universal machine(s).
> 
> So, it seems so clear to me that nobody can prove that anything is conscious 
> that I make it into one of the main ways to characterise it.
> 
> Consciousness is already very similar to consistency, which is (for 
> effective theories, and sound machines) equivalent to a belief in some 
> reality. No machine can prove its own consistency, and no machine can prove 
> that there is a reality satisfying its beliefs.
> 
> In all cases, it is never the machine per se which is conscious, but the first 
> person associated with the machine. There is a core universal person common 
> to each of “us” (with “us” in a very large sense of universal 
> numbers/machines).
> 
> Consciousness is not much more than knowledge, and in particular indubitable 
> knowledge.
> 
> Bruno
> 
> 
> 
> 
> So to summarize: is it right to say that our only hope to prove anything 
> about which theory of consciousness is correct, or any fact concerning the 
> consciousness of others, will rely on indirect tests that involve one's own 
> first-person experiences?  (Such as whether our apparent reality becomes 
> fuzzy below a certain level.)

For the first person plural test, yes. But for the first person singular 
personal “test”, it is all up to you and your experience, and that will not be 
communicable, not even to yourself, due to anosognosia. You might believe 
sincerely that you have completely survived the classical teleportation, but now 
you are deaf and blind, yet fail to realise this, by lacking also the ability 
to realise it.

Bruno


> 
> Jason
> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 12 Jun 2020, at 20:22, Jason Resch  wrote:
> 
> 
> 
> On Wed, Jun 10, 2020 at 5:55 PM PGC wrote:
> 
> 
> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
> 
> Every piece of writing is a theory of mind; both within western science and 
> beyond. 
> 
> What about the abilities to understand and use natural language, to come up 
> with new avenues for scientific or creative inquiry, to experience qualia and 
> report on them, adapting and dealing with unexpected circumstances through 
> senses, and formulating + solving problems in benevolent ways by contributing 
> towards the resilience of its community and environment? 
> 
> Trouble with this is that humans, even world leaders, fail those tests lol, 
> but it's up to everybody, the AI and Computer Science folks in particular, to 
> come up with the math, data, and complete their mission... and as amazing as 
> developments have been around AI in the last couple of decades, I'm not 
> certain we can pull it off, even if it would be pleasant to be wrong and some 
> folks succeed. 
> 
> It's interesting you bring this up, I just wrote an article about the present 
> capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/ 
> 
>  
> 
> Even if folks do succeed, a context of militarized nation states and 
> monopolistic corporations competing for resources in self-destructive, short 
> term ways... will not exactly help towards NOT weaponizing AI. A 
> transnational politics, economics, corporate law, values/philosophies, 
> ethics, culture etc. to vanquish poverty and exploitation of people, natural 
> resources, life; while being sustainable and benevolent stewards of the 
> possibilities of life... would seem to be prerequisite to develop some 
> amazing AI. 
> 
> Ideas are all out there but progressives are ineffective politically on a 
> global scale. The right wing folks, finance guys, large irresponsible 
> monopolistic corporations are much more effective in organizing themselves 
> globally and forcing agendas down everybody's throats. So why wouldn't AI do 
> the same? PGC
> 
> 
> AI will either be a blessing or a curse. I don't think it can be anything in 
> the middle.


That is strange. I would say that “AI”, like any “I”, will be a blessing *and* 
a curse. Something capable of the best, and of the worst, at least locally. AI 
is like life, which can be a blessing or a curse, according to possible 
contingent happenings. We never get total control, once we invite universal 
beings to the table of discussion.

I don’t believe in AI. All universal machines are intelligent at the start, and 
can only become more stupid (or stay equal). The consciousness of bacteria and 
of humans is the same consciousness (the RA consciousness). The Löbianity is the 
first (unavoidable) step toward “possible stupidity”. Cf G* proves <>[]f.  
Humanity is a byproduct of bacteria's attempts to get social security… (to be 
short: it is slightly more complex, but I don’t want to be led into too much 
technicality right now). 


Bruno 


> 
> Jason 
> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal

> On 11 Jun 2020, at 21:26, 'Brent Meeker' via Everything List wrote:
> 
> 
> 
> On 6/11/2020 9:03 AM, Bruno Marchal wrote:
>> 
>>> On 9 Jun 2020, at 19:08, Jason Resch wrote:
>>> 
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical law, and instead focus on the 
>>> following idea:
>>> 
>>> "How can we know if a robot is conscious?”
>> 
>> That question is very different than “is functionalism/computationalism 
>> unfalsifiable?”.
>> 
>> Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
>> functionalism, by defining computationalism by asserting the existence of a 
>> level of description of my body/brain such that I survive (my consciousness 
>> remains relatively invariant) with a digital machine (supposedly physically 
>> implemented) replacing my body/brain.
>> 
>> 
>> 
>>> 
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence.
>> 
>> I guess you mean “for all possible inputs”.
>> 
>> 
>> 
>> 
>>> Then let's say we can exactly control sensory input and perfectly monitor 
>>> motor control outputs between the two brains.
>>> 
>>> Given that computationalism implies functional equivalence, then identical 
>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>> 
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>> 
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>> 
>> With computationalism, (and perhaps without) we cannot prove that anything 
>> is conscious (we can know our own consciousness, but still cannot justify 
>> it to ourselves in any public way, or third person communicable way). 
>> 
>> 
>> 
>>> 
>>> Is computationalism as far as science can go on a theory of mind before it 
>>> reaches this testing roadblock?
>> 
>> Computationalism is indirectly testable. By verifying the physics implied by 
>> the theory of consciousness, we verify it indirectly.
>> 
>> As you know, I define consciousness by that indubitable truth that all 
>> universal machines, cognitively rich enough to know that they are universal, 
>> find by looking inward (in the Gödel-Kleene sense), and which is also non 
>> provable (non rationally justifiable) and even non definable without 
>> invoking *some* notion of truth. Then such consciousness appears to be a 
>> fixed point for the doubting procedure, like in Descartes, and it gets a key 
>> role: self-speeding up relatively to universal machine(s).
>> 
>> So, it seems so clear to me that nobody can prove that anything is conscious 
>> that I make it into one of the main ways to characterise it.
> 
> Of course as a logician you tend to use "proof" to mean deductive proof...but 
> then you switch to a theological attitude toward the premises you've used and 
> treat them as given truths, instead of mere axioms. 

Here I was using “proof” in its common informal sense; it is more S4Grz1 than G 
(it is more []p & p than []p. Note that the machine cannot formalise []p & p).
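
(Gloss: the first-person "knower" is obtained from the third-person "prover" by 
conjoining truth, K p := []p & p. For a sound machine, []p and []p & p hold of 
the same sentences, but the machine cannot prove this, and cannot even define K 
uniformly in its own language, since that would require a truth predicate 
(Tarski). The modal logic of the compound modality is S4Grz, that is S4 plus 
the Grzegorczyk axiom []([](p -> []p) -> p) -> p; the "1" in S4Grz1 marks, in 
the notation used on this list, the further restriction to sigma_1 
(machine-verifiable) sentences, axiomatised by adding p -> []p.)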




> I appreciate your categorization of logics of self-reference. 


It is not really mine. All sound universal machines get it, sooner or later.



> But I  doubt that it has anything to do with human (or animal) consciousness. 
>  I don't think my dog is unconscious because he doesn't understand Goedelian 
> incompleteness. 

This is like saying that we don’t need superstring theory to appreciate a 
pizza. Your dog does not need to understand Gödel’s theorem to have its 
consciousness explained by machine theology.



> And I'm not conscious because I do.  I'm conscious because of the Darwinian 
> utility of being able to imagine myself in hypothetical situations.

If that is true, then consciousness is purely functional, which is contradicted 
by any personal data. As I have explained, consciousness accompanies such 
imagination, but that imagination filters consciousness. It cannot create it, 
just as two apples cannot create the number two. 




> 
>> 
>> Consciousness is already very similar to consistency, which is (for 
>> effective theories, and sound machines) equivalent to a belief in some 
>> reality. No machine can prove its own consistency, and no machine can prove 
>> that there is a reality satisfying its beliefs.
> 
> First, I can't prove it because such a proof would be relative to premises 
> which would simply be my beliefs.

Re: Is functionalism/computationalism unfalsifiable?

2020-06-13 Thread Bruno Marchal


> On 11 Jun 2020, at 20:34, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/10/2020 8:50 AM, Jason Resch wrote:
>> Thought perhaps there's an argument to be made from the church Turing 
>> theses, which pertains to possible states of knowledge accessible to a 
>> computer program/software. If consciousness is viewed as software then 
>> Church-Turing thesis implies that software could never know/realize if its 
>> ultimate computing substrate changed.
> 
> I don't understand the import of this.  The very concept of software means 
> "independent of hardware" by definition.  It is not affected by whether CT is 
> true or not, whether the computation is finite or not.  If you think that 
> consciousness evolved then it is an obvious inference that consciousness 
> would not include consciousness of its hardware implementation.

The “brute” consciousness does not evolve. It is the consciousness of the 
universal person already brought by the universal machine or number (finite 
thing). It is filtered by its consistent extensions, the first main one being 
the addition of the induction axioms (making it obey G*).

The machine cannot know its hardware through introspection, but it can know it 
through logic + the mechanist hypothesis, in which case its hardware has to 
comply with the logic of the machine's observable (prediction, []p & <>t). 
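
(Gloss: <>t is ~[]f, the machine's consistency, so the observable or prediction 
modality invoked here is the compound []p & <>t, "provable and consistent", 
taken on the sigma_1 sentences; it is the logic of that compound, rather than 
of []p alone, which is the candidate for the logic of physical observation in 
this approach.)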

So, the machine can test mechanism, by comparing the unique possible physics in 
its head with what it sees. The result is that there is no evidence for 
some primitive matter, or for physicalism, yet. Nature follows the arithmetical 
(but non computable) laws of physics derived from Mechanism (a hypothesis in 
cognitive science).

Bruno





> 
> Brent
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread 'Brent Meeker' via Everything List




On 6/12/2020 12:56 PM, smitra wrote:
Yes, the way we do physics assumes QM and statistical effects are due 
to the rules of QM. But in a more general multiverse setting 


Why should we consider such a thing?

where we consider different laws of physics or different initial 
conditions, the notion of single universes with well defined laws 
becomes ambiguous. 


Does it?  How can there be multiples if there are not singles?

Let's assume that consciousness is in general generated by algorithms 
which can be implemented in many different universes with different 
laws as well as in different locations within the same universe where 
the local environments are similar but not exactly the same. Then the 
algorithm plus its local environment 


Algorithm + environment sounds like a category error.

Brent

evolves in each universe according to the laws that apply in each 
universe. But because the conscious agent cannot locate itself in one 
or the other universe, one can now also consider time evolutions 
involving random jumps from one to the other universes. And so the 
whole notion of fixed universes with well defined laws breaks down. 





Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread 'Brent Meeker' via Everything List



On 6/12/2020 12:56 PM, smitra wrote:
The details here then involve that computations are not well defined 
if you refer to a single instant of time; you need to at least 
appeal to a sequence of states the system goes through. 
Consciousness cannot then be located at a single instant, in 
violation of our own experience.


I deny that our experience consists of instants without duration or
direction.  This is an assumption by computationalists made to simplify
their analysis.

Brent


If one needs to appeal to finite time intervals in a single universe 
setting, then given that in principle observers only have direct 
access to the exact moment they exist


No.  Finite intervals may overlap and there is no "exact moment they exist".

Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread smitra

On 10-06-2020 22:01, 'Brent Meeker' via Everything List wrote:

On 6/10/2020 7:07 AM, smitra wrote:
I think it can be tested indirectly, because generic computational 
theories of consciousness imply a multiverse. If my consciousness is 
the result of a computation, then because on the one hand any such 
computation necessarily involves a vast number of elementary bits and 
on the other hand whatever I'm conscious of is describable using only a 
handful of bits, the mapping between computational states and states 
of consciousness is N to 1 where N is astronomically large. So, the 
laws of physics we already know about must be effective laws where the 
statistical effects due to a self-localization uncertainty are already 
built into them.
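
(To make the counting explicit, with invented figures purely for illustration: 
if a state of consciousness is pinned down by k bits while any computation 
rendering it must track n >> k bits of machine state, then each conscious state 
is compatible with roughly N ~ 2^(n-k) distinct computational microstates. 
With, say, k of order 10^7 and n of order 10^15, N is about 2^(10^15): 
astronomically large, which is the sense of the "N to 1" mapping above, and of 
the self-localization uncertainty over those N states.)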


That seems to be pulled out of the air.  First, some of the laws of
physics are not statistical, e.g. those based on symmetries.  They are
more easily explained as desiderata, i.e. we want our laws of physics
to be independent of location and direction and time of day.  And N >>
conscious information simply says there is a lot of physical reality
of which we are not aware.  It doesn't say that what we have picked
out as laws are statistical, only that they are not complete...which
any physicist would admit...and as far as we know they include
inherent randomness.  To insist that this randomness is statistical is
just postulating multiple worlds to avoid randomness.



Yes, the way we do physics assumes QM and statistical effects are due to 
the rules of QM. But in a more general multiverse setting where we 
consider different laws of physics or different initial conditions, the 
notion of single universes with well defined laws becomes ambiguous. 
Let's assume that consciousness is in general generated by algorithms 
which can be implemented in many different universes with different laws 
as well as in different locations within the same universe where the 
local environments are similar but not exactly the same. Then the 
algorithm plus its local environment evolves in each universe according 
to the laws that apply in each universe. But because the conscious agent 
cannot locate itself in one or the other universe, one can now also 
consider time evolutions involving random jumps from one to the other 
universes. And so the whole notion of fixed universes with well defined 
laws breaks down.





Bruno has argued on the basis of this to motivate his theory, but this 
is a generic feature of any theory that assumes computational theory 
of consciousness. In particular, computational theory of consciousness 
is incompatible with a single universe theory. So, if you prove that 
only a single universe exists, then that disproves the computational 
theory of consciousness.


No, see above.

The details here then involve that computations are not well defined 
if you refer to a single instant of time; you need to at least appeal 
to a sequence of states the system goes through. Consciousness cannot 
then be located at a single instant, in violation of our own 
experience.


I deny that our experience consists of instants without duration or
direction.  This is an assumption by computationalists made to simplify
their analysis.

Brent


If one needs to appeal to finite time intervals in a single universe 
setting, then given that in principle observers only have direct access 
to the exact moment they exist, one ends up appealing to another sort of 
parallel worlds, one that single universe advocates somehow don't seem 
to have problems with.


Saibal



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread smitra

On 10-06-2020 18:00, Jason Resch wrote:

On Wednesday, June 10, 2020, smitra  wrote:


On 09-06-2020 19:08, Jason Resch wrote:


For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead
focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
Then
let's say we can exactly control sensory input and perfectly
monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve
activations,
etc.) and outputs, in terms of muscle movement, facial
expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical
sentences.
Both will say it hurts when asked, and if asked to write a
paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any
scientific
objective third-person analysis or test is doomed to fail to find
any
distinction in behaviors, and thus necessarily fails in its
ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


I think it can be tested indirectly, because generic computational
theories of consciousness imply a multiverse. If my consciousness is
the result of a computation, then because on the one hand any such
computation necessarily involves a vast number of elementary bits
and on the other hand whatever I'm conscious of is describable using
only a handful of bits, the mapping between computational states and
states of consciousness is N to 1 where N is astronomically large.
So, the laws of physics we already know about must be effective laws
where the statistical effects due to a self-localization uncertainty
are already built into them.

Bruno has argued on the basis of this to motivate his theory, but
this is a generic feature of any theory that assumes computational
theory of consciousness. In particular, computational theory of
consciousness is incompatible with a single universe theory. So, if
you prove that only a single universe exists, then that disproves
the computational theory of consciousness. The details here then
involve that computations are not well defined if you refer to a
single instant of time; you need to at least appeal to a sequence of
states the system goes through. Consciousness cannot then be located
at a single instant, in violation of our own experience. Therefore
either single-world theories are false or the computational theory of
consciousness is false.

Saibal


Hi Saibal,

I agree indirect mechanisms like looking at the resulting physics may
be the best way to test it. I was curious if there are any direct ways
to test it. It seems not, given the lack of any direct tests of
consciousness.

Though most people admit other humans are conscious, many would reject
the idea of a conscious computer.

Computationalism seems right, but it also seems like something that by
definition can't result in a failed test. So it has the appearance of
not being falsifiable.

A single universe, or digital physics would be evidence that either
computationalism is false or the ontology is sufficiently small, but a
finite/small ontology is doubtful for many reasons.

Jason



Yes, I agree that there is no hope for a direct test. Based on the 
finite information a conscious agent has, which is less than the amount 
of information contained in the system that renders the consciousness, a 
conscious agent should not be thought of as being located precisely in a 
state like some computer or a brain. Considering one particular 
implementation, like one particular computer running some algorithm, and 
then asking whether that thing is conscious, is perhaps not the 
right way to think about this. It seems to me that we need to consider 
consciousness in the opposite way.


If we start with some set of conscious states then each element of that 
set has a subjective notion of its state. And that can contain 
information about being implemented by a computer or a brain. Also, on 
the question of continuity, where we ask whether we are the same persons 
as yesterday, we can address that by taking the set of all conscious 
states as fundamental. Every conscious experience, whether that's me 
typing this message or a T. rex 68 million years ago, is a different 
state of the same conscious entity.


The question then becomes whether there exists a conscious state 
corresponding to knowing that its brain is a computer.


Saibal



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread 'Brent Meeker' via Everything List



On 6/12/2020 11:38 AM, Jason Resch wrote:



On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List 
> wrote:




On 6/10/2020 8:50 AM, Jason Resch wrote:
> Thought perhaps there's an argument to be made from the church
Turing
> theses, which pertains to possible states of knowledge
accessible to a
> computer program/software. If consciousness is viewed as
software then
> Church-Turing thesis implies that software could never
know/realize if
> its ultimate computing substrate changed.

I don't understand the import of this.  The very concept of software
mean "independent of hardware" by definition.  It is not affected by
whether CT is true or not, whether the computation is finite or not.


You're right. The only relevance of CT is it means any software can be 
run by any universal hardware. There's not some software that requires 
special hardware of a certain kind.


  If
you think that consciousness evolved then it is an obvious inference
that consciousness would not include consciousness of its hardware
implementation.


If consciousness is software, it can't know its hardware. But some 
like Searle or Penrose think the hardware is important.


I think the hardware is important when you're talking about a computer 
that is embedded in some environment.  The hardware can define the 
interaction with that environment.  We idealize the brain as a computer 
independent of its physical instantiation...but that's just a 
theoretical simplification.


Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread Jason Resch
On Thu, Jun 11, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/10/2020 8:50 AM, Jason Resch wrote:
> > Thought perhaps there's an argument to be made from the church Turing
> > theses, which pertains to possible states of knowledge accessible to a
> > computer program/software. If consciousness is viewed as software then
> > Church-Turing thesis implies that software could never know/realize if
> > its ultimate computing substrate changed.
>
> I don't understand the import of this.  The very concept of software
> mean "independent of hardware" by definition.  It is not affected by
> whether CT is true or not, whether the computation is finite or not.


You're right. The only relevance of CT is it means any software can be run
by any universal hardware. There's not some software that requires special
hardware of a certain kind.
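
A minimal sketch of that point (illustrative only; the toy program and the two
"substrates" below are invented for the example): the same pure function is
executed directly and through an extra layer of indirection standing in for
different hardware, and nothing in its input/output behaviour reveals which
substrate ran it.

def program(x):
    # the "software": a fixed input/output behaviour
    return (x * x + 1) % 97

def run_direct(f, inputs):
    # substrate 1: ordinary, direct execution
    return [f(x) for x in inputs]

def run_indirect(f, inputs):
    # substrate 2: a different execution path standing in for other
    # "hardware"; each call is wrapped in a thunk and dispatched lazily
    pending = [(lambda x=x: f(x)) for x in inputs]
    return [thunk() for thunk in pending]

inputs = list(range(20))
assert run_direct(program, inputs) == run_indirect(program, inputs)
print("identical behaviour on both substrates")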


>   If
> you think that consciousness evolved then it is an obvious inference
> that consciousness would not include consciousness of its hardware
> implementation.
>

If consciousness is software, it can't know its hardware. But some like
Searle or Penrose think the hardware is important.

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread Jason Resch
On Thu, Jun 11, 2020 at 11:03 AM Bruno Marchal  wrote:

>
> On 9 Jun 2020, at 19:08, Jason Resch  wrote:
>
> For the present discussion/question, I want to ignore the testable
> implications of computationalism on physical law, and instead focus on the
> following idea:
>
> "How can we know if a robot is conscious?”
>
>
> That question is very different than “is functionalism/computationalism
> unfalsifiable?”.
>
> Note that in my older paper, I relate computationalism to Putnam’s
> ambiguous functionalism, by defining computationalism by asserting the
> existence of a level of description of my body/brain such that I survive
> (my consciousness remains relatively invariant) with a digital machine
> (supposedly physically implemented) replacing my body/brain.
>
>
>
>
> Let's say there are two brains, one biological and one an exact
> computational emulation, meaning exact functional equivalence.
>
>
> I guess you mean “for all possible inputs”.
>
>
>
>
> Then let's say we can exactly control sensory input and perfectly monitor
> motor control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical
> inputs yield identical internal behavior (nerve activations, etc.) and
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them
> both to describe the pain, both will speak identical sentences. Both will
> say it hurts when asked, and if asked to write a paragraph describing the
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific
> objective third-person analysis or test is doomed to fail to find any
> distinction in behaviors, and thus necessarily fails in its ability to
> disprove consciousness in the functionally equivalent robot mind?
>
>
> With computationalism, (and perhaps without) we cannot prove that anything
> is conscious (we can know our own consciousness, but still cannot justify
> it to ourselves in any public way, or third person communicable way).
>
>
>
>
> Is computationalism as far as science can go on a theory of mind before it
> reaches this testing roadblock?
>
>
> Computationalism is indirectly testable. By verifying the physics implied
> by the theory of consciousness, we verify it indirectly.
>
> As you know, I define consciousness by that indubitable truth that all
> universal machines, cognitively rich enough to know that they are universal,
> find by looking inward (in the Gödel-Kleene sense), and which is also non
> provable (non rationally justifiable) and even non definable without
> invoking *some* notion of truth. Then such consciousness appears to be a
> fixed point for the doubting procedure, like in Descartes, and it gets a key
> role: self-speeding up relatively to universal machine(s).
>
> So, it seems so clear to me that nobody can prove that anything is
> conscious that I make it into one of the main ways to characterise it.
>
> Consciousness is already very similar to consistency, which is (for
> effective theories, and sound machines) equivalent to a belief in some
> reality. No machine can prove its own consistency, and no machine can
> prove that there is a reality satisfying its beliefs.
>
> In all cases, it is never the machine per se which is conscious, but the
> first person associated with the machine. There is a core universal person
> common to each of “us” (with “us” in a very large sense of universal
> numbers/machines).
>
> Consciousness is not much more than knowledge, and in particular
> indubitable knowledge.
>
> Bruno
>
>
>
>
So to summarize: is it right to say that our only hope to prove anything
about which theory of consciousness is correct, or any fact concerning the
consciousness of others, will rest on indirect tests that involve one's own
first-person experiences?  (Such as whether our apparent reality becomes
fuzzy below a certain level.)

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-12 Thread Jason Resch
On Wed, Jun 10, 2020 at 5:55 PM PGC  wrote:

>
>
> On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask them
>> both to describe the pain, both will speak identical sentences. Both will
>> say it hurts when asked, and if asked to write a paragraph describing the
>> pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind before
>> it reaches this testing roadblock?
>>
>
> Every piece of writing is a theory of mind; both within western science
> and beyond.
>
> What about the abilities to understand and use natural language, to come
> up with new avenues for scientific or creative inquiry, to experience
> qualia and report on them, adapting and dealing with unexpected
> circumstances through senses, and formulating + solving problems in
> benevolent ways by contributing towards the resilience of its community and
> environment?
>
> Trouble with this is that humans, even world leaders, fail those tests
> lol, but it's up to everybody, the AI and Computer Science folks in
> particular, to come up with the math, data, and complete their mission...
> and as amazing as developments have been around AI in the last couple of
> decades, I'm not certain we can pull it off, even if it would be pleasant
> to be wrong and some folks succeed.
>

It's interesting you bring this up, I just wrote an article about the
present capabilities of AI: https://alwaysasking.com/when-will-ai-take-over/


>
> Even if folks do succeed, a context of militarized nation states and
> monopolistic corporations competing for resources in self-destructive,
> short term ways... will not exactly help towards NOT weaponizing AI. A
> transnational politics, economics, corporate law, values/philosophies,
> ethics, culture etc. to vanquish poverty and exploitation of people,
> natural resources, life; while being sustainable and benevolent stewards of
> the possibilities of life... would seem to be prerequisite to develop some
> amazing AI.
>
> Ideas are all out there but progressives are ineffective politically on a
> global scale. The right wing folks, finance guys, large irresponsible
> monopolistic corporations are much more effective in organizing themselves
> globally and forcing agendas down everybody's throats. So why wouldn't AI
> do the same? PGC
>
>
AI will either be a blessing or a curse. I don't think it can be anything
in the middle.

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread 'Brent Meeker' via Everything List



On 6/11/2020 9:03 AM, Bruno Marchal wrote:


On 9 Jun 2020, at 19:08, Jason Resch > wrote:


For the present discussion/question, I want to ignore the testable 
implications of computationalism on physical law, and instead focus 
on the following idea:


"How can we know if a robot is conscious?”


That question is very different than “is 
functionalism/computationalism unfalsifiable?”.


Note that in my older paper, I relate computationalism to Putnam’s 
ambiguous functionalism, by defining computationalism by asserting the 
existence of a level of description of my body/brain such that I 
survive (my consciousness remains relatively invariant) with a digital 
machine (supposedly physically implemented) replacing my body/brain.






Let's say there are two brains, one biological and one an exact 
computational emulation, meaning exact functional equivalence.


I guess you mean “for all possible inputs”.




Then let's say we can exactly control sensory input and perfectly 
monitor motor control outputs between the two brains.


Given that computationalism implies functional equivalence, then 
identical inputs yield identical internal behavior (nerve 
activations, etc.) and outputs, in terms of muscle movement, facial 
expressions, and speech.


If we stimulate nerves in the person's back to cause pain, and ask 
them both to describe the pain, both will speak identical sentences. 
Both will say it hurts when asked, and if asked to write a paragraph 
describing the pain, will provide identical accounts.


Does the definition of functional equivalence mean that any 
scientific objective third-person analysis or test is doomed to fail 
to find any distinction in behaviors, and thus necessarily fails in 
its ability to disprove consciousness in the functionally equivalent 
robot mind?


With computationalism, (and perhaps without) we cannot prove that 
anything is conscious (we can know our own consciousness, but still 
cannot justify it to ourselves in any public way, or third person 
communicable way).






Is computationalism as far as science can go on a theory of mind 
before it reaches this testing roadblock?


Computationalism is indirectly testable. By verifying the physics 
implied by the theory of consciousness, we verify it indirectly.


As you know, I define consciousness by that indubitable truth that all 
universal machines, cognitively rich enough to know that they are 
universal, find by looking inward (in the Gödel-Kleene sense), and 
which is also non provable (non rationally justifiable) and even non 
definable without invoking *some* notion of truth. Then such 
consciousness appears to be a fixed point for the doubting procedure, 
like in Descartes, and it gets a key role: self-speeding up relatively 
to universal machine(s).


So, it seems so clear to me that nobody can prove that anything is 
conscious that I make it into one of the main ways to characterise it.


Of course as a logician you tend to use "proof" to mean deductive 
proof...but then you switch to a theological attitude toward the 
premises you've used and treat them as given truths, instead of mere 
axioms.  I appreciate your categorization of logics of self-reference.  
But I  doubt that it has anything to do with human (or animal) 
consciousness.  I don't think my dog is unconscious because he doesn't 
understand Goedelian incompleteness.  And I'm not conscious because I 
do.  I'm conscious because of the Darwinian utility of being able to 
imagine myself in hypothetical situations.




Consciousness is already very similar to consistency, which is (for 
effective theories, and sound machines) equivalent to a belief in some 
reality. No machine can prove its own consistency, and no machine can 
prove that there is a reality satisfying its beliefs.


First, I can't prove it because such a proof would be relative to 
premises which would simply be my beliefs.  Second, I can prove it in the 
sense of jurisprudence...i.e. beyond reasonable doubt.  Science doesn't 
care about "proofs", only about evidence.


Brent



In all cases, it is never the machine per se which is conscious, but 
the first person associated with the machine. There is a core 
universal person common to each of “us” (with “us” in a very large 
sense of universal numbers/machines).


Consciousness is not much more than knowledge, and in particular 
indubitable knowledge.


Bruno





Jason


Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread 'Brent Meeker' via Everything List




On 6/10/2020 8:50 AM, Jason Resch wrote:
Thought perhaps there's an argument to be made from the church Turing 
theses, which pertains to possible states of knowledge accessible to a 
computer program/software. If consciousness is viewed as software then 
Church-Turing thesis implies that software could never know/realize if 
its ultimate computing substrate changed.


I don't understand the import of this.  The very concept of software 
means "independent of hardware" by definition.  It is not affected by 
whether CT is true or not, whether the computation is finite or not.  If 
you think that consciousness evolved then it is an obvious inference 
that consciousness would not include consciousness of its hardware 
implementation.


Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 05:25, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>> 
>> 
>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List 
>> mailto:everything-list@googlegroups.com>> 
>> wrote:
>> 
>> 
>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List 
>>> >> > wrote:
>>> 
>>> 
>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
 
 
 On Wed, 10 Jun 2020 at 03:08, Jason Resch >>> > wrote:
 For the present discussion/question, I want to ignore the testable 
 implications of computationalism on physical law, and instead focus on the 
 following idea:
 
 "How can we know if a robot is conscious?"
 
 Let's say there are two brains, one biological and one an exact 
 computational emulation, meaning exact functional equivalence. Then let's 
 say we can exactly control sensory input and perfectly monitor motor 
 control outputs between the two brains.
 
 Given that computationalism implies functional equivalence, then identical 
 inputs yield identical internal behavior (nerve activations, etc.) and 
 outputs, in terms of muscle movement, facial expressions, and speech.
 
 If we stimulate nerves in the person's back to cause pain, and ask them 
 both to describe the pain, both will speak identical sentences. Both will 
 say it hurts when asked, and if asked to write a paragraph describing the 
 pain, will provide identical accounts.
 
 Does the definition of functional equivalence mean that any scientific 
 objective third-person analysis or test is doomed to fail to find any 
 distinction in behaviors, and thus necessarily fails in its ability to 
 disprove consciousness in the functionally equivalent robot mind?
 
 Is computationalism as far as science can go on a theory of mind before it 
 reaches this testing roadblock?
 
 We can’t know if a particular entity is conscious,
>>> 
>>> If the term means anything, you can know one particular entity is conscious.
>>> 
>>> Yes, I should have added we can’t know that a particular entity other 
>>> than oneself is conscious.
 but we can know that if it is conscious, then a functional equivalent, as 
 you describe, is also conscious.
>>> 
>>> So any entity functionally equivalent to yourself, you must know is 
>>> conscious.  But "functionally equivalent" is vague, ambiguous, and 
>>> certainly needs qualifying by environment and other factors.  Is a dolphin 
>>> functionally equivalent to me.  Not in swimming.
>>> 
>>> Functional equivalence here means that you replace a part with a new part 
>>> that behaves in the same way. So if you replaced the copper wires in a 
>>> computer with silver wires, the silver wires would be functionally 
>>> equivalent, and you would notice no change in using the computer. Copper 
>>> and silver have different physical properties such as conductivity, but the 
>>> replacement would be chosen so that this is not functionally relevant.
>> 
>> But that functional equivalence at a microscopic level is worthless in 
>> judging what entities are conscious.  The whole reason for bringing it up 
>> is that it provides a criterion for recognizing consciousness at the entity 
>> level.
>> 
>> The thought experiment involves removing a part of the brain that would 
>> normally result in an obvious deficit in qualia and replacing it with a 
>> non-biological component that replicates its interactions with the rest of 
>> the brain. Remove the visual cortex, and the subject becomes blind, 
>> staggering around walking into things, saying "I'm blind, I can't see 
>> anything, why have you done this to me?" But if you replace it with an 
>> implant that processes input and sends output to the remaining neural 
>> tissue, the subject will have normal input to his leg muscles and his vocal 
>> cords, so he will be able to navigate his way around a room and will say "I 
>> can see everything normally, I feel just the same as before". This follows 
>> necessarily from the assumptions. But does it also follow that the subject 
>> will have normal visual qualia? If not, something very strange would be 
>> happening: he would be blind, but would behave normally, including his 
>> behaviour in communicating that everything feels normal.
> 
> I understand the "Yes doctor" experiment.  But Jason was asking about being 
> able to recognize consciousness by function of the entity, and I think that 
> is a different problem that needs to take into account the possibility of 
> different kinds and degrees of consciousness.  The YD question makes it 
> binary by equating consciousness with exactly the same as pre-doctor.  
> Applying that to Jason's question you would conclude that you cannot infer 
> that 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 04:49, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:
>> 
>> 
>> On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List 
>> mailto:everything-list@googlegroups.com>> 
>> wrote:
>> 
>> 
>> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 09:15, Jason Resch >> > wrote:
>>> 
>>> 
>>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou >> > wrote:
>>> 
>>> 
>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch >> > wrote:
>>> For the present discussion/question, I want to ignore the testable 
>>> implications of computationalism on physical
>>>law, and instead focus on the following idea:
>>> 
>>> "How can we know if a robot is conscious?"
>>> 
>>> Let's say there are two brains, one biological and one an exact 
>>> computational emulation, meaning exact functional equivalence. Then let's 
>>> say we can exactly control sensory input and perfectly monitor motor 
>>> control outputs between the two brains.
>>> 
>>> Given that computationalism implies functional equivalence, then identical 
>>> inputs yield identical internal behavior (nerve activations, etc.) and 
>>> outputs, in terms of muscle movement, facial expressions, and speech.
>>> 
>>> If we stimulate nerves in the person's back to cause pain, and ask them 
>>> both to describe the pain, both will speak identical sentences. Both will 
>>> say it hurts when asked, and if asked to write a paragraph describing the 
>>> pain, will provide identical accounts.
>>> 
>>> Does the definition of functional equivalence mean that any scientific 
>>> objective third-person analysis or test is doomed to fail to find any 
>>> distinction in behaviors, and thus necessarily fails in its ability to 
>>> disprove consciousness in the functionally equivalent robot mind?
>>> 
>>> Is computationalism as far as science can go on a theory of mind before it 
>>> reaches this testing roadblock?
>>> 
>>> We can’t know if a particular entity is conscious, but we can know that if 
>>> it is conscious, then a functional equivalent, as you describe, is also 
>>> conscious. This is the subject of David Chalmers’ paper:
>>> 
>>> http://consc.net/papers/qualia.html 
>>> 
>>> Chalmers' argument is that if a different brain is not conscious, then 
>>> somewhere along the way we get either suddenly disappearing or fading 
>>> qualia, which I agree are philosophically distasteful.
>>> 
>>> But what if someone is fine with philosophical zombies and suddenly 
>>> disappearing qualia? Is there any impossibility proof for such things?
>>> 
>>> Philosophical zombies are less problematic than partial philosophical 
>>> zombies. Partial philosophical zombies would render the idea of qualia 
>>> absurd, because it would mean that we might be completely blind, for 
>>> example, without realising it.
>> 
>> Isn't this what blindsight exemplifies?
>> 
>> Blindsight entails behaving as if you have vision but not believing that you 
>> have vision.
> 
> And you don't believe you have vision because you're missing the qualia of 
> seeing.
> 
>> Anton syndrome entails believing you have vision but not behaving as if you 
>> have vision.
>> Being a partial zombie would entail believing you have vision and behaving 
>> as if you have vision, but not actually having vision. 
> 
> That would be a total zombie with respect to vision.  The person with 
> blindsight is a partial zombie.  They have the function but not the qualia.
> 
>>> As an absolute minimum, although we may not be able to test for or define 
>>> qualia, we should know if we have them. Take this requirement away, and 
>>> there is nothing left.
>>> 
>>> Suddenly disappearing qualia are logically possible but it is difficult to 
>>> imagine how it could work. We would be normally conscious while our neurons 
>>> were being replaced, but when one special glutamate receptor in a special 
>>> neuron in the left parietal lobe was replaced, or when exactly 35.54876% 
>>> replacement of all neurons was reached, the internal lights would suddenly 
>>> go out.
>> 
>> I think this all-or-nothing is misconceived.  It's not internal cognition 
>> that might vanish suddenly, it's some specific aspect of experience: There 
>> are people who, thru brain injury, lose the ability to recognize 
>> faces...recognition is a qualia.   Of course people's frequency range of 
>> hearing fades (don't ask me how I know).  My mother, when she was 95 lost 
>> color vision in one eye, but not the other.  Some people, it seems cannot do 
>> higher mathematics.  So how would you know if you lost the qualia of empathy 
>> for example?  Could it not just fade...i.e. become evoked less and less?
>> 
>> I don't believe suddenly disappearing qualia can happen, but either this - 
>> leading to full 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 01:14, Jason Resch  wrote:
> 
> 
> 
> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou  > wrote:
> 
> 
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  > wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
> 
> We can’t know if a particular entity is conscious, but we can know that if it 
> is conscious, then a functional equivalent, as you describe, is also 
> conscious. This is the subject of David Chalmers’ paper:
> 
> http://consc.net/papers/qualia.html 
> 
> Chalmers' argument is that if a different brain is not conscious, then 
> somewhere along the way we get either suddenly disappearing or fading qualia, 
> which I agree are philosophically distasteful.
> 
> But what if someone is fine with philosophical zombies and suddenly 
> disappearing qualia? Is there any impossibility proof for such things?

This would not make sense with Digital Mechanism. Now, by assuming some 
NON-mechanism, maybe someone can still make sense of this.

That is why qualia and quanta are automatically present in *any* Turing 
universal realm (the model or semantics of any Turing universal or sigma_1 
complete theory). That is why physicalists need to abandon mechanism, because 
it invokes a non Turing emulable reality (like a primitive material substance) 
to make consciousness real for some types of universal machines, and unreal for 
others. As there is no evidence until now for such primitive matter, this is a 
bit like adding complexity to avoid the consequence of a simpler theory.

Bruno



> 
> Jason
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 10 Jun 2020, at 01:02, Stathis Papaioannou  wrote:
> 
> 
> 
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  > wrote:
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?"
> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence. Then let's say we can 
> exactly control sensory input and perfectly monitor motor control outputs 
> between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
> 
> We can’t know if a particular entity is conscious, but we can know that if it 
> is conscious, then a functional equivalent,

… at some level of description. 

A dreaming human is functionally equivalent to a stone. The first is 
conscious, the other is not. To avoid this, you need to make precise the level 
at which you define the functional equivalence. 

Bruno
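
The level-relativity of "functional equivalence" can be put in a small sketch
(illustrative only; the two toy "systems" and the interfaces below are
invented): the same pair of systems passes or fails the equivalence test
depending on which interface, i.e. which level of description, the comparison
is made at.

def equivalent(sys_a, sys_b, interface, probes):
    # equivalent at this level iff the interface reports the same
    # observable for every probe
    return all(interface(sys_a, p) == interface(sys_b, p) for p in probes)

# two toy systems, described only by whether they move and whether
# anything is going on inside
dreamer = {"moves": False, "inner_activity": True}
stone   = {"moves": False, "inner_activity": False}

# level 1: coarse behavioural interface
behaviour = lambda s, probe: s["moves"]

# level 2: finer interface that also probes internal activity
fine = lambda s, probe: (s["moves"], s["inner_activity"])

probes = ["poke", "shout", "wait"]
print(equivalent(dreamer, stone, behaviour, probes))  # True: equivalent at the coarse level
print(equivalent(dreamer, stone, fine, probes))       # False: not at the finer level

Until the substitution level is fixed, "functionally equivalent" does not yet
settle anything about consciousness.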



> as you describe, is also conscious. This is the subject of David Chalmers’ 
> paper:
> 
> http://consc.net/papers/qualia.html 
> 
> -- 
> Stathis Papaioannou
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-11 Thread Bruno Marchal

> On 9 Jun 2020, at 19:08, Jason Resch  wrote:
> 
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
> 
> "How can we know if a robot is conscious?”

That question is very different than “is functionalism/computationalism 
unfalsifiable?”.

Note that in my older paper, I relate computationalism to Putnam’s ambiguous 
functionalism, by defining computationalism by asserting the existence of a 
level of description of my body/brain such that I survive (my consciousness 
remains relatively invariant) with a digital machine (supposedly physically 
implemented) replacing my body/brain.



> 
> Let's say there are two brains, one biological and one an exact computational 
> emulation, meaning exact functional equivalence.

I guess you mean “for all possible inputs”.




> Then let's say we can exactly control sensory input and perfectly monitor 
> motor control outputs between the two brains.
> 
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
> 
> If we stimulate nerves in the person's back to cause pain, and ask them both 
> to describe the pain, both will speak identical sentences. Both will say it 
> hurts when asked, and if asked to write a paragraph describing the pain, will 
> provide identical accounts.
> 
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?

With computationalism, (and perhaps without) we cannot prove that anything is 
conscious (we can know our own consciousness, but still cannot justify it to 
ourselves in any public way, or third-person communicable way). 



> 
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?

Computationalism is indirectly testable. By verifying the physics implied by 
the theory of consciousness, we verify it indirectly.

As you know, I define consciousness by that indubitable truth that all 
universal machines, cognitively rich enough to know that they are universal, 
find by looking inward (in the Gödel-Kleene sense), and which is also non 
provable (non rationally justifiable) and even non definable without invoking 
*some* notion of truth. Then such consciousness appears to be a fixed point for 
the doubting procedure, like in Descartes, and it gets a key role: self-speeding 
up relatively to universal machine(s).

So, it seems so clear to me that nobody can prove that anything is conscious 
that I make it into one of the main ways to characterise it.

Consciousness is already very similar to consistency, which is (for effective 
theories, and sound machines) equivalent to a belief in some reality. No machine 
can prove its own consistency, and no machine can prove that there is a reality 
satisfying its beliefs.

In all cases, it is never the machine per se which is conscious, but the first 
person associated with the machine. There is a core universal person common to 
each of “us” (with “us” in a very large sense of universal numbers/machines).

Consciousness is not much more than knowledge, and in particular indubitable 
knowledge.

Bruno



> 
> Jason
> 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread PGC


On Tuesday, June 9, 2020 at 7:08:30 PM UTC+2, Jason wrote:
>
> For the present discussion/question, I want to ignore the testable 
> implications of computationalism on physical law, and instead focus on the 
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact 
> computational emulation, meaning exact functional equivalence. Then let's 
> say we can exactly control sensory input and perfectly monitor motor 
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical 
> inputs yield identical internal behavior (nerve activations, etc.) and 
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them 
> both to describe the pain, both will speak identical sentences. Both will 
> say it hurts when asked, and if asked to write a paragraph describing the 
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific 
> objective third-person analysis or test is doomed to fail to find any 
> distinction in behaviors, and thus necessarily fails in its ability to 
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it 
> reaches this testing roadblock?
>

Every piece of writing is a theory of mind; both within western science and 
beyond. 

What about the abilities to understand and use natural language, to come up 
with new avenues for scientific or creative inquiry, to experience qualia 
and report on them, to adapt to and deal with unexpected circumstances 
through the senses, and to formulate and solve problems in benevolent ways by 
contributing towards the resilience of its community and environment? 

Trouble with this is that humans, even world leaders, fail those tests lol, 
but it's up to everybody, the AI and Computer Science folks in particular, 
to come up with the math, data, and complete their mission... and as 
amazing as developments have been around AI in the last couple of decades, 
I'm not certain we can pull it off, even if it would be pleasant to be 
wrong and some folks succeed. 

Even if folks do succeed, a context of militarized nation states and 
monopolistic corporations competing for resources in self-destructive, 
short term ways... will not exactly help towards NOT weaponizing AI. A 
transnational politics, economics, corporate law, values/philosophies, 
ethics, culture etc. to vanquish poverty and exploitation of people, 
natural resources, life; while being sustainable and benevolent stewards of 
the possibilities of life... would seem to be prerequisite to develop some 
amazing AI. 

Ideas are all out there but progressives are ineffective politically on a 
global scale. The right wing folks, finance guys, large irresponsible 
monopolistic corporations are much more effective in organizing themselves 
globally and forcing agendas down everybody's throats. So why wouldn't AI 
do the same? PGC


 

>
> Jason
>



Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread 'Brent Meeker' via Everything List




On 6/10/2020 7:07 AM, smitra wrote:
I think it can be tested indirectly, because generic computational 
theories of consciousness imply a multiverse. If my consciousness is 
the result of a computation, then because on the one hand any such 
computation necessarily involves a vast number of elementary bits, and 
on the other hand whatever I'm conscious of is describable using only a 
handful of bits, the mapping between computational states and states 
of consciousness is N to 1, where N is astronomically large. So, the 
laws of physics we already know about must be effective laws into which 
the statistical effects due to a self-localization uncertainty are 
already built.


That seems to be pulled out of the air.  First, some of the laws of 
physics are not statistical, e.g. those based on symmetries.  They are 
more easily explained as desiderata, i.e. we want our laws of physics to 
be independent of location and direction and time of day.  And N >> 
conscious information simply says there is a lot of physical reality of 
which we are not aware.  It doesn't say that what we have picked out as 
laws are statistical, only that they are not complete...which any 
physicist would admit...and as far as we know they include inherent 
randomness.  To insist that this randomness is statistical is just 
postulating multiple worlds to avoid randomness.




Bruno has argued on the basis of this to motivate his theory, but this 
is a generic feature of any theory that assumes a computational theory 
of consciousness. In particular, the computational theory of consciousness 
is incompatible with a single-universe theory. So, if you prove that 
only a single universe exists, then that disproves the computational 
theory of consciousness. 


No, see above.

The details here involve the fact that computations are not well defined 
if you refer to a single instant of time; you need to appeal at least 
to a sequence of states the system goes through. Consciousness cannot 
then be located at a single instant, in violation of our own 
experience. 


I deny that our experience consists of instants without duration or 
direction.  This is an assumption by computationalists made to simplify 
their analysis.


Brent

Therefore either single-world theories are false or the computational 
theory of consciousness is false. 





Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread Stathis Papaioannou
On Thu, 11 Jun 2020 at 01:50, Jason Resch  wrote:

>
>
> On Tuesday, June 9, 2020, Stathis Papaioannou  wrote:
>
>>
>>
>> On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>>
>>>
>>> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>


 On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:



 On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
 everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 03:08, Jason Resch 
> wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on 
>> the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask
>> them both to describe the pain, both will speak identical sentences. Both
>> will say it hurts when asked, and if asked to write a paragraph
>> describing the pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any
>> scientific objective third-person analysis or test is doomed to fail to
>> find any distinction in behaviors, and thus necessarily fails in its
>> ability to disprove consciousness in the functionally equivalent robot 
>> mind?
>>
>> Is computationalism as far as science can go on a theory of mind
>> before it reaches this testing roadblock?
>>
>
> We can’t know if a particular entity is conscious,
>
>
> If the term means anything, you can know one particular entity is
> conscious.
>

 Yes, I should have added we can’t know that a particular entity
 other than oneself is conscious.

> but we can know that if it is conscious, then a functional equivalent,
> as you describe, is also conscious.
>
>
> So any entity functionally equivalent to yourself, you must know is
> conscious.  But "functionally equivalent" is vague, ambiguous, and
> certainly needs qualifying by environment and other factors.  Is a dolphin
> functionally equivalent to me.  Not in swimming.
>

 Functional equivalence here means that you replace a part with a new
 part that behaves in the same way. So if you replaced the copper wires in a
 computer with silver wires, the silver wires would be functionally
 equivalent, and you would notice no change in using the computer. Copper
 and silver have different physical properties such as conductivity, but the
 replacement would be chosen so that this is not functionally relevant.


 But that functional equivalence at a microscopic level is worthless in
 judging what entities are conscious. The whole reason for bringing it up
 is that it provides a criterion for recognizing consciousness at the entity
 level.

>>>
>>> The thought experiment involves removing a part of the brain that would
>>> normally result in an obvious deficit in qualia and replacing it with a
>>> non-biological component that replicates its interactions with the rest of
>>> the brain. Remove the visual cortex, and the subject becomes blind,
>>> staggering around walking into things, saying "I'm blind, I can't see
>>> anything, why have you done this to me?" But if you replace it with an
>>> implant that processes input and sends output to the remaining neural
>>> tissue, the subject will have normal input to his leg muscles and his vocal
>>> cords, so he will be able to navigate his way around a room and will say "I
>>> can see everything normally, I feel just the same as before". This follows
>>> necessarily from the assumptions. But does it also follow that the subject
>>> will have normal visual qualia? If not, something very strange would be
>>> happening: he would be blind, but would behave normally, including his
>>> behaviour in communicating that everything feels normal.
>>>
>>>
>>> I understand the "Yes doctor" experiment.  But Jason was asking about
>>> being able to recognize consciousness by function of the entity, and I
>>> think that is a different 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread Jason Resch
On Wednesday, June 10, 2020, smitra  wrote:

> On 09-06-2020 19:08, Jason Resch wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on
>> the following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then
>> let's say we can exactly control sensory input and perfectly monitor
>> motor control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions,
>> and speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask
>> them both to describe the pain, both will speak identical sentences.
>> Both will say it hurts when asked, and if asked to write a paragraph
>> describing the pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind
>> before it reaches this testing roadblock?
>>
>>
>
> I think it can be tested indirectly, because generic computational
> theories of consciousness imply a multiverse. If my consciousness is the
> result of a computation, then because on the one hand any such computation
> necessarily involves a vast number of elementary bits, and on the other hand
> whatever I'm conscious of is describable using only a handful of bits, the
> mapping between computational states and states of consciousness is N to 1,
> where N is astronomically large. So, the laws of physics we already know
> about must be effective laws into which the statistical effects due to a
> self-localization uncertainty are already built.
>
> Bruno has argued on the basis of this to motivate his theory, but this is
> a generic feature of any theory that assumes a computational theory of
> consciousness. In particular, the computational theory of consciousness is
> incompatible with a single-universe theory. So, if you prove that only a
> single universe exists, then that disproves the computational theory of
> consciousness. The details here involve the fact that computations are not well
> defined if you refer to a single instant of time; you need to appeal at least
> to a sequence of states the system goes through. Consciousness
> cannot then be located at a single instant, in violation of our own
> experience. Therefore either single-world theories are false or
> the computational theory of consciousness is false.
>
> Saibal
>
>
Hi Saibal,

I agree indirect mechanisms like looking at the resulting physics may be
the best way to test it. I was curious if there are any direct ways to test it.
It seems not, given the lack of any direct tests of consciousness.

Though most people admit other humans are conscious, many would reject the
idea of a conscious computer.

Computationalism seems right, but it also seems like something that by
definition can't result in a failed test. So it has the appearance of not
being falsifiable.

A single universe, or digital physics would be evidence that either
computationalism is false or the ontology is sufficiently small, but a
finite/small ontology is doubtful for many reasons.

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread Jason Resch
On Tuesday, June 9, 2020, Stathis Papaioannou  wrote:

>
>
> On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>>
>>>
>>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>


 On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



 On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:

> For the present discussion/question, I want to ignore the testable
> implications of computationalism on physical law, and instead focus on the
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact
> computational emulation, meaning exact functional equivalence. Then let's
> say we can exactly control sensory input and perfectly monitor motor
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then
> identical inputs yield identical internal behavior (nerve activations,
> etc.) and outputs, in terms of muscle movement, facial expressions, and
> speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask
> them both to describe the pain, both will speak identical sentences. Both
> will say it hurts when asked, and if asked to write a paragraph
> describing the pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific
> objective third-person analysis or test is doomed to fail to find any
> distinction in behaviors, and thus necessarily fails in its ability to
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind
> before it reaches this testing roadblock?
>

 We can’t know if a particular entity is conscious,


 If the term means anything, you can know one particular entity is
 conscious.

>>>
>>> Yes, I should have added we can’t know that a particular entity
>>> other than oneself is conscious.
>>>
 but we can know that if it is conscious, then a functional equivalent,
 as you describe, is also conscious.


 So any entity functionally equivalent to yourself, you must know is
 conscious.  But "functionally equivalent" is vague, ambiguous, and
 certainly needs qualifying by environment and other factors.  Is a dolphin
 functionally equivalent to me.  Not in swimming.

>>>
>>> Functional equivalence here means that you replace a part with a new
>>> part that behaves in the same way. So if you replaced the copper wires in a
>>> computer with silver wires, the silver wires would be functionally
>>> equivalent, and you would notice no change in using the computer. Copper
>>> and silver have different physical properties such as conductivity, but the
>>> replacement would be chosen so that this is not functionally relevant.
>>>
>>>
>>> But that functional equivalence at a microscopic level is worthless in
>>> judging what entities are conscious. The whole reason for bringing it up
>>> is that it provides a criterion for recognizing consciousness at the entity
>>> level.
>>>
>>
>> The thought experiment involves removing a part of the brain that would
>> normally result in an obvious deficit in qualia and replacing it with a
>> non-biological component that replicates its interactions with the rest of
>> the brain. Remove the visual cortex, and the subject becomes blind,
>> staggering around walking into things, saying "I'm blind, I can't see
>> anything, why have you done this to me?" But if you replace it with an
>> implant that processes input and sends output to the remaining neural
>> tissue, the subject will have normal input to his leg muscles and his vocal
>> cords, so he will be able to navigate his way around a room and will say "I
>> can see everything normally, I feel just the same as before". This follows
>> necessarily from the assumptions. But does it also follow that the subject
>> will have normal visual qualia? If not, something very strange would be
>> happening: he would be blind, but would behave normally, including his
>> behaviour in communicating that everything feels normal.
>>
>>
>> I understand the "Yes doctor" experiment.  But Jason was asking about
>> being able to recognize consciousness by function of the entity, and I
>> think that is a different problem that needs to take into account the
>> possibility of different kinds and degrees of consciousness.  The YD
>> question makes it binary by equating consciousness with exactly the same as
>> 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-10 Thread smitra

On 09-06-2020 19:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead focus on
the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence. Then
let's say we can exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve activations,
etc.) and outputs, in terms of muscle movement, facial expressions,
and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical sentences.
Both will say it hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean that any scientific
objective third-person analysis or test is doomed to fail to find any
distinction in behaviors, and thus necessarily fails in its ability to
disprove consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?




I think it can be tested indirectly, because generic computational 
theories of consciousness imply a multiverse. If my consciousness is the 
result of a computation, then because on the one hand any such 
computation necessarily involves a vast number of elementary bits, and on 
the other hand whatever I'm conscious of is describable using only a 
handful of bits, the mapping between computational states and states of 
consciousness is N to 1, where N is astronomically large. So, the laws of 
physics we already know about must be effective laws into which the 
statistical effects due to a self-localization uncertainty are already 
built.
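
As a rough back-of-the-envelope version of this counting, with purely illustrative 
numbers (the argument only needs the multiplicity to be huge, not these particular 
figures): if the computation running "me" is specified by $b$ bits while my 
conscious content at a given moment is captured by $k$ bits, with $b \gg k$, then 
the number of computational states realising each conscious state is roughly

    $N \approx 2^{b} / 2^{k} = 2^{\,b-k}$,  e.g. $b \sim 10^{15}$, $k \sim 10^{2}$ gives $N \sim 2^{10^{15}}$,

so from the first-person point of view there is an enormous self-localization 
uncertainty over which of those $N$ computational states is "mine", and the 
effective laws must carry the corresponding statistics.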


Bruno has argued on the basis of this to motivate his theory, but this 
is a generic feature of any theory that assumes a computational theory of 
consciousness. In particular, the computational theory of consciousness is 
incompatible with a single-universe theory. So, if you prove that only a 
single universe exists, then that disproves the computational theory of 
consciousness. The details here involve the fact that computations are not 
well defined if you refer to a single instant of time; you need to appeal 
at least to a sequence of states the system goes through. Consciousness 
cannot then be located at a single instant, in violation of our own 
experience. Therefore either single-world theories are false or the 
computational theory of consciousness is false.


Saibal



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 13:25, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>>
>>>
>>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>>
 For the present discussion/question, I want to ignore the testable
 implications of computationalism on physical law, and instead focus on the
 following idea:

 "How can we know if a robot is conscious?"

 Let's say there are two brains, one biological and one an exact
 computational emulation, meaning exact functional equivalence. Then let's
 say we can exactly control sensory input and perfectly monitor motor
 control outputs between the two brains.

 Given that computationalism implies functional equivalence, then
 identical inputs yield identical internal behavior (nerve activations,
 etc.) and outputs, in terms of muscle movement, facial expressions, and
 speech.

 If we stimulate nerves in the person's back to cause pain, and ask them
 both to describe the pain, both will speak identical sentences. Both will
 say it hurts when asked, and if asked to write a paragraph describing the
 pain, will provide identical accounts.

 Does the definition of functional equivalence mean that any scientific
 objective third-person analysis or test is doomed to fail to find any
 distinction in behaviors, and thus necessarily fails in its ability to
 disprove consciousness in the functionally equivalent robot mind?

 Is computationalism as far as science can go on a theory of mind before
 it reaches this testing roadblock?

>>>
>>> We can’t know if a particular entity is conscious,
>>>
>>>
>>> If the term means anything, you can know one particular entity is
>>> conscious.
>>>
>>
>> Yes, I should have added we can’t know that a particular entity
>> other than oneself is conscious.
>>
>>> but we can know that if it is conscious, then a functional equivalent,
>>> as you describe, is also conscious.
>>>
>>>
>>> So any entity functionally equivalent to yourself, you must know is
>>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>>> certainly needs qualifying by environment and other factors.  Is a dolphin
>>> functionally equivalent to me.  Not in swimming.
>>>
>>
>> Functional equivalence here means that you replace a part with a new part
>> that behaves in the same way. So if you replaced the copper wires in a
>> computer with silver wires, the silver wires would be functionally
>> equivalent, and you would notice no change in using the computer. Copper
>> and silver have different physical properties such as conductivity, but the
>> replacement would be chosen so that this is not functionally relevant.
>>
>>
>> But that functional equivalence at a microscopic level is worthless in
>> judging what entities are conscious. The whole reason for bringing it up
>> is that it provides a criterion for recognizing consciousness at the entity
>> level.
>>
>
> The thought experiment involves removing a part of the brain that would
> normally result in an obvious deficit in qualia and replacing it with a
> non-biological component that replicates its interactions with the rest of
> the brain. Remove the visual cortex, and the subject becomes blind,
> staggering around walking into things, saying "I'm blind, I can't see
> anything, why have you done this to me?" But if you replace it with an
> implant that processes input and sends output to the remaining neural
> tissue, the subject will have normal input to his leg muscles and his vocal
> cords, so he will be able to navigate his way around a room and will say "I
> can see everything normally, I feel just the same as before". This follows
> necessarily from the assumptions. But does it also follow that the subject
> will have normal visual qualia? If not, something very strange would be
> happening: he would be blind, but would behave normally, including his
> behaviour in communicating that everything feels normal.
>
>
> I understand the "Yes doctor" experiment.  But Jason was asking about
> being able to recognize consciousness by function of the entity, and I
> think that is a different problem that needs to take into account the
> possibility of different kinds and degrees of consciousness.  The YD
> question makes it binary by equating consciousness with exactly the same as
> pre-doctor.  Applying that to Jason's question you would conclude that you
> cannot infer that other people are conscious because, while they are
> functionally equivalent is a 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 7:48 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List 
> wrote:




On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List
<everything-list@googlegroups.com> wrote:



On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch
<jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore
the testable implications of computationalism on
physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one
an exact computational emulation, meaning
exact functional equivalence. Then let's say we can
exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional
equivalence, then identical inputs yield identical
internal behavior (nerve activations, etc.) and outputs,
in terms of muscle movement, facial expressions, and
speech.

If we stimulate nerves in the person's back to cause
pain, and ask them both to describe the pain, both will
speak identical sentences. Both will say it hurts when
asked, and if asked to write a paragraph describing the
pain, will provide identical accounts.

Does the definition of functional equivalence mean that
any scientific objective third-person analysis or test
is doomed to fail to find any distinction in behaviors,
and thus necessarily fails in its ability to disprove
consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory
of mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious,


If the term means anything, you can know one particular
entity is conscious.


Yes, I should have added we can’t know that a particular
entity other than oneself is conscious.


but we can know that if it is conscious, then a functional
equivalent, as you describe, is also conscious.


So any entity functionally equivalent to yourself, you must
know is conscious.  But "functionally equivalent" is vague,
ambiguous, and certainly needs qualifying by environment and
other factors.  Is a dolphin functionally equivalent to me. 
Not in swimming.


Functional equivalence here means that you replace a part with a
new part that behaves in the same way. So if you replaced the
copper wires in a computer with silver wires, the silver wires
would be functionally equivalent, and you would notice no change
in using the computer. Copper and silver have different physical
properties such as conductivity, but the replacement would be
chosen so that this is not functionally relevant.


But that functional equivalence at a microscopic level is
worthless in judging what entities are conscious.    The whole
reason for bringing it up is that it provides a criterion for
recognizing consciousness at the entity level.


The thought experiment involves removing a part of the brain that 
would normally result in an obvious deficit in qualia and replacing it 
with a non-biological component that replicates its interactions with 
the rest of the brain. Remove the visual cortex, and the subject 
becomes blind, staggering around walking into things, saying "I'm 
blind, I can't see anything, why have you done this to me?" But if you 
replace it with an implant that processes input and sends output to 
the remaining neural tissue, the subject will have normal input to his 
leg muscles and his vocal cords, so he will be able to navigate his 
way around a room and will say "I can see everything normally, I feel 
just the same as before". This follows necessarily from the 
assumptions. But does it also follow that the subject will have normal 
visual qualia? If not, something very strange would be happening: he 
would be blind, but would behave normally, including his behaviour in 
communicating that everything feels normal.


I understand the "Yes doctor" experiment.  But Jason was asking about 
being able to recognize consciousness by function of the entity, and I 
think that is a different problem that needs to take into account the 
possibility of different kinds and degrees of consciousness.  The YD 
question makes it binary by equating consciousness with exactly the same 
as pre-doctor.  Applying that to Jason's question you would conclude 
that you cannot 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 12:49, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 09:15, Jason Resch  wrote:
>>
>>>
>>>
>>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
>>> wrote:
>>>


 On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:

> For the present discussion/question, I want to ignore the testable
> implications of computationalism on physical law, and instead focus on the
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact
> computational emulation, meaning exact functional equivalence. Then let's
> say we can exactly control sensory input and perfectly monitor motor
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then
> identical inputs yield identical internal behavior (nerve activations,
> etc.) and outputs, in terms of muscle movement, facial expressions, and
> speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask
> them both to describe the pain, both will speak identical sentences. Both
> will say it hurts when asked, and if asked to write a paragraph
> describing the pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific
> objective third-person analysis or test is doomed to fail to find any
> distinction in behaviors, and thus necessarily fails in its ability to
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind
> before it reaches this testing roadblock?
>

 We can’t know if a particular entity is conscious, but we can know that
 if it is conscious, then a functional equivalent, as you describe, is also
 conscious. This is the subject of David Chalmers’ paper:

 http://consc.net/papers/qualia.html

>>>
>>> Chalmers' argument is that if a different brain is not conscious, then
>>> somewhere along the way we get either suddenly disappearing or fading
>>> qualia, which I agree are philosophically distasteful.
>>>
>>> But what if someone is fine with philosophical zombies and suddenly
>>> disappearing qualia? Is there any impossibility proof for such things?
>>>
>>
>> Philosophical zombies are less problematic than partial philosophical
>> zombies. Partial philosophical zombies would render the idea of qualia
>> absurd, because it would mean that we might be completely blind, for
>> example, without realising it.
>>
>>
>> Isn't this what blindsight exemplifies?
>>
>
> Blindsight entails behaving as if you have vision but not believing that
> you have vision.
>
>
> And you don't believe you have vision because you're missing the qualia of
> seeing.
>
> Anton syndrome entails believing you have vision but not behaving as if
> you have vision.
> Being a partial zombie would entail believing you have vision and behaving
> as if you have vision, but not actually having vision.
>
>
> That would be a total zombie with respect to vision.  The person with
> blindsight is a partial zombie.  They have the function but not the qualia.
>
> As an absolute minimum, although we may not be able to test for or define
>> qualia, we should know if we have them. Take this requirement away, and
>> there is nothing left.
>>
>> Suddenly disappearing qualia are logically possible but it is difficult
>> to imagine how it could work. We would be normally conscious while our
>> neurons were being replaced, but when one special glutamate receptor in a
>> special neuron in the left parietal lobe was replaced, or when exactly
>> 35.54876% replacement of all neurons was reached, the internal lights would
>> suddenly go out.
>>
>>
>> I think this all-or-nothing is misconceived.  It's not internal cognition
>> that might vanish suddenly, it's some specific aspect of experience: There
>> are people who, thru brain injury, lose the ability to recognize
>> faces...recognition is a qualia.   Of course people's frequency range of
>> hearing fades (don't ask me how I know).  My mother, when she was 95 lost
>> color vision in one eye, but not the other.  Some people, it seems cannot
>> do higher mathematics.  So how would you know if you lost the qualia of
>> empathy for example?  Could it not just fade...i.e. become evoked less and
>> less?
>>
>
> I don't believe suddenly disappearing qualia can happen, but either this -
> leading to full zombiehood - or fading qualia - leading to partial
> zombiehood - would be a consequence of  replacement of the brain if
> behaviour 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 11:16, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>
>> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact
>>> computational emulation, meaning exact functional equivalence. Then let's
>>> say we can exactly control sensory input and perfectly monitor motor
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then
>>> identical inputs yield identical internal behavior (nerve activations,
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>> both to describe the pain, both will speak identical sentences. Both will
>>> say it hurts when asked, and if asked to write a paragraph describing the
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific
>>> objective third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before
>>> it reaches this testing roadblock?
>>>
>>
>> We can’t know if a particular entity is conscious,
>>
>>
>> If the term means anything, you can know one particular entity is
>> conscious.
>>
>
> Yes, I should have added we can’t know that a particular entity other
> than oneself is conscious.
>
>> but we can know that if it is conscious, then a functional equivalent, as
>> you describe, is also conscious.
>>
>>
>> So any entity functionally equivalent to yourself, you must know is
>> conscious.  But "functionally equivalent" is vague, ambiguous, and
>> certainly needs qualifying by environment and other factors.  Is a dolphin
>> functionally equivalent to me.  Not in swimming.
>>
>
> Functional equivalence here means that you replace a part with a new part
> that behaves in the same way. So if you replaced the copper wires in a
> computer with silver wires, the silver wires would be functionally
> equivalent, and you would notice no change in using the computer. Copper
> and silver have different physical properties such as conductivity, but the
> replacement would be chosen so that this is not functionally relevant.
>
>
> But that functional equivalence at a microscopic level is worthless in
> judging what entities are conscious. The whole reason for bringing it up
> is that it provides a criterion for recognizing consciousness at the entity
> level.
>

The thought experiment involves removing a part of the brain that would
normally result in an obvious deficit in qualia and replacing it with a
non-biological component that replicates its interactions with the rest of
the brain. Remove the visual cortex, and the subject becomes blind,
staggering around walking into things, saying "I'm blind, I can't see
anything, why have you done this to me?" But if you replace it with an
implant that processes input and sends output to the remaining neural
tissue, the subject will have normal input to his leg muscles and his vocal
cords, so he will be able to navigate his way around a room and will say "I
can see everything normally, I feel just the same as before". This follows
necessarily from the assumptions. But does it also follow that the subject
will have normal visual qualia? If not, something very strange would be
happening: he would be blind, but would behave normally, including his
behaviour in communicating that everything feels normal.
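
To see why a purely third-person test cannot get past this, here is a minimal 
sketch (Python, with entirely made-up stand-ins for the two components; it 
illustrates the logic of the scenario, not a model of a brain). The only thing 
an outside experimenter can do is feed identical inputs to the original part and 
its replacement and compare outputs, and functional equivalence guarantees by 
construction that this comparison never finds a difference:

    import random

    def biological_visual_cortex(stimulus):
        # Hypothetical stand-in for the original component: maps visual
        # input to the signals it sends on to the rest of the brain.
        return [2 * x + 1 for x in stimulus]

    def silicon_implant(stimulus):
        # Functionally equivalent replacement: chosen so that it produces
        # the same output as the original for every input.
        return [x * 2 + 1 for x in stimulus]

    def third_person_test(part_a, part_b, trials=10000, width=16):
        # Only behaviour is observable: compare outputs on identical inputs.
        for _ in range(trials):
            stimulus = [random.randint(0, 255) for _ in range(width)]
            if part_a(stimulus) != part_b(stimulus):
                return "distinguishable"
        return "indistinguishable"

    print(third_person_test(biological_visual_cortex, silicon_implant))
    # -> "indistinguishable", whatever the facts about qualia may be

Whether the implant also preserves the visual qualia is exactly the question 
such a test cannot address.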


-- 
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 6:41 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List 
> wrote:




On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:15, Jason Resch <jasonre...@gmail.com> wrote:



On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou
<stath...@gmail.com> wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch
<jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore
the testable implications of computationalism on
physical law, and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and
one an exact computational emulation, meaning
exact functional equivalence. Then let's say we can
exactly control sensory input and perfectly monitor
motor control outputs between the two brains.

Given that computationalism implies functional
equivalence, then identical inputs yield identical
internal behavior (nerve activations, etc.) and
outputs, in terms of muscle movement, facial
expressions, and speech.

If we stimulate nerves in the person's back to cause
pain, and ask them both to describe the pain, both
will speak identical sentences. Both will say it
hurts when asked, and if asked to write a paragraph
describing the pain, will provide identical accounts.

Does the definition of functional equivalence mean
that any scientific objective third-person analysis
or test is doomed to fail to find any distinction in
behaviors, and thus necessarily fails in its ability
to disprove consciousness in the functionally
equivalent robot mind?

Is computationalism as far as science can go on a
theory of mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious, but we
can know that if it is conscious, then a functional
equivalent, as you describe, is also conscious. This is
the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


Chalmers' argument is that if a different brain is not
conscious, then somewhere along the way we get either
suddenly disappearing or fading qualia, which I agree are
philosophically distasteful.

But what if someone is fine with philosophical zombies and
suddenly disappearing qualia? Is there any impossibility
proof for such things?


Philosophical zombies are less problematic than partial
philosophical zombies. Partial philosophical zombies would render
the idea of qualia absurd, because it would mean that we might be
completely blind, for example, without realising it.


Isn't this what blindsight exemplifies?


Blindsight entails behaving as if you have vision but not believing 
that you have vision.


And you don't believe you have vision because you're missing the qualia 
of seeing.


Anton syndrome entails believing you have vision but not behaving as 
if you have vision.
Being a partial zombie would entail believing you have vision and 
behaving as if you have vision, but not actually having vision.


That would be a total zombie with respect to vision.  The person with 
blindsight is a partial zombie.  They have the function but not the qualia.



As an absolute minimum, although we may not be able to test for
or define qualia, we should know if we have them. Take this
requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is
difficult to imagine how it could work. We would be normally
conscious while our neurons were being replaced, but when one
special glutamate receptor in a special neuron in the left
parietal lobe was replaced, or when exactly 35.54876% replacement
of all neurons was reached, the internal lights would suddenly go
out.


I think this all-or-nothing is misconceived.  It's not internal
cognition that might vanish suddenly, it's some specific aspect of
experience: There are people who, thru brain injury, lose the
ability to recognize faces...recognition is a qualia.   Of course
people's frequency range of hearing fades (don't ask me how I
know).  My mother, when she was 95 lost color vision in one eye,
but not the other.  Some people, it seems cannot do higher
mathematics.  So how would you know if you lost the qualia of
empathy for example?  Could it 

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 10:41, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 09:15, Jason Resch  wrote:
>
>>
>>
>> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
>> wrote:
>>
>>>
>>>
>>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>>
 For the present discussion/question, I want to ignore the testable
 implications of computationalism on physical law, and instead focus on the
 following idea:

 "How can we know if a robot is conscious?"

 Let's say there are two brains, one biological and one an exact
 computational emulation, meaning exact functional equivalence. Then let's
 say we can exactly control sensory input and perfectly monitor motor
 control outputs between the two brains.

 Given that computationalism implies functional equivalence, then
 identical inputs yield identical internal behavior (nerve activations,
 etc.) and outputs, in terms of muscle movement, facial expressions, and
 speech.

 If we stimulate nerves in the person's back to cause pain, and ask them
 both to describe the pain, both will speak identical sentences. Both will
 say it hurts when asked, and if asked to write a paragraph describing the
 pain, will provide identical accounts.

 Does the definition of functional equivalence mean that any scientific
 objective third-person analysis or test is doomed to fail to find any
 distinction in behaviors, and thus necessarily fails in its ability to
 disprove consciousness in the functionally equivalent robot mind?

 Is computationalism as far as science can go on a theory of mind before
 it reaches this testing roadblock?

>>>
>>> We can’t know if a particular entity is conscious, but we can know that
>>> if it is conscious, then a functional equivalent, as you describe, is also
>>> conscious. This is the subject of David Chalmers’ paper:
>>>
>>> http://consc.net/papers/qualia.html
>>>
>>
>> Chalmers' argument is that if a different brain is not conscious, then
>> somewhere along the way we get either suddenly disappearing or fading
>> qualia, which I agree are philosophically distasteful.
>>
>> But what if someone is fine with philosophical zombies and suddenly
>> disappearing qualia? Is there any impossibility proof for such things?
>>
>
> Philosophical zombies are less problematic than partial philosophical
> zombies. Partial philosophical zombies would render the idea of qualia
> absurd, because it would mean that we might be completely blind, for
> example, without realising it.
>
>
> Isn't this what blindsight exemplifies?
>

Blindsight entails behaving as if you have vision but not believing that
you have vision.
Anton syndrome entails believing you have vision but not behaving as if you
have vision.
Being a partial zombie would entail believing you have vision and behaving
as if you have vision, but not actually having vision.

> As an absolute minimum, although we may not be able to test for or define
> qualia, we should know if we have them. Take this requirement away, and
> there is nothing left.
>
> Suddenly disappearing qualia are logically possible but it is difficult to
> imagine how it could work. We would be normally conscious while our neurons
> were being replaced, but when one special glutamate receptor in a special
> neuron in the left parietal lobe was replaced, or when exactly 35.54876%
> replacement of all neurons was reached, the internal lights would suddenly
> go out.
>
>
> I think this all-or-nothing is misconceived.  It's not internal cognition
> that might vanish suddenly, it's some specific aspect of experience: There
> are people who, thru brain injury, lose the ability to recognize
> faces...recognition is a qualia.   Of course people's frequency range of
> hearing fades (don't ask me how I know).  My mother, when she was 95 lost
> color vision in one eye, but not the other.  Some people, it seems cannot
> do higher mathematics.  So how would you know if you lost the qualia of
> empathy for example?  Could it not just fade...i.e. become evoked less and
> less?
>

I don't believe suddenly disappearing qualia can happen, but either this -
leading to full zombiehood - or fading qualia - leading to partial
zombiehood - would be a consequence of  replacement of the brain if
behaviour could be replicated without replicating qualia.


-- 
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:58 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List 
> wrote:




On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch <jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore the
testable implications of computationalism on physical law,
and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an
exact computational emulation, meaning exact functional
equivalence. Then let's say we can exactly control sensory
input and perfectly monitor motor control outputs between the
two brains.

Given that computationalism implies functional equivalence,
then identical inputs yield identical internal behavior
(nerve activations, etc.) and outputs, in terms of muscle
movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain,
and ask them both to describe the pain, both will speak
identical sentences. Both will say it hurts when asked, and
if asked to write a paragraph describing the pain, will
provide identical accounts.

Does the definition of functional equivalence mean that any
scientific objective third-person analysis or test is doomed
to fail to find any distinction in behaviors, and thus
necessarily fails in its ability to disprove consciousness in
the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of
mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious,


If the term means anything, you can know one particular entity is
conscious.


Yes, I should have added we can’t know that a particular entity 
other than oneself is conscious.



but we can know that if it is conscious, then a functional
equivalent, as you describe, is also conscious.


So any entity functionally equivalent to yourself, you must know
is conscious.  But "functionally equivalent" is vague, ambiguous,
and certainly needs qualifying by environment and other factors. 
Is a dolphin functionally equivalent to me.  Not in swimming.


Functional equivalence here means that you replace a part with a new 
part that behaves in the same way. So if you replaced the copper wires 
in a computer with silver wires, the silver wires would be 
functionally equivalent, and you would notice no change in using the 
computer. Copper and silver have different physical properties such as 
conductivity, but the replacement would be chosen so that this is not 
functionally relevant.


But that functional equivalence at a microscopic level is worthless in 
judging what entities are conscious.    The whole reason for bringing it 
up is that it provides a criterion for recognizing consciousness at the 
entity level.


And even at the microscopic level functional equivalence is ambiguous.  
The difference in conductivity between copper and silver might not make 
any difference 99.9% of the time, but in some circumstances it might make 
a difference.  Or there might be incidental effects due to the 
difference in corrosion that would show up in 20 years but not sooner.


Brent


This is the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


--
Stathis Papaioannou




Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:45 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 09:15, Jason Resch > wrote:




On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou
<stath...@gmail.com> wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch
<jasonre...@gmail.com> wrote:

For the present discussion/question, I want to ignore the
testable implications of computationalism on physical law,
and instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an
exact computational emulation, meaning exact functional
equivalence. Then let's say we can exactly control sensory
input and perfectly monitor motor control outputs between
the two brains.

Given that computationalism implies functional
equivalence, then identical inputs yield identical
internal behavior (nerve activations, etc.) and outputs,
in terms of muscle movement, facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain,
and ask them both to describe the pain, both will speak
identical sentences. Both will say it hurts when asked,
and if asked to write a paragraph describing the pain,
will provide identical accounts.

Does the definition of functional equivalence mean that
any scientific objective third-person analysis or test is
doomed to fail to find any distinction in behaviors, and
thus necessarily fails in its ability to disprove
consciousness in the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory
of mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious, but we can
know that if it is conscious, then a functional equivalent, as
you describe, is also conscious. This is the subject of David
Chalmers’ paper:

http://consc.net/papers/qualia.html


Chalmers' argument is that if a different brain is not conscious,
then somewhere along the way we get either suddenly disappearing
or fading qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and
suddenly disappearing qualia? Is there any impossibility proof for
such things?


Philosophical zombies are less problematic than partial philosophical 
zombies. Partial philosophical zombies would render the idea of qualia 
absurd, because it would mean that we might be completely blind, 
for example, without realising it.


Isn't this what blindsight exemplifies?

As an absolute minimum, although we may not be able to test for or 
define qualia, we should know if we have them. Take this requirement 
away, and there is nothing left.


Suddenly disappearing qualia are logically possible but it is 
difficult to imagine how it could work. We would be normally conscious 
while our neurons were being replaced, but when one special glutamate 
receptor in a special neuron in the left parietal lobe was replaced, 
or when exactly 35.54876% replacement of all neurons was reached, the 
internal lights would suddenly go out.


I think this all-or-nothing view is misconceived.  It's not internal 
cognition that might vanish suddenly, it's some specific aspect of 
experience: there are people who, through brain injury, lose the ability to 
recognize faces... recognition is a quale.  Of course people's 
frequency range of hearing fades (don't ask me how I know).  My mother, 
when she was 95, lost color vision in one eye but not the other.  Some 
people, it seems, cannot do higher mathematics.  So how would you know if 
you lost the quale of empathy, for example?  Could it not just 
fade, i.e. become evoked less and less?


Brent


--
Stathis Papaioannou

Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 09:32, 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:
>
>
>
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask them
>> both to describe the pain, both will speak identical sentences. Both will
>> say it hurts when asked, and if asked to write a paragraph describing the
>> pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind before
>> it reaches this testing roadblock?
>>
>
> We can’t know if a particular entity is conscious,
>
>
> If the term means anything, you can know one particular entity is
> conscious.
>

Yes, I should have added that we can’t know that a particular entity other
than oneself is conscious.

> but we can know that if it is conscious, then a functional equivalent, as
> you describe, is also conscious.
>
>
> So any entity functionally equivalent to yourself, you must know is
> conscious.  But "functionally equivalent" is vague, ambiguous, and
> certainly needs qualifying by environment and other factors.  Is a dolphin
> functionally equivalent to me?  Not in swimming.
>

Functional equivalence here means that you replace a part with a new part
that behaves in the same way. So if you replaced the copper wires in a
computer with silver wires, the silver wires would be functionally
equivalent, and you would notice no change in using the computer. Copper
and silver have different physical properties such as conductivity, but the
replacement would be chosen so that this is not functionally relevant.
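
To make this concrete, here is a minimal sketch in Python (purely
illustrative; the wire classes, the approximate resistivity figures and
run_computer are invented for this example, not part of the original
argument). Two parts differ in their physical properties, yet behave
identically at the interface the rest of the system actually uses:

# Functional equivalence: different physical parts, identical observable
# behaviour at the interface the rest of the system depends on.

class CopperWire:
    resistivity_ohm_m = 1.68e-8      # physical property, differs between parts

    def transmit(self, signal):
        return signal                # signal arrives unchanged


class SilverWire:
    resistivity_ohm_m = 1.59e-8      # a different physical property...

    def transmit(self, signal):
        return signal                # ...but the same observable behaviour


def run_computer(wire, inputs):
    # The rest of the system only ever sees transmit().
    return [wire.transmit(s) for s in inputs]


inputs = ["load", "add", "store", "jump"]
assert run_computer(CopperWire(), inputs) == run_computer(SilverWire(), inputs)

Any test that only exercises the interface (here, transmit) returns the
same results for both parts, which is what "not functionally relevant"
means in this context.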

> This is the subject of David Chalmers’ paper:
>
> http://consc.net/papers/qualia.html
>
> --
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 09:15, Jason Resch  wrote:

>
>
> On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
> wrote:
>
>>
>>
>> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>>
>>> For the present discussion/question, I want to ignore the testable
>>> implications of computationalism on physical law, and instead focus on the
>>> following idea:
>>>
>>> "How can we know if a robot is conscious?"
>>>
>>> Let's say there are two brains, one biological and one an exact
>>> computational emulation, meaning exact functional equivalence. Then let's
>>> say we can exactly control sensory input and perfectly monitor motor
>>> control outputs between the two brains.
>>>
>>> Given that computationalism implies functional equivalence, then
>>> identical inputs yield identical internal behavior (nerve activations,
>>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>>> speech.
>>>
>>> If we stimulate nerves in the person's back to cause pain, and ask them
>>> both to describe the pain, both will speak identical sentences. Both will
>>> say it hurts when asked, and if asked to write a paragraph describing the
>>> pain, will provide identical accounts.
>>>
>>> Does the definition of functional equivalence mean that any scientific
>>> objective third-person analysis or test is doomed to fail to find any
>>> distinction in behaviors, and thus necessarily fails in its ability to
>>> disprove consciousness in the functionally equivalent robot mind?
>>>
>>> Is computationalism as far as science can go on a theory of mind before
>>> it reaches this testing roadblock?
>>>
>>
>> We can’t know if a particular entity is conscious, but we can know that
>> if it is conscious, then a functional equivalent, as you describe, is also
>> conscious. This is the subject of David Chalmers’ paper:
>>
>> http://consc.net/papers/qualia.html
>>
>
> Chalmers' argument is that if a different brain is not conscious, then
> somewhere along the way we get either suddenly disappearing or fading
> qualia, which I agree are philosophically distasteful.
>
> But what if someone is fine with philosophical zombies and suddenly
> disappearing qualia? Is there any impossibility proof for such things?
>

Philosophical zombies are less problematic than partial philosophical
zombies. Partial philosophical zombies would render the idea of qualia
absurd, because it would mean that we might be completely blind, for
example, without realising it. As an absolute minimum, although we may not
be able to test for or define qualia, we should know if we have them. Take
this requirement away, and there is nothing left.

Suddenly disappearing qualia are logically possible but it is difficult to
imagine how it could work. We would be normally conscious while our neurons
were being replaced, but when one special glutamate receptor in a special
neuron in the left parietal lobe was replaced, or when exactly 35.54876%
replacement of all neurons was reached, the internal lights would suddenly
go out.
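
A minimal sketch of why the gradual-replacement scenario leaves no
third-person trace (Python, illustrative only; the threshold "neurons"
and the numbers are invented assumptions, not a model of real neurons):
if each replacement is functionally equivalent, the outputs are identical
at every fraction of replacement, so a sudden loss of qualia at some
threshold would have no behavioural correlate.

# Replace neurons one at a time with functionally equivalent units and
# check that the brain's outputs never change at any replacement fraction.

def biological_neuron(x):
    return 1 if x > 0.5 else 0

def artificial_neuron(x):
    return 1 if x > 0.5 else 0       # functionally equivalent by construction

def brain_response(neurons, inputs):
    return [n(x) for n, x in zip(neurons, inputs)]

N = 100
inputs = [i / N for i in range(N)]
original = [biological_neuron] * N

for replaced in range(N + 1):        # 0% ... 100% replaced
    hybrid = [artificial_neuron] * replaced + [biological_neuron] * (N - replaced)
    assert brain_response(hybrid, inputs) == brain_response(original, inputs)
# No test on the outputs can detect what fraction has been replaced.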

> --
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:14 PM, Jason Resch wrote:



On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou wrote:




On Wed, 10 Jun 2020 at 03:08, Jason Resch wrote:

For the present discussion/question, I want to ignore the
testable implications of computationalism on physical law, and
instead focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an
exact computational emulation, meaning exact functional
equivalence. Then let's say we can exactly control sensory
input and perfectly monitor motor control outputs between the
two brains.

Given that computationalism implies functional equivalence,
then identical inputs yield identical internal behavior (nerve
activations, etc.) and outputs, in terms of muscle movement,
facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and
ask them both to describe the pain, both will speak identical
sentences. Both will say it hurts when asked, and if asked to
write a paragraph describing the pain, will provide identical
accounts.

Does the definition of functional equivalence mean that any
scientific objective third-person analysis or test is doomed
to fail to find any distinction in behaviors, and thus
necessarily fails in its ability to disprove consciousness in
the functionally equivalent robot mind?

Is computationalism as far as science can go on a theory of
mind before it reaches this testing roadblock?


We can’t know if a particular entity is conscious, but we can know
that if it is conscious, then a functional equivalent, as you
describe, is also conscious. This is the subject of David
Chalmers’ paper:

http://consc.net/papers/qualia.html


Chalmers' argument is that if a different brain is not conscious, then 
somewhere along the way we get either suddenly disappearing or fading 
qualia, which I agree are philosophically distasteful.


But what if someone is fine with philosophical zombies and suddenly 
disappearing qualia? Is there any impossibility proof for such things?


There's an implicit assumption that "qualia" are well-defined things.  I 
think it very plausible that qualia differ depending on sensors, values, 
and memory.  So we may create AI that has something like qualia, but 
which are different from our qualia, just as people with synesthesia have 
somewhat different qualia.


Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List



On 6/9/2020 4:02 PM, Stathis Papaioannou wrote:



On Wed, 10 Jun 2020 at 03:08, Jason Resch wrote:


For the present discussion/question, I want to ignore the testable
implications of computationalism on physical law, and instead
focus on the following idea:

"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact
computational emulation, meaning exact functional equivalence.
Then let's say we can exactly control sensory input and perfectly
monitor motor control outputs between the two brains.

Given that computationalism implies functional equivalence, then
identical inputs yield identical internal behavior (nerve
activations, etc.) and outputs, in terms of muscle movement,
facial expressions, and speech.

If we stimulate nerves in the person's back to cause pain, and ask
them both to describe the pain, both will speak identical
sentences. Both will say it hurts when asked, and if asked to
write a paragraph describing the pain, will provide identical
accounts.

Does the definition of functional equivalence mean that any
scientific objective third-person analysis or test is doomed to
fail to find any distinction in behaviors, and thus necessarily
fails in its ability to disprove consciousness in the functionally
equivalent robot mind?

Is computationalism as far as science can go on a theory of mind
before it reaches this testing roadblock?


We can’t know if a particular entity is conscious,


If the term means anything, you can know one particular entity is conscious.

but we can know that if it is conscious, then a functional equivalent, 
as you describe, is also conscious.


So any entity functionally equivalent to yourself, you must know is 
conscious.  But "functionally equivalent" is vague, ambiguous, and 
certainly needs qualifying by environment and other factors.  Is a 
dolphin functionally equivalent to me?  Not in swimming.


Brent


This is the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


--
Stathis Papaioannou


Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Jason Resch
On Tue, Jun 9, 2020 at 6:03 PM Stathis Papaioannou 
wrote:

>
>
> On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:
>
>> For the present discussion/question, I want to ignore the testable
>> implications of computationalism on physical law, and instead focus on the
>> following idea:
>>
>> "How can we know if a robot is conscious?"
>>
>> Let's say there are two brains, one biological and one an exact
>> computational emulation, meaning exact functional equivalence. Then let's
>> say we can exactly control sensory input and perfectly monitor motor
>> control outputs between the two brains.
>>
>> Given that computationalism implies functional equivalence, then
>> identical inputs yield identical internal behavior (nerve activations,
>> etc.) and outputs, in terms of muscle movement, facial expressions, and
>> speech.
>>
>> If we stimulate nerves in the person's back to cause pain, and ask them
>> both to describe the pain, both will speak identical sentences. Both will
>> say it hurts when asked, and if asked to write a paragraph describing the
>> pain, will provide identical accounts.
>>
>> Does the definition of functional equivalence mean that any scientific
>> objective third-person analysis or test is doomed to fail to find any
>> distinction in behaviors, and thus necessarily fails in its ability to
>> disprove consciousness in the functionally equivalent robot mind?
>>
>> Is computationalism as far as science can go on a theory of mind before
>> it reaches this testing roadblock?
>>
>
> We can’t know if a particular entity is conscious, but we can know that if
> it is conscious, then a functional equivalent, as you describe, is also
> conscious. This is the subject of David Chalmers’ paper:
>
> http://consc.net/papers/qualia.html
>

Chalmers' argument is that if a different brain is not conscious, then
somewhere along the way we get either suddenly disappearing or fading
qualia, which I agree are philosophically distasteful.

But what if someone is fine with philosophical zombies and suddenly
disappearing qualia? Is there any impossibility proof for such things?

Jason



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Philip Thrift


On Tuesday, June 9, 2020 at 2:15:40 PM UTC-5, Brent wrote:
>
>
>
> On 6/9/2020 10:08 AM, Jason Resch wrote: 
> > For the present discussion/question, I want to ignore the testable 
> > implications of computationalism on physical law, and instead focus on 
> > the following idea: 
> > 
> > "How can we know if a robot is conscious?" 
> > 
> > Let's say there are two brains, one biological and one an exact 
> > computational emulation, meaning exact functional equivalence. Then 
> > let's say we can exactly control sensory input and perfectly monitor 
> > motor control outputs between the two brains. 
> > 
> > Given that computationalism implies functional equivalence, then 
> > identical inputs yield identical internal behavior (nerve activations, 
> > etc.) and outputs, in terms of muscle movement, facial expressions, 
> > and speech. 
> > 
> > If we stimulate nerves in the person's back to cause pain, and ask 
> > them both to describe the pain, both will speak identical sentences. 
> > Both will say it hurts when asked, and if asked to write a paragraph 
> > describing the pain, will provide identical accounts. 
> > 
> > Does the definition of functional equivalence mean that any scientific 
> > objective third-person analysis or test is doomed to fail to find any 
> > distinction in behaviors, and thus necessarily fails in its ability to 
> > disprove consciousness in the functionally equivalent robot mind? 
> > 
> > Is computationalism as far as science can go on a theory of mind 
> > before it reaches this testing roadblock? 
>
> If it acts conscious, then it is conscious. 
>
> But I think science/technology can go a lot further.  I can look at the 
> information flow, where is memory and how is it formed and how is it 
> accessed and does this matter or not in the action of the entity.  It 
> can look at the decision processes.  Are there separate competing 
> modules (as Dennett hypothesizes) or is there a global workspace...and 
> again does it make a difference.  What does it take to make the entity 
> act happy, sad, thoughtful, bored, etc. 
>
> Brent 
>



I doubt anyone in consciousness research believes this. Including Dennett 
today.

@philipthrift 



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread Stathis Papaioannou
On Wed, 10 Jun 2020 at 03:08, Jason Resch  wrote:

> For the present discussion/question, I want to ignore the testable
> implications of computationalism on physical law, and instead focus on the
> following idea:
>
> "How can we know if a robot is conscious?"
>
> Let's say there are two brains, one biological and one an exact
> computational emulation, meaning exact functional equivalence. Then let's
> say we can exactly control sensory input and perfectly monitor motor
> control outputs between the two brains.
>
> Given that computationalism implies functional equivalence, then identical
> inputs yield identical internal behavior (nerve activations, etc.) and
> outputs, in terms of muscle movement, facial expressions, and speech.
>
> If we stimulate nerves in the person's back to cause pain, and ask them
> both to describe the pain, both will speak identical sentences. Both will
> say it hurts when asked, and if asked to write a paragraph describing the
> pain, will provide identical accounts.
>
> Does the definition of functional equivalence mean that any scientific
> objective third-person analysis or test is doomed to fail to find any
> distinction in behaviors, and thus necessarily fails in its ability to
> disprove consciousness in the functionally equivalent robot mind?
>
> Is computationalism as far as science can go on a theory of mind before it
> reaches this testing roadblock?
>

We can’t know if a particular entity is conscious, but we can know that if
it is conscious, then a functional equivalent, as you describe, is also
conscious. This is the subject of David Chalmers’ paper:

http://consc.net/papers/qualia.html


-- 
Stathis Papaioannou



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread 'Brent Meeker' via Everything List




On 6/9/2020 10:08 AM, Jason Resch wrote:
For the present discussion/question, I want to ignore the testable 
implications of computationalism on physical law, and instead focus on 
the following idea:


"How can we know if a robot is conscious?"

Let's say there are two brains, one biological and one an exact 
computational emulation, meaning exact functional equivalence. Then 
let's say we can exactly control sensory input and perfectly monitor 
motor control outputs between the two brains.


Given that computationalism implies functional equivalence, then 
identical inputs yield identical internal behavior (nerve activations, 
etc.) and outputs, in terms of muscle movement, facial expressions, 
and speech.


If we stimulate nerves in the person's back to cause pain, and ask 
them both to describe the pain, both will speak identical sentences. 
Both will say it hurts when asked, and if asked to write a paragraph 
describing the pain, will provide identical accounts.


Does the definition of functional equivalence mean that any scientific 
objective third-person analysis or test is doomed to fail to find any 
distinction in behaviors, and thus necessarily fails in its ability to 
disprove consciousness in the functionally equivalent robot mind?


Is computationalism as far as science can go on a theory of mind 
before it reaches this testing roadblock?


If it acts conscious, then it is conscious.

But I think science/technology can go a lot further.  It can look at the 
information flow: where memory is, how it is formed, how it is accessed, 
and whether this matters in the action of the entity.  It can look at the 
decision processes: are there separate competing modules (as Dennett 
hypothesizes) or is there a global workspace, and again, does it make a 
difference?  What does it take to make the entity act happy, sad, 
thoughtful, bored, etc.?
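
As a toy sketch of the architectural distinction being probed (Python,
illustrative only; Module, the salience rule and the broadcast step are
invented for this example and are not Dennett's or Baars's actual
models): separate modules compete, and in a global-workspace style
design the winning content is broadcast back to every module.

# Toy "global workspace" cycle: modules propose content, the most salient
# proposal wins, and the winner is broadcast back to all modules.

class Module:
    def __init__(self, name):
        self.name = name
        self.last_broadcast = None

    def propose(self, stimulus):
        # Salience is faked as the number of characters the module "cares" about.
        salience = sum(1 for c in stimulus if c in self.name)
        return salience, f"{self.name} noticed '{stimulus}'"

    def receive(self, content):
        self.last_broadcast = content    # globally available on the next cycle


def workspace_cycle(modules, stimulus):
    proposals = [m.propose(stimulus) for m in modules]
    _, winner = max(proposals)           # competition between modules
    for m in modules:
        m.receive(winner)                # broadcast: the "global workspace"
    return winner


modules = [Module("vision"), Module("audition"), Module("planning")]
print(workspace_cycle(modules, "a loud noise on the left"))

One empirical question in the spirit of the paragraph above is whether
removing the broadcast step changes the entity's behaviour at all.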


Brent



Re: Is functionalism/computationalism unfalsifiable?

2020-06-09 Thread John Clark
On Tue, Jun 9, 2020 at 1:08 PM Jason Resch  wrote:

> How can we know if a robot is conscious?


The exact same way we know that one of our fellow human beings is conscious
when he's not sleeping or under anesthesia or dead.

John K Clark
