Re: What falsifiability tests has computationalism passed?

2018-02-19 Thread Telmo Menezes
Also,

>> The strong AI thesis is that such machine would be conscious
>
>
> Where in the world did you get that idea? The term "strong AI thesis" was
> invented by working scientists

AFAIK, the term "Strong AI" was introduced by John Searle, a Professor
of Philosophy.

I am not sure what "working scientists" are, or how they differ from
"non-working scientists", but I do suspect that Searle is not who John had
in mind.

> and they have no use for the C word because there is nothing a scientist
> can do with consciousness; scientists can only work with behavior.

Good boys.

Telmo.



Re: What falsifiability tests has computationalism passed?

2018-02-18 Thread Bruno Marchal

> On 1 Jan 2018, at 18:31, John Clark  wrote:
> 
> On Wed, Dec 27, 2017 at 3:44 PM, Bruno Marchal wrote:
> 
> > The strong AI thesis is that such a machine would be conscious
> 
> Where in the world did you get that idea? The term "strong AI thesis" was 
> invented by working scientists and they have no use for the C word because 
> there is nothing a scientist can do with consciousness; scientists can 
> only work with behavior. The strong AI thesis says a computer can perceive, 
> learn, do mathematics and behave in an intelligent way that is 
> (sometimes) consistent, just as humans do.


Apologies for discovering this mail late. You are simply incorrect here. The 
“Strong AI” thesis is a thesis in the philosophy of mind. The “strong” refers 
to consciousness.



> 
> http://planetmath.org/strongaithesis
> 
> > computationalism is the even stronger assumption that "I am machine" and 
> > that I would survive in the clinical usual mundane 1p sense
> 
> And in their work real AI scientists don't use personal pronouns with no 
> clear referent, nor silly homemade terminology. 








> 
> > no digital machine can ever determine which machines she is, nor which 
> > computations support her in arithmetic.
> 
> I have no idea what that means and I don't care, I just want to know one 
> thing: can you do better? Do you know "which machines you are, or which 
> computations support you in arithmetic"?


Of course not. The “many-histories” version of QM without collapse confirms 
this. The reason the electron interferes with itself is that you did not look 
where it passed, and so your mind state is realised in the two histories at 
once. The mathematical “first person indeterminacy” (where the pronouns get 
the eight “Theaetetical” nuances enforced by incompleteness, by the way) is 
based on this: we don’t know our own boxes “[]”, but we can trust, or not, 
our doctor.





> 
> > "I am a machine", in the strong computationalist sense, entails that 
> > neither the physical reality nor the bio-psycho-theo-logical reality 
> > can be 100% computable.
> 
> Make up your mind! First you say "Actually, computationalism implies it", 
> then you say it doesn’t.

“I am a machine” entails, for reasons too long to explain here (they require 
some familiarity with step 3, and with mathematical logic), that everything 
which is not me is not a machine. You can get the gist if you can go from 
step 3 to step 7. The quantum emerges already at that intuitive level, once 
you understand that you cannot attach “your apparent computation” to any 
particular computation in the arithmetical reality. 

The idea that the universe is a machine is inconsistent. If the universe is a 
machine, then I am a machine (not obvious, note), but if I am a machine, then 
the universe is not a machine (by UDA). So “the universe is a machine” entails 
that the universe is not a machine, making the idea that the universe is a 
machine inconsistent no matter what. (The universe, or God, or call it what 
you want.) 

Bruno


> 
> John K Clark
> 
>  
> 


Re: What falsifiability tests has computationalism passed?

2018-02-18 Thread Bruno Marchal

> On 28 Dec 2017, at 17:09, John Clark  wrote:
> 
> 
> On Thu, Dec 28, 2017 at 9:35 AM, Telmo Menezes wrote:
> 
> > If you read your own email, you will see that the definition that you
> > give is not the same as the ones you quote.
> 
> I said: 
> 
> "Computationalism is the idea that the brain is an information 
> processing system and that a computer can perform all the complex 
> behaviors that would be called intelligent if it were done by a human"
> 
> The Wikipedia article I quoted said:
> 
> "A computational theory of mind names a view that the human mind or the 
> human brain (or both) is an information processing system and that thinking 
> is a form of computing."
> 
> So what's the problem?
> 
> > You are in fact alluding to the weak AI thesis, which is about 
> > behavior, not mind.
> 
> I type "weak AI thesis" into Google and this is the first thing I get:
> 
> "Weak AI thesis. Weak AI (artificial intelligence) thesis: A digital 
> computer is a powerful tool for studying intelligence and developing useful 
> technology, and it enables us to formulate and test hypotheses in a more 
> rigorous and precise fashion."
> 
> I type "strong AI thesis" into Google and this is the first thing I get:
> 
> "Strong AI (artificial intelligence) thesis: the Mind is assumed, or 
> postulated, to be a consistent algorithm, and therefore if properly 
> programmed, a digital computer can, in principle, mimic the mind, provided 
> the basic assumption about the Mind is correct."
> 
> Mind must be about behavior or it has no use in science.
> 
> > Now, it could be that intelligent behavior implies mind, but as you 
> > yourself argue, we don't know that.
> 
> We do know it if the definition of mind is based on behavior, especially 
> intelligent behavior, and that is the only definition that has scientific 
> value. If you bring consciousness into the mix then there is only one mind 
> that is known to exist or will ever be known to exist, and "mind" would 
> become a word with no scientific value. 
> 
> > If you are not interested in the first person / third person distinction, 
> > you are wasting your time with computationalism.
> 
> EVERYBODY is interested in the first person / third person distinction and 
> NOBODY has the slightest difficulty making that distinction, but they don't 
> do it by computationalism, they do it by direct experience.


Yes, and it is here that indexical computationalism asks a tremendously 
important question: if the 1p/3p distinction is so easy, what direct experience 
do you expect when you duplicate yourself into two identical rooms, marked 1 
and 0, repetitively? What do you expect your life to be if that duplication is 
done once a day? I assume the copies never interact after the experiences.

I claim that almost all rational indexical computationalists (not unlike the 
Löbian machine) converge on an answer something like: it changes nothing in my 
life, except for the random one or zero I get in that room each day.
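
A minimal simulation of that protocol (my own illustrative sketch in Python, 
not anything from Bruno; all names are hypothetical). From the 3p view the 
duplication is deterministic and every diary exists; any single 1p diary, 
followed from the inside, is indistinguishable from a random bitstring:

    import random

    def third_person_view(days):
        # 3p: after n duplications ALL 2**n diaries exist, one per
        # sequence of rooms; nothing here is random.
        diaries = ['']
        for _ in range(days):
            diaries = [d + room for d in diaries for room in '01']
        return diaries

    def first_person_diary(days):
        # 1p: following one copy through the branching is equivalent to
        # drawing a uniform random bit each day.
        return ''.join(random.choice('01') for _ in range(days))

    print(sorted(third_person_view(3)))  # all 8 histories, fully predictable
    print(first_person_diary(3))         # e.g. '101': unpredictable from inside

The point of the sketch is only that the same deterministic 3p process yields 
an incompressible-looking 1p record, which is the "random one or zero" above.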

Then the interesting “invariance of the first person account/consciousness” 
results are given in the further steps.

Bruno



> 
> John K Clark 
> 
> 
> 
> 


Re: What falsifiability tests has computationalism passed?

2018-01-18 Thread Bruno Marchal

> On 17 Jan 2018, at 23:10, David Nyman  wrote:
> 
> On 17 January 2018 at 15:41, Bruno Marchal wrote:
> 
>> On 15 Jan 2018, at 16:43, David Nyman wrote:
>> 
>> On 15 January 2018 at 13:24, Bruno Marchal wrote:
>> Hi David,
>> 
>> 
>> You raise interesting questions, but they are not so easy to answer, and 
>> some of your precise points prevent me from answering, as they do not fit 
>> the precision made possible by computer science, or at least not yet.
>> 

>>> 
>>> G* proves []p <-> ([]p & p)
>>> G does not prove []p <-> ([]p & p).
>>> 
>>> ​One might say that the machine itself (G) is both restricted to 
>>> 'knowledge' of what it is in the first place capable of believing and, by 
>>> the same token, incapable 'in person' of doubting.
>> 
>> Here I am not sure. Knowledge is given by []p & p, that is, by S4Grz. G is 
>> involved only with the 3p-self, and G* is the 3p-self seen by “god”. 
>> Only the 1p self is unable to doubt. G is the scientist, the doubter, the 
>> modest Löbian reasoner. But I can make sense of it, if this was the reason 
>> you put “knowledge” and “in person” in quotes? Yes, that makes sense.
>> 
>> Yes, that's right. I wanted to reiterate the conjunction or intersection 
>> of (3p) belief and (1p) epistemic indubitability as the criterion of 
>> self-knowledge (aka introspection); therefore by extension, of 
>> consciousness. In my remarks to Brent, I described consciousness as the 
>> first-person apprehension of its own states by a computationally-defined 
>> 'mental agent', implying epistemic truths *about itself*, and more 
>> broadly about the boundaries of its own physical and temporal situation, 
>> such truths being by the same token not directly communicable.
>> 
>> It's no simple matter to 'scale up' imaginatively, from inferences with 
>> respect to formal propositional logic, to the possible relation between 
>> brains, bodies and their generalised environments. The best I can do (as 
>> I've been suggesting to Brent) is the conception of a mental agent in terms 
>> of computational complexes, such that these complexes carry or track the 
>> relevant dispositions and relations that effectively construct the agent's 
>> 'physical constitution'. In turn such 'dispositional' computations, *at the 
>> relevant substitution level*, emulate propositional or intentional 
>> 'attitudes' (aka beliefs) whose epistemic entailments constitute a 
>> categorically distinct, first-personal knowledge, or conscious apprehension, 
>> of the agent's material, concrete or substantive 'world'.
>> 
>> It might seem puzzling (and indeed it should) why any such conjunction of 
>> physical action,
> 
> (Again, you say something more related to []p & <>t & p than to []p & p, but 
> that is OK; if not, I would not understand the mention of “physical”, unless 
> you are already restricting the atomic propositions of the logic to the 
> semi-computable or semi-decidable (sigma_1) ones. 
> Sorry for such details.)
> 
> 
>> which seems to proceed of its own accord and with its own 'causality', with 
>> what might therefore seem to  be merely 'epiphenomenal' or adventitious 
>> first-person knowledge, should be the case. Mere appeal to 'evolutionary' 
>> explanations won't really do, as on closer inspection such accounts rely on 
>> purely third-personal processual logic, not its putative first-personal 
>> counterpart.
> 
> OK. And I have said things going along this line. Yet, this might not be 
> entirely true. I do think that the richness of the human consciousness is 
> related to the fact that we share a very long (sheaf, "diffracted beam") of 
> computations. It is a long computation, and it is deep: it cannot be 
> compressed locally, which gives us the impression that there is some 
> origin/beginning of the human story. That is a 3p feature which adds to the 
> first person impression.
> 
> I agree. I didn't elaborate in the interests of brevity. Nevertheless, 
> the evolutionary 'selection' argument must, necessarily, rest ultimately on 
> extrinsic or 3p behaviour (again at the appropriate 'substitution' level).

Yes, the theory of evolution is entirely based on (digital) mechanism, which is, 
like all scientific theories, a 3p theory. But there is a slight difficulty due 
to the fact that the “physical” will appear to be a 1p plural mode. Yet, it is 
locally (in each duplicated population of individuals) conceived as a 3p 
notion. In fact “3p-physicalness” is local 1p plural, even the quantum waves. 



> And on physical-reductionist assumptions, all such behaviour must necessarily 
> be a proxy for 'fundamental physical law'. Hence my movie/TV analogy was 
> intended to suggest that the existence of a consistent 'mental agency' 
> implies the 'epistemic selection' of an (at least) equally consistent and 
> 

Re: What falsifiability tests has computationalism passed?

2018-01-17 Thread David Nyman
On 18 Jan 2018 00:42, "Brent Meeker"  wrote:



On 1/17/2018 2:10 PM, David Nyman wrote:

I agree. I didn't elaborate in the interests of brevity. Nevertheless,
the evolutionary 'selection' argument must, necessarily, rest ultimately on
extrinsic or 3p behaviour (again at the appropriate 'substitution' level).
And on physical-reductionist assumptions, all such behaviour must
necessarily be a proxy for 'fundamental physical law'. Hence my movie/TV
analogy was intended to suggest that the existence of a consistent 'mental
agency' implies the 'epistemic selection' of an (at least) equally
consistent and tightly-constrained 'physical' constitution, depending in
turn on the (undoubtedly long and deep) evolution of a more generalised
environment, based on that foundational 'physical lawfulness'.

This would be the highly non-compressible story both of how the TV and its
viewer came to exist 'physically' and how the viewer came to be
experiencing a movie by means of the complex epistemic relation between its
brain and the TV. For brevity, one can condense this into a purely 3p story
of an evolutionary process that comes about in precisely this way, but as
Tallis crucially points out, we must not take our *experiencing* of
'precisely this way' for granted, as experience, or appearance, is not a 3p
category. Hence in a (logical) sense, the epistemic 'selection' of its own
physical *appearance* would have been a necessary supplementary (1p)
assumption even in the case that (assuming comp, counterfactually) mental
agency could have been shown to supervene uniquely on some
'pre-ordained', non-quantised 3p physics.

David


I don't think this 3p/1p distinction is as clear as you and Bruno assume.
As I tried to make clear in my axioms of CTM, in your view that only what
is really, really fundamental exists, one must say that only 1p
experiences/thoughts exist.  As Descartes and Russell have pointed out, all
the "3p" stuff, including the "person" to whom the experience is attributed,
are inferences.  So 3p whatever must be emergent or derivative from the 1p
and hence part of the 1p.


Actually Brent, speaking personally, it's not so much what I 'think' in any
cut-and-dried sense, but rather what I'm progressively trying to clarify -
at least for myself - in the form of this and other recent threads. On
that basis, I agree that, on pain of incoherence, we are forced to begin
with 1p for what counts as real. Perhaps this is, at least in part, what
you've been getting at when you've said that epistemology precedes
ontology. What remains is some sort of explanation for the *appearance* of
such a reality; in modern times, such explanations are customarily
expressed in mathematical form. I think I can be explicit about the 1p/3p
distinction in these terms. What I've been trying to be clear about is the
distinctness of 1p as an *epistemic* category, which we can call knowledge,
thought, or consciousness (the words are not terribly important as long as
the categorical distinction is clear). As such, it is always *about*
something and *on behalf of* someone. That someone might be termed a
self-referential 'mental agent' (again, the name isn't of importance),
initially defined in terms of basic notions of information and computation,
in turn expressible by means of the combinatorial elements of what is
essentially offered as a fundamental language or calculus of thought. Then
the *about* is the agent's 'introspection' of its own epistemic states
(which appear to it, in part, as an 'extraspection' of its spatial and
temporal situation). The agent's epistemic states are 1p and by definition
real; what is offered in explanation of those states is 3p and may be
accounted real by reason of *explanatory power* in accounting for the
former. AFAICS they should be counted none the less real for that.

For the agent to possess the necessary 'physics', assuming comp (always
motivated by CTM), the agent is then conceived as participating in
complexes of computation (which as Bruno says are likely to be
correspondingly long and deep) that both track the disposition and
evolution of its own 'physical' constitution and generalised environment,
and simultaneously emulate (3p)  'dispositional attitudes' that correspond
with its (1p) epistemic states. This 'coincidence' of physical
dispositions, with the corresponding 'attitudinal' computations and their
epistemic entailments, might well strike us as remarkably adventitious
(often attracting the dismissive epithet of 'epiphenomenal') since the
entire (3p) 'dispositional' history, when examined at any given
'substitution level', seems fully capable of operating independently of
whatever 1p story it might be taken to imply. This is what makes the
'epistemic reversal' indispensable to the intelligibility of what is being
proposed. Hence my TV/movie analogy in which, in the relevant sense, it
might well be more intelligible to propose that the agent's experience of
the 

Re: What falsifiability tests has computationalism passed?

2018-01-17 Thread Brent Meeker



On 1/17/2018 2:10 PM, David Nyman wrote:
​I agree. I didn't elaborate in the interests of being short, 
Nevertheless, the evolutionary 'selection' argument must, necessarily, 
rest ultimately on extrinsic or 3p behaviour (again at the appropriate 
'substitution' level). And on physical-reductionist assumptions, all 
such behaviour must necessarily be a proxy for 'fundamental physical 
law'. Hence my movie/TV analogy was intended to suggest that the 
existence of a consistent 'mental agency' implies the 'epistemic 
selection' of an (at least) equally consistent and tightly-constrained 
'physical' constitution, depending in turn on the (undoubtedly long 
and deep) evolution of a more generalised environment, based on that 
foundational 'physical lawfulness'.


This would be the highly non-compressible story both of how the TV and 
its viewer came to exist 'physically' and how the viewer came to be 
experiencing a movie by means of the complex epistemic relation 
between its brain and the TV. For brevity, one can condense this into 
a purely 3p story of an evolutionary process that comes about in 
precisely this way, but as Tallis crucially points out, we must not 
take our *experiencing* of 'precisely this way' for granted, as 
experience, or appearance, is not a 3p category. Hence in a (logical) 
sense, the epistemic 'selection' of its own physical *appearance* 
would have been a necessary supplementary (1p) assumption even in the 
case that (assuming comp, counterfactually) mental agency could have 
been shown to supervene uniquely on some 'pre-ordained', 
non-quantised 3p physics.


David


I don't think this 3p/1p distinction is as clear as you and Bruno 
assume.  As I tried to make clear in my axioms of CTM, in your view that 
only what is really, really fundamental exists, one must say that only 1p 
experiences/thoughts exist.  As Descartes and Russell have pointed out, 
all the "3p" stuff, including the "person" to whom the experience is 
attributed, are inferences.  So 3p whatever must be emergent or 
derivative from the 1p and hence part of the 1p.


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-15 Thread Bruno Marchal

> On 14 Jan 2018, at 21:30, Brent Meeker  wrote:
> 
> 
> 
> On 1/14/2018 10:00 AM, Bruno Marchal wrote:
>> 
>>> On 12 Jan 2018, at 01:36, Brent Meeker wrote:
>>> 
>>> 
>>> 
>>> On 1/11/2018 4:11 AM, David Nyman wrote:
 
 
 On 11 Jan 2018 04:02, "Brent Meeker" wrote:
 
 
 On 1/10/2018 6:56 PM, David Nyman wrote:
> 
> 
> On 11 Jan 2018 02:34, "Brent Meeker" wrote:
> 
> 
> On 1/10/2018 6:11 PM, David Nyman wrote:
>> If you read the rest of Tallis's piece you'll see that he criticises the 
>> characterisation of the physical environment as encoding 'information' 
>> independent of interpretation. This objection can be dealt with by the 
>> reversal,
> 
> Can it?  Isn't it just assumed that the computational relations, or 
> number relations, encode information?  That was my objection that the MGA 
> was missing the necessity of an environment for its computation to be 
> "about".  Bruno has generally agreed with this and said it just means 
> that the environment (i.e. physics) is part of what is realized in the 
> computations of the UD.  But notice that this doesn't answer a Tallis 
> like objection that "computation is nothing like experience" and 
> "information is nothing like environment".
> 
> But the argument implies that the epistemic entailments of computation,
 
 That a computation has epistemic entailments is an assumption that it's a 
 computation about something.  The argument assumes that so far as I can 
 see.
 
 I think you're right in part, to the extent that Theaetetus's criterion of 
 knowledge, which includes the assumption of the (tautological) truth of 
 the belief, is indeed an explicit axiom.
>>> 
>>> Aren't you jumping over the question?  If I have a set of diophantine 
>>> equations that instantiate a universal computer and they have an integer 
>>> that is a solution, that's a computation.  But why is it "about anything"?  
>>> which parts of this process constitute knowledge or belief?  What 
>>> proposition is true?
>>> 
 But as is the case with all hypotheses, the burden is then to persuade 
 that adopting this axiom
>>> 
>>> But what exactly is the axiom?  Saying "Yes" to the doctor is the axiom 
>>> that a replacement brain which instantiates all the same input-output 
>>> functions will preserve one's internal narrative and memories with no 
>>> significant differences.  That is identical to "Philosophical zombies are 
>>> impossible."  Which means that intelligent behavior entails consciousness.  
>>> Which means consciousness can be studied by third persons.
>>> 
 is a reasonable step towards shedding some light on the problem we've set 
 out to address. And one of the defining characteristics of beliefs of the 
 requisite sort is indeed their indubitability, at least as a first 
 approximation. IOW, each sentient agent, willy-nilly, is irrevocably bound 
 (the 'bet' on a reality) to the primary veridicality of phenomena to which 
 it is thereafter both epistemically and procedurally committed.
>>> 
>>> Since "bound" and "committed" are synonyms...I think that was a tautology.  
>>> But I'm not sure about the function of "willy-nilly", "irrevocably", and 
>>> "primary"  or what would be an example of a belief not of the "requisite 
>>> sort"?
>>> 
 
 And again, the point of studying the self-referential logics in this 
 regard is to provide the kernel of a model of *aboutness* that could 
 indeed be understood as  'reaching out', in Tallis's sense, towards such a 
 world. It constructs, as it were, a space for the relation of the agent 
 and its phenomenal world that could begin to be seen as possessing the 
 necessary epistemic and procedural dimensionality, which is arguably what 
 is lacking in the construction of a 'world' in strictly third personal 
 terms. 
>>> 
>>> I think the construction goes the other way and a straightforward 
>>> presentation of CTE (computational theory of everything) is:
>>> 
>>> 1. Arithmetic exists and instantiates all possible computations via 
>>> relations implicit in diophantine equations.
>>> 2. Conscious thoughts are computations
>> 
>> Conscious thoughts are thought by a conscious (first-person).
> 
> An inference.  Not an axiom.


My point was that if taken literally, we cannot equate conscious thought (a 1p 
notion) with “computation” (a 3p notion).



> 
>> A conscious person is not a computation (it is a universal machine with some 
>> cognitive abilities).
> 
> Did I say otherwise?


Yes, by seemingly equating “conscious thought” (an attribute of a person) and 
computations. You can say that a person wants to go to the theatre, but you 

Re: What falsifiability tests has computationalism passed?

2018-01-15 Thread Bruno Marchal

> On 14 Jan 2018, at 20:40, Brent Meeker  wrote:
> 
> 
> 
> On 1/14/2018 9:01 AM, Bruno Marchal wrote:
 
 https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves
>>> 
>>> I understand those criticisms of Searle and they may be right.  But note 
>>> that arithmetic and computation are nothing like experience either and all 
>>> the same criticisms apply to CTM;
>> 
>> 
Not really. At first sight it looks like IF we can associate a 
consciousness to a person supported by a computation, then we can certainly 
(even more so, you and Peter Jones would say) associate that consciousness 
with a material implementation of that computation.
> 
> And we can test that theory by messing with the material implementation and 
> observing the effect on mentation...and the answer is...It's confirmed!

Yes, but this confirms both mechanism and materialism. Then mechanism predicts 
the appearances of infinitely many computations below our substitution level 
and that a quantum logic appears there. That confirms mechanism *and* the 
immaterial and non-physicalist consequences.



> 
>> 
>> But that is not true. And perhaps we need to be more cautious, and repeat 
>> again that in no case (with or without matter) is a consciousness 
>> associated with, still less identical to, a computation.
>> 
>> Consciousness is a first person attribute. It is a mode of belief, and 
>> actually a mode of belief which intersects with truth.
>> 
>> Consciousness is an instinctive/logical belief in a reality formally 
>> connected to … some reality, or “model” of oneself.
>> 
> 
> That is poetic, but is it empirically true?  I don't think consciousness is a 
> "mode of belief", unless you drastically stretch meaning of "belief".  And 
> what does it mean "to intersect with truth"...if I generate propositions at 
> random they will, occasionally "intersect with truth".  I think instinctive 
> belief in reality evolved long before consciousness. 

I think you are using “consciousness” in its “Löbian” higher-order sense. It is 
simpler to ascribe consciousness at the non-Löbian level. I am even open to the 
idea that Löbianity is the first stage of the fall of the soul, but I am not 
quite sure either.



> Part of the problem is that "consciousness" is thrown around as though its 
> meaning is obvious

Higher-order consciousness is eventually what gives meaning to meaning, but it 
is plausibly already there in low-level organisms, in a non-reflexive mode, 
at least if I am right to associate some dissociative form of consciousness 
with any universal machine. The worms, or even the amoebas, already have a 
notion of good and bad, of like and dislike.




> and no distinction is made between awareness, self-awareness, 
> inner-narrative, social-awareness, etc.


?

That is what all this is about: the difference between the self-narrative 
account (G), the truth about that (G*), the inner awareness (S4Grz1), the 
first-person plural (Bp & Dt), which might be the social awareness, the inner 
sensibility, etc. All have an utterly clear arithmetical interpretation, 
which guarantees consistency, and makes a lot of sense when assuming 
computationalism. That leads to testable consequences.
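
For reference, a compact tabulation of those modes as they recur in this 
thread (my summary; the names "Z" and "X" for the last two logics follow 
Bruno's usual presentation elsewhere and are assumptions here):

    p               truth
    []p             provable; the communicable 3p self (G, with G* for the
                    true-but-unprovable part)
    []p & p         knowable; the inner 1p self (S4Grz)
    []p & <>t       observable; first-person plural, the 'physical' mode (Z)
    []p & <>t & p   sensible; the mode tied to qualia (X)

with [] read as provability, <> as consistency, and t a fixed tautology.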

Bruno



> 
> Brent
> 


Re: What falsifiability tests has computationalism passed?

2018-01-15 Thread Bruno Marchal

> On 14 Jan 2018, at 19:42, Brent Meeker  wrote:
> 
> 
> 
> On 1/14/2018 3:48 AM, Bruno Marchal wrote:
 No examination of its own body will convince it otherwise, because that 
 examination is always in terms of the *phenomena* entailed (or 'revealed') 
 as a consequence of its formal structure, and not directly in terms of 
 that structure itself.
>>> 
>>> That is also correct, but I was alluding to a deeper reason, which 
>>> eventually benefits from the difference between []p, []p & p, []p & <>t and 
>>> also []p & <>t & p. I might come back on this.
>>> 
>>> Yes, please do. What I was referring to was the machine's inability to 
>>> *directly apprehend* itself in terms of a formal description.
>> 
>> 
>> But the machine can directly apprehend itself in terms of a formal 
>> description. It can do that, but it cannot prove that it can do that. Here 
>> there is a very subtle nuance.
> 
> In this idealized model of "interviewing" a perfect machine, what does 
> "apprehend" mean?


Let T be any computable predicate or function or transformation. By “apprehend” 
I meant that we can, for any such T, write a program p such that p, on some 
special input, will output T(e), where e is p’s own code (this is essentially 
Kleene’s second recursion theorem). So we can build a Fortran program doing 
some normal task, but capable of answering all 3p questions about itself, like 
saying how many “goto”s are in its own code, or even answering 
self-localisation questions if it has some sensors, etc. Here, everything is 
3p, and it is, by incompleteness, only belief. But it is that incompleteness 
which forces the other mode ([]p & p) to make sense, and to obey an S4 logic 
(knowledge), and not the 3p self-reference logic (given by G and G*).
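
A minimal runnable illustration of that kind of 3p self-access (my own sketch, 
in Python rather than Fortran; nothing here is Bruno's code): a program that 
reconstructs its own source e via the standard quine trick, then answers a 3p 
question about itself, here counting its own occurrences of "print".

    # s is a template of the whole program; s % s rebuilds the full source.
    s = 's = %r\nprint((s %% s).count("print"))'
    print((s % s).count("print"))

Run, it prints 4, a correct 3p report about its own code. Nothing in the 
program proves that the rebuilt text really is its own source, which is 
roughly the subtle nuance pointed to above.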




>   I thought that the machine's apprehension was modelled by what it could 
> prove, i.e. it believed what it could prove.

That was right. Apprehend, prove, rational-believe can be taken as synonymous 
indeed, at this stage. It is pure 3p, and it is what is denoted by the box of 
G, usually written []p. It is Gödel’s beweisbar arithmetical predicate. 



>   But now you seem to introduce another mode by which the machines knows 
> things.


Yes, it is the nuance enforced by incompleteness. It is well modelled by []p & 
p, Theaetetus’ definition of knowledge. There is no way to arithmetize it. It 
is pure “1p”. Of course, with the ideal machine, we can use Theaetetus to refer 
to the knower, but no machine can do that and at the same time rationalise or 
communicate that fact in any 3p way. That is why I say that the machine knows 
that its soul is not a machine, and she will need some amount of courage to 
willingly undergo a brain transplant or classical teleportation. Only the 
“divine intellect” (G*) knows that []p *is* []p & p. The machine cannot know 
that, nor believe that, but can still bet on it.

Bruno




> 
> Brent
> 


Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread Brent Meeker



On 1/14/2018 10:00 AM, Bruno Marchal wrote:


On 12 Jan 2018, at 01:36, Brent Meeker wrote:




On 1/11/2018 4:11 AM, David Nyman wrote:



On 11 Jan 2018 04:02, "Brent Meeker" wrote:




On 1/10/2018 6:56 PM, David Nyman wrote:



On 11 Jan 2018 02:34, "Brent Meeker" wrote:



On 1/10/2018 6:11 PM, David Nyman wrote:

If you read the rest of Tallis's piece you'll see that he
criticises the characterisation of the physical
environment as encoding 'information' independent of
interpretation. This objection can be dealt with by the
reversal,


Can it?  Isn't it just /assumed/ that the computational
relations, or number relations, encode information?  That
was my objection that the MGA was missing the necessity of
an environment for its computation to be "about". Bruno has
generally agreed with this and said it just means that the
environment (i.e. physics) is part of what is realized in
the computations of the UD.  But notice that this doesn't
answer a Tallis like objection that "computation is nothing
like experience" and "information is nothing like environment".


But the argument implies that the epistemic entailments of
computation,


That a computation has epistemic entailments is an assumption
that it's a computation /about/ something.  The argument
/assumes/ that so far as I can see.


I think you're right in part, to the extent that Theaetetus's 
criterion of knowledge, which includes the assumption of the 
(tautological) truth of the belief, is indeed an explicit axiom.


Aren't you jumping over the question?  If I have a set of diophantine 
equations that instantiate a universal computer and they have an 
integer that is a solution, that's a computation.  But why is it 
"about anything"? which parts of this process constitute knowledge or 
belief?  What proposition is true?


But as is the case with all hypotheses, the burden is then to 
persuade that adopting this axiom


But what exactly is the axiom?  Saying "Yes" to the doctor is the 
axiom that a replacement brain which instantiates all the same 
input-output functions will preserve one's internal narrative and 
memories with no significant differences.  That is identical to 
"Philosophical zombies are impossible."  Which means that intelligent 
behavior entails consciousness.  Which means consciousness can be 
studied by third persons.


is a reasonable step towards shedding some light on the problem 
we've set out to address. And one of the defining characteristics of 
beliefs of the requisite sort is indeed their indubitability, at least 
as a first approximation. IOW, each sentient agent, willy-nilly, is 
irrevocably bound (the 'bet' on a reality) to the primary 
veridicality of phenomena to which it is thereafter both 
epistemically and procedurally committed.


Since "bound" and "committed" are synonyms...I think that was a 
tautology.  But I'm not sure about the function of "willy-nilly", 
"irrevocably", and "primary"  or what would be an example of a belief 
not of the "requisite sort"?




And again, the point of studying the self-referential logics in this 
regard is to provide the kernel of a model of *aboutness* that could 
indeed be understood as  'reaching out', in Tallis's sense, towards 
such a world. It constructs, as it were, a space for the relation of 
the agent and its phenomenal world that could begin to be seen as 
possessing the necessary epistemic and procedural dimensionality, 
which is arguably what is lacking in the construction of a 'world' 
in strictly third personal terms.


I think the construction goes the other way and a straightforward 
presentation of CTE (computational theory of everything) is:


1. Arithmetic exists and instantiates all possible computations via 
relations implicit in diophantine equations.

2. Conscious thoughts are computations


Conscious thoughts are thought by a conscious (first-person).


An inference.  Not an axiom.

A conscious person is not a computation (it is a universal machine 
with some cognitive abilities).


Did I say otherwise?

You can attribute a personhood to a body, but a body cannot attribute 
its consciousness to its body among the infinity of variants which 
exist in arithmetic and play a role in its (first person) physics.


There will be an "infinity of variants" inferred by conscious thought.  
What are you saying "exists in arithmetic"?...bodies?  You seem to not 
be taking your "reversal" seriously.



With Gödel’s arithmetization of metamathematics, as I said, this can 
be handled technically, and makes the reversal constructive and thus 
testable.





3. All possible conscious thoughts occur.
4. There are sequences of conscious thoughts that 

Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread Brent Meeker



On 1/14/2018 9:01 AM, Bruno Marchal wrote:


https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves


I understand those criticisms of Searle and they may be right.  But 
note that arithmetic and computation are nothing like experience 
either and all the same criticisms apply to CTM;



Not really. At first sight it looks like IF we can associate a 
consciousness to a person supported by a computation, then we can 
certainly (even more so, you and Peter Jones would say) associate that 
consciousness with a material implementation of that computation.


And we can test that theory by messing with the material implementation 
and observing the effect on mentation...and the answer is...It's confirmed!




But that is not true. And perhaps we need to be more cautious, and 
repeat again that in no case (with or without matter) is a consciousness 
associated with, still less identical to, a computation.


Consciousness is a first person attribute. It is a mode of belief, and 
actually a mode of belief which intersects with truth.


Consciousness is an instinctive/logical belief in a reality formally 
connected to … some reality, or “model” of oneself.




That is poetic, but is it empirically true?  I don't think consciousness 
is a "mode of belief", unless you drastically stretch meaning of 
"belief".  And what does it mean "to intersect with truth"...if I 
generate propositions at random they will, occasionally "intersect with 
truth".  I think instinctive belief in reality evolved long before 
consciousness.  Part of the problem is that "consciousness" is thrown 
around as though its meaning is obvious and no distinction is made 
between awareness, self-awareness, inner-narrative, social-awareness, etc.


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread Brent Meeker



On 1/14/2018 3:48 AM, Bruno Marchal wrote:



No examination of its own body will convince it otherwise,
because that examination is always in terms of the *phenomena*
entailed (or 'revealed') as a consequence of its formal
structure, and not directly in terms of that structure itself.

That is also correct, but I was alluding to a deeper reason,
which eventually benefits from the difference between []p, []p &
p, []p & <>t and also []p & <>t & p. I might come back on this.


Yes, please do. What I was referring to was the machine's inability to
*directly apprehend* itself in terms of a formal description.



But the machine can directly apprehend itself in terms of a formal 
description. It can do that, but it cannot prove that it can do that. 
Here there is a very subtle nuance.


In this idealized model of "interviewing" a perfect machine, what does 
"apprehend" mean.  I thought that the machine's apprehension was 
modelled by what it could prove, i.e. it believed what it could prove.  
But now you seem to introduce another mode by which the machines knows 
things.


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread Bruno Marchal

> On 12 Jan 2018, at 01:36, Brent Meeker  wrote:
> 
> 
> 
> On 1/11/2018 4:11 AM, David Nyman wrote:
>> 
>> 
>> On 11 Jan 2018 04:02, "Brent Meeker" wrote:
>> 
>> 
>> On 1/10/2018 6:56 PM, David Nyman wrote:
>>> 
>>> 
>>> On 11 Jan 2018 02:34, "Brent Meeker" wrote:
>>> 
>>> 
>>> On 1/10/2018 6:11 PM, David Nyman wrote:
 If you read the rest of Tallis's piece you'll see that he criticises the 
 characterisation of the physical environment as encoding 'information' 
 independent of interpretation. This objection can be dealt with by the 
 reversal,
>>> 
>>> Can it?  Isn't it just assumed that the computational relations, or number 
>>> relations, encode information?  That was my objection that the MGA was 
>>> missing the necessity of an environment for its computation to be "about".  
>>> Bruno has generally agreed with this and said it just means that the 
>>> environment (i.e. physics) is part of what is realized in the computations 
>>> of the UD.  But notice that this doesn't answer a Tallis like objection 
>>> that "computation is nothing like experience" and "information is nothing 
>>> like environment".
>>> 
>>> But the argument implies that the epistemic entailments of computation,
>> 
>> That a computation has epistemic entailments is an assumption that it's a 
>> computation about something.  The argument assumes that so far as I can see.
>> 
>> I think you're right in part, to the extent that Theaetetus's criterion of 
>> knowledge, which includes the assumption of the (tautological) truth of the 
>> belief, is indeed an explicit axiom.
> 
> Aren't you jumping over the question?  If I have a set of diophantine 
> equations that instantiate a universal computer and they have an integer that 
> is a solution, that's a computation.  But why is it "about anything"?  which 
> parts of this process constitute knowledge or belief?  What proposition is 
> true?
> 
>> But as is the case with all hypotheses, the burden is then to persuade that 
>> adopting this axiom
> 
> But what exactly is the axiom?  Saying "Yes" to the doctor is the axiom that 
> a replacement brain which instantiates all the same input-output functions 
> will preserve one's internal narrative and memories with no significant 
> differences.  That is identical to "Philosophical zombies are impossible."  
> Which means that intelligent behavior entails consciousness.  Which means 
> consciousness can be studied by third persons.
> 
>> is a reasonable step towards shedding some light on the problem we've set 
>> out to address. And one of the defining characteristics of beliefs of the 
>> requisite sort is indeed their indubitability, at least as a first 
>> approximation. IOW, each sentient agent, willy-nilly, is irrevocably bound 
>> (the 'bet' on a reality) to the primary veridicality of phenomena to which 
>> it is thereafter both epistemically and procedurally committed.
> 
> Since "bound" and "committed" are synonyms...I think that was a tautology.  
> But I'm not sure about the function of "willy-nilly", "irrevocably", and 
> "primary"  or what would be an example of a belief not of the "requisite 
> sort"?
> 
>> 
>> And again, the point of studying the self-referential logics in this regard 
>> is to provide the kernel of a model of *aboutness* that could indeed be  
>>  understood as  'reaching out', in Tallis's sense, towards such a world. 
>> It constructs, as it were, a space for the relation of the agent and its 
>> phenomenal world that could begin to be seen as possessing the necessary 
>> epistemic and procedural dimensionality, which is arguably what is lacking 
>> in the construction of a 'world' in strictly third personal terms. 
> 
> I think the construction goes the other way and a straightforward 
> presentation of CTE (computational theory of everything) is:
> 
> 1. Arithmetic exists and instantiates all possible computations via relations 
> implicit in diophantine equations.
> 2. Conscious thoughts are computations

Conscious thoughts are thought by a conscious (first-person). A conscious 
person is not a computation (it is a universal machine with some cognitive 
abilities). You can attribute a personhood to a body, but a body cannot 
attribute its consciousness to its body among the infinity of variants which 
exist in arithmetic and play a role in its (first person) physics.
With Gödel’s arithmetization of metamathematics, as I said, this can be 
handled technically, and makes the reversal constructive and thus testable.
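
A toy illustration of the arithmetization idea (my own sketch in Python, not 
from the thread): encode a formula, viewed as a string of symbols, into a 
single natural number via prime-power coding, so that statements ABOUT 
formulas become statements about numbers.

    def godel_number(formula, alphabet="()~&|>=+*0Sxv"):
        # generate as many primes as there are symbols
        primes, cand = [], 2
        while len(primes) < len(formula):
            if all(cand % p for p in primes):
                primes.append(cand)
            cand += 1
        # the i-th symbol becomes the exponent of the i-th prime
        g = 1
        for p, ch in zip(primes, formula):
            g *= p ** (alphabet.index(ch) + 1)
        return g

    print(godel_number("S0=S0"))  # one number standing for the formula 'S0=S0'

Unique factorization makes the coding reversible, which is all the embedding 
of metamathematics into arithmetic needs at this toy level.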



> 3. All possible conscious thoughts occur.
> 4. There are sequences of conscious thoughts that instantiate the thoughts of 
> a person about a world.
> 5. You are such person.  
> 6. The world (and you) are just inferences (beliefs) from consistent patterns 
> in the sequence of thoughts.

More or less OK, but the point is that 

Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread Bruno Marchal

> On 11 Jan 2018, at 00:49, Bruce Kellett  wrote:
> 
> On 11/01/2018 9:09 am, Brent Meeker wrote:
>> On 1/10/2018 11:23 AM, David Nyman wrote:
>>> Searle makes his position even more vulnerable by arguing that not only are 
>>> neural activity and the experience of perception the same but that the 
>>> former causes the latter just as water is “caused” by H2O. This is 
>>> desperate stuff: one could hardly expect some thing A to cause some thing B 
>>> with which it is identical, because nothing can cause itself. In any event, 
>>> the bottom line is that the molecules of H2O and the wet stuff that is 
>>> water are two appearances of the same thing — two conscious takes on the 
>>> same stuff. They cannot be analogous to, respectively, that which 
>>> supposedly causes conscious experiences (neural impulses) and conscious 
>>> experiences themselves.​"
>>> 
>>> Here's a link to the original piece:
>>> 
>>>  
>>> https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves
>>>  
>>> 
>> I understand those criticisms of Searle and they may be right.  But note 
>> that arithmetic and computation are nothing like experience either and all 
>> the same criticisms apply to CTM; something that goes unchallenged on this 
>> list because CTM is always taken as a given and Bruno says, "We must assume 
>> something to begin our theorizing."  But Searle would reject CTM (and has) 
>> for exactly the same criticisms directed against him above.
> 
> I think Tallis's argument against Searle is entirely specious. Searle would 
> appear to be arguing that the properties of H2O molecules, such as the large 
> dipole moment, etc, cause the observed bulk properties of water. It is hard 
> to fault such an argument -- what else could possibly lie behind the bulk 
> properties of water other than the properties of its constituent molecules? 
> By analogy, then, the bulk properties of a brain, such as consciousness, 
> thought, memory, and so on, arise from the properties of the individual 
> neurons and other structures that make up the physical brain -- mind 
> supervenes on the physical brain.
> 
> As you say, Tallis's argument can be raised against the CTM account -- after 
> all, consciousness is not the same thing as a computation.

Indeed.




> However, it can be argued that consciousness arises from, or supervenes on, 
> the properties of some underlying computations. An argument against Searle is 
> thus an argument against CTM, or, in fact, any other reductionist account of 
> consciousness. Tallis simply makes consciousness some magical, inexplicable 
> thing.
> 

An argument against Searle is an argument in favour of Materialism and an 
argument against the CTM.

Consciousness is in the logical relation enforced by the explicit conjunction 
of the locally digital machine’s belief and truth; it is an awareness that 
something kicks back. The theory predicts that there is a “level” of 
substitution, and that above it we are confronted with a finite number of 
universal machines, with long computational histories, and below the 
substitution level we are confronted with an infinity of universal numbers and 
their competition to bring about our next states. This yields equivalences 
from symmetric or braided groups, for which evidence is given by nature, and 
by arithmetic (more timidly, no doubt).

The claustrum seems to have the most kappa opioid receptors (where salvia acts 
the most), which suggests it is the seed of consciousness, where we are 
“connected” to the arithmetical reality; it is the mammalian ancestor of the 
neuronal implementation of Robinson Arithmetic, so to speak. Oh, I see it is 
the favourite brain structure of Christof Koch. I guess salvia might confirm 
this insight. It is the dynamical fixed point and it matches all the 
equivalent ones at that level. 

Then, for more on this see my answer to Brent. In metaphysics, the 
materialists are the ones who seem to bet on some magic, probably so in the 
Mechanist frame. The reversal also reverses the burden of proof.

Bruno








> Bruce
> 

Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread Bruno Marchal

> On 10 Jan 2018, at 23:09, Brent Meeker  wrote:
> 
> 
> 
> On 1/10/2018 11:23 AM, David Nyman wrote:
>> Searle makes his position even more vulnerable by arguing that not only are 
>> neural activity and the experience of perception the same but that the 
>> former causes the latter just as water is “caused” by H2O. This is desperate 
>> stuff: one could hardly expect some thing A to cause some thing B with which 
>> it is identical, because nothing can cause itself. In any event, the bottom 
>> line is that the molecules of H2O and the wet stuff that is water are two 
>> appearances of the same thing — two conscious takes on the same stuff. They 
>> cannot be analogous to, respectively, that which supposedly causes conscious 
>> experiences (neural impulses) and conscious experiences themselves."
>> 
>> Here's a link to the original piece:
>> 
>> https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves
>>  
>> 
> I understand those criticisms of Searle and they may be right.  But note that 
> arithmetic and computation are nothing like experience either and all the 
> same criticisms apply to CTM;


Not really. At first sight it looks as if, IF we can associate a 
consciousness with a person supported by a computation, then we can certainly 
(even more so, you and Peter Jones would say) associate that consciousness with 
a material implementation of that computation.

But that is not true. And perhaps we need to be more cautious, and repeat again 
that in no case (with or without matter) is a consciousness associated with, 
still less identical to, a computation.

Consciousness is a first person attribute. It is a mode of belief, and actually 
a mode of belief which intersects with truth. 

Consciousness is an instinctive/logical belief in a reality formally connected 
to … some reality, or “model” of oneself.

It happens because the universal (Turing, Church, Post, Kleene) machine/number 
cannot avoid reflecting itself in itself, and is confronted with the nuances 
forced on itself by incompleteness. 

So with Mechanism (CT + YD) the following become theorems of Elementary 
Arithmetic or, more simply, are true (in all models of Peano arithmetic):

1) There exist universal machines.

2) Those looking correctly inward deduce from their elementary beliefs that 
they are Löbian, and they can see the different modes, including guessing the 
correct communicable propositional part (axiomatised by G) and even the 
correct non-communicable propositional part (axiomatised by G*).

As you have assumed Mechanism, what the UDA should already have made 
intuitively clear (even if shocking) is that the inferable observable 
(physics) is the FPI calculus on all relative computational states (sigma_1 
sentences). So the abstract probability one, which determines the logic of 
the measure, is given by what is true in all accessible relative states ([]p), 
together with the explicit default hypothesis that there is some reality (<>t): 
that is, by the arithmetical logic of the conjunction of provability and 
consistency restricted to the sigma_1 sentences. That works by offering a 
quantisation and a quantum logic. 
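
(For reference, the modal dictionary in play here — [] for the machine's 
provability box, <>t for its consistency, p a sigma_1 sentence; the first four 
variants occur explicitly in this thread, the fifth is the usual completion:

  truth:        p
  belief:       []p
  knowledge:    []p & p
  observation:  []p & <>t
  sensation:    []p & <>t & p

It is the last two, restricted to the sigma_1 sentences, that are claimed above 
to yield the quantisation and the quantum logic.)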

Now, in that setting, assuming the existence of some primary matter seems to 
complicate, or to oversimplify, the problem, a bit like the “explanation” that 
God made it. What is that matter, how do you test its existence better than by 
testing Mechanism, and how exactly does it select consciousness, and that of 
which person?

In a sense, Mechanism does say “God made it”, but the God is the elementary 
arithmetic that we have already taught in school (for about 400 years, locally).  
Gödel’s arithmetization of metamathematics embeds the mathematicians in 
arithmetic in a manner which prolongs Everett’s embedding of the physicist in 
physics. That makes a priori more histories, but the constraints of correctness 
and consistency give the working mode, which allows deep/long computations to 
become partially sharable by some universal machine in association with some 
first-personal “winner”.

Now, this gives only tools to measure our degree of materialism or 
non-computationalism, which could still be taken as a special oracle in 
arithmetic. That would be the case if the observable obeyed Boolean logic, as 
with classical physics, but here QM saves Mechanism, and thus wins this match. 
Nobody claims it was the last match. (The problem is that the materialists have 
claimed to win the match since Aristotle.)

With Mechanism, we do have a theory of consciousness, because incompleteness 
justifies (and the Löbian machine already proves this) that there is no 
communicable belief in a reality encompassing oneself, in case there is such 
a reality, and the universal machine is bound to confuse p, []p, []p & p, etc. 

Observations, like the Hubble images of far-away/young galaxies, suggest that we 
share a long and deep (in Bennett 

Re: What falsifiability tests has computationalism passed?

2018-01-14 Thread David Nyman
On 14 January 2018 at 11:48, Bruno Marchal  wrote:

>
> On 10 Jan 2018, at 20:23, David Nyman  wrote:
>
>
>
> On 10 Jan 2018 13:48, "Bruno Marchal"  wrote:
>
>
> On 7 Jan 2018, at 12:42, David Nyman  wrote:
>
> On 7 January 2018 at 09:52, Bruno Marchal  wrote:
>
>>
>> On 6 Jan 2018, at 21:09, David Nyman  wrote:
>>
>>
>>
>> On 6 Jan 2018 19:46, "Bruno Marchal"  wrote:
>>
>>
>> On 5 Jan 2018, at 21:04, David Nyman  wrote:
>>
>>
>>
>> On 5 Jan 2018 19:27, "Bruno Marchal"  wrote:
>>
>>
>> On 4 Jan 2018, at 21:07, David Nyman  wrote:
>>
>>
>>
>> On 4 Jan 2018 18:16, "Bruno Marchal"  wrote:
>>
>>
>> On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:
>>
>> On 4 January 2018 at 11:55, Bruno Marchal  wrote:
>>
>>>
>>> > On Jan 3, 2018, at 10:57 PM, Brent Meeker 
>>> wrote:
>>> >
>>> >
>>> >
>>> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>>> >>
>>> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>>> >>
>>> >>>
>>> >>>
>>> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>>  Now, it
>>>  could be that intelligent behavior implies mind, but as you yourself
>>>  argue, we don't know that.
>>> >>>
>>> >>> Isn't this at the crux of the scientific study of the mind? There
>>> seemed to be universal agreement on this list that a philosophical zombie
>>> is impossible.
>>> >>
>>> >>
>>> >> Precisely: that a philosophical zombie is impossible when we assume
>>> Mechanism.
>>> >
>>> > But the consensus here has been that a philosophical zombie is
>>> impossible because it exhibits intelligent behavior.
>>>
>>> Well, I think the consensus here is that computationalism is far more
>>> plausible than non-computationalism.
>>> Computationalism makes zombies nonsensical.
>>>
>>>
>>>
>>> >
>>> >> Philosophical zombie remains logically consistent for a non
>>> computationalist theory of mind.
>>> >
>>> > It's logically consistent with a computationalist theory of brain. It
>>> is only inconsistent with a computationalist theory of mind because you
>>> include as an axiom that computation produces mind.  One can as well say that
>>> intelligent behavior entails mind as an axiom of physicalism.  Logic is a
>>> very cheap standard for theories to meet.
>>>
>>> At first sight, zombies seem consistent with computationalism, but the
>>> notion of zombies requires the idea that we attribute mind to bodies
>>> (having the right behavior). But with computationalism, mind is never
>>> associated to a body, but only to the person having the infinity of
>>> (similar enough) bodies' relative representations in arithmetic. There are no
>>> “real bodies” or “ontological bodies”, so the notion of zombie becomes
>>> senseless. The consciousness is associated with the person, which is never
>>> determined by one body.
>>>
>>
>> ​So in the light of what you say above, does it then follow that the MGA
>> implies (assuming comp) that a physical system does *not* in fact implement
>> a computation in the relevant sense?
>>
>>
>>
>> The physical world has to be able to implement the computation in the
>> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD
>> “act of faith”.
>>
>> The physical world is a persistent illusion. It has to be enough
>> persistent that you wake up at the hospital with the digital brain.
>>
>>
>>
>> I ask this because you say mind is *never* associated with a body, but
>> mind *is* associated with computation via the epistemic consequences of
>> universality.
>>
>>
>>
>> A (conscious) third person can associate a mind/person to a body that he
>> perceives. It is polite.
>>
>> The body perceived by that third person is itself a construction of its
>> own mind, and with computationalism (but also with QM), we know that such a
>> body is an (evolving) map of where, and in which states, we could find, say,
>> the electron and proton of that body, and such a snapshot is only a
>> computational state among infinitely many others which would work as well,
>> with respect to the relevant computations which brought its conscious state.
>> Now, the conscious first person cannot associate itself to any particular
>> body or computation.
>>
>> Careful: sometimes I say that a machine can think, or maybe (I usually
>> avoid) that a computation can think or be conscious. It always means,
>> respectively, that a machine can make a person capable of manifesting
>> itself relatively to you. But the machine and the body are local relative
>> representations.
>>
>> A machine cannot think, and a computation (which is the (arithmetical)
>> dynamic 3p view of the sequence of the relative static machine/state)
>> cannot think. Only a (first) person can think, and to use that thinking
>> with respect to another person, a machine is handy, like brain 

Re: What falsifiability tests has computationalism passed?

2018-01-12 Thread David Nyman
On 12 Jan 2018 00:36, "Brent Meeker"  wrote:



On 1/11/2018 4:11 AM, David Nyman wrote:



On 11 Jan 2018 04:02, "Brent Meeker"  wrote:



On 1/10/2018 6:56 PM, David Nyman wrote:



On 11 Jan 2018 02:34, "Brent Meeker"  wrote:



On 1/10/2018 6:11 PM, David Nyman wrote:

If you read the rest of Tallis's piece you'll see that he criticises the
characterisation of the physical environment as encoding 'information'
independent of interpretation. This objection can be dealt with by the
reversal,


Can it?  Isn't it just *assumed* that the computational relations, or
number relations, encode information?  That was my objection that the MGA
was missing the necessity of an environment for its computation to be
"about".  Bruno has generally agreed with this and said it just means that
the environment (i.e. physics) is part of what is realized in the
computations of the UD.  But notice that this doesn't answer a Tallis like
objection that "computation is nothing like experience" and "information is
nothing like environment".


But the argument implies that the epistemic entailments of computation,


That a computation has epistemic entailments is an assumption that it's a
computation *about* something.  The argument *assumes* that so far as I can
see.


I think you're right in part, to the extent that Theaetetus's criterion of
knowledge, which includes the assumption of the (tautological) truth of the
belief, is indeed an explicit axiom.


Aren't you jumping over the question?  If I have a set of diophantine
equations that instantiate a universal computer and they have an integer
that is a solution, that's a computation.  But why is it "about anything"?
Which parts of this process constitute knowledge or belief?  What
proposition is true?


I thought I was answering the question. Theaetetus's criterion is modelled
as the conjunction of believing in p and the axiom that p is indeed the
case *in some relevant sense*. IOW true, justified belief. As to what is a
proposition and what it may be about, since logic is fully emulable in
'universal' computation, in the limit it might be about most anything
whatsoever I suppose. That apart, as Bruce remarked, if we apply a
reductionist approach the criticism can be levelled that both its very
status as a proposition in the first place and the interpretation of what
it is about are, by implication, *external* to the computation itself. But
since ex hypothesi this cannot be the case without infinite regress, we are
confronted with the idea that the truth (or 'epistemic entailment')
putatively asserted by the proposition *supplies its own interpretation*.

If this is not very intuitive at the level of the toy model, it might be a
little more so if we try to generalise the idea in something like the way I
suggested. The physical constitution and activity of a brain, in this way
of thinking, would then be construed as the manifestation in appearance of
some complex of computations that simultaneously tracks both the requisite
physical dispositions relating the brain and its wider environment, and the
propositional attitudes of a 'mental agent'. (By the way, I would be
sympathetic to Dennett's idea that this would not be expected to play out
in the manner of any simplistic notion of a 'Cartesian theatre'.)

Then a propositional attitude attributable to the mental agent might be "I
see an apple" or "I am in intense agony". What the proposition is 'about'
is now in terms of the agent's apprehension of its own state, implying an
epistemic truth *about itself*, or perhaps more broadly about the
boundaries of its own physical and temporal situation, one that is by the
same token not directly communicable. In either case this attitude would
necessarily have to be expressed both physically and epistemically; the
physical expression in behaviour, both gross and subtle (e.g.
neurocognition); and the (tautologically truthful) entailment in the fact
that the agent in question *does indeed see an apple or is indeed in
intense agony*, the latter supplying the otherwise missing 'interpretation'
of the former.

As I suggested to Bruce, if one considers the converse, what would it imply
if I believed that, though you were not in any sense lying, your assertion
of such things was nonetheless false? That it only 'seems as if' they are
true, or that their truth is an 'illusion'? By the way, this is apparently
what the likes of Patricia Churchland urge us to take seriously - that we
are nothing other than zombie mechanisms, whose putative mentalistic
assertions are supernumerary to any fundamental understanding of reality.
At the same time one must be careful not to be misled into the idea that
'truth' here implies that the agent possesses infallibility with respect to
the wider *implications* or consequences of its immediate 

Re: What falsifiability tests has computationalism passed?

2018-01-11 Thread Brent Meeker



On 1/11/2018 4:11 AM, David Nyman wrote:



On 11 Jan 2018 04:02, "Brent Meeker" wrote:




On 1/10/2018 6:56 PM, David Nyman wrote:



On 11 Jan 2018 02:34, "Brent Meeker" wrote:



On 1/10/2018 6:11 PM, David Nyman wrote:

If you read the rest of Tallis's piece you'll see that he
criticises the characterisation of the physical environment
as encoding 'information' independent of interpretation.
This objection can be dealt with by the reversal,


Can it?  Isn't it just /assumed/ that the computational
relations, or number relations, encode information? That was
my objection that the MGA was missing the necessity of an
environment for its computation to be "about".  Bruno has
generally agreed with this and said it just means that the
environment (i.e. physics) is part of what is realized in the
computations of the UD.  But notice that this doesn't answer
a Tallis like objection that "computation is nothing like
experience" and "information is nothing like environment".


But the argument implies that the epistemic entailments of
computation,


That a computation has epistemic entailments is an assumption that
it's a computation /about/ something.  The argument /assumes/ that
so far as I can see.


I think you're right in part, to the extent that Theaetetus's criterion 
of knowledge, which includes the assumption of the (tautological) 
truth of the belief, is indeed an explicit axiom.


Aren't you jumping over the question?  If I have a set of diophantine 
equations that instantiate a universal computer and they have an integer 
that is a solution, that's a computation.  But why is it "about 
anything"?  which parts of this process constitute knowledge or belief?  
What proposition is true?


But as is the case with all hypotheses, the burden is then to persuade 
that adopting this axiom


But what exactly is the axiom?  Saying "Yes" to the doctor is the axiom 
that a replacement brain which instantiates all the same input-output 
functions will preserve one's internal narrative and memories with no 
significant differences.  That is identical to "Philosophical zombies 
are impossible."  Which means that intelligent behavior entails 
consciousness.  Which means consciousness can be studied by third persons.


is a reasonable step towards shedding some light on the problem we've 
set out to address. And one of the defining characteristics of beliefs 
of the requisite sort is indeed their indubitability, at least as a first 
approximation. IOW, each sentient agent, willy-nilly, is irrevocably 
bound (the 'bet' on a reality) to the primary veridicality of 
phenomena to which it is thereafter both epistemically and 
procedurally committed.


Since "bound" and "committed" are synonyms...I think that was a 
tautology.  But I'm not sure about the function of "willy-nilly", 
"irrevocably", and "primary"  or what would be an example of a belief 
not of the "requisite sort"?




And again, the point of studying the self-referential logics in this 
regard is to provide the kernel of a model of *aboutness* that could 
indeed be understood as  'reaching out', in Tallis's sense, towards 
such a world. It constructs, as it were, a space for the relation of 
the agent and its phenomenal world that could begin to be seen as 
possessing the necessary epistemic and procedural dimensionality, 
which is arguably what is lacking in the construction of a 'world' in 
strictly third personal terms.


I think the construction goes the other way and a straightforward 
presentation of CTE (computational theory of everything) is:


1. Arithmetic exists and instantiates all possible computations via 
relations implicit in diophantine equations (see the toy sketch below).

2. Conscious thoughts are computations
3. All possible conscious thoughts occur.
4. There are sequences of conscious thoughts that instantiate the 
thoughts of a person about a world.

5. You are such person.
6. The world (and you) are just inferences (beliefs) from consistent 
patterns in the sequence of thoughts.


It may be objected that there are infinitely more sequences of thoughts 
that are not consistent with being those of a person in a world.  That's 
called "the white rabbit problem".


Brent






not its formal elements, attain this criterion whilst being at
the same time, and unavoidably, directly incommunicable.


The things that are incommunicable, as I understand it, are the
things that are true (given the axiomatic system) but unprovable. 
In other words, a proof is what is communicable.


Yes, and hence, in a sense admittedly highly generalised from the toy 
model, proof constitutes the procedural or actionable polarity in the 
epistemic construction of a 'world'. It is what is available for 
public inspection. 

Re: What falsifiability tests has computationalism passed?

2018-01-11 Thread David Nyman
On 11 Jan 2018 04:02, "Brent Meeker"  wrote:



On 1/10/2018 6:56 PM, David Nyman wrote:



On 11 Jan 2018 02:34, "Brent Meeker"  wrote:



On 1/10/2018 6:11 PM, David Nyman wrote:

If you read the rest of Tallis's piece you'll see that he criticises the
characterisation of the physical environment as encoding 'information'
independent of interpretation. This objection can be dealt with by the
reversal,


Can it?  Isn't it just *assumed* that the computational relations, or
number relations, encode information?  That was my objection that the MGA
was missing the necessity of an environment for its computation to be
"about".  Bruno has generally agreed with this and said it just means that
the environment (i.e. physics) is part of what is realized in the
computations of the UD.  But notice that this doesn't answer a Tallis like
objection that "computation is nothing like experience" and "information is
nothing like environment".


But the argument implies that the epistemic entailments of computation,


That a computation has epistemic entailments is an assumption that it's a
computation *about* something.  The argument *assumes* that so far as I can
see.


I think you're right in part, to the extent that Theaetetus's criterion of
knowledge, which includes the assumption of the (tautological) truth of the
belief, is indeed an explicit axiom. But as is the case with all
hypotheses, the burden is then to persuade that adopting this axiom is a
reasonable step towards shedding some light on the problem we've set out to
address. And one of the defining characteristics of beliefs of the
requisite sort is indeed their indubitability, at least as a first
approximation. IOW, each sentient agent, willy-nilly, is irrevocably bound
(the 'bet' on a reality) to the primary veridicality of phenomena to which
it is thereafter both epistemically and procedurally committed.
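
(In the thread's notation, the criterion just described is the definition of
the knower: K p := []p & p, with []p the machine's provability-cum-belief
predicate. The conjunction is not redundant from the machine's own
standpoint: by Löb's theorem the machine proves []p -> p only for those p it
already proves, so it cannot, in general, collapse K p back to []p.)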

And again, the point of studying the self-referential logics in this regard
is to provide the kernel of a model of *aboutness* that could indeed be
understood as  'reaching out', in Tallis's sense, towards such a world. It
constructs, as it were, a space for the relation of the agent and its
phenomenal world that could begin to be seen as possessing the necessary
epistemic and procedural dimensionality, which is arguably what is lacking
in the construction of a 'world' in strictly third personal terms.



not its formal elements, attain this criterion whilst being at the same
time, and unavoidably, directly incommunicable.


The things that are incommunicable, as I understand it, are the things that
are true (given the axiomatic system) but unprovable.  In other words, a
proof is what is communicable.


Yes, and hence, in a sense admittedly highly generalised from the toy
model, proof constitutes the procedural or actionable polarity in the
epistemic construction of a 'world'. It is what is available for public
inspection. But at the same time a system of proofs or theorems of
sufficient power implies further truths that are not formally describable.
In essence, again in a highly generalised extrapolation, this forms the
analogy with the epistemic situation of a sentient agent in relation to its
phenomenal world.
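
(Stated compactly, the two classical results being generalised here are:
Gödel I, that any consistent, recursively axiomatised theory T extending
elementary arithmetic leaves some sentence G with T proving neither G nor
~G, although G is true in the standard model; and Tarski, that no
arithmetical formula True(x) can satisfy True('p') <-> p for every sentence
p, so truth for the machine's own language is not definable by the machine.)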

It is publicly and procedurally committed to a system of proofs or beliefs
which can never exceed the reliability of relatively confirmed conjectures.
However the epistemic consequences of such beliefs present themselves in
the form of categorically distinct phenomena that elude this mode of direct
communication. So, in another reversal, the substantive 'world' now lies
at the interior or phenomenal pole of this relation and its procedural or
processual construction at the 'publicly inferrable' or externalisable one.




This category therefore subsumes that of physical phenomenology. So in
principle this is capable of answering the objection, at least in the limit
of what can be explicitly characterised.

  I find these "nothing like" arguments facile.


I know you do. And I've tried to present a less 'facile' view of the issues
as best I can.


I appreciate the effort.


Thanks.

David



Brent


But there I fear our views of the matter diverge, probably irreconcilably.

David



Brent

Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread Brent Meeker



On 1/10/2018 6:56 PM, David Nyman wrote:



On 11 Jan 2018 02:34, "Brent Meeker" wrote:




On 1/10/2018 6:11 PM, David Nyman wrote:

If you read the rest of Tallis's piece you'll see that he
criticises the characterisation of the physical environment as
encoding 'information' independent of interpretation. This
objection can be dealt with by the reversal,


Can it?  Isn't it just /assumed/ that the computational relations,
or number relations, encode information?  That was my objection
that the MGA was missing the necessity of an environment for its
computation to be "about".  Bruno has generally agreed with this
and said it just means that the environment (i.e. physics) is part
of what is realized in the computations of the UD.  But notice
that this doesn't answer a Tallis like objection that "computation
is nothing like experience" and "information is nothing like
environment".


But the argument implies that the epistemic entailments of computation,


That a computation has epistemic entailments is an assumption that it's 
a computation /about/ something.  The argument /assumes/ that so far as 
I can see.


not its formal elements, attain this criterion whilst being at the 
same time, and unavoidably, directly incommunicable.


The things that are incommunicable, as I understand it, are the things 
that are true (given the axiomatic system) but unprovable. In other 
words, a proof is what is communicable.



This category therefore subsumes that of physical phenomenology. So in 
principle this is capable of answering the objection, at least in the 
limit of what can be explicitly characterised.


  I find these "nothing like" arguments facile.


I know you do. And I've tried to present a less 'facile' view of the 
issues as best I can.


I appreciate the effort.

Brent


But there I fear our views of the matter diverge, probably irreconcilably.

David



Brent


Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread David Nyman
On 11 Jan 2018 02:34, "Brent Meeker"  wrote:



On 1/10/2018 6:11 PM, David Nyman wrote:

If you read the rest of Tallis's piece you'll see that he criticises the
characterisation of the physical environment as encoding 'information'
independent of interpretation. This objection can be dealt with by the
reversal,


Can it?  Isn't it just *assumed* that the computational relations, or
number relations, encode information?  That was my objection that the MGA
was missing the necessity of an environment for its computation to be
"about".  Bruno has generally agreed with this and said it just means that
the environment (i.e. physics) is part of what is realized in the
computations of the UD.  But notice that this doesn't answer a Tallis like
objection that "computation is nothing like experience" and "information is
nothing like environment".


But the argument implies that the epistemic entailments of computation, not
its formal elements, attain this criterion whilst being at the same time,
and unavoidably, directly incommunicable. This category therefore subsumes
that of physical phenomenology. So in principle this is capable of
answering the objection, at least in the limit of what can be explicitly
characterised.

  I find these "nothing like" arguments facile.


I know you do. And I've tried to present a less 'facile' view of the issues
as best I can. But there I fear our views of the matter diverge, probably
irreconcilably.

David



Brent



Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread Brent Meeker



On 1/10/2018 6:11 PM, David Nyman wrote:
If you read the rest of Tallis's piece you'll see that he criticises 
the characterisation of the physical environment as encoding 
'information' independent of interpretation. This objection can be 
dealt with by the reversal,


Can it?  Isn't it just /assumed/ that the computational relations, or 
number relations, encode information?  That was my objection that the 
MGA was missing the necessity of an environment for its computation to 
be "about".  Bruno has generally agreed with this and said it just means 
that the environment (i.e. physics) is part of what is realized in the 
computations of the UD.  But notice that this doesn't answer a Tallis 
like objection that "computation is nothing like experience" and 
"information is nothing like environment".  I find these "nothing like" 
arguments facile.


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread David Nyman
On 10 January 2018 at 23:49, Bruce Kellett 
wrote:

> On 11/01/2018 9:09 am, Brent Meeker wrote:
>
> On 1/10/2018 11:23 AM, David Nyman wrote:
>
> Searle makes his position even more vulnerable by arguing that not only
> are neural activity and the experience of perception the same but that the
> former *causes* the latter just as water is “caused” by H2O. This is
> desperate stuff: one could hardly expect some thing A to cause some thing B
> with which it is identical, because nothing can cause itself. In any event,
> the bottom line is that the molecules of H2O and the wet stuff that is
> water are two *appearances* of the same thing — two conscious takes on
> the same stuff. They cannot be analogous to, respectively, that which
> supposedly *causes* conscious experiences (neural impulses) and conscious
> experiences *themselves*."
> Here's a link to the original piece:
>
>
> 
> https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves
>
>
> I understand those criticisms of Searle and they may be right.  But note
> that arithmetic and computation are nothing like experience either and all
> the same criticisms apply to CTM; something that goes unchallenged on this
> list because CTM is always taken as a given and Bruno says, "We must assume
> something to begin our theorizing."  But Searle would reject CTM (and has)
> for exactly the same criticisms directed against him above.
>
>
> I think Tallis's argument against Searle is entirely specious. Searle
> would appear to be arguing that the properties of H2O molecules, such as
> the large dipole moment, etc, cause the observed bulk properties of water.
> It is hard to fault such an argument -- what else could possibly lie behind
> the bulk properties of water other than the properties of its constituent
> molecules? By analogy, then, the bulk properties of a brain, such as
> consciousness, thought, memory, and so on
>

None of these can be claimed as 'bulk properties of a brain', except by a
brute and unargued-for a posteriori assumption (or identity thesis). On the
contrary, they all fall properly into the category of phenomena or
appearance, to employ Tallis's taxonomy. I think you have managed entirely
to miss the point of his argument.

arise from the properties of the individual neurons and other structures
> that make up the physical brain
>

The only necessary entailment is that what arises from the properties of
the individual neurons and other structures are the bulk properties of a
brain - i.e. neurocognition and so forth. So far, so circular.

> -- mind supervenes on the physical brain.
>

Yes - in the sense of covariance. No - in any further, explicitly
argued-for sense, except by brute extrapolation or identity.


> As you say, Tallis's argument can be raised against the CTM account --
> after all, consciousness is not the same thing as a computation.
>

I don't believe so. The point of Bruno's hypothesis is to associate
consciousness with specific, first-person *epistemic entailments* of
computation, not directly with computation in any purely third-person
sense. Now, you may criticise this by saying that it is likewise
unargued-for and if I'm honest this is something that troubled me for some
time about CTM in this formulation. Hence my characterisation of the
reversal as a complete starting-over with formal elements in terms of which
both information and computation can, in principle, be directly encoded or
described. A 'language of thought', if you will.

If you read the rest of Tallis's piece you'll see that he criticises the
characterisation of the physical environment as encoding 'information'
independent of interpretation. This objection can be dealt with by the
reversal, the consequence being that physics itself comes to be understood
emergently as a complex and tightly constrained set of epistemic phenomena
or appearances. But we are then more justified in claiming that such
appearances can - indeed ubiquitously - be understood as 'encoding'
information and computation, including that associated (albeit indirectly)
with mind.

But to make the final step to mind, we need to mark 'something' out that,
whilst being inextricably 'entangled' with computation and its physical
manifestation, can at the same time be understood to be categorically
distinct from either. The suggestion is then that this something - this
species of categorical distinction, if you like - is to be found at the
juncture of an ontology - in this case the elements and processes of
computation - with its justified or 'truthful' epistemic derivatives, or
'interpretation'. IOW, the idea is that as (the possessor of) a mind, I
apprehend, as a species of tautological truth, epistemic phenomena entailed
or implied by the self-referential - though 'unconscious' - formal
descriptions that carry or track 

Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread Bruce Kellett

On 11/01/2018 9:09 am, Brent Meeker wrote:

On 1/10/2018 11:23 AM, David Nyman wrote:


Searle makes his position even more vulnerable by arguing that not 
only are neural activity and the experience of perception the same 
but that the former /causes/ the latter just as water is “caused” by 
H2O. This is desperate stuff: one could hardly expect some thing A 
to cause some thing B with which it is identical, because nothing can 
cause itself. In any event, the bottom line is that the molecules of 
H2O and the wet stuff that is water are two /appearances/ of the 
same thing — two conscious takes on the same stuff. They cannot be 
analogous to, respectively, that which supposedly /causes/ conscious 
experiences (neural impulses) and conscious experiences /themselves/."


Here's a link to the original piece:

https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves


I understand those criticisms of Searle and they may be right. But 
note that arithmetic and computation are nothing like experience 
either and all the same criticisms apply to CTM; something that goes 
unchallenged on this list because CTM is always taken as a given and 
Bruno says, "We must assume something to begin our theorizing."  But 
Searle would reject CTM (and has) for exactly the same criticisms 
directed against him above.


I think Tallis's argument against Searle is entirely specious. Searle 
would appear to be arguing that the properties of H2O molecules, such as 
the large dipole moment, etc, cause the observed bulk properties of 
water. It is hard to fault such an argument -- what else could possibly 
lie behind the bulk properties of water other than the properties of its 
constituent molecules? By analogy, then, the bulk properties of a brain, 
such as consciousness, thought, memory, and so on, arise from the 
properties of the individual neurons and other structures that make up 
the physical brain -- mind supervenes on the physical brain.


As you say, Tallis's argument can be raised against the CTM account -- 
after all, consciousness is not the same thing as a computation. 
However, it can be argued that consciousness arises from, or supervenes 
on, the properties of some underlying computations. An argument against 
Searle is thus an argument against CTM, or, in fact, any other 
reductionist account of consciousness. Tallis simply makes consciousness 
some magical, inexplicable thing.


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread Brent Meeker



On 1/10/2018 11:23 AM, David Nyman wrote:


Searle makes his position even more vulnerable by arguing that not 
only are neural activity and the experience of perception the same but 
that the former /causes/ the latter just as water is “caused” by 
H2O. This is desperate stuff: one could hardly expect some thing A to 
cause some thing B with which it is identical, because nothing can 
cause itself. In any event, the bottom line is that the molecules of 
H2O and the wet stuff that is water are two /appearances/ of the 
same thing — two conscious takes on the same stuff. They cannot be 
analogous to, respectively, that which supposedly /causes/ conscious 
experiences (neural impulses) and conscious experiences /themselves/."


Here's a link to the original piece:

https://www.thenewatlantis.com/publications/what-neuroscience-cannot-tell-us-about-ourselves


I understand those criticisms of Searle and they may be right.  But note 
that arithmetic and computation are nothing like experience either and 
all the same criticisms apply to CTM; something that goes unchallenged 
on this list because CTM is always taken as a given and Bruno says, "We 
must assume something to begin our theorizing."  But Searle would reject 
CTM (and has) for exactly the same criticisms directed against him above.


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread David Nyman
On 10 Jan 2018 13:48, "Bruno Marchal"  wrote:


On 7 Jan 2018, at 12:42, David Nyman  wrote:

On 7 January 2018 at 09:52, Bruno Marchal  wrote:

>
> On 6 Jan 2018, at 21:09, David Nyman  wrote:
>
>
>
> On 6 Jan 2018 19:46, "Bruno Marchal"  wrote:
>
>
> On 5 Jan 2018, at 21:04, David Nyman  wrote:
>
>
>
> On 5 Jan 2018 19:27, "Bruno Marchal"  wrote:
>
>
> On 4 Jan 2018, at 21:07, David Nyman  wrote:
>
>
>
> On 4 Jan 2018 18:16, "Bruno Marchal"  wrote:
>
>
> On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:
>
> On 4 January 2018 at 11:55, Bruno Marchal  wrote:
>
>>
>> > On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
>> >
>> >
>> >
>> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>> >>
>> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>> >>
>> >>>
>> >>>
>> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>  Now, it
>>  could be that intelligent behavior implies mind, but as you yourself
>>  argue, we don't know that.
>> >>>
>> >>> Isn't this at the crux of the scientific study of the mind? There
>> seemed to be universal agreement on this list that a philosophical zombie
>> is impossible.
>> >>
>> >>
>> >> Precisely: that a philosophical zombie is impossible when we assume
>> Mechanism.
>> >
>> > But the consensus here has been that a philosophical zombie is
>> impossible because it exhibits intelligent behavior.
>>
>> Well, I think the consensus here is that computationalism is far more
>> plausible than non-computationalism.
>> Computationalism makes zombies nonsensical.
>>
>>
>>
>> >
>> >> Philosophical zombie remains logically consistent for a non
>> computationalist theory of mind.
>> >
>> > It's logically consistent with a computationalist theory of brain. It
>> is only inconsistent with a computationalist theory of mind because you
>> include as an axiom that computation produces mind.  One can as well say that
>> intelligent behavior entails mind as an axiom of physicalism.  Logic is a
>> very cheap standard for theories to meet.
>>
>> At first sight, zombies seem consistent with computationalism, but the
>> notion of zombies requires the idea that we attribute mind to bodies
>> (having the right behavior). But with computationalism, mind is never
>> associated to a body, but only to the person having the infinity of
>> (similar enough) bodies' relative representations in arithmetic. There are no
>> “real bodies” or “ontological bodies”, so the notion of zombie becomes
>> senseless. The consciousness is associated with the person, which is never
>> determined by one body.
>>
>
> ​So in the light of what you say above, does it then follow that the MGA
> implies (assuming comp) that a physical system does *not* in fact implement
> a computation in the relevant sense?
>
>
>
> The physical world has to be able to implement the computation in the
> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD
> “act of faith”.
>
> The physical world is a persistent illusion. It has to be enough
> persistent that you wake up at the hospital with the digital brain.
>
>
>
> I ask this because you say mind is *never* associated with a body, but
> mind *is* associated with computation via the epistemic consequences of
> universality.
>
>
>
> A (conscious) third person can associate a mind/person to a body that he
> perceives. It is polite.
>
> The body perceived by that third person is itself a construction of its
> own mind, and with computationalism (but also with QM), we know that such a
> body is an (evolving) map of where, and in which states, we could find, say,
> the electron and proton of that body, and such a snapshot is only a
> computational state among infinitely many others which would work as well,
> with respect to the relevant computations which brought its conscious state.
> Now, the conscious first person cannot associate itself to any particular
> body or computation.
>
> Careful: sometimes I say that a machine can think, or maybe (I usually
> avoid) that a computation can think or be conscious. It always means,
> respectively, that a machine can make a person capable of manifesting
> itself relatively to you. But the machine and the body are local relative
> representations.
>
> A machine cannot think, and a computation (which is the (arithmetical)
> dynamic 3p view of the sequence of the relative static machine/state)
> cannot think. Only a (first) person can think, and to use that thinking
> with respect to another person, a machine is handy, like brain or a
> physical computer.
>
> The person is in heaven (arithmetical truth) and on earth (sigma_1
> arithmetical truth), simultaneously. But this belongs to G*, and I should
> stay mute, or insist that we are in the “after-act-of-faith” position of
> the one betting that comp is true, 

Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread Bruno Marchal
On 7 Jan 2018, at 12:42, David Nyman wrote:
On 7 January 2018 at 09:52, Bruno Marchal wrote:
On 6 Jan 2018, at 21:09, David Nyman wrote:
On 6 Jan 2018 19:46, "Bruno Marchal" wrote:
On 5 Jan 2018, at 21:04, David Nyman wrote:
On 5 Jan 2018 19:27, "Bruno Marchal" wrote:
On 4 Jan 2018, at 21:07, David Nyman wrote:
On 4 Jan 2018 18:16, "Bruno Marchal" wrote:
On Jan 4, 2018, at 1:22 PM, David Nyman wrote:
On 4 January 2018 at 11:55, Bruno Marchal wrote:
> On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
>
>
>
> On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>>
>> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>>
>>>
>>>
>>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
 Now, it
 could be that intelligent behavior implies mind, but as you yourself
 argue, we don't know that.
>>>
>>> Isn't this at the crux of the scientific study of the mind? There seemed to be universal agreement on this list that a philosophical zombie is impossible.
>>
>>
>> Precisely: that a philosophical zombie is impossible when we assume Mechanism.
>
> But the consensus here has been that a philosophical zombie is impossible because it exhibits intelligent behavior.

Well, I think the consensus here is that computationalism is far more plausible than non-computationalism.
Computationalism makes zombies nonsensical.



>
>> Philosophical zombie remains logically consistent for a non computationalist theory of mind.
>
> It's logically consistent with a computationalist theory of brain. It is only inconsistent with a computationalist theory of mind because you include as an axiom that computation produces mind.  One can as well say that intelligent behavior entails mind as an axiom of physicalism.  Logic is a very cheap standard for theories to meet.

>> At first sight, zombies seem consistent with computationalism, but the notion of zombies requires the idea that we attribute mind to bodies (having the right behavior). But with computationalism, mind is never associated to a body, but only to the person having the infinity of (similar enough) bodies' relative representations in arithmetic. There are no “real bodies” or “ontological bodies”, so the notion of zombie becomes senseless. The consciousness is associated with the person, which is never determined by one body.

> So in the light of what you say above, does it then follow that the MGA implies (assuming comp) that a physical system does *not* in fact implement a computation in the relevant sense?

The physical world has to be able to implement the computation in the relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD “act of faith”.

The physical world is a persistent illusion. It has to be persistent enough that you wake up at the hospital with the digital brain.

> I ask this because you say mind is *never* associated with a body, but mind *is* associated with computation via the epistemic consequences of universality.

A (conscious) third person can associate a mind/person to a body that he perceives. It is polite.

The body perceived by that third person is itself a construction of its own mind, and with computationalism (but also with QM), we know that such a body is an (evolving) map of where, and in which states, we could find, say, the electron and proton of that body, and such a snapshot is only a computational state among infinitely many others which would work as well, with respect to the relevant computations which brought its conscious state. Now, the conscious first person cannot associate itself to any particular body or computation.

Careful: sometimes I say that a machine can think, or maybe (I usually avoid) that a computation can think or be conscious. It always means, respectively, that a machine can make a person capable of manifesting itself relatively to you. But the machine and the body are local relative representations.

A machine cannot think, and a computation (which is the (arithmetical) dynamic 3p view of the sequence of the relative static machine/states) cannot think. Only a (first) person can think, and to use that thinking with respect to another person, a machine is handy, like a brain or a physical computer.

The person is in heaven (arithmetical truth) and on earth (sigma_1 arithmetical truth), simultaneously. But this belongs to G*, and I should stay mute, or insist that we are in the “after-act-of-faith” position of the one betting that comp is true, and … assuming comp is true. It is subtle to talk on those things, and it is important to admit that we don’t know the truth (or we do get inconsistent and fall into the theological trap).

> If so, according to comp, it would follow that (the material appearance and behaviour of) a body cannot be considered *causally* relevant to the computation-mind polarity,

Yes, that is true, with respect 

Re: What falsifiability tests has computationalism passed?

2018-01-10 Thread Bruno Marchal

> On 7 Jan 2018, at 19:35, Brent Meeker  wrote:
> 
> 
> 
> On 1/7/2018 1:58 AM, Bruno Marchal wrote:
>> 
>>> On 6 Jan 2018, at 20:39, Brent Meeker wrote:
>>> 
>>> 
>>> 
>>> On 1/6/2018 2:11 AM, Bruno Marchal wrote:
> Is the Mars Rover conscious?
 
 Probably not, because, despite the fact that it runs a conscious universal machine, Mars 
 Rover itself might not be universal, and still less Löbian. I should see 
 the program to be sure of this, of course. But it does not implement 
 self-reference, from what I have heard about it.
>>> 
>>> But it does reference, and report, on its internal state and its position.  
>>> So how much self-reference constitutes consciousness?
>> 
>> 
>> The “complete one”, which exists by the second recursion theorem of Kleene. 
>> It must be a number e such that phi_e (x, t, v, …)  = F(e, x, t, v).
> 
> Isn't that equivalent to knowing which machine it is?


You need the full correct 3p self-reference, but it is not enough to have (1p) 
consciousness. You need a reality/semantics/truth. That is why we are happy 
that incompleteness is reflected, in the mind of a Löbian machine, as the 
difference between []p and ([]p & p).

A case can be made that []p & <>t would be enough, but that is not valid. After 
all, “[]p & <>t” is entirely 3p describable. You need the “true(p)”, which is 
not definable by the machine, but is emulable by each individual sentence p, 
from the machine’s pov.

The recursion theorem, like Gödel’s diagonal lemma, gives only the 3p 
self-reference. To get consciousness “borrowable” by the machine, you need the 
notion of truth, and that one will always appear non-definable, mysterious, 
“divine”, transcendental, from the machine’s 1p personal view.
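
To make the 3p half of that concrete, here is a minimal illustration of the
Kleene fixed point (hypothetical Python, written for this reply, not from
anyone's actual code): given any computable F(e, x), the classical quine
construction yields a program whose behaviour on input x is F applied to its
own listing. As said above, nothing in it touches truth; it is pure 3p
self-reference.

def F(e, x):
    # e: the source text of the running program; x: its ordinary input.
    # F just measures the code here, but any computable F would do.
    return "my source is %d characters long; input was %r" % (len(e), x)

# A template that, instantiated with its own text, rebuilds that text
# and hands it to F. The resulting string 'source' is a fixed point:
# executing it regenerates 'source' exactly and prints F(source, 42).
template = 'def F(e, x):\n    return "my source is %%d characters long; input was %%r" %% (len(e), x)\n\ntemplate = %r\nsource = template %% template\nprint(F(source, 42))\n'
source = template % template
print(F(source, 42))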

The Löbian machine can already say all this, because it reflects fully its own 
incompleteness. But its consciousness remains a mystery from its personal pov, 
because the “true” part of its consciousness remains non-definable, although 
“pointable” in a context where such truth makes sense, which of course is only 
something we can ourselves hope for (like when saying “yes” to the doctor).

Hope this helps; it is a complex matter.

Bruno




> 
> Brent
> 
>> The internal reports have to be done through such self-reference, or the 
>> (3p) identity of the machine is absent.
>> 
>> 
>>>   In your reply to Bruce you said that it's not a particular computation 
>>> that is consciousness, it's a class of machines that are capable of 
>>> consciousness.  But it's not clear to me whether this equates consciousness 
>>> with reflective self-awareness, or just self-awareness, or just awareness.
>> 
>> I use all those term as synonymous by default. Sometimes awareness is used 
>> in a weaker sense, but I made it clear if that is the case.
>> 
>> Bruno
>> 
>> 
>> 
>> 
>>> 
>>> Brent

Re: What falsifiability tests has computationalism passed?

2018-01-07 Thread Brent Meeker



On 1/7/2018 1:58 AM, Bruno Marchal wrote:


On 6 Jan 2018, at 20:39, Brent Meeker wrote:




On 1/6/2018 2:11 AM, Bruno Marchal wrote:

Is the Mars Rover conscious?


Probably not, because, despite the fact that it runs a conscious universal machine, 
Mars Rover itself might not be universal, and still less Löbian. I 
should see the program to be sure of this, of course. But it does 
not implement self-reference, from what I have heard about it.


But it does reference, and report, on its internal state and its 
position.  So how much self-reference constitutes consciousness?



The “complete one”, which exists by the second recursion theorem of 
Kleene. It must be a number e such that phi_e(x, t, v, …) = F(e, x, t, v).


Isn't that equivalent to knowing which machine it is?

Brent

The internal reports have to be done through such self-reference, or 
the (3p) identity of the machine is absent.



  In your reply to Bruce you said that it's not a particular 
computation that is consciousness, it's a class of machines that are 
capable of consciousness.  But it's not clear to me whether this 
equates consciousness with reflective self-awareness, or just 
self-awareness, or just awareness.


I use all those terms as synonymous by default. Sometimes awareness is 
used in a weaker sense, but I made it clear if that is the case.


Bruno






Brent



Re: What falsifiability tests has computationalism passed?

2018-01-07 Thread David Nyman
On 7 January 2018 at 09:52, Bruno Marchal  wrote:

>
> On 6 Jan 2018, at 21:09, David Nyman  wrote:
>
>
>
> On 6 Jan 2018 19:46, "Bruno Marchal"  wrote:
>
>
> On 5 Jan 2018, at 21:04, David Nyman  wrote:
>
>
>
> On 5 Jan 2018 19:27, "Bruno Marchal"  wrote:
>
>
> On 4 Jan 2018, at 21:07, David Nyman  wrote:
>
>
>
> On 4 Jan 2018 18:16, "Bruno Marchal"  wrote:
>
>
> On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:
>
> On 4 January 2018 at 11:55, Bruno Marchal  wrote:
>
>>
>> > On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
>> >
>> >
>> >
>> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>> >>
>> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>> >>
>> >>>
>> >>>
>> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>  Now, it
>>  could be that intelligent behavior implies mind, but as you yourself
>>  argue, we don't know that.
>> >>>
>> >>> Isn't this at the crux of the scientific study of the mind? There
>> seemed to be universal agreement on this list that a philosophical zombie
>> is impossible.
>> >>
>> >>
>> >> Precisely: that a philosophical zombie is impossible when we assume
>> Mechanism.
>> >
>> > But the consensus here has been that a philosophical zombie is
>> impossible because it exhibits intelligent behavior.
>>
>> Well, I think the consensus here is that computationalism is far more
>> plausible than non-computationalism.
>> Computationalism makes zombies nonsensical.
>>
>>
>>
>> >
>> >> A philosophical zombie remains logically consistent for a non
>> computationalist theory of mind.
>> >
>> > It's logically consistent with a computationalist theory of brain. It
>> is only inconsistent with a computationalist theory of mind because we
>> include as an axiom that computation produces mind.  One can as well say
>> that intelligent behavior entails mind as an axiom of physicalism.  Logic
>> is a very cheap standard for theories to meet.
>>
>> At first sight, zombies seem consistent with computationalism, but the
>> notion of zombies requires the idea that we attribute mind to bodies
>> (having the right behavior). But with computationalism, mind is never
>> associated with a body, only with the person having the infinity of
>> (similar enough) bodies' relative representations in arithmetic. There are
>> no “real bodies” or “ontological bodies”, so the notion of zombie becomes
>> senseless. Consciousness is associated with the person, which is never
>> determined by any one body.
>>
>
> ​So in the light of what you say above, does it then follow that the MGA
> implies (assuming comp) that a physical system does *not* in fact implement
> a computation in the relevant sense?
>
>
>
> The physical world has to be able to implement the computation in the
> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD
> “act of faith”.
>
> The physical world is a persistent illusion. It has to be persistent
> enough that you wake up at the hospital with the digital brain.
>
>
>
> I ask this because you say mind is *never* associated with a body, but
> mind *is* associated with computation via the epistemic consequences of
> universality.
>
>
>
> A (conscious) third person can associate a mind/person with a body that he
> perceives. It is polite.
>
> The body perceived by that third person is itself a construction of its
> own mind, and with computationalism (but also with QM), we know that such a
> body is an (evolving) map of where, and in which states, we could find, say,
> the electrons and protons of that body; such a snapshot is only a
> computational state among infinitely many others which would work as well,
> with respect to the relevant computations which brought about its conscious
> state. Now, the conscious first person cannot associate itself with any
> particular body or computation.
>
> Careful: sometimes I say that a machine can think, or maybe (I usually
> avoid) that a computation can think or be conscious. It always means,
> respectively, that a machine can make a person capable of manifesting
> itself relatively to you. But the machine and the body are local relative
> representations.
>
> A machine cannot think, and a computation (which is the (arithmetical)
> dynamic 3p view of the sequence of the relative static machine/state)
> cannot think. Only a (first) person can think, and to use that thinking
> with respect to another person, a machine is handy, like a brain or a
> physical computer.
>
> The person is in heaven (arithmetical truth) and on earth (sigma_1
> arithmetical truth), simultaneously. But this belongs to G*, and I should
> stay mute, or insist that we are in the “after-act-of-faith” position of
> the one betting that comp is true, and … assuming comp is true. It is
> subtle to talk about those things, and it is important to admit that we
> don’t know the truth (or we

Re: What falsifiability tests has computationalism passed?

2018-01-07 Thread Bruno Marchal

> On 6 Jan 2018, at 20:39, Brent Meeker  wrote:
> 
> 
> 
> On 1/6/2018 2:11 AM, Bruno Marchal wrote:
>>> Is the Mars Rover conscious?
>> 
>> Probably not, because, despite running a conscious universal machine, the Mars 
>> Rover itself might not be universal, and still less Löbian. I would have to see 
>> the program to be sure of this, of course. But it does not implement 
>> self-reference, from what I have heard about it.
> 
> But it does reference, and report on, its internal state and its position.  
> So how much self-reference constitutes consciousness?


The “complete one”, which exists by the second recursion theorem of Kleene. It 
must be a number e such that phi_e(x, t, v, …) = F(e, x, t, v).
The internal reports have to be done through such self-reference, or the (3p) 
identity of the machine is absent.
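A toy illustration of that fixed point, in Python: the recursion theorem
guarantees, for any computable F, a program e with phi_e(x, ...) = F(e, x, ...),
i.e. a program that can use its own description. In this sketch, source
introspection stands in for the index e that Kleene's construction obtains by
quining; the names F and machine are illustrative only, not part of any
formalism.

# A "machine" whose internal reports are routed through its own description.
import inspect
import sys

def F(e, x):
    # Any computable function of (self-description e, input x): here an
    # "internal report" that mentions the machine's own code.
    return f"I am a program of {len(e)} characters; my input was {x!r}"

def machine(x):
    e = inspect.getsource(sys.modules[__name__])  # this program's own text
    return F(e, x)

if __name__ == "__main__":
    print(machine("battery low"))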


>   In your reply to Bruce you said that it's not a particular computation that 
> is consciousness, it's a class of machines that are capable of consciousness.  
> But it's not clear to me whether this equates consciousness with reflective 
> self-awareness, or just self-awareness, or just awareness.

I use all those terms as synonyms by default. Sometimes awareness is used in a 
weaker sense, but I make it clear when that is the case.

Bruno




> 
> Brent
> 


Re: What falsifiability tests has computationalism passed?

2018-01-07 Thread Bruno Marchal

> On 6 Jan 2018, at 21:09, David Nyman wrote:
> 
> 
> 
> On 6 Jan 2018 19:46, "Bruno Marchal" wrote:
> 
>> On 5 Jan 2018, at 21:04, David Nyman wrote:
>> 
>> 
>> 
>> On 5 Jan 2018 19:27, "Bruno Marchal" wrote:
>> 
>>> On 4 Jan 2018, at 21:07, David Nyman wrote:
>>> 
>>> 
>>> 
>>> On 4 Jan 2018 18:16, "Bruno Marchal" wrote:
>>> 
 On Jan 4, 2018, at 1:22 PM, David Nyman wrote:
 
 On 4 January 2018 at 11:55, Bruno Marchal wrote:
 
 > On Jan 3, 2018, at 10:57 PM, Brent Meeker wrote:
 >
 >
 >
 > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
 >>
 >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
 >>
 >>>
 >>>
 >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
  Now, it
  could be that intelligent behavior implies mind, but as you yourself
  argue, we don't know that.
 >>>
 >>> Isn't this at the crux of the scientific study of the mind? There 
 >>> seemed to be universal agreement on this list that a philosophical 
 >>> zombie is impossible.
 >>
 >>
 >> Precisely: that a philosophical zombie is impossible when we assume 
 >> Mechanism.
 >
 > But the consensus here has been that a philosophical zombie is 
 > impossible because it exhibits intelligent behavior.
 
 Well, I think the consensus here is that computationalism is far more 
 plausible than non-computationalism.
 Computationalism makes zombies nonsensical.
 
 
 
 >
 >> A philosophical zombie remains logically consistent for a non 
 >> computationalist theory of mind.
 >
 > It's logically consistent with a computationalist theory of brain. It is 
 > only inconsistent with a computationalist theory of mind because we 
 > include as an axiom that computation produces mind.  One can as well say 
 > that intelligent behavior entails mind as an axiom of physicalism.  Logic 
 > is a very cheap standard for theories to meet.
 
 At first sight, zombies seem consistent with computationalism, but the 
 notion of zombies requires the idea that we attribute mind to bodies 
 (having the right behavior). But with computationalism, mind is never 
 associated with a body, only with the person having the infinity of 
 (similar enough) bodies' relative representations in arithmetic. There are 
 no “real bodies” or “ontological bodies”, so the notion of zombie becomes 
 senseless. Consciousness is associated with the person, which is never 
 determined by any one body.
 
 ​So in the light of what you say above, does it then follow that the MGA 
 implies (assuming comp) that a physical system does *not* in fact 
 implement a computation in the relevant sense?
>>> 
>>> 
>>> The physical world has to be able to implement the computation in the 
>>> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD 
>>> “act of faith”.
>>> 
>>> The physical world is a persistent illusion. It has to be persistent enough 
>>> that you wake up at the hospital with the digital brain.
>>> 
>>> 
>>> 
 I ask this because you say mind is *never* associated with a body, but 
 mind *is* associated with computation via the epistemic consequences of 
 universality.
>>> 
>>> 
>>> A (conscious) third person can associate a mind/person with a body that he 
>>> perceives. It is polite. 
>>> 
>>> The body perceived by that third person is itself a construction of its own 
>>> mind, and with computationalism (but also with QM), we know that such a 
>>> body is an (evolving) map of where, and in which states, we could find, say, 
>>> the electrons and protons of that body; such a snapshot is only a 
>>> computational state among infinitely many others which would work as well, 
>>> with respect to the relevant computations which brought about its conscious state.
>>> Now, the conscious first person cannot associate itself with any particular 
>>> body or computation.
>>> 
>>> Careful: sometimes I say that a machine can think, or maybe (I usually 
>>> avoid) that a computation can think or be conscious. It always means, 
>>> respectively, that a machine can make a person capable of manifesting 
>>> itself relatively to you. But the machine and the body are local relative 
>>> representations.
>>> 
>>> A machine cannot think, and a computation (which is the (arithmetical) 
>>> dynamic 3p view of the sequence of the relative static machine/state) 
>>> cannot think. Only a (first) person can think, and to use that 

Re: What falsifiability tests has computationalism passed?

2018-01-07 Thread Bruno Marchal

> On 5 Jan 2018, at 01:55, Bruce Kellett  wrote:
> 
> On 4/01/2018 11:00 pm, Bruno Marchal wrote:
>> Yes, in my Conscience and Mechanism appendices, or in the appendix of the 
>> Lille thesis. I translated a Bell inequality into arithmetic, but cannot 
>> test it due to its intractability + my own incompetence, of course. But Z1* 
>> introduces tons of nested modal boxes, making things hard to verify for 
>> reasonably complex formulas.
> 
> Bell-like inequalities are easy to obtain -- in classical physics as well as 
> in QM. The hard thing is to show that your theory requires that they be 
> experimentally violated.


Maybe study them, and one could try to show that they are not. But the 
similarity with quantum logic, and the modal translation in arithmetic, would 
make this look like a number conspiracy. Anyway, they are well determined, so, 
as you correctly point out, it is a matter of work. Not an easy one, but a 
mandatory one for a (classical, Platonist) computationalist.

Bruno
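To see concretely how easy the inequalities themselves are, here is a minimal
numerical sketch in Python (an illustration only, assuming the standard CHSH
setup; it is not the arithmetical translation discussed above): deterministic
local strategies are bounded by 2, while the singlet correlation
E(a, b) = -cos(a - b) reaches 2*sqrt(2).

import itertools, math

# Every deterministic local strategy assigns an outcome +/-1 to each setting.
classical_max = max(
    a0*b0 + a0*b1 + a1*b0 - a1*b1
    for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4)
)
print(classical_max)   # 2 -- the Bell/CHSH bound for local hidden variables

# Quantum singlet correlations at the optimal measurement angles.
E = lambda a, b: -math.cos(a - b)
a0, a1, b0, b1 = 0.0, math.pi/2, math.pi/4, -math.pi/4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))          # 2.828... = 2*sqrt(2), violating the classical bound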




> 
> Bruce
> 


Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Russell Standish
On Sun, Jan 07, 2018 at 04:03:19PM +1100, Bruce Kellett wrote:
> 
> That relies on a particular classification of 'animals'. I consider that
> many birds and fish (as well as most mammals) could well be conscious.
> Insects, arachnids, and nematodes might well be different -- but they are
> 'animals' only in the most general sense that they are alive and not plants.

They are animals in the normal scientific sense of the term. There are
many things that are alive, but are neither animals nor plants. Fungi
for example, or the various sorts of bacteria.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruce Kellett

On 7/01/2018 3:33 pm, Russell Standish wrote:

On Sat, Jan 06, 2018 at 04:30:35PM +1100, Bruce Kellett wrote:

On 6/01/2018 4:15 pm, Russell Standish wrote:


Other things seem possible, such as the
extraordinary unlikelihood that all animals can be conscious.

That is an extraordinary claim, and sufficient in itself to falsify your
theory.

Yes it is an extraordinary result, but I disagree that it has been falsified.


I know as a fact that my cat is conscious, and that my various dogs
over the years have also been conscious.

Cats and dogs are not "all animals". In fact they're members of an
extremely rare phylum that we also belong to, namely Chordata, which
are well and truly outnumbered by Arthropoda, not to mention Nematoda,
which outnumbers Arthropoda many to one.


That relies on a particular classification of 'animals'. I consider that 
many birds and fish (as well as most mammals) could well be conscious. 
Insects, arachnids, and nematodes might well be different -- but they 
are 'animals' only in the most general sense that they are alive and not 
plants.



If one applies the same argument to the statement "All mammals are
conscious", one cannot reject it at even the usual level of
significance (p=0.05). So I can well believe cats and dogs are
conscious, and it is not incompatible with my theory.


You refer to the "mirror test" in
your book -- an animal is conscious (is self-aware) if it recognizes itself
in a mirror. I agree that my cat does not recognize itself in the mirror --
but that only goes to show that this is a stupid test for consciousness.


Self awareness does appear to be essential for consciousness, and the
mirror test is operationally our best test at present.


I would doubt that it is the best test available at the moment. 
Awareness of a distinction between the self and the surroundings might 
be better, though probably in need of some refinement. That test would 
be satisfied well down the order of complexity of central nervous systems.


Bruce


But I would
agree that the mirror test is flawed, and needs improvement to handle
different sensory modalities that different species work with.




Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Russell Standish
On Sat, Jan 06, 2018 at 04:30:35PM +1100, Bruce Kellett wrote:
> On 6/01/2018 4:15 pm, Russell Standish wrote:
> 
> > Other things seem possible, such as the
> > extraordinary unlikelihood that all animals can be conscious.
> 
> That is an extraordinary claim, and sufficient in itself to falsify your
> theory.

Yes it is an extraordinary result, but I disagree that it has been falsified.

> I know as a fact that my cat is conscious, and that my various dogs
> over the years have also been conscious.

Cats and dogs are not "all animals". In fact they're members of an
extremely rare phylum that we also belong to, namely Chordata, which
are well and truly outnumbered by Arthropoda, not to mention Nematoda,
which outnumbers Arthropoda many to one.

If one applies the same argument to the statement "All mammals are
conscious", one cannot reject it at even the usual level of
significance (p=0.05). So I can well believe cats and dogs are
conscious, and it is not incompatible with my theory.

> You refer to the "mirror test" in
> your book -- an animal is conscious (is self-aware) if it recognizes itself
> in a mirror. I agree that my cat does not recognize itself in the mirror --
> but that only goes to show that this is a stupid test for consciousness.
>

Self awareness does appear to be essential for consciousness, and the
mirror test is operationally our best test at present. But I would
agree that the mirror test is flawed, and needs improvement to handle
different sensory modalities that different species work with.

> Bruce
> 

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread John Clark
On Sat, Jan 6, 2018 at 1:56 PM, Bruno Marchal  wrote:

​> ​
> The existence of the computations in arithmetic is a fact of the same type
> as the fact that 6 is not prime, just lengthier to prove.
>

Proof is not the same as truth, as a logician you should know that. I don't
know what system you're using but if you can prove that arithmetic without
the help of matter or energy can perform calculations, even something as
simple as 2+2, then at least one of the axioms used in your logical system
is invalid because you just proved something that is demonstrably untrue.


​>> ​
>> however mathematics by itself can't do matter.
>
>
> ​>​
> That’s true too.
>

​Then matter must be more fundamental ​than mathematics.


​> ​
> The problem is that if you make matter primary, or fundamental, or in need of
> being assumed, you have to explain how it makes some computations feel more
> real to some universal numbers and not to others,
>

​To explain that I'd have to explain why some computations lead to
intelligent behavior and some don't, and if I could do that I'd be the
richest man on the face of the Earth. Spoiler alert: I'm not.

John K Clark  ​



Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread David Nyman
On 6 Jan 2018 19:46, "Bruno Marchal"  wrote:


On 5 Jan 2018, at 21:04, David Nyman  wrote:



On 5 Jan 2018 19:27, "Bruno Marchal"  wrote:


On 4 Jan 2018, at 21:07, David Nyman  wrote:



On 4 Jan 2018 18:16, "Bruno Marchal"  wrote:


On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:

On 4 January 2018 at 11:55, Bruno Marchal  wrote:

>
> > On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
> >
> >
> >
> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
> >>
> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
> >>
> >>>
> >>>
> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>  Now, it
>  could be that intelligent behavior implies mind, but as you yourself
>  argue, we don't know that.
> >>>
> >>> Isn't this at the crux of the scientific study of the mind? There
> seemed to be universal agreement on this list that a philosophical zombie
> is impossible.
> >>
> >>
> >> Precisely: that a philosophical zombie is impossible when we assume
> Mechanism.
> >
> > But the consensus here has been that a philosophical zombie is
> impossible because it exhibits intelligent behavior.
>
> Well, I think the consensus here is that computationalism is far more
> plausible than non-computationalism.
Computationalism makes zombies nonsensical.
>
>
>
> >
> >> A philosophical zombie remains logically consistent for a non
> computationalist theory of mind.
> >
> > It's logically consistent with a computationalist theory of brain. It is
> only inconsistent with a computationalist theory of mind because we
> include as an axiom that computation produces mind.  One can as well say
> that intelligent behavior entails mind as an axiom of physicalism.  Logic is
> a very cheap standard for theories to meet.
>
> At first sight, zombies seem consistent with computationalism, but the
> notion of zombies requires the idea that we attribute mind to bodies
> (having the right behavior). But with computationalism, mind is never
> associated with a body, only with the person having the infinity of
> (similar enough) bodies' relative representations in arithmetic. There are no
> “real bodies” or “ontological bodies”, so the notion of zombie becomes
> senseless. Consciousness is associated with the person, which is never
> determined by any one body.
>

​So in the light of what you say above, does it then follow that the MGA
implies (assuming comp) that a physical system does *not* in fact implement
a computation in the relevant sense?



The physical world has to be able to implement the computation in the
relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD
“act of faith”.

The physical world is a persistent illusion. It has to be persistent enough
that you wake up at the hospital with the digital brain.



I ask this because you say mind is *never* associated with a body, but mind
*is* associated with computation via the epistemic consequences of
universality.



A (conscious) third person can associate a mind/person with a body that he
perceives. It is polite.

The body perceived by that third person is itself a construction of its own
mind, and with computationalism (but also with QM), we know that such a
body is an (evolving) map of where, and in which states, we could find, say,
the electrons and protons of that body; such a snapshot is only a
computational state among infinitely many others which would work as well,
with respect to the relevant computations which brought about its conscious
state. Now, the conscious first person cannot associate itself with any
particular body or computation.

Careful: sometimes I say that a machine can think, or maybe (I usually
avoid) that a computation can think or be conscious. It always means,
respectively, that a machine can make a person capable of manifesting
itself relatively to you. But the machine and the body are local relative
representations.

A machine cannot think, and a computation (which is the (arithmetical)
dynamic 3p view of the sequence of the relative static machine/state)
cannot think. Only a (first) person can think, and to use that thinking
with respect to another person, a machine is handy, like a brain or a
physical computer.

The person is in heaven (arithmetical truth) and on earth (sigma_1
arithmetical truth), simultaneously. But this belongs to G*, and I should
stay mute, or insist that we are in the “after-act-of-faith” position of
the one betting that comp is true, and … assuming comp is true. It is
subtle to talk about those things, and it is important to admit that we don’t
know the truth (or we become inconsistent and fall into the theological trap).



If so, according to comp, it would follow that (the material appearance and
behaviour of) a body cannot be considered *causally* relevant to the
computation-mind polarity,



Yes, that is true, with respect to the arithmetical truth (where there are
no bodies, 

Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 5 Jan 2018, at 21:04, David Nyman wrote:
> 
> 
> 
> On 5 Jan 2018 19:27, "Bruno Marchal" wrote:
> 
>> On 4 Jan 2018, at 21:07, David Nyman wrote:
>> 
>> 
>> 
>> On 4 Jan 2018 18:16, "Bruno Marchal" wrote:
>> 
>>> On Jan 4, 2018, at 1:22 PM, David Nyman wrote:
>>> 
>>> On 4 January 2018 at 11:55, Bruno Marchal wrote:
>>> 
>>> > On Jan 3, 2018, at 10:57 PM, Brent Meeker wrote:
>>> >
>>> >
>>> >
>>> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>>> >>
>>> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>>> >>
>>> >>>
>>> >>>
>>> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>>  Now, it
>>>  could be that intelligent behavior implies mind, but as you yourself
>>>  argue, we don't know that.
>>> >>>
>>> >>> Isn't this at the crux of the scientific study of the mind? There 
>>> >>> seemed to be universal agreement on this list that a philosophical 
>>> >>> zombie is impossible.
>>> >>
>>> >>
>>> >> Precisely: that a philosophical zombie is impossible when we assume 
>>> >> Mechanism.
>>> >
>>> > But the consensus here has been that a philosophical zombie is impossible 
>>> > because it exhibits intelligent behavior.
>>> 
>>> Well, I think the consensus here is that computationalism is far more 
>>> plausible than non-computationalism.
>>> Computationalism makes zombies nonsensical.
>>> 
>>> 
>>> 
>>> >
>>> >> A philosophical zombie remains logically consistent for a non 
>>> >> computationalist theory of mind.
>>> >
>>> > It's logically consistent with a computationalist theory of brain. It is 
>>> > only inconsistent with a computationalist theory of mind because we 
>>> > include as an axiom that computation produces mind.  One can as well say 
>>> > that intelligent behavior entails mind as an axiom of physicalism.  Logic 
>>> > is a very cheap standard for theories to meet.
>>> 
>>> At first sight, zombies seem consistent with computationalism, but the 
>>> notion of zombies requires the idea that we attribute mind to bodies 
>>> (having the right behavior). But with computationalism, mind is never 
>>> associated with a body, only with the person having the infinity of 
>>> (similar enough) bodies' relative representations in arithmetic. There are no 
>>> “real bodies” or “ontological bodies”, so the notion of zombie becomes 
>>> senseless. Consciousness is associated with the person, which is never 
>>> determined by any one body.
>>> 
>>> ​So in the light of what you say above, does it then follow that the MGA 
>>> implies (assuming comp) that a physical system does *not* in fact implement 
>>> a computation in the relevant sense?
>> 
>> 
>> The physical world has to be able to implement the computation in the 
>> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD “act 
>> of faith”.
>> 
>> The physical world is a persistent illusion. It has to be persistent enough 
>> that you wake up at the hospital with the digital brain.
>> 
>> 
>> 
>>> I ask this because you say mind is *never* associated with a body, but mind 
>>> *is* associated with computation via the epistemic consequences of 
>>> universality.
>> 
>> 
>> A (conscious) third person can associate a mind/person with a body that he 
>> perceives. It is polite. 
>> 
>> The body perceived by that third person is itself a construction of its own 
>> mind, and with computationalism (but also with QM), we know that such a body 
>> is an (evolving) map of where, and in which states, we could find, say, the 
>> electrons and protons of that body; such a snapshot is only a computational 
>> state among infinitely many others which would work as well, with respect 
>> to the relevant computations which brought about its conscious state.
>> Now, the conscious first person cannot associate itself with any particular 
>> body or computation.
>> 
>> Careful: sometimes I say that a machine can think, or maybe (I usually 
>> avoid) that a computation can think or be conscious. It always means, 
>> respectively, that a machine can make a person capable of manifesting itself 
>> relatively to you. But the machine and the body are local relative 
>> representations.
>> 
>> A machine cannot think, and a computation (which is the (arithmetical) 
>> dynamic 3p view of the sequence of the relative static machine/state) cannot 
>> think. Only a (first) person can think, and to use that thinking with 
>> respect to another person, a machine is handy, like a brain or a physical 
>> computer.
>> 
>> The person is in heaven (arithmetical truth) and on earth (sigma_1 
>> arithmetical truth), simultaneously. But this belongs to G*, and I should 
>> stay mute, or insist that we are in the “after-act-of-faith” 

Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Brent Meeker



On 1/6/2018 2:11 AM, Bruno Marchal wrote:

Is the Mars Rover conscious?


Probably not, because, despite running a conscious universal machine, 
the Mars Rover itself might not be universal, and still less Löbian. I 
would have to see the program to be sure of this, of course. But it does not 
implement self-reference, from what I have heard about it.


But it does reference, and report on, its internal state and its 
position.  So how much self-reference constitutes consciousness?  In 
your reply to Bruce you said that it's not a particular computation that 
is consciousness, it's a class of machines that are capable of 
consciousness.  But it's not clear to me whether this equates 
consciousness with reflective self-awareness, or just self-awareness, or 
just awareness.


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 6 Jan 2018, at 06:06, Bruce Kellett  wrote:
> 
> On 6/01/2018 7:11 am, Bruno Marchal wrote:
>>> On 5 Jan 2018, at 04:22, Bruce Kellett  wrote:
>>> 
>>> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
> On Jan 4, 2018, at 12:50 PM, Bruce Kellett  
> wrote:
> 
> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>>> On 29/12/2017 10:14 am, Russell Standish wrote:
 This is computationalism - the idea that our human consciousness _is_
 a computation (and nothing but a computation).
>>> What distinguishes a conscious computation within the class of all 
>>> computations? After all, not all computations are conscious.
>> Universality seems enough.
> What is a universal computation? From what you say below, universality 
> appears to be a property of a machine, not of a computation.
 OK, universality is an attribute of a machine, relatively to some 
 universal machinery, like arithmetic or physics.
 
 
>> But just universality gives rise only to a highly non-standard, 
>> dissociative form of consciousness. It might correspond to the cosmic 
>> consciousness alluded to by people living through highly altered states of 
>> consciousness.
>> 
>> You need Löbianity to get *self-consciousness*, or reflexive 
>> consciousness. A machine is Löbian when its universality is knowable by 
>> it. Equivalently, when the machine is universal and can prove its own 
>> "Löb's formula": []([]p -> p) -> []p. Note that the second 
>> incompleteness theorem is the particular case with p = f (f = "0≠1").
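Spelling out that particular case in standard provability-logic notation (a
textbook step, added here for clarity; the box abbreviates "provable by the
machine"):

\Box(\Box p \to p) \to \Box p            % Löb's formula

Substituting p := \bot (that is, "0≠1"), and noting that \Box\bot \to \bot
is equivalent to \neg\Box\bot, the machine's consistency:

\Box(\neg\Box\bot) \to \Box\bot          % Gödel's second incompleteness theorem

So a machine that proves its own consistency proves "0≠1", i.e. it is
inconsistent.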
> Löbanity is a property of the machine, not of the computation.
 Yes. The same. I was talking about machines, or about the person supported 
 by those machines. No machine (conceived as a code, a number, or a physical 
 object) can ever be conscious or think. It is always a more abstract 
 notion, implemented through some machinery, which does the thinking.
 
 Similarly, a computation cannot be conscious, but it can support a person, 
 which is the one genuinely having the thinking or conscious attribute.
>>> The original suggestion by Russell was that "our human consciousness _is_ a 
>>> computation (and nothing but a computation).”
>> You can’t equate a first person notion with a third person notion. You can 
>> try to associate them, but consciousness will involve undefinable notions and 
>> reference to some notion of truth.
> 
> Russell clarifies his position to say that consciousness 'supervenes' on the 
> computation. But consciousness is not a purely first person concept -- I can 
> know that you are conscious by collecting evidence and making an inference. 
> You have to have a narrow interpretation of 'knowledge' to rule this out, and 
> such a narrow definition would rule out all scientific knowledge.


It makes it more modest, especially in metaphysics. But indeed, in philosophy 
of mind "knowledge" used to have a strong meaning, and science is better 
described as belief. Knowledge is never refutable, but beliefs are. In 
science, the theories are only ideas, made precise enough to be shown wrong.




> 
>>> You seem to be steering away from Russell's straightforward position.
>> I think you over-interpret Russell, who might have committed an “abuse of 
>> language”, forgetting that not everyone knows we have already discussed this. 
>> I don’t know. His theory mentions all infinite strings, that is, the reals, 
>> which are not Turing universal. His approach, which has things in common with 
>> the consequences of comp, is not per se comp driven.
> 
> The UD includes non-halting programs so can generate all bitstrings, 
> including the reals.


Yes, and all Turing machines using those reals as oracles. But physics remains 
determined by the self-reference logic; only, you build a non-mechanist theory 
of mind in which some non-computable reals play some role.
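For readers meeting the UD here for the first time, a minimal dovetailer
sketch in Python (an illustration only; the real UD enumerates all programs
of a universal machine, which are mocked below as endless step-generators):

from itertools import count, islice

def program(i):
    # Stand-in for the i-th program: an unbounded stream of "steps",
    # so non-halting programs are included.
    for step in count():
        yield (i, step)

def dovetail():
    running = []
    for n in count():
        running.append(program(n))    # start program n
        for p in running[: n + 1]:    # advance every started program one step
            yield next(p)

# The first 10 dovetailed steps interleave all programs fairly:
print(list(islice(dovetail(), 10)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]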




> 
>>> If human consciousness is a computation,
>> That does not make any sense. A mental attribute of a first person cannot be 
>> identified with any third person notion. But you can associate a mentality 
>> with higher-order number relations whose semantics involve themselves. 
>> Consciousness starts as the bet on a reality and ends with the awakening of 
>> a person.
> 
> Poetic, but what does it mean?


It means that the average universal machine in arithmetic starts from believing 
that it belongs to a physical reality (Aristotle) and eventually grasps that the 
physical reality is a view of the border of the universal mind (the mind of the 
universal Turing machine), as seen by that universal mind.




> 
>>> then the computation is conscious (it is an identity thesis). You say that 
>>> the computation cannot be conscious, but can support a person. It is 
>>> difficult to see this as anything other than the introduction 

Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 6 Jan 2018, at 01:50, Russell Standish  wrote:
> 
> On Fri, Jan 05, 2018 at 02:22:08PM +1100, Bruce Kellett wrote:
>> 
>> The original suggestion by Russell was that "our human consciousness _is_ a
>> computation (and nothing but a computation)."
>> 
>> You seem to be steering away from Russell's straightforward position. If
>> human consciousness is a computation, then the computation is conscious (it
>> is an identity thesis). You say that the computation cannot be conscious,
>> but can support a person. It is difficult to see this as anything other than
>> the introduction of a dualistic element: the computation supports a
>> conscious person, but is not itself conscious? So wherein does consciousness
>> exist? You are introducing some unspecified magic into the equation. And
>> what distinguishes those computations that can support a person from those
>> that cannot?
>> 
> 
> A more precise statement might have computationalism defined as
> "computational supervenience is true", where computational
> supervenience means consciousness supervenes on computations.
> 
> If you dig into what supervenience means, then it means that
> no property of consciousness can differ without a difference in the
> supervened computation.
> 
> But you can have a difference in the computation that is not reflected
> in the consciousness, so strict identity in a mathematical or
> Leibnizian sense does not follow.
> 
> AFAIK, Bruno uses "support" to mean "is supervened on". I do not know
> what he means by a "computation cannot be conscious" - as by analogy
> it would be a similar statement to "a bunch of molecules cannot be a gas”.
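Russell's supervenience condition above can be put compactly (our notation,
not his): writing Comp(s) for the computational state and Consc(s) for the
conscious state associated with a situation s,

\forall s_1, s_2 : \; \mathrm{Comp}(s_1) = \mathrm{Comp}(s_2) \;\Rightarrow\; \mathrm{Consc}(s_1) = \mathrm{Consc}(s_2)

The converse implication is not required, so the map from computations to
conscious states may be many-to-one: supervenience without Leibnizian identity.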

A bunch of molecules *can* be a gas. There is no problem as we equate two 3p 
notions.

The similar statement would be that “a bunch of molecules is a conscious 
person”. Here the correct statement would be that a bunch of molecules can 
support a person relatively to some molecular world.

Bruno

> 
> 
> -- 
> 
> 
> Dr Russell Standish                    Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
> Economics, Kingston University         http://www.hpcoders.com.au
> 
> 


Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 5 Jan 2018, at 21:19, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Friday, January 5, 2018 at 12:44:53 PM UTC-7, John Clark wrote:
> On Tue, Jan 2, 2018 at 8:43 AM, Bruno Marchal wrote:
> 
> ​>> ​​in their work real AI scientists don't use personal pronouns with no 
> clear referent, ​nor silly homemade terminology.
> 
> ​> ​See my papers for the mathematical definitions of all pronouns
> 
> ​Right, just like your papers explain why pigs fly.​ 
>  
> ​> ​You did explicitly agree that two computers, localized in two 
> different places, but numerically (digitally) identical and running the same 
> program, would not, in case they emulate a "conscious program", allow that 
> consciousness to localize itself.
> 
> ​Yes, and a third party couldn't do it either because consciousness doesn't 
> have a location, but its not unique in that regard, "green" or "fast"​ ​or 
> "big" doesn't have a location either.
> 
> ​> ​That fact is only needed to understand how to extract physics from 
> elementary arithmetic.
> 
> ​That could only be done if arithmetic is more fundamental than physics and I 
> see no evidence that is true, but I do see some evidence it is false.  ​ 
>  
> ​> ​you seem to believe in a primary physical universe,
> 
> You seem to believe in a primary ​mathematical​ universe​, ​but matter by 
> itself can do mathematics however mathematics by itself can't do matter.
> 
> But that's what the MWI does! It creates matter, indeed huge sets of 
> universes, by inferring that a superposition never collapses. AG
> 



Another good reason to doubt matter. No problem with the notion of machine 3p 
histories, which is no more demanding than believing in the prime numbers 
(if you keep in mind that the notion of computation is arithmetical).

Bruno




>  John K Clark
>  
> 
> 
> 
> 
> 
> 


Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 5 Jan 2018, at 20:44, John Clark  wrote:
> 
> On Tue, Jan 2, 2018 at 8:43 AM, Bruno Marchal wrote:
> 
> ​>> ​​in their work real AI scientists don't use personal pronouns with no 
> clear referent, ​nor silly homemade terminology.
> 
> ​> ​See my papers for the mathematical definitions of all pronouns
> 
> ​Right, just like your papers explain why pigs fly.​ 
>  
> ​> ​You did explicitly agree that two computers, localized in two 
> different places, but numerically (digitally) identical and running the same 
> program, would not, in case they emulate a "conscious program", allow that 
> consciousness to localize itself.
> 
> ​Yes, and a third party couldn't do it either because consciousness doesn't 
> have a location, but its not unique in that regard, "green" or "fast"​ ​or 
> "big" doesn't have a location either.
> 
> ​> ​That fact is only needed to understand how to extract physics from 
> elementary arithmetic.
> 
> ​That could only be done if arithmetic is more fundamental than physics and I 
> see no evidence that is true, but I do see some evidence it is false.  ​ 

The existence of the computations in arithmetic is a fact of the same type as 
the fact that 6 is not prime, just lengthier to prove.
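A toy way to see the comparison, in Python (our encoding, purely
illustrative): both facts reduce to checking a finite object; the second
simply needs a longer one.

# "6 is not prime": a search bounded by 6 itself.
print(any(6 % d == 0 for d in range(2, 6)))   # True -> 6 is composite

# "The machine computing 2+2 halts with value 4": witnessed by the
# finite trace of an explicit computation (a Sigma_1 fact).
program = [("inc",), ("inc",), ("halt",)]     # start at 2, add 1 twice

def run(prog, value=2):
    trace = [value]
    for op in prog:
        if op[0] == "halt":
            break
        value += 1
        trace.append(value)
    return trace

trace = run(program)
print(trace, trace[-1] == 4)                  # [2, 3, 4] True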

But what is the evidence for a primary physical universe? Usually the feeling 
is based on a “knock on the table” sort of argument, which has been shown invalid 
since antiquity by the dream argument (which is also the one amenable to 
mathematics with comp (= YD+CT, that is, “yes doctor” + Church-Turing)).




>  
> ​> ​you seem to believe in a primary physical universe,
> 
> You seem to believe in a primary ​mathematical​ universe​,

I never said so. I believe only that IF YD+CT is true THEN the primary reality 
is given by any sigma_1 complete theory. The physical appearances are recovered 
by the statistics on all computationally consistent extensions, making YD+CT 
testable/refutable.


> ​but matter by itself can do mathematics

I have no clue what “matter” can mean here. I guess you mean that we can use 
matter to implement or incarnate a mathematician. Yes, that is locally first 
person plurally true.




> however mathematics by itself can't do matter.

That’s true too.

The problem is that if you make matter primary, or fundamental, or in need of 
being assumed, you have to explain how it makes some computations feel more real 
to some universal numbers and not to others, which otherwise become zombies. 
Invoking primary matter is like invoking a god that nobody has ever seen, to 
avoid justifying mathematically the measure needed to solve the mind-body 
problem. Such a notion of matter just hides the problem, or provides a magical 
solution, which usually is not judged valid. 

Bruno




> 
>  John K Clark
>  
> 
> 
> 
> 
> 
> 


Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread 'scerir' via Everything List
Somebody wrote: "So there are probably grades of consciousness, just as there 
are grades of ability to communicate. Cats, dogs, and some birds, are quite 
high on this scale, but jellyfish are probably quite low. But can you rule out 
the possibility that some environmental awareness does not constitute low level 
consciousness?"

# Indeed, there are grades of consciousness; I would say a smooth transition 
from one grade of consciousness to another. When suffering from 
hydrocephalus (water in the brain) you may experience some apparent 
consciousness (and self-consciousness), but that is a true "first person" 
experience, or feeling. In other words, if somebody asks "who are you?", or 
"where are you now?", "what day is today?", you may be in trouble. (Four years 
ago I said to the doctor: "That is simple, today is August 13, 1978.") Thus, 
to me, consciousness is not a 0/1 function. s.




Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread agrayson2000


On Friday, January 5, 2018 at 11:29:44 PM UTC-7, Bruce wrote:
>
> On 6/01/2018 4:59 pm, Brent Meeker wrote:
>
> On 1/5/2018 9:30 PM, Bruce Kellett wrote:
>
> On 6/01/2018 4:15 pm, Russell Standish wrote: 
>
> Other things seem possible, such as the 
> extraordinary unlikelihood that all animals can be conscious. 
>
>
> That is an extraordinary claim, and sufficient in itself to falsify your 
> theory. I know as a fact that my cat is conscious, and that my various dogs 
> over the years have also been conscious. You refer to the "mirror test" in 
> your book -- an animal is conscious (is self-aware) if it recognizes itself 
> in a mirror. I agree that my cat does not recognize itself in the mirror -- 
> but that only goes to show that this is a stupid test for consciousness.
>
>
> But it's still unlikely that ALL animals can be conscious, e.g. jellyfish 
> (which were recently found to sleep... perchance to dream?).
>
>
> Well, it depends on how you gauge consciousness in another living being -- 
> how do you know that your wife is conscious? or the person you meet in the 
> street? It seems that there must be a third person (intersubjective) 
> element to consciousness whereby we can know that other people are 
> conscious.
>
> I used the example of the fact that I did not see my abacus as talking to 
> me to judge that its computations were not supporting consciousness. That 
> is a good starting point. Cats and dogs can certainly "talk" to us: in fact 
> they can be very eloquent in making their wishes known! At a different 
> level, I was sitting on the verandah of our bush house over the Christmas 
> break, and a king parrot came in with his mate. They are long-lived birds, 
> and relatively territorial, so we know this pair of king parrots quite 
> well. The male sat on the roof above me and peered round to look at where I 
> was sitting. He then started to vocalize -- talk to me -- in response to my 
> quiet welcome and conversation. Sure, no English words were used, but the 
> recognition and communication was clear. So I put out some seeds on the 
> table, and he was happy. This same bird has, on another occasion, sat on 
> the arm of my wife's chair, a few feet away, while we talked.
>
> There is also a family of blue fairy wrens living by the house. Being the 
> late breeding season, the male is jealous of his territory and spends time 
> fighting his reflection in our windows, and in the bonnet of the car! He 
> fails the mirror test for consciousness, but he is clearly an able and 
> successful breeder. It is harder to establish direct communication with 
> wrens, but I have no doubt that they are intelligent birds, conscious and 
> knowledgeable of their surroundings.
>

*The problem with your claim -- and what I am about to write is surely not 
original -- is that humans could (at some point in time, if not now) build 
artificial birds that would emulate what you interpret as "consciousness".  
Think "Blade Runner" (the movie). If you had a test to distinguish the real 
from the artificial, you'd be on your way to understanding what 
consciousness IS, but at present I don't think this is the case. AG* 

>
> So there are probably grades of consciousness, just as there are grades of 
> ability to communicate. Cats, dogs, and some birds, are quite high on this 
> scale, but jellyfish are probably quite low. But can you rule out the 
> possibility that some environmental awareness does not constitute low level 
> consciousness?
>



> I'm still not clear on whether it is "machines" (axiom systems) which 
> *are* conscious, or *can be* conscious?  Or is it computations which *are* 
> consciousness, or *"support"* consciousness?  ISTM that my consciousness 
> is only a part of my thinking (and not always the best part).
>
>
> I think the notion of supervenience is the best way to approach it. But 
> one must be careful to distinguish models of supervenience for the reality 
> of actual consciousness.
>
> Bruce
>



Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Quentin Anciaux
On 6 Jan 2018 7:29 AM, "Bruce Kellett" wrote:

On 6/01/2018 4:59 pm, Brent Meeker wrote:

On 1/5/2018 9:30 PM, Bruce Kellett wrote:

On 6/01/2018 4:15 pm, Russell Standish wrote:

Other things seem possible, such as the
extraordinary unlikelihood that all animals can be conscious.


That is an extraordinary claim, and sufficient in itself to falsify your
theory. I know as a fact that my cat is conscious, and that my various dogs
over the years have also been conscious. You refer to the "mirror test" in
your book -- an animal is conscious (is self-aware) if it recognizes itself
in a mirror. I agree that my cat does not recognize itself in the mirror --
but that only goes to show that this is a stupid test for consciousness.


But it's still unlikely that ALL animals can be conscious, e.g. jellyfish
(which were recently found to sleep... perchance to dream?).


Well, it depends on how you gauge consciousness in another living being --
how do you know that your wife is conscious? or the person you meet in the
street? It seems that there must be a third person (intersubjective)
element to consciousness whereby we can know that other people are
conscious.

I used the example of the fact that I did not see my abacus as talking to
me to judge that its computations were not supporting consciousness.


Again that's a flawed example, so certainly not a good starting point. I
thought you understood that.

Quentin

That is a good starting point. Cats and dogs can certainly "talk" to us: in
fact they can be very eloquent in making their wishes known! At a different
level, I was sitting on the verandah of our bush house over the Christmas
break, and a king parrot came in with his mate. They are long-lived birds,
and relatively territorial, so we know this pair of king parrots quite
well. The male sat on the roof above me and peered round to look at where I
was sitting. He then started to vocalize -- talk to me -- in response to my
quiet welcome and conversation. Sure, no English words were used, but the
recognition and communication was clear. So I put out some seeds on the
table, and he was happy. This same bird has, on another occasion, sat on
the arm of my wife's chair, a few feet away, while we talked.

There is also a family of blue fairy wrens living by the house. Being the
late breeding season, the male is jealous of his territory and spends time
fighting his reflection in our windows, and in the bonnet of the car! He
fails the mirror test for consciousness, but he is clearly an able and
successful breeder. It is harder to establish direct communication with
wrens, but I have no doubt that they are intelligent birds, conscious and
knowledgeable of their surroundings.

So there are probably grades of consciousness, just as there are grades of
ability to communicate. Cats, dogs, and some birds, are quite high on this
scale, but jellyfish are probably quite low. But can you rule out the
possibility that some environmental awareness constitutes low-level
consciousness?



I'm still not clear on whether it is "machines" (axiom systems) which *are*
conscious, or *can be* conscious?  Or is it computations which *are*
consciousness, or *"support"* consciousness?  ISTM that my consciousness is
only a part of my thinking (and not always the best part).


I think the notion of supervenience is the best way to approach it. But one
must be careful to distinguish models of supervenience from the reality of
actual consciousness.

Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 5 Jan 2018, at 15:06, Jason Resch  wrote:
> 
> 
> 
> On Friday, January 5, 2018, David Nyman wrote:
> 
> 
> On 5 Jan 2018 03:22, "Bruce Kellett" wrote:
> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
> On Jan 4, 2018, at 12:50 PM, Bruce Kellett wrote:
> 
> On 4/01/2018 12:30 am, Bruno Marchal wrote:
> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
> On 29/12/2017 10:14 am, Russell Standish wrote:
> This is computationalism - the idea that our human consciousness _is_
> a computation (and nothing but a computation).
> What distinguishes a conscious computation within the class of all 
> computations? After all, not all computations are conscious.
> Universality seems enough.
> What is a universal computation? From what you say below, universality 
> appears to be a property of a machine, not of a computation.
> OK, universality is an attribute of a machine, relatively to some universal 
> machinery, like arithmetic or physics.
> 
> 
> But just universality gives rise only to a highly non-standard, dissociative, 
> form of consciousness. It might correspond to the cosmic consciousness 
> alluded to by people living through highly altered states of consciousness.
> 
> You need Löbianity to get *self-consciousness*, or reflexive consciousness. A 
> machine is Löbian when its universality is knowable by it. Equivalently, when 
> the machine is universal and can prove its own "Löb's formula". []([]p -> p) 
> -> []p. Note that the second incompleteness theorem is the particular case 
> with p = f (f = "0≠1").
> Löbianity is a property of the machine, not of the computation.
> Yes. The same. I was talking about machines, or about the person supported by 
> those machines. No machine (conceived as a code, number, or physical object) 
> can ever be conscious or think. It is always a more abstract notion 
> implemented through some machinery which does the thinking.
> 
> Similarly a computation cannot be conscious, but it can support a person, 
> which is the one having genuinely the thinking or conscious attribute.
> 
> The original suggestion by Russell was that "our human consciousness _is_ a 
> computation (and nothing but a computation)."
> 
> You seem to be steering away from Russell's straightforward position. If 
> human consciousness is a computation, then the computation is conscious (it 
> is an identity thesis). You say that the computation cannot be conscious, but 
> can support a person. It is difficult to see this as anything other than the 
> introduction of a dualistic element: the computation supports a conscious 
> person, but is not itself conscious? So wherein does consciousness exist? You 
> are introducing some unspecified magic into the equation. And what 
> characterizes those computations that can support a person from those that 
> cannot?
> 
> Let me see if I can attempt some sort of answer Bruce. The utility of the 
> notion of the 'universal' mechanism is precisely its ability to emulate all 
> other finitely computable mechanisms. But not all such mechanisms can be 
> associated with persons. What distinguishes this particular class, as 
> exemplified by Bruno's modal logic toy models is, in the first instance, 
> self-referentiality. This first-personal characteristic is, if you like, the 
> fixed point on which every other feature is centred. Then, with respect to 
> this fixed point of view, a distinction can be made between what is 
> 'believed' (essentially, provably true as a theorem and hence communicable) 
> by the machine and what is true *about* the machine (essentially, not 
> provable as a theorem and hence not communicable, but nonetheless 
> 'epistemically' true). 
> 
> 
> Given the above, what would be the shortest program with these properties?

Any first-order specification of a universal program can be considered as 
conscious, although in a dissociative state. But only the Löbian universal 
machine can talk about that. The non-Löbian machines are very simple minds, 
already in heaven, so to speak.



> 
> Is the Mars Rover conscious?

Probably not, because, despite running a conscious universal machine, the Mars 
Rover itself might not be universal, and still less Löbian. I would have to see 
the program to be sure of this, of course. But it does not implement 
self-reference, from what I have heard about it. 

Bruno



> 
> Jason
>  
> 
> Any machine possessing the foregoing features is in principle conscious, in 
> the sense of having implicit self-referential epistemic access to 
> non-communicable truths that are nonetheless entailed by its explicit and 
> communicable 'beliefs'. Of course it's a long step from the toy model to the 
> human person, but I think one can still discern the thread. The machine's 
> 'beliefs' can now be represented as communicable and explicit 

Re: What falsifiability tests has computationalism passed?

2018-01-06 Thread Bruno Marchal

> On 5 Jan 2018, at 21:04, David Nyman  wrote:
> 
> 
> 
> On 5 Jan 2018 19:27, "Bruno Marchal" wrote:
> 
>> On 4 Jan 2018, at 21:07, David Nyman wrote:
>> 
>> 
>> 
>> On 4 Jan 2018 18:16, "Bruno Marchal" wrote:
>> 
>>> On Jan 4, 2018, at 1:22 PM, David Nyman wrote:
>>> 
>>> On 4 January 2018 at 11:55, Bruno Marchal wrote:
>>> 
>>> > On Jan 3, 2018, at 10:57 PM, Brent Meeker wrote:
>>> >
>>> >
>>> >
>>> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>>> >>
>>> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>>> >>
>>> >>>
>>> >>>
>>> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>>  Now, it
>>>  could be that intelligent behavior implies mind, but as you yourself
>>>  argue, we don't know that.
>>> >>>
>>> >>> Isn't this at the crux of the scientific study of the mind? There 
>>> >>> seemed to be universal agreement on this list that a philosophical 
>>> >>> zombie is impossible.
>>> >>
>>> >>
>>> >> Precisely: that a philosophical zombie is impossible when we assume 
>>> >> Mechanism.
>>> >
>>> > But the consensus here has been that a philosophical zombie is impossible 
>>> > because it exhibits intelligent behavior.
>>> 
>>> Well, I think the consensus here is that computationalism is far more 
>>> plausible than non-computationalism.
>>> Computationalism makes zombies nonsensical.
>>> 
>>> 
>>> 
>>> >
>>> >> Philosophical zombies remain logically consistent for a
>>> >> non-computationalist theory of mind.
>>> >
>>> > It's logically consistent with a computationalist theory of brain. It is 
>>> > only inconsistent with a computationalist theory of mind because you 
>>> > include as an axiom that computation produces mind.  One can as well say 
>>> > that intelligent behavior entails mind as an axiom of physicalism.  Logic 
>>> > is a very cheap standard for theories to meet.
>>> 
>>> At first sight, zombies seem consistent with computationalism, but the 
>>> notion of zombies requires the idea that we attribute mind to bodies 
>>> (having the right behavior). But with computationalism, mind is never 
>>> associated to a body, but only to the person having an infinity of 
>>> (similar enough) relative body representations in arithmetic. There are no 
>>> “real bodies” or “ontological bodies”, so the notion of zombie becomes 
>>> senseless. Consciousness is associated with the person, which is never 
>>> determined by one body.
>>> 
>>> ​So in the light of what you say above, does it then follow that the MGA 
>>> implies (assuming comp) that a physical system does *not* in fact implement 
>>> a computation in the relevant sense?
>> 
>> 
>> The physical world has to be able to implement the computation in the 
>> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD “act 
>> of faith”.
>> 
>> The physical world is a persistent illusion. It has to be enough persistent 
>> that you wake up at the hospital with the digital brain.
>> 
>> 
>> 
>>> I ask this because you say mind is *never* associated with a body, but mind 
>>> *is* associated with computation via the epistemic consequences of 
>>> universality.
>> 
>> 
>> A (conscious) third person can associate a mind/person to a body that he 
>> perceives. It is polite. 
>> 
>> The body perceived by that third person is itself a construction of its own 
>> mind, and with computationalism (but also with QM), we know that such a body 
>> is an (evolving) map of where, and in which states, we could find, say, the 
>> electrons and protons of that body, and such a snapshot is only a computational 
>> state among infinitely many others which would work as well, with respect 
>> to the relevant computations which brought its conscious state.
>> Now, the conscious first person cannot associate itself to any particular 
>> body or computation.
>> 
>> Careful: sometimes I say that a machine can think, or maybe (I usually 
>> avoid) that a computation can think or be conscious. It always means, 
>> respectively, that a machine can make a person capable of manifesting itself 
>> relatively to you. But the machine and the body are local relative 
>> representations.
>> 
>> A machine cannot think, and a computation (which is the (arithmetical) 
>> dynamic 3p view of the sequence of the relative static machine/state) cannot 
>> think. Only a (first) person can think, and to use that thinking with 
>> respect to another person, a machine is handy, like a brain or a physical 
>> computer.
>> 
>> The person is in heaven (arithmetical truth) and on earth (sigma_1 
>> arithmetical truth), simultaneously. But this belongs to G*, and I should 
>> stay mute, or insist that we are in the “after-act-of-faith” 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruce Kellett

On 6/01/2018 4:59 pm, Brent Meeker wrote:

On 1/5/2018 9:30 PM, Bruce Kellett wrote:

On 6/01/2018 4:15 pm, Russell Standish wrote:


Other things seem possible, such as the
extraordinary unlikelihood that all animals can be conscious.


That is an extraordinary claim, and sufficient in itself to falsify 
your theory. I know as a fact that my cat is conscious, and that my 
various dogs over the years have also been conscious. You refer to 
the "mirror test" in your book -- an animal is conscious (is 
self-aware) if it recognizes itself in a mirror. I agree that my cat 
does not recognize itself in the mirror -- but that only goes to show 
that this is a stupid test for consciousness.


But it's still unlikely that ALL animals can be conscious, e.g. 
jellyfish (which were recently found to sleep... perchance to dream?).


Well, it depends on how you gauge consciousness in another living being 
-- how do you know that your wife is conscious? or the person you meet 
in the street? It seems that there must be a third person 
(intersubjective) element to consciousness whereby we can know that 
other people are conscious.


I used the fact that I did not see my abacus as talking to me to judge 
that its computations were not supporting consciousness. 
That is a good starting point. Cats and dogs can certainly "talk" to us: 
in fact they can be very eloquent in making their wishes known! At a 
different level, I was sitting on the verandah of our bush house over 
the Christmas break, and a king parrot came in with his mate. They are 
long-lived birds, and relatively territorial, so we know this pair of 
king parrots quite well. The male sat on the roof above me and peered 
round to look at where I was sitting. He then started to vocalize -- 
talk to me -- in response to my quiet welcome and conversation. Sure, no 
English words were used, but the recognition and communication was 
clear. So I put out some seeds on the table, and he was happy. This same 
bird has, on another occasion, sat on the arm of my wife's chair, a few 
feet away, while we talked.


There is also a family of blue fairy wrens living by the house. Being 
the late breeding season, the male is jealous of his territory and 
spends time fighting his reflection in our windows, and in the bonnet of 
the car! He fails the mirror test for consciousness, but he is clearly 
an able and successful breeder. It is harder to establish direct 
communication with wrens, but I have no doubt that they are intelligent 
birds, conscious and knowledgeable of their surroundings.


So there are probably grades of consciousness, just as there are grades 
of ability to communicate. Cats, dogs, and some birds, are quite high on 
this scale, but jellyfish are probably quite low. But can you rule out 
the possibility that some environmental awareness constitutes low-level 
consciousness?



I'm still not clear on whether it is "machines" (axiom systems) which 
*are* conscious, or *can be* conscious?  Or is it computations 
which *are* consciousness, or *"support"* consciousness?  ISTM 
that my consciousness is only a part of my thinking (and not always 
the best part).


I think the notion of supervenience is the best way to approach it. But 
one must be careful to distinguish models of supervenience from the 
reality of actual consciousness.


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Brent Meeker



On 1/5/2018 9:30 PM, Bruce Kellett wrote:

On 6/01/2018 4:15 pm, Russell Standish wrote:


Other things seem possible, such as the
extraordinary unlikelihood that all animals can be conscious.


That is an extraordinary claim, and sufficient in itself to falsify 
your theory. I know as a fact that my cat is conscious, and that my 
various dogs over the years have also been conscious. You refer to the 
"mirror test" in your book -- an animal is conscious (is self-aware) 
if it recognizes itself in a mirror. I agree that my cat does not 
recognize itself in the mirror -- but that only goes to show that this 
is a stupid test for consciousness.


But it's still unlikely that ALL animals can be conscious, e.g. 
jellyfish (which were recently found to sleep... perchance to dream?).


I'm still not clear on whether it is "machines" (axiom systems) which 
*are* conscious, or *can be* conscious?  Or is it computations which 
*are* consciousness, or *"support"* consciousness?  ISTM that my 
consciousness is only a part of my thinking (and not always the best part).


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruce Kellett

On 6/01/2018 4:15 pm, Russell Standish wrote:


Other things seem possible, such as the
extraordinary unlikelihood that all animals can be conscious.


That is an extraordinary claim, and sufficient in itself to falsify your 
theory. I know as a fact that my cat is conscious, and that my various 
dogs over the years have also been conscious. You refer to the "mirror 
test" in your book -- an animal is conscious (is self-aware) if it 
recognizes itself in a mirror. I agree that my cat does not recognize 
itself in the mirror -- but that only goes to show that this is a stupid 
test for consciousness.


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Russell Standish
On Sat, Jan 06, 2018 at 03:33:44PM +1100, Bruce Kellett wrote:
> On 6/01/2018 11:50 am, Russell Standish wrote:
> > On Fri, Jan 05, 2018 at 02:22:08PM +1100, Bruce Kellett wrote:
> > > The original suggestion by Russell was that "our human consciousness _is_
> > > a computation (and nothing but a computation)."
> > > 
> > > You seem to be steering away from Russell's straightforward position. If
> > > human consciousness is a computation, then the computation is conscious
> > > (it is an identity thesis). You say that the computation cannot be
> > > conscious, but can support a person. It is difficult to see this as
> > > anything other than the introduction of a dualistic element: the
> > > computation supports a conscious person, but is not itself conscious? So
> > > wherein does consciousness exist? You are introducing some unspecified
> > > magic into the equation. And what characterizes those computations that
> > > can support a person from those that cannot?
> > > 
> > A more precise statement might have computationalism defined as
> > "computational supervenience is true", where computational
> > supervenience means consciousness supervenes on computations.
> > 
> > If you dig into what supervenience means, then it means that
> > no property of consciousness can differ without a difference in the
> > supervened computation.
> > 
> > But you can have a difference in the computation that is not reflected
> > in the consciousness, so strict identity in a mathematical or
> > Leibnizian sense does not follow.
> 
> That probably makes more sense. Supervenience then means that no change in
> consciousness is possible without a corresponding change in the computation;
> whereas there may be differences in the underlying computation without any
> perceptible change in consciousness. Consequently, the same consciousness
> can supervene on a number of different computations (bitstrings).
> 
> In which case, it would seem to differ from the physicalist notion that
> mind, or consciousness, supervenes on the physical brain only in terms of
> the assumed ontology: you assume an ontology consisting of all possible
> bitstrings; the physicalist assumes a materialist ontology. For the
> physicalist, changes in consciousness mean corresponding changes in the
> brain, but different brain states might underlie the same conscious state --
> we are not necessarily conscious of the minutiae of brain chemistry.
> 
> In neither case, however, does that seem to get us any closer to an
> understanding of consciousness. At least the physicalist can manipulate the
> material brain and observe the conscious consequences -- stimulate this area
> and the patient reports feeling happy, or seeing red, or whatever, depending
> on the stimulation. It is difficult to manipulate all bitstrings in this
> manner. And you are still no closer to understanding exactly what properties
> a bitstring (computation) must have to support a mind (consciousness).

A bitstring is not the same as a computation. When I consider all
bitstrings, it's more about being able to capture the complete set of
possible experiences (aka observer moments). It seems reasonable that
observer moments are defined by their information content. It is about
what can be said of observed reality (i.e. phenomena). We have, for
instance, a natural form of Occam's razor, and a requirement of
evolution (and deep time) to bootstrap the assumed necessary
complexity of consciousness. Other things seem possible, such as the
extraordinary unlikelihood that all animals can be conscious.

So some baby steps in the direction ...
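One way to make the Occam point precise (an editorial gloss in standard
measure-theoretic terms, not a quotation from the book): under the uniform
measure on infinite bitstrings, the set of strings extending a finite
prefix s has measure

    \[ \mu([s]) = 2^{-\ell(s)} \]

so observer moments with shorter descriptions carry exponentially greater
measure, and the simplest description consistent with what is observed
dominates.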

> 
> > AFAIK, Bruno uses "support" to mean "is supervened on". I do not know
> > what he means by a "computation cannot be conscious" - as by analogy
> > it would be a similar statement to "a bunch of molecules cannot be a gas".
> 
> There seems to be a close correspondence between the bitstring ontology and
> the universal dovetailer ontology -- the UD, after all, does little more
> than produce all possible bitstrings. Bruno's account of consciousness
> appears to go beyond this by equating the qualia of consciousness with
> self-referential tautologies in modal logic. One worry I have with this is
> that it seems to take a similarity between two things as an indicator that
> the two things are identical -- the "my cat is a dog" fallacy:  basing the
> claim of identity on the superficial similarity that the animals each have
> four legs and a tail. Structural similarity does not, in general, imply
> identity.
>

Not sure where you say he's making this fallacy. Yes, there is a big
assumption that particular modal logics describe the operation of
knowledge and hence consciousness. This may or may not ultimately play
out, but it is certainly a start. I'm not sure what a falsification of
any of these modal logics would mean however. Most likely that
the modal logic supposedly capturing knowledge does not in fact
capture 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruce Kellett

On 6/01/2018 7:11 am, Bruno Marchal wrote:

On 5 Jan 2018, at 04:22, Bruce Kellett  wrote:

On 4/01/2018 11:59 pm, Bruno Marchal wrote:

On Jan 4, 2018, at 12:50 PM, Bruce Kellett  wrote:

On 4/01/2018 12:30 am, Bruno Marchal wrote:

On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

On 29/12/2017 10:14 am, Russell Standish wrote:

This is computationalism - the idea that our human consciousness _is_
a computation (and nothing but a computation).

What distinguishes a conscious computation within the class of all 
computations? After all, not all computations are conscious.

Universality seems enough.

What is a universal computation? From what you say below, universality appears 
to be a property of a machine, not of a computation.

OK, universality is an attribute of a machine, relatively to some universal 
machinery, like arithmetic or physics.



But just universality gives rise only to a highly non-standard, dissociative, 
form of consciousness. It might correspond to the cosmic consciousness alluded 
to by people living through highly altered states of consciousness.

You need Löbianity to get *self-consciousness*, or reflexive consciousness. A 
machine is Löbian when its universality is knowable by it. Equivalently, when 
the machine is universal and can prove its own "Löb's formula": []([]p -> p) -> 
[]p. Note that the second incompleteness theorem is the particular case with 
p = f (f = "0≠1").
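Spelled out in provability-logic notation (a standard restatement, with
\Box read as "the machine proves"; nothing here goes beyond the two
formulas just cited):

    \[ \Box(\Box p \rightarrow p) \rightarrow \Box p \]

Instantiating p with f, and noting that \Box f \rightarrow f is
propositionally equivalent to \neg\Box f, gives

    \[ \Box(\neg\Box f) \rightarrow \Box f \]

i.e. a machine that proves its own consistency thereby proves falsity,
which is the second incompleteness theorem.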

Löbianity is a property of the machine, not of the computation.

Yes. The same. I was talking about machines, or about the person supported by 
those machines. No machine (conceived as a code, number, or physical object) can 
ever be conscious or think. It is always a more abstract notion implemented 
through some machinery which does the thinking.

Similarly a computation cannot be conscious, but it can support a person, which 
is the one having genuinely the thinking or conscious attribute.

The original suggestion by Russell was that "our human consciousness _is_ a 
computation (and nothing but a computation).”

You can’t equate a first person notion with a third person notion. You can try 
to associate them, but consciousness will involve undefinable notions and 
reference to some notion of truth.


Russell clarifies his position to say that consciousness 'supervenes' on 
the computation. But consciousness is not a purely first person concept 
-- I can know that you are conscious by collecting evidence and making 
an inference. You have to have a narrow interpretation of 'knowledge' to 
rule this out, and such a narrow definition would rule out all 
scientific knowledge.



You seem to be steering away from Russell's straightforward position.

I think you over-interpret Russell, who might have committed an “abuse of 
language”, forgetting that not everyone knows we have already discussed this. I 
don’t know. His theory mentions all infinite strings, that is, the reals, which 
are not Turing universal. His approach, which has things in common with the 
consequences of comp, is not per se comp driven.


The UD includes non-halting programs so can generate all bitstrings, 
including the reals.
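As an aside, the dovetailing itself is easy to sketch: interleave the
execution of all programs so that no non-halting program ever blocks the
rest. A minimal Python illustration (step(i) is a hypothetical stand-in
for "advance program i by one step on some fixed universal machine", not
anyone's actual UD implementation):

    from itertools import count

    def dovetail(step):
        # Stage k runs one step of each of programs 0..k-1, so every
        # program -- halting or not -- receives unboundedly many steps,
        # and every finite execution prefix is eventually generated.
        for k in count(1):
            for i in range(k):
                step(i)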



If human consciousness is a computation,

That does not make any sense. A mental attribute of a first person cannot be 
identified with any third person notion. But you can associate a mentality to 
higher-order number relations whose semantics involve themselves. Consciousness 
starts as the bet on a reality and ends with the awakening of a person.


Poetic, but what does it mean?


then the computation is conscious (it is an identity thesis). You say that the 
computation cannot be conscious, but can support a person. It is difficult to 
see this as anything other than the introduction of a dualistic element: the 
computation supports a conscious person, but is not itself conscious? So 
wherein does consciousness exist? You are introducing some unspecified magic 
into the equation. And what characterizes those computations that can support a 
person from those that cannot?

Do you agree with the following propositions, with “I” denoting you:

1) I am conscious (here and now)

2) I can’t prove that I am conscious to any ill-willed person (like an alien who 
would believe that I am a zombie)

3) I cannot doubt consciousness

4) I cannot define consciousness.

5) I know that I am conscious.

If yes, I define the consciousness of the person associated to the 
machine-number u, in some base, as an attribute (definable in arithmetic, or in 
second-order arithmetic) which verifies those propositions.


No it does not verify these propositions, it merely creates a model in 
which they are true according to the axioms of the model. You have no 
reason to assume that minds, consciousness, or human persons are the 
products of an axiomatic system. This is another version of the "my cat 
is a dog" fallacy.


Bruce



  You might add something to make it more particular, but we have to reason 
about something, 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruce Kellett

On 6/01/2018 11:50 am, Russell Standish wrote:

On Fri, Jan 05, 2018 at 02:22:08PM +1100, Bruce Kellett wrote:

The original suggestion by Russell was that "our human consciousness _is_ a
computation (and nothing but a computation)."

You seem to be steering away from Russell's straightforward position. If
human consciousness is a computation, then the computation is conscious (it
is an identity thesis). You say that the computation cannot be conscious,
but can support a person. It is difficult to see this as anything other than
the introduction of a dualistic element: the computation supports a
conscious person, but is not itself conscious? So wherein does consciousness
exist? You are introducing some unspecified magic into the equation. And
what characterizes those computations that can support a person from those
that cannot?


A more precise statement might have computationalism defined as
"computational supervenience is true", where computational
supervenience means consciousness supervenes on computations.

If you dig into what supervenience means, then it means that
no property of consciousness can differ without a difference in the
supervened computation.

But you can have a difference in the computation that is not reflected
in the consciousness, so strict identity in a mathematical or
Leibnizian sense does not follow.


That probably makes more sense. Supervenience then means that no change 
in consciousness is possible without a corresponding change in the 
computation; whereas there may be differences in the underlying 
computation without any perceptible change in consciousness. 
Consequently, the same consciousness can supervene on a number of 
different computations (bitstrings).


In which case, it would seem to differ from the physicalist notion that 
mind, or consciousness, supervenes on the physical brain only in terms 
of the assumed ontology: you assume an ontology consisting of all 
possible bitstrings; the physicalist assumes a materialist ontology. For 
the physicalist, changes in consciousness mean corresponding changes in 
the brain, but different brain states might underlie the same conscious 
state -- we are not necessarily conscious of the minutiae of brain 
chemistry.


In neither case, however, does that seem to get us any closer to an 
understanding of consciousness. At least the physicalist can manipulate 
the material brain and observe the conscious consequences -- stimulate 
this area and the patient reports feeling happy, or seeing red, or 
whatever, depending on the stimulation. It is difficult to manipulate 
all bitstrings in this manner. And you are still no closer to 
understanding exactly what properties a bitstring (computation) must 
have to support a mind (consciousness).



AFAIK, Bruno uses "support" to mean "is supervened on". I do not know
what he means by a "computation cannot be conscious" - as by analogy 
it would be a similar statement to "a bunch of molecules cannot be a gas".


There seems to be a close correspondence between the bitstring ontology 
and the universal dovetailer ontology -- the UD, after all, does little 
more than produce all possible bitstrings. Bruno's account of 
consciousness appears to go beyond this by equating the qualia of 
consciousness with self-referential tautologies in modal logic. One 
worry I have with this is that it seems to take a similarity between two 
things as an indicator that the two things are identical -- the "my cat 
is a dog" fallacy:  basing the claim of identity on the superficial 
similarity that the animals each have four legs and a tail. Structural 
similarity does not, in general, imply identity.


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Russell Standish
On Fri, Jan 05, 2018 at 09:11:49PM +0100, Bruno Marchal wrote:
> 
> I think you over-interpret Russell, who might have committed an “abuse of 
> language”, forgetting that not everyone knows we have already discussed this. I 
> don’t know. His theory mentions all infinite strings, that is, the reals, which 
> are not Turing universal. His approach, which has things in common with the 
> consequences of comp, is not per se comp driven.

Yes - it is important to bear in mind. My theory is not necessarily
computationalist, but AFAIK it can be derived from
computationalism. Too many people assume that I am a computationalist
when I'm simply neutral on the matter.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Russell Standish
On Fri, Jan 05, 2018 at 02:22:08PM +1100, Bruce Kellett wrote:
> 
> The original suggestion by Russell was that "our human consciousness _is_ a
> computation (and nothing but a computation)."
> 
> You seem to be steering away from Russell's straightforward position. If
> human consciousness is a computation, then the computation is conscious (it
> is an identity thesis). You say that the computation cannot be conscious,
> but can support a person. It is difficult to see this as anything other than
> the introduction of a dualistic element: the computation supports a
> conscious person, but is not itself conscious? So wherein does consciousness
> exist? You are introducing some unspecified magic into the equation. And
> what characterizes those computations that can support a person from those
> that cannot?
> 

A more precise statement might have computationalism defined as
"computational supervenience is true", where computational
supervenience means consciousness supervenes on computations.

If you dig into what supervenience means, then it means that
no property of consciousness can differ without a difference in the
supervened computation.

But you can have a difference in the computation that is not reflected
in the consciousness, so strict identity in a mathematical or
Leibnizian sense does not follow.
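In symbols (one conventional way of writing supervenience, offered as a
gloss rather than as Russell's own notation): with K(x) the computational
state and C(x) the conscious state, "C supervenes on K" says

    \[ \forall a, b : \; K(a) = K(b) \Rightarrow C(a) = C(b) \]

or contrapositively C(a) \neq C(b) \Rightarrow K(a) \neq K(b). The
converse implication is not required, which is exactly why strict
identity does not follow.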

AFAIK, Bruno uses "support" to mean "is supervened on". I do not know
what he means by a "computation cannot be conscious" - as by analogy
it would be a similar statement to "a bunch of molecules cannot be a gas".


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 Jan 2018 23:42, "Brent Meeker"  wrote:



On 1/5/2018 3:02 PM, David Nyman wrote:



On 5 January 2018 at 22:52, Brent Meeker  wrote:

>
>
> On 1/5/2018 2:17 PM, David Nyman wrote:
>
> To take a more realistic example.
>>
>
> ​I do so love your appropriation of ​the terms 'real' and 'realistic' to
> your own theories.
>
>
I'll take that as rhetorical snark.


Well it may have been snarky, but it wasn't meant rhetorically. Realism is
one of the key things at issue, so it isn't playing fair to appropriate it
to your own theories as if by right.




> I think a Mars Rover reporting its battery is low is more realistic than
> it seeing an apple in the most common sense of "realistic".
>

A Mars Rover reporting that its battery is low is roughly analogous to your
judgement that I might look like I could do with a good meal.


No, I'm using "reporting" in the common sense that the rover sent a message
in English, "My battery is low".  That's a perception by the rover, just
as "I see an apple." is a report of a perception.


You've just equated the two different things that are at issue. If you had
said a *report* of a perception by the rover, that might be analogous to my
reporting or judging that "I see an apple". The question in the case of the
rover would then be if it were the type of 'perception' that both entailed,
and was capable of referring to, a phenomenal counterpart. Does the rover
feel hungry or low in energy? In what ways is it aware of this? If some
future evolution of the rover were capable of truthfully communicating
reports of its phenomenal states, such states being entailments of internal
judgments (as distinct from some superadded mimicry, as with a marionette),
then I would have little principled reason to disbelieve it.




'Reporting' in this sense is the more or less straightforward 'reading' by
you of some state, in this case of charge. It has no semantic content
independent of that fact, any more than my 'looking hungry'.


And "seeing an apple" is different how (aside from the fact there aren't
apples on Mars?


Ah. Assuming you are serious, at this point I think the conversation has to
terminate.

David



Brent



David
 ​

>
>
> Brent



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Brent Meeker



On 1/5/2018 3:02 PM, David Nyman wrote:



On 5 January 2018 at 22:52, Brent Meeker wrote:




On 1/5/2018 2:17 PM, David Nyman wrote:


To take a more realistic example.


​I do so love your appropriation of ​the terms 'real' and
'realistic' to your own theories.




I'll take that as rhetorical snark.



I think a Mars Rover reporting its battery is low is more
realistic than it seeing an apple in the most common sense of
"realistic".


A Mars Rover reporting that its battery is low is roughly analogous to 
your judgement that I might look like I could do with a good meal.


No, I'm using "reporting" in the common sense that the rover sent a 
message in English, "My battery is low".  That's a perception by the 
rover, just as "I see an apple." is a report of a perception.


'Reporting' in this sense is the more or less straightforward 
'reading' by you of some state, in this case of charge. It has no 
semantic content independent of that fact, any more than my 'looking 
hungry'.


And "seeing an apple" is different how (aside from the fact there aren't 
apples on Mars)?


Brent



David
 ​



Brent


Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 January 2018 at 22:52, Brent Meeker  wrote:

>
>
> On 1/5/2018 2:17 PM, David Nyman wrote:
>
> To take a more realistic example.
>>
>
> ​I do so love your appropriation of ​the terms 'real' and 'realistic' to
> your own theories.
>
>
> I think a Mars Rover reporting its battery is low is more realistic than
> it seeing an apple in the most common sense of "realistic".
>

A Mars Rover reporting that its battery is low is roughly analogous to your
judgement that I might look like I could do with a good meal. 'Reporting'
in this sense is the more or less straightforward 'reading' by you of some
state, in this case of charge. It has no semantic content independent of
that fact, any more than my 'looking hungry'.

David
 ​

>
>
> Brent
>



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 January 2018 at 20:41, Brent Meeker  wrote:

>
>
> On 1/5/2018 4:00 AM, David Nyman wrote:
>
>
>
> On 5 Jan 2018 03:22, "Bruce Kellett"  wrote:
>
> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
>
>> On Jan 4, 2018, at 12:50 PM, Bruce Kellett wrote:
>>>
>>> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>>>
 On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

> On 29/12/2017 10:14 am, Russell Standish wrote:
>
>> This is computationalism - the idea that our human consciousness _is_
>> a computation (and nothing but a computation).
>>
> What distinguishes a conscious computation within the class of all
> computations? After all, not all computations are conscious.
>
 Universality seems enough.

>>> What is a universal computation? From what you say below, universality
>>> appears to be a property of a machine, not of a computation.
>>>
>> OK, universality is an attribute of a machine, relatively to some
>> universal machinery, like arithmetic or physics.
>>
>>
>> But just universality gives rise only to a highly non-standard,
 dissociative, form of consciousness. It might correspond to the cosmic
 consciousness alluded to by people living through highly altered states of
 consciousness.

 You need Löbianity to get *self-consciousness*, or reflexive
 consciousness. A machine is Löbian when its universality is knowable by it.
 Equivalently, when the machine is universal and can prove its own "Löb's
 formula". []([]p -> p) -> []p. Note that the second incompleteness theorem
 is the particular case with p = f (f = "0≠1").

>>> Löbianity is a property of the machine, not of the computation.
>>>
>> Yes. The same. I was talking about machines, or about the person supported
>> by those machines. No machine (conceived as a code, number, or physical
>> object) can ever be conscious or think. It is always a more abstract notion
>> implemented through some machinery which does the thinking.
>>
>> Similarly a computation cannot be conscious, but it can support a person,
>> which is the one having genuinely the thinking or conscious attribute.
>>
>
> The original suggestion by Russell was that "our human consciousness _is_
> a computation (and nothing but a computation)."
>
> You seem to be steering away from Russell's straightforward position. If
> human consciousness is a computation, then the computation is conscious (it
> is an identity thesis). You say that the computation cannot be conscious,
> but can support a person. It is difficult to see this as anything other
> than the introduction of a dualistic element: the computation supports a
> conscious person, but is not itself conscious? So wherein does
> consciousness exist? You are introducing some unspecified magic into the
> equation. And what characterizes those computations that can support a
> person from those that cannot?
>
>
> Thanks for the attempt to make this clearer (if not clear).  It is helpful.
>

​Was this addressed to Bruce or to me?

>
> Let me see if I can attempt some sort of answer Bruce. The utility of the
> notion of the 'universal' mechanism is precisely its ability to emulate all
> other finitely computable mechanisms. But not all such mechanisms can be
> associated with persons. What distinguishes this particular class, as
> exemplified by Bruno's modal logic toy models is, in the first instance,
> self-referentiality. This first-personal characteristic is, if you like,
> the fixed point on which every other feature is centred.
>
>
> Is this really a fixed point in the computer science sense of recursion?
> Or are you using this in a metaphorical way?
>

​Metaphorical. It might have some technical application but I'm less than
competent to judge.
​

>
>
> Then, with respect to this fixed point of view, a distinction can be made
> between what is 'believed' (essentially, provably true as a theorem and
> hence communicable) by the machine and what is true *about* the machine
> (essentially, not provable as a theorem and hence not communicable, but
> nonetheless 'epistemically' true).
>
>
> As I understood it, the things that are true but unprovable are not about
> the machine (although they may characterize it in some sense); they are just
> sentences whose proof by that machine would lead to contradiction, but
> could be proven by a different machine.
>

​Yes, in my understanding that's what makes them true *about* the machine.
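The canonical example, stated for reference (standard Gödel material, not
a new claim): for a consistent machine/theory T, the Gödel sentence G
satisfies

    \[ T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner) \]

so T cannot prove G without becoming inconsistent, yet the strictly
stronger machine T + Con(T) does prove it. In that sense G is true *about*
T without being provable *by* T.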
​

>
>
>
> Any machine possessing the foregoing features is in principle conscious,
> in the sense of having implicit self-referential epistemic access to
> non-communicable truths that are nonetheless entailed by its explicit and
> communicable 'beliefs'.
>
>
> But why is not being able to prove some sentence like "I'm lying" an
> example of epistemic access to a non-communicable truth like "I'm
> experiencing red"?
>

​Well, I take it that you aren't actually trying to 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Brent Meeker



On 1/5/2018 2:17 PM, David Nyman wrote:


To take a more realistic example.


​I do so love your appropriation of ​the terms 'real' and 'realistic' 
to your own theories.


I think a Mars Rover reporting its battery is low is more realistic than 
it seeing an apple in the most common sense of "realistic".


Brent



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 January 2018 at 21:51, Brent Meeker  wrote:

>
>
> On 1/5/2018 6:48 AM, David Nyman wrote:
>
> On 5 January 2018 at 14:06, Jason Resch  wrote:
>
>>
>>
>> On Friday, January 5, 2018, David Nyman  wrote:
>>
>>>
>>>
>>> On 5 Jan 2018 03:22, "Bruce Kellett"  wrote:
>>>
>>> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
>>>
 On Jan 4, 2018, at 12:50 PM, Bruce Kellett wrote:
>
> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>
>> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>>
>>> On 29/12/2017 10:14 am, Russell Standish wrote:
>>>
 This is computationalism - the idea that our human consciousness _is_
 a computation (and nothing but a computation).

>>> What distinguishes a conscious computation within the class of all
>>> computations? After all, not all computations are conscious.
>>>
>> Universality seems enough.
>>
> What is a universal computation? From what you say below, universality
> appears to be a property of a machine, not of a computation.
>
 OK, universality is an attribute of a machine, relatively to some
 universal machinery, like arithmetic or physics.


 But just universality gives rise only to a highly non-standard,
>> dissociative, form of consciousness. It might correspond to the cosmic
>> consciousness alluded to by people living through highly altered states
>> of consciousness.
>>
>> You need Löbianity to get *self-consciousness*, or reflexive
>> consciousness. A machine is Löbian when its universality is knowable by 
>> it.
>> Equivalently, when the machine is universal and can prove its own "Löb's
>> formula". []([]p -> p) -> []p. Note that the second incompleteness 
>> theorem
>> is the particular case with p = f (f = "0≠1").
>>
> Löbianity is a property of the machine, not of the computation.
>
 Yes. The same. I was talking about machines, or about the person
 supported by those machines. No machine (conceived as a code, number,
 or physical object) can ever be conscious or think. It is always a more
 abstract notion implemented through some machinery which does the thinking.

 Similarly a computation cannot be conscious, but it can support a
 person, which is the one having genuinely the thinking or conscious
 attribute.

>>>
>>> The original suggestion by Russell was that "our human consciousness
>>> _is_ a computation (and nothing but a computation)."
>>>
>>> You seem to be steering away from Russell's straightforward position. If
>>> human consciousness is a computation, then the computation is conscious (it
>>> is an identity thesis). You say that the computation cannot be conscious,
>>> but can support a person. It is difficult to see this as anything other
>>> than the introduction of a dualistic element: the computation supports a
>>> conscious person, but is not itself conscious? So wherein does
>>> consciousness exist? You are introducing some unspecified magic into the
>>> equation. And what characterizes those computations that can support a
>>> person from those that cannot?
>>>
>>>
>>> Let me see if I can attempt some sort of answer Bruce. The utility of
>>> the notion of the 'universal' mechanism is precisely its ability to emulate
>>> all other finitely computable mechanisms. But not all such mechanisms can
>>> be associated with persons. What distinguishes this particular class, as
>>> exemplified by Bruno's modal logic toy models is, in the first instance,
>>> self-referentiality. This first-personal characteristic is, if you like,
>>> the fixed point on which every other feature is centred. Then, with respect
>>> to this fixed point of view, a distinction can be made between what is
>>> 'believed' (essentially, provably true as a theorem and hence communicable)
>>> by the machine and what is true *about* the machine (essentially, not
>>> provable as a theorem and hence not communicable, but nonetheless
>>> 'epistemically' true).
>>>
>>
>>
>> Given the above, what would be the shortest program with these properties?
>>
>> Is the Mars Rover conscious?
>>
>
> ​Don't forget that the acid test is Yes (or No) Doctor.​ So AFAICT the
> only way to assess whether the Mars Rover is conscious would be against the
> same criteria as for any other independent agent. With human persons,
> assuming truthfulness and consistency, if for example you tell me that you
> see an apple, and that by this you do not alternatively mean to say that
> you have somehow indirectly figured out that your body and an apple are in
> some third-person relation, I typically accept your statement as the
> behavioural corollary of the epistemic truth to which you refer.
>
>
> To take a more realistic example.
>

​I do so love your appropriation of ​the terms 'real' and 'realistic' to
your 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread agrayson2000


On Friday, January 5, 2018 at 2:16:56 PM UTC-7, John Clark wrote:
>
> On Fri, Jan 5, 2018 at 3:19 PM,  wrote:
>
> ​> ​
>>> You
>>>  seem to believe in a primary 
>>> ​mathematical​
>>>  universe
>>> ​, ​but matter by itself can do mathematics however mathematics by 
>>> itself can't do matter.
>>>
>>
>> ​> ​
>> But that's what the MWI does! It creates matter, indeed huge sets of 
>> universes, by inferring a superposition never collapses. AG
>>
>
> *​But the MWI has the help of your flying saucer men in their secret 
> headquarters at Roswell.*
>

*That's a really dumb-shit reply. There could be a brown dwarf star nearby, 
so far undetected, and aliens could have evolved on one of its planets. 
Other possibilities exist. And so far you have failed to respond about your 
easily falsified, overly broad claims regarding the deficiencies of 
witness reports. You utterly fail to take CONTEXT into account. Can a 
dumb-shit recall anything that happened on the day his/her children were 
born many years ago? AG *

>
> *John K Clark ​ *
>
>
>  
>



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Brent Meeker



On 1/5/2018 6:48 AM, David Nyman wrote:
On 5 January 2018 at 14:06, Jason Resch wrote:

On Friday, January 5, 2018, David Nyman wrote:

On 5 Jan 2018 03:22, "Bruce Kellett" wrote:

On 4/01/2018 11:59 pm, Bruno Marchal wrote:

On Jan 4, 2018, at 12:50 PM, Bruce Kellett wrote:

On 4/01/2018 12:30 am, Bruno Marchal wrote:

On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

On 29/12/2017 10:14 am, Russell Standish wrote:

This is computationalism - the idea
that our human consciousness _is_
a computation (and nothing but a
computation).

What distinguishes a conscious computation
within the class of all computations?
After all, not all computations are conscious.

Universality seems enough.

What is a universal computation? From what you say
below, universality appears to be a property of a
machine, not of a computation.

OK, universality is an attribute of a machine,
relatively to some universal machinery, like
arithmetic or physics.


But just universality gives rise only to a
highly non-standard, dissociative form of
consciousness. It might correspond to the
cosmic consciousness alluded to by people
living in highly altered states of consciousness.

You need Löbianity to get
*self-consciousness*, or reflexive
consciousness. A machine is Löbian when its
universality is knowable by it. Equivalently,
when the machine is universal and can prove
its own "Löb's formula". []([]p -> p) -> []p.
Note that the second incompleteness theorem is
the particular case with p = f (f = "0≠1").

Löbianity is a property of the machine, not of the
computation.

Yes, the same. I was talking about the machine, or about
the person supported by those machines. No machine (as
conceived as a code, number, or physical object) can ever
be conscious or think. It is always a more abstract
notion implemented through some machinery which does
the thinking.

Similarly a computation cannot be conscious, but it
can support a person, which is the one that genuinely
has the thinking or conscious attribute.


The original suggestion by Russell was that "our human
consciousness _is_ a computation (and nothing but a
computation)."

You seem to be steering away from Russell's
straightforward position. If human consciousness is a
computation, then the computation is conscious (it is an
identity thesis). You say that the computation cannot be
conscious, but can support a person. It is difficult to
see this as anything other than the introduction of a
dualistic element: the computation supports a conscious
person, but is not itself conscious? So wherein does
consciousness exist? You are introducing some unspecified
magic into the equation. And what distinguishes those
computations that can support a person from those that cannot?


Let me see if I can attempt some sort of answer, Bruce. The
utility of the notion of the 'universal' mechanism is
precisely its ability to emulate all other finitely computable
mechanisms. But not all such mechanisms can be associated with
persons. What distinguishes this particular class, as
exemplified by Bruno's modal logic toy models, is, in the first
instance, self-referentiality. This first-personal
characteristic is, if you like, the fixed point on which every
other feature is centred. Then, with respect to this fixed
point of view, a distinction can be made between what is
'believed' (essentially, provably true as a theorem and hence
communicable) by the machine and what is true *about* the
machine (essentially, 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread John Clark
On Fri, Jan 5, 2018 at 3:19 PM,  wrote:

>> You seem to believe in a primary mathematical universe, but matter by
>> itself can do mathematics, however mathematics by itself can't do matter.
>>
>
> But that's what the MWI does! It creates matter, indeed huge sets of
> universes, by inferring that a superposition never collapses. AG
>

*But the MWI has the help of your flying saucer men in their secret
headquarters at Roswell.*

*John K Clark ​ *



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Brent Meeker



On 1/5/2018 4:00 AM, David Nyman wrote:



On 5 Jan 2018 03:22, "Bruce Kellett" wrote:


On 4/01/2018 11:59 pm, Bruno Marchal wrote:

On Jan 4, 2018, at 12:50 PM, Bruce Kellett wrote:

On 4/01/2018 12:30 am, Bruno Marchal wrote:

On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

On 29/12/2017 10:14 am, Russell Standish wrote:

This is computationalism - the idea that our
human consciousness _is_
a computation (and nothing but a computation).

What distinguishes a conscious computation within
the class of all computations? After all, not all
computations are conscious.

Universality seems enough.

What is a universal computation? From what you say below,
universality appears to be a property of a machine, not of
a computation.

OK, universality is an attribute of a machine, relatively to
some universal machinery, like arithmetic or physics.


But just universality gives rise only to a highly
non-standard, dissociative form of consciousness. It
might correspond to the cosmic consciousness alluded to
by people living in highly altered states of consciousness.

You need Löbianity to get *self-consciousness*, or
reflexive consciousness. A machine is Löbian when its
universality is knowable by it. Equivalently, when the
machine is universal and can prove its own "Löb's
formula". []([]p -> p) -> []p. Note that the second
incompleteness theorem is the particular case with p =
f (f = "0≠1").
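
A quick mechanical check of the formula just quoted: the provability
logic GL is valid on finite transitive irreflexive Kripke frames, so a
brute-force search over small frames should find no counterexample to
[]([]p -> p) -> []p. A minimal sketch in Python -- an illustration of
the formula only, not of the larger schema under discussion:

from itertools import product

worlds = [0, 1, 2]
pairs = [(u, v) for u in worlds for v in worlds]

def box(R, truth):
    # []phi holds at w iff phi holds at every R-successor of w
    return {w: all(truth[v] for v in worlds if (w, v) in R) for w in worlds}

def implies(a, b):
    return {w: (not a[w]) or b[w] for w in worlds}

def transitive_irreflexive(R):
    # finite + transitive + irreflexive => conversely well-founded (GL frames)
    if any((w, w) in R for w in worlds):
        return False
    return all((a, c) in R for (a, b) in R for (b2, c) in R if b2 == b)

checked = 0
for bits in product([False, True], repeat=len(pairs)):
    R = {pr for pr, keep in zip(pairs, bits) if keep}
    if not transitive_irreflexive(R):
        continue
    for vals in product([False, True], repeat=len(worlds)):
        p = dict(zip(worlds, vals))
        lob = implies(box(R, implies(box(R, p), p)), box(R, p))
        assert all(lob.values()), "counterexample to Löb's formula"
        checked += 1
print("Löb's formula held in all", checked, "frame/valuation pairs")

Substituting p = f gives []([]f -> f) -> []f: since []f -> f is just
consistency, a machine that proves its own consistency proves f, which
is the second incompleteness theorem mentioned above.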

Löbianity is a property of the machine, not of the computation.

Yes, the same. I was talking about the machine, or about the person
supported by those machines. No machine (as conceived as a
code, number, or physical object) can ever be conscious or think.
It is always a more abstract notion implemented through some
machinery which does the thinking.

Similarly a computation cannot be conscious, but it can
support a person, which is the one that genuinely has the
thinking or conscious attribute.


The original suggestion by Russell was that "our human
consciousness _is_ a computation (and nothing but a computation)."

You seem to be steering away from Russell's straightforward
position. If human consciousness is a computation, then the
computation is conscious (it is an identity thesis). You say that
the computation cannot be conscious, but can support a person. It
is difficult to see this as anything other than the introduction
of a dualistic element: the computation supports a conscious
person, but is not itself conscious? So wherein does consciousness
exist? You are introducing some unspecified magic into the
equation. And what distinguishes those computations that can
support a person from those that cannot?



Thanks for the attempt to make this clearer (if not clear).  It is helpful.

Let me see if I can attempt some sort of answer, Bruce. The utility of 
the notion of the 'universal' mechanism is precisely its ability to 
emulate all other finitely computable mechanisms. But not all such 
mechanisms can be associated with persons. What distinguishes this 
particular class, as exemplified by Bruno's modal logic toy models, is, 
in the first instance, self-referentiality. This first-personal 
characteristic is, if you like, the fixed point on which every other 
feature is centred.


Is this really a fixed point in the computer science sense of 
recursion?  Or are you using this in a metaphorical way?
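
The computer-science sense here would be Kleene's second recursion
theorem: for any computable transformation of programs there is a
program that behaves like its own transform, which is what licenses
self-reference. Its simplest concrete instance is a quine -- a program
whose output is its own source, a fixed point of the source-to-output
map. A standard two-line Python example, offered only to pin down the
non-metaphorical sense:

s = 's = %r\nprint(s %% s)'
print(s % s)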


Then, with respect to this fixed point of view, a distinction can be 
made between what is 'believed' (essentially, provably true as a 
theorem and hence communicable) by the machine and what is true 
*about* the machine (essentially, not provable as a theorem and hence 
not communicable, but nonetheless 'epistemically' true).


As I understood it, the things that are true but unprovable are not about 
the machine (although they may characterize it in some sense); they are 
just sentences whose proof by that machine would lead to contradiction, 
but which could be proven by a different machine.




Any machine possessing the foregoing features is in principle 
conscious, in the sense of having implicit self-referential epistemic 
access to non-communicable truths that are nonetheless entailed by its 
explicit and communicable 'beliefs'.


But why is not being able to prove some sentence like "I'm lying." an 
example of epistemic access to a 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread agrayson2000


On Friday, January 5, 2018 at 12:44:53 PM UTC-7, John Clark wrote:
>
> On Tue, Jan 2, 2018 at 8:43 AM, Bruno Marchal  > wrote:
>
> ​>> ​
>>> ​in their work real AI scientists don't use personal pronouns with no 
>>> clear referent, ​nor silly homemade terminology.
>>
>>
>> ​> ​
>> See my papers for the mathematical definitions of all pronouns
>>
>
> ​Right, just like your papers explain why pigs fly.​
>  
>  
>
>> ​> ​
>> You did explicitly agree that if two computers, localized in two 
>> different places, but numerically (digitally) identical, and running the 
>> same program, would not, in case they emulate a "conscious program", allow 
>> to that consciousness to be able to localize itself.
>>
>
> ​Yes, and a third party couldn't do it either because consciousness 
> doesn't have a location, but it's not unique in that regard: "green" or 
> "fast" or "big" doesn't have a location either.
>
> ​> ​
>> That fact is only needed to understand how to extract physics from 
>> elementary arithmetic.
>>
>
> ​That could only be done if arithmetic is more fundamental than physics 
> and I see no evidence that is true, but I do see some evidence it is false. 
>  ​
>  
>  
>
>> ​> ​
>> you seem to believe in a primary physical universe,
>>
>
> You seem to believe in a primary mathematical universe, but matter by 
> itself can do mathematics, however mathematics by itself can't do matter.
>

*But that's what the MWI does! It creates matter, indeed huge sets of 
universes, by inferring that a superposition never collapses. AG*

>
>  John K Clark
>  
>
>
>
>
>>
>>



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruno Marchal

> On 5 Jan 2018, at 04:22, Bruce Kellett  wrote:
> 
> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
>>> On Jan 4, 2018, at 12:50 PM, Bruce Kellett  
>>> wrote:
>>> 
>>> On 4/01/2018 12:30 am, Bruno Marchal wrote:
 On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
> On 29/12/2017 10:14 am, Russell Standish wrote:
>> This is computationalism - the idea that our human consciousness _is_
>> a computation (and nothing but a computation).
> What distinguishes a conscious computation within the class of all 
> computations? After all, not all computations are conscious.
 Universality seems enough.
>>> What is a universal computation? From what you say below, universality 
>>> appears to be a property of a machine, not of a computation.
>> OK, universality is an attribute of a machine, relatively to some universal 
>> machinery, like arithmetic or physics.
>> 
>> 
 But just universality gives rise only to a highly non-standard, 
 dissociative form of consciousness. It might correspond to the cosmic 
 consciousness alluded to by people living in highly altered states of 
 consciousness.
 
 You need Löbianity to get *self-consciousness*, or reflexive 
 consciousness. A machine is Löbian when its universality is knowable by 
 it. Equivalently, when the machine is universal and can prove its own 
 "Löb's formula". []([]p -> p) -> []p. Note that the second incompleteness 
 theorem is the particular case with p = f (f = "0≠1").
>>> Löbianity is a property of the machine, not of the computation.
>> Yes, the same. I was talking about the machine, or about the person 
>> supported by those machines. No machine (as conceived as a code, number, or 
>> physical object) can ever be conscious or think. It is always a more 
>> abstract notion implemented through some machinery which does the thinking.
>> 
>> Similarly a computation cannot be conscious, but it can support a person, 
>> which is the one that genuinely has the thinking or conscious attribute.
> 
> The original suggestion by Russell was that "our human consciousness _is_ a 
> computation (and nothing but a computation).”

You can’t equate a first person notion with a third person notion. You can try 
to associate them, but consciousness will involve an undefinable notion and a 
reference to some notion of truth.



> 
> You seem to be steering away from Russell's straightforward position.

I think you over-interpret Russell, who might have committed an “abuse of 
language”, forgetting that not everyone knows we have already discussed this. I 
don’t know. His theory mentions all infinite strings, that is, the reals, which 
are not Turing universal. His approach, which has things in common with the 
consequences of comp, is not per se comp driven.



> If human consciousness is a computation,

That does not make any sense. A mental attribute of a first person cannot be 
identified with any third person notion. But you can associate a mentality with 
higher-order number relations whose semantics involve themselves. Consciousness 
starts as the bet on a reality and ends with the awakening of a person.



> then the computation is conscious (it is an identity thesis). You say that 
> the computation cannot be conscious, but can support a person. It is 
> difficult to see this as anything other than the introduction of a dualistic 
> element: the computation supports a conscious person, but is not itself 
> conscious? So wherein does consciousness exist? You are introducing some 
> unspecified magic into the equation. And what distinguishes those 
> computations that can support a person from those that cannot?

Do you agree with the following propositions, with “I” denoting you:

1) I am conscious (here and now)

2) I can’t prove that I am conscious to any ill-willed person (like an alien 
who would believe that you are a zombie)

3) I cannot doubt consciousness

4) I cannot define consciousness.

5) I know that I am conscious.

If yes, I define the consciousness of the person associated to the 
machine-number u, in some base, as an attribute (definable in arithmetic, or in 
second-order arithmetic) which verifies those propositions. You might add 
something to make it more particular, but we have to reason about something, 
and be sure we are talking about the same thing.

Consciousness is a cousin of knowledge: it is private knowledge that something 
is happening. It can focus and un-focus attention, and speed up the relative 
computations. It makes you enjoy the good coffee and be disgusted by the bad one.

Bruno



> 
> Bruce
> 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 Jan 2018 19:27, "Bruno Marchal"  wrote:


On 4 Jan 2018, at 21:07, David Nyman  wrote:



On 4 Jan 2018 18:16, "Bruno Marchal"  wrote:


On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:

On 4 January 2018 at 11:55, Bruno Marchal  wrote:

>
> > On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
> >
> >
> >
> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
> >>
> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
> >>
> >>>
> >>>
> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>  Now, it
>  could be that intelligent behavior implies mind, but as you yourself
>  argue, we don't know that.
> >>>
> >>> Isn't this at the crux of the scientific study of the mind? There
> seemed to be universal agreement on this list that a philosophical zombie
> is impossible.
> >>
> >>
> >> Precisely: that a philosophical zombie is impossible when we assume
> Mechanism.
> >
> > But the consensus here has been that a philosophical zombie is
> impossible because it exhibits intelligent behavior.
>
> Well, I think the consensus here is that computationalism is far more
> plausible than non-computationalism.
> Computationalism makes zombies nonsensical.
>
>
>
> >
> >> Philosophical zombies remain logically consistent for a non
> >> computationalist theory of mind.
> >
> > It's logically consistent with a computationalist theory of brain. It is
> > only inconsistent with a computationalist theory of mind because we
> > include as an axiom that computation produces mind.  One can as well say
> > that intelligent behavior entails mind as an axiom of physicalism.  Logic
> > is a very cheap standard for theories to meet.
>
> At first sight, zombies seem consistent with computationalism, but the
> notion of zombies requires the idea that we attribute mind to bodies
> (having the right behavior). But with computationalism, mind is never
> associated with a body, but only with the person having the infinity of
> (similar enough) bodies' relative representations in arithmetic. There are no
> “real bodies” or “ontological bodies”, so the notion of zombie becomes
> senseless. Consciousness is associated with the person, which is never
> determined by one body.
>

​So in the light of what you say above, does it then follow that the MGA
implies (assuming comp) that a physical system does *not* in fact implement
a computation in the relevant sense?



The physical world has to be able to implement the computation in the
relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD
“act of faith”.

The physical world is a persistent illusion. It has to be persistent enough
that you wake up at the hospital with the digital brain.



I ask this because you say mind is *never* associated with a body, but mind
*is* associated with computation via the epistemic consequences of
universality.



A (conscious) third person can associate a mind/person to a body that he
perceives. It is polite.

The body perceived by that third person is itself a construction of its own
mind, and with computationalism (but also with QM), we know that such a
body is an (evolving) map of where, and in which states, we could find, say,
the electrons and protons of that body, and such a snapshot is only a
computational state among infinitely many others which would work as well,
with respect to the relevant computations which brought its conscious state.
Now, the conscious first person cannot associate itself to any particular
body or computation.

Careful: sometimes I say that a machine can think, or maybe (I usually
avoid) that a computation can think or be conscious. It always means,
respectively, that a machine can make a person capable of manifesting
itself relatively to you. But the machine and the body are local relative
representations.

A machine cannot think, and a computation (which is the (arithmetical)
dynamic 3p view of the sequence of the relative static machine/state)
cannot think. Only a (first) person can think, and to use that thinking
with respect to another person, a machine is handy, like a brain or a
physical computer.

The person is in heaven (arithmetical truth) and on earth (sigma_1
arithmetical truth), simultaneously. But this belongs to G*, and I should
stay mute, or insist that we are in the “after-act-of-faith” position of
the one betting that comp is true, and … assuming comp is true. It is
subtle to talk on those things, and it is important to admit that we don’t
know the truth (or we do get inconsistent and fall in the theological trap).



If so, according to comp, it would follow that (the material appearance and
behaviour of) a body cannot be considered *causally* relevant to the
computation-mind polarity,



Yes, that is true, with respect to the arithmetical truth (where there are
no bodies, only natural numbers), and false for the physical realm, which,
despite being a statistics on dreams (associated to computations in

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread John Clark
On Tue, Jan 2, 2018 at 8:43 AM, Bruno Marchal  wrote:

​>> ​
>> ​in their work real AI scientists don't use personal pronouns with no
>> clear referent, ​nor silly homemade terminology.
>
>
> ​> ​
> See my papers for the mathematical definitions of all pronouns
>

​Right, just like your papers explain why pigs fly.​



> ​> ​
> You did explicitly agree that if two computers, localized in two different
> places, but numerically (digitally) identical, and running the same
> program, would not, in case they emulate a "conscious program", allow to
> that consciousness to be able to localize itself.
>

​Yes, and a third party couldn't do it either because consciousness doesn't
have a location, but it's not unique in that regard: "green" or "fast"
or "big" doesn't have a location either.

​> ​
> That fact is only needed to understand how to extract physics from
> elementary arithmetic.
>

​That could only be done if arithmetic is more fundamental than physics and
I see no evidence that is true, but I do see some evidence it is false.  ​



> ​> ​
> you seem to believe in a primary physical universe,
>

You seem to believe in a primary mathematical universe, but matter by
itself can do mathematics, however mathematics by itself can't do matter.

 John K Clark





>
>



Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruno Marchal

> On 4 Jan 2018, at 21:07, David Nyman  wrote:
> 
> 
> 
> On 4 Jan 2018 18:16, "Bruno Marchal"  > wrote:
> 
>> On Jan 4, 2018, at 1:22 PM, David Nyman > > wrote:
>> 
>> On 4 January 2018 at 11:55, Bruno Marchal > > wrote:
>> 
>> > On Jan 3, 2018, at 10:57 PM, Brent Meeker > > > wrote:
>> >
>> >
>> >
>> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>> >>
>> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>> >>
>> >>>
>> >>>
>> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>  Now, it
>>  could be that intelligent behavior implies mind, but as you yourself
>>  argue, we don't know that.
>> >>>
>> >>> Isn't this at the crux of the scientific study of the mind? There seemed 
>> >>> to be universal agreement on this list that a philosophical zombie is 
>> >>> impossible.
>> >>
>> >>
>> >> Precisely: that a philosophical zombie is impossible when we assume 
>> >> Mechanism.
>> >
>> > But the consensus here has been that a philosophical zombie is impossible 
>> > because it exhibits intelligent behavior.
>> 
>> Well, I think the consensus here is that computationalism is far more 
>> plausible than non-computationalism.
>> Computationalism makes zombies nonsensical.
>> 
>> 
>> 
>> >
>> >> Philosophical zombies remain logically consistent for a non 
>> >> computationalist theory of mind.
>> >
>> > It's logically consistent with a computationalist theory of brain. It is 
>> > only inconsistent with a computationalist theory of mind because we 
>> > include as an axiom that computation produces mind.  One can as well say 
>> > that intelligent behavior entails mind as an axiom of physicalism.  Logic 
>> > is a very cheap standard for theories to meet.
>> 
>> At first sight, zombies seem consistent with computationalism, but the 
>> notion of zombies requires the idea that we attribute mind to bodies (having 
>> the right behavior). But with computationalism, mind is never associated with 
>> a body, but only with the person having the infinity of (similar enough) 
>> bodies' relative representations in arithmetic. There are no “real bodies” or 
>> “ontological bodies”, so the notion of zombie becomes senseless. 
>> Consciousness is associated with the person, which is never determined by 
>> one body.
>> 
>> ​So in the light of what you say above, does it then follow that the MGA 
>> implies (assuming comp) that a physical system does *not* in fact implement 
>> a computation in the relevant sense?
> 
> 
> The physical world has to be able to implement the computation in the 
> relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD “act 
> of faith”.
> 
> The physical world is a persistent illusion. It has to be persistent enough 
> that you wake up at the hospital with the digital brain.
> 
> 
> 
>> I ask this because you say mind is *never* associated with a body, but mind 
>> *is* associated with computation via the epistemic consequences of 
>> universality.
> 
> 
> A (conscious) third person can associate a mind/person to a body that he 
> perceives. It is polite. 
> 
> The body perceived by that third person is itself a construction of its own 
> mind, and with computationalism (but also with QM), we know that such a body 
> is an (evolving) map of where, and in which states, we could find, say, the 
> electrons and protons of that body, and such a snapshot is only a computational 
> state among infinitely many others which would work as well, with respect to 
> the relevant computations which brought its conscious state.
> Now, the conscious first person cannot associate itself to any particular 
> body or computation.
> 
> Careful: sometimes I say that a machine can think, or maybe (I usually avoid) 
> that a computation can think or be conscious. It always means, respectively, 
> that a machine can make a person capable of manifesting itself relatively to 
> you. But the machine and the body are local relative representations.
> 
> A machine cannot think, and a computation (which is the (arithmetical) 
> dynamic 3p view of the sequence of the relative static machine/state) cannot 
> think. Only a (first) person can think, and to use that thinking with respect 
> to another person, a machine is handy, like a brain or a physical computer.
> 
> The person is in heaven (arithmetical truth) and on earth (sigma_1 
> arithmetical truth), simultaneously. But this belongs to G*, and I should 
> stay mute, or insist that we are in the “after-act-of-faith” position of the 
> one betting that comp is true, and … assuming comp is true. It is subtle to 
> talk on those things, and it is important to admit that we don’t know the 
> truth (or we do get inconsistent and fall in the theological trap).
> 
> 
> 
>> If so, according to comp, it would follow that (the material appearance and 
>> 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 January 2018 at 14:06, Jason Resch  wrote:

>
>
> On Friday, January 5, 2018, David Nyman  wrote:
>
>>
>>
>> On 5 Jan 2018 03:22, "Bruce Kellett"  wrote:
>>
>> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
>>
>>> On Jan 4, 2018, at 12:50 PM, Bruce Kellett wrote:

 On 4/01/2018 12:30 am, Bruno Marchal wrote:

> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>
>> On 29/12/2017 10:14 am, Russell Standish wrote:
>>
>>> This is computationalism - the idea that our human consciousness _is_
>>> a computation (and nothing but a computation).
>>>
>> What distinguishes a conscious computation within the class of all
>> computations? After all, not all computations are conscious.
>>
> Universality seems enough.
>
 What is a universal computation? From what you say below, universality
 appears to be a property of a machine, not of a computation.

>>> OK, universality is an attribute of a machine, relatively to some
>>> universal machinery, like arithmetic or physics.
>>>
>>>
>>> But just universality gives rise only to a highly non-standard,
> dissociative form of consciousness. It might correspond to the cosmic
> consciousness alluded to by people living in highly altered states of
> consciousness.
>
> You need Löbianity to get *self-consciousness*, or reflexive
> consciousness. A machine is Löbian when its universality is knowable by 
> it.
> Equivalently, when the machine is universal and can prove its own "Löb's
> formula". []([]p -> p) -> []p. Note that the second incompleteness theorem
> is the particular case with p = f (f = "0≠1").
>
 Löbianity is a property of the machine, not of the computation.

>>> Yes, the same. I was talking about the machine, or about the person supported
>>> by those machines. No machine (as conceived as a code, number, or physical
>>> object) can ever be conscious or think. It is always a more abstract notion
>>> implemented through some machinery which does the thinking.
>>>
>>> Similarly a computation cannot be conscious, but it can support a
>>> person, which is the one that genuinely has the thinking or conscious
>>> attribute.
>>>
>>
>> The original suggestion by Russell was that "our human consciousness _is_
>> a computation (and nothing but a computation)."
>>
>> You seem to be steering away from Russell's straightforward position. If
>> human consciousness is a computation, then the computation is conscious (it
>> is an identity thesis). You say that the computation cannot be conscious,
>> but can support a person. It is difficult to see this as anything other
>> than the introduction of a dualistic element: the computation supports a
>> conscious person, but is not itself conscious? So wherein does
>> consciousness exist? You are introducing some unspecified magic into the
>> equation. And what distinguishes those computations that can support a
>> person from those that cannot?
>>
>>
>> Let me see if I can attempt some sort of answer, Bruce. The utility of the
>> notion of the 'universal' mechanism is precisely its ability to emulate all
>> other finitely computable mechanisms. But not all such mechanisms can be
>> associated with persons. What distinguishes this particular class, as
>> exemplified by Bruno's modal logic toy models, is, in the first instance,
>> self-referentiality. This first-personal characteristic is, if you like,
>> the fixed point on which every other feature is centred. Then, with respect
>> to this fixed point of view, a distinction can be made between what is
>> 'believed' (essentially, provably true as a theorem and hence communicable)
>> by the machine and what is true *about* the machine (essentially, not
>> provable as a theorem and hence not communicable, but nonetheless
>> 'epistemically' true).
>>
>
>
> Given the above, what would be the shortest program with these properties?
>
> Is the Mars Rover conscious?
>

​Don't forget that the acid test is Yes (or No) Doctor.​ So AFAICT the only
way to assess whether the Mars Rover is conscious would be against the same
criteria as for any other independent agent. With human persons, assuming
truthfulness and consistency, if for example you tell me that you see an
apple, and that by this you do not alternatively mean to say that you have
somehow indirectly figured out that your body and an apple are in some
third-person relation, I typically accept your statement as the behavioural
corollary of the epistemic truth to which you refer.

I suppose, notwithstanding this, if I were a computationalist Doctor, I
might have some theory about the shortest program with the relevant
characteristics to be associated with epistemic phenomena of the relevant
sort. But I'm not and I don't, so please don't ask me to operate.

David

Jason
>
>
>>
>> Any machine possessing the foregoing features is in principle 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Jason Resch
On Friday, January 5, 2018, David Nyman  wrote:

>
>
> On 5 Jan 2018 03:22, "Bruce Kellett"  wrote:
>
> On 4/01/2018 11:59 pm, Bruno Marchal wrote:
>
>> On Jan 4, 2018, at 12:50 PM, Bruce Kellett 
>>> wrote:
>>>
>>> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>>>
 On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

> On 29/12/2017 10:14 am, Russell Standish wrote:
>
>> This is computationalism - the idea that our human consciousness _is_
>> a computation (and nothing but a computation).
>>
> What distinguishes a conscious computation within the class of all
> computations? After all, not all computations are conscious.
>
 Universality seems enough.

>>> What is a universal computation? From what you say below, universality
>>> appears to be a property of a machine, not of a computation.
>>>
>> OK, universality is an attribute of a machine, relatively to some
>> universal machinery, like arithmetic or physics.
>>
>>
>> But just universality gives rise only to a highly non-standard,
 dissociative form of consciousness. It might correspond to the cosmic
 consciousness alluded to by people living in highly altered states of
 consciousness.

 You need Löbianity to get *self-consciousness*, or reflexive
 consciousness. A machine is Löbian when its universality is knowable by it.
 Equivalently, when the machine is universal and can prove its own "Löb's
 formula". []([]p -> p) -> []p. Note that the second incompleteness theorem
 is the particular case with p = f (f = "0≠1").

>>> Löbianity is a property of the machine, not of the computation.
>>>
>> Yes, the same. I was talking about the machine, or about the person supported
>> by those machines. No machine (as conceived as a code, number, or physical
>> object) can ever be conscious or think. It is always a more abstract notion
>> implemented through some machinery which does the thinking.
>>
>> Similarly a computation cannot be conscious, but it can support a person,
>> which is the one that genuinely has the thinking or conscious attribute.
>>
>
> The original suggestion by Russell was that "our human consciousness _is_
> a computation (and nothing but a computation)."
>
> You seem to be steering away from Russell's straightforward position. If
> human consciousness is a computation, then the computation is conscious (it
> is an identity thesis). You say that the computation cannot be conscious,
> but can support a person. It is difficult to see this as anything other
> than the introduction of a dualistic element: the computation supports a
> conscious person, but is not itself conscious? So wherein does
> consciousness exist? You are introducing some unspecified magic into the
> equation. And what distinguishes those computations that can support a
> person from those that cannot?
>
>
> Let me see if I can attempt some sort of answer, Bruce. The utility of the
> notion of the 'universal' mechanism is precisely its ability to emulate all
> other finitely computable mechanisms. But not all such mechanisms can be
> associated with persons. What distinguishes this particular class, as
> exemplified by Bruno's modal logic toy models, is, in the first instance,
> self-referentiality. This first-personal characteristic is, if you like,
> the fixed point on which every other feature is centred. Then, with respect
> to this fixed point of view, a distinction can be made between what is
> 'believed' (essentially, provably true as a theorem and hence communicable)
> by the machine and what is true *about* the machine (essentially, not
> provable as a theorem and hence not communicable, but nonetheless
> 'epistemically' true).
>


Given the above, what would be the shortest program with these properties?

Is the Mars Rover conscious?

Jason


>
> Any machine possessing the foregoing features is in principle conscious,
> in the sense of having implicit self-referential epistemic access to
> non-communicable truths that are nonetheless entailed by its explicit and
> communicable 'beliefs'. Of course it's a long step from the toy model to
> the human person, but I think one can still discern the thread. The
> machine's 'beliefs' can now be represented as communicable and explicit
> third-person behaviour with respect to a 'physical' environment in which it
> is embedded; however, associated with this behaviour there are true but
> non-communicable epistemic phenomena to which the behaviour indirectly
> refers (i.e. they are true *about* the machine). An example of this would
> be any statement (or judgment, in the usual terminology of the field) you
> might make about your own phenomenal experience, as in "I see an apple". In
> behavioral terms, this statement or judgment is cashed out purely as
> physical action (neurocognitive, neuromuscular, etc). In epistemic terms
> however it cashes out as a truth (tautological and hence 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread David Nyman
On 5 Jan 2018 03:22, "Bruce Kellett"  wrote:

On 4/01/2018 11:59 pm, Bruno Marchal wrote:

> On Jan 4, 2018, at 12:50 PM, Bruce Kellett 
>> wrote:
>>
>> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>>
>>> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>>>
 On 29/12/2017 10:14 am, Russell Standish wrote:

> This is computationalism - the idea that our human consciousness _is_
> a computation (and nothing but a computation).
>
 What distinguishes a conscious computation within the class of all
 computations? After all, not all computations are conscious.

>>> Universality seems enough.
>>>
>> What is a universal computation? From what you say below, universality
>> appears to be a property of a machine, not of a computation.
>>
> OK, universality is an attribute of a machine, relatively to some
> universal machinery, like arithmetic or physics.
>
>
> But just universality gives rise only to a highly non-standard,
>>> dissociative form of consciousness. It might correspond to the cosmic
>>> consciousness alluded to by people living in highly altered states of
>>> consciousness.
>>>
>>> You need Löbianity to get *self-consciousness*, or reflexive
>>> consciousness. A machine is Löbian when its universality is knowable by it.
>>> Equivalently, when the machine is universal and can prove its own "Löb's
>>> formula". []([]p -> p) -> []p. Note that the second incompleteness theorem
>>> is the particular case with p = f (f = "0≠1").
>>>
>> Löbianity is a property of the machine, not of the computation.
>>
> Yes, the same. I was talking about the machine, or about the person supported
> by those machines. No machine (as conceived as a code, number, or physical
> object) can ever be conscious or think. It is always a more abstract notion
> implemented through some machinery which does the thinking.
>
> Similarly a computation cannot be conscious, but it can support a person,
> which is the one that genuinely has the thinking or conscious attribute.
>

The original suggestion by Russell was that "our human consciousness _is_ a
computation (and nothing but a computation)."

You seem to be steering away from Russell's straightforward position. If
human consciousness is a computation, then the computation is conscious (it
is an identity thesis). You say that the computation cannot be conscious,
but can support a person. It is difficult to see this as anything other
than the introduction of a dualistic element: the computation supports a
conscious person, but is not itself conscious? So wherein does
consciousness exist? You are introducing some unspecified magic into the
equation. And what distinguishes those computations that can support a
person from those that cannot?


Let me see if I can attempt some sort of answer, Bruce. The utility of the
notion of the 'universal' mechanism is precisely its ability to emulate all
other finitely computable mechanisms. But not all such mechanisms can be
associated with persons. What distinguishes this particular class, as
exemplified by Bruno's modal logic toy models, is, in the first instance,
self-referentiality. This first-personal characteristic is, if you like,
the fixed point on which every other feature is centred. Then, with respect
to this fixed point of view, a distinction can be made between what is
'believed' (essentially, provably true as a theorem and hence communicable)
by the machine and what is true *about* the machine (essentially, not
provable as a theorem and hence not communicable, but nonetheless
'epistemically' true).
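
To make 'universal' concrete, here is a minimal sketch (my own toy, not
Bruno's formalism): a counter-machine interpreter in Python. The
interpreter run() is the universal mechanism; any particular finite
mechanism, such as the adder below, is handed to it as data, which is
the sense in which one machine emulates all the others:

def run(prog, state):
    # prog: list of instructions over named registers; state: dict of
    # register values. inc/dec adjust a register; jz jumps on zero.
    pc = 0
    while pc < len(prog):
        op, *args = prog[pc]
        if op == 'inc':
            state[args[0]] += 1
        elif op == 'dec':
            state[args[0]] = max(0, state[args[0]] - 1)
        elif op == 'jz':
            reg, target = args
            if state[reg] == 0:
                pc = target
                continue
        pc += 1
    return state

# One particular mechanism, expressed as data: add r0 into r1.
adder = [('jz', 'r0', 4), ('dec', 'r0'), ('inc', 'r1'), ('jz', 'zero', 0)]
print(run(adder, {'r0': 3, 'r1': 4, 'zero': 0}))  # r1 ends up 7

Unbounded counter machines of this kind are Turing complete, so run()
already has the universality being discussed; what it lacks is the
self-referential, Löbian structure described above.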

Any machine possessing the foregoing features is in principle conscious, in
the sense of having implicit self-referential epistemic access to
non-communicable truths that are nonetheless entailed by its explicit and
communicable 'beliefs'. Of course it's a long step from the toy model to
the human person, but I think one can still discern the thread. The
machine's 'beliefs' can now be represented as communicable and explicit
third-person behaviour with respect to a 'physical' environment in which it
is embedded; however, associated with this behaviour there are true but
non-communicable epistemic phenomena to which the behaviour indirectly
refers (i.e. they are true *about* the machine). An example of this would
be any statement (or judgment, in the usual terminology of the field) you
might make about your own phenomenal experience, as in "I see an apple". In
behavioral terms, this statement or judgment is cashed out purely as
physical action (neurocognitive, neuromuscular, etc). In epistemic terms
however it cashes out as a truth (tautological and hence undoubtable)
that's implied by that same behaviour - in other words that it is in fact
the case that you really do see an apple.

It's important to take into account that all the terms employed are given
a precise technical sense in mathematical logic, emulable in computation.
Bruno's computational schema is an attempt (motivated by 


Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruno Marchal

> On 4 Jan 2018, at 19:31, Brent Meeker  wrote:
> 
> 
> 
> On 1/4/2018 3:55 AM, Bruno Marchal wrote:
>>> On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 1/3/2018 5:47 AM, Bruno Marchal wrote:
 On 03 Jan 2018, at 03:39, Brent Meeker wrote:
 
> 
> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>> Now, it
>> could be that intelligent behavior implies mind, but as you yourself
>> argue, we don't know that.
> Isn't this at the crux of the scientific study of the mind? There seemed 
> to be universal agreement on this list that a philosophical zombie is 
> impossible.
 
 Precisely: that a philosophical zombie is impossible when we assume 
 Mechanism.
>>> But the consensus here has been that a philosophical zombie is impossible 
>>> because it exhibits intelligent behavior.
>> Well, I think the consensus here is that computationalism is far more 
>> plausible than non-computationalism.
>> Computationalism makes zombies nonsensical.
>> 
>> 
>> 
 Philosophical zombies remain logically consistent for a non computationalist 
 theory of mind.
>>> It's logically consistent with a computationalist theory of brain. It is 
>>> only inconsistent with a computationalist theory of mind because we 
>>> include as an axiom that computation produces mind.  One can as well say 
>>> that intelligent behavior entails mind as an axiom of physicalism.  Logic 
>>> is a very cheap standard for theories to meet.
>> At first sight, zombies seem consistent with computationalism, but the 
>> notion of zombies requires the idea that we attribute mind to bodies (having 
>> the right behavior). But with computationalism, mind is never associated with 
>> a body, but only with the person having the infinity of (similar enough) 
>> bodies' relative representations in arithmetic. There are no “real bodies” or 
>> “ontological bodies”, so the notion of zombie becomes senseless. 
>> Consciousness is associated with the person, which is never determined by 
>> one body.
> 
> That's just word salad: A person has an infinity of bodies, but they're in 
> arithmetic and there are no real bodies so the person is not determined by 
> one body. ??

In the relative 3p way, the person is determined by the 
body/code/number/representation.

In the 1p way, the person is determined by the FPI on infinities of 
computations.
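
A toy illustration of this 1p/3p difference (illustrative only, not the
formal FPI treatment): after n self-duplications of the Washington/Moscow
kind, the 3p description contains all 2^n continuations, while any single
1p history is one W/M string the person could not have predicted:

import random

def third_person(n):
    # Every continuation exists in the 3p description.
    histories = ['']
    for _ in range(n):
        histories = [h + c for h in histories for c in 'WM']
    return histories

def first_person(n):
    # One lived history; unpredictable in advance (the FPI).
    return ''.join(random.choice('WM') for _ in range(n))

print(len(third_person(10)))  # 1024 branches in the 3p view
print(first_person(10))       # e.g. 'WMMWMWWWMM' -- a 1p view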






> 
>> 
>> 
>> 
>> 
 
 
 
> Of course that doesn't mean it's true. But it seems as good a working 
> hypothesis as "Yes, doctor".  And in fact it's the working hypothesis of 
> most studies of neurocognition, intelligence, and mind.
 Neuroscience and AI often bet, more or less explicitly, on mechanism, or 
 on its "strong AI" weakenings.
 
 (Note that UDA uses mechanism, but its translation in arithmetic needs only 
 strong-AI. Note that if strong AI is true, and comp false, we get 
 infinitely many zombies in arithmetic.
>>> How do you know that?
>> 
>> I was wrong. Wrote too quickly. It is only if weak AI is true, and strong AI 
>> or comp false, that there will be infinitely many zombies in arithmetic. Of 
>> course, if strong AI is false, comp is false too.
>> 
>> 
>> 
>> 
>> 
 very curious ones, which lack body and mind, but act like you and me. They 
 are quite similar to the "Bohm zombies", the beings in the branches of 
 the universal quantum wave which have no particles.
 
 
> If it's true then it provides a link from intelligent behavior to mind.
 The "non-zombie" principle is a consequence of comp, but I doubt that it 
 implies comp. It is not related to finiteness, as comp and strong AI are.
 
 
 
> We already have links from physics to brain to intelligent behavior. 
>  So why isn't this the physics based theory of mind that Bruno et al keep 
> saying is impossible?
 This is a bit ambiguous and misleading. Comp makes physics necessary, and 
 that is why with Occam, physics cannot be assumed primitively if we want 
 to use actual physics to verify or refute comp.
>>> That very much depends on what physics comp makes necessary.
>> Well, if it violates our empirical physics, comp is refuted.
> 
> That's why we need to know what physics comp makes necessary in order to test 
> it.

Yes. And we know already that its propositional logic is a quantum logic of 
some sort. 



> 
>> 
>> 
>> 
 We can of course assume physics when doing physics, but not when doing 
 computationalist theory of mind.
>>> No, but we can assume physics when doing physicalist theory of mind.
>> Yes, but then the point is that a physicalist theory of mind (like with 
>> consciousness reducing the wave) will be non-computationalist.
>> 
>> 
 OK? "physics" is necessary for machine/numbers is what makes the physical 
 assumption eliminable, and is what makes computationalism testable.
>>> But it doesn't 

Re: What falsifiability tests has computationalism passed?

2018-01-05 Thread Bruno Marchal

> On 5 Jan 2018, at 04:36, Bruce Kellett  wrote:
> 
> On 5/01/2018 1:46 am, Jason Resch wrote:
>> On Thursday, January 4, 2018, Bruce Kellett <bhkell...@optusnet.com.au> wrote:
>> On 4/01/2018 6:41 pm, Quentin Anciaux wrote:
>>> 2018-01-04 6:57 GMT+01:00 Bruce Kellett:
>>> My abacus does not talk to me.
>>> 
>>> 
>>> That would mean no computations are conscious at all...
>> 
>> No, that does not follow. Even if consciousness is a computation, it does 
>> not follow that all computations are conscious: A is a B does not imply that 
>> all Bs are As.
>> 
>>> technically your abacus is Turing complete (well it has to be large 
>>> enough), so it could run a conscious computation... but that doesn't mean 
>>> that computation could talk to you, for that it would also need an I/O 
>>> system with our reality.
>> 
>> No, it does not have the necessary I/O equipment. But, as above, even Turing 
>> completeness does not mean that every computation such a Turing machine 
>> makes is conscious.
>> 
>> The real point of my original comment was that the only way you can 
>> distinguish a conscious computation from a non-conscious one is if it is 
>> conscious. In other words, the suggestion that consciousness is a 
>> computation tells you absolutely nothing interesting about consciousness.
>> 
>> 
>> Bruce
>> 
>> It tells you that consciousness can be multiply realized. That your "soul" 
>> can be resurrected by the right machine. It tells you that teleportation is 
>> possible. It tells you that first person indeterminacy is an artifact of 
>> duplication. It tells you that if arithmetic is real we must explain physics 
>> from the sum of experiences of the infinite consciousness Turing machines 
>> who exist in arithmetic.
> 
> If consciousness is a computation implementable on a universal Turing 
> machine, then these things might follow, but that does not really tell us 
> what consciousness is: it does not tell us which computations are conscious 
> and which are not.

We know very well what the factorial function is, but we can prove that there 
is no algorithmic procedure, or machine, ever able to distinguish a program 
computing factorial from one which does not. Only in some cases can we do that. 
That remains true for computations too. (Rice's theorem: if we could decide 
the semantics, we would be able to build a permutation of programs which would 
have no fixed point, contradicting the second recursion theorem of Kleene.) 
There are very few semantic attributes of programs/machines which are 
computable. So there is no chance we will ever have a criterion to decide 
whether an entity (3p-describable) is conscious or not.
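
To see why, here is a sketch of the standard reduction (with hypothetical
names: is_factorial and run_program do not, and by Rice's theorem cannot,
exist as total programs). If we had a decider for the semantic property
"computes factorial", we could decide the halting problem:

def is_factorial(source: str) -> bool:
    # Hypothetical total decider for "this program computes factorial".
    raise NotImplementedError("cannot exist, by Rice's theorem")

def halts(machine: str, inp: str) -> bool:
    # Probe computes factorial(n) iff `machine` halts on `inp`;
    # otherwise it computes the empty function.
    probe = (
        "def f(n):\n"
        f"    run_program({machine!r}, {inp!r})  # diverges if no halt\n"
        "    import math\n"
        "    return math.factorial(n)\n"
    )
    return is_factorial(probe)  # would decide halting: contradiction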

Of course I take your expression “consciousness is a computation” as an abuse 
of language. Computations are 3p, and consciousness is 1p, and such things can 
never be identified.

Bruno



> 
> Bruce
> 



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Jason Resch
On Thu, Jan 4, 2018 at 9:36 PM, Bruce Kellett 
wrote:

> On 5/01/2018 1:46 am, Jason Resch wrote:
>
> On Thursday, January 4, 2018, Bruce Kellett <bhkell...@optusnet.com.au> wrote:
>
>> On 4/01/2018 6:41 pm, Quentin Anciaux wrote:
>>
>> 2018-01-04 6:57 GMT+01:00 Bruce Kellett :
>>
>>> My abacus does not talk to me.
>>>
>>>
>> That would mean no computations are conscious at all...
>>
>>
>> No, that does not follow. Even if consciousness is a computation, it does
>> not follow that all computations are conscious: A is a B does not imply
>> that all Bs are As.
>>
>> technically your abacus is Turing complete (well it has to be large
>> enough), so it could run a conscious computation... but that doesn't mean
>> that computation could talk to you, for that it would also need an I/O
>> system with our reality.
>>
>>
>> No, it does not have the necessary I/O equipment. But, as above, even
>> Turing completeness does not mean that every computation such a Turing
>> machine makes is conscious.
>>
>> The real point of my original comment was that the only way you can
>> distinguish a conscious computation from a non-conscious one is if it is
>> conscious. In other words, the suggestion that consciousness is a
>> computation tells you absolutely nothing interesting about consciousness.
>>
>>
>> Bruce
>>
>
> It tells you that consciousness can be multiply realized. That your "soul"
> can be resurrected by the right machine. It tells you that teleportation is
> possible. It tells you that first person indeterminacy is an artifact of
> duplication. It tells you that if arithmetic is real we must explain
> physics from the sum of experiences of the infinitely many conscious Turing
> machines who exist in arithmetic.
>
>
> If consciousness is a computation implementable on a universal Turing
> machine, then these things might follow, but that does not really tell us
> what consciousness is: it does not tell us which computations are conscious
> and which are not.
>

I agree.  Though it might give us a place to start.  My hypothesis is that
"if statements" are the atoms of consciousness, as they can put the machine
into different states based on some conditional.  You could then say that the
program is "aware" of that conditional variable.
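
A minimal sketch of that idea (the function, threshold and states are made up 
for illustration):

def toy_machine(temperature: float) -> str:
    # The "if statement" puts the machine into different states depending
    # on the conditional variable `temperature`; in the sense above, the
    # program is "aware" of that variable.
    if temperature > 30.0:
        return "state: too hot"
    return "state: comfortable"

print(toy_machine(35.0))  # -> state: too hot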

Others on this list have argued that consciousness is related to "Sigma 1
sentences", and still others that any universal machine is conscious.  In
any case, if computationalism were proved, it would tell us where to focus
our attention, rather than looking for some élan vital, some special
material as with biological naturalism, or some unknown physics like
quantum gravity.
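
For readers unfamiliar with the jargon, a Sigma_1 sentence, in the standard 
arithmetical-hierarchy sense (a gloss, not anyone's exact claim here), has the 
form:

\exists x\, \varphi(x), \qquad \varphi \in \Delta_0 \ \text{(bounded, hence decidable)}
% e.g. "machine e halts on input n", via Kleene's T predicate:
\exists t\, T(e, n, t)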

Jason



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruce Kellett

On 5/01/2018 1:46 am, Jason Resch wrote:
On Thursday, January 4, 2018, Bruce Kellett wrote:


On 4/01/2018 6:41 pm, Quentin Anciaux wrote:

2018-01-04 6:57 GMT+01:00 Bruce Kellett:

My abacus does not talk to me.


That would mean no computations are conscious at all...


No, that does not follow. Even if consciousness is a computation,
it does not follow that all computations are conscious: A is a B
does not imply that all Bs are As.


technically your abacus is Turing complete (well, it has to be
large enough), so it could run a conscious computation... but
that doesn't mean that computation could talk to you, for that it
would also need an I/O system with our reality.


No, it does not have the necessary I/O equipment. But, as above,
even Turing completeness does not mean that every computation such
a Turing machine makes is conscious.

The real point of my original comment was that the only way you
can distinguish a conscious computation from a non-conscious one
is if it is conscious. In other words, the suggestion that
consciousness is a computation tells you absolutely nothing
interesting about consciousness.


Bruce


It tells you that consciousness can be multiply realized. That your 
"soul" can be resurrected by the right machine. It tells you that 
teleportation is possible. It tells you that first person 
indeterminacy is an artifact of duplication. It tells you that if 
arithmetic is real we must explain physics from the sum of experiences 
of the infinitely many conscious Turing machines who exist in arithmetic.


If consciousness is a computation implementable on a universal Turing 
machine, then these things might follow, but that does not really tell 
us what consciousness is: it does not tell us which computations are 
conscious and which are not.


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruce Kellett

On 5/01/2018 12:18 am, Quentin Anciaux wrote:
2018-01-04 12:36 GMT+01:00 Bruce Kellett:


On 4/01/2018 6:41 pm, Quentin Anciaux wrote:

2018-01-04 6:57 GMT+01:00 Bruce Kellett:

My abacus does not talk to me.


That would mean no computations are conscious at all...


No, that does not follow. Even if consciousness is a computation,
it does not follow that all computations are conscious: A is a B
does not imply that all Bs are As.


You say that, as your abacus does not talk to you, that means not all 
computations are conscious... but that does not follow; it just means 
your abacus has no way to convey to you that it is conscious, as it lacks 
a correct I/O system with you... the fact that it does not talk to you is 
not evidence that computations performed on it are not conscious, even 
the simplest one.


Also, if it were true that *some* computations are conscious, then, as 
your abacus is Turing complete, you could in principle run them on it... 
but your abacus still wouldn't talk to you, and it would be wrong to say 
that the computation is not conscious in this case...


The abacus was not a good example of a computation that was not 
conscious, because even if it were conscious it could not talk to me (no 
suitable I/O). However, a program that takes the number 2 as input and 
produces its square, namely 4, performs a perfectly good computation, 
but is clearly not conscious (else you reduce consciousness to 
triviality). This program is not universal, not part of a Turing machine 
(though it could be performed on such a machine). A computation 
performed on a machine is not the machine itself.
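
In code, the contrast might look like this (a sketch; the squaring program 
stands in for any fixed, non-universal computation):

def square(n: int) -> int:
    # The entire computation: one input, one output, nothing universal.
    return n * n

print(square(2))  # -> 4

# The interpreter executing this is a universal machine; the single
# computation square(2) that it performs is not.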


So we still need a mechanism by which we can sort conscious computations 
from the complete set of all computations, as produced, for example, by 
the universal dovetailer, since not all of these computations are conscious.
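
For concreteness, a schematic dovetailer (a toy sketch, not Bruno's actual UD): 
it interleaves steps so that every program in the enumeration eventually 
receives unboundedly many steps.

from itertools import count

def toy_program(i):
    # Stand-in for the i-th program: a generator of its successive states.
    for step in count():
        yield (i, step)

programs = [toy_program(i) for i in range(3)]   # finite toy enumeration

for round_no in range(1, 4):        # the real dovetailer loops forever
    for p in programs[:round_no]:   # round k advances programs 0..k-1
        print("advanced program, step:", next(p))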


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruce Kellett

On 4/01/2018 11:59 pm, Bruno Marchal wrote:

On Jan 4, 2018, at 12:50 PM, Bruce Kellett  wrote:

On 4/01/2018 12:30 am, Bruno Marchal wrote:

On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

On 29/12/2017 10:14 am, Russell Standish wrote:

This is computationalism - the idea that our human consciousness _is_
a computation (and nothing but a computation).

What distinguishes a conscious computation within the class of all 
computations? After all, not all computations are conscious.

Universality seems enough.

What is a universal computation? From what you say below, universality appears 
to be a property of a machine, not of a computation.

OK, universality is an attribute of a machine, relatively to some universal 
machinery, like arithmetic or physics.



But just universality gives rise only to a highly non-standard, dissociative, 
form of consciousness. It might correspond to the cosmic consciousness alluded 
to by people living highly altered states of consciousness.

You need Löbianity to get *self-consciousness*, or reflexive consciousness. A machine is Löbian when its 
universality is knowable by it. Equivalently, when the machine is universal and can prove its own 
"Löb's formula". []([]p -> p) -> []p. Note that the second incompleteness theorem is the 
particular case with p = f (f = "0≠1").
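
Spelling out that special case (a standard provability-logic step, added for 
clarity):

\Box(\Box p \to p) \to \Box p        % Löb's formula
\Box(\Box f \to f) \to \Box f        % the instance p = f
% Since f is falsity, \Box f \to f is just \neg\Box f, i.e. consistency,
% so this reads: if the machine proves its own consistency, it proves f
% (it is inconsistent) -- Gödel's second incompleteness theorem.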

Löbianity is a property of the machine, not of the computation.

Yes. The same. I was talking about the machine, or about the person supported 
by those machines. No machine (conceived as a code, number, or physical object) 
can ever be conscious or think. It is always a more abstract notion, 
implemented through some machinery, which does the thinking.

Similarly, a computation cannot be conscious, but it can support a person, 
which is the one genuinely having the thinking or conscious attribute.


The original suggestion by Russell was that "our human consciousness 
_is_ a computation (and nothing but a computation)."


You seem to be steering away from Russell's straightforward position. If 
human consciousness is a computation, then the computation is conscious 
(it is an identity thesis). You say that the computation cannot be 
conscious, but can support a person. It is difficult to see this as 
anything other than the introduction of a dualistic element: the 
computation supports a conscious person, but is not itself conscious? So 
wherein does consciousness exist? You are introducing some unspecified 
magic into the equation. And what characterizes those computations that 
can support a person from those that cannot?


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruce Kellett

On 4/01/2018 11:00 pm, Bruno Marchal wrote:

Yes, in my Conscience and Mechanism appendices, or in the appendix of the 
Lille thesis. I translated a Bell inequality into arithmetic, but cannot test 
it due to its intractability + my own incompetence of course. But Z1* 
introduces tons of nested modal boxes, making things hard to verify for 
reasonably complex formulas.


Bell-like inequalities are easy to obtain -- in classical physics as 
well as in QM. The hard thing is to show that your theory requires that 
they be experimentally violated.
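
For reference, the standard CHSH form of a Bell-type inequality (added for 
clarity, not from this thread):

S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}
% QM allows |S| up to 2\sqrt{2} (the Tsirelson bound).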


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread David Nyman
On 4 Jan 2018 21:04, "Brent Meeker"  wrote:



On 1/4/2018 5:13 AM, David Nyman wrote:

On 3 January 2018 at 21:57, Brent Meeker  wrote:

>
>
> On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>
>>
>> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>>
>>
>>>
>>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>>>
 Now, it
 could be that intelligent behavior implies mind, but as you yourself
 argue, we don't know that.

>>>
>>> Isn't this at the crux of the scientific study of the mind? There seemed
>>> to be universal agreement on this list that a philosophical zombie is
>>> impossible.
>>>
>>
>>
>> Precisely: that a philosophical zombie is impossible when we assume
>> Mechanism.
>>
>
> But the consensus here has been that a philosophical zombie is impossible
> because it exhibits intelligent behavior.
>
> Philosophical zombie remains logically consistent for a non-computationalist
>> theory of mind.
>>
>
> It's logically consistent with a computationalist theory of brain. It is
> only inconsistent with a computationalist theory of mind because you
> include as an axiom that computation produces mind.  One can as well say
> that intelligent behavior entails mind as an axiom of physicalism.  Logic
> is a very cheap standard for theories to meet.


​ISTM that you are failing to take account of an important distinction
here, despite having acknowledged it in previous conversations.​ I don't
think of comp as really taking it as axiomatic that computation produces
mind. Of course, CTM, which comp takes as its nominal point of departure,
does do precisely that. But the comp theory seeks then to provide a
*persuasive* model of an 'internalised' epistemic access (i.e to knowledge)
that can be emulated via computation. This form of subjective access to
knowledge - as distinct from information, mechanism, or behaviour in
general - can, as you know, be represented by a toy model deploying a range
of self-referential modal logics. Of course this model doesn't directly
allow one to infer a priori the entire phenomenal spectrum of
consciousness. But it is adequate to demonstrate a range of distinctively
first-personal characteristics of mind, such as internal/external,
shareable/non-shareable, doubtable/undoubtable, that are otherwise
effectively indistinguishable from a purely third-personal perspective.
Hence part of its role is to *persuade* us that the distinction between
body and mind is coterminous with that between a 'universal' mechanistic
ontology and its possible epistemic consequences.

Of these, ISTM the most important is actually the first mentioned. The idea
that intelligent behaviour entails mind is equivalent to mind's being
something entirely extrinsic.


No, that's a logical fallacy.  That X is entailed by Y and Y is extrinsic,
does not imply that X is entirely extrinsic.


No, but if X is to be understood as intrinsic, it demands a convincing
explication of why that claim of intrinsicality is warranted, other than as
a brute a posteriori assertion. IOW, if we already have a perfectly
adequate extrinsic account of all the relevant behaviour, what reason would
you propose for the additional assumption that this somehow additionally
entails all the phenomena of subjective internality, other than this is
(inconveniently) what remains to be explained?



Neither behaviour nor matter possess, or require in any explanatory role,
an 'interior' aspect. Neither matter nor behaviour have an 'inside' - plumb
their depths as you will and all you will discover is more 'outsides'. So
the axiom that intelligent behaviour entails mind would seem either to be
an effective elimination of the concept as redundant (popular in certain
recently discussed circles) or a theoretically ex nihilo evocation of
first-personal epistemic access on the exclusive basis of third-personal
action. The former option undercuts itself at the start; the latter seems
to lack any theoretical motivation other than a tacit (or on occasion
explicit) dismissal of the viability of any alternative approach to the
matter.


That's the same argument you've made in several different forms: Any third
person explanation of mind implies that there can be no first person
experience.  I don't see that it follows.


It doesn't. It's just that it's unargued for except as a brute 'identity'.
That's my point. That's why I contrasted it with the explicitly epistemic
elements of Bruno's theory. That's the sum and total of the distinction I'm
pointing out.

  Any reduction banishes the thing reduced.


Not at all. But that's just the point. In third person terms, when you
build a house from bricks, you don't then have the bricks plus a house
somehow evoked ex nihilo. You just have an optional redescription of the
same bricks. And that redescription, or emergent, is ultimately a
phenomenal construct. But you can 'reduce' that redescription to its
constituent bricks without banishing the phenomenal house. It continues

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Brent Meeker



On 1/4/2018 5:13 AM, David Nyman wrote:
On 3 January 2018 at 21:57, Brent Meeker wrote:




On 1/3/2018 5:47 AM, Bruno Marchal wrote:


On 03 Jan 2018, at 03:39, Brent Meeker wrote:



On 1/2/2018 8:07 AM, Bruno Marchal wrote:

Now, it
could be that intelligent behavior implies mind, but
as you yourself
argue, we don't know that.


Isn't this at the crux of the scientific study of the
mind? There seemed to be universal agreement on this list
that a philosophical zombie is impossible.



Precisely: that a philosophical zombie is impossible when we
assume Mechanism.


But the consensus here has been that a philosophical zombie is
impossible because it exhibits intelligent behavior.

Philosophical zombie remains logically consistent for a
non-computationalist theory of mind.


It's logically consistent with a computationalist theory of brain.
It is only inconsistent with a computationalist theory of mind
because you include as an axiom that computation produces mind. 
One can as well say that intelligent behavior entails mind as an axiom
of physicalism.  Logic is a very cheap standard for theories to meet.


​ISTM that you are failing to take account of an important distinction 
here, despite having acknowledged it in previous conversations.​ I 
don't think of comp as really taking it as axiomatic that computation 
produces mind. Of course, CTM, which comp takes as its nominal point 
of departure, does do precisely that. But the comp theory seeks then 
to provide a *persuasive* model of an 'internalised' epistemic access 
(i.e to knowledge) that can be emulated via computation. This form of 
subjective access to knowledge - as distinct from information, 
mechanism, or behaviour in general - can, as you know, be represented 
by a toy model deploying a range of self-referential modal logics. Of 
course this model doesn't directly allow one to infer a priori the 
entire phenomenal spectrum of consciousness. But it is adequate to 
demonstrate a range of distinctively first-personal characteristics of 
mind, such as internal/external, shareable/non-shareable, 
doubtable/undoubtable, that are otherwise effectively 
indistinguishable from a purely third-personal perspective. Hence part 
of its role is to *persuade* us that the distinction between body and 
mind is coterminous with that between a 'universal' mechanistic 
ontology and its possible epistemic consequences.


Of these, ISTM the most important is actually the first mentioned. The 
idea that intelligent behaviour entails mind is equivalent to mind's 
being something entirely extrinsic.


No, that's a logical fallacy.  That X is entailed by Y and Y is 
extrinsic, does not imply that X is entirely extrinsic.


Neither behaviour nor matter possess, or require in any explanatory 
role, an 'interior' aspect. Neither matter nor behaviour have an 
'inside' - plumb their depths as you will and all you will discover is 
more 'outsides'. So the axiom that intelligent behaviour entails mind 
would seem either to be an effective elimination of the concept as 
redundant (popular in certain recently discussed circles) or a 
theoretically ex nihilo evocation of first-personal epistemic access 
on the exclusive basis of third-personal action. The former option 
undercuts itself at the start; the latter seems to lack any 
theoretical motivation other than a tacit (or on occasion explicit) 
dismissal of the viability of any alternative approach to the matter.


That's the same argument you've made in several different forms: Any 
third person explanation of mind implies that there can be no first 
person experience.  I don't see that it follows.  Any reduction banishes 
the thing reduced.  It appears to be just an assertion to save the 
mystery.  Somehow an explanation in terms of number relations in a 
Platonic realm is OK.  Forget saving the phenomenon; save the soul.




And indeed this bears directly on the 'consensus' against 
philosophical zombies. Physicalism, and in its wake any putative 
entailment from intelligent behaviour directly to mind, leads 
ineluctably to the notion of zombies.


Only for those who intuitively dismiss any physical explanation of 
mind..."My computer can't have a mind.  If it did it could beat me at 
chess."


The only evidence contra the notion that all bodies are zombies is the 
conjunction of 'I am not a zombie' + solipsism is false.


And that internal reflection is essential to my intelligent behavior.  
It's interesting that comp also has to assume solipsism is false.


Whereas this conjunction may indeed be true, it is an act of faith 
rather than any non-trivial explication of the relation between bodies 
and minds.


Bruno's explication of the relation between bodies and minds is so 
non-trivial it's 

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread David Nyman
On 4 Jan 2018 18:16, "Bruno Marchal"  wrote:


On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:

On 4 January 2018 at 11:55, Bruno Marchal  wrote:

>
> > On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
> >
> >
> >
> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
> >>
> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
> >>
> >>>
> >>>
> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>  Now, it
>  could be that intelligent behavior implies mind, but as you yourself
>  argue, we don't know that.
> >>>
> >>> Isn't this at the crux of the scientific study of the mind? There
> seemed to be universal agreement on this list that a philosophical zombie
> is impossible.
> >>
> >>
> >> Precisely: that a philosophical zombie is impossible when we assume
> Mechanism.
> >
> > But the consensus here has been that a philosophical zombie is
> impossible because it exhibits intelligent behavior.
>
> Well, I think the consensus here is that computationalism is far more
> plausible than non-computationalism.
> Computationalism makes zombies nonsensical.
>
>
>
> >
> >> Philosophical zombie remains logically consistent for a
> non-computationalist theory of mind.
> >
> > It's logically consistent with a computationalist theory of brain. It is
> > only inconsistent with a computationalist theory of mind because you
> > include as an axiom that computation produces mind.  One can as well say
> > that intelligent behavior entails mind as an axiom of physicalism.  Logic
> > is a very cheap standard for theories to meet.
>
> At first sight, zombies seem consistent with computationalism, but the
> notion of zombies requires the idea that we attribute mind to bodies
> (having the right behavior). But with computationalism, mind is never
> associated to a body, but only to the person having the infinity of
> (similar enough) bodies relative representation in arithmetic. There are no
> “real bodies” or “ontological bodies”, so the notion of zombie becomes
> senseless. The consciousness is associated with the person, which is never
> determined by one body.
>

​So in the light of what you say above, does it then follow that the MGA
implies (assuming comp) that a physical system does *not* in fact implement
a computation in the relevant sense?



The physical world has to be able to implement the computation in the
relevant (Turing-Church-Post-Kleene CT) sense. You need this for the YD
“act of faith”.

The physical world is a persistent illusion. It has to be persistent enough
that you wake up at the hospital with the digital brain.



I ask this because you say mind is *never* associated with a body, but mind
*is* associated with computation via the epistemic consequences of
universality.



A (conscious) third person can associate a mind/person to a body that he
perceives. It is polite.

The body perceived by that third person is itself a construction of its own
mind, and with computationalism (but also with QM), we know that such a
body is an (evolving) map of where, and in which states, we could find, say,
the electrons and protons of that body, and such a snapshot is only a
computational state among infinitely many others which would work as well,
with respect to the relevant computations which brought its conscious state.
Now, the conscious first person cannot associate itself to any particular
body or computation.

Careful: sometimes I say that a machine can think, or maybe (I usually
avoid) that a computation can think or be conscious. It always means,
respectively, that a machine can make a person capable of manifesting
itself relatively to you. But the machine and the body are local relative
representations.

A machine cannot think, and a computation (which is the (arithmetical)
dynamic 3p view of the sequence of the relative static machine/state)
cannot think. Only a (first) person can think, and to use that thinking
with respect to another person, a machine is handy, like brain or a
physical computer.

The person is in heaven (arithmetical truth) and on earth (sigma_1
arithmetical truth), simultaneously. But this belongs to G*, and I should
stay mute, or insist that we are in the “after-act-of-faith” position of
the one betting that comp is true, and … assuming comp is true. It is
subtle to talk on those things, and it is important to admit that we don’t
know the truth (or we do get inconsistent and fall in the theological trap).



If so, according to comp, it would follow that (the material appearance and
behaviour of) a body cannot be considered *causally* relevant to the
computation-mind polarity,



Yes, that is true, with respect to the arithmetical truth (where there are
no bodies, only natural numbers), and false for the physical realm, which,
despite being a statistics on dreams (associated to computations in
arithmetic)





but instead must be regarded as a consistent *consequence* of it.




Again that is correct in the 0th person view, but 

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Brent Meeker



On 1/4/2018 3:55 AM, Bruno Marchal wrote:

On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:



On 1/3/2018 5:47 AM, Bruno Marchal wrote:

On 03 Jan 2018, at 03:39, Brent Meeker wrote:



On 1/2/2018 8:07 AM, Bruno Marchal wrote:

Now, it
could be that intelligent behavior implies mind, but as you yourself
argue, we don't know that.

Isn't this at the crux of the scientific study of the mind? There seemed to be 
universal agreement on this list that a philosophical zombie is impossible.


Precisely: that a philosophical zombie is impossible when we assume Mechanism.

But the consensus here has been that a philosophical zombie is impossible 
because it exhibits intelligent behavior.

Well, I think the consensus here is that computationalism is far more plausible 
than non-computationalism.
Computationalism makes zombies nonsensical.




Philosophical zombie remains logically consistent for a non-computationalist 
theory of mind.

It's logically consistent with a computationalist theory of brain. It is only 
inconsistent with a computationalist theory of mind because you include as an 
axiom that computation produces mind.  One can as well say that intelligent 
behavior entails mind as an axiom of physicalism.  Logic is a very cheap 
standard for theories to meet.

At first sight, zombies seem consistent with computationalism, but the notion 
of zombies requires the idea that we attribute mind to bodies (having the right 
behavior). But with computationalism, mind is never associated to a body, but 
only to the person having the infinity of (similar enough) bodies relative 
representation in arithmetic. There are no “real bodies” or “ontological 
bodies”, so the notion of zombie becomes senseless. The consciousness is 
associated with the person, which is never determined by one body.


That's just word salad: A person has an infinity of bodies, but they're 
in arithmetic and there are no real bodies, so the person is not 
determined by one body. ??

Of course that doesn't mean it's true. But it seems as good a working hypothesis as 
"Yes, doctor".  And in fact it's the working hypothesis of most studies of 
neurocognition, intelligence, and mind.

Neuroscience and AI often bet, more or less explicitly, on mechanism, or on its 
"strong AI" weakenings.

(Note that UDA uses mechanism, but its translation in arithmetic needs only 
strong-AI. Note that if strong AI is true, and comp false, we get infinitely 
many zombies in arithmetic.

How do you know that?


I was wrong. Wrote too quickly. It is only if weak AI is true, and strong AI or 
comp false, that there will be infinitely many zombies in arithmetic. Of 
course, if strong AI is false, comp is false too.






very curious ones, which lack body and mind, but act like you and me. They 
are quite similar to the "Bohm's zombies", the beings in the branches of 
quantum wave which have no particles.



If it's true then it provides a link from intelligent behavior to mind.

The "non-zombie" principle is a consequence of comp, but I doubt that it 
implies comp. It is not related to finiteness, as comp and strong AI are.




We already have links from physics to brain to intelligent behavior.  So 
why isn't this the physics based theory of mind that Bruno et al keep saying is 
impossible?

This is a bit ambiguous and misleading. Comp makes physics necessary, and that 
is why with Occam, physics cannot be assumed primitively if we want to use 
actual physics to verify or refute comp.

That very much depends on what physics comp makes necessary.

Well, if it violates our empirical physics, comp is refuted.


That's why we need to know what physics comp makes necessary in order to 
test it.







We can of course assume physics when doing physics, but not when doing 
computationalist theory of mind.

No, but we can assume physics when doing physicalist theory of mind.

Yes, but then the point is that a physicalist theory of mind (like with 
consciousness reducing the wave) will be non-computationalist.



OK? "physics" is necessary for machine/numbers is what makes the physical 
assumption eliminable, and is what makes computationalism testable.

But it doesn't seem to be testable because the conclusions drawn from it are 
extremely general


Not at all. These are very precise mathematical theories (qZ1*, qX1*, qS4Grz1).


Those are not theories, they are axiomatic systems.  Theories include an 
interpretation applying the mathematics to the observable world.








and already known

No. They are totally unknown, even ignored. I bet, and many others bet, in the 
eighties that this would be refuted before 2000. Some thought they had already 
refuted it, but they assumed a theory which was already refuted by 
incompleteness. Then we got the main confirmation in the nineties, but still no 
contradiction with “nature".




and supported by other assumptions: e.g. linearity of QM, probabilistic 
physics.  It doesn't tell us 

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruno Marchal

> On Jan 4, 2018, at 1:22 PM, David Nyman  wrote:
> 
> On 4 January 2018 at 11:55, Bruno Marchal wrote:
> 
> > On Jan 3, 2018, at 10:57 PM, Brent Meeker wrote:
> >
> >
> >
> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
> >>
> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
> >>
> >>>
> >>>
> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>  Now, it
>  could be that intelligent behavior implies mind, but as you yourself
>  argue, we don't know that.
> >>>
> >>> Isn't this at the crux of the scientific study of the mind? There seemed 
> >>> to be universal agreement on this list that a philosophical zombie is 
> >>> impossible.
> >>
> >>
> >> Precisely: that a philosophical zombie is impossible when we assume 
> >> Mechanism.
> >
> > But the consensus here has been that a philosophical zombie is impossible 
> > because it exhibits intelligent behavior.
> 
> Well, I think the consensus here is that computationalism is far more 
> plausible than non-computationalism.
> Computationalism makes zombies nonsensical.
> 
> 
> 
> >
> >> Philosophical zombie remains logically consistent for a non-computationalist 
> >> theory of mind.
> >
> > It's logically consistent with a computationalist theory of brain. It is 
> > only inconsistent with a computationalist theory of mind because you 
> > include as an axiom that computation produces mind.  One can as well say 
> > that intelligent behavior entails mind as an axiom of physicalism.  Logic 
> > is a very cheap standard for theories to meet.
> 
> At first sight, zombies seem consistent with computationalism, but the 
> notion of zombies requires the idea that we attribute mind to bodies (having 
> the right behavior). But with computationalism, mind is never associated to a 
> body, but only to the person having the infinity of (similar enough) bodies 
> relative representation in arithmetic. There are no “real bodies” or 
> “ontological bodies”, so the notion of zombie becomes senseless. The 
> consciousness is associated with the person, which is never determined by one 
> body.
> 
> ​So in the light of what you say above, does it then follow that the MGA 
> implies (assuming comp) that a physical system does *not* in fact implement a 
> computation in the relevant sense?


The physical world has to be able to implement the computation in the relevant 
(Turing-Church-Post-Kleene CT) sense. You need this for the YD “act of faith”.

The physical world is a persistent illusion. It has to be persistent enough 
that you wake up at the hospital with the digital brain.



> I ask this because you say mind is *never* associated with a body, but mind 
> *is* associated with computation via the epistemic consequences of 
> universality.


A (conscious) third person can associate a mind/person to a body that he 
perceives. It is polite. 

The body perceived by that third person is itself a construction of its own 
mind, and with computationalism (but also with QM), we know that such a body is 
an (evolving) map of where, and in which states, we could find, say, the 
electrons and protons of that body, and such a snapshot is only a computational 
state among infinitely many others which would work as well, with respect to 
the relevant computations which brought its conscious state.
Now, the conscious first person cannot associate itself to any particular body 
or computation.

Careful: sometimes I say that a machine can think, or maybe (I usually avoid) 
that a computation can think or be conscious. It always means, respectively, 
that a machine can make a person capable of manifesting itself relatively to 
you. But the machine and the body are local relative representations.

A machine cannot think, and a computation (which is the (arithmetical) dynamic 
3p view of the sequence of the relative static machine/state) cannot think. 
Only a (first) person can think, and to use that thinking with respect to 
another person, a machine is handy, like brain or a physical computer.

The person is in heaven (arithmetical truth) and on earth (sigma_1 arithmetical 
truth), simultaneously. But this belongs to G*, and I should stay mute, or 
insist that we are in the “after-act-of-faith” position of the one betting that 
comp is true, and … assuming comp is true. It is subtle to talk on those 
things, and it is important to admit that we don’t know the truth (or we do get 
inconsistent and fall in the theological trap).



> If so, according to comp, it would follow that (the material appearance and 
> behaviour of) a body cannot be considered *causally* relevant to the 
> computation-mind polarity,


Yes, that is true, with respect to the arithmetical truth (where there are no 
bodies, only natural numbers), and false for the physical realm, which, despite 
being a statistics on dreams (associated to computations in arithmetic)





> but instead must be 

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Quentin Anciaux
2018-01-04 12:36 GMT+01:00 Bruce Kellett :

> On 4/01/2018 6:41 pm, Quentin Anciaux wrote:
>
> 2018-01-04 6:57 GMT+01:00 Bruce Kellett :
>
>> My abacus does not talk to me.
>>
>>
> That would mean no computations are conscious at all...
>
>
> No, that does not follow. Even if consciousness is a computation, it does
> not follow that all computations are conscious: A is a B does not imply
> that all Bs are As.
>

You say that, as your abacus does not talk to you, that means not all
computations are conscious... but that does not follow; it just means your
abacus has no way to convey to you that it is conscious, as it lacks a correct
I/O system with you... the fact that it does not talk to you is not evidence
that computations performed on it are not conscious, even the simplest one.

Also, if it were true that *some* computations are conscious, then, as your
abacus is Turing complete, you could in principle run them on it... but your
abacus still wouldn't talk to you, and it would be wrong to say that the
computation is not conscious in this case...


>
>
> technically your abacus is Turing complete (well, it has to be large
> enough), so it could run a conscious computation... but that doesn't mean
> that computation could talk to you, for that it would also need an I/O
> system with our reality.
>
>
> No, it does not have the necessary I/O equipment. But, as above, even
> Turing completeness does not mean that every computation such a Turing
> machine makes is conscious.
>
> The real point of my original comment was that the only way you can
> distinguish a conscious computation from a non-conscious one is if it is
> conscious. In other words, the suggestion that consciousness is a
> computation tells you absolutely nothing interesting about consciousness.
>
> Bruce
>



-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)





Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruno Marchal

> On Jan 4, 2018, at 12:50 PM, Bruce Kellett  wrote:
> 
> On 4/01/2018 12:30 am, Bruno Marchal wrote:
>> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>>> On 29/12/2017 10:14 am, Russell Standish wrote:
 This is computationalism - the idea that our human consciousness _is_
 a computation (and nothing but a computation).
>>> 
>>> What distinguishes a conscious computation within the class of all 
>>> computations? After all, not all computations are conscious.
>> 
>> Universality seems enough.
> 
> What is a universal computation? From what you say below, universality 
> appears to be a property of a machine, not of a computation.

OK, universality is an attribute of a machine, relatively to some universal 
machinery, like arithmetic or physics.




> 
>> But just universality gives rise only to a highly non-standard, 
>> dissociative, form of consciousness. It might correspond to the cosmic 
>> consciousness alluded to by people living highly altered states of consciousness.
>> 
>> You need Löbianity to get *self-consciousness*, or reflexive consciousness. 
>> A machine is Löbian when its universality is knowable by it. Equivalently, 
>> when the machine is universal and can prove its own "Löb's formula". []([]p 
>> -> p) -> []p. Note that the second incompleteness theorem is the particular 
>> case with p = f (f = "0≠1").
> 
> Löbianity is a property of the machine, not of the computation.

Yes. The same. I was talking about the machine, or about the person supported 
by those machines. No machine (conceived as a code, number, or physical object) 
can ever be conscious or think. It is always a more abstract notion, 
implemented through some machinery, which does the thinking.

Similarly, a computation cannot be conscious, but it can support a person, 
which is the one genuinely having the thinking or conscious attribute.

But, sometimes I commit an abuse of language for the sake of being short and 
clear. Sorry for the possible confusion.

Bruno




> 
> Bruce
> 



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread David Nyman
On 4 January 2018 at 11:55, Bruno Marchal  wrote:

>
> > On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
> >
> >
> >
> > On 1/3/2018 5:47 AM, Bruno Marchal wrote:
> >>
> >> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
> >>
> >>>
> >>>
> >>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
>  Now, it
>  could be that intelligent behavior implies mind, but as you yourself
>  argue, we don't know that.
> >>>
> >>> Isn't this at the crux of the scientific study of the mind? There
> seemed to be universal agreement on this list that a philosophical zombie
> is impossible.
> >>
> >>
> >> Precisely: that a philosophical zombie is impossible when we assume
> Mechanism.
> >
> > But the consensus here has been that a philosophical zombie is
> impossible because it exhibits intelligent behavior.
>
> Well, I think the consensus here is that computationalism is far more
> plausible than non-computationalism.
> Computationalism makes zombies nonsensical.
>
>
>
> >
> >> Philosophical zombie remains logically consistent for a
> non-computationalist theory of mind.
> >
> > It's logically consistent with a computationalist theory of brain. It is
> only inconsistent with a computationalist theory of mind because you
> include as an axiom that computation produces mind.  One can as well say
> that intelligent behavior entails mind as an axiom of physicalism.  Logic
> is a very cheap standard for theories to meet.
>
> At first sight, zombies seem consistent with computationalism, but the
> notion of zombies requires the idea that we attribute mind to bodies
> (having the right behavior). But with computationalism, mind is never
> associated to a body, but only to the person having the infinity of
> (similar enough) bodies relative representation in arithmetic. There are no
> “real bodies” or “ontological bodies”, so the notion of zombie becomes
> senseless. The consciousness is associated with the person, which is never
> determined by one body.
>

​So in the light of what you say above, does it then follow that the MGA
implies (assuming comp) that a physical system does *not* in fact implement
a computation in the relevant sense? I ask this because you say mind is
*never* associated with a body, but mind *is* associated with computation
via the epistemic consequences of universality. If so, according to comp,
it would follow that (the material appearance and behaviour of) a body
cannot be considered *causally* relevant to the computation-mind polarity,
but instead must be regarded as a consistent *consequence* of it.

David​


>
>
>
> >
> >>
> >>
> >>
> >>
> >>> Of course that doesn't mean it's true. But it seems as good a working
> hypothesis as "Yes, doctor".  And in fact it's the working hypothesis of
> most studies of neurocognition, intelligence, and mind.
> >>
> >> Neuroscience and AI often bet, more or less explicitly, on mechanism,
> or on its "strong AI" weakenings.
> >>
> >> (Note that UDA uses mechanism, but its translation in arithmetic needs
> only strong-AI. Note that if strong AI is true, and comp false, we get
> infinitely many zombies in arithmetic.
> >
> > How do you know that?
>
>
> I was wrong. Wrote too quickly. It is only if weak AI is true, and strong
> AI or comp false, that there will be infinitely many zombies in arithmetic.
> Of course, if strong AI is false, comp is false too.
>
>
>
>
>
> >
> >> very curious ones, which lack body and mind, but act like you and me.
> They are quite similar to the "Bohm's zombies", the beings in the
> branches of the universal quantum wave which have no particles.
> >>
> >>
> >>> If it's true then it provides a link from intelligent behavior to mind.
> >>
> >> The "non-zombie" principle is a consequence of comp, but I doubt that
> it implies comp. It is not related to finiteness, as comp and strong AI are.
> >>
> >>
> >>
> >>> We already have links from physics to brain to intelligent
> behavior.  So why isn't this the physics based theory of mind that Bruno et
> al keep saying is impossible?
> >>
> >> This is a bit ambiguous and misleading. Comp makes physics necessary,
> and that is why with Occam, physics cannot be assumed primitively if we
> want to use actual physics to verify or refute comp.
> >
> > That very much depends on what physics comp makes necessary.
>
> Well, if it violates our empirical physics, comp is refuted.
>
>
>
> >
> >> We can of course assume physics when doing physics, but not when doing
> computationalist theory of mind.
> >
> > No, but we can assume physics when doing physicalist theory of mind.
>
> Yes, but then the point is that a physicalist theory of mind (like with
> consciousness reducing the wave) will be non-computationalist.
>
>
> >
> >>
> >> OK? That "physics" is necessary for machines/numbers is what makes the
> physical assumption eliminable, and is what makes computationalism testable.
> >
> > But it doesn't seem to be testable because the conclusions drawn from it
> are extremely 

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruno Marchal

> On Jan 4, 2018, at 4:58 AM, Brent Meeker  wrote:
> 
> 
> 
> On 1/2/2018 6:36 AM, Bruno Marchal wrote:
>> 
>> On 01 Jan 2018, at 23:38, Brent Meeker wrote:
>> 
>>> 
>>> No. I do not commit the fallacy of "Your god is false, so my god is real".  
>>> I'm willing to say I don't know what must be real.
>> 
>> 
>> You just did it. You just said in your previous post: " I think arithmetic 
>> is a human invention...not the basis of reality."
> 
> A logician should know the difference between saying what something is not 
> and saying what something is.


You need to have some idea of what is real, to assert that arithmetic is not 
what is fundamentally real.



> ...
>> 
>>> Bell is (rightly) famous for suggesting an definitive experiment...not just 
>>> an illustration.
>> 
>> Here too. In particular the violation of Bell's inequality itself can be 
>> tested in the machine's physics. That is detailed in my long thesis version.
> 
> Is it available online?

Yes, in my Conscience and Mechanism appendices, or in the appendix of the 
Lille thesis. I translated a Bell inequality into arithmetic, but cannot test 
it due to its intractability + my own incompetence of course. But Z1* 
introduces tons of nested modal boxes, making things hard to verify for 
reasonably complex formulas.

Bruno


> 
> Brent
> 
>> It leads to complex math, but that is hardly an argument for falsity. If the 
>> results were not ignored (for pseudo-philosophical reasons intolerable in 
>> science), we would have already refuted computationalism (or that classical 
>> indexical weak formulation of it), or improved it, notably by noting which 
>> of S4Grz1, Z1* and X1* are closer to the physicists' quantum logic. Note 
>> that if physics is entirely explained in S4Grz1, that would be a case for 
>> some sort of solipsism, but not exactly the common one, and also, there is 
>> little chance that can happen (because quantum logic obeys the excluded 
>> middle, and the quantum logic coming from S4Grz1 does not).
>> 
>> Bruno
>> 
> 



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruno Marchal

> On Jan 3, 2018, at 10:57 PM, Brent Meeker  wrote:
> 
> 
> 
> On 1/3/2018 5:47 AM, Bruno Marchal wrote:
>> 
>> On 03 Jan 2018, at 03:39, Brent Meeker wrote:
>> 
>>> 
>>> 
>>> On 1/2/2018 8:07 AM, Bruno Marchal wrote:
 Now, it
 could be that intelligent behavior implies mind, but as you yourself
 argue, we don't know that.
>>> 
>>> Isn't this at the crux of the scientific study of the mind? There seemed to 
>>> be universal agreement on this list that a philosophical zombie is 
>>> impossible.
>> 
>> 
>> Precisely: that a philosophical zombie is impossible when we assume 
>> Mechanism. 
> 
> But the consensus here has been that a philosophical zombie is impossible 
> because it exhibits intelligent behavior.

Well, I think the consensus here is that computationalism is far more plausible 
than non-computationalism.
Computationalism makes zombies nonsensical. 



> 
>> Philosophical zombie remains logically consistent for a non-computationalist 
>> theory of mind.
> 
> It's logically consistent with a computationalist theory of brain. It is only 
> inconsistent with a computationalist theory of mind because you include as an 
> axiom that computation produces mind.  One can as well say that intelligent 
> behavior entails mind as an axiom of physicalism.  Logic is a very cheap 
> standard for theories to meet.

At first sight, zombies seem consistent with computationalism, but the notion 
of zombies requires the idea that we attribute mind to bodies (having the right 
behavior). But with computationalism, mind is never associated to a body, but 
only to the person having the infinity of (similar enough) bodies relative 
representation in arithmetic. There are no “real bodies” or “ontological 
bodies”, so the notion of zombie becomes senseless. The consciousness is 
associated with the person, which is never determined by one body.




> 
>> 
>> 
>> 
>> 
>>> Of course that doesn't mean it's true. But it seems as good a working 
>>> hypothesis as "Yes, doctor".  And in fact it's the working hypothesis of 
>>> most studies of neurocognition, intelligence, and mind.
>> 
>> Neuroscience and AI often bet, more or less explicitly, on mechanism, or on 
>> its "strong AI" weakenings.
>> 
>> (Note that UDA uses mechanism, but its translation in arithmetic needs only 
>> strong-AI. Note that if strong AI is true, and comp false, we get infinitely 
>> many zombies in arithmetic. 
> 
> How do you know that?


I was wrong. Wrote too quickly. It is only if weak AI is true, and strong AI or 
comp false, that there will be infinitely many zombies in arithmetic. Of 
course, if strong AI is false, comp is false too.





> 
>> very curious ones, which lack body and mind, but act like you and me. They 
>> are quite similar to the "Bohm's zombies", the beings in the branches of 
>> the universal quantum wave which have no particles.
>> 
>> 
>>> If it's true then it provides a link from intelligent behavior to mind.
>> 
>> The "non-zombie" principle is a consequence of comp, but I doubt that it 
>> implies comp. It is not related to finiteness, as comp and strong AI are.
>> 
>> 
>> 
>>> We already have links from physics to brain to intelligent behavior.  
>>> So why isn't this the physics based theory of mind that Bruno et al keep 
>>> saying is impossible?
>> 
>> This is a bit ambiguous and misleading. Comp makes physics necessary, and 
>> that is why with Occam, physics cannot be assumed primitively if we want to 
>> use actual physics to verify or refute comp. 
> 
> That very much depends on what physics comp makes necessary.

Well, if it violates our empirical physics, comp is refuted.



> 
>> We can of course assume physics when doing physics, but not when doing 
>> computationalist theory of mind.
> 
> No, but we can assume physics when doing physicalist theory of mind.

Yes, but then the point is that a physicalist theory of mind (like with 
consciousness reducing the wave) will be non-computationalist.


> 
>> 
>> OK? That "physics" is necessary for machines/numbers is what makes the physical 
>> assumption eliminable, and is what makes computationalism testable.
> 
> But it doesn't seem to be testable because the conclusions drawn from it are 
> extremely general


Not at all. These are very precise mathematical theories (qZ1*, qX1*, qS4Grz1). 



> and already known

No. They are totally unknown, even ignored. I bet, and many others bet, in the 
eighties that this would be refuted before 2000. Some thought they had already 
refuted it, but they assumed a theory which was already refuted by 
incompleteness. Then we got the main confirmation in the nineties, but still no 
contradiction with “nature".



> and supported by other assumptions: e.g. linearity of QM, probabilistic 
> physics.  It doesn't tell us why memories get less reliable when they are 
> more often recalled. 

That kind of thing is expected to be understood through mechanism.


> It doesn't tell us why we have no memories of 

Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruce Kellett

On 4/01/2018 12:30 am, Bruno Marchal wrote:

On 29 Dec 2017, at 01:29, Bruce Kellett wrote:

On 29/12/2017 10:14 am, Russell Standish wrote:

This is computationalism - the idea that our human consciousness _is_
a computation (and nothing but a computation).


What distinguishes a conscious computation within the class of all 
computations? After all, not all computations are conscious.


Universality seems enough.


What is a universal computation? From what you say below, universality 
appears to be a property of a machine, not of a computation.


But just universality gives rise only to a highly non-standard, 
dissociative, form of consciousness. It might correspond to the cosmic 
consciousness alluded to by people living highly altered states of 
consciousness.


You need Löbianity to get *self-consciousness*, or reflexive 
consciousness. A machine is Löbian when its universality is knowable 
by it. Equivalently, when the machine is universal and can prove its 
own "Löb's formula". []([]p -> p) -> []p. Note that the second 
incompleteness theorem is the particular case with p = f (f = "0≠1").


Löbianity is a property of the machine, not of the computation.

Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruce Kellett

On 4/01/2018 6:41 pm, Quentin Anciaux wrote:
2018-01-04 6:57 GMT+01:00 Bruce Kellett:


My abacus does not talk to me.


That would mean no computations are conscious at all...


No, that does not follow. Even if consciousness is a computation, it 
does not follow that all computations are conscious: A is a B does not 
imply that all Bs are As.


technically your abacus is Turing complete (well, it has to be large 
enough), so it could run a conscious computation... but that doesn't 
mean that computation could talk to you; for that it would also need 
an I/O system connecting it to our reality.


No, it does not have the necessary I/O equipment. But, as above, even 
Turing completeness does not mean that every computation such a Turing 
machine makes is conscious.


The real point of my original comment was that the only way you can 
distinguish a conscious computation from a non-conscious one is if it is 
conscious. In other words, the suggestion that consciousness is a 
computation tells you absolutely nothing interesting about consciousness.


Bruce



Re: What falsifiability tests has computationalism passed?

2018-01-04 Thread Bruno Marchal

> On Jan 3, 2018, at 9:02 PM, Brent Meeker wrote:
> 
> 
> 
> On 1/3/2018 5:30 AM, Bruno Marchal wrote:
>> 
>> On 29 Dec 2017, at 01:29, Bruce Kellett wrote:
>> 
>>> On 29/12/2017 10:14 am, Russell Standish wrote:
 This is computationalism - the idea that our human consciousness _is_
 a computation (and nothing but a computation).
>>> 
>>> What distinguishes a conscious computation within the class of all 
>>> computations? After all, not all computations are conscious.
>> 
>> Universality seems enough. But just universality gives rise only to a highly 
>> non-standard, dissociative form of consciousness. It might correspond to 
>> the cosmic consciousness alluded to by people living through highly altered 
>> states of consciousness.
>> 
>> You need Löbianity to get *self-consciousness*, or reflexive consciousness. 
>> A machine is Löbian when its universality is knowable by it. Equivalently, 
>> when the machine is universal and can prove its own "Löb's formula": []([]p 
>> -> p) -> []p. Note that the second incompleteness theorem is the particular 
>> case with p = f (f = "0≠1").
> 
> But people are aware and self-aware and have been for millennia before Löb. 

Absolutely.

Like bacteria were Turing universal well before Turing, and their DNA was made of 
A, T, C and G well before Watson. Like the Big Bang was a Big Bang well before 
Lemaître, and the far-away galaxies were there well before Hubble … I guess we 
agree on this.



> They are not even universal

They are, even at different levels. A bacterium is Turing universal. Indeed I 
discovered that notion more or less explicitly when studying bacteria and 
molecular biology (though of course I did not get Church's thesis).




> and all but a handful never even heard of the concept.  So why should the 
> mere idealized possibility (assuming unlimited time and memory) be the 
> requirement?  It seems obvious to me that humans are aware and conscious 
> based on much more limited capacities. 

Universality is cheap. It does not ask for many capacities. Humans are more 
than universal/conscious: they are Löbian/self-conscious, but this does not 
mean they need to have heard about Löb, any more than a bacterium needs to have 
studied Watson and Crick. You've lost me. Even Peano arithmetic can be said to 
be Löbian, even if Löb missed that discovery.




> In my favorite intelligent Mars Rover example I see no need to make the 
> rover's computers Löbian and they are not going to have unlimited time and 
> memory,

Nobody, and no universal machine, needs infinite time and memory. A universal 
Turing machine is just a special finite set of quadruples. The tape is only 
pedagogical folklore. If you want, I can explain more. Humans also have no 
infinite memories. 


> so they won't be universal.

A LISP interpreter is Turing universal independently of the size of the memory 
used. A universal Turing machine is a finite set of quadruples which will 
compute phi_x(y) when given the two finite numbers x and y, where phi_i is an 
enumeration of all Turing machines/quadruple sets.
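
To make that concrete, here is a minimal sketch, in Python, of a universal 
machine as one finite program: an interpreter u for counter machines 
(Minsky machines), a Turing-universal model. The instruction encoding is 
hypothetical, chosen purely for illustration, and is not the phi_i 
enumeration above. The point is that u is itself a fixed finite text, just 
as the universal set of quadruples is finite; only the memory a particular 
run consumes is unbounded.

from collections import defaultdict

def u(program, y):
    # `program` is a finite list of instructions:
    #   ('INC', r, nxt)          -- increment register r, jump to nxt
    #   ('JZDEC', r, z_nxt, nxt) -- if register r is 0, jump to z_nxt;
    #                               otherwise decrement r and jump to nxt
    # The run halts when the program counter leaves the program;
    # the content of register 0 is then the output.
    regs = defaultdict(int)   # registers hold natural numbers, default 0
    regs[0] = y               # the input goes in register 0
    pc = 0
    while 0 <= pc < len(program):
        ins = program[pc]
        if ins[0] == 'INC':
            regs[ins[1]] += 1
            pc = ins[2]
        else:  # 'JZDEC'
            if regs[ins[1]] == 0:
                pc = ins[2]
            else:
                regs[ins[1]] -= 1
                pc = ins[3]
    return regs[0]

# Example: a two-instruction program computing y + 2.
assert u([('INC', 0, 1), ('INC', 0, 2)], 3) == 5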

Bruno




> 
> Brent
> 



Re: What falsifiability tests has computationalism passed?

2018-01-03 Thread Quentin Anciaux
2018-01-04 6:57 GMT+01:00 Bruce Kellett:

> My abacus does not talk to me.
>
>
That would mean no computations are conscious at all... Technically your
abacus is Turing complete (well, it has to be large enough), so it could run
a conscious computation... but that doesn't mean that computation could
talk to you; for that it would also need an I/O system connecting it to our reality.

Regards,
Quentin


> Bruce
>
>
>
> On 29/12/2017 3:49 pm, John Clark wrote:
>
>
> On Thu, Dec 28, 2017 at 7:29 PM, Bruce Kellett wrote:
>
>> not all computations are conscious.
>
> How do you know?
>
> John K Clark



-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)





Re: What falsifiability tests has computationalism passed?

2018-01-03 Thread Bruce Kellett

My abacus does not talk to me.

Bruce


On 29/12/2017 3:49 pm, John Clark wrote:


On Thu, Dec 28, 2017 at 7:29 PM, Bruce Kellett wrote:


> not all computations are conscious.


How do you know?

John K Clark




Re: What falsifiability tests has computationalism passed?

2018-01-03 Thread Brent Meeker



On 1/2/2018 6:36 AM, Bruno Marchal wrote:


On 01 Jan 2018, at 23:38, Brent Meeker wrote:



No. I do not commit the fallacy of "Your god is false, so my god is 
real".  I'm willing to say I don't know what must be real.



You just did it. You just said in your previous post: "I think 
arithmetic is a human invention...not the basis of reality."


A logician should know the difference between saying what something is 
not and saying what something is.

...


Bell is (rightly) famous for suggesting a definitive 
experiment...not just an illustration.


Here too. In particular the violation of Bell's inequality itself can 
be tested in the machine's physics. That is detailed in my long thesis 
version.
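
For reference, and independently of the derivation in the thesis: the 
standard quantum prediction that any candidate "machine physics" would have 
to reproduce is the violation of the CHSH bound. Any local hidden-variable 
theory gives |S| <= 2, while the singlet correlation E(a,b) = -cos(a - b) 
pushes |S| to 2*sqrt(2). A few lines of Python check the quantum value:

import math

def E(a, b):
    # Singlet-state correlation for analyzer angles a and b (standard QM).
    return -math.cos(a - b)

# CHSH combination; local hidden variables bound |S| by 2.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, -math.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2), beyond the classical bound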


Is it available online?

Brent

It leads to complex math, but that is hardly an argument for falsity. 
If the results were not ignored (for pseudo-philosophical reasons 
intolerable in science), we would have already refuted 
computationalism (or that classical indexical weak formulation of it), 
or improved it, notably by noting which of S4Grz1, Z1* and X1* is 
closest to the physicists' quantum logic. Note that if physics were 
entirely explained in S4Grz1, that would be a case for some sort of 
solipsism, though not exactly the common one; also, there is little 
chance that can happen (because quantum logic obeys the excluded 
middle, and the quantum logic coming from S4Grz1 does not).
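
That last contrast can be made precise. In an ortholattice, and so in 
quantum logic, p v p* = 1 holds for every p (writing p* for the 
orthocomplement), whereas in a Heyting algebra, the kind of structure an 
S4Grz-style logic induces, p v ~p = 1 can fail. A standard counterexample, 
in the Heyting algebra of open subsets of the real line:

  p = (0, +oo),   ~p = interior(R \ p) = (-oo, 0),
  p v ~p = R \ {0}, which is not all of R.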


Bruno





Re: What falsifiability tests has computationalism passed?

2018-01-03 Thread Brent Meeker



On 1/3/2018 5:47 AM, Bruno Marchal wrote:


On 03 Jan 2018, at 03:39, Brent Meeker wrote:




On 1/2/2018 8:07 AM, Bruno Marchal wrote:

Now, it
could be that intelligent behavior implies mind, but as you yourself
argue, we don't know that.


Isn't this at the crux of the scientific study of the mind? There 
seemed to be universal agreement on this list that a philosophical 
zombie is impossible.



Precisely: that a philosophical zombie is impossible when we assume 
Mechanism. 


But the consensus here has been that a philosophical zombie is 
impossible because it exhibits intelligent behavior.


The philosophical zombie remains logically consistent for a 
non-computationalist theory of mind.


It's logically consistent with a computationalist theory of brain. It is 
only inconsistent with a computationalist theory of mind because we 
include as an axiom that computation produces mind.  One can as well say 
that intelligent behavior entails mind as an axiom of physicalism.  Logic 
is a very cheap standard for theories to meet.







Of course that doesn't mean it's true. But it seems as good a working 
hypothesis as "Yes, doctor".  And in fact it's the working hypothesis 
of most studies of neurocognition, intelligence, and mind.


Neuroscience and AI often bet, more or less explicitly, on mechanism, 
or on its "strong AI" weakenings.


(Note that UDA uses mechanism, but its translation into arithmetic needs 
only strong AI. Note that if strong AI is true, and comp false, we get 
infinitely many zombies in arithmetic. 


How do you know that?

Very curious ones, which lack body and mind, but act like you and me. 
They are quite similar to "Bohm's zombies", the beings in the 
branches of the universal quantum wave which have no particles.)




If it's true then it provides a link from intelligent behavior to mind.


The "non-zombie" principle is a consequence of comp, but I doubt that 
it implies comp. It is not related to finiteness, as comp and strong 
AI are.




We already have links from physics to brain to intelligent 
behavior.  So why isn't this the physics-based theory of mind that 
Bruno et al keep saying is impossible?


This is a bit ambiguous and misleading. Comp makes physics necessary, 
and that is why with Occam, physics cannot be assumed primitively if 
we want to use actual physics to verify or refute comp. 


That very much depends on what physics comp makes necessary.

We can of course assume physics when doing physics, but not when doing 
computationalist theory of mind.


No, but we can assume physics when doing physicalist theory of mind.



OK? "physics" is necessary for machine/numbers is what makes the 
physical assumption eliminable, and is what makes computationalism 
testable.


But it doesn't seem to be testable because the conclusions drawn from it 
are extremely general and already known and supported by other 
assumptions: e.g. linearity of QM, probabilistic physics.  It doesn't 
tell us why memories get less reliable when they are more often 
recalled.  It doesn't tell us why we have no memories of early 
childhood.  It doesn't tell us why Alzheimer's causes loss of recent 
memories first.  It doesn't tell us whether spacetime derives from 
quantum entanglement.


Brent



  1   2   >