On 17 Mar 2013, at 17:02, Craig Weinberg wrote:



On Sunday, March 17, 2013 10:47:05 AM UTC-4, Bruno Marchal wrote:

On 17 Mar 2013, at 03:47, Craig Weinberg wrote:



On Saturday, March 16, 2013 3:15:43 PM UTC-4, Bruno Marchal wrote:

On 15 Mar 2013, at 20:38, Craig Weinberg wrote:



On Friday, March 15, 2013 3:04:24 PM UTC-4, Terren Suydam wrote:
No, I think that you haven't understood it,

That's because you are only working with a straw man of me. What is it that you think that I don't understand? The legacy view is that if you have many molecular systems working together mechanically, you will naturally get emergent properties that could be mistaken for teleological entities. You can't tell the difference between a brain change that seems meaningful to you and a meaningful experience which causes a brain change. Just because you feel like you are moving your arm doesn't mean that isn't just a narrative fiction that serves a valuable evolutionary purpose.

All of that is fine, in some other theoretical universe. In our universe, however, it can't work. There is no evolutionary purpose for consciousness or narrative fictions. The existence of the feeling that you can control your body makes no sense in a universe where control is impersonal and involuntary. There is no possibility for teleology to even be conceived in a universe of endless meaningless chain reactions - no basis for proprietary attachment of any kind. It's circular to imagine that it could be important for an epiphenomenal self to believe it is phenomenal. Important how? It's like adding a steering wheel to a mountain.

due to whatever biases have led you to invest so much in your theory - a theory which is AFAICT completely unfalsifiable and predicts nothing.

No theory which models consciousness will ever be falsifiable, because falsifiability is a quality within consciousness. As far as prediction goes, one of the things it predicts is that people who are bound to the extremes of the philosophical spectrum will be intolerant and will misrepresent other perspectives. They will cling pathologically to unreal abstractions while flatly denying ordinary experience.

Materialism + computationalism can lead to nihilism. But computationalism, per se, does not deny ordinary experiences. It starts from them, as it is a principle of invariance of consciousness under a digital substitution made at some level.

It may not deny ordinary experiences, but it doesn't support them rationally either.

It supports them as much as possible. It even supports some irrationalism, like non-communicable truth on the part of the machine.

Being non-communicable is a property of experience, but non-communicability itself doesn't imply experience at all.

You are right. But knowledge of a non-communicable truth has to be experienced, maybe.




Experience can imply a use for computation, as a method of distributing access to experiential qualities, but computation cannot imply a use for experience.

That contradicts what machines already say when looking inward. It is not the computation which is thinking, but the person supported by one (and then by an infinity of them). You deny the existence of that person, and I don't see why. Bringing in matter, time, or indeterminacy does not help.





As someone brought up in another conversation on FB, the construction of neural networks coincides with the end of conscious involvement

If you decide so, it might as well lead to that. But this idea is based on a confusion between syntax and semantics. Simple programs can have complex semantics - complex enough that we should be cautious about the attribution or non-attribution of consciousness.
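A minimal sketch of that "simple syntax, rich semantics" point, in Python (my own illustration, assuming nothing beyond the standard language): the Collatz map takes only a few lines of code, yet whether it reaches 1 from every starting value is still an open problem.

    # Toy example: a few lines of syntax whose global behavior is hard to predict.
    def collatz_steps(n: int) -> int:
        """Count iterations of the 3n+1 map until n reaches 1 (if it ever does)."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    # Step counts vary erratically; e.g., n = 27 already takes 111 steps.
    for n in (6, 7, 27, 97):
        print(n, collatz_steps(n))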




- the disappearance of personal attention into automatism. Learning makes consciousness redundant. Repetition allows awareness to withdraw from the act, which becomes robotic.

No worry. Our environment should be rich enough to remind us that our lives should not be taken for granted.








What is a reason why computation would be processed as an ordinary experience, when it can clearly be accomplished through a-signifying mechanical activities?


You lost me here.

We see that generic mechanical activities can be used to imitate experiences without actually embodying them.


We can see that, but that is only a partial view of the truth. Even for a machine, we know that the syntactical description of the behavior of its components gives only a partial description of what the machine is able to know, with no external observer capable of guessing that truth. You continue to treat machines in a pre-Turing-Gödel way. You defend a reductionist conception of machines, which does not fit what we already know about them.



Illuminated pixels can stimulate our consciousness to experience characters and scenes which are not literally present in the pixels. The pixel arrangements do not literally become people and places.

A computation is something far richer and more subtle than any pixel arrangement.

Bruno





Craig


Bruno




Craig


Bruno




Craig





On Fri, Mar 15, 2013 at 2:02 PM, Craig Weinberg <whats...@gmail.com> wrote:


On Friday, March 15, 2013 1:55:26 PM UTC-4, Terren Suydam wrote:



On Fri, Mar 15, 2013 at 1:38 PM, Craig Weinberg <whats...@gmail.com> wrote:

Exactly. It is also interesting in that it seems to be like one of those ambiguous images: as long as people are focused on one fixed idea of reality, they are honestly incapable of seeing any other, even if they themselves are sitting on top of it.


The irony in that statement is staggering. I couldn't satirize you any better if I tried.

Why, do you think that I have never considered the bottom-up model of causation?























http://iridia.ulb.ac.be/~marchal/




