On Jul 30, 8:34 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> > 1. Involuted continuum of universal sense: That the mind/body problem
> > is understood to be a manifestation at the scale of a human being of a
> > principle of private/interior-public/exterior relations common to all
> > phenomena in the cosmos. The appearance of the problem arises due to
> > the private nature of subjectivity, preventing it from being perceived
> > in the public-exterior realm*.
>
> So you assume that there is a cosmos, that there is inner experiences,  
> and some quasi panpsychic link between them. That is also like  
> assuming a solution of the mind-body problem at the start.
> I don't see how this could explain what is cosmos and matter, what is  
> mind, and what is the nature of the relations between them.

I wouldn't say there is 'a' cosmos; I would say there is cosmos -
order, experience. It does explain the relation between matter and
experience: one side is the opposite of the other in a continuum of
sense. Sense is the relation of the two sides of the continuum. The
nature of that relation depends entirely upon scale, complexity,
history, purpose, context, etc.

> > 2. Sensorimotive phenomenology: The interior of our own mind is
> > familiar to us personally, as well as through psychology
>
> OK.
>
> > and
> > neurology.
>
> I don't think neurology has put any light on the "interior of our  
> minds". Only on its possible low level implementation.

Through neurology we understand the effects of neurotransmitters,
hormones, etc., and how addiction works, whether it's gambling or
cocaine... Lots of things.

> It looks like dualism, with a substantialization of the soul.

I would say an involuted monism, but yeah. 'The soul' has too much
baggage. I would just say interiority or psyche.

> > 3. Extension and Limits of Sensorimotive phenomenology: Since the
> > private* world of the subject is limited to the natural scope and
> > scale of what the subject is, it is entirely probable that the cosmos
> > we can examine around us is the tip of the iceberg, concealing a
> > proprietary subcosmos within every atom, flea, planet, etc. It is
> > difficult to guess how these worlds might coalesce and scale up, but
> > the nature of sensorimotive phenomena is such that it extends through
> > and beyond physically discrete forms, or be absent in forms we might
> > be familiar with.
>
> Going from 2 to 3 is unclear. It looks to me like adding marmalade to  
> marmalade.

I'm trying to show how it follows that if sensorimotive phenomena
exist (actually they insist), then they insist within all independent
phenomena, but outside of our ability to detect them.

> > For obvious reasons, a bison
> > cannot be two dimensional and made of paint.
>
> Sure. By definition a bison is three dimensional. But a two  
> dimensional computer (that exists) can emulate a three dimensional  
> computer, in which bison can evolve and eat grass.

That's true, but it still truncates the interior dimensions of the
Bison's experience.

> Awareness/consciousness is an attribute of person. It is not an  
> attribute of a machine/body, still less a physical property of  
> physical matter.

If that were true, then you would have no more care for your own arm
than for a pile of dirt. A brain is made of cells configured in space.
The perception of the brain is made of sensorimotive experiences of
its cells configured in time. Cells are made of molecules. The
feelings of cells are made of the feelings, or proto-sensorimotive
events, of molecules. If you look at just the unfeeling side, then you
see only physical matter, but the physical matter that you actually
are undeniably feels. You stub your toe, and you feel pain in your
toe. It's your toe. It's part of you. It feels. Your nociceptors
manicure that feeling the way the brain likes it, and then you feel
some of what your toe feels, plus you feel even more of what toe
stubbing means to you and your life.

> > We have good reason to assume this is the case
> > because we can suffer deterioration in awareness, unconsciousness,
> > death, etc.
>
> You say so, without even using your assumption.
>
> > Our consciousness is quite fragile compared to the robust
> > physical systems around us and we see that small changes in
> > functionality of the brain have tremendous effects upon human well
> > being.
>
> Small change in any machine can make them crash.

Then it would follow that they would be careful not to crash. You
can't be careful if you can't care.

> > The key insight is that it is not consciousness itself which is
> > fragile and special, it is the level of elaboration of consciousness -
>
> The problem consists in linking the solid consciousness with the  
> fragile body.

Remember, consciousness is subtractive. When the body dies,
consciousness isn't lost; it just has a new view as a consequence of
losing its material filter. Sense is about pulling wholes through
holes. Without any resistance, there is only the whole and no need for
pulling.

> The word "function" is very tricky. Either you see a function  
> extensionally as a set of input-output (behavior), or you see a  
> function intensionally (note the "s"; it is not a spelling mistake!)  
> as a set of recipes to compute or to process some activity (leading to  
> output or not). Comp needs the two notions, and the second one is used  
> (sometimes implicitly) in the notion of substitution level. A copy of  
> a brain is supposed to preserve a local process, not a logical function.

Like what kind of local process? Membrane transport? Action
potentials?

> > What SEE recognizes, through insight into the role of privacy in
> > sensorimotive phenomena, is that individual reality tunnels are blind
> > to other reality tunnels which do not resemble their own. There may be
> > an inversely proportional relation between isomorphism and
> > sensorimotive blindness (call it AAD - Atrophic Agency Detection)
> > which mercifully sequesters relevant frames of reference through a
> > taxonomy which SEE calls perceptual relativity so we don't have to
> > worry about how each footstep affects the geology, and microbiology of
> > each square inch of underfoot real estate. For people, we can call
> > this merciful sensorimotive inhibition 'sanity'. Too much inhibition
> > however, can have consequences as well. One who is overly medicated or
> > impaired can be robotic and overly theoretical in their approach.
> > Unfeeling and reactionary. This is the flaw in digital reasoning.
>
> On the contrary, digital mechanism open us to stop in reducing machine  
> and numbers to their appearance.
> What you say is "machine are stupid", so *you* take mechanism as a  
> reductionist view of human.
> What I say is "human are intelligent/conscious", so mechanism opens up  
> to the idea that machine can be intelligent/conscious.
> I don't follow either your theory that medication can lead people to  
> be theoretical. If *that* is true, tell me the name of the medication,  
> and I will suggest this to some students.

Mild tranquilizers, stimulants like Adderall, and anti-psychotics
would tend to make a person's psyche more occidental. I don't have to
reduce numbers to an appearance; they are an appearance. The
characters in this sentence are nothing other than an appearance; they
have no inner life.

> > It
> > fails to appreciate the role of feeling in making human sense,
>
> My whole life is a fight to recognize the importance of feeling, not  
> just in human, but in the whole cosmos creation.
>
> > investing full faith in transhuman computation without admitting that
> > there is a significant difference between what we inscribe
> > symbolically and the organism that can do the inscribing and decoding.
>
> Machine do make that difference already. This is explained in full  
> details in some papers. Of course some amount of computer science  
> needs to be studied.

What would be a common-sense real world example of this?

> > In the mean time we will get better and better toys. I have no real
> > trouble with all of that, except that if we really care about
> > understanding what the cosmos actually is,
>
> Yes, that is the goal. Understanding what is the cosmos, where it does  
> come from, what does it hurt, etc.

Anywhere that the cosmos could come from would also be the cosmos,
wouldn't it?

> > and what we actually are,
> > then I have no qualms about pointing out that the difference between
> > an abstract machine algorithm and a living organism is an
> > insurmountable gap. It's mistaking the menu for the meal, the Mouse
> > for the crowd, etc.
>
> You are confusing a menu, with a program running relatively to some  
> universal numbers.

A program running relatively to some universal numbers is still not a
meal.

> > What a machine is, is the opposite of
> > sensorimotive phenomena. Is there no room in arithmetic for it's own
> > opposite.
>
> You might try to clarify this, and give arguments.

If you are saying that comp can emulate anything, and that a common
thing to emulate is symmetry, then it should be able to emulate the
opposite of itself. If that can be done, I think that what will be
found is feeling.

> > An uncomputable non-infinity which is anchored in ordinary
> > experience rather than illuminated technosis? Instead of addressing
> > this failure of math to find 'reality',
>
> The point is that it does not fail on that. Once you accept the idea  
> that you are a finite machine, arithmetic explains where the coupling  
> consciousness/realities come from. It even explains why we feel that  
> there is an insurmountable gap, and why that gap has a very special  
> shape and its role in the apparition of the cosmos.

You can model it that way, and it succeeds on its own terms, but its
terms are distorted to fit the ontology of its methods. It's not a
matter of being finite; it's that we have finite aspects and
transfinite aspects.

> > comp concludes that it is
> > reality which has no reality.
>
> That would be a contradiction. Comp concludes only that physical  
> reality emerges from addition and multiplication. The proof is  
> constructive (just look in the 'head' of any universal machine) and so  
> it leads to a testable physics. It explains already very weird features  
> of the empiric observable reality (indeterminacy, non-locality, non-
> cloning, non-booleanity, symmetry at the bottom, etc.).
> Above all it leads to a precise mathematical theory of qualia and  
> feeling, *and* of quanta and measurement/interaction. A lot of work  
> remains, obviously.

It sounds like a good model, but ultimately it's inside out. Pain
cannot be generated within a mathematical theory. Arithmetic cannot
care.

> > I may be reductionist in my view on what
> > 'machines' are able to experience, but I am quite generous in my views
> > on what machines can enable us to experience.
>
> Yes machines, in the 3p sense, cannot have experience. Nor can a human  
> brain, or anything conceived in a 3p view. But machine have natural 1p  
> views on themselves, and it is the 1p which is subject to experience.

I agree. Our only difference, I think, is that you are saying that the
1p experience of a simulation is going to be the same as the 1p
experience of the original if you copy the 3p view. I say plainly not,
as a mathematical model of fire does not produce heat, not even within
the 1p experience of the simulated world. The experience of heat is
different from the experience of the increased velocity and collisions
of virtual atoms.

Craig
http://s33light.org
