On Jul 31, 9:16 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> Hi Craig,
>
> Sorry for having taken some time to comment on your posts. I will be
> busy the next two weeks, so be patient regarding possible comments.
> I comment on all 3 of your posts addressed to me in one mail.

Thanks, yeah, no rush. It seems like we might be going in circles, so I
keep thinking that I should try to sum up the core issues where we
agree and disagree. Mainly: your model features arithmetic as a
primitive, whereas I see arithmetic as a subjective experience, and
take the relation of subjectivity to objectivity as primitive.
Consequently, it follows from your model that we could produce Turing
consciousness mathematically in any physical or informational medium,
whereas my model posits that consciousness is not produced but rather
is the elaborated 1p correlate of 3p neurology x zoology x biology x
chemistry x physics.

> >> On 28 Jul 2011, at 17:41, Craig Weinberg wrote:

> > To
> > say that it is representational is to conflate the referent and the
> > signifier.
>
> Not at all. It is a bet on the invariance of our subjective experience
> on a substitution level. Biology illustrates already the idea in the
> language of chemistry.
> Comp does not imply that everything is representational, nor that a
> Turing machine can simulate everything. On the contrary, some
> machine attributes are not Turing emulable.

What machine attributes are not Turing emulable? I thought the Church
thesis says that all real computations are Turing emulable.
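(Side note, so we mean the same thing by 'not Turing emulable': the
standard example of such a machine attribute is the halting predicate.
A minimal sketch of the diagonal argument, in Python, where `halts` is
a hypothetical oracle that provably cannot exist:)

    def halts(f, x):
        """Hypothetical total decider: True iff f(x) eventually stops."""
        raise NotImplementedError("no such total computable function exists")

    def diag(f):
        """Halt exactly when f does NOT halt on its own source."""
        if halts(f, f):
            while True:   # f(f) would halt, so loop forever
                pass
        # f(f) would loop, so return immediately

    # diag(diag) halts <=> halts(diag, diag) is False <=> diag(diag) loops.
    # Contradiction: so no program computes 'halts'.

If that is the kind of attribute you mean, then the Church thesis is
only about emulating all computations, not all properties of
computations.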

> > In order for the machine to STOP, there doesn't
> > automatically need to exist a 1p experience of a red sign.
>
> It depends on the machine. In the case of humans, there usually is a
> 1p experience. I assume comp.

But stopping can be accomplished without the existence of any specific
1p experience, right?

> > In our 1p,
> > we see a red stop sign as a qualitative image, we understand it as a
> > symbolic text, we interpret it as a pragmatic condition that motivates
> > us to respond with motor commands to our body to push the brake, the
> > brake stops the car.
>
> Yes. But that is not an argument that some machine cannot do that. In
> the comp theory, there is no need to eliminate the 1p experience.
> Don't confuse the comp theory with its misuse by materialists.

I'm trying to establish that a machine does not automatically do this.
I only know comp theory from what I've learned from you.

> > Through our interpretation we re-present the
> > signifier, which is a representation-neutral experience of presented
> > color, shape, size, and context. As the machine is a reverse-
> > engineered logic, we have no reason to presume that our signifier -
> > the red light or sign - is presented just because a command is sent to
> > the processor queue to stop the car when the CCD in the camera
> > encounters electromagnetic changes of a particular sampled
> > configuration.
>
> You are right, but this only means that we got the substitution
> level wrong.
> If we are machine, we cannot know which machine we are, nor really
> which computations go through, but we still face something partially
> explainable.

Yes, substitution level is the thing. I don't see the level as a
simple point on a one-dimensional continuum, though. It's punctuated by
qualitative paradigmatic leaps of synergy; hence the big deal about
whether an organism is alive or not. The entropy cost is not uniform, just
as the different hues in the visible spectrum appear to us as
qualitative regions of color despite the uniform arithmetic of
frequency along the band. Let's say that human consciousness spans the
spectrum from red (sensation) to violet (abstract thought), with
phenomena such as emotion, ego, etc. in the orange-yellow-green zone. I
think that a computer chip is like taking something which is pre-
sensation (silicon detection = infrared) and reverse engineering
around the back of the spectrum to ultraviolet: abstraction without
thought. If we want to go further backward into our visible spectrum
from that end, I think we would have to push forward more from the
beginning. You need something more sensitive than stone semiconductors
to get into the visible red wavelengths in order to have the 1p
experience reach the violet level of actual thought. Or maybe that
wouldn't work and you would have to build through each level from the
bottom (red) up.
>
> > It's going to stop the car whether there is an
> > experience of a sign or not. I say that there is an experience, but
> > it's likely not remotely like a human signifier and would compare as
> > one piano note compared to an entire symphony, if not the sum of
> > hundreds of symphonies filtered through different molecular, cellular,
> > physiological, neurological, and psychological audiences.
>
> My point works even if you decided that your "generalized brain" (the
> part of reality I need to emulate to get your consciousness preserved)
> is given by the quantum rational Heisenberg matrix, of the whole
> cluster of galaxies, at the level of strings.

What would be the part of a burning log that you need to emulate to
preserve its fire?

> > An abstraction is an ideal teleological signifier, having no relevant
> > physical qualities itself but the capacity to be used as a template to
> > inform both physical and ideal forms. Concrete is the opposite, a
> > material referent which exists physically as an objective phenomenon
> > which is subject to the teleonomy of physical, chemical, biological
> > consequences.
>
> That seems quite abstract to me.

Abstractions are as abstract as anything can be?

> >> I don't buy that there is necessarily a given physical universe. It
> >> is
> >> only an Aristotelian rumor, based on a gross extrapolation on our
> >> animal experience. But it fails, both on mind *and* matter.
>
> > I hear what you're saying, and I agree in the sense that from the
> > absolutely objective 0/∞p perspective there is no special difference
> > between physical and non-physical phenomena,
>
> You miss the point. Comp shows, and makes it possible to illustrate,
> the need to explain how the physical arises or is built from
> conceptually simpler non-physical notions, already well known, which
> are the mathematical relations.

Then maybe, as well, I introduce Kromp to show, and make it possible to
illustrate, the need to explain how non-physical notions arise or are
built from simpler ontological experiences of physical existence,
already well known, which are sensorimotive perceptions.

> > but in SEE, the idea is
> > that existence is a relation of essential phenomena confronting its
> > tail,
>
> I think that you might confuse existence with consciousness.
> I think a scientist does not commit himself ontologically, beyond the
> terms of its theory.

I think that the act of not committing himself to anything beyond the
terms of his theory is an unscientific and arbitrarily sentimental
commitment.

> > through the involution of time-space characteristics.
>
> This does not help.

Why not?

> > In this
> > sense [ ] the notions of mind and matter lose all
> > absolute character of abstract or concrete - it is only through
> > perceptual relativity that the tail is assigned material qualities
> > from the 1p of its 'head'. Perceptual relativity bundles the
> > individual piano notes of sensorimotive experience into the qualia
> > chords, arpeggios, and symphonies experienced by the human 'head'. The
> > very experience of essence seeing its tail as not-self is one of
> > ontological glamor. Fear and fascination at the image of meaning and
> > experience involuted through spacetime - decompactified as discrete,
> > meaningless non-experiences.
>
> ?

Trying to say that it's not arithmetic that gives rise to mind vs.
matter; it's the experience of mind encountering its own ontological
tail. That experience can then be modeled computationally.

> > I don't, but it makes sense to model it that way since we use silicon
> > for that reason, to tap into its glass-like semiconductive
> > properties. Transparency, neutrality, reflection... the closest we can
> > get to a purely 'tail' material. You're right though, it could
> > experience anything and we wouldn't have any idea. Maybe it
> > experiences the ∞p omniscient perspective even, but I think
> > parsimony
> > suggests a piano note vs chords and symphonies model.
>
> ?

I'm just saying we have no way to know whether a rock writes like
Shakespeare or communicates with other galaxies, but parsimony
suggests that the matter in a rock has a less rich experience than
would a starfish or a primate.

> > I'm saying we can get a better symphony out of a philharmonic
> > orchestra than a thousand drummers. Drums make us feel one way, a
> > cello feels a different way. Can you play cello with a million tiny
> > drums? Maybe. I don't think that the resonance would scale up the same
> > way. Can you make a cello player out of a trillion tiny drums? I doubt
> > it.
>
> The comp problem is that arithmetic plays cello, not just with a
> million tiny drums, but also with teragigamegatrillions of tiny
> drums, and then 10^teragigamegatrillions of tiny drums, and this
> is only the beginning.

Mm. Back to substitution level. It's tempting, but again,
10^teragigamegatrillions of infrared and ultraviolet pixels don't equal
one visible pixel, do they?
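Granted, arithmetic can stack simple tones into arbitrarily complex
ones. A minimal additive-synthesis sketch in Python (assuming ideal
sinusoidal 'drums' and a crude 1/n rolloff, which is only vaguely
string-like, not a real cello):

    import numpy as np

    sr = 44100                       # sample rate in Hz
    t = np.linspace(0, 1.0, sr, endpoint=False)
    f0 = 65.4                        # fundamental: C2, roughly cello range

    # Sum 40 harmonic "drums"; 1/n amplitudes give a sawtooth-like timbre.
    tone = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, 41))
    tone /= np.max(np.abs(tone))     # normalize to [-1, 1]

My doubt isn't about the waveform - it's about whether all that summing
is ever heard by anything.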

> > There would have to be an explanation for why yellow should be any
> > more complex to create than a typical mechanical function.
>
> You are mistaken. We don't need to understand a process to copy it, or
> to isolate it with patience and time. But a computer is not an
> explanation of things, it is a door to an unknown. With, or without
> comp, we do have, at least, such a door in our head.

If the process we want to copy is understanding itself, how do you
know that we don't need to understand it?

> > I don't see
> > any indication that experience of any kind can be emulated by anything
> > independent of something naturally capable of experiencing it.
>
> Universal argument.

Is that bad?

> > There
> > is no arithmetic description which could be understood by a blind
> > person so as to be able to see yellow in their mind.
>
> You are right. But with comp, what remains true is that there is an
> arithmetical transformation of his/her brain such that he/she is able to
> see yellow in his/her mind. The blind person does not need to
> understand the arithmetical relations tied to the yellow qualia any
> more than you need to understand the functioning of your brain to think.

Sure, I get that. But producing that arithmetical transformation on a
brain where the visual cortex is missing doesn't make yellow happen.
The arithmetic is useless without those particular biological neurons
there to be informed by it.

> > Are you saying that I can build a computer out of silicon that runs a
> > program that runs a virtual server 10x faster than the silicon
> > computer is able to run?
>
> Yes. On almost all inputs. Once you have written your code for the 10x
> faster machine, I can find a better program making a Babbage machine
> more rapid for all sufficiently large inputs. This is proved by
> diagonalization, and can't be applied in practice, at least not in a
> direct way. It is Blum's speed-up theorem.

Meh. 'Can't be applied in practice' = unicornlandia to me
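(For reference, the theorem Bruno is citing, stated as best I
understand it, with \Phi_i a Blum complexity measure such as the
running time of program i: for every total computable function r there
is a total computable 0-1 valued f such that for every program i
computing f there exists another program j computing f with

    r(\Phi_j(x)) \le \Phi_i(x)    for all but finitely many inputs x.

So no program for f is even close to optimal - but the proof gives no
effective way to find j, which is why it "can't be applied in
practice".)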

> > That's unclear to me. I'm just saying that computing can't even
> > emulate Everything within the realm of computation, let alone in the
> > greater realm of sense.
>
> I don't know where you are saying "just" that. It seems you say there
> is a cosmos, and, if I am correct, you say "no" to the digitalist
> doctor. It is your right.

I don't insist there is 'a' cosmos, but everything we can access makes
sense or can be sensed, and that is cosmos.

> But to justify your belief that comp is
> false, you have to introduce some special non-Turing-emulable
> components. And this looks a bit like invoking UFOs to explain global
> warming.

You just told me that "On the contrary, some machine attributes are
not Turing emulable."

> >>> I don't think arithmetic can do anything by itself.
>
> >> I think you are quite wrong on this, but we may debate on the meaning
> >> of "doing".
>
> > When my computer begins evolving new operating systems by itself, and
> > when it's turned off then I would be convinced.
>
> What makes you sure that this is not possible in a few years, or in a
> few billion years? We are discussing a theoretical possibility and
> its consequences (having been clear that we would take an elimination
> of persons as a motive to abandon the theory).

Because arithmetic is just a representation of a feeling of order,
like time. It doesn't have a teleological will. You could take
something that does have teleological will - like an organism - and
empower it with a bunch of arithmetic machines. That could work
someday.

> >>>> I can appreciate a good poetical slogan to sum up a scientific
> >>>> theory,
> >>>> but such slogan per se cannot be taken as such as a theory.
>
> >>> It's not a theory, it's an observation.
>
> >> I would say it is a personal interpretation of an observation.
>
> > That's a given, but sure.
>
> That is not a given.

Aren't all observations personal interpretations of observations? Even
if we share the same interpretations, they are still personal
interpretations.

> Sometimes you talk as if you knew some truth. No
> doubt that you do know some truth, but it should not be used in
> arguing in those matters.

I don't make the distinction so much, because I assume it's a given
that I don't know any truth. Anything that I say is an idea or a
questioning or provocation of a truth. Sorry though, I'll try to
approach it differently if it's annoying.

> The 1-p is important, but it can't do
> science properly on its own. The civilized 1-p lets the 3-p do the
> science, but the beauty of comp is that the 3-p can recognize the
> 1-p, even if only partially. The evidence is that above the Löbian
> threshold, machines have a 1p and inner experiences.
>
>
>
> >>> There is absolutely nothing about
> >>> an animated CAD drawing of DNA which suggests it should be
> >>> associated
> >>> with anything that feels or thinks.
>
> >> Movies do not think, right.
> >> Computers do not think either. Nor brains.
> >> People think, thanks to brains.
>
> > Right. So how does AI 'think' like a human brain without a human
> > brain?
>
> AI?
> The term "artificial" is artificial. And so, it is natural to use the
> term "artificial",  for species developing big egos.

OK, then how does comp think like a human brain without a human brain?

> > What you're not seeing is that non-turing emulable is the definition
> > of awareness.
>
> What makes you say something like that? There are relations, but only
> relations, between the non Turing emulable, and awareness/
> consciousness. But this is justified by reasoning. By making it a
> definition you conflate quite distinguishable remarkable things.

I think it's because I'm working from a really simple symmetry model
and constantly, obsessively looking at set complements. It just seems
that if awareness is ill-fitting for Turing emulation, then perhaps
that is because non-emulability is its definition. What would be the
consequences of that, if true? It seems potentially viable to me.

> > It's not due to infinities, it's due to there being such a
> > thing as the opposite of arithmetic, which cannot be represented
> > within arithmetic as anything. Why can't arithmetic embrace a
> > mathematics of its own involution?
>
> Give me a proof that it can't.

No, I'm suggesting that it can and it should.

> >> Where does the cosmos come from?
>
> > It has no where not to come from.
>
> Can you doubt that it exists?

Doubt is part of the cosmos too.

> That is nice, but to share with others you need theories, assumptions,
> etc.
> Rigor and clarity are needed, especially in the non-formal context.

I do strive for rigorous clarity, but I think that the nature of the
subject matter itself is oceanic and paradoxical.

> > The experiencer is a given.
>
>  From its 1-pov, I agree. But that does not mean that it is an
> elementary reality that we need to assume in the ontology.

It doesn't mean that its opposite, the 3-pov, is any more elementary.

> > It's a primary vector of orientation. It
> > comes from the sensorimotive interior of the cosmos being twisted into
> > a private balloon through the time-space involution process. We see a
> > cell versus a molecule but the feeling of a cell is like a larger hole
> > through which experience can be poured compared to a molecule. It's a
> > metaphor - there is no hole that can be described in three dimensions,
> > it's a qualitative diameter correlate, like amperes, to describe the
> > level of experiential 'greatness' which can be experienced.
>
> ?

Our experience is defined by the shape of the hole which our
silhouette cuts out of the whole. We experience a limited version of
the whole through the holes that our senses allow. If we have larger
holes (greater sensitivity), and more holes (different sense channels
ie chemical channels vs biological channels vs somatic channels), then
our perception potential is 'greater' in overall magnitude than the
potential of something with only physical sense channels, regardless
of how great the sensitivity of the thing is.

> Second post: Craig wrote:
> >> Diagonalization is a tool in theoretical computer science, to study
> >> the structure of what is non computable, degrees of unsolvability,
> >> etc. It comes from set theory, where Cantor used it to study the
> >> degrees of infinities of sets.
> >> Mechanical concepts are immune to that tool, making the notion of *all
> >> computation* the most solid of all epistemological realities; indeed
> >> it makes it arithmetical, among other things.
>
> > I'm familiar enough with the Cantor set to get the gist of what you're
> > saying. What about non-epistemological or semi-epistemological
> > realities? I would define 1p as semi-epistemological.
>
> Introducing new words only hides the lack of argument. Epistemological
> or semi-epistemological will not prevent it by being used by machine,
> humans, the cosmos ...

I'm not trying to prevent it from being used by machines; I'm suggesting
that a machine would need to compute in the spectrum of non-epistemic
modeling to emulate 1p awareness.
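(For anyone following along, the Cantor move Bruno refers to is short
enough to state. Given any enumeration s_1, s_2, s_3, ... of infinite
binary sequences, define the diagonal sequence

    d(n) = 1 - s_n(n)

Then d differs from every s_n at position n, so no enumeration lists
all such sequences; applied to programs instead of sequences, the same
trick carves out the non-computable.)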

> > Are you saying that arithmetic is the only primitive reality though?
>
> This is not the assumption of the UD reasoning. But once you get the
> conclusion, you can understand that we don't need to assume more than
> arithmetic, and that the existence of more is absolutely undecidable,
> and cannot be used to justify any inner experience, so that the usual
> OCCAM can be used to abandon the unnecessary postulates.

It seems like you would come to the same kind of conclusion using
sense-primitive reasoning instead. If we assume that computation is
just a feeling of order which we share with objects, then nothing else
is required but sense.

> You are the one with strong assertions like "comp is false". I am just
> pointing that we can very modestly already listen to what machines are
> saying when introspecting.

What they are saying to us or what they are saying to themselves? A
key feature of introspection is that it is private.

> If you really want to refute comp, you have to
> study computer science. Up to now, this makes your argument invalid,
> unless you show us the need for those special infinities, and what they
> are.

I'm not invested in refuting comp, so much as expanding or explaining
comp in a larger context.

> >> You need only to be a universal machine.
>
> > Is an infant a universal machine? If so, would you say that she
> > understands multiplication? If so, why does she need to be taught
> > math?
>
> Infants are universal machines. Teaching math, notably, actualizes
> their universal feature. In fact, when they get the notions of aging,
> dying, anniversaries, they are Löbian. Being universal, they can
> emulate in principle any machine, but you need to teach them many things.

Why is their universal math feature not actualized to begin with,
while their universal sensorimotive features are actualized - robustly
- even before birth?
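(A toy gloss on 'universal', for concreteness: one fixed interpreter
can become any machine whose description you hand it as data. A
hypothetical Python sketch, with the program strings as made-up
examples:)

    def U(program, x):
        """A toy universal function: run 'program' (source text) on input x."""
        return eval(program)(x)

    print(U("lambda n: n * n", 7))   # 49 -- U behaves as a squaring machine
    print(U("lambda n: n + 1", 7))   # 8  -- same U, now a successor machine

Which only sharpens my question: the interpreter is fully formed from
the start, so why does the math 'program' need a decade of schooling
while the sensorimotive features boot up unaided?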


> What problem? We want a problem. The goal is to explain that comp does
> not solve the mind-body problem per se, but that it transforms it into
> an interesting mathematical problem: deriving the laws of physics from
> a measure problem in arithmetic. Progress has been made.
> The goal is to show there is a real MB problem, and that the
> materialist common use of comp is just inconsistent.

Sounds interesting.

> > Perspective would not be imaginable were it not experienced first.
>
> Why? This would lead to infinite regress if that were true.

If you had a flat universe, how could you imagine it to be otherwise
based on the way phenomena behave? Why would you suspect there could
be such a thing as 3D?


> > Maybe I don't understand enough about how comp is used to understand
> > this.

> I do think you should study comp. Note that this list is ahead of
> the literature: most computationalist philosophers still
> believe that comp can assume a primitively material universe (having a
> role in consciousness). That simply cannot work. UDA refutes this. And
> AUDA shows how and why, despite no basic universe, most UMs and LUMs
> will develop stable beliefs in stable local realities.

I don't think of the universe as materially primitive; it's the
relation between subject and object which is primitive and gives rise
to material appearances and abstract order.

> > They are reductions because the experience of listening to music is
> > significant without a formal mathematical analysis of its structure,
> > while the formal analysis is not significant except in its relation
> > to the 1p experience of the music.
>
> You beg the question, you just assume that the formal coupling brain-
> music is not enough. You need some infinite phlogiston in the brain to
> assume this.

It's not enough, because you can't find music inside the brain in its
experienced form, and you can't experience musical notation as music
without being able to hear music first. The assumption that you need
anything infinite or substantive to explain that is unexamined. All
you need is a 1p-3p continuum of phenomenology with two ontologically
opposite ends.

>
> Third post:
>
> >> So you assume that there is a cosmos, that there is inner
> >> experiences,
> >> and some quasi panpsychic link between them. That is also like
> >> assuming a solution of the mind-body problem at the start.
> >> I don't see how this could explain what is cosmos and matter, what is
> >> mind, and what is the nature of the relations between them.
>
> > I wouldn't say there is 'a' cosmos, I would say there is cosmos -
> > order, experience.
>
> I am a simple mind. If you assume A & B & C, it seems to me that you
> do assume A.

Oh, sorry, I misread your comment because I consider cosmos the same
as sense: the invariance between existence (ω) and essence (א). Yes, I
assume mind-body is not a problem but a specific relation of the
universal, set-complementary involuted continuum σ(אω){v}, where σ(אω)
is the spectrum of a matrix whose x axis symmetrically follows the
succession of sensorimotive perception (א) gradually involuting to
the west as electromagnetic relativity (ω), and whose y axis {v} is
relative magnitude.

> > It does explain the relation between matter and
> > experience,
>
> You say so. Where is the explanation? What we already know is that you
> have to speculate some special infinities. So we have not seen an
> explanation, but only a complexification of the problem, without
> motivation (except preventing computers from having inner experiences).

Experience and matter are symmetrical ends of a single ontology. Heads
and tails variations of the same invariant coin of sense.

> > one side is the opposite of the other in a continuum of
> > sense.
>
> ?

Subjectivity is the set complement for objectivity. Matter is
experience de-subjectified. Experience is matter de-materialized.
>
> > Sense is the relation of the two sides of the continuum.
>
> That seems like the billionth rewording of a version of the identity
> thesis.

From what I'm seeing of the identity thesis, my view differs in that I'm
not making a semantic argument where we care about what statements can
be made about things or expressions of things, but a phenomenological
hypothesis about the role that 1p subjectivity and 3p objectivity have
in mutually defining each other's appearance. Further, I would say that
the two sets imagined in the identity thesis are a single involuted set,
with each extreme end of the set presenting the other as its
complement/opposite.

> > The
> > nature of that relation depends entirely upon the scale and
> > complexity, history, purpose, context etc.
>
> Up to infinity.

?

> >> I don't think neurology has put any light on the "interior of our
> >> minds". Only on its possible low level implementation.
>
> > Through neurology we understand the effects of neurotransmitters,
> > hormones, etc. How addiction works, whether it's gambling or cocaine...
> > Lots of things.
>
> Using comp, indeed. But that is the "easy problem". It does not touch
> the problem of the inner experience. Indeed it is a software problem.
> A priori, neurons have no big role, per se. Understanding how Deep
> Blue works does not need the understanding of transistors.

Understanding how Deep Blue appears to work does not need the
understanding of transistors, but understanding its inner experience
might. That's really my main point, the one I think you are not
addressing. If we could stick some transistors in our brain and feel
the difference, then we might be able to have some good answers on how
machine complexity scales up to machine interiority.

> >> Sure. By definition a bison is three dimensional. But a two
> >> dimensional computer (that exists) can emulate a three dimensional
> >> computer, in which bison can evolve and eat grass.
>
> > That's true, but it still truncates the interior dimensions of the
> > Bison's experience.
>
> How do you know? You can be sure of this only if you are sure that
> some special infinities have a role, but you have not shown them, nor
> explained their role.

Ehh. It doesn't seem like we have to know. By the same standard that I
don't need evidence to presume that you exist, I don't need evidence
to presume that a CGI sim of a bison isn't tasting the grass that it
looks like it's eating. I don't think it invokes special infinities,
because it's an oriental epistemological standard rather than an
occidental one. In fact, the more we overthink it, the less we know
about it. It's the 'what hurts?' standard of knowing
(http://www.myspace.com/ophidius/blog/529808779). If we assume
interiority, then we should be horrified at the carnage and torture
that will ensue over the next decades of trying to make intelligent
machines - the aborted computational fetuses, the convulsing fits of
amputated code, etc. Our computer science labs will make the
Inquisition seem like a puppet show.
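To be clear, I grant the mechanical half of Bruno's 2D/3D point:
dimension is just a representational mapping. A minimal sketch in
Python (hypothetical grid sizes):

    nx, ny, nz = 4, 4, 4            # hypothetical 3D grid dimensions
    flat = [0] * (nx * ny * nz)     # flat 1D memory emulating a 3D array

    def idx(x, y, z):
        """Map a 3D coordinate onto the flat 1D store."""
        return x + nx * (y + ny * z)

    flat[idx(1, 2, 3)] = 42         # write voxel (1, 2, 3)
    print(flat[idx(1, 2, 3)])       # 42 -- the 3D-ness lives in the mapping

What I deny is that the mapping preserves the interior dimensions of
the experience, not that it preserves the behavior.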

> > Cells are made of molecules. The feelings of
> > cells are made of the feelings or proto-sensorimotive events of
> > molecules.
>
> For which you need to speculate on a new physics, as we have already
> agreed.

It's got to happen sooner or later. Why not sooner?

> > If you look at just the unfeeling side, then you see only
> > physical matter, but the physical matter that you actually are
> > undeniably feels.
>
> I deny this. Not only does the matter my body is made from not feel,
> but I think it does not, basically, exist.

You don't find it ironic that your actual experience as a living
organism, hundreds of millions of years in the making, you consider to
be non-existent, while conceptual intangibles developed over a few
centuries are considered the true reality?

> Note that some experiments show that we can imbue feeling onto an
> arbitrary object (like a plastic hand) when it is manipulated in some way.

Only if the object is used as an extension of our body. If you put a
local anesthetic in your hand, you won't be able to feel it.

> > You stub your toe, and you feel pain in your toe.
>
> Because I am well programmed, but you can make me feel I stub my toe,
> by activating directly electrodes planted in my brain.

Right, because 'you' are the interior of the brain and not the toe.
You can fool the nervous system, but it doesn't mean the toe isn't
still feeling pain on a cellular level.

> > It's your toe. It's part of you. It feels.
>
> Have you read about phantom limbs? Some people can have toe-ache
> without toes. Would you say that "nothing" feels, in this case? We
> need sensorimotive insistence of the void.

No, same thing. Your perceptual maps can retain the cumulative
entanglement from the momentum of millions of connections with the
former toe and your brain can send you the sense of a phantom toe
because it's not sure why it suddenly lost connectivity with the
country of toe after such a long and dependable collaboration.

> >>> Our consciousness is quite fragile compared to the robust
> >>> physical systems around us and we see that small changes in
> >>> functionality of the brain have tremendous effects upon human well
> >>> being.
>
> >> Small change in any machine can make them crash.
>
> > Then it would follow that they would be careful not to crash. You
> > can't be careful if you can't care.
>
> Sure, but I don't see the point, unless you beg the question again
> and assume that a machine cannot care.

Caring doesn't come from the machine, it comes from what the machine
is executed on. Human nervous systems care a lot, living cells care
some, but inorganic crystals care very little. It's a function of
awareness, not of computation.

> > Remember, consciousness is subtractive. When the body dies,
> > consciousness isn't lost, it just has a new view as a consequence of
> > losing its material filter. Sense is about pulling wholes through
> > holes. Without any resistance, there is only the whole and no need for
> > pulling.
>
> I can agree, but how can you be so sure that the material filter is
> not Turing emulable? It does look like a form of racism.

I'm not sure, I just have a hunch. Must there not be an opposite of a
Turing machine? That's what I would use to emulate the material
filter.

> >> The word "function" is very tricky. Either you see a function
> >> extensionally as a set of input-output pairs (behavior), or you see a
> >> function intensionally (note the "s"; it is not a spelling mistake!)
> >> as a set of recipes to compute or to process some activity (leading
> >> to output or not). Comp needs the two notions, and the second one is
> >> used (sometimes implicitly) in the notion of substitution level. A
> >> copy of a brain is supposed to preserve a local process, not a
> >> logical function.
>
> > Like what kind of local process? Membrane transport? Action
> > potentials?
>
> You decide with your doctor. I don't care about that. Comp says that
> there is some level, not that the level is this or that. Comp explains
> that we can never know for sure what the level is, except indirectly by
> deriving physics from what we can see when we observe ourselves below
> that level.

The only thing I'm not comfortable with, as far as the substitution
'level' goes, is that it seems to assume a one-dimensional quantitative
threshold, whereas I see punctuated equilibrium (back to the
red=feeling, blue=abstract thought metaphor). It doesn't sit right that
a digital sim is going to feel anything. It would have shown that
potential by now. Someone would have noticed one chip that hesitates to
complete a logoff/shutdown, or locks the start button out of fear. It
seems ludicrous, and not in a crazy-enough-to-be-true way.

> >> Yes, that is the goal. Understanding what is the cosmos, where it
> >> does
> >> come from, what does it hurt, etc.
>
> > Anywhere that the cosmos could come from would also be the cosmos,
> > wouldn't it?
>
> It is not. It would be misleading to call arithmetic a cosmos, which
> is a term I prefer to use to denote the physical local reality, and
> which emerges, in a very precise (and thus testable) way, from a tiny
> part of arithmetic.

The physical local reality I consider to be a function/figment of
perceptual relativity. It's just a particular band of the concrescence
of essential and existential relations. I agree, there's a lot more
going on than that, but I call the whole enchilada cosmos - order.

> >> You are confusing a menu, with a program running relatively to some
> >> universal numbers.
>
> > A program running relatively to some universal numbers is still not a
> > meal.
>
> But this is what you fail to explain to us. How could a universal
> number distinguish between a simulated meal and a meal?

That's the point. If it cannot distinguish between what is real and
what is simulated, then it is not sane. The main function of
subjectivity is to differentiate between what is relevant in its
perceptual frame and what can be ignored.

> To be able to
> do that, you need to introduce infinities and non Turing emulability
> in the working of the brain.
> This is like complexifying the data (in which such infinities do not
> occur) to avoid a theory. You just make a problem more complex to
> avoid a possible solution.

I don't think so. I think the brain can be modeled adequately through
computation; it just can't be executed in any old medium. A
cookie shaped like a brain isn't a brain, no matter how perfectly it
resembles one. Even if you had one cookie per brain state and
moved them on a giant conveyor belt so that the speed of any
particular cookie-belt frame appears below the substitution velocity,
you still won't have a mind made of blurry speeding cookies.

> > It sounds like a good model, but ultimately it's inside out. Pain
> > cannot be generated within a mathematical theory. Arithmetic cannot
> > care.

> And women cannot reason... We have heard so many ridiculous dismissals
> like that. Why not be agnostic on those issues?

Only because I have a model that explains the mind-body relation in a
different way. I have no moral opposition to arithmetic entities or
fear-based rejection of the proposal; I just think that it doesn't
work that way. There are too many indications of a
fundamental gap between machine intelligence and animal sentience. We
have never seen an inorganic material in nature behave like an animal.
We have never seen a silicon chip give any indication that it was
capable of growing or changing itself. We've never seen a mathematical
formula that suggests self-awareness. Natural language seems far more
promising for achieving sentience. A multi-gigapoem that makes
silicon cry its way into singing the body electric.

> Build a clear non-comp
> theory, and show us something which cannot be explained by computer
> science, and then we will be interested. But if you say "a machine
> cannot do that", it will just look like the terribly sad feeling of
> superiority that some humans can have from time to time.

How would computer science model the logic to answer a question like
'how are you feeling today?' or 'what do you want out of life?'

> >> Yes, machines, in the 3p sense, cannot have experience. Nor can a
> >> human brain, or anything conceived in a 3p view. But machines have
> >> natural 1p views on themselves, and it is the 1p which is subject to
> >> experience.
>
> > I agree. Our only difference I think is that you are saying that a 1p
> > experience of a simulation is going to be the same as the 1p
> > experience of the original if you copy the 3p view.
>
> At some right level. Yes.
>
> > I say plainly not,
> > as a mathematical model of fire does not produce heat, not even within
> > the 1p experience of the simulation world.
>
> How do you know that?

Oriental standard of epistemology again. Wisdom, not knowledge. It
doesn't make sense that you can make fire out of numbers.

> > The experience of heat is
> > different from the experience of increased velocity and collisions of
> > virtual atoms.
>
> Sure, but the effect of virtual velocity and collision of virtual
> atoms can have the correct effect on the virtual neurons of simulated
> people, so that they will behave as if they were suffering from the
> heat. If not, it means that you put something infinite and non-Turing-
> emulable in the human brain, and this makes both matter and mind, and
> their relations, artificially more complex.

I don't see why it has to be infinite and I don't see what's wrong
with non Turing. How do you explain that we notice a difference at all
between computers and living organisms? Why would our brains not have
just evolved out of sandstone if they could have?

Craig
http://s33light.org
