On 10/23/2012 4:53 PM, John Mikes wrote:
Hi, Stephen,
you wrote some points in accordance with my thinking (whatever that is worth), with one point I disagree with: if you want to argue a point, do not accept it as a basis for your argument (not even negatively). You do that all the time. (SPK? etc.) -

Hi John,

My English is pathetic and my rhetoric is even worse, I know this... I don't have an internal narrative in English; it's all proprioceptive sensations that I have to translate into English as best I can... Dyslexia sucks! What I try to do is lay down a claim and then argue for its validity; my language is often muddled... but the point gets across sometimes. I have to accept that limitation...


My fundamental question: what do you (all) call 'mind'?

Actually, mind - for me - is a concept, an abstraction; it isn't a thing at all...

(Sub: does the 'brain' do/learn mind functions? HOW?)

The same way that we learn to communicate with each other. How exactly? Hypotheses non fingo.

(('Experimentally observed' is restricted to our present level of understanding/technology(instrumentation)/theories. Besides: "miraculous" is subject to oncoming explanatory novel info, at which point it changes into merely 'functional'.))

    I agree.

To fish out some of my agreeing statements:
/*"Well, I don't follow the crowd...."*/
Science is no voting matter. 90+% believed the Flat Earth.

    I wish more people understood that fact!

/*"...* Alter 1 neuron and you might not have the same mind..."
/(Meaning: the 'invasion(?)' called 'altering a neuron' MAY change the functionalist's complexity /IN THE MIND!-/ which is certainly beyond our knowable domain. That makes the 'hard' hard. We 'like' to explain DOWN everything in today's knowable terms. (Beware my agnostic views!)

Agnosticism is a good stance to take. I am a bit too bold and lean into my beliefs. Sometimes too far...

"Computation" of course I consider a lot more than that (Platonistic?) algorithmic calculation on our existing (and so knowable?) embryonic device. I go for the Latin orig.: to THINK together - mathematically, or beyond. That mat be a deficiency from my (Non-Indo-European) mother tongue where the (improper?) translatable equivalent closes to the term "expectable". "I am counting on your visit tomorrow".

That is similar to my notion of "faith" as "expectation of future truth"...

/"I strongly believe that computational complexity plays a huge role in many aspects of the hard problem of consciousness and that the Platonic approach to computer science is obscuring solutions as it is blind to questions of resource availability and distribution."/
(And a lot more, whether we 'know' about them or not (yet).)

    yep, unknown unknowns!

/"Is the brain strictly a classical system? - No,..."
/The *"BRAIN"* may be - as a 'Physical-World' figment of our bio-physio conventional science image, but its mind-related function(?) (especially the hard one) is much more than a 'system': ALL 'parts' inventoried in explained functionality). And: I keep away from the beloved "thought-experiments" invented to make uncanny ideas practically(?) feasible.

Ah, I love thought experiments; they are the laboratory of philosophy. ;-)

/"...As I see it, there is no brain change without a mind change and vice versa. The mind and brain are dual,..." / Thanks, Stephen, originally I thought there may be some (tissue-related) minor brain-changes not affecting the mind of which the 'brains' serves as a (material) tool in our "sci"? explanations. Reading your post(s) I realized that it is a complexity and ANY change in one part has consequences in the others.

Right. I have to account for the degradation effects. Psycho-physical parallelism either holds exactly or not at all.

So whatever 'part' we landscape as the 'neuronal brain', it is still part of the wider, unknowable complexity.

    Indeed!

Have a good trip onward

    Thanks. ;-)

John M
On Sun, Oct 21, 2012 at 8:43 PM, Stephen P. King <stephe...@charter.net> wrote:

    On 10/21/2012 7:14 PM, Stathis Papaioannou wrote:

        On Mon, Oct 22, 2012 at 1:55 AM, Stephen P. King
        <stephe...@charter.net> wrote:

                If there is a top-down effect of the mind on the atoms
                then we
                would expect some scientific evidence of this.
                Evidence would
                constitute, for example, neurons firing when
                measurements of
                transmembrane potentials, ion concentrations etc.
                suggest that they
                should not. You claim that such anomalous behaviour of
                neurons and
                other cells due to consciousness is widespread, yet it
                has never been
                experimentally observed. Why?


            Hi Stathis,

                 How would you set up the experiment? How do you
            control for an effect
            that may well be ubiquitous? Did you somehow miss the
            point that
            consciousness can only be observed in 1p? Why are you so
            insistent on a 3p
            of it?

        A top-down effect of consciousness on matter could be inferred if
        miraculous events were observed in neurophysiology research. The
        consciousness itself cannot be directly observed.


    Hi Stathis,

        This would be true only if consciousness is separate from
    matter, such as in Descartes' failed theory of substance dualism.
    In the dual aspect theory that I am arguing for, there would never
    be any "miracles" that would contradict physical law. At most
    there would be statistical deviations from classical predictions.
    Check out http://boole.stanford.edu/pub/ratmech.pdf for details.
    My support for this theory rather than materialism follows from
    materialism's demonstrated inability to account for 1p. Dual aspect
    monism has 1p built in from first principles. BTW, I don't use the
    term "dualism" any more as what I am advocating seems to be too
    easily confused with the failed version.
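
To make the dual-aspect structure in that paper a bit more concrete, here is a minimal sketch of a Chu space over {0,1}, the kind of object ratmech.pdf works with, together with a morphism between two such spaces. The particular points, states, and maps (A, X, r, B, Y, s, f, g) are toy choices invented purely for illustration; they are not taken from the paper.

from itertools import product

# A Chu space over {0, 1}: a set of "points" A, a set of "states" X, and
# a matrix r : A x X -> {0, 1} recording which point holds in which state.
A = ["a1", "a2"]
X = ["x1", "x2", "x3"]
r = {("a1", "x1"): 1, ("a1", "x2"): 0, ("a1", "x3"): 1,
     ("a2", "x1"): 0, ("a2", "x2"): 0, ("a2", "x3"): 1}

# A second, smaller Chu space (B, s, Y).
B = ["b1"]
Y = ["y1", "y2"]
s = {("b1", "y1"): 0, ("b1", "y2"): 1}

# A Chu morphism (A, r, X) -> (B, s, Y) is a PAIR of maps running in
# opposite directions: f : A -> B on points and g : Y -> X on states,
# subject to the adjointness condition s(f(a), y) == r(a, g(y)).
f = {"a1": "b1", "a2": "b1"}      # forward on points
g = {"y1": "x2", "y2": "x3"}      # backward on states

def is_chu_morphism(f, g):
    return all(s[(f[a], y)] == r[(a, g[y])] for a, y in product(A, Y))

print(is_chu_morphism(f, g))      # True for this choice of f and g

The thing to notice is that the two maps making up the morphism run in opposite directions, forward on points and backward on states; that opposite-direction pairing is the formal sense of "dual aspect" being appealed to here.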


                I don't mean putting an extra module into the brain, I
                mean putting
                the brain directly into the same configuration it is
                put into by
                learning the language in the normal way.


                 How might we do that? Alter 1 neuron and you might
            not have the same
            mind.

        When you learn something, your brain physically changes. After
        a year
        studying Chinese it goes from configuration SPK-E to configuration
        SPK-E+C. If your brain were put directly into configuration
        SPK-E+C
        then you would know Chinese and have a false memory of the year of
        learning it.


        Ah, but is that change, from SPK-E to SPK-E+C, one that is
    countable strictly in terms of the number of neurons changed? No. I
    would conjecture that it is a computational problem that is at
    least NP-hard. My reasoning is that if the change were emulable
    by a computation X *and* that X could also be used to solve
    a P-hard problem, then there should exist an algorithm that could
    easily translate any statement in one language into another *and*
    finding that algorithm should require only some polynomial
    quantity of resources (relative to the number of possible
    algorithms). It should be easy to show that this is not the case.
        I strongly believe that computational complexity plays a huge
    role in many aspects of the hard problem of consciousness and that
    the Platonic approach to computer science is obscuring solutions
    as it is blind to questions of resource availability and distribution.
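
As a rough illustration of why resource availability matters, here is a back-of-the-envelope sketch of how fast a configuration space blows up. The modelling choices (binary "synapses", one candidate configuration checked per nanosecond) are assumptions made only so the arithmetic is concrete; they are not claims about real neurons or about the SPK-E to SPK-E+C transition itself.

# Exponential growth of a brain-like configuration space versus a fixed
# checking rate.  All numbers are illustrative assumptions.

AGE_OF_UNIVERSE_S = 4.4e17        # ~13.8 billion years, in seconds
CHECKS_PER_SECOND = 1e9           # one candidate configuration per nanosecond

for n_synapses in (30, 60, 100):
    configurations = 2 ** n_synapses
    seconds = configurations / CHECKS_PER_SECOND
    print(f"{n_synapses:>3} binary synapses: {configurations:.2e} configurations, "
          f"~{seconds / AGE_OF_UNIVERSE_S:.2e} universe-ages to enumerate")

Already at 100 binary synapses an exhaustive search would take thousands of universe-ages, which is one sense in which a resource-blind, Platonic view of computation hides the difficulty.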

                In a thought experiment we can say that the imitation
                stimulates the
                surrounding neurons in the same way as the original.
                We can even say
                that it does this miraculously. Would such a device
                *necessarily*
                replicate the consciousness along with the neural
                impulses, or could
                the two be separated?


                 Is the brain strictly a classical system?

        No, although the consensus appears to be that quantum effects
        are not
        significant in its functioning. In any case, this does not
        invalidate
        functionalism.


        Well, I don't follow the crowd. I agree that functionalism is
    not dependent on the type of physics of the system, but there is
    an issue of functional closure that must be met in my conjecture;
    there has to be some way for the system (that supports the
    conscious capacity) to be closed under the transformation involved.

                As I said, technical problems with computers are not
                relevant to the
                argument. The implant is just a device that has the
                correct timing of
                neural impulses. Would it necessarily preserve
                consciousness?


                 Let's see. If I ingest psychoactive substances, there
            is a 1p observable
            effect.... Is this a circumstance that is different in
            kind from that
            device?

        The psychoactive substances cause a physical change in your
        brain and
        thereby also a psychological change.


        Of course. As I see it, there is no brain change without a
    mind change and vice versa. The mind and brain are dual, as
    Boolean algebras and topological spaces are dual; the relation is
    an isomorphism between structures that have oppositely directed
    arrows of transformation. The math is very straightforward...
    People just have a hard time understanding the idea that all of
    "matter" is some form of topological space, and there is no known
    calculus of variations for Boolean algebras (no one is looking for
    it, except for me, that I know of). Care to help me? The idea of
    SPK-E -> SPK-E+C, which you mentioned, is an example of a variation
    of a Boolean algebra!
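
For readers unfamiliar with the duality being invoked, here is a minimal finite instance of it (Stone duality restricted to finite sets): the Boolean algebra of all subsets of a finite set is dual to the discrete space on that set, and the points of the space can be recovered from the algebra alone as its ultrafilters. The 3-element set S below is a toy choice for illustration only.

from itertools import chain, combinations

S = {0, 1, 2}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

algebra = powerset(S)            # the Boolean algebra of all subsets of S

def is_ultrafilter(U):
    # Upward closed, closed under intersection, and containing exactly
    # one of A and its complement for every A in the algebra.
    upward = all(B in U for A in U for B in algebra if A <= B)
    meets = all(A & B in U for A in U for B in U)
    decisive = all((A in U) != ((frozenset(S) - A) in U) for A in algebra)
    return upward and meets and decisive

candidates = powerset(algebra)   # 2^8 = 256 families of subsets of S
ultrafilters = [U for U in candidates if U and is_ultrafilter(U)]

# In the finite case every ultrafilter is principal: each one is exactly
# the family of subsets containing one fixed point of S, so the points
# of the "space" are recoverable from the algebra alone.
for U in ultrafilters:
    point = frozenset.intersection(*U)
    print(sorted(point))          # prints [0], [1] and [2], one per ultrafilter

The finite example already shows the two-way correspondence between algebra and space with oppositely directed arrows; the infinite case is where the topology becomes essential.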

    --




--
Onward!

Stephen
