Roger,

If the mind were not extended, then animal intelligence would not depend on brain size.

Richard
On Mon, Aug 27, 2012 at 8:39 AM, Roger Clough <[email protected]> wrote:

> It has been asked here -- what in fact is the mind-body problem?
>
> http://oregonstate.edu/instruct/phl302/writing/mind-top.html
>
> "The Mind Body Problem
>
> What philosophers call the mind body problem originated with Descartes.
> In Descartes' philosophy the mind is essentially a thinking thing, while
> the body is essentially an extended thing - something which occupies
> space. Descartes held that there is two-way causal interaction between
> these two quite different kinds of substances. So, the body affects the
> mind in perception, and the mind affects the body in action. But how is
> this possible? How can an unextended thing affect something in space?
> How can something in space affect an unextended thing?"
>
> ---------------------------------------------------------------------
>
> Immediately below I give an account of a man being pricked by a pin in
> Leibniz's world versus such an action in the actual or phenomenal world.
>
> In summary, and in addition:
>
> 1) They amount to the same account, one virtual and one actual or
> phenomenal.
>
> 2) Our so-called free will is only an apparent one.
>
> 3) Because monads overlap (are weakly nonlocal), since space is not a
> property, monads can have some limited, unconscious awareness of the
> rest of the universe (including all life). This awareness is generally
> very weak and generally unconscious. Still, it means that we are an
> intimate part of the universe and all that happens.
>
> 4) The virtual world of the monad of man strictly portrays men as
> blind, completely passive robots. However, his monad is inside of the
> supreme monad, which is his puppet-master. But at the same time, like
> Pinocchio, he becomes seemingly alive in the everyday sense that we
> feel we are alive,
> but through the supreme monad in which he is subordinately enclosed.
>
> 5) There is some bleed-through of future perceptions, so we can have
> some dim awareness of future happenings.
>
> ---------------------------------------------------------------------
>
> I will just briefly discuss actions here by man. Each man is entirely
> virtual, a monad in the space of thought containing a database of
> perceptions (given to him by God) of all the perceptions of the other
> monads in the universe. Some of these (animals) are mindless, and
> others (plants, rocks) are feelingless as well, having only corporeal
> functions.
>
> Every monad has an internal source of energy, plus a pre-programmed set
> of virtual perceptions continuously and instantaneously given to him by
> the Supreme Monad, and a set of virtual actions the monad is programmed
> to virtually desire or will, giving new perceptions to him as well as
> to every other monad in the universe.
>
> All of these must function as virtual agents or entities according to
> Leibniz's principle of preestablished harmony. Only the supreme monad
> (God) can perceive, feel, and act.
>
> So if God wants you to be pricked by a pin, feel the pain, and react,
> he will cause a virtual monadic pin to virtually prick your sensory
> monad, and then have you virtually feel pain as a monad, but actually
> feel a real pain in the phenomenal world, and to virtually jump and
> really jump in both worlds, one virtually and one physically.
>
> How does this differ
>
> ==================================================
> A MORE COMPLETE ACCOUNT OF CAUSATION BY MONADS
>
> Personally, I am looking at the "how is this possible" aspect, first by
> asking what is possible from the aspect of Leibniz's metaphysics.
>
> What is possible is limited by Leibniz's monadology:
>
> http://www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html
>
> The principal issue is Leibniz's theory of causation.
> One account is given at
>
> http://plato.stanford.edu/entries/leibniz-causation/
>
> There seems to be some confusion, and differing accounts of how things
> happen, but my own understanding is that:
>
> 1) All simple substances are monads, of which there are 3 types: those
> just containing bodily perceptions (rocks, vegetables), those
> containing affective perceptions as well (animals), and those (man)
> which also have mental perceptions (i.e., all things mental).
>
> 2) Monads can do nothing and perceive nothing on their own, but only
> through God (the supreme monad), according to our desires, which are
> actually God's.
>
> 3) All of the actions of lesser monads and the supreme monad God have
> been scripted in the Preestablished Harmony.
>
> 4) Thus causation is virtual, say like in a silent movie. No actual
> forces are involved, only virtual forces.
>
> 5)
>
> Roger Clough, [email protected]
> 8/27/2012
> Leibniz would say, "If there's no God, we'd have to invent him so
> everything could function."
>
> ----- Receiving the following content -----
> From: benjayk
> Receiver: everything-list
> Time: 2012-08-25, 11:16:59
> Subject: Re: Simple proof that our intelligence transcends that of
> computers
>
> I am getting a bit tired of our discussion, so I will just address the
> main points:
>
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> >> But let's say we mean "except for memory and unlimited accuracy".
> >> >> This would mean that we are computers, but not that we are ONLY
> >> >> computers.
> >> >
> >> > Is this like saying our brains are atoms, but we are more than
> >> > atoms? I can agree with that, our minds transcend the simple
> >> > description of interacting particles.
> >> >
> >> > But if atoms can serve as a platform for minds and consciousness,
> >> > is there a reason that computers cannot?
> >> Not absolutely. Indeed, I believe mind is all there is, so
> >> necessarily computers are an aspect of mind and are even conscious
> >> in a sense already.
> >
> > Do you have a meta-theory which could explain why we have the
> > conscious experiences that we do?
> >
> > Saying that mind is all there is, while possibly valid, does not
> > explain very much (without some meta-theory).
>
> No, I don't even take it to be a theory. In this sense you might say it
> doesn't explain anything on a theoretical level, but this is just
> because reality doesn't work based on any theoretical concepts (though
> it obviously is described by them and incorporates them).
>
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> > Short of adopting some kind of dualism (such as
> >> > http://en.wikipedia.org/wiki/Biological_naturalism , or the idea
> >> > that God has to put a soul into a computer to make it
> >> > alive/conscious), I don't see how atoms can serve as this platform
> >> > but computers could not, since computers seem capable of emulating
> >> > everything atoms do.
> >> OK. We have a problem of level here. On some level, computers can
> >> emulate everything atoms can do computationally, I'll admit that.
> >> But that's simply the wrong level, since it is not about what
> >> something can do in the sense of transforming input/output. It is
> >> about what something IS (or is like).
> >
> > Within the simulation, isn't a simulated atom like a real atom (in
> > our reality)?
>
> There is no unambiguous answer to this question, IMO.
>
> But it only matters that the simulated atom is not like the real atom
> with respect to our reality - the former can't substitute for the
> latter with respect to reality.
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> >> Jason Resch-2 wrote:
> >> >> >
> >> >> >> Jason Resch-2 wrote:
> >> >> >> >
> >> >> >> >> since this is all that is required for my argument.
> >> >> >> >>
> >> >> >> >> I (if I take myself to be human) can't be contained in that
> >> >> >> >> definition because a human is not a computer according to
> >> >> >> >> the everyday definition.
> >> >> >> >
> >> >> >> > A human may be something a computer can perfectly emulate,
> >> >> >> > therefore a human could exist within the definition of a
> >> >> >> > computer. Computers are very powerful and flexible in what
> >> >> >> > they can do.
> >> >> >> That is an assumption that I don't buy into at all.
> >> >> >
> >> >> > Have you ever done any computer programming? If you have, you
> >> >> > might realize that the possibilities for programs go beyond
> >> >> > your imagination.
> >> >> Yes, I studied computer science for one semester, so I have
> >> >> programmed a fair amount. Again, you are misinterpreting me. Of
> >> >> course programs go beyond our imagination. Can you imagine the
> >> >> Mandelbrot set without computing it on a computer? It is very
> >> >> hard. I never said that they can't.
> >> >>
> >> >> I just said that they lack some capability that we have. For
> >> >> example, they can't fundamentally decide which programs to use
> >> >> and which not, and which axioms to use (they can do this
> >> >> relatively, though). There is no computational way of
> >> >> determining that.
> >> >
> >> > There are experimental ways, which is how we determined which
> >> > axioms to use.
> >> Nope, since for the computer no experimental way exists if we
> >> haven't determined a program first.
> >
> > You said computers fundamentally cannot choose which programs or
> > axioms to use.
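[Editor's note: the Mandelbrot remark quoted above is easy to make concrete. Membership in the set is decided by iterating a one-line rule, z -> z*z + c, yet the resulting shape is famously hard to picture without actually running the iteration. A minimal sketch, not from the thread itself; the iteration cap of 100 is an arbitrary illustrative budget:]

```python
# Decide whether a complex point c appears to belong to the Mandelbrot set
# by iterating z -> z*z + c and checking whether the orbit escapes.
# An orbit whose magnitude ever exceeds 2 is guaranteed to diverge; the
# cap of 100 iterations is an arbitrary budget chosen for illustration.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c's orbit stays bounded within the iteration budget."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # escaped: c is certainly outside the set
            return False
    return True             # never escaped: treat c as inside

print(in_mandelbrot(0))     # origin stays at 0 forever -> True
print(in_mandelbrot(1))     # orbit 1, 2, 5, ... escapes -> False
```

Rendering the familiar picture is just this test swept over a grid of points in the complex plane, which is exactly the "computing it on a computer" referred to above.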
> > We could program a computer with a neural simulation of a human
> > mathematician, and then the computer could have this capability.
>
> That would just strengthen my point (note the words "we program",
> meaning "we choose the program").
>
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> > If the computer program had a concept for desiring
> >> > novelty/surprises, it would surely find some axiomatic systems
> >> > more interesting than others.
> >> Sure. But it could be programmed not to have such a concept, and
> >> there is no way of determining whether to use it or not if we
> >> haven't already programmed an algorithm for that (which again has
> >> the same problem).
> >>
> >> In effect you get an infinite regress:
> >> How to determine which program to use? -> use a program to determine it
> >> But which? -> use a program to determine it
> >> But which? -> use a program to determine it
> >> ...
> >
> > Guess and check, with random variation; it worked for evolution.
>
> But which guessing-and-checking program to use? -> use a more general
> guessing-and-checking program to determine it
> But which? -> use an even more general guessing-and-checking program to
> determine it
> etc. ...
>
> You still never arrive at a program; in fact, your problem just becomes
> more difficult each time you ask the question, because the program
> would have to be more general.
>
> Jason Resch-2 wrote:
> >
> >> > You're crossing contexts and levels. Certainly, a heart inside a
> >> > computer simulation of some reality isn't going to do you any
> >> > good if you exist on a different level, in a different reality.
> >> So you are actually agreeing with me? - Since this is exactly the
> >> point I am trying to make.
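[Editor's note: the regress described in the exchange above can be caricatured in code. This is my own illustration, not anything from the thread: any "program chooser" is itself a program, so asking how it was chosen just pushes the question up one level, and the recursion only ends when some top-level chooser is accepted without having been chosen.]

```python
# Caricature of the regress: choosing a program requires a chooser-program,
# whose own selection requires a meta-chooser, and so on. The recursion
# terminates only when we accept some chooser by fiat, unchosen -- which
# is the point of the regress argument.

def choose_program(level: int = 0, give_up_at: int = 3) -> str:
    """Return a description of the chooser we finally settled on."""
    if level == give_up_at:
        # We stop here by fiat, not because anything chose this chooser.
        return f"chooser at level {level}, accepted without a chooser"
    # Otherwise, delegate the choice to a more general chooser.
    return choose_program(level + 1, give_up_at)

print(choose_program())  # -> "chooser at level 3, accepted without a chooser"
```

Raising `give_up_at` only postpones the arbitrary stopping point; it never removes it, which mirrors the claim that each round of the question demands a more general program.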
> >> Digital models exist on a different level than what they represent,
> >> and it doesn't matter how good/accurate they are, because that
> >> doesn't bridge the gap between model and reality.
> >
> > But what level something is implemented in does not restrict the
> > intelligence of a process.
>
> This may be our main disagreement. It boils down to the question of
> whether we assume intelligence = (Turing) computation. We could embrace
> this definition, but I would rather not, since it doesn't fit with my
> own conception of intelligence (which also encompasses instantiation
> and interpretation).
>
> But for the sake of discussion I can embrace this definition, and in
> this case I agree with you. Then we might say that computers can become
> more intelligent than humans (and maybe already are), because they
> manifest computations more efficiently than humans.
>
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> >> And this seems to be empirically true, because there is pretty
> >> >> much no other way to explain psi.
> >> >
> >> > What do you mean by psi?
> >> Telepathy, for example.
> >
> > Are you aware of any conclusive studies of psi?
>
> That depends on what you interpret as conclusive. For hard-headed
> skeptics, no study will count as conclusive.
>
> There are plenty of studies that show results that are *far* beyond
> chance, though. Also, the so-called "anecdotal evidence" is extremely
> strong.
>
> Jason Resch-2 wrote:
> >
> >> Jason Resch-2 wrote:
> >> >
> >> >> Jason Resch-2 wrote:
> >> >> >
> >> >> >> I am not saying that nature is infinite in the way we picture
> >> >> >> it. It may not fit into these categories at all.
> >> >> >>
> >> >> >> Quantum mechanics includes true subjective randomness already,
> >> >> >> so by your own standards nothing that physically exists can be
> >> >> >> emulated.
> >> >> > The UD also contains subjective randomness, which is at the
> >> >> > heart of Bruno's argument.
> >> >> No, it doesn't even contain a subject.
> >> >>
> >> >> Bruno assumes COMP, which I don't buy at all.
> >> >
> >> > Okay. What is your theory of mind?
> >> I don't have any. Mind cannot be captured or even described at the
> >> fundamental level at all.
> >
> > That doesn't seem like a very useful theory. Does this theory tell
> > you whether or not you should take an artificial brain if it was the
> > only way to save your life?
>
> Of course it is not a useful theory, since it is not a theory in the
> first place. To answer your question: No. There is no theoretical way
> of deciding that.
>
> benjayk
>
> --
> View this message in context:
> http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34348098.html
> Sent from the Everything List mailing list archive at Nabble.com.
>
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.

