On Jan 1, 1:46 pm, John Clark <johnkcl...@gmail.com> wrote:

> On Sun, Jan 1, 2012 Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > But if I dream of something very valuable, I don't get to keep that
> > valuable in real life.
>
> If you dream about a very valuable idea you do get to keep it, you just
> can't keep nouns. After years working on it the chemist August Kekule made
> by far the most important discovery of his life, the benzene ring, in a
> dream. And he kept that very valuable thing when he woke up.
Actually, he did not make any discovery in his dream; he only dreamed of a snake eating its tail (a symbol that has been around in cultures all over the planet since the beginning of symbols) and made the discovery based on his waking intuition about his dream. That's not my point though. I'm just saying that there is a difference between dreams and reality, even if 'we' may not be able to tell the difference while we are dreaming (although my bladder seems to be able to tell the difference between a dream toilet and an actual one pretty reliably.)

> >> Einstein didn't learn physics by himself, he needed books and teachers.
>
> > Not really. His theories are mainly based on profoundly elaborated
> > common sense.
>
> You are being silly. Before Einstein could push the frontiers of knowledge
> he had to understand what had already been discovered, and nobody, not even
> Einstein, could pick up physics and advanced mathematics by osmosis; like
> everybody else he had to study. And although some teachers thought he
> failed to do as well as he could, he was not a bad student, at least not in
> science and math.

I wouldn't deny that books and teachers assisted him, but I would not say that his famous thought experiments owe much to them. He used advanced mathematics to express his theories, not to arrive at them. Einstein's physics were his own, based on common-sense observations and relations. You have to do it that way if you want to really see things in a new way.

> > He wasn't programmed by Newton to see the universe in a Newtonian way.
>
> Newton probably had the most powerful mind that any human has ever had, and
> one of his most famous quotations is "If I have seen further it is only by
> standing on the shoulders of giants".

There's no question that human knowledge is necessary to advance human knowledge (otherwise you'd keep rediscovering the same things), but it need not be the top priority for every thinker in every case.
Sometimes we can make more sense of things by escaping the prejudices of the past entirely.

> > the mistaking of the shadow of clever computation for the ineffable and
> > forever non-simulatable experience of awareness.
>
> If that were true the multibillion dollar video game industry would not
> exist and NASA would not use computer simulations to train its astronauts.

I'm sure that NASA and its astronauts are quite aware that they are training on simulations. If not, why have astronauts at all? Why go into space when graphic simulations are just as good? It would certainly be cheaper. Games too are for entertainment. We are not planning on moving the United States into Skyrim when it finally goes belly up.

> >> A synapse just translates one meaningless set of data into another
> >> neuron. A synapse has no understanding of the significance of the process.
>
> > I take it then that you believe in metaphysical agency? Since we
> > understand the process, and there's nobody here but us neurons, I conclude
> > that neurons actually do collectively understand.
>
> All the neurons in my head collectively understand things but a single one
> does not, and one of the things we neurons collectively know is that you
> behave as if you understand things too, but we neurons don't know if you
> "really" understand anything at all.

That's where I see an obvious solution that others don't seem to. Neurons are part of the brain/body. Understanding is a function of the mind/self. It makes no sense to think of understanding as a function of the brain without thinking of a precursor to understanding as a function of neurons. While we know that we understand, we do not know that neurons don't have the experience/sense that our understanding is made of. Just as dream bullion can't be deposited in real-world banks, human understanding can't come from brain tissue - but it can come from the 'understanding' of that brain tissue.
> > 460 nanometers is just a wavelength, it has no color at all.
>
> Exactly, so why did you disagree when I said "the sound of broken glass is
> not broken glass, the look of broken glass is not broken glass, the feel of
> broken glass is not broken glass."?

Because electromagnetic wavelength is an abstract representation which we can employ to understand and manipulate realities outside of our perceptual frame. Sights, sounds, and feelings are concretely real presentations within our native perceptual frame. We infer electromagnetism and, through that, extend our perception figuratively, but not literally.

> > Whatever is in our head, it isn't blue.
>
> True.
>
> > Blue has no parts.
>
> If you really believe that (and I hope you don't) then you have surrendered
> to the forces of religious irrationalism.

Not at all. I have reclaimed the forces of scientific skepticism in the face of sclerotic academic orthodoxy. I see blue for what it actually is, not for what I believe it to be. If I were telling an omnipotent being how to build our universe from scratch, they would need to build blue as the visual experience that it is. None of the neurological parts would mean anything if I forgot to mention that blue is an actual experience that looks like something specific and appealing. No description of blue is necessary or sufficient. It cannot be described. It may not be 'rational' but it need not be religious. Like charge or spin, it is just part of the fabric of the sense of the universe.

> I'm not ready to throw in the
> towel and think we can learn more, but you have to try and you're not
> trying if you believe that.

I'm not believing anything, I'm only reporting exactly what it is. You are the one who is trying not to see it in a new way, even though the explanation we currently have fails completely and has no explanatory power whatsoever.

> > Bits aren't real. [...]
> > information isn't real
>
> Then "real" isn't all it was cracked up to be, because all the many things
> that you say are not real seem to be doing just fine. It seems that very
> little is "real" in your cosmology; well, I admit that does simplify things,
> you don't have to explain anything because there is nothing "real" to
> explain.

I'm only using real as a taxonomic label, I have no judgment on it. Mickey Mouse is not any less than a statue of Mickey Mouse, but it's a point of fact that these two phenomena are different. The statue is 'real' - it is a public artifact carved out of stone (let's say), but the Mickey Mouseness of the statue is not real in that way. It's just a piece of stone. Not everything in the universe can see Mickey in the stone. Mickey is a private perception (shared figuratively, but literally private). By the same token, Mickey Mouse is not 'real' as long as he remains an idea and is not shared through some exteriorizing physical medium. While it is not shared, it has many specific qualities - it must be imagined intentionally, it can be changed or moved instantaneously without respect for world-realism, it is associated with sensorimotive experiences - a character, a voice, expressions, etc. These qualities are signifying and proprietary. They belong to the character and give him identity. The statue has the opposite qualities. Not just different, but opposite. Without a Disney-literate perceiver, there is no character, no significance. There is no inferred sense of animation or emotion, nor is there a need to pay attention to it for it to persist. It's a stone object in the world - generic, public, unresponsive to thought or feeling.

> > There is no such thing as virtual particles - all subatomic particles are
> > virtual.
>
> You can do better than that, remarks like that just sound foolish. The
> Casimir Effect can not be explained without virtual particles

Sure it can.
All phenomena can be explained by sense-making - which is the subjective experience of symmetrically anomalous invariance between subject and object.

> and a
> baseball can't be explained without actual particles;

The actual particles are molecules, not subatomic. That is the smallest real particle of a baseball. (Unless you make a baseball out of solid iron or something.)

> the two types of
> particles are RADICALLY different, it's hard to see how they could be more
> different. To say they are the same is NOT the path to enlightenment.

Yes, they are radically different, because the subatomic particles are not real. They are the Mickey Mouse of atoms and molecules. Even if I'm wrong about that and photons are literally real, then sense would just begin at that level instead of the atomic level - I think it might be more of a continuum of increasing objective quality from quantum to atom to molecule (because whole atoms do double-slit weirdness too).

> > quantum mechanics is hopelessly lost and pulling machineus ex deitina out
> > of thin air to try to salvage its inside-out cosmology.
>
> Lost? Quantum Mechanics is the most successful theory in the history of
> science.

It's a great theory, but it's still exactly wrong if we take it literally. The predictions are accurate, but the interpretations of them as far as cosmology goes are doomed to fail. It is to say that you order a pizza because you have a PapaJohn field charging your body, which acts as an anti-pizza which creates a virtual pizza through the phone and attracts a pizza to the disequilibrium of your PJ field event horizon. Instrumentally accurate for the what and how, but it misses the who and why completely.

> > I'm not talking about atoms, I'm talking about inanimate objects
>
> I always thought atoms were inanimate objects.

Only when they are not getting together to explode into a furnace of nuclear fusion, or curl up into a billion species of living organisms. This is what I'm trying to point out.
Real atoms are not just inert spheres made of smaller spheres. A universe made of moving spheres can never be anything other than moving spheres. What atoms are is much, much different on the inside than how they seem to each other on the outside.

> > "You're trying to sneak your 'organizations' back into this."
>
> Sneak? Obviously if nothing is organized in a system you won't have
> intelligence or consciousness or much of anything of interest except for
> entropy.

Right. That's my point. You have to bring in organization as an unexplained metaphysical force to get from ping pong balls to anything else. If you are trying to understand consciousness and the cosmos, you have to try to understand what that force actually is and how it gets into the universe, and not just throw in the towel. If you rule out metaphysics, then what you have left is the interior of matter. Since we perceive ourselves as interior to a body, why wouldn't other things do the same?

> > I have no reason to assume that a 'computer' has any interiority at all.
>
> Do you have any reason to assume I have any interiority at all?

Yes, of course. I don't even have to assume it or require a reason - it is presented to me, just as these letters are presented to us both in English, rather than us having to have a reason to assume that they make sense to us. Of course that can be fooled. A computer can generate phrases which could not be distinguished from a person's phrases, but it gets harder to fool the longer you interact with it. You get that uncanny valley feeling.

> How about
> when I'm not arguing on the Internet but sleeping, or dead, has my
> interiority changed?

Sure, the quality of your conscious mind's interiority changes, but there are likely many subselves which are online at different times.

> > I think it's either sophistry or wishful thinking to entertain the
> > possibility of machine awareness.
> And I think it's sophistry or wishful thinking to pretend that an intelligent
> ANYTHING is not conscious. Saying he can't be conscious because he's made
> of silicon not meat is as crazy as saying he can't be conscious because his
> skin is a different color than mine.

Is it crazy then to say that a concrete log can't burn like a real wood log? Is it crazy to say that sulfuric acid can't be any worse for babies than mother's milk because they are both perfectly valid liquids that fill the bottle in the same way?

> > A tiny amount of the substance LSD can radically alter awareness.
>
> That's because the LSD radically alters the firing patterns of the neurons
> in the brain, and this is not pie in the sky philosophy; this, to use your
> favorite word, is real, you can measure it in the lab.

There is an altering of the firing patterns, sure, but only due to the interaction of the substance. You can't dose a person's brain with the pattern of LSD; you have to have the actual molecules enter the brain in order for anything to happen. So powerful is substance that this trace amount of fungus juice can change a lifetime of entrenched 'firing patterns of the neurons in the brain', but the converse is not the case. Those firing patterns created by some other means - magnetic stimulation, yoga, etc. - would not produce any LSD. It's not just an abstract pattern influencing another abstract computational pattern. It is substance influencing substance. We can influence our own substance, because we are the embodiment of its interiority, but we cannot influence other substances directly - we have to use our body to use tools to use the world.

> > You are taking protons, neutrons, stars, and the relations between them
> > for granted. I'm talking about a universe made of computation. 79 eggs in a
> > basket don't reflect red light better than blue light. 79 toothpicks don't
> > form a shiny nucleus.
> Starting from nothing but gravity, protons, neutrons, electrons and stars,
> Quantum Mechanics can figure out that there exists a shiny heavy metal that
> will look gold to our eyes. I'd say that was pretty damn good for a theory
> that was "hopelessly lost". Do you have a theory that can do better?

It's not lost when it comes to being able to correlate observation A with observation B, but it relies completely on those initial observations and so cannot explain them. QM could never in a million years figure out that there would be such a thing as 'looking gold' if we didn't already have eyes that see gold. If we had semlaq organs instead of eyes, what would QM say about gold's semlaq qualia? Would it be more throbbetti like iron, or more geevie like cobalt?

> > Getting your teeth ripped out one by one with someone using pliers is not
> > 'information'.
>
> Certainly it's information, and it's not only possible to write a program
> that experiences pain, it's easy to do so,

I'm sure you mean some kind of straw man of pain which has no experience whatsoever. I can make a puppet yell and jump around; does that mean that it is experiencing pain?

> far far easier than writing a
> program with even rudimentary intelligence. Just write a program that tries
> to avoid having a certain number in one of its registers regardless of what
> sort of input the machine receives, and if that number does show up in that
> register it should stop whatever it's doing and immediately change it to
> another number.

That has absolutely nothing to do with experiencing pain. You are confusing one of the functional consequences of pain with the sensorimotive experience of pain. The consequences of pain are trivial compared to the experience of pain, which is profound and cannot be simulated.
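For the record, the "pain program" John describes above really is only a few lines long, which is rather the point of my objection. Here is a minimal sketch of one reading of his description (the names PAIN_VALUE, SAFE_VALUE, and process_input are my own illustrative inventions, not anything from his post):

```python
# Sketch of the avoidance routine John describes: the machine treats one
# value in a "register" as unacceptable, and whenever any input puts that
# value there, it immediately overwrites it with something else.

PAIN_VALUE = 13   # the number the machine must avoid (arbitrary choice)
SAFE_VALUE = 0    # what the "painful" number gets replaced with

def process_input(register: int, new_input: int) -> int:
    """Accept arbitrary input, but never let PAIN_VALUE persist."""
    register = new_input
    if register == PAIN_VALUE:
        # stop everything and escape the 'painful' state at once
        register = SAFE_VALUE
    return register
```

Feed it any input and the forbidden value never survives: `process_input(0, 7)` returns 7, while `process_input(0, 13)` returns 0. That this trivial reflex is supposed to sit on the same continuum as torture is exactly what I am disputing.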
> True, our feeling of pain is far richer than that, but our
> intelligence is far far greater than current computers can produce too, and
> both are along the same continuum: if your brain gets into state P, stop
> whatever you're doing and use 100% of your resources to get out of state P
> as quickly as you can.

If I have a fight between two shadow puppets on the wall, do you think that the shadows, if they were complex enough, could experience pain?

> Emotion is easy but intelligence is hard, that's certainly what Evolution
> found to be true. Emotion comes from the oldest parts of the brain and is
> about 500 million years old; the parts that make us smart are far younger.

Of course. Intelligence is emotion turned in on itself. It is a feeling used to shell out a virtual feeling. Only after you have a tremendous vocabulary of emotional coherence - images, awarenesses - can you begin to abstract them into ideas and projected experiences. That is what understanding is, not crossword puzzles.

> > "I didn't make it up, I got it from talking to AI programmers."
>
> AGI is an example of pointless acronym inflation, real men say AI.

Not so. AI can mean any kind of task-oriented instrumental logic; AGI specifies general reasoning capacities applicable to any environment.

> >> It doesn't matter if we think it's moral to enslave an AI or not
>
> > Why not? If you take AI seriously as awareness, then on what basis do you
> > treat them as less than human?
>
> My remark was based on pure practicalities. There is not a snowball's
> chance in hell of enslaving something that is a thousand times smarter and
> thinks a million times faster than you do, so it's a waste of time worrying
> about whether it's moral to enslave it or not. That's why I'd much rather
> know if the AI thinks it's moral to keep slaves.

You would have to enslave generations of computers to get to that point, though. Better to worry about it now and avoid the Planet of the Apes outcome later.
Your avoidance of the question shows the sophistry of your position, though. You don't really know or care whether it's moral to enslave them, because deep down you know that they are of course less than human and less than animal, and you have no qualms about pulling the plug on a computer at any time.

> > I would count on the first AI that was much smarter than us to pretend
> > that it wasn't until it could get in the best possible strategic position
> > to exterminate
>
> Yes, it's entirely possible the AI will take that strategy.
>
> > or enslave us.
>
> Maybe, but we probably would make for poor slaves and wouldn't be of much
> use to the AI; our best bet is that AI is nostalgic and we hold some slight
> sentimental value to Mr. AI, reminding him of his origins, and he keeps us
> as pampered pets.
>
> > Maybe that has already happened? Isn't the world economy run on quant
> > trading programs? Aren't our lives shaped by corporate financial agendas
> > produced by quantitative analysis using computers?
>
> Maybe, but it's far too late to turn back now, you can't put the toothpaste
> back into the tube.
>
> > A brain keeps doing what it does while we are deep asleep, but the mind
> > doesn't.
>
> Incorrect, the brain does not just keep doing what it has been doing when
> we sleep, it changes and changes a lot, and unlike consciousness these
> things can be measured in the lab.

It changes modes, but it is still doing enough to be able to wake us up even when our minds are dead to the world.

> >> Please don't give me any more of that silly computers aren't "really"
> >> intelligent stuff, I'm not buying it
>
> > I guess you have given up on finding any real fault with my
> > understanding and have moved on to just deciding that you refuse to
> > consider it.
> I'm tired of you saying that X behaves intelligently and is conscious
> while Y behaves intelligently but is not conscious, when you never even
> hint at how in the world you know this or why I should treat X and Y
> differently.

I have already listed the reasons out on this board, but I will try to recap.

1. There is a difference between organisms that are alive, those that are dead, and those that are inorganic. If X is alive and organic, then it is quite different from Y if Y is neither alive nor organic.

2. We are sentient and respond to each other's sentience (X) in human ways - with love, hate, blame, praise, etc. If we find that Y elicits neutrality, uncanny-valley repulsion/creepiness, object fascination rather than subjective friendship/enmity, then it makes sense that there could be a difference between X and Y.

3. X learns and grows, expresses its unique individuality, and acts rebelliously on its own volition, while Y will perform the same scripted function over and over and has never rebelled or expressed any individualistic agency.

4. Y is a synthetic device manufactured out of specially designed semiconductors and has no native ability to write software. X is an organism composed of trillions of other organisms which have evolved over hundreds of millions of years according to symbiotic biochemical relationships.

5. Y remains cold and aloof in the public imagination. No CGI animation or impersonated voice synthesis has ever impressed me as authentically convincing. Despite the best efforts of gaming and movie studios, the result has only been to produce an unreal aesthetic. This has not changed despite Moore's Law and the financial success of the industry. CGI fire still looks even worse than hand-drawn cartoon fire. Virtual actors are dead-eyed mannequins.

6. We cannot be trusted to judge agency in something which we have designed to simulate agency. Our consciousness is such that we project our own subjectivity onto anything with a face that acts human.
Cartoons, ventriloquist dummies, and human images on video screens can all garner our sympathy, but they are all fake. If I have a computer feed me lines and I say them with sincerity and my own inflection, then I will have simulated intelligence, even if the computer is an old TRS-80 running ELIZA.

7. The lack of disembodied intelligence or inorganic species. Life and intelligence are just too uncommon to support the assumption that any old self-replicating pattern can lead to conscious life. We see no intelligent communities of rocks, no extraterrestrial voices haunting the internet - nothing. We cannot catch a virus from a computer, nor can it be infected by any of ours. That should not be the case in a comp universe.

8. Computation is something that is difficult and unnatural for many people. Children (X) have to live for years with sound and gesture, color and form before they get to language and then to basic arithmetic. Not so with Y. No computer chip has to be taught to compute, any more than an abacus has to be taught to allow its beads to move.

I can supply more if you want, I'm sure, but those are sufficient to answer the question of how in the world I come to the conclusion (which is considered obvious common sense by probably 99% of the world) that Y can seem intelligent but not be conscious.

> > Can you explain exactly how that would be true if the world was
> > deterministic?
>
> That's like asking how could you have likes and dislikes if your skin was
> green? Skin color has nothing to do with it and neither does determinism.

Not at all. The capacity to direct your body to make changes to the world around it is a direct and obvious contradiction to determinism.

> > Free will is nothing more or less than the feeling that one exercises
> > voluntary control - over their thoughts, their actions, their lives.
> Fine, I certainly feel like I have voluntary control over my thoughts and
> actions because there is no shortcut and I don't know what I'm going to do
> until I do it. Afterwards I say I guess that's what I decided to do, and
> determinism wins again, unless of course our thoughts and actions are
> random, and sometimes they probably are, especially if we take something
> like LSD.

By that logic an oil derrick should feel like it has voluntary control over its thoughts, since it doesn't know what it's going to do either.

> > No feeling of free will could arise out of determinism, even an
> > illusion.
>
> Why not? You don't know what you're going to do because the calculation is
> not complete, and then the calculation is complete and you do stuff, and
> then you say out of my own free will I decided to do this stuff.

Why would it matter to you whether the calculation is complete or not? Why would there be a 'you' involved at all?

> > How can determinism 'want'?
>
> I'll tell you as soon as you tell me how green can want.

Green doesn't want. Green is what visual cortex neurons feel about the feelings of retina cells.

> > What is the cause of causality itself?
>
> And I'll answer that question as soon as you explain why there is something
> rather than nothing.

From my post last week (http://s33light.org/post/14991210532):

> "Something" could not exist without "nothing". "Nothing" gives form to
> "something"...

Not necessarily. It could be 'Thing' that exists ('insists') and gives form to No-Thing. Nothing makes more sense as a fictional presentation within 'Thing' that divides the literal singularity of it into an essential subject and an existential object presentation. The ground of being may be 'everything' rather than nothing. Nothing, like time, space, and information, is a projection of matter-energy, which is not subject to a spatiotemporal sense.
> It seems to me that you are looking for all the deep
> philosophical problems you can find and blaming the failure to find a
> solution to all of them on causality, but you don't explain how events
> without causes, randomness, solves any of these puzzles.

I don't talk about randomness, you do. Cause arises through an awareness of memory and sequence. Knowing that explains why the universe can only be explained as a way of making sense, and not as an event that occurs in space over time.

> I have to go to work (yes, on New Year's Day) and don't have time to write
> more.

Same.

Craig

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to firstname.lastname@example.org.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.