On Jul 30, 1:08 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Fri, Jul 29, 2011 at 12:50 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> It may or may not, but a system which could respond to questions in the
> same way you do must understand how you think.

It's not responding to questions, it's just blowing digits out of an exhaust pipe that you interpret as answers to your questions. As far as you are concerned it's useful, maybe even more useful than having a real person understand you, since it has no choice but to process your input to the extent that the script circumscribes, whereas any entity which is capable of understanding is also capable of ignoring your questions or lying.

> The system which responds in the same way you do can offer its opinion on
> anything.

It's a fantasy. No system can respond exactly the way you do unless it is you. Any system can appear to respond the way you do under limited conditions, as with the YouTube simulation, without having an opinion, being aware of you, or being a simulation of you.

> Meaning is anything which makes a difference.

That's an assumption. I would say that meaning is the experience of significance. It's a sensorimotive quality which informs your conscious attention. Meaning is something you feel.

> If the logic and information dictate changes in some system, then that
> information is meaningful to the system.

I understand what you're saying, but it assumes consciousness in an unconscious system. Elmer Fudd acts as a system, but he doesn't feel anything: when he gets slapped upside the head with a frying pan, there is no meaning generated within Elmer Fudd, even though the cartoon system has changed in a particular way. The meaning comes through the feelings we experience when we consciously pay attention to the system. Elmer Fudd is fiction. It can't feel. Google is fiction. Super mega Google is fiction.

> That's not true. There is not enough memory in the world to store every
> possible game of chess. The first 7 or so moves came directly from a
> game database, but the rest involved the computer deciding what to do
> based on its understanding of the game.

It may not have to store all possible games of chess because it can compress the redundancies arithmetically, but it still uses a strategy which requires no understanding. It just methodically compares the positions of each piece to the positions which will result from each scenario that can be simulated. It has no idea that it's playing a game; it experiences no difference between winning and not winning. A computer playing a game by itself is not a game. What happens when Deep Blue plays itself, I wonder?

> What in the human brain is not mechanical?

The experience of the human brain, or of any other physical phenomenon, is not mechanical. The notion of a machine is a logical abstraction which can be felt or felt through, but which has no feeling itself.

> > Oh, they absolutely do use some of the same methods. Our brain is
> > mechanical as well as experiential. We can emulate the mechanical side
> > with anything. We can't emulate experience at all.
>
> Movement, speech, etc. are all mechanical. So could the mechanical actions
> of a human be perfectly emulated? Earlier you suggested it would eventually
> break down.

The actions themselves can be emulated, but the way that the actions are strung together can reveal the holes in the fabric of the mechanism. When you make a phone call, how long does it take you to tell whether it's a voicemail system or a live person? Sometimes you can hear recording noise even before the voice begins. Occasionally you could be fooled for a few seconds. It's not that the system breaks down, it's that a genuine person will sooner or later pick up on subtle cues which give the system away.

> > If that were the case, then software would be evolving by itself. You
> > would have to worry about Google self-driving cars teaming up to kill
> > us.
> > We don't though.
>
> The cars aren't programmed with that capability.

Exactly. A machine can't evolve for its own purposes. It has no point of view; it can't decide to do something new (even though it can end up doing something that is new to us, there is no decision to accomplish novelty). It's not really a thing at all, except in our minds.

> > Machines don't team up by themselves.
>
> They could.

You just said they can't unless they are programmed with that capability. That's not 'by themselves'.

> Neurons are very flexible. They self organize, recover from partial damage,
> reproduce, move around, etc. But an emulation of neurons would provide the
> same capabilities.

You can emulate the alphabet of neuron behaviors, but that doesn't mean the emulation can use those behaviors in the way that live neurons can.

> > No, it doesn't know what a 1 or 0 is. A machine executed in
> > microelectronics likely has the simplest possible sensorimotive
> > experience, barring some physical environmental effect like heat or
> > corrosion. It has a single motive: 'Go' and 'No Go'. It has a single
> > sense: this, this, this, and this are going; that, that, and that are
> > not going. It's the same sensorimotive experience as a simple molecule.
> > Valence open. Valence filled.
>
> Why is it biochemistry can get past its lowest levels of valence electrons
> filled or not, but computer systems cannot get past the properties of
> silicon? It is unfair to ignore all the higher level complexity that is
> possible in a computer.

For the same reason that hydrogen can't be gold and inorganic matter can't be alive. The computer has no high level complexity; we just use it to store our high level complexity. Complexity is not awareness.

> > It's a category error. No amount of sophisticated programming is going
> > to simulate fire either.
>
> Tell that to the God who is running this universe in his computer.

This universe could no more be simulated on a computer than it could on a network of plumbing.
Fire isn't made of plumbing.

> > > Did some alien intelligence or God have to look down on the first
> > > life forms that evolved on Earth for them to be alive?
> >
> > Nah. It would have been too boring. A billion years of blue-green
> > algae?
>
> So then why does simulated life need a human to interpret it?

Because it is simulated only through our interpretation. By itself it is a completely different phenomenon from the original. A photograph of a fish is a simulation using a chemical emulsion on paper. It looks like a fish to us, but not to a blind cat.

> > I'm sympathetic to this as an understanding of the role that qualia
> > plays in behavior and evolution, but it doesn't address the main
> > problem of qualia, which is why it would need to exist at all.
>
> If you couldn't see, you would be blind.

That's a tautology. It's like saying 'if you couldn't see the inside of your stomach, you wouldn't know how to digest food'. Qualia is not necessary to the function of the brain. Indeed, you lose consciousness every night and your brain has no problem surviving without your experience of it. Your body could find food and reproduce without there being any sense of color or shape, odor, flavor, etc. It could detect and utilize its surroundings mechanically, like Deep Blue, without ever having to feel that it is a thing.

> > Why have experience when you can just have organisms responding to
> > their environment as a machine does?
>
> Experiences are necessary for you to behave as you do.

Tautological. Why is it necessary that I behave as I do?

> In the right context, an isomorphic organization is possible, assuming no
> infinities are involved.
>
> > That's why no living thing can survive on silicon.
>
> Silicon doesn't replace carbon in biochemistry because it is too large, and
> does not fit snugly between the atoms it holds together as the smaller
> carbon atoms do.

Exactly. It's different. Can't do the same things with it.
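As an aside, the "methodical comparison" strategy attributed to the chess program above is essentially minimax search: simulate each reachable position, score it with a fixed evaluation, and back the scores up the tree. A toy sketch follows; the nested lists and numeric scores are hypothetical stand-ins for positions and a static evaluation function, not Deep Blue's actual code.

```python
def minimax(node, maximizing):
    """Back leaf evaluations up the game tree through alternating
    max/min choices. The procedure never represents 'winning' as
    anything but a larger number."""
    if isinstance(node, (int, float)):
        return node  # leaf: a pre-scored position (static evaluation)
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply tree: the maximizer picks a branch, then the minimizer picks
# the worst option for the maximizer within that branch.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # -> 3: the branch whose worst case is best
```

The point either side could take from the sketch: the procedure is exhaustive comparison all the way down, with no term anywhere for the game being played.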
Craig
http://s33light.org

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to firstname.lastname@example.org.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.