On Sep 13, 9:51 pm, meekerdb <meeke...@verizon.net> wrote:
> On 9/13/2011 4:23 PM, Craig Weinberg wrote:
> > On Sep 13, 3:23 pm, meekerdb<meeke...@verizon.net> wrote:
> >> On 9/13/2011 12:00 PM, Craig Weinberg wrote:
> >>> It's easy to assume that it helps, just as it's easy for me to assume
> >>> that we have free will. If we don't need our conscious mind to make
> >>> decisions, then we certainly don't need the fantasyland associated
> >>> with our conscious minds to help with that process. Think of building
> >>> a robot that walks around and looks for food and avoids danger. Why
> >>> would it help to construct some kind of Cartesian theater inside of
> >>> it? Functionally, there is no reasonable explanation for perception or
> >>> experience, especially if you believe in determinism.
> >> It would help, even be essential, to the robot learning for it to
> >> remember things. But not just everything. It needs to remember
> >> important things, like what it was doing just before it fell down the
> >> stairs. So you design it to continually construct a narrative history,
> >> and if something important happens you tuck that piece of narrative
> >> history into a database for future reference by associative memory
> >> ('near stairs'? don't back up). This memory consists of connected
> >> words learned by the speech/hearing module, and images. For efficiency
> >> you use these same modules for associative construction of the
> >> narrative memory and for recall. Hence part of the same processing is
> >> used for recall and cogitation as well as perception and learning.
> >> That's why thinking has similarity to perception, i.e. sitting in a
> >> Cartesian theater.
> > Oh, I agree that there is a functional advantage to perception, it's
> > just not sufficient to explain the existence of it. Our immune system
> > needs to learn to remember things too. It may very well have a
> > narrative history of pathogens and strategic options, but there is no
> > compelling reason to assume that there is a theatrical presentation
> > going on which is comparable to our Cartesian theater.
> The operative word is "comparable". Of course it's not going to be
> "comparable" except in very broad functional terms, as you have
> acknowledged above.
> > Not only would
> > such a thing be unnecessary and probably detrimental as far as
> > computational overhead, but there really is no plausible raw material
> > for this perception to be manifested through.
> It's not necessary for a robot, but evolution has to work with what's
> available. And it may be more efficient than having separate processing
> for cogitation and
I think it could only be less efficient. It's like making an operating
system play some random movie for itself every time it accesses some
data.
> > It would be much easier
> > to just make the robot construct omniscient telepathy
> That's easy? Then please explain how to do it.
You just exploit a loophole in the law that prevents it. Have a
program test random combinations and remove any that prove not to
cause omniscience. There is no loophole in 700nm wavelengths that
could turn into 'red', and no combination of dice rolls that will
turn into
> > than to somehow
> > conjure an unprecedented thing like feeling or color out of
> > mathematical function.
> You don't have to 'conjure' them. They are inherent in perception.
That's what I've been saying. They are inherent in perception - not in
physical, arithmetic, or evolutionary function. They present their own
primitive, idiopathic phenomenology and not just a representation of
You received this message because you are subscribed to the Google Groups
"Everything List" group.