On 6/26/2014 9:28 AM, 'Chris de Morsella' via Everything List wrote:

*From:*[email protected] [mailto:[email protected]] *On Behalf Of *LizR

>>Yes, according to this view we are just "along for the ride".

One way of looking at it. However, it seems to me more apt to think of ourselves as the locus of the consensus of our brain/minds; to view ourselves as the dynamic manifestation of a consensus quorum, which is the wellhead of our coming into being.

We are in some sense also operators in this neural consensus network, influencing its vast number of constituent neurons with what we feel, believe, conclude, and so on. However, all this “feeling”, “believing”, and “concluding” is actually happening in the brain/mind, mediated through what we sense as being ourselves, back through the quorum network (which I suspect operates beneath our conscious selves), looping back to us as “doubt”, “certainty”, new thoughts, a shift of attention, or whatever beautiful or ugly turn our mind’s eye takes.

In my view our common view of ourselves, of our “I”, is incomplete. We are more than we are conscious of being, and the part of ourselves of which we are conscious is -- IMO -- the narrating locus of the executive decisional consensus network that I am arguing is our actual “self”.... even though we are unaware of by far most of its constituent activity.


I quite agree. Conscious thought is only a small part of our "thinking" in the more general sense of information processing, problem solving,... It seems to be the part associated with language and visualization. If I were designing a Mars rover and providing it with memories to use in learning, I would want to filter the rover's sensor data and store only succinct chunks that can easily be found by association. And I'd only want to store ones that indicated something different, something the rover didn't already "know". So I'd have it continually look at new data and compare it with what it would have predicted based on old data. Only new data that was not easily predicted would get filed in memory. I think this would instantiate consciousness in the rover. Of course this could be at different levels, depending on how much the rover itself figured in its predictive models. It might be aware only of its position, temperature, battery charge,... Or it might also be aware of its relation to JPL, its predictive algorithms, its learning algorithms,...
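To make the rover idea concrete, here is a minimal sketch of that memory filter: new readings are stored only when they differ noticeably from what the existing memories predict. The class name, the nearest-memory prediction rule, and the surprise threshold are all my own illustrative assumptions, not anything from the original post.

```python
# Sketch of prediction-error-gated memory: file a sensor reading only
# when it was not easily predicted from what is already "known".
# All names and the threshold value here are hypothetical.

class SurpriseMemory:
    def __init__(self, threshold=2.0):
        self.threshold = threshold  # how surprising a reading must be to store
        self.memories = []          # succinct chunks: (location, reading)

    def predict(self, location):
        """Predict the reading at `location` from the nearest stored memory."""
        if not self.memories:
            return None
        nearest = min(self.memories, key=lambda m: abs(m[0] - location))
        return nearest[1]

    def observe(self, location, reading):
        """Compare new data with the prediction; store only if surprising."""
        predicted = self.predict(location)
        surprising = predicted is None or abs(reading - predicted) > self.threshold
        if surprising:
            self.memories.append((location, reading))
        return surprising  # True = novel, filed in memory; False = discarded

rover = SurpriseMemory(threshold=2.0)
print(rover.observe(0.0, 20.0))   # nothing predicted yet, stored -> True
print(rover.observe(1.0, 20.5))   # close to prediction, discarded -> False
print(rover.observe(2.0, 35.0))   # large prediction error, stored -> True
```

The design choice here mirrors the post: memory grows only at points of prediction failure, so most routine sensing leaves no trace, just as most neural activity never reaches the conscious narrative.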

Brent

