On 19 May 2015, at 23:09, meekerdb wrote:

On 5/19/2015 11:47 AM, Terren Suydam wrote:
While I applaud IIT because it seems to be the first theory of consciousness that takes information architecture seriously (thus situating theoretical considerations in a holistic rather than reductionist context) and makes predictions on that basis, I agree with Aaronson's criticisms of it - namely, that IIT assigns certain classes of computational systems, which we would intuitively fail to see as conscious, measures of consciousness potentially higher than those of human brains.

One key feature of consciousness as we know it is ongoing subjective experience. So a question I keep coming back to in my own thinking is, what kind of information architecture lends itself to a flow of data, such that if we assume that "consciousness is how data feels as it's processed", we might imagine it could correspond to ongoing subjective experience? It seems to me that such an architecture would have, at a bare minimum, its current state recursively fed back into itself to be processed in the next iteration. This happens in a trivial way in any processor chip (or lookup table AI for that matter). As such, there may be a very trivial sort of consciousness associated with a processor or lookup table, but this does not get us anywhere near understanding the richness of human consciousness.
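[A minimal sketch of the trivial state feedback described above, assuming nothing more than a generic transition function; all names here are illustrative, not from the original exchange:]

```python
def feedback_loop(transition, state, inputs):
    """Run a transition function whose output state is fed back
    in as the current state on the next iteration."""
    history = [state]
    for x in inputs:
        state = transition(state, x)  # the current state re-enters the loop
        history.append(state)
    return history

# A trivially "recursive" processor: the state is just a running sum.
trace = feedback_loop(lambda s, x: s + x, 0, [1, 2, 3])
print(trace)  # [0, 1, 3, 6]
```

Any CPU or lookup-table AI implements something of this shape, which is why the feedback alone buys only a trivial sort of recursion.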

I think you need to consider what would be the benefit of this recursion. How could it be naturally selected? Jeff Hawkins' idea is that the brain continually tries to anticipate, at the perceptual level and even in the lower layers of the cerebral cortex.

I think that is the case, and it is the case for []p & <>t, which is a form of bet/anticipation. That idea is the basis of Helmholtz's theory of perception, and many experiments in psychology confirm it.



Then signals that don't match the prediction get broadcast more widely at the next higher level, where they may have been anticipated by other neurons. At the highest level (he says there are six in the cortex, as I recall) signals spread to language and visual modules and one "becomes aware of them" or "they spring to mind". This would have the advantage of directing computational resources to that which is novel, while leaving familiar things to learned responses. To this I would add that the novel/conscious experience is given some value, e.g. emotional weight, which makes it more or less strongly remembered. And of course it isn't remembered like a recording; it's synopsized in terms of its connection to other remembered events. This memory is needed for learning from experience.

OK.
Of course, the loop Terren alluded to is built into the [], as is the usual self-reference brought by the use of the second recursion theorem, or the Dx = xx trick. Such loops are the technical basis of all the modal logics of self-reference. The second recursion theorem is hidden in the proof of Solovay's theorem.
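[The Dx = xx trick can be sketched in a few lines of Python; this is only an informal analogue of the arithmetic diagonalization, with illustrative names:]

```python
# D applies a "program" x to its own description: D(x) = x(x).
def D(x):
    return x(x)

# Diagonalizing a function of itself yields self-reference:
# self_model receives, as the argument `me`, the very function
# used to build the result.
def self_model(me):
    return lambda: me  # the resulting program can now refer to itself

f = D(self_model)       # i.e. f = self_model(self_model)
print(f() is self_model)  # True: f "knows" its own code
```

This is the same diagonal move that, in arithmetic, gives Kleene's second recursion theorem and the fixed points used in Solovay's proof.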

Bruno



Brent


An architecture that supports that richness - the subjective experience, IOW, of an embodied sensing agent - would involve that recursion, but at a holistic level. The entire system, potentially including the system's informational representations of sensory data (whatever form that took), would be involved in that feedback loop. So the phi of IIT has a role here, since the processor/lookup-table architecture has a low phi.

What is missing from phi is a measure of recursion - how the modules of a system feed back in such a way as to create a systemic, recursive processing loop. My hunch is that this would address Aaronson's objections, as brains would score high on this measure, but the systems that Aaronson complains about, such as "systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data", would score low due to lack of recursion.

Terren

On Tue, May 19, 2015 at 12:23 PM, meekerdb <[email protected]> wrote:
On 5/19/2015 6:47 AM, Jason Resch wrote:


On Mon, May 18, 2015 at 11:54 PM, meekerdb <[email protected]> wrote:
On 5/18/2015 9:45 PM, Jason Resch wrote:
Not necessarily, just as an actor may not be conscious in the same way as me. But I suspect the Blockhead would be conscious; the intuition that a lookup table can't be conscious is like the intuition that an electric circuit can't be conscious.


I don't see an equivalency between those intuitions. A lookup table has a bounded and very low degree of computational complexity: all queries are answered in constant time.

While the table itself may have an arbitrarily high information content, what in the software of the lookup table program is there to appreciate/understand/know that information?

What is there in a neural network?


A computational state containing significant information content.

A lookup table has significant information content.

Integrated Information Theory makes some strides in explaining this, I think:

http://en.wikipedia.org/wiki/Integrated_information_theory

http://www.scottaaronson.com/blog/?p=1799

Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.




http://iridia.ulb.ac.be/~marchal/



