Tononi never gives a precise formula for what he calls phi, his proposed measure of consciousness, in spite of all the math in his papers. Under reasonable interpretations of his hand-wavy arguments, it gives absurd results: for example, error-correcting codes and parity functions come out highly conscious. Scott Aaronson has more to say about this: https://scottaaronson.blog/?p=1799
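Aaronson's objection can be illustrated with a toy sketch (my own illustration, not Tononi's actual phi computation): a parity function "integrates" information in the sense that every output bit depends on every input bit, so any partition of the system loses information, and that is roughly the property that makes phi large for such obviously unconscious systems.

```python
# Toy illustration of Aaronson's objection: a parity/ECC-style system in
# which the output depends on every input, so no partition of the inputs
# can be ignored -- the kind of global "integration" that inflates phi.
def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

# Flipping any single input flips the output: the information is
# "integrated" across all bits, yet nobody thinks XOR is conscious.
x = [1, 0, 1, 1, 0, 1, 0, 0]
base = parity(x)
for i in range(len(x)):
    y = x[:]
    y[i] ^= 1
    assert parity(y) != base
```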
But even if it did, so what? An LLM doing nothing more than text prediction appears conscious simply by passing the Turing test. Is it? Does it matter?

On Mon, Apr 1, 2024, 7:35 AM John Rose <[email protected]> wrote:

> On Sunday, March 31, 2024, at 7:55 PM, Matt Mahoney wrote:
>
> The problem with this explanation is that it says that all systems with
> memory are conscious. A human with 10^9 bits of long-term memory is a
> billion times more conscious than a light switch. Is this definition
> really useful?
>
> A scientific panpsychist might say that a broken 1-state light switch
> has consciousness. I agree it would be useful to have a mathematical
> formula that shows how much more conscious a human mind is than a
> working or broken light switch. I still haven't read Tononi's
> computations since I don't want them to influence my model one way or
> another, but IIT may have that formula? In the model you expressed, you
> assume a 1-bit-to-1-bit scaling, which may be a gross estimate, but
> there are other factors.
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T991e2940641e8052-M9c1f29e200e462ef29fbfcdf>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T991e2940641e8052-Med834aa6dc69b257fe377cec
Delivery options: https://agi.topicbox.com/groups/agi/subscription
