On Thu, Jun 3, 2021 at 7:28 AM John Rose <[email protected]> wrote:
>
> I think these two recent papers support the idea that consciousness is
> Universal Communication Protocol. Though it could be thought of more of as
> pre-protocol hmmm… There are arguments for and against conscious AGI but it
> still must be explored. The first paper describes conscious AI from a
> communicational aspect among agents. Second paper is interesting in many
> ways. I wonder, what is the minimalist panpsychist system complexity where a
> wave function collapse even if slow could occur. Wave function collapse on a
> pet rock may be too slow for a perceiver or perhaps not have enough system
> complexity.
>
> https://arxiv.org/abs/2105.07879
> https://arxiv.org/abs/2105.02314
Both papers struggle to define consciousness. I don't think either one makes the case that consciousness is important to AGI.

The first paper is well researched and cites all the relevant literature (4 pages of references). Its theory is that consciousness requires at least two agents that learn to communicate their internal mental states to each other by developing new language symbols. This is essentially a definition, since the authors admit there is no objective test for consciousness. I am not sure that humans even meet this requirement, because we can already communicate our thoughts without inventing new words. Other animals and babies are definitely not conscious by this definition. But my computer is, because when I am debugging a program I can add print statements to inspect its internal state. Likewise, any well written program should have a model of my mental state detailed enough to predict what I will want with a minimum of communication.

The second paper, by Chalmers and McQueen, says that if I can imagine what it is like to be a bat, then bats must be conscious. At least, that is as close to an objective definition as I could find in their paper. They then explore the idea that consciousness causes the collapse of the quantum wave function (as opposed to Penrose and Hameroff, who claim that wave function collapse causes consciousness). They argue that measuring devices and brains might remain in a superposition of states (like Schrodinger's cat) until someone makes a conscious observation of the particles. They examine this in the context of Tononi's integrated information theory (IIT), acknowledging its criticisms, and propose quantum computer experiments to test the relation between Phi (Tononi's measure of consciousness) and wave function collapse. They show that a 2-bit system that swaps its bits at each step has one unit of consciousness, and they extend the idea to a 2-qubit system (a toy sketch of the 2-bit system is below).

This has all sorts of problems, many of which they acknowledge. For one, IIT has many inconsistencies. It assigns a high Phi value to obviously non-conscious systems like iterated hash functions, because each bit depends on all the others. Tononi never defines consciousness either, so it is just another arbitrary definition that he makes no real attempt to justify, and it doesn't lead to any testable predictions.

The authors do have some understanding of quantum mechanics and acknowledge Everett, who says that the universe is described by a deterministic wave equation whose solution contains observers that observe particles. The observations appear random because the observers obviously cannot know the complete state of the system that contains them. And even if they did know it, the calculations would be intractable: we cannot even compute something as simple as the energy levels of a helium atom exactly on a non-quantum computer.

The critical difference between an observer and a physical system is that the system obeys time reversible physics (actually charge-parity-time reversible, in the case of the weak force), while an observer has at least one bit of memory to store the result of a measurement. Writing to memory is not time reversible, even though all the atoms the memory is made of obey time reversible laws of physics. You can't run it backwards and recover the overwritten bit by reversing the directions of all the atoms and electrons that make up the device (also sketched below). That is a consequence of statistical mechanics, not consciousness. Maybe it does feel like something to be a bat.
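For concreteness, here is a toy sketch (plain Python, my own, not taken from the paper) of the 2-bit swap system. It does not compute Phi -- the full IIT calculation is far more involved -- it only shows why the system counts as "integrated": each bit's next state is determined entirely by the other bit, so neither half can predict its own future in isolation.

def step(state):
    # One time step: the two bits swap.
    a, b = state
    return (b, a)

def run(state, n):
    history = [state]
    for _ in range(n):
        state = step(state)
        history.append(state)
    return history

print(run((0, 1), 4))  # [(0, 1), (1, 0), (0, 1), (1, 0), (0, 1)]
# Cut the system into two isolated 1-bit parts and each part's own past
# tells you nothing about its next state. That lost predictive power is,
# informally, what Phi is supposed to measure.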
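And an equally minimal illustration of the memory point (again my own toy example, not from either paper): overwriting a bit is a many-to-one map, so no inverse exists, while a reversible operation like CNOT can be undone exactly.

def overwrite(memory_bit, new_value):
    # Irreversible: the old value of memory_bit is destroyed.
    return new_value

def cnot(control, target):
    # Reversible: applying it twice restores the original state.
    return control, target ^ control

# Two different starting states end in the same final state, so no
# function can recover the input from the output.
assert overwrite(0, 1) == overwrite(1, 1) == 1

# The reversible gate can be run backwards.
state = (1, 0)
assert cnot(*cnot(*state)) == state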
We evolved to fear death and to die because that optimizes reproductive fitness. We get a little positive reinforcement for every conscious perception, thought, or action, and every time we recall a memory or learn something new. It feels like something we don't want to lose by dying. That's what we call consciousness.

-- Matt Mahoney, [email protected]
