Are Gell-Mann's intermediate AIC and Deric's "criticality proper" similar to C. Alexander's "The void"? Could this be the dynamically undecidable zone, which quantum entanglement might be associated with? Is it computable? According to classical logic and classical science, it should not be computable.
Would the undecidable zone serve as a potential example of Schrödinger's cat, or vice versa, and if so, does the recent report on a simultaneous dual state offer hope for its computability? To my mind it would have, had it been on one machine and not two. Clearly, collective science still cannot prove how two parallel universes are entangled, or by which quantum fabric. For now, then, science has to remain a belief system, supported by doctrine and the theoretical acceptance of its own evidence. Sounds similar to a religion, does it not? Perhaps science merely avoids the religion trap by leaving "God" out of its hypotheses, but that is a matter Ben has seemingly dealt with most effectively, in my opinion.

Still, a group of us can sit here, quite casually, and move in and out of the undecidability zone without too much difficulty. That, to me, is the real hope of computability: of shifting the boundary of "undecidability" to a further point of "undecidability". It lives here with us, in AGI and in similar discussion zones. What if Schrödinger's cat got another life, or many antithesis-induced lives, and there were as many cats as there were dynamics of any order, one instance of which was the zone itself? Would this resemble Hawking's perpendicular dimension of "imaginary time"? If our human, critical consciousness could exist in a single, entangled state where imagination becomes reality and reality becomes imagination, where Gestalt is, and is not, within the same time-space, unequally so, there might yet be hope for our machine about 7 degrees from the boundary of disorder, as an independent reality. This begs the question: "Is optimal consciousness (in the sense of largest-possible effective complexity) a quantum phenomenon, or a product of mere memory?"

Rob

From: a...@listbox.com
To: a...@listbox.com
Subject: RE: [agi] Couple thoughts
Date: Thu, 19 Feb 2015 02:08:53 +0200

@ Russ

Exciting! Thanks for this contribution.
The diagram maps closely to what Gell-Mann (1994) describes, and diagrammatically represents, as Large Effective Complexity and Intermediate AIC (Algorithmic Information Content). The state of largest effective complexity matches this result very closely. He summarizes his sketch, a "crude illustration", as meaning: "...effective complexity of a system (relative to a properly functioning complex adaptive system as observer) varies with AIC, attaining high values only in the intermediate region between excessive order and excessive disorder." Further, he points out that: "Many important quantities that occur in discussions of simplicity, complexity, and complex adaptive systems [such as the AGI forum] share the property that they can be large only in that intermediate region." (p. 60) The above relates to information compressibility.

Rob

Date: Wed, 18 Feb 2015 17:51:18 -0600
Subject: Re: [agi] Couple thoughts
From: a...@listbox.com
To: a...@listbox.com

...and now for something completely different...

http://mindblog.dericbownds.net/2015/02/psilocybin-as-key-to-consciousness.html

Regardless of the space-cowboy nature of the title of this link's blog post, which appeared today, there is relevant research behind it that touches on some of the well-made points presented in this thread.

On Wednesday, February 18, 2015, Mike Archbold via AGI <a...@listbox.com> wrote:

On 2/17/15, Matt Mahoney via AGI <a...@listbox.com> wrote:
> On Wed, Feb 18, 2015 at 12:09 AM, Mike Archbold <jazzbo...@gmail.com> wrote:
>> I think under this approach, it "bans" work on trying to actually
>> figure out the answers to these tough questions, and instead places
>> emphasis on replicating the means (mechanics of the brain) that
>> generates whatever it is we call consciousness. I mean, under this
>> school of thought, if we don't know how consciousness solves hard
>> problems, so what?
>> As long as it works by copying the physics/mechanics/etc. of the brain
>> (that being the obviously gigantic challenge, of course), that is all
>> that is required.
>
> Consciousness is the feelings (reinforcement signals, mostly positive)
> that you associate with sensory perception and thoughts (recalled
> memories) as they are written into episodic memory (memory associated
> with a time or place). It only seems mysterious because reinforcement
> signals alter your beliefs. Your brain works that way because it
> increases your reproductive fitness. Even though you can't objectively
> believe what I just stated, you want your consciousness to continue by
> not dying.
>
> You don't need to model consciousness to solve most AI problems like
> vision, language, or robotics. You do need to model belief in
> consciousness, as well as other types of reinforcement learning (such
> as beliefs in free will and identity), in order to model or predict
> human behavior. It is not hard to do that once you understand where
> these illusions come from.
>
> It is a distraction to think that you have to replicate consciousness
> to solve AI. It is like thinking that birds fly by some magic that you
> have to replicate in order to build airplanes.
>
> -- Matt Mahoney, mattmahone...@gmail.com

I see your point. It is contentious. There is still a certain appeal to any approach that tries to skirt completely around the, well, contentious problems. Neural nets, as you know, have always been that way: just a black box for the most part. Whole-brain emulation, which to me means a straight copy of the highest fidelity, should be taken into account, even if you don't agree with someone's approach under that flag. Whether or not consciousness is part of it matters less than whether the copy is faithful to the original brain. If so, it will work.
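Gell-Mann's point earlier in the thread — that effective complexity can be large only at intermediate AIC, between excessive order and excessive disorder — connects directly to compressibility. A minimal sketch of that idea, using zlib-compressed length as a crude stand-in for AIC (my choice of proxy, not Gell-Mann's method; the three example strings are invented for illustration):

```python
import random
import string
import zlib

random.seed(0)  # reproducible "disorder"

def compressed_size(s: str) -> int:
    """Length of the zlib-compressed string: a rough proxy for AIC."""
    return len(zlib.compress(s.encode()))

n = 10_000

# Excessive order: one repeated symbol, compresses to almost nothing.
ordered = "A" * n

# Intermediate: strong regularities (a repeated template) plus variation
# (the changing counter) -- structure worth describing, i.e. where
# effective complexity lives.
intermediate = "".join(f"agent {i} acts; " for i in range(650))[:n]

# Excessive disorder: near-random letters, close to incompressible.
disordered = "".join(random.choice(string.ascii_uppercase) for _ in range(n))

for name, s in [("ordered", ordered),
                ("intermediate", intermediate),
                ("disordered", disordered)]:
    print(f"{name:12s} AIC proxy = {compressed_size(s)} bytes")
```

The compressed sizes land in the expected order (ordered < intermediate < disordered). Note the caveat from Gell-Mann's own framing: compressed length tracks total AIC, not effective complexity itself — the sketch only locates the three regimes on the AIC axis, with the regularity-rich middle string being the one whose *regularities* take a nontrivial description.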
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com
> -------------------------------------------