Colin, in your quest to create an artificial consciousness, can you explain:
1. How do you test a human, animal, robot, or program to tell whether it is conscious?
2. What aspect of human behavior is possible in a machine only if it is conscious?
3. What aspect of consciousness, if any, depends on behavior implemented in neurons rather than transistors?
4. How do you test whether a particular computing technology is capable of consciousness?
5. What is your definition of consciousness? Are you referring to the state of wakefulness (the opposite of unconsciousness), what thinking feels like, or something else?

On Sun, Jun 30, 2019, 5:26 PM Colin Hales <[email protected]> wrote:

> On Sun, Jun 30, 2019 at 7:05 PM Nanograte Knowledge Technologies <
> [email protected]> wrote:
>
>> Colin, it's been 7 years since your YouTube feature on Ephapsis. While I
>> find your theory plausible (as an independent researcher myself), your
>> views on rights being ascribed to levels of consciousness are somewhat
>> disturbing. I think this particular view has serious social implications,
>> especially as to the definitions and measurements of consciousness and
>> the elemental rights of the conscious versus objects with an apparent
>> absence of consciousness.
>>
>> I'll grant you that even a basic look at clinical consciousness would
>> make anyone aware of the degree of complexity involved in measuring
>> consciousness:
>> https://www.ncbi.nlm.nih.gov/books/NBK380/
>>
>> Of course, my approach to AGI does not suffer a fate similar to yours,
>> mainly due to my conviction that a purely machine-based version of
>> measurable consciousness is achievable, and in the context of an AGI
>> entity.
>>
>> To me this has relevance for your approach and proposal for collaboration
>> around AGI, in the sense of you probably being an advocate for hybridizing
>> human and animal biology with computational technology in order to attain
>> a version of AGI. Would you say this assessment is correct?
>> Even though we'll probably see such cyborgs in our lifetime, this is an
>> approach I do not support. I'm more a humanoid kind of person myself.
>>
>> However, time has passed and we all adapt our views as we progress in our
>> learning. This makes me wonder: have your views on the path to attaining
>> a singularity changed since then?
>>
>> Robert Benjamin
>
> A few comments before I resume the deployment of the next part of the
> proposed paper.
>
> My ideas have developed a long way since then. I now have a chip design.
> I never watched that video, and I no longer use the word "ephapsis"; it's
> a misnomer, technically. I tend to use "electromagnetic field coupling"
> and label it EM.
>
> I'll never see it, I won't live long enough, but 'integrated information
> theory' and its PHI measure can be sensibly applied to the chips in the
> diagram at (e) left, middle, right and compared with (f) left, middle,
> right. Those who developed the PHI measure get to tag along in the AGI
> 'moonshot' to get the clinching proof that PHI actually measures
> consciousness. They have a stake in this too (especially in the early
> stages). As a formal measure of the kinds of mutual information content
> in a system with a boundary that's currently called a 'Markov blanket',
> it seems plausible that it can provide a measure of consciousness that
> may play a part in certifying the creation of an artificial
> consciousness, which is most likely to be mandatory for any real AGI at
> any intelligence level.
>
> In the next section, on the final/nuanced structure of the science of
> natural general intelligence, I include the use of biological tissue as
> the 'essential physics' of an engineered brain that fits into the
> framework at location (e) left. This is currently an active area of
> research in neuroscience, but it has yet to be applied explicitly in an
> artificial general intelligence context.
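[Editorial aside: the PHI measure referenced above is defined over a system's cause-effect structure and is expensive to compute; as a toy illustration only, a much simpler integration-like quantity, the total correlation (multi-information), captures the same intuition that an "integrated" system carries information jointly that its parts do not carry separately. This sketch is not PHI and not Colin's method; all names and distributions here are illustrative.]

```python
# Toy sketch: total correlation, an integration-like quantity related
# in spirit to IIT's PHI but NOT the actual PHI computation.
from itertools import product
from math import log2

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {state: prob}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def total_correlation(joint):
    """sum_i H(X_i) - H(X_1..X_n) for a joint distribution over tuples."""
    n = len(next(iter(joint)))
    marginals = []
    for i in range(n):
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two perfectly coupled binary units: 1 bit of integration.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: 0 bits of integration.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}

print(total_correlation(coupled))      # 1.0
print(total_correlation(independent))  # 0.0
```

A system with a nonzero value here is statistically more than the sum of its marginal parts, which is the intuition the PHI measure formalizes far more rigorously over causal (not merely correlational) structure.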
> It's still 'artificial' in the sense of being human-originated
> (genetic/tissue engineering). The best thing about the approach is that
> you can claim that all the natural physics is present, including the
> physics that delivers a 1st-person perspective (consciousness). But I am
> focussed on a silicon-based chip for AGI that incorporates an engineered
> inorganic version of the physics responsible for consciousness.
>
> The next stage of the exposition should clarify a lot of this. I'm glad
> it's taking your thoughts along the right track!
>
> Singularity?
>
> Hmmm. The 'moonshot' big-science project I defined in a previous post
> takes 10-15 years to create an AGI with a bee-sized brain. In the process
> it will solve the origin of consciousness, and 'bee-level' AGI will
> result (not a functional bee, but a robot the size of a dog with
> different function and a bee-sized brain, roughly 500K neurons). It's an
> evolutionary design/development process, and along the way it will pass
> through 'heart-muscle' and other single-cell nervous systems, worms,
> jellyfish (invertebrates), and on, up into the insects (Drosophila
> first), arriving at the bee.
>
> That's where my plan stops. At that point we have AGI robotics that can
> repair our ecosystem by providing a new 'artificial animal ecology'.
> After that? Do we take the artificial brains up to the primate level? To
> human level? Beyond? These will become possible. It will be a choice we
> make. Note that in my previous blurt on the structure of the 10-15 year
> project and its deliverables and function, I called the chips I want to
> build 'intrinsically safe'. Because the AGI created by the project is
> fundamentally limited by hardware (and it is likely to be proved along
> the way that 'computers' will never do AGI), we get to choose whether we
> create human-level and beyond. Just like the lower animals, if you have
> the general intelligence of a bee ... that's where you stay.
> You'd have to build your own chip foundry and rebuild your own brain to
> do it. Humans are the only ones with the brains and, just as important,
> the massive industrial complex needed to make even the dumbest of AGI.
>
> To build a human-level AGI is to build, in effect, an artificial
> scientist. We get to choose whether we want to do that. That's as far as
> I need to go.
>
> I have to remain focussed on the next part of the paper, now.
>
> Colin
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/T87761d322a3126b1-M272a3fe5433159a6637299f0>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T87761d322a3126b1-Mdbf1d450c9948fb1a69f1199
Delivery options: https://agi.topicbox.com/groups/agi/subscription
