It's even worse when you consider that the empirical evidence for psi is overwhelming, and that there is a quasi-religious opposition to recognizing this fact in neuroscience, let alone in AGI theory.
On Mon, Dec 21, 2020 at 3:12 PM Colin Hales <[email protected]> wrote:

> Hi Steve,
> OK. Let's try:
>
> Page 2:
> "In scientific behavior, empirical observation and theoretical science
> face off normally in the following three familiar science contexts:
>
> (i) Observation of a natural context (*empirical science*).
>
> (ii) Observation of artificial versions of the natural context. Call
> this engineered or replicated nature a 'scientifically-artificial'
> version of nature (*empirical science*).
>
> (iii) Creation of abstract models predictive of properties of the
> natural context observable in (i) and (ii) (*theoretical science*)."
>
> This process is literally drawn in Figure 1 for 5 different science
> contexts, all of which do exactly this (i)/(ii)/(iii) process EXCEPT in
> (e), for the brain, where:
>
> (A) (ii) empirical science, in neuroscience and 'artificial
> intelligence', *is missing from the science.*
> (B) It just so happens that if you decide to do (ii), brain EM is the
> thing that has been lost and that you replicate for the purpose. If you
> do the science to explore that, then you are not using a general-purpose
> computer. You are exploring actual EM physics. It is empirical science.
> (C) If you claim (iii) is all you need, then you are distorting the
> science in one place: *a unique, anomalous and unprecedented lack for
> which empirical proof is required*. That proof arises through using (ii)
> and (iii) *together*.
>
> I have simply said what the paper says.
>
> cheers
> colin
>
>
> On Tue, Dec 22, 2020 at 6:01 AM Steve Richfield <[email protected]>
> wrote:
>
>> Hi Colin,
>>
>> Most of the people on this list, including you and me, are each doing
>> their own thing, while reviewing each other for mutual benefit.
>> NOW, I FINALLY understand other people's objections to some of my
>> earlier postings: I was exposing them to my evolving view of the world,
>> each exposure was 95% the same as the previous one, and I wasn't
>> announcing what was new in each version. Instead of continually writing
>> anew, perhaps I should have included change bars, or encapsulated the
>> changing theory into a one-screen abstract, or ???
>>
>> Most people here feel they see a fatal flaw in your work, but different
>> people see different apparent flaws, so it is difficult to carry on a
>> group conversation. Without addressing the apparent flaws, even though
>> they might not be real flaws, you are chasing your audience away.
>>
>> As for me, understanding and models are two sides of the same coin.
>> Ordinary explanations of everything center on models of their operation
>> or lack thereof. Claiming to operate in the absence of a model seems to
>> be either:
>> 1. a simple declaration of abandoning science - which I think I know
>> you well enough to KNOW you aren't intending; or
>> 2. part of the first step in the scientific method - looking for
>> interesting things to study further - but you apparently disclaim this
>> by claiming to be able to magically jump to useful hardware/wetware/AI
>> WITHOUT creating a model upon which to build an explanation; or
>> 3. a claim that something useful can come of systems without the
>> functional complexity of synapses, which commonly have non-linearities,
>> integrate, differentiate, etc.
>>
>> I'm not sure whether I just don't see a pot of gold at the end of your
>> rainbow, or I just don't see your particular rainbow.
>>
>> Perhaps you could write a screenful of words that advance your central
>> theses?
>> I might even take a shot at what I understand, for you to edit to
>> correct my errors:
>>
>> *The physical arrangement of neurons in brains strongly suggests that
>> field considerations might predominate over detailed wiring
>> considerations. Indeed, some of the more inexplicable computational
>> abilities of neurons, like mutual inhibition, are difficult to explain
>> based on connections, but easier to explain based on fields.*
>>
>> *Colin (you) proposes that computational analogues to the operation of
>> these fields might turn out to be adequate to explain VERY complex
>> behavior - like the operation of our brains.*
>>
>> *Steve (me) believes fields are just another component of normal neural
>> operation that MUST be factored in for neuroscience and AI to ever
>> advance. However, fields are linear, so ignoring non-linear components
>> like synapses would be like leaving the transistors out of an IC and
>> expecting it to do something useful.*
>>
>> OK. Can you correct the errors in the above to match your view of
>> reality?
>>
>> Thanks again for all of your efforts.
>>
>> *Steve Richfield*
>>
>> On Fri, Dec 18, 2020 at 8:28 PM Colin Hales <[email protected]>
>> wrote:
>>
>>> Hi,
>>> For a very long time I have been trying to articulate a fundamental
>>> issue in the conduct of the science of AI (AGI). The issue is the
>>> proper conduct of the science such that we can know, with empirical
>>> certainty, whether and under what circumstances a general-purpose
>>> computed abstract model of nature (the brain) has functional
>>> equivalence with that nature (the brain).
>>>
>>> It's taken 10 years of brutal grind, but I think I have found the
>>> mature/accurate shape of the argument, the proper nature of the
>>> problem, and the way forward.
>>>
>>> I have completed the paper to preprint stage before I go to a journal
>>> for the final peer-review meat-grinder.
>>> So for a bit of a quiet read while the world self-immolates over the
>>> next couple of weeks:
>>>
>>> Hales, C.G. (2020). The Model-less Neuromimetic Chip and its
>>> Normalization of Neuroscience and Artificial Intelligence.
>>> https://doi.org/10.36227/techrxiv.13298750.v2
>>>
>>> 1 main article.
>>> 2 supplementary supporting articles.
>>> 4 videos from a computational EM study.
>>>
>>> Many of you will find previous discussions here remain part of it.
>>> It's been quite a job to get to the bottom of the matter.
>>>
>>> I hope it makes sense of a difficult issue.
>>>
>>> Take care out there,
>>>
>>> cheers,
>>> Colin
>>
>> --
>> Full employment can be had with the stroke of a pen. Simply institute a
>> six-hour workday. That will easily create enough new jobs to bring back
>> full employment.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tf319c0e4c79c9397-M7e0b77d3ecaa7bf7c906aff7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
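[Editor's note] Steve's point in the thread - that field superposition is linear while synapses are not - can be made concrete with a toy numerical sketch. This is purely illustrative: the point-charge potential and the sigmoidal "synapse" are hypothetical stand-ins chosen by the editor, not models from Hales' paper or Richfield's posts.

```python
import math

def field_at(point, sources):
    """Toy 1-D potential at `point` from point-like sources.

    Each source is (strength, position). Fields obey superposition:
    the combined potential is just the sum of the individual ones.
    """
    return sum(q / max(abs(point - x), 1e-9) for q, x in sources)

def synapse(drive, gain=4.0):
    """Toy sigmoidal synapse: a saturating non-linearity."""
    return 1.0 / (1.0 + math.exp(-gain * drive))

a = [(1.0, 0.0)]   # source A: strength 1 at position 0 (hypothetical values)
b = [(2.0, 3.0)]   # source B: strength 2 at position 3
p = 1.0            # observation point

# Linearity: field of (A and B together) equals field(A) + field(B).
lhs = field_at(p, a + b)
rhs = field_at(p, a) + field_at(p, b)
assert abs(lhs - rhs) < 1e-12

# Non-linearity: synapse(x + y) does NOT equal synapse(x) + synapse(y),
# which is why Steve argues the synaptic non-linearities cannot be
# folded into a purely linear field picture.
x, y = 0.5, 0.25
print(synapse(x + y), synapse(x) + synapse(y))  # clearly unequal
```

The saturating response is the whole point of the analogy to transistors in an IC: the linear parts (fields, wires) superpose, but without the non-linear elements the system cannot gate or threshold anything.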
