Hi Colin,

Most of the people on this list, including you and me, are each doing their own thing while reviewing each other for mutual benefit. NOW, I FINALLY understand other people's objections to some of my earlier postings: I was exposing them to my evolving view of the world, each exposure was 95% the same as the previous one, and I wasn't announcing what was new in each version. Instead of continually writing anew, perhaps I should have included change bars, or encapsulated the changing theory into a one-screen abstract, or ???
Most people here feel they see a fatal flaw in your work, but different people see different apparent flaws, so it is difficult to carry on a group conversation. Without addressing the apparent flaws, even though they might not be real flaws, you are chasing your audience away.

As for me, understanding and models are two sides of the same coin: ordinary explanations of how things work center on models of their operation, or lack thereof. Claiming to operate in the absence of a model seems to be either:

1. a simple declaration of abandoning science - which I think I know you well enough to KNOW you aren't intending;

2. part of the first step in the Scientific Method - looking for interesting things to study further - but you apparently disclaim this by claiming to be able to magically jump to useful hardware/wetware/AI WITHOUT creating a model upon which to build an explanation; or

3. a claim that something useful can come of systems without the functional complexity of synapses, which commonly exhibit non-linearities, integrate, differentiate, etc.

I'm not sure whether I just don't see a pot of gold at the end of your rainbow, or I just don't see your particular rainbow. Perhaps you could write a screenful of words that advance your central theses? I might even take a shot at what I understand, for you to edit to correct my errors:

*The physical arrangement of neurons in brains strongly suggests that field considerations might predominate over detailed wiring considerations.
Indeed, some of the more inexplicable computational abilities of neurons, like mutual inhibition, are difficult to explain based on connections, but easier to explain based on fields.*

*Colin (you) proposes that computational analogues to the operation of these fields might turn out to be adequate to explain VERY complex behavior - like the operation of our brains.*

*Steve (me) believes fields are just another component of normal neural operation that MUST be factored in for neuroscience and AI to ever advance. However, fields are linear, so ignoring the non-linear components like synapses would be like leaving the transistors out of an IC and expecting it to do something useful.*

OK. Can you correct the errors in the above to match your view of reality?

Thanks again for all of your efforts.

*Steve Richfield*

On Fri, Dec 18, 2020 at 8:28 PM Colin Hales <[email protected]> wrote:

> Hi,
> For a very long time I have been trying to articulate a fundamental issue
> in the conduct of the science of AI (AGI). The issue is the proper conduct
> of the science such that we can know, with empirical certainty, whether
> and under what circumstances, a general-purpose computed abstract model of
> nature (the brain) has functional equivalence with that nature (the brain).
>
> It's taken 10 years of brutal grind, but I think I have found the
> mature/accurate shape of the argument, the proper nature of the problem,
> and the way forward.
>
> I have completed the paper to preprint stage before I go to a journal for
> the final peer-review meat-grinder.
>
> So, for a bit of a quiet read while the world self-immolates over the next
> couple of weeks:
>
> Hales, C.G. (2020). The Model-less Neuromimetic Chip and its Normalization
> of Neuroscience and Artificial Intelligence.
> https://doi.org/10.36227/techrxiv.13298750.v2
>
> 1 main article.
> 2 supplementary supporting articles.
> 4 videos from a computational EM study.
>
> Many of you will find previous discussions here remain part of it.
> It's been quite a job to get to the bottom of the matter.
>
> I hope it makes sense of a difficult issue.
>
> Take care out there,
>
> cheers,
> Colin
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/Tf319c0e4c79c9397-M62ea1905ccf598858dda3808>

--
Full employment can be had with the stroke of a pen. Simply institute a six-hour workday. That will easily create enough new jobs to bring back full employment.
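Steve's linearity argument can be made concrete with a toy calculation. The sketch below is my own construction, not anything from the thread: the 1/r point-source field, the probe point, and the firing threshold are all illustrative assumptions. It shows that a field's response obeys superposition (response to summed sources equals summed responses), while a thresholded "synapse" does not - two sub-threshold inputs produce nothing separately, yet fire when combined.

```python
import math

def field_response(sources, probe=(0.0, 0.0)):
    """Potential at `probe` as a linear superposition of 1/r point-source terms."""
    total = 0.0
    for (x, y), strength in sources:
        r = math.hypot(x - probe[0], y - probe[1])
        total += strength / r
    return total

def synapse_response(drive, threshold=1.0):
    """Crude non-linear synapse: rectified output above a firing threshold."""
    return max(0.0, drive - threshold)

a = [((1.0, 0.0), 0.6)]   # one source at distance 1 from the probe
b = [((0.0, 2.0), 0.8)]   # one source at distance 2 from the probe

# Superposition holds for the field: f(a + b) == f(a) + f(b).
assert math.isclose(field_response(a + b),
                    field_response(a) + field_response(b))

# It fails for the synapse: each input alone is sub-threshold (output 0),
# but their sum crosses the threshold and produces output.
print(synapse_response(0.6), synapse_response(0.8), synapse_response(0.6 + 0.8))
```

In Steve's IC analogy, `field_response` is the linear medium and `synapse_response` is the transistor: any model built only from the former can never reproduce the threshold behavior of the latter.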
