Accidentally sent direct... oops.

From: Colin Geoffrey Hales
Sent: Wednesday, 14 January 2015 6:47 PM
To: 'Steve Richfield'
Subject: RE: [agi] How to create an uploader
Hi Steve,

Mr Google tells me the Harmon Neuron is an electronics variant of a compartmental model, with all the usual deferrals to membrane capacitance, the Nernst equation, ion channel conductances, etc. OK. But it’s a model! I am not doing an equivalent-circuit model. Clearly I have to reset the thinking.

Take a look at the standard compartmental model of a simple single compartment. Here’s one: http://icwww.epfl.ch/~gerstner/SPNM/node18.html

Take one of the compartments, a little cylindrical segment in the diagram, and flatten it to a sheet. The capacitor is an equivalent-circuit model of the membrane. The parallel branch across it is an ion channel conductance modelled as a resistor. No matter how sophisticated the parallel branching is, all these lumped-element circuit models ever do is get a transmembrane voltage to behave like the voltage of a small chunk of membrane. This is a model. A computation. It ‘writes’ its output in the form of a voltage.

I am not doing this. I am not making a model. Instead I am literally making an inorganic patch of neuron membrane. Totally different outcome. Not a model. In reality:

(1) C has a huge sheet-dipole electric field extending out into the space around the membrane dielectric.
(2) The electric field across C (the membrane) is 10 million volts/meter. Huge.
(3) The resistor R is INSIDE the dielectric of C (not in any abstract parallel relation with it). This is the ion channel penetrating the membrane. Exactly the same (scalar) current magnitude, but a completely different current _density_ (vector).
(4) When the channel operates it ‘shorts out’ the capacitor from inside the dielectric, creating an evanescent dynamic inverse electric-field dipole orthogonal to the capacitor (dielectric = membrane).

So I build a planar sheet capacitor. I penetrate the capacitor dielectric with a non-linear, field-dependent shunt that breaks down under controlled circumstances (low conductance) and then stops, letting the membrane charge up.
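For readers unfamiliar with what such a lumped-element model actually computes, here is a minimal sketch of a single-compartment leaky membrane, integrated with forward Euler. All parameter values are illustrative round numbers I have chosen, not measurements from any real neuron, and this is the kind of model Colin is contrasting his device against, not the device itself:

```python
# Minimal sketch of a lumped-element single-compartment membrane model:
# forward-Euler integration of  C dV/dt = -g_leak*(V - E_leak) + I_inj.
# Illustrative parameters only (assumed round numbers, not measurements).

C_m = 1e-6      # membrane capacitance (F): the "C" in the circuit
g_leak = 1e-4   # leak conductance (S): the parallel resistor branch
E_leak = -0.070 # leak reversal potential (V): the battery in series with R
dt = 1e-5       # integration time step (s)

def step(v, i_inj):
    """One Euler step of the membrane equation."""
    dv_dt = (-g_leak * (v - E_leak) + i_inj) / C_m
    return v + dv_dt * dt

v = E_leak
trace = []
for n in range(20000):                          # simulate 200 ms
    i_inj = 2e-6 if 5000 <= n < 15000 else 0.0  # 2 uA pulse, 50-150 ms
    v = step(v, i_inj)
    trace.append(v)

# The model's entire output is this voltage trace; nothing in it
# represents the real field geometry (~1e7 V/m across a ~5 nm dielectric).
print(round(max(trace) * 1e3, 1))  # peak voltage in mV -> -50.0
```

A telling property of the model: scaling C_m, g_leak and the injected current together by the same factor leaves the voltage trace unchanged, since only the ratios matter. That is a toy version of the point that many different physical configurations can 'write' the exact same voltage.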
Back to what it was.

I replicate the EM field system of the tissue. This is NOT the EM field system of an arbitrary lumped-element circuit model, which can be almost anything you like. This collection of materials produces an EM field system that is literally the same as a neuron’s. That field system happens to have a voltage: the exact same voltage as the model. But it actually uses the physics of the membrane materials to do it.

The model is the emulation. My device is the replication. They are totally different. One replicates the physics; the model merely reproduces, as a voltage, a parameter observed in the physics. Reproducing an observed voltage is not replicating the physics that produced the original voltage. Lumped-element equivalent-circuit models make brilliant use of ‘gauge invariance’ in Maxwell’s equations, which allows an infinity of different field systems to produce the exact same voltage.

You may say ‘so what?’ Well, guess what you have to do to resolve it one way or the other? Replicate. And guess what’s the only thing never done in the history of neuroscience? Replicate. The only version ever attempted is sitting on my desk at university.

Can you see what I mean? I am trying really hard to reveal the fundamental difference between emulation and replication. Am I getting through?

Cheers
Colin

From: Steve Richfield [mailto:[email protected]]
Sent: Wednesday, 14 January 2015 4:11 PM
To: [email protected]; Colin Geoffrey Hales
Subject: Re: [agi] How to create an uploader

Colin,

There is a curious footnote in history that relates to your position. Are you familiar with the Harmon Neuron? It is a circuit that works much like they thought neurons worked when it was designed. Now we know MUCH more, so such a circuit would need to be redesigned, or better yet, replaced with a tiny one-chip microcomputer instead of a lot of discrete components.
Or, given the power of modern one-chip microcomputers, you could replace a bunch of neurons with a single microcomputer. A while back there was a sizable startup formed to do just that; I forget its name. As I recall they developed a product, but they never managed to sell any of them. Now this has all been subsumed into neural networks, until... you came along and wanted to do another iteration on the above process. Artificial neurons have one important place in the world: they are FAST, because of their entirely parallel method of operation.

Steve

On Mon, Jan 12, 2015 at 4:00 PM, Colin Geoffrey Hales via AGI <[email protected]> wrote:

Hi folk,

My (Chapter 12 PCT) test is ‘consciousness-agnostic’ as far as test subjects are concerned. It only demands full embodiment. Can I suggest that, no matter what your attitude to consciousness is, the PCT (or whatever it evolves into) be considered as a way to bring science to this community that will attract science funding (eventually)?

I also note that until we actually build something that has consciousness by physics replication, nobody can claim to know anything solid about it, except that the physics of it does not exist in any computational substrate that exists at present. I also note that the only real example of human-level general intelligence, natural general intelligence (NGI), has consciousness, that we are currently using it to conduct this discussion, and that science is critically dependent on it, whatever it is. Empirical fact... get over it!

I also note that some of the attitudes here, to computing as an AGI and to the consciousness/intelligence relation, are a bit like those of climate deniers. That is, there’s merely evidence-less opinion masquerading as a science outcome. The reasons for this preference/opinion I can’t claim to understand. It is invariant to evidence in a way that I find quite disturbing. What is it about modern life that fosters this kind of thing?
That causes shootings in Paris? Some people would rather be self-assured that they absolutely ‘know’ garbage than admit to not knowing something. Some sort of ignorance phobia? So strange. Scientists know that when you realise you don’t know something, you’re a long way towards a solution. I’ll try not to go Rumsfeldian here. Being wrong is a job requirement for a scientist. Let yourself be wrong and you’ll get to what is right by wrongness attrition! You can only be wrong so many times in a row. But if you never try to make yourself wrong, you’ll never know whether you are right or not.

Like climate change and its deniers, the consciousness basis of intelligence will roll over the backs of the deniers, leaving its tread-marks on a bewildered sub-group of denialists. Thomas Kuhn recognised this sub-group. Ernst Mach died in denial of atoms. They get old and become irrelevant, and are ultimately regarded as having left science. Their preferences become a religion; their community, a cult.

BTW, did you know the science of consciousness recently became a ‘generational’ activity? Roughly 25 years old. It started around 1990, and an entire generation of scientists has inhabited it. They think they are studying something real and very, very important. That community knows _exactly_ what it is studying. They also know they don’t know what it is. Just like fire, long ago. To know what you’re studying does not mean you know what it is. That is science.

That is not being done in the computer-only-centric part of the AGI community, which seems dominant even now after 60+ years of failure. What the existing computer-based AGI community has been doing for 60+ years is examine a hypothesis that consciousness is irrelevant. This is being done in a way that is not actually science, and none of the practitioners get that. The science-of-consciousness community will be the community that solves the AGI problem.
That community will have an explanation as to why the 60+ years of computer-based AGI failure happened and could have been predicted. With the consciousness understanding in place, we’ll be able to design AGI from a perspective of explicitly choosing to include consciousness or not, by design, and of knowing what its presence or absence does to the resulting artificial intelligence. Only then will the ethics issues make sense.

Signing off for now. 2015 beckons. Dammit, I said I wouldn’t ramble. Sorry.

Cheers
Colin
