OK. The sound-bite summary from an AGI perspective:

Politics.
Expect a convergence on EM and a physics/neuroscience linkage to grow and
dominate the area.

Technical shift.
The brain, steam, abacus, pneumatic, hydraulic, relay, discrete
electronics, integrated-circuit electronics ...

These are not different substrates. There is only one common substrate: EM.

The difference between the brain and the rest is in the specific
organization of the EM. 'Substrate independence', as originally intended,
is actually 'organizational invariance' applied to EM as expressed by
atoms.

The whole program of work in AI/AGI has been, in effect, based on an
assumption of the truth of some kind of "organizational invariance"
principle, whose actual route to proof involves empirical work that admits
its potential falsehood: a test involving AGI done without using
general-purpose computers.

So: 'substrate independence' is now a broken idea.

Cheers
Colin




On Fri, Feb 25, 2022, 2:59 AM <[email protected]> wrote:

> Sir, this is a lot of text to read, lol... OK, I read about half of it.
> The titles help say what is in the text beneath, but each title is
> similar, so it doesn't really break anything down, I think.
>
> "To be such a configuration of EM fields is, under the right conditions,
> to be conscious."
> Yes, if one had the AGI algorithm and trained it, it too would "be". I
> don't really understand why you tried to bring us to this point; we know
> this...
>
> Also, you say this again, even though in reply to my last post you said
> it can be done in a computer (maybe you didn't mean that?):
> "Neuroscientists are entitled to ask what goes missing, in the sense of
> the heat in the combustion example, when the physics of brain signaling is
> thrown out and replaced by the physics of a computer. Is the computer and
> its model really contacting *all* brain phenomena? If there is something
> missing, how would we know? What procedure might we use to find out? This
> is the challenge posed by the McCulloch quote (McCulloch, 1965
> <https://www.frontiersin.org/articles/10.3389/fnhum.2022.767612/full#B36>)
> at the start of this article."
>
> Now, to respond to that question: EM fields would be in the neurons and
> axons, everything, allowing the relayed "signals" and "reactions" to
> cross borders and run, if you get me, but the strength and size etc. of
> these fields can be run on a computer. Making it the natural way, in
> artificial meat, may be easier, but I think not. I think the signal
> travelling along an axon will depend more on what it is and less on what
> it traverses and what that is made of... kind of like how a hammer could
> be made of cells or metal, but it's still a hammer. Let's pretend the EM
> was an oblong, wide-ish sphere whose force got really dense toward the
> center fast, unevenly linear in strength toward the center. And let's
> say it grows and shrinks at some speed and under some conditions. Let's
> assume this allows patterns, just the type we need in this universe, to
> form and bind to build bigger patterns; it just works this way, it's how
> tools are made. So, hmm, if the data is images from one 2D camera, or
> even one theoretical 3D x-ray camera, I would think the binding of these
> "parts" of the images would be more like "you fire and I do too, then we
> both wire", which is well known in the AI literature. I mean, you can't
> expect to just put together a trillion meat sticks and have it be the
> right algorithm for you; it will do nothing. You need to code the whole
> algorithm. So I doubt the EM fields have some sort of already-there
> abilities like that which would improve attention spans or allow longer
> recent memory or anything else.
>
> On the account of consciousness, though: that comes from a really good
> GPT-3/NUWA/Palette AI, if you get me :). It may need NUWA, but also to
> be trained on faces and voices and to reply to humans using this
> NUWA-face-predictor, with a movie alongside of what you and it are
> Skype-ing about. Maybe with some improvements too. An evolving goal and
> the storing of thoughts are also important. Currently these AIs talk
> about all topics; car hobbyists and rocket engineers aren't allowed to
> -- they force out certain domain words like cars/trucks/fuel,
> space/rocket/ship/suit. We *need* each AI to have a focused job, so they
> can focus! Letting it run and store new thoughts etc. would let it learn
> new goals. No need for a better AI there. If you train the AI to think
> "OK, consciousness, hmm, consciousness, I am analyzing this now to reply
> to Colin Hales on this forum, hmm, my goal is consciousness", and to
> open that word up... then it will learn how to reason! Just train it on
> this stuff. I may be wrong... any more ideas on how to make AGI here?
