Colin

I understand your point. It is clear. It is a valid point. However, I find it 
remarkable how utterly convinced you seem to be that the physics was shunned 
by the industry. My evidence suggests you are only partially informed.

Bearing in mind, I'm an independent researcher. Self-funded, no strings 
attached, beholden to no one. My research leads me where it does, and I follow 
that trail. Before any of my work was finalized, I scanned across emerging 
research in quantum physics and neuroscience in order to test for future 
feasibility. I spent time with the work of Brian Greene and other prominent 
physicists.

By no means did I ever envision myself being involved in actually developing 
such a brain chip. The reason was that it seemed apparent that the world would 
do so, and what would then be lacking would be the scientific component of 
firmware: quantum-compliant logic. That is the area I focused my research on. 
My work would be integrated with such a chip, become it, and more.

As far back as 2014, I proposed on YouTube the notion of life on a chip. Even 
before that, the notion (as a potential project) was discussed with a prominent 
engineering firm. I'm telling you this to help you understand that your 
perspective, though valid and duly respected, is not totally reliable.

Surely, as a neuroscientist you are familiar with the empirical research into 
EM fields and the physics of the human brain? I accessed such research as part 
of the body of knowledge I had to contend with. All I've been thinking about 
for the past two years is the activation of EM fields as information carriers 
of the logic that was being developed. For that reason, I still follow emerging 
developments at a prominent chip manufacturer in China and infer chip 
development from emerging robots in Japan. It's a never-ending part of R&D. 
There are so many aspects one has to always bear in mind and integrate into a 
feasibility model.

As a pragmatist, I'd like to encourage you to get your head out of your ass and 
get off your purist high horse. Your upset with industry and those anagrammed 
parties on the list is a noisy distraction from the real work that lies ahead. 
Do the work yourself, then, if you deem it seriously neglected. You do not need 
funding to propose testable designs. You need theoretical research that would 
be feasible in empirical studies. Design the experiment.

Maybe the world is simply not ready for you yet. Maybe, for now, the majority 
of the industry is happy to fail, rinse, and repeat across an overwhelming 
array of AI applications. It's emergence at work. In all probability, 
structural functionality would eventually follow. We cannot control that 
process of technological evolution.

Instead, you could be developing the next 40 years of science, which Brian 
Greene, in 1999, stated still had to be developed. Nearly twenty years later, 
where is the development? It's there, but as you correctly pointed out, not 
mainstreamed. I cannot do that kind of development myself; I'm not smart enough 
or educated in that field. It seems, though, that you cannot develop my 
component of the research either. That is my passion and my gift. Tapestries 
are not woven from a single strand. How do we connect these dots into the 
reality you say needs to be made practical? Would you be the one to do so?

So, instead of lambasting everyone around you for their alleged lack of 
enlightenment, maybe it is upon you to lead the way? If not, you are nothing 
but a noisy critic and a cynic, which is not such a bad role to play either. 
The world needs free thinkers with critical abilities.

Still, that would be a great shame. From what I've seen, once you drop your 
superior attitude, the sense you talk makes for incredible reading. There's so 
much to learn from your clear perspective. Now we need to see more from you: 
the physics thereof.

In summary: I get it! I get it! Now what are we going to do about it?

PS: If age were a qualifier of superiority, the dinosaurs would've been 
developing AGI today.

Rob
________________________________
From: Colin Hales via AGI <[email protected]>
Sent: Monday, 20 August 2018 2:39 AM
To: AGI
Subject: Re: [agi] Knock. Knock. Knock. Knock. Knock.

Rob,
As someone who is 62 years old, has had an entire career in industrial 
automation, and has worked full time on AGI via neuroscience since 2003, most 
of it in academia, I disagree that I have missed anything. I have sailed the 
ocean of literature into all its corners.

This is so frustrating. I keep saying it, but the actual topic keeps getting 
missed.

Let's drop the names. AI, AGI, whatever. Let's get back to basics. Let's see if 
I can spell it out in a way that works.

The _science_ involved in this is a science of natural general intelligence. 
When you do science, part of the process involves the creation of an artificial 
version of the natural original. This, along with the study of the physical 
nature itself, forms the 'empirical science' part.

The other half of the science is called 'theoretical science'. In this, 
abstract formalisms describe the nature. You can explore these formalisms by 
computer and get them to reveal things and predict things about how the nature 
appears when you look.

For example, the standard model of particle physics ... You can compute that 
model and predict a Higgs boson. That's the theoretical half. You can build a 
super collider and make Higgs bosons. That's the empirical science.

Ok?

This is universal. It applies to everything in science without exception.

Except for natural general intelligence.

Here, in the science, and only here, the two have been confused. There's a 
hypothesis that's used (in the comedy) called the 'substrate independence 
hypothesis'. Call it SIH.

Given enough resources, I could put the brain's signalling physics itself (the 
electromagnetism that does the signalling) on chips, in the exact same physical 
form it has in a brain. There's no abstract model of the brain. The result is 
not a computer. There's a truckload of 'computation', in the exact same form it 
has in the brain, but there's no _computer_. I put these chips in a robot suit. 
In front of me, I claim, I have an 'artificial general intelligence'. Call it 
robot A. This fits the behaviour that everywhere else gets called the empirical 
science. This time, it's the empirical science of artificial general 
intelligence.

Now we switch to the theoretical part. I do an abstract model of the same 
brain. I put the physics of a computer on the chips instead. I code up the 
abstract model and run it on the computer. Everywhere else in science, this is 
part of theoretical science. I put that computer inside the same robot suit. 
Call it robot B. In front of me, what have I got? Everywhere in the science 
conducted to date, robot B gets called 'artificial general intelligence'. Is 
it really an artificial version of the natural original?

So we now have two physically identical robot bodies in all respects except 
for their brains. Brain A has natural brain signalling physics on the chips. 
Brain B has the physics of a computer on the chips. Utterly different physics.

Q.Which is actually functionally equivalent to the 'natural general 
intelligence'?

A. Formally, scientifically, nobody knows. Not you. Not me. Nobody.

Everybody assumes the SIH is true and builds robot B. It fails. And then does 
it again. Rinse. Repeat. Fail. For 65 years this goes on. Wars rage on Twitter 
about the equivalence as we speak. The character Ragy Scarum, for example, did 
it again during this email thread. Know who it is? 😁

Fact.
Nobody ever does robot A, the real empiricism done everywhere else, where 
robot A _and_ robot B would be built and compared/contrasted. This would be 
the (heretical, in the comedy) real empirical test of the SIH, done without 
assuming its truth or falsehood, but by measurement.
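The comparison being described can be sketched as a protocol. Everything below is an illustrative assumption, not an actual experimental design from the thread: the two robots are stand-in score lists, the "equivalence gap" is just a difference of means, and the numbers are invented:

```python
import statistics

def equivalence_gap(scores_a, scores_b):
    # Difference in mean performance between robot A and robot B on the
    # same task battery. A gap near zero across a broad battery would
    # support the SIH; a persistent gap would count against it for
    # those tasks. Either way, the answer comes from measurement,
    # not from assuming the hypothesis.
    return abs(statistics.mean(scores_a) - statistics.mean(scores_b))

# Hypothetical measured scores on an identical task battery.
robot_a_scores = [0.90, 0.80, 0.85]   # brain-signalling physics on chips
robot_b_scores = [0.40, 0.50, 0.45]   # computed abstract model on chips

gap = equivalence_gap(robot_a_scores, robot_b_scores)
print(f"equivalence gap: {gap:.2f}")
```

The point of the sketch is only that the SIH becomes a measurable quantity once both robots exist; the real work is building robot A at all.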

We are talking about a unique, singular deformed science.

I am not claiming the SIH true or false. I am claiming _nobody_ knows and 
showing what the science that empirically tests it looks like.

Under exactly what conditions is the SIH true? What exactly goes missing if the 
SIH is false?

You do not know. I do not know. Nobody knows, because the real science of it 
never gets done. There is not even a sign, anywhere in the literature, of a 
plan for robot A, the empirical science of an artificial version of natural 
general intelligence.

So when I say AI or AGI hasn't started yet, that's exactly what I am saying, 
practically.

This is about a deeply structurally deformed science supported by a community 
unaware of it.

To let the science go on as it is, believing robot A and robot B are identical 
to the point of never building robot A, is to participate in the world's first 
science that was born deformed and generationally trained to maintain the 
deformity.

That's what the 'centuries old ....' reference is about in the comedy.

If you turned up with a 'Higgs bosons don't have to be created to prove they 
exist' hypothesis and asked to cancel the construction of the Large Hadron 
Collider, you'd be laughed out of town by the physicists.

Yet for the biggest boson in the history of all bosons, the EM field system 
of the brain that creates all the brain's signalling and adaptation, that's 
exactly what has happened. Nobody ever builds that boson. Instead, the entire 
physics of the brain is simply thrown out, wholesale. Ironic, as well as being 
extremely naive, for the 'most complex single object in science'.

The only difference in practice that accounts for it is the three-plus 
generations of workers untrained in real empirical science. This is a cultural 
problem.

Can you see this?

Before you answer, remember, I am not asking you to believe anything about 
what computers can or cannot do. You can wander off and use computers and do 
miracles. All irrelevant to this discussion. I don't care. I care about the 
science. I am talking about a deformed science: exactly what it looks like, 
and exactly what it would look like if it were normalised to look like science 
does everywhere else.

This is not an opinion. You can measure the deformation in the science. It's an 
empirical fact of the science.

But the reverse, a proof of the SIH that would justify the lack of empirical 
science testing the SIH's possible falsehood, does not exist!

This singularly deformed science, presented as a joke, is the message of the 
comedy.

If only it were as funny in reality. The entire future, and all the money and 
effort, are a bet placed on an assumption of SIH truth that is never 
questioned, and for which there's a perfectly serviceable, centuries-old 
science practice ideally suited to sorting it out, and it gets ignored in this 
one and only place in science.

Have I made my point?

Colin

On Mon., 20 Aug. 2018, 8:27 am Nanograte Knowledge Technologies via AGI, 
<[email protected]<mailto:[email protected]>> wrote:
Colin

I accept your impassioned argument. I call what I do AGI, for the simple 
reason that it encodes tacit and explicit knowledge into systems models, which 
are transferable onto a computational platform. Now, that is not AGI yet, but 
to my understanding it resembles one of the critical building blocks that 
would help enable eventual AGI. What I do pertains to AGI. However, about a 
year ago I posted a sound theoretical and semantic argument to this group on 
why the term AGI was superfluous to the context of machine intelligence. As is 
typical, no one bothered commenting on it. Yet the term AGI stuck, so we are 
being forced to use it.

In my view, AI is AI, at different levels of operation and maturity. Still, 
let's go with the flow and keep it in the context of a version of AGI, shall 
we? The methodology I developed then - at that point - resembled a KM 
component, with features for 1-step mutation, diversification, and 
recombination. Quantum-based in meta design and evolutionary in operation. 
Here, I mean Darwinian evolution as a complex-adaptive system, not some other 
mantra.

The result constitutes a scientific method within a holistic systems framework 
to enable systems-based communication. It has its own language and rules. A 
full-blown ontology. Not a computer language, but a symbolic way for it to 
express itself, sufficiently effective to achieve the aforementioned 
complex-adaptive characteristics. Again, that does not constitute AGI. Merely 
some critical building blocks towards AGI.

Still, if we could manage to place a dynamically-driven knowledge engine on a 
computational platform, leave it be in a location where natural stimuli may or 
may not affect its behavioral responses, and it actually auto-responded to 
adapt to those stimuli, and showed assimilation of those environmental changes 
to the degree that the same stimuli were adapted to through learning, we may 
be ready to start thinking about such a platform as edging towards AGI 
functionality.
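The loop being described, stimulus in, auto-response out, assimilation of environmental feedback, can be reduced to a toy sketch. The class name, the single-number "response strength", and the learning rule below are all illustrative assumptions, not the actual systems models under discussion:

```python
class KnowledgeEngine:
    """Toy stimulus -> response -> assimilation loop (illustrative only)."""

    def __init__(self):
        self.knowledge = {}  # stimulus -> learned response strength

    def respond(self, stimulus):
        # Auto-respond: a stimulus never encountered gets a neutral 0.0.
        return self.knowledge.get(stimulus, 0.0)

    def assimilate(self, stimulus, feedback, rate=0.5):
        # Move the stored response toward the environmental feedback, so
        # repeated exposure to the same stimulus shows up as learning.
        current = self.knowledge.get(stimulus, 0.0)
        self.knowledge[stimulus] = current + rate * (feedback - current)

engine = KnowledgeEngine()
for _ in range(10):
    engine.assimilate("heat", 1.0)   # repeated natural stimulus

print(engine.respond("heat") > 0.99)   # prints True: assimilated
print(engine.respond("cold"))          # prints 0.0: never encountered
```

Even this trivial loop shows the distinction being drawn: the engine's behavior toward "heat" changes only because the environment acted on it, which is the auto-response-and-assimilation criterion, not AGI itself.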

Now, again, I'm not going to call that AGI, but would be bold enough to point 
out that the resultant machine would offer up critical building blocks towards 
achieving AGI functionality.

Suppose, then, we could empirically translate all knowledge this machine 
encountered and encoded into such a standardized language, and integrate it 
with the knowledge base of this machine. When we saw this knowledge being 
synthesized, aged, and recombined into new contexts, in a traceable manner, 
would we then concede the makings of AGI?

Furthermore, when we observed this knowledge in states of regression, new 
insights extracted from the source, and then recombined as new knowledge to be 
synthesized with predictive contexts, would we concede AGI functionality? When 
such discoveries were taught back to the source, via autonomous updates, and 
so on progressively, would we reconsider AGI functionality?

I think Turing had great vision, but a vision nonetheless limited by the next 
step of foreseeable computer engineering: a psycho-social need for humans to 
relate to machines. AGI should not be tested by how it mimics humans; it is 
way beyond the performance of average human beings. It should be tested by how 
far it can autonomously extend human-centric functionality, with self-motivated 
intent. To do so would mean to know humankind.

AGI then? Simplistically speaking, I envision a machine with the ability to 
autonomously spot an environmental situation, motivate itself under its own 
power to successfully act upon that environmental factor, and do so with 
increasing success. I may envision other machines too, even sub-species of 
machines.

But this one, empirically, must be hard-coded in its value set (its social 
conscience, if you will) to do good to mankind, meaning to apply its resources 
and comprehension to know what constitutes good for mankind and its 
environment, and to have the know-how to autonomously apply effective 
complexity to achieve the specific level of eco-systemic good in that 
particular context. To do so, of course, it would have to ensure its own 
survival and the survival of any goal-centric enabled network of autonomous 
machines. Then, would we call that AGI?

Assuming the plethora of sensors available to enable humano-robotic features 
such as hearing, vision, speech, and touch, I would be bold enough to say that 
in such a machine, emotion (as feedback-driven motivational competency) would 
become possible.

I can say this with certainty, because these systems models have already been 
completed, tried, and tested. The framework exists. The method exists. Many 
other modular components exist. My next job would be to assemble such a 
machine and achieve that which you so candidly assert has no possibility of 
existing.

I would spend the rest of my life doing so, which is rather short in terms of 
scientific development. However, you have no right to discount the 23-odd years 
I've spent on this journey with a solid vision guiding me.

I do not blame you for your perspective, but based on what I've experienced 
during practical tests of these models, and subsequently submitted as 
well-disguised field research to the IEEE for review, and more, you are simply 
not properly informed yet. Not all knowledge is published or paraded plainly, 
meaning we can only see as far as our eyes can reach. In my case, I only 
showed that which I chose to show, for my own research reasons.

As a scientist dealing in the realm of possibility, let me challenge you then 
with your own words. What if this was meta AGI? What if...

Rob



________________________________
From: Colin Hales via AGI <[email protected]<mailto:[email protected]>>
Sent: Sunday, 19 August 2018 11:06 PM
To: AGI
Subject: Re: [agi] Knock. Knock. Knock. Knock. Knock.



On Sun., 19 Aug. 2018, 6:11 pm Nanograte Knowledge Technologies via AGI, 
<[email protected]<mailto:[email protected]>> wrote:
Colin

You're right, of course. My point is: if AGI would not deal with this level of 
abstract, human communication of temporal, emotive states, how would it ever 
be taken seriously to solve highly abstract problems? Poets do indeed have 
their say. 😉

Next year, I hope to be living in Panama. From a stronger base, I'm quite 
certain I would then be able to start addressing the big issues in life in 
terms of a pragmatic AGI application.

For example, using AGI-tech to help understand and mitigate the unfolding, 
Fukushima nuclear disaster, which is affecting the globe. There are 
life-existential problems to come to grips with and find resolutions for 
regarding; health, safety, food and water security, and environmental 
contamination. There's more than enough work for all of us interested in the 
field and no time to waste. Not all of us would be able to run away to Mars.

I think that is primarily where AGI development should be heading.

Rob


Rob,
There are eras in science history where an entire thread of a particular 
science works diligently and thoroughly for a very long time, and then finds 
itself facing a shift that renders the entire era irrelevant or misguided.

AGI is one of those eras.

There are hundreds of folk like yourself who follow ideas. It's all great 
stuff, and you never know what will result.

In what you just said about your Panama plan and environmental work, you are 
presupposing the very things my little story questions.

You presuppose that AGI science has actually started.

For the reasons in the story (yes, it's all there!) AGI hasn't started yet. 
Instead, vast cliques of automation have been labelled AI and AGI and GAI and 
sometimes ASI.

All valuable work. But none of it is founded on a properly formed science, with 
a real empirical basis.

That's the problem. Everyone assumes that to pick up a computer leads, 
potentially, to Artificial General Intelligence. A culture born in the 
'rapture' of the story.

I severely challenge that presupposition in the comedy and in my book.

Dressing a computer-based brain in a robot suit does not do empirical AGI. It 
is an elaborate form of theoretical neuroscience.

All these issues are in the comedy.

I am trying to get everyone to realise it. Real AGI science will only ever 
start when brain physics is put on the chips. It's big science, chip foundry 
work, and hasn't ever even been thought about until now, let alone started.

So when you specify that you're heading off to do the things you say, and I'm 
sure you'll do good work, I'm making a case that you shouldn't be calling it 
AGI.

And I don't just single out you. You're in great company. I mean everybody. The 
whole thing. Since 1956.

I am trying every means at my disposal to get this message out.

Nobody is working on AI. Nobody is working on AGI. Everybody is working on 
automation using theoretical models of the brain and calling it an artificial 
version of intelligence. When it's not.

This mistake has only ever happened in this one place. And it could only ever 
happen at the birth of computers. And it did.

Turning this around and correcting the science is what the story is about.

At these points in science history, those that turn up with the correction to 
practice don't usually have a good time. I can confirm that! It's no fun at 
all. Also, in these periods of shift, non-standard forms of communication can 
help dislodge the glasses of the received view.

And occasionally you get to send the whole thing up.

I hope your adventures are rewarding and impactful, but just imagine if you're 
mistaken in calling it AGI. Just ask yourself that 'What if ....'

Cheers
Colin






Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/Tc4740af26e8cd0ee-Maa3b3c86f9ac5f6678993f69>
