Colin

I accept your impassioned argument. I call what I do AGI for the simple reason 
that it encodes tacit and explicit knowledge into systems models, which are 
transferable onto a computational platform. Now, that is not AGI yet, but to my 
understanding it resembles one of the critical building blocks that would help 
enable eventual AGI. What I do pertains to AGI. However, about a year ago I 
posted a sound theoretical and semantic argument to this group on why the term 
AGI was superfluous in the context of machine intelligence. As is typical, no 
one bothered to comment on it. Yet the term AGI stuck, so we are forced to use 
it.

In my view, AI is AI, at different levels of operation and maturity. Still, 
let's go with the flow and keep it in the context of a version of AGI, shall 
we? The methodology I developed then resembled, at that point, a knowledge 
management (KM) component, with features for 1-step mutation, diversification, 
and recombination: quantum-based in meta-design and evolutionary in operation. 
By evolutionary I mean Darwinian, as in a complex-adaptive system, not some 
other mantra.
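For those who prefer mechanics to labels, those three operators can be sketched 
in a few lines of ordinary code. This is a deliberately toy illustration, not 
my actual models: the target string, alphabet, population size, and selection 
scheme are all invented for the example. It merely shows 1-step mutation, 
recombination, and diversification driving a population of symbol strings 
towards a goal state, Darwinian-style.

```python
import random

random.seed(42)

TARGET = "systems model"  # hypothetical goal state, for illustration only
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    """1-step mutation: change exactly one symbol."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def recombine(a: str, b: str) -> str:
    """Recombination: single-point crossover of two candidates."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Diversification: start from a random population of symbol strings.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # selection pressure
    # Keep the parents (elitism) and breed mutated recombinations.
    population = parents + [
        mutate(recombine(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

# population[0] now holds the best-evolved candidate.
```

Elitism (carrying the parents forward unchanged) guarantees the best candidate 
never regresses, which is what makes the loop a hill-climbing, complex-adaptive 
process rather than a random walk.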

The result constitutes a scientific method within a holistic systems framework 
to enable systems-based communication. It has its own language and rules: a 
full-blown ontology. Not a computer language, but a symbolic means of 
expression, effective enough to achieve the aforementioned complex-adaptive 
characteristics. Again, that does not constitute AGI; merely some critical 
building blocks towards AGI.

Still, suppose we could place a dynamically driven knowledge engine on a 
computational platform and leave it in a location where natural stimuli may, or 
may not, affect its behavioral responses. If it actually auto-responded to 
adapt to those stimuli, and showed assimilation of the environmental changes to 
the degree that the same stimuli were visibly learned from, we might be ready 
to start thinking of such a platform as edging towards AGI functionality.
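As a thought experiment only, that stimulus-adaptation loop can be caricatured 
in code. Everything here is hypothetical (the stimuli, actions, and reward 
signal are invented for illustration); the point is merely the shape of the 
loop: sense a stimulus, respond, and assimilate the environmental feedback into 
future responses.

```python
import random

random.seed(0)

class AdaptiveEngine:
    """Toy knowledge engine: adjusts its response to each stimulus
    based on a hypothetical environmental feedback signal."""

    def __init__(self, actions):
        self.actions = actions
        self.values = {}     # (stimulus, action) -> learned value
        self.epsilon = 0.1   # occasional exploration of alternatives

    def respond(self, stimulus):
        # Explore occasionally; otherwise pick the best-known action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.values.get((stimulus, a), 0.0))

    def assimilate(self, stimulus, action, reward, rate=0.2):
        # Move the stored value a step towards the observed reward.
        key = (stimulus, action)
        old = self.values.get(key, 0.0)
        self.values[key] = old + rate * (reward - old)

# Hypothetical environment: each stimulus has one adaptive response.
correct = {"heat": "retreat", "light": "approach", "sound": "orient"}
engine = AdaptiveEngine(actions=["retreat", "approach", "orient"])

for _ in range(2000):
    stimulus = random.choice(list(correct))
    action = engine.respond(stimulus)
    reward = 1.0 if action == correct[stimulus] else 0.0
    engine.assimilate(stimulus, action, reward)
```

After enough cycles the engine's preferred responses track the environment's 
structure; in this trivial setting that is all "assimilation" amounts to, which 
is exactly why it is a building block and not AGI.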

Now, again, I'm not going to call that AGI, but I would be bold enough to point 
out that the resultant machine would offer up critical building blocks towards 
achieving AGI functionality.

Suppose, then, that we could empirically translate all knowledge this machine 
encountered and encoded into such a standardized language, and integrate it 
with the machine's knowledge base. When we saw this knowledge being 
synthesized, aged, and recombined into new contexts, in a traceable manner, 
would we then concede the makings of AGI?

Furthermore, when we observed this knowledge in states of regression, with new 
insights extracted from the source and then recombined as new knowledge to be 
synthesized with predictive contexts, would we concede AGI functionality? When 
such discoveries were taught back to the source, via autonomous updates, and so 
on progressively, would we reconsider AGI functionality?

I think Turing had great vision, but a vision nonetheless limited by the next 
step of foreseeable computer engineering: a psycho-social need for humans to 
relate to machines. AGI should not be tested by how well it mimics humans; it 
is way beyond the performance of average human beings. It should be tested by 
how far it can autonomously extend human-centric functionality, with 
self-motivated intent. To do so would mean to know humankind.

AGI then? Simplistically speaking, I envision a machine with the ability to 
autonomously spot an environmental situation, to motivate itself under its own 
power to act successfully upon that environmental factor, and to do so with 
increasing success. I may envision other machines too, even sub-species of 
machines.

But this one, empirically, must be hard-coded in its value set (its social 
conscience, if you will) to do good to mankind, meaning to apply its resources 
and comprehension to know what constitutes good for mankind and its 
environment, and to have the know-how to autonomously apply effective 
complexity to achieve the specific level of eco-systemic good in that 
particular context. To do so, of course, it would have to ensure its own 
survival and the survival of any goal-centric network of autonomous machines. 
Then, would we call that AGI?

Assuming the plethora of sensors available to enable humano-robotic features 
such as hearing, vision, speech, and touch, I would be bold enough to say that 
in such a machine, emotion (as a feedback-driven motivational competency) would 
become possible.

I can say this with certainty, because these systems models have already been 
completed, tried, and tested. The framework exists. The method exists. Many 
other modular components exist. My next job would be to assemble such a machine 
and achieve that which you so candidly assert to be impossible.

I would spend the rest of my life doing so, which is rather short in terms of 
scientific development. However, you have no right to discount the 23-odd years 
I've spent on this journey with a solid vision guiding me.

I do not blame you for your perspective, but based on what I've experienced 
during practical tests of these models, and subsequently submitted as 
well-disguised field research to the IEEE for review, and more, you are simply 
not properly informed yet. Not all knowledge is published or paraded plainly, 
meaning we can only see as far as our eyes can reach. In my case, I showed only 
that which I chose to show, for my own research reasons.

As a scientist dealing in the realm of possibility, let me challenge you then 
with your own words. What if this was meta-AGI? What if...

Rob



________________________________
From: Colin Hales via AGI <[email protected]>
Sent: Sunday, 19 August 2018 11:06 PM
To: AGI
Subject: Re: [agi] Knock. Knock. Knock. Knock. Knock.



On Sun., 19 Aug. 2018, 6:11 pm Nanograte Knowledge Technologies via AGI, 
<[email protected]> wrote:
Colin

You're right, of course. My point is: if AGI could not deal with this level of 
abstract, human communication of temporal, emotive states, how would it ever be 
taken seriously enough to solve highly abstract problems? Poets do indeed have 
their say.  😉

Next year, I hope to be living in Panama. From a stronger base, I'm quite 
certain I would then be able to start addressing the big issues in life in 
terms of a pragmatic AGI application.

For example, using AGI tech to help understand and mitigate the unfolding 
Fukushima nuclear disaster, which is affecting the globe. There are 
life-existential problems to come to grips with and find resolutions for 
regarding health, safety, food and water security, and environmental 
contamination. There's more than enough work for all of us interested in the 
field, and no time to waste. Not all of us would be able to run away to Mars.

I think that is primarily where AGI development should be heading.

Rob


Rob,
There are eras in science history where an entire thread of a particular 
science works diligently and thoroughly for a very long time, and then finds 
itself facing a shift that renders the entire era irrelevant or misguided.

AGI is one of those eras.

There are hundreds of folk like yourself who follow ideas. It's all great 
stuff, and you never know what will result.

In what you just said about your Panama plan and environmental work, you are 
presupposing the very things my little story questions.

You presuppose that AGI science has actually started.

For the reasons in the story (yes, it's all there!) AGI hasn't started yet. 
Instead, vast cliques of automation have been labelled AI and AGI and GAI and 
sometimes ASI.

All valuable work. But none of it is founded on a properly formed science, with 
a real empirical basis.

That's the problem. Everyone assumes that picking up a computer leads, 
potentially, to Artificial General Intelligence. A culture born in the 
'rapture' of the story.

I severely challenge that presupposition in the comedy and in my book.

Dressing a computer-based brain in a robot suit does not do empirical AGI. It 
is an elaborate form of theoretical neuroscience.

All these issues are in the comedy.

I am trying to get everyone to realise it. Real AGI science will only ever 
start when brain physics is put on the chips. It's big science, chip foundry 
work, and hasn't ever even been thought about until now, let alone started.

So when you specify that you're heading off to do the things you say, and I'm 
sure you'll do good work, I'm making a case that you shouldn't be calling it 
AGI.

And I don't just single out you. You're in great company. I mean everybody. The 
whole thing. Since 1956.

I am trying every means at my disposal to get this message out.

Nobody is working on AI. Nobody is working on AGI. Everybody is working on 
automation using theoretical models of the brain and calling it an artificial 
version of intelligence. When it's not.

This mistake has only ever happened in this one place. And it could only ever 
happen at the birth of computers. And it did.

Turning this around and correcting the science is what the story is about.

At these points in science history, those who turn up with the correction to 
practice don't usually have a good time. I can confirm that! It's no fun at 
all. Also, in these periods of shift, non-standard forms of communication can 
help dislodge the glasses of the received view.

And occasionally you get to send the whole thing up.

I hope your adventures are rewarding and impactful, but just imagine if you're 
mistaken in calling it AGI. Just ask yourself that 'What if ....'

Cheers
Colin






Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/Tc4740af26e8cd0ee-M882d8308dcf1a167814c284c>