After checking the http://old.reddit.com/r/OpenCog subreddit for the past
year or so, yesterday I had the pleasant surprise of seeing that "nickb"
had posted an announcement twenty hours earlier about an upcoming on-line
conference. Searching for more information, I came across
http://multiverseaccordingtoben.blogspot.com/2020/05/gtgi-general-theory-of-general.html
as a thought-provoking blogpost by BenG (Ben Goertzel). Late at night I
captured a local copy of the text so that I could write this response in
the morning.

In the second paragraph, BenG makes an intriguing mention of "consciousness
from first, second and third person perspectives",
which immediately reminded me of the http://ai.neocities.org/var.html#pov
variable which I use in my various AI Minds to indicate a "point of view"
(POV) which can refer to first, second or third person.
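As a minimal sketch of such a flag (the "pov" name and its first/second/third-person values come from the variable documentation linked above, but everything else here is hypothetical, not the actual Mentifex code):

```python
# Hypothetical sketch of a "point of view" (pov) variable: 1 = first
# person, 2 = second person, 3 = third person. The pronoun table and
# lookup logic are illustrative, not drawn from the AI Minds source.

PRONOUN_POV = {
    "i": 1, "me": 1, "we": 1, "us": 1,
    "you": 2,
    "he": 3, "she": 3, "it": 3, "they": 3,
}

def detect_pov(word: str) -> int:
    """Return the grammatical point of view (1, 2, or 3) for a pronoun,
    defaulting to third person for ordinary nouns."""
    return PRONOUN_POV.get(word.lower(), 3)
```

A sentence parser could then stamp each incoming pronoun with its pov value, so that later thought-generation knows whose perspective a stored idea was expressed from.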

I am pleased to see BenG in the fifth paragraph referring to a "new
conceptual model of mind", because I feel that AGI software needs to
include concepts.

In paragraph six BenG talks about "psi phenomena" -- which I shy away from
-- although I use "Psi" as the name of the conceptual array in my AI Minds.

Paragraph eleven mentions Occam's Razor, which always piques my interest
because I long ago memorized its statement in the original Latin: "Entia
non sunt multiplicanda praeter necessitatem" -- "Entities are not to be
multiplied beyond necessity."

Paragraph thirteen alludes to Charles Sanders Peirce, a name known to me
because an erstwhile acquaintance of mine, about whom a whole chapter was
written in "Mathematical Cranks", was often talking about Peirce when we
sat together in the long-gone "Last Exit" coffee shop on Brooklyn Avenue
near the University of Washington.

Oh, gee. Paragraph fourteen mentions Donald Knuth, whom the mathematical
crank was often talking about.

In paragraph 18 BenG says that "one can look at intelligence as the ability
to achieve complex goals in complex environments using limited resources."
Here is where I must diverge from BenG's approach to AGI. Instead of
worrying about goals for a living organism, since 1993 I have always tried
to build artificial Minds by creating first concepts and then associative
interactions among concepts.
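The idea of building first concepts and then associative interactions among them can be sketched roughly like this (all of the class and method names here are hypothetical illustrations, not the actual Mentifex code):

```python
# Hypothetical sketch: create concepts first, then let associative
# links among them accumulate weight as ideas co-occur.

class Concept:
    def __init__(self, name: str):
        self.name = name
        self.associations: dict["Concept", float] = {}

    def associate(self, other: "Concept", weight: float = 1.0) -> None:
        """Strengthen the bidirectional associative link between
        this concept and another one."""
        self.associations[other] = self.associations.get(other, 0.0) + weight
        other.associations[self] = other.associations.get(self, 0.0) + weight

cat = Concept("cat")
animal = Concept("animal")
cat.associate(animal)       # first encounter
cat.associate(animal, 0.5)  # a repeat encounter strengthens the link
```

Thinking, under this view, is then a matter of activation spreading along the strongest associative links, rather than of explicit goal-pursuit.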

Paragraph 21 talks about "CoDDs, Combinatorial Decision
Directed-acyclic-graphs" as a "computational model", and here I must assert
that the Mentifex approach to building AI Minds that think in English,
German, Latin or Russian is based on the gradual accretion (adding on) of
software implementations of concepts and their associative interactions
under the guidance of Natural Language Processing (NLP), a coin whose
obverse side is Natural Language Understanding (NLU) --
http://ai.neocities.org/NLU.html.
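That gradual accretion of language-independent concepts under NLP might be illustrated by a toy lexicon like the following (purely an assumption for illustration, not how the Mentifex AI Minds store their vocabulary):

```python
# Toy illustration: one language-independent concept gradually
# accreting word forms from several natural languages.

lexicon: dict[str, dict[str, str]] = {}  # concept -> {language: word}

def accrete(concept: str, language: str, word: str) -> None:
    """Attach a natural-language word form to an underlying concept."""
    lexicon.setdefault(concept, {})[language] = word

accrete("water", "English", "water")
accrete("water", "German", "Wasser")
accrete("water", "Latin", "aqua")
accrete("water", "Russian", "вода")
```

The point of the sketch is that the concept itself stays language-neutral, while each supported language merely supplies surface forms for expressing it.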

Paragraph 32 talks about the Cognitive Synergy "which underpins the OpenCog
AGI design". Here I would like to suggest that Netizens who hunger and
thirst for TrueAGI should build elementary Minds out of concepts and NLP
structures, then let such dawn-of-AGI entities compete a la "survival of
the fittest" to evolve all the more advanced goal-seeking mechanisms. I
mean, we can create software that thinks, then let the thinking software
evolve into advanced goal-seeking software.

Oh! Paragraph 34 accords with the Mentifex approach inasmuch as BenG says
that "These specialized cognitive algorithms must be learned/evolved based
on multiple constraints...."

In paragraph 35 BenG describes the "chicken-egg problem -- which is solved
by human AGI system designers performing the first round of the iteration."
I want the Mentifex AI Minds to be the egg which evolves into the AGI
chicken.

https://www.mail-archive.com/agi@agi.topicbox.com/msg05555.html
https://www.mail-archive.com/agi@agi.topicbox.com -- list of messages.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T295a3bd072ac1b6f-Mb8a9e23179c5107da6e53b48
Delivery options: https://agi.topicbox.com/groups/agi/subscription
