For the first time that I can remember, AI Magazine can be accessed for
free.  Since membership costs more than $140 a year and the magazine comes
out four times a year, that meant that until now you would have had to pay
more than $35 an issue if you wanted to read the articles.  But the bad news
(from my point of view) is that the articles look like the same old same
old.  I haven't read many of them, but it looks like I would have to read
quite a few to find one that interested me.  The stories are either
retellings of the classics or something from the machine learning
underground.

Forty years ago I thought that intelligence was all about building knowledge.
While learning through similarity, similarity of form, case-based reasoning,
and weighted reasoning looked like keys to starting the process, the
essence of human learning is building on what you already know.
I glanced at some articles about transference of knowledge (OK, I looked at
the abstracts and read one article), and while this kind of thing might seem
like it could be considered the basis of building on what you already know,
it isn't.  Why not, you ask?  (OK, you didn't actually ask that.)  While
building on what you already know may seem like a mundane (and even tired)
theory of human learning, it is based on building on competency.  And that
is exactly what most AI applications lack.  A self-driving car may be able
to avoid most pedestrians, but it does not have insight into the reasons why
it should do this.  AI cannot make those kinds of connections, and the
insipid tacking-on of a few sentences about the situation does not solve
the problem.  (OK, my theory of text-based AGI says that it could if it
were powerful enough to integrate knowledge insightfully, but that is the
whole point of what I am saying.)

So is this just a case where knowledge about a thing has to be associated
with numerous pieces of knowledge about other things?  Well, that is part of
it.  But the program also has to have some insight about the thing it knows
something about.  It has to exhibit some genuine creativity that
demonstrates rational insight about the subject matter.  And it has to be
able to understand some of the reasons behind the thing.  So I believe that
competency at one level of knowledge is absolutely necessary for AGI if it
is to be capable of building on that knowledge.

Now, I do not know how to create an AGI program, even though I seem like I
am always saying that I know what needs to be done.  But, the thing is, I
can simulate this kind of situation for an AGI program and then build the
program around the simulations.  I would start off with a number of
mini-domains of knowledge which could be used to explain reasons in the
other domains and which were necessary to form a genuine (but simplistic)
model of knowledge as I envision it.  These mini-domains would have to be
very carefully planned.  Then I could add more mini-domains which could be
used to expand the mini-domains that had already been established.  By
upgrading the AGI program as the simulation became more sophisticated, I
could test whether this theory has any value and whether it can be used to
develop something that seems like genuine knowledge.  I could test whether
the program could gain some genuine competency in these restricted,
interrelated domains.  And I could see what happened as the
interrelations formed more and more complex systems.  It sounds like a
pretty good plan, and it sounds like it is something that I can do.
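To make the idea concrete, here is one possible toy sketch of the mini-domain setup described above. This is my own illustrative reading, not the author's actual program: the class names (MiniDomain, Simulation), the "reasons" links, and the crude competency test are all hypothetical assumptions about how such a simulation might be structured, where each mini-domain holds simple facts plus cross-domain links that explain why a fact holds.

```python
# Hypothetical sketch of interrelated mini-domains (not Bromer's design).
# Each domain stores facts; "reasons" link a fact to supporting facts
# in other, already-established domains.

from dataclasses import dataclass, field

@dataclass
class MiniDomain:
    name: str
    facts: set = field(default_factory=set)
    # maps a fact -> list of (other_domain_name, supporting_fact) pairs
    reasons: dict = field(default_factory=dict)

class Simulation:
    def __init__(self):
        self.domains = {}

    def add_domain(self, domain):
        self.domains[domain.name] = domain

    def explain(self, domain_name, fact):
        """Return the cross-domain reasons behind a fact, or [] if none."""
        return self.domains[domain_name].reasons.get(fact, [])

    def is_competent(self, domain_name):
        """A crude stand-in for 'competency': every fact in the domain
        has at least one explanation grounded in another domain."""
        domain = self.domains[domain_name]
        return all(self.explain(domain_name, f) for f in domain.facts)

# Example: a 'traffic' mini-domain whose one fact is explained by a
# 'safety' mini-domain that was established first.
sim = Simulation()
safety = MiniDomain("safety", facts={"collisions harm people"})
traffic = MiniDomain("traffic", facts={"avoid pedestrians"})
traffic.reasons["avoid pedestrians"] = [("safety", "collisions harm people")]
sim.add_domain(safety)
sim.add_domain(traffic)
print(sim.is_competent("traffic"))  # True: the fact has a reason elsewhere
```

Adding a new mini-domain would then mean adding both its facts and its reason-links into the domains already present, so the interrelations grow as the simulation grows.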
Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now