The questions are whether any independent intelligence is possible for a
computer program at all, and whether a simple program can show some of the
higher orders of independent general intelligence.  I think that computers
have already done enough smart things to answer the first question.  The
second question is the one I cannot answer yet.  I feel that emergence is
something we could see in simple AI programs if they are designed cleverly
enough.  With a narrow form of reasoning you will see productions that are
completely determined by the input.  For general reasoning you will need to
see something a little different.  You will first need to see that the
program has a general grasp of the subject matter, so that it can not only
produce a good answer to a narrow question but also show some insight about
what it did.  The simplest example of this is for the program to act as if
it were aware of what it did.  As this pseudo-awareness becomes more
insightful, as it gains more knowledge about what it did, it will become
aware.  This isn't the sci-fi "aware" (the powerful sense in which Skynet
became *aware*) but a more fundamental kind of awareness.  For example,
what I am doing right now is writing about a possible route toward AGI in
an online discussion group.  That awareness lets me respond to other kinds
of statements about what I am doing: not only can I produce some written
thoughts about AGI, I can also make statements about the act of producing
them.  I am not just able to produce output; I have some simple awareness
of the fact that I am producing it.
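
To make the distinction concrete, here is a rough Python sketch -- all of
it illustrative, none of it a real system: a narrow reasoner whose
productions are completely fixed by the input, and a thin wrapper that also
produces a statement about what it just did.

# Narrow reasoning: the production is completely fixed by the input.
PRODUCTIONS = {"2 + 2": "4", "capital of France": "Paris"}

def narrow_answer(question: str) -> str:
    return PRODUCTIONS.get(question, "I don't know.")

# One step past that: the program also produces a statement about
# what it just did -- the basic pseudo-awareness described above.
def aware_answer(question: str) -> tuple:
    answer = narrow_answer(question)
    report = "I looked up %r in my production table and produced %r." % (question, answer)
    return answer, report

The wrapper is trivial, of course; the point is only that the second
function produces output plus a statement about the act of producing it.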

So, by creating a network of "ideas" and fitting those ideas into many
different kinds of situations, the computer program could hypothetically
become somewhat aware (in this basic sense) of what it is doing.  However,
this model is very difficult.  For example, suppose the program had some
"ideas" that it believed it was using correctly, judging by how the user
had reacted to them in the past.  If a new idea were input, and the program
decided that the new idea concerned something it had previously learned
about, how would this new idea be integrated into the previously learned
ones?  Does the new idea supplement some ideas that had been learned
before, or is it meant to correct and replace an idea that was wrong?  Was
the old idea merely wrong, or was it complete nonsense?  A new idea might
even present another theory about the subject matter, in which case the old
ideas are not being proclaimed wrong; the new idea is intended as an
alternative.  These kinds of possibilities can be reduced to simple
relations, just like the "rule" in rule-based AI that Stan was suggesting.
In fact a rule is one possibility.  Another possible relation is a relation
between idea-objects, which is like a rule in some ways but not completely
the same.  A new statement may be intended to represent an alternative
selection for a rule or relation, or it may be intended as a correction.  A
correction may be hard (the old idea was completely wrong) or soft (some
points were wrong but some were right).  Of course the teacher may make a
mistake, but if the program is able to learn to make some inferences based
on a greater awareness, then it might one day be able to find a possible
conflict in what the teacher has presented.
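
Here is a minimal Python sketch of such a network, just to pin down the
relations I listed above.  Every name in it (Idea, RelationKind, integrate)
is my own illustration, not an existing system: a new idea can supplement
an old one, correct it hard or soft, offer an alternative theory, or act
like a rule.

from dataclasses import dataclass, field
from enum import Enum, auto

class RelationKind(Enum):
    SUPPLEMENT = auto()        # adds to an idea learned earlier
    HARD_CORRECTION = auto()   # the old idea was completely wrong
    SOFT_CORRECTION = auto()   # some points wrong, some points right
    ALTERNATIVE = auto()       # a competing theory; old idea not proclaimed wrong
    RULE = auto()              # a fixed if-then relation, as in rule-based AI

@dataclass
class Idea:
    text: str
    confidence: float = 0.5                        # adjusted by the teacher's reactions
    relations: list = field(default_factory=list)  # (RelationKind, Idea) pairs

def integrate(network: list, new: Idea, old: Idea, kind: RelationKind) -> None:
    """Fold a new idea into the network according to its relation to an old one."""
    if kind is RelationKind.HARD_CORRECTION:
        old.confidence = 0.0     # completely wrong: effectively retired
    elif kind is RelationKind.SOFT_CORRECTION:
        old.confidence *= 0.5    # partly wrong: kept, but weakened
    # SUPPLEMENT, ALTERNATIVE, and RULE leave the old idea's standing untouched
    new.relations.append((kind, old))
    network.append(new)

The hard question, which this sketch dodges entirely, is how the program
decides which kind of relation it is looking at in the first place.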

Obviously, it is not going to work right away.  However, by putting an
abstract of its internal reasoning on the screen alongside the text-based
IO that I want to use, then if it ever comes close to seeing relations in
the input that I want it to see, I can emphasize those relations by using
the mouse to select phrases or snippets of its reasoning (or of the
material that I presented to it).  Although this is not going to be true
AGI, I will be able to see whether my ideas make any sense in a working
model.  Then I would hope to find ways to emphasize relations without the
pointing and clicking, while still using the abstract of its internal
reasoning.
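
A rough sketch of what one teaching cycle might look like, with typed
snippet selection standing in for the mouse.  The names respond(),
relations_mentioning(), and weight are made up for illustration; nothing
here is an existing interface.

def teaching_cycle(program, teacher_input: str) -> None:
    """One hypothetical cycle: show the reasoning abstract alongside the
    text IO, let the teacher pick snippets, and emphasize what was picked."""
    response, reasoning_abstract = program.respond(teacher_input)
    print(response)
    for step in reasoning_abstract:
        print("  [reasoning]", step)
    # typed selection stands in for selecting phrases with the mouse
    picks = input("snippets to emphasize (comma-separated): ").split(",")
    for snippet in (p.strip() for p in picks if p.strip()):
        for relation in program.relations_mentioning(snippet):
            relation.weight += 1.0   # emphasized relations count for more next time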

I hope I am making sense.

Jim Bromer.


