I have decided to start with a very simple experiment. The program will at
first take random samples from the text that I input, and see if it can find
the sample repeated in some subsequent text. So, to make this clear, it will
presumably detect that smaller samples, like individual letters, are repeated
more often than longer text and it will pay more attention (at first) to
snippets that are repeated more often. If it finds that a sample is repeated
it will start building on that by taking larger samples that appear around the
repeated text in order to build on the earlier work. It will then build
selection sets of text that appear in close proximity to the repeated snippets.
Of course it will have to try to optimize the representation of these
relations in some way. Once I work out the management details of this I will
then program it to try to react to my responses.
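The first stage described above could be sketched roughly as follows. This is only a minimal illustration under my own assumptions; the function names, sample sizes, and window widths are invented for the example and are not part of any existing program:

```python
import random
from collections import Counter

def find_repeated_samples(text, num_samples=200, min_len=1, max_len=6):
    """Take random snippets of the input text and count how often each
    snippet recurs in the text.  Short snippets (single letters) will
    naturally be found repeated more often than longer ones."""
    counts = Counter()
    for _ in range(num_samples):
        length = random.randint(min_len, max_len)
        start = random.randrange(len(text) - length)
        snippet = text[start:start + length]
        counts[snippet] = text.count(snippet)
    return counts

def expand_around(text, snippet, margin=3):
    """For each occurrence of a repeated snippet, take a larger sample
    of the surrounding text, to build on the earlier detection."""
    expansions = []
    start = text.find(snippet)
    while start != -1:
        lo = max(0, start - margin)
        hi = min(len(text), start + len(snippet) + margin)
        expansions.append(text[lo:hi])
        start = text.find(snippet, start + 1)
    return expansions

def nearby_text(text, snippet, window=8):
    """Build a selection set: the text appearing just before and just
    after each occurrence of the repeated snippet."""
    nearby = set()
    start = text.find(snippet)
    while start != -1:
        before = text[max(0, start - window):start]
        after = text[start + len(snippet):start + len(snippet) + window]
        if before:
            nearby.add(before)
        if after:
            nearby.add(after)
        start = text.find(snippet, start + 1)
    return nearby
```

How the counts, expansions, and selection sets should be represented and pruned is exactly the optimization question left open above.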
Obviously, this isn't AGI. You might see that this potentially has some
similarity to the measures of frequency of object usage and co-occurrence of
objects that you can find in some industrial paradigms.
How is it different? It is obviously something that is much more primitive.
It is not designed just to measure word frequency and word co-occurrence; it is
designed to be a test of some AI principles that I think are important and to
help me better understand how I might further develop the program to get it
closer to AGI. For example, how could a program that is operating on the level
of an extremely primitive syntactic analysis be optimized to make it more ready
to detect meaningful words and phrases? This is a problem that can only be
studied by starting at a primitive level. (Primitive relative to the IO
modality that is being used.)
The next step, getting it to learn how to react to my responses, is a major AGI
problem because the range of the possible purposes of my responses is too great
to define. So, I believe that the ability to improve on my success at this
stage of the problem will be a true AGI indicator, so long as it
is based on learning using primitives like the kind that I mentioned.
Now I am implying that meaning can be derived from syntactic interactions. And
in order to make this possible, the program will have to be given some
primitives which are intended to internally represent or mark a relation of
meaning as well as the primitives which can be used to mark relations of
syntax. The program will have to start off by guessing which words, snippets and
interactions of text should be associated with these markers. Then as I guess
which kinds of markers are being associated with the text being exchanged I
hope to be able to guide the program to make it more likely to acquire a
substantial basis for further learning. In the worst case I could turn this
program into a different kind of programming device, but my hope is that as
I become more aware of how general knowledge is built from interactions I will
be able to do better than that. However, I already have a key insight about
this. In order to get the program to learn at a higher level than being
programmed it has to be given some guidance about how it can go further than
the initial potential for learning that it was given in the programming.
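One way the marker-guessing and guidance loop could be set up is sketched below. Again, this is only my illustration under stated assumptions: the two marker kinds, the weight scheme, and all names are hypothetical, and the "feedback" stands in for my guesses about which markers are being associated with the exchanged text:

```python
import random
from collections import defaultdict

class MarkerLearner:
    """Associates text snippets with either a 'syntax' or a 'meaning'
    marker.  It starts from uninformed guesses and shifts its weights
    according to external feedback, so guidance can make some
    associations more likely than others."""

    def __init__(self, markers=("syntax", "meaning")):
        self.markers = markers
        # weights[snippet][marker] -> strength of that association
        self.weights = defaultdict(lambda: {m: 1.0 for m in markers})

    def guess(self, snippet):
        """Guess a marker for the snippet, weighted by past feedback;
        with equal weights this is a pure guess."""
        w = self.weights[snippet]
        r = random.uniform(0, sum(w.values()))
        for marker, weight in w.items():
            r -= weight
            if r <= 0:
                return marker
        return self.markers[-1]

    def feedback(self, snippet, marker, reward):
        """Reinforce (positive reward) or weaken (negative reward) an
        association, keeping the weight above a small floor."""
        self.weights[snippet][marker] = max(
            0.1, self.weights[snippet][marker] + reward)
```

The point of the sketch is only that the program's initial potential is a set of markers plus a guessing mechanism, and the guidance is whatever shifts those weights beyond what was programmed in.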
This may sound as though I am claiming that there is a fine line between
programming a computer and educating it, but if there is a line it is going to
be a very smudgy line (which, ironically enough, is a meaning of the phrase "a
fine line").
This is a well thought out plan. It might not work but I won't know until I
try it.
Jim Bromer
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now