From: [email protected]
"Building on previous learning" is kind of vague. Doesn't any machine learning
algorithm do that? How will you test your program, measure the results, and
compare it to other approaches to solving the same problems?
-----------
Are you saying that there are machine learning algorithms that constitute
working AGI programs? What are their characteristics? Why do they fail to
show low-end scalability? (I get that any contemporary AGI program is going to
have a limited range due to complexity. But tell me about a working AGI
program that shows the ability to build on its previous learning within the
limits of low-end scalability.)

A program which uses a numerical range for a specified kind-of-problem can
exhibit increasing accuracy as it learns from experience. However, most of us
would agree that that is narrow AI. The claim for the general efficacy of
machine learning algorithms is that they can be applied to a wide variety of
problems. A simple numerical-range type of learning algorithm could be
generalized and then also applied to a wide variety of problems. Maybe I'm
being a little superficial about this, but from where I am right now, it looks
like the only difference between a machine learning algorithm and a simple
numerical-range learning algorithm is that the ML algorithm is one step up the
abstraction ladder.

The sort of thing I am talking about has to be thought about out loud (so to
speak). It is very unlikely that you are going to help solve the problem
without being clear about it. The issue of meta-awareness is one that is very
well known in these groups.
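To make the "numerical range" learner mentioned above concrete, here is a minimal sketch (every name in it is illustrative, not from any actual program): a learner that estimates a hidden numeric threshold by shrinking an interval around it as labeled examples arrive, so its accuracy increases with experience while the method remains tied to one kind-of-problem.

```python
import random

# Minimal sketch of a "numerical range" learner (narrow AI). It
# estimates a hidden numeric threshold by shrinking an interval
# around it as labeled examples arrive. Accuracy improves with
# experience, but the method applies only to this one kind-of-problem.

class RangeLearner:
    def __init__(self, low=0.0, high=100.0):
        self.low, self.high = low, high  # current interval estimate

    def predict(self, x):
        # Guess using the midpoint of the current interval.
        return x >= (self.low + self.high) / 2

    def learn(self, x, label):
        # label is True when x is at or above the hidden threshold.
        # Shrink the interval while keeping the threshold inside it.
        if label and x < self.high:
            self.high = x
        elif not label and x > self.low:
            self.low = x

# Toy run against a hidden threshold of 42.
random.seed(0)
learner = RangeLearner()
for _ in range(200):
    x = random.uniform(0, 100)
    learner.learn(x, x >= 42)
# learner.low and learner.high now bracket 42 tightly.
```

The point of the sketch is only that "building on previous learning" here never leaves the interval; nothing in it transfers to a different kind-of-problem.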
A few machine learning programs are designed to monitor themselves. I would
guess that, at best, that meta-awareness is limited to well-defined
numerical-range learning algorithms (narrow AI) applied to the processes being
monitored, but I don't recall ever hearing of an AGI program that did even
that. (Programs that are designed to monitor a complex of machinery will have
some systems that are designed to monitor the program itself, so I would say
it is obvious that some meta-awareness of sorts must be built into some AI
designs. But in that case the situation demands that any meta-awareness be
secure from false confidence, and so the meta-awareness would not be an AGI
design.)

Even though I have not specified an answer to your question, I have shed some
light on the problem. If a reader wants to dismiss it because he has thought
of meta-awareness before, well, what can I say? The issue is not whether you
have thought about it but whether you have actually incorporated it into your
program in AGI form, or at least have an idea in mind of how you might do
that.

So while a machine learning algorithm can 'build on previous learning' to
improve a narrow response to the kind of problem it is used on, it cannot
generalize on that result to see how that learning might be applied to a
different kind of problem. Using the AGI meta-awareness example (a
meta-awareness that is AGI and not just some narrow monitoring algorithm) as a
guide, you can see that the problem has something to do with the AGI program
learning how to use something that it has learned in one kind-of-problem to
solve another kind-of-problem. Simple analogous projection is one
kind-of-solution to this kind-of-barrier, but again it is almost always done
using a predefined method or, at most, with some narrow-AI kinds of
algorithms.

So it is not enough for the program to be aimed at a fairly narrow
kind-of-problem and come up with a good response to that range of problems; it
also has to be able to apply its experiences to other kinds of problems. But
it is not enough to have one or two narrow methods to project this knowledge
onto a different kind of problem; there must be some AGI actions guiding the
process.

Now some problems are more difficult than others, and a solution to a
difficult problem might require solutions to numerous other kinds of problems.
So if an algorithm seemed to solve a difficult problem, you might say that it
was equivalent to my qualification for an AGI program. This equivalency
argument is ok, but we need to design an AGI program that can actually put the
pieces together (to some extent) and then demonstrate that it can do this in a
variety of kinds-of-applied-problem-situations. I want to take the program up
the next step of that (particular) abstraction ladder.

I can go on and on if you want me to, and eventually I will get closer and
closer to a conjectured solution for your specific questions. But I have to
make sure that we are on the same page.

Jim Bromer
Date: Sat, 3 Aug 2013 16:30:46 -0400
Subject: Re: [agi] A Very Simple AGI Project
From: [email protected]
To: [email protected]
Thank you for telling us what your program won't do. Maybe you can tell us what
it will do.
"Building on previous learning" is kind of vague. Doesn't any machine learning
algorithm do that? How will you test your program, measure the results, and
compare it to other approaches to solving the same problems?
On Sat, Aug 3, 2013 at 9:06 AM, Jim Bromer <[email protected]> wrote:
My AGI project is going to be an application of a simple AGI theory that I
have. While the database management part would not be simple to read, it is
simple (and amateurish) compared to an industrial db management program. I am
not claiming that I have a solution for AGI complexity. My program is intended
to show that it is capable of making more progress than contemporary AGI
programs. It won't play Jeopardy or chess or anything like that, but most of
us agree that those programs are not true AGI. My simple AGI project is
intended to show preliminary feasibility.

One thing missing in most AGI programs is the ability to build on previous
learning to solve new kinds of problems. So, while we know that a Bayesian
character recognition program could be combined with other linguistic
recognition programs, we don't see the character recognition program
continuing on to learn to recognize spoken words and phrases, and then using
these abilities to begin to understand simple sentences. (My first simple AGI
program is not going to be able to learn to recognize handwritten characters
or spoken words, but I am claiming that if it works then I would be able to
adapt it for other kinds of problems.)
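For readers unfamiliar with the Bayesian character recognition component mentioned above, here is a much-simplified, hedged sketch of the idea as a naive Bayes classifier over binary pixel features (the glyphs and names are invented for illustration; no real OCR system is this small):

```python
import math
from collections import defaultdict

# Much-simplified sketch of a Bayesian character recognizer: a naive
# Bayes classifier over binary pixel features. Real OCR systems are
# far more involved; this only illustrates the idea.

def train(examples):
    # examples: list of (label, pixels), pixels a tuple of 0/1 values.
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for label, pixels in examples:
        totals[label] += 1
        for i, p in enumerate(pixels):
            counts[label][(i, p)] += 1
    return counts, totals

def classify(counts, totals, pixels):
    # Pick the label maximizing log P(label) + sum log P(pixel_i | label),
    # with add-one (Laplace) smoothing.
    n = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, total in totals.items():
        score = math.log(total / n)
        for i, p in enumerate(pixels):
            score += math.log((counts[label][(i, p)] + 1) / (total + 2))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy 3x3 glyphs: a vertical bar "I" and a horizontal bar "-".
I_GLYPH = (0, 1, 0, 0, 1, 0, 0, 1, 0)
DASH = (0, 0, 0, 1, 1, 1, 0, 0, 0)
counts, totals = train([("I", I_GLYPH), ("I", I_GLYPH),
                        ("-", DASH), ("-", DASH)])
noisy_i = (0, 1, 0, 0, 1, 0, 0, 1, 1)  # an "I" with one flipped pixel
print(classify(counts, totals, noisy_i))  # prints I
```

Note how narrow this is: the learned statistics are about pixels only, and nothing in the program could carry them over to spoken words or sentences, which is exactly the gap described above.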
So anyway, just in case someone still doesn't understand what I am saying: my
simple AGI program will not achieve true human-level intelligence, but it is
intended to demonstrate that it can build on what it has previously learned to
continue learning. So right before it is overwhelmed by complexity, I am
hoping that it will go a little further than contemporary AGI programs, in
order to demonstrate preliminary feasibility for a structural learning program
which unquestionably builds on previous learning. This ability to implicitly
and explicitly build on previous learning to adapt to new kinds of problems is
a fundamental ability of General Intelligence, and that is what I am aiming to
create in my Simple AGI project. Or at least I am going to test my theories to
see if they are strong enough for this simple AGI project.
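The "build on previous learning" ability described above can be made concrete in a deliberately trivial form (all names and patterns here are hypothetical, and this does not capture AGI-style generalization): a model first learns individual symbols, then reuses that knowledge, unchanged, on a new kind of task it was never directly trained for.

```python
# Illustrative-only sketch of "building on previous learning":
# a model first learns individual symbols, then reuses that learning,
# with no retraining, to recognize sequences (words). All names and
# patterns are hypothetical.

class Symbols:
    def __init__(self):
        self.known = {}  # pattern -> symbol name

    def learn(self, pattern, name):
        self.known[pattern] = name

    def recognize(self, pattern):
        return self.known.get(pattern)

class Words:
    # Builds on a previously trained Symbols model instead of
    # learning from scratch.
    def __init__(self, symbols):
        self.symbols = symbols

    def recognize(self, patterns):
        # Unknown symbols are marked "?" rather than guessed.
        return "".join(self.symbols.recognize(p) or "?" for p in patterns)

# Stage 1: learn symbols.
sym = Symbols()
for pattern, name in [("..-", "a"), ("-..", "b"), ("---", "c")]:
    sym.learn(pattern, name)

# Stage 2: reuse that learning on a new kind of task (words),
# with no additional symbol training.
words = Words(sym)
print(words.recognize(["-..", "..-", "---"]))  # prints bac
```

Here the reuse is hard-wired by the programmer, which is exactly the limitation the post is pointing at: a genuine AGI design would have to discover such a composition itself.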
Jim Bromer
--
Matt Mahoney, [email protected]
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription:
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com