I cannot give an easy example of the problem where Bayesian systems are unable
to distinguish between situations where information from different sources can
be combined and situations where it cannot, because I am not familiar enough
with the language of statistics to use simple references that might point you
in the direction of my ideas.
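Still, a minimal toy sketch of the kind of situation I mean may help (the numbers and function are illustrative assumptions of mine, not drawn from any particular system): when two evidence feeds are secretly one composite source, a Bayesian update that assumes they are independent double-counts the evidence and becomes overconfident.

```python
# Toy sketch: double-counting a composite source in a Bayesian update.
# All numbers are illustrative assumptions.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|e) from prior P(H) and the two likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

prior = 0.5                     # P(H) before any evidence
p_e_h, p_e_not_h = 0.8, 0.2     # the sensor's assumed reliability

# Correct: the underlying reading is observed once.
single = bayes_update(prior, p_e_h, p_e_not_h)      # 0.800

# Wrong: the same reading arrives through two superficially distinct
# feeds, and the formula, unaware that the source is a composite,
# applies the update twice.
double = bayes_update(single, p_e_h, p_e_not_h)     # ~0.941

print(f"counted once:  {single:.3f}")
print(f"counted twice: {double:.3f}")
```

Nothing in the update formula itself signals that the second "feed" added no new information; detecting that requires knowledge about the data sources that lies outside the formula.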
When a human being uses applied statistics he can combine his general knowledge
of the world with his specialized knowledge of the science of statistics to
find ways to make his research more effective and insightful. But in
statistics-based AGI we are asking our automated computer programs to use
statistical analysis to learn, effectively, how to create better models of
reality. The belief that this makes sense as a basis for AGI strikes me as
absurd. So while Bayesian methods, and weighted reasoning in general, are
almost certainly important tools for AGI, they do not constitute a sound basis
- in themselves - for methods that can produce self-improving insights about
the world. An analogous criticism may be applied to any kind of mathematical or
logical method. I am not saying that math and logic have no place in AGI, or
anything like that; I am saying that we have to come up with other (effective)
algorithms to deal with the problem of getting automated learning systems to
use these tools effectively.

Suppose that some composite source of data, which was expressed in, or could be
derived into, an appropriate Bayesian form, was used to derive theories about
the world. Without knowing that the data source was a composite, there is
little that the Bayesian (theoretical or abstract) formula could, in itself, do
to detect it. So we have to rely on other methods to find the composition of
extracted streams of data which superficially seem to be integral. But there
are other kinds of problems as well. Most AGI data is not in an appropriate
Bayesian form, so here the Bayesian-AGI proponents would typically find a
substitute characterization of a situation so that the data could be expressed
in Bayesian form. An analogous criticism can be directed at any mathematical or
logical form used as a presumptive AGI method. I am amazed at how far AI has
advanced using these clunky models, but it is clear to me that the fundamental
failure of narrow methods to serve as the basis for AGI can be found right
here.

When we try to design an AGI program we are not locked into dealing with the IO
data environment in just one way. We can - and do - try various methods to
enhance that data. Most of the methods in common use are direct
recharacterizations of the data; they are not explicitly based on theoretical
recharacterizations that the program has acquired itself. So if you are dealing
with images you might try to increase the contrast, or use a Gaussian method to
detect the edges of the shapes in the image. Or, alternatively, you might
imagine a neural network which is trained to detect edges. These are not
acquired theoretical recharacterizations, because they do not rely on the AGI
program creating its own theories (with or without outside influences) which it
might then apply to the problem.

When you start thinking about the problem from this basis, a number of familiar
parts of problem-solving algorithms start to change. The programmer does not
have a clear distinction between the parts of an acquired theory, since all the
possible theories that might be acquired cannot be foreseen. And particular
data from an extraction taken from the IO data environment might be applied
directly to parts of the extraction in one acquired (or learned) method, but
only indirectly in another.

Suppose that an AGI program developed a 'theory' that it should use different
video analysis methods when the light reaching the camera is bright than when
it is dim. This distinction might be derived from the static parts of the
background of the scene as seen at different times of the day. So the overall
light level of the static background might not be used as the direct object of
further analysis (in this particular part of the algorithm) but as a
conditional. Now suppose the program subsequently realizes that this method
does not always work. Perhaps some details of some image objects go into shadow
and then come out, and the program might act on the observation of this
apparent change to investigate it further. At this point, parts of the static
background might be used in a conditional, and parts as comparative objects to
compare with the target object (to compare shadows, for example).

Many people have pointed out that they do not think that babies create
theories, so perhaps the phrase "acquired theoretical recharacterization" does
not accurately describe the kind of thing that I am thinking about. I believe
that human beings develop implicit theories, or theory-like objects of thought.
It was once said that neural networks work the way the mind works, and you
might say that neural networks develop implicit theory-like relations. And I
believe that Bayesian networks are also able to develop implicit-theory-like
relations.
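The bright/dim video 'theory' described earlier can be reduced to a miniature sketch (the threshold and the method names are my illustrative assumptions, not anything from a real system): the static background's light level is used as a conditional that selects an analysis method, not as the object of analysis itself.

```python
# Miniature of the bright/dim 'theory': brightness acts as a
# conditional selecting a method. Threshold and method names are
# illustrative assumptions.

def analyze_bright(frame):
    return "fine-detail analysis"

def analyze_dim(frame):
    return "high-contrast / motion-only analysis"

def analyze(frame, background_brightness, threshold=0.5):
    # The acquired 'theory': branch on the static background's light level.
    if background_brightness > threshold:
        return analyze_bright(frame)
    return analyze_dim(frame)

print(analyze(None, background_brightness=0.8))
print(analyze(None, background_brightness=0.2))
```

When the theory is later refined (the shadow case above), parts of the same background data would also appear inside the selected method as objects of comparison, no longer only in the conditional.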
What is different in my theory of AGI is that the parts of the theory-like
object and the implementations of the theory-like functions must in some cases
be distinct and open to precise activation by the artificial mind, even if this
internal operation is not fully available to the mind at a level of
meta-awareness. In this model, values or references may in some cases be
combined directly, they might be combined indirectly, they might only be
combined as distinct parts of a thought-object (or thought-like algorithm), or
they might not be combinable at all.

To the best of my memory I have never had this conversation with an enthusiast
of weighted reasoning. This lack of interest might be because I am working on
an idea which is still new enough to be a little elusive, or it might be due
directly to the mistaken belief that weighted reasoning is the solution to the
inadequacy of discrete reasoning paradigms.

Jim Bromer
Date: Sat, 20 Jul 2013 12:30:41 -0500
From: [email protected]
To: [email protected]
Subject: Re: [agi] A Very Simple AGI Project
On 7/20/2013 9:14 AM, Jim Bromer wrote:
Text seems brittle because it was tried and it did
not work. But neither did visual, robotic, or
other sensor-based AGI. If the brittleness criticism were based
on a lack of substantial achievement in spite of the
effort, then it would have to be applied
to all AGI modalities. Of course knowledge that is gathered
only through text is going to be brittle in the sense that it
would not be able to achieve the range of understanding that
human beings can achieve, but the use of cell phones or
robotics is not going to create genuine human experiences
either.
The only conclusion, based on the acceptance of a general lack
of substantial advancement in the field, is that we do not have
basic AGI because computers cannot achieve general intelligence,
or because general intelligence needs even more advanced hardware
than we have, or because there has been something important
missing in AGI research.
Something that Bayesian enthusiasts never talk about in these
discussion groups is how a mostly independent learning
system can make the distinction between those kinds of situations
where Bayesian methods can be used to combine different sources
of data and those cases where different sources of weighted
values cannot be combined, or have to be combined in a certain
way.
I wonder if you could describe an example of what you mean here?
As you may know, the Microsoft Troubleshooter uses a Bayesian
approach...
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now