Piaget Modeler (I can't remember your real name),

This is my insight into the absurdity of using "prediction" as a key AGI
concept of verification.  Like all the other narrow tools, it works well
in some situations (those that are rightly or wrongly called "narrow" in
this group) while failing completely in others.  The problem is that
confirmation through prediction has to be predicated on absolute knowledge
(about the correctness of the thing predicted), which makes the method
contradictory when it is applied under uncertainty, or in any conditions
where uncertainty can be found.
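To make that concrete, here is a toy sketch in Python (the function and
the numbers are mine, purely for illustration, not anyone's actual
system).  Confirmation by comparison only works when the actuality is
known with certainty; once the observation is noisy, the same comparison
can no longer separate a wrong theory from an unlucky observation.

    import random

    def confirmed(predicted, actual):
        # Confirmation by prediction: compare the prediction to the
        # actuality.  Only meaningful if 'actual' is known with certainty.
        return predicted == actual

    # Certain world: the comparison genuinely tests the theory.
    print(confirmed(4, 4))  # True -- the theory behind the prediction holds

    # Uncertain world: the observed actuality is itself noisy.
    noisy_actual = 4 + random.choice([-1, 0, 1])
    print(confirmed(4, noisy_actual))
    # Sometimes True, sometimes False -- a mismatch no longer tells you
    # whether the theory was wrong or the observation was just noise.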

In the early years of AI it was believed that the method might be used to
confirm theories that a strong AI program might create.  However, since
confirmation is based on absolute knowledge, it was really only an
illusion even from the beginning.  In other words, confirmation (or
disconfirmation) through prediction is not a solution for uncertain
knowledge because, like logic, it relies on certainty.

I am really amazed that you guys do not seem to get this.  Of course you
can use expectation (calling it anything you want) and probability along
with evidentiary methods, because when you lack certainty anything goes no
matter what you try, and the best you can do is muddle through.  There is
nothing to replace lost certainty, so you are going to end up using
uncertain methods anyway.
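To be concrete about what I mean by muddling through: something like
Bayesian updating, where each piece of evidence shifts a degree of belief
but never closes the gap to certainty.  A toy sketch in Python (the prior
and the likelihoods are numbers I made up for illustration):

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        # Posterior probability of hypothesis H after one piece of
        # evidence E, by Bayes' rule.
        numer = p_e_given_h * prior
        return numer / (numer + p_e_given_not_h * (1.0 - prior))

    belief = 0.5                 # start undecided about H
    for _ in range(5):           # five confirming observations in a row
        belief = bayes_update(belief, 0.8, 0.3)
        print(round(belief, 3))  # 0.727, 0.877, 0.95, 0.981, 0.993
    # Belief climbs but never reaches 1.0: evidentiary methods accumulate
    # confidence; they do not supply the certainty that confirmation
    # through prediction requires.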

Of course we adults feel that we are certain about a great many things,
and our reasoning is largely based on those things, or at least related to
them.  And if AI or AGI were able to achieve some basic level of human
knowing like that which we take for granted, then these methods would
probably be satisfactory.  A great many principles from the philosophy and
science of mind could probably be used to power an AI or AGI program if
contemporary AGI were at a basic human level.  However, AGI programs (and
almost all AI programs) are so primitive or narrow that the presumption
that a principle of "prediction" could be combined with probability to
achieve certain knowledge is absurd.  I agree that Watson's game of
Jeopardy is not just "narrow AI," but it is based on "certain knowledge"
that can be construed from the appearance of a "fact" in one of a few
sources.  What makes Watson so amazing is that it seems to rely on some
combination of NLP along with some kind of highly automated NL annotation
of text.

Jim Bromer

On Tue, Jun 19, 2012 at 1:48 PM, Piaget Modeler
<[email protected]> wrote:

>  Jim,
>
> Is this your actual belief or is this disinformation?
>
> ------------------------------
> Date: Tue, 19 Jun 2012 10:40:22 -0400
> Subject: [agi] Prediction Did Not Work (except in narrow ai.)
> From: [email protected]
> To: [email protected]
>
>
> The original idea behind the use of "prediction" in AI was that the
> prediction could be compared against the actuality, and that comparison
> could be used to test the theory that produced the prediction.  (Some
> author popularized that model for AI, but it was proposed by academic
> researchers before he did so. Karl Popper used the concept as part of his
> model of scientific discovery in the 1930s, but his principles, which
> were formulated in response to logical positivism, have become more
> dubious as logical certainty has become a more dubious principle of
> knowledge. And, oh, by the way, Popper did not believe that AI was
> possible.)  So, continuing with the march of the use of "prediction" in
> AI, AI people could see that our expectations were like "predictions,"
> and so it did seem that the human mind did indeed use a method of
> prediction.  Of course the principle that a prediction can be compared
> against an actuality in order to evaluate the accuracy of a theory only
> works in narrow AI, and as narrow AI failed to produce simple AGI, that
> part of the cherished notion of "prediction" has been gradually eroded.
>
> This group uses the term prediction simply to refer to something that is
> "known," and as such it is a pretty shallow concept, since its
> verification as a mental product is thereby based on the experience that
> when we know something we act as if we were confident that it would
> happen.  The problem with concepts like "knowing" or "prediction" is
> that they -do not- confirm the efficacy of theories that an AGI program
> might produce, except in those circumstances which would be considered
> narrow AI by this group.
>
> Let me repeat that.
> The problem with concepts like "knowing" or "prediction" is that they
> -DO NOT- confirm the efficacy of theories that an AGI program might
> produce, except in those circumstances which would be considered narrow
> AI by this group.
>
> So sure, when someone points out that the human mind uses "expectation,"
> and that expectation is a little like "prediction," I do agree.  But
> here the word prediction is just being used to describe "knowing
> something."  There is no principle of confirmation or disconfirmation in
> the use of "prediction" that can be used to produce AGI, except in
> special cases.  After years and years of repetition of the word in these
> types of discussions there is still no AGI, and that should give you a
> hint about how good an idea it was.
>
> If the use of prediction as a confirming method can only be applied in a
> limited set of circumstances, then its power in these discussions has
> been so diminished that it should not be used as if it were a magical
> concept.  Without some efficacy the word should not be used as a special
> technical term.  The word should be used in the way it is usually used.
>
> As I implied, Popper originally used the concept in a logical model of
> scientific theory.  If a theory could be used to predict a confirming or
> disconfirming observable event, then the theory could be disconfirmed by
> the failure of the event to occur.  (If the event did occur, it still
> might have been caused by a coincidence.)
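>
> To spell out the asymmetry Popper relied on (my formalization, not his
> wording): let $T$ be the theory and $E$ the predicted event.
>
>     $T \rightarrow E,\ \neg E \;\vdash\; \neg T$   (modus tollens: a failed prediction validly refutes)
>     $T \rightarrow E,\ E \;\nvdash\; T$            (affirming the consequent: a successful one does not confirm)
>
> That is why a failed prediction refutes a theory, while a successful one
> can still be a coincidence.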
>
> It is coming back to me. (Or else my creative memory is kicking in.)
> The author who popularized the theory of confirmation through prediction
> combined it with a model of probability.  That combination is inherently
> contradictory: probabilistic knowledge cannot supply the certainty that
> confirmation by prediction requires.
>
> It amazes me that you guys don't get this.
>
> Jim Bromer


