You believe it because
1. You read that it would work when you were younger and more gullible.
2. You don't know of anything that will work.

Basically, you are in the wilderness and don't know it.

There is no such thing as a good verification method.  They all work some
of the time, but the problem is that there is no way for an actual AGI
program to know when they are working and when they aren't.  If there
were, we would all be able to show some neat AGI programs that worked up
to a point.  The result is that we have to rely on evidentiary methods
that are tied to some kind of general modelling (an intrinsic modelling)
geared to the one task of collecting evidence, cross-validating it,
cross-examining it, and so on, to make our evidentiary verification
methods intrinsically stronger.

The idea that you would mention reinforcement in this context reveals a
surprising naivety about the problem.  For example, how do you confirm or
disconfirm what I am saying?  Is it possible that much of what I just said
is right but a little of it is wrong?  Suppose that most of what I am
saying is right but a few of the details are wrong.  Does that mean that
reinforcement will encourage me to continue and that my theory about this
will be strengthened?  No, it does not.  Even a small flaw in my theory
could potentially ruin the greater theory if it is reinforced.  Now
suppose that you wanted to encourage me to continue just so that you
could examine my theory in greater detail.  Wouldn't that act as a kind of
reinforcement?  The problem is that you would effectively be reinforcing
my theory even while you disparaged it (or were at least sceptical of part
of it).

The idea that a simplistic and self-contradictory behaviorist model could
be effectively used in the development of higher intelligence is not
sound.  The behaviorists claimed that reinforcement was based on pure
observation, but of course it was a philosophical construct developed
before their births, or when they were young enough to accept the theory
as a sound basis for their musings.

The great thing about the mind is that we are capable of discernment; we
do not have to dogmatically react to some little reinforcement as if it
verified the greater elaboration of a theory.  We can discern that someone
might be interested in one of our theories without necessarily subscribing
to it.  Similarly, we can discern that someone might not react well to the
presentation of one of our theories without having to treat their
inability to comprehend as an absolute negative reinforcement of our own
theories.  For example, discernment allows me to dismiss the criticism of
a blatant bigot because I do not need to look very far to find what is
motivating him.  It's right there near the surface.  Bigotry is such a
primitive form of reasoning that you really have to wonder what makes
people think they can get any traction out of it.  At the same time,
discernment also allows me to reconsider the bigot's intellectual
reasoning, just to make sure that there is nothing of value in it that I
might have missed.
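The credit-assignment worry above can be put in concrete terms with a toy
simulation.  This is only an illustrative sketch of my own (the scenario,
the probabilities, and the update rule are all hypothetical assumptions,
not anything from the discussion): a "theory" is modelled as a bundle of
claims, most true and one false, and a naive reinforcement rule rewards or
punishes the whole bundle based on whichever claim happened to be tested.

```python
import random

random.seed(0)

# Hypothetical "theory": a bundle of claims, four correct and one wrong.
truths = [True, True, True, True, False]
weights = [0.5] * len(truths)  # confidence in each claim, 0.0 to 1.0

for _ in range(1000):
    i = random.randrange(len(truths))
    # A true claim predicts correctly most of the time; a false one rarely.
    correct = random.random() < (0.9 if truths[i] else 0.3)
    # Naive reinforcement: reward or punish the ENTIRE theory at once,
    # with no way to assign credit to the individual claim that was tested.
    delta = 0.01 if correct else -0.01
    weights = [min(1.0, max(0.0, w + delta)) for w in weights]

# Because the true claims dominate, successes outnumber failures, and the
# false claim's confidence is driven up right along with the rest.
print(weights)
```

Since every update is applied uniformly, the false claim ends with exactly
the same confidence as the true ones: the reinforcement signal strengthens
the flawed detail along with the sound parts, which is the point made
above.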

To believe that the jailer's fascism is what is responsible for making the
mind work is really beyond the pale.

Jim Bromer



On Tue, Jul 24, 2012 at 4:10 PM, Piaget Modeler
<[email protected]> wrote:

>
> I believe the opposite.  I believe prediction IS the essence of
> verification,
> of correlation.  Prediction based regulation (intrinsic reinforcement or
> correction) is practical.
>
> If not, how else would you verify?
>
> --------------------------------
>
> > Date: Tue, 24 Jul 2012 15:58:25 -0400
> > Subject: [agi] Prediction is not a reliable method of verification
> > From: [email protected]
> > To: [email protected]
>
> >
> > Mike's response to me in the thread, "Image schemas control all forms
> > of action [Lakoff replies]" demonstrates what is wrong with the
> > "prediction" method of confirmation of a theory about the world. Mike
> > acts as if he believes that since no one has a demonstration of an AGI
> > program that this proves that a computer program (an algorithm as he
> > calls it) is unable to deal effectively with new situations. Most of us
> > believe that this conclusion is completely wrong. At the very least,
> > his unanswered challenge does not in any way confirm his theory.
> >
> > Now he might associate his prediction with a more constrained theory,
> > like he predicts that no one in this group has an actual working AGI
> > program and the lack of a taker for his challenge to produce one
> > verifies it. Ok, but again the fact that no one is willing to accept
> > his challenge does not actually verify that. There could be someone
> > who has a working AGI program.
> >
> > But even if we take it as verifying evidence, there is still no way
> > that an automated program, which had to rely on prediction as a method
> > to confirm its theories can actually verify that the predicted event
> > truly happened and it truly verified its theory. So even with a more
> > constrained theory, in order to use prediction as a method to confirm
> > a theory you first have to find a way to verify that the theory and
> > the prediction were well constructed, the theory and the prediction
> > were both uniquely interdependent and the observation by which the
> > prediction was "confirmed" was also uniquely correct.
> >
> > In other words, if we had an AGI program that was able to think then
> > we might use the prediction method just as a human being might use it.
> > Or misuse it.
> >
> > This shows that verification through prediction is as faulty a
> > concept as logic or any of the other would-be verification methods.
> >
> > Jim Bromer
> >
> >
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
