Can you explain the second part of this post differently, in other words? I am very interested in this as a large part of an AI system.
I believe there needs, in some fashion, to be a controlling algorithm that tells the AI it is doing "right," be it an internal or an external human reward. We receive such rewards in our daily lives, in our jobs, relationships and so on; whether we actually learn from them is debatable, though.
James Ratcliff
Richard Loosemore <[EMAIL PROTECTED]> wrote:
Will,
Comments taken, but the direction of my critique may have gotten lost in
the details:
Suppose I proposed a solution to the problem of unifying quantum
mechanics and gravity, and suppose I came out with a solution that said
that the unified theory involved (a) a specific interface to quantum
theory, which I spell out in great detail, and (b) ditto for an
interface with geometrodynamics, and (c) a linkage component, to be
specified.
Physicists would laugh at this. What linkage component?! they would
say. And what makes you *believe* that once you sorted out the linkage
component, the two interfaces you just specified would play any role
whatsoever in that linkage component? They would point out that my
"linkage component" was the meat of the theory, and yet I had referred
to it in such a way that it seemed as though it were just an extra, to
be sorted out later.
This is exactly what happened to Behaviorism, and the idea of
Reinforcement Learning. The one difference was that they did not
explicitly specify an equivalent of my (c) item above: it was for the
cognitive psychologists to come along later and point out that
Reinforcement Learning implicitly assumed that something in the brain
would do the job of deciding when to give rewards, and the job of
deciding what the patterns actually were .... and that that something
was the part doing all the real work. In the case of all the
experiments in the behaviorist literature, the experimenter substituted
for those components, making them less than obvious.
Exactly the same critique bears on anyone who suggests that
Reinforcement Learning could be the basis for an AGI.  I do not believe
there has yet been any reply to that critique.
Richard Loosemore
William Pearson wrote:
> On 01/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
>> I had similar feelings about William Pearson's recent message about
>> systems that use reinforcement learning:
>>
>> >
>> > A reinforcement learning scenario, from Wikipedia, is defined as
>> >
>> > "Formally, the basic reinforcement learning model consists of:
>> >
>> > 1. a set of environment states S;
>> > 2. a set of actions A; and
>> > 3. a set of scalar "rewards" in the Reals.
>> > "
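
The quoted model is easy to write down as code, and doing so makes the division of labour visible: the learner sees only S, A and the scalar reward, while everything else is supplied by the experimenter's code. Below is a minimal tabular Q-learning sketch over a two-state toy environment; the environment, reward function and hyperparameters are all invented for illustration and are not from the thread.

```python
import random

random.seed(0)

# Toy instance of the quoted model: states S = {0, 1}, actions A = {0, 1},
# scalar rewards in the reals. Note that the reward function and the choice
# of what counts as a "state" are hard-coded by the experimenter -- the
# very components the critique says are doing the real work.
def reward(state, action):
    return 1.0 if action == state else 0.0  # reward for "matching" the state

def step(state, action):
    return random.randint(0, 1)  # next state, drawn arbitrarily

# Plain tabular Q-learning driven only by the scalar reward signal.
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.5, 0.1
state = 0
for _ in range(2000):
    if random.random() < epsilon:
        action = random.randint(0, 1)                      # explore
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])  # exploit
    r = reward(state, action)
    nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

# After training, each state's matching action carries the higher Q-value.
```

The sketch is deliberately trivial: the interesting decisions (what S and A are, when reward arrives) all live outside the learning loop.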
>>
>> Here is my standard response to Behaviorism (which is what the above
>> reinforcement learning model actually is): Who decides when the rewards
>> should come, and who chooses what are the relevant "states" and
>> "actions"?
>
> The rewards I don't deal with: I am interested in external brain
> add-ons rather than autonomous systems, so the reward system will be
> closely coupled to a human in some fashion.
>
> In the rest of the post I was trying to outline a system that could alter
> what it considered actions and states (and bias, learning algorithms
> etc). The RL definition was just there as an example to work against.
>
>> If you find out what is doing *that* work, you have found your
>> intelligent system. And it will probably turn out to be so enormously
>> complex, relative to the reinforcement learning part shown above, that
>> the above formalism (assuming it has not been discarded by then) will be
>> almost irrelevant.
>
> The internals of the system will be enormously more complex compared
> to the reinforcement part I described. But that won't make it
> irrelevant. What goes on inside a PC is vastly more complex than the
> system that governs the permissions of what each *nix program can do.
> This doesn't mean the permission-governing system is irrelevant.
>
> Like the permissions system in *nix, the reinforcement system is
> only supposed to govern who is allowed to do what, not what actually
> happens. Unlike the permissions system, it is supposed to derive that
> from the effect of the programs on the environment. Without it, both
> sorts of systems would be highly unstable.
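
The permissions analogy can be sketched as a gate that routes "permission to act" among internal modules according to accumulated reward, without ever touching what any module computes. Every name and the credit rule below are hypothetical, invented purely for illustration.

```python
# A toy version of the analogy: the reinforcement layer, like *nix
# permissions, only decides which module is *allowed* to act; it never
# changes what a module actually does internally.
class ModuleGate:
    def __init__(self, modules):
        self.modules = modules
        self.credit = {name: 0.0 for name in modules}  # running reward per module

    def act(self, observation):
        # Permission to act goes to the best-credited module so far
        # (ties broken by insertion order).
        name = max(self.credit, key=self.credit.get)
        return name, self.modules[name](observation)

    def reinforce(self, name, reward):
        # The scalar reward adjusts a module's standing, not its internals.
        self.credit[name] += reward

gate = ModuleGate({"negate": lambda x: -x, "double": lambda x: 2 * x})
first, out = gate.act(3)      # "negate" holds permission initially
gate.reinforce(first, -1.0)   # the environment punishes that choice
second, out2 = gate.act(3)    # permission passes to "double"
```

The point of the sketch is the separation: swapping a badly performing module for another requires no change to the gate, which is what the "complete modular flexibility" above seems to require.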
>
> I see it as a necessity for complete modular flexibility. If you get
> one of the bits that does the work wrong, or wrong for the current
> environment, how do you allow it to change?
>
>> Just my deux centimes' worth.
>>
>
> Appreciated.
>
>>
>> On a more positive note, I do think it is possible for AGI researchers
>> to work together within a common formalism. My presentation at the
>> AGIRI workshop was about that, and when I get the paper version of the
>> talk finalized I will post it somewhere.
>>
>
> I'll be interested, but sceptical.
>
> Will
>
> -------
> To unsubscribe, change your address, or temporarily deactivate your
> subscription, please go to
> http://v2.listbox.com/member/[EMAIL PROTECTED]
>
>
Thank You
James Ratcliff
http://FallsTown.com - Local Wichita Falls Community Website
http://Falazar.com - Personal Website
