On 01/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> I had similar feelings about William Pearson's recent message about
> systems that use reinforcement learning:

> >
> > A reinforcement scenario, from Wikipedia, is defined as
> >
> > "Formally, the basic reinforcement learning model consists of:
> >
> >  1. a set of environment states S;
> >  2. a set of actions A; and
> >  3. a set of scalar "rewards" in the Reals.
> > "

> Here is my standard response to Behaviorism (which is what the above
> reinforcement learning model actually is):  Who decides when the rewards
> should come, and who chooses what are the relevant "states" and "actions"?

The rewards I don't deal with: I am interested in external brain
add-ons rather than autonomous systems, so the reward system will be
closely coupled to a human in some fashion.

In the rest of the post I was trying to outline a system that could
alter what it considered actions and states (and its biases, learning
algorithms, etc.). The RL definition was just there as an example to
work against.
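
Roughly the kind of thing I have in mind, as a sketch only (the names
and the learning rule below are invented for the example): the state
abstraction, the action repertoire and the learning rule are ordinary
values that the system, or something outside it, could replace at run
time.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class PluggableAgent:
    # What counts as a "state" is just a function from raw observations
    # to labels, so it can be replaced like any other component.
    abstract_state: Callable[[dict], str]
    actions: List[str]                  # the current action repertoire
    learn: Callable[["PluggableAgent", str, str, float], None]
    values: Dict[Tuple[str, str], float] = field(default_factory=dict)

    def act(self, observation: dict) -> str:
        s = self.abstract_state(observation)
        return max(self.actions, key=lambda a: self.values.get((s, a), 0.0))

    def update(self, observation: dict, action: str, reward: float) -> None:
        self.learn(self, self.abstract_state(observation), action, reward)

# One possible learning rule; the agent could be handed another one.
def running_average(agent: PluggableAgent, state: str, action: str,
                    reward: float) -> None:
    old = agent.values.get((state, action), 0.0)
    agent.values[(state, action)] = old + 0.1 * (reward - old)

agent = PluggableAgent(
    abstract_state=lambda obs: "hot" if obs["temp"] > 30 else "cold",
    actions=["heat", "cool"],
    learn=running_average,
)

# Later, any of the parts can be altered without rewriting the rest:
agent.abstract_state = lambda obs: str(obs["temp"] // 10 * 10)  # finer states
agent.actions.append("wait")                                    # a new action

The point is only that nothing in the reinforcement scenario forces S,
A or the learning rule to be fixed in advance.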

> If you find out what is doing *that* work, you have found your
> intelligent system.  And it will probably turn out to be so enormously
> complex, relative to the reinforcement learning part shown above, that
> the above formalism (assuming it has not been discarded by then) will be
> almost irrelevant.

The internals of the system will be enormously more complex than the
reinforcement part I described, but that won't make it irrelevant. What
goes on inside a PC is vastly more complex than the system that governs
what each *nix program is permitted to do; this doesn't mean the
permission-governing system is irrelevant.

Like the permissions system in *nix, the reinforcement system is only
supposed to govern who is allowed to do what, not what actually
happens. Unlike the permissions system, it is supposed to derive that
from the effect of the programs on the environment. Without it, both
sorts of system would be highly unstable.

I see it as a necessity for complete modular flexibility. If you get
one of the bits that does the work wrong, or wrong for the current
environment, how do you allow it to change?
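
To make the analogy concrete, here is a minimal sketch (the arbiter,
the programs and the reward source are all invented for the example).
The arbiter does none of the work itself; it only decides who is
allowed to act, biased by the reward each program's effect on the
environment has earned, and a program that is wrong for the current
environment can be swapped out without touching anything else.

import random
from typing import Callable, Dict

class Arbiter:
    """Governs who is allowed to act, not what actually happens."""

    def __init__(self, programs: Dict[str, Callable[[], str]]):
        self.programs = programs          # the bits that do the actual work
        self.credit = {name: 0.0 for name in programs}

    def grant_control(self) -> str:
        # The "permission check": pick who may act, biased towards
        # programs whose past effects on the environment were rewarded.
        weights = [max(self.credit[name], 0.01) for name in self.programs]
        return random.choices(list(self.programs), weights=weights)[0]

    def step(self, reward_from_environment: Callable[[str], float]) -> None:
        name = self.grant_control()
        output = self.programs[name]()    # the chosen program does its work
        self.credit[name] += reward_from_environment(output)

    def replace(self, name: str, new_program: Callable[[], str]) -> None:
        # Modular flexibility: swap out a bit that is wrong for the
        # current environment, without rewriting the arbiter.
        self.programs[name] = new_program
        self.credit[name] = 0.0

arbiter = Arbiter({
    "planner": lambda: "plan",
    "reflex":  lambda: "twitch",
})
arbiter.step(lambda output: 1.0 if output == "plan" else 0.0)  # made-up reward

The arbiter stays tiny; all the complexity lives in the programs it
allocates control between, which is the point of the analogy.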

> Just my deux centimes' worth.


Appreciated.


> On a more positive note, I do think it is possible for AGI researchers
> to work together within a common formalism.  My presentation at the
> AGIRI workshop was about that, and when I get the paper version of the
> talk finalized I will post it somewhere.


I'll be interested, but sceptical.

 Will
