AIXI = Solomonoff Induction ∘ Sequential Decision Theory

When I say "decision tree" you should think "sequential decision theory".
Sequential decision theory relies upon Solomonoff induction; composing the
two yields a top-down theory of AGI in which the future-discounted reward
of a decision (action) is inferred from the best unsupervised model of the
available data (i.e., its most highly compressed form). PPM is just one
technique for approximating that "best" model. Q-learning, like all
reinforcement learning schemes, implicitly compresses its experience in a
quasi-supervised manner by associating each state with a probability
distribution over reward for each of the available actions.
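To make that last sentence concrete, here is a minimal tabular Q-learning sketch of my own (not from any particular library): each (state, action) cell accumulates an estimate of expected future-discounted reward, which is the "implicit compression" of raw experience I mean above. The environment is a made-up 5-state chain where stepping right from the last state pays 1; since Q-learning is off-policy, it can learn the greedy policy from a uniformly random behavior policy.

```python
import random

N_STATES = 5
ACTIONS = [0, 1]            # 0 = left, 1 = right
ALPHA, GAMMA = 0.3, 0.9     # step size, discount

def step(s, a):
    """Toy deterministic chain: right from the last state pays 1, resets to 0."""
    if a == 1:
        if s == N_STATES - 1:
            return 0, 1.0   # next state, reward
        return s + 1, 0.0
    return max(s - 1, 0), 0.0

# The Q table: experience compressed into one reward estimate per (state, action).
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
s = 0
for _ in range(10000):
    a = random.choice(ACTIONS)          # uniform random behavior (off-policy)
    s2, r = step(s, a)
    # Standard Q-learning update: move toward reward + discounted best next value.
    target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    s = s2

# Greedy policy read off the table: "always move right" on this chain.
policy = [max(ACTIONS, key=lambda a2: Q[(st, a2)]) for st in range(N_STATES)]
print(policy)
```

The point of the toy: the agent never stores its raw history, only the per-state-action reward statistics, which is exactly the quasi-supervised compression I'm gesturing at.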

Hopefully I got all that right.

On Mon, Aug 24, 2020 at 4:23 PM <[email protected]> wrote:

> Totally lost here....can you build on text prediction (PPM, ok?) (how Q RL
> learning would work/ fit in). And, whatever you're saying....
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0291e6571222d56c-M7934acb5fe8ef25094f3d0f4
Delivery options: https://agi.topicbox.com/groups/agi/subscription
