In a POMDP, apart from the likelihood (which is your observation model), what
really matters is the belief state: the posterior probability of being in a
given state, conditioned on the measurements and the likelihood.
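A minimal numeric sketch of that belief update (the standard Bayes filter,
b'(s') proportional to O(o|s') * sum_s T(s'|s,a) * b(s)), assuming the
transition model is stored as T[s][a][s2] = P(s2|s,a) and the likelihood as
O[s][o] = P(o|s); those shapes are my choice for illustration, not anything
Premise prescribes:

```python
def belief_update(belief, action, observation, T, O):
    """Return the posterior belief after taking `action` and seeing `observation`.

    Assumed (illustrative) layouts:
      T[s][a][s2] = P(s2 | s, a)   -- transition model
      O[s][o]     = P(o | s)       -- likelihood / observation model
      belief[s]   = current P(s)
    """
    n = len(belief)
    new_belief = []
    for s2 in range(n):
        # Prediction step: marginalize the transition model over the old belief.
        predicted = sum(T[s][action][s2] * belief[s] for s in range(n))
        # Correction step: weight by the likelihood of the observation.
        new_belief.append(O[s2][observation] * predicted)
    total = sum(new_belief)
    if total == 0:
        raise ValueError("observation has zero probability under this belief")
    # Normalize so the belief is again a probability distribution.
    return [b / total for b in new_belief]
```

The POMDP's optimal policy is then a function of this belief vector rather
than of the (hidden) state itself.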


On Thu, Jun 26, 2014 at 6:34 AM, Piaget Modeler via AGI <[email protected]>
wrote:

> Bueller.... Bueller...  Bueller...
>
> ------------------------------
> From: [email protected]
> To: [email protected]
> Subject: MDP & POMDP Algorithms
> Date: Sat, 21 Jun 2014 23:53:49 -0700
>
>
> Hi All,
>
> Just doing some work in Premise and thought about trying out the MDP and
> POMDP algorithms.
>
>
> I have the following data types defined
>
> let MDP
>   :states
>   :actions
>   :transitions
>   :rewards
>   :horizon
> end
>
> let POMDP
>   :states
>   :actions
>   :observations
>   :transitions
>   :likelihoods
>   :rewards
>   :horizon
> end
>
> let policy
>   :action
>   :state
> end
>
>
> Let me know if this is correct, and if so, what should the algorithms look
> like?
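
For the fully observed MDP record above, the usual algorithm is finite-horizon
value iteration. Here is a hedged Python sketch, assuming
transitions[s][a][s2] = P(s2|s,a) and rewards[s][a] = R(s,a); these field
layouts are my guess at how :transitions and :rewards would be filled in,
not anything Premise defines:

```python
def value_iteration(states, actions, transitions, rewards, horizon, gamma=1.0):
    """Finite-horizon value iteration.

    Assumed (illustrative) layouts:
      transitions[s][a][s2] = P(s2 | s, a)
      rewards[s][a]         = R(s, a)
    Returns (V, policy): the value per state and the greedy action per state.
    """
    V = [0.0] * len(states)
    policy = [None] * len(states)
    for _ in range(horizon):
        new_V = V[:]
        for s in range(len(states)):
            best_a, best_q = None, float("-inf")
            for a in range(len(actions)):
                # One-step lookahead: immediate reward plus expected future value.
                q = rewards[s][a] + gamma * sum(
                    transitions[s][a][s2] * V[s2] for s2 in range(len(states)))
                if q > best_q:
                    best_a, best_q = a, q
            new_V[s], policy[s] = best_q, best_a
        V = new_V
    return V, policy
```

With :horizon finite you can leave gamma at 1.0; for the POMDP the same
backup has to run over belief states instead, which is why exact POMDP
solvers are so much more expensive.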
>
> Thanks in advance.
>
> ~PM
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
