On Mon, Mar 16, 2015 at 9:59 AM, martin biehl via AGI <[email protected]>
wrote:

> A policy is a one-step plan depending on observations/state. The
> (observation dependent) two-step plan is obtained from the policy by
> incorporating the next observation and performing the associated action. A
> multi-step plan is then dependent on multiple future observations. A
> plan that does not depend on future observations is a special case of this,
> and maybe is what AI planning does (but I don't know much about it).
>


A traditional plan in the AI planning literature sense does not depend on
future observations, but there is now a big literature on dynamic/adaptive
planning algorithms as well...
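
A toy sketch of the distinction, for what it's worth (my own illustrative
example; the little grid world and all names in it are made up, not anything
from the planning literature):

```python
# Open-loop plan vs. closed-loop policy in a toy 1-D world.
# States are 0..4; actions: "R" = right, "L" = left, "S" = stay.

def step(state, action):
    """Toy environment dynamics, clamped to [0, 4]."""
    if action == "R":
        return min(4, state + 1)
    if action == "L":
        return max(0, state - 1)
    return state  # "S"

def run_plan(env_step, start_obs, plan):
    """A plan: a fixed action sequence, chosen in advance, ignoring observations."""
    obs = start_obs
    for action in plan:
        obs = env_step(obs, action)
    return obs

def run_policy(env_step, start_obs, policy, horizon):
    """A policy: at each step, act on the CURRENT observation."""
    obs = start_obs
    for _ in range(horizon):
        obs = env_step(obs, policy[obs])
    return obs

# Goal: end up at state 2.
policy = {0: "R", 1: "R", 2: "S", 3: "L", 4: "L"}  # observation-dependent rule
plan = ["R", "R", "S", "S"]                        # computed assuming start state 0

print(run_plan(step, 0, plan))        # 2 -- plan succeeds from the assumed start
print(run_policy(step, 0, policy, 4)) # 2 -- policy succeeds too
print(run_plan(step, 3, plan))        # 4 -- plan fails from an unexpected start
print(run_policy(step, 3, policy, 4)) # 2 -- policy adapts and still succeeds
```

The same rule table keeps working when the start state is not the one the
plan was computed for, which is the whole point of conditioning on
observations.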

ben



>
> On Mon, Mar 16, 2015 at 1:33 AM, Piaget Modeler via AGI <[email protected]>
> wrote:
>
>>
>> Thanks for that feedback.  Much appreciated.
>>
>> Any other viewpoints?
>>
>> Anyone...
>>
>> Anyone...
>>
>> Bueller...
>>
>> Bueller...
>>
>> ~PM
>>
>> ------------------------------
>> From: [email protected]
>> To: [email protected]
>> Subject: RE: [agi] Plans vs. Policies
>> Date: Mon, 16 Mar 2015 03:13:46 +0200
>>
>>
>> In brief then...
>>
>> The example model by itself is no silver bullet. It is suggested that it
>> be seamlessly integrated with two other quantum-based methodologies known
>> to me. Recently verified research on the potential for the three-way
>> integration has proved successful. Thus, it forms a three-body approach,
>> which is significant in its holistic value. All three base methodologies
>> operate as a platform of synthesis in entangled and/or disentangled states
>> of thesis and antithesis. Viewed in this way, it potentially offers full
>> support for the content of your ongoing discussions on coding.
>>
>> In most-normal form, the ontological model already contains all the
>> inherent system policies. This is an emerging by-product of the
>> engineering process. It is robust in that it has two hierarchies of
>> control (Criticality and Priority). There are various strategies for
>> dealing with the hierarchy at any level of abstraction.
>>
>> The inherent policies are transcribed into simple sentences, exploiting
>> all the implications of the possible compound functions (adaptive
>> associations, not dependencies). Except for components of the value
>> "outcome" - never present in the existentially mature ontological model as
>> submitted - all other components are potentially self-recursive.
>> Recursiveness would probably emerge at the reductionist level. In general,
>> all contexts contain their own particular level (graining) of policies.
>> These could be tweaked in terms of coarseness, and thus perspective. Scope
>> of work could be managed into programs and projects, budgets, and so on.
>>
>> As such, the ontology presents as a superpattern, which is repeated
>> consistently throughout any life cycle of any ontological event. Any
>> triggering of the system would be treated as both a state and an event.
>> States and events would hold different values of the same related data.
>> Ambiguity becomes a non-issue. This would provide elasticity to allow for
>> dynamic routing within the policies of the hierarchy and scalable,
>> parallel processing. A basic optimization algorithm would come in very
>> handy at this point. Policy statements may further be converted into
>> plans, but these may never be in conflict with the superposition policy
>> framework in the "superpattern". Thus, plans become objectively testable
>> for ontological viability.
>>
>> Once entangled, policies assume a watchdog role. However, due to the
>> nature of the entangled hierarchical standard, not all policies would have
>> to consume system resources at all times. Thus, policies could be created
>> and destroyed for functional value, without negatively affecting the
>> e-governance competency and performance. During initial systems
>> development, policies would become semantically embedded into processes
>> via rules. Processes would also include functions, procedural workflow
>> (program logic), and data. The value of information (views, products, and
>> reports) would depend on the event-eco-systemic and end-user requirements,
>> but be determined via the collective elements of process.
>>
>> Overall, the ontology is supported by a problem-solving framework, which
>> is fully integrated with the systems-management framework. The elements of
>> the problem-solving framework are People, Organization, Technology and
>> Process. These elements are synthesized by its own version of an inference
>> engine, namely E-Governance. For practical purposes, all policies are
>> registered in the e-Governance layer, but co-managed by the actual policies
>> and a policy-inference component, and so on to reduction. This layer
>> provides the external ontological boundary, or People seam, to all MMI
>> events.
>>
>> Even further, the ontology is supported by a practical,
>> systems-management framework, which is driven by various aspects of system
>> events, across workflow (content), including closed and open-looped
>> feedback (feedback), of the emergent value chain (competency) for each
>> event of the instantiation of a plan. In addition, the ontology is directly
>> strengthened by a researched systems model for generative methodology. By
>> design, all parts of the whole speak the same systems language, and when in
>> synthesis, should theoretically generate the exponential value of being
>> worth more than the sum of its parts.
>>
>> Trade-offs? So far, field research has yielded only one significant
>> trade-off. This trade-off occurs during the process of migrating a systems
>> model (logical) to a functional process (logical). The absence of reliable
>> science for process remains an actual constraint, but some strategies for
>> testing the consistency of process have been designed to deal with
>> possible quality and integrity issues which may arise during the
>> migration.
>>
>> Further, for AGI, I would imagine trade-offs occurring in the form of
>> programming language(s), frameworks, and the selection of networked
>> platform(s). In short, then: one potential process trade-off and x number
>> of emerging technical trade-offs. These trade-offs should be dealt with as
>> constraints or system requirements within the various extended
>> architectures. These could probably be included as a standard model in any
>> API, or API bus. To determine their suitability to the ontology, all
>> architectures have to be tested for retro-compliance with the policy
>> implications of the superpattern.
>>
>> In summary:
>> Within this ontology, a Plans vs Policies construct cannot exist within
>> the operational system. At a planning decision level, they could exist
>> separately within the same spacetime, but still not in opposition. They
>> remain semantically separate entities in their own right. This is where
>> planning could be used in a staging role, without having any direct
>> performance impact on the operational system, other than
>> eventually-approved workload (semantically integrated) or demand (workflow
>> management) on system resources.
>>
>> When finally implemented (institutionalized), all policies become events of
>> plans. When implemented (stage-gated), plans become events of programs and
>> projects. Projects inherit policy-compliant processes, sub-plans, special
>> instructions (as problems), functions, resources, time, costs, and
>> semantics. It becomes the structure for all these elements of Plan in
>> Action (as a solution mechanism or value chain). In purely machine terms,
>> Project could be satisfied by a single character, status parameter of any
>> element within the ontology. In pseudocode: if project status = m, set
>> elements {a,b,c,d...} to 0, else to 1. This would auto-generate a
>> mutually-exclusive construct for that particular event instance (tweakable
>> via a standards-driven range and/or threshold of graining).
>>
>> Control over project elements may be strengthened by a cross-correlating
>> state function, which would relatively affect the mutual-exclusivity
>> construct in a dynamic manner if need be, at optimal efficiency. For
>> example, if any of the excluded elements should be assigned a particular
>> 1/0 workflow status, a particular rule may update the construct
>> automatically, and in theory exclude/invoke another element in one step,
>> and/or destroy the memory-resident element if it were not required for any
>> event-future processing. Thus, resource-utilization could be pruned on a
>> real-time basis, advancing the overall platform and layered performance to
>> a level of effective complexity.
>>
>> Thank you. It was useful for me to explain this.
>>
>> I hope it was generally clear enough and pertinent to your questions.
>>
>>
>> From: [email protected]
>> To: [email protected]
>> Subject: [agi] Plans vs. Policies
>> Date: Sun, 15 Mar 2015 16:22:46 -0700
>>
>> Reinforcement Learning uses "policies" to select actions while most work
>> in AI Planning emphasizes
>> the construction and representation of a "plan" which consists of a
>> sequence of actions (or a hierarchy
>> of composite and primitive actions).  Kindly compare, contrast, evaluate
>> trade-offs, and recommend either the plans approach or the policies
>> approach.
>>
>> Your rationale is appreciated.
>>
>> ~PM
>>
>>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/26941503-0abb15dc> |
>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>> <http://www.listbox.com>
>>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man." -- George Bernard Shaw



