You could, for example, try to write a planner for solving the problem of
selecting the relevant factors to form an initial state (and the other
possible states) for your lower-level solver to work with. To do this you
would probably have to use some kind of abstractions. Since you want it to
be based on solid scientific methods, you might try something like material
objects, and properties of and relations between objects. You might find
that this does not work so well, since you want an AGI program to deal with
imaginary things like fantasies and creative solutions to problems. So then
you might start by working with something like ideas, concepts, and
conceptual relations.

However, this does not solve the representation problem, so then you have
to include something about representations. But that brings you back to the
earlier problem: you cannot 'represent' everything that you can think
about, so you might try to find a slightly better word-concept to use, like
'symbols' or 'references'. Even though you cannot 'represent' everything
there is about the universe, for example, you can still refer to it.

Since you will need a way to represent these symbolic references, you
might try to rely on grammatical primitives like 'nouns' and 'verbs'.
This, however, implies that every concept-situation you might want your
solver to represent could be represented with individual words. That does
not quite make sense, so you might then use actions and objects as part of
the native abstractions your solver uses in solving the problem of building
a state system (or some other kind of system) for a higher-level (or more
primitive) solver to solve. And you might think about using something like
computational syntax instead of linguistic syntax, since your solver is
going to be a computational program and it will act as if it were operating
on computational strings anyway.

These ideas, however, still do not solve the problem, because they are much
too abstract and too full of gaps. And while the abstractions may seem
ideal for representing any possible situation, it turns out they are not,
because the abstractions
are themselves possible states of thought. So, for another example: can you
have a representation of something that cannot be represented? Well, yes,
since you can represent a reference to it. But that means your solver has
to be on guard against getting caught in illogical representations that
refer to things that cannot be referred to in the way your abstract program
is going to try to refer to them. While the comprehension and
representation of the entire universe (for example) may not seem like much
of a problem to anyone but a fool, it does show how problems of symbolic
representation and reference can creep into your solver when you start with
higher abstractions.

So now you need to add an incremental trial-and-error method to detect such
possibilities. And since such things may be hidden by more complicated
systems of references, the problems cannot be easily detected. Incremental
methods alone may not be strong enough to gain traction for your solver to
create possible plan states (or other plan domains), so your solver will
have to be able to latch onto something that builds traction into its
incremental trial-and-error methods as it progresses toward its goal of
creating a domain that is suitable for a planner.
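To make the incremental trial-and-error idea a little more concrete, here
is a minimal sketch (every name in it is hypothetical, not an existing
library): keep only the stimuli whose predicate symbols appear in some
goal, and widen the selection when a planning attempt fails.

```python
# Minimal sketch of relevance-based initial-state selection.
# Facts and goals are tuples: (predicate, arg1, arg2, ...).
# All names are invented for illustration; a real system would
# need far more than predicate-symbol matching.

def relevant_facts(stimuli, goals):
    """Keep only facts whose predicate symbol occurs in some goal."""
    goal_preds = {g[0] for g in goals}
    return [f for f in stimuli if f[0] in goal_preds]

def initial_state(stimuli, goals, widen=0):
    """Candidate initial state. `widen` adds extra, so-far-unused
    facts -- the incremental trial-and-error step to take when a
    planning attempt over the narrow state fails."""
    core = relevant_facts(stimuli, goals)
    extra = [f for f in stimuli if f not in core][:widen]
    return core + extra

stimuli = [("at", "cup", "counter"), ("empty", "cup"),
           ("color", "wall", "blue"), ("ready", "maker")]
goals = [("full", "cup"), ("at", "cup", "table")]

print(initial_state(stimuli, goals))
# -> [('at', 'cup', 'counter')]
```

A retry with `widen=1` would also pull in `("empty", "cup")`, the next
unused stimulus, which is the kind of gradual widening the paragraph above
gestures at.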

Oops. I am talking too much about my own ideas again. Sorry. However, it
all seems so relevant to the situation you just described, and so relevant
to the problem of creating an actual AGI program, that I can excuse myself.
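For what it is worth, the kind of description PDDL actually expects, once
the relevant features have somehow been selected, is just a list of ground
facts. A toy problem file might look like this (domain and predicate names
are all invented for illustration):

```lisp
;; The hard part -- deciding WHICH facts belong in :init -- is exactly
;; the selection problem you describe; PDDL only gives you a place to
;; write the answer down once you have it.
(define (problem get-coffee)
  (:domain kitchen)                 ; hypothetical domain
  (:objects cup maker counter)
  (:init (empty cup)
         (on cup counter)
         (ready maker))
  (:goal (full cup)))
```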


On Fri, Dec 27, 2013 at 11:34 PM, Piaget Modeler
<[email protected]> wrote:

> So I'm writing my solvers, and I hit the first roadblock...
>
> Given (1) a sea of sensory stimuli, (2)  a prioritized set of goals, and
> (3) a possibly empty
> set of plans, how does one select the relevant stimuli to form an initial
> state for a problem?
>
> Another way to ask the question is how do humans select relevant features
> from their
> current environment to be able to formulate or retrieve plans that address
> their goals?
>
>  And how many of these environmental features are enough to describe an
> initial state?
>
> So there is PDDL.  (Planning Domain Definition Language). But to use PDDL,
> one has
> to first solve the problem of describing the relevant features of the
> environment.
>
> How do we come up with these relevant features, to be able to formulate an
> initial state?
>
> Thoughts?
>
> ~PM
>
> (I forgot what Newell & Simon said about the PSCM  & Unified Theories of
> Cognition.
> Time for more research...)



-- 
Jim Bromer



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
