It stems from a broader metaphysical principle in Aristotle's works:
something comes into being, and then passes away.  It is a transition
from the hypothetical to the actual, and then to nothing when the
situation ends.
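Read computationally, this coming-into-being and passing-away is just a small state machine. A minimal sketch (the class and phase names here are illustrative, not from Aristotle or anyone on this thread):

```python
from enum import Enum, auto

class Phase(Enum):
    HYPOTHETICAL = auto()  # conceived, not yet real
    ACTUAL = auto()        # has come into being
    ENDED = auto()         # has passed away

class Situation:
    def __init__(self, description):
        self.description = description
        self.phase = Phase.HYPOTHETICAL  # every situation starts hypothetical

    def actualize(self):
        self.phase = Phase.ACTUAL

    def end(self):
        self.phase = Phase.ENDED
```

The one-way progression (hypothetical, actual, ended) mirrors the transition described above.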

On 4/28/14, Stanley Nilsen via AGI <[email protected]> wrote:
> Hi PM,
>
> A few thoughts -
>
> One might try to come up with methods to generalize situations - put them
> in categories, sub-categories, sub-sub-categories... This sounds
> logical, but also terribly tedious.
>
> My alternative is to look at the world as sets of triggers. A trigger
> initiates an action - maybe to assert a new fact. The new fact might
> then trigger something else...
>
> What is triggered depends on what our "understanding" makes of the
> triggers. Pretty much a Rube Goldberg contraption, but it gets interesting
> quickly. Understanding isn't that vague: it's whatever can be coded into rules.
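The trigger-and-fact cascade described above is essentially forward chaining over a rule base. A toy sketch (the fact and rule names are made up for illustration):

```python
# Forward-chaining sketch: a trigger fires when its condition holds
# over the known facts and asserts a new fact; the new fact may in
# turn fire further triggers -- the "Rube Goldberg" cascade.
def run_triggers(facts, rules):
    """rules: list of (condition_fact, asserted_fact) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, consequence in rules:
            if condition in facts and consequence not in facts:
                facts.add(consequence)  # asserting a fact may trigger more rules
                changed = True
    return facts
```

For example, `run_triggers({"rain"}, [("rain", "wet ground"), ("wet ground", "slippery")])` cascades from one observed fact to two derived ones.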
>
> Beware of thinking you must invent "understanding" to build AGI. No, the
> AGI needs to harvest the understanding that is all around us.
>
> Stan
>
>
> On 04/28/2014 09:11 AM, Piaget Modeler via AGI wrote:
>> It's a good start, I'd say.
>>
>> ~PM
>>
>> > Subject: Re: [agi] Situations
>> > From: [email protected]
>> > Date: Mon, 28 Apr 2014 17:14:25 +0900
>> > To: [email protected]
>> >
>> > Hello PM,
>> >
>> > here is a sketchy answer.
>> > What do you think?
>> > ----
>> > As an abstract model, situational representation would have the
>> > following features:
>> > Situation is a super-class of Event and State.
>> > A situation is associated with time and place (location).
>> > A situation is associated with its participants.
>> > A situation is associated with attributes and relations of the
>> > participants.
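The four features listed above translate directly into a small class hierarchy. A minimal sketch (field types and names are my own guesses at what the abstract model intends):

```python
from dataclasses import dataclass, field

@dataclass
class Situation:
    """Super-class of Event and State, carrying the listed features."""
    time: str                                        # when the situation holds
    place: str                                       # where it holds
    participants: list = field(default_factory=list) # who/what is involved
    relations: dict = field(default_factory=dict)    # attributes and relations
                                                     # of the participants

class Event(Situation):
    pass

class State(Situation):
    pass
```

Subclassing keeps Event and State interchangeable wherever a Situation is expected, which is all the super-class claim requires.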
>> >
>> > In the brain, the representation of non-present situations is
>> > 'imagined.' Imagined representation is somehow distinguished
>> > from sensory (actual/present) representation.
>> > Representation of non-present situations should be composed of imagined
>> > parts.
>> >
>> > The neural representation of one situation is associated with
>> > another situation as relevant.
>> > If the Bayesian brain hypothesis (or a similar one) is correct,
>> > the relevance is measured probabilistically.
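One way to cash out "measured probabilistically" is a conditional-probability estimate from co-occurrence counts: situation b is relevant to situation a to the degree that b tends to accompany a. A toy illustration (not a claim about neural implementation):

```python
# Estimate P(b | a) from counts: how often b co-occurred with a,
# divided by how often a occurred at all.
def relevance(co_occurrences, occurrences, b, a):
    """co_occurrences: {(a, b): count}; occurrences: {a: count}."""
    if occurrences.get(a, 0) == 0:
        return 0.0
    return co_occurrences.get((a, b), 0) / occurrences[a]
```

A situation-retrieval system could then ignore candidates whose relevance to the current situation falls below a threshold, addressing the "millions of situations" question below.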
>> > ----
>> >
>> > -- AN
>> >
>> > 2014/04/28 15:35、Piaget Modeler <[email protected]> wrote:
>> >
>> > > How do we form situations in our mind?
>> > >
>> > > Some may be actual, hypothetical, or anticipatory.
>> > >
>> > > How would you model situations?
>> > >
>> > > Assuming that we have millions of them to choose from, how
>> > > do we ignore irrelevant situations and work with relevant ones?
>> > >
>> > > I have some theories, but I'd like to hear your thoughts.
>> > >
>> > > ~PM
>> >
>> >
>> >
>> > -------------------------------------------
>> > AGI
>> > Archives: https://www.listbox.com/member/archive/303/=now
>> > RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
>> > Modify Your Subscription: https://www.listbox.com/member/?&;
>> > Powered by Listbox: http://www.listbox.com
>>
>
>
>
>


