I would define a situation as: a particular network of interrelated
properties, which tend to collectively apply to __sets of entities that are
somewhat clustered together in time and possibly space as well__....

A situation can be used as a context for reasoning, but can also be
considered as an object of reasoning in itself...

Contextualizing semantics and reasoning relative to situations has a long
history; see the "situation semantics" of Barwise and Perry from the early
1980s....  Though the situation semanticists construed the notion of
"situation" quite broadly.


-- Ben G



On Wed, Apr 30, 2014 at 12:26 PM, Aaron Hosford via AGI <[email protected]> wrote:

> There is no difference in *what* can be treated as an object or a
> situation, but they are different *treatments* of those things. Treating
> something as an object is a holistic perspective, while treating it as a
> situation is a reductionist perspective. Each has its own advantages and
> disadvantages, depending on what you are attempting to accomplish. It would
> probably be ideal if a system could look at things automatically as
> objects, and then "zoom in" as needed, converting them to situations. This
> would lend a sort of fractal structure to problem solving, allowing
> relationships and interactions to be inspected at the most appropriate
> level of detail.
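The "zoom in" idea above might be sketched in code roughly like this (a minimal illustration of my own, not from any existing system): a thing treated as an opaque object can be expanded on demand into a situation, i.e. a network of parts and relations, whose parts are themselves objects that could be expanded further.

```python
# Sketch of "zoom in": an object is a holistic handle; its situation is
# the reductionist view -- parts plus the relations among them.  Parts
# are themselves objects, so the expansion can recurse (fractal-like).

class Obj:
    def __init__(self, name, parts=(), relations=()):
        self.name = name
        self._parts = list(parts)          # sub-objects
        self._relations = list(relations)  # (part_a, relation, part_b)

    def as_situation(self):
        """Convert the object into a situation: parts and interrelations."""
        return {"parts": self._parts, "relations": self._relations}

# Hypothetical example: a bicycle seen as an object, then zoomed into.
wheel, frame = Obj("wheel"), Obj("frame")
bike = Obj("bicycle", parts=[wheel, frame],
           relations=[("wheel", "attached_to", "frame")])

print(bike.name)                                        # holistic view
print([p.name for p in bike.as_situation()["parts"]])   # zoomed-in view
```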
>
>
>
> On Tue, Apr 29, 2014 at 5:30 AM, Jim Bromer via AGI <[email protected]> wrote:
>
>> There is no difference between an object and a situation, because a
>> situation can be treated as an 'object' (of thought) or can otherwise be
>> treated as object-like. And of course situations occur inside of
>> situations. That is true even in traditional uses of the terms.
>>
>> Jim Bromer
>>
>>
>> On Mon, Apr 28, 2014 at 8:25 PM, Piaget Modeler via AGI
>> <[email protected]> wrote:
>>
>>> Can one have situations inside situations?
>>>
>>> What's the difference between an object and a situation?
>>>
>>> Kindly advise.
>>>
>>> ~PM
>>>
>>> ------------------------------
>>> Date: Mon, 28 Apr 2014 16:50:24 -0600
>>> From: [email protected]
>>> To: [email protected]
>>> Subject: Re: [agi] Situations
>>>
>>>
>>> Greetings Telmo,
>>>  I've responded to your comments below.
>>> Are you working on an ontology based AGI approach?
>>>
>>> Stan
>>>
>>> On 04/28/2014 02:30 PM, Telmo Menezes via AGI wrote:
>>>
>>> Hi Stanley,
>>>
>>>
>>> On Mon, Apr 28, 2014 at 9:23 PM, Stanley Nilsen via AGI
>>> <[email protected]> wrote:
>>>
>>>  Hi PM,
>>>
>>> A few thoughts -
>>>
>>> One might try to come up with methods to generalize situations - put
>>> them in categories, subcategories, and sub-subcategories...  This sounds
>>> logical, but also terribly tedious.
>>>
>>> My alternative is to look at the world as sets of triggers.   A trigger
>>> initiates an action - maybe to assert a new fact.  The new fact might then
>>> trigger something else...
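The trigger-chain idea might be sketched like this (a minimal toy of my own devising, not Stan's actual design): a trigger watches for a fact; when the fact appears, the trigger's action may assert new facts, which can in turn fire other triggers.

```python
# Minimal forward-chaining "trigger" sketch: triggers fire on facts,
# actions assert new facts, and new facts may fire further triggers.

def run_triggers(initial_facts, triggers):
    """triggers: list of (condition_fact, action) pairs; an action maps
    the current fact store to a set of new facts to assert."""
    facts = set(initial_facts)
    agenda = list(facts)            # facts not yet matched against triggers
    while agenda:
        fact = agenda.pop()
        for condition, action in triggers:
            if condition == fact:
                for new_fact in action(facts):
                    if new_fact not in facts:   # avoid re-firing loops
                        facts.add(new_fact)
                        agenda.append(new_fact)
    return facts

# Hypothetical chain: "rain" triggers "wet ground", which triggers "mud".
triggers = [
    ("rain", lambda facts: {"wet ground"}),
    ("wet ground", lambda facts: {"mud"}),
]
print(run_triggers({"rain"}, triggers))  # {'rain', 'wet ground', 'mud'}
```

Real rule engines (CLIPS, Rete-based systems) do essentially this, with pattern matching instead of exact fact equality.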
>>>
>>>
>>>  Ok, but I don't see how this removes the need for an ontology.
>>>
>>> As I understand it, there are several efforts to create massive
>>> ontologies.  And we can all see the "value" of them.  The struggle is in
>>> finding the mechanisms that can cash in on that value - the magic sauce?
>>>
>>> I focus on how to become more intelligent when you start at next to
>>> nothing.  What does the bootstrap look like?  At what point does a
>>> computer begin to build its intelligence?  And what do the construction
>>> elements
>>> resemble?
>>>
>>>    It could be implicit or explicit, but you still have to be able to
>>> make more and more distinctions between triggers or actions. I tell the AI
>>> to book me a trip to Cambridge. What Cambridge, UK or USA? And then, to
>>> book the ticket I have to know that Cambridge is a town, and that I already
>>> know something about how to book travels into towns, and so on.
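The Cambridge example above can be made concrete with a tiny type hierarchy (the entity names and types here are my own illustration): two entities share the label "Cambridge", a generic booking rule applies to anything that is-a Town, but something must still pick which entity the user meant.

```python
# Tiny disambiguation sketch: entities, their types, and a parent link
# forming a small type hierarchy (Town -> Place).

TYPES = {"Cambridge, UK": "Town", "Cambridge, MA, USA": "Town"}
PARENT = {"Town": "Place"}

def candidates(name):
    """All known entities whose label starts with the given name."""
    return [entity for entity in TYPES if entity.startswith(name)]

def is_a(entity, type_name):
    """Walk up the hierarchy to test whether entity has the given type."""
    t = TYPES.get(entity)
    while t is not None:
        if t == type_name:
            return True
        t = PARENT.get(t)
    return False

print(candidates("Cambridge"))          # two matches -- still ambiguous
print(is_a("Cambridge, UK", "Place"))   # True, via Town -> Place
```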
>>>
>>>
>>> Software "assistants" are pretty popular now.  I understand Microsoft is
>>> planning one to compete with Siri.   Maybe this is the way to the future.
>>> Start out assisting and one day take over :)
>>>
>>>
>>>  You need some way to generalise, and this leads to some hierarchy of
>>> types. I bet our brain encodes a huge one. But how does it encode it?
>>>
>>>
>>>
>>> What is triggered depends on what our "understanding" makes of
>>> triggers.  Pretty much a Rube Goldberg contraption, but it gets
>>> interesting quickly.  Understanding isn't that vague; it's whatever can
>>> be coded into rules.
>>>
>>>
>>>  So you would say that a thermostat understands temperature?
>>>
>>> No, I would say that whatever is reading and setting the thermostat
>>> needs to understand the effect it wants to achieve.  The "user" chooses
>>> the thermostat based on an understanding of the outcomes that are
>>> expected.
>>>
>>> The thermostat is simply a "see" mechanism - it triggers something
>>> else.  If you wrote a rule to act like a thermostat, I would say that the
>>> rule understands an aspect of a thermostat - e.g. numbers change over time
>>> and there is a trigger point.  I don't think the rule needs to know about
>>> atomic vibrations, or the cost of a barrel of oil.
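A rule that "understands" just that one aspect of a thermostat might look like this (the set point and readings are made up for illustration): numbers change over time, and crossing a trigger point fires an action.

```python
# A thermostat-like rule: it knows only that readings change over time
# and that crossing a set point should trigger something else.  It knows
# nothing about atomic vibrations or the price of oil.

def thermostat_rule(reading, set_point, on_trigger):
    """Fire on_trigger when the reading drops below the set point."""
    if reading < set_point:
        return on_trigger(reading)
    return None

# Hypothetical usage: turn the heat on below 18 degrees.
events = []
for temp in [21.0, 19.5, 17.8, 16.2]:
    thermostat_rule(temp, 18.0, lambda t: events.append(("heat_on", t)))
print(events)  # [('heat_on', 17.8), ('heat_on', 16.2)]
```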
>>>
>>> I'm not downplaying ontology; it will be useful.  I just don't put it
>>> as first priority in building an AGI.
>>>
>>> Stan
>>>
>>>
>>>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc> |
>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>> <http://www.listbox.com>
>>>
>>
>>
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"In an insane world, the sane man must appear to be insane". -- Capt. James
T. Kirk

"Emancipate yourself from mental slavery / None but ourselves can free our
minds" -- Robert Nesta Marley



