All the machine has to do is figure out what time and place variables are
being requested, then check those against its own internal memory
records concerning the event.

That type of question does not necessarily lend itself to logic.

Is pattern "Sue" at the clinic at 4:00?
Is pattern "Jane" at the clinic at 4:00?

If both yes, did their orientations intersect?
If yes, for how long?
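Those checks could be sketched as simple lookups over machine-recorded
sighting events. Everything here is invented for illustration -- the record
format, the names, and the times are all assumptions, standing in for
whatever the machine's perceptual memory actually stores:

```python
# Hypothetical sighting records: (pattern, place, start hour, end hour).
# In a real system these would come from the machine's perceptual memory.
SIGHTINGS = [
    ("Sue",  "clinic", 15.5, 16.5),
    ("Jane", "clinic", 15.9, 16.2),
]

def present(pattern, place, hour):
    """Was this pattern observed at the place at the given hour?"""
    return any(p == pattern and loc == place and start <= hour <= end
               for p, loc, start, end in SIGHTINGS)

def overlap_hours(a, b, place):
    """For how long were both patterns at the place simultaneously?"""
    spans = {p: (s, e) for p, loc, s, e in SIGHTINGS
             if loc == place and p in (a, b)}
    if len(spans) < 2:
        return 0.0
    (s1, e1), (s2, e2) = spans[a], spans[b]
    return max(0.0, min(e1, e2) - max(s1, s2))

print(present("Sue", "clinic", 16.0))           # True with this made-up data
print(present("Jane", "clinic", 16.0))          # True with this made-up data
print(overlap_hours("Sue", "Jane", "clinic"))   # about 0.3 hours here
```

Only if both presence checks come back yes does the orientation question
even arise.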

Basically you would need pattern recognition and the machine's ability
to accurately model a 3D world.
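One small piece of that 3D modelling -- deciding whether two modelled
figures' orientations intersect -- reduces to a field-of-view test on
position and heading vectors. This is a toy sketch; the positions, the
unit-vector headings, and the 90-degree field of view are all assumptions:

```python
import math

def facing(pos_a, dir_a, pos_b, fov_deg=90.0):
    """True if B lies within A's field of view (dir_a is a unit vector)."""
    to_b = tuple(b - a for a, b in zip(pos_a, pos_b))
    dist = math.sqrt(sum(c * c for c in to_b))
    if dist == 0:
        return True  # same point: trivially visible
    # Cosine of the angle between A's heading and the direction to B.
    cos_angle = sum(d * t for d, t in zip(dir_a, to_b)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def mutually_facing(pos_a, dir_a, pos_b, dir_b):
    """Did their orientations intersect, i.e. were they facing each other?"""
    return facing(pos_a, dir_a, pos_b) and facing(pos_b, dir_b, pos_a)

# Sue at the origin facing +x, Jane two metres away facing back at her:
print(mutually_facing((0, 0, 0), (1, 0, 0), (2, 0, 0), (-1, 0, 0)))  # True
```

A real model would also have to test for occluding geometry (walls, the
crowd itself), which this sketch ignores.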

You would also need voice-filtering software. Can a machine know for
certain whether they faced each other incidentally or actually
communicated? You would need prior data on each individual's voice pattern
to filter what they said at that juncture and see whether either one or
both mentioned the other's name.
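Given utterances already attributed to speakers by that voice filtering,
the final name-mention check is simple. The transcript data below is
invented, and the speaker attribution is assumed to have already happened
upstream:

```python
# Hypothetical output of voice-filtered transcription: utterances already
# attributed to a speaker by matching against prior voice-pattern data.
TRANSCRIPT = [
    ("Sue",  "Oh hi Jane, I didn't expect to see you here."),
    ("Jane", "Sue! Small world."),
    ("Sue",  "Is this seat taken?"),
]

def mentioned(speaker, other, transcript):
    """Did `speaker` say `other`'s name at this juncture?
    (Naive substring match; a real system would need word boundaries.)"""
    return any(who == speaker and other.lower() in text.lower()
               for who, text in transcript)

def communicated(a, b, transcript):
    """Either party naming the other is taken as evidence they spoke."""
    return mentioned(a, b, transcript) or mentioned(b, a, transcript)

print(communicated("Sue", "Jane", TRANSCRIPT))  # True with this made-up data
```

Of course, people can converse without ever saying each other's names, so
this would only ever be positive evidence, never proof of the negative.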

I'm not sure if I'm on the right track, or if all of that is currently
technologically feasible, but that's how I would attempt to solve the
problem.

On Sun, Jun 17, 2012 at 5:11 AM, Mike Tintner <[email protected]> wrote:

> How do you get to A): ?
>
>
> A)
> Two people in a big crowded space are unlikely to notice each other
>
> from:
>
> "Sue and Jane were both at the clinic at 4.00 - did they see each other?"
>
> How do you know to ask questions about the clinic and Sue and Jane and
> seeing?
>
> Please outline the **logical** principles  - esp. those you think existed
> in your head about "crowded spaces", "people" and "seeing."
>
> There are none. Logic cannot observe the world. Logic has never discovered
> a single new fact about the world. Logic can only work out ramifications of
> existing observations/facts. No real world reasoner - scientist,
> technologist - et al uses logic.
>
> As I more or less indicated below, your logical propositions piggybacked
> on my imaginative observations.
>
> Logic, like justice, is literally blind.
>
>
> --------------------------------------------------
> From: "Ben Goertzel" <[email protected]>
> Sent: Saturday, June 16, 2012 11:53 PM
>
> To: "AGI" <[email protected]>
> Subject: Re: [agi] Real World Reasoning
>
>  To know
>>> whether they could have missed each other you really do have to visualise
>>> the clinic and the possible crowds and the individual figures - and
>>> "figure
>>> out" whether they could be physically apart enough not to see each other.
>>> Reasoning here depends on the brain's imaginative capacity to move
>>> figures
>>> around the world's scenes/stages and check whether they fit together or
>>> not.
>>> Checking whether logical symbols match each other isn't going to help
>>> you,
>>> and is a fundamentally secondary operation.
>>>
>>
>> No, what you describe is just one possible strategy for solving the
>> problem.
>>
>> OpenCog also has some code for this sort of simulation modeling, but it's
>> not always needed...
>>
>> If asked whether two folks in a doctor's clinic at around the same time
>> are
>> likely to bump into each other, I can certainly answer the question
>> without
>> visualizing the clinic.
>>
>> If someone asked me that question "Jane and Sally were at a certain
>> doctor's
>> clinic at around the same time; do you think they bumped into each
>> other?",
>> I wouldn't necessarily ask the questioner about the specific geometry
>> of the clinic,
>> I might just ask something general like "How big is the clinic?  How big
>> is the
>> waiting area?  How many people tend to be there at once?"
>>
>> Based on this general information, I could then reason logically about
>> the odds
>> that Jane and Sally bumped into each other.  The chain of reasoning might
>> go something like
>>
>> (A and B) implies C
>>
>> where
>>
>> A)
>> Two people in a big crowded space are unlikely to notice each other
>>
>> B)
>> The doctor's office is a big crowded space, according to what I've just
>> been told
>>
>> C)
>> Jane and Sally probably didn't notice each other when they were in the
>> doctor's office
>>
>> ...
>>
>> This is **uncertain logical reasoning** applied to commonsense knowledge.
>>
>> At some point in the history of the mind doing this reasoning, the
>> proposition
>> (A) was probably learned from experience [though it's possible to learn
>> such
>> things via language instead]....  However, just because I learned (A) via
>> visual, embodied experience at some point in my past, doesn't prevent me
>> from using (A) in the future in chains of logical reasoning where I have
>> no idea
>> what the big crowded space in question looks like.
>>
>> This is part of the power of logical reasoning: it lets us draw
>> conclusions about cases
>> where we **lack** the concrete sensory information or episodic memory to
>> use more direct methods.
>>
>> There's no contradiction between visual observation, "mind's eye"
>> simulation,
>> and logical reasoning.  These mental processes all need to work together.
>>
>> There's also no contradiction between logical reasoning, as a description
>> of what minds do sometimes, and neural network modeling as a way of
>> describing brains.  There are clear connections between logical inference
>> rules and neural net dynamics (e.g. Hebbian learning between neuronal
>> groups and uncertain term logic deduction).  Neural nets can implement
>> logical inference along with other cognitive methods, though in OpenCog
>> we have not currently chosen neural nets as our implementation tool.
>>
>> It seems you may be unaware of the unconscious uncertain logical
>> reasoning your mind
>> continually does.  But this deficit in your own introspective habits
>> or capability,
>> while unfortunate for you, shouldn't be taken as a constraint for others'
>> AGI
>> development work....
>>
>> -- Ben G
>>
>>
>>
>
>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
