Aaron wrote:  "So why does the brain clump things into objects and classes?"

This is an assumption, based on a particular paradigm. Other paradigms would 
view this differently. One paradigm says that we organize the world into value 
groups rather than classes.  So we would group "red, yellow, blue" into 
"colors", or "1, 2, 3" into "numbers".  There may not necessarily be an object 
associated with "red" or "3".  
Just another opinion.
~PM.

Date: Sun, 4 Nov 2012 22:58:20 -0600
Subject: Re: [agi] Re: Simulation for Perception, Symbols for Understanding
From: [email protected]
To: [email protected]

I'm of the opinion that if we want to deal with complexity effectively, we 
should look at existing technologies used to handle it. The Object Oriented 
paradigm is, I think, an excellent example. It is specifically designed to 
limit complexity through encapsulation, clumping related information together 
and putting it behind a firewall, of sorts. The bonus is that we already think 
in terms of objects and classes. So not only does maintaining an Object 
Oriented program become easier, because encapsulation limits the 
interconnectedness of classes, but reasoning about it becomes easier too, 
because understanding things in Object Oriented terms comes naturally to us.

So why does the brain clump things into objects and classes? I think the reason 
the Object Oriented approach works for software development carries over 
perfectly to thought and reasoning. It is simpler to categorize things and 
ignore their detailed internal workings in favor of high level summaries of 
expectations. Saying dogs can bite is saying there is a "bite" method for class 
"dog". Who cares about how a dog does its biting when we're trying to decide 
whether to go near one or not?
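The "bite method for class dog" idea can be sketched directly (a minimal, 
hypothetical sketch; the class and its internals are illustrative, not anyone's 
actual system):

```python
class Dog:
    """High-level summary of expectations: dogs can bite."""

    def __init__(self):
        # Internal detail, hidden behind the interface: nobody reasoning
        # about whether to approach needs to know jaw mechanics.
        self._jaw_strength = 42

    def bite(self, target):
        # Callers see only the summary behavior, not how it is produced.
        return f"dog bites {target}"

# Deciding whether to go near one requires only the interface:
print(Dog().bite("stranger"))
```

The point is the encapsulation: `_jaw_strength` never leaks out, so reasoning 
about a `Dog` stays at the level of "it can bite."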

Once you've shifted to an Object Oriented perspective, it's also fairly easy to 
describe a situation in those terms, and it comes out looking remarkably like 
natural language. (In many Object Oriented languages, method calls directly 
parallel English grammar: if dog.bite(me, time = past) then me.avoid(dog).) 
This is more evidence, to me, that Object Oriented is a useful metaphor for how 
our minds are organized.
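The pseudocode in that paragraph runs almost verbatim in Python (a hypothetical 
sketch; the `Person` class, the `time` parameter, and the past-bite flag are 
all illustrative):

```python
class Person:
    def __init__(self):
        self.avoiding = set()

    def avoid(self, thing):
        self.avoiding.add(thing)

class Dog:
    def __init__(self, bit_in_past=False):
        self._bit_in_past = bit_in_past

    def bite(self, target, time="present"):
        # Used here as a query: did this dog bite the target at that time?
        return self._bit_in_past if time == "past" else False

me = Person()
dog = Dog(bit_in_past=True)

# Reads remarkably like the English sentence it encodes:
if dog.bite(me, time="past"):
    me.avoid(dog)

print(dog in me.avoiding)  # True
```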

The simulation techniques these guys are using are a way to recognize the 
current behaviors of people and objects in the visual field, which can then be 
used to generate Object Oriented descriptions of the scene. (I don't have a 
reference on hand, but it has been shown, I believe, that once a person looks 
away from a scene, they typically remember only a general description, not all 
the details. It's true of me personally, at least.) Once an effective 
description has been put together in this high-level representational scheme, 
it is much easier to identify a small set of relevant possibilities and reason 
about them to put together a plan of action. Combinatorics are still present, 
but on the scale of thousands of cases instead of billions. After a plan of 
action has been generated at the abstract level, the process of generalization 
can be reversed: move back down the generalization/specialization hierarchy 
towards a detailed simulation, identify flaws in the plan there, and revise it 
iteratively through repeated generalization/specialization cycles until an 
effective plan is produced.
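The generalization/specialization loop might be sketched as follows (a toy, 
entirely hypothetical sketch: plans are lists of abstract steps, 
`EXPANSIONS` stands in for specialization, and a "flaw" is any concrete action 
the simulated scene forbids):

```python
# Specialization: each abstract step expands into concrete actions.
EXPANSIONS = {"approach": ["walk_near_dog"], "act": ["pet_dog"]}

def plan_with_refinement(scene, max_cycles=10):
    """Plan abstractly, then refine via simulation at the detailed level."""
    plan = ["approach", "act"]  # abstract plan: thousands of cases, not billions
    for _ in range(max_cycles):
        # Move down the hierarchy to a detailed simulation of the plan.
        detailed = [a for step in plan for a in EXPANSIONS[step]]
        # Identify flaws at the detailed level.
        flaws = [a for a in detailed if a in scene["forbidden"]]
        if not flaws:
            return plan  # an effective plan has been produced
        # Generalize the lesson back up: drop steps whose details failed.
        plan = [s for s in plan if not set(EXPANSIONS[s]) & set(flaws)]
    return plan

scene = {"forbidden": ["walk_near_dog"]}  # the dog bites: don't go near it
print(plan_with_refinement(scene))        # ['act']
```

The first cycle simulates the full plan, discovers `walk_near_dog` is flawed, 
revises at the abstract level, and the second cycle verifies the remaining plan.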



On Sun, Nov 4, 2012 at 6:27 AM, Jim Bromer <[email protected]> wrote:

On Tue, Oct 30, 2012 at 2:11 PM, [email protected] <[email protected]> 
wrote:


They need certainty or confidence values, and a list of possibilities, not just 
a single outcome. Then reasoning can choose which interpretation(s) make the 
most sense in context. But for their purposes -- automated video logging & 
alerts -- this works fine. Once the work is done, attaching confidence values 
and multiple possibilities should be relatively minor.
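Attaching confidence values and choosing among interpretations in context 
might look like this (a hypothetical sketch; the labels, scores, and the 
context-fit weighting are all illustrative):

```python
def best_interpretation(candidates, context):
    """Pick the interpretation whose confidence, weighted by context fit,
    is highest, rather than committing to a single recognizer outcome."""
    def score(c):
        fit = 1.0 if c["label"] in context["expected"] else 0.5
        return c["confidence"] * fit
    return max(candidates, key=score)

# Recognizer output: a list of possibilities, not a single answer.
candidates = [
    {"label": "person_running", "confidence": 0.60},
    {"label": "person_falling", "confidence": 0.55},
]
context = {"expected": {"person_falling"}}  # e.g. ice on the sidewalk

print(best_interpretation(candidates, context)["label"])  # person_falling
```

Note the raw winner (`person_running`) loses once context is taken into 
account, which is exactly why a single outcome isn't enough.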

On Sat, Nov 3, 2012 at 7:53 PM, Todor Arnaudov <[email protected]> wrote:

You don't need millions of dumb samples of "all possible cases of ..." as in 
brute-force (dumb) machine learning. If the problem is approached right, by 
finding the appropriate correlations, then there is no combinatorial 
explosion.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com