On Sun, Jan 20, 2013 at 3:15 PM, Anastasios Tsiolakidis <
[email protected]> wrote:

> Well,
>
> I don't want to sound like the other old white men here, but you are just
> twisting a form of supervised learning. I think in the long run we need to
> work out interactive models of disambiguation that include touching and
> handling and moving around, so that handling informs seeing as much as
> seeing informs handling. I can't see this happening without "features"; we
> need not have hang-ups about geometric shapes; rather, we need real-world
> statistics for the more likely combinations of shapes, colors, and
> assemblies. I guess this is already a tall order and different from edge
> detection etc.
>

My intention is just to outline an abstract framework.  In practice, we
will employ a lot of heuristics to make the process efficient.  But most
vision researchers who attempt to find shortcuts end up wasting even
more time =)

Yes, interactivity is required for AGI, but I was only focused on visual
recognition.  It would be nice to see how my model fits into an AGI
architecture; I have not spelled that out, though I have proposed an
architecture based on logic + reinforcement learning.


Now, to make things worse, an AGI will probably need to operate in real
> time in its world, so if it is not pointing a camera to a training set but
> rather, say, plays football, it will probably need to actively generate
> probabilities for different microverses (stochastic versions of its
> immediate environment) and then match them as best it can to the incoming
> data stream, rather than idly calculate some Bayesian. If you do kinda
> believe in evolution etc then you'll agree that the long lineage of
> organisms we are related to could hardly have bothered to recognize cones
> with attached triangles, but they sure as hell needed to get out of the way
> of sharks. Despite our human obsession with our ability to look at
> photographs and pigeonhole them, I have the suspicion those "survival
> metrics" never went away, and we're probably lucky they didn't. For
> example, the vast majority of cognitive and even physiological systems I
> know about, including human vision and vision physiology, are primed, as is
> well known, for change: they are looking out for new data (like you being
> able to find your mouse pointer much faster after a little vigorous
> movement, as opposed to scanning the screen).
>
> Microverses and stream processing all the way!
>

Yes, the AGI needs to build cognitive models of the world.  The cognitive
models in turn *can* generate geometric models.  These relations may help
us design a better AGI architecture...
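As an aside, one way to read the "microverses and stream processing" idea above is as a particle filter: each particle is a stochastic hypothesis (a "microverse") about the immediate environment, and each incoming datum re-weights and resamples the hypotheses, rather than idly computing a full posterior offline. This is only an illustrative sketch under that interpretation; all names and parameters here are made up for the example:

```python
import random
import math

def particle_filter_step(particles, observation, motion_noise=0.5, obs_noise=1.0):
    """One update of a bootstrap particle filter over a 1-D hidden state.

    Each particle is one 'microverse': a stochastic guess about the
    state of the immediate environment.
    """
    # 1. Predict: perturb every hypothesis with process noise.
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]

    # 2. Weight: score each hypothesis against the incoming datum
    #    (Gaussian likelihood of the observation given the hypothesis).
    weights = [math.exp(-((p - observation) ** 2) / (2 * obs_noise ** 2))
               for p in predicted]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]

    # 3. Resample: keep hypotheses in proportion to how well they match.
    return random.choices(predicted, weights=weights, k=len(particles))

# Track a slowly drifting target from a noisy observation stream.
random.seed(0)
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for t in range(50):
    true_state = 0.1 * t
    obs = true_state + random.gauss(0.0, 1.0)
    particles = particle_filter_step(particles, obs)

estimate = sum(particles) / len(particles)  # should be near the true state (4.9)
```

The point of the sketch is that the system never enumerates the full hypothesis space; it maintains a small population of live "microverses" and lets the data stream continually prune and reproduce them.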

YKY



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
