Re: [agi] general weak ai

2007-03-06 Thread Russell Wallace

On 3/6/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:


Consider voxels. Most agents don't have to deal with anything remote
at a high-precision level. A nice structure for object positions is
short-integer voxel-relative coordinates.

Something like
typedef struct voxel_struct {
    char id;
    char x;
    char y;
    char z;
} voxel;

and space as voxel voxbox[SIZE];



What simulation algorithms did you have in mind with that data structure?
There are good reasons for the typical emphasis on floating point, polygons
and more sophisticated structures: the human eye and brain track things to
better than 1/256, and so do embedded computer systems; integer arithmetic
is not necessarily faster than floating point on modern hardware (and can
even be slower); and frankly, we're nowhere near the stage at which worrying
about what kind of machine word to use is useful rather than harmful
(premature optimization is the root of all evil and all that).

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Eugen Leitl
On Tue, Mar 06, 2007 at 02:12:10PM -0500, Ben Goertzel wrote:
> For a somewhat recent discussion of issues regarding storing and 
> querying spatiotemporal objects, see:
> 
> http://citeseer.ist.psu.edu/hadjieleftheriou02efficient.html
> 
> They describe various tree data-structures that are particularly 
> efficient for storing spatiotemporal information regarding persistent
> objects that move according to nonlinear (yet still continuous) 
> trajectories.

Consider voxels. Most agents don't have to deal with anything remote
at a high-precision level. A nice structure for object positions is
short-integer voxel-relative coordinates.

Something like
typedef struct voxel_struct {
    char id;
    char x;
    char y;
    char z;
} voxel;

and space as voxel voxbox[SIZE];

The reasons for this representation are multiple; it is quite
suitable for a physical simulation, allows variable resolution by 
masking, rapid neighbourhood scan and collision detection, and
is uniquely suitable for superrealtime runs by virtue of simplicity
and embarrassing parallelism. (Also, trees on modern machines tend
to ruin your access predictability, which hurts if out of cache).
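
As an illustration of the "rapid neighbourhood scan" and masking points above,
here is a minimal sketch of how such a flat voxel array might be queried; the
SIZE value, the id == 0 "empty" convention and the coarsening mask are
assumptions made for the example, not part of the original proposal.

#include <stdlib.h>

#define SIZE 4096          /* assumed grid capacity, illustration only       */
#define COARSE_MASK 0xF8   /* assumed mask: drop 3 low bits -> 8x coarser    */

typedef struct voxel_struct {
    char id;               /* 0 is assumed to mean "empty"                   */
    char x;
    char y;
    char z;
} voxel;

static voxel voxbox[SIZE];

/* Brute-force neighbourhood scan: count occupied voxels whose voxel-relative
 * coordinates lie within +/-1 of (x, y, z) on every axis.  A plain linear
 * pass over a flat array like this is trivially parallelisable and
 * prefetch-friendly, which is the point being made above. */
static int neighbours(char x, char y, char z)
{
    int i, n = 0;
    for (i = 0; i < SIZE; i++) {
        if (voxbox[i].id == 0)
            continue;
        if (abs(voxbox[i].x - x) <= 1 &&
            abs(voxbox[i].y - y) <= 1 &&
            abs(voxbox[i].z - z) <= 1)
            n++;
    }
    return n;
}

/* Variable resolution by masking: two positions "collide" at the coarse
 * level if their masked coordinates agree on all three axes. */
static int coarse_collision(const voxel *a, const voxel *b)
{
    return ((unsigned char)a->x & COARSE_MASK) == ((unsigned char)b->x & COARSE_MASK) &&
           ((unsigned char)a->y & COARSE_MASK) == ((unsigned char)b->y & COARSE_MASK) &&
           ((unsigned char)a->z & COARSE_MASK) == ((unsigned char)b->z & COARSE_MASK);
}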
 
> If an AGI system (running on contemporary or near-future computer 
> hardware) is going to efficiently keep track of a bunch of objects in a 
> dynamic environment, it's going to have to use something like this.

A GByte node buys some 10^9 of such primitives, at about 10..20 Hz
processing rate. Trees are nice if you want to do variable-resolution
grids with plenty of open space, but I'm not sure it's worth the
hassle. Modern machines like arrays, especially cache-aligned and
prefetched. About the only way to hit the theoretical memory peak.
 
> The brain, as a massively parallel system, doesn't face the same storage 
> and access issues, though it has different problems.

If you want to be efficient, you will run into the same design
constraints as animal CNS.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820  http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




Re: [agi] general weak ai

2007-03-06 Thread J. Storrs Hall, PhD.
On Tuesday 06 March 2007 13:34, Mark Waser wrote:

> > Another, simpler example is indexing items via time and space: you need
> > to be able to submit a spatial and/or temporal region as a query and find
> > items relevant to that region of spacetime.
>
> A near query where you pin down one entity is easy (as are queries in
> one-dimensional space).  A near query where you ask which two points out of
> one hundred in a two-dimensional space are the closest is not easy at all
> (and I would *love* to hear how you would index such a thing).

This is one of the key issues I'm banging my head against at the moment :^)

For the specific problem mentioned in 2D it isn't bad -- Delaunay-triangulate 
the space and sort the edges. Both stages are O(n log n) (the degree of each point 
in a Delaunay triangulation tends to be close to constant).
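
To make the second stage concrete: assuming the Delaunay edges have already been
produced by some triangulation library (not shown, and not part of what was
posted), sorting them by length and reading off the first entry gives the closest
pair, since the closest pair of points is always joined by a Delaunay edge.  A
rough sketch:

#include <stdio.h>
#include <stdlib.h>

/* One Delaunay edge between points a and b; the edge list itself is assumed
 * to come from an existing triangulation step that is not shown here. */
typedef struct { int a, b; double d2; } edge;   /* d2 = squared edge length */

static int by_length(const void *p, const void *q)
{
    const edge *e = p, *f = q;
    return (e->d2 > f->d2) - (e->d2 < f->d2);
}

/* Sort the O(n) Delaunay edges by length; edges[0] is then the closest pair. */
static void closest_pair(edge *edges, size_t nedges)
{
    qsort(edges, nedges, sizeof *edges, by_length);
    printf("closest pair: %d %d (squared distance %g)\n",
           edges[0].a, edges[0].b, edges[0].d2);
}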

For the general case, look at "Nearest-Neighbor Methods in Learning and 
Vision: Theory and Practice"
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10931

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel

Mark Waser wrote:

Just polynomially expensive, I believe


Depends upon whether you're fully connected or not but yeah, yeah . . . .

Another, simpler example is indexing items via time and space: you 
need to be able to submit a spatial and/or temporal region as a query 
and find items relevant to that region of spacetime.


A near query where you pin down one entity is easy (as are queries in 
one-dimensional space).  A near query where you ask which two points 
out of one hundred in a two-dimensional space are the closest is not 
easy at all (and I would *love* to hear how you would index such a 
thing).




The query you have suggested is a difficult one but fortunately not a 
very useful one.

For a somewhat recent discussion of issues regarding storing and 
querying spatiotemporal objects, see:


http://citeseer.ist.psu.edu/hadjieleftheriou02efficient.html

They describe various tree data-structures that are particularly 
efficient for storing spatiotemporal information regarding persistent
objects that move according to nonlinear (yet still continuous) 
trajectories.


If an AGI system (running on contemporary or near-future computer 
hardware) is going to efficiently keep track of a bunch of objects in a 
dynamic environment, it's going to have to use something like this.


The brain, as a massively parallel system, doesn't face the same storage 
and access issues, though it has different problems.


-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Mark Waser

Just polynomially expensive, I believe


Depends upon whether you're fully connected or not but yeah, yeah . . . .

Another, simpler example is indexing items via time and space: you need to 
be able to submit a spatial and/or temporal region as a query and find 
items relevant to that region of spacetime.


A near query where you pin down one entity is easy (as are queries in 
one-dimensional space).  A near query where you ask which two points out of 
one hundred in a two-dimensional space are the closest is not easy at all 
(and I would *love* to hear how you would index such a thing).



- Original Message - 
From: "Ben Goertzel" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, March 06, 2007 12:44 PM
Subject: Re: [agi] general weak ai



Mark Waser wrote:
I like the idea of exploiting the biased statistics of actual changes to 
the grid in real situations, in order to avoid the overhead of 
constantly doing whole-grid-replacement updates...  Qualitatively this 
seems like the sort of thing the brain must be doing, and the kind of 
thing any AI system must do to cope with a rapidly changing 
environment...


The problem, of course, being that whole-grid updates are cheap for a 
parallel processing system like the brain and exponentially expensive for 
serial processing.


Just polynomially expensive, I believe ... but in concept, your point is 
correct... The brain can continually update its whole grid model of the 
perceived world cheaply in parallel, but to do comparable updating in a 
serial computer framework, fancy footwork is required...


(Just as, OTOH, the brain requires fancy footwork to do precise 
arithmetic, which is trivial for digital computers, serial or 
otherwise...)


In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to managing 
large amounts of data in real-time.


May I, again, request some details?  Thanks!
As a single example, using a graphDB type structure rather than a 
relationalDB type structure makes it relatively cheap to answer questions 
about paths, such as "What are the 5 highest-weight paths from node-set A 
to node-set B?" (where e.g. the weight of a path may be defined as the 
product of the weights of the links along the path, which may be assumed 
in [0,1]).   This kind of query is important to the control of uncertain 
inference in a large knowledge base.


Another, simpler example is indexing items via time and space: you need to 
be able to submit a spatial and/or temporal region as a query and find 
items relevant to that region of spacetime.


Also, you want to be able to submit a "subgraph template" and find all 
subgraphs that match it, e.g.


AND
   InheritanceLink $X pig
   EvaluationLink resides ($X, China)
   OR
   EvaluationLink loves ($X, butter)
   EvaluationLink likes ($X, $Y)
   EvaluationLink dislikes (George Bush, $Y)

To match this against all possible ($X, $Y) pairs is not so slow in a graph 
knowledge base like we use, but would be slow if all the component links 
were stored in a relational DB in a standard way.


[Standard disclaimer: nearly all nodes in Novamente are not named with 
English names nor do they represent English-language words or concepts. 
Examples like the above are given only for communicative transparency, but 
are nonrepresentative and in a way misleading.  The concept of "pig" in 
Novamente may be encapsulated in a single node for some purposes, but is 
more foundationally represented as a dynamic
configuration of nodes and links that habitually become important together 
in certain contexts...]


There are plenty other examples too, but that will suffice for now...

-- Ben





- Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, March 06, 2007 11:42 AM
Subject: Re: [agi] general weak ai



Bob Mottram wrote:


What attracted me about the DP method was that it's less ad-hoc than 
landmark based systems, but the most attractive feature is of course 
the linear scaling which is really essential when dealing with large 
amounts of data.

Yeah...

In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to managing 
large amounts of data in real-time.
I like the idea of exploiting the biased statistics of actual changes to 
the grid in real situations, in order to avoid the overhead of 
constantly doing whole-grid-replacement updates...  Qualitatively this 
seems like the sort of thing the brain must be doing, and the kind of 
thing any AI system must do to cope with a rapidly changing 
environment...


Ben





On 06/03/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote:


Thanks, this stuff is cool!  The DP-SLAM data structure trick may
potentially be useful within Novamente at some point...

ben





Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel

Mark Waser wrote:
I like the idea of exploiting the biased statistics of actual changes 
to the grid in real situations, in order to avoid the overhead of 
constantly doing whole-grid-replacement updates...  Qualitatively 
this seems like the sort of thing the brain must be doing, and the 
kind of thing any AI system must do to cope with a rapidly changing 
environment...


The problem, of course, being that whole-grid updates are cheap for a 
parallel processing system like the brain and exponentially expensive 
for serial processing.


Just polynomially expensive, I believe ... but in concept, your point is 
correct... The brain can continually update its whole grid model of the 
perceived world cheaply in parallel, but to do comparable updating in a 
serial computer framework, fancy footwork is required...


(Just as, OTOH, the brain requires fancy footwork to do precise 
arithmetic, which is trivial for digital computers, serial or otherwise...)


In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to 
managing large amounts of data in real-time.


May I, again, request some details?  Thanks!
As a single example, using a graphDB type structure rather than a 
relationalDB type structure makes it relatively cheap to answer 
questions about paths, such as "What are the 5 highest-weight paths from 
node-set A to node-set B?" (where e.g. the weight of a path may be 
defined as the product of the weights of the links along the path, which 
may be assumed in [0,1]).   This kind of query is important to the 
control of uncertain inference in a large knowledge base.
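
This is not Novamente's actual indexing code, but a rough sketch of why the
product-of-weights structure makes a depth-limited graph traversal natural:
since every link weight is in [0,1], a path's weight can only shrink as the path
grows, so a bounded depth-first enumeration that keeps the K best products seen
so far is enough for illustration.  MAXN, MAXLEN and the adjacency-matrix
representation are assumptions made for the example.

#include <string.h>

#define MAXN   64             /* assumed graph size, illustration only       */
#define MAXLEN 6              /* assumed cut-off on path length              */
#define TOPK   5

static double w[MAXN][MAXN];  /* link weights in [0,1]; 0.0 means "no link"  */
static int    in_b[MAXN];     /* membership flags for target node-set B      */
static double best[TOPK];     /* the TOPK largest path weights found so far  */

/* Keep the running list of the TOPK best path weights, largest first. */
static void record(double weight)
{
    int i, j;
    for (i = 0; i < TOPK; i++)
        if (weight > best[i]) {
            for (j = TOPK - 1; j > i; j--)
                best[j] = best[j - 1];
            best[i] = weight;
            return;
        }
}

/* Depth-first enumeration of simple paths; only the weights are kept, for
 * brevity.  A node that is in both A and B counts as a trivial path here. */
static void dfs(int node, double weight, int depth, int *visited)
{
    int next;
    if (in_b[node])
        record(weight);
    if (depth >= MAXLEN)
        return;
    visited[node] = 1;
    for (next = 0; next < MAXN; next++)
        if (!visited[next] && w[node][next] > 0.0)
            dfs(next, weight * w[node][next], depth + 1, visited);
    visited[node] = 0;
}

/* Collect the TOPK best path weights starting from every node in node-set A. */
static void top_paths_from(const int *set_a, int na)
{
    int visited[MAXN], i;
    for (i = 0; i < TOPK; i++)
        best[i] = 0.0;
    for (i = 0; i < na; i++) {
        memset(visited, 0, sizeof visited);
        dfs(set_a[i], 1.0, 0, visited);
    }
}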


Another, simpler example is indexing items via time and space: you need 
to be able to submit a spatial and/or temporal region as a query and 
find items relevant to that region of spacetime.
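
As a sketch of the simplest possible such index (a uniform grid over one spatial
axis plus time, fixed-size buckets, nonnegative in-range coordinates -- all of
these constants and simplifications are assumptions for the example, not a
description of Novamente), insertion and region queries reduce to arithmetic on
cell indices:

#define XCELLS 32          /* assumed spatial resolution of the index        */
#define TSLOTS 32          /* assumed temporal resolution of the index       */
#define PER_BUCKET 16      /* assumed fixed bucket capacity                  */

typedef struct { int id; double x, t; } item;

/* Each bucket holds the ids of the items falling into one (space, time) cell. */
static int bucket[XCELLS][TSLOTS][PER_BUCKET];
static int count[XCELLS][TSLOTS];

static void insert(const item *it, double cell, double slot)
{
    int cx = (int)(it->x / cell), ct = (int)(it->t / slot);
    if (count[cx][ct] < PER_BUCKET)
        bucket[cx][ct][count[cx][ct]++] = it->id;
}

/* Report every item whose cell overlaps the query rectangle [x0,x1] x [t0,t1];
 * "relevant to the region" is approximated at cell granularity, which is
 * usually what a first-pass index is asked to do. */
static void query(double x0, double x1, double t0, double t1,
                  double cell, double slot, void (*report)(int id))
{
    int cx, ct, i;
    for (cx = (int)(x0 / cell); cx <= (int)(x1 / cell) && cx < XCELLS; cx++)
        for (ct = (int)(t0 / slot); ct <= (int)(t1 / slot) && ct < TSLOTS; ct++)
            for (i = 0; i < count[cx][ct]; i++)
                report(bucket[cx][ct][i]);
}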


Also, you want to be able to submit a "subgraph template" and find all 
subgraphs that match it, e.g.


AND
   InheritanceLink $X pig
   EvaluationLink resides ($X, China)
   OR
   EvaluationLink loves ($X, butter)
   EvaluationLink likes ($X, $Y)
   EvaluationLink dislikes (George Bush, $Y)

To match this against all possible ($X, $Y) pairs is not so slow in a 
graph knowledge base like we use, but would be slow if all the component 
links were stored in a relational DB in a standard way.


[Standard disclaimer: nearly all nodes in Novamente are not named with 
English names nor do they represent English-language words or concepts.  
Examples like the above are given only for communicative transparency, 
but are nonrepresentative and in a way misleading.  The concept of "pig" 
in Novamente may be encapsulated in a single node for some purposes, but 
is more foundationally represented as a dynamic
configuration of nodes and links that habitually become important 
together in certain contexts...]


There are plenty other examples too, but that will suffice for now...

-- Ben





- Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, March 06, 2007 11:42 AM
Subject: Re: [agi] general weak ai



Bob Mottram wrote:


What attracted me about the DP method was that it's less ad-hoc than 
landmark based systems, but the most attractive feature is of course 
the linear scaling which is really essential when dealing with large 
amounts of data.

Yeah...

In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to 
managing large amounts of data in real-time.
I like the idea of exploiting the biased statistics of actual changes 
to the grid in real situations, in order to avoid the overhead of 
constantly doing whole-grid-replacement updates...  Qualitatively 
this seems like the sort of thing the brain must be doing, and the 
kind of thing any AI system must do to cope with a rapidly changing 
environment...


Ben





On 06/03/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote:


Thanks, this stuff is cool!  The DP-SLAM data structure trick may
potentially be useful within Novamente at some point...

ben


 





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Mark Waser
I like the idea of exploiting the biased statistics of actual changes to 
the grid in real situations, in order to avoid the overhead of constantly 
doing whole-grid-replacement updates...  Qualitatively this seems like the 
sort of thing the brain must be doing, and the kind of thing any AI system 
must do to cope with a rapidly changing environment...


The problem, of course, being that whole-grid updates are cheap for a 
parallel processing system like the brain and exponentially expensive for 
serial processing.


In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to managing 
large amounts of data in real-time.


May I, again, request some details?  Thanks!

- Original Message - 
From: "Ben Goertzel" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, March 06, 2007 11:42 AM
Subject: Re: [agi] general weak ai



Bob Mottram wrote:


What attracted me about the DP method was that it's less ad-hoc than 
landmark based systems, but the most attractive feature is of course the 
linear scaling which is really essential when dealing with large amounts 
of data.

Yeah...

In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to managing 
large amounts of data in real-time.
I like the idea of exploiting the biased statistics of actual changes to 
the grid in real situations, in order to avoid the overhead of constantly 
doing whole-grid-replacement updates...  Qualitatively this seems like the 
sort of thing the brain must be doing, and the kind of thing any AI system 
must do to cope with a rapidly changing environment...


Ben





On 06/03/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote:


Thanks, this stuff is cool!  The DP-SLAM data structure trick may
potentially be useful within Novamente at some point...

ben







-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel

Bob Mottram wrote:


What attracted me about the DP method was that it's less ad-hoc than 
landmark based systems, but the most attractive feature is of course 
the linear scaling which is really essential when dealing with large 
amounts of data.

Yeah...

In other contexts, we have paid a lot of attention to funky 
indexing/updating methods in Novamente, as they are critical to managing 
large amounts of data in real-time. 

I like the idea of exploiting the biased statistics of actual changes to 
the grid in real situations, in order to avoid the overhead of 
constantly doing whole-grid-replacement updates...  Qualitatively this 
seems like the sort of thing the brain must be doing, and the kind of 
thing any AI system must do to cope with a rapidly changing environment...


Ben





On 06/03/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote:


Thanks, this stuff is cool!  The DP-SLAM data structure trick may
potentially be useful within Novamente at some point...

ben





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Bob Mottram

What attracted me about the DP method was that it's less ad-hoc than
landmark based systems, but the most attractive feature is of course the
linear scaling which is really essential when dealing with large amounts of
data.



On 06/03/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:


Thanks, this stuff is cool!  The DP-SLAM data structure trick may
potentially be useful within Novamente at some point...

ben



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel

Bob Mottram wrote:


I don't have an overview document as such, but I'm adding stuff into 
the wiki as needed.  Actually there is very little which is unique 
about my approach.  Almost all of the ideas which I'm using originated 
elsewhere, and many of them have been around for 20 years or so.  All 
I'm really doing is bringing them together into a single system which 
hopefully will result in something which you can install and run on a 
robot in an easy way.


Occupancy grids have been around for a long time, and they're a 
fundamentally probabilistic method.  What could be said to be slightly 
new is that both the pose of the robot and the mapping process itself 
are considered to be uncertain.  The robot's perambulations 
through space can be considered to form a tree-like structure, which 
is occasionally pruned as new data comes in.  For more details of this 
approach see:


  http://www.cs.duke.edu/~parr/dpslam/ 



Thanks, this stuff is cool!  The DP-SLAM data structure trick may 
potentially be useful within Novamente at some point...


ben

The way that the tree of possibilities collapses over time reminds me 
rather of quantum decoherence.  The robot is effectively traversing 
multiple possible universes, only falling into a single one of them 
when enough observations have been made.


To make this kind of vision system work, special attention needs to be 
paid to the sensor models, so that the probability density functions 
accurately reflect the characteristics of the sensors being used (in 
this case stereo cameras).




On 06/03/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote:



Hi Bob,

Is there a document somewhere describing what is unique about your
approach?

Novamente doesn't involve real robotics right now but the design does
involve occupancy grids and "probabilistic simulated robotics", so
your
ideas are of some practical interest to me...

Ben

Bob Mottram wrote:
>
> Some of the 3D reconstruction stuff being done now is quite
impressive
> (I'm thinking of things like photosynth, monoSLAM and Moravec's
stereo
> vision) and this kind of capability to take raw sensor data and turn
> it into useful 3D models which may then be cogitated upon would be a
> basic prerequisite for any AGI operating in the real world.  I'm
sure
> that these and other similar methods are soon destined to fall into
> the bracket of being "no longer AI", instead being considered as
just
> another computational tool.
>
> In the past I've tried many ad-hoc vision experiments, which would
> certainly come under the "narrow AI" label, but I now no longer
> believe that this kind of approach is a good way to
proceed.  Far more
> straightforward, albeit more computationally demanding, techniques
> give a general solution to the vision problem which is not highly
> specific to any particular kind of domain or environment.  Under
this
> system applications which are often treated separately, such as
visual
> navigation and object recognition, actually turn out to be the same
> algorithm deployed on different spatial scales (maybe a classic case
> of physics envy!).
>
> My own computer vision project can be found here
> http://code.google.com/p/sentience/
>
>
>
> On 06/03/07, *Andrew Babian* <[EMAIL PROTECTED]> wrote:
>
Listening to a computer vision lecture, I'm impressed at how much
> is being
> done now with very domain specific techniques.  They can take
> general pictures
> from different viewpoints, and recreate a 3-d representation of
> the world.
> This is similar to the sort of stereo reconstruction that people
> do.  We are
> perhaps better optimized to our exact hardware, and we can still
> use a lot
> more general real world knowledge about things than current
> computer vision
> techniques, but they get pretty impressive results with just
data
> crunching
> sort of techniques.  And they make no pretension to being AI at
> all, but they
> fit the classic definition of "weak AI" as something that
would take
> intelligence for a human to do (though maybe it's a kind of
animal
> intelligence).  If I can coin a term, it's one of those
"post AI"
> fields that
> maybe used to be thought of as AI, but it no longer is, like
speech
> recognition, I guess.  So what I'm wondering is how much people
> who are
> interested in general AI want to go back and find general AI
> solutions to post
> AI problems.  The example comes to mind of Jeff Hawkins 

Re: [agi] general weak ai

2007-03-06 Thread Bob Mottram

I don't have an overview document as such, but I'm adding stuff into the
wiki as needed.  Actually there is very little which is unique about my
approach.  Almost all of the ideas which I'm using originated elsewhere, and
many of them have been around for 20 years or so.  All I'm really doing is
bringing them together into a single system which hopefully will result in
something which you can install and run on a robot in an easy way.

Occupancy grids have been around for a long time, and they're a
fundamentally probabilistic method.  What could be said to be slightly new
is that both the pose of the robot and the mapping process itself are
considered to be uncertain.  The robot's perambulations through space can be
considered to form a tree-like structure, which is occasionally pruned as
new data comes in.  For more details of this approach see:

 http://www.cs.duke.edu/~parr/dpslam/
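
For readers unfamiliar with the technique: the "fundamentally probabilistic" part
of a plain single-hypothesis occupancy grid boils down to a log-odds update per
cell, as sketched below; DP-SLAM's contribution is maintaining many such maps
cheaply, one per pose hypothesis, which this sketch does not attempt.  GRID and
the sensor-model values are assumptions for the example.

#include <math.h>

#define GRID 256           /* assumed map resolution, illustration only      */

/* Log-odds occupancy grid: each cell stores log(p / (1 - p)) so that repeated
 * sensor evidence can be folded in by simple addition. */
static double logodds[GRID][GRID];

/* Fold one reading into a cell.  l_meas is the log-odds the sensor model
 * assigns to "occupied" given this reading: positive for a hit, negative for
 * a cell the beam passed through. */
static void update_cell(int x, int y, double l_meas)
{
    logodds[x][y] += l_meas;
}

/* Recover the occupancy probability of a cell when it is queried. */
static double occupancy(int x, int y)
{
    return 1.0 - 1.0 / (1.0 + exp(logodds[x][y]));
}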

The way that the tree of possibilities collapses over time reminds me rather
of quantum decoherence.  The robot is effectively traversing multiple
possible universes, only falling into a single one of them when enough
observations have been made.

To make this kind of vision system work, special attention needs to be paid
to the sensor models, so that the probability density functions accurately
reflect the characteristics of the sensors being used (in this case stereo
cameras).



On 06/03/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:



Hi Bob,

Is there a document somewhere describing what is unique about your
approach?

Novamente doesn't involve real robotics right now but the design does
involve occupancy grids and "probabilistic simulated robotics", so your
ideas are of some practical interest to me...

Ben

Bob Mottram wrote:
>
> Some of the 3D reconstruction stuff being done now is quite impressive
> (I'm thinking of things like photosynth, monoSLAM and Moravec's stereo
> vision) and this kind of capability to take raw sensor data and turn
> it into useful 3D models which may then be cogitated upon would be a
> basic prerequisite for any AGI operating in the real world.  I'm sure
> that these and other similar methods are soon destined to fall into
> the bracket of being "no longer AI", instead being considered as just
> another computational tool.
>
> In the past I've tried many ad-hoc vision experiments, which would
> certainly come under the "narrow AI" label, but I now no longer
> believe that this kind of approach is a good way to proceed.  Far more
> straightforward, albeit more computationally demanding, techniques
> give a general solution to the vision problem which is not highly
> specific to any particular kind of domain or environment.  Under this
> system applications which are often treated separately, such as visual
> navigation and object recognition, actually turn out to be the same
> algorithm deployed on different spatial scales (maybe a classic case
> of physics envy!).
>
> My own computer vision project can be found here
> http://code.google.com/p/sentience/
>
>
>
> On 06/03/07, *Andrew Babian* <[EMAIL PROTECTED]> wrote:
>
> Listening to a computer vision lecture, I'm impressed at how much
> is being
> done now with very domain specific techniques.  They can take
> general pictures
> from different viewpoints, and recreate a 3-d representation of
> the world.
> This is similar to the sort of stereo reconstruction that people
> do.  We are
> perhaps better optimized to our exact hardware, and we can still
> use a lot
> more general real world knowledge about things than current
> computer vision
> techniques, but they get pretty impressive results with just data
> crunching
> sort of techniques.  And they make no pretension to being AI at
> all, but they
> fit the classic definition of "weak AI" as something that would take
> intelligence for a human to do (though maybe it's a kind of animal
> intelligence).  If I can coin a term, it's one of those "post AI"
> fields that
> maybe used to be thought of as AI, but it no longer is, like speech
> recognition, I guess.  So what I'm wondering is how much people
> who are
> interested in general AI want to go back and find general AI
> solutions to post
> AI problems.  The example comes to mind of Jeff Hawkins who seemed
> like he was
> trying to work on visual recognition tasks for his model.  I got
> his team's
> demo working in Matlab finally (actually because I am doing this
> computer
> vision thing and I had an excuse to get a copy of matlab).  Some
> sort of
> graphic pattern recognizer.  I didn't look too far into it, but I
> would almost
> be sure that it pales compared to real CV techniques.  Sure, there
> is a
> question of how to generally handle knowledge problems, but it may
> just be
> that the best way to handle AI is just to individually find t

Re: [agi] general weak ai

2007-03-06 Thread Pei Wang

On 3/6/07, Andrew Babian <[EMAIL PROTECTED]> wrote:


Well what is intelligence if not a collection of tools?


To me, this widely accepted attitude towards AI is a major reason for
the lack of progress in AGI in the past decades.

A metaphor I have been using is: while computer science and the
so-called "AI" have been building tools, real AI (what we call "AGI"
here) should build "hands", which are more general, flexible, and
efficient than any problem-specific tool.

For any given specific task, it is always possible to build a tool
that works better than bare hands. For the same reason, a truly
intelligent system doesn't necessarily provide the best solution to a
given domain problem --- to pursue that kind of goal always leads AI
back to traditional computer science, both in theory and in practice.
That is why "as soon as a field becomes mature, it is no longer
considered AI anymore".

A more detailed discussion is in
http://nars.wang.googlepages.com/wang.WhatAIShouldBe.pdf

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel



Well what is intelligence if not a collection of tools?  One of the hardest
problems is coming up with such tools that are generalizable across domains,
but can't that just be a question of finding more tools that work well in a
computer environment, instead of just finding the "ultimate principle". 
The "ultimate principles" of intelligence mostly have to do with the 
emergent structures and
dynamics that arise in a complex system, allowing this system to model 
and predict its own

overall coordinated behavior patterns...

These structures/dynamics include things we sloppily describe with words 
like "self", "will"

and "attention" ...

Thinking of a mind as a toolkit is misleading.  A mind must contain a 
collection of tools that
synergize together so as to give rise to the appropriate high-level 
emergent structures and dynamics.
The tools are there, but focusing on their individual and isolated 
functionality is not terribly

productive in an AGI context.

The brain, for example, has some kick-ass specialized tools, such as its 
face recognition
algorithms.  But these are not the essence of its intelligence.  Some of 
its weaker tools, such as
its very sloppy algorithms for reasoning under uncertainty, are actually 
more critical to its
general intelligence, as they have subtler and more thoroughgoing 
synergies with other tools

that help give rise to important emergent structures/dynamics.

-- Ben Goertzel


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Andrew Babian
On Tue, 6 Mar 2007 09:49:47 +, Bob Mottram wrote 
> Some of the 3D reconstruction stuff being done now is quite impressive (I'm
thinking of things like photosynth, monoSLAM and Moravec's stereo vision) and
this kind of capability to take raw sensor data and turn it into useful 3D
models which may then be cogitated upon would be a basic prerequisite for any
AGI operating in the real world.  I'm sure that these and other similar
methods are soon destined to fall into the bracket of being "no longer AI",
instead being considered as just another computational tool. 
> 
> In the past I've tried many ad-hoc vision experiments, which would certainly
come under the "narrow AI" label, but I now no longer believe that this kind
of approach is a good way to proceed.  Far more straightforward, albeit more
computationally demanding, techniques give a general solution to the vision
problem which is not highly specific to any particular kind of domain or
environment.  Under this system applications which are often treated
separately, such as visual navigation and object recognition, actually turn
out to be the same algorithm deployed on different spatial scales (maybe a
classic case of physics envy!). 


Well what is intelligence if not a collection of tools?  One of the hardest
problems is coming up with such tools that are generalizable across domains,
but can't that just be a question of finding more tools that work well in a
computer environment, instead of just finding the "ultimate principle".  Ideas
like GOFAI symbolic manipulation and Bayesian decision networks seem to
me to naturally just fit into the idea of part of an AI kit, but I personally
would want this kit to be more compatible with the post AI techniques. 
Another example, that someone is using "AI" is often recognized by them using
some kind of search instead of some algorithm, like gradient ascent or
resolution, but there's no reason why a system can't throw multiple
approaches at a problem, and maybe fall back on some general search when
needed.  And maybe that's why I think an AI's proper world is controlling a
computer (ie. a PC), so it can just run programs whenever it needs to get
things done.

andi

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel


Hi Bob,

Is there a document somewhere describing what is unique about your approach?

Novamente doesn't involve real robotics right now but the design does 
involve occupancy grids and "probabilistic simulated robotics", so your 
ideas are of some practical interest to me...


Ben

Bob Mottram wrote:


Some of the 3D reconstruction stuff being done now is quite impressive 
(I'm thinking of things like photosynth, monoSLAM and Moravec's stereo 
vision) and this kind of capability to take raw sensor data and turn 
it into useful 3D models which may then be cogitated upon would be a 
basic prerequisite for any AGI operating in the real world.  I'm sure 
that these and other similar methods are soon destined to fall into 
the bracket of being "no longer AI", instead being considered as just 
another computational tool.


In the past I've tried many ad-hoc vision experiments, which would 
certainly come under the "narrow AI" label, but I now no longer 
believe that this kind of approach is a good way to proceed.  Far more 
straightforward, albeit more computationally demanding, techniques 
give a general solution to the vision problem which is not highly 
specific to any particular kind of domain or environment.  Under this 
system applications which are often treated separately, such as visual 
navigation and object recognition, actually turn out to be the same 
algorithm deployed on different spatial scales (maybe a classic case 
of physics envy!).


My own computer vision project can be found here 
http://code.google.com/p/sentience/




On 06/03/07, *Andrew Babian* <[EMAIL PROTECTED]> wrote:


Listening to a computer vision lecture, I'm impressed at how much
is being
done now with very domain specific techniques.  They can take
general pictures
from different viewpoints, and recreate a 3-d representation of
the world.
This is similar to the sort of stereo reconstruction that people
do.  We are
perhaps better optimized to our exact hardware, and we can still
use a lot
more general real world knowledge about things than current
computer vision
techniques, but they get pretty impressive results with just data
crunching
sort of techniques.  And they make no pretension to being AI at
all, but they
fit the classic definition of "weak AI" as something that would take
intelligence for a human to do (though maybe it's a kind of animal
intelligence).  If I can coin a term, it's one of those "post AI"
fields that
maybe used to be thought of as AI, but it no longer is, like speech
recognition, I guess.  So what I'm wondering is how much people
who are
interested in general AI want to go back and find general AI
solutions to post
AI problems.  The example comes to mind of Jeff Hawkins who seemed
like he was
trying to work on visual recognition tasks for his model.  I got
his team's
demo working in Matlab finally (actually because I am doing this
computer
vision thing and I had an excuse to get a copy of matlab).  Some
sort of
graphic pattern recognizer.  I didn't look too far into it, but I
would almost
be sure that it pales compared to real CV techniques.  Sure, there
is a
question of how to generally handle knowledge problems, but it may
just be
that the best way to handle AI is just to individually find the
best ways for
computers to solve the different problems posed to
intelligences.  That's
actually one of the ideas that I seem to get from Minsky and his
idea of
"resources", formerly called "agents".  And I also am always
concerned about
the tendency towards physics envy among AI folk, that there need
to be simple
unified principles underneath intelligence.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] Mind mapping software

2007-03-06 Thread BillK

I thought this free software opportunity might be of interest to some here.

ConceptDraw MINDMAP 4 is a mind-mapping and team brainstorming tool
with extended drawing capabilities.

Use it to efficiently organize your ideas and tasks with the help of
the Mind Mapping technique. ConceptDraw MINDMAP 4 supports extra file
formats and multi-page documents. It offers a rich collection of
pre-drawn shapes. ConceptDraw MINDMAP 4 has extended capabilities for
creating web sites and PowerPoint presentations.

This software is temporarily available for free.
** But you must download and install it within the next 19 hours. **

Restrictions for the free edition.
1. No free technical support
2. No free upgrades to future versions
3. Strictly non-commercial usage

Normal price 119 USD.




BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] general weak ai

2007-03-06 Thread Bob Mottram

Some of the 3D reconstruction stuff being done now is quite impressive (I'm
thinking of things like photosynth, monoSLAM and Moravec's stereo vision)
and this kind of capability to take raw sensor data and turn it into useful
3D models which may then be cogitated upon would be a basic prerequisite for
any AGI operating in the real world.  I'm sure that these and other similar
methods are soon destined to fall into the bracket of being "no longer
AI", instead being considered as just another computational tool.

In the past I've tried many ad-hoc vision experiments, which would certainly
come under the "narrow AI" label, but I now no longer believe that this kind
of approach is a good way to proceed.  Far more straightforward, albeit more
computationally demanding, techniques give a general solution to the vision
problem which is not highly specific to any particular kind of domain or
environment.  Under this system applications which are often treated
separately, such as visual navigation and object recognition, actually turn
out to be the same algorithm deployed on different spatial scales (maybe a
classic case of physics envy!).

My own computer vision project can be found here
http://code.google.com/p/sentience/



On 06/03/07, Andrew Babian <[EMAIL PROTECTED]> wrote:


Listening to a computer vision lecture, I'm impressed at how much is
being
done now with very domain specific techniques.  They can take general
pictures
from different viewpoints, and recreate a 3-d representation of the world.
This is similar to the sort of stereo reconstruction that people do.  We
are
perhaps better optimized to our exact hardware, and we can still use a lot
more general real world knowledge about things than current computer
vision
techniques, but they get pretty impressive results with just data
crunching
sort of techniques.  And they make no pretension to being AI at all, but
they
fit the classic definition of "weak AI" as something that would take
intelligence for a human to do (though maybe it's a kind of animal
intelligence).  If I can coin a term, it's one of those "post AI" fields
that
maybe used to be thought of as AI, but it no longer is, like speech
recognition, I guess.  So what I'm wondering is how much people who are
interested in general AI want to go back and find general AI solutions to
post
AI problems.  The example comes to mind of Jeff Hawkins who seemed like he
was
trying to work on visual recognition tasks for his model.  I got his
team's
demo working in Matlab finally (actually because I am doing this computer
vision thing and I had an excuse to get a copy of matlab).  Some sort of
graphic pattern recognizer.  I didn't look too far into it, but I would
almost
be sure that it pales compared to real CV techniques.  Sure, there is a
question of how to generally handle knowledge problems, but it may just be
that the best way to handle AI is just to individually find the best ways
for
computers to solve the different problems posed to intelligences.  That's
actually one of the ideas that I seem to get from Minsky and his idea of
"resources", formerly called "agents".  And I also am always concerned
about
the tendency towards physics envy among AI folk, that there need to be
simple
unified principles underneath intelligence.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303