On 23/10/06, Neil H. <[EMAIL PROTECTED]> wrote:
> I'm also pretty surprised that they haven't done anything major with
> their vSLAM tech:
> http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1570091


Evolution really failed to capitalise upon their early success.  One of their biggest mistakes was to make their API prohibitively expensive, so few people have ever used it.  A command line API was supplied by default, but it was dreadful and had major limitations that users of the robots have long complained about.

The vSLAM technology is a monocular SLAM method which works to an extent, but when it fails it fails catastrophically.  They did experiment with stereo SLAM last year (there is a paper somewhere online about that).  Stereo gives much better accuracy and doesn't require the robot to travel at least one metre before it can localise, but there are some fundamental issues with using things like SIFT features for stereo matching which they probably didn't realise.
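To illustrate the difference, here is a rough Python sketch (the focal length, baseline and principal point are made-up calibration numbers, not anything from their system): with a calibrated, rectified stereo pair, metric depth drops straight out of disparity in a single frame, whereas a monocular system first has to translate to create a baseline.

import numpy as np

# Why stereo localises immediately: metric depth follows from disparity
# (Z = f * B / d) in a single frame, with no camera translation needed.
# f, B, cx, cy are hypothetical calibration values for illustration only.

f = 700.0               # focal length in pixels (assumed)
B = 0.12                # stereo baseline in metres (assumed)
cx, cy = 320.0, 240.0   # principal point (assumed)

def triangulate(xl, xr, y):
    """Back-project a feature matched in both rectified images
    (e.g. a SIFT keypoint) into metric camera coordinates."""
    d = xl - xr          # disparity in pixels
    if d <= 0.0:
        return None      # bad match: disparity must be positive
    Z = f * B / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

print(triangulate(352.0, 331.0, 260.0))   # a point about 4 m away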


>> I think their stuff was also licensed to Sony for use on their
>> AIBO, before Sony axed their robotics products.

> Sony licensed the tech, but I think they only used it so that AIBO
> could visually recognize pre-printed patterns on cards, which would
> signal the AIBO to dance, return to the charging station, etc. SIFT is
> IMHO overkill for that kind of thing, and it's a pity they didn't do
> anything more interesting with it.


It's a shame they ditched AIBO and the other robots they had in development.  AIBO users were rather unhappy about that.  Perhaps some other company will buy the rights.


> Perhaps. To play devil's advocate, how well do you think a stereo
> vision system would actually work for recovering the 3D structure of
> a home environment? It seems that distinctive features in the home
> tend to be few and far between. Of course, the regions between
> distinctive features tend to be planar surfaces, so perhaps it isn't
> too bad.


Well, this is exactly what I'm (unofficially) working on now.  From the results I have at the moment I can say with confidence that it will be possible to navigate a robot around a home environment using a pair of stereo cameras, with the robot's position remaining within a 7 cm tolerance.  That 7 cm is just a raw localisation figure; after Kalman filtering and sensor fusion with odometry the accuracy should be much better.  You might think that there are not many features on walls, but even in environments which people consider to be "blank" there are often small imperfections or shading gradients which stereo algorithms can pick up.  In real life few surfaces are perfectly uniform.
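For what it's worth, the fusion step I have in mind is just the standard predict/update cycle.  A toy one-dimensional sketch in Python (the odometry noise variance is an assumption for illustration; only the 0.07 m measurement sigma corresponds to the 7 cm figure above):

def kalman_step(x, P, u, z, q=0.02**2, r=0.07**2):
    """One predict/update cycle.
    x, P : previous position estimate and its variance
    u    : odometry displacement since the last step
    z    : raw stereo localisation fix
    q, r : process (odometry) and measurement (stereo) noise variances
    """
    # Predict: dead-reckon forward on odometry.
    x_pred = x + u
    P_pred = P + q
    # Update: blend in the stereo fix, weighted by relative uncertainty.
    K = P_pred / (P_pred + r)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                          # start with a vague prior
for u, z in [(0.10, 0.11), (0.10, 0.19), (0.10, 0.31)]:
    x, P = kalman_step(x, P, u, z)
print(x, P ** 0.5)                       # fused sigma is already below 7 cm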

With good localisation performance, high quality mapping becomes possible.  I can run the stereo algorithms at various levels of detail, and use traditional occupancy grid methods (with a few tweaks) to build up evidence in a probabilistic fashion.  The idea at the moment is to have the localisation algorithms running in real time on low-res grids, while a separate high quality model of the environment is built up more gradually in a high resolution grid by a low priority background task.

Once you have a good quality grid model, it's then quite straightforward to detect things like walls and furniture, and to simplify the data down to a more efficient representation similar to what you might find in a game or an AGI sim.  You can also use the grid model in exactly the same way that 2D background subtraction systems work (except in 3D) in order to detect changes within the environment.
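Concretely, the evidence accumulation I mean is the usual log-odds occupancy update, and the change detection is a per-cell comparison against the slowly-built reference model.  A rough Python sketch (grid sizes, increments and thresholds are illustrative placeholders, not my actual parameters):

import numpy as np

# Log-odds occupancy grid: each stereo range reading nudges cells
# toward occupied or free; clamping prevents over-confidence.
L_HIT, L_MISS = 0.85, -0.4       # log-odds increments per observation
L_MIN, L_MAX = -4.0, 4.0         # clamp limits

grid = np.zeros((64, 64, 32))    # low-res grid for real-time localisation

def update_cell(grid, idx, hit):
    """Accumulate one stereo observation into a grid cell."""
    grid[idx] = np.clip(grid[idx] + (L_HIT if hit else L_MISS),
                        L_MIN, L_MAX)

def occupancy_prob(g):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-g))

def changed(reference, current, threshold=0.3):
    """3D analogue of background subtraction: flag cells whose
    occupancy differs from the slowly-built reference model."""
    return np.abs(occupancy_prob(reference) - occupancy_prob(current)) > threshold

update_cell(grid, (10, 20, 5), hit=True)   # one positive stereo return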

