Another recent development is the CMU telepresence robot, which is quite low cost and would be a good place to start.  Since it uses a Linux-based PC there should be plenty of scope for programming more sophisticated applications than LEGO hardware could handle.

  http://www.terk.ri.cmu.edu/recipes/index.php

Although it's intended to be used for education, I'm sure a ruggedised version of this could have industrial or home uses.  All the software at present is open source.

For a more commercial system, intelligence would be supplied to the robot via a web-based subscription service, and could be purely human, purely AI, or a mixture of the two.  Once you have people driving robots around via the internet you can bring your data mining systems to bear and start to automate some of that human intelligence.
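
To give a flavour of what I mean, here's a minimal sketch of the robot end of such a service.  Everything in it is hypothetical (the endpoint URL, the command format, the set_motor_speeds function); it just shows the robot polling a remote server for drive commands, which could equally well be coming from a human operator or an AI:

import json
import time
import urllib.request

COMMAND_URL = "http://example.com/robot/42/command"   # hypothetical endpoint

def set_motor_speeds(left, right):
    # Stand-in for the robot's real motor interface.
    print("motors:", left, right)

def fetch_command():
    # Ask the subscription service for the latest drive command.
    with urllib.request.urlopen(COMMAND_URL, timeout=2) as response:
        return json.loads(response.read())

while True:
    try:
        cmd = fetch_command()           # e.g. {"left": 0.4, "right": 0.4}
        set_motor_speeds(cmd["left"], cmd["right"])
    except Exception:
        set_motor_speeds(0.0, 0.0)      # fail safe: stop if the link drops
    time.sleep(0.1)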

On 24/10/06, Pei Wang <[EMAIL PROTECTED]> wrote:
Bob and Neil,

Thanks for the informative discussion!

Several questions for you and others who are familiar with robotics:

For people whose interests are mainly in the connection between
sensorimotor and high-level cognition, what kind of API can be
expected in a representative robot? Something like Tekkotsu?

Any comments on Microsoft Robotics Studio?

Recently some people have been talking about "cognitive robotics", though I
haven't found any major new idea beyond what "robotics" has been
covering, except the suggestion that high-level cognition should be
taken into consideration. Am I missing something important?

If I want to start to try some low-budget programmable robot (say, in
the price range of Robosapien V2 and LEGO Mindstorms NXT), which one
would you recommend? I won't have high expectations of performance, but
will be interested in testing ideas on the coordination of perception,
reasoning, learning, and action.

Pei

On 10/24/06, Bob Mottram <[EMAIL PROTECTED]> wrote:
>
>
> On 23/10/06, Neil H. <[EMAIL PROTECTED]> wrote:
> > I'm also pretty surprised that they haven't done anything major with
> > their vSLAM tech:
> >
> http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1570091
>
>
> Evolution really failed to capitalise upon their early success.  One of
> their biggest mistakes was to make their API prohibitively expensive, so that
> few people have ever used it.  There was a command line API supplied by
> default, but that was dreadful and had major limitations which users of the
> robots have long complained about.
>
> The vSLAM technology is a monocular SLAM method which works to an extent,
> but when it fails it fails catastrophically.  They did experiment with
> stereo SLAM last year (there is a paper somewhere online about that).
> Stereo gives much better accuracy and doesn't require the robot to travel
> for at least one metre before it can localise, but there are some
> fundamental issues with using things like SIFT features for doing stereo
> which they probably didn't realise.
>
>
> > > I think their stuff was also licenced to Sony for use on their
> > > AIBO, before Sony axed their robotics products.
> >
> > Sony licensed the tech, but I think they only used it so that AIBO
> > could visually recognize pre-printed patterns on cards, which would
> > signal the AIBO to dance, return to the charging station, etc. SIFT is
> > IMHO overkill for that kind of thing, and it's a pity they didn't do
> > anything more interesting with it.
>
>
> It's a shame they ditched AIBO and their other robots in development.  AIBO
> users were rather unhappy about that.  Perhaps some other company will buy
> the rights.
>
>
> > Perhaps. To play devil's advocate, how well do you think a stereo vision
> > system would actually work for creating a 3D structure of a home
> > environment? It seems that distinctive features in the home tend to be
> > few and far between. Of course, the regions between distinctive
> > features tend to be planar surfaces, so perhaps it isn't too bad.
>
>
> Well this is exactly what I'm (unofficially) working on now.  From the
> results I have at the moment I can say with confidence that it will be
> possible to navigate a robot around a home environment using a pair of
> stereo cameras, with the robot remaining within a 7cm position
> tolerance.  7cm is just the raw localisation figure; after Kalman
> filtering and sensor fusion with odometry the accuracy should be much better
> than that.  You might think that there are not many features on walls, but
> even in environments which people consider to be "blank" there are often
> small imperfections or shading gradients which stereo algorithms can pick
> up.  In real life few surfaces are perfectly uniform.
>
> With good localisation performance high quality mapping becomes possible.  I
> can run the stereo algorithms at various levels of detail, and use
> traditional occupancy grid methods (with a few tweaks) to build up evidence
> in a probabilistic fashion.  The idea at the moment is to have the
> localisation algorithms running in real time using low-res grids, and to
> build a separate high quality model of the environment in a high resolution
> grid more gradually in a low-priority background task.  Once you have a good
> quality grid model it's then quite straightforward to detect things like
> walls and furniture, and to simplify the data down to a much more
> efficient representation, similar to what you might find in a game
> or an AGI sim.  You can also use the grid model in exactly the same way that
> 2D background subtraction systems work (except in 3D) in order to detect
> changes within the environment.
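
The Kalman filtering point above is a bit abstract, so here's a toy 1D illustration of what fusing odometry with a stereo position fix looks like.  All the numbers are made up for illustration (a real system would estimate full 3D pose, not a single coordinate):

x, p = 0.0, 1.0            # position estimate (metres) and its variance

def predict(dx, q=0.02):
    # Odometry step: move by dx; q models growing uncertainty from wheel slip.
    global x, p
    x += dx
    p += q

def correct(z, r=0.0049):
    # Stereo position fix z with variance r (0.07m std, i.e. the 7cm figure).
    global x, p
    k = p / (p + r)        # Kalman gain
    x += k * (z - x)
    p *= (1.0 - k)

predict(0.5)               # odometry says we moved half a metre
correct(0.46)              # stereo localisation puts us at 0.46m
print(x, p)                # fused estimate is tighter than either source alone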
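
And for anyone unfamiliar with occupancy grids, this is roughly what the log-odds evidence accumulation looks like.  Again, this is a simplified 2D toy for illustration, not anything from the actual system:

import math

GRID_SIZE = 100                      # cells per side
CELL = 0.05                          # metres per cell (5cm resolution)
L_OCC = math.log(0.7 / 0.3)          # evidence added by an "occupied" reading
L_FREE = math.log(0.3 / 0.7)         # evidence added by a "free" reading

# Log-odds grid; 0.0 means unknown (probability 0.5).
grid = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

def update_cell(x, y, occupied):
    # Fold one range reading at world position (x, y) into the grid.
    i, j = int(x / CELL), int(y / CELL)
    if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
        grid[j][i] += L_OCC if occupied else L_FREE

def occupancy_probability(i, j):
    # Convert accumulated log-odds back to a probability.
    return 1.0 - 1.0 / (1.0 + math.exp(grid[j][i]))

Change detection then falls out naturally: a cell whose probability has been stable for a long time but suddenly disagrees with new readings is exactly the 3D analogue of a background subtraction difference.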

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]

