Edgar,

No, not 'enlightened' robots - finite-state (digital) processing robots who are 
'self-aware'.

When they start seriously working again with analog processing to emulate human 
thinking I'll start taking more notice.

...Bill!

--- In [email protected], Edgar Owen <edgarowen@...> wrote:
>
> > Permanent Address: 
> > http://www.scientificamerican.com/article.cfm?id=automaton-robots-become-self-aware
> > Automaton, Know Thyself: Robots Become Self-Aware
> > 
> > Droids met the challenge of perceiving their self-image and reflecting on 
> > their own thoughts as part of an effort to develop robots that are more 
> > adaptable in unpredictable situations
> > 
> > By Charles Q. Choi  | Thursday, February 24, 2011 |
> > 
> > BOTTY IMAGE: An artist's depiction of a robot reflecting on itself. Image: 
> > Victor Zykov, Cornell University
> > 
> > Robots might one day trace the origin of their consciousness to recent 
> > experiments aimed at instilling them with the ability to reflect on their 
> > own thinking.
> > 
> > Although granting machines self-awareness might seem more like the stuff of 
> > science fiction than science, there are solid practical reasons for doing 
> > so, explains roboticist Hod Lipson at Cornell University's Computational 
> > Synthesis Laboratory.
> > 
> > "The greatest challenge for robots today is figuring out how to adapt to 
> > new situations," he says. "There are millions of robots out there, mostly 
> > in factories, and if everything is in the right place at the right time for 
> > them, they are superhuman in their precision, in their power, in their 
> > speed, in their ability to work repetitively 24/7 in hazardous 
> > environments—but if a bolt falls out of place, game over."
> > 
> > This lack of adaptability "is the reason we don't have many robots in the 
> > home, which is much more unstructured than the factory," Lipson adds. "The 
> > key is for robots to create a model of themselves to figure out what is 
> > working and not working in order to adapt."
> > 
> > So, Lipson and his colleagues developed a robot shaped like a four-legged 
> > starfish whose brain, or controller, developed a model of what its body was 
> > like. The researchers started the droid off with an idea of what motors and 
> > other parts it had, but not how they were arranged, and gave it a directive 
> > to move. By trial and error, receiving feedback from its sensors with each 
> > motion, the machine used repeated simulations to figure out how its body 
> > was put together and evolved an ungainly but effective form of movement all 
> > on its own. Then "we removed a leg," and over time the robot's self-image 
> > changed and it learned how to move without it, Lipson says.
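The loop described above — act, observe sensor feedback, and refine a guessed body model until its predictions match reality — can be sketched in a few lines. This is a toy illustration of the idea, not Lipson's actual algorithm: the "arrangement" is reduced to one hidden mounting angle per leg, and the robot hill-climbs toward whichever candidate self-model best predicts its sensor readings.

```python
import random

# Toy sketch of self-model learning (illustrative only; all names and the
# sensor model are my own assumptions, not the published implementation).
# The robot knows it has four motors but not how they are arranged.

TRUE_ARRANGEMENT = [0.0, 90.0, 180.0, 270.0]  # hidden mounting angle per leg


def sensed_feedback(arrangement, action):
    # Toy sensor: each leg reports its mounting angle plus the commanded move.
    return [angle + cmd for angle, cmd in zip(arrangement, action)]


def model_error(model, action, observed):
    # How far the self-model's predicted feedback is from what was observed.
    predicted = sensed_feedback(model, action)
    return sum(abs(p - o) for p, o in zip(predicted, observed))


def learn_self_model(steps=500, seed=1):
    rng = random.Random(seed)
    model = [rng.uniform(0.0, 360.0) for _ in range(4)]  # initial guess
    for _ in range(steps):
        action = [rng.uniform(-30.0, 30.0) for _ in range(4)]
        observed = sensed_feedback(TRUE_ARRANGEMENT, action)  # physical trial
        mutant = [a + rng.gauss(0.0, 5.0) for a in model]
        # Keep the mutated self-model only if it predicts reality better.
        if model_error(mutant, action, observed) < model_error(model, action, observed):
            model = mutant
    return model


model = learn_self_model()
```

The same trial-and-error loop also covers the leg-removal experiment: if the physical feedback changes, previously good self-models start predicting badly and the search drifts to a new body image.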
> > 
> > Now, instead of having robots model their own bodies, Lipson and Juan 
> > Zagal, now at the University of Chile in Santiago, have developed ones 
> > that essentially reflect on their own thoughts. They achieve such thinking 
> > about thinking, or metacognition, by placing two minds in one bot. One 
> > controller was rewarded for chasing dots of blue light moving in random 
> > circular patterns and avoiding red dots as if they were poison, whereas a 
> > second controller modeled how the first behaved and whether it was 
> > successful or not.
> > 
> > So why might two brains be better than one? The researchers changed the 
> > rules so that chasing red dots and avoiding blue dots were rewarded 
> > instead. By reflecting on the first controller's actions, the second one 
> > could make changes to adapt to failures—for instance, it filtered sensory 
> > data to make red dots seem blue and blue dots seem red, Lipson says. In 
> > this way the robot could adapt after just four to 10 physical experiments 
> > instead of the thousands it would take using traditional evolutionary 
> > robotic techniques.
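The two-controller arrangement above can be sketched as follows. This is a hypothetical minimal version, not the paper's implementation: the first controller's chase-blue policy stays fixed, while the second controller watches the reward it earns and, on failure, adapts by remapping the sensory input — far cheaper than re-evolving the first controller from scratch.

```python
# Hypothetical sketch of metacognitive adaptation via a sensory filter
# (my own construction to illustrate the idea described in the article).

def base_controller(color):
    """Fixed first controller: chase blue dots, avoid red ones."""
    return "chase" if color == "blue" else "avoid"


class MetaController:
    """Second controller: models whether the first one is succeeding."""

    def __init__(self):
        self.swap = False  # learned sensory filter: swap red/blue when True

    def filter(self, color):
        if self.swap:
            return {"red": "blue", "blue": "red"}[color]
        return color

    def act(self, color):
        return base_controller(self.filter(color))

    def observe(self, color, reward):
        # If acting on this percept was punished, flip the filter rather
        # than retraining the base controller.
        if reward < 0:
            self.swap = not self.swap


def environment_reward(color, action):
    # The changed rules from the article: red is now rewarded, blue is poison.
    good = (color == "red" and action == "chase") or \
           (color == "blue" and action == "avoid")
    return 1 if good else -1


meta = MetaController()
for color in ["blue", "red", "blue", "red"]:
    action = meta.act(color)
    meta.observe(color, environment_reward(color, action))
```

In this toy version a single punished trial is enough to flip the filter, which echoes the article's point: adapting the model of the first controller takes a handful of physical experiments rather than thousands.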
> > 
> > "This could lead to a way to identify dangerous situations, learning from 
> > them without having to physically go through them—that's something that's 
> > been missing in robotics," says computer scientist Josh Bongard at the 
> > University of Vermont, a past collaborator of Lipson's who did not take 
> > part in this study.
> > 
> > Beyond robots that think about what they are thinking, Lipson and his 
> > colleagues are also exploring if robots can model what others are thinking, 
> > a property that psychologists call "theory of mind". For instance, the team 
> > had one robot observe another wheeling about in an erratic spiraling manner 
> > toward a light. Over time, the observer could predict the other's movements 
> > well enough to know where to lay a "trap" for it on the ground. "It's 
> > basically mind reading," Lipson says.
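The "mind reading" described above amounts to fitting a predictive model of the other agent's motion. A purely illustrative sketch, assuming the observer reduces the spiraling path to an average velocity over a short window and extrapolates to place the trap ahead:

```python
# Hypothetical theory-of-mind sketch: predict where the other robot will be.

def predict_trap_position(observations, steps_ahead):
    """observations: list of (x, y) positions at successive time steps."""
    (x0, y0), (xn, yn) = observations[0], observations[-1]
    n = len(observations) - 1
    vx, vy = (xn - x0) / n, (yn - y0) / n  # average observed velocity
    # Extrapolate the motion to where the trap should wait.
    return (xn + vx * steps_ahead, yn + vy * steps_ahead)


# Observed positions of the other robot over five time steps.
path = [(t * 1.0, t * 0.5) for t in range(5)]
trap = predict_trap_position(path, steps_ahead=3)
```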
> > 
> > "Our holy grail is to give machines the same kind of self-awareness 
> > capabilities that humans have," Lipson says. "This research might also shed 
> > new light on the very difficult topic of our self-awareness from a new 
> > angle—how it works, why and how it developed."
> > 
> > One potential application they have tested for self-aware machines is with 
> > a model bridge, with sensors continuously monitoring vibrations across its 
> > frame to develop a self-image of its "body". "In simulations we've shown 
> > that it could identify weakened joints a lot sooner than via traditional 
> > civil engineering methods," Lipson says. "The bridge isn't going to 
> > suddenly wake up one day and say hello, but in a primitive sense you can 
> > say it has self-image, enough to turn on a red light if something's wrong."
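The bridge example is, in effect, anomaly detection against a learned self-image. A hypothetical sketch (joint names, baseline values, and the tolerance are all invented for illustration): the "self-image" is each joint's baseline vibration level, and a joint that drifts far from its baseline turns on the red light.

```python
# Hypothetical structural self-monitoring sketch, not the actual system.

# Baseline vibration level per joint, learned during normal operation.
BASELINE = {"joint_1": 1.0, "joint_2": 1.2, "joint_3": 0.9}


def weakened_joints(readings, tolerance=0.3):
    """Return joints whose vibration deviates from the self-image."""
    return sorted(
        joint for joint, level in readings.items()
        if abs(level - BASELINE[joint]) > tolerance
    )


current = {"joint_1": 1.05, "joint_2": 2.0, "joint_3": 0.88}
flagged = weakened_joints(current)  # joint_2 vibrates well above its baseline
```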
> > 
> > A key question for this research concerns how far it can actually go. 
> > "These are very simple robots, maybe eight or a dozen moving parts, so it's 
> > relatively easy to construct models of everything. But if you scale it up, 
> > will it still be able to make a good model of self?" Bongard asks. "That 
> > question also extends to social robots observing a human or something else 
> > complex. The question of scalability is what this research is examining 
> > at the moment."
> > 
> > Intriguingly, the research also revealed what mental illnesses robots might 
> > develop. For instance, the starfishlike robot that developed a body image 
> > "spontaneously developed 'phantom limb' syndrome, thinking it had arms and 
> > legs where it didn't," Lipson says. "As robots become more complex and 
> > evolve themselves, we could see the same kinds of disorders we [humans] 
> > can have appear in machines."
> > 
> > Lipson detailed his team's research February 19 at the annual meeting of 
> > the American Association for the Advancement of Science in 
> > Washington, D.C.
> > 
> > Source: Scientific American
> >
>



