> On Jul 17, 2025, at 10:35 AM, gene heskett <ghesk...@shentel.net> wrote:
>
> I hope I live to see it do that Chris. There appears to be 2 problems, the
> first being how to spec the targets with sufficient accuracy which it didn't
> do in that demo and how to do that on a step and repeat basis.
Classic CNC is an example of the "old way": it is meticulously programmed, and then the machine follows that program exactly. The new way is that the machine looks at the part you want and the metal you gave it and just figures it out as it goes. We have all seen chess-playing computer apps. The app has no plan; it figures it out as it goes. It is not perfect, but it is good enough to beat even the handful of best human players. The cost savings come from not having to program it.

Usually with this kind of robot they don't bother with mechanical precision; they close the loop with vision, touch, and an IMU. For example, to pick up a beer can you look at it with a camera to get a "close enough" idea of where it might be. Then vision measures the relative distance from the fingers to the can and you close that distance, then move the fingers until you feel contact with the can, then close the thumb until you feel pressure on both thumb and fingers. You know enough about beer cans to know that an empty can needs less pressure or it will be crushed, and a full can is different. You don't try for even 2 mm precision in the motion. You cannot pre-program this, because there are many places the beer can might be and 10,000 other objects you might need to grasp. I think this is how humans grasp beer cans. You really can't even program it; you have to train it with examples and hope that regularization forces generalization. (A rough sketch of that grasp loop is below.)

The state of the art in 2025 is not so far advanced when it comes to higher-level functions. We can do some simple, well-defined tasks in isolation. Look at this clip from Tesla and assume it is highly edited to show the one time in 20 when it works. The robots are on tethers to prevent damage when they fall. The big deal here is that they are NOT programmed with "if this then do that" logic, and there certainly are no (x,y,z) target points. Rather, they are trained from examples. It is a neural network; there is no "code", just a few billion numeric weights (also sketched below). Some day, when robots are working alongside CNC machines, they will simply notice that a machine needs attention, walk over to it, and see what needs to be done.

Cars are already doing this on roads: if there is a slow truck, the car just changes lanes and passes, and if there is a stop sign it stops and looks for other cars. No pre-programming. It makes its own plan to achieve a high-level goal. But it can't think or plan; it is simply programmed by example. Driving is a VERY well-defined task with very well-defined rules and goals, which is why it was the first to be well automated. Here in So. Calif., Waymo taxis drive around with no one in the front seat and the novelty has worn off. Still, I was impressed when one yielded to me on a bicycle. Waymo drives like "grandma": it stays under the speed limit, always gives others the right of way, and makes a full stop at EVERY stop sign. They are here in force today.

Each robot is not able to learn. It has fixed programming (the weights in the network never change). Learning takes place in a big data center where thousands of computers crunch the data to create new network weights that replace the old program every few weeks. Very much like what Tesla does with its cars.

Doing these kinds of simple tasks, in isolation, with high failure rates, is what billion-dollar companies can do in 2025. Boston Dynamics seems to have the best mechanics and the most athletic robots, with enough power to run and walk on outdoor surfaces, while Tesla seems to have a weaker robot but is all-in on the AI.
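To make the grasp-by-feedback idea concrete, here is a minimal sketch in Python. The robot interface (camera, finger and thumb sensors, grip forces, thresholds) is entirely hypothetical, just stand-ins for whatever vision/touch/IMU APIs a real arm exposes, and it is written as explicit "old way" logic only to show the feedback loop; a real system would learn this behavior from examples rather than have it hand-coded.

    # Sketch of the grasp loop described above.  Every name on "robot" is a
    # hypothetical stand-in for a real arm's vision/touch/gripper interface;
    # the numbers are assumptions, not measured values.

    EMPTY_CAN_GRIP_N = 2.0   # assumed: light grip so an empty can is not crushed
    FULL_CAN_GRIP_N = 8.0    # assumed: firmer grip for a full, heavier can

    def grasp_can(robot):
        # 1. Vision gives a "close enough" estimate of where the can is.
        target = robot.camera.estimate_pose("beer_can")
        robot.arm.move_near(target)

        # 2. Visual servoing: keep shrinking the finger-to-can distance the camera sees.
        while robot.camera.finger_to_can_distance() > 0.005:   # metres, rough threshold
            robot.arm.step_toward_can()

        # 3. Close the fingers until the touch sensors report contact.
        while not robot.fingers.touching():
            robot.fingers.close_a_little()

        # 4. Close the thumb until there is pressure on both thumb and fingers.
        while not (robot.thumb.pressure() > 0 and robot.fingers.pressure() > 0):
            robot.thumb.close_a_little()

        # 5. Pick a grip force from a crude full-vs-empty guess about the can.
        grip = FULL_CAN_GRIP_N if robot.arm.estimated_load() > 0.2 else EMPTY_CAN_GRIP_N
        robot.hand.hold_with_force(grip)

Note there is no precise (x,y,z) target anywhere in that loop; every step just reduces an error the sensors can see or feel.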
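And to illustrate the "no code, just weights" point: a trained policy is simply a function from sensor readings to motor commands, stored as arrays of numbers. Here is a toy, made-up example using numpy; the layer sizes and random weights are assumptions for illustration, and real robot or car networks are vastly larger and get their weights from data-center training, not from a script like this.

    import numpy as np

    # A toy "policy network": sensor readings in, motor commands out.
    # There is no if/then logic and no target points anywhere -- the
    # behaviour lives entirely in the numeric weights W1, b1, W2, b2.

    rng = np.random.default_rng(0)
    N_SENSORS, N_HIDDEN, N_MOTORS = 12, 32, 6   # assumed, tiny for illustration

    W1 = rng.normal(size=(N_SENSORS, N_HIDDEN)) * 0.1
    b1 = np.zeros(N_HIDDEN)
    W2 = rng.normal(size=(N_HIDDEN, N_MOTORS)) * 0.1
    b2 = np.zeros(N_MOTORS)

    def policy(sensors):
        """Map one vector of sensor readings to one vector of motor commands."""
        hidden = np.tanh(sensors @ W1 + b1)
        return np.tanh(hidden @ W2 + b2)        # motor commands in [-1, 1]

    # On the robot these weights are frozen.  "Learning" means the data center
    # computes a new set of weights from recorded examples and replaces them
    # every few weeks, much like a firmware update.
    sensors_now = rng.normal(size=N_SENSORS)    # fake camera/touch/IMU features
    print(policy(sensors_now))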
https://youtu.be/a-Wc16CmDk0
Tesla's humanoid robot performs household chores

_______________________________________________
Emc-users mailing list
Emc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/emc-users