Ben,

Before I comment on your reply, note that my former posting was about my PERCEPTION rather than the REALITY of your understanding, with the difference between the two amounting to less than 1.00 bit of information.
Anyway, that said, on with a VERY interesting (to me) subject.

On 12/11/08, Ben Goertzel <[email protected]> wrote:

> Well, the conceptual and mathematical algorithms of NCE and OCP
> (my AI systems under development) would go more naturally on MIMD
> parallel systems than on SIMD (e.g. vector) or SISD systems.

There isn't much that an MIMD machine can do better than a similar-sized SIMD machine; the usual problem is finding a way to build such a large SIMD machine. My proposed architecture (now under consideration at AMD) also provides for limited MIMD operation, where the processors can be at different places in a single complex routine. I was looking at a 10,000:1 speedup over SISD, and then giving up ~10:1 to go from probabilistic logic equations to matrices that do the same things, which is how I arrived at the 1000:1 figure in my prior posting.

> I played around a bunch with MIMD parallel code on the Connection Machine
> at ANU, back in the 90s.

The challenge is in geometry - figuring out how to get the many processors to communicate and coordinate with each other without spending 99% of their cycles on coordination and communication.

> However, indeed the specific software code we've written for NCE and OCP
> is intended for contemporary {distributed networks of multiprocessor
> machines} rather than vector machines or Connection Machines or whatever...
>
> > If vector processing were to become a superior practical option for AGI,
> > what would happen to the code in OCP or NCE?
>
> That would depend heavily on the vector architecture, of course.
>
> But one viable possibility is: the AtomTable, ProcedureRepository and
> other knowledge stores remain the same ... and the math tools like the
> PLN rules/formulas and Reduct rules remain the same ... but the MindAgents
> that use the former to carry out cognitive processes get totally
> rewritten...
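To make the "probabilistic logic equations as matrices" trade concrete, here is a minimal sketch in Python/NumPy. The noisy-OR rule and all names here are my own illustration, NOT the actual PLN formulas of NCE/OCP: the same rule is written once as a per-site scalar loop (SISD style) and once as a single array expression that a vector machine can execute in lock-step across every inference site.

```python
import numpy as np

# Hypothetical rule for illustration (not from NCE/OCP):
# P(effect) = 1 - prod_i(1 - w_i * P(cause_i)), a noisy-OR combination,
# evaluated at many independent inference sites.

rng = np.random.default_rng(0)
n_sites, n_causes = 10_000, 8
P = rng.random((n_sites, n_causes))  # P(cause_i) at each site
w = rng.random(n_causes)             # rule weights, shared across sites

def noisy_or_scalar(P, w):
    """SISD style: one site and one cause at a time, explicit loops."""
    out = np.empty(len(P))
    for s in range(len(P)):
        acc = 1.0
        for i in range(len(w)):
            acc *= 1.0 - w[i] * P[s, i]
        out[s] = 1.0 - acc
    return out

def noisy_or_vector(P, w):
    """SIMD style: the identical rule as one matrix expression; every
    site is evaluated in lock-step by the vector hardware."""
    return 1.0 - np.prod(1.0 - w * P, axis=1)

# Both forms compute the same numbers; only the execution model differs.
assert np.allclose(noisy_or_scalar(P, w), noisy_or_vector(P, w))
```

The point of the sketch is that a table of such rules can be reduced to a fixed sequence of array operations, which is what makes the table-driven case SIMD-friendly.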
I presume that everything is table driven, so the code could be completely vectorized to execute the tables on any sort of architecture, including SIMD. However, if you are actually executing CODE, e.g. as compiled from a reality representation, then things would be difficult on an SIMD architecture - though again, you could interpret tables containing the same information at the usual 10:1 slowdown, which is what I was expecting anyway.

> This would be a big deal, but not the kind of thing that means you have to
> scrap all your implementation work and go back to ground zero.

That's what I figured.

> OO and generic design patterns do buy you *something* ...

OO is often impossible to vectorize.

> Vector processors aside, though ... it would be a much *smaller*
> deal to tweak my AI systems to run on the 100-core chips Intel
> will likely introduce within the next decade.

There is an 80-core chip due out any time now. Intel has had BIG problems finding anything to run on it, so I suspect they would be more than glad to give you a few if you promise to do something with them. I listened to an inter-processor communications plan for the 80-core chip last summer, and it sounded SLOW - as if there were no reasonable plan for global memory. I suspect that your plan in effect requires FAST global memory (to avoid crushing communication bottlenecks), and this is NOT entirely simple on MIMD architectures. My SIMD architecture will deliver equivalent global memory speeds of ~100x the clock speed, which still makes global access a high-overhead operation on a machine that peaks out at ~20K operations per clock cycle.

Steve Richfield
