Jed Rothwell wrote:
> MPP architecture has been a long time coming but I am convinced it is
> the wave of the future.

Massively parallel MIMD machines are the wave of the present. Every
top-end machine I'm aware of is a massively parallel MIMD machine.
What's more, the switch architecture is, in most cases, a plain old
mesh, and is likely to remain so for a good while at least, because it
scales arbitrarily. Multidimensional toroidal mesh-connected machines
have reasonable diameter, by the way, at least until they become
outrageously huge. (For a fun exercise, try to figure out how to cable
a Butterfly-switch-based system with 10,000 processors...)

Programming a mesh-connected massively parallel machine is not a task
for the faint-hearted. They're great for simulating things that go
"Bang!", because the data naturally breaks down into chunks which can
be handled with only local knowledge, but applying all that computing
power to anything else seems awkward at best.

> I think it would help things like voice input and translation
> software, and artificial intelligence of course. See chapter 10 of
> my book. I do not know whether programmers will ever become good at
> writing parallel algorithms. Perhaps they cannot do this because
> they are not used to parallel architecture, or perhaps because the
> human mind deals with problems in a serial step-by-step fashion only
> (even though the brain itself is an MPP processor par excellence).

Or perhaps it is just enormously more complex to program a massively
parallel machine than a serial machine. You have all the problems of a
serial machine, all the complexity of serial programming, *plus* a
whole additional layer of complexity on top, along with a whole new
collection of race-condition bugs which you simply couldn't have in
serial programming.
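The race-condition point can be made concrete with a small sketch. This is just an illustration in Python threads (not anything from the original discussion; real MPP codes of that era were C or Fortran with message passing), showing the classic lost-update bug that has no counterpart in serial code:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_worker(n):
    # Unsynchronized read-modify-write: two threads can both read the
    # same old value, and one of the two increments is silently lost.
    global counter
    for _ in range(n):
        tmp = counter
        counter = tmp + 1

def safe_worker(n):
    # The lock serializes the read-modify-write, so no update is lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=4, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(safe_worker))    # always 400000
print(run(unsafe_worker))  # may come up short: lost updates
```

Run serially (one thread), both workers are correct; the bug only exists once there is concurrency, which is exactly the "whole new collection" of bugs being described.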
It's sort of like the difference between teaching one child how to do
something, versus running the entire school system for the whole city:
choosing the curriculum to be used in every school, and overseeing
every individual teacher to see that they're teaching the curriculum
properly. The latter job is intrinsically harder than the former.

> But whether people ever get good at it or not, compilers will
> eventually make the process automatic.
>
> This gadget splits the object code between the conventional CPU and
> the parallel processors. In most programs, a small section of the
> code runs most of the time. A large body of code sets up the
> problem, and then a small section iterates to solve the problem. A
> compiler that outputs code for parallel processors can make most of
> the code and most functions sequential (ordinary); only the
> innermost iterative code needs to be made parallel. (But perhaps in
> the future sequential code will not be considered ordinary.)
>
> - Jed
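Incidentally, the observation that a small inner section does most of the work is essentially Amdahl's law: the serial remainder caps the overall speedup however many processors you throw at the parallel part. A quick back-of-the-envelope sketch (numbers are illustrative, not from the post):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: total speedup = 1 / (serial + parallel/N).
    # The serial remainder caps the speedup no matter how many
    # processors the parallel part is spread across.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# If 95% of the run time is in the parallelizable inner loop, even
# 1,000 processors give under a 20x overall speedup, since the 5%
# serial setup code limits the whole run to at most 1/0.05 = 20x.
print(round(amdahl_speedup(0.95, 1000), 1))  # 19.6
```

So a compiler that parallelizes only the innermost iterative code captures nearly all the available gain, but only if the iterative code really does dominate the run time.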

