I think this is significant because it might signify a paradigm shift. This might well be hype, but let's just assume it is the future direction of CPU design. Then we might as well start experimenting now. I'll just throw out some random ideas: parallel execution at the statement level, predictive lookup of symbols and attributes, parallelized hash functions, dictionary lookups, sorting, and list comprehensions, background just-in-time compilation, and so on.
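For what it's worth, here is a rough sketch of the "parallelize the list comprehension" idea using a pool of worker processes (slow_square is just a made-up stand-in for any CPU-bound per-item function, and the GIL is why I'm reaching for processes rather than threads):

    # Sketch: spread a CPU-bound list comprehension across all cores.
    from multiprocessing import Pool

    def slow_square(n):
        # stand-in for expensive, CPU-bound per-item work
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == '__main__':
        data = list(range(5000))

        # sequential: results = [slow_square(n) for n in data]
        # parallel: same result, computed by a pool of worker processes
        with Pool() as pool:
            results = pool.map(slow_square, data)

        print(len(results))

Whether that ever beats the plain sequential comprehension depends entirely on how heavy the per-item work is compared to the overhead of shipping arguments and results between processes.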
One of the author's points is that many of today's mainstream technologies (like OO) did not come about suddenly but accumulated years of research before becoming widely used. A lot of these ideas may not work out, or may not seem to matter much today. But in 10 years we might be really glad that we tried.
aurora <[EMAIL PROTECTED]> writes:
Just went through an article via Slashdot titled "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" [http://www.gotw.ca/publications/concurrency-ddj.htm]. It argues that the continuous CPU performance gains we've seen are finally over, and that future gains will primarily come from software concurrency taking advantage of hyperthreading and multicore architectures.
Well, another gain could be had by making the software less wasteful of CPU cycles.
I'm a pretty experienced programmer by most people's standards, but I see a lot of systems where I can't for the life of me figure out how they manage to be so slow. It might be caused by environmental pollutants emanating from Redmond.
