On Tue, Apr 3, 2012 at 7:23 AM, Tom Novelli <[email protected]> wrote:
> Even if there does turn out to be a simple and general way to do parallel
> programming, there'll always be tradeoffs weighing against it - energy usage
> and design complexity, to name two obvious ones.

Not necessarily.

As to the former, not every FLOP costs the same energy: dynamic power
in silicon scales roughly with frequency times voltage squared, and
higher clocks demand higher supply voltage, so power grows
superlinearly with clock speed. That makes parallel designs more
energy-efficient than ramping a single core's clock to the same
effective throughput. If this trend continues to scale, hundred-core
designs (with the right approaches to concurrency) could be so much
more efficient than quad-core chips that, say, phone manufacturers
will be compelled to embrace that approach.
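A minimal back-of-the-envelope sketch of that superlinear relation, using the standard dynamic-power approximation P ~ C * V^2 * f; the specific voltages and the unit capacitance here are illustrative assumptions, not measured values:

```python
# Hypothetical illustration of dynamic CMOS switching power: P ~ C * V^2 * f.
# Supply voltage must rise roughly with clock frequency, so power grows
# superlinearly as a single core is clocked higher.

def dynamic_power(capacitance, voltage, freq_ghz):
    """Dynamic switching power, proportional to C * V^2 * f."""
    return capacitance * voltage**2 * freq_ghz

# One core at 2 GHz needs a higher supply voltage (assume 1.2 V)...
single_core = dynamic_power(1.0, 1.2, 2.0)

# ...than two cores each at 1 GHz (assume 0.9 V), for the same
# aggregate clock-cycles-per-second.
dual_core = 2 * dynamic_power(1.0, 0.9, 1.0)

print(single_core)  # 2.88
print(dual_core)    # 1.62 -- same nominal throughput at ~56% of the power
```

The numbers are made up, but the shape of the result is the point: spreading the same work across more, slower cores lands on a lower point of the V^2 curve.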

As for design complexity: at the hardware level it's something of a
business advantage, since processor companies need to keep selling new
chips somehow. And with enough effort put into software concepts and
tools for handling concurrency, there's no reason concurrent
programming of the future need be any harder than serial programming
is today. That doesn't mean design complexity isn't a factor, but I
think it does mean it won't always be a big one.

Steve
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc