On Mon, Jan 7, 2013 at 8:36 AM, John Culp <[email protected]> wrote:

> Timothy Normand Miller wrote:
>
>
>> So basically, the GPU is a moving target anyway.  We should focus on
>> meeting current scientific needs, publish lots of results, and then use
>> our
>> clout from this to get more funding to chase whatever the GPU evolves into
>> next.  Eh?
>>
>>
> I read that briefly wondering wth did 'we' mean (ain't English great). But
> yes. I don't know if it would help to emphasize "heterogeneous computing"
> over "GPU" for your next developments if the GPU is truly going to be
> subsumed by the CPU.  Who knows what funding and budgets will be over the
> next several years, and it would be horrid to be thought of as a one trick
> pony.  But like I've said, it is not my field.


"We" could mean lots of different groups.

The CPU evolved for single-thread performance, while the GPU evolved from
an embarrassingly parallel problem.  Over time, they've been taking on each
other's features: more general-purpose instructions in the GPU, vector
instructions in the CPU, and multi-core CPUs that compromise a bit on
single-thread performance to squeeze more cores onto a chip.  Larrabee /
Knights Corner / Xeon Phi is an interesting example of something trying to
land in the middle.  It's still an x86 processor, but its cores are simple
and in-order, each has 512-bit vector instructions, and there are 62 of
them running at about 1GHz.
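
To make that contrast concrete, here's a quick illustrative sketch (just a
made-up SAXPY, nothing from any real codebase): the same computation written
as a plain sequential loop that a vectorizing compiler could map onto wide
SIMD units, and as an embarrassingly parallel CUDA kernel where every
iteration becomes its own GPU thread.

#include <cstdio>

// CPU style: one thread walks the array; a vectorizing compiler can map the
// loop body onto wide SIMD lanes (the few-cores-plus-512-bit-vectors approach).
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// GPU style: the loop disappears; the hardware schedules thousands of threads,
// one per element, and hides memory latency by switching between them.
extern "C" __global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // one thread per element
    cudaDeviceSynchronize();
    printf("y[0] = %f (expect 5.0)\n", y[0]);

    cudaFree(x);
    cudaFree(y);
    return 0;
}

The arithmetic barely changes; it's the control and memory model around it
that the two camps optimized differently.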

The pressures that drove CPUs and GPUs to where they are aren't going away.
We haven't completely cracked the multithreaded programming problem, so
we're not going to see the GPU take over the role of the main CPU any time
soon.  Niagara is even more GPU-like, but it still has to deliver pretty
good single-thread performance.  Despite GPGPU, I'm not sure it's sensible
to try too hard to converge these two technologies, because the result would
just be mediocre at both.  And as I say, those two different problem spaces
aren't going away.  There's not a lot of point in making the two run the
same ISA (why not start fresh with GPUs?), but standardizing the native
GPU ISA, as has been done with x86 and ARM, may be sensible.  For a long
time, GPUs have been peripherals, which let vendors hide ISA changes
behind a driver, but that may be forced to change eventually.  nVidia has
managed to hold that off for a while using PTX, however, and LLVM manages
to be a great portable intermediate representation for both CPUs and GPUs.
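
For what it's worth, you can see the PTX indirection right in the driver
API: you hand the driver PTX text (e.g. produced with nvcc -ptx), and it
JIT-compiles that into whatever native ISA the installed GPU actually runs.
A rough, untested sketch (the file and kernel names are just placeholders):

#include <cuda.h>
#include <stdio.h>

int main(void) {
    cuInit(0);

    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // "saxpy.ptx" is assumed to exist, e.g. from: nvcc -ptx saxpy.cu -o saxpy.ptx
    // The JIT from PTX to the GPU's real instruction set happens inside this call,
    // in the driver, which is what keeps the native ISA a private implementation detail.
    CUmodule mod;
    if (cuModuleLoad(&mod, "saxpy.ptx") != CUDA_SUCCESS) {
        fprintf(stderr, "couldn't load PTX module\n");
        return 1;
    }

    // Look up the kernel by its entry name (assumes it was declared extern "C").
    CUfunction kernel;
    cuModuleGetFunction(&kernel, mod, "saxpy");

    // ... allocate device buffers with cuMemAlloc() and launch with cuLaunchKernel() ...

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}

The native code for a given chip can be inspected with cuobjdump, and it
changes quite a bit from one GPU generation to the next; PTX is what buys
them that freedom.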

"Heterogeneous computing" is a hot buzzword right now, so I'd be smart to
leverage it.  :)

I also think that people are getting more FPGA-savvy, which is a whole
other way to solve computational problems.


-- 
Timothy Normand Miller, PhD
Assistant Professor of Computer Science, Binghamton University
http://www.cs.binghamton.edu/~millerti/
Open Graphics Project
