OrionWorks wrote:

The web page states rather cryptically: "The only C language environment that unlocks the many-core processing power of GPUs to solve the world's most computationally-intensive challenges". Ok... Any C compiler??? ...or does one need to purchase a specially designed C language compiler?

It looks like their own compiler that produces parallel code: the "CUDA C" compiler, with a set of optimized libraries. In other words, if you already have your problem coded in C you just recompile it. And I would guess most people know how to write in C these days.

Not long ago this would have been Fortran instead of C.

See:

http://www.enseirb.fr/~pelegrin/enseignement/enseirb/archsys/documents/architectures/tesla_technical_brief.pdf

QUOTE:

The key to CUDA is the C compiler for the GPU. This first-of-its-kind programming environment simplifies coding parallel applications. Using C, a language familiar to most developers, allows programmers to focus on creating a parallel program instead of dealing with the complexities of graphics APIs. To simplify development, the CUDA C compiler lets programmers combine CPU and GPU code into one continuous program file. Simple additions to the C program tell the CUDA compiler which functions reside on the CPU and which to compile for the GPU. The program is then compiled with the CUDA compiler for the GPU, and then the CPU host code is compiled with the developer's standard C compiler.

Developers use a novel programming model to map parallel data problems to the GPU. CUDA programs divide information into smaller blocks of data that are processed in parallel. This programming model allows developers to code once for GPUs with more multiprocessors and for lower-cost GPUs with fewer multiprocessors. . . .
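For what it is worth, the "simple additions to the C program" the brief mentions are mostly function qualifiers and a launch syntax. Here is a minimal sketch (the kernel name and sizes are my own invention, and this is CUDA C, compiled with NVIDIA's nvcc rather than an ordinary C compiler):

```cuda
#include <stdio.h>

/* The __global__ qualifier tells the CUDA compiler this function is
   for the GPU; each of the n threads handles one array element. */
__global__ void scale(float *x, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= factor;
}

int main(void)
{
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; i++)
        host[i] = (float)i;

    /* Ordinary CPU host code and GPU code share one source file. */
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    /* The <<<blocks, threads>>> syntax launches the kernel across
       many GPU threads in parallel. */
    scale<<<n / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("%f\n", host[10]);
    return 0;
}
```

The same source would run on a GPU with many multiprocessors or few; the runtime schedules the blocks onto whatever hardware is present, which is the "code once" point the brief is making.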


MPP architecture has been a long time coming, but I am convinced it is the wave of the future. I think it would help things like voice input and translation software, and artificial intelligence of course. See chapter 10 of my book. I do not know whether programmers will ever become good at writing parallel algorithms. Perhaps they cannot because they are not used to parallel architecture, or perhaps because the human mind deals with problems only in a serial, step-by-step fashion (even though the brain itself is a massively parallel processor par excellence). But whether people ever get good at it or not, compilers will eventually make the process automatic.

This gadget splits the object code between the conventional CPU and the parallel processors. In most programs, a small section of the code runs most of the time: a large body of code sets up the problem, and then a small section iterates to solve it. A compiler that outputs code for parallel processors can leave most of the code and most functions sequential (ordinary); only the innermost iterative code needs to be made parallel. (But perhaps in the future sequential code will not be considered ordinary.)

- Jed
