On Sat, 05 Apr 2008 15:58:45 +0200 Syren Baran <[EMAIL PROTECTED]> wrote:
> Shouldn't we start with the first thing first?
> That would be the assembler, just turning the opcodes into binary form.
> Well, and some way to transfer the binary to the GPU.
> That shouldn't be much of a hassle (the command set is pretty limited,
> luckily).
> Still have some reading in those docs to do, but I think I could write a
> simple assembler.
>
> Syren

A GPU is not a CPU; programming one is far more complex, because you have to handle things that a CPU does for you. For instance, you have to set up how data is routed to the GPU.

GPUs are also not intended to run ordinary programs, i.e. programs with if, switch, jump, and similar instructions. A GPU can handle such instructions, but you are often limited in nesting depth (for instance, no more than 16 nested for/if statements). I think a good analogy is the Cell processor, where one unit is just like a CPU and runs the OS, while the stream units look like a GPU.

So any application that wants to use a GPU efficiently needs to be split between the core application, which runs on a "normal" CPU, and a specific part intended to run on the GPU. That part needs to be designed with the GPU's peculiarities in mind. This is why I don't think a compiler, at least in the sense you seem to have in mind, is of any use with a GPU. I am confident that through Gallium we will be able to offer a sane API that lets applications properly use the horsepower of the GPU.

Note that a similar problem exists with multi-CPU systems: you can't take real advantage of multiple CPUs with a special compiler alone; your application has to be designed for multi-CPU.

Cheers,
Jerome Glisse <[EMAIL PROTECTED]>

_______________________________________________
xorg-driver-ati mailing list
[email protected]
http://lists.x.org/mailman/listinfo/xorg-driver-ati
