On Thursday 23 July 2009 14:50:48 José Fonseca wrote:
> On Thu, 2009-07-23 at 11:14 -0700, Zack Rusin wrote:
> > Before anything else the problem of representation needs to be solved.
> > The two step approach that the code in there started on using is again,
> > imho, by far the best, but it likely needs a solid discussion to get
> > everyone on the same page.
>
> I don't think that representation is such a big problem. IMO, gallivm
> should be just a library of TGSI -> LLVM IR building blocks. For
> example, the class Instruction should be all virtuals, and a pipe driver
> would override the methods it wants. LLVM IR -> hardware assembly
> backend would then be necessary. If a hardware has higher level
> statements which are not part of LLVM IR, then it should override and
> generate the intrinsics itself, 

I thought about that and discarded it for the following reasons:
1) It doesn't solve the main/core problem of representation: how to represent 
vectors. Without that we can't generate anything. We are dealing with two main 
architectures here: MIMD (e.g. NVIDIA) and SIMD (e.g. Larrabee), with the 
latter coming in multiple permutations. For MIMD the preferred layout will be 
simple AoS (x,y,z,w); for SIMD it will be vector-wide SoA (so for Larrabee 
that would be (x,x,x,x, x,x,x,x, x,x,x,x, x,x,x,x)). So for SoA we'd likely 
need to scale vectors between at least 4 components (for simple SSE) and 16 
components (for Larrabee). It's not even certain that our vectors would have 
4 components.
2) It means that the driver would have to be compiled with a C++ compiler. 
While that's obviously solvable by sprinkling tons of extern "C" everywhere, 
it makes the whole thing a lot uglier.
3) It means that Gallium's public interface becomes a combination of C and 
C++. So implementing Gallium means dealing with both OO C structures 
(p_context) and C++ classes, which quite frankly is just ugly. Interfaces are 
a lot like indentation: if they're not consistent, they're difficult to read, 
understand and follow.
So while I do like C++ a lot and would honestly prefer it all over the place, 
mixing languages like that, especially in an interface, is just not a good 
idea.

> or even better, generate asm statements directly from those methods.

That wouldn't work because LLVM wouldn't know what to do with them, which 
would defeat the whole reason for using LLVM (i.e. it would make the 
optimization passes do nothing).

> Currently, my main interest for LLVM is to speed up softpipe with the
> TGSI -> SSE2 translation. I'd like to code generate with LLVM the whole
> function to rasterize and render a triangle (texture sampling, fragment
> shading, blending, etc). I'm not particular worried about the
> representation as vanilla LLVM can already represent almost everything
> needed.

That sounds great :) 
For that you don't need gallivm at all, though. Or do you want to mix the 
rasterization code with the actual shading, i.e. inject the fragment shader 
into the rasterizer? I'm not sure the latter would win us anything.
If the shader didn't use any texture mapping then it's possible that you 
could get perfect cache coherency when rasterizing very small patches, but 
texture sampling will thrash the caches anyway, and it's going to make the 
whole process a lot harder to debug/understand.

z 


------------------------------------------------------------------------------
_______________________________________________
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev
