On Thu, 2009-07-23 at 16:38 -0700, Zack Rusin wrote:
> On Thursday 23 July 2009 14:50:48 José Fonseca wrote:
> > On Thu, 2009-07-23 at 11:14 -0700, Zack Rusin wrote:
> > > Before anything else the problem of representation needs to be
> > > solved. The two-step approach that the code in there started on
> > > using is again, imho, by far the best, but it likely needs a solid
> > > discussion to get everyone on the same page.
> >
> > I don't think that representation is such a big problem. IMO, gallivm
> > should be just a library of TGSI -> LLVM IR building blocks. For
> > example, the class Instruction should be all virtuals, and a pipe
> > driver would override the methods it wants. An LLVM IR -> hardware
> > assembly backend would then be necessary. If the hardware has
> > higher-level statements which are not part of LLVM IR, then it should
> > override the relevant methods and generate the intrinsics itself,

First, thanks for taking me seriously, Zack, and forgive my ignorance.
I have been playing with the idea of using LLVM in my head, but unlike
you I have no real experience with it.

> I thought about that and discarded it for the following reasons:
> 1) It doesn't solve the main/core problem of the representation: how
> to represent vectors.

Aren't LLVM vector types (http://llvm.org/docs/LangRef.html#t_vector)
good enough?
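
For what it's worth, building those types through the LLVM C bindings
is a one-liner per width. A minimal sketch, assuming we only care about
float vectors; the widths are just examples:

  #include <llvm-c/Core.h>

  static void
  dump_vector_types(void)
  {
     /* AoS layout: one 4-wide float vector per fragment (x, y, z, w). */
     LLVMTypeRef aos_vec4 = LLVMVectorType(LLVMFloatType(), 4);

     /* SoA layout: one 16-wide float vector holding, say, the x
      * components of 16 fragments (Larrabee-like). */
     LLVMTypeRef soa_vec16 = LLVMVectorType(LLVMFloatType(), 16);

     LLVMDumpType(aos_vec4);    /* prints <4 x float>  */
     LLVMDumpType(soa_vec16);   /* prints <16 x float> */
  }

Either way it is an ordinary LLVM first-class vector type, so the stock
passes can work with it.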

> Without that we can't generate anything. We are dealing with two main
> architectures here: mimd (e.g. nvidia) and simd (e.g. larrabee), with
> the latter coming in multiple permutations. For mimd the preferred
> layout will be simple AOS (x,y,z,w); for simd it will be vector-wide
> SOA (so for larrabee that would be (x,x,x,x, x,x,x,x, x,x,x,x,
> x,x,x,x)). So for SOA we'd likely need to scale vectors at least
> between 4 components (for simple sse) and 16 components. So it's not
> even certain that our vectors would have 4 components.

The vector width could be a global parameter computed before starting
the TGSI -> LLVM IR translation, which takes into account not only the
target platform but also the input/output data types (e.g. SSE2 has
different vector widths for different data types).

For mimd vs simd we could have two variations -- SoA and AoS. Again, we
could have this as an initial parameter, or as two abstract classes
derived from Instruction, from which the driver would then derive.
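
To make that concrete, a purely hypothetical sketch of such an initial
parameter block (none of these names exist in gallivm today):

  /* Hypothetical translation parameters, fixed before the
   * TGSI -> LLVM IR generation starts. */
  enum shader_layout {
     LAYOUT_AOS,   /* (x, y, z, w) per fragment; mimd-friendly       */
     LAYOUT_SOA    /* (x,x,...)(y,y,...)...; simd/vector-wide layout */
  };

  struct shader_build_params {
     enum shader_layout layout;
     unsigned vector_width;   /* e.g. 4 for plain SSE, 16 for Larrabee */
  };

The code generator would be handed one of these up front and never need
to look at the target CPU again.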

> 2) It means that the driver would have to be compiled with a c++
> compiler. While obviously simply solvable by sprinkling tons of
> extern "C" everywhere, it makes the whole thing a lot uglier.
> 3) It means that Gallium's public interface is a combination of C and
> C++. So implementing Gallium means: oo C structures (p_context) and
> C++ classes. Which quite frankly makes it just ugly. Interfaces are a
> lot like indentation: if they're not consistent they're just difficult
> to read, understand and follow.
> So while I do like C++ a lot and would honestly prefer it all over the
> place, mixing languages like that, especially in an interface, is just
> not a good idea.

My suggestion of an abstract Instruction class with virtual methods was
just for the sake of argument. You can achieve the same thing with a C
structure of function pointers together with the LLVM C bindings
included with LLVM
(http://llvm.org/svn/llvm-project/llvm/trunk/include/llvm-c/), which
appear to fully cover the IR generation interfaces.
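
Roughly what I have in mind, as a sketch only -- the struct and the
emit_* hooks below are invented for illustration, but the LLVM calls
are plain llvm-c/Core.h:

  #include <llvm-c/Core.h>

  /* Hypothetical per-driver emitter: plain C in the public interface,
   * with one hook per TGSI operation a driver may want to override. */
  struct tgsi_emit {
     LLVMBuilderRef builder;

     LLVMValueRef (*emit_add)(struct tgsi_emit *emit,
                              LLVMValueRef a, LLVMValueRef b);
     LLVMValueRef (*emit_mul)(struct tgsi_emit *emit,
                              LLVMValueRef a, LLVMValueRef b);
     /* ... */
  };

  /* Default hook: emit ordinary LLVM IR, so the stock optimization
   * passes still understand the result. */
  static LLVMValueRef
  default_emit_add(struct tgsi_emit *emit, LLVMValueRef a, LLVMValueRef b)
  {
     return LLVMBuildFAdd(emit->builder, a, b, "add");
  }

A driver that wants, say, a hardware MAD could plug its own function
into the corresponding hook and emit an intrinsic there instead.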

Optimization passes and related code might have to be written in C++ (I
really don't know), but those things are fairly isolated from the rest
of the driver anyway. We have included third-party compilers in gallium
drivers before, and it has worked very well.

> > or even better, generate asm statements directly from those methods.
> 
> That wouldn't work because LLVM wouldn't know what to do with them,
> which would defeat the whole reason for using LLVM (i.e. it would make
> optimization passes do nothing).

Good point. But can't the same argument be made for intrinsics? The
existing optimization passes don't know what to do with them either.

http://llvm.org/docs/ExtendingLLVM.html strongly discourages extending
LLVM, and if the LLVM IR is not good enough, then the question
inevitably is: does it make sense to use LLVM at all?

I know we have been considering using LLVM in Mesa and the Gallium
state tracker for GLSL -> TGSI translation. I believe it is a use case
very different from the pipe driver's. But I wonder how much LLVM would
give us there. After all, the LL in LLVM stands for low level, and TGSI
is still quite high level in some regards. (I'm playing devil's
advocate here, because I personally prefer that we don't reinvent the
wheel every time.)

> > Currently, my main interest for LLVM is to speed up softpipe with the
> > TGSI -> SSE2 translation. I'd like to code generate with LLVM the whole
> > function to rasterize and render a triangle (texture sampling, fragment
> > shading, blending, etc). I'm not particularly worried about the
> > representation as vanilla LLVM can already represent almost everything
> > needed.
> 
> That sounds great :) 
> For that you don't need gallivm at all though. Or do you want to mix the
> rasterization code with actual shading, i.e. inject the fragment shader into 
> the rasterizer? 

Something like that, along the lines of
http://www.ddj.com/hpc-high-performance-computing/217200602 , but I'm
not yet convinced that recursive rasterization is the best approach,
especially if one does not have 16-wide SIMD instructions as Larrabee
does.

> I'm not sure if the latter would win us anything.
> If the shader doesn't use any texture mapping then it's possible that
> you could get perfect cache-coherency when rasterizing very small
> patches, but texture sampling will thrash the caches anyway and it's
> going to make the whole process a lot harder to debug/understand.

I haven't thought about caching yet. But I plan to write unit tests for
each IR generator component (pixel (un)packing, texture sampling, etc),
regardless of whether the outcome is a monolithic function or not. From
my experience so far it doesn't take more than a dozen instructions to
make the IR hard to understand.
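
As a rough idea of what such a test could look like (LLVMVerifyModule
comes from llvm-c/Analysis.h; emit_unpack_r8g8b8a8 stands for a
hypothetical generator under test):

  #include <stdio.h>
  #include <llvm-c/Core.h>
  #include <llvm-c/Analysis.h>

  /* Hypothetical generator under test: adds the unpack function to
   * the given module. */
  void emit_unpack_r8g8b8a8(LLVMModuleRef module);

  static int
  test_unpack_r8g8b8a8(void)
  {
     LLVMModuleRef module = LLVMModuleCreateWithName("test");
     char *error = NULL;
     int ok = 1;

     emit_unpack_r8g8b8a8(module);

     /* Let LLVM's verifier catch malformed IR, then dump the module so
      * a human can sanity-check the (hopefully short) listing. */
     if (LLVMVerifyModule(module, LLVMReturnStatusAction, &error)) {
        fprintf(stderr, "bad IR: %s\n", error);
        ok = 0;
     }
     LLVMDisposeMessage(error);
     LLVMDumpModule(module);
     LLVMDisposeModule(module);
     return ok;
  }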

Jose

