On Thu, Dec 10, 2009 at 11:44 AM, Zack Rusin <za...@vmware.com> wrote:
> On Thursday 10 December 2009 11:25:48 Younes Manton wrote:
>> On Thu, Dec 10, 2009 at 5:32 AM, Zack Rusin <za...@vmware.com> wrote:
>> > On Wednesday 09 December 2009 20:30:56 Igor Oliveira wrote:
>> >> Hi Zack,
>> >>
>> >> 1) agreed. OpenCL is a completely different project and should exist in
>> >> a different repository.
>> >> 1.1) Well, using Gallium as a CPU backend is a software dilemma:
>> >> "All problems in computer science can be solved by another level of
>> >> indirection... except for the problem of too many layers of
>> >> indirection"
>> >> But in my opinion we can use Gallium for CPU operations too; using
>> >> Gallium as the backend for all device types keeps the code consistent.
>> >
>> > Yes, it will certainly make the code a lot cleaner. I think using
>> > llvmpipe we might be able to get it working fairly quickly. I'll need to
>> > finish a few features in Gallium3d first. In particular we'll need to
>> > figure out how to handle memory hierarchies, i.e. private/shared/global
>> > memory accesses in shaders. Then we'll have some basic tgsi stuff like
>> > scatter reads and writes to structured buffers, types in tgsi (int{8-64},
>> > float, double), barrier and memory barrier instructions, atomic reduction
>> > instructions, performance events and likely trap/breakpoint instructions.
>> > We'll be getting all those fixed within the next few weeks.
>>
>> Doesn't seem like the current pipe_context is suited to the
>> requirements of a compute API.
>
> Can you be more specific? Which parts do you think aren't suited for it?
>
>> Should it be made larger or is another kind of context in order?
>
> I don't see anything missing from pipe_context to warrant a new interface.
> What exactly is your concern?

Well, how do we keep the compute state separate from the 3D state, and
how do we mix the two? Do you have two state trackers using the same
pipe_context and re-emitting their entire state to the HW as
necessary? Do you use two pipe_contexts? What about cards that know
about compute and keep separate state? When you set a shader/read
buffer/write buffer/const buffer with the pipe_context, it's not clear
to me what we should do on the driver's side.

>> Under the hood on nvidia cards there are
>> separate hardware interfaces for compute, graphics, video, even though
>> there is some duplicate functionality, so it's not like most of the
>> code of our current pipe_context would be reused*, so to me a
>> different type of context makes sense.
>
> Really? To be honest I've never seen any compute specific hardware in nvidia,
> what is it?

The card basically has separate state for DMA, 2D, 3D, video, compute
on nv50+, and a bunch of others. When we create a pipe_context we bind
the DMA, 2D, and 3D engines and some of the others and issue commands.
For nv50 we have a compute state, but we need to know what to do with
commands coming through pipe_context: are they for 3D or compute?
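A toy model of that ambiguity (engine names and the method number are made
up for illustration; this is not how nouveau is actually structured): each
engine is a separate object addressed per-method in the command stream, so
the driver has to pick an engine for every state-setting call, and the
current pipe_context interface doesn't say which one is meant.

```c
#include <assert.h>

/* Hypothetical sketch: on nv50-class hardware each engine (2D, 3D,
 * compute, ...) is a separate object, and every method in the command
 * stream is addressed to one of them. */

enum engine { ENGINE_3D, ENGINE_COMPUTE };

struct pushbuf {
   enum engine last_engine;
   unsigned methods_emitted;
};

/* Emit one method to one engine's state object. */
static void emit_method(struct pushbuf *pb, enum engine e,
                        unsigned mthd, unsigned data)
{
   (void)mthd; (void)data;
   pb->last_engine = e;
   pb->methods_emitted++;
}

/* The problem from the mail: a constant-buffer bind coming through
 * pipe_context could target either engine, and the driver has to guess
 * (the 0x1280 method number here is made up). */
static void bind_constant_buffer(struct pushbuf *pb, int for_compute)
{
   emit_method(pb, for_compute ? ENGINE_COMPUTE : ENGINE_3D, 0x1280, 0);
}
```

With a single pipe_context the `for_compute` bit has to come from
somewhere; either the interface grows compute-specific entry points, or a
second context type carries that information implicitly.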

_______________________________________________
Mesa3d-dev mailing list
Mesa3d-dev@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mesa3d-dev
