Kristian Høgsberg wrote:
> On Nov 26, 2007 3:40 PM, Jerome Glisse <[EMAIL PROTECTED]> wrote:
>> Kristian Høgsberg wrote:
>>> On Nov 22, 2007 5:37 AM, Jerome Glisse <[EMAIL PROTECTED]> wrote:
>>> ...
>>>> I will go this way too for r300/r400/r500: there aren't that many register
>>>> changes between different contexts, and registers which need special
>>>> treatment will be handled by the drm (this boils down to where 3d gets
>>>> rendered, where the zbuffer is, and the pitch/tile information for these
>>>> two buffers; this information will be embedded in the drm_drawable, like
>>>> the cliprects, if I am right :)). It will be up to the client to reemit
>>>> enough state for the card to be in good shape for its rendering, and I
>>>> don't think it's worthwhile to provide facilities to keep the hw in a
>>>> given state.
>>> Are you suggesting we emit the state for every batchbuffer submission?
>>> As I wrote in my reply to Keith, without a lock you can't check that
>>> the state is what you expect and then submit a batchbuffer from user
>>> space.  The check has to be done in the kernel, and the kernel will
>>> then emit the state conditionally.  And even if this scheme can
>>> eliminate unnecessary state emission, the state still has to be passed
>>> to the kernel with every batchbuffer submission, in case the kernel
>>> needs to emit it.
>> I didn't explain myself properly. I meant that it's up to the client to
>> reemit all state in each call to the superioctl, so there will be a full
>> state emission, but I won't check it, i.e. if the client doesn't emit the
>> full state then userspace can simply expect buggy rendering. That said,
>> there are a few things I don't want userspace to mess with, as they likely
>> need special treatment: the offset in ram where 3d rendering is going to
>> happen, where the ancillary buffers are, ... I expect to have all this
>> information attached to a drm_drawable (rendering buffer, ancillary
>> buffers, ...), so each call to the superioctl will have this:
>> - drm_drawable (where rendering will be)
>> - full state
>> - additional buffers needed for rendering (vertex buffers, textures, ...)
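
For illustration, a rough sketch of what such a superioctl argument block
could look like; the struct and field names below are hypothetical, not an
existing interface:

#include <stdint.h>

/* Hypothetical sketch only: struct and field names are made up for
 * illustration and are not the actual radeon superioctl interface. */
struct my_super_args {
	uint32_t drawable_handle;  /* drm_drawable: render target, ancillary
	                              buffers and their pitch/tile info are
	                              looked up from this by the kernel */
	uint64_t state_ptr;        /* full state, reemitted by the client on
	                              every submission, relayed unchecked */
	uint32_t state_size;
	uint64_t buffers_ptr;      /* handles of buffers the commands use
	                              (vertex buffers, textures, ...) */
	uint32_t buffer_count;
	uint64_t cmds_ptr;         /* the command/batch buffer itself */
	uint32_t cmds_size;
};
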
> 
> Great, thanks for clarifying.  Sounds good.
> 
>>>> So I don't need a lock,
>>>> and indeed my actual code doesn't use any, except for ring buffer emission
>>>> (the only area shared among different clients that I can see in my case).
>>> So you do need a lock?  Could the ring buffer management be done in
>>> the drm by the super-ioctl code and would that eliminate the need for
>>> a sarea?
>> This ring lock is for internal use only. If one client is in the superioctl
>> and another is emitting a fence, both need to write to the ring, but through
>> different paths; to avoid any userspace lock I have a kernel lock which any
>> function writing to the ring needs to take. As writes to the ring should be
>> short, this shouldn't hurt. So no, I don't need a (userspace) lock and I
>> don't want one :)
>>
>> Hope I expressed myself better :)
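
For illustration, a minimal sketch of the kernel-side arrangement described
above; names like my_ring_emit() and my_device are made up, not the actual
radeon drm code. Both the superioctl path and the fence path funnel their
ring writes through one kernel mutex, which is always dropped before
returning to userspace, so no userspace-visible lock is needed:

#include <linux/mutex.h>
#include <linux/types.h>

struct my_device;                         /* stand-in for the driver's device */
void my_write_ring(struct my_device *dev, const u32 *dw, int n);  /* assumed helper */

static DEFINE_MUTEX(my_ring_mutex);

/* Every ring write, whatever the caller, goes through here. */
static void my_ring_emit(struct my_device *dev, const u32 *dw, int n)
{
	mutex_lock(&my_ring_mutex);       /* short critical section */
	my_write_ring(dev, dw, n);
	mutex_unlock(&my_ring_mutex);     /* never held across a return
	                                   * to userspace */
}

/* Superioctl path: one client submitting validated commands ... */
static void my_super_submit(struct my_device *dev, const u32 *cmds, int n)
{
	my_ring_emit(dev, cmds, n);
}

/* ... while another client emitting a fence takes the same kernel lock. */
static void my_emit_fence(struct my_device *dev, u32 seq)
{
	u32 pkt[2] = { 0x0 /* fence opcode, made up */, seq };

	my_ring_emit(dev, pkt, 2);
}
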
> 
> Hehe, I guess I've been a bit unclear too: what I want to make sure we
> can get rid of is the *userspace* lock that's currently shared between
> the X server and the DRI clients.  What I hear you saying above is that
> you still need a kernel-side lock to synchronize access to the ring
> buffer, which of course is required.  The ring buffer and the lock
> that protects it live in the kernel, and user space can't access them
> directly.  When emitting batchbuffers or fences, the ioctl() will need
> to take the lock, but will always release it before returning to
> userspace.
> 
> Sounds like this will all work out after all :)
> Kristian

Yes. By the way, I started looking at your dri2 work but didn't find a dri2
branch in your intel ddx repository; did you push the code anywhere else?
It would help me to see what I need to do for dri2 in the ddx.

Cheers,
Jerome Glisse
