On 2002.06.01 23:49 Leif Delgass wrote:
> ...
> 
> I found and fixed this problem (I was using ring.tail to set the offset
> for buffer "aging" before it was incremented).
> 
> ...
> 
> I've cleaned this up a bit and done the work in COMMIT_RING.
> 
> ...
> 
> OK, I'm still not sure if I have this right, but I reduced this to a
> wmb() between setting the EOL flag on the new tail and clearing the
> EOL flag from the old tail.  We need to ensure that the EOL flag on
> the new tail is in place first so there's no chance for the card to
> read past the end of our valid data.
> 
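
Just to check that I understand the ordering, here is roughly what I
picture COMMIT_RING doing now (only a sketch: I'm assuming the tail is
a dword index into the ring, that each descriptor is 4 dwords with the
EOL flag in the last one, and the MACH64_DMA_EOL name is from memory):

	u32 *ring = (u32 *) dev_priv->ring.start;
	int old_tail = dev_priv->ring.tail;

	/* 1. Mark the new tail as end-of-list first. */
	ring[new_tail + 3] |= MACH64_DMA_EOL;

	/* 2. Make sure that write reaches memory before the card can
	 *    see the old tail opened up. */
	wmb();

	/* 3. Only now clear EOL on the old tail, so the card can never
	 *    chase descriptors past the end of our valid data. */
	ring[old_tail + 3] &= ~MACH64_DMA_EOL;

	dev_priv->ring.tail = new_tail;

If that matches the code, a single wmb() between the two stores does
look like all we need on the CPU side.
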
> > > > - Why do you call mach64_do_dma_flush( dev_priv ) in COMMIT_RING?
> > >
> > > Again, that was just from my hacking/testing.
> 
> OK, this is still there and the reason is that the flush (for this
> path) checks for idle and starts a new pass if there are descriptors
> waiting on the ring.  There's still a problem here because a new pass
> is started even with just one descriptor in the ring.  Eventually,
> one takes long enough to get a queue started, but this isn't optimal.

I was reminded of that too, but I thought the buffers would fill up 
pretty quickly. Isn't this also because the client is sending very small 
buffers? We should be batching them across different primitives. We 
could do extra buffering in the DRM, but it would be a little difficult 
to tune and would create unnecessary overhead. Then again, in several 
contexts it would probably pay off. Perhaps just not sending descriptor 
buffers right away until we have more than one (or the minimum number we 
need to queue them) would be sufficient. But we also need to consider 
very carefully the effect of this with multiple non-fullscreen contexts.
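
For instance, something along these lines (just a sketch; the
MACH64_MIN_QUEUE_DESCRIPTORS threshold and the ring field names are
made up):

	#define MACH64_MIN_QUEUE_DESCRIPTORS 4

	/* Only kick off a new pass once a minimal batch of descriptors
	 * has accumulated; a single descriptor isn't worth a pass. */
	static void mach64_maybe_flush( drm_mach64_private_t *dev_priv )
	{
		drm_mach64_ring_buffer_t *ring = &dev_priv->ring;
		int pending = ( ring->tail - ring->head + ring->size )
			      % ring->size;

		if ( pending >= MACH64_MIN_QUEUE_DESCRIPTORS )
			mach64_do_dma_flush( dev_priv );
	}

We would still need a fallback (an idle check or a timer) so that a
lone buffer doesn't sit in the ring forever on a quiet context.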

> I also added a flush (only for this path) in freelist_get before
> looking for buffers to reclaim (if the freelist is empty), since the
> card could be idle at that point.
> 
> ...
> 
> Thanks, I'm feeling better now. :)  Now that the original path of
> flushing in batches is working again with the new code, I've
> committed everything.

Great.

> The new path is actually kind of working, but I still run into
> lockups and sometimes garbage being drawn.  One of the annoying
> things is that BM_GUI_TABLE can be set to some seemingly random value
> if a lockup happens (are there other cases?), so I've added a sanity
> check in the flush function.  Since we don't know where the card will
> stop, we have to restart it with whatever address is in BM_GUI_TABLE
> when it goes idle.  I check to make sure it's actually within the
> address range of the ring.  I've also added code to reset the ring
> structure and BM_GUI_TABLE in engine_reset to account for this.  Some
> other notes of interest:  the ring_wait is really not necessary,
> since 128 16kB buffers can use a max of 512 of the 1024 descriptors.
> We'd need 4MB of buffers to fill the ring.
> 

Besides the reason you mentioned in a subsequent post, we also don't 
know how big the buffers the client sends us will be.
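
On the BM_GUI_TABLE sanity check, I assume it amounts to something
like this (again a sketch; the register and read macro names are how I
remember them, and the ring field names are guesses):

	u32 table = MACH64_READ( MACH64_BM_GUI_TABLE );
	u32 start = dev_priv->ring.start_addr;  /* bus address of ring */
	u32 end = start + dev_priv->ring.size;

	if ( table < start || table >= end ) {
		/* BM_GUI_TABLE is garbage (e.g. after a lockup), so
		 * don't restart from a random address; fall back to
		 * the head of the ring. */
		table = start;
	}

Is that about right?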

> I suppose we could replace the RING_SPACE_CHECK macro with a check for
> enough buffers to start a new pass if the card is idle, then we probably
> wouldn't need the flush in freelist_get.  At the moment, the performance
> gain is modest: less than 1 fps in gears, and with the quake3 timedemo
> (vertex lighting), I went from a previous best of 19.4 fps to 20.1 fps.

I suppose this increase is with the new path?
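
About replacing RING_SPACE_CHECK, I imagine the check would reduce to
something like this (a sketch; mach64_do_engine_idle() and
mach64_ring_pending() are names I'm assuming, and the threshold is the
hypothetical one from above):

	/* Start a new pass only when the engine is idle and enough
	 * buffers have queued up to make the pass worthwhile. */
	if ( mach64_do_engine_idle( dev_priv ) &&
	     mach64_ring_pending( dev_priv ) >=
	     MACH64_MIN_QUEUE_DESCRIPTORS ) {
		mach64_do_dma_flush( dev_priv );
	}

That would indeed make the flush in freelist_get redundant.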

> I think the texture management code is still the main performance problem
> for games at the moment (though with the quake3 demo it mainly crops up
> only with lightmap lighting).
> 

Yeah, but the increase in fps in glxgears should be noticeable.

Great work, Leif. I'll start working on the new path, doing a global 
review to understand it and checking for any bugs.

Regards,

José Fonseca
