Around 22 o'clock on Dec 8, Juliusz Chroboczek wrote:

> In my current setup, I'm using an 8:8:8 window and a 8:8:8:8 backing
> pixmap.  Once in a while, I blit the modified area of the pixmap onto
> the window by using XRenderComposite with PictOpSrc.

Because the 8:8:8 window is depth 24 and the 8:8:8:8 backing pixmap is 
depth 32, the XFree86 DDX won't store the pixmap in the frame buffer, even 
though it may well fit.

If you could use an 8:8:8 pixmap, you would then be able to use XCopyArea, 
which would be plenty fast.

> they convert the primitive into spans and repeatedly invoke a method which
> does a XRenderFillRectangle with PictOpSrc; I then channel the drawing to
> both the pixmap and the window to avoid having to do a blit.  Is that okay,
> or do you recommend working with XDrawRectangle on the underlying
> drawables?

At this point, any time you can use core rendering routines in place of 
Render, you'll likely see significant performance benefits.

> When the same thing happens with AA rendering, the method that gets invoked
> is the very same one as the one that is used when doing AA text; thus, I
> put the span into a glyphset, and do XRenderCompositeString8 on a
> one-character eight-bit-deep string with PictOpSrc and a 1x1 pixmap as src.
>  Is that okay, or should I avoid manipulating glyphsets when I know that
> the given mask won't be reused.

You should not use glyphsets like this; Render shares common glyphs among 
glyphsets and so inserting them at this rate will have some performance 
impact.  Instead, just use XPutImage and a regular Composite.  
Alternatively, you can convert the span into a trapezoid and draw that 
where supported (you can check the protocol version).

> In that method, I'm seeing rendering artifacts when the span is over
> 256 pixels wide; the rendering appears to be cut at an abscissa of
> 256.

Hmm.  I may not understand your rendering code; are you sending glyphs 
larger than 256x1 to the server?  I don't know that I've ever tried that 
before...

> Finally, is it possible to add an external alpha channel to an
> existing pixmap?  I'm thinking of dynamically allocating the alpha
> channel on the first non-opaque rendering operation in cases when the
> window's pictformat has no associated pictformat with good alpha.

Yes.  You create the alpha channel pixmap, create a picture for it, and 
then set the alpha-map attribute of the RGB picture to the alpha picture.

This would also permit you to use XCopyArea from the pixmap and store the 
pixmap in off-screen memory, which will result in essentially 
instantaneous updates (and no need for your duplicate FillRectangle call 
above).

Just as the core server started with relatively simplistic dumb frame 
buffer code and slowly improved in performance as people gained 
experience accelerating it with hardware, Render is currently quite a bit 
slower than it could be.  Some of the changes will require significant DDX 
redesign; in particular, supporting multiple accelerated pixmap depths is 
going to require reworking lots of code.  Developing an infrastructure to 
accelerate common Render operations will require even more work as the DIX 
level doesn't currently provide much help at all.

Keith Packard        XFree86 Core Team        HP Cambridge Research Lab


_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render