Re: [IDEA] shrink xrender featureset

2008-11-25 Thread Juliusz Chroboczek
 Well, if you let me decide between software rendering on the client or
 software rendering on the server, I would prefer the latter.

It's not that clear cut.  At least some of the motivation behind Render is
about moving time-consuming operations into the client, notably font 
rasterisation.

There are two reasons why you may want to move stuff into the client.  One
is flexibility: for most users, it's easier to install a new version of
a library than a new version of the X server.  This was the principal
reason for moving font rasterisation into the client.

The other point is that having time-consuming operations in the server
increases client latency.  Before Render, all font rasterisation happened
in the server, and this would cause noticeable pauses (with the whole
server frozen, not just a single client).

While it is possible to implement background processing in the server,
using ``Block Handlers'' (that's how I implemented the now-deceased XFree86
DPS extension), it's difficult, error-prone, and there are just three
people in the universe who know how it's done.
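To make the ``Block Handler'' idea concrete, here is a toy sketch (illustrative Python, not real X server code; all names are made up): the long-running job is sliced into small steps, and each time the server is about to block waiting for input, it runs one slice and returns quickly, so the event loop never stalls.

```python
# Illustrative sketch of slicing background work so the event loop
# never stalls -- the idea behind X server "Block Handlers".
from collections import deque

class Server:
    def __init__(self):
        self.background = deque()   # pending long-running jobs, as generators

    def start_job(self, job):
        self.background.append(job)

    def block_handler(self):
        # Called just before the server waits for input: run one bounded
        # slice of one background job, then return to the event loop.
        if self.background:
            job = self.background.popleft()
            try:
                next(job)                    # do a small amount of work
                self.background.append(job)  # requeue the unfinished job
            except StopIteration:
                pass                         # job finished

def rasterise_font(n_glyphs, done):
    for g in range(n_glyphs):
        done.append(g)                       # "rasterise" one glyph per slice
        yield

server = Server()
done = []
server.start_job(rasterise_font(3, done))
for _ in range(10):                          # main loop: serve clients, idle
    server.block_handler()
assert done == [0, 1, 2]
```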

Juliusz
___
xorg mailing list
xorg@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/xorg


Re: [IDEA] shrink xrender featureset

2008-11-25 Thread Juliusz Chroboczek
 Wasn't the reason to do font rasterization primarily to give applications
 more control over font rendering?

If memory serves, Keith was trying to find a design to solve both issues
with core fonts -- lack of flexibility and latency.  There was an extended
brain-storming session on the old XFree86 list (Keith, Ralf, Rob Pike,
myself, probably others I forget), and suddenly there was this collective
insight -- client-side rasterisation solves both problems in an elegant
way.

 After all, isn't that just an implementation problem of the X server?

The fact that other clients are locked out during font rasterisation is.
However, it's tricky to fix -- it basically requires either threading the
server, or converting your font rasteriser to continuation-passing style.
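A toy illustration of the continuation-passing option (purely a sketch, with made-up names): each call does one glyph's worth of work and hands back a continuation for the scheduler to resume later, so no single font ever monopolises the server.

```python
# Toy continuation-passing rasteriser: one bounded step per call,
# returning a continuation (or None when finished) to the scheduler.
def rasterise_cps(glyphs, out, k):
    if not glyphs:
        return k                          # done: return the final continuation
    out.append(glyphs[0].upper())         # "rasterise" one glyph
    return lambda: rasterise_cps(glyphs[1:], out, k)

out = []
step = lambda: rasterise_cps(list("abc"), out, None)
while step is not None:
    step = step()                         # scheduler: resume one step at a time
assert out == ["A", "B", "C"]
```

Between any two steps the scheduler is free to service other clients, which is exactly what the blocked-rasteriser design prevents.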

The fact that fonts cannot be rasterised incrementally but must be fully
rasterised at font open time, on the other hand, is a design flaw of the
core fonts mechanism.  (Due to the fact that the core protocol requires
providing accurate ink metrics at font open time.)
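To see why accurate ink metrics force eager rasterisation, consider this sketch (illustrative; the bitmaps are canned stand-ins for a real rasteriser): the ink bounding box is the tight box around the pixels actually painted, which is only known after rendering, yet the core protocol reports it for every glyph in the QueryFont reply.

```python
# Ink metrics are a property of the *rendered* glyph, so answering
# QueryFont accurately means rasterising every glyph at font open time.

def ink_extents(bitmap):
    """Tight bounding box (x0, y0, x1, y1) of the set pixels, or None."""
    pts = [(x, y) for y, row in enumerate(bitmap)
                  for x, v in enumerate(row) if v]
    if not pts:
        return None
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))

# Stand-in for rasterisation: a canned bitmap per glyph.
glyph_bitmaps = {
    "i": [[0, 1, 0],
          [0, 0, 0],
          [0, 1, 0],
          [0, 1, 0]],
}

# To fill in the QueryFont reply, *every* glyph must be rendered first:
metrics = {g: ink_extents(bmp) for g, bmp in glyph_bitmaps.items()}
assert metrics["i"] == (1, 0, 1, 3)
```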

 When doing e.g. gradients client-side, all hope for hw acceleration is
 lost, furthermore it would mean transferring a _lot_ of data between the
 client and the server which would pretty much kill network performance.
 Furthermore it would lead to more frequent syncs (when shm is used) or
 increased copy-overhead (when going through protocol).

In no way am I claiming that client-side gradients are the right solution.
I'm simply saying that the client- vs. server-side debate is seldom as
clear cut as a previous speaker made it, and that the pros and the cons
need to be carefully weighed.  My personal instincts tend to go towards
client-side operations whenever possible -- every complex server-side
operation I tend to see as a failure to design the right protocol-level
abstractions.

As far as network and SHM performance -- Keith convinced me at some point
that we don't currently have a good pixmap transport extension.  I'd like
to see something that uses a windowed, non-blocking form of SHM when
working locally, and some smart compression method when working remotely.
(There's no reason why the compression mechanism shouldn't have an ad-hoc
encoding for gradients, if gradients are found to be important.)

Point taken about hw acceleration, although I happen to think (or hope)
that hw acceleration of 2D graphics is going the way of the dodo.

Juliusz


Re: [IDEA] shrink xrender featureset

2008-11-23 Thread Clemens Eisserer
 Trapezoids for example would require implementing a rasteriser in shaders.
 Pretty much everything that doesn't get accelerated these days requires
 shaders.
 Tomorrow someone might come and ask for a different type of gradient, why
 even bother?

Well, if you let me decide between software rendering on the client or
software rendering on the server, I would prefer the latter.
Furthermore, how would you generate AA geometry if not with trapezoids -
would you XPutImage the geometry to the server?

 Fallbacks are rarely efficient; iirc Intel GEM maps memory with write
 combining, which isn't very friendly for readback.
For gradients you don't really need to do fallbacks, and for
trapezoids you can use a temporary mask.
This is all write-only; it's just a matter of how the driver/acceleration
architecture handles it.
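A sketch of the temporary-mask approach (illustrative only; no real driver or EXA API, and no antialiasing): the trapezoid is rasterised into a freshly allocated A8 mask with pure writes, never reading the destination back, and the mask then feeds the composite operation as usual.

```python
# Rasterise a trapezoid into a write-only A8 coverage mask.

def trap_to_mask(w, h, top, bot, left_x, right_x):
    """Fill a w*h A8 mask for a trapezoid with horizontal top/bottom
    edges; left_x(y) and right_x(y) give the edge positions per scanline."""
    mask = bytearray(w * h)           # temporary surface, writes only
    for y in range(max(0, top), min(h, bot)):
        x0 = max(0, int(left_x(y)))
        x1 = min(w, int(right_x(y)))
        for x in range(x0, x1):
            mask[y * w + x] = 255     # full coverage (no AA in this sketch)
    return mask

# A 4-scanline trapezoid widening from 2 to 4 pixels:
mask = trap_to_mask(8, 4, 0, 4,
                    left_x=lambda y: 3 - y // 2,
                    right_x=lambda y: 5 + y // 2)
rows = [sum(1 for x in range(8) if mask[y * 8 + x]) for y in range(4)]
assert rows == [2, 2, 4, 4]
```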

 I intentionally brought this up before people actually implement this. The
 question is why not use opengl or whatever is available to do this? You're
 putting fixed stuff into a library that only hardware with flexible shaders
 can do, why not use something that just exposes this flexibility in the
 first place?
Well, first of all - because it's already there... and, except for some
not-so-mature areas, it works quite well.
Second, Java has an OpenGL backend, and currently I am not sure whether
even the current NVIDIA drivers are able to run it - I am pretty
sure _none_ of the open drivers can.
I guess XRender has the advantage that drivers are simpler to
implement compared to a full-fledged OpenGL implementation.

Once OpenGL is stable and mature, and scalable enough to run dozens of
apps simultaneously, it should not be a problem to host XRender on top
of it.

- Clemens


[IDEA] shrink xrender featureset

2008-11-22 Thread Maarten Maathuis
Currently there exist several operations in xrender that are better
off client side or through some other graphic api (imo). Think of
trapezoid rasterisation, gradient rendering, etc. Doing this stuff
client side avoids unforeseen migration issues and doesn't create any
false impressions with the api users.

My suggestion would be to deprecate everything, except solid,
composite, cursor stuff and glyphs. The idea is to stop doing
seemingly arbitrary graphics operations that end up causing slowness
most of the time (if not worked around properly). At this stage
no one accelerates these operations, so there can be no complaints
about that.

xrender is here to stay, but there are limits to it, so let's accept
this and move on (for other needs).

How do others feel about this?

Maarten.


Re: [IDEA] shrink xrender featureset

2008-11-22 Thread Clemens Eisserer
Hi,

 Currently there exist several operations in xrender that are better
 off client side or through some other graphic api (imo). Think of
 trapezoid rasterisation, gradient rendering, etc.
 Doing this stuff
 client side avoids unforeseen migration issues and doesn't create any
 false impressions with the api users.
Well, in my opinion this is not a question of where to do the stuff
(client/server) - but rather how.
Both trapezoids and gradients cause migration because EXA currently
takes a quite strict view of acceleration:
if something can be done in hardware it is done in hardware, and
everything else is migrated out.

You could still do on the server the same thing you would do client-side.
Just imagine you copy gradients or traps to a temporary surface before
you use them in a composition operation - it would be the same as
client side, except you don't need to copy everything around.
Furthermore, drivers can often fall back efficiently, like the Intel drivers with GEM.
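The temporary-surface idea can be sketched like this (illustrative Python; the function name and single-channel format are made up, not an EXA or driver interface): the gradient is evaluated once into a small server-side buffer, which then feeds Composite like any other source picture, with no migration of the destination.

```python
# Evaluate a gradient once into a temporary surface, then reuse it.

def linear_gradient_a8(w, c0, c1):
    """One row of a horizontal linear gradient, 8-bit single channel."""
    if w == 1:
        return bytes([c0])
    return bytes(c0 + (c1 - c0) * x // (w - 1) for x in range(w))

# A 5-pixel ramp from 0 to 255, ready to act as a composite source:
row = linear_gradient_a8(5, 0, 255)
assert row == bytes([0, 63, 127, 191, 255])
```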

If you omitted gradients and trapezoids you would also have to transport a
lot of stuff over the wire - not really nice.

 My suggestion would be to deprecate everything, except solid,
 composite, cursor stuff and glyphs. The idea is to stop doing
 seemingly arbitrary graphics operations that end up causing slowness
 most of the time (if not worked around properly). At this stage
 noone accelerates these operations, so there can be no complaints
 about that.
Well, at least NVIDIA plans to accelerate gradients as well as
trapezoids in their proprietary drivers.
Intel also has plans to optimize gradients with shaders.

My opinion is that RENDER is quite fine, but there are some parts
where drivers are lacking.
Hopefully the situation will improve soon, at least for gradients.

- Clemens