On Sat, 29 Jan 2005 13:28:51 +0100, Nicolai Haehnle <[EMAIL PROTECTED]> wrote:
> On Friday 28 January 2005 20:20, Timothy Miller wrote:
> > The X server is, by definition, a privileged process. You should be
> > able to trust it to behave correctly and protect the hardware from
> > errant X clients.
>
> I don't agree with your assumption. Why does the X server have to be a
> privileged process?
It doesn't have to be. But since it CAN be, we might as well take advantage
of that fact in order to make it more efficient. Let's not cripple something
when we don't HAVE to.

> > I come from a world where we would (used to) run out of graphics
> > memory for pixmaps quickly. Say you're in 24-bit MOX mode on a Raptor
> > 2000. It's an old product, so it only has 24 megs of memory. You
> > have memory for the viewable framebuffer plus ONE screen-sized pixmap.
> > Everything else has to live in host memory. As such, our DDX layers
> > have always had to deal with the fact that a pixmap may be either
> > "accelerated" (in graphics memory) or not (in host memory), and do the
> > right thing in all cases.
> >
> > Things are more complicated with 3D and texturing, because if a
> > texture is swapped out of card memory, you can't just punt to a
> > software algorithm to do the rendering. Well, you can, but it sucks,
> > because you have to write a software renderer that can handle
> > everything including doing depth/stencil buffer reads and updates
> > properly. In X11, it's just WAY simpler, and you can fall back on CFB
> > to do rendering to host memory pixmaps for you without it being an
> > issue.
>
> We can't get away without a full software fallback anyway, e.g. because the
> hardware won't support projective textures, and probably for the more
> exotic OpenGL modes like feedback. This is not that big a deal because we
> can just use Mesa for that. So how are 3D and 2D any different, again?

This is problematic. For instance, the chip does W-buffering, while Mesa
probably does Z-buffering. We'd have to rewrite parts of Mesa in order to be
able to fall back on it and still have it WORK.

> > Another thing to realize is that it's unusual to have more than one GL
> > client running at one time, but you USUALLY have LOTS of X11 clients.
> > They all allocate gobs of pixmaps, and you really CAN run out of
> > graphics memory, and the user should never have to know that.
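For anyone who hasn't poked at a DDX: the "accelerated (in graphics memory) or not (in host memory)" handling described above boils down to a per-operation dispatch on where the pixmap's backing store lives. Here's a toy sketch; all names are invented for illustration and don't correspond to any real driver's API:

```c
#include <assert.h>

/* Toy sketch of a DDX-style dispatch: every drawing entry point checks
 * where the pixmap's backing store lives and picks either the engine
 * path or the software (CFB-style) path. All identifiers here are
 * hypothetical; no real driver API is implied. */

struct pixmap {
    int in_vram;  /* nonzero: backing store is card memory */
};

static int hw_fills, sw_fills;  /* instrumentation for the example */

static void hw_fill_rect(struct pixmap *p, int x, int y, int w, int h)
{
    (void)p; (void)x; (void)y; (void)w; (void)h;
    hw_fills++;  /* stand-in for programming the blit engine */
}

static void cfb_fill_rect(struct pixmap *p, int x, int y, int w, int h)
{
    (void)p; (void)x; (void)y; (void)w; (void)h;
    sw_fills++;  /* stand-in for the software rasterizer */
}

static void fill_rect(struct pixmap *p, int x, int y, int w, int h)
{
    if (p->in_vram)
        hw_fill_rect(p, x, y, w, h);   /* accelerated path */
    else
        cfb_fill_rect(p, x, y, w, h);  /* host-memory fallback */
}
```

The point is that the client never sees which path ran, so the server is free to migrate pixmaps between card and host memory whenever it runs out of room.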
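To make the W-buffer/Z-buffer mismatch I mentioned concrete: the two schemes store very different values for the same eye-space depth, so a software path writing Z-buffer values can't meaningfully share a depth buffer with hardware that interprets it as W values. A toy illustration, using the textbook perspective depth mapping (the function names are mine, not Mesa's):

```c
/* Illustration only: how a Z-buffer and a W-buffer encode the same
 * eye-space depth 'w' in [near, far]. Function names are invented. */

/* Z-buffer: post-projection z/w remapped to [0,1]; nonlinear in w,
 * with most of the precision bunched up near the near plane. */
static double zbuffer_value(double w, double near, double far)
{
    return (far / (far - near)) * (1.0 - near / w);
}

/* W-buffer: eye-space depth normalized to [0,1]; linear in w. */
static double wbuffer_value(double w, double near, double far)
{
    return (w - near) / (far - near);
}
```

Halfway between a near plane of 1 and a far plane of 100, the W-buffer stores 0.5 while the Z-buffer already stores roughly 0.99, so depth values written by one scheme are useless for comparisons under the other.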
>
> While this is true today, I would prefer a more forward-looking approach.
> With developments like the Cairo rendering library, why shouldn't *every*
> local X11 client be able to accelerate rendering operations directly,
> including 2D operations?

In my world, network transparency of X11 is something that can never be
sacrificed. Network transparency is X11's greatest strength. Indeed, I think
that the fact that OpenGL isn't network transparent (other than via GLX) is
one of its greatest weaknesses. With the convergence of 2D and 3D, however,
my belief is that X11 and OpenGL both need to be replaced by a single,
unified, centralized, network-transparent GUI system that can emulate both.

> [snip]
>
> > X11 is a bottleneck, since it's single-threaded and every graphical
> > client on the box relies on it. If you don't give it means to be more
> > efficient and flexible, you'll CRIPPLE it and thereby cripple all X11
> > clients.
>
> In general, I agree. However, does the X server really require root
> privileges to be efficient enough? Why shouldn't a normal client be able to
> be as efficient? Remember my Xnest example, and the possibility of a
> client-side accelerated Cairo.

At least under Solaris, doing an ioctl context switch to initiate DMA is so
horribly slow that it was absolutely necessary for the X server to be able
to initiate DMA transfers itself--it had to be able to talk to the hardware
directly. But being able to talk to the hardware directly like that is a
security problem for OpenGL clients, because they're unprivileged user
processes.

> Perhaps instead of a root/non-root privilege discrimination, there could be
> a session-leader/session-client hierarchy. The session-leader will *not*
> get full access to the hardware, because the session-leader can be a
> non-root process. However, the session-leader *will* be able to control its
> clients, e.g.
> by revoking graphics access from a broken/runaway client, and
> it *will* have a higher priority when it comes to resource allocation.
>
> An Xnest server will be the client of the "real" X server, and it will be
> session-leader for "its" clients. In fact, this hierarchy could also fix
> running multiple fullscreen X servers on the same hardware by default,
> because the parallel X servers would no longer be special - they would just
> be clients to a controlling session-leader (this master leader could at the
> same time be the process that controls the memory management).

Well, if the connection between client and server is like X11, where
commands can be transferred in bulk, limiting the impact of context-switch
overhead, then yes, what you say makes sense, and that goes back to my idea
of unifying X11 and OpenGL in a network-transparent way.

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
