On Friday 28 January 2005 18:21, Timothy Miller wrote:
> I want to drop a feature:  The ability for the rendering engine to 
> access host memory.  That is, you can't have a texture in host memory. 
> Instead, DMA can do two things:  (1) fetch drawing commands, and (2) 
> move blocks of memory data in and out of the graphics memory.
> 
> What this means is that if a texture is not in graphics memory, 
> something has to be swapped out first.

Sounds reasonable.

Question: Will DMA copies and rendering (be able to) run in parallel in the 
hardware? I.e. could there potentially be some rendering going on while at 
the same time a texture is being uploaded?

> This then makes me think about memory management.  What I would like is 
> unified memory management between GL and X.  We can implement this as a 
> daemon.  The daemon manages both graphic memory and pixmaps/textures 
> which have been swapped out.  In addition, it's good to use a user 
> process for this so that swapped out images can also be swapped to disk 
> (automatically by the VM).
>
> Our kernel driver can provide an ioctl interface which allows 
> applications to allocate memory (and when an application dies, the 
> kernel can figure it out and automatically free the resources).  While 
> this would entail some overhead, I don't think it would be so bad.
> 
> One of the jobs of the memory manager is to make sure that textures are 
> available when they're needed.  Using a LRU algorithm, it can swap 
> textures in and out just before they get used.

So your idea of a memory manager looks somewhat like this:
To the memory manager, the main entity is the "block of memory" (I expect 
the size of such a block would always be a multiple of some larger number, 
say 64KB). The manager doesn't really care whether this block contains a 
texture, a depth buffer, or chocolate cake.

Applications (this includes 3D clients and the X server itself) can allocate 
blocks, which gives them an opaque handle. They can then request that the 
block be in video memory for subsequent rendering operations. There also 
have to be some primitives that allow mmap'ing, reading and writing the 
block's contents. Something (either the kernel or a special dedicated 
daemon) moves blocks in and out of video memory as necessary.
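
The LRU policy from the original mail is simple enough to sketch. Here is a
minimal illustration in C; every name in it (block_t, evict_lru, the fields) is
invented for illustration, and a real manager would of course also copy the
victim's contents out to host memory before reusing the space:

```c
#include <stddef.h>

/* Hypothetical block descriptor -- all names invented for illustration. */
typedef struct block {
    size_t size;        /* a multiple of the 64KB granularity */
    unsigned last_use;  /* tick of the most recent operation touching it */
    int in_vram;        /* currently resident in video memory? */
} block_t;

/* Evict least-recently-used resident blocks until `needed` bytes are free.
 * Returns the number of bytes actually freed.  A real manager would also
 * move the evicted contents to host memory (and let the VM page them to
 * disk from there, as suggested above). */
static size_t evict_lru(block_t *blocks, size_t n, size_t needed)
{
    size_t freed = 0;
    while (freed < needed) {
        block_t *victim = NULL;
        for (size_t i = 0; i < n; i++) {
            if (blocks[i].in_vram &&
                (!victim || blocks[i].last_use < victim->last_use))
                victim = &blocks[i];
        }
        if (!victim)
            break;              /* nothing left to evict */
        victim->in_vram = 0;
        freed += victim->size;
    }
    return freed;
}
```

The nice property of doing this over opaque blocks is that the eviction loop
never needs to know whether a block is a texture, a pixmap, or the cake.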

All this happens via a kernel ioctl interface. So whether there is a user 
process behind all this is really an implementation detail, right? A 
relevant detail, but still - the applications themselves will never know 
the difference.
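
To make the "opaque handle" idea concrete, something like the following is what
I have in mind for the application-visible side. This is a plain userspace mock,
not the real thing -- in the actual design these would be ioctls on the kernel
driver's device node, and every name and number here is invented:

```c
#include <stddef.h>

#define BLOCK_GRANULE (64 * 1024)   /* assumed 64KB allocation granularity */
#define MAX_BLOCKS 64

/* Mock state standing in for the manager's bookkeeping. */
static size_t block_size[MAX_BLOCKS];
static int    block_used[MAX_BLOCKS];

/* Allocate a block; returns an opaque handle (>= 0) or -1 on failure.
 * Requested sizes are rounded up to the granule.  The caller learns
 * nothing about where the block lives -- that is the manager's business. */
static int block_alloc(size_t size)
{
    size = (size + BLOCK_GRANULE - 1) / BLOCK_GRANULE * BLOCK_GRANULE;
    for (int h = 0; h < MAX_BLOCKS; h++) {
        if (!block_used[h]) {
            block_used[h] = 1;
            block_size[h] = size;
            return h;
        }
    }
    return -1;
}

/* Release a handle.  In the real interface the kernel would also do this
 * automatically when the owning process dies, as Timothy suggests. */
static void block_free(int handle)
{
    block_used[handle] = 0;
    block_size[handle] = 0;
}
```

The pin-in-video-memory and mmap primitives would be further calls taking the
same handle; the point of the sketch is only that the handle, not a pointer,
is the unit of currency.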

> While OpenGL textures all have to be in graphics memory, X11 pixmaps 
> don't (you can punt to cfb), so they can be kicked out to make room for 
> textures with no problem.  However, there is some amount of complexity 
> involved in giving X access to the data when it's been swapped out into 
> the daemon's host address space.  In that case, the X server can be 
> instructed by the daemon (through a signal handler) to move the pixmap 
> data when a swap-out needs to happen.

I'm afraid I don't understand. How are pixmaps significantly different from 
textures? I also don't like the idea of creating new special cases for the 
X server - mostly because it removes us further from being able to run X 
without root privileges, but also because it just doesn't seem very 
reasonable: Why shouldn't we be able to accelerate 2D in the X clients in a 
DRI-like fashion?

> This then leads me into some things regarding security.  With texture 
> units being unable to access host memory directly, that plugs one 
> security hole.  In fact, as I see it, there's no reason to map any part 
> of the GPU register set into the user address space.  The OpenGL process 
> can access the graphics memory all it wants without being able to muck 
> with the kernel or other user processes, and it can share memory pages 
> with the memory manager daemon also without causing any trouble. 
> Furthermore, we only ever want the user process to instruct the GPU via 
> DMA, and we can limit the DMA command set so that the worst it can do is 
> corrupt graphics memory data.  While we'll give X11 direct control over 
> DMA, user-space OpenGL will generate command packets and place them 
> appropriately into a DMA buffer, but it will have to SUBMIT those 
> commands via an ioctl.  Then the only thing left is being able to lock 
> up the GPU.  However, as it stands, I believe that the worst that can be 
> done is to make the rasterizer loop for a long time.

Again, I don't like the idea of special-cases for the X server. Apart from 
that, however, this sounds right.
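
For the record, "limit the DMA command set" is the crux of the security
argument: at submit time the kernel only has to check that every packet in the
buffer is an allowed drawing command and that every address it references lies
inside graphics memory. A toy validator, with an invented packet format (one
opcode word followed by one address word):

```c
#include <stdint.h>
#include <stddef.h>

/* Invented packet format, purely for illustration: each packet is two
 * 32-bit words, an opcode and a graphics-memory address. */
#define OP_DRAW  0x1u
#define OP_BLIT  0x2u
#define VRAM_SIZE (32u * 1024 * 1024)   /* assumed 32MB of graphics memory */

/* Returns 1 if the buffer is safe to hand to the DMA engine, 0 otherwise.
 * If this check passes, the worst a malicious client can do is corrupt
 * data in graphics memory (or keep the rasterizer busy for a while). */
static int validate_cmdbuf(const uint32_t *buf, size_t nwords)
{
    if (nwords % 2 != 0)
        return 0;                       /* truncated packet */
    for (size_t i = 0; i < nwords; i += 2) {
        uint32_t op = buf[i], addr = buf[i + 1];
        if (op != OP_DRAW && op != OP_BLIT)
            return 0;                   /* not in the allowed command set */
        if (addr >= VRAM_SIZE)
            return 0;                   /* points outside graphics memory */
    }
    return 1;
}
```

This is also why I would prefer X to go through the same submit path as
everyone else rather than getting direct control over DMA.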

cu,
Nicolai


_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)