On 8/22/06, Lourens Veen <[EMAIL PROTECTED]> wrote:

> > What abstraction should the kernel offer to userspace?
>
> That's a hard question.  Here are some groups of facilities I can
> think of right off:
>
> - Setting resolution/depth
> - Allocating virtual consoles
> - Low-level rendering (2D, 3D)
> - Graphics memory management
> - Timing facilities (wake on vertical blank interrupt, etc.)
> - Zero-copy mechanisms
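
The facilities above could be sketched as an ioctl-style command set. Everything here is hypothetical and illustrative, not an existing driver interface:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical command numbers, one per facility group above.
 * Illustrative only, not a real kernel ABI. */
enum gfx_cmd {
    GFX_SET_MODE,        /* set resolution/depth */
    GFX_ALLOC_CONSOLE,   /* allocate a virtual console */
    GFX_SUBMIT_2D,       /* low-level 2D rendering */
    GFX_SUBMIT_3D,       /* low-level 3D rendering */
    GFX_ALLOC_VRAM,      /* graphics memory management */
    GFX_WAIT_VBLANK,     /* timing: wake on vertical blank */
    GFX_MAP_BUFFER,      /* zero-copy: map a buffer into userspace */
    GFX_CMD_COUNT
};

/* A single tagged request struct stands in for the per-command
 * argument structures a real interface would define. */
struct gfx_request {
    enum gfx_cmd cmd;
    uint32_t arg0, arg1;
};

static int gfx_request_valid(const struct gfx_request *req)
{
    return req->cmd >= 0 && req->cmd < GFX_CMD_COUNT;
}
```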

So, a graphics device is a rectangular grid of a certain size. Each cell
on that grid has an associated colour which is encoded in some way.
There are various operations that change these colours. Sometimes an
event occurs which the graphics device can tell us about.

But then what about off-screen buffers? And do we need that interrupt
event? Let's try that again.

A graphics device is a collection of rectangular grids of a certain
size. Each cell on a grid has an associated colour which is encoded in
some way. There are various operations that change these colours. At
any point in time, one of the grids is designated as the primary grid.
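
As a rough data-structure sketch of that abstraction (all names made up for illustration): a device owns a set of grids, each cell holds an encoded colour, and exactly one grid is primary at a time.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A rectangular grid; each cell stores a colour encoded somehow
 * (here, naively, as 32 bits). */
struct grid {
    uint32_t width, height;
    uint32_t *cells;          /* one encoded colour per cell */
};

#define MAX_GRIDS 8

/* A graphics device: a collection of grids, one of which is
 * designated as primary at any point in time. */
struct gfx_device {
    struct grid *grids[MAX_GRIDS];
    size_t num_grids;
    size_t primary;           /* index of the primary grid */
};

/* Designate a different grid as primary, e.g. for page flipping. */
static int gfx_set_primary(struct gfx_device *dev, size_t idx)
{
    if (idx >= dev->num_grids)
        return -1;
    dev->primary = idx;
    return 0;
}
```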

Very nice.  From the API, you should be able to allocate a virtual
console, which is a viewable surface.  Additionally, you can allocate
off-screen surfaces.  Everything would be referenced via an opaque
resource handle.
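
A minimal sketch of that handle scheme, assuming a simple table in the kernel mapping opaque handles to surfaces (names and limits are invented):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace only ever sees an opaque handle; the kernel keeps the
 * handle-to-surface mapping. Hypothetical sketch. */
typedef uint32_t surface_handle;

enum surface_kind {
    SURFACE_CONSOLE,    /* a viewable virtual console */
    SURFACE_OFFSCREEN   /* an off-screen surface */
};

struct surface {
    enum surface_kind kind;
    uint32_t width, height;
    int in_use;
};

#define MAX_SURFACES 16
static struct surface surface_table[MAX_SURFACES];

/* Allocate a surface and hand back a non-zero opaque handle.
 * Returns 0 on failure (0 is never a valid handle). */
static surface_handle surface_alloc(enum surface_kind kind,
                                    uint32_t w, uint32_t h)
{
    for (uint32_t i = 0; i < MAX_SURFACES; i++) {
        if (!surface_table[i].in_use) {
            surface_table[i] = (struct surface){ kind, w, h, 1 };
            return i + 1;   /* handle = index + 1 */
        }
    }
    return 0;
}
```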

Notes: the primary grid is the one that is visible on-screen. Could we
have the abstraction queue drawing operations until it's time to
perform them (during vertical blank for example), and do away with the
interrupt?

They need to be queued for other reasons too.  For instance, with DMA,
we want to queue them up and then tell the GPU to DMA them all at
once, so we make better use of the bus bandwidth.
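
The deferred-submission idea might look like this: drawing operations accumulate in a buffer and go to the GPU in one batch (at vblank, or when the buffer fills), so the bus sees one large DMA transfer instead of many small ones. This is an illustrative sketch, not a real driver:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A queued drawing operation (opcode and arguments are placeholders). */
struct draw_op {
    uint32_t opcode;
    uint32_t args[4];
};

#define QUEUE_CAP 64

struct cmd_queue {
    struct draw_op ops[QUEUE_CAP];
    size_t count;
    size_t flushes;     /* how many batches have been submitted */
};

static int queue_push(struct cmd_queue *q, struct draw_op op)
{
    if (q->count == QUEUE_CAP)
        return -1;      /* caller should flush first */
    q->ops[q->count++] = op;
    return 0;
}

/* Stand-in for "tell the GPU to DMA them all at once":
 * returns how many ops went out in this batch. */
static size_t queue_flush(struct cmd_queue *q)
{
    size_t submitted = q->count;
    q->count = 0;
    q->flushes++;
    return submitted;
}
```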

Perhaps that's not general enough. For example, with DOT3 bumpmapping, a
grid cell doesn't store a colour, it stores a normal vector. Now we're
suddenly approaching the realm of GPGPU. How far can we take this? How
far do we want to take it?
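
One way to accommodate that: give each grid a format tag, so the meaning of a cell's bits follows from the format rather than always being a colour. For DOT3 bump mapping the usual trick is to squeeze a unit normal's [-1, 1] components into [0, 255] bytes; a hypothetical sketch:

```c
#include <assert.h>
#include <stdint.h>

/* The grid's format determines what a cell's bits mean. */
enum cell_format {
    FMT_RGBA8888,       /* four 8-bit colour channels */
    FMT_NORMAL_XYZ8     /* three 8-bit normal components, DOT3 style */
};

/* Map a normal component in [-1, 1] to a byte:
 * b = round((n + 1) / 2 * 255). */
static uint8_t normal_to_byte(double n)
{
    return (uint8_t)((n + 1.0) * 0.5 * 255.0 + 0.5);
}

/* Inverse mapping, back into [-1, 1]. */
static double byte_to_normal(uint8_t b)
{
    return b / 255.0 * 2.0 - 1.0;
}
```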

That's a good question.  We're going to have to build this API in
layers.  First, a way to set modes and deal with interrupts.  Then a
2D API suitable for X11 (with compositing).  Then a low-level 3D API.
Then high-level 3D.  Then a programmable shader language.  It'll be as
sophisticated as DX10 before we're done.
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)