On Saturday 12 March 2005 21:00, Daniel Phillips wrote:
> On Saturday 12 March 2005 02:33, Lourens Veen wrote:
> > On Saturday 12 March 2005 03:50, Timothy Miller wrote:
> > > On Wed, 9 Mar 2005 09:47:42 +0100, Martijn Sipkema wrote:
> > > > From: "Timothy Miller" <[EMAIL PROTECTED]>
> > > >
> > > > > For implementing FSAA, I can't remember whether it's called
> > > > > multisampling or supersampling, but the easy technique is to
> > > > > have a stage in the pipeline that divides the geometry down and
> > > > > manipulates the alpha channel so as to convert 2x2 pixels into
> > > > > one pixel in the framebuffer.  This is trivial to implement,
> > > > > and I can add it in.
> > > >
> > > > That won't work I think as information on subpixels is lost on
> > > > converting to a single fragment with alpha---that conversion is
> > > > to be done after all drawing is completed, I think...
> > >
> > > All 3D drawing, yes.  I was going to insert it just before the
> > > extra stuff I inserted to do 2D stuff that OpenGL doesn't account
> > > for.
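(For reference, the downsampling half of 2x2 supersampling is just a box
filter over each 2x2 block of the oversized render target. A minimal sketch
in C; the RGBA8888 layout, strides and function name here are illustrative,
not anything from the actual design:)

```c
#include <stdint.h>

/* Box-filter downsample of a 2x2-supersampled RGBA8888 buffer.
 * src is (2*w) x (2*h) pixels, dst is w x h, both row-major.
 * Each output channel is the rounded average of the four samples. */
static void downsample_2x2(const uint8_t *src, uint8_t *dst,
                           unsigned w, unsigned h)
{
    unsigned src_stride = 2 * w * 4;              /* bytes per source row */
    for (unsigned y = 0; y < h; y++) {
        for (unsigned x = 0; x < w; x++) {
            const uint8_t *p0 = src + (2*y) * src_stride + (2*x) * 4;
            const uint8_t *p1 = p0 + 4;           /* right neighbour  */
            const uint8_t *p2 = p0 + src_stride;  /* below            */
            const uint8_t *p3 = p2 + 4;           /* diagonal         */
            for (int c = 0; c < 4; c++)           /* average channels */
                dst[(y*w + x)*4 + c] =
                    (uint8_t)((p0[c] + p1[c] + p2[c] + p3[c] + 2) / 4);
        }
    }
}
```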
> >
> > I've been thinking, would it be acceptable to lose the hardware
> > overlay scaler when FSAA is turned on, and/or to only have FSAA in
> > full-screen OpenGL mode?
>
> Why do you think the two are incompatible?

Well, if you use the hardware overlay scaler for FSAA you can't also use it 
for playing video. That is, my idea below makes the two incompatible.

> > The way the hardware overlay scaler works is that it takes the
> > framebuffer, and a second buffer with video data. It copies the
> > framebuffer to output, except for pixels with a certain colour key
> > where it switches to a colour-converted (YUV->RGB), scaled (with
> > linear interpolation?) version of the second buffer. So, that has to
> > happen right before the DAC. Is that all correct?
> >
> > In that case, you could add a small amount of extra logic to the
> > overlay scaler (basically, you have to be able to tell it to ignore
> > the framebuffer input altogether and just give a scaled version of
> > the second input, and you need to be able to turn off the YUV->RGB
> > conversion, which you need to do anyway) and you could trade off
> > quality against speed arbitrarily. For example, you could render the
> > scene at width*sqrt(2), height*sqrt(2) to get one half the fillrate
> > but still some AA.
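(To make the colour-key behaviour concrete, here is a per-pixel sketch in C
of what the overlay stage described above does: where the framebuffer pixel
matches the key, emit the already-scaled video pixel converted from YCbCr to
RGB; elsewhere, pass the framebuffer through. The names, the packed formats,
and the BT.601 full-range fixed-point factors are my assumptions, not the
actual hardware datapath:)

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Colour-key overlay composite. fb and out are packed 0x00RRGGBB;
 * video_ycbcr is 3 bytes per pixel (Y, Cb, Cr), assumed already
 * scaled to the output resolution. BT.601 full-range conversion,
 * done in 16.16 fixed point. */
static void overlay_composite(const uint32_t *fb, const uint8_t *video_ycbcr,
                              uint32_t *out, unsigned npixels, uint32_t key)
{
    for (unsigned i = 0; i < npixels; i++) {
        if (fb[i] != key) { out[i] = fb[i]; continue; }
        int y  = video_ycbcr[i*3 + 0];
        int cb = video_ycbcr[i*3 + 1] - 128;
        int cr = video_ycbcr[i*3 + 2] - 128;
        uint8_t r = clamp8(y + (91881  * cr >> 16));   /* + 1.402    Cr */
        uint8_t g = clamp8(y - (22554  * cb >> 16)
                             - (46802  * cr >> 16));   /* - .344 Cb - .714 Cr */
        uint8_t b = clamp8(y + (116130 * cb >> 16));   /* + 1.772    Cb */
        out[i] = (uint32_t)r << 16 | (uint32_t)g << 8 | b;
    }
}
```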
>
> Why does video need a dedicated scaler instead of being handled as a
> texture?  This just leaves the YUV->RGB conversion, which can be a
> property of the window, picked up by the window ownership test.
>
> But do we really need YUV->RGB on the card?  Why not do this on the
> host?

That's been discussed before; it seems a decision was never really taken. 
http://lists.duskglow.com/open-graphics/2004-November/000032.html

Interestingly, Vladimir Dergachev says that many programs on Windows (e.g. 
games) expect both the overlay scaler and 3D to work simultaneously. I 
suppose you could use it to project a fixed HUD onto the 3D scene, or 
something like that.

Lourens

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
