On 8/8/06, Lourens Veen <[EMAIL PROTECTED]> wrote:
On Tuesday 08 August 2006 17:26, Timothy Miller wrote:
>
> But if you want scaling, you'll have to use the drawing engine.

Okay, well I suppose if the drawing engine accepts YUV textures in a few
different formats then we can do Xv that way.

Sort of.  I'm planning to put the conversion into the host interface.
This is to limit the conversion to one place.  So you think you're
writing YUV, but what's really happening is that it's converted to RGB
when written and converted back when read.  This may introduce some
roundoff error, but it eliminates the need to convert in multiple
places in the drawing engine and video controller.
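For reference, a minimal sketch of what that write-path conversion could look like, assuming BT.601 full-range coefficients in 8.8 fixed point (the function names and coefficient choice are just for illustration, not what the hardware will necessarily do):

```c
#include <stdint.h>

/* Clamp an intermediate result to the 0..255 pixel range. */
static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* Convert one YCbCr pixel (BT.601, full range) to 8-bit RGB,
 * as a host-interface write path might.  Integer arithmetic with
 * 8 fractional bits keeps it hardware-friendly. */
void yuv_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                uint8_t *r, uint8_t *g, uint8_t *b)
{
    int d = cb - 128;
    int e = cr - 128;
    /* R = Y + 1.402*Cr,  G = Y - 0.344*Cb - 0.714*Cr,  B = Y + 1.772*Cb */
    *r = clamp8(y + ((359 * e) >> 8));
    *g = clamp8(y - ((88 * d + 183 * e) >> 8));
    *b = clamp8(y + ((454 * d) >> 8));
}
```

The read path would apply the inverse matrix; since both directions round to 8 bits, a round trip is where the small error mentioned above creeps in.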


> > I hadn't thought of using the palette, because the conversion is so
> > trivial (I still managed to mess it up though, the one shown below
> > is the right one). It's a good idea, because we have the palette
> > anyway. I'm not sure about using it for blinking though.
>
> Haven't you done palette animation before?  :)

Yes I have actually. I wrote a programme to demonstrate interference
between point sources once. Two sources on screen, circular waves
around them, with the phase for one source encoded in the low nibble
and the phase for the other in the high nibble. Of course, I needed a
few colours for an overlaid menu, so I actually used 15 levels for one
phase and 16 for the other, and compensated for it in the wave
equation. And it all worked: full-screen full-speed animation on a
386 :).

But that was almost ten years ago...

Well, how old is VGA?  Perhaps it's fitting!  :)


> > The problem is that we have 9 bits per pixel: 8 bits for the
> > attribute, and one for foreground/background select (from the
> > character glyph).
>
> I think you don't need the fg/bg select.  Just pick the right one
> when writing.

I'm talking about the input to the nanocontroller, that is, the input to
the whole process of getting text on screen.

I think what we'll do is let the host write directly to the graphics
memory.  The nanocontroller's job is to convert it after the fact.

Only if we decide to support graphics modes (640x480x16, etc.) will we
actually have to intercept read and write requests from the host.
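To make the "convert after the fact" idea concrete, here's a rough sketch of expanding one text cell (character byte plus attribute byte, as the host wrote them) into 8-bit framebuffer pixels.  The font table layout and framebuffer pitch are assumptions for illustration only:

```c
#include <stdint.h>

enum { CELL_W = 8, CELL_H = 16 };

/* Expand one text-mode cell into 8bpp pixels.  font[] holds one
 * bit per pixel per glyph row; fb points at the cell's top-left
 * pixel; fb_pitch is the framebuffer width in pixels. */
void render_cell(uint8_t ch, uint8_t attr,
                 uint8_t font[256][CELL_H],
                 uint8_t *fb, int fb_pitch)
{
    uint8_t fg = attr & 0x0F;         /* foreground colour index */
    uint8_t bg = (attr >> 4) & 0x07;  /* background; bit 7 is blink */
    for (int row = 0; row < CELL_H; row++) {
        uint8_t bits = font[ch][row];
        for (int col = 0; col < CELL_W; col++) {
            /* MSB of the glyph row is the leftmost pixel */
            fb[row * fb_pitch + col] = (bits & (0x80 >> col)) ? fg : bg;
        }
    }
}
```

Note the fg/bg select from the glyph is resolved right here, which is the point made above: it never needs to reach the framebuffer.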

> > With my scheme, the idea is to interpret the blink bit in the
> > attribute byte in the nanocontroller, and write the rest to the
> > frame buffer, which is 8 bits per pixel.
>
> > The nanocontroller takes care of blinking by, for half of its
> > refreshes, writing [black] to the frame buffer instead of the
> > normal data, for any pixels that have the blink bit set. The blink
> > bit itself does not end up in the frame buffer, and we don't
> > rewrite the screen more often than usual.
>
> IIRC, blinking "off" means making the whole block the background
> color, not black.

Ahhh. You may very well be right. In fact, now that you mention it, yes
I think that's how it works.

> Well, unfortunately, I don't think we have an overlay scaler like
> you're thinking of.  However, we may be able to get the drawing
> engine to do some sort of conversion that helps.

Well, we can use the palette to do the colourspace conversion, so then
we would only need it to scale the result to fullscreen for
fixed-frequency monitors. Dieter mentioned he'd like that. Of course,
if we have a drawing engine that supports paletted textures then we're
set as well.

I have been thinking of allowing paletted textures.  OpenGL doesn't
call for it, so it's not in the model, but there might be significant
value in it.  On the other hand, we shouldn't add it specifically for
VGA support.  It doesn't matter how complex or slow the emulator
program is.

> That's what I'd initially thought of, but that would make blinking
> characters blink to black, rather than the background.  Mind you, that
> is probably acceptable, since we're trying to do a half-assed
> implementation anyhow!  :)

Right. I'll see if I can adapt it to blink to the background colour.
That shouldn't be a problem. In fact it may well be easier.
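Something like the following, perhaps.  During the "off" half of the blink period, a blinking cell is drawn entirely in its background colour rather than black, which matches real VGA behaviour.  The names and the blink_phase flag are made up for illustration:

```c
#include <stdint.h>

/* Pick the colour index for one pixel of a text cell.
 * attr is the VGA attribute byte (bit 7 = blink),
 * pixel_is_fg comes from the glyph bit,
 * blink_phase is 1 during the "off" half of the blink period. */
uint8_t blink_colour(uint8_t attr, int pixel_is_fg, int blink_phase)
{
    uint8_t fg = attr & 0x0F;
    uint8_t bg = (attr >> 4) & 0x07;
    int blinking = (attr & 0x80) != 0;
    if (blinking && blink_phase)
        return bg;            /* whole cell shows its background colour */
    return pixel_is_fg ? fg : bg;
}
```

As suspected, it really is easier: blink-to-background is one early return, with no special black case at all.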

> I think the only advantage to using the palette is perhaps some code
> savings.  We'll rewrite the whole fb and palette every frame.  The
> difference is that, with the palette, there are fewer datapaths in
> the conversion code, making it smaller and more likely to fit into
> the program memory.  But it may be tiny anyhow, so maybe we don't
> care.

Well, either there is an extra datapath when writing the pixels, or
there is one when writing the palette. Code-size wise that shouldn't
matter, but I guess it depends on what the nanocontroller looks like as
well. It does strike me that if the palette only needs to be set once,
then that could be part of the mode setting code, which can be unloaded
once the mode is set.
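If the palette really is set just once at mode-set time, it amounts to generating the standard 16-colour IRGB table, something like this (a sketch; colour index bit order and the traditional brown special case are the only VGA-specific parts):

```c
#include <stdint.h>

/* Build the standard 16-colour text palette once at mode set.
 * Index bits: 3 = intensity, 2 = red, 1 = green, 0 = blue.
 * Entry 6 (brown) traditionally has its green level halved. */
void build_text_palette(uint8_t pal[16][3])  /* R, G, B per entry */
{
    for (int i = 0; i < 16; i++) {
        uint8_t lo = (i & 8) ? 0x55 : 0x00;  /* intensity adds 0x55 */
        pal[i][0] = ((i & 4) ? 0xAA : 0) + lo;
        pal[i][1] = ((i & 2) ? 0xAA : 0) + lo;
        pal[i][2] = ((i & 1) ? 0xAA : 0) + lo;
    }
    pal[6][1] = 0x55;  /* brown instead of dark yellow */
}
```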

Anything that's done once can be done by the BIOS.

Right now, I'm leaning towards skipping anything involving a palette.
The conversion program is likely to be very small in any case.  If we
need more than 512 program words, I'm inclined to just double it to
1024 rather than unnecessarily complicate something else.

There are going to be some unusual things that this GPU can't do, or
which have to be done in a less than ideal way.  That's something we
have to live with.  Too many small but neat features add up to too
much.  The main reason I'm keen on doing this CPU is that it will
serve at least two purposes:  VGA emulation and DMA control.  If it
were only one, I wouldn't be interested.
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)