On Monday 02 May 2005 17:29, Patrick McNamara wrote:
> --- Lourens Veen <[EMAIL PROTECTED]> wrote:
> > On Sunday 01 May 2005 19:06, Timothy Miller wrote:
> > > Interesting approach #1: Periodically, scan the VGA data and
> > > translate/scale it to another buffer that is read by the video
> > > controller.
> >
> > Can we use the 3D pipe for this? Viktor already suggested putting
> > commands in the DMA queue, so what if we have a small bit of logic
> > that continuously loops over the text buffer and converts each
> > two-byte character into a trapezoid drawing command?
>
> I think the nanocontroller should have the bandwidth to feed changes
> to the 3d pipeline. Consider that standard 80x25 text is 2000
> characters. That amounts to 2000 polys fed to the 3d pipe at, say,
> 60 Hz (definitely a worst case), and you get 120,000 polys per second
> through the pipe. If the nanocontroller runs at 200 MHz with one op
> per clock, that works out to over 1600 clocks per character for the
> nanocontroller. We should be able to meet that.
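For anyone who wants to play with the numbers, here is Patrick's budget as a quick sketch (the 200 MHz clock and one-op-per-clock figure are the assumptions from his mail, not measured hardware):

```python
# Back-of-the-envelope check of the nanocontroller clock budget quoted
# above. All figures come from the thread; "one op per clock" is an
# assumption, not a hardware spec.
cols, rows = 80, 25          # standard VGA text mode
refresh_hz = 60              # worst case: redraw every cell every frame
clock_hz = 200_000_000       # assumed nanocontroller clock

chars = cols * rows                      # 2000 character cells
polys_per_sec = chars * refresh_hz       # 120,000 polys/s worst case
clocks_per_char = clock_hz // polys_per_sec

print(chars, polys_per_sec, clocks_per_char)
```

200 MHz divided by 120,000 polys/s leaves about 1666 clocks per character, matching the "over 1600" figure above.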
Especially if we have special drawing commands for trapezoids that have horizontal and vertical sides, so that we avoid some of the setup overhead. And it's all on-chip, so we're not limited by the bus. The rasteriser should be able to do this easily, even at 100 MHz.

> Another thought. If we keep a copy of the VGA buffer, the
> nanocontroller can compare the VGA buffer against its copy to
> determine what has changed and only send the changes to the 3d pipe,
> in which case your required throughput drops way off (down to on the
> order of tens of polys/s for most text screens).

Hmm, good idea, if the brute force method doesn't work out. I don't think it will be needed, though.

> > > BTW, scaling is okay, but I consider it to be optional. Plenty of
> > > notebook computers center the text display on the screen, rather
> > > than scaling to the full resolution. As long as it's readable, why
> > > care? Even centering is optional.
> >
> > Agreed, although it might be fairly trivial if we use the 3D unit as
> > described above.
>
> Scaling should be automatic. We have a mapping of world coordinates
> to screen coordinates, as it is necessary to keep the same output at
> different resolutions. It is only a matter of defining the proper
> area of our world coordinates to draw our text polys to, and setting
> the screen viewport to that same area.

Erm, are you talking about the hardware scaler now, or about extra logic that calculates extra parameters for the above-mentioned rendering commands to make them scale the glyphs to a given resolution?

> This poses an interesting possibility, also referenced above about
> anti-aliasing. Just because the VGA mode is 80x25 text, or even
> 640x480 graphics, doesn't mean the 3d pipe and framebuffer have to
> run at that. We could effectively emulate 80x25 text on, say, a
> 1280x1024 resolution display. Would it be worth making this something
> like a card EEPROM configuration option?
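The shadow-copy comparison Patrick describes would look roughly like this (names and the (char, attr) cell representation are illustrative, not an actual hardware interface):

```python
# Sketch of dirty-cell detection: keep a shadow copy of the 80x25 text
# buffer (one (char, attr) pair per cell) and emit draw commands only
# for cells that differ from it, updating the shadow as we go.

def diff_text_buffer(vga, shadow):
    """Return (index, char, attr) for each changed cell; update shadow."""
    dirty = []
    for i, cell in enumerate(vga):
        if shadow[i] != cell:
            shadow[i] = cell
            dirty.append((i, cell[0], cell[1]))
    return dirty

# A single keystroke changes one cell, so only one quad needs to be
# resubmitted to the 3d pipe instead of all 2000.
shadow = [(' ', 0x07)] * 2000
vga = list(shadow)
vga[81] = ('A', 0x1F)
print(diff_text_buffer(vga, shadow))   # [(81, 'A', 31)]
```

A second pass over an unchanged buffer returns nothing, which is where the "tens of polys/s for most text screens" figure comes from.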
> The end user could choose the physical resolution to use when running
> in various VGA emulation modes, depending on the supported modes of
> their monitor.
>
> I very much like the idea of using textures for the text mode fonts.
> Since we are looking at having a capable nanoprocessor, it could be
> used to build the font textures from a normal VGA font map, to avoid
> having to create large ROM font maps and yet still have pretty text
> mode fonts.
>
> Using a texture + alpha map, I think it would go something like this:
>
> * Draw a rectangle with a solid color that is the text background
>   color.
> * Texture that rectangle with the font texture (also a solid color),
>   using an alpha map that describes the character outline.
>
> Blinking could be handled by the nanoprocessor by changing the
> rectangle color and/or not applying the font texture in the case the
> character is "off".
>
> The one piece I am not sure about, and perhaps someone can enlighten
> me, is the need to change the font texture color. We don't want to
> have to create a texture for each character in each color. Can that
> be done on the fly in the 3d pipe?

Looks like Mark Kilgard figured that out already:

http://www.opengl.org/resources/code/rendering/mjktips/TexFont/TexFont.html

He does draw text with transparent backgrounds, though, so we'll have to fill in the background in a separate pass, as you describe. Which doubles the number of drawing operations.

Or... if we have multitexturing in the model (if the hardware has that), then the first texture stage could give a solid colour quad (from a 16-pixel colour texture) and the second could render a solid colour on it, using the font texture with GL_MODULATE to make the right bits opaque. That would do it in a single pass, I think.

Lourens
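P.S. A toy per-pixel simulation of that single-pass idea, in case it helps: stage one supplies the background colour, stage two modulates a constant foreground colour by the font texture's alpha (GL_MODULATE multiplies texel by fragment colour), and the blend picks between them. No real GL state here, purely illustrative:

```python
# Toy simulation of the two texture stages for one pixel. font_alpha
# is 0.0 outside the glyph and 1.0 inside it; GL_MODULATE-style
# multiplication by the foreground colour decides coverage.

def shade_pixel(bg, fg, font_alpha):
    """Blend foreground over background by the font texture's alpha."""
    return tuple(f * font_alpha + b * (1.0 - font_alpha)
                 for f, b in zip(fg, bg))

blue_bg = (0.0, 0.0, 0.5)
white_fg = (1.0, 1.0, 1.0)
print(shade_pixel(blue_bg, white_fg, 1.0))  # inside glyph: foreground
print(shade_pixel(blue_bg, white_fg, 0.0))  # outside glyph: background
```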
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
