On 5/2/05, Patrick McNamara <[EMAIL PROTECTED]> wrote:
> > Can we use the 3D pipe for this? Viktor already suggested putting
> > commands in the DMA queue, so what if we have a small bit of logic that
> > continuously loops over the text buffer and converts each two-byte
> > character into a trapezoid drawing command?
>
> I think the nanocontroller should have the bandwidth to feed changes to
> the 3D pipeline. Standard 80x25 text is 2000 characters. That amounts to
> 2000 polys fed to the 3D pipe at, say, 60 Hz (definitely a worst case),
> which is 120,000 polys per second through the pipe. If the nanocontroller
> runs at 200 MHz with one op per clock, that works out to over 1600 clocks
> per character for the nanocontroller. We should be able to meet that.
>
> Another thought: if we keep a copy of the VGA buffer, the nanocontroller
> can compare the VGA buffer against its copy to determine what has changed
> and send only the changes to the 3D pipe, in which case the required
> throughput drops way off (down to on the order of tens of polys/s for
> most text screens).
Although we can give the nanocontroller access to the rendering engine, I think the code would be much simpler if we just have it do all the computation. Also, I think the comparisons are pointless: a waste of time, bandwidth, and code space. I say we just have it translate in a continuous loop, even if it only manages a 10 Hz framerate.

> > The VGA font could then simply be a texture (we could even have an
> > antialiased console :-)), with U and V being calculated from the
> > character number. If the characters are 8x16, then the texture would be
> > 128x256, and calculating U and V is just splitting the character number
> > and applying a fixed shift; that is, it doesn't cost any hardware at
> > all. Changing code pages or putting custom characters in the top 128
> > places (I think VGA allows you to do that) is just a matter of changing
> > the texture to another one already in memory, or loading a new texture.
> > Calculating 8-bit colour values from the VGA colours is the same. That
> > leaves blinking as the only complication, and it isn't that hard to
> > periodically overwrite the colour values in the drawing command with
> > black. Just a MUX, really.
> >
> > What I didn't check is whether we have the appropriate combination
> > operations to be able to put both the background and the foreground
> > colours in; if not, we'll have to draw them separately, or maybe we can
> > use the texture as alpha and use it to blend between two single-colour
> > textures that match the foreground and background colours needed. I'm
> > sure it can be solved.
> >
> > > BTW, scaling is okay, but I consider it to be optional. Plenty of
> > > notebook computers center the text display on the screen, rather than
> > > scaling to the full resolution. As long as it's readable, why care?
> > > Even centering is optional.
> >
> > Agreed, although it might be fairly trivial if we use the 3D unit as
> > described above.
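To make the U/V split and the colour "MUX" above concrete, here is a small C sketch. Illustration only — the 128x256 texture layout and the 1-bit alpha are the assumptions stated in the quote, not actual hardware:

```c
#include <stdint.h>

/* Top-left texel of a glyph in a 128x256 font texture holding 256
 * glyphs of 8x16 pixels, 16 glyphs per texture row.  It's just a
 * split of the character code plus fixed shifts -- no multipliers. */
static void glyph_uv(uint8_t ch, unsigned *u, unsigned *v)
{
    *u = (unsigned)(ch & 0x0F) << 3;  /* low nibble  -> column * 8  */
    *v = (unsigned)(ch >> 4)   << 4;  /* high nibble -> row    * 16 */
}

/* Per-texel colour select: treat the font texture as a 1-bit alpha
 * and MUX between the cell's foreground and background colours. */
static uint32_t text_shade(uint32_t fg, uint32_t bg, int alpha)
{
    return alpha ? fg : bg;
}
```

For example, character 0x41 ('A') lands at (u, v) = (8, 64).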
> Scaling should be automatic. We have a mapping of world coordinates to
> screen coordinates as it is, since that's necessary to keep the same
> output at different resolutions. It is only a matter of defining the
> proper area of our world coordinates to draw our text polys to, and
> setting the screen viewport to that same area.

You know what we could do: translating the screen is best suited to the controller itself, but a good use of the rendering core would be to scale the image. But I really don't care about scaling.

> This poses an interesting possibility, also referenced above about
> anti-aliasing. Just because the VGA mode is 80x25 text, or even 640x480
> graphics, doesn't mean the 3D pipe and framebuffer have to run at that.
> We could effectively emulate 80x25 text on, say, a 1280x1024 display.

Or 2560x2048. Just center the image on the display and scan out the framebuffer at the physical resolution of the monitor. Maybe there is some room for pixel-doubling algorithms (in the nanoprocessor) to scale up by a factor of two in X and Y.

Remember, we can reload translation code. You have 512 instructions, but you can have as many programs as you want. So when switching from 80x25 text to 640x480 graphics, the BIOS would load a new translation program (via PIO, because it's only 512 words).

> Would it be worth making this something like a card EEPROM configuration
> option? The end user could choose the physical resolution to use when
> running in the various VGA emulation modes, depending on the supported
> modes of their monitor.

Separate issue, but yes. The BIOS can read jumpers to decide which program to load, not to mention which physical resolution to use.

> I very much like the idea of using textures for the text mode fonts.
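Going back to the pixel-doubling point for a second: the inner loop is trivial. A host-side C sketch (the linear 8-bit framebuffer layout is my assumption, and a real nanoprocessor version would obviously look different):

```c
#include <stdint.h>

/* Double an 8-bit-per-pixel image by 2x in X and Y.
 * src holds w*h pixels; dst must hold (2*w)*(2*h) pixels. */
static void pixel_double(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint8_t p = src[y * w + x];
            int dy = 2 * y, dx = 2 * x, dw = 2 * w;
            dst[dy * dw + dx]           = p;  /* top-left     */
            dst[dy * dw + dx + 1]       = p;  /* top-right    */
            dst[(dy + 1) * dw + dx]     = p;  /* bottom-left  */
            dst[(dy + 1) * dw + dx + 1] = p;  /* bottom-right */
        }
    }
}
```

Whether that fits in the nanoprocessor's instruction budget alongside the text translation is a separate question.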
> Since we are looking at having a capable nanoprocessor, it could be used
> to build the font textures from a normal VGA font map, to avoid having to
> create large ROM font maps and yet still have pretty text mode fonts.

We still need a regular VGA font in VGA format. If we're going to use the texture engine, that's fine, but the first step for each frame would be to translate the font format into textures, then use the GPU to render them. That's doable. The steps are:

- Iterate over the font, translating it to textures
- Iterate over the characters:
  - Load color values into GPU registers
  - Render the texture
- Loop

> Using a texture + alpha map, I think it would go something like this:
>
> * Draw a rectangle with a solid color that is the text background color.
> * Texture that rectangle with the font texture (also a solid color) using
>   an alpha map that describes the character outline.

I think we can use the alpha map to select between color constants on the fly. No need to draw the background first; the background would be for alpha 0.0, and the foreground for alpha 1.0.

> Blinking could be handled by the nanoprocessor by changing the rectangle
> color and/or not applying the font texture when the character is "off".

The simplest way to do this is to make the foreground color the same as the background color during the "off" time. Drawing a solid rectangle instead would be faster, but it would add unnecessary complexity to the algorithm.

> The one piece I am not sure about, and perhaps someone can enlighten me,
> is the need to change the font texture color. We don't want to have to
> create a texture for each character in each color. Can that be done on
> the fly in the 3D pipe?

Yes.

_______________________________________________
Open-graphics mailing list
[EMAIL PROTECTED]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
