On 5/1/05, Patrick McNamara <[EMAIL PROTECTED]> wrote:
> Viktor Pracht wrote:
>
> > Hi all,
> >
> > I thought a bit about the VGA controller, and here are the results:
> >
> > The design goal is to implement a VGA controller with as little
> > hardware as possible. Since this won't be a standalone controller, we
> > should use the existing 3D pipeline, video controller and RAM. The
> > most obviously useful part is the video controller: if we somehow
> > manage to get the VGA image into a 32 bpp framebuffer, then we don't
> > have to worry about timings or any real-time aspects at all (we're
> > doing VGA for compatibility, not for speed). The second obvious part
> > is the RAM, which should be used as VGA RAM and as framebuffer for
> > the video controller. With 256 KB of VGA RAM + 800*600*4 bytes of
> > framebuffer, we're using less than 2% of the available 128 MB.
>
> <snip>
> I had been pondering this as well. I think it's a great idea, but I'm
> not sure how certain VGA functionality would be implemented. For
> example, blinking text in text mode. Every so many screen refreshes,
> the text would need to be turned on or off without any host CPU
> intervention. Another issue, though I think it could be worked around,
> is the effect of the reads and writes being converted to pipelined
> commands. Reads are especially troublesome, as you have to wait for
> the pipeline to empty before the contents of the read are valid. I
> think... I'm pondering over this as I write and keep changing my mind
> about things.
The idea I have in mind is to point the memory apertures at raw graphics
memory. The reads and writes are, therefore, always valid. Then the
emulator reads the VGA data and translates it to a viewable framebuffer
that the host doesn't know about.

> One thing I am pretty sure about, though. We have to care about
> timings and such. The VGA cards are designed to give the programmer
> direct access to all manner of the internal hardware and do whatever
> they want with it.

Can we fake it? Let's say they want access to a "current scanline"
number. What if we lie? Who cares?

> I do agree that we should re-use as much of the hardware as possible.
> We should certainly be able to use the video controller and all
> downstream hardware. And, I think with proper on-the-fly translation,
> perhaps using a microcontroller as described, we can use a 32 bpp
> framebuffer and provide the necessary emulated RAM buffers for the
> various VGA modes.

Here are some things we can use... Look at the model in the Pattern2D
section. There's a mechanism for loading a 32x32 stipple and then
applying it to the drawing. When translating a character to pixels, we
could read the character, then use that to decide where in the font to
read, and then push that data down into the Pattern2D unit. But,
depending on how capable this nanocontroller is, we might be better off
just giving it memory read and write instructions, with whatever other
logic is necessary to make the translation.
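To make the stipple idea concrete, here is a minimal Python sketch of what pushing one font row through a Pattern2D-style pattern would amount to: each set bit selects the foreground color, each clear bit the background. The function name and the MSB-first pixel ordering are assumptions for illustration, not the actual OGP interface.

```python
def expand_font_byte(font_byte, fg, bg):
    """Expand one 8-pixel font row into colors: set bits take the
    foreground color, clear bits the background (MSB = leftmost pixel)."""
    return [fg if (font_byte >> (7 - bit)) & 1 else bg
            for bit in range(8)]
```

For example, `expand_font_byte(0b10000001, 0xFFFFFF, 0x000000)` yields white pixels at both edges of the row and black in between.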
Here's how to translate 80x25:

char_loop:
- Compute character address
- Read character and its color info
- If it's blinking and it's the right part of the blink cycle, keep
  going; otherwise, do something else
- From the character number, compute an offset into the font table
- Add the font table offset
- Also compute framebuffer offset for image

font_loop:
- Read a byte from the font
- Iterate over the bits, selecting between foreground and background
  colors
- Write pixels to framebuffer
- continue with font_loop
- continue with char_loop

Dealing with VGA 640x480x16 would be something like this:

- Compute VGA offsets
- Read each of the four bitplanes for 32 pixels
- Assemble pixels into 4-bit words
- Look up colors in color table (could be in framebuffer!)
- Write pixels to framebuffer
- loop

_______________________________________________
Open-graphics mailing list
[EMAIL PROTECTED]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
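The 80x25 char_loop/font_loop recipe can be sketched in Python as below. The buffer layouts are assumptions, not the OGP memory map: `vga_text_ram` holds (character, attribute) byte pairs, `font_rom` holds 8x16 glyphs at 16 bytes per character, `framebuffer` is a flat list of 32 bpp pixels, and the palette is a made-up 16-entry table.

```python
COLS, ROWS = 80, 25
GLYPH_W, GLYPH_H = 8, 16
FB_PITCH = COLS * GLYPH_W               # pixels per framebuffer scanline

# Hypothetical 16-color palette: index -> 32 bpp value.
PALETTE = [i * 0x111111 for i in range(16)]

def render_text_mode(vga_text_ram, font_rom, framebuffer, blink_on=True):
    for row in range(ROWS):
        for col in range(COLS):                 # char_loop
            cell = (row * COLS + col) * 2       # compute character address
            ch = vga_text_ram[cell]             # read character ...
            attr = vga_text_ram[cell + 1]       # ... and its color info
            fg = PALETTE[attr & 0x0F]
            bg = PALETTE[(attr >> 4) & 0x07]
            if (attr & 0x80) and not blink_on:  # wrong part of the blink
                fg = bg                         # cycle: draw background only
            glyph = ch * GLYPH_H                # offset into the font table
            fb = row * GLYPH_H * FB_PITCH + col * GLYPH_W
            for line in range(GLYPH_H):         # font_loop
                bits = font_rom[glyph + line]   # read a byte from the font
                for bit in range(GLYPH_W):      # iterate over the bits
                    pix = fg if (bits >> (7 - bit)) & 1 else bg
                    framebuffer[fb + line * FB_PITCH + bit] = pix
```

The `blink_on` flag stands in for the "every so many refreshes" toggle Patrick mentions; in hardware the nanocontroller would flip it from a frame counter rather than take it as a parameter.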
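Similarly, the heart of the 640x480x16 recipe is assembling 4-bit pixel indices from the four bitplanes and looking them up in the color table. The sketch below does one byte of each plane (8 pixels) at a time; the function name, MSB-first ordering, and plane-to-bit assignment are assumptions chosen to match standard VGA planar layout.

```python
def unpack_planar_byte(p0, p1, p2, p3, color_table):
    """Assemble 8 pixels from one byte of each of the four VGA
    bitplanes: plane n contributes bit n of each 4-bit pixel index,
    which is then looked up in the 16-entry color table."""
    pixels = []
    for bit in range(8):                  # MSB is the leftmost pixel
        shift = 7 - bit
        index = (((p0 >> shift) & 1) |
                 ((p1 >> shift) & 1) << 1 |
                 ((p2 >> shift) & 1) << 2 |
                 ((p3 >> shift) & 1) << 3)
        pixels.append(color_table[index])
    return pixels
```

The "32 pixels" step in the recipe would just call this on four consecutive bytes of each plane before writing the results to the framebuffer.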
