Each instruction takes four pixel clocks, or put another way, we clock
out four pixels per instruction, correct?
Is there a valid video mode that would require a back porch less than
four pixel clocks wide? How about eight? Better yet, how about an odd
pixel width for the back porch, front porch, or either blanking interval?
Given that the maximum count is 2048, that sets the maximum usable
resolution at 2048x2048. Dual-link DVI maxes out at 2048x1536, which
fits within that. The maximum frequency is 330 MHz, which means we need
an instruction rate of 82.5 MIPS with zero wasted cycles.
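A quick sanity check of that arithmetic, assuming the four-pixels-per-instruction figure above:

```python
pixel_clock_hz = 330e6        # dual-link DVI maximum pixel clock
pixels_per_instruction = 4    # four pixels clocked out per instruction

# Instruction rate required with zero wasted cycles:
mips = pixel_clock_hz / pixels_per_instruction / 1e6
print(mips)  # 82.5

# A maximum count of 2048 bounds each axis, so 2048x1536 fits:
max_count = 2048
assert 2048 <= max_count and 1536 <= max_count
```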
For a given instruction what is the skew between instruction execution
and flag assertion? Will the pixel data have an equal skew? In other
words, will the vsync and hsync pulse edges align with the pixel pulse
edges? Does it even matter, as long as it is within the timing constraints?
I assume that we can be fetching a new scanline while we are outputting
the current one? In other words, the following code would not corrupt
the pixel data (I'm not worried about the syncs here):
FETCH 640  ;Fetch a scanline. The data is not completely valid for
           ;about 320 instruction cycles.
DELAY 640  ;Delay a full scanline just to make sure the data is ready.
FETCH 640  ;Now start the 2nd scanline fetch.
SEND  640  ;Start output of the first scanline.
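As a back-of-the-envelope check on why that ordering should be safe, assuming (as the comment above says) that a FETCH issues in negligible time and becomes fully valid about 320 cycles later, while DELAY n occupies n cycles:

```python
# Counting in instruction cycles from the first FETCH.
fetch_latency = 320           # assumed cycles until fetched data is valid

fetch_issue = 0               # FETCH 640 issues here
data_ready  = fetch_issue + fetch_latency   # data valid at cycle 320
send_start  = fetch_issue + 640             # SEND begins after DELAY 640

margin = send_start - data_ready
print(margin)  # 320 cycles of slack before SEND needs the data
```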
If we cannot do the above, then I don't see how you could ever output
progressive scan frames. The horizontal blank is well under half a
scanline in time. For 640x480x60 it is 6.6us, or about 26% of a
scanline. And we cannot do the following
FETCH 640
DELAY 640
SEND 320
FETCH 640
SEND 320
as that would glitch the pixel stream. Either that or I misunderstood
the purpose of the SEND operation and how pixel data is clocked out.
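For reference, plugging in the standard 640x480@60 timing (25.175 MHz dot clock, 800-pixel total line, 160 pixels of horizontal blanking) lands close to the 6.6us / 26% quoted above:

```python
pixel_clock = 25.175e6           # Hz, standard 640x480@60 dot clock
h_total, h_active = 800, 640     # total and visible pixels per line
h_blank = h_total - h_active     # 160 px: front porch + sync + back porch

blank_us = h_blank / pixel_clock * 1e6
print(round(blank_us, 2))                # 6.36 us of horizontal blanking
print(round(h_blank / h_total * 100))    # 20 (% of the total line)
print(round(h_blank / h_active * 100))   # 25 (% of the active portion)
```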
I'm also unclear on the purpose of the cursor control flags. I was
reading back through the earlier threads and still can't see where they
fit in. To know when to turn them on or off would require knowing
things about where the program was in the frame buffer vs where the
cursor was. That would require test-and-branch to be included in the
controller's instruction set.
In 32bpp mode, a 256-bit memory word represents 4 pixels, which is what
we clock out per instruction. In the worst case (1bpp), that same memory
word represents 256 pixels, which is what we would clock out per
instruction, unless you have a variable output rate so that we still
only output 4 pixels per instruction, meaning the actual data throughput
is a factor of 32 less. Also, how do we output a horizontal resolution
that is not divisible by four? The same problem occurs for any sync
timing that is not evenly divisible by four. The following modes come
to mind:
854x480 -- Wide screen 480p
1365/1366x768 -- Wide XGA, 768-line format
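A quick check of which of these widths land on a four-pixel boundary:

```python
# Horizontal widths from the modes above:
widths = {"854x480": 854, "1365x768": 1365, "1366x768": 1366}
for name, w in widths.items():
    print(name, w % 4)   # remainder after dividing into 4-pixel groups
# 854 % 4 == 2, 1365 % 4 == 1, 1366 % 4 == 2: none of them divide
# evenly into four-pixel instruction counts.
```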
I hope I'm not just being dense this evening. If I am, please apply the
bat of enlightenment. :) I sat down and started pondering a general
purpose "display" program. Within some timing limitations I have a
sneaking suspicion that it is possible to write a single program to
display progressive frames and a single program to display interlaced
frames. The modeline interpreter would simply need to tweak a set of
fixed locations to effectively "program" the timing parameters in and
reload the program. I think, think mind you, that it is possible to
write a unified video controller routine for both progressive and
interlaced frames using the same ideas.
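To make the "tweak a set of fixed locations" idea concrete, here is a minimal sketch. Everything in it is hypothetical (the program representation, the slot names, and the use of DELAY for the sync interval, since how the sync flags get asserted is a separate question); it only illustrates a modeline interpreter patching timing counts into a fixed template before reloading the program:

```python
# Hypothetical: one scanline of a display program as (opcode, slot)
# pairs, with named slots the modeline interpreter fills in.
TEMPLATE = [
    ("SEND",  "h_active"),  # visible pixels
    ("DELAY", "h_front"),   # front porch
    ("DELAY", "h_sync"),    # sync pulse width (hsync flag assumed
                            # asserted elsewhere during this interval)
    ("DELAY", "h_back"),    # back porch
]

def program_from_modeline(modeline):
    """Fill the fixed slots in TEMPLATE from a modeline dict."""
    return [(op, modeline[slot]) for op, slot in TEMPLATE]

# Standard 640x480@60 horizontal timing, in pixels:
vga = {"h_active": 640, "h_front": 16, "h_sync": 96, "h_back": 48}
print(program_from_modeline(vga))
# [('SEND', 640), ('DELAY', 16), ('DELAY', 96), ('DELAY', 48)]
```

The point is only that the program's shape never changes; switching modes (or between a progressive and an interlaced template) is just rewriting the counts and reloading.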
Patrick M
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)