I wonder if anyone has some ideas here, or can clear up some of my 
observations.

        The project here is a VGA display driven from an STM32F4, currently 
on the F429 Discovery board to make life a little easier. It's running 
great; the image is crisp and sharp, no blur or jiggle going on (thanks to 
using DMA and a timer to regulate the output timing).

        Essentially the code is all interrupt driven (rough sketch after 
the list):

        - timer (TIM2) to drive hsync/vsync/line
        - during a line, it may opt to fire up DMA to spit out pixel data
        - DMA has another timer (TIM8) to drive pixelclock
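
        To make that concrete, the line ISR is shaped roughly like the 
sketch below. This is from memory rather than my exact code (the timing 
constants are placeholders), but the timer/dma calls are the standard 
libopencm3 helpers:

   #include <stdint.h>
   #include <libopencm3/stm32/timer.h>
   #include <libopencm3/stm32/dma.h>

   #define VISIBLE_LINES 480        /* placeholder timing numbers */
   #define TOTAL_LINES   525
   #define LINE_BYTES    640

   extern uint8_t framebuffer[];
   volatile uint8_t vblank;
   static uint16_t line;

   /* fires once per scanline; hsync/vsync come straight off the timer's
      output-compare pins, so this only decides whether the line is
      visible and, if so, arms the pixel DMA that TIM8 clocks out */
   void tim2_isr ( void ) {
     timer_clear_flag ( TIM2, TIM_SR_UIF );

     line++;
     if ( line < VISIBLE_LINES ) {
       dma_set_memory_address ( DMA2, DMA_STREAM1,
                                (uint32_t) &framebuffer [ line * LINE_BYTES ] );
       dma_set_number_of_data ( DMA2, DMA_STREAM1, LINE_BYTES );
       dma_enable_stream ( DMA2, DMA_STREAM1 );
     } else if ( line == VISIBLE_LINES ) {
       vblank = 1;                  /* main loop may draw now */
     } else if ( line >= TOTAL_LINES ) {
       line = 0;
     }
   }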

        Running at 168 MHz (or 120 MHz, doesn't matter, it just changes 
resolution/refresh with the same timing setup, luckily!), using the 
libopencm3 rcc function.
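
        (For reference, the clock setup is just the stock libopencm3 call, 
roughly as below; the table/enum names have shifted between libopencm3 
versions, so treat the exact symbols as approximate:)

   #include <libopencm3/stm32/rcc.h>

   static void clock_setup ( void ) {
     /* the F429 disco has an 8MHz HSE crystal */
     rcc_clock_setup_pll ( &rcc_hse_8mhz_3v3 [ RCC_CLOCK_3V3_168MHZ ] );
   }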

        The main loop is basically:
        while(1){ nop }

        Everything is fine.

        The problem is.. changing the main loop to _do something_ can futz 
up the image on the display.

        - using memcpy to 'blit' a chunk of image from one buffer to 
another totally blows up the display; if I brute force a for-loop instead, 
it's okay. Using the Launchpad gcc toolchain, not sure if that's newlib or 
what, but whatever memcpy is doing is ugly :) (my guess is it does 
unrolled LDM/STM burst copies that hog the bus, or something..)
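
        (The for-loop replacement is essentially the word-at-a-time copy 
below; this is a sketch of the idea, my actual fb_clone bakes the buffer 
size in. Plain one-word LDR/STR copies seem to leave the pixel DMA enough 
bus slots:)

   #include <stdint.h>
   #include <stddef.h>

   /* naive word copy; assumes 4-byte aligned buffers and a length
      that's a multiple of 4 */
   static void copy_words ( const uint32_t *src, uint32_t *dst,
                            size_t len_bytes ) {
     for ( size_t i = 0; i < len_bytes / 4; i++ ) {
       dst [ i ] = src [ i ];      /* one LDR + one STR per word */
     }
   }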

        - if I just brute force stuff, such as a for-loop to copy a 
picture onto the display .. if I do 'too much', it'll blow up the image.

        I only do the update during the 'vblank' area anyway, but if I do 
'too much' and the work runs past the end of vblank, that's when the 
problem shows up (one possible workaround sketched below).
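
        One obvious workaround would be to budget the work: copy only so 
many lines per vblank and carry the rest over to the next frame. I haven't 
wired this in yet, but the idea is something like this (constant names are 
made up; VISIBLE_LINES as in the ISR sketch above):

   /* spread the blit across frames: copy at most a fixed number of
      scanlines per vblank so we never run into the visible area */
   #define MAX_LINES_PER_VBLANK 64   /* tune to fit your vblank budget */
   #define LINE_WORDS           160  /* words per scanline, placeholder */

   static unsigned next_line;

   static void fb_clone_budgeted ( const uint32_t *src, uint32_t *dst ) {
     unsigned stop = next_line + MAX_LINES_PER_VBLANK;
     if ( stop > VISIBLE_LINES ) stop = VISIBLE_LINES;

     for ( unsigned l = next_line; l < stop; l++ )
       for ( unsigned w = 0; w < LINE_WORDS; w++ )
         dst [ l * LINE_WORDS + w ] = src [ l * LINE_WORDS + w ];

     next_line = ( stop >= VISIBLE_LINES ) ? 0 : stop;
   }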

        The question is .. shouldn't you be able to do nearly anything you 
want 'brute force' in the main loop, without impacting the timers/ISRs?

        - there is one volatile, the vblank flag; the main loop can have:

        if ( vblank ) {
                vblank = 0;
                ...
        }

        So that shouldn't really limit anything.

        I would think having "nop" looping forever is nearly the same as a 
for-loop doing a brute force image copy in offscreen buffers. The timers 
driving the sync/line rendering should always take precedence (priority 
setup sketched below).
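
        Making sure of that precedence looks roughly like the snippet 
below. This is illustrative rather than my exact code, but the NVIC 
helpers are stock libopencm3:

   #include <libopencm3/cm3/nvic.h>

   static void vga_irq_setup ( void ) {
     /* sync timer at the highest priority so the line ISR always
        preempts whatever main() is doing */
     nvic_set_priority ( NVIC_TIM2_IRQ, 0x00 );
     nvic_enable_irq ( NVIC_TIM2_IRQ );
   }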

        Anyone have any ideas?

        FWIW, here is a picture of the display, pretty nice and sharp:
https://www.dropbox.com/s/y8b5htd0r3ed1kg/Photo%20Jan%2030%2C%2012%2006%2031%20AM.jpg

        The main loop code is this in its entirety:

   while ( 1 ) {

     if ( vblank ) { // volatile
       vblank = 0;

       fb_lame_demo_animate ( offscreen );    // move square
       fb_clone ( offscreen, framebuffer );   // blit offscreen to onscreen

     }

     __asm__("nop");

   } // while forever

        The entire rest of the application is in the VGA ISR.

        Is it just a fact of life with a single-core MCU that busy work 
can delay the context switch into the ISR, and that's what's killing me? 
(One way to measure that is sketched below.)
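
        I haven't actually measured the ISR entry jitter yet; something 
like the DWT cycle-counter probe below should show whether the ISR really 
is being held off when main() gets busy (assuming I have the libopencm3 
dwt helper names right):

   #include <stdint.h>
   #include <libopencm3/cm3/dwt.h>

   static uint32_t last_entry, worst_delta;

   /* call dwt_enable_cycle_counter() once at startup, then call this
      at the top of the line ISR: if the worst-case spacing between
      entries grows when main() is busy, the ISR is genuinely being
      delayed */
   static void isr_jitter_probe ( void ) {
     uint32_t now = dwt_read_cycle_counter ();
     uint32_t delta = now - last_entry;
     last_entry = now;
     if ( delta > worst_delta ) worst_delta = delta;
   }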

        It shouldn't take half the CPU's cycles to drive the display, so I 
would think I'd have some ability to do non-VGA stuff in there without 
blowing up the image :/

                jeff

--
If everyone would put barbecue sauce on their food, there would be no war.
