The current design, in which:

1. a frame is decoded by, say, NetStream and pushed to a RAM queue;
2. video_stream_instance pops the image from the queue and passes it along with the transform control information;
3. render_handler draws the pixels;

makes it difficult to accelerate in hardware (see the sketch below).
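To make that flow concrete, here is a minimal sketch of the queue-based pipeline as I understand it, written against C++11 for brevity. RawFrame and FrameQueue are names I made up for illustration, not the actual Gnash types:

// Illustrative only -- RawFrame/FrameQueue are not the actual Gnash types.
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

struct RawFrame {
    std::vector<uint8_t> pixels;  // decoded image data
    int width = 0;
    int height = 0;
};

class FrameQueue {
public:
    // Step 1: the decoder (e.g. NetStream) pushes a decoded frame.
    void push(RawFrame frame) {
        std::lock_guard<std::mutex> lock(_mutex);
        _frames.push(std::move(frame));
    }

    // Step 2: video_stream_instance pops the frame and later hands the
    // pixels plus transform info to render_handler (step 3).
    bool pop(RawFrame& out) {
        std::lock_guard<std::mutex> lock(_mutex);
        if (_frames.empty()) return false;
        out = std::move(_frames.front());
        _frames.pop();
        return true;
    }

private:
    std::queue<RawFrame> _frames;
    std::mutex _mutex;
};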
It would be much better to let the video decoder (which, by the way, really deserves its own object, one that could handle a stream from whichever source) perform the transformations and draw the pixels itself, through a callback defaulting to the current scheme. This approach would also make it easier for the decoder to control a/v sync. Is there any special reason behind the current design that would make my approach fail?
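To clarify what I have in mind, a rough sketch of the decoder-owned shape follows, reusing the RawFrame struct from the sketch above. VideoDecoder, Transform and defaultDraw are placeholders of mine, not proposed final names; defaultDraw would simply wrap whatever the current queue/render_handler path does:

// Illustrative only -- reuses RawFrame from the previous sketch.
#include <functional>
#include <utility>

struct Transform {
    float matrix[6];  // placeholder for the affine transform (plus cxform etc.)
};

class VideoDecoder {
public:
    // Called with transformed pixels that are ready to be blitted.
    using DrawCallback = std::function<void(const RawFrame&, const Transform&)>;

    // The callback defaults to the current scheme, but an accelerated
    // renderer could plug in its own.
    explicit VideoDecoder(DrawCallback draw = defaultDraw)
        : _draw(std::move(draw)) {}

    // Decode a frame from whichever source, apply the transform and draw.
    // Since the decoder drives this loop it can also pace it for a/v sync.
    void decodeAndDraw(const Transform& xform) {
        RawFrame frame = decodeNextFrame();
        _draw(frame, xform);
    }

private:
    RawFrame decodeNextFrame() {
        return RawFrame{};  // real code: pull and decode from the stream source
    }

    static void defaultDraw(const RawFrame& /*frame*/, const Transform& /*xform*/) {
        // Fallback: feed the existing queue -> video_stream_instance ->
        // render_handler path so nothing changes for current back-ends.
    }

    DrawCallback _draw;
};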