Hello Martin,

Tuesday, February 13, 2007, 9:21:52 PM, you wrote:

MG> Yes, that should work. The essential thing is to be able to predict an
MG> upper bound on the maximum number of elements so that it cannot
MG> degenerate, however cruel the movie. Once you know how your algorithm
MG> works you will be able to generate a pathological test case or two
MG> that tries to force your code into generating the worst-case scenario
MG> you can imagine.
I have now implemented the algorithm (not yet in CVS) and it works well. Currently it measures the horizontal and vertical distance between corners and uses the smaller of the two as the criterion. Ranges closer than 10% (configurable) of the scene size are merged. This factor seems to be a good choice for all kinds of movies.

A test movie that contains a large, scaled bitmap (= CPU heavy) with five moving sprites over it previously played at 4.5 frames per second. With multiple ranges it now reaches 50 frames per second. Another movie with 150 randomly placed sprites leads to just one range (sometimes 2 or 3) and of course gives no performance gain - but it's only about 0.01% to 0.1% slower due to the range calculation overhead. A real movie (an application) shows noticeably faster response to user input, which was the main reason for me to redesign this part of Gnash.

The algorithm can be modified or redesigned easily without touching the rest of Gnash. It also still allows classic single-range calculation for non-AGG renderers. I still have to clean up much of the code before committing to CVS, and of course add test cases... ;)

Udo

_______________________________________________
Gnash-dev mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/gnash-dev

