Hello Sandro,

Monday, November 5, 2007, 12:05:23 PM, you wrote:

SS> Topologies are made of nodes, edges and faces.
SS> Two intersecting edges would be invalid (missing node info).
SS> Normalizing would mean finding all intersections and defining all faces
SS> by the edges that bound them.
SS> I'm not sure we need such a representation.
Hmm, Bastiaan's description is a bit different. Anyway, a Flash shape should
never have any intersecting edges, i.e. Flash itself normalizes the shape.

SS> Scaling the query point would only mean reducing its precision to
SS> match the precision of the source coordinates (twips).
[...]
SS> Now the question would be: is it worth it to use higher precision ?
SS> At which cost ? I can't think of a real-world case in which this would
SS> be relevant.

Sure it is. I didn't look at the AGG (the core lib) implementation in
detail, but I'm pretty sure it works just like the rendering process itself.

Say you have a shape that's only 2 TWIPs wide. If you put the shape at 1:1
scale on the stage it will be barely visible, or not visible at all, since
it is only 0.1 pixels wide. A point test at that pixel might fail, since the
*renderer* might decide that 10% coverage is not enough. So far so good.

Now, if you scale the same shape by a factor of 100 it becomes
200 TWIPs = 10 pixels wide. You have no problem clicking inside this shape
with the mouse, but the renderer would still do the point test with the
normal-sized shape and thus won't give a positive result. It's theoretical
math versus practical implementation...

Solution: provide the transformation matrix and check the transformed shape.

Udo
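To make the point concrete, here is a minimal, hypothetical sketch. It is
not the actual Gnash/AGG code; Point, Matrix, point_in_polygon and hit_test
are placeholder names, and the shape is reduced to a plain polygon with
straight edges and an even-odd fill rule. The idea is just: transform the
shape by the same matrix the renderer uses, then run an ordinary
point-in-shape test against the mouse position.

// Hypothetical sketch, not the actual Gnash/AGG implementation.
// Coordinates are in twips (1 pixel = 20 twips).

#include <vector>
#include <cstdio>

struct Point { double x, y; };

struct Matrix {                      // 2x3 affine matrix, SWF-style
    double a, b, c, d, tx, ty;       // x' = a*x + c*y + tx ; y' = b*x + d*y + ty
    Point transform(const Point& p) const {
        return { a * p.x + c * p.y + tx, b * p.x + d * p.y + ty };
    }
};

// Even-odd ray-crossing test on a closed polygon (straight edges only,
// no curves, no fill styles).
bool point_in_polygon(const std::vector<Point>& poly, const Point& q)
{
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        const Point& pi = poly[i];
        const Point& pj = poly[j];
        if (((pi.y > q.y) != (pj.y > q.y)) &&
            (q.x < (pj.x - pi.x) * (q.y - pi.y) / (pj.y - pi.y) + pi.x))
            inside = !inside;
    }
    return inside;
}

// Hit test "as the user sees it": transform the shape into stage
// coordinates first, then test the query point against the result.
bool hit_test(const std::vector<Point>& shape, const Matrix& m, const Point& q)
{
    std::vector<Point> transformed;
    transformed.reserve(shape.size());
    for (const Point& p : shape)
        transformed.push_back(m.transform(p));
    return point_in_polygon(transformed, q);
}

int main()
{
    // A shape only 2 twips (0.1 px) wide and 200 twips tall.
    std::vector<Point> shape = { {0, 0}, {2, 0}, {2, 200}, {0, 200} };

    Matrix identity = { 1,   0, 0,   1, 0, 0 };   // 1:1 on the stage
    Matrix scaled   = { 100, 0, 0, 100, 0, 0 };   // 100x: 200 twips = 10 px wide

    Point click = { 100, 100 };   // mouse position in stage twips (pixel 5,5)

    std::printf("1:1  : %d\n", hit_test(shape, identity, click)); // 0: misses the thin shape
    std::printf("100x : %d\n", hit_test(shape, scaled,   click)); // 1: inside the scaled shape
    return 0;
}

With the identity matrix the click misses the 2-twip-wide shape; with the
100x matrix the very same click lands inside it, which is exactly the
discrepancy described above when the point test ignores the transform.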

