I'm using Core Image filters to apply real-time effects to vector objects.

I've run into the problem of determining just how much space I need to 
accommodate any given effect. Currently I pad the bounds I start with by a 
fixed percentage, but that's inadequate for many effects, which frequently 
run off the edges of the space allotted.
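
For concreteness, my current approach amounts to something like the sketch 
below (the helper name and the 25% figure are placeholders; the padding is 
a guess, not derived from the filter in any way):

import CoreGraphics

// Pad a bounds rect by a fixed fraction on every side. The fraction
// is an arbitrary guess and has nothing to do with the actual effect.
func paddedBounds(for bounds: CGRect, fraction: CGFloat = 0.25) -> CGRect {
    return bounds.insetBy(dx: -bounds.width * fraction,
                          dy: -bounds.height * fraction)
}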

The vector objects have a defined bounds that fully encloses all the drawing 
they do. When one of these objects is altered, that bounds is used to refresh 
just that part of the view as needed. When a CIFilter is applied, I take that 
bounds, multiply it by some scaling factor, and use the result to create an 
offscreen image into which the vector object plus its CI effect is rendered. 
The resulting image is then drawn in the view. The space needed for a given 
effect varies with the effect and its parameters, but I see no way to compute 
it reliably. If I make the bounds some enormous scale-up of the original to 
accommodate any potential effect, performance suffers dramatically because of 
all the wasted area of the view that has to be updated.
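
In code, the pipeline looks roughly like this (a sketch that assumes the 
vector object has already been rasterized into a CIImage; the function name 
and the scale parameter are stand-ins for my actual drawing code):

import CoreImage
import CoreGraphics

// Apply the effect and render into an offscreen rect that is just a
// guessed scale-up of the object's bounds.
func renderFiltered(object: CIImage,
                    objectBounds: CGRect,
                    filter: CIFilter,
                    scale: CGFloat,
                    context: CIContext) -> CGImage? {
    filter.setValue(object, forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }

    // Blow the bounds up by the guessed factor.
    let pad = (scale - 1) / 2
    let renderRect = objectBounds.insetBy(dx: -objectBounds.width * pad,
                                          dy: -objectBounds.height * pad)

    // Anything the effect draws outside renderRect is clipped and lost,
    // which is exactly the problem: I can't size renderRect correctly.
    return context.createCGImage(output, from: renderRect)
}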

Is there any way to preflight a Core Image filter effect so I know how much 
space I'll need to draw it?
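
What I have in mind is something like the sketch below. I gather that a 
filter's output CIImage reports an extent, but I don't know whether that is 
a reliable bound for every filter and parameter combination (and some 
filters apparently report an infinite extent):

import CoreImage

// Hoped-for preflight: ask the filter how far the effect will extend,
// before rendering anything.
func preflightExtent(of filter: CIFilter, input: CIImage) -> CGRect? {
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }
    // An infinite extent would still need some fallback cap.
    return output.extent.isInfinite ? nil : output.extent
}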


--Graham

