Hey coders,
 
I'm trying to implement a somewhat simplified version of Photoshop.com or 
Aviary Phoenix for a client.  We're providing options to the user to perform 
photo-wide changes like hue/brightness/contrast and scoped changes like fixing 
blemishes, red eye, etc.  Right now things are working okay: we apply 
photo-wide changes with a ShaderFilter on the main sprite, and scoped 
changes are child sprites whose bitmap fills come from ShaderJob results.  
To undo/redo we just remove/add the associated ShaderFilter for photo-wide 
changes or the associated sprites for scoped changes.
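For reference, here's roughly the shape of the current setup.  This is a 
minimal sketch; hueShader is a placeholder for our Pixel Bender hue 
kernel, and "angle" is a made-up parameter name:

    import flash.display.Sprite;
    import flash.filters.ShaderFilter;

    var activeFilters:Array = [];

    // Photo-wide change: wrap the hue kernel in a ShaderFilter.
    function applyHueFilter(angle:Number):void {
        hueShader.data.angle.value = [angle];
        activeFilters.push(new ShaderFilter(hueShader));
        // DisplayObject.filters returns a copy on read, so we keep
        // our own array and reassign it wholesale.
        photoSprite.filters = activeFilters;
    }

    // Undo: drop the last filter and reassign.
    function undoHueFilter():void {
        activeFilters.pop();
        photoSprite.filters = activeFilters;
    }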

This works nicely but we've run into the following problems:
 
(1) When the user zooms in on the image so that the image is really large, we 
get this warning: "Warning: Filter will not render.  The DisplayObject's 
filtered dimensions (4820, 3615) are too large to be drawn." and the filters 
disappear.  I understand filters stop rendering once the filtered bitmap 
exceeds 16,777,215 pixels (4,820 x 3,615 = 17,424,300, just over the cap), 
even though the original bitmap data is smaller than that.  This is too 
limiting for our needs; a rough guard we've considered is sketched after 
problem (2) below.
 
(2) If the user sets a hue and then makes scoped changes like fixing red eye, 
it takes a few seconds for the changes to occur.  Without the hue set 
beforehand, it's very fast.  The red-eye fix itself is always processed 
quickly (~2ms), so the delay appears to come from the hue filter 
re-executing.  At least, I'm assuming it re-executes; that's my 
understanding of how ShaderFilters work.
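On problem (1), here's the rough guard I mentioned.  It's a minimal 
sketch that assumes the cap applies to the on-stage filtered bounds and 
that the target is on the display list:

    import flash.display.DisplayObject;
    import flash.geom.Rectangle;

    const MAX_FILTER_PIXELS:uint = 16777215;

    // Returns false once the on-stage bounds would exceed the bitmap
    // cap, so we can warn the user (or bake the change into the pixels)
    // instead of letting the filter silently drop out.
    function filtersWillRender(target:DisplayObject):Boolean {
        var bounds:Rectangle = target.getBounds(target.stage);
        return bounds.width * bounds.height <= MAX_FILTER_PIXELS;
    }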
 
So, I went looking at Photoshop.com and Aviary and both seem to let you zoom 
into an image really far (seemingly larger than 16,777,215 pixels), set a hue, 
and see the results.  I would assume they're modifying the actual pixels 
instead of using a ShaderFilter?
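If so, I imagine it looks something like running the kernel destructively 
over the source pixels with a ShaderJob.  In the sketch below, "src" and 
"angle" stand in for whatever the kernel actually names its input and 
parameter:

    import flash.display.BitmapData;
    import flash.display.ShaderJob;
    import flash.geom.Point;

    // Bake the hue change into the pixels themselves, so nothing has
    // to re-render when the user zooms.
    function applyHuePermanently(source:BitmapData, angle:Number):void {
        hueShader.data.src.input = source;     // kernel's image input
        hueShader.data.angle.value = [angle];  // kernel's hue parameter
        var result:BitmapData =
            new BitmapData(source.width, source.height, true, 0);
        new ShaderJob(hueShader, result, source.width,
                      source.height).start(true); // synchronous for brevity
        source.copyPixels(result, result.rect, new Point(0, 0));
        result.dispose();
    }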

If this is the case, then how are they managing undo/redo?  Here are my 
thoughts, but I'd appreciate some confirmation or correction from someone 
who's more experienced than I am in this area.

Let's say the user (1) uses the red eye tool (scoped change), (2) changes the 
hue (photo-wide change), then (3) uses the blemish tool.  Then the user hits 
undo, undo, undo.  Here's how I was thinking about performing these 
actions (a rough code sketch follows the list):

(1) Store the bitmap data of the area that will be affected for undo.  Make the 
change using a ShaderJob.

(2) Modify the hue for all pixels using a ShaderJob.  No bitmap data is stored 
for undo, only the previous hue value.

(3) Store the bitmap data of the area that will be affected.  Make the change 
using a ShaderJob.

(1st undo) Replace the affected bitmap data with the stored bitmap data.

(2nd undo) Execute a ShaderJob with the previous hue value.

(3rd undo) Replace the affected bitmap data with the stored bitmap data.
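In code, I picture the undo history as a stack of small command objects, 
something like the sketch below.  RegionCommand and HueCommand are names 
I'm making up, and the hue undo assumes the kernel is a relative rotation, 
so rotating back by the delta restores the previous value:

    import flash.display.BitmapData;
    import flash.geom.Point;
    import flash.geom.Rectangle;

    interface Command {
        function undo(photo:BitmapData):void;
    }

    // Scoped change (red eye, blemish): snapshot just the affected
    // region before the ShaderJob modifies it.
    class RegionCommand implements Command {
        private var region:Rectangle;
        private var before:BitmapData;

        public function RegionCommand(photo:BitmapData, region:Rectangle) {
            this.region = region;
            before = new BitmapData(region.width, region.height, true, 0);
            before.copyPixels(photo, region, new Point(0, 0));
        }

        public function undo(photo:BitmapData):void {
            photo.copyPixels(before, before.rect, region.topLeft);
        }
    }

    // Photo-wide change: no pixels stored, only the hue delta.
    class HueCommand implements Command {
        private var delta:Number;

        public function HueCommand(previousAngle:Number, newAngle:Number) {
            delta = previousAngle - newAngle;
        }

        public function undo(photo:BitmapData):void {
            applyHuePermanently(photo, delta); // from the sketch above
        }
    }

Undo would then just pop commands off the stack and call undo() on each.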

What makes me queasy about this is that we could potentially be storing 
quite a bit of bitmap data for undo.  In some cases I think we could run 
the inverse of a ShaderJob for undo instead of storing the previous bitmap 
data, but I don't think that's possible in every case.
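Back-of-the-envelope: at 4 bytes per pixel, a 200 x 200 red-eye snapshot 
is 200 * 200 * 4 = 160,000 bytes (~156 KB), while a full-photo snapshot at 
4,820 x 3,615 would be 4,820 * 3,615 * 4 = 69,697,200 bytes (~66 MB) per 
undo step.  So scoping the snapshots, and keeping photo-wide changes 
parametric, seems essential.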

Am I way off here or am I on the right track?  Thanks.

Aaron
