Hi,

I am using SSIM to compare two video streams/sets of images, and I find it
almost too sensitive. I would like some fudge factor like other image
comparison tools have. I used to do this comparison in an automated test
suite, but because of the file sizes and the number of images I turned to
scikit.
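For context, here is a minimal sketch of what I am doing now, assuming
scikit-image's structural_similarity on grayscale frames (the file names
are just placeholders):

    import numpy as np
    from skimage import io, metrics

    ref = io.imread("frame_build_a.png", as_gray=True)
    test = io.imread("frame_build_b.png", as_gray=True)

    # full=True also returns the per-pixel SSIM map, which is where the
    # tiny handful-of-pixel differences show up.
    score, ssim_map = metrics.structural_similarity(
        ref, test, data_range=1.0, full=True
    )
    print(f"mean SSIM: {score:.6f}")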

I do quality assurance on a render engine, and we just want to make sure
the images are meaningfully identical from build to build. Currently SSIM
flags differences as small as 4 pixels across a 1920x1080 image. I would
like to ignore those 4 pixels but still catch meaningful differences: say,
if 8 pixels near each other were off, flag them, but if 8 pixels are
scattered randomly through the image, ignore them.

Does this sound logical, say a check based on pixel adjacency, with a color
tolerance and a pixel-count threshold as arguments? A rough sketch of what
I have in mind is below.
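This is only a sketch of the idea, assuming 8-bit RGB frames; the tolerance
and cluster-size numbers are guesses, and the file names are placeholders:

    import numpy as np
    from skimage import io, measure

    COLOR_TOL = 4           # per-channel difference to ignore entirely
    MIN_CLUSTER_PIXELS = 8  # flag only connected blobs at least this big

    ref = io.imread("frame_build_a.png").astype(np.int16)
    test = io.imread("frame_build_b.png").astype(np.int16)

    # Pixels whose color differs by more than the tolerance in any channel.
    diff_mask = (np.abs(ref - test) > COLOR_TOL).any(axis=-1)

    # Group differing pixels into connected blobs (8-connectivity) and keep
    # only blobs of MIN_CLUSTER_PIXELS or more, so scattered single pixels
    # drop out while clusters of nearby differences are reported.
    labels = measure.label(diff_mask, connectivity=2)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0  # ignore the background label
    big_blobs = np.flatnonzero(sizes >= MIN_CLUSTER_PIXELS)

    if big_blobs.size:
        print(f"{big_blobs.size} suspicious region(s) found")
    else:
        print("only isolated pixel noise; treating images as identical")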

See the attached image for an example of how little differs across the
entire image. It is a GIF zoomed in on the exact spot of the 3 differing
pixels, so hopefully it comes through.

Regards,
Mathew Saunders