I think you need to use Image I/O to save the failed image and examine it.
The test is capturing pixels at offsets of 20 into the window,
presumably to avoid
any "nice" blending effects the desktop may apply to the edges of windows.
If you save the image you can inspect it at leisure and better assess why
the test fails.
The StaticallyShaped test, for example, carves out rectangles; maybe on
macOS the blending is happening on those edges too?
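Saving the capture is straightforward with javax.imageio. A minimal sketch is
below; in the real test the BufferedImage would come from
robot.createScreenCapture(frameBounds), but here one is synthesized so the
sketch runs headlessly, and the file name "failed_capture.png" is just an
illustration:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class SaveCapture {
    public static void main(String[] args) throws Exception {
        // In the test this image would come from
        // robot.createScreenCapture(frameBounds); we fill one manually
        // so the example does not need a display.
        BufferedImage img =
            new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 100; y++) {
            for (int x = 0; x < 100; x++) {
                img.setRGB(x, y, Color.BLUE.getRGB());
            }
        }
        // PNG is lossless, so the captured pixel values survive exactly
        // and can be inspected pixel-by-pixel later.
        File out = new File("failed_capture.png");
        ImageIO.write(img, "png", out);
        System.out.println("exists=" + out.exists());
    }
}
```

Writing PNG (rather than JPEG) matters here: a lossy format would perturb
the very pixel values being debugged.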
-phil.
On 2/26/18, 3:40 PM, Sergey Bylokhov wrote:
Hi, Shashi.
Please let me know if there is any better logic to avoid tolerance-based
color comparison.
The difference between Color(0, 0, 255) used by the background frame
and the Color(0, 0, ~216) captured by the robot is quite big.
I guess the difference should be visible to the user, but unless I have
missed something the window really draws color_blue=255, so why does the
robot capture the wrong color?
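For reference, the tolerance-based comparison under discussion usually looks
something like the sketch below. The helper name and tolerance values are
illustrative, not the test's actual code; the point is that bridging a
blue-channel gap of 255 vs ~216 forces a very loose tolerance:

```java
import java.awt.Color;

public class ColorTolerance {
    // Hypothetical per-channel comparison with a tolerance, the pattern
    // tests fall back to when exact pixel matches are unreliable.
    static boolean similar(Color a, Color b, int tol) {
        return Math.abs(a.getRed() - b.getRed()) <= tol
            && Math.abs(a.getGreen() - b.getGreen()) <= tol
            && Math.abs(a.getBlue() - b.getBlue()) <= tol;
    }

    public static void main(String[] args) {
        Color expected = new Color(0, 0, 255); // what the window draws
        Color captured = new Color(0, 0, 216); // what the robot reported
        // The blue channels differ by 39, so a tolerance wide enough to
        // accept this capture could also mask genuine rendering bugs.
        System.out.println(similar(expected, captured, 16)); // false
        System.out.println(similar(expected, captured, 40)); // true
    }
}
```

This illustrates the objection: a tolerance of ~40 per channel accepts the
macOS capture, but it is large enough that a visibly wrong color could also
pass.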