On 12/05/2018 21:47, [email protected] wrote:
Hi Sergey, A small correction: if the HiDPI scale factor is 3, then, assuming uniform scaling on x and y, a 3 * 3 area of physical pixels represents a single coordinate point. getPixelColor() will always pick the first pixel's color, but createScreenCapture() would pick the closest one pixel among the 3 * 3.

But getPixelColor() will return the real pixel which was painted by the test on the screen. If the test draws a blue/red square, then getPixelColor() should return a blue/red pixel (even if it is the top-left pixel of the point).
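
For illustration, here is a minimal sketch of the two code paths being compared in this thread; the coordinates are hypothetical, and the HiDPI behavior is as described above:

    import java.awt.AWTException;
    import java.awt.Color;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.image.BufferedImage;

    public class PixelReadDemo {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            int x = 100, y = 100; // hypothetical user-space test point

            // Path 1: getPixelColor() -- per the discussion above, on a
            // HiDPI screen this reports the first (top-left) physical
            // pixel of the point.
            Color direct = robot.getPixelColor(x, y);

            // Path 2: a 1x1 createScreenCapture() -- the capture is
            // converted to user-space resolution, so the interpolation
            // decides which physical pixel's color ends up in the single
            // image pixel.
            BufferedImage capture =
                robot.createScreenCapture(new Rectangle(x, y, 1, 1));
            Color viaCapture = new Color(capture.getRGB(0, 0));

            System.out.println("getPixelColor():       " + direct);
            System.out.println("createScreenCapture(): " + viaCapture);
        }
    }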



Thanks and regards,

Shashi


On 12/05/18 9:45 PM, [email protected] wrote:
Hi Sergey, I read that slightly differently. getPixelColor() would not be accurate, while createScreenCapture() works differently, in a way that is advantageous to us. The reason is that on a screen with a HiDPI factor of, say, 3, three sub-pixels actually form each particular location. getPixelColor() would always choose the first (0th) pixel, whereas createScreenCapture(), since it converts to a low-res image using nearest-neighbor interpolation, would choose the color of the point nearest to that particular location.

We don't need the color information of all 3 pixels, but the limitation of getPixelColor() is that it can return only one color, and that color should actually be the one closest to that particular location. Instead, we always return the first pixel's color. On the other hand, createScreenCapture() would choose the closest pixel out of the 3, which is what is required in our case; the sketch below shows which sub-pixel a nearest-neighbor selection would pick.
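
A minimal sketch of that selection, assuming a uniform scale factor of 3 and the common pixel-center mapping srcIndex = floor((dst + 0.5) * scale); the exact mapping used by the JDK's scaling code may differ:

    public class NearestNeighborPick {
        // Map a destination (user-space) pixel index to the source
        // (physical) pixel index chosen by nearest-neighbor sampling.
        static int sourceIndexFor(int dst, double scale) {
            return (int) Math.floor((dst + 0.5) * scale);
        }

        public static void main(String[] args) {
            double scale = 3.0;
            // User-space pixel 0 covers physical pixels 0..2; the nearest
            // neighbor of its center is physical pixel 1, not the pixel 0
            // that getPixelColor() reports.
            System.out.println(sourceIndexFor(0, scale)); // prints 1
        }
    }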

This is my reading of the two approaches; please let me know what you think.

Thanks and regards,

Shashi


On 12/05/18 4:38 AM, Sergey Bylokhov wrote:
Hi, Shashi.
It means that the updated test will not trust getPixelColor(), which returns the exact color that is drawn on the screen, but will trust createScreenCapture(), which applies a transformation to the actual color. This looks odd.

On 11/05/2018 00:45, [email protected] wrote:
By using createScreenCapture():

1. In this case we will be using the low-resolution variant of the image that was captured.

2. The low-resolution variant was created from the high-resolution image that was actually captured from the high-res display. This is done using nearest-neighbor interpolation, which chooses the nearest point's value and ignores the other points' values completely (a rough sketch of such a downscale follows below).

This may be why we get a different color for the pixel in question.
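
A minimal sketch of that kind of downscale, assuming the hint-based Graphics2D path; the JDK's internal conversion of a multi-resolution capture may be implemented differently:

    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;

    public class NearestNeighborDownscale {
        // Reduce a high-resolution capture (scale times larger on each
        // axis) to user-space size with nearest-neighbor interpolation.
        static BufferedImage toLowRes(BufferedImage hiRes, int scale) {
            int w = hiRes.getWidth() / scale;
            int h = hiRes.getHeight() / scale;
            BufferedImage lowRes =
                new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = lowRes.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
            // Each destination pixel takes the value of the single
            // nearest source pixel; the other scale*scale - 1 source
            // pixels are ignored entirely.
            g.drawImage(hiRes, 0, 0, w, h, null);
            g.dispose();
            return lowRes;
        }
    }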


By using getPixelColor():

1. This calls getRGBPixel().

2. Here we return only the 0th pixel's color, ignoring the scaling effect on HiDPI displays, where there may be many sub-pixels for the given user-space coordinates.

This may be the reason for the failure here; the sketch below illustrates the behavior.
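
A hypothetical illustration of that 0th-pixel behavior (the names here are made up; the real code lives in the peer, e.g. CRobot.getRGBPixels(), and differs in detail):

    import java.awt.Color;

    public class ZerothPixelSketch {
        // On a display with scale factor 'scale', one user-space point
        // maps to scale * scale physical pixels, but only the first of
        // them is reported back to the caller.
        static Color pixelColorAt(int[] physicalPixels) {
            return new Color(physicalPixels[0]); // 0th pixel; scaling ignored
        }
    }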


Thanks and regards,

Shashi


On 11/05/18 2:39 AM, Sergey Bylokhov wrote:
Hi, Shashi.

On 10/05/2018 08:40, Shashidhara Veerabhadraiah wrote:
The test was failing because of color value mismatches. The fix is to capture an image of the part of the window whose color needs to be checked, and to fetch the color from the image's pixel.

Can you please clarify why r.getPixelColor(x, y) returns an incorrect color while r.createScreenCapture(bounds {x, y, 1, 1}) will work, since internally both of these methods use the same method, CRobot.getRGBPixels()?
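
For reference, the fix being discussed would look roughly like this sketch (the region and coordinates are hypothetical):

    import java.awt.AWTException;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.image.BufferedImage;

    public class CaptureBasedCheck {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            // Capture the part of the window under test once...
            Rectangle windowPart = new Rectangle(50, 50, 200, 200);
            BufferedImage img = robot.createScreenCapture(windowPart);
            // ...and read colors from the image instead of calling
            // getPixelColor() per point.
            int rgb = img.getRGB(10, 10); // image-relative coordinates
            System.out.printf("color at (10,10): #%06x%n", rgb & 0xFFFFFF);
        }
    }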

--
Best regards, Sergey.
