Hi Sergey, I found some information on why getPixelColor() behaves differently from the createScreenCapture() API.

By using createScreenCapture():

1. In this case we use the low-resolution variant of the captured image.

2. The low-resolution variant is created from the high-resolution image that is actually captured on a HiDPI display. This is done using nearest-neighbor interpolation, which takes the value of the single nearest source pixel and ignores the values of all the other pixels completely.

This may be why a different color is returned for the pixel in question.


By using getPixelColor():

1. This calls getRGBPixels().

2. Here we return only the 0th pixel's color, ignoring the scaling factor on HiDPI displays, where a single user-space coordinate may map to multiple device pixels.

This may be why the test fails here.
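Based on the explanation above, here is a minimal sketch of the workaround the fix describes: instead of Robot.getPixelColor(x, y), capture a 1x1 region with Robot.createScreenCapture(new Rectangle(x, y, 1, 1)) and read the color from the returned image. The helper name is illustrative, and a synthetic BufferedImage stands in for the captured region so the sketch runs without a real screen:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class PixelFromCapture {

    // Hypothetical helper mirroring the test fix: read the color of the
    // single pixel from a 1x1 capture, rather than relying on
    // Robot.getPixelColor(), which may ignore HiDPI scaling.
    static Color colorFromCapture(BufferedImage capture) {
        // getRGB(0, 0) returns the ARGB value of the only pixel in the
        // 1x1 captured image.
        return new Color(capture.getRGB(0, 0), true);
    }

    public static void main(String[] args) {
        // Stand-in for Robot.createScreenCapture(new Rectangle(x, y, 1, 1)).
        BufferedImage capture =
                new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
        capture.setRGB(0, 0, new Color(200, 100, 50).getRGB());

        Color c = colorFromCapture(capture);
        System.out.println(c.getRed() + " " + c.getGreen() + " " + c.getBlue());
    }
}
```

In the actual test the BufferedImage would come from Robot.createScreenCapture(), so the color is fetched from the image pixel rather than from the scaled lookup that getPixelColor() performs.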


Thanks and regards,

Shashi


On 11/05/18 2:39 AM, Sergey Bylokhov wrote:
Hi, Shashi.

On 10/05/2018 08:40, Shashidhara Veerabhadraiah wrote:
The test was failing because of color value mismatches; the fix is to capture an image of the part of the window in question and fetch the color from the image pixel.

Can you please clarify why r.getPixelColor(x, y) returns an incorrect color while r.createScreenCapture(bounds{x, y, 1, 1}) works, since internally both of these methods use the same method, CRobot.getRGBPixels()?

