> Printing a map between raw and transformed pixel values shows that for a
> given raw pixel value there are multiple transformed values, i.e. the
> transformation is not a function of the pixel's value alone. It seems to
> be some sort of filtering where the neighboring pixels also influence the
> resultant value.
Given that insight, rhkramer's assertion that there do exist "standard" image transformation algorithms of this kind (dependent on local context), and the fact that laziness (in other words, re-using code) is considered a virtue in a programmer, I would read up on a list of "typical" image transforms using that approach and try out a few with a standard library, say, OpenCV, PIL, ..., looking at what comes close. Again, ramping down the gamma value with ImageMagick already went in the right direction for me. Likely it is not just one algorithm but several, for different aspects (sharpness vs. brightness vs. grayscale distribution, etc.).

Maybe it helps to get someone with artistic or (or, best, and) graphics-processing experience to look at both images. A seasoned digital photographer is likely to have a good idea of which transforms to apply in their favourite photo-processing application to make one image look more like the other.

Finding out exactly what the vendor application does is probably very hard without reverse-engineering their code.

Karsten
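P.S. A minimal sketch of the distinction discussed above, using only NumPy (the same operations exist in OpenCV as cv2.LUT and its blur/filter functions). A per-pixel lookup table (like gamma correction) always maps one raw value to exactly one output value, so it cannot by itself explain the one-to-many mapping observed; any neighborhood filter can. The gamma formula is the standard one; the box blur is just the simplest possible neighborhood filter, chosen for illustration, not a claim about what the vendor software does.

```python
import numpy as np

def apply_gamma(img, gamma):
    # Per-pixel transform via a 256-entry lookup table:
    #   out = 255 * (in / 255) ** gamma
    # Each raw value maps to exactly ONE output value, so a
    # raw-vs-transformed scatter plot would be a single curve.
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

def box_blur(img, radius=1):
    # Simplest neighborhood filter: mean over a (2r+1) x (2r+1) window,
    # with edge replication at the borders. Because neighbors enter the
    # result, identical raw values can yield different outputs -- the
    # one-to-many behaviour seen in the vendor's output.
    padded = np.pad(img.astype(np.float32), radius, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

# Demo: two pixels with the same raw value (0) end up different
# after blurring, but identical after any pure per-pixel transform.
img = np.array([[0, 0, 0],
                [0, 0, 0],
                [0, 0, 255]], dtype=np.uint8)
print(apply_gamma(img, 0.8))   # one-to-one mapping
print(box_blur(img, 1))        # neighborhood-dependent mapping
```

So a quick diagnostic is: if the raw-to-transformed map is one-to-many, rule out plain LUT-style adjustments (gamma, contrast, levels) as the whole story and look at filters.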