Hi.

As far as I understand, in the L*a*b* colorspace, a* and b* are real values
with no strict bounds, although useful values fall roughly in [-128, 128].
This is fine when the input consists of floating-point values, but when
dealing with octets, some sort of scaling is required. As far as I
understand, lcms uses a hard-coded scaling to [-128, 128[.
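
If I read things correctly, that hard-coded 8-bit encoding amounts to
something like the sketch below (my own guess at what TYPE_Lab_8 means, not
taken from the lcms sources):

  /* Rough sketch of the 8-bit Lab encoding as I understand it:
   * L* in [0, 100] mapped to 0..255, a* and b* stored with an offset of 128. */
  static void decode_lab8(const unsigned char px[3],
                          double *L, double *a, double *b)
  {
      *L = px[0] * 100.0 / 255.0;   /* 0..255 -> 0..100    */
      *a = px[1] - 128.0;           /* 0..255 -> -128..127 */
      *b = px[2] - 128.0;
  }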

Now, if we have an image stream in a PDF file, with three 8-bit channels, we
have a stream of octets. The colorspace of that stream can be defined by
something like:

  /Lab << /WhitePoint [ 0.9642 1 0.82491 ] /Range [ -100 100 -100 100 ] >>

The /Range entry tells us that the second and third octets, which stand for
a* and b*, must be scaled so that 0 maps to -100 and 255 maps to +100; the
range for L* is always [0, 100]. Other PDFs can have other ranges;
[-128, 127] is quite common. With the [-128, 127] range, lcms can be used
directly on the buffer of the stream, with the TYPE_Lab_8 type. But with
other ranges, it seems necessary to do an intermediate conversion first,
which is quite a waste.
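
For reference, the intermediate conversion I have in mind would look roughly
like this (only a sketch; it assumes the transform was created with
TYPE_Lab_DBL as the input format, and the range bounds are passed in from
the dictionary above):

  #include <stdlib.h>
  #include <lcms2.h>

  /* Sketch: rescale /Range-encoded octets to floating-point Lab,
   * then let lcms handle them through a TYPE_Lab_DBL transform. */
  void convert_scanline(cmsHTRANSFORM xform, const unsigned char *in,
                        unsigned char *out, int npixels,
                        double amin, double amax, double bmin, double bmax)
  {
      cmsCIELab *lab = malloc(npixels * sizeof *lab);
      for (int i = 0; i < npixels; i++) {
          lab[i].L = in[3 * i + 0] * 100.0 / 255.0;
          lab[i].a = amin + in[3 * i + 1] * (amax - amin) / 255.0;
          lab[i].b = bmin + in[3 * i + 2] * (bmax - bmin) / 255.0;
      }
      cmsDoTransform(xform, lab, out, npixels);
      free(lab);
  }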

Is there some way to tell lcms the ranges for a* and b* so it can do the
conversion itself more efficiently?

Regards,

-- 
  Nicolas George
