Hi,
As far as I understand, with L*a*b* colorspaces, a* and b* are real values
with no simple bounding, but with useful values somewhere in [-128,128].
Right. Anything beyond these margins is a hypersaturated color that is very
hard to find in nature.
This is fine when the input is floating-point values, but when dealing with
octets, some sort of scaling is required. As far as I understand, lcms uses
a hard-coded scaling to [-128, 128[.
Well, this is actually the ICC encoding, as defined in the ICC spec. In order
to avoid abrupt changes across the zero axis, the 8-bit encoding is just a +128
offset, so the effective range is -128.0 to +127.0 in steps of 1. On 16 bits
things are more complex, and a sort of fixed-point 7.8 encoding is used. Again,
this particular encoding is not my choice; the ICC spec mandates it.
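To make the 8-bit case concrete, it amounts to something like this (just a
sketch of what the spec describes, not lcms source):

// ICC 8-bit a*/b* encoding: a plain +128 offset,
// so -128..+127 maps to octets 0..255 in steps of 1
BYTE   EncodeAB8(double ab) { return (BYTE) (ab + 128.0); }
double DecodeAB8(BYTE v)    { return (double) v - 128.0; }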
Now, if we have an image stream in a PDF file, with three 8-bit channels, we
have a stream of octets. The colorspace of that stream can be defined by
something like:
/Lab << /WhitePoint [ 0.9642 1 0.82491 ] /Range [ -100 100 -100 100 ] >>
The /Range argument tells us that the second and third octets, which stand
for a* and b*, must be scaled so that 0 is -100 and 255 is +100; the range
for L* is always [0, 100]. Other PDFs can use other ranges; [-128, 127] is
quite common. With the [-128, 127] range, lcms can be used directly on the
stream buffer, with a TYPE_Lab_8 format. But with other ranges, it seems
necessary to do an intermediate conversion, which is quite a waste.
Ok. Obviously if you have your Lab values normalized to any other range, you
need to scale them back. Since the ICC profile needs the -128..+127 range (it
interpolates using this range), your only option is to do the scaling somewhere
in the workflow. lcms does its best to avoid floating point, but some
combinations need floats anyway; -100..+100, for example.
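In the end the scaling is just a linear map from the octet onto the /Range
bounds; a small sketch (the helper name is mine, it is not an lcms call):

// Map an 8-bit sample onto the [lo, hi] interval given by /Range.
// E.g. with lo = -100, hi = +100: octet 0 -> -100.0, octet 255 -> +100.0
double DecodeRange8(BYTE octet, double lo, double hi)
{
    return lo + ((double) octet / 255.0) * (hi - lo);
}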
There is no way to tell lcms the range, but there is a way to define the
format using user-defined formatters. Let's take -100..+100 as an example.
You need to write a short function that converts between encodings. Your
function reads from the buffer and puts the values, correctly scaled, into
wIn[] in the 16-bit ICC encoding:
// ---------------------------
unsigned char* UnrollLab100(register void* nfo, register WORD wIn[],
                            register LPBYTE accum)
{
    cmsCIELab Lab;
    BYTE L100, a100, b100;

    // Read the current pixel
    L100 = *accum++;
    a100 = *accum++;
    b100 = *accum++;

    // Convert to "normal" floating point Lab,
    // range L* (0 -> 100), a*b* (-128.0 -> +127.996).
    // For the /Range [-100 100 -100 100] example, octet 0 maps to -100
    // and octet 255 maps to +100; L* is always encoded as 0..100:
    Lab.L = (double) L100 * 100.0 / 255.0;
    Lab.a = (double) a100 * 200.0 / 255.0 - 100.0;
    Lab.b = (double) b100 * 200.0 / 255.0 - 100.0;

    // From floating point to 16-bit ICC encoding
    cmsFloat2LabEncoded(wIn, &Lab);

    // Return pointer to next pixel
    return accum;
}
So far so good. Now you want the color transform to use the
UnrollLab100 decoder instead of "normal" Lab, so:
cmsSetUserFormatters(xform, TYPE_Lab_8, UnrollLab100, 0, NULL);
This works fine in 1.15; if you are using 1.14, you may need to
save the old formatter first:
cmsFORMATTER unroll, pack;
DWORD du, dp;
cmsGetUserFormatters(xform, &du, &unroll, &dp, &pack);
cmsSetUserFormatters(xform, TYPE_Lab_8, UnrollLab100, dp, pack);
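By the way, if the destination buffer were also Lab in a nonstandard range,
the same mechanism works on the output side: you would pass your own packer
instead of the saved one. A rough sketch for writing -100..+100, 8-bit samples
back (the name PackLab100 is mine, not part of lcms):

// ---------------------------
unsigned char* PackLab100(register void* nfo, register WORD wOut[],
                          register LPBYTE output)
{
    cmsCIELab Lab;
    double L, a, b;

    // From 16-bit ICC encoding back to floating point Lab
    cmsLabEncoded2Float(&Lab, wOut);

    // Clamp to the target range before scaling
    L = (Lab.L > 100.0) ? 100.0 : Lab.L;
    a = (Lab.a < -100.0) ? -100.0 : (Lab.a > 100.0) ? 100.0 : Lab.a;
    b = (Lab.b < -100.0) ? -100.0 : (Lab.b > 100.0) ? 100.0 : Lab.b;

    // L* 0..100 -> 0..255, a*/b* -100..+100 -> 0..255
    *output++ = (BYTE) (L * 255.0 / 100.0 + 0.5);
    *output++ = (BYTE) ((a + 100.0) * 255.0 / 200.0 + 0.5);
    *output++ = (BYTE) ((b + 100.0) * 255.0 / 200.0 + 0.5);

    // Return pointer to next pixel
    return output;
}

It would be registered as the output formatter in the same call, something like
cmsSetUserFormatters(xform, TYPE_Lab_8, UnrollLab100, TYPE_Lab_8, PackLab100);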
That's all. You can now use cmsDoTransform() directly. Of course this
will call your function for each color, so if you want performance you
may need to optimize the code as much as possible.
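For reference, a minimal end-to-end sketch might look like this (the profiles,
buffers and pixel count are placeholders of mine, assuming lcms 1.15):

cmsHPROFILE hLab  = cmsCreateLabProfile(NULL);      // built-in D50 Lab
cmsHPROFILE hsRGB = cmsCreate_sRGBProfile();

cmsHTRANSFORM xform = cmsCreateTransform(hLab, TYPE_Lab_8,
                                         hsRGB, TYPE_RGB_8,
                                         INTENT_PERCEPTUAL, 0);

// Replace the stock Lab_8 unroller with the -100..+100 decoder
cmsSetUserFormatters(xform, TYPE_Lab_8, UnrollLab100, 0, NULL);

// LabBuffer holds the raw octets from the PDF stream,
// RGBBuffer receives the output, nPixels is the pixel count
cmsDoTransform(xform, LabBuffer, RGBBuffer, nPixels);

cmsDeleteTransform(xform);
cmsCloseProfile(hLab);
cmsCloseProfile(hsRGB);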
Hope this helps
--
Marti Maria
The littlecms project.
www.littlecms.com