Hi Gerhard,

> But I could not find, where this variable is actually set - so it seems to be always 
> FALSE, 
> and thus TYPE_Lab_DBL is always converted to V2 encoding.

This is unused in 1.13. There is an undocumented function called cmsCreateLab4Profile
that covers such a case. The Lab encoding is a can of worms; the same profile can
contain several encodings in different LUT types. For instance, it is perfectly legal
for a v4 profile to hold the v2 encoding in a lut16Type and the v4 encoding in a
lutAToBType. I'm still trying to figure out how to deal with all that in an easy way.

And yes, Lab8 was being converted to v4 by mistake; it should be << 8 instead of * 257.
I have already fixed it in the development sources... I guess the need for a new
release is becoming evident.

Regards,
Marti.
 


----- Original Message ----- 
From: "Gerhard Fuernkranz" <[EMAIL PROTECTED]>
To: "Marti Maria" <[EMAIL PROTECTED]>
Cc: "Lcms Liste" <[EMAIL PROTECTED]>
Sent: Saturday, September 18, 2004 8:25 PM
Subject: Re: [Lcms-user] Converting Lab to CMYK


Marti Maria schrieb:

>It currently uses ICC encoding, either v2 or v4 on depending on the version of the 
>Lab profile being used as input/output.
>
Marti,

I've now also looked at the code. To me it looks like UnrollLabDouble()
converts TYPE_Lab_DBL to either V2 or V4 PCS encoding (as you said),
depending on "lInputV4Lab". But I could not find where this variable is
actually set, so it seems to be always FALSE, and thus TYPE_Lab_DBL is
always converted to V2 encoding. On the other hand, TYPE_Lab_8 seems to 
be converted to 16-bit by multiplication with 257 in RGB_8_TO_16(), 
called from Unroll3Bytes(), i.e. TYPE_Lab_8 seems to be converted always 
to the 16-bit *V4* PCS encoding. Now I'm a bit confused. Why is Lab_DBL
always converted to V2, and Lab_8 always to V4 encoding? Did I miss something?

Concerning the API: shouldn't cmsDoTransform() rather hide the profile
version from the API, and use a Lab_16 encoding at the API which is
*independent* of the profile being used? As a user of the API I would rather
not have to care which profile version I'm using. May I suggest that the API
provide either TYPE_Lab_16_V2 or TYPE_Lab_16_V4 (or both), and convert to
the actual profile's encoding internally if necessary?

Btw, I'm also a bit confused about the ICC spec. The V4 spec defines the 
new 16-bit CIELAB PCS encoding in Annex A. But even with V4 profiles, 
the lut16Type CLUT tables still use the old V2 16-bit CIELAB PCS 
encoding, this is explicitly mentioned in the spec. So where is the new 
V4 PCS CIELAB encoding actually used? Is it used for the PCS side of
the new lutAtoBType, ... tables? (I guess so, but I couldn't find a
definitive statement in the spec.)

Best Regards,
Gerhard





-------------------------------------------------------
This SF.Net email is sponsored by: YOU BE THE JUDGE. Be one of 170
Project Admins to receive an Apple iPod Mini FREE for your judgement on
who ports your project to Linux PPC the best. Sponsored by IBM.
Deadline: Sept. 24. Go here: http://sf.net/ppc_contest.php
_______________________________________________
Lcms-user mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/lcms-user


