Hello,
I am trying to convert from TYPE_Lab_16 to TYPE_BGR_16 but can't find the
right way to proceed.
hTransformLab2Aux = cmsCreateTransform(
    hLabProfile,
    TYPE_Lab_16,
    hAuxRGBProfile,
    TYPE_BGR_16,
    INTENT_ABSOLUTE_COLORIMETRIC, 0);
The two used profiles have previously been created with no errors.
hLabProfile is a simple Lab profile created with cmsCreateLab2Profile.
hAuxRGBProfile is a special RGB profile based on the model of the
cmsCreate_sRGBProfileTHR function, but with different values for the primaries.
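For context, the profile creation boils down to something like the following
sketch (the white point, primaries and tone curve shown here are placeholders,
not my real values):

    cmsCIExyY whitePoint = { 0.3127, 0.3290, 1.0 };          /* placeholder white point */
    cmsCIExyYTRIPLE primaries = {
        { 0.64, 0.33, 1.0 },                                  /* placeholder red   */
        { 0.30, 0.60, 1.0 },                                  /* placeholder green */
        { 0.15, 0.06, 1.0 }                                   /* placeholder blue  */
    };
    cmsToneCurve* gamma22 = cmsBuildGamma(NULL, 2.2);         /* placeholder tone curve */
    cmsToneCurve* curves[3] = { gamma22, gamma22, gamma22 };

    cmsHPROFILE hLabProfile    = cmsCreateLab2Profile(NULL);  /* NULL -> D50 white point */
    cmsHPROFILE hAuxRGBProfile = cmsCreateRGBProfile(&whitePoint, &primaries, curves);
    cmsFreeToneCurve(gamma22);

Both calls return non-NULL handles, so profile creation itself seems fine.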
To complete the conversion I used the following lines of code:
    unsigned short LabEncoded[3] = {0, 0, 0};
    cmsFloat2LabEncoded(LabEncoded, &myLab);

    unsigned char ByteRGB2[3] = {0, 0, 0};
    cmsDoTransform(hTransformLab2Aux, LabEncoded, ByteRGB2, 1);

    cmsFloat64Number R = ByteRGB2[0];
    cmsFloat64Number G = ByteRGB2[1];
    cmsFloat64Number B = ByteRGB2[2];
While the 8-bit tests (TYPE_Lab_16 to TYPE_BGR_8) give correct values, the
execution with 16-bit output reports that ByteRGB2 is corrupted. So which type
should I use instead of unsigned char in this case? Is there a special typedef
in lcms for that?
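Is the fix simply to use the 16-bit typedef from lcms2.h for the output
buffer? Here is my untested guess:

    cmsUInt16Number RGB16[3] = { 0, 0, 0 };   /* 3 x 16 bits = 6 bytes, sized for TYPE_BGR_16 */
    cmsDoTransform(hTransformLab2Aux, LabEncoded, RGB16, 1);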
The fact is I am not sure whether I am doing the right thing in using TYPE_BGR
instead of TYPE_RGB. Could I have a little example of what I have to do? That
would be great.
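To make my doubt concrete, this is what I assume the two formatters mean for
the layout of one output pixel (again, just a guess on my part):

    /* assumed in-memory channel order for one 16-bit pixel:      */
    /* TYPE_RGB_16 : buf[0] = R, buf[1] = G, buf[2] = B           */
    /* TYPE_BGR_16 : buf[0] = B, buf[1] = G, buf[2] = R           */
    cmsFloat64Number R = RGB16[2];   /* if TYPE_BGR_16 really reverses the order */
    cmsFloat64Number G = RGB16[1];
    cmsFloat64Number B = RGB16[0];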
Remi