Well, it is not directly supported by the "normal" format specifiers, but one can easily add support for it by using user-defined formatters. There is a function, cmsSetUserFormatters(), specifically for doing so.
Ok, thanks. I was not aware of that.
But at that point, even though lcms would accept such a format, the precision would remain 16 bits. ICC profiles do not allow for more resolution (CLUT-based ones, I mean). So, if the issue is about reading this particular encoding, user formatters are all you need. Otherwise, lcms cannot handle precision above 16 bits. Please note that using 32 bits instead of 16 would double the memory/CPU requirements for devicelink computation, and in almost all cases there is really no need to do so.
Right. Passing 32-bit data through a 16-bit algorithm results in 16-bit precision. However, there is no other option if the data available is 32-bit.
Does LCMS copy and reformat the pixel data into its own allocated buffers, or does it read/write the user data "in place" without copying to/from buffers in some native representation?
Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED]
http://www.simplesystems.org/users/bfriesen
