Warning: a long and boring post follows. If you don't want to know about colour management, just shoot in whatever colour space is used by whoever does your printing. If in doubt use sRGB. Now skip to the next message. Or even better, go out and actually use your gear instead of sitting here talking about it :)

On Jun 19, 2004, at 2:06 AM, Frantisek Vlcek wrote:

> Well, when both devices use different colour spaces and you do not
> _convert_ between them, their colour will be different.

Yes. Anyone with Photoshop can go to Image->Mode->Assign Profile. Tick the "Preview" box and pick some different profiles and you'll see the effect that the wrong space can have. Note that Photoshop lets you choose any profile in your system instead of just the device-independent colour spaces.
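To see numerically why assigning the wrong profile changes what you see: the same RGB numbers mean a different physical colour in each space. Here's a rough numpy sketch using the published linear-RGB-to-XYZ matrices for sRGB and Adobe RGB (1998); it ignores the gamma curves for simplicity, which a real CMS would also handle:

```python
import numpy as np

# Published linear RGB -> CIE XYZ matrices (D65 white point) for each space.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2973, 0.6274, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

rgb = np.array([0.2, 0.8, 0.3])       # the *same* pixel values...
xyz_as_srgb = SRGB_TO_XYZ @ rgb       # ...interpreted as sRGB
xyz_as_adobe = ADOBE_TO_XYZ @ rgb     # ...interpreted as Adobe RGB
print(xyz_as_srgb, xyz_as_adobe)      # two different actual colours
```

That difference in XYZ is exactly what you see on-screen when you tick "Preview" and flip between profiles.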


> Just either set up both to the same colour space, or shoot in
> AdobeRGB, do the editing (if any) and then _convert_ to sRGB (which is
> the colour space _assumed_ by Frontier and Noritsu printers).

Oh boy, whole books have been written about this :)

I guess the first thing to think about is the colour gamut of the digital camera's sensor. If it can sense colours that are outside your working space, then your whole imaging process is going to be limited by the working space itself (e.g. sRGB or Adobe RGB). IMO this is bad: I believe that the input and/or output hardware should be the limiting factor.

Note that the instant you convert a file into a smaller colour space, any colour information outside the gamut of that space is lost forever. How the out-of-gamut colours are handled is partly determined by the conversion's rendering intent, which I'm not going to go into here.
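You can watch that information get thrown away with a few lines of numpy. Fully saturated Adobe RGB green, pushed through the published matrices into sRGB, produces channel values below zero - colours sRGB simply cannot represent - and clipping them is the irreversible loss (gamma curves again omitted for simplicity):

```python
import numpy as np

# Linear RGB -> XYZ for Adobe RGB (1998), and XYZ -> linear RGB for sRGB.
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2973, 0.6274, 0.0753],
                         [0.0270, 0.0707, 0.9911]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

adobe_green = np.array([0.0, 1.0, 0.0])           # saturated Adobe RGB green
srgb = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ adobe_green)
print(srgb)                           # some channels fall below 0: out of gamut
clipped = np.clip(srgb, 0.0, 1.0)     # what actually gets stored - data gone
```

No later conversion can recover what the clip discarded; the rendering intent only controls *how* the squeeze into gamut happens.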

Some people recommend capturing as much colour information as possible, and storing it in a colour space which can actually represent all of that information. For example, when scanning slides you could use a large colour space such as EktaSpace which was designed for scanning Ektachrome slides, or something positively gigantic like Kodak's ProPhoto RGB.

The downside of large colour spaces is that using them risks a loss of tonality for a given bit depth (I'll expand on that later). When using large colour spaces it is strongly recommended that your entire workflow is in 16 bits per channel. All of the core functions of Photoshop CS support 16-bit colour, but a lot of the bundled filters do not.

The upside of archiving in a larger colour space is that when an improved display or printing technology comes along in the future, you can benefit from its ability to output a wider range of colours. For example, OLED displays seem to have a wider gamut than CRT or LCD screens (see link below).

http://www.kodak.com/US/en/corp/researchDevelopment/technologyFeatures/oled2003T.shtml

> Benefit of shooting with the camera set to AdobeRGB is that sRGB is
> rather limited for some tones, so if you do any adjustments to the
> image, it's better to work in the larger colour space.

For a given bit depth, it's all a tradeoff between tonality and gamut.

sRGB has a relatively small gamut, so 8 bits (per channel) of resolution within that gamut gives you very good tonality, but the ability to represent highly saturated colours is very limited.

A larger colour space represented by 8-bit data will give you worse tonality: the 0-255 range now has to cover a larger range of colour, so each "step" represents a slightly larger change in colour. If the gamut is big enough, these steps become noticeable. This effect is known as "posterization". Going to 16 bits per channel gives much finer steps, so posterization is avoided... at the expense of doubling the file size.
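A quick way to see the effect: a gradient that fills a small space only occupies part of a bigger space's range, so after 8-bit quantization it lands on fewer distinct code values. A toy sketch (the 0.6 scale factor is purely illustrative, not a real gamut ratio):

```python
import numpy as np

def quantize(values, bits):
    """Round continuous 0-1 values to the nearest representable code."""
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

# A smooth gradient occupying the full 0-1 range of a small space...
gradient = np.linspace(0.0, 1.0, 10000)

# ...occupies only part of a bigger space's range (0.6 is illustrative).
in_big_space = gradient * 0.6

print(len(np.unique(quantize(gradient, 8))))      # 256 distinct tones
print(len(np.unique(quantize(in_big_space, 8))))  # fewer tones: posterization
print(len(np.unique(quantize(in_big_space, 16)))) # 16 bits: fine steps again
```

Fewer distinct tones over the same visual range means bigger jumps between neighbouring tones - that's the banding you see in posterized skies.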

sRGB is quite close to what monitors can display, which means you might not easily see the difference between an sRGB picture and an Adobe RGB one on-screen unless you desaturate them a little. Photoshop can be set up to do this automatically in the image view, but it wreaks havoc with the rest of the colour workflow because the desaturated view is no longer an accurate visual representation of the image. This setting is therefore not recommended for general use.

A lot of people match their working space to the capabilities of their output device (ie printer). This is great in the short term as it gives the greatest possible tonality within the limitations of the printer - in other words you're not wasting space by storing information about colours that you can't print anyway. This approach can be a limiting factor as technology improves because you can't regenerate the data you threw away, so if a wider-gamut printer comes along you won't be able to take full advantage of it.

Working spaces are meant to be *device independent* but reality has to hit the process somewhere. I would recommend taking one of the following courses of action:

1- Ignorance is bliss. Shoot in sRGB and you'll never have to worry about the nightmare that is colour management, as long as that's what your lab uses. You'll still get great results.

2- Match your colour workflow to the capabilities of your hardware. In the short term this will give you optimal results, and you'll save on file size if the gamuts aren't too big (although I recommend making all image adjustments in 16-bit anyway, but that's another story).

Ideally this approach involves looking at CIE LAB plots of the input and output profiles, and picking the closest device-independent colour space as your working space. You can buy software to make these plots but IIRC it's quite expensive. Apple have included this capability in the ColorSync Utility bundled with OS X, but it's probably not as powerful an analytical tool as the specialist software might be (that's just my speculation). It makes for a great visual comparison, though. I put an example online a while ago at:
http://www.digistar.com/~dmann/temp/srgb_adobe1998.jpg
I can't get to digistar at the moment so it might be down. If you do see the picture, the greyed plot is Adobe RGB and the coloured one is sRGB.
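If you just want the headline comparison without any plotting software: the xy chromaticities of each space's primaries are published, and a few lines of Python give the area of each gamut triangle on the chromaticity diagram (a crude 2-D stand-in for the 3-D plots ColorSync draws, but it makes the point):

```python
# Published CIE xy chromaticities of each space's primaries (red, green, blue).
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
ADOBE = [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)]  # only the green differs

def triangle_area(pts):
    """Shoelace formula: area of the gamut triangle on the xy diagram."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

print(triangle_area(SRGB), triangle_area(ADOBE))  # Adobe RGB covers more area
```

The extra area is almost entirely in the greens and cyans, which matches what you see in the overlaid plots.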


The problem with this approach is that you will find yourself limited if you get better hardware in the future.

3- Preserve the most data you possibly can. Shoot RAW then process and archive in 16 bits per channel mode using the colour space which best matches (leaning towards slightly exceeding) the capabilities of your camera's sensor, or your film scanner, or whatever. When sending your file for printing, convert it to your lab's working space manually. And before converting, make extensive use of soft-proofing to avoid any surprises (you will need a calibrated monitor along with accurate monitor and printer profiles for this).
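For what it's worth, the convert-before-sending step can be scripted too. A sketch using Pillow's ImageCms bindings to LittleCMS - note I'm using the built-in sRGB profile as a stand-in for your working space, and the lab profile you'd pass in is whatever ICC file your lab supplies (the filename below is hypothetical):

```python
from PIL import Image, ImageCms

def convert_for_lab(im: Image.Image, lab_profile) -> Image.Image:
    """Convert an image from the working space to the lab's colour space."""
    # Stand-in working-space profile; in practice you'd use the profile
    # embedded in (or assigned to) your archived file.
    working = ImageCms.createProfile("sRGB")
    return ImageCms.profileToProfile(
        im, working, lab_profile,
        renderingIntent=0)  # 0 = perceptual, the library default
```

Usage would be something like `convert_for_lab(im, ImageCms.getOpenProfile("FrontierLab.icc"))` - again, a hypothetical profile name. The soft-proofing itself still belongs in Photoshop, where you can actually look at the result.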

Disclaimer: the above is all theoretical. In the real world I don't think it makes a huge difference unless you're getting into some pretty high-end printing. Just make sure that your data is in the same colour space that whoever is doing your printing expects it to be in. If you're printing your own files, Photoshop and the printer driver will work all this out for you.

Embedding your working space's profile in your image files is an excellent idea (I do it even for web photos), but please note that not every application supports embedded profiles!

Cheers,

- Dave

http://www.digistar.com/~dmann/


