In visual effects compositing, dealing with the wide variety of image and
data formats used to be a persistent source of problems, but for the most
part it is no longer much of an issue. The way that domain resolved it may
offer some ideas for dealing with this one:

Guessing at the user's intent was common at one point, but it generally led
to a lot of frustration, particularly when the operation the user needs to
perform differs from what they need to view in order to judge the
correctness of the outcome. "Data images" need to remain untouched, and it
is very frustrating to have mysterious conversions take place behind the
scenes (automatic gamma correction in particular was the key source of
problems). On the other hand, it is also frustrating to get black images
with no indication of what is happening.
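As a small illustration of why automatic conversions on "data images" are so damaging, here is a sketch (the depth map is a hypothetical example, not anything from scikit-image) of what a silently applied display gamma does to non-color data:

```python
import numpy as np

# Hypothetical "data image": a depth map in metres, meant for computation,
# not for viewing.
depth = np.array([[0.5, 2.0],
                  [10.0, 0.0]])

# An automatic display gamma (e.g. 1/2.2) silently applied behind the
# scenes would destroy the physical meaning of the values.
gamma_corrected = np.power(depth / depth.max(), 1 / 2.2)

# The original metric values are no longer recoverable unless the user
# knows the conversion happened.
print(gamma_corrected)
```

The values still look like a plausible image, which is exactly what makes the corruption so hard to notice.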

The resolution of this issue took the form of splitting the viewer from the
operations. If you have a clear separation for operations intended to help
the user make visual judgements, then you are at liberty to guess at
"magic" transformations there, while lending clarity to the idea that, with
respect to the actual image data, nothing will happen unless the user
explicitly asks for it. It's easy to lose trust in tools when magic
operations start taking place behind the scenes, even if they are
well-intentioned.
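A minimal sketch of that separation (the function name and gamma value are illustrative assumptions, not an existing API): display transforms live in the viewer and never touch the data the pipeline operates on.

```python
import numpy as np

def view_transform(img, gamma=2.2):
    """Display-only transform: clip and gamma-encode for the screen.
    Applied in the viewer, never written back to the data."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

# The pipeline operates on the raw linear data...
data = np.array([0.1, 0.5, 1.5])   # values may legitimately exceed 1.0
result = data * 0.5                # an operation the user explicitly asked for

# ...while the viewer shows a transformed copy for judgement only.
preview = view_transform(result)

# The actual data is exactly what the user's operations produced.
assert np.array_equal(result, data * 0.5)
```

The "magic" guessing is then confined to `view_transform`, where a wrong guess costs nothing but a misleading preview.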

It may also be worth considering that presenting things as errors or
warnings, when they are really just information needed to do legitimate
work, can be off-putting. It is common, for example, to drastically
exaggerate a correction that will later be brought back into range at a
different stage of the computation. Compositing packages (such as Nuke)
persistently display a lot of information that you need in order to do the
basic work of building up a computation. A while ago I tried to see whether
the Orange Data Mining interface could be used to get something resembling
an image-processing setup. I set it aside, but it did at least seem to have
the right mix of components that a working setup would be imaginable with
some effort. Immediate feedback on things that are not actually errors, but
rather necessary steps in understanding and building up a computation,
might reduce the reliance on documentation, which in this case may be
standing in for the information you would get from a tighter feedback loop.
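To make the "exaggerated correction" point concrete, here is a contrived sketch (all numbers invented) where an intermediate deliberately leaves the [0, 1] range and a later stage brings it back; warning on the intermediate would just be noise:

```python
import numpy as np

img = np.array([0.2, 0.4, 0.8])

# Exaggerate a correction: intermediate values far above 1.0.
boosted = img * 10.0

# Some hypothetical adjustment performed at the exaggerated scale.
graded = boosted - 1.5

# A later stage brings everything back into [0, 1].
final = np.clip(graded / 10.0 + 0.15, 0.0, 1.0)

# The out-of-range intermediate was a legitimate, necessary step,
# not an error condition.
assert final.min() >= 0.0 and final.max() <= 1.0
```

Here the round trip happens to be lossless, but even when it isn't, the intermediate is intentional and should be surfaced as information rather than flagged as a problem.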


On Fri, Apr 6, 2018 at 9:44 PM, Juan Nunez-Iglesias <> wrote:

> On Sat, Apr 7, 2018, at 10:36 AM, Stefan van der Walt wrote:
> > Agreed, I don't think anyone ever uses uint32 for images.  Typically 16,
> > but perhaps also 64?
> What I'm arguing is that if anyone takes a 64-bit image and converts it to
> float, they are *not* after their image divided by 2**64.
> _______________________________________________
> scikit-image mailing list
