On Friday, 1 July 2016 at 14:30:17 UTC, Benjamin Schaaf wrote:
The problem with not knowing bit depth at compile time is that
you're now forced to store the image internally as plain bytes.
So if you wanted to add two colors, you end up with ubyte[4] +
ubyte[4] instead of int + int. At some point you're going to
have to use a proper numerical representation (i.e. long), or be
faced with slow calculations (i.e. BigInt).
Other libraries (e.g. ImageMagick) get around this by just using
longs as the internal representation. Daffodil allows you to
control this. So if you know you will never use more than 4
bytes per color, you don't have to pay for anything more. If
you don't know, you can just use 8 bytes and get essentially
the same behaviour as ImageMagick.
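If I've understood you correctly, your approach is roughly this (my
own sketch, not Daffodil's actual API; the names are made up for
illustration):

import std.stdio : writeln;
import std.traits : isIntegral;

// Storage type V is fixed at compile time, so channel arithmetic
// runs directly on V with no runtime dispatch.
struct Pixel(V) if (isIntegral!V)
{
    V[4] channels; // e.g. RGBA

    Pixel opBinary(string op : "+")(Pixel rhs) const
    {
        Pixel result;
        foreach (i, c; channels)
            result.channels[i] = cast(V)(c + rhs.channels[i]);
        return result;
    }
}

void main()
{
    Pixel!uint a, b;
    a.channels = [1, 2, 3, 255];
    b.channels = [4, 5, 6, 0];
    auto c = a + b; // plain uint additions
    writeln(c.channels);
}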
Yes, I'm aware of that problem. But if you store the type
information in the image (as an enum field), you can later cast to
the correct types and perform the arithmetic the right way.
This is how OpenCV's cv::Mat works under the hood, and I believe
numpy.ndarray's C implementation does the same.
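To make that concrete, something along these lines (again just an
illustrative sketch, not cv::Mat's real layout or any library's
actual API):

import std.exception : enforce;

enum Depth { u8, u16, u32 }

struct Image
{
    Depth depth;  // runtime tag describing the element type
    ubyte[] data; // raw storage, reinterpreted on use

    // Element-wise addition: dispatch on the tag once, then run a
    // typed loop, rather than branching per pixel.
    void addAssign(Image rhs)
    {
        enforce(depth == rhs.depth && data.length == rhs.data.length);
        final switch (depth)
        {
            case Depth.u8:  addImpl!ubyte(rhs);  break;
            case Depth.u16: addImpl!ushort(rhs); break;
            case Depth.u32: addImpl!uint(rhs);   break;
        }
    }

    private void addImpl(T)(Image rhs)
    {
        auto a = cast(T[]) data; // reinterpret the raw bytes as T
        auto b = cast(T[]) rhs.data;
        foreach (i; 0 .. a.length)
            a[i] += b[i];
    }
}

You pay one switch per operation instead of zero, but the image type
itself stays non-templated, so bit depth can be decided at load time.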
Don't get me wrong, I'm not saying your way is incorrect. :) I'm
just explaining my viewpoint. I believe your way is a lot easier;
if you could show that it works well in a production environment,
I'd be glad to adopt it!
Cheers,
Relja