John Donovan wrote:
Hi,
We have our own image format that is (among others) single-band 16-bit
unsigned height values. I've written a program to convert from the many
native GDAL formats to our format using ChunkAndWarpImage().

Everything is fine except when converting from a floating-point format.
I was expecting GDAL to do something with the scale and offset values
when warping, but it doesn't call our format's GetScale()/GetOffset()
functions at all, so the fractional part of the input data
is truncated.

I'm willing to accept that my code could be at fault, so I'm open to any
and all suggestions. The only one I can think of, and I'd rather not do
it, is to tell GDAL our format is floating point, and do the scaling
manually in IWriteBlock(). But since the format can also store floating-point
values directly, that would make the code path less clean than I would like.

John,

GDAL is not intended to apply or unapply pixel value scaling as part of
normal image I/O.  The scale/offset values are provided as metadata for
applications that want to interpret scaled values more appropriately.
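To illustrate that division of labour, the round trip an application is
expected to do itself might look like the following. This is a minimal sketch
in plain Python with hypothetical scale/offset values; in real code the two
metadata values would come from GDALRasterBand::GetScale() and
GDALRasterBand::GetOffset().

```python
# Sketch of an application applying scale/offset metadata itself.
# scale and offset are hypothetical here; real code would query
# GDALRasterBand::GetScale() / GetOffset() on the opened band.

def to_real(raw, scale, offset):
    """Convert a stored (scaled-integer) value to a real-world value."""
    return raw * scale + offset

def to_raw(real, scale, offset):
    """Convert a real-world value back to the stored representation."""
    return round((real - offset) / scale)

scale, offset = 0.5, 100.0          # hypothetical band metadata
raw_heights = [0, 250, 1234]        # 16-bit unsigned values as stored

real_heights = [to_real(r, scale, offset) for r in raw_heights]
print(real_heights)                 # [100.0, 225.0, 717.0]

# Round-tripping recovers the stored integers.
assert [to_raw(h, scale, offset) for h in real_heights] == raw_heights
```

GDAL's I/O paths (including the warper) move the raw values around untouched;
only code that calls to_real()/to_raw()-style conversions sees "real" heights.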

If you want applications to be able to treat your imagery as "real values"
rather than scaled integers, then you should represent the imagery as
floating point and apply/unapply the scaling yourself, consistently, on
every read and write.
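That driver-side approach can be sketched as follows. This is not GDAL code,
just a toy model with hypothetical names (HeightBand, SCALE, OFFSET): the
band stores 16-bit integers internally but always exposes floating-point
values, scaling in read_block() and unscaling in write_block() the way a
driver's IReadBlock()/IWriteBlock() would.

```python
# Toy model of a driver that stores uint16 on disk but presents
# Float32-style values to callers. All names are hypothetical.

SCALE, OFFSET = 0.5, 100.0

class HeightBand:
    def __init__(self):
        self._stored = {}            # block id -> list of stored uint16 values

    def write_block(self, block_id, real_values):
        # Unapply scaling before storing, as IWriteBlock() would.
        self._stored[block_id] = [
            round((v - OFFSET) / SCALE) for v in real_values
        ]

    def read_block(self, block_id):
        # Apply scaling on the way out, as IReadBlock() would.
        return [r * SCALE + OFFSET for r in self._stored[block_id]]

band = HeightBand()
band.write_block(0, [100.0, 225.25, 717.0])
# Values come back quantized to the SCALE step (225.25 -> 225.0),
# but the same rule applies regardless of the input data type.
print(band.read_block(0))
```

Because the scaling is applied on both paths unconditionally, integer and
floating-point inputs go through the same code, which is the point of the
advice above.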

Applying different rules depending on whether floating-point or integer
data is being read or written is a dangerous approach; I advise against it.

Best regards,
--
---------------------------------------+--------------------------------------
I set the clouds in motion - turn up   | Frank Warmerdam, [email protected]
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush    | Geospatial Programmer for Rent

_______________________________________________
gdal-dev mailing list
[email protected]
http://lists.osgeo.org/mailman/listinfo/gdal-dev
