Dear group,

We are using GDAL - both the command line tools and the Python bindings - for 
much of our modelling work. However, we find that the library rounds floating-
point numbers down to single precision (Float32, DataType=6) where our 
scientific computing requires double precision (Float64, DataType=7). We have 
been digging through the documentation and the Python binding source code, but 
are running out of ideas.

Please find attached a small Python script, together with an ASCII test grid, 
that reproduces the issue. Data originally in double precision is rounded to 6 
decimal places when read directly by GDAL, and also when the grid is first 
converted to GeoTIFF using gdal_translate -ot Float64 ... .


The output of the attached script is:


ASCII datatype: 6 float32
TIF datatype:   7 float64

Data GDAL (ASC): 50.814723968505859
Data GDAL (TIF): 50.814723968505859
Data REF:        50.814723686393002

Error (ASC):     0.000000282112858
Error (TIF):     0.000000282112858
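For what it's worth, the two values above differ by exactly one single-precision 
round-trip: pushing the double-precision reference value through an IEEE 754 
Float32 reproduces the value GDAL returns, digit for digit. A minimal check 
using only the Python standard library (the literals are copied from the 
output above):

```python
import struct

ref = 50.814723686393002         # double-precision reference value
gdal_value = 50.814723968505859  # value returned by GDAL

# Round-trip the reference through IEEE 754 single precision,
# i.e. what a Float32 band buffer would store.
as_float32 = struct.unpack('f', struct.pack('f', ref))[0]

print('Float32 round-trip: %.15f' % as_float32)
print('Matches GDAL value: %s' % (abs(as_float32 - gdal_value) < 1e-12))
```

So the numbers are consistent with the data passing through a Float32 buffer 
somewhere in the read path (possibly the ASCII grid driver defaulting to 
Float32 - just a guess on my part), rather than a formatting issue in the 
script.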




Grateful for your help
Cheers
Ole


Dr Ole Nielsen
Numerical Modeller
Australia-Indonesia Facility for Disaster Reduction
Mobile: +62 811 820 4637 | Phone: +62 21 398 30088 x1007 | Fax: +62 21 398 30068




Attachment: gdal_precision_test.tgz

_______________________________________________
gdal-dev mailing list
[email protected]
http://lists.osgeo.org/mailman/listinfo/gdal-dev
