* Markus Metz <[email protected]> [2017-03-16 22:06:12 +0100]:
> On Thu, Mar 16, 2017 at 11:26 AM, Nikos Alexandris <[email protected]>
> wrote:
> [...]
> > > With the p1 tif and GRASS db on the same spinning HDD, and 6 other
> > > heavy processes constantly reading from and writing to that same HDD,
> > > r.in.gdal took 2h 13min to import the p1 tif. 360 MB as input and
> > > 1.5 GB as output is not that heavy on disk IO. Most of the time is
> > > spent decompressing input and compressing output.
> > Is it a 10000rpm disk?
> I think you are on the wrong track; disk IO does not matter here. It was
> a 7200rpm disk, and the output of r.in.gdal was about 1.5 GB. It takes
> only seconds, not hours, to write 1.5 GB to a HDD.
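Markus's point is easy to sanity-check: a sequential-write test shows how quickly a HDD absorbs data. A minimal sketch (the path and size are placeholders, scaled down from the 1.5 GB case):

```shell
# Sequential-write sanity check: a 7200rpm HDD typically sustains on the
# order of 100 MB/s, so writing 1.5 GB should take roughly 15 seconds.
# Path and size are placeholders; 256 MB keeps the run short.
dd if=/dev/zero of=/tmp/write_test.bin bs=1M count=256 conv=fdatasync
rm -f /tmp/write_test.bin
```

If this finishes in a few seconds, write IO cannot explain a 2h 13min import.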
> > > p2 is a harder one!
> > >
> > > export GDAL_CACHEMAX=10000
> > > gdal_translate -co "COMPRESS=LZW" \
> > >     GHS_BUILT_LDS1990_GLOBE_R2016A_3857_38_v1_0_p2.tif p2_test.tif
> > Also related? GTIFF_DIRECT_IO, GTIFF_VIRTUAL_MEM_IO
> Again, I think you are on the wrong track; disk IO does not matter here.
> And according to the GDAL documentation, GTIFF_DIRECT_IO and
> GTIFF_VIRTUAL_MEM_IO apply only to reading uncompressed TIFF files.
> > > finishes in 28 minutes.
> > Impressive!
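As an aside on the cache setting: GDAL interprets GDAL_CACHEMAX values below 100000 as megabytes, so GDAL_CACHEMAX=10000 means a ~10 GB block cache. The same setting can also be scoped to a single command instead of being exported; a sketch, reusing the filenames above:

```shell
# Same effect as 'export GDAL_CACHEMAX=10000', but scoped to one command.
# Values below 100000 are interpreted as megabytes.
gdal_translate --config GDAL_CACHEMAX 10000 -co "COMPRESS=LZW" \
    GHS_BUILT_LDS1990_GLOBE_R2016A_3857_38_v1_0_p2.tif p2_test.tif
```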
> Hardware does not really matter here. To be precise, the difference
> between GDAL 1.11.4 and 2.1.3 is impressive, thanks to the efforts of
> the GDAL development team.
> Regarding GDAL 2.1.3, profiling might tell why gdal_translate is so
> much faster than GRASS r.in.gdal.
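For the record, such a profiling run could look roughly like the following. This is only a sketch: it assumes a Linux box with perf installed and binaries built with symbols, and the file names are placeholders.

```shell
# Hypothetical profiling session with Linux perf (names are placeholders;
# assumes perf and debug symbols are available):
perf record -g -o gdal.perf.data -- gdal_translate -co "COMPRESS=LZW" p1.tif p1_gdal.tif
perf record -g -o grass.perf.data -- r.in.gdal input=p1.tif output=p1

# Compare where the time goes, e.g. libtiff/zlib vs. GRASS raster code:
perf report -i gdal.perf.data
perf report -i grass.perf.data
```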
Thanks Markus. Yes, on the wrong track. Useful lessons learned.
Nikos
ps- Working in a restricted environment (as in: I cannot install
whatever I need) is not easy. Sure, I can possibly use a VM or
similar...
_______________________________________________
gdal-dev mailing list
[email protected]
https://lists.osgeo.org/mailman/listinfo/gdal-dev