On 3/8/2014 5:22 AM, Even Rouault wrote:
This is the problem step:

gdalwarp -rcs -ts 8800 6600 -s_srs EPSG:32662 -t_srs EPSG:4326 temp.tif target.tif

gdalinfo -mm -stats target.tif shows that the range of values in the image is dramatically different on the two servers.

Summary, old:
Band 1 Block=8800x1 Type=Int16, ColorInterp=Gray
Computed Min/Max=-3877.000,32767.000
Minimum=-3877.000, Maximum=32767.000, Mean=25235.731, StdDev=10612.642

Summary, new:
Band 1 Block=8800x1 Type=Int16, ColorInterp=Gray
Min=-9314.000 Max=32561.000
Computed Min/Max=-9314.000,32561.000
Minimum=-9314.000, Maximum=32561.000, Mean=19166.800, StdDev=7786.806

Ok, so you can see that the values are radically different. My question is: how do I get values like the old system produced? These values represent temperatures, and I need to get the same values. My one thought is that this is another side effect of proj4 behaving differently, as I had to adjust the position above to get things to align. So maybe gdalwarp is also messing up the pixel values when it reprojects. But I'm totally lost on how to make this work correctly. Any thoughts on how to fix this?

Stephen, I think we already had a discussion some time ago about differences between spherical and ellipsoidal projections, or am I confusing you with someone else?
Yes, this is probably related to the previous discussion, since it is the same process we discussed before. Back then I was dealing with the projection being misaligned, and I fixed that by changing the bbox defined for the HDF file so that it aligned.
But now I am realizing that the pixel values are also messed up. So maybe changing the bbox was not the right thing to do.
Well, it is not clear from your experiment whether the difference is due to the reprojection or to the resampling method.
Yeah, I am totally lost on this. My experiment was to compare the process steps on each system to see where things were different, in the hope of understanding what is happening.
There's a difference between the two GDAL versions, but is the new result worse than the previous one (from visual inspection)?
The images do look similar.
Cubic spline resampling seems to produce overshoot artifacts in both situations (since the output minima of -3877.000 and -9314.000 are below 377 in the input). That's probably due to the mathematics behind the kernel.
Right, but turning that off and using the default does not resolve the issue.
Maybe just try with the default nearest resampling to see if it is due to the resampling kernel or the reprojection.
Tried that, no joy.
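For what it's worth, the undershoot from cubic-family resampling is easy to reproduce outside GDAL. Here's a minimal sketch using Catmull-Rom cubic convolution weights; this is only an illustration of why cubic kernels overshoot near sharp edges, not the exact coefficients GDAL's -rcs (cubic B-spline) kernel uses:

```python
# Sketch: cubic convolution (Catmull-Rom form) interpolating between
# samples p1 and p2 at fractional offset t, using neighbors p0 and p3.
# Illustrative only; GDAL's -rcs uses a B-spline variant with
# different coefficients.
def cubic_interp(p0, p1, p2, p3, t):
    return (
        p1
        + 0.5 * t * (p2 - p0)
        + t * t * (p0 - 2.5 * p1 + 2.0 * p2 - 0.5 * p3)
        + t ** 3 * (-0.5 * p0 + 1.5 * p1 - 1.5 * p2 + 0.5 * p3)
    )

# Interpolating halfway between two zero-valued samples that sit next
# to a sharp jump to 32767 (e.g. a data/nodata edge) undershoots
# below zero, even though every input sample is >= 0:
value = cubic_interp(0, 0, 0, 32767, 0.5)
print(value)  # -2047.9375
```

So negative output values near bright edges are expected behavior for this kind of kernel, independent of the reprojection step.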
I'm also wondering whether your data has a nodata value that you should explicitly set; I can only guess which value would be a good candidate given that the data type is Int16. But the "_FillValue=[65535]" in the metadata makes me wonder whether the data type shouldn't be UInt16 rather than Int16 in your initial conversion from netCDF to GeoTIFF, with 65535 as the nodata value.
I tried setting nodata and tried UInt16. I noticed in the HDF metadata that there was a valid_range=[0-32767], which might be why Int16 was being used.
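One detail worth noting on the Int16/UInt16 question: a _FillValue of 65535 cannot be represented in Int16 at all, so if the raw 16-bit fill pixels end up interpreted as signed, they wrap around under two's complement. A quick stdlib sketch of that reinterpretation (the values here are just the ones from the metadata above):

```python
import struct

# Pack 65535 as an unsigned 16-bit value, then reread the same two
# bytes as signed 16-bit: the fill value wraps to -1.
(fill_as_int16,) = struct.unpack("<h", struct.pack("<H", 65535))
print(fill_as_int16)  # -1

# valid_range=[0, 32767] does fit in Int16, which may be why Int16
# was chosen, but any 65535 fill pixels would then read as -1 and,
# without a nodata flag, get blended into the resampled output.
(valid_max,) = struct.unpack("<h", struct.pack("<H", 32767))
print(valid_max)  # 32767
```

That would explain why an unset nodata value could drag the statistics around after warping: the resampler treats the wrapped fill pixels as real temperatures.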
Anyway, as I mentioned in the prior email, I'm waiting on some docs for the HDF files, and I will try to reconstruct the process from those.
-Steve
Even
_______________________________________________ gdal-dev mailing list [email protected] http://lists.osgeo.org/mailman/listinfo/gdal-dev
