Morning, I have a bit of a conundrum which I'm trying to find an answer for, and which is prompting a certain amount of contentious debate at work. I've downloaded a 1-deg x 1-deg SRTM tile from the GLCF site, SRTM_f03_n007e000.tif. Now from its name, this tile should have its origin at N 7, E 0.
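Just to make that expectation concrete, here is a throwaway Python helper (my own hypothetical sketch, not anything shipped with GDAL) spelling out how I'm reading the naming convention:

    import re

    def expected_upper_left(tile_name):
        """From an SRTM tile name like 'SRTM_f03_n007e000', derive the
        expected upper-left corner (lon, lat): the name encodes the
        lower-left (origin) corner of a 1-deg x 1-deg tile."""
        m = re.search(r"([ns])(\d{3})([ew])(\d{3})", tile_name.lower())
        lat = int(m.group(2)) * (1 if m.group(1) == "n" else -1)
        lon = int(m.group(4)) * (1 if m.group(3) == "e" else -1)
        return (lon, lat + 1)  # the UL corner sits one degree north of the origin

    print(expected_upper_left("SRTM_f03_n007e000"))  # -> (0, 8)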
When I view the metadata for the tile, I see a helpful note that says "Metadata: AREA_OR_POINT=Area". I take this to mean the data is using the "PixelIsArea" raster space, as defined at http://www.remotesensing.org/geotiff/spec/geotiff2.5.html: "The 'PixelIsArea' raster grid space R, which is the default, uses coordinates I and J, with (0,0) denoting the upper-left corner of the image". My guess, then, would be that the UL corner coordinates of this raster should be (0, 8), i.e. lon 0, lat 8. However, that is not the case:

- When I run gdalinfo against the tile, it reports: Upper Left ( -0.0004167, 8.0004167)
- When I create a worldfile for the tile with gdal_translate, it reports these pixel coords: -0.0000000000, 8.0000000033
- When I create a shapefile of the tile's extents with gdaltindex, it reports: Extent: (-0.000417, 6.999583) - (1.000417, 8.000417)

Unless my arithmetic is off, that 0.0004167-degree offset is exactly half of this product's 3-arc-second (1/1200 degree) pixel size. So my question is this: are we really all working with SRTM data that is offset by half a pixel because of a discrepancy between point-vs-area pixel registration? This is coming to a head for us because we need to do some raster analysis, and we're having a really hard time understanding why the data keeps spanning meridians and parallels.

Would appreciate any clarification or insights.

Roger
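P.S. In case anyone wants to reproduce the numbers above, here is a minimal Python sketch of the check, assuming the GDAL Python bindings (osgeo.gdal) are installed and the tile sits in the current directory:

    from osgeo import gdal

    ds = gdal.Open("SRTM_f03_n007e000.tif")
    print(ds.GetMetadataItem("AREA_OR_POINT"))  # prints: Area

    # GetGeoTransform() returns (ulx, xres, 0, uly, 0, yres), with yres negative.
    gt = ds.GetGeoTransform()
    print("UL corner as stored:        (%.7f, %.7f)" % (gt[0], gt[3]))
    # -> (-0.0004167, 8.0004167), half a pixel out from (0, 8)

    # Nudging the corner inward by half a pixel on each axis lands exactly
    # on the integer graticule the filename promises:
    print("UL shifted by half a pixel: (%.7f, %.7f)" % (gt[0] + gt[1] / 2.0,
                                                        gt[3] + gt[5] / 2.0))
    # -> (0.0000000, 8.0000000)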
