It would also be interesting to check how this affects our compliance with the GIGS test suite (https://gigs.iogp.org/).

For example the tests for LAEA are at https://github.com/IOGP-GIGS/GIGSTestDataset/blob/main/GIGSTestDatasetFiles/GIGS%205100%20Conversion%20test%20data/ASCII/GIGS_conv_5110_LAEA_output.txt

Currently we reproduce the expected GIGS results perfectly at the millimeter level, although the test suite only asks for a 0.05 m tolerance:

$ echo 70 5 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
5214090.649    4127824.658 0.000
$ echo 60 5 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
4109791.660    4041548.125 0.000
$ echo 50 5 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
2999718.853    3962799.451 0.000
$ echo 40 5 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
1892578.962    3892127.020 0.000
$ echo 30 5 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
796781.677    3830117.902 0.000
$ echo 52 10 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
3210000.000    4321000.000 0.000
$ echo 50 0 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
3036305.967    3606514.431 0.000
$ echo 50 3 | bin/cs2cs -d 3 EPSG:4258 EPSG:3035
3011432.894    3819948.288 0.000

The formulas for LAEA in EPSG Guidance Note 7-2 use the 3-term series based on the squared eccentricity, which is the one PROJ currently implements.
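For illustration, here is a sketch (not PROJ's actual code) of that kind of 3-term authalic-latitude series, as given e.g. in Snyder's "Map Projections: A Working Manual"; the ellipsoid constant and function names are my own choices for the demo:

```python
# Illustration only (not PROJ code): 3-term series in the squared
# eccentricity for the authalic latitude used by LAEA, after Snyder (1987).
import math

E2 = 0.0066943799901413165  # GRS80-like squared eccentricity (assumption for the demo)

def authalic_from_geodetic(phi, e2=E2):
    """Forward 3-term series: geodetic latitude -> authalic latitude (radians)."""
    e4, e6 = e2 * e2, e2 * e2 * e2
    return (phi
            - (e2 / 3 + 31 * e4 / 180 + 59 * e6 / 560) * math.sin(2 * phi)
            + (17 * e4 / 360 + 61 * e6 / 1260) * math.sin(4 * phi)
            - (383 * e6 / 45360) * math.sin(6 * phi))

def geodetic_from_authalic(beta, e2=E2):
    """Inverse 3-term series: authalic latitude -> geodetic latitude (radians)."""
    e4, e6 = e2 * e2, e2 * e2 * e2
    return (beta
            + (e2 / 3 + 31 * e4 / 180 + 517 * e6 / 5040) * math.sin(2 * beta)
            + (23 * e4 / 360 + 251 * e6 / 3780) * math.sin(4 * beta)
            + (761 * e6 / 45360) * math.sin(6 * beta))

# A single forward/inverse round trip closes well below 1e-7 rad
# (1e-7 rad is on the order of 0.6 m on the Earth's surface).
phi = math.radians(52.0)
err = abs(geodetic_from_authalic(authalic_from_geodetic(phi)) - phi)
```

A single round trip through these truncated series closes very tightly, which is consistent with passing the one-shot GIGS tolerances; the residual truncation error only becomes visible under repetition.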

That said, when running the corresponding test file converted for use by PROJ's "gie" tool, we do pass the simple one-time forward / reverse tests, but not the tests where the conversion is repeated 1000 times: there a 6 mm drift is tolerated, and we are currently at ~300 to ~1300 millimeters. Perhaps using those more precise formulas would help improve the repeated round-tripping.
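The mechanism behind that drift can be sketched with a toy forward/inverse pair (all constants and names here are illustrative, not PROJ's): when the inverse is a truncated series of the forward, each round trip leaves a tiny but systematic residual, which accumulates almost linearly over 1000 repetitions:

```python
# Toy model (illustrative, not PROJ code): fwd applies a small periodic
# correction, and inv_truncated undoes it only to first order, like a
# truncated inverse series. The leftover O(K**2) bias per round trip is tiny
# but systematic, so 1000 repetitions accumulate it almost linearly.
import math

K = 0.0066943799901413165 / 3  # magnitude comparable to e^2/3 (assumption)

def fwd(x):
    return x + K * math.sin(2 * x)

def inv_truncated(y):
    # First-order inverse: leaves a residual of roughly -K**2 * sin(4*y)
    return y - K * math.sin(2 * y)

x0 = 0.3  # radians, arbitrary starting latitude
x = x0
drifts = []
for _ in range(1000):
    x = inv_truncated(fwd(x))
    drifts.append(abs(x - x0))

one_trip, thousand_trips = drifts[0], drifts[-1]
```

In this toy setup the drift after 1000 round trips is orders of magnitude larger than after one, even though a single round trip looks essentially exact; that is the same qualitative behaviour as the repeated gie tests, and why higher-order series should shrink the accumulated error.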

Cf https://github.com/OSGeo/PROJ/pull/4247 for the details (as the tests don't pass, the test file is not run by the automated test suite, so it must be run manually as shown in the PR).

Even

On 11/09/2024 at 11:30, Even Rouault via PROJ wrote:

We will surely find out when we run the unit tests for those projections?

Not sure which tolerance they have and how much those projections are tested (do not forget that much of the regression test suite was automatically generated from the results PROJ produced at that time, so with quite generic tolerances, etc.). I was thinking more of manually testing with the proj/cct binaries before and after, and seeing the differences.


It is also not immediately obvious to me how to correlate your proposed code with the formulas in the paper by just staring at both at the same time. It looks like some "interpretation" of the paper has been done.

It's quite straightforward, but optimized for the precalculations like PROJ was doing in /auth.cpp/. There is no "interpretation" involved.
Well, what you mention below is what I call "interpretation": it is not direct copying of ready-made formulas, where someone unqualified could trivially check that there is no typo ;-)


It is of course still possible that I made a mistake in all this, but I imagine that updating this and running the tests should help build confidence in the results.

Running the existing PROJ test suite will show whether the new formulas are consistent with the current, less precise ones (with maybe a quite loose tolerance), but not necessarily that we reach the new level of precision we claim. Hence the idea of comparing some random test points against GeographicLib, which can hopefully be considered a reference implementation, to check that we get identical results (or at least that we share more common decimals, up to the desired precision).
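For the "share more common decimals" check, a small hypothetical helper (the name and approach are mine, not an existing PROJ or GeographicLib API) could quantify the agreement between two implementations:

```python
# Hypothetical helper (not part of PROJ or GeographicLib): count the number of
# decimal places up to which two coordinate values round to the same result.
def common_decimals(a: float, b: float, max_places: int = 12) -> int:
    for places in range(max_places, -1, -1):
        if round(a, places) == round(b, places):
            return places
    return -1  # they do not even agree when rounded to integers

# e.g. two eastings differing by ~3 mm agree to 2 decimal places (centimeters)
```

Applied to, say, an easting from cs2cs versus the same point computed by GeographicLib, this gives a quick per-point measure of how many decimals the two implementations have in common.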


--
http://www.spatialys.com
My software is free, but my time generally not.

_______________________________________________
PROJ mailing list
PROJ@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/proj

