Re: [OSM-dev] Generalisation

2018-05-02 Thread Tomas Straupis
2018-05-03 1:05 GMT+03:00 Marco Boeringa wrote:
> You do realize the 1-2 years is well after the 2013 date when the Dutch
> Kadaster started to publish their work?

  Lithuania was given as a counterexample to "the only".
  Savino (Italy) was given as a counterexample to "the first" (his work
was published in 2011). (It is also a "must read" for anyone interested
in generalisation.)

  Anyway, there is no point in arguing about who was first, last, only,
etc. All approaches using closed commercial software are pointless for
OSM - they cannot be reused. Everything can be done with open source, so
that all code/algorithms are open and clear and there is no need to pay
piles of money for nothing.

-- 
Tomas

___
dev mailing list
dev@openstreetmap.org
https://lists.openstreetmap.org/listinfo/dev


Re: [OSM-dev] Generalisation

2018-05-02 Thread Christoph Hormann
On Wednesday 02 May 2018, Marco Boeringa wrote:
>
> "And also it is ultimately: Demo or it didn't happen - at the moment
> the only thing you can get at 1:50k is the old style map:
> https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster"
>
> This is a serious misunderstanding. This is a rasterized version
> of the new style 1:50k vector map based on generalization, which just
> happens to look very close to the originally manually generalized
> one: that was the whole target of the effort. They make the rasters
> available for clients, as it is just an easy way to consume the data,
> styled and all.

If that is the result of what you so boldly described as the first fully 
automated generalization workflow of a national mapping organization 
that would be fairly underwhelming - both in terms of "fully automated" 
and "successful".

-- 
Christoph Hormann
http://www.imagico.de/


Re: [OSM-dev] Generalisation

2018-05-02 Thread Marco Boeringa

Hi Tomas,

You do realize the 1-2 years is well after the 2013 date when the Dutch 
Kadaster started to publish their work?


As to Lithuania, I can't speak for your country, but your Swedish Baltic 
brethren actually adopted the Dutch Kadaster's approach, including the 
developed models, through a cooperation agreement:


https://kartographie.geo.tu-dresden.de/downloads/ica-gen/symposium2015/Sweden_Abstract_NMA_Workshop_Amsterdam_Dec_2015.pdf

Maybe your Lithuanian cadastre looked over the shoulders of the Swedes? 
At the very least, they may have gotten a little inspiration... ;-), 
although they may well have developed this entirely on their own using 
the same ESRI tools. For sure, the Dutch Kadaster seems to have been 
very open about their specific development work in an international 
context...


Marco


On 2018-05-02 at 22:24, Tomas Straupis wrote:



However, as to interesting stuff to read: our national Kadaster of the
Netherlands, is actually the very first and only national mapping
organization world wide, that has successfully managed to implement a fully
automated generalization work flow for generating 1:50k maps from 1:10k
maps, including landuses, waterways, highways, but also generalizing
built-up areas and buildings. They used a range of cartographic
generalization tools from ArcGIS (that I didn't use...).

   Congratulations to the national Kadaster, but I'm not sure you're
correct about "first and only". Our local (Lithuanian) land agency (or,
to be more specific, gis-centras) completed automated generalisation
1-2 years ago (using ESRI tools as well). As far as I know it is fully
automated and done in about a day.

   Most GIS people refer to a work by Sandro Savino, "A solution to the
problem of the generalization of the Italian geographical databases
from large to medium scale: approach definition, process design and
operators implementation". The author claims to have completed automated
generalisation for Italy, and the work dates to 2011. It is very
interesting because, instead of referring to closed commercial tools, it
gives a very detailed description of how to actually do things.

   Also, Swisstopo is known to have been doing very high quality
generalisation for years(?).

   But thank you for your links, it is interesting to learn how
different countries handle generalisation.





[OSM-dev] osm2pgsql release 0.96.0

2018-05-02 Thread Sarah Hoffmann
Hi,

we are happy to announce a new release 0.96.0 of osm2pgsql.

This release mostly fixes a number of regressions introduced
with the switch to libosmium and brings a couple of improvements
in the build system. Contrary to what was announced for the
last version, this release still supports old-style multipolygons.

Changes include

- memory for caches and flatnode storage is freed earlier, leaving
  more RAM to PostgreSQL during indexing

- extend web Mercator to 89.99 latitude again, reducing broken polygons

- skip objects with no tags during initial import, improving
  performance during first import stage

- support LuaJIT for faster processing of Lua tag transforms
  (thanks to @mmd)

- update to libosmium 2.14

- Windows builds for 32bit are now provided via AppVeyor
  (thanks to @alirdn)

- bug fixes for tile expiry (thanks to @nakaner)

For anybody building against an external libosmium, please note
that you now also need to configure an external protozero library
separately.
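For those building against an external libosmium, a configuration along these lines may be what is needed. This is a sketch only: the cache variable names `OSMIUM_INCLUDE_DIR` and `PROTOZERO_INCLUDE_DIR` are assumptions based on common CMake Find-module conventions, so please check the `cmake/` directory of your osm2pgsql checkout for the actual names.

```shell
# Hypothetical out-of-tree build against external header-only libosmium
# and protozero checkouts; the -D variable names are assumptions, verify
# them against the Find modules shipped with osm2pgsql.
git clone https://github.com/osmcode/libosmium.git
git clone https://github.com/mapbox/protozero.git
git clone https://github.com/openstreetmap/osm2pgsql.git
mkdir osm2pgsql/build && cd osm2pgsql/build
cmake .. \
    -DOSMIUM_INCLUDE_DIR=$PWD/../../libosmium/include \
    -DPROTOZERO_INCLUDE_DIR=$PWD/../../protozero/include
make
```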

Kind regards

Sarah


Re: [OSM-dev] Generalisation

2018-05-02 Thread Marco Boeringa

Christoph,

This is a bit like the Vatican telling Galileo that the Earth doesn't 
revolve around the Sun, but the other way around...


Have you even looked at the links I provided? I can assure you (living 
in the Netherlands myself, I think I have a better appreciation of this 
specific effort) that this is no "marketing hyperbole".


They did fully automate the generalization process from a 1:10k base to 
1:25k and 1:50k. The two years update interval is the amount of time 
needed to update the base 1:10k TOP10NL vector map.


Even if you fully automate the generalization process to produce medium 
and small scale maps from large scale ones, you still need time to 
update the base 1:10k large scale one. A two year update interval means 
they fly each part of the entire country, capturing 1:5k high resolution 
stereo aerials, in that period of time. That two year cycle is a huge 
achievement: in most countries the update cycle of topographic sheets is 
a minimum of 5-10 years, in some cases 25 years..., meaning it takes a 
minimum of 5-10 years before a given published map sheet is updated to 
the latest on-the-ground state as captured by new aerials (which may 
already be outdated by the time they are actually processed into map 
sheets). The two year cycle in the Netherlands is in fact to a large 
extent the result of the automated generalization employed for the 
medium and small scale maps, freeing up the workforce to maintain the 
base 1:10k TOP10NL vector map instead of needing to maintain multiple 
map series concurrently.


"And also it is ultimately: Demo or it didn't happen - at the moment the 
only thing you can get at 1:50k is the old style map:

https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster"

This is a serious misunderstanding. This is a rasterized version of 
the new style 1:50k vector map based on generalization, which just 
happens to look very close to the originally manually generalized one: 
that was the whole target of the effort. They make the rasters available 
for clients, as it is just an easy way to consume the data, styled and all.


"and the processed geometry data set (without any labeling information):
https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topnl/topnl-actueel/top50nl"

And here you are actually pointing out one of the 1:50k vector products 
(in GML format) that they make available based on the described new 
workflows, so I don't understand your argument?...


Marco



On 2018-05-02 at 21:02, Christoph Hormann wrote:

On Wednesday 02 May 2018, Marco Boeringa wrote:

[...]

However, as to interesting stuff to read: our national Kadaster of
the Netherlands, is actually the very first and only national mapping
organization world wide, that has successfully managed to implement a
fully automated generalization work flow for generating 1:50k maps
from 1:10k maps, [...]

That is largely marketing hyperbole.  The fact that no one tends to
define the terms "fully automated" and "successfully" in such a context
should serve as a warning.

Institutional mapping in various countries has used algorithmic geometry
processing in production of maps for quite some time but most of them
(including the ones in the Netherlands) tend to still maintain a
traditional view of the cartographic process.  They are for example
speaking of a two year update interval which would be quite curious if
the processes were indeed fully automated according to the common
understanding of this term.

And also it is ultimately: Demo or it didn't happen - at the moment the
only thing you can get at 1:50k is the old style map:

https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topraster/topraster-actueel/top50raster

and the processed geometry data set (without any labeling information):

https://www.pdok.nl/nl/producten/pdok-downloads/basisregistratie-topografie/topnl/topnl-actueel/top50nl

Without a styled map rendering this is not really something you can
seriously evaluate (although you can see quite a few cases of geometric
incompatibilities in the geometries).





Re: [OSM-dev] Generalisation

2018-05-02 Thread Tomas Straupis
Hello

2018-05-02 19:33 GMT+03:00 Marco Boeringa wrote:
> The generalization I wrote about was just a crude basic generalization of
> vector (building) data of OSM using a default tool of ESRI's ArcGIS. The
> specific tool used (Simplify Polygon), has more advanced settings than
> standard Douglas Peucker, but by itself does nothing really special other
> than weeding out vertices / nodes. I just attempted to use it with different
> tolerances to see what the results would be, and concluded the resulting
> defects in building topology, were not worth the reduction in file size.

  You've probably used this:
  
http://desktop.arcgis.com/en/arcmap/10.3/tools/cartography-toolbox/simplify-polygon.htm

  And I'm talking about this:
  
http://desktop.arcgis.com/en/arcmap/10.3/tools/coverage-toolbox/simplify-building.htm

  But both of these are closed and tied to their proprietary
architecture. And there is even less information on their implementation
than in the first(?) scientific paper about actual building
generalisation algorithms: Sester, M. (2000), "Generalization Based on
Least Squares Adjustment". In: ISPRS (ed.)

> However, as to interesting stuff to read: our national Kadaster of the
> Netherlands, is actually the very first and only national mapping
> organization world wide, that has successfully managed to implement a fully
> automated generalization work flow for generating 1:50k maps from 1:10k
> maps, including landuses, waterways, highways, but also generalizing
> built-up areas and buildings. They used a range of cartographic
> generalization tools from ArcGIS (that I didn't use...).

  Congratulations to the national Kadaster, but I'm not sure you're
correct about "first and only". Our local (Lithuanian) land agency (or,
to be more specific, gis-centras) completed automated generalisation
1-2 years ago (using ESRI tools as well). As far as I know it is fully
automated and done in about a day.

  Most GIS people refer to a work by Sandro Savino, "A solution to the
problem of the generalization of the Italian geographical databases
from large to medium scale: approach definition, process design and
operators implementation". The author claims to have completed automated
generalisation for Italy, and the work dates to 2011. It is very
interesting because, instead of referring to closed commercial tools, it
gives a very detailed description of how to actually do things.

  Also, Swisstopo is known to have been doing very high quality
generalisation for years(?).

  But thank you for your links, it is interesting to learn how
different countries handle generalisation.

-- 
Tomas


Re: [OSM-dev] Generalisation

2018-05-02 Thread Marco Boeringa

Hi Tomas,

The generalization I wrote about was just a crude basic generalization 
of vector (building) data of OSM using a default tool of ESRI's ArcGIS. 
The specific tool used (Simplify Polygon) has more advanced settings 
than standard Douglas-Peucker, but by itself does nothing really special 
other than weeding out vertices / nodes. I just attempted to use it with 
different tolerances to see what the results would be, and concluded the 
resulting defects in building topology were not worth the reduction in 
file size.
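As an aside, the kind of vertex weeding such tools build on can be illustrated with the classic Douglas-Peucker algorithm. The sketch below is a generic textbook implementation (not ESRI's actual code); note that because each ring is simplified independently, shared boundaries between adjacent buildings diverge, which is exactly the kind of topology defect mentioned above:

```python
import math

def perp_dist(pt, a, b):
    """Perpendicular distance from pt to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:            # degenerate chord (closed ring)
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tol):
    """Drop every vertex closer than tol to the chord between the
    endpoints, recursing on the farthest vertex that survives."""
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right

# A building outline with a 10 cm bump on one wall; a 0.5 m tolerance
# removes the bump but keeps the four corners.
ring = [(0, 0), (4, 0), (4, 0.1), (8, 0), (8, 6), (0, 6), (0, 0)]
print(douglas_peucker(ring, 0.5))
# → [(0, 0), (8, 0), (8, 6), (0, 6), (0, 0)]
```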


Of course, if your city's buildings are far more detailed than the 
average building in OSM, e.g. an import of official government data 
measured down to centimeter level using land surveying techniques, with 
rather large vertex counts on average, I can imagine that even simple 
generalization techniques may reduce vertex counts more than I achieved. 
It also depends a lot on how much you personally tolerate artefacts... 
(my tolerance is low for buildings).


However, as to interesting stuff to read: our national Kadaster of the 
Netherlands, is actually the very first and only national mapping 
organization world wide, that has successfully managed to implement a 
fully automated generalization work flow for generating 1:50k maps from 
1:10k maps, including landuses, waterways, highways, but also 
generalizing built-up areas and buildings. They used a range of 
cartographic generalization tools from ArcGIS (that I didn't use...).


The results achieved by the Dutch Kadaster closely mimic what Imagico 
describes as a more holistic approach to generalization, and are largely 
true to how cartographers traditionally generalized maps manually. In 
fact, one of the key aspects of the workflow developed by the Dutch 
Kadaster was to mimic as closely as possible the inherent "rules" their 
cartographers had used and refined over decades, or more than a century, 
to "generalize" maps to smaller scales.


However, if you read about the level of effort needed to achieve this 
(years of development by a small team of employees / researchers, and a 
huge tool chain built up), and the sheer processing power needed to do 
such a sophisticated generalization, it is utterly clear you cannot do 
this in real time. It is only worth the effort in organizations like 
national mapping agencies, where the ultimate gain of automation (fully 
replacing the manual conversion of topographic maps from one scale to 
another, or the need to keep separate workflows for different scale map 
series (1:10k, 1:25k, 1:50k, 1:100k, 1:250k etc.) alive) far outweighs 
the effort to develop such a generalization tool chain and workflow in 
the long run.


They now call this workflow "AUTOgen".

The Dutch Kadaster was actually awarded a prize by ESRI for this work. 
See also this ArcNews bulletin (pages 19-21): 
https://www.esri.com/~/media/Files/Pdfs/news/arcnews/winter1314/winter-2013-2014.pdf


Some links to this work by the Dutch Kadaster:
- 
https://repository.tudelft.nl/islandora/object/uuid:12f0c152-958a-4688-a56d-5e30f7540a68/datastream/OBJ
- 
https://www.kadaster.com/documents/33433/36597/An+overview+of+the+Dutch+approach+to+automatic+generalisation/6af54a07-3188-41db-81d2-fdc2e1d4094b
- 
https://www.kadaster.com/documents/33433/36597/Feasibility+study+on+an+automated+generalisation/dbf664a7-160f-456d-9559-32263ef6793f
- 
https://www.researchgate.net/publication/299455974_Automated_generalisation_in_production_at_Kadaster_NL


Links to English pages of the Dutch Kadaster:
https://www.kadaster.com/international-consultancy
https://www.kadaster.com/automatic-generalisation

Other interesting information regarding buildings (LODs) from research 
involving one of the people also involved in the Kadaster work:

https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XXXVIII-4-C26/7/2012/isprsarchives-XXXVIII-4-C26-7-2012.pdf

(Note: I wasn't involved in any of this by the way, just know of this work)

Marco

On 2018-04-16 at 19:23, Tomas Straupis wrote:

2018-04-16 19:34 GMT+03:00 Marco Boeringa wrote:

No, buildings are not the most interesting. I once generalized all buildings
in Denmark. It only reduced the storage by maybe 5%, at the high cost of
heavily distorting a large number of them. Most buildings in OSM are in fact
already in their most generalized state: just 4 nodes. Unless you think
triangles are a suitable representation ;-)

   Interesting, what algorithm did you use?

   I'm playing around in Vilnius, which has urban houses, big block
houses, industrial zones and an old town with lots of connected
buildings of very irregular shapes.
   In Vilnius there are 54267 buildings tagged with 366979 vertices.
   Clustering them with a distance of 5m gives 45810 objects (of course
with the same number of vertices).
   Removing buildings with area < 100 that have neighbours within 500
meters, I'm left with 28974 buildings with 299224 vertices.
   Simplification (amalgamating buildings in the cluster and trying to
remove edges < 20m)
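The clustering by distance described above can be illustrated with a simple single-linkage pass using union-find. This is a generic stdlib-only sketch of the idea (not the actual tool used), run here on building centroids; a real implementation would measure polygon-to-polygon distance and use a spatial index instead of the O(n²) pair loop:

```python
import math

def cluster_by_distance(points, max_dist):
    """Single-linkage clustering: a point joins a cluster as soon as it
    lies within max_dist of any member (union-find over all pairs)."""
    parent = list(range(len(points)))

    def find(i):
        # path-halving find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= max_dist:
                parent[find(i)] = find(j)  # union the two clusters

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Four building centroids, clustering distance 5 m: the first two are
# 3 m apart and merge, the other two stay on their own.
groups = cluster_by_distance([(0, 0), (3, 0), (10, 0), (30, 0)], 5)
print(len(groups))  # → 3
```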