Hi Peter, hi everybody,

I did some tests with the history plugin and was able to import an area
of interest (AOI) of the full-planet.dump by filtering it through a
bounding polygon. It worked without problems (I had to remove the
relations, of course, because they are not supported yet).

How can I find out whether an object (node, way) is "visible" or marked
as deleted? When importing an AOI from a planet file this information is
not needed, because only the currently visible objects are imported
(only these are part of the planet file). But with the history dump
there might be nodes that were visible once and have been deleted in a
more recent version. How could we handle this information? I guess it is
not available in the history dump, is it? What happens to "deleted"
objects while creating the full-history.dump? Are they ignored
completely? Is just the "invisible" version ignored? Or is the
"invisible" version in the history dump, but the information cannot be
reproduced?
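
If the dump does carry a "visible" attribute on every element version,
a quick check like this against a small extract would already show it
(Python, file name and details are just an illustration):

    import xml.etree.ElementTree as ET

    def deleted_versions(path):
        # yields (type, id, version) for every version marked visible="false"
        for _, elem in ET.iterparse(path, events=("end",)):
            if elem.tag in ("node", "way"):
                if elem.get("visible", "true") == "false":
                    yield elem.tag, elem.get("id"), elem.get("version")
            elem.clear()  # drop children so big files don't blow up memory

    for kind, osm_id, version in deleted_versions("full-history-aoi.osm"):
        print(kind, osm_id, "version", version, "is deleted")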

Marco

On 26.08.2010 00:19, Peter Körner wrote:
> Hi Marco
>
> The first snapshot is out. Unfortunately the hstore migration that
> Brett is still in the middle of makes the pgsnapshot tests fail, which
> is why Hudson is not providing nightly builds anymore.
>
> Because of that you'll need to compile osmosis yourself. I attached
> instructions to this mail that also include the concrete plugin usage.
>
> The following tasks are available: --write-pgsql-history and
> --write-pgsql-history-dump. They correspond closely to --write-pgsql
> and --write-pgsql-dump.
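>
> Assuming they take the same connection options as --write-pgsql (the
> attached instructions have the exact usage), an invocation would look
> roughly like
>
>   osmosis --read-xml full-history-extract.osm \
>     --write-pgsql-history host=localhost database=osm user=osm password=...
>
> where the file name and connection values are of course placeholders.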
>
> All features that are marked as experimental may or may not work, and
> of course they will be painfully memory-intensive on larger datasets
> because of the lack of a good store implementation.
>
> Peter
>
> On 25.08.2010 15:16, Marco Lechner - FOSSGIS e.V. wrote:
>> Hi Peter,
>>
>> I'm very interested in your history extension and I'm going to test it
>> as soon as a first snapshot is available. Will it be possible to
>> consume a --bounding-polygon stream from osmosis? Or will it just
>> import the whole history planet?
>>
>> Marco
>>
>> On 25.08.2010 15:14, Peter Körner wrote:
>>> Hi all
>>>
>>> After a little playing around I now have an idea of how I'm going to
>>> implement everything. I'll stay as close as possible to the regular
>>> simple schema and to the way the pgsql tasks work.
>>>
>>> Just as with the optional linestring/bbox builder, the history import
>>> tasks will serve more than one schema. I'm leaving relations out,
>>> again.
>>>
>>> the regular simple schema
>>> ->  it's the basis of everything, but not capable of holding history data
>>>
>>> + history columns
>>> ->  create and populate an extra column in way_nodes to store
>>>     the way version.
>>> ->  change the PKs of way_nodes to allow
>>>     more than one version of an element
>>>
>>> + way_nodes version builder
>>> ->  create and populate an extra column in way_nodes that holds
>>>     the node version that corresponds to the way's timestamp
>>>
>>> + minor version builder
>>> ->  create and populate an extra column in ways and way_nodes to store
>>>     the way's minor versions, which are generated by changes to the
>>>     nodes of the way between version changes of the way itself.
>>>
>>> + from-to-timestamp builder
>>> ->  create and populate an extra column in the nodes and ways tables
>>>     that specifies the date until which a version of an item was "the
>>>     current one". After that time the next version of the same item
>>>     was "current" (or the item was deleted). The tstamp field, in
>>>     contrast, contains the starting date from which an item was
>>>     "current". (A small sketch of this idea follows below the list.)
>>>
>>> + linestring / bbox builder
>>> ->  just the same as with the regular simple schema; works for all
>>>     version and minor-version rows
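>>>
>>> A minimal sketch of the from-to idea, in Python purely as an
>>> illustration (not how the task itself will be implemented):
>>>
>>>     def add_valid_to(versions):
>>>         # versions: all versions of one element, each a dict with
>>>         # at least a 'version' and a 'tstamp' key
>>>         versions = sorted(versions, key=lambda v: v["version"])
>>>         for cur, nxt in zip(versions, versions[1:]):
>>>             cur["valid_to"] = nxt["tstamp"]  # valid until the next version
>>>         versions[-1]["valid_to"] = None      # newest version has no successor (yet)
>>>         return versions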
>>>
>>> By the end of the week I'll get a pre-snapshot out that can
>>> populate the history table with version columns and changed PKs. The
>>> database created from this can be used to test Scott's SQL-only
>>> solution [1].
>>>
>>> It will also contain a first implementation of the way_nodes version
>>> builder, but only with an example implementation of the NodeStore that
>>> performs badly on bigger files.
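>>>
>>> Such an example store boils down to keeping every node version in
>>> memory, roughly like this (Python only to show the shape; names are
>>> made up, the real store is Java):
>>>
>>>     from collections import defaultdict
>>>
>>>     class SimpleNodeStore:
>>>         def __init__(self):
>>>             # node id -> list of (tstamp, version), lat/lon omitted here
>>>             self._versions = defaultdict(list)
>>>
>>>         def add(self, node_id, version, tstamp):
>>>             self._versions[node_id].append((tstamp, version))
>>>
>>>         def version_at(self, node_id, way_tstamp):
>>>             # node version that was current at the way's timestamp,
>>>             # i.e. the newest node change not later than way_tstamp
>>>             best = None
>>>             for tstamp, version in sorted(self._versions[node_id]):
>>>                 if tstamp <= way_tstamp:
>>>                     best = version
>>>                 else:
>>>                     break
>>>             return best
>>>
>>> which is exactly why it gets painful on bigger files.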
>>>
>>>
>>> Brett, the pgsql tasks currently write (in COPY mode) all data to temp
>>> files first. The process seems to be
>>>
>>> PlanetFile ->  NodeStoreTempFile ->  CopyFormatTempFile ->  PgsqlCopyImport
>>>
>>> In osm2pgsql the COPY data is pushed to PostgreSQL via Unix pipes (5 or
>>> 6 COPY transactions running at the same time in different connections).
>>> This approach skips the CopyFormatTempFile stage. Is there any special
>>> reason it isn't used in the pgsnapshot package?
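>>>
>>> The shape of the "no temp file" variant, sketched in Python with
>>> psycopg2 just to illustrate (the real code is Java and these names are
>>> made up): rows come from a generator and are handed to COPY through a
>>> read()-able wrapper, so nothing is written to disk in between.
>>>
>>>     import psycopg2
>>>
>>>     class IterStream:
>>>         """File-like wrapper around an iterator of COPY-formatted lines."""
>>>         def __init__(self, lines):
>>>             self._it = iter(lines)
>>>             self._buf = ""
>>>
>>>         def read(self, size=-1):
>>>             # hand out `size` characters, pulling in rows as needed
>>>             while size < 0 or len(self._buf) < size:
>>>                 try:
>>>                     self._buf += next(self._it)
>>>                 except StopIteration:
>>>                     break
>>>             if size < 0:
>>>                 size = len(self._buf)
>>>             out, self._buf = self._buf[:size], self._buf[size:]
>>>             return out
>>>
>>>     def stream_copy(conn, table, lines):
>>>         # lines: iterable of tab-separated, newline-terminated rows
>>>         with conn.cursor() as cur:
>>>             cur.copy_expert("COPY " + table + " FROM STDIN", IterStream(lines))
>>>         conn.commit()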
>>>
>>>
>>> Peter
>>>
>>>
>>> [1]
>>> <http://lists.openstreetmap.org/pipermail/dev/2010-August/020308.html>
>>>

_______________________________________________
osmosis-dev mailing list
[email protected]
http://lists.openstreetmap.org/listinfo/osmosis-dev
