On 15.12.2011 12:10, Frederik Ramm wrote:
But: Anyone who really wants to, and has the resources to, can set up a
full database today, feed it with minutely diffs through Osmosis, and
allow a merry band of replication clients down the line.
problem is that Postgres replication (the bundled one
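The Osmosis side of the pipeline Frederik describes looks roughly like this (working directory, database name and paths are illustrative):

```shell
# One-time setup: create the replication working directory that will
# track the minutely diff state (configuration.txt / state.txt)
osmosis --read-replication-interval-init workingDirectory=/var/lib/replication

# From cron: fetch the accumulated minutely diffs and merge them into
# a single change file ...
osmosis --read-replication-interval workingDirectory=/var/lib/replication \
        --simplify-change --write-xml-change /tmp/changes.osc.gz

# ... then apply it to the rendering database
osm2pgsql --append --slim -d gis /tmp/changes.osc.gz
```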
On 16.12.2011 11:09, Hartmut Holzgraefe wrote:
So even a public PG replication master would only make sense for those
who run exactly the same architecture, or multiple masters for
different architectures would be needed ... :/
At one of the Hack-Weekends someone played around with
On 16.12.2011 11:21, Peter Körner wrote:
At one of the Hack-Weekends someone played around with distributing the
SQL-Commands issued by osm2pgsql via XMPP.
with the SQL command execution, especially the index creation, being
the most expensive part of an osm2pgsql run this would only save
On 16.12.2011 12:47, Hartmut Holzgraefe wrote:
On 16.12.2011 11:21, Peter Körner wrote:
At one of the Hack-Weekends someone played around with distributing the
SQL-Commands issued by osm2pgsql via XMPP.
with the SQL command execution, especially the index creation, being
the most expensive
Hi,
On 12/16/11 12:49, Peter Körner wrote:
with the SQL command execution, especially the index creation, being
the most expensive part of an osm2pgsql run this would only save about
1/4 to 1/3 of the total planet import execution time, I'm afraid ...
It would help with keeping updated: you
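The scheme being discussed can be sketched as follows; the XMPP transport is stubbed with an in-memory queue here, and all names are illustrative rather than taken from the actual hack-weekend code:

```python
from queue import Queue

class SqlFanout:
    """Capture SQL statements on the master and fan them out to replicas.

    In the hack-weekend experiment the transport was XMPP; here it is
    stubbed with in-memory queues so the sketch is self-contained.
    """

    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        # Each replica gets its own queue of pending statements.
        q = Queue()
        self.subscribers.append(q)
        return q

    def publish(self, sql, params=()):
        # osm2pgsql would call this for each statement it executes.
        for q in self.subscribers:
            q.put((sql, params))

def replay(queue, cursor):
    # On a replica: pop statements and execute them against the local DB.
    while not queue.empty():
        sql, params = queue.get()
        cursor.execute(sql, params)
```

This saves each replica from re-parsing the diffs itself, though as noted above the expensive index creation still happens once per replica.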
On 01/-10/-28163 12:59 PM, Peter Körner wrote:
On 16.12.2011 12:47, Hartmut Holzgraefe wrote:
On 16.12.2011 11:21, Peter Körner wrote:
At one of the Hack-Weekends someone played around with distributing the
SQL-Commands issued by osm2pgsql via XMPP.
with the SQL command execution,
Hi,
We're working
hard on getting the relevant hardware in place to start trialling this
out, but it's a big project.
Many thanks for the insight
The original topic was about replication for rendering, so a comment
on that
Whooops, I haven't read that thread far enough in the past to
On 01/-10/-28163 12:59 PM, sly (sylvain letuffe) wrote:
Hi,
We're working
hard on getting the relevant hardware in place to start trialling this
out, but it's a big project.
Many thanks for the insight
The original topic was about replication for rendering, so a comment
on that
Well, I wonder why there isn't a feasible way to do this with Postgres
replication yet.
There was a talk at FOSS4G 2011
(http://scanningpages.wordpress.com/2011/09/18/postgis-replication-foss4g/)
but I did not attend, slides are not that detailed, and the videos are not
available so far. I've just
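For what it's worth, wiring up the streaming replication that ships with PostgreSQL 9.x takes only a handful of settings; the catch, mentioned elsewhere in this thread, is that it replicates at the byte level, so every consumer must run the same major version and architecture as the master. A minimal sketch (host name and user are illustrative):

```
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 128        # WAL to retain for lagging standbys

# recovery.conf on a standby, set up from a base backup of the master
standby_mode = 'on'
primary_conninfo = 'host=master.example.org user=replicator'
```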
Hi,
On 12/09/2011 02:11 AM, fatzopilot wrote:
Well, I wonder why there isn't a feasible way to do this with Postgres
replication yet.
I think this is simply because it is not feasible. It is difficult
enough to keep replication going over unreliable connections but the
initial setup is
Hi,
On Thursday 15 December 2011, Frederik Ramm wrote:
But: Anyone who really wants to, and has the resources to, can set up a
full database today, feed it with minutely diffs through Osmosis
That is true, but there are no solutions for below-minute synchronisation
(like near real time
On 15/12/11 11:44, sly (sylvain letuffe) wrote:
That is true, but there are no solutions for below-minute synchronisation
(like near real time synchronisation)
For all practical purposes minutely diffs are real time as far as OSM is
concerned.
Although the need for real time
On 15/12/11 12:32, sly (sylvain letuffe) wrote:
Examples of bad requests are:
http://www.openstreetmap.org/api/0.6/relation/1403916/full
That looks like something I need to investigate. My guess is that it is
hitting the timeout but it should be reporting that better.
Re-Hi,
Examples of bad requests are:
http://www.openstreetmap.org/api/0.6/relation/1403916/full
That looks like something I need to investigate. My guess is that it is
hitting the timeout but it should be reporting that better.
I guess you guessed it well, there is just too much (to
On 15 December 2011 12:32, sly (sylvain letuffe) li...@letuffe.org wrote:
Would the maintenance and stress on this server (and therefore on its sysadmins)
be painless if all read API calls were directed to other servers?
Let me reply to this in slightly hand-wavy terms.
Most of the load
On Thu, Dec 01, 2011 at 07:14:29AM +0100, Yves wrote:
Really?
What is a decent server ?
We're using (I think) a 64-bit quad-core VM with 6 gigs of RAM and about 100
GB of
disk. With a close-to-default feature filter (see imposm docs on mapping
files) it takes between 48 and 50 hours.
In
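For reference, an imposm (version 2, current at the time) run of that kind is roughly the following; database name, host and file names are illustrative:

```shell
# Parse the planet into imposm's intermediate cache files ...
imposm --read planet-latest.osm.pbf
# ... then import into PostGIS and build indexes; a custom mapping
# file can be supplied to restrict the imported features (see the
# imposm documentation on mapping files).
imposm --write --optimize --database osm --host localhost
```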
Yes, I'd like the whole rendering stack to become more lightweight, at least
for small extracts, so that more people can play with rendering their own
tiles, either on their home laptop/desktop or on fairly cheap servers.
Absolutely. A custom rendering server needs to be able to run on low-spec
Nick Whitelegg wrote:
It's certainly something that should be striven for as I suspect that
financial constraints are much more of an issue for the OSM
community than know-how
Developer availability is more of an issue than either. You've been around
here long enough to know that should be
On 12/1/2011 8:39 AM, Richard Fairhurst wrote:
Source material:
https://github.com/kothic/kothic-js/wiki/Tiles-format
https://github.com/kothic/kothic-js/wiki/How-to-prepare-map-style
Just found the following at the first link:
All coordinates of features should be Spherical Mercator
Frederik Ramm wrote:
Hi,
On 11/30/2011 07:25 AM, Ákos Maróy wrote:
What I've tried so far is importing the current planet-XXX.osm.bz2 file
into PostGIS via osm2pgsql, which I have used with the --slim option, as
without it the memory load exceeded the 16GB memory I had in my system
On 01/-10/-28163 12:59 PM, Jukka Rahkonen wrote:
Frederik Ramm wrote:
[...]
After this succeeded, I wanted to try to replicate this database, so I
created a pg_dump using the -Fc switch
This is a bad idea because a significant amount of osm2pgsql import time
is spent building indexes, and
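If one does go the dump/restore route despite that, the custom format at least allows the restore-side index builds to run in parallel; a sketch, with the database name illustrative:

```shell
# Dump in custom format (compressed, selectively restorable)
pg_dump -Fc -f gis.dump gis

# Restore with several parallel jobs, so the index builds that
# dominate the time at least run concurrently (needs PostgreSQL 8.4+)
pg_restore -j 4 -d gis gis.dump
```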
Kai Krueger wrote:
On 01/-10/-28163 12:59 PM, Jukka Rahkonen wrote:
It can be slow but it is not always a bad idea. I have a very lean Linux
virtual server with about 700 MB of memory and it is very slow to import
even Finnish excerpt with osm2pgsql. In addition import tends to fail
totally
Jukka Rahkonen-2 wrote
[...]
For me it takes many hours with the Finnish dataset and if it fails it
happens in some "Going over pending ways" phase. I will need to make some
further tests some day so I can give you better information.
If it is at the very beginning of the Going over pending
Dear All,
Thank you for the detailed responses.
Indeed, I'm using the pg_dump / pg_restore method to transfer a database
to a system running a different version of PostgreSQL. My hope is that
this is faster than loading everything from scratch using osm2pgsql.
So it seems I'll just wait and see :)
On Wednesday, 30 November 2011 07:25:25, Ákos Maróy wrote:
I wonder what ways are there to speed up importing an OSM planet file
into a PostGIS database?
What I've tried so far is importing the current planet-XXX.osm.bz2 file
into PostGIS via osm2pgsql
Imposm, man, imposm. Takes 48
Really?
What is a decent server ?
Yves
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
Iván Sánchez Ortega i...@sanchezortega.es wrote:
On Wednesday, 30 November 2011 07:25:25, Ákos Maróy wrote:
I wonder what ways are there to speed up importing an OSM
Hi,
I wonder what ways are there to speed up importing an OSM planet file
into a PostGIS database?
What I've tried so far is importing the current planet-XXX.osm.bz2 file
into PostGIS via osm2pgsql, which I have used with the --slim option, as
without it the memory load exceeded the 16GB memory I
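For what it's worth, the osm2pgsql knobs that usually matter most on a 16GB machine are the node cache size and, on builds that support it, the number of worker processes; a sketch, with database name and file names illustrative:

```shell
# --slim keeps node locations on disk so the import fits in RAM;
# -C sets the in-memory node cache in MB (leave headroom for
# PostgreSQL's own buffers); --number-processes parallelises some
# CPU-bound stages on builds that support it.
osm2pgsql --slim -C 10000 --number-processes 4 -d gis planet-latest.osm.bz2
```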
On Wed, Nov 30, 2011 at 12:25 AM, Ákos Maróy a...@maroy.hu wrote:
or, to put it in the other perspective: what hardware would be
needed to make this process faster?
I know there has been some work in osm2pgsql to make it multithreaded
to help out a few parts that ARE CPU bound (maybe more
On Wed, Nov 30, 2011 at 07:25, Ákos Maróy a...@maroy.hu wrote:
thus, I wonder, what good ways are there to speed up this process?
or, to put it in the other perspective: what hardware would be
needed to make this process faster?
According to the benchmark page, an SSD can make things faster.
Hi,
On 11/30/2011 07:25 AM, Ákos Maróy wrote:
What I've tried so far is importing the current planet-XXX.osm.bz2 file
into PostGIS via osm2pgsql, which I have used with the --slim option, as
without it the memory load exceeded the 16GB memory I had in my system
significantly (it was using about