Thanks it worked :D
Nic Roets wrote:
There's a bug in the code that generated this week's planet. You
should either wait until next week or filter the planet with the
following command:
bzcat /osm/planet-10*.osm.bz2 |egrep -v '#[0-9]*;'|...
There has been a long discussion on 'dev',
Nic Roets wrote:
(since we got rid of the segments)
From 8.2 GB to 8.1 GB:
http://planet.openstreetmap.org/
Maybe something is wrong with it.
I don't know if anybody has the same problem, but I can't manage to
complete an extract with osmosis. I'm doing the same thing as every time
and it
There's a bug in the code that generated this week's planet. You
should either wait until next week or filter the planet with the
following command:
bzcat /osm/planet-10*.osm.bz2 |egrep -v '#[0-9]*;'|...
There has been a long discussion on 'dev', mentioning other remedies.
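For anyone wondering why the filter helps, here is a minimal sketch (an assumed failure mode, not the actual planet bug): if the dump contains a character reference to a code point that XML 1.0 forbids, such as `&#11;`, a compliant parser aborts right there, and dropping the lines that match the same `#[0-9]*;` pattern lets the rest parse.

```python
# Sketch of the assumed failure mode: a character reference to a code
# point that XML 1.0 forbids, such as &#11;, makes a compliant parser
# abort, so dropping the offending lines lets the rest parse cleanly.
import re
import xml.etree.ElementTree as ET

lines = [
    "<osm>",
    '<node id="1" lat="0.0" lon="0.0"/>',
    '<node id="2" user="bad&#11;name" lat="1.0" lon="1.0"/>',  # invalid reference
    "</osm>",
]

try:
    ET.fromstring("\n".join(lines))
    aborted = False
except ET.ParseError:
    aborted = True  # the parser stops at the invalid character reference

# Same idea as the egrep -v '#[0-9]*;' filter: drop matching lines.
kept = [l for l in lines if not re.search(r"#[0-9]*;", l)]
tree = ET.fromstring("\n".join(kept))  # parses; one node was lost
```

Only the matching lines are lost, which on a planet file of billions of lines is negligible.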
Thx for help, I'll try it.
Now I have to follow 'dev' too :D
Will this also be a problem if you try to import via osm2pgsql into postgres?
Thanks,
John
My understanding is that all XML-compliant* parsers will abort at the
file offsets that Frederik mentions.
My advice is to use the egrep filter when in doubt, because you will
lose no more than a dozen lines in a planet file of billions of
lines.
*: (My split program is not compliant and will
That is very deep C++ code!
Care to comment on how it works?
I would be very interested to understand its performance! It looks very fast.
mike
Hello James,
I wanted to split the planet into overlapping bboxes like this (click
to see actual size):
http://dev.openstreetmap.de/gosmore/
On talk I described how I was dissatisfied with osmosis's memory
consumption. So I came up with this observation: Most entities will
end up in one or two
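A hypothetical sketch of that single-pass idea, assuming bboxSplit streams the planet once and routes each node to every output whose box contains it (the real tool is Nic's C++ code; the argument order min-lat, min-lon, max-lat, max-lon is inferred from the example invocation and is an assumption here):

```python
# Hypothetical illustration of the single-pass split: stream the nodes
# once and append each one to every bounding box that contains it.
# Argument order (min_lat, min_lon, max_lat, max_lon) is an assumption.
boxes = [
    {"bounds": (-85.05113, 73.12500, 9.44906, 180.0), "nodes": []},
    {"bounds": (-25.48295, 120.58594, 72.91964, 180.0), "nodes": []},
]

def route(lat, lon):
    for box in boxes:
        min_lat, min_lon, max_lat, max_lon = box["bounds"]
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            box["nodes"].append((lat, lon))

route(0.0, 150.0)    # falls inside both overlapping boxes
route(-50.0, 100.0)  # falls inside the first box only
```

Because the boxes overlap, a node near an edge is simply written to every matching output, so memory use stays flat no matter how big the input is.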
You are bunzipping the code? You are scanning the bzip blocks?
It is faster than the bunzip. But maybe you mean that it is very fast.
I have experimented with bzip2recover to extract blocks on their own;
I made a Perl script to extract blocks from a Wikipedia file that can be
used to run the
No. It runs on the uncompressed planet, like this:
bzcat /osm/planet-10*.osm.bz2 | /osm/gosmore/bboxSplit \
-85.05113 73.12500 9.44906 180.0 gzip 0720048510241024.osm.gz \
-25.48295 120.58594 72.91964 180.0 gzip 0855020310240587.osm.gz \
-85.05113 98.43750
(since we got rid of the segments)
From 8.2 GB to 8.1 GB:
http://planet.openstreetmap.org/
___
talk mailing list
talk@openstreetmap.org
http://lists.openstreetmap.org/listinfo/talk
On 11 March 2010 15:50, Nic Roets nro...@gmail.com wrote:
(since we got rid of the segments)
From 8.2 GB to 8.1 GB:
http://planet.openstreetmap.org/
Interesting...
There has been a change to the dumping script since the previous week:
http://trac.openstreetmap.org/changeset/20396
But more
lots of dupe node removal?
On Mar 11, 2010, at 3:50 PM, Nic Roets wrote:
(since we got rid of the segments)
From 8.2 GB to 8.1 GB:
http://planet.openstreetmap.org/
No.
From 8.2 GB to 8.1 GB:
http://planet.openstreetmap.org/
planet-091007.osm.bz2 09-Oct-2009 03:37 7.4G
planet-091014.osm.bz2 14-Oct-2009 20:35 7.2G
And I'm sure it has happened before.
What exactly were you trying to tell us? :)
On 11 March 2010 16:03, Lars Francke lars.fran...@gmail.com wrote:
planet-091007.osm.bz2 09-Oct-2009 03:37 7.4G
planet-091014.osm.bz2 14-Oct-2009 20:35 7.2G
I tweaked the bz2 compression block size around then, which would
account for that size
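For reference, bzip2's "compression level" is really the block size in 100 kB units, so a block-size tweak genuinely can move the total. A quick sketch with Python's bz2 module:

```python
# bzip2's compresslevel is the block size in 100 kB units (1 = 100 kB,
# 9 = 900 kB). Larger blocks generally compress big, redundant input
# better, so changing the block size shifts the output size.
import bz2

data = b"<node lat='1.00000' lon='2.00000'/>\n" * 20000  # ~700 kB, repetitive
small_blocks = bz2.compress(data, compresslevel=1)  # several 100 kB blocks
big_blocks = bz2.compress(data, compresslevel=9)    # one 900 kB block

assert bz2.decompress(big_blocks) == data  # same data either way
print(len(small_blocks), len(big_blocks))
```

The stream header even records the block size ("BZh1" vs "BZh9"), which is why two dumps of near-identical data can differ in size after such a change.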