On 29/04/2015 11:18, Jochen Topf wrote:

Looking at your proposal, you seem to be very concerned with file size but not
so much with read/write speed. In my experience, reading and writing PBF is
always CPU-bound. Removing complexity could speed this up considerably. But
if the price is that we need zlib (de)compression, it might not be worth it,
because it is rather CPU- and memory-intensive. Currently you can save quite
a lot of CPU time if you do not compress the PBF blocks but leave them
uncompressed. [...]
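Side note for readers who don't know the format: each Blob in a PBF file carries its payload either in a raw field or in a zlib_data field, so a reader only pays the inflate cost for blocks that were actually compressed. Here is a minimal sketch of that branch; it assumes C bindings generated by protoc-c from OSMPBF's fileformat.proto, and blob_payload is a hypothetical helper name, not code from my project:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>
#include "fileformat.pb-c.h"  /* generated by protoc-c from fileformat.proto */

/* Return a malloc'ed buffer holding the block payload, or NULL. */
static uint8_t *blob_payload(const OSMPBF__Blob *blob, size_t *out_len)
{
    if (blob->has_raw) {
        /* Uncompressed block: a plain copy, no inflate cost at all. */
        uint8_t *buf = malloc(blob->raw.len);
        if (buf == NULL)
            return NULL;
        memcpy(buf, blob->raw.data, blob->raw.len);
        *out_len = blob->raw.len;
        return buf;
    }
    if (blob->has_zlib_data && blob->has_raw_size) {
        /* Compressed block: inflate into the declared raw_size. */
        uLongf len = (uLongf)blob->raw_size;
        uint8_t *buf = malloc(len);
        if (buf == NULL)
            return NULL;
        if (uncompress(buf, &len, blob->zlib_data.data,
                       blob->zlib_data.len) != Z_OK) {
            free(buf);
            return NULL;
        }
        *out_len = (size_t)len;
        return buf;
    }
    return NULL;  /* other encodings (lzma_data, ...) not handled here */
}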

Exactly. From my soon-to-be-released code:

"Tests were made on planet-141112.osm.pbf (28406347563 bytes) with an
AMD FX 8350 (8 cores) 4 GHz, 32 GB, data on a WD Red 2TB SATA 3 disk,
system: Ubuntu 14.04.1 installed on an Intel SSD SATA 3.
No inserts were made on the database or dumps, only pbf parsing.

reference (time cp planet-141112.osm.pbf /dev/null): 205s (bandwidth: 132 MiB/s)

with libprotobuf-c.so, default allocator, without assembly support: 799s (bandwidth: 33.9 MiB/s)
with libprotobuf-c.so, sw_pool_t allocator, a little assembly support: 629s (bandwidth: 43.1 MiB/s)"

Most of the time is spent in zlib, libprotobuf-c, and memory allocations. I've addressed the last point with x86_64 assembly language and a pool allocator. I think an optimized rewrite of the libprotobuf library would buy some more speed, but the cost is very high (at least for my application).
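To make the pool-allocator idea concrete, here is a minimal sketch of how a bump-pointer pool can be plugged into protobuf-c via its ProtobufCAllocator hook (protobuf-c 1.x). pool_t, pool_alloc, and pool_free are hypothetical names for illustration, not my sw_pool_t code:

#include <stdlib.h>
#include <protobuf-c/protobuf-c.h>

/* Bump-pointer pool: allocations are carved out of one big buffer,
 * and "freeing" happens in bulk by resetting the offset after each
 * PBF block has been processed. */
typedef struct {
    char  *base;
    size_t used;
    size_t capacity;
} pool_t;

static void *pool_alloc(void *allocator_data, size_t size)
{
    pool_t *pool = allocator_data;
    size = (size + 15) & ~(size_t)15;         /* keep 16-byte alignment */
    if (pool->used + size > pool->capacity)
        return NULL;                          /* pool exhausted */
    void *p = pool->base + pool->used;
    pool->used += size;
    return p;
}

static void pool_free(void *allocator_data, void *pointer)
{
    (void)allocator_data;                     /* no-op: freed in bulk on reset */
    (void)pointer;
}

int main(void)
{
    pool_t pool = { malloc(64u << 20), 0, 64u << 20 };  /* 64 MiB pool */
    ProtobufCAllocator pool_allocator = {
        .alloc          = pool_alloc,
        .free           = pool_free,
        .allocator_data = &pool,
    };
    (void)pool_allocator;

    /* Per PBF block (the unpack name depends on your generated bindings):
     *   msg = osmpbf__primitive_block__unpack(&pool_allocator, len, buf);
     *   ... walk msg ...
     *   pool.used = 0;   // one reset replaces thousands of free() calls
     */
    free(pool.base);
    return 0;
}

The default allocator calls malloc()/free() for every string, node, and repeated field during unpacking; replacing all of that with a pointer bump, together with a little assembly, is what accounts for the 799s vs 629s difference above.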

The best advice I can give: use an SSD ;-)

Best regards

