I did some more testing.
I've taken a smaller area and put everything into a tmpfs, but even with
the .pbf as well as the tmp files of Osmosis both being stored in RAM,
the performance isn't too good. It improved to about 1500 objects/second,
but this would still mean that all ways (according to
Hi,
the osm2city software should be changed to use an osm2pgsql database
instead of an osmosis database. Not only can a planet be imported in
less than a day with osm2pgsql (if you have SSDs), but also the
osm2pgsql database already has correctly built geometries for all
objects, whereas osm2city
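For reference, a planet import with osm2pgsql is a single command (a sketch; the database name and file paths are placeholders to adjust for your machine):

```shell
# --slim keeps node locations available so ways and relations can be
# assembled; --flat-nodes stores them in a flat file instead of the
# database, which is much faster for a full planet (put it on an SSD).
# -d names the target PostgreSQL database.
osm2pgsql --slim --flat-nodes /ssd/nodes.cache -d osm planet.osm.pbf
```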
Hi,
Building ways and relations requires fast random access, not sequential
read / write speed. I think it is likely that your HDD RAID is the culprit, as
the 96 GB of RAM won't allow you to process everything in RAM. All of the
recent osm2pgsql benchmarks with high throughput for building ways and
The same problem applies to a 3.4 GB .pbf file.
The nodes were done quickly but as soon as it started processing the
ways, it got super slow.
merspieler:
> I've imported small extracts in the past but I've never actually
> monitored the performance of these as they were done in reasonable time.
>
No, Imposm has its own schema.
I never used Osmosis to import a complete planet file, but I would find it
reasonable to start with a small extract, as stated in the osm2city documentation.
Yves
dev mailing list
dev@openstreetmap.org
I've wanted to use osm2pgsql, but the schema is a different one.
The software [1] I'm going to use the database with only supports the Osmosis one.
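For reference, the Osmosis import being discussed is roughly this (a sketch; the connection parameters are placeholders, and the pgsnapshot schema must already be loaded into the database):

```shell
# --write-pgsql loads nodes, ways and relations into the Osmosis
# pgsnapshot schema; this is the phase where random-access I/O on the
# node data starts to hurt.
osmosis --read-pbf file=planet.osm.pbf \
  --write-pgsql host=localhost database=osm user=osm password=secret
```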
As for the hardware:
2x Xeon E5, 8 cores/16 threads
96 GB RAM
5x 4TB HDD in a RAIDZ2
I've done some benchmarking of the RAID, and Osmosis doesn't even reach
Same as Frederik, but I'd also propose Imposm, which is also quite fast.
A brief hardware description would allow us to exclude some bottlenecks.
Yves
Hi,
first question: are you absolutely sure you need an Osmosis import -
does your use case not work with an osm2pgsql import?
Best
Frederik
--
Frederik Ramm ## eMail frede...@remote.org ## N49°00'09" E008°23'33"
I'm currently trying to import the planet.osm.pbf file with osmosis.
While it's quite fast with nodes (took about 7h), it massively slows down
when it comes to the ways.
INFO: Processing Node 6814667967, 307145.5708858228 objects/second.
Oct 04, 2019 9:13:37 AM
OK, I will create the tables as InnoDB tables directly, this is surely
better ;)
Joachim Zobel wrote:
There is another approach: ALTER all InnoDB tables to MyISAM, run
osmosis, and then ALTER them back to InnoDB.
It seems that the INSERT approach scales badly to large InnoDB tables.
Be
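Spelled out, the ALTER round-trip would look like this (a sketch; the table names assume the default Osmosis MySQL schema):

```shell
mysql osm <<'SQL'
-- Switch to MyISAM for the bulk load (no transactional overhead) ...
ALTER TABLE nodes     ENGINE=MyISAM;
ALTER TABLE ways      ENGINE=MyISAM;
ALTER TABLE relations ENGINE=MyISAM;
SQL
# ... run the osmosis import here ...
mysql osm <<'SQL'
-- ... then convert back; note that each ALTER rewrites the whole table.
ALTER TABLE nodes     ENGINE=InnoDB;
ALTER TABLE ways      ENGINE=InnoDB;
ALTER TABLE relations ENGINE=InnoDB;
SQL
```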
I have taken a look at your script. I think it could be useful, IF it is
quicker than using Osmosis.
Do you have some benchmarks?
Best,
Andi
Joachim Zobel wrote:
On Saturday, 29.11.2008, at 18:45 +0100, Andreas Kalsch wrote:
I decompress the data before putting them into Osmosis, but
Whenever I have seen that error in the past it has been because I have
data in the tables already. Changing table types shouldn't have any
effect on duplicate key checks. It'd be worth double checking to make
absolutely sure you have no data in the tables already.
Andreas Kalsch wrote:
Yes
On Sunday, 30.11.2008, at 21:38 +0100, Andreas Kalsch wrote:
I have split up the SQL script into two parts: the first to create the
tables just with primary keys on auto_increment fields, the second to
alter the populated tables.
Be aware that an ALTER TABLE on MySQL (at least with InnoDB) always copies
On Monday, 01.12.2008, at 14:49 +0100, Andreas Kalsch wrote:
Brett Henderson wrote:
Whenever I have seen that error in the past it has been because I have
data in the tables already. Changing table types shouldn't have any
effect on duplicate key checks. It'd be worth double
Andreas Kalsch wrote:
Brett Henderson wrote:
Whenever I have seen that error in the past it has been because I
have data in the tables already. Changing table types shouldn't have
any effect on duplicate key checks. It'd be worth double checking to
make absolutely sure you have no data
On Monday, 01.12.2008, at 19:15 +0100, Joachim Zobel wrote:
Is there any chance there is a 0 PK? This may trigger an auto
increment.
To be more specific:
mysql> CREATE TABLE test (t INTEGER PRIMARY KEY AUTO_INCREMENT);
Query OK, 0 rows affected (0.11 sec)
mysql> INSERT INTO test(t)
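Joachim's point can be reproduced in a scratch schema (a sketch; assumes a local MySQL server and a throwaway database named `test`):

```shell
mysql test <<'SQL'
CREATE TABLE t (id INTEGER PRIMARY KEY AUTO_INCREMENT);
-- An explicit 0 is replaced by the next sequence value (1) unless
-- sql_mode contains NO_AUTO_VALUE_ON_ZERO ...
INSERT INTO t(id) VALUES (0);
-- ... so a later real id 1 collides: "Duplicate entry '1' for key ..."
INSERT INTO t(id) VALUES (1);
SQL
```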
What I will try now:
1) combine (
> There is another approach: ALTER all InnoDB tables to MyISAM, run
> osmosis, and then ALTER them back to InnoDB.
> It seems that the INSERT approach scales badly to large InnoDB tables.
> Sincerely,
> Joachim
)
It always scales badly; no exceptions. Some advice on the
OK,
1) does not work
- Osmosis needs the InnoDB tables:
Write to database ..
Nov 30, 2008 2:35:09 PM com.bretth.osmosis.core.Osmosis main
INFO: Osmosis Version 0.29
Nov 30, 2008 2:35:09 PM com.bretth.osmosis.core.Osmosis main
INFO: Preparing pipeline.
Nov 30, 2008 2:35:09 PM
On Sunday, 30.11.2008, at 15:38 +0100, Andreas Kalsch wrote:
com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException:
Duplicate entry '1' for key 1
Did you start with empty tables? This looks like remains from a
cancelled previous run.
Sincerely,
Joachim
Osmosis is very slow on my server.
There is this option --write-null (--wn) which can be useful just to
check the integrity of the data. So I think Osmosis checks it every time,
and this could be a bottleneck because it has to cache some data in memory.
Can this be the origin for being slow (next to
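A quick way to test that theory is to time the null writer on its own (a sketch; the file name is a placeholder):

```shell
# --write-null parses the input and throws the result away, so the
# elapsed time here is pure read/decode cost, with no database writes
# in the loop. Compare it against the time of the real import.
time osmosis --read-xml planet.osm --write-null
```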
Hi,
Andreas Kalsch wrote:
Osmosis is very slow on my server.
The Java implementations of gzip/bzip are notoriously slow. If you are
working with compressed data, you might see an improvement if you first
uncompress the file and then use osmosis to process it in raw form;
later use an
Frederik Ramm wrote:
The Java implementations of gzip/bzip are notoriously slow. If you are
working with compressed data, you might see an improvement if you first
uncompress the file and then use osmosis to process it in raw form;
later use an external utility to compress the output if
I decompress the data before putting them into Osmosis, but it's still slow.
So back to my question -- ;)
(The best would be raw dump files for MySQL's LOAD DATA INFILE - I can
imagine that it would be pretty quick)
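What the LOAD DATA INFILE route would look like (a sketch; the table and file names are hypothetical, and something would first have to write out the tab-separated dumps):

```shell
mysql osm <<'SQL'
-- Bulk-load a pre-generated tab-separated dump. For large tables this
-- skips the per-row INSERT overhead: statement parsing, client round
-- trips, and per-statement index maintenance.
LOAD DATA INFILE '/tmp/nodes.tsv'
INTO TABLE nodes
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
SQL
```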
Stefan de Konink wrote:
Frederik Ramm wrote:
The Java implementations of
Andreas Kalsch wrote:
I decompress the data before putting them into Osmosis, but it's still
slow.
So back to my question -- ;)
(The best would be raw dump files for MySQL's LOAD DATA INFILE - I can
imagine that it would be pretty quick)
If you want to try your luck I can provide you a
On Saturday, 29.11.2008, at 18:45 +0100, Andreas Kalsch wrote:
I decompress the data before putting them into Osmosis, but it's still
slow.
So back to my question -- ;)
(The best would be raw dump files for MySQL's LOAD DATA INFILE - I
can
imagine that it would be pretty quick)
This has
On Saturday, 29.11.2008, at 18:45 +0100, Andreas Kalsch wrote:
I decompress the data before putting them into Osmosis, but it's still
slow.
So back to my question -- ;)
(The best would be raw dump files for MySQL's LOAD DATA INFILE - I
can
imagine that it would be pretty quick)
There is
Andreas Kalsch wrote:
Osmosis is very slow on my server.
There is this option --write-null (--wn) which can be useful just to
check the integrity of the data. So I think Osmosis checks it every time,
and this could be a bottleneck because it has to cache some data in memory.
Can this be the
Frederik Ramm wrote:
Hi,
Andreas Kalsch wrote:
Osmosis is very slow on my server.
The Java implementations of gzip/bzip are notoriously slow. If you are
working with compressed data, you might see an improvement if you first
uncompress the file and then use osmosis to process
Brett Henderson wrote:
It's worth noting that you can pipe direct from native bzip to osmosis
like this to avoid uncompressed files on disk:
bzcat myfile.osm.bz2 | osmosis --read-xml - --write-xx...
If you are at it, and have a multi processor:
http://compression.ca/pbzip2/
Stefan
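The same pipe with pbzip2 instead of bzcat (a sketch; the writer task is just a stand-in):

```shell
# pbzip2 -d -c decompresses to stdout. Decompression is only parallel
# for files that were compressed with pbzip2 itself, but even for plain
# bzip2 files this avoids the slow Java bzip path and never materialises
# an uncompressed copy on disk.
pbzip2 -dc planet.osm.bz2 | osmosis --read-xml - --write-null
```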
If somebody can prove with real numbers on very large files that LOAD
DATA INFILE is much faster than the current osmosis approach of doing
normal inserts (ie. not just a few per cent) I'll add it to osmosis. I
just haven't done so because my efforts have been focused elsewhere.
Andreas
Brett Henderson wrote:
If somebody can prove with real numbers on very large files that LOAD
DATA INFILE is much faster than the current osmosis approach of doing
normal inserts (ie. not just a few per cent) I'll add it to osmosis. I
just haven't done so because my efforts have been
Brett Henderson wrote:
Stefan de Konink wrote:
Brett Henderson wrote:
If somebody can prove with real numbers on very large files that LOAD
DATA INFILE is much faster than the current osmosis approach of doing
normal inserts (ie. not just a few per cent) I'll add it to osmosis.
I just
Hi,
Stefan de Konink wrote:
But if he first uncompresses the data, Osmosis basically loses its main
argument for being superior to any other approach.
Nobody ever claimed osmosis was superior to any other approach. To me,
Osmosis is the Swiss Army Knife of OSM - it can do almost anything and