[OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
Hello, thanks. Solved. I think the problem was that I was downloading the
file to a remote disk (R: mapped to \\lanserver\data).

Another question: after importing the whole planet (a recent one) into
Postgres, what is the size of the largest table created (which I presume
will take up 80% of the whole DB)? You can get a table's size with:

  SELECT pg_size_pretty(pg_total_relation_size('big_table'));

Regards,
Juan Lucas

--- On Tue, 6/22/10, Grant Slater <openstreet...@firefishy.com> wrote:

> From: Grant Slater <openstreet...@firefishy.com>
> Subject: Re: [OSM-talk] Failed to download 9.5 GB planet
> To: Dirk-Lüder Kreie <osm-l...@deelkar.net>
> Cc: talk@openstreetmap.org
> Date: Tuesday, June 22, 2010, 11:29 AM
>
> 2010/6/22 Dirk-Lüder Kreie <osm-l...@deelkar.net>:
>> On 21.06.2010 18:12, Juan Lucas Domínguez Rubio wrote:
>>> 16:23:53 (1.02 MB/s) - Connection closed at byte 1621101924. Retrying.
>>>
>>> --16:23:53--  http://ftp.heanet.ie/mirrors/openstreetmap.org/planet-100618.osm.bz2
>>>   (try: 2) => `planet_100618.osm.bz2'
>>> Connecting to ftp.heanet.ie|193.1.193.64|:80... connected.
>>> HTTP request sent, awaiting response... 500 (Arithmetic result exceeded 32 bits.)
>>> 16:23:53 ERROR 500: (Arithmetic result exceeded 32 bits.).
>>
>> Try a different mirror, or try it via FTP (if that's possible).
>> Can anyone confirm whether there is a problem with the heanet mirror?
>
> Juan: you could try FTP or rsync too.
> ftp://ftp.heanet.ie/mirrors/openstreetmap.org/planet-100618.osm.bz2
> or
> rsync://ftp.heanet.ie/mirrors/openstreetmap.org/planet-100618.osm.bz2
>
> Regards
> Grant
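As a follow-up to the pg_size_pretty() call above: a query along these
lines lists the largest tables in the current database, indexes and TOAST
included. It is only a sketch built on the standard pg_stat_user_tables
view, and it assumes nothing about the osm2pgsql schema beyond what
Postgres itself reports:

  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_stat_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;

On an osm2pgsql database the planet_osm_ways and planet_osm_nodes tables
typically come out on top, as the replies below confirm.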
Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
* Juan Lucas Domínguez Rubio <juan_lucas...@yahoo.com> [2010-06-24 01:34 -0700]:
> Another question: after importing the whole planet (a recent one) into
> Postgres, what is the size of the largest table created (which I presume
> will take up 80% of the whole DB)?

I can't speak for the whole planet.osm file (so this might be useless),
but I have (roughly) an extract of the United States. The largest table,
planet_osm_ways, is 50 GB. The next-largest table, planet_osm_nodes, is
21 GB. After that is planet_osm_line at 8 GB.

-- 
...computer contrarian of the first order... / http://aperiodic.net/phil/
PGP: 026A27F2  print: D200 5BDB FC4B B24A 9248 9F7A 4322 2D22 026A 27F2
--- --
Last night I met upon the stair
A little man who wasn't there.
He wasn't there again today.
I think he's from the NSA!
--- --
Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
On Thu, Jun 24, 2010 at 4:34 AM, Juan Lucas Domínguez Rubio
<juan_lucas...@yahoo.com> wrote:
> Hello, thanks. Solved. I think the problem was that I was downloading the
> file to a remote disk (R: mapped to \\lanserver\data).
>
> Another question: after importing the whole planet (a recent one) into
> Postgres, what is the size of the largest table created (which I presume
> will take up 80% of the whole DB)?

Based on my planet import with minutely Mapnik updates:

   8 GB  polygon
  21 GB  line
   2 GB  point
  43 GB  nodes
   3 GB  roads
  50 GB  ways
   4 GB  rels

Overall disk use is ~130 GB, growing by about 2.5 GB/week at the moment.
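For a single headline number rather than the per-table breakdown above,
pg_database_size() reports the overall on-disk footprint. A minimal
sketch; 'gis' is just a commonly used name for the osm2pgsql rendering
database, so substitute whatever your import actually used:

  SELECT pg_size_pretty(pg_database_size('gis'));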
Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
On 25 June 2010 00:28, Richard Weait <rich...@weait.com> wrote:
> Overall disk use is ~130 GB, growing by about 2.5 GB/week at the moment.

Is there a way to reduce this overhead without re-importing?
Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
From: Richard Weait <rich...@weait.com>
Subject: Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
To: talk@openstreetmap.org
Date: Thursday, June 24, 2010, 4:28 PM

> On Thu, Jun 24, 2010 at 4:34 AM, Juan Lucas Domínguez Rubio
> <juan_lucas...@yahoo.com> wrote:
>> Hello, thanks. Solved. I think the problem was that I was downloading the
>> file to a remote disk (R: mapped to \\lanserver\data).
>>
>> Another question: after importing the whole planet (a recent one) into
>> Postgres, what is the size of the largest table created (which I presume
>> will take up 80% of the whole DB)?
>
> Based on my planet import with minutely Mapnik updates:
>
>    8 GB  polygon
>   21 GB  line
>    2 GB  point
>   43 GB  nodes
>    3 GB  roads
>   50 GB  ways
>    4 GB  rels
>
> Overall disk use is ~130 GB, growing by about 2.5 GB/week at the moment.

Hello, thanks. That's much more than I expected. With a small example I
obtained a roughly 1:3 ratio between the .osm file size and the resulting
table size, so I had estimated ~50 GB for the whole DB.

Regards,
Juan Lucas
Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
On Thu, Jun 24, 2010 at 10:39 AM, John Smith <deltafoxtrot...@gmail.com> wrote:
> On 25 June 2010 00:28, Richard Weait <rich...@weait.com> wrote:
>> Overall disk use is ~130 GB, growing by about 2.5 GB/week at the moment.
>
> Is there a way to reduce this overhead without re-importing?

I'm not sure I understand your question. You can import a bounding box or
an extract and have smaller tables. You can import without --slim, if you
have the hardware for it, and lose some of the large tables; but then you
lose the ability to apply updates unless you do a re-import. Other
alternatives?
Re: [OSM-talk] OSM Postgres table sizes (was: Failed to download 9.5 GB planet)
On 25 June 2010 04:37, Richard Weait <rich...@weait.com> wrote:
> I'm not sure I understand your question.

Over time the overhead increases, not just the amount of data.

> You can import a bounding box or an extract and have smaller tables. You
> can import without --slim, if you have the hardware for it, and

I didn't mean dropping the --slim option.

> lose some of the large tables; but then you lose the ability to apply
> updates unless you do a re-import.

That's my question: how do I eliminate that overhead in the database
without re-importing?
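One possible approach, sketched here rather than prescribed: rows deleted
or replaced by the minutely updates are only marked dead, so their space
is not handed back to the operating system until the table is rewritten.
VACUUM FULL rewrites a table in place without a fresh osm2pgsql import,
at the cost of an exclusive lock and temporary extra disk space. The
table names below are the osm2pgsql defaults mentioned earlier in the
thread:

  -- Rewrite the bulkiest slim-mode tables to reclaim dead-row space.
  VACUUM FULL VERBOSE planet_osm_ways;
  VACUUM FULL VERBOSE planet_osm_nodes;
  -- On the 8.x releases current in 2010, VACUUM FULL can leave indexes
  -- bloated, so rebuild them afterwards.
  REINDEX TABLE planet_osm_ways;
  REINDEX TABLE planet_osm_nodes;

A routine VACUUM (or autovacuum) only limits further growth; it does not
shrink files that are already bloated.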