Re: [OSM-dev] Some points about cloudmade's style editor (as issues)
My motivation to post them here was that some CloudMade developers read them and put them high on their to-do list, because what I propose is not really a technical challenge. Some simple changes to get a great result ...

Am 02.11.11 16:47, schrieb Richard Fairhurst:
Andreas Kalsch wrote: Probably you have some arguments about my proposals.
Only that they should probably be on CloudMade's own lists. ;)
cheers Richard
-- View this message in context: http://gis.638310.n2.nabble.com/Some-points-about-cloudmade-s-style-editor-as-issues-tp6955706p6955759.html

___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] osmo - Populate a MongoDB collection with OpenStreetMap data
Currently I have no app that is ready to show an example. Once the data is in Mongo, you can do a lot with it by simple means, since it is NoSQL and you have all the data in one collection.

An example from the tool chain I use for my experiments: I use node.js and Christian Kvalheim's node-mongodb-native driver (https://github.com/christkv/node-mongodb-native) to write fast jobs that iterate over many objects, since this is faster than Mongo's system functions and better than Mongo's MapReduce because of the shared state. Writing jobs means completing and adapting data.

For more special tasks like map rendering you need to build your own separate index and tool chain. It's up to you to build a performant index that serves your needs. osmo is just suited to give you fast access by primary key.

The inspiration for osmo was Jochen Topf's osmium, but I wanted a smaller and cleaner code base focused only on database population. Maybe my data model could also inspire a discussion about simplifying data models?

Am 11.09.11 19:46, schrieb Jaak Laineste:
2011/9/11 Andreas Kalsch andreaskal...@gmx.de:
Probably this is useful for you, too, so let me announce osmo, a performant way to populate MongoDB with OSM data: https://github.com/akidee/osmo Please give me some feedback, or maybe you even want to contribute.
Do you have any applications for that? Any estimates of performance for standard tasks: map rendering, geocoding, routing, vector data queries (XAPI)?
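To illustrate what a "job" in this sense looks like, here is a minimal sketch. It simulates the collection with an in-memory array instead of a real MongoDB cursor, and the document shape ({ _id, tags }) is my assumption, not osmo's actual schema.

```javascript
// A minimal "job": iterate over many OSM documents and complete/adapt the
// data. The collection is simulated with an in-memory array; with the real
// driver you would stream documents from a cursor instead.
function runJob(docs, transform) {
  for (const doc of docs) transform(doc);
  return docs;
}

// Example transform: derive a tag count so later queries can filter cheaply.
const docs = [
  { _id: 1, tags: { highway: 'residential', name: 'Weg' } },
  { _id: 2, tags: {} },
];
runJob(docs, (doc) => {
  doc.tagCount = Object.keys(doc.tags).length;
});
console.log(docs.map((d) => d.tagCount)); // → [ 2, 0 ]
```

With the real driver, the shared state mentioned above (counters, lookup maps) would simply live in the closure around the transform function, which is what MapReduce makes awkward.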
[OSM-dev] osmo - Populate a MongoDB collection with OpenStreetMap data
Probably this is useful for you, too, so let me announce osmo, a performant way to populate MongoDB with OSM data: https://github.com/akidee/osmo Please give me some feedback, or maybe you even want to contribute. Andi
[OSM-dev] Indexing dataspaces
I am currently reading this Google paper, which is probably interesting for indexing OSM data: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.102.1480&rep=rep1&type=pdf Best, Andi
Re: [OSM-dev] capitals; normalizing true, yes and 1 - additional idea
How I would post-process OSM tags:

name: 'München'
name:en: 'Munich'
noexit: 'yes'
addr:street: 'Weg'
date: '2011-01-02'

becomes:

name: { _: 'München', en: 'Munich' }
noexit: true
addr: { street: 'Weg' }
date: new Date(...)

This could be the result of a post-processing script, so that your application can rely on better data consistency. It makes sense to put namespaced tags into their own object. Of course, in the end it depends on your application. It is better suited for creating views than for indexing. Andi

Am 01.02.11 20:32, schrieb M∡rtin Koppenhoefer:
2011/2/1 Lennard l...@xs4all.nl:
Fortunately, for capital, we only use yes and not the other 2 variants in the main mapnik map. It's not logical to add these at this point. We already have to normalise true and 1 to yes for bridges and tunnels, and if those variants would disappear from the database and the wiki, all the better.
yes, at least we could _recommend_ in the wiki to not use true or 1 any more. It is really pointless to encourage mappers to use any of those three when in the end it will create trouble and effort for data consumers without any benefit for the mappers. Of course -1 will remain, to indicate the opposite direction in oneway streets.
Anyway, shouldn't this now move to [tagging]? done
cheers, Martin
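The normalization described above can be sketched as a small function. This is my own illustration (not code from osmo or any existing tool), and it only handles the boolean and namespace cases from the example, not dates:

```javascript
// Expand namespaced keys ('name:en' → { name: { en: ... } }, with the bare
// 'name' value stored under '_') and normalize boolean-like values.
function normalizeTags(tags) {
  const out = {};
  for (const [key, raw] of Object.entries(tags)) {
    // 'yes'/'true'/'1' and 'no'/'false'/'0' become real booleans.
    let value = raw;
    if (raw === 'yes' || raw === 'true' || raw === '1') value = true;
    else if (raw === 'no' || raw === 'false' || raw === '0') value = false;

    const [ns, ...rest] = key.split(':');
    if (rest.length === 0) {
      // Plain key: if a namespace object already exists, store under '_'.
      if (out[ns] !== undefined && typeof out[ns] === 'object') out[ns]._ = value;
      else out[ns] = value;
    } else {
      // Namespaced key: make sure the namespace is an object first.
      if (out[ns] === undefined) out[ns] = {};
      else if (typeof out[ns] !== 'object') out[ns] = { _: out[ns] };
      out[ns][rest.join(':')] = value;
    }
  }
  return out;
}

normalizeTags({ name: 'München', 'name:en': 'Munich', noexit: 'yes', 'addr:street': 'Weg' });
// → { name: { _: 'München', en: 'Munich' }, noexit: true, addr: { street: 'Weg' } }
```

Note that the function is order-independent: 'name:en' arriving before 'name' produces the same nested object.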
[OSM-dev] Changeset is not applied - just for me?
It would be very helpful for me to know if anybody has the same issue: changesets are not applied.

/home/andi/.libraries/osmosis-SNAPSHOT-r24679/bin/osmosis --read-replication-interval /backup/downloads/osm/replicate/rlp/ --simplify-change --write-pgsimp-change database=rlp validateSchemaVersion=yes user=andi password=... host=127.0.0.1

This is /backup/downloads/osm/replicate/rlp/configuration.txt:

# The URL of the directory containing change files.
baseUrl=http://planet.openstreetmap.org/hour-replicate
# Defines the maximum time interval in seconds to download in a single invocation.
# Setting to 0 disables this feature.
maxInterval = 0

Thanks, Andi
[OSM-dev] geofabrik snapshots: Cannot read germany
I read all data using dumping and COPY (the fastest way).

1) For the PBF version this failed because of 2 errors (!) in the nodes dump. 2 timestamps were malformed like this: 2005-08-01 21:11:38)0200 instead of 2005-08-01 21:11:38+0200. So all tables except the nodes table were populated. From the Postgres docs: COPY stops operation at the first error. This should not lead to problems in the event of a COPY TO, but the target table will already have received earlier rows in a COPY FROM. These rows will not be visible or accessible, but they still occupy disk space. It seems there is no way to make COPY ignore malformed CSV lines. I think using triggers is a bad idea because it would degrade performance.

2) Today I tried the same with the OSM.BZ2 version. But the bz2 is malformed; I cannot decompress it.

I would try to read the whole planet if my current server setup allowed it. So I am very thankful that Geofabrik provides the dumps, and I hope this helps to make your service better. Smaller dumps of the Bundesländer seem to work. Now I will give the europe dump a try. Andi
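Since COPY aborts on the first bad row, one workaround is to pre-filter the dump before loading it. The following is my own sketch (not part of any tool mentioned here): it repairs exactly the corruption pattern seen above, where the timezone sign of a timestamp was mangled.

```javascript
// Repair timestamps whose timezone '+' was corrupted into ')', e.g.
// "2005-08-01 21:11:38)0200" → "2005-08-01 21:11:38+0200".
// Any other malformed line would still abort COPY; this only fixes the
// specific damage observed in the nodes dump.
function repairTimestamps(line) {
  return line.replace(
    /(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\)(\d{4})/g,
    '$1+$2'
  );
}

repairTimestamps('1\t2005-08-01 21:11:38)0200\t...');
// → '1\t2005-08-01 21:11:38+0200\t...'
```

In practice you would stream the dump through this line by line (e.g. with node's readline module) and feed the cleaned output to COPY.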
Re: [OSM-dev] geofabrik snapshots: Cannot read germany
Impressive:

$ md5sum germany_.osm.bz2 germany.osm.bz2
fa71489fc34d1139422f2bbfd305255c germany_.osm.bz2
af34ab1dd2a6716e7cfb8fc1614ef107 germany.osm.bz2

After a restart:
fa71489fc34d1139422f2bbfd305255c germany_.osm.bz2
af34ab1dd2a6716e7cfb8fc1614ef107 germany.osm.bz2

It seems that cp on my server caused some errors. What do I need to do to run a memory check on Debian?

Am 13.12.10 15:23, schrieb Frederik Ramm:
Hi, Andreas Kalsch wrote:
1) For the PBF version this failed because of 2 errors (!) in the nodes dump. 2 timestamps were malformed like this: 2005-08-01 21:11:38)0200 instead of 2005-08-01 21:11:38+0200
That's interesting...
2) Today I tried the same with the OSM.BZ2 version. But the bz2 is malformed; I cannot decompress it.
The current germany.osm.bz2 (1122470613 bytes, md5 sum 6f798b1b3a3b75b454fff0d9efc17fe3) decompresses fine on my machine. Is it possible that you have faulty RAM in your server? There's only one bit of difference between the ) and + characters. Just to be on the safe side, could you perhaps run a memcheck or, if you'd rather not shut the machine down, make a few copies of a very large file and run md5sum on those? Bye Frederik
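Frederik's copy-and-checksum suggestion can be scripted. This is a hedged sketch with placeholder file names; for a meaningful hardware test the source file should be several gigabytes, not the small test file created here.

```shell
# Copy a file a few times and compare checksums. On healthy hardware all
# sums are identical; differing sums point to flaky RAM or disk.
set -e
src=bigfile.bin
# Create a 1 MiB test file if none exists (illustration only; use a much
# larger file for a real test).
[ -f "$src" ] || head -c 1048576 /dev/urandom > "$src"
for i in 1 2 3; do
  cp "$src" "copy$i.bin"
done
# Count distinct checksums; "1" means all copies match.
md5sum "$src" copy1.bin copy2.bin copy3.bin | awk '{print $1}' | sort -u | wc -l
```

A full memory test would instead boot memtest86+ (packaged in Debian), which does require taking the machine down.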
Re: [OSM-dev] Problem with geofabrik datasets?
I started the script with yesterday's Great Britain PBF and today's Germany PBF, and there was no error. It seems there was an error in yesterday's Germany data. It would be worse if that happened with changesets.

Am 12.12.10 12:07, schrieb Frederik Ramm:
Hi, Andreas Kalsch wrote:
This problem occurs with the current germany.osm AND pbf (11-Dec-2010) in the latest developer releases (24xxx):
The exception you report occurs while unpacking the PBF file. Are you saying that if you download and use the .osm.bz2 you are seeing the same problem? There's reason to believe that *something* is not 100% right with the Geofabrik PBF files, because I have had complaints from a number of Windows users (exclusively) saying that when they try to process the PBF with the mkgmap splitter they get only nodes in their output files. Also a guy on talk-de complained that something as simple as "osmosis --read-pbf europe.osm.pbf --write-xml europe.osm" aborts somewhere down the node section, without any error message, and correctly closing the output file with </osm> - again, under Windows. I don't know if your problem and that reported by the Windows users are related. I am using Osmosis r24507 to create the PBF files, and pbf2osm to create the bz2 files later. Bye Frederik
[OSM-dev] Applying a changeset
Can you help me apply a changeset with the latest snapshots? This is my command:

/home/andi/.libraries/osmosis-SNAPSHOT-r24679/bin/osmosis --read-replication-interval /backup/downloads/zltl/replicate/rlp/ --write-pgsimp-change database=rlp validateSchemaVersion=yes user=andi password=... host=127.0.0.1

Dec 12, 2010 8:05:01 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version SNAPSHOT-r24679
Dec 12, 2010 8:05:02 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Dec 12, 2010 8:05:02 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Launching pipeline execution.
Dec 12, 2010 8:05:02 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline executing, waiting for completion.
Dec 12, 2010 8:05:02 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline complete.
Dec 12, 2010 8:05:02 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Total execution time: 659 milliseconds.

This is /backup/downloads/zltl/replicate/rlp/configuration.txt:

# The URL of the directory containing change files.
baseUrl=http://planet.openstreetmap.org/hour-replicate
# Defines the maximum time interval in seconds to download in a single invocation.
# Setting to 0 disables this feature.
maxInterval = 0

Doesn't this mean that all changesets up to now should be applied? In my case, no changeset is applied.
[OSM-dev] Problem with geofabrik datasets?
This problem occurs with the current germany.osm AND pbf (11-Dec-2010) in the latest developer releases (24xxx):

/home/andi/.libraries/osmosis-SNAPSHOT-r24679/bin/osmosis --read-pbf file=/backup/downloads/zltl/germany.osm.pbf --ws database=de validateSchemaVersion=yes user=... password=... host=127.0.0.1

Dec 12, 2010 1:38:50 AM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version SNAPSHOT-r24679
Dec 12, 2010 1:38:50 AM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Dec 12, 2010 1:38:51 AM org.openstreetmap.osmosis.core.Osmosis run
INFO: Launching pipeline execution.
Dec 12, 2010 1:38:51 AM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline executing, waiting for completion.
java.util.zip.DataFormatException: incorrect data check
    at java.util.zip.Inflater.inflateBytes(Native Method)
    at java.util.zip.Inflater.inflate(Inflater.java:238)
    at java.util.zip.Inflater.inflate(Inflater.java:256)
    at crosby.binary.file.FileBlockPosition.parseData(FileBlockPosition.java:40)
    at crosby.binary.file.FileBlockHead.readContents(FileBlockHead.java:78)
    at crosby.binary.file.FileBlock.process(FileBlock.java:117)
    at crosby.binary.file.BlockInputStream.process(BlockInputStream.java:15)
    at crosby.binary.osmosis.OsmosisReader.run(OsmosisReader.java:37)
    at java.lang.Thread.run(Thread.java:619)
Dec 12, 2010 1:38:55 AM org.openstreetmap.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion
SEVERE: Thread for task 1-read-pbf failed
java.lang.Error: java.util.zip.DataFormatException: incorrect data check
    at crosby.binary.file.FileBlockPosition.parseData(FileBlockPosition.java:43)
    at crosby.binary.file.FileBlockHead.readContents(FileBlockHead.java:78)
    at crosby.binary.file.FileBlock.process(FileBlock.java:117)
    at crosby.binary.file.BlockInputStream.process(BlockInputStream.java:15)
    at crosby.binary.osmosis.OsmosisReader.run(OsmosisReader.java:37)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.util.zip.DataFormatException: incorrect data check
    at java.util.zip.Inflater.inflateBytes(Native Method)
    at java.util.zip.Inflater.inflate(Inflater.java:238)
    at java.util.zip.Inflater.inflate(Inflater.java:256)
    at crosby.binary.file.FileBlockPosition.parseData(FileBlockPosition.java:40)
    ... 5 more
Dec 12, 2010 1:38:55 AM org.openstreetmap.osmosis.core.Osmosis main
SEVERE: Execution aborted.
org.openstreetmap.osmosis.core.OsmosisRuntimeException: One or more tasks failed.
    at org.openstreetmap.osmosis.core.pipeline.common.Pipeline.waitForCompletion(Pipeline.java:146)
    at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:92)
    at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:37)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:31)

It didn't happen for smaller data sets like rheinland-pfalz.osm/pbf.
Re: [OSM-dev] Missing Bing zoom level at some places
+1 from here. It surprises me that the Bing imagery is not scaled at higher zoom levels in either JOSM or Potlatch2. The imagery will not get better by scaling, but the point is that mappers can then make more detailed edits. Andi

Am 06.12.10 13:16, schrieb Chris Browet:
Hi, It looks like Bing is missing some zoom levels at some places. E.g. go to http://www.openstreetmap.org/edit?editor=potlatch2&lat=-12.94668&lon=-38.50028&zoom=17 (with Bing enabled, of course) and zoom in. Would someone have an idea on how to anticipate this, so as to implement it in the editors? I would have thought going through http://dev.virtualearth.net/REST/v1/Imagery/Metadata/Aerial/0,0?zl=1&mapVersion=v1&key=... would have given me the info, but the XML happily gives me back provider attribution even though there are no tiles... (same on the Bing site, btw). - Chris -
[OSM-dev] Osmosis questions about applying changesets
1) What does the data_type U in the actions table mean?

2) It seems that Osmosis violates pk_actions (the primary key), so it would be better to replace it with a simple index.

3) This command does not seem to retrieve all changesets. After the call, the latest timestamp was 20.11.2010.

${OSM_DIRNAME_OSMOSIS}bin/osmosis \
--read-replication-interval $OSM_DIRNAME_REPLICATION \
--write-pgsimp-change database=$OSM_DB_NAME validateSchemaVersion=no user=$OSM_DB_USER password=$OSM_DB_PW host=$OSM_DB_HOST

My server's time is correct.

4) Am I right that action M means insert?

Andi
Re: [OSM-dev] Is there a way to use simple schema without hstore
Postgres is a dirty hack when you write server-side JS. But this is not a problem, because it is stable and has many features.

Am 21.11.10 16:43, schrieb Sarah Hoffmann:
On Sat, Nov 20, 2010 at 11:52:58AM +1100, Brett Henderson wrote:
On Fri, Nov 19, 2010 at 10:50 PM, Sarah Hoffmann lon...@denofr.de wrote:
On Fri, Nov 19, 2010 at 09:37:33AM +0100, Andreas Kalsch wrote:
If you're applying diffs to the database you can enhance the osmosisUpdate() function (initially empty, but it can be customised) to keep your separate tags tables up to date during each diff application. You will need to run the pgsql_simple_schema_0.6_action.sql script against the database so that all actions during a diff are logged and can be used by your osmosisUpdate function to know which records need to be re-processed.
Is it possible to truncate the actions table myself so that a separate script can access the changes?
Simply copy the information from the action table to somewhere persistent in the osmosisUpdate function. Works fine. However, +1 from me for an action table that can be truncated manually.
Is there likely to be a noticeable performance improvement in doing this?
I doubt that. Compared to the entire update task, the overhead of copying is negligible. It is more a design question. I prefer to keep osmosis and the scripts for derived tables strictly apart. Doing part of the update process for derived tables in the osmosisUpdate function intermingles the two and is very difficult to debug. What was the idea behind osmosisUpdate? To allow the code to be executed within the same transaction as the changeset application?
My preference, if the overhead is small, would be to add a contrib script to Osmosis that installs a non-truncating table that is updated by osmosisUpdate, and a customised osmosisUpdate function. It keeps the pgsql tasks simpler if I can do that.
I would have expected that an implementation without an update function and with a persistent action table would be simpler. Or do you mean that providing both variants is too complex? In that case, don't worry about it. The current osmosisUpdate does what I need, and writing an appropriate function is simple. I'll just no longer think of it as a quick and dirty hack but as the proper way to do it. ;)
Sarah
Re: [OSM-dev] Is there a way to use simple schema without hstore
Am 20.11.10 01:38, schrieb Brett Henderson:
On Fri, Nov 19, 2010 at 7:37 PM, Andreas Kalsch andreaskal...@gmx.de wrote:
Hi Brett, thanks for your elaborate answer! Now I am up to date. Some ideas regarding my use case ...
Am 18.11.10 23:50, schrieb Brett Henderson:
Hi Andreas, The change was made mostly for performance reasons. With a full planet imported into the database, bounding box style queries are now approximately 10 times faster. This is due to a couple of reasons:
* All data (with the exception of relations) is now clustered by geographical location. This drastically improves performance where data is being processed for a limited area.
* The nodes and ways tables are the only tables that have a geometry column, thus other data must be embedded in those tables in order to make use of clustering.
My concept is always to use _1_ table for all geometries and to create extractions when I need them. A geom column can store any type, so it is a more unifying concept.
I'm not following. Are you suggesting that node and way tables be merged?
For my project I create geometries from features in a separate step and put them all in one table. This is more for playing than for a more special purpose like map rendering. So I don't want to suggest any changes here.
I don't understand your comment regarding NoSQL. The main change is that now you will have to deal with a more complex hstore column type on the nodes/ways tables, but otherwise the same data still exists and can still be manipulated with SQL statements. The data is less relational than it was previously, but tag data is not terribly useful without access to parent entities, so grouping them together shouldn't result in loss of functionality. You can still populate separate tags tables if you wish by running your own separate query to pull the hstore column apart.
This is what I need to do sooner or later, when I will update.
It's important for me to use a separate table for tags, because I run a script that corrects the tags of relations (moving them from outer ways to relations), and I don't want to rewrite this and other scripts that depend on this schema every time the version changes. Running an extra script that fetches the hstore tags and puts them into a separate table will eat up the time that PBF gained me ;( My main concern is that with the next big schema update I _have_ to patch the schema. In the long run it would be great to be conservative about such changes, or *)
My best suggestion is to continue running the old Osmosis. The old version still works so don't upgrade. As Frederik suggests you can run the two versions alongside each other and pipe data between them as necessary.
Can you point me to an example?
If you're applying diffs to the database you can enhance the osmosisUpdate() function (initially empty, but it can be customised) to keep your separate tags tables up to date during each diff application. You will need to run the pgsql_simple_schema_0.6_action.sql script against the database so that all actions during a diff are logged and can be used by your osmosisUpdate function to know which records need to be re-processed.
Is it possible to truncate the actions table myself so that a separate script can access the changes? This is another important point. At the moment, after an update, I manually populate my own current_features tables with all features whose tstamp is >= the time of the last update. A little overhead ... I see that this table exists in 0.36 as well, so I could use it, if I can truncate it manually?
As Sarah suggests, the way to do this is to create your own table and populate it from the actions table within the osmosisUpdate function. The overhead in doing this should be relatively small.
This is what I do now - creating an additional table action
The older Osmosis 0.36 is still available so you don't have to upgrade.
It remains compatible with 0.6 XML files. Finally, if there is enough demand for the older schema style, the old tasks can be pulled back out of SVN and run alongside the new ones, but I'm not keen to do that without good reason. I did consider trying to support both styles of table in the same tasks by dynamically detecting which tables are installed, but it increases the code complexity considerably and I didn't think the effort was worthwhile.
*) With that, you would provide a backward-compatible solution that I would appreciate a lot!
I'm hesitant to do this for one person's use case. I don't mean to be unhelpful, but I have to be very careful about where I spend my limited time on Osmosis, and for this reason I try to keep things as simple as possible
Re: [OSM-dev] Is there a way to use simple schema without hstore
Hi, thank you, Brett, for me this is the perfect setup. I hope that others will find it useful as well. The names are OK. Two problems:

1) I read XML or PBF, dump it to CSV and then read the dump, but now my feature tables are blank because of these errors:

ERROR: extra data after last expected column
CONTEXT: COPY nodes, line 1: 1257995454452010-09-30 21:23:30+02005922698 010120E61034034B64D592214088BE164F98894A40
ERROR: extra data after last expected column
CONTEXT: COPY ways, line 1: 39994781744731970-01-07 21:53:09+01004957195 mapping_status=incomplete,highway=secondar...
ERROR: extra data after last expected column
CONTEXT: COPY relations, line 1: 295223299751970-01-15 10:56:44+01005014762 type=multipolygon

I use COPY instead of \copy, like this:

COPY nodes FROM 'path/to/nodes.txt'; ...

I don't know why this happens. The number and type of columns in the simple schema tables and in the CSV do match. Reading straight into the database works (but is much slower).

2) --write-pgsimp-change results in this error log:

Nov 20, 2010 2:49:20 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version SNAPSHOT-r24310
Nov 20, 2010 2:49:20 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Nov 20, 2010 2:49:20 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Launching pipeline execution.
Nov 20, 2010 2:49:20 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline executing, waiting for completion.
Nov 20, 2010 2:49:23 PM org.openstreetmap.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion
SEVERE: Thread for task 1-read-replication-interval failed
org.openstreetmap.osmosis.core.OsmosisRuntimeException: Unable to insert action with type=NODE, action=MODIFY and id=996097244.
    at org.openstreetmap.osmosis.pgsimple.v0_6.impl.ActionDao.addAction(ActionDao.java:80)
    at org.openstreetmap.osmosis.pgsimple.v0_6.impl.EntityDao.modifyEntity(EntityDao.java:263)
    at org.openstreetmap.osmosis.pgsimple.v0_6.impl.NodeDao.modifyEntity(NodeDao.java:74)
    at org.openstreetmap.osmosis.pgsimple.v0_6.impl.ChangeWriter.write(ChangeWriter.java:123)
    at org.openstreetmap.osmosis.pgsimple.v0_6.impl.ActionChangeWriter.process(ActionChangeWriter.java:48)
    at org.openstreetmap.osmosis.core.container.v0_6.NodeContainer.process(NodeContainer.java:58)
    at org.openstreetmap.osmosis.pgsimple.v0_6.PostgreSqlChangeWriter.process(PostgreSqlChangeWriter.java:71)
    at org.openstreetmap.osmosis.core.sort.v0_6.ChangeSorter.complete(ChangeSorter.java:64)
    at org.openstreetmap.osmosis.replication.v0_6.ReplicationDownloader.processComplete(ReplicationDownloader.java:93)
    at org.openstreetmap.osmosis.replication.v0_6.BaseReplicationDownloader.runImpl(BaseReplicationDownloader.java:284)
    at org.openstreetmap.osmosis.replication.v0_6.BaseReplicationDownloader.run(BaseReplicationDownloader.java:345)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint pk_actions
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1795)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:367)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:321)
    at org.openstreetmap.osmosis.pgsimple.v0_6.impl.ActionDao.addAction(ActionDao.java:77)
    ... 11 more
Nov 20, 2010 2:49:23 PM org.openstreetmap.osmosis.core.Osmosis main
SEVERE: Execution aborted.
org.openstreetmap.osmosis.core.OsmosisRuntimeException: One or more tasks failed.
    at org.openstreetmap.osmosis.core.pipeline.common.Pipeline.waitForCompletion(Pipeline.java:146)
    at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:92)
    at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:37)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:31)

Is it always guaranteed that a change for
Re: [OSM-dev] Is there a way to use simple schema without hstore
Hi Brett, thanks for your elaborate answer! Now I am up to date. Some ideas regarding my use case ...

Am 18.11.10 23:50, schrieb Brett Henderson:
Hi Andreas, The change was made mostly for performance reasons. With a full planet imported into the database, bounding box style queries are now approximately 10 times faster. This is due to a couple of reasons:
* All data (with the exception of relations) is now clustered by geographical location. This drastically improves performance where data is being processed for a limited area.
* The nodes and ways tables are the only tables that have a geometry column, thus other data must be embedded in those tables in order to make use of clustering.
My concept is always to use _1_ table for all geometries and to create extractions when I need them. A geom column can store any type, so it is a more unifying concept.
I don't understand your comment regarding NoSQL. The main change is that now you will have to deal with a more complex hstore column type on the nodes/ways tables, but otherwise the same data still exists and can still be manipulated with SQL statements. The data is less relational than it was previously, but tag data is not terribly useful without access to parent entities, so grouping them together shouldn't result in loss of functionality. You can still populate separate tags tables if you wish by running your own separate query to pull the hstore column apart.
This is what I need to do sooner or later, when I will update. It's important for me to use a separate table for tags, because I run a script that corrects the tags of relations (moving them from outer ways to relations), and I don't want to rewrite this and other scripts that depend on this schema every time the version changes. Running an extra script that fetches the hstore tags and puts them into a separate table will eat up the time that PBF gained me ;( My main concern is that with the next big schema update I _have_ to patch the schema.
In the long run it would be great to be conservative about such changes, or *)
If you're applying diffs to the database you can enhance the osmosisUpdate() function (initially empty, but it can be customised) to keep your separate tags tables up to date during each diff application. You will need to run the pgsql_simple_schema_0.6_action.sql script against the database so that all actions during a diff are logged and can be used by your osmosisUpdate function to know which records need to be re-processed.
Is it possible to truncate the actions table myself so that a separate script can access the changes? This is another important point. At the moment, after an update, I manually populate my own current_features tables with all features whose tstamp is >= the time of the last update. A little overhead ... I see that this table exists in 0.36 as well, so I could use it, if I can truncate it manually?
The older Osmosis 0.36 is still available so you don't have to upgrade. It remains compatible with 0.6 XML files. Finally, if there is enough demand for the older schema style, the old tasks can be pulled back out of SVN and run alongside the new ones, but I'm not keen to do that without good reason. I did consider trying to support both styles of table in the same tasks by dynamically detecting which tables are installed, but it increases the code complexity considerably and I didn't think the effort was worthwhile.
*) With that, you would provide a backward-compatible solution that I would appreciate a lot! Is it necessary that Osmosis makes the schema checks? What about giving each schema a unique ID, letting the user point Osmosis to this ID, and letting it fail if the user has installed the wrong schema?
Finally, I didn't make the change without careful consideration. I do try to keep schemas stable, and when they do change I provide an upgrade script to allow migration between them.
But the performance gains achieved through use of hstore were too great to ignore. Retrieving heavily populated 1x1 degree areas from a database containing a full planet used to take approximately 1 hour, but this is now down to well under 10 minutes. In the long run, this is an argument ;) I am critical, because I still haven't thought through all dependent scripts that do something with tags. But there are many ... Hope that helps, Brett On Thu, Nov 18, 2010 at 8:18 PM, Andreas Kalsch andreaskal...@gmx.de wrote: Is there a way to use the simple schema in Osmosis without hstore? And why was this changed? A separate table for tags can more easily be indexed. I think it is not a good idea to use hstore because then we can drop SQL, use NoSQL for storing data and use PostGIS/Postgres for geometry only. What do you think? Best, Andi
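Brett's suggestion to "pull the hstore column apart" into separate tags tables can be done server-side with Postgres's each() set-returning function for hstore. A minimal sketch that only generates the SQL; the node_tags/way_tags target tables and the (id, k, v) column layout are assumptions modeled on the old simple-schema tags tables, not something prescribed in the thread:

```python
# Sketch: rebuild separate tags tables from the hstore "tags" column of the
# pgsql simple schema. Table and column names are assumptions; adjust to
# your installation before running this against a real database.

def extract_tags_sql(entity: str) -> str:
    """Return the INSERT ... SELECT that pulls the hstore column apart
    for one entity table ('nodes' or 'ways')."""
    # each(tags) expands an hstore value into (key, value) rows
    return (
        f"INSERT INTO {entity[:-1]}_tags (id, k, v) "
        f"SELECT id, (each(tags)).key, (each(tags)).value FROM {entity};"
    )

if __name__ == "__main__":
    for entity in ("nodes", "ways"):
        print(extract_tags_sql(entity))
    # Execute e.g. with psycopg2:
    #   import psycopg2
    #   conn = psycopg2.connect("dbname=osm")
    #   with conn, conn.cursor() as cur:
    #       cur.execute(extract_tags_sql("nodes"))
```

Run once after each diff application (or from osmosisUpdate, as Brett describes) to keep the flat tags tables current.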
Re: [OSM-dev] Is there a way to use simple schema without hstore
One simple answer: The drivers do not work appropriately with complex SQL data types. In PHP or node.js I will get a string that I have to parse; in MongoDB, I get a proper object or list. If I used hstore in a consistent way (I like consistency and unification), I would have sets in sets, and this is the same as a document oriented database. But just intermingling things for fun does not make the world better. MongoDB, for example, unifies worlds by simply using JSON. I don't have to manually parse things I do not need to parse. On 19.11.10 09:47, Sven Geggus wrote: Andreas Kalsch andreaskal...@gmx.de wrote: I think it is not a good idea to use hstore because then we can drop SQL, use NoSQL for storing data and use PostGIS/Postgres for Geometry only. In the real world there is no black and white! Sure, hstore is comparable to NoSQL approaches, but why should it be a bad thing to use a best-of-both-worlds approach? Sven ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
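The "string that I have to parse" complaint can at least be isolated in one small helper. A rough sketch of a parser for hstore's text representation, assuming the common `"k"=>"v"` form; NULL values and exotic quoting edge cases are deliberately not covered, so treat it as illustrative rather than a complete grammar:

```python
import re

# Matches one "key"=>"value" pair in hstore text output, allowing
# backslash-escaped quotes and backslashes inside the quoted strings.
_PAIR = re.compile(r'"((?:[^"\\]|\\.)*)"\s*=>\s*"((?:[^"\\]|\\.)*)"')

def _unescape(s: str) -> str:
    return s.replace('\\"', '"').replace("\\\\", "\\")

def parse_hstore(text: str) -> dict:
    """Parse hstore text representation into a dict (NULL values not handled)."""
    return {_unescape(k): _unescape(v) for k, v in _PAIR.findall(text)}

print(parse_hstore('"highway"=>"residential", "name"=>"Main Street"'))
# {'highway': 'residential', 'name': 'Main Street'}
```

In Python specifically, psycopg2's extras module can register hstore so the driver returns dicts directly; for PHP or node.js a helper like this is the fallback.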
Re: [OSM-dev] Is there a way to use simple schema without hstore
On 19.11.10 10:06, Frederik Ramm wrote: Hi, Andreas Kalsch wrote: One simple answer: The drivers do not work appropriately with complex SQL data types. In PHP or node.js I will get a string that I have to parse; in MongoDB, I get a proper object or list. If I used hstore in a consistent way (I like consistency and unification), I would have sets in sets, It seems to me that you are mistaking consistency for exaggeration. In many cases - especially when dealing with large real-world datasets as opposed to a nice little hello-world application - a healthy compromise works better than grabbing one concept and trying to make the world fit that concept. I am sure there are some good uses for hstore, but as soon as you use it, you are already waiting for something like a document-oriented database. I ask myself: Why do I need normal columns when there is hstore? Of course there are some answers, like special indexing ... but the fact is: intermingling both concepts inside one database will make queries and schema design more complex than necessary - many, many time-consuming choices you do not need to make in the NoSQL world. If you take a look at all Postgres data types, you have a myriad of choices. Often, a simple design will win, especially when you will build something more complex on top of it. It's only one step away from switching to a document store. Example of unnecessarily complex schema design: http://wiki.openstreetmap.org/wiki/DE:HowTo_minutely_hstore My personal point is that my system relies on the 0.36 schema and I simply cannot change all dependent scripts. But just intermingling things for fun does not make the world better. I think you're misunderstanding. hstore has not been implemented for fun. (Are you aware that PostgreSQL can extend column indexes to hstore keys?) Probably I am wrong ... yes, I know that you can index hstore with a GiST index. MongoDB, for example, unifies worlds by simply using JSON. I don't have to manually parse things I do not need to parse.
In turn, you will have a hard time getting the performance required for a planet-wide application out of MongoDB. OK, can you explain further what the bottlenecks would be? Bye Frederik ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Is there a way to use simple schema without hstore
I agree - my approach is the playground approach: at first, keep it simple. For your purpose it is correct to split the table. What about a partial index over one table? This is possible with Postgres. On 19.11.10 11:03, Sven Geggus wrote: Andreas Kalsch andreaskal...@gmx.de wrote: Example of unnecessarily complex schema design: http://wiki.openstreetmap.org/wiki/DE:HowTo_minutely_hstore You are welcome to design a better database scheme suitable for rendering :) osm2pgsql output is evolution _not_ design. Using a join in every single SQL request is not an option in this case, at least performance-wise. Probably I am wrong ... yes, I know that you can index hstore with a GiST index. GIN Sven ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Is there a way to use simple schema without hstore
Is there a way to use simple schema in Osmosis without hstore? And why was this changed? A separate table for tags can more easily be indexed. I think it is not a good idea to use hstore because then we can drop SQL, use NoSQL for storing data and use PostGIS/Postgres for Geometry only. What do you think? Best, Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Is there a way to use simple schema without hstore
It's always great to try something new, but we use Postgres as the database, and that is SQL, where I use tables. Somehow I feel forced to learn something I and others will not use very often. It is better to be conservative about schemas. The problem: I have written some software that heavily relies on the 0.36 simple schema. Now I have to use 0.36 and cannot use the new PBF format. It would be good to be able to choose between hstore and the old schema that did my job very well. What about GROUP BY over single hstore k/v pairs - is this possible? hstore feels like having a table inside a cell ... On 18.11.10 14:00, Frank Broniewski wrote: On 18.11.2010 10:18, Andreas Kalsch wrote: Is there a way to use the simple schema in Osmosis without hstore? And why was this changed? A separate table for tags can more easily be indexed. I think it is not a good idea to use hstore because then we can drop SQL, use NoSQL for storing data and use PostGIS/Postgres for Geometry only. What do you think? Best, Andi Hello Andi, You can create an index for the tags column. hstore supports GiST and GIN indexes, plus it saves you an m:n join. And I don't see why using the hstore data type is like using NoSQL? You can still extract the tags into a separate table, if you like, of course ;-) Frank ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
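On the GROUP BY question: yes - hstore's `->` operator extracts a single value and can appear in ordinary SQL expressions, so something like `SELECT tags -> 'highway', count(*) FROM ways GROUP BY tags -> 'highway'` is valid Postgres (table and column names here are the thread's, treat them as examples). The equivalent aggregation over rows already fetched by a client, sketched in Python:

```python
from collections import Counter

def group_by_tag(rows, key):
    """Count rows per value of one tag key, mirroring GROUP BY tags -> key.
    Rows without the key land under None, like SQL's NULL group."""
    return Counter(tags.get(key) for tags in rows)

# Rows as they might come back from a driver after decoding the hstore column
ways = [
    {"highway": "residential", "name": "A"},
    {"highway": "residential"},
    {"highway": "primary"},
    {"building": "yes"},
]
print(group_by_tag(ways, "highway"))
# Counter({'residential': 2, 'primary': 1, None: 1})
```

Doing the grouping in the database is of course preferable for large tables; the client-side version is only a fallback when the driver hands back opaque strings.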
Re: [OSM-dev] Disallowing certain characters in tag keys
I agree with whitespace - this can be very confusing. To add: Make keys lowercase (or even remove diacritics), because keys are always simple names. On 16.10.10 20:44, Jochen Topf wrote: Hi! I am currently fighting some issues where tags with strange characters in them need to be represented in a URL for Taginfo. Lots of other websites probably will have similar issues. Characters like /, ?, &, etc. have special meaning in URLs, so if they appear in tags I can't have those tags in URLs. Sometimes escaping characters as %XX helps, sometimes not. And those problems are not confined to web pages and URLs only. Special characters that need escaping are often a problem. We can't really do anything about that with regard to tag values, they must be allowed to contain all those characters. But it would help at least a little if we knew those characters can never appear in tag keys. And I can't really see a legitimate reason why we need those characters in keys. Looking at the database, almost all cases where they appear in keys are obvious errors. Out of the about 2 different keys, there are only about 190 keys with problematic characters in them (another about 800 with whitespace). Really the only case that I can't immediately rule out as errors or see an alternative tagging are tag keys like maxspeed:weight7.5. And with those you can already see the problems: Some of them have 'gt;' instead of the '>'. So I'd like us to think about whether we can disallow a few characters from appearing in tag keys. Technically this would mean changing the API to check for those characters, removing any that are already in the database (can be done with normal manual edits because there are so few cases) and adding checks to the editors so that they can give meaningful error messages. Shouldn't be too hard. So, what characters am I talking about? I haven't drawn up a complete list and we certainly would need to discuss this further.
Here is a preliminary list: * Whitespace: we should use '_' instead of whitespace in keys; whitespace is also very confusing for users, especially at the beginning and end of a text. * /+?#;%': special characters in XML, HTML and/or URLs. * \': characters often used for quoting. * =: because it is used in many places as the separation character between tag key and tag value. If we disallow this, we can always treat a string like foo=bar as k:foo, v:bar without any ambiguities. This is a small list of special characters; all other characters should still be allowed. That means tag keys can still be in Chinese or whatever. We'd just disallow a few characters of which we know that they will cause problems again and again. And to emphasize this again: I am only talking about tag keys. Tag values must be allowed to contain the full Unicode set of characters. Jochen ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
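Jochen's proposed restriction is easy to prototype as a validator, and percent-encoding shows why the characters hurt in URLs. The exact banned set below is an assumption following his preliminary list (whitespace, URL/XML specials, quote characters, '='), not an agreed standard:

```python
import re
from urllib.parse import quote

# Assumed ban list for tag *keys*, per the preliminary list in the thread:
# whitespace, URL/XML specials, quoting characters, and '='.
BANNED = re.compile(r"""[\s/+?#;%&<>'"\\=]""")

def valid_key(key: str) -> bool:
    """True if the key contains none of the proposed banned characters."""
    return not BANNED.search(key)

print(valid_key("maxspeed:hgv"))       # True  (':' stays allowed)
print(valid_key("max speed"))          # False (whitespace)
print(quote("is_in:country?", safe=""))  # 'is_in%3Acountry%3F'
```

The last line illustrates Jochen's point: even harmless-looking keys need %XX escaping in URLs once they contain a character with URL semantics.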
Re: [OSM-dev] API 0.7+: Split node concept?
Please don't make end users change too much on the next update! I think the current data model is pretty OK; it is more a data model than a semantic one. And I think we should keep it like that ;) Andi On 12.10.10 21:45, Chris Browet wrote: I am wondering (I wonder a lot lately ;-)) if some have already given a thought to the fact that nodes actually represent 2 different concepts in the current api: - a node in the geometrical sense, i.e. used to define a linestring/way - a POI Wouldn't keeping the node element only for POIs (i.e. with tags) be a better idea? E.g. something like this: <node version="0" lon="3.5348711" lat="54.1945783" timestamp="2010-10-12T12:35:18Z" user="" id="-9"> <tag k="highway" v="traffic_light"/> </node> <way version="0" timestamp="2010-10-12T12:35:02Z" user="" id="-2"> <point lon="3.5317073" lat="54.1929773"/> <nd ref="-9"/> <point lon="3.5377391" lat="54.1960297"/> </way> Seems to me that it would: - be less confusing, both for consumers and editors - save db space - save memory/CPU cycles on the consumer side What do you think? - Chris - ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] API 0.7+: Split node concept?
+1 This is the point Am 12.10.10 22:33, schrieb Alan Millar: Frankly, one of the main problems with the classic GIS shapefile-style data paradigm is that it does not give you good topological connectivity information, and therefore is inadequate for OSM's multi-use data model. If you think topology can be inferred or derived from position, check out the various list archives for discussions on duplicate node removal, powerlines, and layers. You'll find a lot of good information on it. - Alan ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Announcing: Simple Map Editor (GSoC 2010)
On 20.08.10 05:45, Michael Daines wrote: - I can only add predefined tags - I think there should be a more common way to add tags. The best way is to simplify adding relevant tags by using data mining (which tags are often used together with the tags already defined?) to propose new tags instead of manually predefining tags! Or have you already implemented it this way and the test data set just lacks some data? Anyway, I (not a beginner) want to be able to add any tag I want. The idea is that as a beginner you wouldn't yet be sure what tags to add anyway. (Or wouldn't even yet care how OSM uses tags or nodes, etc.) This is more of a design decision than a technical limitation. Of course, it is about showing an inexperienced user what makes sense by using intelligent predefinitions based on algorithmic choice. Nobody needs to care about features or even tags and keys ;) - I want to be able to zoom out to see the whole geometry. Right now the editor loads map data through the standard /map bbox query. Loading a lot of map data when zoomed out would be slow (it is already slow on the production API), but perhaps in the future, XAPI could be used to only load the kinds of things the editor can actually do things with? You only need to reload data when the user unselects a feature. Zooming out is just to get an overview. Anyway, this is just a detail. - I agree with Sebastian Klein: Do not force the user to cancel or save, but save the whole as one changeset. How do you see this working? Just opening a changeset at the beginning of the editing session and letting it close automatically? Or would it be reflected in the interface? You select a feature, it gets highlighted, the tag box is opened, then you edit tags. As soon as you unselect the feature by selecting another one or clicking anywhere, your changes are saved on the client; a red SAVE button appears somewhere after your first edit. That's it.
No need to explain changeset or anything to the user ;) I want to avoid presenting the concept of the changeset and also make it clear that some action is taking place when editing. I don't want the user's edits to be silently transmitted to the servers without them realizing it. ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
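The "propose tags from data mining" idea Andreas describes can be prototyped in a few lines: over a corpus of tagged objects, count which keys co-occur with the keys the user has already set, and rank them. A toy sketch (the sample corpus is made up for illustration):

```python
from collections import Counter

def suggest_keys(corpus, present_keys, top=3):
    """Rank keys that co-occur with the already-set keys across a corpus
    of tagged objects, skipping keys the user has already used."""
    present = set(present_keys)
    scores = Counter()
    for tags in corpus:
        if present & tags.keys():  # this object shares at least one key
            scores.update(k for k in tags if k not in present)
    return [k for k, _ in scores.most_common(top)]

corpus = [
    {"amenity": "restaurant", "name": "A", "cuisine": "italian"},
    {"amenity": "restaurant", "name": "B", "cuisine": "pizza", "phone": "1"},
    {"highway": "residential", "name": "C"},
]
print(suggest_keys(corpus, ["amenity"]))
# ['name', 'cuisine', 'phone']
```

A real implementation would mine co-occurrence counts from the full planet (or Taginfo-style statistics) rather than recompute them per request, but the ranking logic stays this simple.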
Re: [OSM-dev] Applying outer way attributes to multi-polygons
http://lists.openstreetmap.org/pipermail/dev/2009-July/016215.html On 14.08.10 11:52, M∡rtin Koppenhoefer wrote: 2010/8/2 Ben Supnik bsup...@xsquawkbox.net: - If the outer multipolygon only has one way, then _maybe_ that way might contain all of the attributes I care about? This second one makes me nervous because I could have a state park way where the boundary of the park is a water way with an island in it ... the island (inner way) might be meant to cut a hole in some attributes (like water) but not others (like park). I think that most logical would be to 1. put tags for the whole (e.g. park) on the outer way 2. put tags for the object (outer minus inner) in the relation. 1. possibly requires partially duplicated ways for cases where otherwise several outer ways would be needed (tagging different area-features and way-features on an open way might cause problems) 2. might cause problems when the outer way and the relation have incompatible tags. cheers, Martin ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] New OSM binary fileformat implementation.
What about some metrics (performance, size)? Data is the same, whether binary or not. So binary really has to pay off significantly. Am 01.08.10 13:39, schrieb Brett Henderson: On Sun, Aug 1, 2010 at 7:34 PM, Erik Johansson erjo...@gmail.com mailto:erjo...@gmail.com wrote: On Sun, Aug 1, 2010 at 2:35 AM, Brett Henderson br...@bretth.com mailto:br...@bretth.com wrote: On Sun, Aug 1, 2010 at 2:26 AM, Frederik Ramm frede...@remote.org mailto:frede...@remote.org wrote: Scott, others, Scott Crosby wrote: I would like to announce code implementing a binary OSM format that supports the full semantics of the OSM XML. [...] The changes to osmosis are just some new tasks to handle reading and writing the binary format. [...] This was 3 months ago. What's the status of this project? Are people actively using it? Is it still being developed? Can the Osmosis tasks be used in the new Osmosis code architecture (see over on osmosis-dev) that Brett has introduced with 0.36? I'm curious about this as well. The main reason for me introducing the new project structure was to facilitate the integration of new features like this. They're relatively easy to add (some Ant and Ivy foo required ...), [...] The code hasn't changed a lot, but the build processes have. Well that's one of the thing Scott said he had no clue on how to do. From Scotts mail: Scott Crosby: // TODO's Probably the most important TODO is packaging and fixing the build system. I have no almost no experience with ant and am unfamiliar with java packaging practices, so I'd like to request help/advice on ant and suggestions on how to package the common parsing/serializing code so that it can be re-used across different programs. I'll help incorporate this into the rest of Osmosis. There's a few things to work through though. * Is there a demand for the binary format in its current incantation? I'm not keen to incorporate it if nobody will use it. * Can the code be managed in the main OSM Subversion repo instead of GIT? 
* Is any code reuse between Osmosis and other applications required? If only the Osmosis tasks will be managed in the Osmosis project and a component with common functionality managed elsewhere then I need to know how the common component will be managed and published for consumption in Osmosis. Brett ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Postgres amazing spatial indices
To make it even faster: Can you scale Postgres indexes across several servers? Andi On 28.07.10 09:48, Juan Lucas Domínguez Rubio wrote: After exporting the OSM planet to Postgres, I get - among others - a 43 GB table with lines, and this random spatial query only takes one second to respond :-D SELECT name FROM p100618_line WHERE ST_Intersects(GeometryFromText('POLYGON((-111.296 47.503,-111.296 47.51,-111.27 47.51,-111.27 47.503,-111.296 47.503))', 4326), way); Hooray for spatial indices! Regards, Juan Lucas ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
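Juan Lucas's query builds the bounding-box polygon by hand; a small helper that turns a bbox into the POLYGON WKT used there keeps the ring closed and the coordinate order (lon lat) straight. A sketch (GeometryFromText is the older PostGIS name; newer releases spell it ST_GeomFromText):

```python
def bbox_to_wkt(min_lon, min_lat, max_lon, max_lat):
    """Return a closed POLYGON WKT ring for a lon/lat bounding box,
    suitable for ST_Intersects(GeometryFromText(..., 4326), way)."""
    ring = [
        (min_lon, min_lat), (min_lon, max_lat),
        (max_lon, max_lat), (max_lon, min_lat),
        (min_lon, min_lat),  # repeat the first point to close the ring
    ]
    coords = ",".join(f"{lon:g} {lat:g}" for lon, lat in ring)
    return f"POLYGON(({coords}))"

print(bbox_to_wkt(-111.296, 47.503, -111.27, 47.51))
# POLYGON((-111.296 47.503,-111.296 47.51,-111.27 47.51,-111.27 47.503,-111.296 47.503))
```

The printed WKT is exactly the polygon in the query above; for production code, parameterized queries or PostGIS's ST_MakeEnvelope are safer than string interpolation.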
Re: [OSM-dev] OSM and CouchDB/GeoCouch
What about rewriting this stuff in C? I have written a MySQL importer in C some time ago, so what about reusing its XML parsing part? If you are interested, I'll put it on Github. Andi Am 03.07.10 22:43, schrieb Ian Dees: On Sat, Jul 3, 2010 at 1:17 PM, Nolan Darilek no...@thewordnerd.info mailto:no...@thewordnerd.info wrote: On 07/03/2010 01:09 PM, Nolan Darilek wrote: On 07/02/2010 01:52 PM, Serge Wroclawski wrote: Similarly, Ian Dees and I have written a server using MongoDB, which also provides functionality such as auto-sharding and built in map/reduce. Is this work available anywhere? How did you find performance to be, and to what uses did you put it? I've done some experiments creating a LibOSM MongoDB backend and found its performance fairly bad, but I don't have the most optimal server for it, and probably didn't use MongoDB to its limits. If you experienced good performance for real-time operations then I'd be very interested in seeing how you managed it so I might adopt the techniques and see if I have any better luck. It seemed to me that a dump of the entire planet would require a substantial server to serve up, so I abandoned the work, but would very much like to revive it if it's at all workable. The code is here: http://github.com/iandees/mongosm It took several days to import a planet file. The majority of the CPU time was spent serializing/deserializing BSON in Python and the Mongo server had very little CPU time so if I use a language with a faster BSON implementation it might be faster. Serge was working on a way to import diff files to maintain mintutely updates. I was working on an HTTP API interface. ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] OSM and CouchDB/GeoCouch
The web frontend is mainly for searching - it needs fast reads and horizontal scaling across several servers. In the backend it needs functions that check for geometric relations or that compute new geometries. It must be able to handle huge, complex geometries. Fast geo indexes for both frontend and backend. I can drop the C (consistency) in CAP. I don't want to use both PostGIS and Couch for the search frontend, so I want to decide by some clear metrics what to use. My plan is to use Couch as soon as I need to scale the frontend, while PostGIS will still be used for complex computations in the backend. However, it would be great if the community could experiment with your code and prove how well a NoSQL solution performs compared to an RDBMS for the OpenStreetMap tool layers. Andi On 02.07.10 00:58, Lars Francke wrote: were there any successful attempts to read OSM data into CouchDB and Geocouch? Does somebody know of a backend? I have done something like that and can provide some code at the end of July (I won't be back home before then). It really is just a different kind of schema. But the exact schema depends on what you want to use it for. Do you want to use it as an alternative to the API/PostgreSQL, or as convenient storage on mobile devices, routing data, mapnik backend etc.? CouchDB is kinda great (even though the documentation sucks) but it's just one out of many tools and may or may not be a good fit. Cheers, Lars ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] OSM and CouchDB/GeoCouch
Hi, were there any successful attempts to read OSM data into CouchDB and Geocouch? Does somebody know of a backend? Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Using relations (or NoSQL?) to save data
On 06.05.10 16:36, Roland Olbricht wrote: On Tuesday, 4 May 2010 22:12:02, Andreas Kalsch wrote: I am still reading some old mailing list posts ... What about a relation with type=data, which is a relation that can include tags and other relations recursively? It is a really good idea to store definitions of tags directly in the database, but relations leave some drawbacks open: - A description might easily surpass 255 characters. Or you might want to use markup (e.g. for a link or for emphasizing things). Right, so what about dropping this limit? Even better for OSM data as well. - Often tags apply only to a part of the world. Or, even worse, have different meanings in different parts of the world. Think of different maxspeed restrictions on motorways in different parts of the world. But tags and values are global; semantics can vary, of course. This is our job: to describe a tag depending on language / country / ... so that everybody will understand it by reading the right description, keyed by a country or language code. I'd encourage you to start using those relations now but we should have a more versatile solution with the next API. See http://wiki.openstreetmap.org/wiki/API_v0.7#Classes It seems that the class feature was suggested mainly for the purpose of storing tag descriptions. So isn't it unnecessary work to define its own feature instead of reusing relations and dropping the length limit? Additionally, including a bounding polygon is a bad idea. It's better to reference the relation ID of the multipolygon. Example in OSM XML: <relation ...> <tag k="type" v="data"/> <tag k="class" v="CountryCodeToFeatureMapping"/> <member member_class="relation" member_id="12345" role="de"/> <member member_class="relation" member_id="23456" role="fr"/> </relation> <relation ...>
<tag k="type" v="data"/> <tag k="class" v="ValueDef"/> <tag k="key" v="highway=motorway"/> <tag k="description:de" v="German"/> <tag k="implies:fr" v="highway=trunk"/> <tag k="photo" v="http://..."/> </relation> So we reuse OSM's data as well to define the area where the property applies. A simple class to map from country code to admin area ID is set up, so that we can demonstrate that we have got the best data - we can show exactly what a country code means geometrically. Let's think about setting up MongoDB to handle nested objects and schema changes more easily. In MongoDB, collections are the equivalent of tables, so we would have several collections or classes, examples in JSON: 1) AreaLabel, example: { de: {class: 'Relation', id: 12345}, ... } 2) KeyDef 3) ValueDef, example: { keyDef: {key:'highway'}, value: 'motorway', description: { de: '...', fr: '...' }, impliesValueDefs: { fr: { keyDef: {key:'highway'}, value: 'trunk' } }, photo: { __: 'http://...' /* '__' means global */ } } 4) Preset 5) Group ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Using relations to save data
This is what I didn't propose - introducing a new data type to make things more complicated. Semantically, a prototype-based object system - known from ECMAScript - is related to my concept, but my idea is just about data and using OSM's existing structures and functionality to save data which should be machine readable. Using prototypes for common features would complicate things a lot, because mappers would need to use an additional definition next to the feature list in the wiki. On 05.05.10 01:39, Scott Crosby wrote: On Tue, May 4, 2010 at 3:12 PM, Andreas Kalsch andreaskal...@gmx.de wrote: I am still reading some old mailing list posts ... What about a relation with type=data, which is a relation that can include tags and other relations recursively? This relation has no geometric reference but it is just there to save data. So we could reuse relations for a purpose which is not the main OSM one - instead of expensively defining a new data type. This type could be used to save tag definitions ( http://wiki.openstreetmap.org/wiki/Machine-readable_Map_Feature_list ) regularly in the database to be able to access the data with the API easily, which already provides versioning and changesets: FYI: This sounds like a prototype-based object system. http://en.wikipedia.org/wiki/Prototype-based_programming An adaptation of a prototype-based system for OSM might be something like: <prototype id="..."> <inherits id="..."/> <inherits id="..."/> <tag .../> <tag .../> </prototype> <way id="..."> <inherits id="..."/> <inherits id="..."/> <tag .../> <tag .../> </way> I just read [1] below, and it may be productive to look at a prototype-based object model. At a fine-grained level, there might be one prototype for each road, which has the common tags used by that road (such as the name). Each way composing that road then inherits that prototype, possibly adding in additional tags ('bridge'), or inheriting a second prototype, e.g. for a bicycle path, which could represent a shared bridge.
Or at a coarser level, there could also be a highway archetype for all highways, which inherits a road archetype for all roads, and a bikepath archetype for all bike paths, etc. Migration could occur incrementally: make a prototype, move all ways/nodes/etc. into inheriting from it. Repeat. Instead of having a geometric object with some properties, we instead think of objects with some properties (like “this is a museum” and “this has the name Natural History Museum”) and the added property of “this object is positioned at such and such a location”. ... So the geometry is not the object itself, as it is now, but it is just one property of some kind of abstract object. I believe this is indeed the way many pros are doing it - there is an object and the geometry is one of many properties of the object. It is a concept to keep in mind for the more distant future; I don't think we should aim to do it with the current implementation of relations though. Bye Frederik [1] http://www.remote.org/frederik/tmp/towards-a-new-data-model-for-osm.pdf ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Using relations to save data
I am still reading some old mailing list posts ... What about a relation with type=data, which is a relation that can include tags and other relations recursively? This relation has no geometric reference but it is just there to save data. So we could reuse relations for a purpose which is not the main OSM one - instead of expensively defining a new data type. This type could be used to save tag definitions ( http://wiki.openstreetmap.org/wiki/Machine-readable_Map_Feature_list ) regularly in the database to be able to access the data with the API easily, which already provides versioning and changesets: <relation id="..."> <tag k="type" v="data"/> <tag k="class" v="tag-def"/> <tag k="key" v="name"/> <tag k="onway" v="true"/> <tag k="description:de" v="..."/> <tag k="display-name:de" v="..."/> <member type="relation" id="..." role="implies"/> ... </relation> Having a client-side framework with UI to access and change the data according to the model - a counterpart to JOSM - makes sense. I want to add this idea to the proposed uses of relations in the wiki. Andi On 19.02.09 23:35, Frederik Ramm wrote: Hi, Steve Hill wrote: I've been thinking about ways to improve the way objects are tagged in OSM - for a long time I've seen some problems with the way we currently tag things, and I finally got around to writing down some of my thoughts on the subject. I *had* been wondering; we had the usual recurring left-right tagging discussion but the bi-monthly Absolutely New And Improved Tagging Scheme was overdue for a while. Thanks for jumping in and helping us out ;-) Your concept is utterly unworkable of course with the current software landscape, but if we leave that aside for a moment, then you do have an interesting point, in fact one that was raised by Jochen and myself in our April 2007 data model paper[1], back when we were still young and believed we could change the world.
Quoting from that paper: Instead of having a geometric object with some properties, we instead think of objects with some properties (like “this is a museum” and “this has the name Natural History Museum”) and the added property of “this object is positioned at such and such a location”. ... So the geometry is not the object itself, as it is now, but it is just one property of some kind of abstract object. I believe this is indeed the way many pros are doing it - there is an object and the geometry is one of many properties of the object. It is a concept to keep in mind for the more distant future; I don't think we should aim to do it with the current implementation of relations though. Bye Frederik [1]http://www.remote.org/frederik/tmp/towards-a-new-data-model-for-osm.pdf ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Nominatim json -- coment
I miss a licence notice in every map tile ;) On 26.02.10 15:48, Peter Körner wrote: Frederik Ramm wrote: Brian, Just at the moment I'm at a loss to know why I spend my free (unpaid) time developing a tool when a small mistake gets me a load of abuse. Sorry, I didn't want to bug you; I just noticed that the tool did not work any more and wanted to talk here about why this is so and how we can fix it. I am always hesitant to change someone else's code without talking to him first. Peter, since you seem to be familiar with the JSON output, can you suggest a method to put the attribution in the file without offending the JSON spec? Just as it is now it's fine: [{"place_id":538008,"licence":"Data Copyright OpenStreetMap Contributors, Some Rights Reserved. CC-BY-SA 2.0.","osm_type":"node","osm_id": . . . Thank you all for quickly fixing this! Peter ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] ocean tiles are dry
It seems the US is doing land reclamation ... On 24.02.10 00:50, Apollinaris Schoell wrote: any ideas what is going on here? http://www.openstreetmap.org/?lat=35.2&lon=-78.9&zoom=5&layers=B000FTFT http://www.openstreetmap.org/?lat=10.28&lon=-78.83&zoom=6&layers=B000FTF ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Osmosis --fast-read-xml
Yeah, now it's working. But I still miss --fast-read-xml - it isn't supported. Andi On 22.02.10 13:49, Brett Henderson wrote: This has been fixed in the latest version (0.34) of Osmosis. On Tue, Feb 16, 2010 at 11:17 AM, Brett Henderson br...@bretth.com mailto:br...@bretth.com wrote: I'll take a look as soon as I can ... sorry for the inconvenience. On Mon, Feb 15, 2010 at 10:38 PM, André Riedel riedel.an...@gmail.com mailto:riedel.an...@gmail.com wrote: forgot to send it to the list ;-) -- Forwarded message -- Date: 2010/2/14 Subject: Re: [OSM-dev] Osmosis error 2010/2/14 Andreas Kalsch andreaskal...@gmx.de mailto:andreaskal...@gmx.de: Error occurs in Osmosis 0.33:

/backup/projects/3rdparty/osmosis/bin/osmosis --rx file=/backup/projects/data/gos/andi/bremen.osm --wd database=b validateSchemaVersion=no ... populateCurrentTables=no

java.io.FileNotFoundException: /backup/projects/3rdparty/osmosis/config/plexus.conf (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:386)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:31)

The plexus.conf can be found here: http://svn.openstreetmap.org/applications/utils/osmosis/trunk/config/ but the osmosis output isn't better with it:

===
org.codehaus.plexus.classworlds.launcher.ConfigurationException: Unhandled configuration (1): ?main is org.openstreetmap.osmosis.core.Osmosis from osmosis.core
    at org.codehaus.plexus.classworlds.launcher.ConfigurationParser.parse(ConfigurationParser.java:303)
    at org.codehaus.plexus.classworlds.launcher.Configurator.configure(Configurator.java:135)
    at org.codehaus.plexus.classworlds.launcher.Launcher.configure(Launcher.java:132)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:405)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:31)
===

Ciao André ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Osmosis error
Error occurs in Osmosis 0.33:

/backup/projects/3rdparty/osmosis/bin/osmosis --rx file=/backup/projects/data/gos/andi/bremen.osm --wd database=b validateSchemaVersion=no ... populateCurrentTables=no

java.io.FileNotFoundException: /backup/projects/3rdparty/osmosis/config/plexus.conf (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(Unknown Source)
    at java.io.FileInputStream.<init>(Unknown Source)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:386)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
    at org.codehaus.classworlds.Launcher.main(Launcher.java:31)

What is this file and why do I need it? Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Osmosis and current tables
Is there a way to make Osmosis populate and refresh just the current tables and ignore the history tables? It would be faster, and it is easier to work with tables that have exactly one version - the most recent - for every feature. Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Errors with populateCurrentTables=yes on refresh based on partial dataset
Using populateCurrentTables=yes on refresh will make Osmosis stop. When I use populateCurrentTables=no, everything is OK, but then I don't have updated data in the current tables. What can I do about that?

Feb 14, 2010 4:15:28 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version 0.31.1
Feb 14, 2010 4:15:29 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Feb 14, 2010 4:15:29 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Launching pipeline execution.
Feb 14, 2010 4:15:29 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Pipeline executing, waiting for completion.
ERROR: insert or update on table "current_way_nodes" violates foreign key constraint "current_way_nodes_node_id_fkey"
DETAIL: Key (node_id)=(204308) is not present in table "current_nodes".
STATEMENT: INSERT INTO current_way_nodes (id, node_id, sequence_id) VALUES ($1, $2, $3)
Feb 14, 2010 4:15:50 PM org.openstreetmap.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion
SEVERE: Thread for task 1-rxc failed
org.openstreetmap.osmosis.core.OsmosisRuntimeException: Unable to insert current way node with way id=722 and node id=204308.
    at org.openstreetmap.osmosis.core.apidb.v0_6.impl.ChangeWriter.write(ChangeWriter.java:755)
    at org.openstreetmap.osmosis.core.apidb.v0_6.impl.ActionChangeWriter.process(ActionChangeWriter.java:56)
    at org.openstreetmap.osmosis.core.container.v0_6.WayContainer.process(WayContainer.java:61)
    at org.openstreetmap.osmosis.core.apidb.v0_6.ApidbChangeWriter.process(ApidbChangeWriter.java:67)
    at org.openstreetmap.osmosis.core.xml.v0_6.impl.ChangeSourceElementProcessor$ChangeSinkAdapter.process(ChangeSourceElementProcessor.java:135)
    at org.openstreetmap.osmosis.core.xml.v0_6.impl.WayElementProcessor.end(WayElementProcessor.java:109)
    at org.openstreetmap.osmosis.core.xml.v0_6.impl.OsmChangeHandler.endElement(OsmChangeHandler.java:96)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
    at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source)
    at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
    at javax.xml.parsers.SAXParser.parse(Unknown Source)
    at javax.xml.parsers.SAXParser.parse(Unknown Source)
    at org.openstreetmap.osmosis.core.xml.v0_6.XmlChangeReader.run(XmlChangeReader.java:107)
    at java.lang.Thread.run(Unknown Source)
Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on table "current_way_nodes" violates foreign key constraint "current_way_nodes_node_id_fkey"
  Detail: Key (node_id)=(204308) is not present in table "current_nodes".
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:343)
    at org.openstreetmap.osmosis.core.apidb.v0_6.impl.ChangeWriter.write(ChangeWriter.java:752)
    ... 20 more
Feb 14, 2010 4:15:50 PM org.openstreetmap.osmosis.core.Osmosis main
SEVERE: Execution aborted.
org.openstreetmap.osmosis.core.OsmosisRuntimeException: One or more tasks failed.
    at org.openstreetmap.osmosis.core.pipeline.common.Pipeline.waitForCompletion(Pipeline.java:146)
    at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:85)
    at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:30)
___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Osmosis and current tables - temporary solution
Deleting all old versions before inserting the new one via triggers will slow down the script a bit, but the checks for the latest version can be omitted and you can use populateCurrentTables=no:

CREATE OR REPLACE FUNCTION nodes_insert() RETURNS trigger AS $$
DECLARE
BEGIN
    DELETE FROM node_tags WHERE id = NEW.id;
    DELETE FROM nodes WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION ways_insert() RETURNS trigger AS $$
DECLARE
BEGIN
    DELETE FROM way_tags WHERE id = NEW.id;
    DELETE FROM way_nodes WHERE id = NEW.id;
    DELETE FROM ways WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION relations_insert() RETURNS trigger AS $$
DECLARE
BEGIN
    DELETE FROM relation_tags WHERE id = NEW.id;
    DELETE FROM relation_members WHERE id = NEW.id;
    DELETE FROM relations WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';

CREATE TRIGGER node_insert BEFORE INSERT ON nodes FOR EACH ROW EXECUTE PROCEDURE nodes_insert();
CREATE TRIGGER way_insert BEFORE INSERT ON ways FOR EACH ROW EXECUTE PROCEDURE ways_insert();
CREATE TRIGGER relation_insert BEFORE INSERT ON relations FOR EACH ROW EXECUTE PROCEDURE relations_insert();

On 14.02.10 15:40, Andreas Kalsch wrote: Is there a way to make Osmosis populate and refresh just the current tables and ignoring the history tables? It would be faster and it is easier to work with tables which have exactly one version - the recent - for every feature. Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] When are ways/relations in a changeset?
When I change the position of a node, will the way be in the changeset, too? When I change the members of a relation (without adding / removing members), will the relation be in the changeset, too? I currently have no data to test these cases - does anybody know what the results of these operations are? Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] This #petition needs your votes: Vote for legal use of Google's aerial imagery for #OpenStreetMap tracing!
I see - I misunderstood the 0 votes on the left-hand side, which are just my own votes. But 1452 is not very much, and this petition cannot be spread widely enough. Andi Thomas Wood wrote: Erm, it's been all over the mailing lists about 2 weeks ago, it was featured on the OpenGeoData blog, and it has 1,452 positive votes as I write this. 2009/10/3 Andreas Kalsch andreaskal...@gmx.de: I have found this in the OpenStreetMap news, and I wondered why I had given the first vote for it. http://twitter.com/kalsch/status/4582749178 Please spread this! Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] This #petition needs your votes: Vote for legal use of Google's aerial imagery for #OpenStreetMap tracing!
I have found this in the OpenStreetMap news, and I wondered why I had given the first vote for it. http://twitter.com/kalsch/status/4582749178 Please spread this! Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Problem retrieving wiki pages - comparison
Yes, of course - but if the config changes next week, how will I know? I would have to do an extra check. I would like to download OSM wiki pages the same way I download every other website. Roland Olbricht wrote: Anyone who can solve this puzzle, so that we can download with simple commands ?-) What about wget -O - http://wiki.openstreetmap.org/wiki/Map_Features | gunzip > file ? Cheers, Roland ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Problem retrieving wiki pages - solution
Explicitly omit accepted encodings:

1) wget --header='Accept-Encoding: ' 'http://wiki.openstreetmap.org/wiki/Main_Page'

2) <?php
$streamContext = stream_context_create(array(
    'http' => array(
        'header' => "Accept-Encoding: \r\n"
    )
));
file_get_contents('http://wiki.openstreetmap.org/wiki/Main_Page', 0, $streamContext);

Roland Olbricht wrote: Anyone who can solve this puzzle, so that we can download with simple commands ?-) What about wget -O - http://wiki.openstreetmap.org/wiki/Map_Features | gunzip > file ? Cheers, Roland ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
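The same workaround in Python, for comparison - send an empty Accept-Encoding header so the server replies uncompressed. A sketch only: urllib does not advertise gzip by default anyway, but being explicit mirrors the wget/PHP variants above.

```python
import urllib.request

# Empty Accept-Encoding = "no content codings accepted, please send plain bytes".
req = urllib.request.Request(
    "http://wiki.openstreetmap.org/wiki/Main_Page",
    headers={"Accept-Encoding": ""},
)
# html = urllib.request.urlopen(req).read().decode("utf-8")
```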
[OSM-dev] Problem retrieving wiki pages
When I retrieve wiki pages via wget I get a confusing bunch of characters: ?????[?$??=??~ry?te?jwUwg??G?G?wF??v?lA?| ??(AJ(?5???=|??'???{?_?3#3w????n77??l?|???}?ëO???5????O/?{em?l??W???o)t?i(??w-kkdz3?? '??G???K???V???a(???^p?_??:74?{zz*joQ?\??J? ??|k?h???S7v?5???e?}?z??o;-O73w???b?KlS_[?Fn|??7??- ?N??{?5r}7t? ??p??w?ajv?w??? Dz-gf?q?x????3g0?OP$??w?l4?8[???,t?+N??|????{??? etc. The funny thing is that when I open the file in TextWrangler, the page's HTML is shown properly - every other way to view the file fails. The default text editor or Firefox will just show the unrecognizable characters. Adjusting the request headers or the output encoding does not help either. Every other website I have tried works as expected. Does somebody know what is going on here? Some days ago everything still worked properly. Best, Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
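Byte salad like the above is what gzip-compressed HTML looks like when shown raw (which also explains why adjusting text encodings cannot help). A sketch that detects the gzip magic bytes and decompresses - a hypothetical helper, not part of any tool mentioned in this thread:

```python
import gzip

def maybe_gunzip(data: bytes) -> bytes:
    """Gunzip the payload if it starts with the gzip magic bytes 0x1f 0x8b."""
    if data[:2] == b"\x1f\x8b":
        return gzip.decompress(data)
    return data

# Simulate a server that gzips its response regardless of Accept-Encoding:
page = maybe_gunzip(gzip.compress(b"<html>...</html>"))
```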
[OSM-dev] Postgres and Osmosis
When I call the binary, JDBC will not be found. When I call it like this ...

java --classpath 3rdparty/osmosis/osmosis.jar:3rdparty/osmosis/lib/compile/postgresql-8.3-603.jdbc4.jar:/usr/lib/jvm/java-1.5.0-gcj-4.3-1.5.0.0/jre/lib/rt.jar org.openstreetmap.osmosis.core.Osmosis --rx temp/berlin.osm --wd database=osm_test password=..

... this is the result:

30-Jul-09 9:42:41 AM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version 0.31.1
30-Jul-09 9:42:42 AM org.openstreetmap.osmosis.core.Osmosis main
SEVERE: Execution aborted.
java.lang.NoClassDefFoundError: org.openstreetmap.osmosis.core.TaskRegistrar
    at java.lang.Class.initializeClass(libgcj.so.90)
    at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:73)
    at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:30)
Caused by: java.lang.ClassNotFoundException: org.java.plugin.PluginClassLoader not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:3rdparty/osmosis/osmosis.jar,file:3rdparty/osmosis/lib/compile/postgresql-8.3-603.jdbc4.jar,file:/usr/lib/jvm/java-1.5.0-gcj-4.3-1.5.0.0/jre/lib/rt.jar], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
    at java.net.URLClassLoader.findClass(libgcj.so.90)
    at java.lang.ClassLoader.loadClass(libgcj.so.90)
    at java.lang.ClassLoader.loadClass(libgcj.so.90)
    at java.lang.Class.initializeClass(libgcj.so.90)
    ...2 more

It seems that some standard Java JARs are still missing from the classpath. Does someone know which ones? Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Postgres and Osmosis
It's the Java 1.5 version from http://cloudmade-osmosis.s3.amazonaws.com/api0.6-java1.5 ... Debian etch delivers packages for Java 1.5 by default. Brett Henderson schrieb: Hi Andreas, How did you obtain osmosis? I assume you're not using a copy obtained from the following location: http://wiki.openstreetmap.org/index.php/Osmosis Brett Andreas Kalsch wrote: When I call the binary, JDBC will not be found. When I call it like this ... java --classpath 3rdparty/osmosis/osmosis.jar:3rdparty/osmosis/lib/compile/postgresql-8.3-603.jdbc4.jar:/usr/lib/jvm/java-1.5.0-gcj-4.3-1.5.0.0/jre/lib/rt.jar org.openstreetmap.osmosis.core.Osmosis --rx temp/berlin.osm --wd database=osm_testpassword=.. ... there's the result: 30-Jul-09 9:42:41 AM org.openstreetmap.osmosis.core.Osmosis run INFO: Osmosis Version 0.31.1 30-Jul-09 9:42:42 AM org.openstreetmap.osmosis.core.Osmosis main SEVERE: Execution aborted. java.lang.NoClassDefFoundError: org.openstreetmap.osmosis.core.TaskRegistrar at java.lang.Class.initializeClass(libgcj.so.90) at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:73) at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:30) Caused by: java.lang.ClassNotFoundException: org.java.plugin.PluginClassLoader not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:3rdparty/osmosis/osmosis.jar,file:3rdparty/osmosis/lib/compile/postgresql-8.3-603.jdbc4.jar,file:/usr/lib/jvm/java-1.5.0-gcj-4.3-1.5.0.0/jre/lib/rt.jar], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}} at java.net.URLClassLoader.findClass(libgcj.so.90) at java.lang.ClassLoader.loadClass(libgcj.so.90) at java.lang.ClassLoader.loadClass(libgcj.so.90) at java.lang.Class.initializeClass(libgcj.so.90) ...2 more It seems that there are still some standard Java JARs missing in the classpath. Does someone know which ones? 
Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Postgres and Osmosis
Thank you, it's working. I didn't intend to install the GNU stuff ... Andreas Kalsch schrieb: It's the Java 1.5 version from http://cloudmade-osmosis.s3.amazonaws.com/api0.6-java1.5 ... Debian etch delivers packages for Java 1.5 by default. Brett Henderson schrieb: Hi Andreas, How did you obtain osmosis? I assume you're not using a copy obtained from the following location: http://wiki.openstreetmap.org/index.php/Osmosis Brett Andreas Kalsch wrote: When I call the binary, JDBC will not be found. When I call it like this ... java --classpath 3rdparty/osmosis/osmosis.jar:3rdparty/osmosis/lib/compile/postgresql-8.3-603.jdbc4.jar:/usr/lib/jvm/java-1.5.0-gcj-4.3-1.5.0.0/jre/lib/rt.jar org.openstreetmap.osmosis.core.Osmosis --rx temp/berlin.osm --wd database=osm_testpassword=.. ... there's the result: 30-Jul-09 9:42:41 AM org.openstreetmap.osmosis.core.Osmosis run INFO: Osmosis Version 0.31.1 30-Jul-09 9:42:42 AM org.openstreetmap.osmosis.core.Osmosis main SEVERE: Execution aborted. java.lang.NoClassDefFoundError: org.openstreetmap.osmosis.core.TaskRegistrar at java.lang.Class.initializeClass(libgcj.so.90) at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:73) at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:30) Caused by: java.lang.ClassNotFoundException: org.java.plugin.PluginClassLoader not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:3rdparty/osmosis/osmosis.jar,file:3rdparty/osmosis/lib/compile/postgresql-8.3-603.jdbc4.jar,file:/usr/lib/jvm/java-1.5.0-gcj-4.3-1.5.0.0/jre/lib/rt.jar], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}} at java.net.URLClassLoader.findClass(libgcj.so.90) at java.lang.ClassLoader.loadClass(libgcj.so.90) at java.lang.ClassLoader.loadClass(libgcj.so.90) at java.lang.Class.initializeClass(libgcj.so.90) ...2 more It seems that there are still some standard Java JARs missing in the classpath. Does someone know which ones? 
Andi ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] MySQL GIS extensions - some tips
Marcus Wolschon wrote: On Thu, Jul 23, 2009 at 8:35 PM, Andreas Kalsch andreaskal...@gmx.de wrote: How much would that be without filtering? Since I have no clue what the relevant tags and relations are for you. Relevant features are features which represent a GeoObject. Nodes which are just part of ways and ways which are just part of multipolygons are not relevant in this context. Ah, I see. As I need the ways that share a given node, and tags on nodes in ways are very important, I would only store a POINT on the nodes but not use any GIS extension on the ways (such as POLYLINE). Do I lose precision if I store the lat+lon of a node as a POINT and convert it back? You can save them as int or, better, as float. I use float - you will not lose precision and don't have to convert. I'm asking because I have an edit button that exports the currently visible area and starts the latest JOSM with it, so co-drivers can fix mistakes in the map right as the driver encounters them. Marcus ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Mediawiki API
Tobias Knerr wrote: Andreas Kalsch wrote: What about installing the Mediawiki API for the OpenStreetMap wiki? This one? http://wiki.openstreetmap.org/api.php Thanks ;) I thought you used the standard endpoint path /w/api.php ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] MySQL GIS extensions - some tips
Marcus Wolschon wrote: On Wed, Jul 22, 2009 at 4:25 PM, Andreas Kalsch andreaskal...@gmx.de wrote: I don't render maps with it, so I don't know how it scales. But Mapnik will connect more easily to Postgres/PostGIS. I will outsource rendering for my project. Note: I don't use Mapnik. I was talking about _interactive_ rendering. It's a routing application and I'm just testing whether offering MySQL as an additional supported local map format works out. Other offered formats are OsmBin, the H2 embedded database, xml files (for testing), in-memory, ... OK, I am sure this is scalable - but you should compare yourself. I haven't done that so far. Can you decompose a POINT into lat+lon in an SQL query? If not, how much space is wasted by having all coordinates twice? Yes - you can decompose them, with X() and Y(). Do you have to store a POINT AND lat+lon as separate columns? No - you don't have to. But I store them separately in two tables because I don't want all nodes to be gisified. Storing lat/lon as int needs 8 bytes; a POINT uses 20 bytes. So if you don't need to index a point, save it as lat/lon. But I just put relevant features into the geo database - I save nodes and ways which have relevant tags, and relations as GeometryCollections and MultiPolygons. It makes no sense to put nodes and ways into the GIS table which are just parts of ways/relations and do not play a role of their own. Result: 5 of 17.3 GB (for Europe) is GIS data. I use the GIS table for analysis and re-computing. It pays off. How much would that be without filtering? Since I have no clue what the relevant tags and relations are for you. Relevant features are features which represent a GeoObject. Nodes which are just part of ways and ways which are just part of multipolygons are not relevant in this context. How many features are relevant in their own right: - nodes 4 % - ways 99 % - relations 81 % ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
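The 8-byte figure above can be illustrated with a fixed-point encoding: two 4-byte integers scaled by 1e7 (the precision the OSM API itself uses) round-trip without loss. A sketch independent of any particular database:

```python
import struct

def pack_latlon(lat: float, lon: float) -> bytes:
    """Encode a coordinate pair as two big-endian 4-byte signed ints (8 bytes)."""
    return struct.pack(">ii", round(lat * 1e7), round(lon * 1e7))

def unpack_latlon(blob: bytes):
    """Decode the 8-byte blob back to (lat, lon) with 7 decimal places."""
    lat_i, lon_i = struct.unpack(">ii", blob)
    return lat_i / 1e7, lon_i / 1e7

blob = pack_latlon(52.5170365, 13.3888599)
```

Whether 8 bytes of lat/lon or a 20-byte POINT is the better trade-off then depends only on whether you need a spatial index on that column, as argued above.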
Re: [OSM-dev] Variable scale maps
This would indeed be possible with current tiles - take a look at http://maps.cartifact.com/ - it is built with Flash, but you could easily build it with JS, too. Christoph Boehme wrote: I had a similar thought a couple of weeks ago when I tried to find my way using the map on the small screen of my hand-held GPS. Afterwards, Rob Annable told me about Bendy Maps (see http://schulzeandwebb.com/hat/ for an example). It's not exactly what I had in mind but still very cool :-) Christoph Kelly Jones wrote: Most maps of small areas have a constant scale, e.g.: 100 pixels = 1 mile. Has anyone created maps with variable scales? An example would be: 100 pixels = 1 mile at the edge of the map, but 100 pixels = 100 feet at the center of the map. This would be useful for driving-directions-type maps, where small streets close to the destination are more important than large streets far away from the destination ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] Osmosis and dbType
r...@mesolt11:/ops# osmosis/bin/osmosis --read-xml file=bremen.osm --wd dbType=mysql host=127.0.0.1 database=api06_test user=xx password=xx

Jun 24, 2009 11:52:54 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Osmosis Version 0.31
Jun 24, 2009 11:52:55 PM org.openstreetmap.osmosis.core.Osmosis run
INFO: Preparing pipeline.
Jun 24, 2009 11:52:55 PM org.openstreetmap.osmosis.core.Osmosis main
SEVERE: Execution aborted.
org.openstreetmap.osmosis.core.OsmosisRuntimeException: Argument dbType for task 2-wd was not recognised.
    at org.openstreetmap.osmosis.core.pipeline.common.TaskManagerFactory.createTaskManager(TaskManagerFactory.java:64)
    at org.openstreetmap.osmosis.core.pipeline.common.Pipeline.buildTasks(Pipeline.java:50)
    at org.openstreetmap.osmosis.core.pipeline.common.Pipeline.prepare(Pipeline.java:112)
    at org.openstreetmap.osmosis.core.Osmosis.run(Osmosis.java:79)
    at org.openstreetmap.osmosis.core.Osmosis.main(Osmosis.java:30)

How can I make Osmosis accept the dbType argument? ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
[OSM-dev] A consistent format for the multipolygon relation
I don't like this way of discussion - it leads nowhere ... The point I wanted to make was to start an initiative to get multipolygons into the right format, because the quality of OSM data is crucial for all projects which use it. A more consistent definition of multipolygons is an important step towards making it easier for everyone (including the map renderers) to build on top of the data. I don't want anything served on a silver platter either - just bite-sized data. Pretty easy, huh? Best, Andi Wolfgang Schreiter wrote: Frederik Ramm frede...@remote.org wrote in message news:4a35459c.7070...@remote.org... Hi, Wolfgang Schreiter wrote: the style of your response suggests to me that for some reason you take this personally. Not the case; I only found it somewhat condescending of someone who has not made much of an appearance in OSM at all to say that something (whatever it might be) would be required for OSM if it takes itself seriously, implying that we're just a bunch of clowns at the moment. All right, so I haven't written a book, just put in a couple of hundred hours on top of a full-time job. And I cannot remember having mentioned clowns at all. We have had our share of professionals offering their advice, but it was hardly ever advice that came from knowing OSM well - only advice transferred from other, usually non-crowdsourced, non-open, non-spare-time, and non-hobbyist projects, based on the totally unreflected assumption that as OSM grew up it would certainly have to do as the professionals do. I've seen a lot of good projects go down because people were too sure of themselves and felt they had no need to listen to advice. The success of osm depends on the ability of software to make use of the data, on the relative simplicity of producing such software, and on the possibility for end-users to understand the provided data and form a mental model of it. No. OSM's success on the user side is absolutely non-critical. Pray nobody reads that. 
For someone who owns a business, this is a disastrous statement, even if it were true. Interest for what we do is so big that everyone is eager to incorporate our data as soon as practically possible. OSM data is being converted in all kinds of formats and data models without us having to move at all. Our part in this is to make sure that the community remains intact, that mappers join the project and keep with it, that our body of data grows and is kept in order. That is the crucial bit - not how easy it is to use our data in a run-of-the-mill GIS system. That bit is being accounted for by anyone with an old-style GIS background and some programming skills. I'm afraid I haven't made myself clear. Extracting the data may be run-of-the-mill, but interpreting it isn't. The more tags we have, the more of an expert system it will take to figure out what they mean. The same holds for those who join and/or try to keep the data in order. If you've read the spec I mentioned, you will know that its geometry model goes way beyond what can be usefully tagged at the moment, including a clear polygon definition and collections of linestrings and polygons, just to name the most important. As I have pointed out, those linestrings do not have topology and thus are rather useless for what we want to do. The spec you quoted is a geometry spec, not a topology spec. You will be able to draw maps with that, but you won't be able to do routing if you are too fixated on geometry alone. Look to GDF if you're desperate for an ISO standard that is nearer to what OSM tries to do. OSM currently isn't able to clearly define an area. Whatever highflying ideas those in the inner circle may have, they'd better face the realities first. And it would indeed be wise to support what the industry out there is already doing. I reject the idea that anything the industry is doing is worth following. Worth looking at, perhaps. These are not idiots, they're doing this for a reason, and it's called demand. 
Or, in case that was still not concrete enough: in Austria alone, there are currently on the order of 750 geometries that are perfectly valid in osm but not digestible by quite a few GIS-enabled databases How sad. Luckily our own software nevertheless works with them. Maybe those GIS-enabled databases should be improved ;-) Our own software gets patched up on a daily basis for everything it can't do. That's a poor excuse for a lack of design, and a rather expensive method of operation where programming power costs money. *including our own*. As I have already pointed out to you, OSM does not use a GIS-enabled database. No, osm doesn't, it's just a pile of data. Users of that data, including and foremost people writing tools for osm, do. Doesn't
Re: [OSM-dev] Gisify relations
Don't understand me wrong - I like OSM's data model as it is. I have studied the OpenGIS features, and it makes sense to build a system on top of OpenGIS if you are interested in indexing the geometries of OSM objects. It is all a question of data representation. So in the end it would be great if all multipolygons fitted the new standard ( http://wiki.openstreetmap.org/wiki/Multipolygon#Advanced_multipolygons ), where MultiPolygons are supported. Then it is much easier to transform OSM multipolygons into OpenGIS MultiPolygons. That's it. I just want it to be easier for developers to build on top of OSM. It's not about OSM supporting official standards - but the standards built inside OSM should be consistent. The first algorithm I implemented relied on the order of the ways as LineStrings which make up the rings of the polygon. This only works properly for recently created objects. For older ones I have to create the rings by combining the ways in the right order myself. This algorithm is not as performant because I have to find the order myself, or even have to find out which inners belong to which outer. Conclusion: have a robot change all multipolygon relations to fit the new model - so the software built on top of OSM (starting with the map renderers) no longer needs to care about different representations. Andi Frederik Ramm wrote: Hi, Wolfgang Schreiter wrote: I'm also interested in this topic from a quality assurance point of view (identification of impossible/invalid overlaps). We could exchange our ideas and experiences here. However, I'm using Postgres/PostGIS, which has a richer set of geometry functions and is also the database of choice for osm. To be clear, OSM only switched to Postgres a few months ago and had been running Mysql for quite a long time before. Also, OSM really uses Postgres and not PostGIS, i.e. does not make use of any geometry extensions. 
Have you already taken a look at the OGC spec (http://www.opengeospatial.org/standards/sfa)? IMHO this will be the first standard that osm will find impossible to ignore, provided it takes itself seriously. Care to explain why exactly you think SFA is in any way relevant to OSM? It describes geometric objects and their relationships and functions to be called on them - but you don't seriously suggest that OSM should implement an API supporting geometric operations, do you? The SFA spec even contains a description of how to model text attributes including a list of allowable font colours (blanchedalmond is ok, applegreen sadly isn't). Surely something we cannot ignore if we want to take ourselves seriously! On a more serious note, SFA - much like all the traditional GIS stuff I've seen, Shapefiles and PostGIS being other examples - defines curves and surfaces as first-level standalone data types, next to points. In OSM, however, curves and surfaces are one level down from points - they use points as their building blocks. This is required to retain proper inter-object topology (which I do not see SFA speak of - topology for them only comes into play when talking about the shape of an individual object). This means that even if OSM were to adopt SFA in one way or another, representing OSM information in SFA form would mean a data loss, and thus SFA access would be restricted to read-only. Remind me again why our seriousness would depend on working with a standard that cannot even model the richness of our data? Bye Frederik ___ dev mailing list dev@openstreetmap.org http://lists.openstreetmap.org/listinfo/dev
Re: [OSM-dev] Gisify relations
Thanks, I have heard of this script, but I will use a MySQL branch which includes fully featured GIS functions. Question: Could it easily be rewritten to work with MySQL, or is there too much PgSQL-specific stuff inside? To avoid a discussion: I use MySQL because GIS will be just one part of the application, and I need all the easy MySQL stuff like memory tables and the query cache. Andi Iván Sánchez Ortega wrote: On Friday, 12 June 2009, Andreas Kalsch wrote: Is there a script to gisify OSM relations - a script which creates OpenGIS multipolygons or geometrycollections inside PostGIS or MySQL as WKT? Let me point you to osm2pgsql: http://wiki.openstreetmap.org/wiki/Osm2pgsql AFAIK, the script creates a PostGIS DB with fully working geometry columns (not just WKT). You should try it out and see if it fits your needs. Cheers,
Re: [OSM-dev] Gisify relations
Hi, this looks nearly like what I need. But let me specify my requirements a little further. The script needs to be: 1) complete: all interesting nodes, all ways, all relevant relations - MultiPolygons for as many relations as possible. 2) correct: the LineString-or-Polygon decision for ways, and the interpretation of boundaries/multipolygons including holes (inner/exclave roles). It seems that not all multipolygons are tagged in a defined order, but I want to include as many multipolygons as possible. 3) performant: at best, all transformation steps run on the DB server - possible for nodes and ways, but not for boundaries/multipolygons. I have solved nodes and ways, and I think the osm2pgsql script is the best starting point to understand how the boundaries exactly work. I am currently not sure how to interpret the linear rings, because they can be made of several ways. And some relations list an inner before an outer, so I don't know which outer the inners belong to; this can be difficult if there are several outers - unlike the way it is explained in the wiki. And so on ... ;) So thanks for your first replies, I will take a closer look into the scripts now. Andi Frederik Ramm wrote: Hi, Andreas Kalsch wrote: Is there a script to gisify OSM relations - a script which creates OpenGIS multipolygons or geometrycollections inside PostGIS or MySQL as WKT? No, but there's this: http://wiki.openstreetmap.org/wiki/Boundaries.pl It generates .poly files from relations. The .poly files are basically text files with a list of lat/lons, and I guess they could be converted quite easily into something else. In fact, if you do a little bit of Perl, you could use the shapelib to create shapefiles directly from the above script. Bye Frederik
Re: [OSM-dev] Gisify relations
Frederik Ramm wrote: Hi, Andreas Kalsch wrote: 1) complete: 2) correct: 3) performant: It strikes me as odd that someone who wants to write something complete, correct, and performant should deal with OSM. Are you sure you have the right data for your philosophy, or the right philosophy for our data? Correct. I have been into OSM for over a year, and I know that those points are just ideals. It's somewhere in between: as much philosophy as necessary to get as much data as possible. Without this chaos, OSM wouldn't be where it is now. But more well-tuned robot scripts to force some data into order would be a good idea. I have solved nodes and ways, and I think the osm2pgsql script is the best starting point to understand how the boundaries exactly work. Be warned though that osm2pgsql is not exactly a prime example of dealing with complex multipolygons. I am currently not sure how to interpret the linear rings, because they can be made of several ways. And some relations list an inner before an outer, so I don't know which outer the inners belong to. Simply construct rings out of all the members and then check inside which outer ring each inner ring lies. - There's always the danger of people creating intersecting inner and outer rings, but of course that should not happen. This is the point. It seems that the order of ways does not matter. So simply connecting all equal endpoints to get rings, instead of relying on the new tagging scheme, is currently the better practice ... Check out this for more info: http://wiki.openstreetmap.org/wiki/Relation:multipolygon I added some ramblings to the discussion some time ago - but that was another issue. Also note that many people don't use multipolygon relations; instead, if they want a hole in an area, they create two half-donut-shaped areas. 
If you're so bent on correctness and completeness then you will have to detect such cases and form proper polygons with holes from them ;-) There are many cases which are so odd (e.g. ways with tags instead of relations) that I will simply ignore them. I want to focus on the most frequent and logical cases. ;-)
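Frederik's suggestion above - construct rings out of all the members, then check inside which outer ring each inner ring lies - can be sketched roughly like this in Python. This is a simplified illustration, not anyone's production code: it assumes the ways really do close into rings and that joining ways share exact endpoint coordinates.

```python
def assemble_rings(ways):
    """Stitch unordered way segments (lists of points) into closed rings by
    repeatedly joining matching endpoints, regardless of member order."""
    ways = [list(w) for w in ways]
    rings = []
    while ways:
        ring = ways.pop()
        while ring[0] != ring[-1]:
            for i, w in enumerate(ways):
                if w[0] == ring[-1]:          # way continues the ring forwards
                    ring += w[1:]; ways.pop(i); break
                if w[-1] == ring[-1]:         # way continues the ring reversed
                    ring += list(reversed(w))[1:]; ways.pop(i); break
            else:
                raise ValueError("open ring - broken multipolygon")
        rings.append(ring)
    return rings

def point_in_ring(pt, ring):
    """Even-odd ray-casting test; used to decide which outer ring an inner
    ring belongs to (test any vertex of the inner ring)."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside
```

With `point_in_ring`, each inner ring can be assigned to the outer ring containing one of its vertices, which is exactly the information the role ordering fails to provide reliably.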
[OSM-dev] Extracting house-number from string - solution (?)
A function inside a PHP class I coded some time ago to populate a database with Google Maps API data. The code is not maintained any more, so I am not sure if it still works correctly. The input string is not a complete geocode request but the thoroughfare of an address, made up of street and house number. Just give it a try - it worked fine for me, as far as I remember:

/**
 * Split a thoroughfare string into street name and house number.
 *
 * @param $thoroughfare string
 * @param $country string country code, which determines whether the house number is on the left or the right
 * @return array(thoroughfareName, houseNumber)
 */
// TODO: consider HK - no space before the house number
static function splitThoroughfare($thoroughfare, $country) {
    $parts = preg_split('/[ ,]+/', $thoroughfare);
    $partsC = count($parts);
    if ($partsC == 1)
        return array($parts[0], NULL);
    // I wrote this class for the Google Geocoding API, and sometimes Google
    // returned the house number on the left side - no matter which country.
    // I think this is fixed now, so self::$googleForcesHouseNumberLeft is false.
    $houseNumberIsLeft = self::$googleForcesHouseNumberLeft
        ? true
        // I18n::$houseNumberLeftCountries is an array with the country codes
        // (upper case) of all countries where the house number is on the left
        : in_array($country, I18n::$houseNumberLeftCountries);
    $summand = $houseNumberIsLeft ? 1 : -1;
    for (
        $i = $houseNumberIsLeft ? 0 : $partsC - 1;
        $houseNumberIsLeft ? ($i < $partsC) : ($i >= 0);
        $i += $summand
    ) {
        // part of the street name found
        if (
            preg_match('/\p{L}{2}/u', $parts[$i])
            || (preg_match('/\p{L}/u', $parts[$i]) && preg_match('/^\D+$/', $parts[$i]))
        ) {
            if ($houseNumberIsLeft) {
                $thoroughfare = array_slice($parts, $i);
                $houseNumber = array_slice($parts, 0, $i);
            } else {
                $thoroughfare = array_slice($parts, 0, $i + 1);
                $houseNumber = array_slice($parts, $i + 1);
            }
            return array(
                implode(' ', $thoroughfare),
                implode(' ', $houseNumber)
            );
        }
    }
    return array($thoroughfare, NULL); // no street-name-like token found
}
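For comparison, here is a rough Python port of the same idea - not the original class. The token test mirrors the PHP one (two consecutive letters, or a letter and no digit at all), and the HOUSE_NUMBER_LEFT set is an illustrative stand-in for the real I18n::$houseNumberLeftCountries list.

```python
import re

# Countries where the house number precedes the street name.
# This subset is illustrative only.
HOUSE_NUMBER_LEFT = {"US", "GB", "FR", "AU"}

def looks_like_name(token):
    """True if the token looks like part of the street name."""
    return bool(re.search(r"[^\W\d_]{2}", token)) or \
        bool(re.search(r"[^\W\d_]", token) and re.fullmatch(r"\D+", token))

def split_thoroughfare(thoroughfare, country):
    """Split e.g. 'Hauptstraße 15' or '10 Downing Street' into
    (street, house_number); scan from the house-number side until a
    street-name-like token is found."""
    parts = re.split(r"[ ,]+", thoroughfare.strip())
    if len(parts) == 1:
        return parts[0], None
    if country in HOUSE_NUMBER_LEFT:
        for i in range(len(parts)):
            if looks_like_name(parts[i]):
                return " ".join(parts[i:]), " ".join(parts[:i]) or None
    else:
        for i in range(len(parts) - 1, -1, -1):
            if looks_like_name(parts[i]):
                return " ".join(parts[:i + 1]), " ".join(parts[i + 1:]) or None
    return thoroughfare, None  # no street-name-like token found
```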
[OSM-dev] JOSM: Several tags with same key
I'd like to add several tags with the same key. In most cases this is not good practice, but with URLs it makes sense, e.g. to add several images to one feature.
Re: [OSM-dev] api0.6 - only one value per key?
Shaun McDonald wrote: On 30 Jan 2009, at 15:11, Andreas Kalsch wrote: Question: I have found a PgSQL schema for v0.6 where key and value are TEXT fields. Is the length of 255 still correct for v0.6? Are you sure it is the Rails schema? There are other PgSQL database schemas for OSM data out there which aren't related to the Rails/master DB setup. I am not sure which setup this schema belongs to. I just searched for a 0.6 schema which fits the description of the 0.6 API database changes. BrettH (Osmosis developer) has just explicitly published a 0.5 schema which includes history tables. There are some imported values which are longer than 255 characters. Strictly speaking, at the moment it is 255 Latin characters, which is different from 255 UTF-8 multi-byte characters - hence the current problem of large UTF-8 multi-byte strings being truncated and causing invalid UTF-8 errors. MySQL 5 will let you use up to 255 wide chars if your database is Unicode.
[OSM-dev] api0.6 - only one value per key?
marcus.wolsc...@googlemail.com wrote: On Wed, 28 Jan 2009 00:07:20 +, Simon Ward si...@bleah.co.uk wrote: [Moved to dev; followups to dev] On Tue, Jan 27, 2009 at 10:58:25PM +, Ævar Arnfjörð Bjarmason wrote: I think multiple keys with the same name should be allowed for a node/way/relation. AFAIK it's only the editors that don't currently let you do this. Yes, the API and data format support it, but only for another 2 months or so, until we switch to 0.6 where it won't be allowed. I cannot find any such restriction in http://wiki.openstreetmap.org/wiki/0.6 . Could someone please clarify this? I implemented no changes to attribute handling for the 0.6 support in my software and need to know if I have to explicitly disallow this, write test cases, ... I would not restrict this. For qualifying keys like highway or amenity it makes sense, but what about image - there could be several associated with one feature. My proposal is: - Make keys case insensitive and give them an id internally. - Make tags from a key id and a case sensitive value; the tag has an id, too. I have implemented this in an experimental way, and it makes sense in every way. Let me know if you are interested in the SQL. The effort to manage this is a little higher, but you get a consistent data model which is more lightweight and quicker for lookups (a PK lookup is better than a k/v lookup). What about this? Question: I have found a PgSQL schema for v0.6 where key and value are TEXT fields. Is the length of 255 still correct for v0.6? There are some imported values which are longer than 255 characters. Andi
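The proposed model - case-insensitive keys with an internal id, and (key id, value) pairs as tags with ids of their own - can be sketched with SQLite in a few lines; all table and column names here are made up for illustration and are not the SQL mentioned in the post.

```python
import sqlite3

# Keys are stored once, case-insensitively; each distinct (key, value) pair
# becomes a tag row with its own id, so objects can reference small integer
# primary keys instead of repeating k/v strings.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE keys (id INTEGER PRIMARY KEY, name TEXT UNIQUE COLLATE NOCASE);
    CREATE TABLE tags (id INTEGER PRIMARY KEY, key_id INTEGER, value TEXT,
                       UNIQUE (key_id, value));
""")

def tag_id(key, value):
    """Return the id for (key, value), inserting key and tag rows on demand."""
    db.execute("INSERT OR IGNORE INTO keys (name) VALUES (?)", (key,))
    (kid,) = db.execute("SELECT id FROM keys WHERE name = ?", (key,)).fetchone()
    db.execute("INSERT OR IGNORE INTO tags (key_id, value) VALUES (?, ?)",
               (kid, value))
    (tid,) = db.execute("SELECT id FROM tags WHERE key_id = ? AND value = ?",
                        (kid, value)).fetchone()
    return tid
```

The COLLATE NOCASE on the key name is what makes "highway" and "Highway" resolve to the same key id, while values stay case sensitive.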
Re: [OSM-dev] Osmosis cannot connect to Database
This is a problem of MySQL. I have connection errors with the C API, too. Has anyone had these errors with MySQL 5.0.x on Debian as well? The error is: MySQL server has gone away (2006) Andreas Kalsch wrote: Hi, suddenly Osmosis does not connect to my database any more. The first time this happened was while I was updating a database with a change file. When I tried to repeat the update, the problem was there from the beginning. - The connection data is correct (PHP connects successfully). - Neither initializing nor updating works any more. - Reinstalling Osmosis 0.29 does not help either. The possible cause is that I apt-get upgraded my system and now use MySQL 5.0.75-1-log (Debian). The mailing list shows one thread for this issue, but there is no real solution ( http://lists.openstreetmap.org/pipermail/dev/2007-August/006068.html )
[OSM-dev] Osmosis cannot connect to Database
Hi, suddenly Osmosis does not connect to my database any more. The first time this happened was while I was updating a database with a change file. When I tried to repeat the update, the problem was there from the beginning. - The connection data is correct (PHP connects successfully). - Neither initializing nor updating works any more. - Reinstalling Osmosis 0.29 does not help either. The possible cause is that I apt-get upgraded my system and now use MySQL 5.0.75-1-log (Debian). The mailing list shows one thread for this issue, but there is no real solution ( http://lists.openstreetmap.org/pipermail/dev/2007-August/006068.html ) Andi Output: Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Osmosis Version 0.29 Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Preparing pipeline. Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Launching pipeline execution. Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Pipeline executing, waiting for completion. Jan 4, 2009 9:42:59 PM com.bretth.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion SEVERE: Thread for task 1-read-xml failed com.bretth.osmosis.core.OsmosisRuntimeException: Unable to establish a database connection. 
at com.bretth.osmosis.core.mysql.common.DatabaseContext.getConnection(DatabaseContext.java:92) at com.bretth.osmosis.core.mysql.common.DatabaseContext.prepareStatement(DatabaseContext.java:131) at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.initialize(MysqlWriter.java:319) at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.process(MysqlWriter.java:1004) at com.bretth.osmosis.core.xml.v0_5.impl.NodeElementProcessor.end(NodeElementProcessor.java:99) at com.bretth.osmosis.core.xml.v0_5.impl.OsmHandler.endElement(OsmHandler.java:109) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1774) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2930) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at javax.xml.parsers.SAXParser.parse(SAXParser.java:395) at javax.xml.parsers.SAXParser.parse(SAXParser.java:198) at com.bretth.osmosis.core.xml.v0_5.XmlReader.run(XmlReader.java:109) at java.lang.Thread.run(Thread.java:619) Caused by: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception: ** BEGIN NESTED EXCEPTION ** 
com.mysql.jdbc.CommunicationsException MESSAGE: Communications link failure due to underlying exception: ** BEGIN NESTED EXCEPTION ** java.io.EOFException MESSAGE: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. STACKTRACE: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1997) at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:573) at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1044) at com.mysql.jdbc.Connection.createNewIO(Connection.java:2748) at com.mysql.jdbc.Connection.init(Connection.java:1553) at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285) at java.sql.DriverManager.getConnection(DriverManager.java:582) at java.sql.DriverManager.getConnection(DriverManager.java:207) at com.bretth.osmosis.core.mysql.common.DatabaseContext.getConnection(DatabaseContext.java:81) at com.bretth.osmosis.core.mysql.common.DatabaseContext.prepareStatement(DatabaseContext.java:131) at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.initialize(MysqlWriter.java:319) at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.process(MysqlWriter.java:1004) at com.bretth.osmosis.core.xml.v0_5.impl.NodeElementProcessor.end(NodeElementProcessor.java:99) at
Re: [OSM-dev] Osmosis cannot connect to Database
I installed it from Sun. You can see it in the trace below my entry: ...at com.sun.org.apache... Roberto Navoni wrote: One week ago I had the same problem. Check if you are using the Sun Java virtual machine ... don't use any other kind of Java VM, because you can have problems with the DB. Best Regards Roberto Navoni Hi, suddenly Osmosis does not connect to my database any more. The first time this happened was while I was updating a database with a change file. When I tried to repeat the update, the problem was there from the beginning. - The connection data is correct (PHP connects successfully). - Neither initializing nor updating works any more. - Reinstalling Osmosis 0.29 does not help either. The possible cause is that I apt-get upgraded my system and now use MySQL 5.0.75-1-log (Debian). The mailing list shows one thread for this issue, but there is no real solution ( http://lists.openstreetmap.org/pipermail/dev/2007-August/006068.html ) Andi Output: Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Osmosis Version 0.29 Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Preparing pipeline. Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Launching pipeline execution. Jan 4, 2009 9:42:58 PM com.bretth.osmosis.core.Osmosis main INFO: Pipeline executing, waiting for completion. Jan 4, 2009 9:42:59 PM com.bretth.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion SEVERE: Thread for task 1-read-xml failed com.bretth.osmosis.core.OsmosisRuntimeException: Unable to establish a database connection. 
Re: [OSM-dev] which Java Verison to use (1.6 vs. 1.5); was: Error compiling osmosis
It is OK that Osmosis supports only 1.6, because it has no UI (has it?). So there are two reasons: - It is used mostly on servers with Linux, where 1.6 is supported. - Its users have the competence to deal with this kind of thing, including setting up their own DB. JOSM, in contrast, should be as easy as possible to use (= downloading and starting to edit), so that as many people as possible contribute to OSM. So focus on new features and don't try to support ancient versions. Best, Andi Brett Henderson wrote: Sending this to dev because I'm curious to hear thoughts from the JOSM guys. I agree it would be nice to standardise, but the reasons for the difference as I understand it are: JOSM is an end user tool where wide platform support is necessary. Java 1.5 is more widely available than Java 1.6. OSX in particular hasn't had support for 1.6; not sure if that's changed yet. JOSM is maintaining compatibility with Java 1.5. *However* it should be possible to compile JOSM on Java 1.6; it just won't run on 1.5 if you do so. There may be some warnings around the use of the @Override annotation (I don't know the details here) but I don't think these should be show stoppers. Osmosis is newer than JOSM, and is less end-user focused. I don't support 1.5 because I'm using some newer features of the 1.6 platform. From memory, these are the Java 2D libraries required for accurate polygon support, the concurrency libraries, and the collection libraries. Of these, it might be possible to support 1.5 with some additional effort, but the polygon support in particular is hard to do properly on 1.5, because 1.6 added double-accuracy 2D calculations. Here's my suggestion: OpenJDK is a Java 1.6 platform. I assume that is what Debian will provide out of the box. Compile both JOSM and Osmosis using this 1.6 platform. They will both then run on the OpenJDK provided by Debian. Neither will run on older 1.5 JDKs. 
You aren't building binary distributions of JOSM for cross-platform use, therefore you don't need to support 1.5. 1.6 provides some very useful features, it has been released for around 2 years now, and it is supported on the vast majority of platforms out there. Supporting 1.5 just isn't a high priority for me. With the open source OpenJDK out there, I was hoping the need to support ancient Java releases would be eliminated. Thoughts welcome. Brett Joerg Ostertag (OSM Tettnang/Germany) wrote: OK ... so JOSM doesn't want Java 1.6 and Osmosis doesn't want Java 1.5. This doesn't really make it easy to create Debian packages out of these ... Couldn't we agree on one version of Java for all OSM software? This would make packaging much easier for me. - Joerg On Wednesday, 24 December 2008, you wrote: The minimum required java version for osmosis is 1.6. You appear to be using version 1.5.
[OSM-dev] Inconsistent history tables - solved
The problem was solved quickly: InnoDB shows estimated row counts in phpMyAdmin, so count(*) is the solution. Hi, I have imported a dump into MySQL with Osmosis, and current_nodes has more rows than nodes; the counts of the other tables aren't equal either. I think the current tables must have the same row count as the history tables on an initial dump. Probably I am wrong and you know more. Andi
[OSM-dev] Inconsistent history tables
Hi, I have imported a dump into MySQL with Osmosis, and current_nodes has more rows than nodes; the counts of the other tables aren't equal either. I think the current tables must have the same row count as the history tables on an initial dump. Probably I am wrong and you know more. Andi
[OSM-dev] Places from OSM and geonames
Hi, I have updated my application to find places in Germany. You specify an OSM tag and the location. I will update the database as soon as possible to include more data in Europe and the US. My focus is now on more important features like editing and supporting third-party APIs. I have left out the tag statistics, because finding places seems to be more interesting. URL: http://78.47.150.5/opw/dev/ Best, Andi
[OSM-dev] Ranked geonames search with tag statistics
Hi, I have created a ranked geonames search with OpenStreetMap tag statistics, based on extracted OpenStreetMap points of interest. The ranking is experimental and currently only works for Germany. Give it a try and let me know what you think: http://78.47.150.5/opw/dev/ I cannot guarantee that the service is working all the time, because I am still working on it. Best, Andi
Re: [OSM-dev] Slow Osmosis
OK, I will create the tables as InnoDB tables directly; this is surely better ;) Joachim Zobel wrote: There is another approach: ALTER all InnoDB tables to MyISAM, run Osmosis, and then ALTER them back to InnoDB. It seems that the INSERT approach scales badly to large InnoDB tables. Be aware that an ALTER TABLE on MySQL (at least with InnoDB) always copies the table into the new structure. If you do an ALTER TABLE ... ENGINE=InnoDB this is unavoidable. All other ALTER TABLE statements are questionable.
Re: [OSM-dev] Slow Osmosis
I have taken a look at your script. I think it could be useful, IF it is quicker than using Osmosis. Do you have some benchmarks? Best, Andi Joachim Zobel wrote: On Saturday, 29 Nov 2008, 18:45 +0100, Andreas Kalsch wrote: I decompress the data before putting it into Osmosis, but it's still slow. So back to my question -- ;) (The best would be raw dump files for MySQL's LOAD DATA INFILE - I can imagine that it would be pretty quick) This has already been done, though it's not simply a raw dump: http://www.heute-morgen.de/scabies/ Be aware that it has not had much testing yet. Sincerely, Joachim
Re: [OSM-dev] Slow Osmosis
What I will try now: 1) combine the two suggestions (Joachim: There is another approach. ALTER all InnoDB tables to MyISAM, run Osmosis, and then ALTER them back to InnoDB. It seems that the INSERT approach scales badly to large InnoDB tables. / Stefan: It always scales badly; no exceptions. Some advice from the DB techies even includes dropping any primary and foreign keys (including sequences) while inserting. That is optimal. If you are at it and have a multi-processor: http://compression.ca/pbzip2/ ). If this is still slow, I will try 2) the raw file approach. Thank you for your help! I will send you the results. Andi
[OSM-dev] Slow Osmosis
OK, 1) does not work - Osmosis needs the InnoDB tables: Write to database .. Nov 30, 2008 2:35:09 PM com.bretth.osmosis.core.Osmosis main INFO: Osmosis Version 0.29 Nov 30, 2008 2:35:09 PM com.bretth.osmosis.core.Osmosis main INFO: Preparing pipeline. Nov 30, 2008 2:35:09 PM com.bretth.osmosis.core.Osmosis main INFO: Launching pipeline execution. Nov 30, 2008 2:35:10 PM com.bretth.osmosis.core.Osmosis main INFO: Pipeline executing, waiting for completion. Nov 30, 2008 2:35:26 PM com.bretth.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion SEVERE: Thread for task 1-read-xml failed com.bretth.osmosis.core.OsmosisRuntimeException: Unable to insert a relation into the database. at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.flushRelations(MysqlWriter.java:783) at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.complete(MysqlWriter.java:885) at com.bretth.osmosis.core.xml.v0_5.XmlReader.run(XmlReader.java:111) at java.lang.Thread.run(Thread.java:619) Caused by: com.mysql.jdbc.exceptions.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 1 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:931) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1631) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1723) at com.mysql.jdbc.Connection.execSQL(Connection.java:3256) at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1313) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1585) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1500) at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1485) at com.bretth.osmosis.core.mysql.v0_5.MysqlWriter.flushRelations(MysqlWriter.java:781) ... 3 more Nov 30, 2008 2:35:26 PM com.bretth.osmosis.core.Osmosis main SEVERE: Execution aborted. com.bretth.osmosis.core.OsmosisRuntimeException: One or more tasks failed. 
at com.bretth.osmosis.core.pipeline.common.Pipeline.waitForCompletion(Pipeline.java:141) at com.bretth.osmosis.core.Osmosis.main(Osmosis.java:55) - Although I use an AMD Athlon 64 X2 5600+ dual core (in a virtual environment), pbzip2 does not work. You have to use files which have been compressed with pbzip2 and not with bzip2. Are there dumps created with pbzip2?
[OSM-dev] Slow Osmosis
Osmosis is very slow on my server. There is this option --write-null (--wn) which can be useful just to check the integrity of the data. So I think Osmosis checks it every time, and this could be a bottleneck because it has to cache some data in memory. Can this be the reason it is slow (apart from my server)?

Andi
Re: [OSM-dev] Slow Osmosis
I decompress the data before putting it into Osmosis, but it's still slow. So back to my question -- ;) (The best would be raw dump files for MySQL's LOAD DATA INFILE - I can imagine that it would be pretty quick.)

Stefan de Konink wrote:
> Frederik Ramm wrote:
>> The Java implementations of gzip/bzip are notoriously slow. If you are working with compressed data, you might see an improvement if you first uncompress the file and then use Osmosis to process it in raw form; later use an external utility to compress the output if applicable.
> But if he first uncompresses the data, Osmosis basically loses its main argument for being superior to any other approach. (Yes, I think this point should be made)
> Stefan
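The external-decompression step discussed above can be sketched as a small pre-processing script. This is a hypothetical helper of mine, not part of Osmosis: it stream-decompresses the bzip2 planet file once, so Osmosis can then read the raw XML without paying the Java bzip2 cost.

```python
import bz2
import shutil

def decompress_planet(src="planet.osm.bz2", dst="planet.osm", chunk=1 << 20):
    """Stream-decompress a bzip2 planet dump so Osmosis can read raw XML.

    Copying in fixed-size chunks keeps memory usage flat even for
    multi-gigabyte dumps; nothing is ever held fully in memory.
    """
    with bz2.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, chunk)
    return dst
```

The file names and chunk size are illustrative; any streaming decompressor (bzcat, pbzip2 -d) achieves the same effect.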
[OSM-dev] By the way ... spatial indexes
... are nearly 4 times faster than multi-column indices for lat/lon in MySQL. Extracting 1 column / 17,000 rows out of 1 million takes .39 secs vs. 1.39 - I think all Postgres guys know a similar value. I think this is a pretty impressive result.
[OSM-dev] More about spatial indexes
Marcus Wolschon wrote:
> Andreas Kalsch wrote:
>> ... are nearly 4 times faster than multi-column indices for lat/lon in MySQL. Extracting 1 column / 17,000 rows out of 1 million takes .39 secs vs. 1.39 - I think all Postgres guys know a similar value. I think this is a pretty impressive result.
> Does the spatial extension come pre-installed with MySQL? = can I expect it to be present for a home user with no knowledge of software who has a MySQL running?

It is preinstalled even in MySQL 4. It implements the OpenGIS standard for MyISAM tables, minus some rarely used features. You have to use the special types Point, LineString ..., a spatial index ( ALTER TABLE x ADD SPATIAL INDEX(column) ) and some functions to operate on the data - e.g. the prominent example of checking whether points are inside a bounding box. This is what R-trees are made for. See http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html

Taking a deeper look at the online docs is a good idea. There are some very useful tips about SQL optimization. And: learn to use the console if you have not already. Most MySQL admin tools don't support OpenGIS types.
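For illustration, the predicate behind the "points inside a bounding box" example - what MySQL's MBRContains evaluates, and what the R-tree index accelerates by skipping whole subtrees - is just a rectangle test. A minimal sketch (function and argument names are mine, not MySQL's):

```python
def mbr_contains(bbox, point):
    """Rectangle containment test, the core predicate behind MBRContains.

    bbox is (min_lon, min_lat, max_lon, max_lat); point is (lon, lat).
    An R-tree index lets the database discard whole subtrees whose
    bounding rectangles fail this test, instead of scanning every row.
    """
    min_lon, min_lat, max_lon, max_lat = bbox
    lon, lat = point
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
```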
Re: [OSM-dev] Osmosis: Bounding polygon does not support change data as input?
Brett Henderson wrote:
> On Fri, Sep 19, 2008 at 2:18 AM, Andreas Kalsch [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:
>> Thank you for your help. The fourth point means that in my planned application I am not going to use the OSM database but just a subset of it - so I need a kind of trigger which updates my application data from my local OSM database. I have tried to do this the way I described, but it still does not work. I described it in another post this morning. Either ...
>>
>> osmosis/bin/osmosis --read-xml-change data/osm/20080916-20080917.osc --apply-change --write-mysql host=localhost database=osm_de user=user password=xxx
>>
>> and ...
>>
>> osmosis/bin/osmosis --read-xml-change data/osm/20080916-20080917.osc --read-mysql host=localhost database=osm_de user=root password=qay --apply-change --write-mysql host=localhost database=osm_de user=user password=xxx
>>
>> do not work. What is the right syntax?
>
> It should be like this (note the use of --write-mysql-change):
>
> osmosis/bin/osmosis --read-xml-change data/osm/20080916-20080917.osc --write-mysql-change host=localhost database=osm_de user=user password=xxx
>
> Note that you can also use the --read-change-interval task which will automate the downloading of change files and eliminate the need for the --read-xml-change task. You can easily automate this via cron, allowing you to use minute changesets if you wish.
> http://wiki.openstreetmap.org/index.php/Osmosis/DetailedUsage#--read-change-interval_.28--rci.29

Does not seem to work:

mesolt11:/home/andi/opw# osmosis/bin/osmosis --read-change-interval --wnc
Sep 22, 2008 12:04:27 AM com.bretth.osmosis.core.Osmosis main
INFO: Osmosis Version 0.29
Sep 22, 2008 12:04:27 AM com.bretth.osmosis.core.Osmosis main
INFO: Preparing pipeline.
Sep 22, 2008 12:04:27 AM com.bretth.osmosis.core.Osmosis main
INFO: Launching pipeline execution.
Sep 22, 2008 12:04:27 AM com.bretth.osmosis.core.Osmosis main
INFO: Pipeline executing, waiting for completion.
Sep 22, 2008 12:04:27 AM com.bretth.osmosis.core.pipeline.common.ActiveTaskManager waitForCompletion
SEVERE: Thread for task 1-read-change-interval failed
com.bretth.osmosis.core.OsmosisRuntimeException: Unable to load properties from config file ./configuration.txt
    at com.bretth.osmosis.core.merge.v0_5.impl.DownloaderConfiguration.loadProperties(DownloaderConfiguration.java:50)
    at com.bretth.osmosis.core.merge.v0_5.impl.DownloaderConfiguration.init(DownloaderConfiguration.java:35)
    at com.bretth.osmosis.core.merge.v0_5.ChangeDownloader.download(ChangeDownloader.java:226)
    at com.bretth.osmosis.core.merge.v0_5.ChangeDownloader.run(ChangeDownloader.java:388)
    at java.lang.Thread.run(Thread.java:619)
Sep 22, 2008 12:04:27 AM com.bretth.osmosis.core.Osmosis main
SEVERE: Execution aborted.
com.bretth.osmosis.core.OsmosisRuntimeException: One or more tasks failed.
    at com.bretth.osmosis.core.pipeline.common.Pipeline.waitForCompletion(Pipeline.java:141)
    at com.bretth.osmosis.core.Osmosis.main(Osmosis.java:55)

I think I was right that:
- --read-change-interval has no args
- I use --wnc to forward the change stream to nowhere

So I have tested applying a world change set to a small MySQL database - Luxembourg. It needs half an hour. So for small subsets you are better off replacing the whole database.

> Let me know if you have any problems with the --write-mysql-change task; it hasn't been heavily used yet.
>
> Have you looked at PostgreSQL? Osmosis provides a pgsql-simple schema which is very similar to the OSM MySQL schema but doesn't have history tables and uses some geo-spatial extensions. It is already being used by a couple of people and may be more appropriate depending on your usage.

I haven't tried it yet, because my app will run with MySQL. If there is a way to use _one_ connection for both a PostgreSQL and a MySQL database, please tell me.
Re: [OSM-dev] Osmosis: Bounding polygon does not support change data as input?
Brett Henderson wrote:
> Andreas Kalsch wrote:
>> I have set up an OSM database with Germany data. Now I want to update my data daily. I have not found diff files for Germany, so I want to use the global diff files, which are not too big. To re-import the German data every day would be too expensive - it already took some hours. In Osmosis, I tried this:
>>
>> osmosis/bin/osmosis --read-xml-change data/osm/20080916-20080917.osc --bounding-polygon data/germany2pts.txt --apply-change --write-mysql host=localhost database=osm_de user=user password=xxx
>>
>> Result:
>>
>> Sep 17, 2008 10:05:15 PM com.bretth.osmosis.core.Osmosis main
>> INFO: Osmosis Version 0.29
>> Sep 17, 2008 10:05:15 PM com.bretth.osmosis.core.Osmosis main
>> INFO: Preparing pipeline.
>> Sep 17, 2008 10:05:16 PM com.bretth.osmosis.core.Osmosis main
>> SEVERE: Execution aborted.
>> com.bretth.osmosis.core.OsmosisRuntimeException: Task 2-bounding-polygon does not support data provided by default pipe stored at level 1 in the default pipe stack.
>>     at com.bretth.osmosis.core.pipeline.common.PipeTasks.retrieveTask(PipeTasks.java:154)
>>     at com.bretth.osmosis.core.pipeline.common.TaskManager.getInputTask(TaskManager.java:164)
>>     at com.bretth.osmosis.core.pipeline.v0_5.SinkSourceManager.connect(SinkSourceManager.java:51)
>>     at com.bretth.osmosis.core.pipeline.common.Pipeline.connectTasks(Pipeline.java:69)
>>     at com.bretth.osmosis.core.pipeline.common.Pipeline.prepare(Pipeline.java:111)
>>     at com.bretth.osmosis.core.Osmosis.main(Osmosis.java:49)
>>
>> So am I right in thinking that the bounding polygon filter does not support the input of change data?
>
> That's correct. There is no way of writing a bounding box task for managing changeset data. Take this scenario for example: a way already exists in your database, related to several nodes. Somebody modifies the way but doesn't touch the nodes. The next changeset will contain the updated way within a modify element, but not the nodes.
> A bounding box task will have no way of knowing whether the way is inside or outside the bounding box.

This is a clear fact, of course. But Osmosis could retrieve the nodes from the database instead and try to find it out, like you described below.

> All is not lost, however. The simple answer is just to import the diff for the entire world. It's approximately 10MB of compressed data per day, so the database will only grow steadily. The slightly more complicated answer is to import the entire world diffs but then run an additional query to delete data not inside the bounding box. This would only have to be run occasionally.

OK, so a solution could be:
- load the planet diff
- apply it completely
- every n days, make a copy of the current database containing only the points within the polygon, and delete the old database
- after that, trigger changes to my application database, which uses just a subset of the whole data, so that I use no data which is outside the polygon

This is a compromise that would work for me.

> A more complicated solution (involving some coding) could be as follows: Before applying a changeset, re-order the changeset to preserve referential integrity. Osmosis can do this already with the --sort task. Only write nodes that are inside the bounding box and check to see if each node already exists in the database. For every way write, check the nodes that should by this point already exist in the database to see if the way is inside the bounding box. For every relation write, check to see if any members already exist in the database.

I am sure this would take a lot of time. Sorting a file is surely not a very quick task ;)

> I don't have any immediate plans to attempt the correct solution above due to lack of time, but it would be neat if somebody could get it working.
>
> Brett
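The occasional clean-up query in this compromise needs one ingredient: a test for whether a node lies inside the polygon. A standard even-odd ray-casting sketch, assuming the polygon file has already been parsed into (lon, lat) vertex pairs (`point_in_polygon` is my own helper, not an Osmosis task):

```python
def point_in_polygon(lon, lat, polygon):
    """Even-odd ray casting: count how often a ray going right from the
    point crosses a polygon edge; an odd count means the point is inside.

    polygon is a list of (lon, lat) vertices; the last vertex is
    implicitly connected back to the first.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through lat?
        if (y1 > lat) != (y2 > lat):
            # Longitude where the edge crosses that line
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```

Nodes failing this test (and ways/relations referencing only such nodes) would be candidates for the periodic DELETE.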
[OSM-dev] Osmosis: Bounding polygon does not support change data as input?
I have set up an OSM database with Germany data. Now I want to update my data daily. I have not found diff files for Germany, so I want to use the global diff files, which are not too big. To re-import the German data every day would be too expensive - it already took some hours. In Osmosis, I tried this:

osmosis/bin/osmosis --read-xml-change data/osm/20080916-20080917.osc --bounding-polygon data/germany2pts.txt --apply-change --write-mysql host=localhost database=osm_de user=user password=xxx

Result:

Sep 17, 2008 10:05:15 PM com.bretth.osmosis.core.Osmosis main
INFO: Osmosis Version 0.29
Sep 17, 2008 10:05:15 PM com.bretth.osmosis.core.Osmosis main
INFO: Preparing pipeline.
Sep 17, 2008 10:05:16 PM com.bretth.osmosis.core.Osmosis main
SEVERE: Execution aborted.
com.bretth.osmosis.core.OsmosisRuntimeException: Task 2-bounding-polygon does not support data provided by default pipe stored at level 1 in the default pipe stack.
    at com.bretth.osmosis.core.pipeline.common.PipeTasks.retrieveTask(PipeTasks.java:154)
    at com.bretth.osmosis.core.pipeline.common.TaskManager.getInputTask(TaskManager.java:164)
    at com.bretth.osmosis.core.pipeline.v0_5.SinkSourceManager.connect(SinkSourceManager.java:51)
    at com.bretth.osmosis.core.pipeline.common.Pipeline.connectTasks(Pipeline.java:69)
    at com.bretth.osmosis.core.pipeline.common.Pipeline.prepare(Pipeline.java:111)
    at com.bretth.osmosis.core.Osmosis.main(Osmosis.java:49)

So am I right in thinking that the bounding polygon filter does not support the input of change data?
Re: [OSM-dev] Spatial vs. multi-column indexes for points
On Thu, Sep 11, 2008 at 08:31:23PM +0200, Andreas Kalsch wrote:
>> All, thanks for your quick responses! Quad tiles look like a smart way to create an index. So to look up a single point or a quad tile, this is fine. But for my application I need another lookup - by bounding box with any ratio and size. Is there a way to look up an arbitrary bounding box with this index? I think this will be a little more complicated without conventional multi-column indices. Hmm, I think you could take a maximum number of quad lookups which contain the requested box, what do you think?
> The code in the api calculates all possible quadtile areas and does a quadtile in ( x, y, z, a, b, c, d, e, f ) or quadtile between x and y etc.

Can you give me the location of this code in SVN? I am sure I will understand it more deeply then.
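The quadtile lookup described above rests on encoding each point's coordinates as a single interleaved-bit tile number. A sketch of that encoding as I understand it from the QuadTiles wiki page - the exact scaling and bit order used in the rails API code are assumptions on my part:

```python
def quadtile(lat, lon):
    """Interleave 16-bit scaled lon/lat into one 32-bit tile number.

    Nearby points get numerically close tile values, so a plain B-tree
    index on this one column clusters spatial neighbours, and a bounding
    box becomes a small set of IN (...) / BETWEEN range predicates.
    """
    x = int((lon + 180.0) * 65535.0 / 360.0)  # scale lon into 0..65535
    y = int((lat + 90.0) * 65535.0 / 180.0)   # scale lat into 0..65535
    tile = 0
    for i in range(16):
        tile |= ((x >> i) & 1) << (2 * i + 1)  # lon bits: odd positions
        tile |= ((y >> i) & 1) << (2 * i)      # lat bits: even positions
    return tile
```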
Re: [OSM-dev] Spatial vs. multi-column indexes for points
My current optimization includes:
- using mediumint for lat/lon - enough for ~2 meters resolution
- using a bounding box first for the point+radius calculation and then selecting the circle with a Pythagoras approximation, which is exact enough

For the first, I want to use MySQL.

>> If you are interested in some benchmark data, I can post ...
> As an aside, there is currently a lack of good benchmarks on the different optimization hacks for spatial querying in different circumstances, so anything contributing towards guidelines for developers would be most helpful

OK, some things I have quickly written down while testing, made on:
- MySQL 5.0.67
- MacBook Intel Core 2 Duo - 2 GHz
- Mac OS 10.4

Time in secs. There are several ways to get the points within a circle ...
- I precompute some values which will be constant during the query - this is quicker
- I use mediumints for lat/lon

1) correct, but most expensive:

SET @latitude=48;
SET @longitude=13;
SET @latitudeM = getMed(@latitude); /* MySQL function */
SET @longitudeM = getMed(@longitude);
SET @latitudeSin=sin(radians(@latitude));
SET @latitudeCos=cos(radians(@latitude));
SELECT BENCHMARK( 100, degrees(acos( @latitudeSin*sin(radians(230)) + @latitudeCos*cos(radians(230))*cos(radians([EMAIL PROTECTED])) )));

.62 secs

2) approximation - really good results for a radius of up to 10-20°:

SET @longitudeFactor=cos(radians(@latitude));
SELECT BENCHMARK( 100, sqrt(pow(@latitudeM - 230, 2) + pow(@longitudeFactor * (@longitudeM - 25), 2)) );

.40 secs

3) OK, sqrt can be omitted in the resulting query:

SELECT BENCHMARK( 100, pow(@latitudeM - 230, 2) + pow(@longitudeFactor * (@longitudeM - 25), 2) );

.36 secs

If more people are interested in concrete computations based on 500,000 rows, I can post them, too.

Andi
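For reference, the two distance formulas being benchmarked can be compared directly - the exact great-circle angle via the spherical law of cosines against the flat Pythagoras approximation with a cos(latitude) correction. This is my own port of the idea to Python, working in plain degrees rather than the mediumint units of the SQL above:

```python
import math

def great_circle_deg(lat1, lon1, lat2, lon2):
    """Exact angular distance in degrees (spherical law of cosines)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    # Clamp to 1.0 to guard acos against floating-point overshoot.
    return math.degrees(math.acos(
        min(1.0, math.sin(p1) * math.sin(p2)
            + math.cos(p1) * math.cos(p2) * math.cos(dl))))

def flat_approx_deg(lat1, lon1, lat2, lon2):
    """Pythagoras with cos(lat) correcting for longitude shrinkage.

    Good for small radii; the error grows with distance and latitude.
    For ordering results by distance the final sqrt can be dropped,
    as in variant 3) above.
    """
    lon_factor = math.cos(math.radians(lat1))
    return math.sqrt((lat1 - lat2) ** 2
                     + (lon_factor * (lon1 - lon2)) ** 2)
```

For a point pair roughly one degree apart around latitude 48, the two results differ only in the third decimal, which supports "exact enough" for small-radius queries.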
Re: [OSM-dev] Spatial vs. multi-column indexes for points
All, thanks for your quick responses! Quad tiles look like a smart way to create an index. So to look up a single point or a quad tile, this is fine. But for my application I need another lookup - by bounding box with any ratio and size. Is there a way to look up an arbitrary bounding box with this index? I think this will be a little more complicated without conventional multi-column indices. Hmm, I think you could take a maximum number of quad lookups which contain the requested box, what do you think?

My current optimization includes:
- using mediumint for lat/lon - enough for ~2 meters resolution
- using a bounding box first for the point+radius calculation and then selecting the circle with a Pythagoras approximation, which is exact enough

For the first, I want to use MySQL. If you are interested in some benchmark data, I can post ...

> Instead of indexing by precise coordinates you could index by virtual tiles, as has been done in OSM's main DB since a year ago, with a nice performance boost: http://wiki.openstreetmap.org/index.php/QuadTiles
> good luck, Stefan
>
> On Thu, Sep 11, 2008 at 1:49 PM, Andreas Kalsch [EMAIL PROTECTED] wrote:
>> Hey, last week I made some experiments with huge datasets of lat/lon points. I use MySQL 5.0, which partially supports GIS extensions, including R-trees. But it is still not able to make queries based on the GIS features, so I have to use the normal way - multi-column indexes on lat/lon columns. It works well, but probably there is a way to make it even quicker ;) Has anybody used GIS successfully in MySQL or PgSQL and can tell me how the performance compares between the two techniques?
>> Thanks, Andi
[OSM-dev] OSM dump download cancels
My planet.osm download cancels every time. I do it via Firefox. My first try canceled @ 237M, my second @ 1.1G. I tried it via curl: (18) transfer closed with 4171221735 bytes remaining to read. Why do the downloads cancel every time, although my disk has enough space left?

Andi
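One common workaround while the cause is unknown is to resume the transfer instead of restarting it - curl supports this as `curl -C - -O <url>`. The same idea sketched in Python (the URL below is illustrative; the server must support HTTP range requests for this to work):

```python
import os
import urllib.request

def resume_offset(path):
    """Byte offset to resume from: the size of the partial file, else 0."""
    return os.path.getsize(path) if os.path.exists(path) else 0

def build_resume_request(url, path):
    """HTTP request with a Range header picking up where the partial
    download left off; without a partial file it is a plain request."""
    req = urllib.request.Request(url)
    offset = resume_offset(path)
    if offset:
        req.add_header("Range", "bytes=%d-" % offset)
    return req
```

Opening the returned request with urllib and appending the response body to the partial file would complete the download across several interrupted attempts.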