On Wednesday, June 11, 2014, Roland Olbricht <[email protected]> wrote:

> On the other hand, a mechanical change of data can be performed as easily
> during postprocessing as in the database. This is known in programming as
> "don't store information when it is easier to recompute it".
>
> You may earn real fame if you have a good filtering ruleset that flatirons
> all suspect data. If you publish this as a postprocessing script, it is
> useful. If you apply it to flatiron the database, in 99% justified cases
> and 1% on otherwise on-purpose crafted data, then you will earn shame
> instead, because that same script could be perceived as doing vandalism.
>
> It's potentially feasible to postprocess data. It's hard to collect data.
> So please don't make collecting data harder. Please rather make
> postprocessing data easier.
Couldn't this be even worse than applying those changes directly in the database? With post-processing, the 'corrected' data cannot be edited: the original data stays in the database, and errors are introduced between fetching the data and displaying it. And even if you try to have as few exceptions as possible, automated correction algorithms will never be able to work 100% right.

In my understanding, these exceptions then have to be either
– explicitly mapped (by means of, say, name:correct=yes tags, which sound like a horrible idea), or
– added to a separate database specific to the post-processing software (and if you have five different ones used by different map renderers, each with their own problems and exceptions, you'll have a lot of work verifying and correcting everything).

And that doesn't sound very efficient. Is that right, or am I missing something?

BTW: First post on this list, hi! I'm OSM user M!dgard, I live in Belgium.

Kind regards
Ruben aka M!dgard

_______________________________________________
talk mailing list
[email protected]
https://lists.openstreetmap.org/listinfo/talk
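[Editor's note: the trade-off discussed above — a renderer-side ruleset that "flatirons" suspect values, plus a separate exception store for the cases the automatic rules get wrong — can be sketched as follows. This is a minimal illustration, not code from either poster; the rule, the tag names, and the exception set are all invented for the example.]

```python
import re

# Example rule: collapse runs of whitespace in name tags and strip the ends.
def flatiron_name(name: str) -> str:
    return re.sub(r"\s+", " ", name).strip()

# Names the rule must not touch. In practice this would live in a separate
# database per renderer -- exactly the maintenance burden described above.
EXCEPTIONS = {"Foo  Bar"}  # hypothetical name whose double space is intended

def postprocess(tags: dict) -> dict:
    out = dict(tags)
    name = out.get("name")
    if name is not None and name not in EXCEPTIONS:
        out["name"] = flatiron_name(name)
    return out

print(postprocess({"name": "  Main   Street "}))  # whitespace corrected
print(postprocess({"name": "Foo  Bar"}))          # left alone via exception
```

Each renderer running its own copy of EXCEPTIONS is the duplication problem the mail points out: the "fix" never lands in the shared database, so every consumer must re-discover and re-record the same special cases.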

