On Mon, Oct 22, 2012 at 8:52 PM, Frederik Ramm <frede...@remote.org> wrote:

>
>>  2. Generalizations: simplifications of roads, polygons etc. for a
>>     certain map scale.
>>
>
> Same process - either you share the generalized data or you share the
> algorithm that produces it. If, for example, you were to import with ImpOSM
> which does generalisations when importing, that's all you'd have to say.
>
>>  3. Finding suitable label placements.
>>  4. Extracting topology from the data (like multipolygon processing,
>>     merging of polygons, road segments etc.).
>>  5. Running other complex algorithms on the OSM data.
>>
>> This preprocessing can be done "on-the-fly" or (in the case of Mapnik) as a
>> separate prerequisite step.
>>
>
> The boundary between what is done as a separate step, leading to a derived
> database, and what is done on the fly as part of the rendering process may
> sometimes be muddy but I guess in these situations they are pretty clear.
>
> Another interesting question is how easy the algorithm you specify must
> be. It is clear that the algorithm cannot include "buy some Navteq data and
> then do this", or "buy ArcGIS and then do that" - but what if the algorithm
> includes "run this code, it will take 1000 days", or "make sure your
> machine has at least 1 TB of RAM, then continue as follows...".
>
>
The first question is: what is the purpose of that method description? If
the purpose is to enable _anyone_ to repeat the same process, then I see a
big problem with this interpretation: it effectively means you cannot use
closed-source software to generate publicly distributed maps. In one case
you might not be the owner of the source code (ArcGIS, for example), so
you cannot really describe the actual algorithm behind it. In another
case, if you are the owner of the code, you'll be forced either to write
lengthy documents describing your algorithms or to release the source
code. And by the way, under what terms/license is that document/source
code released? What prevents some company XYZ from then using that source
code to process completely different databases (not OSM's)?

I don't see how this clause can be enforced in the scenarios I've
mentioned. Here are some possible outcomes:

   1. The owner of the code has to open-source the code (which could mean
   tossing away a large investment of time & money and handing it to the
   competition for free). Who ensures that the source code is complete
   enough to enable repetition of the process?
   2. The owner writes a crappy document describing the algorithm that no
   one can follow (I've seen a lot of scientific articles like that). Who
   will ensure that such documents are usable?
   3. The owner releases a "derivative DB" which (since the processing is
   done in-memory) is just a binary, (almost) random stream of data that
   is difficult to read and process for anyone without the original
   source code. Does he need to release documentation of the data format?

Maybe I'm missing something, I don't know.

Igor
