I think that the underlying issue with wikidata tags is that they are external 
IDs: not human readable, they cannot be entered 'by hand' nor verified on the 
ground. Once you accept them in OSM, you can't really complain about bots. 

Yves (who still thinks such UIDs are only needed for the lack of good query 
tools)

On 26 September 2017 at 19:08:33 GMT+02:00, Yuri Astrakhan 
<yuriastrak...@gmail.com> wrote:
>> > p.s. OSM is a community project, not a programmers project, it's
>> > people, not software :-)
>It's both.  OSM is first and foremost a community, but the result of
>our effort is a machine-readable database.  We are not creating an
>encyclopedia that will be casually flipped through by humans. We create
>data that gets interpreted by software, so that it can render maps and
>be searchable.  For example, if every person uses their own tag names and
>values to record things, the data will have nearly zero value.  We must agree
>on conventions so that software can understand our results - which is
>what we have been doing on wiki and in email channels. Any tag and
>value that cannot be recognized and processed by software is effectively
>useless.
>>   Totally agree. If some script can automatically add a new tag
>> (wikidata) without any actual WORK needed, then it is pointless, as
>> anybody can run an auto-update script.
>>   When ordinary (non geek) mappers do ACTUAL WORK - adding wikipedia
>> data, they add a wikipedia link, not wikidata "stuff".
>While sand castles may look nice, they don't last very long. When
>people add just the Wikipedia article, that link quickly gets stale and
>becomes irrelevant and often incorrect. Wikipedia article titles are
>not stable. They get renamed all the time - there are tens of thousands of
>stale links in OSM already that I found.  Often, the community renames wp
>articles when there is more than one meaning, and creates a new article with
>the old name in its place - a disambig page.  There is no easy way to analyse
>wikipedia links for content - you cannot easily determine if the
>article is about a person, a country, or a house, which makes it hard
>to check for correctness.
>When I spend half an hour of my time researching which WP article is
>correct for an object, I do not want that effort to be wasted just because
>someone else puts a disambig page in its place, and I have to redo all my work.
>>   When data consumers want to get a link to the corresponding wikipedia
>> article, doing that with wikipedia[:xx] tags is straightforward. Doing
>> the same with wikidata requires additional pointless and time
>> consuming abrakadabra.
>No, you clearly haven't worked with any data consumers recently. Data
>consumers want Wikidata much more than wikipedia tags - please talk to
>them. Wikidata gives you the list of wikipedia articles in all
>languages, it lets you get multi-lingual names when they are not
>present in OSM, it allows much more intelligent searches based on types of
>objects, and it allows quality controls.  The abrakadabra is exactly what
>one has to do when parsing non-standardized data.
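>The multilingual lookup described above can be sketched against the
>public Wikidata wbgetentities API. This is only an illustrative example
>of the idea, not part of the thread: Q243 (the Eiffel Tower) is an
>arbitrary item ID, and the offline sample mirrors the API's JSON shape.
>
>```python
>import json
>import urllib.request
>
># Extract a mapping of wiki site -> article title from a Wikidata
># entity record, as returned by the wbgetentities API.
>def sitelinks(entity):
>    return {site: link["title"] for site, link in entity["sitelinks"].items()}
>
>def fetch_entity(qid):
>    # Live lookup against the public API (assumes network access).
>    url = ("https://www.wikidata.org/w/api.php?action=wbgetentities"
>           f"&ids={qid}&props=sitelinks|labels&format=json")
>    with urllib.request.urlopen(url) as resp:
>        return json.load(resp)["entities"][qid]
>
># Offline sample in the same shape as the API response for Q243:
>sample = {"sitelinks": {"enwiki": {"site": "enwiki", "title": "Eiffel Tower"},
>                        "frwiki": {"site": "frwiki", "title": "Tour Eiffel"}}}
>print(sitelinks(sample))
>```
>
>So from one wikidata=Q... tag, a data consumer can recover the article
>title in every language edition, instead of storing one wikipedia[:xx]
>tag per language in OSM.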
>>   Validation of wikipedia tag values can and IS already done using OSM
>> data versus wikipedia-geolocated data extracts/dumps.
>Sure, it can be done via dump parsing, but it is much more complicated
>than querying.  Would you rather use Overpass turbo to do a quick search for
>some weird thing that you noticed, or download and parse the dump?  Most
>people would rather do the former. It is the same thing here - you *could* do
>validation via a dump, but that barrier to entry is so high that most
>people wouldn't.  With the new OSM+Wikidata tool, which is already getting
>hundreds of thousands of requests (!!!), it is possible to get just the
>data you need, and fix the problems that have always been present, but hidden.
>And all that is possible because of a single tag.
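>The kind of quick query-based check contrasted with dump parsing above
>can be sketched against the public Overpass API. This is a hypothetical
>example, not the tool mentioned in the thread: it builds an Overpass QL
>query for nodes that carry a wikipedia tag but no wikidata tag (links
>that cannot be cross-checked), over an arbitrary bounding box.
>
>```python
>import urllib.parse
>import urllib.request
>
># Build an Overpass QL query for nodes in a (south, west, north, east)
># bounding box that have a wikipedia tag but no wikidata tag.
>def unlinked_wikipedia_query(bbox):
>    south, west, north, east = bbox
>    return ("[out:json][timeout:25];"
>            f"node[wikipedia][!wikidata]({south},{west},{north},{east});"
>            "out tags;")
>
>def run(query, endpoint="https://overpass-api.de/api/interpreter"):
>    # Live call to the public Overpass API (assumes network access).
>    data = urllib.parse.urlencode({"data": query}).encode()
>    with urllib.request.urlopen(endpoint, data) as resp:
>        return resp.read()
>
># Example bounding box (central Paris); any area works.
>print(unlinked_wikipedia_query((48.85, 2.29, 48.87, 2.31)))
>```
>
>A check like this runs in seconds, versus downloading and parsing a
>multi-gigabyte dump for the same answer.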
talk mailing list
