Hi Martin 

Martin Koppenhoefer <dieterdre...@gmail.com> wrote: (23 August 2020 18:27:58 
CEST)
>
>
>sent from a phone
>
>> On 23. Aug 2020, at 13:55, pangoSE <pang...@riseup.net> wrote:
>> 
>> We could e.g. set a verification-needed
>> flag on objects edited in a changeset with "please review".
>
>
>while you can (already) add a fixme tag, I fear that creating a special
>feature for less reliable information could lead to people being
>encouraged to add more “guesswork” because they “set the unreliable
>flag, so what's the problem?”

Yeah, that's a good point. We are social animals. 

>
>I just had an idea: You could calculate a reliability index for each
>and every object in OpenStreetMap (and maybe for each of their tags), by
>looking at the mapping experience of the person that added it.

Beautiful idea! I'm gonna try that with a small country. Working on a small 
excerpt of the planet could be problematic because the whole user history will 
not be available. Hmm, that means big data is the only viable way forward.
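
Something like this is what I have in mind as a first iteration, i.e. the 
reliability of an object only depends on how experienced the mapper who 
touched it is. Just a rough Python sketch; the log scaling and the 10 000-edit 
saturation point are numbers I made up to have something to play with:

import math

def mapper_experience_score(total_edits):
    """Map a mapper's raw edit count to a 0-1 experience score.
    The log curve and the ~10 000 edit saturation point are arbitrary."""
    if total_edits <= 0:
        return 0.0
    return min(1.0, math.log10(total_edits) / 4.0)  # 10 000 edits -> 1.0

def object_reliability(last_editor_total_edits):
    """Reliability index for one OSM object, based only on the experience
    of the mapper who added/last touched it."""
    return mapper_experience_score(last_editor_total_edits)

print(object_reliability(250))  # object last touched by a mapper with 250 edits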

>In a
>more complex iteration, it could also take the reliability of specific
>mappers into account by analyzing whether things they add or modify are
>kept or changed by following mappers (and it would probably have to
>take time into account, because if something is changed after a long
>time it is more probable that it was because of a change in real
>life and not because of bad representation, and maybe also the kind of
>change).

I love this idea too. This is what I have been doing in my head while editing in 
Sweden for multiple years. I have a very short list of editors who frequently map 
in a way I don't like or make errors, e.g. things showing up in Keep Right where 
they were the last editor. The usual suspects 😅. I always try communicating with 
them, and most respond and we find a way forward, but some never react to 
changeset comments and just keep doing what they are doing.

Not reacting to changeset comments is another red flag.
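
The "edit survival" part could look roughly like this. The half-life weighting 
is completely made up by me; the point is just what you say above, that a 
change long after the original edit counts less against the original mapper 
than a quick correction by someone else:

from datetime import datetime

HALF_LIFE_DAYS = 365  # made-up value: after a year, a later change by
                      # someone else only counts half as much against you

def survival_scores(versions):
    """versions: list of (uid, timestamp) for all versions of one object,
    oldest first. Returns {uid: score}; a mapper loses credit when
    someone else changes the object soon after their edit."""
    scores = {}
    for (uid, ts), nxt in zip(versions, versions[1:] + [None]):
        if nxt is None:
            scores[uid] = scores.get(uid, 0.0) + 1.0  # still the current version
            continue
        next_uid, next_ts = nxt
        if next_uid == uid:
            continue  # refining your own edit is not penalised here
        penalty = 0.5 ** ((next_ts - ts).days / HALF_LIFE_DAYS)
        scores[uid] = scores.get(uid, 0.0) + (1.0 - penalty)
        # this ignores *what* changed (geometry vs. tags), which would
        # also matter as pointed out above
    return scores

print(survival_scores([
    (1001, datetime(2018, 1, 1)),
    (1001, datetime(2018, 1, 2)),
    (2002, datetime(2020, 6, 1)),  # changed 2.5 years later: small penalty
]))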

>It could also be done according to the field of activity (e.g.
>this mapper does reliable work with buildings, or this mapper is an
>expert for outdoor routes but does poor work in cities, or is an expert
>for railways, etc.)

Yes, this is a good observation. OSM is hard; it takes time to learn the ropes.
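
That per-field idea could probably reuse the same survival data, just grouped 
per theme. Another sketch, with an arbitrary list of "themes" picked by me:

from collections import defaultdict

THEMES = ("building", "highway", "railway", "route", "landuse")  # my arbitrary pick

def theme_of(tags):
    """Very naive: the first theme key the object carries, else 'other'."""
    for key in THEMES:
        if key in tags:
            return key
    return "other"

def per_theme_survival(edits):
    """edits: iterable of (uid, tags, survived) records, where survived is
    a bool from the analysis above. Returns {uid: {theme: share kept}}."""
    kept = defaultdict(lambda: defaultdict(int))
    total = defaultdict(lambda: defaultdict(int))
    for uid, tags, survived in edits:
        theme = theme_of(tags)
        total[uid][theme] += 1
        kept[uid][theme] += 1 if survived else 0
    return {uid: {t: kept[uid][t] / n for t, n in themes.items()}
            for uid, themes in total.items()}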

>
>There is a lot of stuff that could be analyzed, an immense amount. All the
>history is still available with all the user information...

I get your point. 

You could also flag changesets with huge BBOXes, and filter out those made by 
experienced mappers and those that only touch one big relation.
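
For example (the 5 square degree threshold and the "experienced = more than 
10 000 edits" cut-off are numbers I just made up):

def bbox_area_deg2(min_lon, min_lat, max_lon, max_lat):
    # rough size in square degrees is good enough for flagging,
    # no need for proper geodesic area here
    return abs(max_lon - min_lon) * abs(max_lat - min_lat)

def should_flag(changeset, editor_total_edits):
    """changeset: dict with 'bbox' = (min_lon, min_lat, max_lon, max_lat)
    and 'touches_single_big_relation', a field I invented for this sketch."""
    if editor_total_edits > 10_000:  # experienced mapper: skip
        return False
    if changeset.get("touches_single_big_relation"):
        return False  # e.g. retagging one country-sized relation
    return bbox_area_deg2(*changeset["bbox"]) > 5.0  # arbitrary threshold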

Using this search https://duckduckgo.com/?q=osm+history+analysis I just found 
https://heigit.org/big-spatial-data-analytics-en/ohsome/ which seems very 
promising 😃

I will contact them and see if I can use and contribute to their platform to 
get the information I want.
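
If I read their documentation right, their HTTP API can already answer 
aggregated questions against the full history, e.g. counting buildings in a 
bounding box per year. Treat the endpoint and parameter names below as my best 
guess from a quick skim, not as gospel:

import requests

# endpoint and parameters as I understand them from the ohsome docs -
# double-check against https://api.ohsome.org before relying on this
URL = "https://api.ohsome.org/v1/elements/count"
params = {
    "bboxes": "17.9,59.2,18.2,59.4",      # roughly central Stockholm
    "time": "2010-01-01/2020-01-01/P1Y",  # yearly snapshots
    "filter": "building=*",
}

resp = requests.get(URL, params=params, timeout=60)
resp.raise_for_status()
for row in resp.json()["result"]:
    print(row["timestamp"], row["value"])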

A good algorithm for finding and rating experienced mappers is crucial. If 
anyone has already made one, or has ideas for improvements, please share 😃

Feel free to add to this wiki page: 
https://wiki.openstreetmap.org/wiki/Algorithms_for_QA

I just signed up for Fuga Cloud and I'm gonna start playing with the history 
data in Python and PostgreSQL to crunch the numbers for a small country if 
ohsome turns out not to be suitable.
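
My rough plan in that case: import the full-history extract of one country 
into a table and aggregate per user. A first query could be something like 
this (the osm_history table and its columns are just the schema I plan to 
create myself, nothing standard):

import psycopg2

# 'osm_history' with a boolean 'superseded' column is my own planned
# import schema, not something an existing tool gives you out of the box
QUERY = """
    SELECT uid,
           count(*) AS edits,
           count(*) FILTER (WHERE superseded) AS superseded_edits,
           1.0 - (count(*) FILTER (WHERE superseded))::float / count(*) AS survival_rate
    FROM osm_history
    GROUP BY uid
    ORDER BY edits DESC
    LIMIT 50;
"""

with psycopg2.connect("dbname=osm_sweden") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for uid, edits, superseded, survival in cur.fetchall():
            print(uid, edits, superseded, round(survival, 3))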

Thanks for sharing your ideas 😃

Cheers
pangoSE

_______________________________________________
talk mailing list
talk@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk
