Re: [OSM-talk] [HOT] Quality (was: The point on the OSM Response to the DR Congo Nord Kivu Ebola outbreak)
Yes, quality should be integrated at all levels, from documentation and editing tools to project monitoring, particularly in the context of mapathons, so that problems are caught and corrected rapidly. And yes, validation is the last step, the last barrier to catch and correct quality problems. After the experience with mapathons over the last few years, we are surely at the point where we need to revise our global process and identify where improvements would contribute to this quality quest.

Bjoern, in a HOT discussion about the Ebola response in Butembo, gave us a link to some documentation used in the context of mapathons. It is important to propose such documentation specific to mapathons: https://lists.openstreetmap.org/pipermail/hot/2018-December/014667.html

Documentation easily accessible in iD with the ? shortcut is also a good point. Such easy access to documentation should be part of the various OSM editors. But it should also focus on specific skills such as tracing a building, correcting irregular geometry, adjusting the offset of imagery, and classifying roads. Links to short videos would also greatly help beginners.

Some projects are more complex, with aspects such as the density of urban areas and imagery quality and offset, and it is important to restrict access to these more complex projects based on OSM experience. This is now possible for the various Tasking Manager projects, and taking this step for the Butembo Ebola response this week dramatically improved the quality of the data produced.

But still, I have often observed that some occasional contributors to mapathons continue to produce "fantasy" buildings more than a year after they started editing. This is an indication of how important it is not only to provide good documentation and tools to beginners and to restrict the more complex jobs, but also to better accompany and motivate OSM beginners. Let's be a community. Let's go back to our roots!
We should stop having thousands of one-day contributors who produce inadequate data that is often never corrected afterwards. Irregular geometries in the OSM database are probably, more than 90% of the time, an indication of incorrect mapping. Highlighting irregular geometries and overlaps in editors such as iD and JOSM would facilitate revision by beginners. This could be integrated into the JOSM validation process, and iD could gain a similar validation process.

Monitoring of quality and of OSM edits needs tools to quickly identify such problems. Overpass and JOSM could provide the ability to query for irregular geometries and overlaps. Such an addition to Overpass would let mapathon organizers visually monitor the quality of the participants' editing, for example using a list of OSM user ids, and the same queries could be used when editing in JOSM. And imagine mapathon participants viewing their progress on a "live quality map" with "quality statistics": that would be both motivating and pedagogic.

There were some regressions in the Tasking Manager updates regarding the ability to monitor users. For example, the Activity and Stats sections no longer let us see on the map the squares mapped by a particular contributor, which makes it difficult to revise the edits of a specific contributor who does not map appropriately. On the other hand, it is now possible to restrict access to validation.

Regards,
Pierre

___ talk mailing list talk@openstreetmap.org https://lists.openstreetmap.org/listinfo/talk
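[Editorial note: a minimal sketch of the kind of "irregular geometry" check Pierre describes, flagging a closed building outline whose corner angles deviate far from 90 degrees. The angle test and the 15-degree tolerance are illustrative assumptions, not how JOSM's validator or any Overpass extension actually works.]

```python
import math

def corner_angles(ring):
    """Interior-corner angles (degrees) of a closed ring of (x, y) vertices.

    `ring` lists each vertex once; the first vertex is not repeated at the end.
    """
    n = len(ring)
    angles = []
    for i in range(n):
        ax, ay = ring[i - 1]          # previous vertex
        bx, by = ring[i]              # the corner itself
        cx, cy = ring[(i + 1) % n]    # next vertex
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        # Clamp for floating-point safety before acos.
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))))
    return angles

def is_irregular(ring, tolerance_deg=15.0):
    """True if any corner deviates from 90 degrees by more than the tolerance."""
    return any(abs(a - 90.0) > tolerance_deg for a in corner_angles(ring))

# A true rectangle passes; a skewed quadrilateral is flagged for review.
print(is_irregular([(0, 0), (1, 0), (1, 1), (0, 1)]))      # regular
print(is_irregular([(0, 0), (2, 0), (2.5, 1), (0, 1)]))    # irregular
```

A real check would also need to tolerate legitimately non-rectangular buildings (round towers, L-shapes with many 90-degree corners), which is exactly why such a flag should trigger human review rather than automatic correction.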
Re: [OSM-talk] [HOT] Quality (was: The point on the OSM Response to the DR Congo Nord Kivu Ebola outbreak)
This is a good discussion. Chiming in here to share some additional thoughts and the work we've been doing on the Tasking Manager this fall.

I agree with what people have said. Quality isn't just a "let's improve how we validate" problem, and it isn't only an editor problem either: through many of the mapathons, the Tasking Manager is an entry point for new mappers and can be a source of both the problems and the solution. It's a multi-factored problem, and we're trying to work on the Tasking Manager side to help out here. The TM developers are well aware that many new mapathon mappers first use an OSM editor through the TM, so there is a chance to greatly support quality improvements here.

As Steve mentioned at the HOT Summit, we started to dig into some of these problems through a couple of workshops. Notes from the two workshop sessions are here if anyone is interested:

- Data quality improvements workshop: https://github.com/hotosm/hot-summit-2018/wiki/Design-Workshop-:-Data-Quality-Validation-Improvements-with-TM-iD-Editor
- AI & ML workshop: https://github.com/hotosm/hot-summit-2018/wiki/Design-Workshop-:-AI-&-Machine-Learning---Integrating-into-HOT-Tools

As we've been working on the Tasking Manager this fall and preparing for some additional development this spring, we've been digging into a few items related to quality:

1. Onboarding. Onboarding not only starts with good training at mapathons but can happen in-app as well. We've only just started to dig into the many ideas on ways we can improve the training aspect of what people need to know before they start mapping. Some notes from a recent design conversation we had: https://github.com/hotosm/tasking-manager/wiki/Onboarding-Idea-Generation-Session-Notes

2. New mapper mapping experience. In relation to onboarding and training, the type of mapping, or the way data is exposed to new mappers, is vastly different from that of a well-trained, experienced mapper.
The Tasking Manager workflow currently tries to meet both mappers' needs in the middle, which might be part of the problem at the moment. In January we're going to be taking a look at the entire experience to dig into where and how things should change to improve on this front.

3. Testing ML as a quality support tool. Machine learning outputs can be a huge support here, and that hasn't been well tested. As we've been working on the first part of the ML strategy HOT outlined this fall, giving real-time feedback to a mapper will be extremely helpful in improving data quality: https://www.hotosm.org/updates/integrating-machine-learning-into-the-tasking-manager/. What that looks like exactly is yet to be determined, but we're hoping to have a prototype in January of how ML can be used to integrate a complexity measure into a task grid square and ultimately help set mapping expectations.

Along with volunteering opportunities to work on technology projects with HOT, we do have a job opening that will include working on the Tasking Manager: https://www.hotosm.org/jobs/technical-project-manager/.

On Thu, Dec 13, 2018 at 7:50 AM Stephen Penson wrote:
> To build on Jean-Marc's point, one thing I raised at the HOT Summit and
> also recently to the London Missing Maps team is the need to tackle the
> errors at the source. Having validators is vital, but I believe we can
> improve the initial mapping through a few tweaks in the way new mappers are
> trained.
>
> Personally, what I believe would be really powerful is the creation of a
> way for new mappers to understand the importance of high quality mapping.
>
> For instance, if it were possible within iD Editor to not only highlight
> overlapping buildings but ALSO explain why overlapping buildings have an
> impact, then people would be able to relate and therefore change their
> behaviours.
> For example, the tool could highlight that overlapping buildings can
> result in inaccurate population density calculations which can have an
> impact on humanitarian response (see previous messages from Pierre
> Belland's HOT mailing list post on the DRC as a case study). If we can
> explain this to people in a compelling way, I believe the quality of the
> mapping would improve.
>
> If something could be built within the current tool set (e.g. embedded
> text/video within iD validation) this should hopefully ensure consistency.
>
> Combining such tweaks with real-time monitoring tools, such as Bjoern
> suggests, should improve quality at mapathons.
>
> Essentially, people attend Missing Maps mapathons to contribute to a
> worthy cause. People wish to map the best they can, so if more (and
> consistent) support is offered, the quality will improve.
>
> Thanks
>
> Steve
> --
> *From:* Jean-Marc Liotier
> *Sent:* 12 December 2018 22:30
> *To:* talk@openstreetmap.org; h...@openstreetmap.org
> *Subject:* Re: [HOT] Quality (was: The point on the OSM Response to the
> DR Congo Nord Kivu Ebola outbreak)
>
> On
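[Editorial note: the complexity measure per task grid square mentioned above is still undetermined; as a toy illustration only, one could bucket tasks by a model's predicted building density, with entirely made-up thresholds and labels.]

```python
def task_complexity(predicted_building_count, area_km2):
    """Bucket a task square by predicted building density (buildings/km2).

    Thresholds are illustrative assumptions, not HOT's actual criteria.
    """
    density = predicted_building_count / area_km2
    if density < 10:
        return "beginner"       # sparse rural area
    if density < 200:
        return "intermediate"   # villages, small towns
    return "advanced"           # dense urban fabric

def assign(tasks):
    """tasks: iterable of (task_id, predicted_building_count, area_km2)."""
    return {task_id: task_complexity(n, a) for task_id, n, a in tasks}

# Hypothetical task squares of a Tasking Manager project:
print(assign([("t1", 5, 1.0), ("t2", 150, 1.0), ("t3", 400, 0.5)]))
```

Such a score could then drive the expectation-setting the message describes, e.g. routing "advanced" squares only to mappers above a given experience level.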
Re: [OSM-talk] [HOT] Quality (was: The point on the OSM Response to the DR Congo Nord Kivu Ebola outbreak)
To build on Jean-Marc's point, one thing I raised at the HOT Summit and also recently to the London Missing Maps team is the need to tackle the errors at the source. Having validators is vital, but I believe we can improve the initial mapping through a few tweaks in the way new mappers are trained.

Personally, what I believe would be really powerful is the creation of a way for new mappers to understand the importance of high quality mapping. For instance, if it were possible within iD Editor to not only highlight overlapping buildings but ALSO explain why overlapping buildings have an impact, then people would be able to relate and therefore change their behaviours.

For example, the tool could highlight that overlapping buildings can result in inaccurate population density calculations which can have an impact on humanitarian response (see previous messages from Pierre Belland's HOT mailing list post on the DRC as a case study). If we can explain this to people in a compelling way, I believe the quality of the mapping would improve. If something could be built within the current tool set (e.g. embedded text/video within iD validation) this should hopefully ensure consistency.

Combining such tweaks with real-time monitoring tools, such as Bjoern suggests, should improve quality at mapathons. Essentially, people attend Missing Maps mapathons to contribute to a worthy cause. People wish to map the best they can, so if more (and consistent) support is offered, the quality will improve.

Thanks

Steve

From: Jean-Marc Liotier
Sent: 12 December 2018 22:30
To: talk@openstreetmap.org; h...@openstreetmap.org
Subject: Re: [HOT] Quality (was: The point on the OSM Response to the DR Congo Nord Kivu Ebola outbreak)

On 12/12/18 2:16 AM, Ralph Aytoun wrote:
I am also concerned about the quality of the mapping that is tying up projects because it takes up so much validation time. [..]
This perception is (don't take it personally - I am answering your message but I am not singling you out) a symptom of a widespread problem: quality perceived as a separate activity, an extra cost tacked onto the actual productive work. Considering the quality assurance process as a distinct set of activities has the very unfortunate effect of creating an unnecessary conflict with production. So:

- Start with a clearly defined, objective quality goal, just adequate for the planned purpose of the data
- Teach contributors that not meeting this goal is worse than doing nothing: negative value
- Monitor contributions in real time, to catch deviations before they snowball... I love Bjoern's idea, though OSMCha works for me
- Reiterate!

Quality is the essence of the whole activity, not a distinct step. Yes, it spoils the fun for new contributors thrilled to start mapping away and see their gamified metrics take off spectacularly in a rain of digital achievement awards. But it also helps them make sense of what they are doing instead of launching them on an open-ended trip with a hazy purpose - and what is better than to find meaning in a task?

Normative leadership may feel incompatible with a flat collaborative forum such as OpenStreetMap, but it makes sense within a directed project with a declared purpose, in which contributors voluntarily participate. If they trust the project leadership enough to join as contributors, they may expect normative guidance and may even be disappointed not to feel it from the leadership.
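[Editorial note: on Steve's suggestion earlier in the thread to highlight overlapping buildings, a cheap first-pass check is whether two footprints' bounding boxes intersect. This is a simplified illustration only; real validators such as JOSM's perform exact polygon intersection, and a bounding-box test can report false positives for L-shaped neighbours.]

```python
def bbox(ring):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a vertex ring."""
    xs = [p[0] for p in ring]
    ys = [p[1] for p in ring]
    return min(xs), min(ys), max(xs), max(ys)

def bboxes_overlap(ring_a, ring_b):
    """Coarse overlap test between two building footprints."""
    ax0, ay0, ax1, ay1 = bbox(ring_a)
    bx0, by0, bx1, by1 = bbox(ring_b)
    # Strict inequalities: buildings that merely share an edge are not flagged.
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(bboxes_overlap(square, [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]))
print(bboxes_overlap(square, [(1, 0), (2, 0), (2, 1), (1, 1)]))  # edge-adjacent
```

Pairing such a flag with the explanation Steve proposes (why overlaps distort population density estimates) is what would turn it from a nag into a teaching moment.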