Isaac added a comment.

  I'm able to start thinking about this again; a few thoughts:
  
  - Machine-in-the-loop: when we built quality models for the Wikipedia 
language communities, it was with the idea that the models could potentially 
support the existing editor processes for assigning article quality scores -- 
e.g., https://en.wikipedia.org/wiki/Wikipedia:Content_assessment. This 
generally aligns with our machine-in-the-loop practice of only building models 
that clearly could support and receive feedback from existing community 
processes. For Wikidata, while there are reasonable guidelines 
<https://www.wikidata.org/wiki/Wikidata:Item_quality> for item quality, the 
only community-generated data was a one-off labeling campaign from 2020 via 
Wiki labels <https://meta.wikimedia.org/wiki/Wiki_labels/en>. This presents a 
major challenge: how do we improve on the existing ORES model to make it more 
maintainable / effective without a clear feedback loop that can be used to 
validate/update the model? One possible approach is to instead treat this as a 
task-identification model -- i.e. instead of seeking to model quality directly 
and therefore allowing vague features like the total # of references, we could 
design a model that seeks to explicitly build a list of missing/to-be-improved 
properties/aliases/descriptions/references. This list of changes could then 
always be converted into a quality score -- e.g., by computing a simple ratio 
of existing properties to missing properties or something like that -- but that 
would be secondary to the model. The community process that can provide 
feedback for this style of model then is just the regular editing process 
(albeit quite weakly because an edit doesn't tell you what else is missing). 
Eventually, it could feed into an actual interface similar to the Growth team's 
structured tasks 
<https://www.mediawiki.org/wiki/Growth/Personalized_first_day/Structured_tasks> 
that would provide even more direct feedback, but in the meantime this still 
feels much more machine-in-the-loop than a direct quality model.
  - Reducing data drift: alongside this shift in design from quality -> task 
identification, we can also make the model more sustainable by doing less 
hard-coding of outliers (like asteroids 
<https://github.com/wikimedia/articlequality/blob/master/articlequality/feature_lists/wikidatawiki_data/items_lists.py>)
 and trying to redesign the model to adapt to the existing structure of Wikidata 
when it is trained. For example, we could take more of the approach previously taken for 
external identifiers / media 
<https://github.com/wikimedia/articlequality/blob/master/articlequality/feature_lists/wikidatawiki_data/property_datatypes.py>
 where the relevant data structures that inform the model are easy to 
auto-generate and thus could be updated with each model training. This could be 
extended to e.g., lists of properties that commonly have references and lists 
of properties that commonly appear for a given instance-of.
    - Then the model would take an item as input and perhaps go something like:
      - Extract its instance-of values and sitelinks
      - Sitelinks would be used to help determine which aliases/descriptions 
should exist
      - Instance-ofs would be used to identify which properties are expected
      - Each of those expected properties would then be rated as missing, 
incomplete (e.g., missing a reference), or complete
      - And then all of this information could be compiled as specific tasks
      - And for the quality score, the list of tasks could be compared against 
the existing data to come to some general score.
    - The remaining challenge is still in smartly compiling the expected 
properties for a given instance-of, but I feel much better about the structure 
of this model because it's more transparent: anyone who is familiar with 
Wikidata could easily inspect the list of expected properties for a given 
instance-of and tweak it.
    - I'm now working on extracting the list of existing properties for each 
instance-of to see if most have a clear set of common properties.
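
The pipeline sketched above could be prototyped roughly as follows. This is a 
minimal sketch, not a spec: the item format, the `EXPECTED` table (which would 
be auto-generated per instance-of at training time), the task labels, and the 
completeness-ratio score are all illustrative assumptions on my part.

```python
# Sketch of the task-identification model described above (all names are
# illustrative assumptions, not an agreed design).

# Stand-in for the auto-generated "common properties per instance-of" table.
EXPECTED = {
    # instance-of QID -> properties we expect such items to have
    "Q5": ["P569", "P21", "P106"],  # human: date of birth, sex/gender, occupation
}

def rate_property(item, prop):
    """Rate one expected property as missing / incomplete / complete."""
    claims = item.get("claims", {}).get(prop, [])
    if not claims:
        return "missing"
    if not any(c.get("referenced") for c in claims):
        return "incomplete"  # property present, but no claim carries a reference
    return "complete"

def compile_tasks(item):
    """Turn the per-property ratings into a list of concrete editing tasks."""
    tasks = []
    for instance_of in item.get("instance_of", []):
        for prop in EXPECTED.get(instance_of, []):
            rating = rate_property(item, prop)
            if rating == "missing":
                tasks.append(("add", prop))
            elif rating == "incomplete":
                tasks.append(("add_reference", prop))
    return tasks

def quality_score(item):
    """Secondary quality score: share of expected properties that are complete."""
    expected = {p for i in item.get("instance_of", []) for p in EXPECTED.get(i, [])}
    if not expected:
        return None  # no expectations for this instance-of -> no score
    complete = sum(1 for p in expected if rate_property(item, p) == "complete")
    return complete / len(expected)

item = {
    "instance_of": ["Q5"],
    "claims": {
        "P569": [{"referenced": True}],   # referenced date of birth
        "P21": [{"referenced": False}],   # unreferenced sex/gender
        # P106 (occupation) missing entirely
    },
}
print(compile_tasks(item))   # [('add_reference', 'P21'), ('add', 'P106')]
print(quality_score(item))   # 1 of 3 expected properties complete
```

The task list is primary; the score is just a derived summary, so the scoring 
rule could be swapped out without touching the task identification.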
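
For that last extraction step, one way to check whether an instance-of has a 
clear set of common properties is to count property frequency per instance-of 
over a sample of items and keep properties above a threshold. The sample 
format and the 0.5 cutoff below are my assumptions for illustration; in 
practice the data would come from a dump or query-service sample.

```python
from collections import Counter, defaultdict

# Toy sample: (instance-of QID, set of property IDs present on the item).
sample = [
    ("Q5", {"P569", "P21", "P106"}),
    ("Q5", {"P569", "P21"}),
    ("Q5", {"P569", "P734"}),
    ("Q523", {"P59", "P2045"}),  # star: constellation, axial tilt
]

def common_properties(sample, threshold=0.5):
    """Properties appearing on at least `threshold` of items, per instance-of."""
    counts = defaultdict(Counter)
    totals = Counter()
    for instance_of, props in sample:
        totals[instance_of] += 1
        counts[instance_of].update(props)
    return {
        instance_of: sorted(
            p for p, n in counts[instance_of].items()
            if n / totals[instance_of] >= threshold
        )
        for instance_of in totals
    }

print(common_properties(sample))
# P569 appears on 3/3 Q5 items and P21 on 2/3, so both pass the 0.5 cut;
# P106 and P734 (1/3 each) do not.
```

An instance-of with few properties passing the cut would be a sign that 
expectations can't be compiled cleanly for it.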

TASK DETAIL
  https://phabricator.wikimedia.org/T321224

To: Isaac
Cc: diego, Miriam, Isaac, Astuthiodit_1, karapayneWMDE, Invadibot, Ywats0ns, 
maantietaja, ItamarWMDE, Akuckartz, Nandana, Abdeaitali, Lahi, Gq86, 
GoranSMilovanovic, QZanden, LawExplorer, Avner, _jensen, rosalieper, 
Scott_WUaS, Wikidata-bugs, aude, Capt_Swing, Lydia_Pintscher, Mbch331
_______________________________________________
Wikidata-bugs mailing list -- [email protected]
To unsubscribe send an email to [email protected]
