Smalyshev added a comment.
> ensuring that the data in the WDQS nodes accurately reflects the data upstream of the service, or at least that the data is consistent between query nodes
I am not sure how you would propose ensuring that. Given a database of almost 7 billion triples, it is not feasible to compare two full query nodes against each other, or to verify the whole database against Wikidata.
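For illustration of why only a partial check is plausible: a full diff of two 7-billion-triple stores is out of reach, but a random spot-check of per-entity triple counts across two query nodes could at least surface divergence. This is a hedged sketch, not an existing WDQS tool; the query shape and helper names are my own, and the HTTP fetching side is deliberately left out.

```python
import random

def count_query(entity_id):
    """Build a SPARQL query counting triples with the entity as subject.
    The entity URI prefix matches Wikidata's RDF model; treat the exact
    query shape as an assumption, not a tested recipe."""
    return (
        "SELECT (COUNT(*) AS ?c) WHERE { "
        f"<http://www.wikidata.org/entity/{entity_id}> ?p ?o }}"
    )

def sample_entities(max_qid, n, seed=None):
    """Pick n random Q-ids to spot-check (some will not exist;
    callers should skip empty results)."""
    rng = random.Random(seed)
    return [f"Q{rng.randrange(1, max_qid)}" for _ in range(n)]

def diff_samples(counts_a, counts_b):
    """Given {entity: triple_count} gathered from two query nodes,
    return the entities whose counts disagree -- candidates for
    closer inspection or a targeted reload."""
    return sorted(
        e for e in counts_a.keys() & counts_b.keys()
        if counts_a[e] != counts_b[e]
    )
```

Even this only catches count-level divergence, and sampling a few thousand entities says nothing about the billions not sampled, which is the point made above.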
> as far as I can tell these events aren't actively monitored
How would you propose to actively monitor these?
> for each node monitored to spot when data has gotten lost on the way
If we had a process that could tell us in advance which data has gotten lost, we could use that same process to recover the data. The whole problem is that we don't know when data is lost: it happens precisely when the existing data-synchronization process fails for some reason.
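One hedged after-the-fact probe (not a way of knowing in advance, and not an existing WDQS feature): the Wikibase RDF output exposes, as far as I recall, the revision id via a schema:version triple on the entity's data node, so sampled revisions seen by a query node could be compared against lastrevid from the MediaWiki API, and lagging entities re-fed to the updater. The comparison step might look like this; the function name is mine and the fetching side is omitted:

```python
def stale_entities(wdqs_revs, wiki_revs):
    """Given {entity: revision} as seen by a WDQS node (e.g. from the
    schema:version triple) and by Wikidata itself (e.g. lastrevid from
    the MediaWiki API), return entities where the query node lags.
    An entity missing from the WDQS side entirely also counts as stale."""
    return sorted(
        e for e, rev in wiki_revs.items()
        if wdqs_revs.get(e, -1) < rev
    )
```

This still only covers whatever sample you pull, so it narrows the detection gap rather than closing it.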
Cc: Alexsdutton, WMDE-leszek, Multichill, agray, Jheald, Magnus, Pintoch, gerritbot, Mathew.onipe, Stashbot, Lydia_Pintscher, EBjune, debt, Joe, Smalyshev, Gehel, Aklapper, Legado_Shulgin, Nandana, thifranc, AndyTan, Davinaclare77, Qtn1293, Lahi, Gq86, Lucas_Werkmeister_WMDE, GoranSMilovanovic, Th3d3v1ls, Hfbn0, QZanden, merbst, LawExplorer, Zppix, D3r1ck01, Jonas, Xmlizer, Wong128hk, jkroll, Wikidata-bugs, Jdouglas, aude, Tobias1984, Manybubbles, faidon, Mbch331, Jay8g, fgiunchedi
_______________________________________________
Wikidata-bugs mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
