Joe added a comment.
In T214362#4967944 <https://phabricator.wikimedia.org/T214362#4967944>, @Addshore wrote:

> We currently still want to be able to compute the check on demand, either because the user wants to purge the current constraint check data, or because the check data does not already exist / is outdated.
>
> It could be possible that later down the line we put the purging of this data into the job queue too, and once we have data for all items persistently stored, in theory the user would never ask for an item's constraint check and find it missing (thus no writing to the storage on request). But that is not the immediate plan.

My point here is quite subtle but fundamental: if we can split reads and writes to this datastore based on the HTTP verb, so that constraints would be persisted only via either

a) a specific job enqueued (by user request), or
b) a POST request,

then it would be possible to store these data in the cheapest k-v storage we have, the ParserCache. That would typically be cheaper and faster than using a distributed k-v storage like Cassandra, which I'd reserve for things that need to be written to from multiple datacenters.

TASK DETAIL
https://phabricator.wikimedia.org/T214362
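For illustration only, the read/write split described above can be sketched roughly as follows. This is a minimal, hypothetical model (the class, method names, and in-memory dict are stand-ins, not Wikibase or ParserCache APIs): GET requests are read-only and may compute a result on a cache miss but never persist it, while POST requests and queued jobs are the only paths that write to the store.

```python
class ConstraintCheckStore:
    """Hypothetical sketch: persist constraint checks only on POST or via a job."""

    def __init__(self):
        self._cache = {}  # stand-in for a cheap k-v store (ParserCache-like)

    def handle(self, verb, item_id):
        if verb == "GET":
            # Read path: serve cached data, or compute on the fly
            # WITHOUT persisting, so GETs stay side-effect free.
            cached = self._cache.get(item_id)
            return cached if cached is not None else self._compute(item_id)
        if verb == "POST":
            # Write path: compute and persist.
            result = self._compute(item_id)
            self._cache[item_id] = result
            return result
        raise ValueError(f"unsupported verb: {verb}")

    def run_job(self, item_id):
        # A queued job is the other sanctioned write path.
        self._cache[item_id] = self._compute(item_id)

    def _compute(self, item_id):
        # Placeholder for the actual constraint check computation.
        return {"item": item_id, "violations": []}
```

Because writes only ever happen in the POST handler or the job runner (which both run in the primary datacenter), the backing store never needs multi-datacenter write support.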
_______________________________________________ Wikidata-bugs mailing list [email protected] https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
