daniel created this task.
daniel added projects: Wikidata, MediaWiki-extensions-WikibaseRepository, Performance.
Herald added subscribers: PokestarFan, Aklapper.


ChangeDispatcher has various options governing batch size, dispatch interval, lock retention, etc. These settings are designed to allow a tradeoff between the efficiency gained by batching and acceptably low delays until changes are processed on the client wiki. The right tradeoff, however, depends strongly on the size of the client wiki (or rather, on the number of entities used on the client wiki, and the number of edits to those entities on the repo).
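As a rough illustration, the kind of per-group tuning this tradeoff calls for might look like the following sketch. All names and numbers here are hypothetical, chosen only to show the shape of the idea; they are not the actual dispatchChanges option names or defaults:

```python
# Hypothetical per-group tuning table; names and values are illustrative only.
DISPATCH_SETTINGS = {
    # Large client wikis: many entity usages, so favour big batches
    # even at the cost of somewhat higher dispatch latency.
    "large": {"batch_size": 1000, "dispatch_interval": 60, "lock_grace": 120},
    # Small client wikis: few changes per run, so small batches and a
    # short interval keep the delay until changes arrive low.
    "small": {"batch_size": 50, "dispatch_interval": 10, "lock_grace": 60},
}

def settings_for(group):
    """Look up tuning for a dispatcher group, falling back to 'small'."""
    return DISPATCH_SETTINGS.get(group, DISPATCH_SETTINGS["small"])
```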

With the current setup, however, a single set of values has to serve all wikis, forcing a compromise between good settings for large wikis and good settings for small wikis, and leading to situations like T171263: Wikidata Dispatcher and Job Queue is overflowed.

To allow us to optimize for both large and small client wikis, we should be able to run dispatchChanges cron jobs with different settings for different groups of wikis. To achieve this, we could add a chd_group column to wb_changes_dispatch, and add an option --group to dispatchChanges, which filters by the value in that new DB field.
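A minimal sketch of the proposed filtering, using an in-memory SQLite stand-in for a deliberately simplified wb_changes_dispatch (only the column chd_group comes from this proposal; the cut-down schema and the sample rows are assumptions for illustration):

```python
import sqlite3

# Simplified stand-in for wb_changes_dispatch, with the proposed
# chd_group column added. Real schema has more columns (chd_db,
# chd_seen, chd_touched, chd_lock, ...); they are omitted here.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE wb_changes_dispatch (
        chd_site  TEXT PRIMARY KEY,
        chd_group TEXT NOT NULL DEFAULT 'default'
    )
""")
# Sample rows: group assignments are invented for this sketch.
conn.executemany(
    "INSERT INTO wb_changes_dispatch (chd_site, chd_group) VALUES (?, ?)",
    [("enwiki", "large"), ("dewiki", "large"), ("nvwiki", "small")],
)

def pending_clients(group):
    """Client wikis a dispatcher run restricted to --group would consider."""
    rows = conn.execute(
        "SELECT chd_site FROM wb_changes_dispatch"
        " WHERE chd_group = ? ORDER BY chd_site",
        (group,),
    )
    return [site for (site,) in rows]

print(pending_clients("large"))  # ['dewiki', 'enwiki']
```

With this in place, one cron job per group could run dispatchChanges with settings tuned to that group, instead of one compromise configuration for all client wikis.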


