https://bugzilla.wikimedia.org/show_bug.cgi?id=67117

Aaron Schulz <[email protected]> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |[email protected]

--- Comment #3 from Aaron Schulz <[email protected]> ---
* The delay could in theory be based on the current max slave lag...but a
simple approach would be to just pick a value. 10 seconds would be very safe
under non-broken circumstances.
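A minimal sketch of that trade-off, in Python. Everything here is illustrative: `pick_delay` and its parameters are not actual MediaWiki code, just one way to combine a fixed floor with the lag-based idea.

```python
FIXED_DELAY = 10  # seconds; "very safe under non-broken circumstances"

def pick_delay(max_slave_lag_seconds, floor=FIXED_DELAY, cap=60):
    """Wait at least `floor` seconds, scale up with current max slave lag,
    but cap the delay so a badly lagged slave can't stall notifications.
    (Hypothetical helper; the cap value is an assumption.)"""
    return min(max(floor, max_slave_lag_seconds + 1), cap)
```

With no lag this just returns the fixed value, which is why simply picking a constant is the easy option.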
* It's not a huge deal, but it would be nice to use the API (which also works
with things like OAuth if we wanted hubs with more access). Brad might have an
opinion on special page vs API. I don't feel strongly, but we should get it
right given the Link header and cache interaction.
* You could still use the recent changes table to reduce the number of jobs.
There would be at most one de-duplicated job per hub, which would grab all
title changes since the last run and send them to the hub. When the job
succeeds (no HTTP errors posting the changed URIs), it could bump the stored
timestamp. That timestamp could live in a simple DB table. The job could be
delayed to give it time to cover a larger time range. The range could be "last
time" to present (or a smaller range to limit the number of items, especially
for the new hub case). Maybe 5 minutes. It could hopefully batch many titles
into one or a few HTTP requests (or use pipelining, or at least some
curl_multi).
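The per-hub job described above could look roughly like this Python sketch. All the names (`run_hub_job`, the callbacks, `BATCH_SIZE`) are hypothetical stand-ins for the recentchanges query, the timestamp table, and the HTTP client; the point is only the control flow: fetch changes since the stored time, POST in batches, and bump the timestamp only on full success.

```python
BATCH_SIZE = 50  # illustrative cap on titles per HTTP POST

def run_hub_job(hub, get_last_time, get_changed_titles,
                post_batch, set_last_time, now):
    """One de-duplicated job per hub: collect title changes since the
    stored timestamp, send them in a few batched requests, and bump the
    timestamp only if every POST succeeded (no HTTP errors).
    All callbacks are hypothetical; e.g. get_changed_titles would query
    the recentchanges table and post_batch would do the HTTP POST."""
    since = get_last_time(hub)
    titles = get_changed_titles(since, now)
    ok = True
    for i in range(0, len(titles), BATCH_SIZE):
        # Stop posting after the first failure; the unsent range is
        # retried next run because the timestamp is not bumped.
        ok = ok and post_batch(hub, titles[i:i + BATCH_SIZE])
    if ok:
        set_last_time(hub, now)  # next job covers only (now, ...)
    return ok
```

Because a failed run leaves the timestamp untouched, the next job naturally re-covers the same range, which is what makes the simple DB table sufficient as state.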

-- 
You are receiving this mail because:
You are on the CC list for the bug.
_______________________________________________
Wikibugs-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikibugs-l
