https://bugzilla.wikimedia.org/show_bug.cgi?id=38136

--- Comment #1 from Markus Krötzsch <[email protected]> 2012-07-06 11:40:35 UTC ---
Thanks for the patch. We should integrate this (I have not done so yet, but I
wanted to give you a quick reply at least).

The original reason for using batches of 100 was that individual pages are
usually so quick to process that a delay after each one seemed unnecessary. We
can change this. However, if you have problems with 100 pages, you might
already have problems with 1 page (often, there are many more short/simple
pages than "slow" pages). Maybe try running your update script with a larger
nice value to keep it from blocking more important processes. This only has an
effect if the problem is not in the database system (which applies the same
priority to all queries). If your problems persist, especially if they are due
to the MySQL part of the processing, then it would be nice to know why exactly
your pages need so much CPU for refreshing. We are currently looking into
storage optimizations and are interested in testing their efficacy on sites
that experience such performance issues.
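For illustration, a minimal sketch of the nice-value suggestion above; the
maintenance script path is an assumption (adjust it for your install), and note
that this only lowers the CPU priority of the PHP process itself, not of the
MySQL queries it issues:

```shell
# Run the refresh script at the lowest CPU scheduling priority
# (hypothetical path -- adjust for your wiki installation):
#
#   nice -n 19 php extensions/SemanticMediaWiki/maintenance/SMW_refreshData.php
#
# Quick demonstration that `nice -n 19` really lowers the child's priority:
nice -n 19 sh -c 'nice'
```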
