Hi all, one of my scripts needs to check categories in many languages within a single script call. Right now it's disabled, because it drove runtime up massively.
So, in essence, it goes through a list of languages, often >100. For each one it opens a DB connection on Labs to the respective Wikipedia, runs some query, and closes the connection again. This appears to take a lot of time, and also seems to block other instances of the script running in parallel. I was thinking of trying pconnect (persistent DB connections), but I don't know the scope of this, and I don't want to keep DB connections to essentially all Wikipedias permanently open... For a moment I considered randomizing the order of the languages, in case the "blocking" is some form of collision on the same DBs, but that sounds far-fetched. Anyone have ideas about how to make this faster/more scalable? Magnus
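One pattern that might help (a sketch, not a tested Labs solution): since many of the per-wiki replica databases live on shared DB servers, the script could open one connection per *host* rather than one per wiki, and switch databases with `USE` between queries. The host naming (`<wiki>.labsdb`), the `categorylinks` query, and the use of PyMySQL here are all assumptions for illustration; the grouping logic is the actual point.

```python
def host_for(wiki):
    # Hypothetical host mapping: on Labs each wiki has a replica host
    # alias, and several wikis typically resolve to the same server.
    return f"{wiki}.labsdb"

def group_by_host(wikis, resolve=host_for):
    """Group wiki DB names by the host they live on, so that each
    host gets exactly one connection instead of one per wiki."""
    groups = {}
    for wiki in wikis:
        groups.setdefault(resolve(wiki), []).append(wiki)
    return groups

def count_category(wikis, category):
    """Run one query per wiki, but reuse a single connection per host."""
    import pymysql  # assumed driver; imported here so the module
                    # loads even where pymysql is not installed
    results = {}
    for host, dbs in group_by_host(wikis).items():
        conn = pymysql.connect(host=host,
                               read_default_file="~/replica.my.cnf")
        try:
            with conn.cursor() as cur:
                for db in dbs:
                    cur.execute(f"USE {db}_p")  # switch wiki on same conn
                    cur.execute(
                        "SELECT COUNT(*) FROM categorylinks "
                        "WHERE cl_to = %s", (category,))
                    results[db] = cur.fetchone()[0]
        finally:
            conn.close()
    return results
```

With >100 wikis collapsing onto a handful of servers, this cuts the connection setup/teardown count by an order of magnitude without keeping anything permanently open, and it sidesteps the pconnect scope question entirely.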
_______________________________________________
Labs-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/labs-l
