I've dug up graphs for these APIs:

- globalusage:
https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1397062971.274&from=-14days&target=MediaWiki.API.globalusage.tp50

The effect of the caching deployed on the 24th (
https://gerrit.wikimedia.org/r/#/c/127438/) is striking on this one. The
spike caused by last night's launch to the nl & fr Wikipedias looks
reasonable and subsided very quickly.

- imageusage:
https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1397062971.274&from=-14days&target=MediaWiki.API.imageusage.tp50

Same story as globalusage.

- userinfo:
https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1397062971.274&from=-7days&target=MediaWiki.API.userinfo.tp50

Spikier, but quite stable. My understanding, though, is that Media Viewer
is far from the only consumer of this API, and I'm not sure how we could
separate Media Viewer's share from the rest of the traffic on this one.

- filerepoinfo:
https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1397062971.274&from=-14days&target=MediaWiki.API.filerepoinfo.tp50

This one is the odd one out: it's noticeably growing, although the scale
shows it's called far less often than the others. The effect of the
caching deployed on the 24th is counter-intuitive: invocations are more
frequent and spikier afterwards. It might be worth double-checking that
caching was set up correctly for this one.
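One cheap way to double-check would be to fetch the filerepoinfo URL and look at the Cache-Control header it comes back with. Here's a rough sketch; maxage/smaxage are real MediaWiki API parameters for setting cache headers, but the specific values and query parameters below are my assumptions, not necessarily what Media Viewer sends:

```python
from urllib.parse import urlencode

def filerepoinfo_url(wiki="https://nl.wikipedia.org", smaxage=3600, maxage=3600):
    """Build a filerepoinfo query URL with explicit cache-control hints.
    maxage/smaxage are real API parameters; the TTL values are guesses."""
    params = {
        "action": "query",
        "meta": "filerepoinfo",
        "format": "json",
        "smaxage": smaxage,  # shared (Varnish) cache TTL
        "maxage": maxage,    # browser cache TTL
    }
    return f"{wiki}/w/api.php?{urlencode(params)}"

def looks_cacheable(cache_control):
    """Heuristic: does a Cache-Control header permit shared caching?"""
    directives = [d.strip().lower() for d in cache_control.split(",")]
    if any(d in ("private", "no-cache", "no-store") for d in directives):
        return False
    return any(d.startswith(("s-maxage=", "max-age=")) and not d.endswith("=0")
               for d in directives)

# Inspect the live header with something like:
#   curl -sI "<url>" | grep -i cache-control
print(filerepoinfo_url())
print(looks_cacheable("private, must-revalidate"))  # False
print(looks_cacheable("s-maxage=3600, public"))     # True
```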

- imageinfo:
https://graphite.wikimedia.org/render/?width=586&height=308&_salt=1397062971.274&from=-14days&target=MediaWiki.API.imageinfo.tp50

This is the one we can't cache at the moment. It looks quite stable
through the nl/fr launch, though. We might have to wait a few days to be
sure, but so far there's no noticeable increase.

Are these the right graphs to look at to make sure these APIs aren't going
nuts and won't take down the servers when we release to bigger wikis?
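For what it's worth, the Graphite render endpoint can overlay several percentile series on one graph by repeating the target parameter, which might make eyeballing these easier. A quick sketch for building such URLs; the tp90 suffix is an assumption on my part (I've only verified tp50 exists):

```python
from urllib.parse import urlencode

GRAPHITE = "https://graphite.wikimedia.org/render/"

def render_url(metric, percentiles=("tp50", "tp90"), days=14,
               width=586, height=308):
    """Build a Graphite render URL plotting several percentile series
    of one MediaWiki.API metric on a single graph."""
    params = [("width", width), ("height", height), ("from", f"-{days}days")]
    # One 'target' parameter per series; Graphite overlays them all.
    params += [("target", f"MediaWiki.API.{metric}.{p}") for p in percentiles]
    return GRAPHITE + "?" + urlencode(params)

print(render_url("globalusage"))
```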

On a related note, is this the right dashboard for API servers?
http://ganglia.wikimedia.org/latest/?r=month&cs=&ce=&m=cpu_report&s=by+name&c=API+application+servers+eqiad&h=&host_regex=&max_graphs=0&tab=m&vn=&hide-hf=false&sh=1&z=small&hc=4

I'm trying to assess the risk of launching to bigger wikis:
https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/523 and
at this point API requests don't look worrying. It would be great if
someone from ops could confirm that I'm looking at the right things, and
point out any worrying signs I may have missed.

I'll also look at image scaler stats separately, but I wanted to bring
this up in this discussion, since API request caching (or the lack
thereof) was a concern for a lot of people. I'm looking for any data that
can tell us whether we're doing enough to prepare for the bigger Media
Viewer deployments.

On Tue, Apr 29, 2014 at 12:51 AM, Max Semenik <[email protected]> wrote:

> On Mon, Apr 28, 2014 at 3:01 PM, Gergo Tisza <[email protected]> wrote:
>
>> Agreed. Of the requests we make, filerepoinfo and users essentially never
>> change, imageusage and globalusage we can pretend to be static since we
>> don't care about small inaccuracies; the problematic one is imageinfo.
>>
>
> You could just load the needed parts of filerepoinfo via ResourceLoader.
>
> _______________________________________________
> Multimedia mailing list
> [email protected]
> https://lists.wikimedia.org/mailman/listinfo/multimedia
>
>