| Anomie added a comment. |
In T189409#4041526, @Reedy wrote: And as Max said, there's no difference between the app servers that wikidata/wikivoyage run on in comparison to Wikipedia, for example. There is a difference between database servers, though.
On the other hand, I believe it's the same database servers being hit for Wikidata data whether it's Wikipedia or Wikivoyage fetching the data.
In T189409, @RolandUnger wrote:There are 170 expensive Lua calls:
- 50 mw.title.new('Media:' .. image).exists calls, about 10 ms each
- 120 mw.wikibase.getEntity( id ) calls, about 50 ms each
I don't know how you determined those timings, but they don't seem to represent actual CPU usage.
Enabling Scribunto's sampling profiler on Wikimedia wikis is annoyingly complicated: you have to hack a "forceprofile" parameter into the submission of a preview with the 2010 wikitext editor (e.g. by appending "&forceprofile=1" to the <form> tag's action attribute), or use the action API. But for https://de.wikivoyage.org/wiki/Halle_%28Saale%29 I see (on one preview):
Scribunto_LuaSandboxCallback::addStatementUsage      4280 ms  52.2%
Scribunto_LuaSandboxCallback::callParserFunction     1680 ms  20.5%
Scribunto_LuaSandboxCallback::addSiteLinksUsage       780 ms   9.5%
Scribunto_LuaSandboxCallback::getEntity               700 ms   8.5%
Scribunto_LuaSandboxCallback::gsub                    280 ms   3.4%
recursiveClone <mwInit.lua:41>                        100 ms   1.2%
Scribunto_LuaSandboxCallback::fullUrl                  60 ms   0.7%
Scribunto_LuaSandboxCallback::getEntityStatements      60 ms   0.7%
Scribunto_LuaSandboxCallback::getExpandedArgument      40 ms   0.5%
Scribunto_LuaSandboxCallback::match                    40 ms   0.5%
[others]                                              180 ms   2.2%
Note that the sampling granularity is 20 ms, so those 40 ms and 60 ms measurements are probably noise. If you want a clearer picture, you'd probably want to collect data from multiple previews of the page and compare.
That points to Wikibase code as using most of the time. addStatementUsage, as far as I can tell, is called whenever you access anything in the claims table of an entity. Are you doing a lot of that?
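As a sketch (item/property IDs here are arbitrary examples, and whether the alternative actually reduces the addStatementUsage overhead would need profiling to confirm), this is the kind of access pattern that triggers it:

```lua
-- In a Scribunto module: every access into entity.claims is tracked by
-- Wikibase (registered as a statement usage), so iterating over many
-- properties of a full entity adds up quickly.
local entity = mw.wikibase.getEntity( 'Q64' )  -- marked "expensive"
if entity and entity.claims then
    for propertyId, statements in pairs( entity.claims ) do
        -- each access into the claims table registers a usage
    end
end

-- If only a few specific properties are needed, fetching just those
-- statements avoids loading and walking the whole entity:
local images = mw.wikibase.getBestStatements( 'Q64', 'P18' )
```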
Another 1.6 seconds are used inside frame:callParserFunction() (or possibly frame:extensionTag(), which uses that internally). Are you doing a lot of that?
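For reference, both of the following calls from a module end up in that callParserFunction callback (a minimal sketch; the tag and parser function chosen here are just examples):

```lua
local p = {}

function p.demo( frame )
    -- frame:extensionTag() is a wrapper that goes through
    -- Scribunto_LuaSandboxCallback::callParserFunction internally:
    local ref = frame:extensionTag( 'ref', 'Some footnote', { name = 'a' } )
    -- ...as does a direct parser function call:
    local lang = frame:callParserFunction( '#language', 'de' )
    return ref .. lang
end

return p
```

So a page that emits many such tags per #invoke accumulates this cost even if the module never calls frame:callParserFunction() directly.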
I note that getEntity() only clocks in at 700 ms, not the 6000 ms implied by your "120 mw.wikibase.getEntity( id ) calls, about 50 ms each". That's not too surprising: functions are usually marked "expensive" because they do database accesses that could be problematic if too many happen during the processing of a page, but waiting on a database access doesn't use CPU time. On the other hand, the total wall-clock time for the preview is only about 1 second longer than the total CPU time used, so there's still not as much waiting going on there as your numbers imply.
In the case of many template/module calls, parallel computing could be a means of reducing the waiting time (though of course not the computing time itself).
That's not terribly likely, since neither PHP nor Lua supports that as far as I know.
It could be helpful to store Lua data for the whole parsing process, so that it is not deleted after a module call.
You can already do that with mw.loadData(), for static data or data that you can compute without reference to the parameters of any particular #invoke. There's intentionally no way for one #invoke to save data to be used by a later #invoke, however (see T67258).
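A minimal sketch of that pattern (the module name Module:Example/data is hypothetical):

```lua
-- Module:Example/data (hypothetical): must return a static table with no
-- functions or metatables, per the mw.loadData() restrictions.
return {
    defaultImageSize = 250,
    regions = { 'Halle', 'Leipzig' },
}

-- In the consuming module, the data module is loaded and evaluated at
-- most once per page parse, and the resulting (read-only) table is
-- shared across every #invoke on the page:
local data = mw.loadData( 'Module:Example/data' )
```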
_______________________________________________ Wikidata-bugs mailing list [email protected] https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
