https://bugzilla.wikimedia.org/show_bug.cgi?id=41268

--- Comment #5 from Gregor Hagedorn <[email protected]> 2012-10-22 21:02:42 UTC ---
(In reply to comment #4)
> is not added up or multiplied by higher depths, it is exponentiated. If a
> query has 10 results and recursively embeds its own page for rendering, you
> get 100 results at level 2, 1000 at level three, and so on. And the query
> could easily have 100 results instead of 10 to start with. This cannot be
> limited at the storage level because the query is only answered once (with
> 10 results); the rest is rendering and, more specifically, recursive
> template transclusion in MW. The problem is not (just) the 10000 result
> rows, but the 1000 template inclusions that need to be processed to get
> them.

I agree, but I believe these are separate issues. Recursive template
transclusion is independent of ask queries and exists in non-SMW installations
as well. MediaWiki carries a cost-based limitation mechanism that prevents this
from creating a DoS situation. Would it perhaps be possible to hook into this
mechanism, feeding it appropriate cost estimates? Is this perhaps the missing
piece?

With respect to the queries alone, I think we agree that it is possible to
maintain a cumulative result count across a recursion. I personally believe
full prediction is not necessary; it is acceptable to limit each query to a
given result count and only stop once the template/ask recursion exceeds the
cumulative maximum. That is, with single = 500 and total = 1000, it would be
possible to generate at most 1499 results (if the cumulative count stood at
999 when the last query was started).

-- 
Configure bugmail: https://bugzilla.wikimedia.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
You are on the CC list for the bug.

_______________________________________________
Wikibugs-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikibugs-l
