"Tim Starling" changed the status of MediaWiki.r109469 to "fixme" and commented 
it.
URL: http://www.mediawiki.org/wiki/Special:Code/MediaWiki/109469#c30330

Old Status: new
New Status: fixme

Commit summary for MediaWiki.r109469:

In FileBackend/FileOp:
* Added a sane default max file size to FileBackend. Operation batches need to 
check this before trying anything.
* Temporarily adjust the PHP execution time limit in attemptBatch() to reduce 
the chance of dying in the middle of it. Also added a maximum batch size limit.
* Added some code comments.
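The "temporarily adjust the execution time limit, then restore it" pattern from the commit can be sketched with a Unix signal.alarm() analog in Python. This is a hypothetical illustration, not the actual FileOp code (which uses PHP's set_time_limit()); the class name is invented:

```python
import signal

class ScopedTimeLimit:
    """Temporarily replace an alarm-based execution time limit,
    restoring the previous limit when the scope exits -- a rough
    Unix analog of saving and restoring PHP's set_time_limit().
    (Hypothetical sketch; not MediaWiki code.)"""

    def __init__(self, seconds):
        self.seconds = seconds

    def __enter__(self):
        # signal.alarm() returns the whole seconds remaining on any
        # previously scheduled alarm; keep that for the restore.
        self.old = signal.alarm(self.seconds)
        return self

    def __exit__(self, *exc):
        # Restore whatever limit was in force before we entered.
        signal.alarm(self.old)

if __name__ == "__main__":
    signal.alarm(50)              # pre-existing 50 s limit
    with ScopedTimeLimit(300):    # raise it for a risky batch
        pass                      # ... batch work would go here ...
    print(signal.alarm(0))        # seconds left after the restore
```

Note that the restore is based on signal.alarm()'s whole-second return value, which is exactly the kind of coarse granularity Tim's comment below is about.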

Tim Starling's comment:

This seems like a very large can of worms that you're opening here. 

You're calling set_time_limit() for every file operation. Suppose some caller 
is in an infinite loop invoking a file operation, and the operation itself is 
quite fast -- taking, say, 1% of each loop iteration. Then the chance that 
time() has a different value in __construct() and __destruct() is only 1%, so 
99 times out of 100, __destruct() just restores the timer to its initial 
value. The timer interval will therefore decay slowly, over a period of about 
100 times the configured interval, before it finally reaches 1 second. As long 
as each loop iteration keeps taking less than 1 second, the loop will just 
continue forever. 
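The decay described above can be checked with a toy simulation. The numbers and function below are hypothetical illustrations of the argument, not the actual FileOp logic:

```python
def simulate_decay(initial_limit, iters, tick_every=100):
    """Model the time-limit decay: each file op saves the current
    limit and restores it minus the whole seconds elapsed.  With
    one-second time() granularity, an op taking 1% of a loop
    iteration only straddles a second boundary about once per
    `tick_every` calls, so the limit loses one second roughly every
    `tick_every` iterations.  (Toy model; numbers are illustrative.)"""
    limit = initial_limit
    for i in range(iters):
        # 99 times out of 100, elapsed whole seconds == 0 and the
        # limit is restored unchanged; once per tick_every, it is
        # restored one second shorter.
        elapsed = 1 if (i + 1) % tick_every == 0 else 0
        limit = max(limit - elapsed, 1)  # decays toward 1 second
    return limit
```

With a 30-second configured limit and a second boundary crossed once per 100 operations, the restored limit bottoms out at 1 second after about 3,000 iterations, yet no single iteration ever trips the timer.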

Do we really need this?

_______________________________________________
MediaWiki-CodeReview mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/mediawiki-codereview
