Roan Kattouw wrote:
> The problem here seems to be that thumbnail generation times vary a
> lot, based on format and size of the original image. It could be 10 ms
> for one image and 10 s for another, who knows.
>
>   
Yeah, again: if we only issue the big resize operation on initial upload, 
with a memory-friendly in-place library like vips, I think we will be 
okay. Since the user just waited 10-15 minutes to upload their huge 
image, waiting an additional 10-30 s at that point for the thumbnail and 
the "instant gratification" of seeing their image on the upload page is 
not such a big deal. Then in-page derivatives could predictably resize 
that ~1024x768 image in realtime, again giving instant gratification on 
page preview or page save.

Operationally this could go out to a thumbnail server or be done on the 
Apaches. If these are small operations, it may be easier to keep the 
existing infrastructure than to intelligently handle the edge cases 
outlined (many resize requests at once, placeholders, image proxy / 
daemon setup).

> AFAICT this isn't about optimization, it's about not bogging down the
> Apache that has the misfortune of getting the first request to thumb a
> huge image (but having a dedicated server for that instead), and about
> not letting the associated user wait for ages. Even worse, requests
> that thumb very large images could hit the 30s execution limit and
> fail, which means those thumbs will never be generated but every user
> requesting it will have a request last for 30s and time out.
>
>   

Again, this may be related to ImageMagick's unpredictable memory usage 
when resizing large images, as opposed to a fast, memory-confined resize 
engine, no?

_______________________________________________
Wikitech-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
