It seems Mariano's story has a happy ending. Congratulations. But on
second thought, can anyone explain why, in the very first
caching-download version, "if you quickly reload pages, they fail"?
Caching the download can improve speed, with the side effect of
bypassing the privilege check, but no matter what, it should not cause
content to fail to load.

I remember trying @cache(...) once and running into similar problems,
then giving up. :-(  It would be nice to pick it up again if someone
can shed some light. Thanks!

Regards,
iceberg
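One plausible explanation for the quick-reload failures, sketched in plain Python (not web2py; the `cached` helper below is hypothetical): if the cache stores the streamed response itself, i.e. a file handle or generator, the first request consumes it, and every later cache hit returns the already-exhausted stream — empty content rather than the cached file.

```python
_cache = {}

def cached(key, compute):
    # naive cache.ram-style memoization (hypothetical helper):
    # stores and returns compute()'s result, keyed by the URL path
    if key not in _cache:
        _cache[key] = compute()
    return _cache[key]

def make_stream():
    # stand-in for response.stream(open(filename, 'rb'))
    return iter([b"chunk", b"of", b"image"])

# first request: the iterator is stored in the cache, then fully consumed
first = b"".join(cached("/download/sponsor.logo.png", make_stream))
# quick reload: the same exhausted iterator comes back from the cache
second = b"".join(cached("/download/sponsor.logo.png", make_stream))
print(first, second)  # b'chunkofimage' b''
```

If that is what happens, caching the file's bytes (or letting the browser cache via headers, as Mariano ended up doing) avoids the problem, while caching the stream object cannot work.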

On May 5, 11:39 am, Mariano Reingart <[email protected]> wrote:
> ...... after using fast_download (changing headers and using
> stream) it runs really quickly!
>
> (I know, serving through Apache would be even faster, but in this case
> I prefer portability and an easy configuration)
>
> You can see how it's running here:
>
> http://www.pyday.com.ar/rafaela2010/
>
> (look at images at the sidebar)
>
> Thanks so much,
>
> Mariano
> >> On May 4, 9:04 pm, Mariano Reingart <[email protected]> wrote:
> >>> I thought so,
>
> >>> I had to modify mydownload so browsers do client-side caching,
> >>> speeding up the web-page load:
>
> >>> def fast_download():
> >>>     # very basic security:
> >>>     if not request.args(0).startswith("sponsor.logo"):
> >>>         return download()
> >>>     # remove/add headers that prevent/favors caching
> >>>     del response.headers['Cache-Control']
> >>>     del response.headers['Pragma']
> >>>     del response.headers['Expires']
> >>>     filename = os.path.join(request.folder,'uploads',request.args(0))
> >>>     # gmtime (not localtime) matches the +0000 offset in the header
> >>>     response.headers['Last-Modified'] = time.strftime(
> >>>         "%a, %d %b %Y %H:%M:%S +0000",
> >>>         time.gmtime(os.path.getmtime(filename)))
> >>>     return response.stream(open(filename, 'rb'))
>
> >>> TODO: handle If-Modified-Since (returning 304 if not modified), but as
> >>> you said, let the browser do that if so much performance is needed (so
> >>> far, fast_download is working fine for me now :-)
>
> >>> Thanks very much for your help, and please let me know if there is
> >>> anything wrong with this approach,
>
> >>> Best regards,
>
> >>> Mariano
>
> >>> On Tue, May 4, 2010 at 10:23 PM, mdipierro <[email protected]> 
> >>> wrote:
> >>> > caching downloads does not make sense, because the role of
> >>> > download is to check permissions to download a file (if they are set);
> >>> > if you cache it, then you do not check. If you do not need to check,
> >>> > do not use download. Use
>
> >>> > def mydownload():
> >>> >     return response.stream(
> >>> >         open(os.path.join(request.folder, 'uploads',
> >>> >                           request.args(0)), 'rb'))
>
> >>> > or better use the web server to download the uploaded files.
>
> >>> > On May 4, 6:11 pm, Mariano Reingart <[email protected]> wrote:
> >>> >> To cache images, I'm trying to do:
>
> >>> >> @cache(request.env.path_info,60,cache.ram)
> >>> >> def download(): return response.download(request,db)
>
> >>> >> But it seems that it is not working:
> >>> >> http://www.web2py.com.ar/raf10dev/default/index
> >>> >> (see images at the sidebar; if you quickly reload pages, they fail)
>
> >>> >> The book says something about response.render, but nothing about
> >>> >> download...
> >>> >> Anyway, I'm not sure if this is a good use of @cache; is there any
> >>> >> other way?
>
> >>> >> BTW, why Cache-Control: no?...
>
> >>> >> Best regards,
>
> >>> >> Mariano Reingart
> >>> >> http://www.sistemasagiles.com.ar
> >>> >> http://reingart.blogspot.com
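For Mariano's TODO above (answer 304 Not Modified when the browser sends If-Modified-Since), the date comparison can be factored into a small framework-free helper; the controller wiring shown in the comment is an untested assumption about web2py's request/response API, not confirmed by the thread.

```python
import calendar
import email.utils

def not_modified(if_modified_since, file_mtime):
    # True when the browser's cached copy (timestamped by the
    # If-Modified-Since header) is at least as new as the file on disk,
    # so the controller may safely answer 304 with no body.
    parsed = email.utils.parsedate(if_modified_since or "")
    if parsed is None:
        return False  # header missing or unparseable: send the full body
    return int(file_mtime) <= calendar.timegm(parsed)

# Hypothetical use inside fast_download(), before streaming the file:
#     if not_modified(request.env.http_if_modified_since,
#                     os.path.getmtime(filename)):
#         raise HTTP(304)

# 2010-05-05 12:00:00 UTC is epoch 1273060800
print(not_modified("Wed, 05 May 2010 12:00:00 GMT", 1273060800))  # True
print(not_modified("Wed, 05 May 2010 12:00:00 GMT", 1273060801))  # False
```

Parsing the header and comparing epoch seconds (rather than comparing the raw header string for equality) also accepts equivalent timestamps that browsers may reformat.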
