This is essentially the recommendation of Steve Souders in his
O'Reilly book "High Performance Web Sites", page 34. There are a few
browsers that mis-report whether they can handle gzip; this is one way
of handling some of them.
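For reference, the whitelist check can be exercised on its own. This is
just a sketch (nothing web.py-specific in it); note I've tightened the
character classes to [5-9] and [6-9], since [5,6,7,8,9] would also match
a literal comma:

```python
import re

# Whitelist of user agents assumed safe for gzip (sketch only)
GZIP_PAT = re.compile(r'^Mozilla/[5-9]|.*MSIE [6-9].*')

def ok_to_compress(user_agent):
    """True if the user agent matches the gzip whitelist."""
    return bool(user_agent and GZIP_PAT.match(user_agent))

print(ok_to_compress('Mozilla/5.0 (X11; Linux x86_64)'))     # True
print(ok_to_compress('Mozilla/4.0 (compatible; MSIE 6.0)'))  # True
print(ok_to_compress('Mozilla/4.08 [en] (Win98)'))           # False
```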
I didn't know about Paste, will have a look, thanks.
John
On 31 Mar, 18:40, "Gary Bernhardt" <[EMAIL PROTECTED]> wrote:
> On 3/31/08, bloofa <[EMAIL PROTECTED]> wrote:
>
>
>
> > Hi there,
>
> > I've seen a few queries in the list archives about this. This is what
> > I use:
>
> Paste also has middleware for this, if you'd rather not build your own.
>
> What prompted you to include the user-agent regex? Did you run into
> problems with some browsers?
>
>
>
> > import re, gzip, cStringIO
> > import web
>
> > GZIP_PAT = re.compile('^Mozilla/[5-9]|.*MSIE [6-9].*')
>
> > def gzip_response(resp):
> >     accepts = web.ctx.env.get('HTTP_ACCEPT_ENCODING', None)
> >     if accepts and accepts.find('gzip') > -1:
> >         browser = web.ctx.env.get('HTTP_USER_AGENT', None)
> >         if browser and GZIP_PAT.match(browser):
> >             # ok to compress for this browser
> >             web.webapi.header('Content-Encoding', 'gzip')
> >             zbuf = cStringIO.StringIO()
> >             zfile = gzip.GzipFile(mode='wb', fileobj=zbuf, compresslevel=9)
> >             zfile.write(resp)
> >             zfile.close()
> >             data = zbuf.getvalue()
> >             web.webapi.header('Content-Length', str(len(data)))
> >             # don't vary by user-agent, defeats caching
> >             web.webapi.header('Vary', 'Accept-Encoding', unique=True)
> >             return data
> >     return resp
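
[Aside: the compression step itself round-trips as you'd expect. A
standalone sketch, with io.BytesIO standing in for cStringIO so it also
runs on newer Pythons; gzip_bytes is just a name I made up for it:]

```python
import gzip
import io

def gzip_bytes(data, level=9):
    """Compress bytes the same way as above: GzipFile writing
    into an in-memory buffer, then read the buffer back out."""
    buf = io.BytesIO()
    zfile = gzip.GzipFile(mode='wb', fileobj=buf, compresslevel=level)
    zfile.write(data)
    zfile.close()
    return buf.getvalue()

body = b'<html>' + b'x' * 1000 + b'</html>'
zipped = gzip_bytes(body)
assert gzip.decompress(zipped) == body  # round-trips intact
print(len(body), '->', len(zipped))     # repetitive HTML shrinks a lot
```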
>
> > class foo:
> >     def GET(self, whatever):
> >         ...
> >         print gzip_response(render.foo(whatever))
>
> --
> Gary
> http://blog.extracheese.org
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"web.py" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/webpy?hl=en
-~----------~----~----~----~------~----~------~--~---