On Dec 12, 7:27 am, Kevin Dangoor <[EMAIL PROTECTED]> wrote:
>That's true. From the tests I've been reading lately, making fewer
> requests to the server has a bigger impact on the client-side
> performance than most anything else.

I've found that to be true as well.  Even with keep-alive, the time cost
of each request seems to matter about as much as compression does.

I realize that I'm moving off-topic, but I think that a CherryPy filter
(now called a 'hook') is the place to do JavaScript compression (i.e.,
removing comments and whitespace and shortening variable names to
single letters) and also (of course) gzip compression.  A single
CherryPy filter could do both of these things, take browser quirks into
account, and cache the results for better performance.  On top of that,
such a filter could (potentially) also read through the HTML and
combine the multiple JavaScript and CSS links and includes into one
JS file request and one CSS file request, again caching the results.
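As a rough illustration (not CherryPy's actual hook API — the function
names here are made up), the minify-and-cache core of such a filter
might look something like this.  The minifier is deliberately naive; a
real one would need a proper tokenizer so it doesn't mangle string or
regex literals:

```python
import gzip
import re

_cache = {}  # maps source text -> (minified, gzipped) results


def minify_js(source):
    """Naive JS minifier: strip comments and collapse whitespace.

    (A real filter would need a real tokenizer to avoid breaking
    string literals and regex literals that contain '//' etc.)
    """
    source = re.sub(r'/\*.*?\*/', '', source, flags=re.DOTALL)  # block comments
    source = re.sub(r'//[^\n]*', '', source)                    # line comments
    source = re.sub(r'\s+', ' ', source)                        # collapse whitespace
    return source.strip()


def compress_js(source):
    """Minify then gzip, caching both so repeat requests are cheap."""
    if source not in _cache:
        minified = minify_js(source)
        _cache[source] = (minified, gzip.compress(minified.encode('utf-8')))
    return _cache[source]
```

The filter would then pick the gzipped or plain minified body based on
the request's Accept-Encoding header.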

Has this already been done?  Does anyone foresee more than the usual
trouble with rewriting the HTML output as I suggested in the last part
there?  This seems like something TurboGears should include, since it's
useful at the framework level.

Saving half a second in load time makes all the difference with today's
web apps!

-ian


--~--~---------~--~----~------------~-------~--~----~
 You received this message because you are subscribed to the Google Groups 
"TurboGears Trunk" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/turbogears-trunk?hl=en
-~----------~----~----~----~------~----~------~--~---
