On 07/25/10 10:30, Peter Korsgaard wrote:
"beebee" == beebee <[email protected]> writes:

Hi,

  >>  for i in `find /var/www -name '*.html' -o -name '*.css' -o -name '*.js'`;
  >>  do
  >>  gzip -c -9 < "$i" > "$i.gz"
  >>  done
  >>
  >>  Would work for most people.
  >>
  >>  -- Bye, Peter Korsgaard

  beebee>  correct me if I'm wrong but I thought the BB goal was light and fast
  beebee>  rather than "simple". Missing out some of the less used options in std
  beebee>  tools is presumably to lighten the footprint, not to make things
  beebee>  simple.

Simple is imho also a very large part of it.

  beebee>  while it may take a few lines more code to check and maintain a cache
  beebee>  the runtime overhead would be minimal compared to the overhead of
  beebee>  gzipping files each and every time they are requested.

But the point of my implementation is NOT to gzip for every request,
only once manually when a file is added/edited.


Yes, I realised that; I was continuing from my earlier observation that it may make more sense to adopt the lighttpd approach here and gzip to a cache once.

This would seem to require only a trivial amount of extra code and would avoid the need for "find".

What is the slowdown going to be from running that find command for every single request if the directory has a lot of files? That may well be the case and cannot be precluded.


I'm not knocking your idea; I already said I think it's a good addition, so it's worth getting it implemented in the best way.

It just feels a bit unfinished to expect the user to maintain the .gz files manually on each update of the source file. He also has to be made aware of how this feature works to ensure he does that.
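One middle ground, sketched below, is a small script run from cron or a post-edit hook rather than new code in busybox itself: it re-gzips only files whose .gz copy is missing or older than the source, so the user never has to remember to do it by hand. This is only an illustration (the web root path, the file patterns, and the demo setup lines are assumptions, and the `-nt` test, while supported by busybox ash, is not strictly POSIX):

```shell
#!/bin/sh
# Sketch: refresh pre-gzipped copies of static files under a web root.
# WEBROOT defaults to a demo directory here so the sketch runs anywhere;
# point it at /var/www (or similar) for real use.
WEBROOT=${WEBROOT:-/tmp/www-demo}

# demo setup so the script is self-contained; drop these two lines in real use
mkdir -p "$WEBROOT"
echo '<html></html>' > "$WEBROOT/index.html"

find "$WEBROOT" -name '*.html' -o -name '*.css' -o -name '*.js' |
while read -r f; do
    # regenerate the .gz only if it is missing or older than the source
    if [ ! -e "$f.gz" ] || [ "$f" -nt "$f.gz" ]; then
        gzip -c -9 "$f" > "$f.gz"
    fi
done
```

Because unchanged files are skipped, running this frequently costs little more than the directory scan itself.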

regards.

_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox
