new helper module Apache::TestUtil

2001-09-02 Thread Stas Bekman

Here is a new Apache::TestUtil that:

- makes writing tests that mess with files and dirs much easier.
- automatically takes care of cleanup for all files and dirs it creates.
- makes it simpler to write debuggable tests: ok $expected eq $received.

After the patch I've attached a new vhost_alias.t that uses it, so you can
see how it's used and how much shorter it makes the test. If it's all fine,
I'll commit the changes.
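
For illustration only (not part of the patch), a test using these helpers
might look roughly like this; the file and directory names are made up:

  use Apache::Test;
  use Apache::TestUtil;

  plan tests => 2;

  t_mkdir("htdocs/modules/example");                 # removed in the END block
  t_write_file("htdocs/modules/example/index.html", "hello");

  my $received = "hello";    # in a real test this would come from GET_BODY
  ok t_cmp_str("hello", $received, "index.html content");
  ok t_cmp_num(2, 1 + 1, "trivial numeric check");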

--- /dev/null   Sat Apr 14 19:06:21 2001
+++ Apache-Test/lib/Apache/TestUtil.pm  Sun Sep  2 18:45:08 2001
@@ -0,0 +1,109 @@
+package Apache::TestUtil;
+
+use strict;
+use warnings FATAL => 'all';
+use File::Find ();
+use Exporter ();
+
+our $VERSION = '0.01';
+our @ISA = qw(Exporter);
+our @EXPORT = qw(t_cmp_str t_cmp_num t_write_file t_open_file t_mkdir
+ t_del_tree);
+
+our %CLEAN = ();
+
+# t_cmp_str($expect,$received,$comment)
+# returns the result of string comparison of $expect and $received
+# first prints all the arguments for debug.
+##
+sub t_cmp_str{
+    my ($expect, $received, $comment) = @_;
+    print "testing : $comment\n" if defined $comment;
+    print "expected: $expect\n";
+    print "received: $received\n";
+    $expect eq $received;
+}
+
+# t_cmp_num($expect,$received,$comment)
+# returns the result of numerical comparison of $expect and $received
+# first prints all the arguments for debug.
+##
+sub t_cmp_num{
+    my ($expect, $received, $comment) = @_;
+    print "testing : $comment\n" if defined $comment;
+    print "expected: $expect\n";
+    print "received: $received\n";
+    $expect == $received;
+}
+
+# t_write_file($filename,@lines);
+# the file will be deleted at the end of the tests run
+#
+sub t_write_file{
+    my $file = shift;
+    open my $fh, ">$file" or die "can't open $file: $!";
+    print STDERR "writing $file\n";
+    print $fh join '', @_ if @_;
+    close $fh;
+    $CLEAN{files}{$file}++;
+}
+
+# t_open_file($filename);
+# open a file for writing and return the open fh
+# the file will be deleted at the end of the tests run
+
+sub t_open_file{
+    my $file = shift;
+    open my $fh, ">$file" or die "can't open $file: $!";
+    print STDERR "writing $file\n";
+    $CLEAN{files}{$file}++;
+    return $fh;
+}
+
+# t_mkdir($dirname)
+# create a dir
+# the dir will be deleted at the end of the tests run
+
+sub t_mkdir{
+    my $dir = shift;
+
+    mkdir $dir, 0755 unless -d $dir;
+    $CLEAN{dirs}{$dir}++;
+}
+
+# deletes the whole tree(s) or just file(s)
+###
+sub t_del_tree{
+    for my $file (@_) {
+        next unless -e $file;
+        File::Find::finddepth(sub { -d $_ ? rmdir : unlink }, $file);
+    }
+}
+
+
+END{
+
+    # cleanup: first files, then dirs
+    map { unlink $_      } grep { -e $_ && -f _ } keys %{ $CLEAN{files} };
+    map { t_del_tree($_) } grep { -e $_ && -d _ } keys %{ $CLEAN{dirs}  };
+
+}
+
+1;
+__END__
+
+
+=head1 NAME
+
+Apache::TestUtil - Utilities for writing tests
+
+=head1 SYNOPSIS
+
+
+
+=head1 DESCRIPTION
+
+
+
+=cut
+


Here is the new vhost_alias:

use strict;
use warnings FATAL => 'all';

use Apache::Test;
use Apache::TestUtil;
use Apache::TestRequest;
use Apache::TestConfig ();

my $url        = '/index.html';
my $cgi_name   = "test-cgi.sh";
my $cgi_string = "test cgi for";
my $root       = "htdocs/modules/vhost_alias";

my @vh = qw(www.vha-test.com big.server.name.from.heck.org ab.com w-t-f.net);

plan tests => @vh * 2, ['vhost_alias'] && \&have_cgi;

Apache::TestRequest::scheme('http'); #ssl not listening on this vhost

my $config = Apache::TestRequest::test_config();
my $vars   = Apache::TestRequest::vars();
local $vars->{port} = $config->port('mod_vhost_alias');

## test environment setup ##
t_mkdir($root);

foreach (@vh) {
    my @part = split /\./, $_;
    my $d = "$root/";

    ## create VirtualDocumentRoot htdocs/modules/vhost_alias/%2/%1.4/%-2/%2+
    ## %2 ##
    if ($part[1]) {
        $d .= $part[1];
    } else {
        $d .= "_";
    }
    t_mkdir($d);

    $d .= "/";
    ## %1.4 ##
    if (length($part[0]) < 4) {
        $d .= "_";
    } else {
        $d .= substr($part[0], 3, 1);
    }
    t_mkdir($d);

    $d .= "/";
    ## %-2 ##
    if ([EMAIL PROTECTED]) {
        $d .= [EMAIL PROTECTED];
    } else {
        $d .= "_";
    }
    t_mkdir($d);

    $d .= "/";
    ## %2+ ##
    for (my $i = 1; $i < @part; $i++) {
        $d .= $part[$i];
        $d .= "." if $part[$i+1];
    }
    t_mkdir($d);

    ## write index.html for the VirtualDocumentRoot ##
    t_write_file("$d$url", $_);

    ## create directories for VirtualScriptAlias tests ##
    $d = "$root/$_";
    t_mkdir($d);
    $d .= "/";

    ## write cgi ##
    my $cgi_content = <<SCRIPT;
#!/bin/sh
echo Content-type: text/html
echo
echo $cgi_string $_
SCRIPT

    t_write_file("$d$cgi_name", $cgi_content);
    chmod 0755, "$d$cgi_name";

}

## run tests ##
foreach (@vh) {
    ## test VirtualDocumentRoot ##
    ok t_cmp_str($_,
                 GET_BODY($url, Host => $_),
                 VirtualDocumentRoot test
  

Re: forcing cleanup

2001-09-02 Thread Stas Bekman
On Fri, 31 Aug 2001, Doug MacEachern wrote:

 On Fri, 31 Aug 2001, Stas Bekman wrote:

  When you cvs update, make sure to run './t/TEST -clean'. I've noticed that
  one of the subtests in alias.t was failing. It was because extra.conf.in
  has been changed, but the stale autogenerated extra.conf was used.
 
  what about using MD5 checksums, to decide whether to rebuild .in files or
  not, to cause fewer errors and questions?

 just comparing mtimes would be good enough.

this works for me. I had to exit on reconfiguration, since otherwise it'd
fail to continue. Should I reconfigure in a different way, so I won't have
to exit?

This patch forces reconfiguration when:

- httpd is newer than conf/httpd.conf
- conf/*.in files are newer than the autogenerated conf/* files
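
For reference, the -M file test operator used below returns a file's age in
days since it was last modified, so "newer" means a smaller -M value. A
standalone sketch of the check (file names made up):

  # true if the httpd binary was rebuilt after httpd.conf was generated
  my $stale = -e "bin/httpd" && -e "conf/httpd.conf"
              && -M "bin/httpd" < -M "conf/httpd.conf";
  print "need to reconfigure\n" if $stale;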

Index: Apache-Test/lib/Apache/TestConfig.pm
===
RCS file: 
/home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestConfig.pm,v
retrieving revision 1.50
diff -u -r1.50 TestConfig.pm
--- Apache-Test/lib/Apache/TestConfig.pm    2001/08/28 16:02:56     1.50
+++ Apache-Test/lib/Apache/TestConfig.pm    2001/09/02 13:45:23
@@ -774,6 +774,30 @@
     close $out or die "close $conf_file: $!";
 }

+sub need_reconfiguration{
+    my $self = shift;
+    my @reasons = ();
+    my $vars = $self->{vars};
+
+    # if httpd.conf is older than the httpd executable
+    push @reasons,
+        "$vars->{httpd} is newer than $vars->{t_conf_file}"
+        if -e $vars->{httpd} &&
+           -e $vars->{t_conf_file} &&
+           -M $vars->{httpd} < -M $vars->{t_conf_file};
+
+    # if .in files are newer than their derived versions
+    if (my $extra_conf = $self->generate_extra_conf) {
+        for my $file (@$extra_conf) {
+            push @reasons, "$file.in is newer than $file"
+                if -e $file && -M "$file.in" < -M $file;
+        }
+    }
+
+    return @reasons;
+}
+
+
 #shortcuts

 my %include_headers = (GET => 1, HEAD => 2);
Index: Apache-Test/lib/Apache/TestRun.pm
===
RCS file: 
/home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestRun.pm,v
retrieving revision 1.46
diff -u -r1.46 TestRun.pm
--- Apache-Test/lib/Apache/TestRun.pm   2001/08/31 10:22:31 1.46
+++ Apache-Test/lib/Apache/TestRun.pm   2001/09/02 13:45:23
@@ -400,6 +400,13 @@
 $self->opt_clean(1);
 }

+    if (my @reasons = $self->{test_config}->need_reconfiguration) {
+        warning "forcing current re-configuration:";
+        warning "\t- $_." for @reasons;
+        $self->opt_clean(1);
+        exit;
+    }
+
 $self->configure;

 if ($self->{opts}->{configure}) {


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: cvs commit: httpd-2.0/modules/filters mod_include.c mod_include.h

2001-09-02 Thread Cliff Woolley
On 2 Sep 2001 [EMAIL PROTECTED] wrote:

 jerenkrantz    01/09/01 18:09:02

   Modified:    .        CHANGES
                modules/filters mod_include.c mod_include.h
   Log:
   Make mod_include check for BYTE_COUNT_THRESHOLD on a per-bucket basis
   rather than on a per-character basis.  A significant amount of time
   was spent checking the limit.  A better place to check for the threshold
   is when we read the bucket in, not as we read each character in the bucket.

   If a bucket manages to be 200MB, it is not this code's problem as it
   is a mere filter.

   I ran this with the mod_include stuff in httpd-test and it looks good
   from here.

The httpd-test mod_include tests are probably insufficient to test this
code.  They have lots of recursion and check just about all of the
imaginable flavors of tag types, but none of them have really BIG
content... not even close to big enough to warrant a brigade split.  Maybe
we should add a test that includes biggish data from the middle of a
biggish file?  Somewhere around 32k should be sufficient, I'd guess...
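
A hypothetical way to generate such a fixture with the Apache::TestUtil
helpers from earlier in this digest (file names and sizes are made up, not
an actual test in httpd-test):

  use Apache::TestUtil;

  # ~32k of included body, so mod_include really has to split the brigade
  my $filler = "x" x 32768;
  t_write_file("htdocs/modules/include/big.shtml",
               "before the include\n",
               qq{<!--#include virtual="big-data.html" -->\n},
               "after the include\n");
  t_write_file("htdocs/modules/include/big-data.html", $filler);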

--Cliff


--
   Cliff Woolley
   [EMAIL PROTECTED]
   Charlottesville, VA





Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Justin Erenkrantz

On Sat, Sep 01, 2001 at 06:19:32PM -0700, Ryan Bloom wrote:
 On Saturday 01 September 2001 14:56, Justin Erenkrantz wrote:
 
 I have a few problems with this.  1)  We have consistently declined to
 accept the mod_gzip from Remote Communications.

mod_gzip implements the gzip algorithm.  It also happens to be a 300k 
source file (~11,000 lines).  mod_gz is a 14k file and is 446 lines 
and relies on zlib.

Knowing the people on this list, I will bet that the size of the file
went a long way toward us not accepting Remote Communications's version
in the core distribution.  My reason for not accepting mod_gzip would
be that implementing the gzip algorithm is better left to someone
else - like the guy who wrote gzip.  I mean no offense to Remote
Communications as I'm sure their implementation is sound.

I will accept the most concise and correct version - at this time, 
Ian's version is the only one being offered.  (My one complaint
about mod_gz is that it needs to be moved out of Eastern Europe.)

  2)  I keep hearing that
 zlib has more memory leaks than a sieve.  

As Cliff has pointed out, if there are memory leaks in zlib, then
this will sniff them out.  Both Opera and Mozilla rely on zlib for 
handling their gzip Transfer-Encoding.  I am sure that their products
would benefit if the Apache Group can fix their memory leaks.

There hasn't been a release of zlib since 1998.  I'm not concerned 
about this (because if there were memory leaks, *someone* would have
addressed them by now).  I'm also not amused by the suggestion that 
this may be a security hole.  This is a read-only operation that 
we are performing.

   3)  I don't believe that we
 should be adding every possible module to the core distribution.  I
 personally think we should leave the core as minimal as possible, and
 only add more modules if they implement a part of the HTTP spec.

I believe that this does implement a core component of the HTTP spec
- namely Transfer-Encoding.  Most current browsers have gzip support 
built-in (Netscape, Opera, and Mozilla do - I bet IE does, but I 
don't use it) - however, we have never supported this.  I believe it
is time that we do so.  Not having implemented this is a major 
oversight on our part.

 Putting every module into the core is NOT the answer to this problem.
 IMNSHO, Apache should be a minimalistic web server.  If we don't need
 it in the core, it shouldn't be there.  Personally, I would remove
 mod_dav from the server too, because it doesn't implement part of
 RFC2616.

I believe that DAV belongs in our standard module set because it is
treated as an extension to HTTP.  The same goes for our inclusion
of SSL.

I believe that anything that adds value to the server out-of-the-box
belongs in the main repository.  Things like mod_pop3 and mod_mbox
are special modules that no one really cares much about.  We did them
as proofs of concepts - if they help, cool, but they aren't part of
the officially supported httpd-2.0 code - you don't want to maintain
or enhance mod_pop3 and I don't want to maintain or enhance mod_mbox.

My +1 for adding this to the core distribution of modules stands.
I fully believe that adding this functionality to the server is
completely worth it.

Can I either receive two more +1s or a straight out veto?  (AIUI,
a non-veto -1 is treated as a -0.  Please correct me if I am wrong.)
-- justin




[PATCH] lazy evaluation to speed up mod_include

2001-09-02 Thread Brian Pane

(This is a repost.  My original context diffs from August 23 won't
apply cleanly to the current code base (due to changes in some adjacent
code), so I've generated this new patch against the latest version of
mod_include.c in CVS.)

--Brian
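
The idea behind the patch, sketched in Perl for brevity (the names here are
illustrative, not from the patch): store a shared sentinel in the environment
table instead of the computed value, and only do the expensive formatting the
first time a variable is actually looked up.

  use POSIX qw(strftime);

  my $LAZY = \"lazy";    # shared sentinel, analogous to lazy_eval_sentinel

  my %env = map { $_ => $LAZY } qw(DATE_LOCAL DATE_GMT);

  sub env_get {
      my ($name) = @_;
      my $val = $env{$name};
      if (ref $val and $val == $LAZY) {
          # the expensive work happens only when the variable is first used
          $val = $name eq 'DATE_GMT'
               ? strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime)
               : strftime("%A, %d-%b-%Y %H:%M:%S %Z", localtime);
          $env{$name} = $val;    # cache it for later lookups
      }
      return $val;
  }

  print env_get('DATE_GMT'), "\n";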

Index: modules/filters/mod_include.c
===
RCS file: /home/cvspublic/httpd-2.0/modules/filters/mod_include.c,v
retrieving revision 1.138
diff -u -r1.138 mod_include.c
--- modules/filters/mod_include.c    2001/09/02 01:09:02     1.138
+++ modules/filters/mod_include.c    2001/09/02 07:02:28
@@ -96,32 +96,53 @@
 static apr_hash_t *include_hash;
 static APR_OPTIONAL_FN_TYPE(ap_register_include_handler) *ssi_pfn_register;
 
+/*
+ *
+ * XBITHACK.  Sigh...  NB it's configurable per-directory; the compile-time
+ * option only changes the default.
+ */
+
+module include_module;
+enum xbithack {
+    xbithack_off, xbithack_on, xbithack_full
+};
+
+typedef struct {
+    char *default_error_msg;
+    char *default_time_fmt;
+    enum xbithack *xbithack;
+} include_dir_config;
+
+#ifdef XBITHACK
+#define DEFAULT_XBITHACK xbithack_full
+#else
+#define DEFAULT_XBITHACK xbithack_off
+#endif
+
 #define BYTE_COUNT_THRESHOLD AP_MIN_BYTES_TO_WRITE
 
 /*  Environment function 
-- */
 
+/* Sentinel value to store in subprocess_env for items that
+ * shouldn't be evaluated until/unless they're actually used
+ */
+static char lazy_eval_sentinel;
+
 /* XXX: could use ap_table_overlap here */
 static void add_include_vars(request_rec *r, char *timefmt)
 {
-    char *pwname;
     apr_table_t *e = r->subprocess_env;
     char *t;
     apr_time_t date = r->request_time;
 
-    apr_table_setn(e, "DATE_LOCAL", ap_ht_time(r->pool, date, timefmt, 0));
-    apr_table_setn(e, "DATE_GMT", ap_ht_time(r->pool, date, timefmt, 1));
-    apr_table_setn(e, "LAST_MODIFIED",
-                   ap_ht_time(r->pool, r->finfo.mtime, timefmt, 0));
+    apr_table_setn(e, "DATE_LOCAL", &lazy_eval_sentinel);
+    apr_table_setn(e, "DATE_GMT", &lazy_eval_sentinel);
+    apr_table_setn(e, "LAST_MODIFIED", &lazy_eval_sentinel);
     apr_table_setn(e, "DOCUMENT_URI", r->uri);
     if (r->path_info && *r->path_info) {
         apr_table_setn(e, "DOCUMENT_PATH_INFO", r->path_info);
-    }
-    if (apr_get_username(&pwname, r->finfo.user, r->pool) == APR_SUCCESS) {
-        apr_table_setn(e, "USER_NAME", pwname);
-    }
-    else {
-        apr_table_setn(e, "USER_NAME", "<unknown>");
     }
+    apr_table_setn(e, "USER_NAME", &lazy_eval_sentinel);
     if ((t = strrchr(r->filename, '/'))) {
         apr_table_setn(e, "DOCUMENT_NAME", ++t);
     }
@@ -137,6 +158,42 @@
 }
 }
 
+static const char *add_include_vars_lazy(request_rec *r, const char *var)
+{
+    char *val;
+    if (!strcasecmp(var, "DATE_LOCAL")) {
+        include_dir_config *conf =
+            (include_dir_config *)ap_get_module_config(r->per_dir_config,
+                                                       &include_module);
+        val = ap_ht_time(r->pool, r->request_time, conf->default_time_fmt, 0);
+    }
+    else if (!strcasecmp(var, "DATE_GMT")) {
+        include_dir_config *conf =
+            (include_dir_config *)ap_get_module_config(r->per_dir_config,
+                                                       &include_module);
+        val = ap_ht_time(r->pool, r->request_time, conf->default_time_fmt, 1);
+    }
+    else if (!strcasecmp(var, "LAST_MODIFIED")) {
+        include_dir_config *conf =
+            (include_dir_config *)ap_get_module_config(r->per_dir_config,
+                                                       &include_module);
+        val = ap_ht_time(r->pool, r->finfo.mtime, conf->default_time_fmt, 0);
+    }
+    else if (!strcasecmp(var, "USER_NAME")) {
+        if (apr_get_username(&val, r->finfo.user, r->pool) != APR_SUCCESS) {
+            val = "<unknown>";
+        }
+    }
+    else {
+        val = NULL;
+    }
+
+    if (val) {
+        apr_table_setn(r->subprocess_env, var, val);
+    }
+    return val;
+}
+
 /* --- Parser functions 
--- */
 
 /* This function returns either a pointer to the split bucket 
containing the
@@ -716,6 +773,9 @@
                 tmp_store        = *end_of_var_name;
                 *end_of_var_name = '\0';
                 val = apr_table_get(r->subprocess_env, start_of_var_name);
+                if (val == &lazy_eval_sentinel) {
+                    val = add_include_vars_lazy(r, start_of_var_name);
+                }
                 *end_of_var_name = tmp_store;
 
                 if (val) {
@@ -962,12 +1022,15 @@
         if (!strcmp(tag, "var")) {
             const char *val = apr_table_get(r->subprocess_env, tag_val);
 
+            if (val == &lazy_eval_sentinel) {
+                val = add_include_vars_lazy(r, tag_val);
+            }
             if (val) {
                 switch(encode)

Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Eli Marmor

Justin Erenkrantz wrote:

 mod_gzip implements the gzip algorithm.  It also happens to be a 300k
 source file (~11,000 lines).  mod_gz is a 14k file and is 446 lines
 and relies on zlib.
 
 Knowing the people on this list I will bet that the size of the file
 went a long way for us not accepting Remote Communications's version
 in the core distribution.  My cause for not accepting mod_gzip would
 be that implementing the gzip algorithm is better left to someone
 else - like the guy who wrote gzip.  I mean no offense to Remote
 Communications as I'm sure their implementation is sound.

If I recall correctly, this "guy who wrote gzip" (or - to be precise -
one of the two guys who wrote it) is working with Remote Communications.

If it's true, it means that he feels OK with their implementation (maybe
it's similar?). Having one less library to depend on is an advantage and
not a disadvantage, even if it requires mod_gzip to be 300K (I believe
that the 2.0 version will be smaller, thanks to the I/O filtering).

Maybe we should simply ask him; his name is Mark Adler, more details at:
http://www.alumni.caltech.edu/~madler/

Note: I don't know mod_gz but only mod_gzip.
-- 
Eli Marmor
[EMAIL PROTECTED]
CTO, Founder
Netmask (El-Mar) Internet Technologies Ltd.
__
Tel.:   +972-9-766-1020  8 Yad-Harutzim St.
Fax.:   +972-9-766-1314  P.O.B. 7004
Mobile: +972-50-23-7338  Kfar-Saba 44641, Israel



Re: [PATCH] RE: make distclean doesn't

2001-09-02 Thread Greg Stein

On Fri, Aug 31, 2001 at 09:16:15PM -0700, Ryan Bloom wrote:
 On Friday 31 August 2001 19:31, William A. Rowe, Jr. wrote:
  From: Greg Stein [EMAIL PROTECTED]
  Sent: Friday, August 31, 2001 9:30 PM
 
   On Fri, Aug 31, 2001 at 03:02:32PM -0700, Ryan Bloom wrote:
   ...
exports.c shouldn't be cleaned, correct, because it is a part of the
distribution, or at least it should be if it isn't already. 
config.nice is not a part of the distribution however, and should be
removed by make distclean.
  
   -1 on *any* form of clean that tosses config.nice
  
   That holds *my* information about how I repeatedly configure Apache. That
   is a file that I use, and is outside of the scope of the
   config/build/whatever processes. Its entire existence is to retain the
   information. Cleaning it is not right.
 
  What are you talking about?  We are talking about cleaning for packaging to
  _other_ computers, not yours.  That's what rbb is speaking of by
  'distclean', clean enough for redistribution.
 
 Exactly.  The whole point and definition of make distclean, is that it cleans
 things to the point that it could be redistributed to another machine.  If you 
 are just trying to clean the directory, then make clean is what you want.  If
 make clean doesn't remove enough for you, then something else is wrong.

I use distclean on my computer all the time. Along with extraclean. Neither
of those targets should toss config.nice. *That* is what I mean.

To be clear: nothing in our build/config/whatever should remove config.nice


Clean rules are about cleaning out state that might affect a build in some
way. So we toss object files, generated makefiles, the configure script,
whatever. But config.nice doesn't fall into that camp because it is not a
stateful file. It is for the user to rebuild what they had before.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Greg Stein

On Sat, Sep 01, 2001 at 07:50:19PM -0700, Ryan Bloom wrote:
 On Saturday 01 September 2001 18:53, Cliff Woolley wrote:
  On Sat, 1 Sep 2001, Ryan Bloom wrote:
...
   2)  I keep hearing that zlib has more memory leaks than a sieve.
 
  Maybe it does, but that can be dealt with.  Even so, it shouldn't be a
  consideration here IMO, at least not yet.  If it really is leaky (which it
  very well might be but we should prove it rather than just going on what
  we heard), then it's a consideration, but it'd be better for somebody to
  just *fix* zlib than for us to throw out both mod_gz and mod_gzip because
  of zlib's deficiencies (assuming we care, which we probably do).
 
 I disagree that this shouldn't be a consideration.  If we are distributing a module
 that relies on a library that leaks, then we are suggesting people use a leaking
 library.  I would be fine fixing zlib, but if that can't be done, then a memory leak
 in zlib would make me a -0.5 very quickly.

We ship code that uses it. zlib fixes their code (or somebody else fixes
it). Now our module rocks. Chicken and egg... A ton of other projects use
zlib. I see no reason for us to avoid it. If it *does* happen to have leaks,
then (when somebody finds this to be true) they could fix it.

   3)  I don't believe that we should be adding every possible module to
   the core distribution.  I personally think we should leave the core as
   minimal as possible, and only add more modules if they implement a
   part of the HTTP spec.

The gzip content encoding is part of the HTTP spec. 

  My personal opinion is that this one is important enough that it should go
  in.  Most clients support gzip transfer coding, and it's a very real
  solution to the problem of network bandwidth being the limiting factor on
  many heavily-loaded web servers and on thin-piped clients (read: modem

Agreed!

  users).  mod_gz(ip) could provide a significant throughput improvement in
  those cases, where the CPU is twiddling its thumbs while the network pipe
  is saturated.  This fills a gap in Apache that could be a very big deal to
  our users.  (It's not like it's a big or obtrusive module either, but size
  is not the final consideration in what goes in and what doesn't.)
 
 You know what's really funny?  Every time this has been brought up before,
 the Apache core has always said, if you want to have gzip'ed data, then
 gzip it when you create the site.  That way, your computer doesn't have to
 waste cycles while it is trying hard to serve requests.  I personally stand by
 that statement.  If you want to use gzip, then zip your data before putting it 
 on-line.  That doesn't help generated pages, but perl can already do gzip, as
 can PHP.

But it isn't invisible if you do it with Perl, PHP, Python, or CGI. A
person has to explicitly code it.

I'm really looking forward to mod_gz(ip) in Apache 2.0 so that Subversion
can transfer its content in compressed form. All of that comes out of a
database... it can't be precompressed, so that leaves a filter to do the job
as it hits the wire. Doing large checkouts are almost *always* network bound
rather than server/client bound. Compressing those files is a *huge* win.

 -1 (vote, not veto), for putting mod_gz in the core.

Please use -0.9 or something. It gets really confusing to see -1 and never
know what it means. As I've said before: -1 really ought to always mean a
veto. But... that's just me :-)


Needless to say, I'm +1 on the concept. It's a big win for everybody. I
haven't reviewed the code yet, so no commentary there.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Sander Striker

Hi,

From what I have seen on the list I am on the +1 side of
adding mod_gz(ip) to the distribution.  Of course, my vote
doesn't count since I don't have httpd commit.

I find the following arguments convincing (summarized):

 - The gzip content encoding is part of the HTTP spec.
 - Most clients support gzip transfer coding.
 - It is a real solution to the problem of network bandwidth
   being the limiting factor on many heavily-loaded web servers
   and on thin-piped clients.
 - It makes the compression transparent to the admin of the
   site and allows for dynamically generated content (which
   can grow quite large) to be compressed as well.

I haven't seen anything that held on the negative side yet.

Sander



Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Daniel Veillard

On Sat, Sep 01, 2001 at 06:19:32PM -0700, Ryan Bloom wrote:
 zlib has more memory leaks than a sieve.  3)  I don't believe that we
 should be adding every possible module to the core distribution.  I
 personally think we should leave the core as minimal as possible, and
 only add more modules if they implement a part of the HTTP spec.

  though not *required* by the HTTP spec, there is a clear signal
from it that compression should be supported by default. If you
don't believe me, just ask the spec authors !

Daniel

-- 
Daniel Veillard  | Red Hat Network http://redhat.com/products/network/
[EMAIL PROTECTED]  | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/



RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Günter Knauf

Hi,
I was glad when Ian contributed his mod_gz; I tested it on Linux and Win32 and it works
for me.
The problem I see with 3rd party modules is not mainly that they are 'invisible'; I've
found tons of modules, often 3 or more for the same purpose, but many modules were
only written for Unix and fail to compile on other platforms because they often
heavily use system libs which aren't available on the new target; so many modules must
be ported to the target os and the concept of the Apache api is gone. And even if a
module compiles without changes and no porting is needed it's not guaranteed to run.
The best sample is mod_gzip: I use it on Linux and Win32, but on NetWare the module
compiles fine but doesn't work!
I think this will not happen with Ian's module because it uses only Apache apis, so
once the server runs on a platform mod_gz will do too (ok, as far as zlib is ported to
that platform, but that's true for nearly every platform).
I was also in contact with Kevin, but he couldn't help me with the issue on NetWare...

 Ian, I'll chat with Kevin on getting you a copy of the code. Although I
 think he will want to wait until Apache 2.x goes beta. He's the author
 and it's his decision.
please ask Kevin for a copy for me too.

Thanks!

Guenter.

PS: if you take a look at mod_gzip 1.3.xx you will see that more than half of the 300k 
are only debug messages...




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Peter J. Cranstone

Hi All,

I think Sander summed it up nicely.

-   It is part of the spec. Apache should implement the spec.
-   Almost all new browsers support IETF content encoding/transfer
encoding. In testing with MSIE 6.x and Netscape 6.1
compression works fine.
-   The biggest users of mod_gzip are outside the USA. Why? Because
they pay for bandwidth.
-   There are some large institutions (financial markets) that use
mod_gzip to reduce HTML/JavaScript etc.
-   It supports dynamic and static content.
-   You can compress SSL (with some hacks)

A couple of other issues.

-   Netware. With a little help this can be fixed. However the
majority of the net runs either Apache, IIS, iPlanet or Zeus. 
-   Apache 2.x is not yet stable for all platforms.
-   Debug code and size of mod_gzip... We can remove the debug code.
It's stable enough now after 9 months of solid  testing to pull it.

In closing... Here is the biggest reason to include mod_gzip

-- compression (transfer encoding/content encoding) part of the HTTP
spec! --

for those who hate long emails you can stop reading here

soap box

Apache's market share dropped last month. Micro$oft IIS 5.0 is making
headway and IT INCLUDES AN ISAPI GZ FILTER. It happens to be a pig and
it does not support compressed POST transactions (mod_gzip does) and it
has issues compressing Javascript. But bottom line, Microsoft is
supporting the standard and even though the first pass is rough it will
get better. Which means that if people figure out what European users
have been saying for nearly a year that compressing HTML etc really
makes a difference then Apache needs to embrace the light. Mod_gzip was
released with an Apache style license with this thought in mind. The
writing is on the wall, if Micro$oft sees a benefit to adding
compression then it's only a matter of time before everyone is demanding
it be there. My thought is that it would be better for Apache to be
first rather than playing catch up.

On a personal note. Kevin and I have been on this forum long enough to
know what the rules are. We released mod_gzip under an Apache style
license for one reason. So Apache would benefit. Sure Kevin is the
author and has continued to do an incredible job supporting the code,
but now others have joined the mod_gzip forum and have taken up the
challenge. On October 13th 2001 it will have been a year since the code
came out. It has not undergone any changes since March 2001 and is now
considered stable for Apache 1.3.x users. The 2.x version is only
waiting for one thing, which is *beta*. When the server is stable enough
to run for months and users are upgrading to the new version, we will
release mod_gzip for 2.x under exactly the same license as 1.x version.

end

Regards


Peter


-Original Message-
From: Sander Striker [mailto:[EMAIL PROTECTED]] 
Sent: Sunday, September 02, 2001 6:12 AM
To: [EMAIL PROTECTED]
Subject: RE: [PATCH] Add mod_gz to httpd-2.0


Hi,

From what I have seen on the list I am on the +1 side of
adding mod_gz(ip) to the distribution.  Ofcourse, my vote doesn't count
since I don't have httpd commit.

I find the following arguments convincing (summarized):

 - The gzip content encoding is part of the HTTP spec.
 - Most clients support gzip transfer coding.
 - It is a real solution to the problem of network bandwidth
   being the limiting factor on many heavily-loaded web servers
   and on thin-piped clients.
 - It makes the compression transparent to the admin of the
   site and allows for dynamically generated content (which
   can grow quite large) to be compressed aswell.

I haven't seen anything that held on the negative side yet.

Sander




RE: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Günter Knauf

Hi,
 A couple of other issues.

 - Netware. With a little help this can be fixed. However the
I will provide any help I can give; I'm able to compile and run the module, I've 
compiled the debug version and have already sent an output to Kevin; the issue is that 
the work files are always empty... (he assumed that I use SSL but I didn't load any
other modules except status and info).
 majority of the net runs either Apache, IIS, iPlanet or   Zeus.
yes, I run Apache on NetWare.
 - Debug code and size of mod_gzip... We can remove the debug code.
 It's stable enough now after 9 months of solid testing to pull it.
I didn't mention it as negative but only to explain why the code is 300kb;
and it's very useful when the module doesn't run, as on NetWare...

If you want to help me a bit in finding out why it doesn't work with NetWare, feel free
to contact me directly...

Thanks, Guenter.

PS: I have also another issue with mod_gzip together with a counter module: when 
mod_gzip is loaded the counter increments 2 times every access (true on all platforms, 
but maybe the counter isn't clean).




Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread William A. Rowe, Jr.

From: Jerry Baker [EMAIL PROTECTED]
Sent: Saturday, September 01, 2001 10:32 PM


 Ryan Bloom wrote:
  
  You know what's really funny?  Every time this has been brought up before,
  the Apache core has always said, if you want to have gzip'ed data, then
  gzip it when you create the site.  That way, your computer doesn't have to
  waste cycles while it is trying hard to serve requests.  I personally stand by
  that statement.  If you want to use gzip, then zip your data before putting it
  on-line.  That doesn't help generated pages, but perl can already do gzip, as
  can PHP.
 
 Gzip'ing html into files is a hopeless waste of disk space and clutter.
 That means for every file you have to have a gzipped and non-gzipped
 version for browsers that cannot handle it. Then you have to configure
 Apache to check for and serve the proper file to the proper browser. It
 makes Web page maintenance a severe PITA as you have to re-gzip a doc
 every time it is modified and upload both files.

Interesting point for gzip authors in general ... if it won't save a second
network packet - it is _not_ worth it (think favicon.ico, icon.gif (or any
self-compressed format), or littleframeset.html).

Probably always need to set some 'threshold' of 8kb (minimally) that the
webserver absolutely ignores, and some include or exclude list by mime type
to describe the value of further compression.  Even if the file is requested
to be gzip'ped, and it's targeted for the cache, set a flag at the end that
says "hey, I saved -2% on this file, don't let us do _that_ again!  File foo
shouldn't be cached", and then internally add foo to the excludes list for
any gzip filter.
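
A hypothetical sketch of the kind of check being described, in Perl; the
threshold and percentage are illustrative, not anything mod_gz or mod_gzip
actually does:

  my $MIN_SIZE = 8 * 1024;    # below this, compression can't save a packet

  sub worth_compressing {
      my ($orig_len, $gz_len) = @_;
      return 0 if $orig_len < $MIN_SIZE;          # too small to bother
      return 0 if $gz_len >= $orig_len * 0.98;    # saved ~nothing (the "-2%" case)
      return 1;
  }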

Bill




Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Rodent of Unusual Size

Ryan Bloom wrote:
 
 I believe that putting a module into the core says that we
 are willing to support it, and that we believe the quality of
 the module is as high as the rest of the core.

With that I can certainly agree.

 I would like to make that statement as few times as possible.

See, *that* I would like to say as *many* times as possible,
with equal confidence in all cases.

 The more we say it, the more likely we are to be wrong once.

That sounds like letting 'we should' wait upon 'we dare not.' :-)
I do not know about anyone else, but I personally think the
effort has historically been directed toward making Apache
as featureful and *good* as possible, not making it perfect
and keeping it so small that we limit functionality in order
to keep out the possibility of bugs.  Warts go along with this
stuff.  One of the strong facets has been letting people
work on whatever they want to work on, as long as they were
willing to support it, and nearby eyeballs would help watch for
bugs.  Saying you can do that only within a limited set of
rules is, IMHO, contrary to the spirit of the project.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!



Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Rodent of Unusual Size

Ryan Bloom wrote:
 
 If we don't need it in the core, it shouldn't be there.

Since there is no reason to drive 100+ MPH, auto manufacturers
should not make vehicles capable of going that fast.

'Needed in the core' -- what of the current modules are 'needed
by the core?'  Nothing in the core needs mod_speling, or
mod_rewrite, or...

I think this is a cockeyed metric to use.  Un- or insufficiently-
tested modules, maybe, but shutting things out because of 'core
need'?  I do not think that is valid.
-- 
#kenP-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

All right everyone!  Step away from the glowing hamburger!



Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread William A. Rowe, Jr.

From: Ryan Bloom [EMAIL PROTECTED]
Sent: Saturday, September 01, 2001 10:50 PM


 On Saturday 01 September 2001 20:10, Cliff Woolley wrote:
 
  Perhaps modules distributed through official httpd subprojects are more
  visible/more trusted, but we don't really know one way or the other on
  that front yet.

 I can agree with this, but this is something we need to fix.  There are many
 ways to fix it.  Fixing modules.apache.org would be a very good first step.

++1, we want to _highlight_ our extra projects at http://httpd.apache.org/more/!
And perhaps provide some links from pages like manual/mod/index.html with a
"Not here?  Try the third-party modules (registered by their authors)
at http://modules.apache.org/" for completeness.

 Putting every module into the core is NOT the answer to this problem.  IMNSHO,
 Apache should be a minimalistic web server.  If we don't need it in the core,
 it shouldn't be there.  Personally, I would remove mod_dav from the server too,
 because it doesn't implement part of RFC2616.

I was thinking about this.  We are discussing whether /modules/proxy/ should be re-added
to the core tree over at [EMAIL PROTECTED]  My observation there is that the
RFC for mod_proxy may be expanded outside of the http schema in the future.  We
already have adjunct Connection-Upgrade/tls over http extensions to that RFC.  This
argument will give us headaches until we decide some simple rules to add a module
as a core or sub project.

WebDAV isn't a 'done' protocol, and really started out as the WebDA protocol
(versioning deferred until we figure out how it should be described.)  I'm sure
it will grow.  mod_ssl isn't 'done' either, until we implement the Connection-Upgrade
schema.  Not only are some folks in the world bound _never_ to download it (as it
is in conflict with their own laws) but some folks in the world are prohibited by
the US from downloading it (the T5).  And it too is supplemented by the TLS RFCs.

One thorn in everyone's side was that mod_proxy implemented an HTTP/1.0 superset,
although the rest of the server was essentially HTTP/1.1.

IMHO, if a module is on a different standards track (or likely to be extended on
one) it should probably live in its own subproject.

Who on the proxy team wants to wait for the core to bump a version just because
proxy wants us to.  The proxy team finishes implementation of another extension,
they want to get an alpha/beta/release out the door!  If they implement another
proxy-protocol, they want to get that out the door (sftp anyone?)

How about one last example ;-?  Let's say SQL support is deemed 'important'.  An
httpd-sql subproject might implement several aaa modules, with some additional
support/ apps, to allow SQL stuff.  Then they get the harebrained idea to do .htaccess
as a SQL table (for performance.)  This project could grow (with a charter of
'extending httpd core file-based mechanisms through a generic SQL layer') as far
as they wanted to take it.

In a perfect world, apache-2.0.26-gold-core.tar.gz contains just the core.  Then
give the world apache-2.0.26-gold-all-gold-20010915.tar.gz with all _released_ 
subprojects effective 2001.09.15 rolled in!  The adventurous could try the
apache-2.0.26-gold-all-beta-20010915.tar.gz.  Finally, the looney (most folks
that follow this list) can grab apache-2.0.26-gold-all-alpha-20010915.tar.gz
for alpha releases of every module (probably a longer list, since some subprojects
at a given time likely haven't gone gold just yet.)

Subprojects would probably have an easier time with a user of the gold apache core
release, since the bugs are more likely to be _in_ their module, not somewhere
else.  Now we stand a (small) chance of keeping up.

Add some decent user support outside of the (sometimes now hard-to-reach) nntp
support in comp.infosystems.www.servers. and these could be really useful.  With
a [EMAIL PROTECTED], [EMAIL PROTECTED], proxy-users... and actually
make that a really strong system, by using mod_mbox on http://httpd.apache.org/
to allow folks to browse those threads.  We must be one of the last strong projects
with _no_ users' list (what's up with that ;-?)

Subprojects shouldn't be our orphans.  They give small author groups well-deserved
recognition.  They need to become our crowning jewels.

Bill






Re: [PATCH] RE: make distclean doesn't

2001-09-02 Thread Jim Winstead

On Sun, Sep 02, 2001 at 01:25:07PM -0400, Cliff Woolley wrote:
 On Sun, 2 Sep 2001, William A. Rowe, Jr. wrote:
  Which means it has nothing to do with cleaning the tree to a
  distribution state (or state 'ready for distribution'.)
 
 See, I think that's the difference of interpretation here.  *I* interpret
 distclean to mean not "ready for distribution" but "back to essentially
 the way it was when I unpacked the distribution."  The difference being
 that I'd be irritated as hell if I lost my configuration information which
 has no impact on the state of the build environment, as opposed to the
 Makefiles and so on which do affect the state.  If I make distclean, it
 means I want to start over again with the first step out of the tarball,
 namely configure.  It doesn't mean I want to lose the options I passed to
 configure.

it may be worth following the gnu project's lead on these targets,
since they use the same names.

  http://www.gnu.org/prep/standards_55.html#SEC55

(for them, distclean == what is in the tarball.)

jim



Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Justin Erenkrantz

On Sun, Sep 02, 2001 at 12:43:09PM -0500, William A. Rowe, Jr. wrote:
  The gzip content encoding is part of the HTTP spec. 
 
 By implementation, or reference?  Sure Content-encoding is part of the spec, and
 it's defined to allow authors to extend their support to any number of encodings,
 and forces the server to use only what the client claims as supported encodings.

gzip is mentioned explicitly in RFC 2616 as a supported content-encoding
(one registered with IANA).

 +1 with caveat to follow.

Cool.  BTW, do these +1 (concept) votes count towards mod_gz's 
inclusion?  I now count three binding +1 (concepts) for this (Cliff,
Greg, and OtherBill).  Or, do I need to wait until I receive two
+1s for mod_gz explicitly?

 So we need any gzip filter to drop out quick if 1. it knows this mime type should
 not be encoded; 2. it sees the content-encoding is already gz; 3. it sees the
 uri/filename whatever in a list of exclusions (that could be automagically grown
 when it hits files that _just_don't_compress_well_.

I would think that this is a httpd.conf configuration issue.  Remember,
we add mod_gz via:

AddOutputFilter GZ html

Ian's version did make sure that the content-type was text/html before
encoding.  But, I don't think that is necessary.  If the admin sets
mod_gz via the OutputFilter semantics, that's their wish (remember,
you must *ask* for mod_gz).  I'd rather not see these types of 
assumptions in any gzip module we implement.

(I need to double check that mod_gz and mod_include don't interfere
with each other...mod_gz should run *after* mod_include has processed
the data...)

 If we like, the entire zlib (168kb tar/gz'ed) could be distributed through /srclib/
 instead of by reference.  It really isn't that large, and matches what we do today
 with pcre and expat.  If we like, drop it into apr-util/encoding/unix/lib/zlib and
 only expose it through thunks (allowing someone to come up with more robust or faster
 apr-util thunks to source that we can _not_ redistribute.)  The more I contemplate,
 the stronger my +1 for apr-util remapping, where appropriate on some platforms.

Almost every platform I am aware of includes zlib by default now.
Solaris, AIX, Linux, FreeBSD.  Ian has all of the MSVC files (I didn't
post them as I want to place it in modules/filters which is different
than where Ian originally had it) - so I know he has something working
on Win32.  So, I don't think that we need to distribute zlib.  Nor do 
we need to perform thunking.  It's just so common now.

AIUI, zlibc is completely transparent to any application.  -- justin




Re: [PATCH] RE: make distclean doesn't

2001-09-02 Thread Ryan Bloom

On Sunday 02 September 2001 10:28, Jim Winstead wrote:
 On Sun, Sep 02, 2001 at 01:25:07PM -0400, Cliff Woolley wrote:
  On Sun, 2 Sep 2001, William A. Rowe, Jr. wrote:
   Which means it has nothing to do with cleaning the tree to a
   distribution state (or state 'ready for distribution'.)
 
  See, I think that's the difference of interpretation here.  *I* interpret
  distclean to mean not ready for distribution but back to essentially
  the way it was when I unpacked the distribution.  The difference being
  that I'd be irritated as hell if I lost my configuration information
  which has no impact on the state of the build environment, as opposed
  to the Makefiles and so on which do affect the state.  If I make
  distclean, it means I want to start over again with the first step out of
  the tarball, namely configure.  It doesn't mean I want to lose the
  options I passed to configure.

 it may be worth following the gnu project's lead on these targets,
 since they use the same names.

   http://www.gnu.org/prep/standards_55.html#SEC55

 (for them, distclean == what is in the tarball.)

+1.  If we are going to use their syntax, we should also use their
semantics.  I will check with some other packages later today to see
what they do with make distclean.

Ryan
__
Ryan Bloom  [EMAIL PROTECTED]
Covalent Technologies   [EMAIL PROTECTED]
--



Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread TOKILEY


Hello all...
Kevin Kiley here...

Here is a mixture of comment/response regarding mod_gzip and the
ongoing conversation(s)...

There is a (short) SUMMARY at the bottom.

Justin ErenKrantz's original post...

 Ian has posted his mod_gz filter before, now I'd like to give it a +1.

 I told him I'd look at it a while ago, but never got a chance to do 
 so.  So, I spent this morning cleaning up the configuration and a bit 
 of the code to fit our style (nothing major).

 I'd like to add this to the modules/filters directory (which seems
 like the most appropriate place).

 Justin

Ryan Bloom responded...

 I have a few problems with this.  1)  We have consistently declined to
 accept the mod_gzip from Remote Communications.

Correct... and for a while the reason was because everyone thought the
module was using ZLIB and there has been a long standing aversion to
including ANY version of GNU ZLIB ( or any other GNU stuff ) into the
Apache tree. We have personal emails from your board members stating
that to be the case. If that aversion has evaporated then there is a TON of
GNU based stuff that is now 'eligible' for inclusion in the core
distribution, right?

GNU issues aside... we have consistently been told that the other reason
mod_gzip would not be included is because of the 'support' issue. Apache
has a standing rule that no module goes into the distribution unless you
are absolutely SURE that it will be adequately supported. Makes sense.

We have been consistently supporting this technology for years now.
Ian himself said he 'Just got bored and decided not to do any real
work one weekend' and cranked out a filter demo that happened to
include ZLIB. He has not demonstrated any intention of actually
'supporting' it other than making a few modifications to the 'demo'
( and even those have yet to appear ).

If that doesn't raise the issue of 'will it be supported' I don't
know what would.

mod_gzip has NEVER used 'ZLIB' for a number of reasons... the primary
one being that ZLIB is inadequate for the purpose. ZLIB is a 
non-multithread-safe
file based compression library. The secondary reason is actually so 
that you folks wouldn't have to worry about the 'GNU' issues you have if/when
you wanted to use the code. Read the comments at the top of mod_gzip.c
right underneath the Apache license section which has always been at
the top of mod_gzip and the syntax of which was personally approved
by at least 3 of your top level board members via personal email(s).

 2) I keep hearing that zlib has more memory leaks than a sieve.  

It does. Any non-multithread-safe file oriented library does if you
just slap it into a multithread environment and start using it without
putting critsecs around the calls to serialize it.

 3)  I don't believe that we
 should be adding every possible module to the core distribution.  

That's always been the rule. It is still a mystery when/how something
'rises' to the level of importance to be included in the Apache
distribution ( E.g. WebDav, etc ). That being said... see next comment.

 I personally think we should leave the core as minimal as possible, and
 only add more modules if they implement a part of the HTTP spec.

mod_gzip does that. It makes Apache able to perform the IETF Content-Encoding
specification that has ALWAYS been a part of HTTP 1.1.

 Before this goes into the main tree, I think we need to seriously think
 about those topics.

Think HTTP 1.1 compliance.

 I would be MUCH happier if this was a sub-project, or just a module
 that Ian distributed on his own.

 Ryan


Cliff Wooley wrote...

 Ryan Bloom wrote...

 I have a few problems with this.  1)  We have consistently declined to
 accept the mod_gzip from Remote Communications.

 That's true, though that was for 1.3.  Just now with Peter's message is
 the first time I've heard that mod_gzip for 2.0 was even nearing release.
 I'm not prejudiced... whichever one is better wins.  :)

We have said any number of times that even the ORIGINAL version of mod_gzip
was coded/tested against the ORIGINAL (alpha) release of Apache 2.0. It was
only when we realized how long Apache 2.0 was away from either a beta or
a GA that we ported it BACKWARDS to Apache 1.3.x so people could start using
it right away... and they have ( thousands of folks ).

As recently as a few weeks ago we said again that a 2.0 version of mod_gzip
was 'ready to go' but we just wanted to make sure the filtering API's were
going to stop changing ( which they still were at the time ).

Now our only concern is that the filtering I/O is actually WORKING
the way it is supposed to from top to bottom. Even recent messages
regarding Content-length support and such indicate there is still 
some work to be done. We just want the existing Apache 2.0 I/O
scheme to be CERTIFIED by the people that wrote it ( E.g. BETA at least )
before we CERTIFY our own product(s) against that same Server product.

If you were convinced that mod_gzip was only for Apache 1.3.x then

Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread George Schlossnagle

 In contrast, with an 11,000-line implementation like mod_gzip, it's
 much less likely that other developers will be able to troubleshoot
 the code quickly if it breaks while the original authors are on 
 vacation.

A quick perusal of the source for the 1.3 version of mod_gzip (which I've
been happily using for 3 weeks now) leads me to believe that 90% of the
11,000 lines are debug code #ifdef'd out.




Re: [PATCH] Add mod_gz to httpd-2.0

2001-09-02 Thread Pier Fumagalli

Ryan Bloom [EMAIL PROTECTED] wrote:

 If you want to use gzip, then zip your data before putting it on-line.  That
 doesn't help generated pages, but perl can already do gzip, as can PHP.

And Tomcat 4.x :)

Pier




Re: cvs commit: httpd-2.0/modules/http mod_mime.c

2001-09-02 Thread William A. Rowe, Jr.

AddOutputFilter GZ html  (server-level)
AddOutputFilter Includes html  (directory-level)

I think what you want is 

AddOutputFilter Includes html.

SetOutputFilterByType gz text/* 

That syntax doesn't exist...

yet :)

Bill




Re: cvs commit: httpd-2.0/modules/http mod_mime.c

2001-09-02 Thread Justin Erenkrantz

On Sun, Sep 02, 2001 at 10:17:46PM -0500, William A. Rowe, Jr. wrote:
 Thank you, first off, for catching my foobar.
 
 But -1 on the solution.  This is contrary to the syntax of AddSomething.
 AddSomething in mod_mime (and SetSomething in core) always _replaces_ the
 prior declaration!!!

If I setup mod_bar's filter on a per-server level for a mime-type and 
then I also want the same mime-type in a directory to be handled by 
mod_foo's filter as well, I want *both* filters to be executed.  How 
do you propose to handle that?  

The easiest way I see to configure this is:

AddOutputFilter BAR html
Directory /baz
AddOutputFilter FOO html
/Directory

Maybe I'm missing something here.  -- justin




Re: cvs commit: httpd-2.0/modules/http mod_mime.c

2001-09-02 Thread William A. Rowe, Jr.

From: Justin Erenkrantz [EMAIL PROTECTED]
Sent: Sunday, September 02, 2001 10:40 PM


 On Sun, Sep 02, 2001 at 10:17:46PM -0500, William A. Rowe, Jr. wrote:
  Thank you, first off, for catching my foobar.
  
  But -1 on the solution.  This is contrary to the syntax of AddSomething.
  AddSomething in mod_mime (and SetSomething in core) always _replaces_ the
  prior declaration!!!
 
 If I setup mod_bar's filter on a per-server level for a mime-type and 
 then I also want the same mime-type in a directory to be handled by 
 mod_foo's filter as well, I want *both* filters to be executed.  How 
 do you propose to handle that?  

Not this way.  No other mod_mime variable behaves the way you are trying.
I'm not kidding about adding a Set{Input|Output}FilterByType/SetHandlerByType
so when we ask folks to rely upon mime types, they can actually do so.



 The easiest way I see to configure this is:
 
 AddOutputFilter BAR html
 Directory /baz
 AddOutputFilter FOO html
 /Directory
 
 Maybe I'm missing something here.  -- justin

Yes... please read mod_mime.html.  AddSomething is not additive, and can't be.
The server config is often a nightmare to grok as things stand today.  Don't make
things harder by abusing fairly consistent definitions such as AddSomething or
SetSomething.  The inner container always overrides the outer.

So the inner needs AddOutputFilter FOO;BAR html - as of today.  I suggested an
entire +|- syntax as well, it was somewhat booed since existing +|- syntaxes are
perceived as confusing.  Here, well I think it's necessary.

None of this is addressing filter ordering across commands yet.  I said 8 months
ago we've done an inadequate job of defining the use-filters syntax.  I'm saying
the very same thing today.

Bill