Re: new download page

2002-10-27 Thread Thom May
* Joshua Slive ([EMAIL PROTECTED]) wrote :
 Pier Fumagalli wrote:
 
 On 27/10/02 0:54, David Burry  wrote:
 
 
 
 
 Right.  If we had very reliable mirrors and a good technique for keeping 
 them that way, I'd be fine with doing an automatic redirect or fancy DNS 
 tricks.  But we don't have that at the moment.
 
 I looked into it back in the day, but the only way would be to go down to
 RIPE (ARIN in the US) to see where that IP is coming from, doing some weirdo
 WHOIS parsing and stuff... _WAY_ overkill... Anyhow, this is going waaay
 offtopic! :-)
 
 See: http://maxmind.com/geoip/

Or just ask BGP... http://www.supersparrow.org/
-Thom



RE: new download page

2002-10-27 Thread Sander Striker
 From: Bill Stoddard [EMAIL PROTECTED]
 Sent: 27 October 2002 03:15

 --On Sunday, October 27, 2002 12:30 AM +0100 Pier Fumagalli
 [EMAIL PROTECTED] wrote:

 Ok, as long as it's clear! :-) I'm very dumb, but I know other
 people smarter than me who also have the same problem with
 SourceForge... You simply forget! :-)

 Well, I agree with Pier.  I'm an idiot, too.  I absolutely can't
 stand SourceForge's mirroring system (which is essentially what that
 page is moving us to).  It tells me that I'm downloading a file, but
 when I try to download it by hitting the link, I get an HTML file
 that shows me mirrors where I can download it.  Eh, no.

Glad to see that I'm not the only idiot that was bitten by this before ;)

 To be blunt, any link from that download page must go directly to a
 tarball not to a page that lists mirrors.  I've offered ASF-wide
 suggestions to the mirroring problem.  I still think the best
 strategy is to do round-robin DNS of dists.apache.org (and indicate
 that those servers aren't necessarily trusted).  -- justin

 
 I don't have a problem at all with the way downloads have been done. FWIW, I
 agree with Justin here.

I agree as well.

Sander
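
Justin's round-robin DNS suggestion would amount to publishing several A records under one name; a hypothetical BIND zone fragment (addresses and mirror count invented for illustration, not the real ASF setup):

```
; dists.apache.org resolving round-robin across mirror hosts
; (example addresses only -- not the actual ASF mirrors)
dists   IN  A   192.0.2.10
dists   IN  A   192.0.2.20
dists   IN  A   192.0.2.30
```

Resolvers rotate the record order, so successive clients land on different hosts; note the thread's caveat that such mirrored servers aren't necessarily trusted.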



Re: cvs commit: apr/network_io/win32 sockets.c

2002-10-27 Thread Jeff Trawick
William A. Rowe, Jr. [EMAIL PROTECTED] writes:

 FWIW, somehow this patch breaks Win32 with APR_HAVE_IPV6.
 
 The Apache service named  reported the following error:
  [Sat Oct 26 22:45:29 2002] [crit] (OS 11001)No such host is known.  : 
alloc_listener: failed to set up sockaddr for :: .

I don't see how any of this commit affects that path...

Didn't you say almost exactly the same thing after committing something
to turn on IPv6 for Win32 about a week ago?

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: RE : mod_auth_ldap

2002-10-27 Thread John K . Sterling
Note: as I said in the original email, the problem was that mod_auth
was enabled, not a problem with auth_ldap.

sterling

On Wednesday, October 23, 2002, at 05:19 PM, Estrade Matthieu wrote:

Hi,

I finally made mod_auth_ldap work.

First, basic authentication:

AuthName auth
AuthType Basic

Then, disable Basic authoritative to let the Authorization continue to
mod_auth_ldap.

AuthBASICAuthoritative Off

Then my LDAP Config

Maybe this documentation about AuthBasicAuthoritative directive, should
be added by a link in mod_auth_ldap documentation.

Regards,

Estrade Matthieu

-Original Message-
From: Thomas Bennett [EMAIL PROTECTED]
Sent: Wednesday, October 23, 2002 9:43 PM
To: Estrade Matthieu
Subject: Re: mod_auth_ldap

On Thu, 24 Oct 2002 04:33, Estrade Matthieu wrote:

Hi,

I am using Apache 2.0 + proxy + mod_auth_ldap.

I have this error in my log:

[Wed Oct 23 17:35:59 2002] [error] [client 192.168.100.1] (9)Bad file
descriptor: Could not open password file: (null)

and it returns a 500 error.


Add
AuthLDAPAuthoritative on
to stop it from trying another authentication type when ldap fails.



this is my vhost auth conf:

<Location />
AuthName test
AuthType basic
AuthLDAPEnabled On
AuthLDAPUrl ldap://192.168.100.2:389/cn=backoffice,dc=company,dc=com?uid
Require valid-user
</Location>

When I do this query with anonymous login directly on the LDAP server,
it works.


I suggest you look closely at your basedn:
cn=backoffice,dc=company,dc=com
I simply use o=EDS, but of course our server might be set up
differently.

Regards
Thomas Bennett








httpd bounced on daedalus

2002-10-27 Thread gregames
...at Sunday, 27-Oct-2002 07:07:35 PST, to install a couple of patches on top of
2.0.43 to fix:

* junk left in the scoreboard after a graceful restart with smaller MaxClients
* byterange filter was applying ranges to redirect responses

Greg



Re: new download page

2002-10-27 Thread Johannes Erdfelt
On Sun, Oct 27, 2002, Thom May [EMAIL PROTECTED] wrote:
 * Joshua Slive ([EMAIL PROTECTED]) wrote :
  Pier Fumagalli wrote:
  
  On 27/10/02 0:54, David Burry  wrote:
  
  
  
  
  Right.  If we had very reliable mirrors and a good technique for keeping 
  them that way, I'd be fine with doing an automatic redirect or fancy DNS 
  tricks.  But we don't have that at the moment.
  
  I looked into it back in the day, but the only way would be to go down to
  RIPE (ARIN in the US) to see where that IP is coming from, doing some weirdo
  WHOIS parsing and stuff... _WAY_ overkill... Anyhow, this is going waaay
  offtopic! :-)
  
  See: http://maxmind.com/geoip/
 
 Or just ask BGP... http://www.supersparrow.org/

Network routes don't necessarily identify the best server: bandwidth
varies greatly between routes, and so does server load.

Plus, supersparrow is mostly a proof of concept. Dents (the underlying
DNS server, which I mostly wrote) is a long way off from being
production ready, and the method supersparrow uses doesn't scale well
(telnetting to a Cisco router).

Anyway, it's next to impossible to make a perfect decision about the
best server to use. IMHO, if you make the decision for the user (by
only returning certain servers via DNS, etc.) then it needs to be close
to a perfect choice.

Otherwise, you may just want to list the mirrors and their locations
and let the user choose.

JE




Re: new download page

2002-10-27 Thread Justin Erenkrantz
--On Saturday, October 26, 2002 9:33 PM -0400 Joshua Slive 
[EMAIL PROTECTED] wrote:

I like this system better because:

1. It is perfectly transparent to the users.  They know exactly
where they are downloading from and are given options for
alternative locations.


You are missing my point: you are creating an extra step that is not 
needed.  There are plenty of solutions to this problem that do not 
require this level of indirection.

For example, you could incorporate the CGI script logic into a shtml 
file that has a choice list representing each mirror (and method). 
The links on our download page would be recomputed as you select the 
mirror.  I still prefer a round-robin DNS as that doesn't require any 
CGI scripting.

2. It is extremely simple to configure and maintain.


No, it's not.  Currently, we have bogus mirrors.  For example, I see 
apache.towardex.com listed as a mirror for me.  When I click on the 
link, it gives me a 404.  That is unacceptable.

If you want to force users to do this scheme, then you have to ensure 
that we don't list broken mirrors.

3. It can be put into place NOW.


No, I don't think we can deploy this because we have so many busted 
mirrors.

I'd rather we do the right solution than a broken solution.  This 
is a broken solution that will result in too much confusion for our 
users.  Please do not switch to this.  -- justin


Re: new download page

2002-10-27 Thread Joshua Slive
Justin Erenkrantz wrote:


You are missing my point: you are creating an extra step that is not
needed.  There are plenty of solutions to this problem that do not
require this level of indirection.

For example, you could incorporate the CGI script logic into a shtml
file that has a choice list representing each mirror (and method). The
links on our download page would be recomputed as you select the
mirror.  I still prefer a round-robin DNS as that doesn't require any
CGI scripting.


This seems to be exactly the same number of steps to me.  In the current 
page you select the file and then the mirror.  With your idea, you 
select the mirror and then the file.  I don't have any problem with your 
suggestion, other than the fact that it isn't implemented.

 2. It is extremely simple to configure and maintain.


No, it's not.  Currently, we have bogus mirrors.  For example, I see
apache.towardex.com listed as a mirror for me.  When I click on the
link, it gives me a 404.  That is unacceptable.

If you want to force users to do this scheme, then you have to ensure
that we don't list broken mirrors.



Bullsh**.

1. Most of the mirrors are fine.  That particular one is entered in our 
mirror list incorrectly.

2. Every page lists two guaranteed working sites at the bottom: nagoya 
and daedalus.  I'm thinking of also adding ibiblio to that.

3. If you find a problem with a mirror listing, why don't you fix it 
rather than complaining about it?

4. Even daedalus is not a guaranteed working site at the moment.

5. Nobody is forced to do anything.  Clear links are still provided to 
http://www.apache.org/dist/httpd/.

 3. It can be put into place NOW.


No, I don't think we can deploy this because we have so many busted
mirrors.

I'd rather we do the right solution than a broken solution.  This is
a broken solution that will result in too much confusion for our users.
Please do not switch to this.  -- justin



So would you prefer a state where some users might need to try two or 
three links to get an actual download, or a state where daedalus is 
completely unresponsive?  If patterns are followed, there is a good 
chance we could have serious capacity problems on both daedalus and 
nagoya tomorrow morning.

If you have a better solution, then do something about it.  We have been 
talking about this for months, but nobody has stepped forward to 
actually do it.  I implemented the solution that was within my technical 
and time limits.  It is a working solution, and I believe it is superior 
both from a user point of view and from a resource management point of view.

Joshua.






Re: new download page

2002-10-27 Thread Justin Erenkrantz
--On Sunday, October 27, 2002 11:46 AM -0500 Joshua Slive 
[EMAIL PROTECTED] wrote:

This seems to be exactly the same number of steps to me.  In the
current page you select the file and then the mirror.  With your
idea, you select the mirror and then the file.  I don't have any
problem with your suggestion, other than the fact that it isn't
implemented.


No, it isn't.  We'd select a random default mirror.  (The key is the 
closer.cgi functionality would be incorporated into download.html.)

1. Most of the mirrors are fine.  That particular one is entered in
our mirror list incorrectly.


And, every time someone breaks the mirrors.list file, we're going to 
break downloads.  A fair number of commits to mirrors.list are bogus 
and break the file.  If we want to switch httpd downloads to relying 
on mirrors, then we have to be careful about the integrity of that 
file.  (Something we have refused to enforce in the past because we 
don't want to hurt people's feelings.)

2. Every page lists two guaranteed working sites at the bottom:
nagoya and daedelus.  I'm thinking of also adding ibiblio to that.


ibiblio is not an affiliated site, but a large and respected mirror. 
Yet, I believe the guaranteed working sites should be only those 
under ASF (or ASF member) control.  There is no accountability for 
problems with ibiblio.  Therefore, I would be hesitant to say that it 
is a guaranteed working site.

3. If you find a problem with a mirror listing, why don't you fix
it rather than complaining about it?


Because I just noticed it, and it wasn't obvious what the failure 
condition was (from a 404, how am I supposed to know that it was 
entered incorrectly?).

4. Even daedalus is not a guaranteed working site at the moment.


But, it is the 'master' site (as well as the rsync master).


5. Nobody is forced to do anything.  Clear links are still provided
to http://www.apache.org/dist/httpd/.


I don't believe that the closer.cgi file makes it clear enough where 
to download from in the event of an error.  Hmm, I'll add a 
disclaimer to the top of the page.  -- justin


Re: new download page

2002-10-27 Thread Joshua Slive
Justin Erenkrantz wrote:



No, it isn't.  We'd select a random default mirror.  (The key is the
closer.cgi functionality would be incorporated into download.html.)



Sure, you can do that.  But in that case, you really do need to make 
absolutely sure that every mirror works every time.  What I have 
implemented allows the user to gracefully fall back to a working mirror.

So go for it.  All you need to do is implement a monitoring system for 
mirrors, and then your proposed shtml page.  I agree it would be 
superior to what I have done.

Until you do that, my system is better than what we have been using.

Joshua.



Re: cvs commit: apr/network_io/win32 sockets.c

2002-10-27 Thread William A. Rowe, Jr.
At 06:49 AM 10/27/2002, Jeff Trawick wrote:

I don't see how any of this commit affects that path...

It may not.  There -are- problems with the IPV6 port on Win32, yet
and still...

 Didn't you say almost exactly the same thing after committing something
 to turn on IPv6 for Win32 about a week ago?

Yes, however Win32 would start with IPv6 with the

Listen 80

directive last week.  Something changed this week that the default
IP [0::0] no longer works correctly.  Named IPs were giving me trouble
all along.

Bill






Re: new download page

2002-10-27 Thread Justin Erenkrantz
--On Sunday, October 27, 2002 12:33 PM -0500 Joshua Slive 
[EMAIL PROTECTED] wrote:

Sure, you can do that.  But in that case, you really do need to
make absolutely sure that every mirror works every time.  What I
have implemented allows the user to gracefully fallback to a
working mirror.


No, because there would be a selection box that allows the selection 
of which mirror to use.  So, it would still allow for graceful 
fallback in the event that the 'default' mirror is down.

I'm trying to write it up now.  I'm also cleaning up closer.cgi while 
I'm at it.  -- justin


Re: new download page

2002-10-27 Thread Ask Bjoern Hansen
On Sat, 26 Oct 2002, David Burry wrote:

[...]
 too... hmm..  This is probably getting to be too complex of a suggestion for
 anyone to do with volunteer time and resources but still just an idea... ;o)

ftp'ing to ftp://ftp.perl.org/pub/CPAN/ generally sends you to a
nearby CPAN mirror.

ftp://ftp.apache.ddns.develooper.com/pub/apache/dist/ should find an
Apache mirror not on the other side of the world.


 - ask

-- 
ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();




Re: new download page

2002-10-27 Thread Ask Bjoern Hansen
On Sat, 26 Oct 2002, Joshua Slive wrote:

  WHOIS parsing and stuff... _WAY_ overkill... Anyhow, this is going waaay
  offtopic! :-)

 See: http://maxmind.com/geoip/

 If someone wants a little project, it shouldn't be too hard to integrate
 this into the existing closer.cgi script.

FWIW, that's what my dynamic DNS thing I just mentioned is using.
It translates the mirrors.dist file into a configuration file which
is then used by the DNS servers.


 - ask

-- 
ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();
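
The GeoIP-based selection being discussed could look roughly like this on the closer.cgi side. A minimal sketch: the country table and mirror URLs here are made up for illustration, and the IP-to-country lookup that a real MaxMind database performs is assumed to have already happened:

```python
# Hypothetical country -> mirror table; a real deployment would build this
# from the mirrors list, and a GeoIP database would map the client's IP
# address to a two-letter country code before this step.
MIRRORS_BY_COUNTRY = {
    "de": ["http://de.mirror.example.org/apache/"],
    "us": ["http://us.mirror.example.org/apache/"],
}
FALLBACK = ["http://www.apache.org/dist/"]

def pick_mirrors(country_code):
    """Return candidate mirrors for a country, always ending with the
    master site so the user has a guaranteed-working last resort."""
    return MIRRORS_BY_COUNTRY.get(country_code.lower(), []) + FALLBACK
```

Keeping the master site at the end of every list preserves the graceful-fallback property argued for elsewhere in this thread.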




Re: new download page

2002-10-27 Thread Pier Fumagalli
On 27/10/02 19:26, Ask Bjoern Hansen [EMAIL PROTECTED] wrote:
 On Sat, 26 Oct 2002, David Burry wrote:
 
 ftp://ftp.apache.ddns.develooper.com/pub/apache/dist/ should find an
 Apache mirror not on the other side of the world.

We want downloads working with HTTP... Anyhow, how do you do that? Can we
move the logic onto Apache.ORG so that something like mirror.apache.org will
point at your closest mirror? (You should improve it; I end up in
Hungary at 19 hops, but have some mirrors at 5-6 hops)...

Pier




Re: new download page

2002-10-27 Thread Bojan Smojver
On Sun, 2002-10-27 at 09:56, Pier Fumagalli wrote:
 Erik Abele wrote:
  
  +1. great idea, but I think the mirror sites should be mentioned more
  than only once.
 
 Agreed, it's one of those things I hate most about SourceForge... I _always_
 screw up, copy the link from my browser to the wget command line in my
 terminal, and end up with a few-KB-long HTML file...

Welcome to the club :-)

Bojan




Re: new download page

2002-10-27 Thread Justin Erenkrantz
--On Sunday, October 27, 2002 9:39 AM -0800 Justin Erenkrantz 
[EMAIL PROTECTED] wrote:

I'm trying to write it up now.  I'm also cleaning up closer.cgi
while I'm at it.  -- justin


Well, that took *way* longer than I wanted to.  Anyway, a rough 
sketch of what I'm thinking of is here:

http://www.apache.org/dyn/mirrors/httpd.cgi

And, to prove that this new system isn't any worse than the old one:

http://www.apache.org/dyn/mirrors/list.cgi

This is a python-based CGI script that uses Greg Stein's EZT library 
(much kudos to Greg for this awesome tool).  It allows for the 
separation of the layout from the mirroring data.  Therefore, it 
makes it really easy to do the above with only one CGI script 
(httpd.cgi and list.cgi are symlinked to the same file) that has 
multiple 'views' and templates.

We would probably have to work a bit on the layout and flesh it out 
some, but this is the idea that I had.

Source at:

http://www.apache.org/~jerenkrantz/mirrors.tar.gz

If I could run CGI scripts from my home dir, I wouldn't have stuck 
this in www.apache.org's docroot, but CGI scripts are not allowed 
from user directories.  ISTR mentioning this before and getting no 
response from Greg or Jeff.  -- justin
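
The one-script-many-views trick described above (httpd.cgi and list.cgi symlinked to the same file, each picking its own template) can be sketched roughly as follows. The bracketed placeholder syntax is a stand-in for illustration, not EZT's actual template syntax, and the view names and data keys are hypothetical:

```python
import os

# Map the name the script was invoked under (via symlink) to a template.
# The [placeholder] syntax below is an illustrative stand-in, not EZT.
VIEWS = {
    "httpd.cgi": "download [file] from [mirror]",
    "list.cgi": "all mirrors: [mirror]",
}

def render(script_path, data):
    """Pick a view by the invoked script's basename, then substitute
    every [key] placeholder with its value from the data dictionary."""
    template = VIEWS[os.path.basename(script_path)]
    for key, value in data.items():
        template = template.replace("[%s]" % key, value)
    return template
```

So one script with one data source renders differently depending on which symlinked name was requested, which is the separation of layout from mirroring data that the message describes.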


Re: cvs commit: apr/network_io/win32 sockets.c

2002-10-27 Thread Jeff Trawick
William A. Rowe, Jr. [EMAIL PROTECTED] writes:

 Yes, however Win32 would start with IPv6 with the
 
 Listen 80
 
 directive last week.  Something changed this week that the default
 IP [0::0] no longer works correctly.  Named IPs were giving me trouble
 all along.

I know what you mean by 0::0, but it is really "::", and Apache
assumes that apr_sockaddr_info_get()/getaddrinfo() can grok "::" (and
"0.0.0.0" too, for that matter).

My tree on Linux (2 days old?) is handling Listen nn fine (Apache
looks up :: and magic happens).

Note that some older Unix variants (AIX 4.3.3 without maintenance,
Solaris 8 without maintenance, surely other platforms too) simply
didn't have getaddrinfo() working properly and IPv6 just didn't work
right for Apache+APR.  ./configure catches some bogosities and
disables IPv6 support outright (no working getaddrinfo) when
appropriate.  So be alert to Microsoft perhaps not getting it perfect
either.

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: new download page

2002-10-27 Thread Joshua Slive
Justin Erenkrantz wrote:


--On Sunday, October 27, 2002 9:39 AM -0800 Justin Erenkrantz
 wrote:

 I'm trying to write it up now.  I'm also cleaning up closer.cgi
 while I'm at it.  -- justin


Well, that took *way* longer than I wanted to.  Anyway, a rough sketch
of what I'm thinking of is here:

http://www.apache.org/dyn/mirrors/httpd.cgi



Looks good.  +1.

Joshua.





Re: new download page

2002-10-27 Thread David Burry
Awesome script...  I hadn't thought of doing it this way; this is better
than what I was thinking.  It seems to address everyone's concerns, too,
in the best way that's still within our resources.

Dave

- Original Message -
From: Justin Erenkrantz [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, October 27, 2002 1:50 PM
Subject: Re: new download page


 --On Sunday, October 27, 2002 9:39 AM -0800 Justin Erenkrantz
 [EMAIL PROTECTED] wrote:

  I'm trying to write it up now.  I'm also cleaning up closer.cgi
  while I'm at it.  -- justin

 Well, that took *way* longer than I wanted to.  Anyway, a rough
 sketch of what I'm thinking of is here:

 http://www.apache.org/dyn/mirrors/httpd.cgi

 And, to prove that this new system isn't any worse than the old one:

 http://www.apache.org/dyn/mirrors/list.cgi

 This is a python-based CGI script that uses Greg Stein's EZT library
 (much kudos to Greg for this awesome tool).  It allows for the
 separation of the layout from the mirroring data.  Therefore, it
 makes it really easy to do the above with only one CGI script
 (httpd.cgi and list.cgi are symlinked to the same file) that has
 multiple 'views' and templates.

 We would probably have to work a bit on the layout and flesh it out
 some, but this is the idea that I had.

 Source at:

 http://www.apache.org/~jerenkrantz/mirrors.tar.gz

 If I could run CGI scripts from my home dir, I wouldn't have stuck
 this in www.apache.org's docroot, but CGI scripts are not allowed
 from user directories.  ISTR mentioning this before and getting no
 response from Greg or Jeff.  -- justin





RE: new download page

2002-10-27 Thread James Cox

 --On Sunday, October 27, 2002 12:33 PM -0500 Joshua Slive
 [EMAIL PROTECTED] wrote:

  Sure, you can do that.  But in that case, you really do need to
  make absolutely sure that every mirror works every time.  What I
  have implemented allows the user to gracefully fallback to a
  working mirror.

 No, because there would be a selection box that allows the selection
 of which mirror to use.  So, it would still allow for graceful
 fallback in the event that the 'default' mirror is down.

 I'm trying to write it up now.  I'm also cleaning up closer.cgi while
 I'm at it.  -- justin

FWIW, and if you don't mind using php, take a look at

http://cvs.php.net/cvs.php/php-master-web/scripts/mirror-test?login=2

(I suggest the make version; it runs faster but needs the latest CVS of wget)

and

http://cvs.php.net/cvs.php/php-master-web/scripts/mirror-summary?login=2

Convert this code to read a separated-values file, if desired, or use the
database.  Either way, you can easily adapt it to maintain a dynamic list
of mirrors, or at least provide status updates on them.
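
The status-checking idea could start even simpler than the PHP scripts: validate entries before they ever reach the download page, so a mistyped entry (the kind of mirror-list mistake complained about earlier in the thread) drops out instead of becoming a dead link. A sketch, assuming a made-up whitespace-separated mirrors file format of protocol, URL, country:

```python
from urllib.parse import urlparse

def valid_entries(lines):
    """Keep only mirror entries whose URL parses with a scheme and a host.

    Assumes a hypothetical three-field format: protocol, URL, country.
    Malformed lines are silently dropped rather than published.
    """
    good = []
    for line in lines:
        fields = line.split()
        if len(fields) != 3:
            continue  # wrong field count: malformed entry
        url = urlparse(fields[1])
        if url.scheme in ("http", "ftp") and url.netloc:
            good.append(fields[1])
    return good
```

An actual liveness probe (fetching each URL, as the wget-based mirror-test script does) would layer on top of this; parsing alone already catches entries entered into the list incorrectly.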

 -- james




Linux SSL cert creation on RedHat 7.2

2002-10-27 Thread Vernon Webb
Can someone tell me where I can find a good how-to or can someone please 
explain how to setup and create ssl certs and keys for an Apache driven web 
site? 

I've used the following to create the crt:
 openssl genrsa -out privkey.pem
 openssl req -new -key privkey.pem -out cert.crt

But what else do I need? Don't I need a key? I've noticed in the default 
httpd.conf that the following is used:

SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl.crt/server.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl.key/server.key

So how do I create the key? I tried using the server's key, but when I look 
at it, it says localhost.localdomain.

Thanks


-- 
This message has been scanned for viruses and
dangerous content by Webb Solutions' MailScanner,
and is believed to be clean.




Fw: Linux SSL cert creation on RedHat 7.2

2002-10-27 Thread Vernon Webb
Let me expand a bit please. I followed these directions:

http://www.ssl.com/apache_mod_SSL.asp

and did this:

openssl genrsa -des3 -out websites.key 1024
openssl req -new -key websites.key -out swingingpenpals.csr

My question: it seems I'm missing the step where I create the crt 
file. How do I do that without going through one of the paid cert vendors?

Thanks

V





Re: Linux SSL cert creation on RedHat 7.2

2002-10-27 Thread Bojan Smojver
This is what I do.

Bojan

On Mon, 2002-10-28 at 11:13, Vernon Webb wrote:
 Can someone tell me where I can find a good how-to or can someone please 
 explain how to setup and create ssl certs and keys for an Apache driven web 
 site? 
 
 I've used the following to create the crt:
  openssl genrsa -out privkey.pem
  openssl req -new -key privkey.pem -out cert.crt
 
 But what else do I need? Don't I need a key? I've noticed in the default 
 httpd.conf that the following is used:
 
 SSLEngine on
 SSLCertificateFile /etc/httpd/conf/ssl.crt/server.crt
 SSLCertificateKeyFile /etc/httpd/conf/ssl.key/server.key
 
 So how do I create the key? I tried using the server's key, but when I look 
 at it is says localhost.localdomain
 
 Thanks
 
 
 


#!/bin/sh

if [ $# -lt 1 ]; then
  echo "$0: usage: $(basename $0) name"
  exit 1
fi

NAME=$1

openssl req -new > $NAME.csr
openssl rsa < privkey.pem > $NAME.key
openssl x509 -in $NAME.csr -out $NAME.crt -req -signkey $NAME.key -days 365



Re: Linux SSL cert creation on RedHat 7.2

2002-10-27 Thread Vernon Webb

That worked great. Thanks!!





Re: Branch of docs tree: Re: Authentication in 2.0

2002-10-27 Thread William A. Rowe, Jr.
At 09:35 PM 10/27/2002, Rich Bowen wrote:
On Sun, 27 Oct 2002, William A. Rowe, Jr. wrote:

 MAIN branch - current development, 2.1 stays here.
   \--- APACHE_2_0_BRANCH [when we declare 2.1, we 'freeze' 2.0]
  \--- APACHE_2_2_BRANCH [as we prepare to release 2.2, we branch]

I have had very little luck in the past with trying to do branches in
cvs. I expect it's not hard if there is someone at the helm who knows
more about cvs than I do. So, in general, this seems like a good idea;
a lot of projects operate this way, and it seems to work really well.

It does.  Of course this is based on real practice.  The most aggravating
thing is trying to maintain 'new development' parked off on a branch.
Branches should be reserved for 'known states', not the unknown, wild
west of httpd development.  That's why I've phrased the questions in
STATUS as I have, where new-dev is always parked on the MAIN branch.

Several of us would be happy to walk other committers through the
process of committing their accepted patches back to other branches.
I work within branches all day, they really aren't that dark and scary.

However, if the list votes to accept the stable/dev odds-evens, and
chooses not to branch, we can look at the multiple-tree solution.

Another nit about branches, the httpd-2.0 cvs name begins to be sort
of silly.  But better an odd name than confuse every script rsync'ing
to httpd-2.0.  Maybe by 3.0 we will decide on a new repository.

As with the 1.3/2.0 docs split, I'd be concerned about older branches
getting maintained. There are a lot of people using 1.3 still, but the
docs are far inferior to the 2.0 docs in many ways, and almost nobody is
working on them. I guess the thinking is that once we have moved on to
the next version, nothing needs to be done to the older version of the
docs, but my theory tends to be that documentation can *always* be made
better.

Of course.  This is the same with the older code, as well.  Security patches
happen.  Bugs get fixed.  There is nothing stopping good code or docs from
being applied to the old versions beyond the testing or proofreading required.

The idea is to keep the 'wild west' of proactive httpd development (and that
would include httpd-docs development, too!) exactly where it is today.
After vetting/voting, changes that don't break compatibility are committed 
to the stable branch, and even to prior releases.

ALL this depends on httpd relying on a stable APR release.  That side of
the wall is getting very close, and is well defined as of apr-1.0 to maintain
the sort of backwards compatibility that the new proposal requires.

I appear to have more concerns than solutions. Right now, we are in a
situation where people are coming to the httpd.apache.org/docs-2.0 site
and getting answers that are just wrong for them, and clearly that has
to get addressed first.

Yes, this has to be resolved.  We began debate, but the list got -very-
quiet on the topic this week.  Is that because we all agree?  Is that just
folks feeling they are being railroaded into adopting this change? Is it
nothing but too much busyness and not enough time to comment?

I'm not psychic, so I'm floating the issue in STATUS for consideration.

Bill




Re: cvs commit: httpd-2.0 STATUS

2002-10-27 Thread Brian Pane
   +* Adopt backwards compatibility for future Apache 2.0 releases
   +  such that MMN changes and eliminating non-experimental modules 
   +  are deferred for the next minor version bump (e.g. 2.1, 2.2 
   +  or 3.0).
   ++1: wrowe
   + 0: 
   +-1: 

Does this mean all MMN changes?  Or just MMN major number changes?

Thanks,
Brian





Re: cvs commit: httpd-2.0 STATUS

2002-10-27 Thread William A. Rowe, Jr.
At 01:08 AM 10/28/2002, Brian Pane wrote:
   +* Adopt backwards compatibility for future Apache 2.0 releases
   +  such that MMN changes and eliminating non-experimental modules 
   +  are deferred for the next minor version bump (e.g. 2.1, 2.2 
   +  or 3.0).
   ++1: wrowe
   + 0: 
   +-1: 

Does this mean all MMN changes?  Or just MMN major number changes?

No MMN Major modifications.

The traditional MMN minor changes will continue on the release branch
when applicable.




New download page up on daedalus

2002-10-27 Thread Justin Erenkrantz
Well, I came back from watching the World Series (woo-hoo! Go 
Angels!) and I figured that I could just tidy up a bit on the mirror 
download page that I posted earlier.  All of the relevant bits are 
now checked in, and I'm now reasonably comfortable with the results 
(saw 2 +1s, so it seems okay for now).

So, I've switched the httpd.apache.org 'download from a mirror' site 
to link to this new page:

http://httpd.apache.org/download.cgi

Much thanks to Joshua for the initial thrust.  =-)

For those that are interested, that link is really a rendering of 
download.html that has been run through EZT and the python 
mirrors.cgi script.  Therefore, any content changes to that page 
should be made to download.xml and then propagated back to 
download.html (by anakia) where EZT will then pick it up.  In short, 
treat download.xml as the definitive content...  Hope that makes 
sense to anyone who'd have to maintain it.

If the system doesn't manage to blow itself up after a few days, I 
think it'd be a good idea to remove the 'from here' link as well, but 
I'd prefer to leave a well-marked escape hatch for now.  -- justin


Re: New download page up on daedalus

2002-10-27 Thread Aaron Bannert
On Sun, Oct 27, 2002 at 11:24:07PM -0800, Justin Erenkrantz wrote:
 If the system doesn't manage to blow itself up after a few days, I 
 think it'd be a good idea to remove the 'from here' link as well, but 
 I'd prefer to leave a well-marked escape hatch for now.  -- justin

I probably missed something in how your script works, but don't
we still want the 'from here' link for people who are using an
explicit mirror URL? That's the lazy link I use from my mirror's
front page so that I can dig into the download pages.

-aaron