Re: RFC for a Perchild-like-MPM

2005-02-10 Thread Leif W
Nick Maynard [EMAIL PROTECTED]; [EMAIL PROTECTED]:03
GMT-5
The problem is: SSL is *NOT* usable for virtual hosting. You need a
separate socket for each SSL vhost, so you'll probably prefer
several independent httpd's - maybe then stripped down w/o any vhost
support.
You're right - SSL is not usable for name-based vhosts.  However it
should be fine for normal vhosts.  Are you suggesting I use a separate
httpd for every SSL host I have?  That's a waste of resources, and
horribly inefficient.  Can metuxmpm deal with non name-based SSL
vhosts?
Hi.  I hang out mostly on the users list, but have played with basic
HTTPS configuration (using SSL or TLS).  As I understand, HTTPS works
fine with any VirtualHost, so long as it is based on a unique ip:port
combination.  That is the current alternative to name-based virtual
hosting.  It doesn't necessarily mean you need a unique ip address, if
you're willing or able to use non-standard ports.  Otherwise it does
require unique ip addresses and port 443.  That is IP Based Virtual
Hosting.  I guess it is somewhat of a misnomer.  It should be Socket
Based Virtual Hosting, but IPBVH implies that only default ports are
used.  If you must use a single ip:port combination for HTTPS, then it's
not possible, due to the point at which the SSL/TLS layer takes over:
the certificate must be chosen before any HTTP data (like the Host
header) is exchanged.
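To make that concrete, here's a sketch of two HTTPS vhosts on separate sockets, one on a unique IP with the default port and one reusing the IP with a non-standard port (all hostnames, IPs, and file paths below are purely illustrative):

```apache
# Illustrative only: each SSL vhost bound to its own ip:port socket.

# Unique IP, default HTTPS port:
Listen 192.0.2.10:443
<VirtualHost 192.0.2.10:443>
    ServerName secure1.example.com
    SSLEngine on
    SSLCertificateFile    /path/to/secure1.crt
    SSLCertificateKeyFile /path/to/secure1.key
</VirtualHost>

# Same IP, non-standard port:
Listen 192.0.2.10:8443
<VirtualHost 192.0.2.10:8443>
    ServerName secure2.example.com
    SSLEngine on
    SSLCertificateFile    /path/to/secure2.crt
    SSLCertificateKeyFile /path/to/secure2.key
</VirtualHost>
```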
I'm not an expert with SSL or TLS, but my intuitive response would be to
modify the HTTPS protocol to establish an unsecured connection, send the
host header (ignore anything else) to pick the right certificate, and
then use that to secure the connection.  But my intuition also tells me
that this would probably open up some avenues for attacks which might
seriously degrade the effectiveness as compared to securing a connection
from the outset, or otherwise add many levels of complexity and possibly
inefficiency to ensure the connection is secured after some data was
transferred.  It would be great to have a compromise to allow some host
header data be sent to an unsecured socket for the purposes of NBVH, and
then hand off to another socket to respond securely.
It may be my naivety, but I think anything is possible if perhaps not
easy, because we tell the computer what to do, the computer doesn't tell
us what to do, or else we have given it too much power and need to step
back.  Until I find the time to educate myself about the HTTPS, SSL and
TLS protocols, and practice using something like OpenSSL and the Apache
2.x module programming, then I personally can't explore the feasibility
or implementation of such things.  But hopefully someone else beats me
to it.
Leif



Re: UNIX MPMs [ot?]

2005-02-10 Thread Leif W
Nick Kew [EMAIL PROTECTED]; [EMAIL PROTECTED]:11 GMT-5
I agree the documentation should be better.  Also we should properly 
document
the perchild-like options, since that is frequently requested.  In the
meantime, here's a list of things to look at if you want 
perchild-like:
 * Metux MPM
 * mod_ruid  (Linux only)
 * fastcgi (CGI plus)
 * suexec (for CGI)
Hi, sorry if this is off-topic, but I just want to make sure I 
understand this problem.  Last month I read an email on another list 
(suPHP) in which someone was upset about the security of Apache 2.0.x 
with all file i/o and cgi being done by a single user, and the perchild 
MPM being broken.  The frustration is that it is difficult, if not 
impossible (and potentially not even portable) to get all of these 
workarounds working together.  And the clinching belief is that these 
should all be handled in the core of Apache, or with a working MPM.

Here I post as complete a list I can think of including the new ones I 
see above.

* cgiwrap
* FastCGI
* Metux MPM
* mod_perl
* mod_php
* mod_ruid  (Linux only)
* suexec
* suphp
It's already a huge list of workarounds, and compatibility and portability
for an admin could be a nightmare.  I do not know if there are even more
security wrappers needed for other language modules.  Can anyone add to 
the list some things which might commonly be used in concert?  Is there 
any direction given from the top of the Apache group in regards to 
what gets attention?  In the message on the suPHP list, it is implied 
that there is in general a mentality that security is not a priority (at 
least regarding setuid per request as perchild MPM would like to do), 
only competing with MS/IIS.

I'm not implying anything, I don't know what to believe, so that's why I 
ask.  I'm just trying to understand where the breakdown is.  A feature 
that people want, the lack of which spawns a sloppy slew of incompatible 
workarounds, but no one around to respond and code it or fix what's 
available.  The strength of Apache was always *nix, so why abandon 
security on *nix for the sake of portability to Windows?  It's the 
natural impression given by first glance of the timeline of events, not 
an accusation.  Or is it just coincidence that someone (or many people) 
lost interest in perchild and there's been no one to pick up the slack,
and other people just happened to want to increase portability to 
windows?

I mean, I like having a windows port, because I can at least practice 
using Apache somewhat, and it expands the development platform, but I 
won't ever, ever, EVER run it on Windows in production, simply because 
I'd never run Windows in production.  Except insofar as to show Windows 
users a shining example of free software, and offer the idea of using an 
entire OS filled with shining examples of free software engineering. 
;-)  Tongue in cheek of course, with the ugly little problems such as
this code abandonment of vital features at the back of my mind.  I don't 
mean to start an OS flame war, so please don't respond with that in 
mind.  :-)  If other people would like to use Windows, it takes nothing 
away from me, I'm just stating opinion based on my own interaction and 
experience with Apache, Win, and *nix (Linux & FreeBSD).

Leif



Re: UNIX MPMs [ot?]

2005-02-10 Thread Leif W
Nick Kew [EMAIL PROTECTED]; [EMAIL PROTECTED]:15 GMT-5
On Thursday 10 February 2005 14:10, Leif W wrote:
Hi, sorry if this is off-topic, but I just want to make sure I
understand this problem.  Last month I read an email on another list
(suPHP) in which someone was upset about the security of Apache 2.0.x
with all file i/o and cgi being done by a single user, and the 
perchild
MPM being broken.
That's rather different.  If you care *at all* about security, you 
won't
be running PHP as a module.  So suexec is a complete solution there.
Does this idea extend to any other modules as well?  Are they all 
insecure simply because of Apache's design?  Is that where the security 
problem lies?  The module code cannot be run as a separate user with
fewer privileges per request?

Leif



Re: UNIX MPMs [ot?]

2005-02-10 Thread Leif W
Sander Striker [EMAIL PROTECTED]; [EMAIL PROTECTED]:35 GMT-5
From: Leif W [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 10, 2005 3:10 PM
things which might commonly be used in concert?  Is there any 
direction given
from the top of the Apache group in regards to what gets attention?
No, there is not.  The committers are free to work on what they want.
Ok, just wasn't sure how the ASF worked, some combination of directed
and volunteer effort or something.  Not "do this or else" strict
direction, but "please focus energy here if you can", a laid-back
direction.

In the message on the suPHP list, it is implied that there is in 
general a
mentality that security is not a priority
Given the way we handle security issues I don't think this remark will 
hold water.
That's quoted out of context.  Security holes get prompt attention.  I 
was referring specifically to security as presented by perchild, as 
noted in the parenthesized expression below, which was part of the same
sentence.  Sorry if my wording misconstrued the point.

(at least regarding setuid per request as perchild MPM would like to 
do),
Apparently there are a lot of people with the itch, but nobody 
scratching it.
[...]
Well, we are volunteers you know ;).  I'm sure you could find someone 
to work
on perchild on a contract basis, making your itch (one of) the
developers' itch.
Or even an external party who would submit patches.
I'd be more than happy to scratch this itch, but I haven't the coding 
ability or speed, testing environment, resources or time right now.  I'd 
do it for the fee of training me how to do it.  :p  I'll need to consult 
the reference manuals.  If I had financial resources I'd use them to 
encourage itches like this to be scratched, because when admin get burnt 
out over a missing feature, I'd like to give back that enthusiasm and 
fun and peace of mind.  Coding is hard work and people deserve something 
for the time, even if enjoyment of the coding effort is usually enough. 
FWIW, I try to pay back this specific project in the only way I can, by
helping on the users list, and occasionally trying to submit simpler code
to other projects.

Well, what is vital depends on context.  Apparently it isn't as vital, 
since
2.x is certainly used without this vital mpm.

Agreed, it would be very nice to see perchild development picked up 
again.  Or
metux integrated in the main distro (it'd need review and all that, 
and ofcourse
desire from the metux developers to do so).  For me personally, it 
isn't a big
enough itch to start scratching it.  Proxy and caching are a lot 
higher on my
personal agenda.  As are some other features I still am desperately 
seeking the
time for to work on.
There may be a discrepancy between what developers in general consensus 
think is vital, what is vital to individual developers, what admin think 
is vital, or people of different platforms, or what combinations of 
technologies are being used in concert.  I'm thinking vital in terms of 
a common problem which many experience, for whom various workarounds do 
not work that well.

To that end I am just curious about what it takes to have something of 
that magnitude eventually committed.  If no developers are currently 
interested in a topic, who reviews it?  What if someone applies to be a 
developer and says this is vital to them and a portion of the user base? 
Whether RTC or CTR, some patches of a lesser magnitude (affecting only
one module for instance) seem to fly right through, yet other patches 
hang around a long time.  I am just curious if "status quo" vs. "who you
know" plays any part in the process.  If so then it has to be planned
for and addressed by anyone attempting to make a contribution.  I'm not 
so good figuring this stuff out, so that's why I ask.

The problem is that you drag in the *nix vs Windows argument.  Why do 
we need
to bother with that at all?
Hmm, sorry, I didn't even see that happening.  I did not feel like I was 
requesting a discussion of the A vs. B, but it maybe opened the wrong 
door, and I'm sorry for that.  I mentioned something about A and B, 
thought that it might be mistaken for an argument of pro A con B, and 
stated that I didn't wish to discuss it in that context, and hoped that 
would be enough.  The initial discussion was presented to me as perchild 
mpm (security) vs winnt mpm (portability), so my initial thoughts were 
along that line.  But as I indicated, I considered the possibility that 
it was just a coincidence of events, not a directed intent.  If I say I 
prefer A to B, is that wrong?  IMO no, because I did not ask anyone to 
agree, nor ask their opinion, nor tell anyone what to prefer.  You don't 
have to talk about that then.  :-)  I am not arguing for or against an 
OS.  I just mentioned the three as the OSes to which I have had access, 
and experience, and to which platform some 3rd party solutions to 
setuid/setgid (perorphan?) are available and some are not.  Of course 
I'd like a portable solution.

Re: Permissions

2005-01-16 Thread Leif W
Jason Rigby; 2005 January 16 Sunday 04:06
Allow from all computers on the Internet,
Deny from the 10.x.x.x subnet,
Allow from a particular IP address within the 10.x.x.x subnet (ie 
10.0.0.14)
Hi,
This might have been better to start on the Users list...
However I just tried this.
Order deny,allow
Deny from 127.
Allow from 0.0.0.0
Allow from 127.0.0.1
Which worked fine.  If you "Allow from all" instead of "Allow from
0.0.0.0" then it doesn't work, which is weird, because I thought "all"
was interpreted as 0.0.0.0.  You can just substitute your 10. for my
127. and 10.0.0.14 for 127.0.0.1 and try yourself.
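Mapped onto the original 10.x question, the same pattern would look like this (an untested sketch, reusing the Allow from 0.0.0.0 form that worked for me above):

```apache
# Untested sketch: allow the Internet, deny the 10.x.x.x subnet,
# but re-allow the single address 10.0.0.14.
Order deny,allow
Deny from 10.
Allow from 0.0.0.0
Allow from 10.0.0.14
```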

Anyone else care to verify or explain the difference between "all" and
"0.0.0.0"?  I used Apache 2.0.52 + SSL for Win32 as built by
hunter.campbus.com .

Leif



Re: Bug report for Apache httpd-2.0 [2004/12/06]

2004-12-05 Thread Leif W
 [EMAIL PROTECTED], Sunday, December 05, 2004 19:23


 +-----+---+---+----------+--------------------------------------------------+
 | Bugzilla Bug ID                                                           |
 |     +---+---+----------+--------------------------------------------------+
 |     | Status: UNC=Unconfirmed NEW=New ASS=Assigned                        |
 |     |         OPN=Reopened    VER=Verified   (Skipped Closed/Resolved)    |
 |     |   +---+----------+--------------------------------------------------+
 |     |   | Severity: BLK=Blocker CRI=Critical MAJ=Major                    |
 |     |   |           MIN=Minor   NOR=Normal   ENH=Enhancement              |
 |     |   |   +----------+--------------------------------------------------+
 |     |   |   | Date Posted                                                 |
 |     |   |   |          +--------------------------------------------------+
 |     |   |   |          | Description                                      |
...
 | 7862|New|Enh|2002-04-09|suexec never log a group name.                    |
...
 |24331|New|Nor|2003-11-02|SSLPassPhraseDialog exec: always default port in a|
...
 +-----+---+---+----------+--------------------------------------------------+
 | Total  456 bugs                                                           |
 +-----+---+---+----------+--------------------------------------------------+

These two are the ones I have posted patches for, and they affect and
apply to 2.0-HEAD and 2.1-HEAD.  They are very simple, maybe 3-5 lines
each.  Help make it 454 bugs!  0.4% fewer bugs!  Woo!  :)

One patch (SSLPassPhraseDialog) makes the code work the way it is
already documented (currently always passes port 443 which is wrong).

The other patch (suexec) makes the log file more consistent with the way
it handles uname/uid and gname/gid (currently name/# and #/#, should be
name/# and name/#).  As some weary sysadmin may not notice gname/gid
12345/12345 actually belongs to evilgroup/12345.  I've left even more
detailed descriptions on each bug report's page.

I've tried my best to follow the suggestions on How to Contribute
Patches to Apache ( http://httpd.apache.org/dev/patches.html ).
Please, someone else, can you verify?  Perhaps even test the patch(es)?
Change the priority to MIN or ENH if you want.

Regarding point 2, should I ask on the users list for volunteers to test
the patches before asking the devs here to review?

Regarding point 3, style suggestions.  One patch may have an extra
newline (whitespace) after an if/else if/else block.  I'm not sure if it
100% adheres to the style on that point, but the style adheres in every
other regard.

Regarding point 3, documentation.  I'm not sure what or where it needs
documenting.

Regarding points 4 and 5, when I have time I'll check Bugzilla, and if I
can respond or fix, I'll do so.  This may be the rare occasion where I
was actually able to fix something or clean up other's patches and
update the bugzilla entries.  :p

Leif

squeak squeak





[PATCH] Two simple patches for 2.0.52, suexec.c and SSLPassPhraseDialog

2004-12-01 Thread Leif W
Hello again.  Just hoping to get two simple patches reviewed, tested and
applied.  Please take the time if you can.  Each patch changes 3-5 lines
or so, and I don't think either will break anything.  I've included
links to the bug report, patch and original mail to this list.  I've
just checked, and both bugs exist in Apache httpd 2.0.52 and 2.1 (SVN
trunk), and the patch should apply cleanly to both 2.0 and 2.1.  It
really is that simple, and fixes two small and annoying things.  :)

* Summary: suexec never log a group name.

  BUG 7862
  http://nagoya.apache.org/bugzilla/show_bug.cgi?id=7862

  PATCH 13429
  http://nagoya.apache.org/bugzilla/showattachment.cgi?attach_id=13429

  MAIL
  http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=110046435816636&w=2

  NOTE: the previous mail had the correct subject line but a mislabelled
summary ("SSLPassPhraseDialog ...")

* Summary: SSLPassPhraseDialog exec: always default port in argv

  BUG 24331
  http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24331

  PATCH 13453
  http://nagoya.apache.org/bugzilla/showattachment.cgi?attach_id=13453

  MAIL
  http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=110046149606582&w=2






Re: End of Life Policy

2004-11-20 Thread Leif W
 Paul Querna, Saturday, November 20, 2004 13:32

 I would like to have a semi-official policy on how long we will
provide
 security backports for 2.0 releases.

 I suggest a value between 6 and 12 months.

Support 2.0 for the lesser of:

*) Until the next stable release after 2.2 (2.4 or 3.0)
*) 12-24 months from 2.2 release

Rationale: Don't stop supporting 2.0 until 2.2 is widely used.  Getting
usage statistics is tricky, with people disabling the server version string.
Have a poll?  ;-)  "Widely used" should be quantifiable, but the definition
is debatable and the timeframe may not be predictable.  Say over 50%,
like 2/3 of the combined users of 2.0 and 2.2 use 2.2, 1/3 use 2.0.  Or
75/25.  Or shall we still include 1.3?  ;-)

 Many distributions will provide their own security updates anyways, so
 this would be a service to only a portion of our users.


I use a distribution, but I prefer tarballs to package hell for things
like Apache.  The distributions may patch something as quickly, but on
an older version.  It can take some months or even years before the
package uses the newer version which may have a non-security bugfix.

Anything less than a year seems like pulling the rug out from under
people.  Why stop supporting the software before it even gets widely
adopted?  How long has it been since 2.0 came out?  And there are people
still stuck on 1.3, due to valid concerns.

 As always, this is open source, and I would not stop anyone from
 continuing support for the 2.0.x branch. My goal is to help set our
end
 user's expectations for how long they have to upgrade to 2.2.

Maybe it can be done with communication through the available channels
(web, mail, tarballs)?  "We strongly urge you to migrate those old 2.0.x
or (ack) 1.3.x modules to 2.2.x within the first (6 < M < 24) months
after the 2.2.x release!"  Maybe put a timed nag message at the end of
the ./configure script: alert people of the support window, advise them
to upgrade modules.  Not necessarily explicitly dropping security
backports, which makes it look like the developers drop the ball, but
turning it around on the user, to let them know that it's them who chose
to drop the ball.
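Such a nag could be a few lines of shell at the end of configure; a minimal sketch, with a purely hypothetical end-of-support date:

```shell
# Hypothetical end-of-configure nag; the EOL date below is illustrative.
eol_nag() {
    EOL_DATE="2007-12-01"
    TODAY=$(date +%Y-%m-%d)
    # ISO dates sort lexicographically, so a plain string compare works.
    if [ "$TODAY" \> "$EOL_DATE" ]; then
        echo "NOTE: security backports for this branch ended on $EOL_DATE."
        echo "      Please plan your upgrade to the current stable release."
    fi
}
eol_nag
```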

24 months is a *** eternity though...  :p

Leif





Re: RFC for a Perchild-like-MPM

2004-11-19 Thread Leif W
 Andrew Stribblehill, Thursday, November 18, 2004 07:53

 Quoting Ivan Ristic [EMAIL PROTECTED] (2004-11-17 17:31:39 GMT):
  Paul Querna wrote:
 
Are you familiar with FastCGI? My first impression is that most of
what you envision is possible today with FastCGI, or would be
possible with some (small) additional effort.

 FastCGI is non-free. This solution also copes with things like
 mod_php and mod_perl being a different user. A Good Thing IMO.

Just to clarify: Which FastCGI are we talking about?  There are two
listed on http://modules.apache.org/ .

There's the (former?) OpenMarket's http://fastcgi.com/ (mod_fastcgi),
with the unclear license, which was last released as version 2.4.2 on
2003-11-24.  Does it still work with Apache httpd 2.0.x?  Does it work
with 2.1.x?  There's a more recent snapshot from 2004-04-14, but that is
old enough to make me wonder about compatibility.

Then there's http://fastcgi.coremail.cn/ (mod_fcgid), which is GPL,
implements the FastCGI protocol, and was last released as version 1.0 on
2004-09-14.  Is this implementation complete, efficient, and comparable to
the original mod_fastcgi?

Leif





Re: RFC for a Perchild-like-MPM

2004-11-19 Thread Leif W
 Ivan Ristic, Friday, November 19, 2004 12:42
  Leif W wrote:
 Andrew Stribblehill, Thursday, November 18, 2004 07:53
 
 Quoting Ivan Ristic [EMAIL PROTECTED] (2004-11-17 17:31:39
GMT):
 
 Paul Querna wrote:
 
   Are you familiar with FastCGI? My first impression is that most
of
   what you envision is possible today with FastCGI, or would be
   possible with some (small) additional effort.
 
 FastCGI is non-free. This solution also copes with things like
 mod_php and mod_perl being a different user. A Good Thing IMO.
 
  Just to clarify: Which FastCGI are we talking about?  There are two
  listed on http://modules.apache.org/ .
 
  There's the (former?) OpenMarket's http://fastcgi.com/
(mod_fastcgi),
  with the unclear license,

   In what sense is the licence unclear?

Unclear in the sense that one person said it wasn't free, and another
person said it was ASF compliant, and I couldn't tell the difference
after skimming the license quickly.  It wasn't abundantly clear at
first, so I wasn't making any conclusions.

Also in the sense of how it can be used.  Free or not, modify or not,
incorporate in other things, redistribute incorporated binaries from
modified or unmodified sources.  The business about using it only for
FastCGI implementations is a potential trouble spot.  What if someone
wanted to make some hybrid module.  I'm not a module developer at this
point, so I don't know if or when it would make sense to do such a thing.
But if I have a fork in one hand and an electrical outlet in the other, I
want the right to electrocute myself and see what happens.  :)

   But even if it is, I think it is worth to reuse the protocol
   alone. There are many well-tested FastCGI libraries that support
   it on the client side.

After skimming the home page, there seemed to be a clear distinction
between the code for the module and the protocol specification, where
the module code is a reference implementation of the FastCGI protocol.
That was my impression.

  which was last released as version 2.4.2 on
  2003-11-24.  Does it still work with Apache httpd 2.0.x?

   Works fine with httpd 2.0.x in my tests (mod_fastcgi 2.4.2, I
   didn't try the more recent snapshot). I have the impression that
   many people feel FastCGI is dead because there isn't much
   activity on the web site. But it seems to me the authors have
   just made the protocol (and the Apache module) do what they wanted
   it to do.

It was my impression that it was probably dead, and as you said,
possibly just complete or working, which seems like such an alien
concept in free software, where changes and activity are like heartbeats
and a pulse.  :)

  Does it work with 2.1.x?

   I don't know.

When I have time I might try 2.1 from the new shiny SVN.

  Then there's http://fastcgi.coremail.cn/ (mod_fcgid), is GPL, which
  implements the FastCGI protocol, and was last released as version
1.0 on
  2004-09-14.  Is this implementation complete, efficient, comparable
to
  the original mod_fastcgi?

   Never used that one. The web site does not say what motivated
   the developer to produce another implementation.

More toys to play with.

We now return to our regularly scheduled thread, already in progress.

Leif





Re: People still using v1.3 - finding out why

2004-11-18 Thread Leif W
 Graham Leggett, Thursday, November 18, 2004 14:43

 Hi all,

 I've been keen to do some digging for reasons why someone might need
to
 install httpd v1.3 instead of v2.0 or later.

I have no idea.  Stupidity, laziness, fear of change.

Maybe it's modules.  The bandwidth throttling module might be a valid
reason.  Some people want to get a few more years' worth out of the $500+
third-party modules they paid for 5 years ago, like ASP
and FrontPage or whatever.  Smaller businesses (the customer) may have
had very customized modules made for 1.3 and not have the cash or the
incentive (low web revenues) to invest in a programmer to convert to
2.0.

I used to work at an ISP as sysadmin three years ago, and everything was
1.3, though 2.0 was still reported as beta, but also reported as running
the main Apache website.  That's a mixed signal: either you're not
confident to stop calling it beta, or you're confident to run your
majorly, vitally important website with it.  :p

PHP is a fun language and I really enjoy using it, but I don't get the
FUD crap.  I've played with Apache 2.0 for over 2 years now (mostly
prefork MPM) and I had only one PHP4 problem, and it was not Apache, but
it was a PHP4 problem.  After reading the bug database, I saw that they
refused to acknowledge it as a PHP4 problem in the past and just blamed
Apache by default when they heard mention of Apache 2, closed bugs,
marked as invalid, refused to reopen or mark as duplicates, repeatedly
fixed the problem (several months after it was reported), repeatedly
broke it again (same types of reports across multiple
versions), and repeatedly announced fixes for the same exact problem.

This all seems retarded, to have such animosity or unfriendliness to
each other.  The server is the one that the language most often runs on,
and it is one of the most popular languages to ever run on the server.
It would seem like a naturally symbiotic relationship with a huge
incentive to cooperate.  Eventually, just stop offering 1.3 for
download, stop patching 1.3, remove the ability to file 1.3 bug reports,
and I suppose people would upgrade, but it probably wouldn't make any
friends.  ;-)

Leif





Re: Branching and release scheduling

2004-11-16 Thread Leif W
 On: Tue, 16 Nov 2004 07:55:13 PM EST, Jim Jagielski wrote
 
  On Nov 16, 2004, at 3:16 PM, Manoj Kasichainula wrote:
  
  We had a good discussion over lunch today on our release processes and 
  how to have stable releases while making new feature development as 
  fun and easy for the geeks as possible.
 
 I find it somewhat hard to get excited by 2.x development because
 2.1 almost seems like a black hole at times... You put the energy
 into the development but have to wait for it to be folded into
 an actual version that gets released and used.

[snip]

 Stability is great, but we should be careful that we don't unduly
 sacrifice developer energy. This may not be so clear, so feel free
 to grab me :)

Hello,

Yes, I've lurked on this list barely a week, and my only source contributions
are very minor bugfixes (see my PATCH nags and s/CVS/SVN/g).  However I was
recently surfing around (for no apparent reason) and stumbled upon the GCC
Development Plan ( http://gcc.gnu.org/develop.html ), which seems relevant to
the current discussion.

Of particular note is the schedule section, which sets a definite time frame
of two months per release.  IMO this shows commitment to the project,
keeps everything moving forward, and would avoid the tendency to
procrastinate.  It's a powerful thing to put something in writing.  I'm not a
GCC developer, so I'm not biased, and have no experience with the plan in
practice.  I am however a user of GCC, and I do notice new features every few
months.

The idea of constant development seems unbalanced.  If it's developed but
never tested or merged or released or stabilized or documented, it is all time
wasted as far as the user is concerned, because they will never know about any
of it unless there's a link to download a release on main site's downloads
page.  Who is going to know the code better than the person who writes it?

If everyone is constantly playing with new features, who does the work of
writing good docs, testing, merging, and fixing bugs?  Development isn't even
1/2 of the job, and as such it needs to be balanced with all of the rest. 
Otherwise you have software which tries to do too much and doesn't quite
deliver everything it promises.

On the other hand, I know how it is, when I go to bed sleepy, thinking of some
code feature or problem, wake up a few times at night and think of it, and
have some ideas in the morning, which maybe disappear forever unless I make
use of them.  Sometimes just making a few notes isn't enough.  This is the
argument for constant development?  To have the most flexibility and
convenience to continually capture as many new ideas as possible?

The balance often seems to be three stages: develop & merge (alpha, odd
minors), freeze & test (beta, odd minors), bugfix only (gamma, even minors).
So then the question again: how to keep releasing on a regular schedule?  For
each piece of new code that is developed or merged, it will need to be
maintained later on (for documentation, testing, and bug fixes), so plan
ahead.  Is the original coder going to commit to that work, or would they
prefer adding new features?  If they want to develop, then they must find a
maintainer to work with the developers with write-access.

Are there enough qualified and trusted people with write-access to the source
repository to review code submissions?  Let's not overload them with a bunch
of abandoned projects or too much new code from one person.  Are there any
prequalifications like has-docs, has-maintainer(s), needs-testing, has-orphans
or needs-updates (with respective threshold values to allow new code but
prevent a single developer from adding too much new code until they finish
their old code), which could let the maintainer focus on code which is ready
to be added or merged?  Or is submission-throttling a bad idea, and why?

Seems like there's a lot of code with a lot of easy-to-fix bugs and not enough
people for the grunt work of reviewing, responding and ultimately closing all
of them, but plenty of people to keep pushing new code development.  Result:
old bugs don't get fixed, new code doesn't get merged, no joy all around.

I look forward to constructive responses if any.  I'm just throwing this out
there for the heck of it.

Thanks as always for all the hard work: thinking of good ideas, implementing
new policies and code to strengthen the development infrastructure, and the
resultant excellent software, and documentation which makes the software come
alive for the rest of us.

Leif




[PATCH 13453] Bug 24331: SSLPassPhraseDialog exec: always default port in argv

2004-11-14 Thread Leif W
Summary: SSLPassPhraseDialog exec: always default port in argv

BUG 24331
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24331

PATCH 13453
http://nagoya.apache.org/bugzilla/showattachment.cgi?attach_id=13453

The bug still exists in 2.0.52.  I first noticed this bug in 2.0.49,
tested with 2.0.48 and it was there as well, and the original reporter
patched against 2.0.47.  I have not tested how far back it goes.

I simplified the original patch.  Now only 3 lines are changed: 1
modified, 2 added.  Now it first checks s->addrs->host_port and then
s->port.  s->addrs is a list, but the patch only looks at the first
host_port.  I don't know if or when this might be a problem.  But the
results for me are more correct with the patch.

It appears to work fine, but is not heavily tested, so I seek tests and
comments.  This is an old bug which could seriously limit multi-port
(non-443) SSL configurations.  This is simple to fix with no side
effects, so I'd like to see testing and feedback and an eventual CVS
commit.  :-)

SSLPassPhraseDialog exec:/path/to/pass_phrase_script
# which logs its actions, as shown below: args ( host:port, method )

WRONG: before patch, port is shown as 443

[EMAIL PROTECTED]:59:39]uid=0(root) gid=0(root) groups=0(root)
sent passwordargs ( some-host:443, RSA )

RIGHT: after patch, real port of 4301 is correctly shown

[EMAIL PROTECTED]:25:44]uid=0(root) gid=0(root) groups=0(root)
sent passwordargs ( some-host:4301, RSA )





[PATCH 13429] Bug 7862: suexec never log a group name.

2004-11-14 Thread Leif W
Summary: suexec never log a group name.

BUG 7862
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=7862

PATCH 13429
http://nagoya.apache.org/bugzilla/showattachment.cgi?attach_id=13429

The bug still exists in 2.0.52.  I first noticed this bug in 2.0.52, but
the original reporter opened the bug in 2002-04-09 from CVS and the
version was 2.0-HEAD.  I have not tested how far back it goes.

I simplified the original patch.  The logic is clearer and less
redundant than the original poster's patch, yet achieves the desired end
result.

It appears to work fine, but is not heavily tested, so I seek tests and
comments.  This is apparently an old bug; the fix, although mostly
cosmetic, makes the suexec_log file easier to read in the wee hours of the
morning, when a human might not recognize a numeric group id, but would
recognize the alphanumeric group name.  This is simple to fix with no
side effects, so I'd like to see testing and feedback and an eventual
CVS commit.  :-)

Testing:

suexec configured, SuexecUserGroup set in VirtualHost context, looking
in suexec_log, running a simple script id.sh, with filesystem
permissions and owners to match the SuexecUserGroup specification and
suexec.c qualifications and sanity checks.

===
#! /bin/bash

cat <<END_OF_HTML
Content-Type: text/plain

id: `id`
END_OF_HTML
===

WRONG: before patch, actual_gname is always the numeric group id - gid:
(1248/1248)

[2004-11-13 01:54:14]: uid: (1248/someuser) gid: (1248/1248) cmd: id.sh

RIGHT: after patch, actual_gname shows the alphanumeric group name -
gid: (1248/someuser)

[2004-11-13 01:55:23]: uid: (1248/someuser) gid: (1248/someuser) cmd:
id.sh