Re: Puzzling News

2005-03-14 Thread Ben Laurie
William A. Rowe, Jr. wrote:
Fascinating reading (see the bottom two tables of these pages:
http://www.securityspace.com/s_survey/data/man.200501/srvch.html?server=Apache&revision=Apache%2F1.3.33
http://www.securityspace.com/s_survey/data/man.200501/srvch.html?server=Apache&revision=Apache%2F2.0.52
What is notable is that the number of users adopting 1.3.33 in place of 2.0 far 
outweighs the number moving from 1.3 to 2.0.
One could argue that we aren't pushing 2.x releases out fast enough.
I'd argue the opposite: we aren't refining 2.x sufficiently for folks to gain 
an advantage over using 1.3.  It simply isn't more effective for them to use 
2.0 (having tried both).
Consider this as we prepare to announce 2.1-beta to the world.  Are folks going 
to be more impressed with 2.1-beta (in spite of the wrinkles that a beta 
always introduces) than with what they used before?
For anyone who wants to argue that this is a PHP-caused anomaly, note also
http://www.securityspace.com/s_survey/data/man.200501/apachemods.html
And in sunny news, about 9.6% of domains are hosted on Apache/2[...],
with another 14.24% of Apache users not revealing their version 
(1.x vs 2.x).
FWIW, I have yet to run 2.x in anger. Not sure why you find this puzzling.
--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff


Re: piped log bug?

2005-03-14 Thread Kiran Mendonce
Hi Arkadi,
We've run into a similar problem. I didn't quite understand the second 
solution that you suggested. How would closing the read side of the pipe 
in the httpd child processes help in solving this problem?

Thanks,
Kiran
Arkadi Shishlov wrote:
On Tue, Mar 08, 2005 at 01:22:46PM -0500, Jeff Trawick wrote:
 

On Tue, 08 Mar 2005 19:00:05 +0200, Arkadi Shishlov [EMAIL PROTECTED] wrote:
   

So is there any progress on the issue, or is nobody interested?
 

me interested?  yes
me have time right now? no
The more investigation work you can do, the better.
   

If you agree with the 'diagnosis' I can try to cook two patches to verify 
it.
1. Do not kill the piped log process; let it read the pipe till EOF. Or a patch
   to cronolog to ignore SIGTERM.
2. Close the pipe's read side in the children. Is it possible to do this in a
   clean way via apr*register/whatever? (See the sketch below.)
arkadi.
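As a rough illustration of the second option, here is a minimal sketch of the APR pool-cleanup mechanism being alluded to. The function names, and how the pipe's read handle would actually reach each child, are hypothetical; this is not the real httpd piped-log code.

/* Hypothetical sketch only: arrange for the read side of a piped-log pipe
 * to be closed in an httpd child by registering a cleanup on the child's
 * pool.  "read_end" stands in for whatever handle the real piped-log code
 * keeps around.
 */
#include "apr_pools.h"
#include "apr_file_io.h"

static apr_status_t close_pipe_read_side(void *data)
{
    /* drop this process's copy of the read-side file descriptor */
    return apr_file_close((apr_file_t *)data);
}

static void register_read_side_close(apr_pool_t *pchild, apr_file_t *read_end)
{
    /* the plain cleanup runs when the pool is destroyed;
     * apr_pool_cleanup_null makes the exec-time child cleanup a no-op */
    apr_pool_cleanup_register(pchild, read_end,
                              close_pipe_read_side,
                              apr_pool_cleanup_null);
}

Whether closing the read side actually lets the log process see EOF in this scenario is exactly the question Kiran raises above; the sketch only shows the registration mechanics.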
 




Re: Puzzling News

2005-03-14 Thread William A. Rowe, Jr.
At 05:48 AM 3/14/2005, Ben Laurie wrote:
William A. Rowe, Jr. wrote:
Fascinating reading (see the bottom two tables of these pages:
http://www.securityspace.com/s_survey/data/man.200501/srvch.html?server=Apache&revision=Apache%2F1.3.33
http://www.securityspace.com/s_survey/data/man.200501/srvch.html?server=Apache&revision=Apache%2F2.0.52
What is notable is that the number of users adopting 1.3.33 in place of 2.0 
far outweighs the number moving from 1.3 to 2.0.

FWIW, I have yet to run 2.x in anger. Not sure why you find this puzzling.

In that particular window (of a month) more folks took Apache 2.0.x
servers down in favor of 1.3.x servers, than those who upgraded to
2.0.x from 1.3.x.

I'm not suggesting everyone has a need for 2.0 (clearly some cases
call for it, and just as clearly, it's impractical for others.)
And I'm not suggesting some specific growth rate is 'good' or 'bad'.

I am concerned that 

* what we deliver in 2.0.x is just as usable and stable as 1.3.33.
  This report suggests, to some degree or in some cases, it isn't so.

* when we 'announce' a new 1.3.x release, we are careful to note
  that it isn't an improvement over 2.0.x, and if 2.0.x wasn't affected
  by the bugs that the 1.3 release addresses, we note that as well, so
  users aren't left reasoning "2.0.foo is four months old and they just
  announced this new 1.3.bar; I'd better upgrade so I get the
  bug/security fixes."






Re: Puzzling News

2005-03-14 Thread Joe Schaefer
William A. Rowe, Jr. [EMAIL PROTECTED] writes:


[...]

 In that particular window (of a month) more folks took
 Apache 2.0.x servers down in favor of 1.3.x servers,
 than those who upgraded to 2.0.x from 1.3.x.

That may be explainable by someone familiar with the 
hosteurope.de anomaly in the January 2005 statistics.
The meteoric rise of 1.3.23 is perfectly correlated with
that ISP (I'd guess they're moving some large portfolio 
of domains to SuSE 8.0, which ships 1.3.23).

<URL:http://www.securityspace.com/s_survey/server_graph.html?type=http&domaindir=&month=200502&serv1=QXBhY2hlLzEuMy4yMw==>

<URL:http://www.securityspace.com/s_survey/data/man.200502/ISPreport.html?plot=hosteurope.de>


-- 
Joe Schaefer



httpd-2.1.3-beta under a large DDOS attack ... not good.

2005-03-14 Thread apache-dev

*long but interesting, I hope*

I had the displeasure of coping with a large DDOS attack this weekend
and tested out how Apache 2.1.3-beta did. It didn't do very well at all.

I realize this list is for discussion of changes to the source code and
related issues, but I'm hoping this is still appropriate and would be
interested in feedback.

The attack was from a botnet comprising, at any one moment, over 6000
unique IPs. New IPs were joining fairly constantly, a dozen every few
minutes at least, and old ones were rolling off. The clients were all
Windows XP and 2k machines, and judging from the ping times, many were
on dialup or other dynamic IPs. They were getting their current attack
targets from a PHP program on the web server of a rooted box, not from
an IRC-type control system.

The zombie army had a rather unusual attack. They would send multiple SYN
packets in order to try to open a connection on the web server;
syn_cookies coped OK with this. If they succeeded in opening a
connection to port 80, they would then send one random character at a
time, about one second apart. Each character came in its own TCP
packet, so the TCP PSH flag was set. If you had an infinitely powerful
web server, you would therefore soon be handling anywhere from 10,000 to
50,000 active connections, all doing nothing much - why more than the
number of unique IPs? Because the zombies were doing this in parallel,
so one zombie could hold open more than one connection. At the same
time, they were also flooding our IP with fragmented ping packets of
1480 bytes each in order to choke up our pipe. About 9/10ths of the
traffic by volume was fragmented pings, and about 9/10ths of the traffic
by number of packets was SYNs or 1- or 2-byte data packets, along with
associated RSTs and so on.

The hardware on the machine coped OK with all this - about 60 Mbit of
incoming traffic, and Linux 2.4 with a NAPIfied latest e1000.o - but one
CPU was pretty much flat out picking up packets from the card.
Unfortunately, if more than 3000 IPs were added to netfilter as DROPs,
the box would start to fall behind (overruns on the card), so
blacklisting all the bad IPs was a non-starter. Even picking out the bad
IPs wasn't easy, as they initially look like normal open requests.

OK, so how did the latest Apache source cope with this, using the
worker MPM? Not too well. I tried a number of different approaches. The
first thing I did was reduce the Timeout to 1 second; in this way I
hoped to quickly drop any connections that were dribbling characters.
Unfortunately, with zombies sending characters one per second, Apache
did not drop the connections fast enough; a zombie could keep a slot
open for 5-10 seconds until it got kicked.

Still, with my Apache configured for 6000 max clients, spread over 60
httpds with 100 threads each, server-status would soon show 4000-5000
active connections, and the server could still serve legit requests
(hardly difficult - I was serving 302 redirects with mod_rewrite to an
unmolested IP address in order to move users).

HOWEVER, even though server-status showed the config was stable like
this - as many new connections coming in as old ones dying off - and
although the server was functional, the memory on the box was being
consumed at a crazy rate. Within 40 seconds, over 1GB of physical memory
had vanished, all sucked down by Apache, and unless all processes were
immediately killed, the box would move into swap space and become
totally unresponsive. (The box was an SMP Xeon with 2GB of memory.) So
I had to kill and restart Apache every 50 seconds.

It also did not matter what I did with MaxRequestsPerChild, 1 or 50; I
was not getting memory back.

Lastly, no logging of 408 (timeout) errors was happening. I could
telnet to Apache, sit and wait, get kicked after 1 second, and get
no 408 log line.

Apart from that issue, Apache was crashing, especially if I tried to
configure for more than 5000 clients. I would either get out-of-memory
errors when creating threads, or other bad error messages relating to
one child that would cause the entire server to shut down.

Here is an example of the scoreboard; notice that the server has been up for
just 20 seconds:

   Current Time: Sunday, 13-Mar-2005 14:54:39 EST
   Restart Time: Sunday, 13-Mar-2005 14:54:18 EST
   Parent Server Generation: 0
   Server uptime: 20 seconds
   3598 requests currently being processed, 987 idle workers

RR..C..CCRRRCRRR.RC.R.R..RRRCCCR..CRR.R.CRRR
CRR.RR.CRC.CCRCC..CCR..CC.CC.CCCC..CCR.C.R.RR.R.CC.C
CC..CC..CC.CR.C.R....RR.R.CC.CCRC..C.CR..C...CC..C.C
..C.CCCRRCRCCRRRCRCRRRCCRCCCRRCC
RRCCRCCCRCCRRRCCRRCCRRCRCRCR
RCCCRRR_RRCRCC_CRCCRCRRRCRCCRRCRCRCRRCCRCCRC

Here is an example of the memory usage:

             total       used       free     shared    buffers     cached
Mem:       2063892    1301084     762808

[PATCH] subrequests don't inherit bodies

2005-03-14 Thread Greg Ames
A customer reported a problem where their back end app would hang until a read 
timed out.  The main request was a POST which had a request body that was 
read normally.  The POST response contained an SSI tag that caused a subrequest 
to be created.  The subrequest was forwarded to an app on a back end server. 
This forwarded subrequest contained a Content-Length header, so the back end 
server/app was expecting a request body that didn't exist, hence the read timeout.

As it turns out, we clone all of the main request's input headers when we create 
the subrequest, including C-L.  Whacking the subrequest's C-L header fixes the 
hang.  Since the main request's body could also have been chunked, we should 
probably remove the subrequest's Transfer-Encoding header as well.

There could be other headers that don't make sense for subrequests without 
bodies, such as gzip/deflate headers.  But I'm concerned about adding too much 
to a fairly common code path.  A few thoughts for changes to subrequest creation:

1. do a quick test for the main request containing a body.  if true (pretty rare 
case), remove all headers pertaining to request bodies.

2. generate a minimal set of headers from scratch that make sense for the 
subrequest, rather than cloning them from the main request

3. (attached) just remove the headers that say a request body exists.
Thoughts?
Greg
Index: server/protocol.c
===================================================================
RCS file: /m0xa/cvs/phoenix/2.0.47/server/protocol.c,v
retrieving revision 1.15
diff -u -d -b -r1.15 protocol.c
--- server/protocol.c   5 Nov 2004 15:21:41 -0000      1.15
+++ server/protocol.c   14 Mar 2005 21:01:37 -0000
@@ -996,7 +996,14 @@
 
     rnew->status          = HTTP_OK;
 
-    rnew->headers_in      = r->headers_in;
+    rnew->headers_in      = apr_table_copy(rnew->pool, r->headers_in);
+
+    /* we can't allow the subrequest to think it owns a request body
+     * which the main request has already read.
+     */
+    apr_table_unset(rnew->headers_in, "Content-Length");
+    apr_table_unset(rnew->headers_in, "Transfer-Encoding");
+
     rnew->subprocess_env  = apr_table_copy(rnew->pool, r->subprocess_env);
     rnew->headers_out     = apr_table_make(rnew->pool, 5);
     rnew->err_headers_out = apr_table_make(rnew->pool, 5);


Re: httpd-2.1.3-beta under a large DDOS attack ... not good.

2005-03-14 Thread Paul Querna
I think it is slightly deceptive to say 2.1.3-beta doesn't handle a DDoS 
attack very well -- 1.3.x or 2.0.x would not do any better.

[EMAIL PROTECTED] wrote:
*long but interesting, I hope*

A few comments:
1) There was a memory leak in the core_input_filter.  It has been fixed 
in /trunk/, but was still present in 2.1.3-beta and 2.0.53.  This fix should 
stop the leak triggered by sending one character at a time:
http://issues.apache.org/bugzilla/show_bug.cgi?id=33382

So,
 I had to kill and restart apache every 50 seconds.
2) That won't help Apache's performance.  If you are running out of RAM, 
lower your MaxClients or get more RAM.  Killing Apache just forces it to 
reallocate RAM from the OS; you want to reach a steady state. 
But I guess that since the version you were running did have a memory leak, 
nothing else could be done.

3) The Event MPM might handle this load better, since it could pass off 
the one-char-a-second requests to the event thread.

 * Apache2 can handle 16000 active open connections on a reasonable sized
 box, at least if they are all bogus and going to be rejected, without
 recompilation of glibc.
16,000 isn't a problem if you have more RAM (2GB won't cut it, I think).
 * More than one kind of timeout can be set. For example, I would have
 liked to have set a request phase timeout of 0.5 seconds or a total
 request phase timeout of 2 seconds (not an idle timeout of 1 second).
I agree, a total request phase timeout could be useful.
 * A flag for rejecting slow writers or peculiar ones (malformed garbage
 gets you kicked sooner).

Sometimes it is hard to decide.  Is it a DDoS client, or just a user on 
a 14.4 modem?

 * Graceful non-crashing behavior when thread resources of one kind or
 another are exceeded.
I don't believe there is a 'graceful' way to handle Out-Of-Memory 
conditions.  Suggestions are welcome.

Thanks for the interesting email.
-Paul


Re: [PATCH] subrequests don't inherit bodies

2005-03-14 Thread Joe Schaefer
Greg Ames [EMAIL PROTECTED] writes:

[...]

 As it turns out, we clone all of the main request's input headers when
 we create the subrequest, including C-L.  Whacking the subrequest's
 C-L header fixes the hang.  Since the main request's body could also
 have been chunked, we should probably remove the subrequest's
 Transfer-Encoding header as well. 

Shouldn't you remove Content-Type also?

-- 
Joe Schaefer



feature proposal

2005-03-14 Thread Jie Gao
Hi All,

Apache is already passing client IP addr to the backend server via a
mechanism of headers:

X-Forwarded-For
X-Forwarded-Host
X-Forwarded-Server

The difficulty is that very often the backend server is an Apache
server from a vendor, and any changes to the server will void support.
There are also circumstances in which you simply can't recompile
it.

It would be very helpful if Apache had configuration directives in the
core to get those headers (with conditions) in the server configuration
so that ACLs and logging based on the real IP addresses can work.

Thanks very much,



Jie


Re: feature proposal

2005-03-14 Thread Joshua Slive

On Tue, 15 Mar 2005 13:25:52 +1100 (EST), Jie Gao
[EMAIL PROTECTED] said:
 Hi All,
 
 Apache is already passing client IP addr to the backend server via a
 mechanism of headers:
 
 X-Forwarded-For
 X-Forwarded-Host
 X-Forwarded-Server
 
 The difficulty is that very often the backend server is an Apache
 server from a vendor, and any changes to the server will void support.
 There are also circumstances in which you simply can't recompile
 it.
 
 It would be very helpful if Apache had configuration directives in the
 core to get those headers (with conditions) in the server configuration
 so that ACLs and logging based on the real IP addresses can work.

You can do this already, with a tiny bit of work.

For the logs, replace %h with %{X-Forwarded-For}i in your LogFormat.

For access restrictions
SetEnvIf X-Forwarded-For ^123\.456\.789\.123$ badguy
Order allow,deny
Allow from all
Deny from env=badguy

Not quite as simple and flexible (you can't do reverse lookups on IPs,
for example), but it seems to me that making it easy to simply replace
REMOTE_HOST with X-Forwarded-For could lead to security problems.  There
is probably a module that will do it for you, however.

Joshua.
-- 
Joshua Slive
[EMAIL PROTECTED]



Re: feature proposal

2005-03-14 Thread Jie Gao



On Mon, 14 Mar 2005, Joshua Slive wrote:

 Date: Mon, 14 Mar 2005 22:20:39 -0500
 From: Joshua Slive [EMAIL PROTECTED]
 Reply-To: dev@httpd.apache.org
 To: dev@httpd.apache.org
 Subject: Re: feature proposal


 On Tue, 15 Mar 2005 13:25:52 +1100 (EST), Jie Gao
 [EMAIL PROTECTED] said:
  Hi All,
 
  Apache is already passing client IP addr to the backend server via a
  mechanism of headers:
 
  X-Forwarded-For
  X-Forwarded-Host
  X-Forwarded-Server
 
  The difficulty is that very often the backend server is an Apache
  server from a vendor, and any changes to the server will void support.
  There are also circumstances in which you simply can't re-recompile
  it.
 
   It would be very helpful if Apache had configuration directives in the
   core to get those headers (with conditions) in the server configuration
   so that ACLs and logging based on the real IP addresses can work.

 You can do this already, with a tiny bit of work.

 For the logs, replace %h with %{X-Forwarded-For}i in your LogFormat.

 For access restrictions
 SetEnvIf X-Forwarded-For ^123\.456\.789\.123$ badguy
 Order allow,deny
 Allow from all
 Deny from env=badguy

 Not quite as simple and flexible (you can't do reverse lookups on IPs,
 for example), but it seems to me that making it easy to simply replace
 REMOTE_HOST with X-Forwarded-For could lead to security problems.  There

Yes, there is a security concern with that setup. I can only trust
X-Forwarded-For when the request is proxied from my front-end server.

Come to think of it, this feature is a bit tricky to add: on the one
hand, Apache knows whom it is talking to, and on the other hand, it needs
to let the ACL mechanism know that the client is really someone else.

 is probably a module that will do it for you, however.

I could write the module myself, but the point is that I cannot touch (read:
recompile) the backend server.

Regards,


Jie


Rolling 2.1.4...

2005-03-14 Thread Paul Querna
I would like to roll the 2.1.4 alpha right after APR 1.1.1 is released.
I plan on rolling APR tonight or Tuesday morning.  If there aren't any 
problems, I am hoping to create 2.1.4 on Thursday.  Any big outstanding 
issues?

Thanks,
-Paul


Re: APR OS400 sources

2005-03-14 Thread Jeff Trawick
On Mon, 14 Mar 2005 19:35:52 +0100 (GMT-1), Damir Dezeljin
[EMAIL PROTECTED] wrote:

 I'm wondering if the APR OS400 port sources are publicly available? Does
 anyone know anything about this? What about the Apache sources?

I have no idea whether the OS/400 patches and/or full sources they use are
publicly available.

 I'm asking because I'm developing an application on OS400. I'm using
 APR as a portability library; however, from time to time I need to use the OS
 API. Having the APR sources would be really helpful ;) E.g. I need to get a
 thread ID for my logging module implementation (the logging module will
 not be using APR).
 
 On e.g. Linux it is enough to call getpid(). Unfortunately this call
 returns the same process ID for all threads on OS400. Any idea?

you're relying on a Linux wart which has been fixed in recent Linux ;)

 
 Anyway ... is it possible to use apr_os_thread_current() for this purpose?
  I was unable to use it, as I didn't find the declaration of the
 apr_os_thread_t structure.
 Will the APR functions for 'decoding' thread IDs represent a big performance
 hit in my application?

Here's what Apache's mod_log_config does for logging the thread id:

static const char *log_pid_tid(request_rec *r, char *a)
{
    if (*a == '\0' || !strcmp(a, "pid")) {
        return apr_psprintf(r->pool, "%" APR_PID_T_FMT, getpid());
    }
    else if (!strcmp(a, "tid")) {
#if APR_HAS_THREADS
        apr_os_thread_t tid = apr_os_thread_current();
#else
        int tid = 0; /* APR will format 0 anyway but an arg is needed */
#endif
        return apr_psprintf(r->pool, "%pT", &tid);
    }
    /* bogus format */
    return a;
}
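
For a logging module outside of httpd, a minimal sketch along the same lines might look like the following; the function name and the use of stderr are illustrative only, and it assumes an apr_pool_t is available for apr_psprintf():

/* Hypothetical sketch: tag a log line with the current thread id, treating
 * apr_os_thread_t as opaque and letting APR's "%pT" formatter render it
 * (it takes a pointer to the value).
 */
#include <stdio.h>
#include "apr_pools.h"
#include "apr_strings.h"     /* apr_psprintf() */
#include "apr_portable.h"    /* apr_os_thread_t, apr_os_thread_current() */

static void log_with_tid(apr_pool_t *p, const char *msg)
{
#if APR_HAS_THREADS
    apr_os_thread_t tid = apr_os_thread_current();
#else
    int tid = 0;             /* no thread support: APR formats this as 0 */
#endif
    fprintf(stderr, "[tid %s] %s\n", apr_psprintf(p, "%pT", &tid), msg);
}

Since this goes through the same apr_vformatter code that mod_log_config uses, it shouldn't be a noticeably bigger performance hit than any other formatted log call.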