Glenn wrote:
On Fri, Feb 06, 2004 at 10:37:51AM -0500, [EMAIL PROTECTED] wrote:
but Joshua has excellent points about virtualness being a property of the
handler. Yes, the server-status handler should know that it is virtual,
but the handler hook is too late to skip the directory walk. But
Greg Marr wrote:
I'm only changing Location ... Directory is unaffected.
Well, that's not entirely true. The Directory is affected indirectly,
because it no longer applies. The behavior currently is: it applies to
everything it matches. This would change it to: it applies to
everything
Enrico Weigelt wrote:
currently we're using multiple threads on both the multiplexer and
processor side, but due to some MT problems we're thinking of switching
away from that to multiprocessing.
what would you think - does MT bring real performance benefits
over MP?
It definitely cuts down on memory
Graham Leggett wrote:
I am having a moment: I am trying to build httpd statically, but I'm
struggling to find out how it is done.
The ./configure script can be configured to build all binaries
statically using --enable-static-[binary], except for httpd for some
reason.
Can anyone tell me how
I'm interested to know how httpd 2.x can be made more scalable. Could we serve
10,000 clients with current platforms as discussed at
http://www.kegel.com/c10k.html , without massive code churn and module breakage?
I believe that reducing the number of active threads would help by reducing
the
Paul Querna wrote:
On Tue, 2004-06-08 at 10:23 -0400, Greg Ames wrote:
Here is the patch - http://apache.org/~gregames/event.patch .
Very Neat :D
thanks!
I don't think everyone on this list is aware of this, but I have an
outstanding patch[1] for apr_pollset to add both KQueue and sys_epoll
...since 29-Jun-2004 11:10:49 PDT. It looks good to me. If you disagree,
please let us know.
Greg
...since Friday, 03-Sep-2004 07:56:21 PDT. This is httpd-2.0.51-rc2, Sander's
latest tarball. It looks fine to me but if you notice any undesirable behavior
please let us know.
Greg
Greg Ames wrote:
...since Friday, 03-Sep-2004 07:56:21 PDT. This is httpd-2.0.51-rc2,
Sander's latest tarball. It looks fine to me
actually, I see this in the log while shutting down the 2.0.50 build:
httpd in free(): warning: page is already free
httpd in free(): warning: chunk is already
Paul Querna wrote:
On Tue, 2004-09-14 at 08:14 +0200, Andr Malo wrote:
I'm rather for removing the whole crap from the default config and simplifying
as much as possible.
A 30 KB default config, which nobody outside this circle here
really understands, isn't helpful - especially for beginners.
+1
Jean-Jacques Clar wrote:
This one is working as expected on my server. I tested most
of the paths and it looks fine. Added comments to the previous
patch. Would like some double-checking and feedback if possible.
Can you help me/us understand the following a little better? If it is possible
for
Jean-Jacques Clar wrote:
Can you help me/us understand the following a little better?
Let's look at all the cases where the cleanup function is called or the
object is accessed:
_The cleanup bit could only be set under the protection of the global
mutex_. This is critical.
Thanks for the
Jean-Jacques Clar wrote:
Should I then go ahead and commit my patch to the 2.1 tree?
This section from decrement_refcount() makes me nervous:
-if (!obj->cleanup) {
+if (!(apr_atomic_read(&obj->refcount) & OBJECT_CLEANUP_BIT)) {
cache_remove(sconf->cache_cache, obj);
-
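The cleanup-bit scheme being debated here can be sketched with C11 atomics standing in for APR's. This is a rough illustration, not the actual mod_mem_cache code: OBJECT_CLEANUP_BIT and the refcount field come from the thread, while the struct layout and function names are assumptions.

```c
/* Sketch (not mod_mem_cache itself): pack a cleanup flag into the high
 * bit of an atomic refcount, so flag and count are read and updated as
 * one word - avoiding the torn-update races discussed in the thread. */
#include <stdatomic.h>

#define OBJECT_CLEANUP_BIT 0x80000000u

typedef struct {
    atomic_uint refcount;   /* low 31 bits: count; high bit: cleanup flag */
} cache_object_t;

/* Mark the object for cleanup; returns the previous word. */
static unsigned int obj_set_cleanup(cache_object_t *obj)
{
    return atomic_fetch_or(&obj->refcount, OBJECT_CLEANUP_BIT);
}

/* Drop one reference; returns 1 if the caller must free the object
 * (the count hit zero while the cleanup bit was set). */
static int obj_release(cache_object_t *obj)
{
    unsigned int old = atomic_fetch_sub(&obj->refcount, 1u);
    return ((old & ~OBJECT_CLEANUP_BIT) == 1u)
           && (old & OBJECT_CLEANUP_BIT) != 0u;
}
```

Because both pieces of state live in one atomic word, no thread can observe the count at zero without also seeing whether cleanup was requested.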
Jean-Jacques Clar wrote:
What about calling memcache_cache_free() instead of the
apr_atomic_set(). The refcount must be more than 1, we
hold the mutex. It should work just fine..
What do you think?
That looks better as far as not losing an update from an unlocked thread.
I'm still trying to learn
looks good.
Bill Stoddard wrote:
static int remove_url(const char *key)
[...]
-if (obj) {
-obj->cleanup = 1;
+if (!apr_atomic_dec(&obj->refcount)) {
+/* For performance, cleanup cache object after releasing the lock */
+cleanup = 1;
this could be:
Bill Stoddard wrote:
+ * memcacache_cache_free is a callback that is only invoked during by a
thread
s/memcacache/memcache/
during ?? by
+ * has been ejected from the cache. decrement the refcount and if the
refcount drop
+ * to 0, cleanup the cache object.
s/drop/drops/
Greg
Bill Stoddard wrote:
-/* If obj->complete is not set, the cache update failed and the
- * object needs to be removed from the cache then cleaned up.
- */
-if (!obj->complete) {
-if (sconf->lock) {
-apr_thread_mutex_lock(sconf->lock);
-}
-/* Remember,
Jean-Jacques Clar wrote:
To remove the 2 noted possible race situations, in addition to another
possible
one in store_body(),
ooops! I missed that one
we could add the following 2 recursive functions,
that have to be called under the protection of the lock (it is now for
Craig,
If I understand what you want to do, I've done a little like that with
modifications mostly to the worker MPM for httpd-2.1. The patch is here -
http://apache.org/~gregames/event.patch .
But I was primarily interested in low hanging fruit - threads that are between
HTTP requests. I
--- David Jones [EMAIL PROTECTED] wrote:
zOS needs to compile with extra CFLAGS in order to link correctly.
After revisions 153273/153266 to ./Makefile.in all compile and link flags
are lost as
buildmark.c is made without them:
concept sounds fine but...
--- Makefile.in.orig Wed Jan 17
--- steve [EMAIL PROTECTED] wrote:
I use it too, and have meddled with it enough at a source level to feel
comfortable running it. It has obvious, documented, problems (don't use
it with mod_ssl),
I didn't make it clear earlier -- I do use the event mpm.
Successfully. What *is* the
--- Henri Gomez [EMAIL PROTECTED] wrote:
Hi to all,
I'm trying to adapt mod_jk to i5/OS v5r4 and see the following in
mod_jk.log (debug mode)
[Tue Apr 17 16:23:44 2007] [6589:0038] [debug] jk_uri_worker_map.c
(423): rule map size is 0
[Tue Apr 17 16:23:44 2007] [6589:0038] [error]
check_pipeline: use AP_MODE_SPECULATIVE to check for data in the input
filters
to accommodate mod_ssl's input filter. AP_MODE_EATCRLF is essentially a
no-op
in that filter.
EATCRLF was used here for a specific reason though - the fact that many
browsers (says ap_core_input_filter)
--- Ruediger Pluem [EMAIL PROTECTED] wrote:
check_pipeline: use AP_MODE_SPECULATIVE to check for data in the input
filters
to accommodate mod_ssl's input filter. AP_MODE_EATCRLF is essentially a
no-op
in that filter.
EATCRLF was used here for a specific reason though - the fact that
no objections in principle to your suggested changes. but in practice, I don't
see where an error bucket gets flagged as metadata. seems like they should be.
Greg
- Original Message
From: Ruediger Pluem [EMAIL PROTECTED]
To: dev@httpd.apache.org
Sent: Saturday, July 7, 2007 4:13:08 PM
Subject: Re: svn commit: r554011 -
/httpd/httpd/trunk/modules/filters/mod_deflate.c
On 07/10/2007 09:27 PM, Greg Ames wrote:
no objections in principle to your suggested changes. but in practice, I
don't see where an error bucket gets flagged as metadata. seems like they
should be.
How
please see rev. 558039. requests_this_child does not need to be 100% accurate.
the cure below is worse than the disease.
Greg
- Original Message
From: Dmytro Fedonin [EMAIL PROTECTED]
To: dev@httpd.apache.org
Sent: Thursday, June 14, 2007 11:49:42 AM
Subject: one word syncronize once more
Greg Ames wrote:
please see rev. 558039. requests_this_child does not need to be 100%
accurate. the cure below is worse than the disease.
Greg
-requests_this_child--; /* FIXME: should be synchronized - aaron */
+apr_atomic_dec32
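The fix under discussion can be sketched with C11 atomics standing in for apr_atomic_dec32(). A hedged illustration: request_done and its return convention are assumptions, not httpd code.

```c
/* Sketch of the change above: a plain `requests_this_child--` on a
 * counter shared by worker threads is a data race; an atomic decrement
 * is not.  (APR's apr_atomic_dec32() similarly lets callers detect the
 * counter reaching zero.) */
#include <stdatomic.h>

static atomic_int requests_this_child;

/* Decrement and return the value after the decrement. */
static int request_done(void)
{
    return atomic_fetch_sub(&requests_this_child, 1) - 1;
}
```

As Greg notes, the counter need not be perfectly accurate for its purpose, which is why the heavier synchronization was judged worse than the disease.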
Brian Pane wrote:
On Oct 10, 2005, at 12:01 AM, Paul Querna wrote:
If the content has already been generated, why add the overhead of
the context switch/sending to another thread? Can't the same event
thread do a non-blocking write?
Once it finishes writing, then yes, we do require a
Greg Ames wrote:
this is interesting to me because Brian Atkins recently reported that
s/Atkins/Akins/ sorry, Brian
Greg
Brian Akins wrote:
Basically, I was referring to the overall hits a box could serve per
second.
with 512 concurrent connections and about an 8k file, 2.1 with worker
served about 22k requests/second. event served about 14k.
do you recall if CPU cycles were maxed out in both cases?
thanks,
Brian Pane wrote:
I think one contributor to the event results is an issue that Paul Querna
pointed out on #httpd-dev the other day: apr_pollset_remove runs in O(n)
time with n descriptors in the pollset.
thanks, I see it. yeah we are going to have to do something about that.
Greg
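What O(n) removal looks like for a poll(2)-backed set, roughly. This is illustrative only, not APR's implementation; the epoll and kqueue backends avoid the scan by deleting per-descriptor via epoll_ctl()/kevent().

```c
/* Illustration of the O(n) cost mentioned above: a poll(2)-style
 * backend keeps a flat pollfd array, so removing one descriptor means
 * scanning for it and compacting the array. */
#include <poll.h>
#include <string.h>

static int pollset_remove(struct pollfd *set, int *n, int fd)
{
    for (int i = 0; i < *n; i++) {          /* O(n) scan */
        if (set[i].fd == fd) {
            memmove(&set[i], &set[i + 1],
                    (size_t)(*n - i - 1) * sizeof(set[0]));
            (*n)--;
            return 0;
        }
    }
    return -1;  /* not found */
}
```

With thousands of mostly idle keepalive connections entering and leaving the pollset, that scan happens on every state transition, which is why it showed up in the event MPM benchmarks.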
...a.k.a. Windows Update doesn't work through an httpd-2.* proxy. it looks like a
regression from 1.3.
the interesting stuff happens in ap_content_length_filter. it calculates the new content
length by traversing the output brigade. but in this case, the brigade consists of only
an EOS
Roy T. Fielding wrote:
On Oct 18, 2005, at 3:58 PM, Greg Ames wrote:
Index: server/protocol.c
===
--- server/protocol.c (revision 326257)
+++ server/protocol.c (working copy)
@@ -1302,7 +1302,13 @@
* We can only set a C
Roy T. Fielding wrote:
On Oct 19, 2005, at 8:35 AM, Greg Ames wrote:
let's say it is a HEAD request for a local static file. the
default_handler calls ap_set_content_length which creates a C-L
header. but then the body could be run through some length changing
filter, such as mod_deflate
Scott A. Guyer wrote:
I would like to know more about the process
flow of the event MPM. The docs indicate it was largely
about moving listening logic for keep-alives out of the
worker threads and into the listener thread.
the basic concept of the event MPM is to reduce the amount of
Jeff Trawick wrote:
It isn't clear to me what an input filter should do about
Content-Length when it modifies the length of the body (assuming that
this isn't chunked encoding).
mod_cgi uses brigades to read the body but needs to look at
Content-Length before spawning the CGI script, so
Saju Pillai wrote:
I can understand why serializing apr_pollset_poll() accept() for the
listener threads doesn't make sense in the event-mpm. A quick look
through the code leaves me confused about the following ...
It looks like all the listener threads epoll() simultaneously on the
Paul Querna wrote:
This is traditionally called the 'Thundering Herd' Problem.
When you have N worker processes, and all N of them are awoken for an
accept()'able new client. Unlike the prefork MPM, N is usually a smaller
number in Event, because you don't need that many EventThreads Per
Saju Pillai wrote:
For a brand new client connection, why should there be 2 exchanges
between the listener thread and the worker thread before the request is
actually read ?
The client socket is accepted() and passed to a worker thread which runs
the create_connection hooks and marks the
Paul Querna wrote:
The event mpm expects the apr_pollset backends to be based on epoll()
/ kqueue() or Solaris 10 event ports. What are the reasons because of
which poll() is not considered to be suitable for the event mpm ?
Is this because of the large number of fd's to be polled and
Jeff Trawick wrote:
There are problems accounting for child processes which are trying to
initialize that result in the parent thinking it needs to create more
children. The less harmful flavor is when it thinks (incorrectly) it
is already at MaxClients and issues the reached MaxClients
Jeff Trawick wrote:
After a fix to prevent the fork bomb during slow child startup is
applied, something REALLY strange and completely unanticipated has to
happen to get the scoreboard is full, not at MaxClients message.
True?
true. no clue what else might trigger it at this point.
Markus Litz wrote:
Hello,
how can I get just the filename of the requested URI? For example if
"http://www.example.com/test.html" is requested, I only want test.html.
request_rec::filename only gives the full filename on disk.
basename(r->filename)
Greg
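Greg's one-liner relies on basename(3), which may modify its argument. A read-only sketch of the same idea: uri_basename is a made-up name, and path stands in for r->filename.

```c
/* Read-only variant of basename(r->filename): return the component
 * after the last '/', or the whole string if there is no slash.
 * Unlike POSIX basename(3), this never writes to its argument. */
#include <string.h>

static const char *uri_basename(const char *path)
{
    const char *slash = strrchr(path, '/');
    return slash ? slash + 1 : path;
}
```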
Jeff Trawick wrote:
I have been working with a user on one of these fork bomb scenarios
and assumed it was the child_init hook. But after giving them a test
fix that relies on a child setting scoreboard fields in child_main
before child-init hooks run, and also adds some debugging traces
in Greg Ames' patch. This is because of the enhancement to
apr_pollset that enables other threads to _add() or _remove() while the
main thread is inside a _poll().
even with plain ol' vanilla poll() ?
The place where the Event MPM should shine is with the more common case
of relatively high
Brian Pane wrote:
Paul Querna wrote:
Brian Akins wrote:
Have you tried it with higher number of clients -- i.e,. -c 1024?
Nope. I was already maxing out my 100mbit LAN at 25 clients. I don't
have a good testing area for static content request benchmarking.
I am thinking of trying to find an
Brian Akins wrote:
Can you still have multiple processes? We use 10k plus threads per box
with worker.
with my patch, yes. with Paul's, no.
But Paul's has some very nice features that mine doesn't have, so I think a
hybrid is the way to go.
Assuming you have a high percentage of threads in
Brian Akins wrote:
I can certainly provide you guys with some testing, if nothing else.
excellent!
We have some home grown benchmarks that might help,
If they simulate user think time or would otherwise cause a lot of keepalive
timeouts, great! Finding the right client/benchmark is a
Paul Querna wrote:
This only works for the EPoll and KQueue backends. This allows a worker
thread to _add() or _remove() directly, without having to touch the
thread in _poll().
It could be implemented for plain Poll by having the pollset contain an
internal pipe. This Pipe could be pushed
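The internal-pipe idea Paul describes is the classic self-pipe wakeup. A minimal single-threaded sketch: in the real MPM a different thread would do the write to interrupt a blocked poll(); pipe_wakeup_demo is illustrative, not APR code.

```c
/* Self-pipe wakeup for a plain poll() backend: the pipe's read end sits
 * in the pollset, and any thread that wants to interrupt a blocked
 * poll() writes one byte to the write end.  Shown single-threaded here
 * for brevity. */
#include <poll.h>
#include <unistd.h>

/* Returns 1 if poll() reports the wakeup byte, 0 on timeout, -1 on error. */
static int pipe_wakeup_demo(void)
{
    int pfd[2];
    if (pipe(pfd) != 0)
        return -1;

    struct pollfd ps = { .fd = pfd[0], .events = POLLIN };

    write(pfd[1], "x", 1);         /* "another thread" wakes the poller */
    int rv = poll(&ps, 1, 1000);   /* returns immediately with POLLIN   */

    char c;
    read(pfd[0], &c, 1);           /* drain the wakeup byte */
    close(pfd[0]);
    close(pfd[1]);
    return (rv == 1 && (ps.revents & POLLIN)) ? 1 : 0;
}
```

This is how _add() and _remove() can take effect while the main thread is inside _poll() even without epoll/kqueue support.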
Paul Querna wrote:
The updated patch for today adds multiple processes.
cool!
However, the big thing it doesn't use is accept serialization.
hmmm, that would be challenging with a merged listener/event thread. If the
event thread is blocked waiting for its turn to accept(), it can't react to a
Justin Erenkrantz wrote:
Um, flood already lets you do this and lots more.
Does flood allow multiple connections per thread or per process? Ideally the
load simulator would scale as well as the server, although that's not strictly
necessary. Eventually I want to run at least as many
Justin Erenkrantz wrote:
this MPM breaks any pipelined connections because there can be a deadlock.
core_input_filter or any connection-level filter (say SSL) could be
holding onto a complete request that hasn't been processed yet. The
worker thread will only process one request and then put it
Justin Erenkrantz wrote:
--On Tuesday, October 26, 2004 12:03 PM -0400 Greg Ames
[EMAIL PROTECTED] wrote:
Yes, this needs to be fixed. I don't see it as difficult problem. We
already test for whether the output filters need to be flushed (i.e., is
there any more data in the input filter chain
Justin Erenkrantz wrote:
The problem is that there is no reliable way to determine if there is
more data in the input filters without actually invoking a read.
that sucks IMO, but it does sound like how the code works today. We do socket
read() syscalls during the MODE_EATCRLF calls that are
Justin Erenkrantz wrote:
--On Tuesday, October 26, 2004 4:25 PM -0400 Greg Ames
that sucks IMO, but it does sound like how the code works today. We do
socket read() syscalls during the MODE_EATCRLF calls that are almost
always unproductive. They could be optimized away. I don't believe 1.3
Justin Erenkrantz wrote:
Connection-level filters like mod_ssl would have to be rewritten to be
async.
or to simply report whether they held on to any data.
This is how EAGAIN return values would work. But, again, I don't think
we could add it easily without changing a lot of filter semantics.
Justin Erenkrantz wrote:
--On Tuesday, October 26, 2004 4:56 PM -0400 Greg Ames
[EMAIL PROTECTED] wrote:
So I'm thinking that we should see MODE_EATCRLF behave differently when
core_input_filter has data stashed.
I would prefer a more general solution. As I hinted at before, couldn't
I was able to easily create a browser hang by configuring Mozilla to enable HTTP
pipelining, then pointing my DocumentRoot at an old copy of the
xml.apache.org web site which had tons of embedded graphics. Thanks, Justin,
for pointing out the bug.
The attached patch fixes it when there are
Justin Erenkrantz wrote:
I was able to easily create a browser hang by configuring Mozilla to
enable HTTPpipelining, then pointing my DocumentRoot at an old copy
of the xml.apache.org web site which had tons of embedded graphics.
Thanks, Justin, for pointing out the bug.
I'm sure you *really*
Greg Ames wrote:
Justin Erenkrantz wrote:
Okay. I'd be curious to figure out what's going on with the
speculative non-blocking reads.
The odd behavior with speculative reads is due to API differences that I
didn't take into account.
This makes it behave properly on my laptop
William A. Rowe, Jr. wrote:
Someone cried wolf, b.t.w., about connection and request pool
allocation being too tightly coupled to threads. They can be
decoupled pretty painlessly, by tying an allocator to a single
connection object. We can presume that request pools will be
a subpool of each
Justin Erenkrantz wrote:
On Mon, Nov 01, 2004 at 08:39:47PM -0500, Greg Ames wrote:
This makes it behave properly on my laptop with speculative reads. I have
no idea if it works with mod_ssl or what speculative buys us.
mod_ssl will most likely work correctly without changes. -- justin
Let's
I have Paul's version of the Event MPM patch up and running. The only glitch I
saw bringing it up was a warning for two unused variables in http_core.c
(patchlet below). Then I tried stressing it with SPECweb99 and saw some errors
after several minutes:
[error] (24)Too many open files:
Paul Querna wrote:
I have Paul's version of the Event MPM patch up and running. The only
glitch I saw bringing it up was a warning for two unused variables in
http_core.c (patchlet below). Then I tried stressing it with
SPECweb99 and saw some errors after several minutes:
[error] (24)Too
or deny the SSL problem at the moment, but I'm looking at the
CLOSE-WAITs first.
Originally based on the patch by Greg Ames.
...which was originally based on a patch by Bill Stoddard.
Greg
Jeff Trawick wrote:
[...]
As soon as the
new child takes over the process slot, the MPM forgets that the old
child existed. If the old child never exits on its own (long-running
or hung request), then when Apache is terminated the old child will
still hang around (parent pid - 1)
Paul Querna wrote:
Attached is a patch for the Worker MPM that uses APR Atomics to change
the value of requests_this_child.
I changed it around to count *up*, instead of counting down... So I
would like someone else to look at it before I commit it.
-0.5
What's the point? This slows down the
Jeff Trawick wrote:
Please review the proxy-reqbody branch for proposed improvements to
2.1-dev. There is a 2.0.x equivalent of the patch at
http://httpd.apache.org/~trawick/20proxyreqbody.txt.
+1 (reviewed, not tested)
certainly an improvement over what we have today. The brains (decisions
William A. Rowe, Jr. wrote:
Question - did infra already put this to the fire under
www.apache.org? Given all the quirks we'd seen keeping
viewsvn stable, a pass on svn.apache.org would be extra
reassuring.
I'm working on it. log replay is not behaving at the moment but it looks more
like a
It's been up 2 1/2 hours and looks fine to me. Let us know if you spot a
problem.
btw, I would appreciate being copied directly on related emails. My apache.org
mailing list posts are way behind.
Thanks,
Greg
Nick Maynard wrote:
UNIX MPMs that actually _work_ in Apache 2:
worker
prefork (old)
event (experimental)
unclear if it works with mod_ssl with pipelining (not tested here yet)
Greg
Paul A. Houle wrote:
On Linux I've done some benchmarking and found that worker isn't
any faster than prefork at serving static pages. (Is it any different
on other platforms, such as Solaris?)
I'm sure we can tweak worker and event to make them faster, especially in 2.1+
with
Aaron Bannert wrote:
Just so I understand the problem correctly,
but that since the turnover is so quick you end up having children
lingering around with one or two thread slots and essentially
we approach the prefork scenario in terms of number of child
processes. Is this correct?
in worker +
Jeff Trawick wrote:
Then realize you need to support boatloads more clients, so you bump
up MaxClients to 5000. Now when load changes very slightly (as a
percentage of MaxClients), which happens continuously, the web server
will create or destroy a child process.
b) tweak worker MPM to
A customer reported a problem where their back end app would hang until a read
timed out. The main request was a POST which did have a request body which was
read normally. The POST response contained an ssi tag that caused a subrequest
to be created. The subrequest was forwarded to an app
Paul Querna wrote:
[EMAIL PROTECTED] wrote:
*long but interesting, I hope*
yes it is, to me anyway.
3) The Event MPM Might handle this load better, since it could pass off
the one-char a second requests to the event thread.
no doubt it could be taught to do that and provide relief for this type
Bill Stoddard wrote:
Joe Schaefer wrote:
As it turns out, we clone all of the main request's input headers when
we create the subrequest, including C-L. Whacking the subrequest's
C-L header fixes the hang. Since the main request's body could also
have been chunked, we should probably remove the
Paul Querna wrote:
Bill Stoddard wrote:
The problem is that the subrequest is inheriting entity-header fields
from the mainline request (mainline request was a POST). This patch
should be generalised to remove all inherited entity-header fields
from the subrequest.
Something that popped into
Justin Erenkrantz wrote:
As I just noted in STATUS for 2.0, read_length isn't a sufficient check.
It'd only be set if the client has *already* read the body *and* they
used the 1.3.x mechanisms for reading the request body.
both true in 100% of the cases I've seen in the wild. ok, I'll admit it
Rici Lake wrote:
I was taking a look at the implementation of the renamed (but still
misleading) AP_MODE_EATCRLF,
AP_MODE_PEEK was more accurate, but whatever...
Removing the mode altogether would mean that either every request was
flushed through the filter chain even in pipelining mode, or
Justin Erenkrantz wrote:
On Fri, Apr 15, 2005 at 11:56:38AM -0400, Greg Ames wrote:
the reason that this and the corresponding 1.3 BUFF logic exists is to
minimize tinygrams - ip packets that are less than a full mtu size.
tinygrams definitely degrade network performance and trigger things
Justin Erenkrantz wrote:
On Fri, Apr 15, 2005 at 10:09:39AM -0700, Justin Erenkrantz wrote:
This would save us from the extra round trip. I'm not sure where else we
could even place such a check besides ap_pass_brigade. -- justin
Thinking about this a little bit more:
There's no reason we
Justin Erenkrantz wrote:
On Fri, Apr 15, 2005 at 03:46:37PM -0400, Greg Ames wrote:
it is sounding better all the time as far as performance. if I understand
you correctly I think this one eliminates the extra trips down the input
and output filter chains. but unfortunately we still have
[EMAIL PROTECTED] wrote:
* don't propagate input headers describing a body to a GET subrequest
with no body
@@ -219,12 +220,34 @@
-1: jerenkrantz (read_length isn't a sufficient check to see if a body
is present in the request; presence of T-E and C-L in
http://people.apache.org/~gregames/thread_create_recovery.patch
design:
* exit with APEXIT_CHILDSICK for thread create failures (same as other patches)
* add logic to the parent to decide how bad these errors really are. if we
can't initialize a single worker process, just give up. otherwise
William A. Rowe, Jr. wrote:
Will, can you help me understand your concern? this doesn't change the headers of the POST request. it only affects which new headers we generate for the new SSI GET subrequest.
Well I totally understand the issue - I'm blowing up in some PHP
code due to an earlier
Brian Akins wrote:
Bill Stoddard wrote:
If the event MPM is working properly, then a worker thread should not
be blocking waiting for the next ka
request. You still have the overhead of the tcp connection and some
storage used by httpd to manage connection
events but both of those are small
Greg Ames wrote:
Brian Akins wrote:
We've been doing some testing with the current 2.1 implementation, and
it works, it just currently doesn't offer much advantage over worker
for us. If num keepalives == maxclients, you can't accept any more
connections.
that's a surprise
I noticed that multiple packets are being sent to the network when one would do
on a couple of Linux 2.6.x boxes. one is SuSE SLES 9, the other is RHEL 4. the
first packet is all the HTTP headers, the second is the body/file. strace
http://people.apache.org/~gregames/rhel4.cork.strace
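On Linux, TCP_CORK is the usual knob for exactly this symptom: corking lets the kernel coalesce the header write and the body/file write into full-MTU segments. A sketch of just the cork/uncork calls (the socket is never connected here, and cork_demo is a made-up name):

```c
/* Sketch of the TCP_CORK approach relevant to the two-packet behavior
 * above (Linux-specific): cork before writing headers and body, uncork
 * to flush, so the kernel can pack both into as few packets as
 * possible.  The socket exists only to exercise the setsockopt calls. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

static int cork_demo(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    int one = 1, zero = 0, rv = 0;
    /* cork: kernel holds partial frames ... write headers, then body ... */
    rv |= setsockopt(s, IPPROTO_TCP, TCP_CORK, &one, sizeof(one));
    /* uncork: flush whatever remains, coalesced */
    rv |= setsockopt(s, IPPROTO_TCP, TCP_CORK, &zero, sizeof(zero));

    close(s);
    return rv;
}
```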
my biggest hurdle in getting the event MPM to work with mod_ssl was learning how
to create a self signed server cert with openssl.
http://httpd.apache.org/docs-2.0/ssl/ssl_faq.html#ownca is very good but refers
to a sign.sh script that I couldn't find in httpd-2.x. I assume sign.sh was
part
Paul Querna wrote:
Yes... I believe it will 'mostly' work, but the issue becomes tricky
once you consider the SSL protocol. The problem is we might have an
entire pipe-lined request buffered inside the SSL Packets, and
therefore, never trigger the socket to come out of the poll(). For
simple
Joe Orton wrote:
You can create a self-signed cert for mod_ssl testing with just one
command: openssl req -x509 -nodes -new -out foo.cert -keyout foo.key
the docs are a bit too helpful there really.
thanks Joe! this looks like a time saver.
Greg
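For scripting, Joe's command can be made non-interactive. The -subj, -days and -newkey arguments below are additions; the command as posted prompts for the subject fields.

```shell
# Joe's one-command self-signed cert, made non-interactive for scripting.
# -subj/-days/-newkey are additions here; his command as posted prompts
# interactively for the subject fields.
cd "$(mktemp -d)"
openssl req -x509 -nodes -new -newkey rsa:2048 -days 30 \
    -subj "/CN=localhost" -out foo.cert -keyout foo.key
# quick sanity check on the result
openssl x509 -in foo.cert -noout -subject
```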
Paul Querna wrote:
once I got past that, it just worked. my tests were fairly simple. I
had pipelining enabled in mozilla and also created a script that did
HTTP/1.1 pipelining. if anyone can think of other scenarios I should
test with mod_ssl please let me know.
Yes... I believe it will
Brian Pane wrote:
Looking at the pattern of calls to ap_core_output_filter() in the event
MPM, it occurred to me that it may be straightforward to hand off the
writing of the request to an async completion thread in a lot of useful
real-world cases.
In function check_pipeline_flush() in
Greg Ames wrote:
Brian Pane wrote:
I'm eager to hear some feedback on this idea:
* Will it work? Or am I overlooking some design flaw?
it should work as long as everything important that happens after the
check_pipeline_flush call still gets done somehow. a quick glance at
the code
Brian Pane wrote:
Rather than automatically setting TCP_NODELAY in core_pre_connection(),
perhaps we should set it conditionally in core_output_filter(), where
we have
enough information to tell whether it's needed.
I'm +1 on the concept of being more lazy about setting the sockopts. the
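A rough sketch of the lazy-NODELAY idea: set_nodelay_if_needed and the sending_short_tail condition are invented for illustration; the real decision would live in core_output_filter where the brigade sizes are known.

```c
/* Sketch of Brian's suggestion above: instead of unconditionally
 * setting TCP_NODELAY in core_pre_connection(), flip it on only when
 * the output path knows it is leaving a short tail that must not sit
 * in the Nagle buffer. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

static int set_nodelay_if_needed(int s, int sending_short_tail)
{
    if (!sending_short_tail)
        return 0;               /* leave Nagle on for bulk writes */
    int one = 1;
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```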
Paul Querna wrote:
Brian Pane wrote:
On the subject of asynchronous write completion, I've drawn a
connection state model that
combines the current state transitions of the event MPM with new states
for write completion
and the handler phase.
Very cool diagram.
amen.
I see a tiny
Peter Djalaliev wrote:
Where does the Apache web server first detect that a request is over HTTPS?
I can't find the specific place in the source code where this is done
(assuming a specific place exists).
my (non-expert) guess: shortly after the request is mapped to an IP
based virtual host
[EMAIL PROTECTED] wrote:
use Greg's cleaner fix for CAN-2005-2970
Modified:
@@ -823,6 +818,7 @@
free(ti);
ap_scoreboard_image->servers[process_slot][thread_slot].pid = ap_my_pid;
+ap_scoreboard_image->servers[process_slot][thread_slot].tid =
apr_os_thread_current();