Ruediger Pluem wrote:
The following patch should fix the issue described above. Comments?
Index: server/core_filters.c
+1.
Regards,
Graham
On Oct 25, 2008, at 11:17 AM, Ruediger Pluem wrote:
On 10/19/2008 03:06 PM, Ruediger Pluem wrote:
Another maybe funny sidenote: Because of the way the read method on socket buckets works and the way the core input filter works, the ap_get_brigade call when processing the http body of the
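A minimal sketch of the read pattern under discussion, assuming a proxy-style loop over the backend connection's input filter chain (the function drain_backend and its parameters are invented for illustration; this is not the actual mod_proxy_http code):

#include "httpd.h"
#include "util_filter.h"   /* ap_get_brigade() */
#include "apr_buckets.h"

/* Sketch: pull the backend response body through the core input
 * filter. apr_bucket_read() on a socket bucket returns only what a
 * single recv() delivers, so each ap_get_brigade() call typically
 * yields less than the requested 'readbytes', no matter how large
 * ProxyIOBufferSize is set. */
static apr_status_t drain_backend(conn_rec *backend_conn, apr_pool_t *p,
                                  apr_off_t readbytes)
{
    apr_bucket_brigade *bb = apr_brigade_create(p,
                                                backend_conn->bucket_alloc);
    apr_status_t rv;
    int saw_eos = 0;

    do {
        rv = ap_get_brigade(backend_conn->input_filters, bb,
                            AP_MODE_READBYTES, APR_BLOCK_READ, readbytes);
        if (rv != APR_SUCCESS) {
            break;
        }
        if (!APR_BRIGADE_EMPTY(bb)) {
            saw_eos = APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb));
        }
        /* ... normally the brigade would be passed down the client's
         * output filter chain at this point ... */
        apr_brigade_cleanup(bb);
    } while (!saw_eos);

    return rv;
}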
On 10/26/2008 04:58 PM, Jim Jagielski wrote:
On Oct 25, 2008, at 11:17 AM, Ruediger Pluem wrote:
On 10/19/2008 03:06 PM, Ruediger Pluem wrote:
Another maybe funny sidenote: Because of the way the read method on socket buckets works and the way the core input filter works, the
On Oct 26, 2008, at 12:15 PM, Ruediger Pluem wrote:
Sure it is an if/else. I don't care very much whether it is continue or if/else. It just saved some indenting. So it can be done the way you proposed below.
:)
As you say, functionally it's the same. I'm just a stickler for certain
On 10/19/2008 03:06 PM, Ruediger Pluem wrote:
Another maybe funny sidenote: Because of the way the read method on socket buckets works and the way the core input filter works, the ap_get_brigade call when processing the http body of the backend response in mod_proxy_http never returns a
On 10/18/08 4:28 PM, William A. Rowe, Jr. [EMAIL PROTECTED] wrote:
Also consider today that a server on broadband can easily spew 1gb/sec bandwidth at the client. If this is composed content (or proxied, etc, but not sendfiled) it would make sense to allow multiple buffer pages and/or
Akins, Brian wrote:
Would not the generic store-and-forward approach I sent last week help all of these situations? It effectively turns any request into a sendfiled response. Let me do some checking and I may be able to just donate the code, since it's basically a very hacked up mod_deflate
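For reference, a rough sketch of what such a store-and-forward output filter could look like, assuming for simplicity that the whole response arrives in one brigade ending in EOS (the filter name, the temp file location, and the single-brigade assumption are simplifications for illustration; this is not the actual code being offered):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "apr_file_io.h"

/* Sketch: spool the response body to a temp file, then replace it
 * with a single file bucket, so the core output filter can use
 * sendfile() and the content generator finishes immediately. */
static apr_status_t spool_out_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    request_rec *r = f->r;
    apr_file_t *tmpfile = NULL;
    char tmpl[] = "/tmp/spool.XXXXXX";  /* assumed spool location */
    apr_off_t total = 0;
    apr_bucket *b;
    apr_status_t rv;

    rv = apr_file_mktemp(&tmpfile, tmpl, 0, r->pool);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *data;
        apr_size_t len;

        if (APR_BUCKET_IS_EOS(b)) {
            break;
        }
        if (APR_BUCKET_IS_METADATA(b)) {
            continue;
        }
        rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        rv = apr_file_write_full(tmpfile, data, len, NULL);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        total += len;
    }

    /* Hand the spooled file downstream as one sendfile-able bucket. */
    apr_brigade_cleanup(bb);
    apr_brigade_insert_file(bb, tmpfile, 0, total, r->pool);
    APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_eos_create(f->c->bucket_alloc));

    return ap_pass_brigade(f->next, bb);
}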
On Oct 19, 2008, at 4:20 PM, Ruediger Pluem wrote:
On 10/19/2008 07:35 PM, Jim Jagielski wrote:
On Oct 18, 2008, at 4:22 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
As a result, the connection pool has made the server slower, not faster, and very much needs to be fixed.
I agree in
Jim Jagielski wrote:
I thought that was the concern; that the pool wasn't released
immediately. If you disable reuse, then you don't need to
worry about when it is released... or I must be missing something
obvious here :/
Whether the connection is released and returned to the pool (when
On Oct 20, 2008, at 10:13 AM, Graham Leggett wrote:
Jim Jagielski wrote:
I thought that was the concern; that the pool wasn't released
immediately. If you disable reuse, then you don't need to
worry about when it is released... or I must be missing something
obvious here :/
Whether the
On 10/18/2008 10:22 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
Plus the default socket and TCP buffers on most OS should already be larger than this. So in order to profit from the optimization the time the client needs to consume the ProxyIOBufferSize needs to be
On 10/19/2008 01:21 PM, Ruediger Pluem wrote:
On 10/18/2008 10:22 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
Plus the default socket and TCP buffers on most OS should already be larger than this. So in order to profit from the optimization the time the client needs to
On Oct 18, 2008, at 4:22 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
As a result, the connection pool has made the server slower, not faster, and very much needs to be fixed.
I agree in theory. But I don't think so in practice.
Unfortunately I know so in practice. In this example we
On 10/19/2008 07:35 PM, Jim Jagielski wrote:
On Oct 18, 2008, at 4:22 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
As a result, the connection pool has made the server slower, not faster, and very much needs to be fixed.
I agree in theory. But I don't think so in practice.
Ruediger Pluem wrote:
The code Graham is talking about was introduced by him in r93811 and was removed in r104602 about 4 years ago. So I am no longer astonished that I cannot remember this optimization. It was before my time :-).
This optimization was never in 2.2.x (2.0.x still ships
Ruediger Pluem wrote:
As a result, the connection pool has made the server slower, not faster,
and very much needs to be fixed.
I agree in theory. But I don't think so in practice.
Unfortunately I know so in practice. In this example we are seeing
single connections being held open for 30
Graham Leggett wrote:
2. The optimization only helps for the last chunk being read from the backend, which has the size of ProxyIOBufferSize at most. If ProxyIOBufferSize isn't set explicitly this amounts to just 8k. I guess if you have clients or connections that take a
On Oct 15, 2008, at 6:56 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
Something else to try is to look at the ProxyIOBufferSize parameter. The proxy reads from the backend in blocks, and as soon as a block is not full (ie it's the last block), the proxy will complete and terminate
On 10/17/2008 05:38 PM, Jim Jagielski wrote:
On Oct 15, 2008, at 6:56 PM, Graham Leggett wrote:
Ruediger Pluem wrote:
Something else to try is to look at the ProxyIOBufferSize parameter.
The proxy reads from the backend in blocks, and as soon as a block is
not full (ie it's the last
On 10/15/08 6:56 PM, Graham Leggett [EMAIL PROTECTED] wrote:
Obviously, if the loop comes round more than once, then the client comes into play. This definitely needs to be fixed; it is a big performance issue.
Could a more general purpose optimization be done? I was thinking of a generic
Ruediger Pluem wrote:
This is a pity, because then it will become much harder to debug
this issue. Any chance you get shell access or that you can instruct
the administrators in the service company to get the needed information
for you?
Getting shell access is very unlikely ...
However,
On 10/16/2008 02:35 PM, Lars Eilebrecht wrote:
Ruediger Pluem wrote:
This is a pity, because then it will become much harder to debug
this issue. Any chance you get shell access or that you can instruct
the administrators in the service company to get the needed information
for you?
On Wed, Oct 15, 2008 at 7:55 AM, Lars Eilebrecht [EMAIL PROTECTED] wrote:
Hi,
The first odd thing is that I would have expected that Apache uses all child processes about equally. In particular, I would have expected that there are at least 25 threads for the second process in state _ (waiting
On 10/15/2008 01:55 PM, Lars Eilebrecht wrote:
Hi,
I'm trying to debug a performance issue on an Apache infrastructure using 2.2.9 as reverse proxies. The Apache servers don't do much except for ProxyPass'ing data from other backend servers, and caching the content using mod_mem_cache.
Ruediger Pluem wrote:
Is it really a good idea to use mod_mem_cache? Keep in mind that
mod_mem_cache uses local caches per process and cannot use sendfile
to send cached data. It seems that mod_disk_cache with a cache root
on a ram disk could be more efficient here.
No, it really isn't a
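To illustrate the sendfile point with a minimal sketch (the helper function and all variable names are placeholders, not mod_mem_cache or mod_disk_cache internals): memory-cached content has to travel as a heap bucket, which the core must write() out, while a disk-cached file can travel as a file bucket, which the core output filter can hand to sendfile().

#include "httpd.h"
#include "apr_buckets.h"

/* Sketch: append the cached body either as a heap bucket (in-memory
 * cache, copied and sent via write()) or as a file bucket (disk
 * cache, eligible for sendfile()). */
static void add_cached_body(conn_rec *c, apr_bucket_brigade *bb,
                            const char *buf, apr_size_t len,  /* mem cache */
                            apr_file_t *fd, apr_size_t flen,  /* disk cache */
                            apr_pool_t *p, int from_disk)
{
    apr_bucket *b;

    if (from_disk) {
        b = apr_bucket_file_create(fd, 0, flen, p, c->bucket_alloc);
    }
    else {
        /* NULL free function makes apr copy the buffer */
        b = apr_bucket_heap_create(buf, len, NULL, c->bucket_alloc);
    }
    APR_BRIGADE_INSERT_TAIL(bb, b);
}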
On 15 Oct 2008, at 14:41, Ruediger Pluem wrote:
On 10/15/2008 01:55 PM, Lars Eilebrecht wrote:
I'm trying to debug a performance issue on an Apache infrastructure using 2.2.9 as reverse proxies. The Apache servers don't do much except for ProxyPass'ing data from other backend servers, and
On 10/15/2008 08:25 PM, Lars Eilebrecht wrote:
Ruediger Pluem wrote:
Is it really a good idea to use mod_mem_cache? Keep in mind that
mod_mem_cache uses local caches per process and cannot use sendfile
to send cached data. It seems that mod_disk_cache with a cache root
on a ram disk could
Lars Eilebrecht wrote:
The second odd thing is that the connections/threads in W state seem to be hanging, i.e., no data is being transferred over the connection and these threads/connections time out after about 256 seconds. However, the general Timeout setting is 30s, so why isn't the
Ruediger Pluem wrote:
Something else to try is to look at the ProxyIOBufferSize parameter.
The proxy reads from the backend in blocks, and as soon as a block is
not full (ie it's the last block), the proxy will complete and terminate
the backend request before sending the last block on to the
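A hedged sketch of the mechanism just described, using mod_proxy-style names (proxy_conn_rec and ap_proxy_release_connection are real; the helper function, its parameters, and the exact placement are assumptions, not the actual mod_proxy_http code):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "mod_proxy.h"   /* proxy_conn_rec, ap_proxy_release_connection() */

/* Sketch: if the brigade read from the backend holds less than
 * ProxyIOBufferSize, it is the last block, so the backend connection
 * can go back to the pool before the (possibly slow) client is
 * served the tail. */
static apr_status_t pass_block(request_rec *r, proxy_conn_rec *backend,
                               apr_bucket_brigade *bb,
                               apr_off_t io_buffer_size,
                               int *backend_released)
{
    apr_off_t len = 0;

    apr_brigade_length(bb, 1, &len);
    if (len < io_buffer_size && !*backend_released) {
        /* short block == last block: release the backend now so a
         * slow client cannot pin the pooled connection */
        ap_proxy_release_connection("http", backend, r->server);
        *backend_released = 1;
    }
    return ap_pass_brigade(r->output_filters, bb);
}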