On Sat, Oct 18, 2008 at 15:20, Andrej van der Zee
[EMAIL PROTECTED] wrote:
Hi,
I did not find a solution, I just stopped using prepared statements
altogether. But I tried to isolate the problem just now, and found that
somehow I cannot use FLOAT in a prepared statement (when I tried INT
Hello.
The scenario is a client calling server1, and server1 calling server2 via
XMLHttpRequest.
server2 has to return XML data to server1 and then to the client.
My general problem is cross-domain, as server1 and server2 are not in the
same domain.
Could you please clarify this for me:
Is
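Since browsers block cross-domain XMLHttpRequest, one common workaround for the setup described above is to have server1 proxy the request to server2 so the client only ever talks to server1's origin. This is a minimal sketch of that proxy pattern, not code from the thread; the function name and the injectable `fetch` parameter are assumptions made for illustration and testability:

```python
from urllib.request import urlopen

def proxy_xml(backend_url, fetch=urlopen):
    """Fetch XML from a backend (server2) on behalf of the client,
    so the browser sees a same-origin response from server1.
    `fetch` is injectable so the sketch can run without a network."""
    with fetch(backend_url) as resp:
        body = resp.read()
    # server1 would return this body with an XML content type
    return body, "text/xml"
```

In a real deployment this would run inside server1's request handler; the key point is only that the cross-domain hop happens server-to-server, where the browser's same-origin policy does not apply.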
On 10/18/2008 01:25 AM, Paul Querna wrote:
Looking at a problem that seems easy to reproduce using unpatched
trunk, 2.2.10 and 2.0.63.
Using a graceful restart causes higher memory usage in the parent, which
is then passed on to the 'new' children processes.
The issue seems to appear
Ruediger Pluem wrote:
The code Graham is talking about was introduced by him in r93811 and was
removed in r104602 about 4 years ago. So I am not astonished any longer
that I cannot remember this optimization. It was before my time :-).
This optimization was never in 2.2.x (2.0.x still ships
Hi,
I did not find a solution, I just stopped using prepared statements
altogether. But I tried to isolate the problem just now, and found that
somehow I cannot use FLOAT in a prepared statement (when I tried INT
columns it even segfaults). Below is the source code of a mini-module to
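For comparison, the pattern being attempted here, binding a FLOAT parameter through a prepared statement rather than interpolating it into the SQL text, is sketched below with Python's sqlite3 module. This only illustrates typed parameter binding in general; it does not reproduce the apr_dbd/driver code path the thread is debugging:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value FLOAT)")

# With a prepared statement, the float travels as a typed bound value,
# not as text spliced into the SQL string.
conn.execute("INSERT INTO readings (value) VALUES (?)", (3.14,))
row = conn.execute("SELECT value FROM readings WHERE value > ?", (3.0,)).fetchone()
```

When a driver segfaults on a particular bound type, as reported above for INT columns, the binding layer (not the SQL itself) is usually the place to look.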
Ruediger Pluem schrieb:
On 10/18/2008 01:25 AM, Paul Querna wrote:
Looking at a problem that seems easy to reproduce using unpatched
trunk, 2.2.10 and 2.0.63.
Using a graceful restart causes higher memory usage in the parent, which
is then passed on to the 'new' children processes.
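The "passed on to the 'new' children" effect follows from fork() semantics: each child starts from a copy-on-write snapshot of the parent's address space, so whatever memory the parent has accumulated before a graceful restart is visible in every new child. A minimal Python illustration of that inheritance (the inheritance only, not the leak itself; this requires a POSIX system):

```python
import os

grown = bytearray(b"x" * 1024)  # memory allocated in the parent before forking

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: the parent's heap, including `grown`, is inherited.
    os.write(w, bytes(grown[:4]))
    os._exit(0)
os.close(w)
inherited = os.read(r, 4)
os.waitpid(pid, 0)
```

This is why growth in the parent during graceful restarts matters more than the same growth in a single child: the parent's footprint becomes the floor for every child it subsequently forks.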
Ruediger Pluem wrote:
As a result, the connection pool has made the server slower, not faster,
and very much needs to be fixed.
I agree in theory. But I don't think so in practice.
Unfortunately I know so in practice. In this example we are seeing
single connections being held open for 30
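One standard remedy for connections being held open too long is an idle TTL on pooled connections: reuse a connection only if it has been idle for less than some bound, otherwise close it and open a fresh one. The toy pool below sketches that policy; the class name, the injectable clock, and the TTL value are all illustrative assumptions, not the httpd proxy pool's actual design:

```python
import time

class Pool:
    """Toy connection pool: reuse idle connections, but evict any that
    have sat idle longer than `ttl` seconds rather than holding them open."""
    def __init__(self, connect, ttl=30.0, now=time.monotonic):
        self.connect, self.ttl, self.now = connect, ttl, now
        self.idle = []  # list of (connection, time it was returned)

    def acquire(self):
        while self.idle:
            conn, returned_at = self.idle.pop()
            if self.now() - returned_at <= self.ttl:
                return conn          # fresh enough: reuse it
            conn.close()             # too old: drop instead of reusing
        return self.connect()        # nothing reusable: open a new one

    def release(self, conn):
        self.idle.append((conn, self.now()))
```

The trade-off the thread is circling is exactly this: without some eviction policy, pooling can pin backend connections open and make things worse than opening them per-request.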
Graham Leggett wrote:
2. The optimization only helps for the last chunk being read from the backend
which is at most ProxyIOBufferSize in size. If ProxyIOBufferSize isn't
set explicitly, this amounts to just 8k. I guess if you are having clients
or connections that take a
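To make the 8k figure concrete: reading a response through a fixed-size buffer yields full buffers until the end, so only the final, typically partial, read is "the last chunk" that the optimization under discussion could help. A small sketch, assuming the default 8192-byte buffer:

```python
import io

BUFFER_SIZE = 8192  # default ProxyIOBufferSize

def read_chunks(stream, bufsize=BUFFER_SIZE):
    """Yield successive fixed-size reads; only the last chunk may be short."""
    while True:
        chunk = stream.read(bufsize)
        if not chunk:
            return
        yield chunk

# A 20000-byte body arrives as two full 8k buffers plus one short tail.
chunks = list(read_chunks(io.BytesIO(b"a" * 20000)))
```

So an optimization that applies only to that tail chunk is bounded by ProxyIOBufferSize, which is the point being made above.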