Re: mod_dbd and prepared statements (httpd-2.2.9)

2008-10-18 Thread Sorin Manolache
On Sat, Oct 18, 2008 at 15:20, Andrej van der Zee
[EMAIL PROTECTED] wrote:
 Hi,

 I did not find a solution; I just stopped using prepared statements
 altogether. But I tried to isolate the problem just now, and somehow found
 that I cannot use FLOAT in a prepared statement (when I tried INT
 columns it even segfaults). Below is the source code of a mini-module to
 illustrate this. This is the table I created in MySQL:

 CREATE TABLE simple_table (duration FLOAT NOT NULL) ENGINE=INNODB;

 And this is what appears in the MySQL log:

 Prepare    INSERT INTO simple_table (duration) VALUES (?)
 Execute    INSERT INTO simple_table (duration) VALUES ('')

 If you want to reproduce it, don't forget to put this in httpd.conf:
 LoadModule prep_stmt_module   modules/mod_prep_stmt.so
 PrepStmt on

Hello,

I have not tried to reproduce it, it's Saturday :-). I have never worked
with mod_dbd, but I may have a clue, although I am not sure.

You call ap_dbd_prepare from prep_stmt_config_set_prep_stmt_on. This
function is called during the configuration parsing phase, when there are
no Apache child processes. cmd->server refers to the server structure of
the root Apache process (the parent of all future Apache children). Then
apr_dbd_pvquery is called from an Apache child process, in which the
server structure is not the same as the one you passed to ap_dbd_prepare.
As I have never worked with mod_dbd, I do not know whether this can cause
a crash. If your mini-module is inspired by a textbook or some working
example, then my hunch is wrong.

Try the following: leave cfg->prep_stmt_on = flag in the directive handler,
but move the ap_dbd_prepare call from the config hook to the child_init hook.
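
A rough, untested sketch of that change, reusing the names from your
mini-module (I have not tried this against mod_dbd myself):

static void prep_stmt_child_init(apr_pool_t *pchild, server_rec *s)
{
    prep_stmt_config *cfg = ap_get_module_config(s->module_config,
                                                 &prep_stmt_module);
    /* register the statement in the process that will later execute it */
    if (cfg->prep_stmt_on) {
        ap_dbd_prepare(s, "INSERT INTO simple_table (duration) VALUES (%f)",
                       "insert_row");
    }
}

static void prep_stmt_register_hooks(apr_pool_t *p)
{
    ap_hook_child_init(prep_stmt_child_init, NULL, NULL, APR_HOOK_MIDDLE);
    ap_hook_log_transaction(prep_stmt_write, NULL, NULL, APR_HOOK_MIDDLE);
}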

S


General questions

2008-10-18 Thread ampo

Hello.

The scenario is a client calling server1, and server1 calling server2 via
XMLHttpRequest. Server2 has to return XML data to server1 and to the client.
My general problem is cross-domain: server1 and server2 are not in the
same domain.

Could you please clarify this for me:
Is Apache, configured as a proxy on server1, acting on behalf of the client
calling server1, or does it act when server1 requests server2?
What I need is the second option.

Currently, I have an HTML page on server1 and an ASP page on server2. Are
there any better options?
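
In Apache terms, the second option maps onto a reverse-proxy configuration
on server1. A minimal sketch, with hypothetical hostnames and paths:

LoadModule proxy_module      modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# Requests for /data/ on server1 are forwarded to server2 by Apache itself,
# so the browser only ever talks to server1's domain.
ProxyPass        /data/ http://server2.example.com/data/
ProxyPassReverse /data/ http://server2.example.com/data/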

Thanks.

-- 
View this message in context: 
http://www.nabble.com/General-questions-tp20045322p20045322.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: leak on graceful restarts

2008-10-18 Thread Ruediger Pluem


On 10/18/2008 01:25 AM, Paul Querna wrote:
 Looking at a problem that seems easy to reproduce using unpatched
 trunk, 2.2.10 and 2.0.63.
 
 Using a graceful restart causes higher memory usage in the parent, which
 is then passed on to the 'new' children processes.
 
 The issue seems to appear in all the Worker, Event and Prefork MPMs.
 
 Compile up httpd without apr pool debugging enabled, start it as normal,
  and then check the memory usage:
 
 $ ./apachectl start
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  2672 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start
 
 $ ./apachectl graceful
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  3192 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start
 
 $ ./apachectl graceful
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  3236 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start
 
 $ ./apachectl graceful
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  3280 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start
 
 (2672 -> 3192 -> 3236 -> 3280)
 
 And, at least over here, httpd consistently grows in RSS, without any
 obvious cause.
 
 Seems reproducible on Ubuntu and Darwin, using 2.2.10, 2.0.63 and trunk.
 
 Any ideas?
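
(For reference, a small shell sketch that automates the measurement quoted
above; the PidFile path is an assumption based on the install prefix shown
in the ps output.)

PID=$(cat /home/chip/temp/httpd/logs/httpd.pid)
for i in 1 2 3 4 5; do
    ./apachectl graceful
    sleep 2
    echo "after graceful $i: parent RSS $(ps -o rss= -p "$PID") KB"
done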

Two quick thoughts:

1. Memory fragmentation in the allocator lists (we had this discussion
   either here or on [EMAIL PROTECTED] a short time ago).

2. At some locations we use a global pool (process->pool) to allocate
   memory, e.g. mod_ssl and when setting up the listeners. I haven't
   checked so far if this global pool usage is justified.
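
To illustrate the second point, a purely illustrative sketch (not actual
httpd code) of why allocations from the process pool accumulate across
graceful restarts while allocations from pconf do not:

#include "httpd.h"
#include "http_config.h"
#include "apr_pools.h"

static int example_post_config(apr_pool_t *pconf, apr_pool_t *plog,
                               apr_pool_t *ptemp, server_rec *s)
{
    /* cleared and reused on every (graceful) restart */
    void *per_generation = apr_palloc(pconf, 4096);

    /* lives until the parent process exits, so it piles up if a
     * restart-time code path allocates from it on each graceful */
    void *per_process = apr_palloc(s->process->pool, 4096);

    (void)per_generation;
    (void)per_process;
    return OK;
}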


Regards

Rüdiger


Re: strange usage pattern for child processes

2008-10-18 Thread Graham Leggett

Ruediger Pluem wrote:


The code Graham is talking about was introduced by him in r93811 and was
removed in r104602 about 4 years ago. So I am not astonished any longer
that I cannot remember this optimization. It was before my time :-).
This optimization was never in 2.2.x (2.0.x still ships with it).
BTW: This logic flow cannot be restored easily because, due to the current
pool and bucket allocator usage, it *must* be ensured that all buckets
are flushed down the chain before we can return the backend connection
to the connection pool. At the time this was removed, and in 2.0.x, we did
*not* use a connection pool for backend connections.
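
(Loosely illustrating that ordering constraint; this is not the actual
mod_proxy code. The response buckets may still reference memory owned by
the backend connection, so a flush through the client-facing filters has
to complete before the backend connection can safely go back to the pool.)

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t flush_before_release(request_rec *r)
{
    apr_bucket_brigade *bb;
    apr_status_t rv;

    bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_flush_create(r->connection->bucket_alloc));

    /* push everything still queued in earlier filters out to the client */
    rv = ap_pass_brigade(r->output_filters, bb);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* only at this point could the backend connection be returned to the
     * pool (e.g. via ap_proxy_release_connection() in the real code) */
    return APR_SUCCESS;
}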


Right now, the problem we are seeing is that the expensive backends are 
tied up until slow frontends eventually get round to completely 
consuming responses, which is a huge waste of backend resources.


As a result, the connection pool has made the server slower, not faster, 
and very much needs to be fixed.


Regards,
Graham
--




Re: mod_dbd and prepared statements (httpd-2.2.9)

2008-10-18 Thread Andrej van der Zee
Hi,

I did not find a solution; I just stopped using prepared statements
altogether. But I tried to isolate the problem just now, and somehow found
that I cannot use FLOAT in a prepared statement (when I tried INT
columns it even segfaults). Below is the source code of a mini-module to
illustrate this. This is the table I created in MySQL:

CREATE TABLE simple_table (duration FLOAT NOT NULL) ENGINE=INNODB;

And this is what appears in the MySQL log:

Prepare    INSERT INTO simple_table (duration) VALUES (?)
Execute    INSERT INTO simple_table (duration) VALUES ('')

If you want to reproduce it, don't forget to put this in httpd.conf:
LoadModule prep_stmt_module   modules/mod_prep_stmt.so
PrepStmt on


----- Complete mini-module -----



#include "httpd.h"
#include "http_config.h"
#include "http_core.h"
#include "http_log.h"
#include "http_protocol.h"
#include "http_connection.h"
#include "apr_file_info.h"
#include "apr_file_io.h"
#include "apr_hash.h"
#include "apr_dbd.h"
#include "mod_dbd.h"

module AP_MODULE_DECLARE_DATA prep_stmt_module;

static int prep_stmt_write(request_rec *r);

typedef struct prep_stmt_config
{
    int prep_stmt_on;
} prep_stmt_config;

static const char *prep_stmt_config_set_prep_stmt_on(cmd_parms *cmd,
                                                      void *dummy, int flag)
{
    /* register the prepared statement with mod_dbd under the label
     * "insert_row" */
    ap_dbd_prepare(cmd->server,
                   "INSERT INTO simple_table (duration) VALUES (%f)",
                   "insert_row");

    prep_stmt_config *cfg = ap_get_module_config(cmd->server->module_config,
                                                 &prep_stmt_module);
    cfg->prep_stmt_on = flag;
    return NULL;
}

static int prep_stmt_write(request_rec *r)
{
    prep_stmt_config *cfg = ap_get_module_config(r->server->module_config,
                                                 &prep_stmt_module);
    if (!cfg->prep_stmt_on)
        return DECLINED;

    ap_dbd_t *dbd = ap_dbd_acquire(r);

    apr_dbd_prepared_t *prepared = apr_hash_get(dbd->prepared, "insert_row",
                                                APR_HASH_KEY_STRING);
    if (!prepared) {
        ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
                     "DBD Log: Failed to get prepared statement: insert_row");
        return DECLINED;
    }

    int rv, nrows;
    if ((rv = apr_dbd_pvquery(dbd->driver, r->pool, dbd->handle, &nrows,
                              prepared, "10.2", NULL)) != 0) {
        const char *errmsg = apr_dbd_error(dbd->driver, dbd->handle, rv);
        ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
                     "DBD Log: Failed to execute prepared statement: "
                     "insert_row (%s)", errmsg);
        return DECLINED;
    }

    return OK;
}

static const command_rec prep_stmt_cmds[] = {
    AP_INIT_FLAG("PrepStmt", prep_stmt_config_set_prep_stmt_on,
                 NULL, RSRC_CONF, "Enable DBD Log"),
    { NULL }
};

static void prep_stmt_register_hooks(apr_pool_t *p)
{
    ap_hook_log_transaction(prep_stmt_write, NULL, NULL, APR_HOOK_MIDDLE);
}

static void *prep_stmt_create_config(apr_pool_t *pool, server_rec *s)
{
    prep_stmt_config *cfg = apr_pcalloc(pool, sizeof(prep_stmt_config));
    return cfg;
}

module AP_MODULE_DECLARE_DATA prep_stmt_module =
{
    STANDARD20_MODULE_STUFF,    /* stuff that needs to be declared in every 2.0 mod */
    NULL,                       /* create per-directory config structure  */
    NULL,                       /* merge per-directory config structures  */
    prep_stmt_create_config,    /* create per-server config structure     */
    NULL,                       /* merge per-server config structures     */
    prep_stmt_cmds,             /* command apr_table_t                    */
    prep_stmt_register_hooks    /* register hooks                         */
};


Re: leak on graceful restarts

2008-10-18 Thread Rainer Jung


Ruediger Pluem wrote:
 
 On 10/18/2008 01:25 AM, Paul Querna wrote:
 Looking at a problem that seems easy to reproduce using unpatched
 trunk, 2.2.10 and 2.0.63.

 Using a graceful restart causes higher memory usage in the parent, which
 is then passed on to the 'new' children processes.

 The issue seems to appear in all the Worker, Event and Prefork MPMs.

 Compile up httpd without apr pool debugging enabled, start it as normal,
  and then check the memory usage:

 $ ./apachectl start
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  2672 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start

 $ ./apachectl graceful
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  3192 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start

 $ ./apachectl graceful
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  3236 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start

 $ ./apachectl graceful
 $ ps alx | grep 18459
 1  1000 18459 1  20   0 152752  3280 674009 Ss   ?  0:00
 /home/chip/temp/httpd/bin/httpd -k start

 (2672 -> 3192 -> 3236 -> 3280)

 And, at least over here, httpd consistently grows in RSS, without any
 obvious cause.

 Seems reproducible on Ubuntu and Darwin, using 2.2.10, 2.0.63 and trunk.

 Any ideas?
 
 Two quick thoughts:
 
 1. Memory fragmentation in the allocator lists (we had this discussion
    either here or on [EMAIL PROTECTED] a short time ago).
 
 2. At some locations we use a global pool (process->pool) to allocate
    memory, e.g. mod_ssl and when setting up the listeners. I haven't
    checked so far if this global pool usage is justified.

Using my production configurations on Solaris with 2.2.10 worker, I can
only reproduce a leak during graceful restarts when loading mod_ssl. The
memory size does not always increase, though: after a couple of restarts
it decreases again, but not back to the previous minimum, so overall
there is a small leak related to restarts.

Regards,

Rainer



Re: strange usage pattern for child processes

2008-10-18 Thread Graham Leggett

Ruediger Pluem wrote:


As a result, the connection pool has made the server slower, not faster,
and very much needs to be fixed.


I agree in theory. But I don't think so in practice.


Unfortunately I know so in practice. In this example we are seeing 
single connections being held open for 30 seconds or more. :(



1. 2.0.x behaviour: If you did use keepalive connections to the backend,
   the connection to the backend was kept alive, and as it was bound to the
   frontend connection in 2.0.x it couldn't be used by other connections.
   Depending on the backend server it wasted the same number of resources
   as without the optimization (backends like httpd worker or httpd prefork)
   or a small amount of resources (backends like httpd event with HTTP or a
   recent Tomcat web connector). So you didn't benefit much from this
   optimization in 2.0.x as long as you did not turn off keepalives to the
   backend.


Those who did need the optimisation would have turned off keepalives to 
the backend.



2. The optimization only helps for the last chunk being read from the backend,
   which has the size of ProxyIOBufferSize at most. If ProxyIOBufferSize isn't
   set explicitly this amounts to just 8k. I guess if you have clients
   or connections that take a long time to consume just 8k you are troubled
   anyway.


Which is why you would increase the size from 8k to something big enough 
to hold your complete pages. The CNN home page for example is 92k.
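
For instance, a minimal httpd.conf sketch of that tuning (131072 bytes is
just an illustrative value, comfortably larger than a 92k page):

<IfModule mod_proxy.c>
    ProxyIOBufferSize 131072
</IfModule>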


As recently as 3 years ago, I had one client running an entire company on 
a 64kbps legacy telco connection that cost over USD1000 per month. These 
clients tie up your backend for many seconds, sometimes minutes, and 
protecting you from this is one of the key reasons you would use a 
reverse proxy.



   Plus the default socket and TCP buffers on most OSes should already be
   larger than this. So in order to profit from the optimization, the time
   the client needs to consume the ProxyIOBufferSize has to be considerable.


It makes no difference how large the TCP buffers are: the backend will 
only be released for reuse when the frontend has completely flushed and 
acknowledged the response, so all your buffers don't help at all.


As soon as the backend has provided the very last bit of the response, 
the backend should be released immediately and placed back in the pool. 
The backend might only take tens or hundreds of milliseconds to complete 
its work, but is then tied up frozen for many orders of magnitude longer 
than that, waiting for the client to say that it is done.


Regards,
Graham
--




Re: strange usage pattern for child processes

2008-10-18 Thread William A. Rowe, Jr.
Graham Leggett wrote:
 
 2. The optimization only helps for the last chunk being read from the backend,
    which has the size of ProxyIOBufferSize at most. If ProxyIOBufferSize isn't
    set explicitly this amounts to just 8k. I guess if you have clients
    or connections that take a long time to consume just 8k you are troubled
    anyway.
 
 Which is why you would increase the size from 8k to something big enough
 to hold your complete pages. The CNN home page for example is 92k.

Also consider today that a server on broadband can easily spew 1gb/sec bandwidth
at the client.  If this is composed content (or proxied, etc., but not sendfiled)
it would make sense to allow multiple buffer pages and/or resizing buffer pages,
in a <Location>, <Files> or <Proxy> specific context.

Since mismatched buffer pages are a loser from the recycling pov, it seems
sensible to set that based on traffic and bandwidth, but then take advantage of
multiple pages to complete the handoff from the backend connection pool to the
client-facing connection pool.
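
A purely hypothetical sketch of what that might look like if such
per-context tuning were added (ProxyIOBufferSize is only valid at
server/virtual-host scope today, so this is not valid configuration in
current httpd):

<Proxy http://backend.example.com/>
    # hypothetical: per-backend buffer sizing
    ProxyIOBufferSize 262144
</Proxy>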