Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-28 Thread Jan Kaluža

On 03/28/2016 05:52 PM, Stefan Fritsch wrote:

On Monday 07 March 2016 12:41:25, Jan Kaluža wrote:

This is needed for httpd startup with systemd when one wants to bind to a
particular IP address. There is no way to start httpd only after the IP
address has been configured in systemd, and according to the systemd
developers, applications should become more robust in handling network
changes like that. The full reasoning is explained here [1].


This is wrong. We really want to be able to check the configuration
for correctness at startup. If you switch to using freebind by
default, you lose that sanity check. And using freebind for all
systemd users is basically the same as making it the default.


I do not plan to enable that feature by default. I think 99% of the
users use Listen without a particular public IP address, so there is
no need to use it for them.



Doesn't network-online.target do what you need? If no, why not?


It does, but it's not what the systemd authors describe as the best solution.
Also, as Yann said, we might want to use that API for other socket
options in the near future.
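For reference, the network-online.target approach mentioned above can be
wired in with a small drop-in like the following (a minimal sketch; the unit
name httpd.service is an assumption about the distribution's packaging):

  # /etc/systemd/system/httpd.service.d/wait-for-network.conf
  [Unit]
  Wants=network-online.target
  After=network-online.target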


You are right that from the user's perspective it probably doesn't matter
whether the user enables network-online.target or adds the freebind option to
httpd - the system configuration has to be changed either way. I was just
thinking that it would be nice to have support for this in httpd as well.



I am not against the freebind feature as such, it's useful for
failover solutions/VRRP/etc. But I am strictly against advertising
this as a workaround for broken systemd design.


We do not advertise it publicly as a workaround for systemd issues. I
was using systemd in my email just to show the use case I'm personally
interested in. The httpd documentation for this config option in my
patch does not mention systemd at all.


I also personally think that systemd should handle this situation
differently, but the systemd developers have thought the opposite for a long
time. I have discussed this with them, and it is very unlikely that they
would change anything related to this problem.


Regards,
Jan Kaluza





The patch currently needs the latest APR trunk, but it could be
rewritten to set IP_FREEBIND directly instead of using the APR API (we
already do that for the REUSEADDR socket option).
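For illustration, a minimal sketch of that non-APR variant, setting
IP_FREEBIND on the native descriptor inside make_sock() (error handling
trimmed; assumes a Linux build where IP_FREEBIND is defined and that
apr_portable.h and netinet/in.h are included):

  #ifdef IP_FREEBIND
      {
          apr_os_sock_t fd;
          int one = 1;

          /* fetch the native fd behind the apr_socket_t */
          if (apr_os_sock_get(&fd, s) == APR_SUCCESS
              && setsockopt(fd, IPPROTO_IP, IP_FREEBIND,
                            (void *)&one, sizeof(one)) < 0) {
              /* log and bail out, as for the other socket options */
          }
      }
  #endif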

Do you think FreeListen is a good name for this feature, or would you
name/implement it differently?

[1] https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/






Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-19 Thread Jan Kaluža

On 03/08/2016 11:43 AM, Jan Kaluža wrote:

On 03/08/2016 10:25 AM, Yann Ylavic wrote:

On Tue, Mar 8, 2016 at 9:46 AM, Yann Ylavic <ylavic@gmail.com> wrote:

On Tue, Mar 8, 2016 at 9:28 AM, Jan Kaluža <jkal...@redhat.com> wrote:


I have chosen FreeListen over the flags


FWIW, should we take the YAD path, I'd prefer ListenFree (over
FreeListen) to emphasize the "Listen directive family" with a common
prefix...


Thinking more about this, I think I second Jim on the wish to have a
single Listen directive with some parameter like
"options=freebind,backlog:4095,reuseport,...".


Thinking about right syntax for options...

I would personally like something like "Listen [IP-address:]portnumber
[protocol] [option1] [option2] ...". Do we have list of supported
protocols by Listen directive, or we support whatever protocol is there?

If we have explicit list of protocols, then the protocols itself could
become an options.

If not, can it be acceptable, that you always have to define protocol
when you wan to use options?


I've implemented the approach described in the question above ^. Please see
the attached patch and share your opinions.


The syntax to enable IP_FREEBIND currently is:

Listen 192.168.0.1:80 http freebind

Regards,
Jan Kaluza



Otherwise I can always implement Yann's idea with "Listen
[IP-address:]portnumber [protocol] [options=[option1,option2,...]]".

Regards,
Jan Kaluza



We could then add whatever (new) IP option more easily (less docs work...)
and maybe deprecate ListenBacklog.

For example, the "reuseport" (SO_REUSEPORT) option seems to be usable
w/o the current buckets mechanism in the latest Linux kernels, so indeed
we may add more and more options there...





Index: docs/manual/mod/mpm_common.xml
===
--- docs/manual/mod/mpm_common.xml	(revision 1733461)
+++ docs/manual/mod/mpm_common.xml	(working copy)
@@ -171,7 +171,7 @@
 Listen
 IP addresses and ports that the server
 listens to
-Listen [IP-address:]portnumber [protocol]
+Listen [IP-address:]portnumber [protocol] [option] [...]
 server config
 eventworker
 preforkmpm_winnt
@@ -240,6 +240,17 @@
   error message.
 
 
+The optional option arguments are used to configure
+   system-specific socket options:
+
+
+OptionDescription
+freebindSets the IP_FREEBIND socket option
+on systems where this option is available. It is therefore possible
+to start the server even when the particular IP address set in the
+Listen directive is not configured yet.
+
+
 
 DNS Issues
 Setting which addresses and ports Apache HTTP Server
Index: include/ap_listen.h
===
--- include/ap_listen.h	(revision 1733461)
+++ include/ap_listen.h	(working copy)
@@ -38,6 +38,9 @@
 typedef struct ap_listen_rec ap_listen_rec;
 typedef apr_status_t (*accept_function)(void **csd, ap_listen_rec *lr, apr_pool_t *ptrans);
 
+/* Whether to use APR_SO_FREEBIND */
+#define AP_LISTEN_FREEBIND 1
+
 /**
  * @brief Apache's listeners record.
  *
@@ -71,6 +74,10 @@
 const char* protocol;
 
 ap_slave_t *slave;
+/**
+ * Options to configure socket features.
+ */
+int options;
 };
 
 /**
@@ -149,7 +156,7 @@
 AP_INIT_TAKE1("ListenCoresBucketsRatio", ap_set_listencbratio, NULL, RSRC_CONF, \
   "Ratio between the number of CPU cores (online) and the number of listeners buckets"), \
 AP_INIT_TAKE_ARGV("Listen", ap_set_listener, NULL, RSRC_CONF, \
-  "A port number or a numeric IP address and a port number, and an optional protocol"), \
+  "A port number or a numeric IP address and a port number, an optional protocol, and options"), \
 AP_INIT_TAKE1("SendBufferSize", ap_set_send_buffer_size, NULL, RSRC_CONF, \
   "Send buffer size in bytes"), \
 AP_INIT_TAKE1("ReceiveBufferSize", ap_set_receive_buffer_size, NULL, \
Index: server/listen.c
===
--- server/listen.c	(revision 1733461)
+++ server/listen.c	(working copy)
@@ -68,7 +68,8 @@
 #endif
 
 /* TODO: make_sock is just begging and screaming for APR abstraction */
-static apr_status_t make_sock(apr_pool_t *p, ap_listen_rec *server, int do_bind_listen)
+static apr_status_t make_sock(apr_pool_t *p, ap_listen_rec *server,
+  int do_bind_listen)
 {
 apr_socket_t *s = server->sd;
 int one = 1;
@@ -162,6 +163,22 @@
 }
 #endif
 
+   if (server->options & AP_LISTEN_FREEBIND) {
+#if defined(APR_SO_FREEBIND)
+stat = apr_socket_opt_set(s, APR_SO_FREEBIND, one);
+if (stat != APR_SUCCESS) {
+ap_log_perror(APLOG_MARK, APLOG_WARNING, stat, p, APLOGNO()
+  "make_sock: failed to set set IP_FREEBIND socket option");
+return stat;

Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-08 Thread Jan Kaluža

On 03/08/2016 10:25 AM, Yann Ylavic wrote:

On Tue, Mar 8, 2016 at 9:46 AM, Yann Ylavic <ylavic@gmail.com> wrote:

On Tue, Mar 8, 2016 at 9:28 AM, Jan Kaluža <jkal...@redhat.com> wrote:


I have chosen FreeListen over the flags


FWIW, should we take the YAD path, I'd prefer ListenFree (over
FreeListen) to emphasize the "Listen directive family" with a common
prefix...


Thinking more about this, I think I second Jim on the wish to have a
single Listen directive with some parameter like
"options=freebind,backlog:4095,reuseport,...".


Thinking about right syntax for options...

I would personally like something like "Listen [IP-address:]portnumber 
[protocol] [option1] [option2] ...". Do we have list of supported 
protocols by Listen directive, or we support whatever protocol is there?


If we have explicit list of protocols, then the protocols itself could 
become an options.


If not, can it be acceptable, that you always have to define protocol 
when you wan to use options?


Otherwise I can always implement Yann's idea with "Listen 
[IP-address:]portnumber [protocol] [options=[option1,option2,...]]".


Regards,
Jan Kaluza



We could then add whatever (new) IP option more easily (less docs work...)
and maybe deprecate ListenBacklog.

For example, the "reuseport" (SO_REUSEPORT) option seems to be usable
w/o the current buckets mechanism in the latest Linux kernels, so indeed
we may add more and more options there...





Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-08 Thread Jan Kaluža

On 03/08/2016 06:32 AM, William A Rowe Jr wrote:

On Mar 7, 2016 21:59, "Yehuda Katz" <yeh...@ymkatz.net
<mailto:yeh...@ymkatz.net>> wrote:
 >
 > On Mon, Mar 7, 2016 at 9:06 PM, William A Rowe Jr
<wr...@rowe-clan.net <mailto:wr...@rowe-clan.net>> wrote:
 >>
 >> On Mar 7, 2016 13:54, "Jan Kaluža" <jkal...@redhat.com
<mailto:jkal...@redhat.com>> wrote:
 >> >
 >> > On 03/07/2016 04:17 PM, Jim Jagielski wrote:
 >> >>
 >> >> Instead of adding YAD (yet another directive ;) ), would it
 >> >> be possible to somehow leverage Listen itself, maybe with some
 >> >> sort of flag?
 >> >
 >> >
 >> > Yes, that would be quite possible. I was thinking about that way,
but I have chosen YAD as a first approach. If you think adding a flag to
Listen is a better way, I can rework my patch.
 >> >
 >> > Regards,
 >> > Jan Kaluza
 >> >
 >>
 >> Reviewing the behavior, an unadorned new directive makes more sense
to me than cluttering Listen, which already takes one optional protocol
behavior argument.
 >>
 >> The same handler can process both directives.
 >
 > A benefit of using a flag is: what happens if the default changes at
some point? YAD would need to be created to go back to the old behavior
- which would make things more complicated.  Is it possible to use
something like a plus/minus or question mark symbol with each
address:port which would allow the default to be changed at some future
point without requiring having this discussion again?
 >
 > Example:
 > Listen ?192.170.2.1:80 <http://192.170.2.1:80>  # Use IP_FREEBIND to
listen when IP is available (new behavior)
 > Listen +192.170.2.5:8000 <http://192.170.2.5:8000>  # Require IP to
be available (old behavior)
 > Listen [2001:db8::a00:20ff:fea7:ccea]:80  # Current default behavior
(old)

Interesting point, I raised the same consideration for piped logging
syntax some years ago.  But at that time the consensus was that the
default would change with the next major version.

I see a small probability that this would become the default behavior, from the
perspective of robustness alone.  I see that you simply suggest a
tri-state (and a trailing '?' vs. '!' makes more intuitive sense to me),
but I'm not seeing a consensus yet that the default would change in the
future.


I think that the default behaviour should not be changed, at least not
within a major version.


I have chosen FreeListen over the flags because I personally think it's
better to have two directives than a single one with more and more flags
changing its behaviour. But I care more about the feature itself than about
the way it is enabled in the end :).


Jan Kaluza



The same can be accomplished by adding a third 'FixedListen' directive,
leaving Listen itself free to switch behaviors.

But from the migration perspective, I'd be loath to switch any default
Listen behavior without the admin's explicit intervention.  POLS.





Re: [PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-07 Thread Jan Kaluža

On 03/07/2016 04:17 PM, Jim Jagielski wrote:

Instead of adding YAD (yet another directive ;) ), would it
be possible to somehow leverage Listen itself, maybe with some
sort of flag?


Yes, that would be quite possible. I was thinking about that way, but I
have chosen YAD as a first approach. If you think adding a flag to Listen
is a better way, I can rework my patch.


Regards,
Jan Kaluza


On Mar 7, 2016, at 6:41 AM, Jan Kaluža <jkal...@redhat.com> wrote:

Hi,

attached patch adds new "FreeListen" directive. The difference between "Listen" and 
"FreeListen" is that "FreeListen" sets the IP_FREEBIND socket option on platforms where this is 
available.

It is therefore possible to start the server even when the particular IP address set in
"FreeListen" is not configured yet.

This is needed for httpd startup with systemd when one wants to bind to a particular
IP address. There is no way to start httpd only after the IP address has been configured
in systemd, and according to the systemd developers, applications should become more
robust in handling network changes like that. The full reasoning is explained here [1].

The patch currently needs the latest APR trunk, but it could be rewritten to set
IP_FREEBIND directly instead of using the APR API (we already do that for the
REUSEADDR socket option).

Do you think FreeListen is a good name for this feature, or would you
name/implement it differently?

[1] https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/

Regards,
Jan Kaluza







[PATCH] Add "FreeListen" to support IP_FREEBIND

2016-03-07 Thread Jan Kaluža

Hi,

attached patch adds new "FreeListen" directive. The difference between 
"Listen" and "FreeListen" is that "FreeListen" sets the IP_FREEBIND 
socket option on platforms where this is available.


It is therefore possible to start the server even when the particular IP
address set in "FreeListen" is not configured yet.


This is needed for httpd startup with systemd when one wants to bind to a
particular IP address. There is no way to start httpd only after the IP
address has been configured in systemd, and according to the systemd
developers, applications should become more robust in handling network
changes like that. The full reasoning is explained here [1].


The patch currently needs the latest APR trunk, but it could be rewritten to
set IP_FREEBIND directly instead of using the APR API (we already do that for
the REUSEADDR socket option).


Do you think FreeListen is a good name for this feature, or would you
name/implement it differently?


[1] https://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/

Regards,
Jan Kaluza
Index: docs/manual/mod/mpm_common.xml
===
--- docs/manual/mod/mpm_common.xml	(revision 1733461)
+++ docs/manual/mod/mpm_common.xml	(working copy)
@@ -104,6 +104,31 @@
 
 
 
+FreeListen
+IP addresses and ports that the server
+listens to
+FreeListen [IP-address:]portnumber [protocol]
+server config
+eventworker
+preforkmpm_winnt
+mpm_netwarempmt_os2
+
+
+
+The FreeListen directive has the same meaning
+as the Listen directive. The difference between
+these two directives is that the FreeListen directive
+sets the IP_FREEBIND socket option on platforms where this
+option is available.
+
+It is therefore possible to start the server even when the particular IP
+address set in the FreeListen directive is not
+configured yet.
+
+Listen
+
+
+
 GracefulShutdownTimeout
 Specify a timeout after which a gracefully shutdown server
 will exit.
@@ -241,6 +266,7 @@
 
 
 
+FreeListen
 DNS Issues
 Setting which addresses and ports Apache HTTP Server
 uses
Index: include/ap_listen.h
===
--- include/ap_listen.h	(revision 1733461)
+++ include/ap_listen.h	(working copy)
@@ -71,6 +71,10 @@
 const char* protocol;
 
 ap_slave_t *slave;
+/**
+ * Whether to use APR_SO_FREEBIND.
+ */
+int free_bind;
 };
 
 /**
@@ -148,8 +152,10 @@
   "Maximum length of the queue of pending connections, as used by listen(2)"), \
 AP_INIT_TAKE1("ListenCoresBucketsRatio", ap_set_listencbratio, NULL, RSRC_CONF, \
   "Ratio between the number of CPU cores (online) and the number of listeners buckets"), \
-AP_INIT_TAKE_ARGV("Listen", ap_set_listener, NULL, RSRC_CONF, \
+AP_INIT_TAKE_ARGV("Listen", ap_set_listener, (void *) 0, RSRC_CONF, \
   "A port number or a numeric IP address and a port number, and an optional protocol"), \
+AP_INIT_TAKE_ARGV("FreeListen", ap_set_listener, (void *) 1, RSRC_CONF, \
+  "A port number or a numeric IP address and a port number, and an optional protocol"), \
 AP_INIT_TAKE1("SendBufferSize", ap_set_send_buffer_size, NULL, RSRC_CONF, \
   "Send buffer size in bytes"), \
 AP_INIT_TAKE1("ReceiveBufferSize", ap_set_receive_buffer_size, NULL, \
Index: server/listen.c
===
--- server/listen.c	(revision 1733461)
+++ server/listen.c	(working copy)
@@ -68,7 +68,8 @@
 #endif
 
 /* TODO: make_sock is just begging and screaming for APR abstraction */
-static apr_status_t make_sock(apr_pool_t *p, ap_listen_rec *server, int do_bind_listen)
+static apr_status_t make_sock(apr_pool_t *p, ap_listen_rec *server,
+  int do_bind_listen)
 {
 apr_socket_t *s = server->sd;
 int one = 1;
@@ -162,6 +163,21 @@
 }
 #endif
 
+   if (server->free_bind) {
+#if defined(APR_SO_FREEBIND)
+stat = apr_socket_opt_set(s, APR_SO_FREEBIND, one);
+if (stat != APR_SUCCESS) {
+ap_log_perror(APLOG_MARK, APLOG_WARNING, stat, p, APLOGNO(00071)
+  "make_sock: failed to set set IP_FREEBIND socket option");
+return stat;
+}
+#else
+ap_log_perror(APLOG_MARK, APLOG_WARNING, stat, p, APLOGNO(00071)
+"make_sock: FreeListen not supported by the system");
+return APR_ENOTIMPL;
+#endif
+}
+
 if (do_bind_listen) {
 #if APR_HAVE_IPV6
 if (server->bind_addr->family == APR_INET6) {
@@ -348,6 +364,7 @@
 rec = apr_palloc(process->pool, sizeof(ap_listen_rec));
 rec->active = 0;
 rec->next = 0;
+rec->free_bind = 0;
 
 rv = apr_os_sock_make(>sd, , process->pool);
 if (rv != APR_SUCCESS) {
@@ -406,7 +423,7 @@
 
 static const char *alloc_listener(process_rec *process, char *addr,
   apr_port_t port, const char* proto,
-  void *slave)
+  void 

Re: mod_mime_magic, gzipped tarballs and Docker

2016-01-19 Thread Jan Kaluža

On 01/18/2016 09:22 PM, William A Rowe Jr wrote:

On Mon, Jan 18, 2016 at 5:13 AM, Jan Kaluža <jkal...@redhat.com
<mailto:jkal...@redhat.com>> wrote:

On 01/08/2016 07:44 PM, William A Rowe Jr wrote:

Do we have to repeat the softmagic call if checkzmagic resolves to
x-gzip/x-deflate and the internal content type needs to be
deciphered?


That's true.

I think that Yann's patch moving the zmagic call after the softmagic
call would just mean that zmagic won't be called at all if the
softmagic recognizes the file format.

We would have to check the softmagic result by something similar to
"magic_rsl_to_request" and if it's type we want to decompress, we
would have to run zmagic.

Before really trying to do so, I want to ask if I understand the
reasoning right. Are we considering this approach because users could then
remove x-gzip from the magic file and use that to disable
the mod_mime_magic behaviour discussed in this thread?


Yes... and in a more flexible manner that allows us to override any
compression mode, not only deflate, by simply tweaking the magic entries.

Putting magic file entry overrides into the mod_mime_magic directives is
a lot more interesting than simply toggling compression recognition.


Okay... I have something half-way right now.

The attached patch changes the order of softmagic and zmagic. When softmagic
recognizes a file, zmagic is called. The zmagic method checks whether the
file has been recognized as "x-gzip"; if so, it tries to uncompress it and
calls "tryit" again with the uncompressed data, as before.


I'm now playing with an idea to extend the magic file so we wouldn't
have to keep the "x-gzip" -> "gzip -dcq" relation hardcoded; it
could be defined in the magic file instead.
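Purely as an illustration of that idea, a hypothetical magic entry extended
with a decompression hint (this syntax does not exist today; the !decompress
field is made up here):

  # hypothetical: tell zmagic how to uncompress before re-running the tests
  0   string   \037\213   application/octet-stream   x-gzip   !decompress=gzip -dcq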


William, is the patch close to the behaviour you were describing?

Regards,
Jan Kaluza

Index: modules/metadata/mod_mime_magic.c
===
--- modules/metadata/mod_mime_magic.c	(revision 1725450)
+++ modules/metadata/mod_mime_magic.c	(working copy)
@@ -463,6 +463,8 @@
 magic_rsl *head;  /* result string list */
 magic_rsl *tail;
 unsigned suf_recursion;   /* recursion depth in suffix check */
+char *type;   /* Content-Type cached value */
+char *encoding;   /* Content-Encoding cached value */
 } magic_req_rec;
 
 /*
@@ -534,6 +536,8 @@
   sizeof(magic_req_rec));
 
 req_dat->head = req_dat->tail = (magic_rsl *) NULL;
+req_dat->type = NULL;
+req_dat->encoding = NULL;
ap_set_module_config(r->request_config, &mime_magic_module, req_dat);
 return req_dat;
 }
@@ -656,13 +660,26 @@
 return result;
 }
 
-/* states for the state-machine algorithm in magic_rsl_to_request() */
+static int magic_rsl_clear(request_rec *r)
+{
+magic_req_rec *req_dat = (magic_req_rec *)
+ap_get_module_config(r->request_config, &mime_magic_module);
+req_dat->head = NULL;
+req_dat->tail = NULL;
+req_dat->type = NULL;
+req_dat->encoding = NULL;
+
+/* success */
+return 0;
+}
+
+/* states for the state-machine algorithm in magic_rsl_parse() */
 typedef enum {
 rsl_leading_space, rsl_type, rsl_subtype, rsl_separator, rsl_encoding
 } rsl_states;
 
 /* process the RSL and set the MIME info in the request record */
-static int magic_rsl_to_request(request_rec *r)
+static int magic_rsl_parse(request_rec *r, char **type, char **encoding)
 {
 int cur_frag, /* current fragment number/counter */
 cur_pos,  /* current position within fragment */
@@ -673,7 +690,6 @@
 encoding_pos, /* content encoding starting point: position */
 encoding_len; /* content encoding length */
 
-char *tmp;
 magic_rsl *frag;  /* list-traversal pointer */
 rsl_states state;
 
@@ -686,6 +702,13 @@
 return DECLINED;
 }
 
+/* check for cached values filled in by previous run */
+if (req_dat->type) {
+*type = req_dat->type;
+*encoding = req_dat->encoding;
+return OK;
+}
+
 /* start searching for the type and encoding */
 state = rsl_leading_space;
 type_frag = type_pos = type_len = 0;
@@ -785,24 +808,24 @@
 }
 
 /* save the info in the request record */
-tmp = rsl_strdup(r, type_frag, type_pos, type_len);
+*type = rsl_strdup(r, type_frag, type_pos, type_len);
 /* XXX: this could be done at config time I'm sure... but I'm
  * confused by all this magic_rsl stuff. -djg */
-ap_content_type_tolower(tmp);
-ap_set_content_type(r, tmp);
+ap_content_type_tolower(*type);
+req_dat->type = *type;
 
 if (state == rsl_encoding) {
-tmp = rsl_strdup(r, encoding_frag,
+*encoding = rsl_strdup(r, encoding_frag,

Re: mod_mime_magic, gzipped tarballs and Docker

2016-01-18 Thread Jan Kaluža

On 01/08/2016 07:44 PM, William A Rowe Jr wrote:

Do we have to repeat the softmagic call if checkzmagic resolves to
x-gzip/x-deflate and the internal content type needs to be deciphered?


That's true.

I think that Yann's patch moving the zmagic call after the softmagic 
call would just mean that zmagic won't be called at all if the softmagic 
recognizes the file format.


We would have to check the softmagic result by something similar to 
"magic_rsl_to_request" and if it's type we want to decompress, we would 
have to run zmagic.


Before really trying to do so, I want to ask if I understand the
reasoning right. Are we considering this approach because users could then
remove x-gzip from the magic file and use that to disable the
mod_mime_magic behaviour discussed in this thread?


Regards,
Jan Kaluza


If so, a tweak to this patch sounds like a winner, and could selectively
recognize different inflation streams including bzip2 etc.

On Fri, Jan 8, 2016 at 10:57 AM, Yann Ylavic > wrote:

On Fri, Jan 8, 2016 at 5:30 PM, Yann Ylavic > wrote:
> On Fri, Jan 8, 2016 at 3:17 PM, William A Rowe Jr > wrote:
>>
>> Agreed it is configuration, but can't we simply tweak our recommended
>> conf/magic file???
>>
>> # standard unix compress
>> # Enable the alternate line below to present gzip content as a transfer encoded
>> # stream of the underlying content;
>> #0   string  \037\235   application/octet-stream   x-compress
>> 0   string  \037\235   application/octet-stream
>>
>> # gzip (GNU zip, not to be confused with [Info-ZIP/PKWARE] zip archiver)
>> # Enable the alternate line below to present gzip content as a transfer encoded
>> # stream of the underlying content;
>> #0   string  \037\213   application/octet-stream   x-gzip
>> 0   string  \037\213   application/octet-stream
>>
>> WDYT?
>
> I wasn't aware of the conf/magic file, but it seems it is not used by zmagic(),
> which hardcodes its types.
> I may be missing something though...

Looks like it is already like this in conf/magic of 2.4.x and 2.2.x.
My bet is that zmagic() is run before softmagic(), so the magic file
is of no help.

Maybe a simpler patch could be:

Index: modules/metadata/mod_mime_magic.c
===
--- modules/metadata/mod_mime_magic.c(revision 1723283)
+++ modules/metadata/mod_mime_magic.c(working copy)
@@ -880,14 +885,6 @@ static int tryit(request_rec *r, unsigned char *bu
   int checkzmagic)
  {
  /*
- * Try compression stuff
- */
-if (checkzmagic == 1) {
-if (zmagic(r, buf, nb) == 1)
-return OK;
-}
-
-/*
   * try tests in /etc/magic (or surrogate magic file)
   */
  if (softmagic(r, buf, nb) == 1)
@@ -900,6 +897,14 @@ static int tryit(request_rec *r, unsigned char *bu
  return OK;

  /*
+ * Try compression stuff
+ */
+if (checkzmagic == 1) {
+if (zmagic(r, buf, nb) == 1)
+return OK;
+}
+
+/*
   * abandon hope, all ye who remain here
   */
  return DECLINED;
--






Re: mod_mime_magic, gzipped tarballs and Docker

2016-01-08 Thread Jan Kaluža

On 01/08/2016 08:49 AM, Jan Kaluža wrote:

Hi,

it seems the Docker client has a problem handling httpd responses [1] when
you run the Docker server behind httpd acting as a reverse proxy. It is
caused by "mod_mime_magic" adding the following Content-Type and
Content-Encoding to gzipped tarballs sent as a response by the Docker server:


Just to correct myself a bit: I have found out it's not a reverse-proxy
case - in that case, mod_mime_magic would not be in the loop. They are
just serving Docker data using the following config:


https://github.com/rbarlow/pulp_docker/blob/master/plugins/etc/httpd/conf.d/pulp_docker.conf


Content-Type: application/x-tar
Content-Encoding: x-gzip

The Docker client expects to receive gzipped tarballs, but because it
receives Content-Encoding: x-gzip, it decompresses the response and
ends up with just a plain tar (at least that's how I understand it).

Strictly speaking, httpd is probably right here according to RFC:

"""
The Content-Encoding entity-header field is used as a modifier to the
media-type. When present, its value indicates what additional content
codings have been applied to the entity-body, and thus what decoding
mechanisms must be applied in order to obtain the media-type referenced
by the Content-Type header field.
"""

So, mod_mime_magic is saying here that the body is a tarball encoded
with gzip.

But I think even though we are right here, Content-Encoding should be used
only when httpd itself encodes the original content; otherwise it
confuses clients (I've asked a few people and it confuses some web
browsers too - I can get more info if needed).

Maybe we could stop setting Content-Encoding in mod_mime_magic and just
use Content-Type?

[1] https://github.com/docker/docker/issues/17595

Regards,
Jan Kaluza


Regards,
Jan Kaluza



mod_mime_magic, gzipped tarballs and Docker

2016-01-07 Thread Jan Kaluža

Hi,

it seems the Docker client has a problem handling httpd responses [1] when
you run the Docker server behind httpd acting as a reverse proxy. It is
caused by "mod_mime_magic" adding the following Content-Type and
Content-Encoding to gzipped tarballs sent as a response by the Docker server:


Content-Type: application/x-tar
Content-Encoding: x-gzip

The Docker client expects to receive gzipped tarballs, but because it
receives Content-Encoding: x-gzip, it decompresses the response and
ends up with just a plain tar (at least that's how I understand it).


Strictly speaking, httpd is probably right here according to RFC:

"""
The Content-Encoding entity-header field is used as a modifier to the 
media-type. When present, its value indicates what additional content 
codings have been applied to the entity-body, and thus what decoding 
mechanisms must be applied in order to obtain the media-type referenced 
by the Content-Type header field.

"""

So, mod_mime_magic is saying here that the body is a tarball encoded
with gzip.


But I think even though we are right here, Content-Encoding should be used
only when httpd itself encodes the original content; otherwise it
confuses clients (I've asked a few people and it confuses some web
browsers too - I can get more info if needed).


Maybe we could stop setting Content-Encoding in mod_mime_magic and just 
use Content-Type?


[1] https://github.com/docker/docker/issues/17595

Regards,
Jan Kaluza


Re: Shouldn't ap_get_remote_host use req->useragent_addr?

2016-01-07 Thread Jan Kaluža

On 01/07/2016 04:06 PM, Eric Covener wrote:

On Thu, Jan 7, 2016 at 9:25 AM, Jan Kaluža <jkal...@redhat.com> wrote:

When httpd is running behind a reverse proxy and mod_remoteip is configured,
the correct client IP is logged (using %a in the LogFormat), but the proxy
IP is used by 'Require host .mydomain.net'. I would expect the host based on
IP provided by mod_remoteip to be used here.

Is this expected behaviour? Maybe the ap_get_remote_host method should use
req->useragent_addr instead of conn->client_addr to obtain the REMOTE_HOST.


what about "Require ip ..."?



This would work, but we should clarify that in the documentation then,
because both "Require ip" and "Require host" use the term "remote client" in
their descriptions, yet for "Require ip" mod_remoteip is respected,
while for "Require host" it is not.


I think this is really confusing.
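For illustration, a hedged config sketch of the difference being described
(the directives are real; the addresses, header name and path are examples):

  # mod_remoteip rewrites the useragent IP from the proxy-supplied header
  RemoteIPHeader X-Forwarded-For
  RemoteIPInternalProxy 10.0.0.2

  <Directory "/var/www/secure">
      Require ip 192.0.2.0/24        # evaluated against the rewritten client IP
      # Require host .mydomain.net   # still resolves the peer (proxy) address today
  </Directory>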

Regards,
Jan Kaluza



Re: Weird behaviour with mod_ssl and SSLCryptoDevice

2015-12-15 Thread Jan Kaluža

On 12/14/2015 02:12 PM, jean-frederic clere wrote:

Hi,

I am sure I am doing something wrong, but when using a dummy crypto
device to recreate a customer issue I am getting a similar issue with
httpd trunk; I am nearly sure someone would have complained here if
that were generally the case.


Hi,

I think I've just fixed that in <http://svn.apache.org/r1720129>. I will
also propose that for 2.4.x and 2.2.x.


Regards,
Jan Kaluza


Comments

Jean-Frederic

The stack:
+++
(gdb) bt
#0  0x7fac11198a98 in raise () from /lib64/libc.so.6
#1  0x7fac1119a69a in abort () from /lib64/libc.so.6
#2  0x7fac01f27e33 in dummy_init (e=) at e_dummy.c:146
#3  0x7fac05a69509 in test_rc4_init_key () from /lib64/libcrypto.so.10
#4  0x7fac05b3374c in ?? () from /lib64/libcrypto.so.10
#5  0x7fac0001 in ?? ()
#6  0x00e37138 in ?? ()
#7  0x00e62348 in ?? ()
#8  0x7fac11d9d68a in apr_pool_cleanup_register (p=0x7fac05a6a122
, data=0xfe4678, plain_cleanup_fn=0x1,
child_cleanup_fn=0x7fac05dab5c0)
 at memory/unix/apr_pools.c:2218
#9  0x00f094c0 in ?? ()
#10 0x7fac0625bf60 in ?? () from /home/jfclere/APACHE/modules/mod_ssl.so
#11 0x00e60938 in ?? ()
#12 0x00e62348 in ?? ()
#13 0x7fac05a6ac58 in BUF_memdup () from /lib64/libcrypto.so.10
#14 0x7fac06039af2 in ssl_init_Engine (s=0xe60938,
s@entry=0x7fffb9ad2e90, p=p@entry=0xe37138) at ssl_engine_init.c:386
#15 0x7fac0603bdaf in ssl_init_Module (p=0xe37138, plog=, ptemp=0xf09780, base_server=0x7fffb9ad2e90) at ssl_engine_init.c:242
#16 0x00455153 in ap_run_post_config
(pconf=pconf@entry=0xe37138, plog=0xea0778, ptemp=0xe62348, s=0xe60938)
at config.c:103
#17 0x0042c575 in main (argc=3, argv=0x7fffb9ad3158) at main.c:788
+++





Re: Weird behaviour with mod_ssl and SSLCryptoDevice

2015-12-15 Thread Jan Kaluža

On 12/15/2015 02:16 PM, Yann Ylavic wrote:

Hi Jan,

On Tue, Dec 15, 2015 at 12:51 PM, Jan Kaluža <jkal...@redhat.com> wrote:


I think I've just fixed that in <http://svn.apache.org/r1720129>. I will
also propose that for 2.4.x and 2.2.x.


Shouldn't we do the same for ecparams below?


Probably yes; I was just checking the arguments which get passed to the
"SSL_CTX_set_*" functions. I think you are right that we should call
EC_GROUP_free there.
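For illustration, a hedged sketch of the kind of cleanup being discussed
(the variable names loosely follow ssl_engine_init.c and are assumptions,
not the committed change; error handling omitted):

  EC_GROUP *ecparams = PEM_read_bio_ECPKParameters(bio, NULL, NULL, NULL);
  if (ecparams) {
      EC_KEY *eckey = EC_KEY_new_by_curve_name(EC_GROUP_get_curve_name(ecparams));
      SSL_CTX_set_tmp_ecdh(ctx, eckey);
      EC_KEY_free(eckey);
      EC_GROUP_free(ecparams);   /* don't leak the parsed parameters */
  }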


Jan Kaluza


Regards,
Yann.





"httpd -X" segfaults with 2.4.17

2015-10-16 Thread Jan Kaluža

Hi,

httpd 2.4.17 segfaults when used with the prefork MPM (and probably also
with other MPMs) and the -X option, since r1705492.


The crash happens in the following call in prefork.c (and probably also 
worker.c and so on):


ap_mpm_pod_check(my_bucket->pod)

pod is NULL and later dereferenced.

The pod is NULL because of the following:

if (!one_process && /* no POD in one_process mode */
(rv = ap_mpm_pod_open(pconf, &all_buckets[i].pod))) {

Since the previous code opened the pod every time, I (right now) have no
idea how it is supposed to work after that patch, so while I'm going to
look deeper at it, I'm sending this email to the list in case other
people have a better idea of how it is supposed to work after the
SO_REUSEPORT change.
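For the record, a purely defensive guard would look like the sketch below
(an illustration only, not necessarily the proper fix for the -X case):

  if (my_bucket->pod != NULL
      && ap_mpm_pod_check(my_bucket->pod) == APR_SUCCESS) {
      die_now = 1;   /* selected for exit via the pipe of death */
  }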


Regards,
Jan Kaluza


Re: PR 58267: Regression in 2.2.31 caused by r1680920

2015-08-25 Thread Jan Kaluža

On 08/24/2015 11:12 PM, Yann Ylavic wrote:

On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic ylavic@gmail.com wrote:


On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža jkal...@redhat.com wrote:


2) Increment proxy_lb_workers according to the number of workers in the
balancer when using ProxyPass /foobar/ Balancer://foobar/ in the VirtualHost.
The scoreboard would then have the right size and ap_proxy_set_scoreboard_lb
would not fail.


I think we can do this quite easily in merge_proxy_config(), by
incrementing proxy_lb_workers for each of base->balancers' workers. I did
not test it yet though.


I tested the below which seems to work.


Hm, this reserves the slots in the scoreboard even when the balancers are
not used in the virtualhost, or am I wrong?


I originally thought about increasing proxy_lb_workers only when they 
are used in the ProxyPass* in the vhost.



Index: modules/proxy/mod_proxy.c
===
--- modules/proxy/mod_proxy.c(revision 1697358)
+++ modules/proxy/mod_proxy.c(working copy)
@@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t *p, s

 static void * merge_proxy_config(apr_pool_t *p, void *basev, void *overridesv)
 {
+int i;
 proxy_server_conf *ps = apr_pcalloc(p, sizeof(proxy_server_conf));
 proxy_server_conf *base = (proxy_server_conf *) basev;
 proxy_server_conf *overrides = (proxy_server_conf *) overridesv;
@@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t *p, vo
 ps->allowed_connect_ports = apr_array_append(p,
base->allowed_connect_ports, overrides->allowed_connect_ports);
 ps->workers = apr_array_append(p, base->workers, overrides->workers);
 ps->balancers = apr_array_append(p, base->balancers, overrides->balancers);
+/* The balancers inherited from base don't have their members reserved on
+ * the scoreboard_lb for this server, account for them now.
+ */
+for (i = 0; i < base->balancers->nelts; ++i) {
+proxy_balancer *balancer = (proxy_balancer *)base->balancers->elts + i;
+proxy_lb_workers += balancer->workers->nelts;
+}
 ps->forward = overrides->forward ? overrides->forward : base->forward;
 ps->reverse = overrides->reverse ? overrides->reverse : base->reverse;

--

Please note that since all the workers would then really be accounted for in
the scoreboard, configurations like the one of PR 58267 (with
inherited balancers) would also need bigger SHMs than currently (but no
more than needed)...

WDYT?



Regards,
Jan Kaluza




Re: PR 58267: Regression in 2.2.31 caused by r1680920

2015-08-25 Thread Jan Kaluža
.
The patches proposed so far would not change this. I think in order to
revert to the old behaviour we need to
store with each worker the server_rec context in which it was created, e.g.
adding a const char * field to the worker that would be filled with
server->server_hostname. Then we could use this value for creating the
md5.

Regards

Rüdiger


-Original Message-
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Dienstag, 25. August 2015 10:23
To: dev@httpd.apache.org
Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920

On 08/24/2015 11:12 PM, Yann Ylavic wrote:

On Mon, Aug 24, 2015 at 5:51 PM, Yann Ylavic ylavic@gmail.com

wrote:


On Mon, Aug 24, 2015 at 4:47 PM, Jan Kaluža jkal...@redhat.com

wrote:


2) Increment proxy_lb_workers according to number of workers in

balancer

when using ProxyPass /foobar/ Balancer://foobar/ in the

VirtualHost.

The

scoreboard would have right size and ap_proxy_set_scoreboard_lb

would

not

fail then.


I think we can do this quite easily in merge_proxy_config(), by
incrementing proxy_lb_workers for each base-balancers-workers. I

did

not test it yet though.


I tested the below which seems to work.


Hm, this reserves the slots in scoreboard even when the balancers are
not used in the virtualhost, or am I wrong?

I originally thought about increasing proxy_lb_workers only when they
are used in the ProxyPass* in the vhost.


Index: modules/proxy/mod_proxy.c
===
--- modules/proxy/mod_proxy.c(revision 1697358)
+++ modules/proxy/mod_proxy.c(working copy)
@@ -1135,6 +1135,7 @@ static void * create_proxy_config(apr_pool_t *p, s

   static void * merge_proxy_config(apr_pool_t *p, void *basev, void *overridesv)
   {
+int i;
   proxy_server_conf *ps = apr_pcalloc(p, sizeof(proxy_server_conf));
   proxy_server_conf *base = (proxy_server_conf *) basev;
   proxy_server_conf *overrides = (proxy_server_conf *) overridesv;
@@ -1147,6 +1148,13 @@ static void * merge_proxy_config(apr_pool_t *p, vo
   ps->allowed_connect_ports = apr_array_append(p,
base->allowed_connect_ports, overrides->allowed_connect_ports);
   ps->workers = apr_array_append(p, base->workers, overrides->workers);
   ps->balancers = apr_array_append(p, base->balancers, overrides->balancers);
+/* The balancers inherited from base don't have their members reserved on
+ * the scoreboard_lb for this server, account for them now.
+ */
+for (i = 0; i < base->balancers->nelts; ++i) {
+proxy_balancer *balancer = (proxy_balancer *)base->balancers->elts + i;
+proxy_lb_workers += balancer->workers->nelts;
+}
   ps->forward = overrides->forward ? overrides->forward : base->forward;
   ps->reverse = overrides->reverse ? overrides->reverse : base->reverse;

--

Please note that since all the workers would then really be accounted for in
the scoreboard, configurations like the one of PR 58267 (with
inherited balancers) would also need bigger SHMs than currently (but no
more than needed)...

WDYT?



Regards,
Jan Kaluza







Re: PR 58267: Regression in 2.2.31 caused by r1680920

2015-08-25 Thread Jan Kaluža

On 08/25/2015 02:41 PM, Plüm, Rüdiger, Vodafone Group wrote:

Thanks for the pointer. It is missing because I removed it by accident when
excluding some debug code (which I had set up locally for analysing the issue)
from the patch I posted. I will post a proper version and, if you agree, put it
in STATUS for 2.2.x. As far as I can tell this change only applies to 2.2.x, so
it would be fine to propose it directly in STATUS without any trunk commit.


I agree.

Jan Kaluza


Regards

Rüdiger


-Original Message-
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Dienstag, 25. August 2015 14:15
To: dev@httpd.apache.org
Subject: Re: PR 58267: Regression in 2.2.31 caused by r1680920

On 08/25/2015 11:39 AM, Plüm, Rüdiger, Vodafone Group wrote:

How about the following patch? It uses the server_rec of the server that

originally created the configuration item.

This looks like a good strategy. I've verified that the patch fixes this
issue and does not break anything in brief testing.

What do you think, Yann?

Note that the id_server declaration is missing in the patch. Otherwise
it's OK.

Regards,
Jan Kaluza


Index: modules/proxy/proxy_util.c
===
--- modules/proxy/proxy_util.c  (revision 1697578)
+++ modules/proxy/proxy_util.c  (working copy)
@@ -1460,6 +1460,7 @@
   (*worker)->flush_packets = flush_off;
   (*worker)->flush_wait = PROXY_FLUSH_WAIT;
   (*worker)->smax = -1;
+(*worker)->server = conf->s;
   /* Increase the total worker count */
   proxy_lb_workers++;
   init_conn_pool(p, *worker);
@@ -1824,14 +1829,20 @@
   apr_md5_update(ctx, (unsigned char *)balancer->name,
                  strlen(balancer->name));
   }
-if (server) {
+if (worker->server) {
+id_server = worker->server;
+}
+else {
+id_server = server;
+}
+if (id_server) {
   server_addr_rec *addr;
   /* Assumes the unique identifier of a vhost is its address(es)
    * plus the ServerName:Port. Should two or more vhosts have this
    * same identifier, the first one would always be elected to
    * handle the requests, so this shouldn't be an issue...
    */
-for (addr = server->addrs; addr; addr = addr->next) {
+for (addr = id_server->addrs; addr; addr = addr->next) {
   char host_ip[64]; /* for any IPv[46] string */
   apr_sockaddr_ip_getbuf(host_ip, sizeof host_ip,
                          addr->host_addr);
@@ -1840,10 +1851,10 @@
   apr_md5_update(ctx, (unsigned char *)addr->host_port,
                  sizeof(addr->host_port));
   }
-apr_md5_update(ctx, (unsigned char *)server->server_hostname,
-               strlen(server->server_hostname));
-apr_md5_update(ctx, (unsigned char *)server->port,
-               sizeof(server->port));
+apr_md5_update(ctx, (unsigned char *)id_server->server_hostname,
+               strlen(id_server->server_hostname));
+apr_md5_update(ctx, (unsigned char *)id_server->port,
+               sizeof(id_server->port));
   }
   apr_md5_final(digest, ctx);

Index: modules/proxy/mod_proxy.c
===
--- modules/proxy/mod_proxy.c   (revision 1697578)
+++ modules/proxy/mod_proxy.c   (working copy)
@@ -1129,6 +1129,7 @@
   ps->badopt = bad_error;
   ps->badopt_set = 0;
   ps->pool = p;
+ps->s = s;

   return ps;
   }
@@ -1172,6 +1173,7 @@
   ps->proxy_status = (overrides->proxy_status_set == 0) ? base->proxy_status : overrides->proxy_status;
   ps->proxy_status_set = overrides->proxy_status_set || base->proxy_status_set;
   ps->pool = p;
+ps->s = overrides->s;
   return ps;
   }

Index: modules/proxy/mod_proxy.h
===
--- modules/proxy/mod_proxy.h   (revision 1697578)
+++ modules/proxy/mod_proxy.h   (working copy)
@@ -193,6 +193,7 @@
   } proxy_status; /* Status display options */
   char proxy_status_set;
   apr_pool_t *pool;   /* Pool used for allocating this struct */
+server_rec *s;  /* The server_rec where this configuration was created in */
   } proxy_server_conf;


@@ -369,6 +370,7 @@
   chardisablereuse_set;
   apr_interval_time_t conn_timeout;
   charconn_timeout_set;
+server_rec  *server;/* The server_rec where this configuration was created in */
   };

   /*


Regards

Rüdiger


-Original Message-
From: Plüm, Rüdiger, Vodafone Group
Sent: Dienstag, 25. August 2015 10:48
To: dev@httpd.apache.org
Subject: RE: PR 58267: Regression in 2.2.31 caused by r1680920

I think the current state

PR 58267: Regression in 2.2.31 caused by r1680920

2015-08-24 Thread Jan Kaluža

Hi,

unfortunately, r1680920 brought the undesired behavior described in PR
58267 to 2.2.x. The bug is well described in the PR, so I won't describe
it in this email.


I have tried to debug it and I think the problem is that we also use
server->server_hostname to compute the hash in
ap_proxy_set_scoreboard_lb. This hash is used to find the proper
ap_scoreboard field.


It all happens in mod_proxy.c:child_init's scope.

If the Proxy Balancer://foobar section has been defined, all the
BalancerMembers are initialized with a hash computed using the
global server->server_hostname.


Later, if ProxyPass /foobar/ Balancer://foobar/ has been used in
a VirtualHost, ap_proxy_initialize_worker_share is called again with
server->server_hostname set to the VirtualHost's one.


Now, the root of the error is that the scoreboard size is static (set to
proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT), but it is not
incremented when a ProxyPass with a balancer is used in a virtualhost.
This leads to a lack of space in the scoreboard when balancers are used in
multiple virtualhosts.


I think there are two possible fixes:

1) Do not use server->server_hostname when computing the hash which is used
to determine the right scoreboard field. I think this would fix the bug,
but I'm not sure what would happen in situations where you define 2
balancers with the same name in different virtualhosts...


On the other side, when there is a global Proxy balancer, it makes sense to
use the same worker->s for all the ProxyPass uses in virtualhosts.


2) Increment proxy_lb_workers according to the number of workers in the
balancer when using ProxyPass /foobar/ Balancer://foobar/ in the VirtualHost.
The scoreboard would then have the right size and ap_proxy_set_scoreboard_lb
would not fail.



Since 2.2.x should probably stay stable without big changes,
I'm asking the list for more opinions... I will try to implement a patch
for option 2) tomorrow and see if this really fixes the issue.


Regards,
Jan Kaluza


Re: PR 58267: Regression in 2.2.31 caused by r1680920

2015-08-24 Thread Jan Kaluža

On 08/24/2015 04:47 PM, Jan Kaluža wrote:

Hi,

unfortunately, the r1680920 brought undesired behavior described in PR
58267 to 2.2.x. The bug is well described in the PR, so I won't describe
it in this email.

I have tried to debug it and I think the problem is that we use also
server->server_hostname to compute the hash in the
ap_proxy_set_scoreboard_lb. This hash is used to find out proper
ap_scoreboard field.

It all happens in mod_proxy.c:child_init's scope.

If the Proxy Balancer://foobar has been defined, all the
BalancerMembers are initialized with the hash computed with usage of
global server->server_hostname.

Later, if the ProxyPass /foobar/ Balancer://foobar/ has been used in
the VirtualHost, ap_proxy_initialize_worker_share is called again with
server->server_hostname set to the VirtualHost's one.

Now, the root of the error is that the scoreboard size is static (set to
proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT), but it is not
incremented when ProxyPass with balancer is used in the virtualhost.
This leads to lack of space in scoreboard when Balancers are used in
multiple virtualhosts.

I think there are two possible fixes:

1) Do not use server->server_hostname when computing hash which is used
to determine right scoreboard field. I think this would fix this bug,
but I'm not sure what would happen in situations when you define 2
balancers with the same name in different virtualhosts...

On the other side, when there is a global Proxy balancer, it makes sense to
use the same worker->s for all the ProxyPass uses in virtualhosts.


It seems that 2.4.x uses just the name of the worker to determine the slot
for shared memory, so maybe it wouldn't be a problem for 2.2.x either...


Jan Kaluza


2) Increment proxy_lb_workers according to the number of workers in the
balancer when using ProxyPass /foobar/ Balancer://foobar/ in the VirtualHost.
The scoreboard would then have the right size and ap_proxy_set_scoreboard_lb
would not fail.


Since 2.2.x should probably stay stable without big changes,
I'm asking the list for more opinions... I will try to implement a patch
for option 2) tomorrow and see if this really fixes the issue.

Regards,
Jan Kaluza




Re: mod_ssl: How to react on default OpenSSL SSL_CTX_set_options?

2015-07-22 Thread Jan Kaluža

On 07/21/2015 04:07 PM, Yann Ylavic wrote:

On Tue, Jul 21, 2015 at 2:50 PM, Plüm, Rüdiger, Vodafone Group
ruediger.pl...@vodafone.com wrote:


I would go for 2.


+1



Done in http://svn.apache.org/r1692258.

Regards,
Jan Kaluza



mod_ssl: How to react on default OpenSSL SSL_CTX_set_options?

2015-07-21 Thread Jan Kaluža

Hi,

in Fedora, the OpenSSL maintainers are setting the SSL_OP_NO_SSLv2 and
SSL_OP_NO_SSLv3 options by default [1].


This disables both SSLv2 and SSLv3 by default in the SSLv23_method(), 
which is what mod_ssl uses when more than one version is requested.


The side effect of this change in OpenSSL is that some configurations
that attempt to explicitly enable SSLv3 don't work correctly.  While
the following enables SSLv3, as it uses SSLv3_method:


SSLProtocol +SSLv3

the following two do not work:

SSLProtocol +SSLv3 +TLSv1
SSLProtocol all -TLSv1.1 -TLSv1.2

We have following options now:

1. Clear the SSL_OP_NO_SSLvX flags when there is +SSLvX. That means
doing something like:


if (!(protocol & SSL_PROTOCOL_SSLV3)) {
SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv3);
} else {
SSL_CTX_clear_options(ctx, SSL_OP_NO_SSLv3);
}

That overwrites the defaults set by OpenSSL.

2. Same as 1., but print a warning that we are overwriting the system OpenSSL
settings.


3. Respect the defaults set by OpenSSL and print a warning that we
won't override them. That's probably silly if you really want to enable
SSLv3 just for httpd.



[1] http://pkgs.fedoraproject.org/cgit/openssl.git/commit/?id=80b5477

What would you choose? Or should that be handled differently?

Regards,
Jan Kaluza


Re: Using UPN from subjectAltName with SSLUserName

2015-07-10 Thread Jan Kaluža

On 06/29/2015 03:14 PM, Jan Pazdziora wrote:

On Mon, Jun 29, 2015 at 01:47:45PM +0200, Jan Pazdziora wrote:

On Sun, Jun 28, 2015 at 05:11:57PM +0200, Kaspar Brand wrote:

On 22.06.2015 10:37, Jan Pazdziora wrote:

Please find a new patch attached which I hope covers all the
parts you've outlined, for SSL_CLIENT_SAN_OTHER_msUPN_*.


Thanks. Your implementation assumes that only a single otherName form
(msUPN) needs to be supported, but I would prefer to code it in a
somewhat more extensible way.

Does the attached patch work for you? As a practical way of


Yes it does.

My only question (and comments below) is about passing the oid rather
than the nid through the functions. I can see the string id-on-dnsSRV
used twice in the patch in calls to OBJ_txt2nid and twice in calls to OBJ_txt2obj,
when ideally all that should be needed is one
OBJ_txt2nid(id-on-dnsSRV) and one OBJ_create if not found -- the rest
could be done by comparing integers (nids). Unless I'm missing something
about the oid/nid interaction.


Ah, now I see it -- you look at the semantics of oid to compare
value-type so nid would not be enough.

How about just passing char * and doing all the mapping logic
including possible OBJ_create in parse_otherName_value? My goal here
is to have all the hard work of determining the semantics isolated
in one place.

Please see patch attached.


Hi Kaspar,

please could you find some time to review this patch?

I can say that both proposed patches (yours and Jan's) are equivalent
when it comes to implemented functionality. Unfortunately, I don't
have the OpenSSL knowledge to comment on the differences at a technical
level, but I would also like to see this functionality in the trunk :).


Regards,
Jan Kaluza



Re: Using UPN from subjectAltName with SSLUserName

2015-06-19 Thread Jan Kaluža

On 06/18/2015 12:22 PM, Yann Ylavic wrote:

On Thu, Jun 18, 2015 at 11:49 AM, Jan Pazdziora jpazdzi...@redhat.com wrote:


I'd appreciate any comments about suitability of such change, as well
as the implementation. Specifically, I'm not sure if people will
prefer the generic and currently proposed

 SSL_CLIENT_SAN_otherName_n

which gets any value of otherName type, or perhaps going with

 SSL_CLIENT_SAN_UPN_n

and checking the OID just for the UPNs. Based on that decision I plan
to then respin the patch with documentation changes included.


I think a more generic way would to have something like
SSL_CLIENT_OID_oid_n, so that we wouldn't have to add a new field
each time.
In this case, that would be: SSL_CLIENT_OID_1.3.6.1.4.1.311.20.2.3_n.


I think that's a nice idea. I can probably work on that. The only question
is whether we would like to have this generic way as an additional feature, or
whether we really want to use it instead of the SSL_CLIENT_SAN_otherName_n
proposed by Jan.


I think that the common cases should have a non-generic variable. The
question is whether otherName is the common case.


Ideas?
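For illustration, a hedged sketch of how such a variable could end up being
consumed (SSLVerifyClient and SSLUserName are existing directives; the
variable name follows the SSL_CLIENT_SAN_OTHER_msUPN_* proposal above and is
not committed syntax):

  SSLVerifyClient require
  SSLUserName SSL_CLIENT_SAN_OTHER_msUPN_0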


Regards,
Yann.



Regards,
Jan Kaluza



Re: mod_deflate was Re: [VOTE] Release Apache httpd 2.4.13 as GA

2015-06-04 Thread Jan Kaluža

On 06/05/2015 07:01 AM, William A Rowe Jr wrote:

On Thu, Jun 4, 2015 at 10:47 PM, Gregg Smith g...@gknw.net
mailto:g...@gknw.net wrote:


This is new, not quite sure how I didn't see it a few weeks ago as
it's 9 weeks old.
Who forgot to fill in the number?

mod_deflate.c(1283) : warning C4003: not enough actual parameters
for macro 'APLOGNO'


I just rechecked my compilation from near-trunk 6 hours ago, I don't see
this.

More background, please?  gcc or other compiler rev?  OS?  Revision?


I see APLOGNO() in the code. It has been added in 
http://svn.apache.org/r1669555.


Regards,
Jan Kaluza


It avoids a lot of needless speculation.




Re: ALPN patch comments

2015-06-03 Thread Jan Kaluža

On 06/03/2015 03:43 PM, Stefan Eissing wrote:

Hmm, personally, I do not like redundant configurations. If someone configures 
a module, like mod_h2, to be enabled (H2Engine on), she could expect the module 
to take all the necessary steps. So I am no fan of a „SSLAlpnEnable“.

If a client sends ALPN information in its hello, it certainly can expect an 
answer from the server. Since in absence of any other modules, the httpd will 
do „http/1.1“, I think that is a reasonable response.

For clients that do not send ALPN, any possible answer from the server will not
be relevant for them. And I would be surprised if a client gets confused by
that. If it uses openssl, it will probably not have registered its own callbacks
and thus never see the server answer.


I'm not sure if that's the same case, but Red Hat actually has an open bug
report for the old 2.4.x NPN code (right now reverted in 2.4.x) about
NPN extensions being sent even though they have not been configured.


I think the last time I checked, the httpd trunk ALPN code did something
similar (it returned http/1.1 when no other modules were configured).


The reasoning in that Red Hat bug report is that there should be a way
to stop this behavior, for compliance with some security
standards.


I just want to point out that there might be some users who care about this.

Regards,
Jan Kaluza


As to the check for sc->server->ssl_alpn_pref->nelts, that is very much
dependent on the order of hooks. In the case of mod_h2, registering for ALPN happens in the
pre_connection hooks, and those run *after* the mod_ssl pre_connection hook, I am pretty sure.

//Stefan


Am 03.06.2015 um 14:52 schrieb Yann Ylavic ylavic@gmail.com:

I wonder if registering the ssl_callback_alpn_select callback
unconditionally could break some clients.
Do those (ALPN-ready) clients always negotiate http/1.1?

Otherwise we could check for sc->server->ssl_alpn_pref->nelts > 0 (or
a dedicated SSLAlpnEnable) before using
SSL_CTX_set_alpn_select_cb().
In that case mod_h2 would not work out of the box, needing some
administration on the httpd side.
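A minimal sketch of that conditional registration (sc->server->ssl_alpn_pref
and ssl_callback_alpn_select are taken from the discussion above and assumed
to match the trunk code of that time):

  if (sc->server->ssl_alpn_pref->nelts > 0) {
      SSL_CTX_set_alpn_select_cb(ctx, ssl_callback_alpn_select, NULL);
  }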


On Wed, Jun 3, 2015 at 12:56 PM, Stefan Eissing
stefan.eiss...@greenbytes.de wrote:

I tested the linked patch on a 2.4.x checkout with mod_h2 on OS X 10.10 and
openssl 1.0.2. All my tests ran fine.

//Stefan


Am 02.06.2015 um 16:56 schrieb Eric Covener cove...@gmail.com:

Can you test the latest rev of the backport candidate?

http://people.apache.org/~ylavic/httpd-2.4.x-alpn-v4.patch



On Mon, Apr 27, 2015 at 11:06 AM Stefan Eissing stefan.eiss...@greenbytes.de 
wrote:


Am 25.04.2015 um 11:47 schrieb Kaspar Brand httpd-dev.2...@velox.ch:

On 22.04.2015 18:54, Jim Jagielski wrote:

For me the time seems right to rip NPN out of trunk and only backport
the ALPN code to 2.4.



I'd be +1 for that.


So, to get one step further, and since there were no explicit objections
to removing NPN support so far (or arguments for keeping it, FWIW), I
went ahead and took a stab at this with r1676004.

Only tested in terms of compiles both w/ and w/o HAVE_TLS_ALPN, so it
certainly needs more eyes before a backport proposal could be made.
There's also a TODO: we should have a mod_ssl configuration parameter
in ssl_engine_kernel.c which I'm unsure to what it refers.


The „TODO“ is a leftover from before SSLAlpnPreference was introduced. It can 
be removed.

I diff’ed the current mod_ssl against the 2.4 branch, removed everything but the
ALPN changes and made a patch for my sandbox. This works on my OS X with
mod_h2. My Ubuntu sandbox is still resisting as some test clients still link 
the system ssl which only speaks NPN (or link against a lib_event that links 
against the system openssl). It’s a mess.

Stefan



Kaspar


green/bytes GmbH
Hafenweg 16, 48155 Münster, Germany
Phone: +49 251 2807760. Amtsgericht Münster: HRB5782





green/bytes GmbH
Hafenweg 16, 48155 Münster, Germany
Phone: +49 251 2807760. Amtsgericht Münster: HRB5782





green/bytes GmbH
Hafenweg 16, 48155 Münster, Germany
Phone: +49 251 2807760. Amtsgericht Münster: HRB5782







mod_proxy_fcgi default port

2015-05-26 Thread Jan Kaluža

Hi,

currently the port for the fcgi:// protocol defaults to 8000. Is there any
reason why we use this port number as the default? Also, I think this
default port number is not documented anywhere.


I'm asking because php-fpm uses port 9000 by default. I know that these
ports are not standardized and fcgi backend servers can use whatever
ports they want, but maybe we could switch to 9000 and document this
default port?
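For what it's worth, explicitly giving the port sidesteps the question; e.g.
for a local php-fpm listening on 9000 (the path is illustrative):

  ProxyPass "/app/" "fcgi://127.0.0.1:9000/"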


Regards,
Jan Kaluza


Re: SSLDisableCRLCaching, is it even possible in 2.4.x?

2015-04-22 Thread Jan Kaluža

On 04/22/2015 09:50 AM, Kaspar Brand wrote:

On 21.04.2015 12:20, Jan Kaluža wrote:

we used to have a patch against httpd-2.2.15 to add an SSLDisableCRLCaching
option to not cache CRLs. I was trying to adapt this patch for
httpd-trunk and eventually include it upstream, but now I'm at a dead end.

The patch removes all the CRLs from the per-server_rec OpenSSL cache
created in ssl_init_ctx_crl (OpenSSL caches the CRLs in
X509_store.objs). This all works properly, but I'm thinking about
thread-safety.


Starting with 2.3.15 (https://svn.apache.org/r1165056), CRL checking was
completely delegated to OpenSSL, so it would be a bit surprising to me
if that patch can be ported to trunk.


I'm aware of that, that's why I'm rewriting that patch for trunk :).


Fiddling with OpenSSL internals
looks rather scary to me, at least at first sight - perhaps there's an
API for clearing a CRL store in OpenSSL?


Unfortunately there's no such API in OpenSSL. There's a caching flag in the 
X509_STORE struct, but it's never actually used for anything.


Maybe it would be a better idea to implement that in OpenSSL, but that's a 
kind of long-term goal. I was hoping to have this feature in httpd first.



Kaspar



Regards,
Jan Kaluza



SSLDisableCRLCaching, is it even possible in 2.4.x?

2015-04-21 Thread Jan Kaluža

Hi,

we used to have a patch against httpd-2.2.15 to add an SSLDisableCRLCaching 
option to not cache CRLs. I was trying to adapt this patch for 
httpd-trunk and eventually include it upstream, but now I'm at a dead end.


The patch removes all the CRLs from the per-server_rec OpenSSL cache 
created in ssl_init_ctx_crl (OpenSSL caches the CRLs in 
X509_store.objs). This all works properly, but I'm thinking about 
thread-safety.


The theoretical problem can happen when something gets a pointer to a 
CRL from the cache, my code removes the CRL from the cache and deletes 
it, and the original code then keeps using the pointer, which now points 
to a deleted object.


I have seen the X509_OBJECT.references variable, which could help me a lot 
here (I could just decrease the refcount instead of deleting), but it 
seems this is not used anywhere in OpenSSL, so it's actually useless. 
Does anyone know if that's really true, or am I just missing something?


I'm looking for someone to review that code (maybe I'm wrong about 
the thread-safety problem) or to advise me on a way to move forward (if 
there is one).
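
As a side note (and only a sketch, not a port of the old patch): with OpenSSL
1.1.0 and later the store's object stack can at least be walked and pruned
through public accessors instead of poking X509_STORE.objs directly, under
the store's own lock. It does not answer the question about pointers handed
out before the purge, which is exactly the refcounting issue above:

#include <openssl/x509_vfy.h>

/* Sketch: drop every cached CRL from an X509_STORE under the store lock.
 * Requires OpenSSL >= 1.1.0 for the accessor/locking functions used here. */
static void purge_cached_crls(X509_STORE *store)
{
    STACK_OF(X509_OBJECT) *objs;
    int i;

    X509_STORE_lock(store);
    objs = X509_STORE_get0_objects(store);
    for (i = sk_X509_OBJECT_num(objs) - 1; i >= 0; i--) {
        X509_OBJECT *obj = sk_X509_OBJECT_value(objs, i);
        if (X509_OBJECT_get_type(obj) == X509_LU_CRL) {
            (void)sk_X509_OBJECT_delete(objs, i);
            X509_OBJECT_free(obj);   /* releases the store's reference only */
        }
    }
    X509_STORE_unlock(store);
}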


Thanks,
Jan Kaluza


httpd-trunk-disable-crl.patch
Description: application/download


Re: [PATCH] mod_log_config: Allow logging using errorlog provider

2015-04-17 Thread Jan Kaluža

On 04/07/2015 11:47 AM, Jan Kaluža wrote:

Hi,

we have had ap_errorlog_provider in trunk for some time. I was thinking
about extending it to mod_log_config, so CustomLog/TransferLog would
work with any module providing error_log logging ability, like mod_syslog
or mod_journald.

The attached patch does that by introducing a "CustomLog
errorlog_provider_name:arg" syntax, which should be backward compatible
with the current syntax used for CustomLog.


Committed in r1674261.


The patch changes ap_default_log_writer_init to detect this syntax, find
the provider and initialize it. It also changes ap_default_log_writer to
detect an initialized provider and use it to log the message.

I would like to see that feature in trunk, but since this changes
the syntax a little bit, I will wait a bit for other opinions, if any.

Regards,
Jan Kaluza


Regards,
Jan Kaluza



Re: [RELEASE CANDIDATE] Apache-Test-1.39 RC1

2015-04-14 Thread Jan Kaluža

On 04/13/2015 09:23 AM, Steve Hay wrote:

On 8 April 2015 at 18:04, Steve Hay steve.m@googlemail.com wrote:

On 8 April 2015 at 14:24, Steve Hay steve.m@googlemail.com wrote:

Please download, test, and report back on this Apache-Test 1.39
release candidate.

http://people.apache.org/~stevehay/Apache-Test-1.39-rc1.tar.gz



Success on Windows with VC++ 2010 using perl-5.20.2 with/without LWP
and httpd-2.2.29/2.4.10 (4 combinations in all).



Please can anyone spend a short while giving this a spin?


I will give it a try on Fedora in the afternoon hopefully.

Jan Kaluza


I've so far only had two reports -- and one of those was mine!





[PATCH] mod_log_config: Allow logging using errorlog provider

2015-04-07 Thread Jan Kaluža

Hi,

we have had ap_errorlog_provider in trunk for some time. I was thinking 
about extending it to mod_log_config, so CustomLog/TransferLog would 
work with any module providing error_log logging ability, like mod_syslog 
or mod_journald.


The attached patch does that by introducing a "CustomLog 
errorlog_provider_name:arg" syntax, which should be backward compatible 
with the current syntax used for CustomLog.
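
For illustration, with the proposed syntax a configuration could look roughly
like the lines below. The provider names and arguments are assumptions on my
side (mod_syslog registers "syslog" and mod_journald registers "journald" as
errorlog providers, and the part after the colon is passed to the provider as
its argument), so treat this as a sketch of the idea rather than final
documentation:

# access log sent through the syslog errorlog provider, facility local1
CustomLog "syslog:local1" combined

# access log sent to journald via mod_journald's provider
TransferLog "journald"

# the classic file-based syntax keeps working unchanged
CustomLog "logs/access_log" common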


The patch changes ap_default_log_writer_init to detect this syntax, find 
the provider and initialize it. It also changes ap_default_log_writer to 
detect an initialized provider and use it to log the message.


I would like to see that feature in trunk, but since this changes 
the syntax a little bit, I will wait a bit for other opinions, if any.


Regards,
Jan Kaluza


httpd-trunk-mod_log_config-provider.patch
Description: application/download


Re: [PATCH] Balancers, VirtualHost and ProxyPass

2015-03-19 Thread Jan Kaluža

On 12/12/2014 02:23 PM, Jan Kaluža wrote:

On 12/12/2014 02:17 PM, Yann Ylavic wrote:

On Fri, Dec 12, 2014 at 2:09 PM, Plüm, Rüdiger, Vodafone Group
ruediger.pl...@vodafone.com wrote:



-----Original Message-----
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Friday, December 12, 2014 14:00
To: dev@httpd.apache.org
Subject: Re: [PATCH] Balancers, VirtualHost and ProxyPass

You are right :(. Fix attached. Order of errstatuses should not be a
problem. Its values are checked against r->status to determine the status
code on which a balancer worker is marked as being in error.


Guess it is fine now :-)


+1


I forgot to commit this one. Done in trunk in r1667707 now.


Thank you both, that one was painful :(.

Jan Kaluza


Regards,
Yann.





Regards,
Jan Kaluza



Re: mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-18 Thread Jan Kaluža

On 03/18/2015 10:01 AM, Yann Ylavic wrote:

On Wed, Mar 18, 2015 at 9:48 AM, Jan Kaluža jkal...@redhat.com wrote:

On 03/18/2015 09:23 AM, Yann Ylavic wrote:


On Wed, Mar 18, 2015 at 8:07 AM, Jan Kaluža jkal...@redhat.com wrote:



I have no big knowledge of WebSockets, but it should be possible to
detect the Switching Protocols header and return an HTTP error if some error
happens before we switch to WebSocket.

Would this be acceptable, or do you think this empty reply is not worth
fixing this way?



The problem is that at this point (poll() loop), 101 Switching
Protocol is already sent to the backend, and hence the
ap_proxy_pass_brigade() doing so in proxy_wstunnel_request() should
already have failed with the correct status code returned to the
handler.



Hm, I think I'm a little bit confused about how it's supposed to work. I thought
mod_proxy_wstunnel appends the Upgrade and Connection headers to the request's
headers, sends it to the backend, and then the backend responds with 101
Switching Protocols.


Yes, this is what's supposed to be done, with all but the backend's
response happening before the poll() loop.
That's why I wonder why, before that, there is no failure when the
request (with appended Upgrade and Connection headers) is sent to the
backend.


What's going on here is that the backend does not reply anything. 
proxy_wstunnel_transfer returns APR_EOF without receiving 101 Switching 
Protocols from the backend and without passing anything to the client. 
We could check if APR_EOF happens before 101 Switching Protocols and 
send an HTTP error to the client, right?




But when reading the code now, it would mean that 101 Switch Protocols is
forwarded to the client, which, I think, should not happen.


This is not happening, ap_proxy_pass_brigade() forwards to the backend.





Re: mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-18 Thread Jan Kaluža

On 03/18/2015 11:07 AM, Yann Ylavic wrote:

Corresponding patch attached...

On Wed, Mar 18, 2015 at 10:57 AM, Yann Ylavic ylavic@gmail.com wrote:

On Wed, Mar 18, 2015 at 10:44 AM, Yann Ylavic ylavic@gmail.com wrote:



[]

Index: modules/proxy/mod_proxy_wstunnel.c
===
--- modules/proxy/mod_proxy_wstunnel.c(revision 1665828)
+++ modules/proxy/mod_proxy_wstunnel.c(working copy)

[]

@@ -292,7 +299,15 @@ static int proxy_wstunnel_request(apr_pool_t *p, r
  ap_log_rerror(APLOG_MARK, APLOG_TRACE2, 0, r,
"finished with poll() - cleaning up");

-return OK;
+if (!response_sent) {
+return HTTP_BAD_GATEWAY;
+}
+else if (!request_sent) {
+return HTTP_BAD_REQUEST;


This case is probably not a good idea, so we'd better not handle
request_sent at all (only response_sent), and use NULL instead for
proxy_wstunnel_transfer().


+1 for the last patch. It fixes the bug I see.

Jan Kaluza


+}
+else {
+return OK;
+}
  }

  /*
--




Re: mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-18 Thread Jan Kaluža

On 03/18/2015 09:23 AM, Yann Ylavic wrote:

On Wed, Mar 18, 2015 at 8:07 AM, Jan Kaluža jkal...@redhat.com wrote:


I have no big knowledge of WebSockets, but it should be possible to detect
the Switching Protocols header and return an HTTP error if some error happens before
we switch to WebSocket.

Would this be acceptable, or you think this empty reply is not worth fixing
this way?


The problem is that at this point (poll() loop), 101 Switching
Protocol is already sent to the backend, and hence the
ap_proxy_pass_brigade() doing so in proxy_wstunnel_request() should
already have failed with the correct status code returned to the
handler.


Hm, I think I'm a little bit confused about how it's supposed to work. I thought 
mod_proxy_wstunnel appends the Upgrade and Connection headers to the 
request's headers, sends it to the backend, and then the backend responds 
with 101 Switching Protocols.


But when reading the code now, it would mean that 101 Switch Protocols 
is forwarded to the client, which, I think, should not happen.


Well, I will test it more with some real websocket server as a backend 
to fully understand how it works.



Why isn't that happening?




Regards,
Yann.



Regards,
Jan Kaluza



Re: mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-18 Thread Jan Kaluža

On 03/17/2015 02:10 PM, Eric Covener wrote:

On Tue, Mar 17, 2015 at 9:06 AM, Yann Ylavic ylavic@gmail.com wrote:

GET /test/ HTTP/1.1
User-Agent: curl/7.29.0
Host: 127.0.0.1
Accept: */*



No Upgrade header in this test?


Right, no Upgrade header. That's the particular situation where one of 
our httpd tests fails. I've already tried it even with an Upgrade header, and it 
returns an empty response in this case too.





Re: mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-18 Thread Jan Kaluža

On 03/17/2015 02:06 PM, Yann Ylavic wrote:

On Tue, Mar 17, 2015 at 1:47 PM, Jan Kaluža jkal...@redhat.com wrote:

On 03/17/2015 01:23 PM, Yann Ylavic wrote:


On Tue, Mar 17, 2015 at 12:38 PM, Jan Kaluža jkal...@redhat.com wrote:


Hi,

I have found out that when WSS is used and SSL handshake fails, httpd
closes
client connection without any response to the client.



If the SSL handshake fails, there is no SSL established connection
which we can send an HTTP response on.
We can only send an SSL alert in this case, and I think mod_ssl takes
care of this already (this occurs while reading the request header,
before mod_proxy_wstunnel IMHO).



Hm, maybe I described it wrongly. What I see here is "Empty response from
server"


Sorry, you were obviously talking about SSL handshake with the backend...


when I do following:

1. Use this configuration:

ProxyTimeout 2
SSLProxyEngine on
<Location /test/>
 ProxyPass https://localhost:8080/
 ProxyPassReverse https://localhost:8080/
 ProxyPass wss://localhost:8080/
 ProxyPassReverse wss://localhost:8080/
</Location>


2. nc -l 8080 > /dev/null

3. curl -v --insecure https://127.0.0.1/test/
(...)

> GET /test/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1
> Accept: */*


* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server

With httpd-2.4.6 I see an error response in this case, and I think it really
should return something.


I see now, the handshake failure indeed occurs in the poll()ing loop
when the first packets are read/send from/to the backend.
But still once the connection is Upgrade-d, it is quite application
specific whether or not an HTTP response should be sent to the client,
and when (only if nothing has been sent already, anytime?). IOW, what
would the backend do if it fails after the Upgrade has been
negotiated?


I have no big knowledge of WebSockets, but it should be possible to 
detect the Switching Protocols header and return an HTTP error if some error 
happens before we switch to WebSocket.


Would this be acceptable, or do you think this empty reply is not worth 
fixing this way?



Regards,
Yann.



Regards,
Jan Kaluza



mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-17 Thread Jan Kaluža

Hi,

I have found out that when WSS is used and the SSL handshake fails, httpd 
closes the client connection without any response to the client.


In the log, one can see following:

mod_proxy_wstunnel.c(131): (103)Software caused connection abort: 
[client 127.0.0.1:49915] AH02442: error on sock - ap_get_brigade


The attached patch against 2.4.x fixes it. I'm not committing it, because 
this problem was introduced in r1493741 and looks like an intentional 
change. That commit was reverted in r1605946, so my theory is that 
this particular part of mod_proxy_wstunnel has not been reverted 
completely, but I want to be sure before I commit/propose.


Regards,
Jan Kaluza
Index: modules/proxy/mod_proxy_wstunnel.c
===
--- modules/proxy/mod_proxy_wstunnel.c	(revision 1665797)
+++ modules/proxy/mod_proxy_wstunnel.c	(working copy)
@@ -160,6 +160,7 @@
 conn_rec *c = r->connection;
 apr_socket_t *sock = conn->sock;
 conn_rec *backconn = conn->connection;
+int client_error = 0;
 char *buf;
 apr_bucket_brigade *header_brigade;
 apr_bucket *e;
@@ -257,6 +258,9 @@
 ap_log_rerror(APLOG_MARK, APLOG_NOTICE, 0, r, APLOGNO(02605)
 "unknown event on backconn %d", pollevent);
 }
+if (rv != APR_SUCCESS) {
+client_error = 1;
+}
 }
 else if (cur->desc.s == client_socket) {
 pollevent = cur->rtnevents;
@@ -292,6 +296,10 @@
 ap_log_rerror(APLOG_MARK, APLOG_TRACE2, 0, r,
   "finished with poll() - cleaning up");
 
+if (client_error) {
+return HTTP_INTERNAL_SERVER_ERROR;
+}
+
 return OK;
 }
 


Re: mod_proxy_wstunnel ignores proxy_wstunnel_transfer errors

2015-03-17 Thread Jan Kaluža

On 03/17/2015 01:23 PM, Yann Ylavic wrote:

On Tue, Mar 17, 2015 at 12:38 PM, Jan Kaluža jkal...@redhat.com wrote:

Hi,

I have found out that when WSS is used and SSL handshake fails, httpd closes
client connection without any response to the client.


If the SSL handshake fails, there is no SSL established connection
which we can send an HTTP response on.
We can only send an SSL alert in this case, and I think mod_ssl takes
care of this already (this occurs while reading the request header,
before mod_proxy_wstunnel IMHO).


Hm, maybe I described it wrongly. What I see here is "Empty response 
from server" when I do the following:


1. Use this configuration:

ProxyTimeout 2
SSLProxyEngine on
<Location /test/>
ProxyPass https://localhost:8080/
ProxyPassReverse https://localhost:8080/
ProxyPass wss://localhost:8080/
ProxyPassReverse wss://localhost:8080/
</Location>


2. nc -l 8080 > /dev/null

3. curl -v --insecure https://127.0.0.1/test/
(...)
> GET /test/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1
> Accept: */*

* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server

With httpd-2.4.6 I see an error response in this case, and I think it 
really should return something.


Regards,
Jan Kaluza



In the log, one can see following:

mod_proxy_wstunnel.c(131): (103)Software caused connection abort: [client
127.0.0.1:49915] AH02442: error on sock - ap_get_brigade

The attached patch against 2.4.x fixes it. I'm not committing it, because this
problem was introduced in r1493741 and looks like an intentional change.
That commit was reverted in r1605946, so my theory is that this
particular part of mod_proxy_wstunnel has not been reverted completely, but
I want to be sure before I commit/propose.


Once the Upgrade is done, I don't think we can respond with 500 (in the
poll()ing phase, this is no longer HTTP).
AFAICT r1605946 did not revert r1493741, and I think this rather comes
from https://bz.apache.org/bugzilla/show_bug.cgi?id=56299#c7.

Regards,
Yann.





Re: PR 53435, r101624, mod_ssl: error strings can't be loaded again once?

2015-03-11 Thread Jan Kaluža

On 11/11/2014 02:32 PM, Jan Kaluža wrote:

Hi,

the latest comment in PR 53435 shows that a memory leak in mod_ssl which
happens during graceful restarts can be caused by r101624. Since this
commit is 11 years old, I wanted to ask people here if the following is
still true with current OpenSSL:


Hi,

a little follow-up here: we have a memory leak on reload in 
ENGINE_load_builtin_engines which is caused by 
http://svn.apache.org/r654119. Again, this is a 6-year-old commit whose 
reasoning may or may not still hold. Does anyone know if it's still valid?



+/* Also don't call CRYPTO_cleanup_all_ex_data here; any registered
+ * ex_data indices may have been cached in static variables in
+ * OpenSSL; removing them may cause havoc.  Notably, with OpenSSL
+ * versions >= 0.9.8f, COMP_CTX cleanups would not be run, which
+ * could result in a per-connection memory leak (!). */
+


Regards,
Jan Kaluza


@@ -255,7 +255,11 @@ static apr_status_t ssl_cleanup_pre_config(void
*data)
#endif
#endif
ERR_remove_state(0);
- ERR_free_strings();
+
+ /* Don't call ERR_free_strings here; ERR_load_*_strings only
+ * actually load the error strings once per process due to static
+ * variable abuse in OpenSSL. */
+
/*
* TODO: determine somewhere we can safely shove out diagnostics
* (when enabled) at this late stage in the game:


The last comment in PR 53435 showed that the leaks disappeared after reverting
this patch, and it does not seem to break anything here.

Regards,
Jan Kaluza




Re: AW: Run external RewriteMap program as non-root

2015-03-06 Thread Jan Kaluža

On 03/05/2015 02:51 PM, Plüm, Rüdiger, Vodafone Group wrote:




-----Original Message-----
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Thursday, March 5, 2015 14:08
To: dev@httpd.apache.org
Subject: Re: Run external RewriteMap program as non-root

On 03/05/2015 12:53 PM, Yann Ylavic wrote:

On Thu, Mar 5, 2015 at 12:08 PM, Jan Kaluža jkal...@redhat.com

wrote:

On 03/05/2015 07:55 AM, Jan Kaluža wrote:


3. Execute it where it is now (post_config), but set user/group

using

apr_procattr_t. So far I think this would duplicate the code of
mod_unixd and would probably have to also handle the windows

equivalent

of that module (if there's any).



I've been thinking about this one more and with introduction of third
argument to RewriteMap, it could be possible with patch similar to

attached

one.

You can do "RewriteMap MapName prg:/path user:group" with the patch.

This could be even backported to 2.4.x.


I'm fine with this one too (unix only?).


Still thinking about a good RewriteMap syntax to pass a password for
Windows. But if people don't mind, having this Unix-only is also a
solution :).



The password issue for Windows was also on my mind :-). Having it in cleartext 
in the config seems ugly.
So Unix only should be fine at least for the start.


Committed in r1664565. Thanks all for discussion.


Regards

Rüdiger



Regards,
Jan Kaluza




Re: Run external RewriteMap program as non-root

2015-03-05 Thread Jan Kaluža

On 03/05/2015 07:55 AM, Jan Kaluža wrote:

Hi,

currently, the External Rewriting Program (RewriteMap prg:) is run as
root. I would like to change that, but I see three ways to do it:

1. Execute it right after drop_privileges hook. This looks like best
way, but I haven't found any hook which could be used for that (except
drop_privileges with APR_HOOK_REALLY_LAST, which does not seem as proper
place to me).

2. Execute it in child_init. This is done after drop_privileges, so the
user/group is good. The problem here is that it would execute one
rewrite program per child. Right now I'm not sure if it's really
problem. It could be useful to have more instances of rewriting program
to make its bottleneck lower.

3. Execute it where it is now (post_config), but set user/group using
apr_procattr_t. So far I think this would duplicate the code of
mod_unixd and would probably have to also handle the windows equivalent
of that module (if there's any).


I've been thinking about this one more, and with the introduction of a third 
argument to RewriteMap, it could be possible with a patch similar to the 
attached one.


You can do "RewriteMap MapName prg:/path user:group" with the patch.

This could even be backported to 2.4.x.
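
For illustration, the proposed third argument would be used like this
(program path and user/group are made-up values):

RewriteEngine On
# run the external map program as nobody:nogroup instead of root
RewriteMap examplemap "prg:/usr/local/bin/mapprog" nobody:nogroup
RewriteRule "^/ex/(.*)" "${examplemap:$1}"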

Jan Kaluza


What way do you think is the best, or would you do it differently?

I'm attaching patch for number 2.

Regards,
Jan Kaluza


Index: modules/mappers/mod_rewrite.c
===
--- modules/mappers/mod_rewrite.c	(revision 1663642)
+++ modules/mappers/mod_rewrite.c	(working copy)
@@ -267,6 +267,8 @@
 const char *dbdq;  /* SQL SELECT statement for rewritemap */
 const char *checkfile2;/* filename to check for map existence
   NULL if only one file   */
+const char *user;
+const char *group;
 } rewritemap_entry;
 
 /* special pattern types for RewriteCond */
@@ -1171,6 +1173,7 @@
 
 static apr_status_t rewritemap_program_child(apr_pool_t *p,
  const char *progname, char **argv,
+ const char *user, const char *group,
  apr_file_t **fpout,
  apr_file_t **fpin)
 {
@@ -1183,6 +1186,8 @@
   APR_FULL_BLOCK, APR_NO_PIPE))
  && APR_SUCCESS == (rc=apr_procattr_dir_set(procattr,
  ap_make_dirstr_parent(p, argv[0])))
+ && (!user || APR_SUCCESS == (rc=apr_procattr_user_set(procattr, user, "")))
+ && (!group || APR_SUCCESS == (rc=apr_procattr_group_set(procattr, group)))
  && APR_SUCCESS == (rc=apr_procattr_cmdtype_set(procattr, APR_PROGRAM))
  && APR_SUCCESS == (rc=apr_procattr_child_errfn_set(procattr,
rewrite_child_errfn))
@@ -1240,6 +1245,7 @@
 }
 
 rc = rewritemap_program_child(p, map->argv[0], map->argv,
+  map->user, map->group,
   &fpout, &fpin);
 if (rc != APR_SUCCESS || fpin == NULL || fpout == NULL) {
 ap_log_error(APLOG_MARK, APLOG_ERR, rc, s, APLOGNO(00654)
@@ -3018,7 +3024,7 @@
 }
 
 static const char *cmd_rewritemap(cmd_parms *cmd, void *dconf, const char *a1,
-  const char *a2)
+  const char *a2, const char *a3)
 {
 rewrite_server_conf *sconf;
 rewritemap_entry *newmap;
@@ -3124,6 +3130,11 @@
 
 newmap->type  = MAPTYPE_PRG;
 newmap->checkfile = newmap->argv[0];
+if (a3) {
+char *tok_cntx;
+newmap->user = apr_strtok(apr_pstrdup(cmd->pool, a3), ":", &tok_cntx);
+newmap->group = apr_strtok(NULL, ":", &tok_cntx);
+}
 }
 else if (strncasecmp(a2, "int:", 4) == 0) {
 newmap-type  = MAPTYPE_INT;
@@ -5205,8 +5216,8 @@
  "an input string and a to be applied regexp-pattern"),
 AP_INIT_RAW_ARGS("RewriteRule", cmd_rewriterule, NULL, OR_FILEINFO,
  "an URL-applied regexp-pattern and a substitution URL"),
-AP_INIT_TAKE2(   "RewriteMap",  cmd_rewritemap,  NULL, RSRC_CONF,
- "a mapname and a filename"),
+AP_INIT_TAKE23(  "RewriteMap",  cmd_rewritemap,  NULL, RSRC_CONF,
+ "a mapname and a filename and options"),
 { NULL }
 };
 


Re: Run external RewriteMap program as non-root

2015-03-05 Thread Jan Kaluža

On 03/05/2015 09:54 AM, Jan Kaluža wrote:

On 03/05/2015 09:03 AM, Ruediger Pluem wrote:



On 03/05/2015 07:55 AM, Jan Kaluža wrote:

Hi,

currently, the External Rewriting Program (RewriteMap prg:) is run
as root. I would like to change it but I see three
ways how to do it:

1. Execute it right after drop_privileges hook. This looks like best
way, but I haven't found any hook which could be
used for that (except drop_privileges with APR_HOOK_REALLY_LAST,
which does not seem as proper place to me).

2. Execute it in child_init. This is done after drop_privileges, so
the user/group is good. The problem here is that
it would execute one rewrite program per child. Right now I'm not
sure if it's really problem. It could be useful to
have more instances of rewriting program to make its bottleneck lower.

3. Execute it where it is now (post_config), but set user/group using
apr_procattr_t. So far I think this would
duplicate the code of mod_unixd and would probably have to also
handle the windows equivalent of that module (if there's
any).

What way do you think is the best, or would you do it differently?

I'm attaching patch for number 2.


I would tend to 2. as well, but as far as I remember using the
rewritemap program is synchronized across all processes.
This raises two questions:

1. Does rewriting still work with the current patch?


It does work for me. I've done some tests with curl and ab with
prefork/event/worker MPMs.


2. If it does, can stuff be optimized to move from a server-wide lock
to a process-wide lock (or even no lock for
prefork) to remove the contention here?


This could be possible, I will look at it.


Attached patch does it and works for me. RewriteMap with external 
program is also 24% faster with prefork with this patch.


Jan Kaluza


OTOH looking at the topic of backwards compatibility existing rewrite
programs
might rely on not working in parallel. Some may even have an issue if
more than one copy of them is running in parallel,
albeit not processing stuff in parallel which of course would cause an
issue with the proposed patch. Furthermore
existing setups might expect to be run as root. But this stuff only
needs to be considered when we think about
backporting and is moot for trunk.


Right, I'm currently thinking only about trunk. For 2.4.x, we would
have to do it differently, with backward compatibility in mind. I think
something like option 1 with a configuration directive to enable the new
behaviour would be more acceptable for 2.4.x. We would have a single
rewritemap program in this case, running as the apache user only if the
admin wants it.


Regards

Rüdiger



Regards,
Jan Kaluza



Index: modules/mappers/mod_rewrite.c
===
--- modules/mappers/mod_rewrite.c	(revision 1663642)
+++ modules/mappers/mod_rewrite.c	(working copy)
@@ -52,7 +52,7 @@
 #include "apr_user.h"
 #include "apr_lib.h"
 #include "apr_signal.h"
-#include "apr_global_mutex.h"
+#include "apr_thread_mutex.h"
 #include "apr_dbm.h"
 #include "apr_dbd.h"
 #include "mod_dbd.h"
@@ -100,6 +100,7 @@
 
 #include "mod_rewrite.h"
 #include "ap_expr.h"
+#include "ap_mpm.h"
 
 #if APR_CHARSET_EBCDIC
 #include "util_charset.h"
@@ -416,8 +417,7 @@
 static int proxy_available;
 
 /* Locks/Mutexes */
-static apr_global_mutex_t *rewrite_mapr_lock_acquire = NULL;
-static const char *rewritemap_mutex_type = "rewrite-map";
+static apr_thread_mutex_t *rewrite_mapr_lock_acquire = NULL;
 
 /* Optional functions imported from mod_ssl when loaded: */
 static APR_OPTIONAL_FN_TYPE(ssl_var_lookup) *rewrite_ssl_lookup = NULL;
@@ -1437,10 +1437,10 @@
 
 /* take the lock */
 if (rewrite_mapr_lock_acquire) {
-rv = apr_global_mutex_lock(rewrite_mapr_lock_acquire);
+rv = apr_thread_mutex_lock(rewrite_mapr_lock_acquire);
 if (rv != APR_SUCCESS) {
 ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r, APLOGNO(00659)
-  "apr_global_mutex_lock(rewrite_mapr_lock_acquire) "
+  "apr_thread_mutex_lock(rewrite_mapr_lock_acquire) "
   "failed");
 return NULL; /* Maybe this should be fatal? */
 }
@@ -1551,10 +1551,10 @@
 
 /* give the lock back */
 if (rewrite_mapr_lock_acquire) {
-rv = apr_global_mutex_unlock(rewrite_mapr_lock_acquire);
+rv = apr_thread_mutex_unlock(rewrite_mapr_lock_acquire);
 if (rv != APR_SUCCESS) {
 ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r, APLOGNO(00660)
-  "apr_global_mutex_unlock(rewrite_mapr_lock_acquire) "
+  "apr_thread_mutex_unlock(rewrite_mapr_lock_acquire) "
   "failed");
 return NULL; /* Maybe this should be fatal? */
 }
@@ -2639,13 +2639,21 @@
 static apr_status_t rewritelock_create(server_rec *s, apr_pool_t *p)
 {
 apr_status_t rc;
+int threaded;
 
+/* RewriteMap program is spawned per child, so for non-threaded MPMs,
+ * we

Re: Run external RewriteMap program as non-root

2015-03-05 Thread Jan Kaluža

On 03/05/2015 12:53 PM, Yann Ylavic wrote:

On Thu, Mar 5, 2015 at 12:08 PM, Jan Kaluža jkal...@redhat.com wrote:

On 03/05/2015 07:55 AM, Jan Kaluža wrote:


3. Execute it where it is now (post_config), but set user/group using
apr_procattr_t. So far I think this would duplicate the code of
mod_unixd and would probably have to also handle the windows equivalent
of that module (if there's any).



I've been thinking about this one more and with introduction of third
argument to RewriteMap, it could be possible with patch similar to attached
one.

You can do RewriteMap MapName prg:/path user:group with the patch.

This could be even backported to 2.4.x.


I'm fine with this one too (unix only?).


Still thinking about a good RewriteMap syntax to pass a password for 
Windows. But if people don't mind, having this Unix-only is also a 
solution :).





Run external RewriteMap program as non-root

2015-03-04 Thread Jan Kaluža

Hi,

currently, the External Rewriting Program (RewriteMap prg:) is run as 
root. I would like to change that, but I see three ways to do it:


1. Execute it right after the drop_privileges hook. This looks like the best 
way, but I haven't found any hook which could be used for that (except 
drop_privileges with APR_HOOK_REALLY_LAST, which does not seem like the 
proper place to me).


2. Execute it in child_init. This is done after drop_privileges, so the 
user/group is good. The problem here is that it would execute one 
rewrite program per child. Right now I'm not sure if that's really a 
problem. It could be useful to have more instances of the rewriting program 
to lower its bottleneck.


3. Execute it where it is now (post_config), but set the user/group using 
apr_procattr_t. So far I think this would duplicate the code of 
mod_unixd and would probably also have to handle the Windows equivalent 
of that module (if there is any).


What way do you think is the best, or would you do it differently?

I'm attaching patch for number 2.

Regards,
Jan Kaluza
Index: modules/mappers/mod_rewrite.c
===
--- modules/mappers/mod_rewrite.c	(revision 1663642)
+++ modules/mappers/mod_rewrite.c	(working copy)
@@ -4449,17 +4449,6 @@
 apr_pool_cleanup_register(p, (void *)s, rewritelock_remove,
   apr_pool_cleanup_null);
 
-/* if we are not doing the initial config, step through the servers and
- * open the RewriteMap prg:xxx programs,
- */
-if (ap_state_query(AP_SQ_MAIN_STATE) == AP_SQ_MS_CREATE_CONFIG) {
-for (; s; s = s->next) {
-if (run_rewritemap_programs(s, p) != APR_SUCCESS) {
-return HTTP_INTERNAL_SERVER_ERROR;
-}
-}
-}
-
 rewrite_ssl_lookup = APR_RETRIEVE_OPTIONAL_FN(ssl_var_lookup);
 rewrite_is_https = APR_RETRIEVE_OPTIONAL_FN(ssl_is_https);
 
@@ -4485,6 +4474,11 @@
 ap_log_error(APLOG_MARK, APLOG_CRIT, rv, s, APLOGNO(00667)
  "mod_rewrite: could not init map cache in child");
 }
+
+/* step through the servers and open the RewriteMap prg:xxx programs */
+for (; s; s = s->next) {
+run_rewritemap_programs(s, p);
+}
 }
 
 


Re: What is the lifetime of apr_dbd_get_entry(...); result?

2015-03-03 Thread Jan Kaluža

On 03/02/2015 11:24 AM, Yann Ylavic wrote:

I meant to reply to all...



On Mon, Mar 2, 2015 at 11:23 AM, Yann Ylavic ylavic@gmail.com wrote:

On Mon, Mar 2, 2015 at 8:22 AM, Jan Kaluža jkal...@redhat.com wrote:

On 02/04/2015 02:53 PM, Jan Kaluža wrote:


httpd's mod_authn_dbd module currently seems to expect that lifetime of
apr_dbd_get_entry(...) result is the same as of the pool used in
apr_dbd_get_row(...) function.


It seems that the result's lifetime is really that of the row, but
mod_authn_dbd keeps reading rows (sequentially) to clean up the results
(before exiting the loop, probably for connection keepalive purposes).



But this is not true for at least the pgsql backend. Its get_entry
function just does "return PQgetvalue(row->res->res, row->n, n);".
Documentation for PQgetvalue says following:

One must explicitly copy the data into other storage if it is to be
used past the lifetime of the PGresult structure itself.

res->res is also freed in dbd_pgsql_get_row(...).


Only the results out-of-range are freed (wrt asked rownum), but not
the current one(s).



Is the correct behaviour that httpd's mod_authn_dbd module should strdup
the result, or is it apr_dbd which should return a result with the expected
lifetime?


I think the fix should be in mod_authn_dbd only; APR looks correct here,
and so does your patch.


Thanks for the answer, I've committed it in r1663647.


Regards,
Yann.


Regards,
Jan Kaluza



Re: mod_proxy's aside connections proposal

2015-03-03 Thread Jan Kaluža

On 09/30/2014 04:47 PM, Yann Ylavic wrote:

Hello,

I have proposed a patch for PR39673 but I'm not sure it would be
accepted for mainline httpd, so here I am.


Hi,

I would like to get more opinions on the patch Yann proposed in this 
email. I fully understand that NTLM is not HTTP/1.1 compatible, because 
it's not stateless. The question here is whether we want to use the approach 
Yann showed in this patch, or whether we would say that we won't support 
protocols like that.


Regards,
Jan Kaluza


The patch adds the possibility (for a module) to acquire a connection
aside from the worker's reslist, so that it won't be acquired from the
reslist nor put back to it once released (nor reused for another
client connection, unless the module does it on its own).

The connection is still bound to a worker, hence it will be
established/closed according to the worker's configuration (hostname,
port, SSL, is_address_reusable, TTL...), and the module can still (and
should) call ap_proxy_determine_connection(),
ap_proxy_connect_backend() and ap_proxy_release_connection() to do
that work.

The new function to acquire an aside connection is :

PROXY_DECLARE(int) ap_proxy_acquire_connection_ex(const char *proxy_function,
   proxy_conn_rec **conn,
   proxy_worker *worker,
   server_rec *s, void *baton,
   ap_proxy_acquire_fn acquire,
   ap_proxy_release_fn release);

When called with both baton and acquire set to NULL,
ap_proxy_acquire_connection_ex() is the same as
ap_proxy_acquire_connection(), and will acquire connections from the
reslist.

When acquire is not NULL, it is used as the constructor for *conn
(given the baton), so it's up to the caller to create a new (aside)
connection (with the new function ap_proxy_aside_connection() also
provided) or reuse an existing one according to its criteria (and the
baton). If release is not NULL, it will be called upon
ap_proxy_release_connection() with the same baton.

When acquire is NULL but baton isn't, a built-in acquire function is
used to select an existing or create a new connection associated to a
conn_rec (and still the worker). The baton is assumed to be a conn_rec
(eg. the one of the client's connection), and this mode is hence a
return back to the days of httpd <= 2.0.
This is the trick to respond to PR39673 (NTLM forwarding), or any
issue due to a session associated to a TCP connection (some
load-balancers configured to do so expect that), which mod_proxy can't
forward currently.

The patch then uses the new ap_proxy_acquire_connection_ex() function
(latter mode) in mod_proxy_http and mod_proxy_ajp (which both have the
ability to reuse connections), but only when the environment var
proxy-aside-c is defined. This allows for the administrator to use
SetEnv[If] or a RewriteRule to enable the functionality, according to
specific conditions, like (PR39673) :

 RewriteCond %{HTTP:Authorization} ^NTLM
 RewriteRule ^ - [E=proxy-aside-c]

The patch is attached, and I tried to make it as generic as I could so
that it is not PR39673 (NTLM) specific, aside connections looks like a
nice to have for modules.

This version is slightly different from the one proposed in the PR, in
that it will always close aside connections associated with a
non-reusable worker upon release (the PR's one failed to do that), and
the default release callback (when none is given) for aside
connections is now to apply the worker's configured TTL (if any).

Do you think this can/should (not) be applied to httpd?

Regards,
Yann.





Re: What is the lifetime of apr_dbd_get_entry(...); result?

2015-03-01 Thread Jan Kaluža

On 02/04/2015 02:53 PM, Jan Kaluža wrote:

Hi,


I'm CCing httpd-dev list too, because this question has not been 
answered on APR list yet and since it causes mod_authn_dbd to stop 
working randomly with pgsql, I think it could be interesting even for 
httpd developers.


I'm also attaching example patch which fixes this issue on httpd side, 
but I think we should consider fixing it in the APR-util.



httpd's mod_authn_dbd module currently seems to expect that lifetime of
apr_dbd_get_entry(...) result is the same as of the pool used in
apr_dbd_get_row(...) function.

But this is not true for at least the pgsql backend. Its get_entry
function just does "return PQgetvalue(row->res->res, row->n, n);".
Documentation for PQgetvalue says following:

One must explicitly copy the data into other storage if it is to be
used past the lifetime of the PGresult structure itself.

res->res is also freed in dbd_pgsql_get_row(...).

Is the correct behaviour that httpd's mod_authn_dbd module should strdup 
the result, or is it apr_dbd which should return a result with the expected 
lifetime?
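
For reference, the pattern my attached example patch uses looks roughly like
the sketch below (illustrative only, simplified from mod_authn_dbd; dbd, r
and res come from the surrounding module code): copy the entry into the
request pool while the row is still live, and keep iterating so the driver
can clean up the whole result set.

#include "httpd.h"
#include "apr_strings.h"
#include "apr_dbd.h"
#include "mod_dbd.h"

static const char *fetch_first_entry(request_rec *r, ap_dbd_t *dbd,
                                     apr_dbd_results_t *res)
{
    const char *value = NULL;
    apr_dbd_row_t *row = NULL;

    while (apr_dbd_get_row(dbd->driver, r->pool, res, &row, -1) == 0) {
        if (value == NULL) {
            /* The entry may point into driver-owned storage (e.g. pgsql's
             * PGresult), so duplicate it before the next get_row() call. */
            value = apr_pstrdup(r->pool,
                                apr_dbd_get_entry(dbd->driver, row, 0));
        }
        /* no break: remaining rows must be read so they get cleaned up */
    }
    return value;
}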

Regards,
Jan Kaluza


Regards,
Jan Kaluza

Index: modules/aaa/mod_authn_dbd.c
===
--- modules/aaa/mod_authn_dbd.c	(revision 1644346)
+++ modules/aaa/mod_authn_dbd.c	(working copy)
@@ -169,7 +169,8 @@
 i++;
 }
 #endif
-dbd_password = apr_dbd_get_entry(dbd->driver, row, 0);
+dbd_password = apr_pstrdup(r->pool,
+   apr_dbd_get_entry(dbd->driver, row, 0));
 }
 /* we can't break out here or row won't get cleaned up */
 }
@@ -263,7 +264,8 @@
 i++;
 }
 #endif
-dbd_hash = apr_dbd_get_entry(dbd->driver, row, 0);
+dbd_hash = apr_pstrdup(r->pool,
+   apr_dbd_get_entry(dbd->driver, row, 0));
 }
 /* we can't break out here or row won't get cleaned up */
 }


Re: svn commit: r1635428 - in /httpd/httpd/trunk: include/http_core.h server/core.c server/request.c

2015-01-22 Thread Jan Kaluža

On 01/22/2015 12:22 AM, William A. Rowe Jr. wrote:

On Mon, 19 Jan 2015 16:28:46 -0600
William A. Rowe Jr. wr...@rowe-clan.net wrote:


On Sun, 18 Jan 2015 23:00:10 -0500
Eric Covener cove...@gmail.com wrote:


On Thu, Oct 30, 2014 at 4:34 AM,  jkal...@apache.org wrote:


+/* core_dir_config is Directory*, but the requested
file is
+ * not a directory, so although the regexp could
match,
+ * we skip it. */
+if (entry_core->d_is_directory &&
r->finfo.filetype != APR_DIR) {
+continue;
+}
+
  if (ap_regexec(entry_core->r, r->filename, nmatch,
pmatch, 0)) { continue;
  }


I think this is broken.


You are correct, I don't think it means what the author thought this
code means.


Indeed, it was nonsense.  The fix begins with some code similar to
[attached] but there are a number of shortcuts in the code that would
need to be accounted for by expanding the dir_len at the appropriate
time.  The patch is also insufficient in that the r->filename buffer
needs to be at least one extra byte to compensate for the 'maybe a
trailing slash, maybe not' on a final directory component in a file
name.


Hi,

you are both right, sorry for the patch. I will revert it from trunk (if 
not already done by someone else). That part of httpd looks more 
complicated than I thought previously.


Jan Kaluza


That said, I'm putting this out there to get folks thinking, I can't
spend much more time on it myself, today.

Cheers,

Bill





Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-16 Thread Jan Kaluža

On 12/16/2014 01:57 PM, Jim Jagielski wrote:

Isn't this already addressed/handled with the BalancerInherit directive??


No, it isn't. BalancerInherit only says that the balancers from the 
main config will be copied to the vhost context *after* the config_tree is 
processed. And the word *after* is the problem here. When you try to use 
the inherited balancer in the vhost config, you can't, because the vhost 
does not know about the inherited balancers at the time of config 
processing.


Every time you try to use (with ProxyPass, for example) an inherited 
balancer in the vhost, mod_proxy creates a brand new balancer in the vhost 
context. This new balancer is completely ignored later and replaced by 
the original balancer you wanted to use when BalancerInherit is used.


If we think httpd should not allow ProxyPass in a VirtualHost with 
balancers defined outside of the VirtualHost context, we should disable that 
and show a warning that httpd doesn't support this configuration.


Otherwise we need to add balancer merging as I did in the patch.

Regards,
Jan Kaluza


On Dec 10, 2014, at 7:25 AM, Jan Kaluža jkal...@redhat.com wrote:

Hi,

I've found out that following configuration does not work as expected:

<Proxy balancer://a>
   ...
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://a stickysession=JSESSIONID|jsessionid
</VirtualHost>

In this case, two proxy_balancers are created: the first one in the <Proxy> section 
in the main config without stickysession, and the second one in the vhost 
section with stickysession set.

Because of merge_proxy_config behaviour, the one from the main config is always 
preferred and therefore you cannot set stickysession (and other options) this 
way.

Attached patch fixes that by changing the merge strategy for balancers array to 
merge options set by ProxyPass.

I think we would need the same for proxy_worker too, but before I spend an 
afternoon working on it, I wanted to ask: do you think this is the right way 
to fix this?

Regards,
Jan Kaluza
httpd-trunk-balancer-vhost.patch






Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-12 Thread Jan Kaluža

On 12/11/2014 03:05 PM, Plüm, Rüdiger, Vodafone Group wrote:




-Original Message-
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Thursday, December 11, 2014 14:40
To: dev@httpd.apache.org
Subject: Re: [PATCH] Balancers, VirtualHost and ProxyPass

On 12/11/2014 08:47 AM, Jan Kaluža wrote:

On 12/10/2014 08:21 PM, Ruediger Pluem wrote:



On 12/10/2014 02:21 PM, Jan Kaluža wrote:

On 12/10/2014 01:49 PM, Plüm, Rüdiger, Vodafone Group wrote:

But this way we lose the base ones that are not touched in the
virtual host and e.g. are only used by rewriterules.
So we should transfer the base ones to the merged array in any case
and update them where needed.


Hm, you are right. Check the new version attached to this email.


But this one changes the parent configuration. So if you have two
virtual hosts and in each you change a different
parameter of the parent the later one has *both* changes, not only
one. So you would need to do a copy of these
balancers first and adjust the copy. Then only add the adjusted copy
to the result. For the base balancers not matching
the override balancers just add them to the result untouched. Same for
the override ones, as you already do


That makes sense, thanks. I will work on it and send updated patch.


I have to admit that this feature is starting to be more time-consuming
than I thought :). The attached patch should address all the issues
pointed by Yann and Rüdiger.


Looks fine in general. Details:


I hope I've finally fixed everything now :), see the attached patch 
please. I really appreciate your reviews here!


Regards,
Jan Kaluza


+if (!tmp.lbmethod_set && b1->lbmethod_set) {
+b2->lbmethod_set = b1->lbmethod_set;
+b2->lbmethod = b1->lbmethod;

This is already the case because of the

  +*b2 = *b1;

above. So if lbmethod is set differently in overrides you lose it in favour of 
the base setting.


+PROXY_STRNCPY(tmp.s->lbpname, b1->s->lbpname);
+}
+if (!tmp.growth_set && b1->growth_set) {
+b2->growth_set = b1->growth_set;
+b2->growth = b1->growth;

Same as above

+}
+if (!tmp.failontimeout_set && b1->failontimeout_set) {
+b2->failontimeout_set = b1->failontimeout_set;
+b2->failontimeout = b1->failontimeout;
+}

Same as above.

  if (apr_is_empty_array(tmp.errstatuses)
+ && !apr_is_empty_array(b1->errstatuses)) {
+apr_array_cat(b2->errstatuses, b1->errstatuses);
+}

b1 and b2 point to the same array. So the result will be a doubled b1 if 
apr_array_cat doesn't fail on getting supplied the same pointer twice :-)


Regards

Rüdiger



Index: modules/proxy/mod_proxy.c
===
--- modules/proxy/mod_proxy.c	(revision 1644346)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -306,6 +306,7 @@
 }
 else
 balancer->s->sticky_separator = *val;
+balancer->s->sticky_separator_set = 1;
 }
 else if (!strcasecmp(key, "nofailover")) {
 /* If set to 'on' the session will break
@@ -318,6 +319,7 @@
 balancer->s->sticky_force = 0;
 else
 return "failover must be On|Off";
+balancer->s->sticky_force_set = 1;
 }
 else if (!strcasecmp(key, "timeout")) {
 /* Balancer timeout in seconds.
@@ -348,6 +350,7 @@
 if (provider) {
 balancer->lbmethod = provider;
 if (PROXY_STRNCPY(balancer->s->lbpname, val) == APR_SUCCESS) {
+balancer->lbmethod_set = 1;
 return NULL;
 }
 else {
@@ -367,6 +370,7 @@
 balancer->s->scolonsep = 0;
 else
 return "scolonpathdelim must be On|Off";
+balancer->s->scolonsep_set = 1;
 }
 else if (!strcasecmp(key, "failonstatus")) {
 char *val_split;
@@ -397,6 +401,7 @@
 balancer->failontimeout = 0;
 else
 return "failontimeout must be On|Off";
+balancer->failontimeout_set = 1;
 }
 else if (!strcasecmp(key, "nonce")) {
 if (!strcasecmp(val, "None")) {
@@ -407,6 +412,7 @@
 return "Provided nonce is too large";
 }
 }
+balancer->s->nonce_set = 1;
 }
 else if (!strcasecmp(key, "growth")) {
 ival = atoi(val);
@@ -413,6 +419,7 @@
 if (ival < 1 || ival > 100)   /* arbitrary limit here */
 return "growth must be between 1 and 100";
 balancer->growth = ival;
+balancer->growth_set = 1;
 }
 else if (!strcasecmp(key, "forcerecovery")) {
 if (!strcasecmp(val, "on"))
@@ -421,6 +428,7 @@
 balancer->s->forcerecovery = 0;
 else
 return "forcerecovery must be On|Off";
+balancer->s->forcerecovery_set = 1;
 }
 else {
 return unknown

Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-12 Thread Jan Kaluža

On 12/12/2014 12:08 PM, Yann Ylavic wrote:

Hi Jan,

On Fri, Dec 12, 2014 at 9:44 AM, Jan Kaluža jkal...@redhat.com wrote:

On 12/11/2014 03:05 PM, Plüm, Rüdiger, Vodafone Group wrote:


Looks fine in general. Details:


I hope I've finally fixed everything now :), see the attached patch please.


Aren't the following parameters missing for the merge:
 unsigned int    max_attempts_set:1;
 unsigned int    vhosted:1;
 unsigned int    inactive:1;
vhosted does not seem to be used, but seems a good candidate to
differentiate base vs vhost balancer.


max_attempts_set is merged. I've been merging only things which are set 
by set_balancer_param(). vhosted and inactive seems both to be unused. 
Without the semantics for those, I have no idea how to merge them 
(should I add vhosted_set/inactive_set or just merge them as they are...).


I think vhosted was committed by mistake in r1206286. I can't 
find a place where inactive is set (I was just grepping for inactive).




Detail, sticky can't be NULL here:
+if ((!b2->s->sticky || *b2->s->sticky == 0)
+ && b1->s->sticky && *b1->s->sticky) {
+PROXY_STRNCPY(b2->s->sticky_path, b1->s->sticky_path);
+PROXY_STRNCPY(b2->s->sticky, b1->s->sticky);
+}
+


Ok, it looks like the rest of mod_proxy does not check for s->sticky == 
NULL either, so I will remove it from this method too.


Regards,
Jan Kaluza


I am (still) confused about the shallow copy:


   +*b2 = *b1;


IIUC, Rüdiger's point is that base balancers' parameters shouldn't be
modified by vhosts specifics, because some vhosts (or RewriteRules)
may use the default ones.
But then why would they share (at runtime) the same lbmethod, members
list (dynamic with balancer-manager), backend connections (reslist,
same timeouts, reusability, states, any worker parameter).

IMHO there should be a deep copy here, that's another balancer, with
its own mutexes, own members having their own parameters, own
sockets...



Regards,
Yann.





Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-12 Thread Jan Kaluža

On 12/12/2014 11:56 AM, Ruediger Pluem wrote:



On 12/12/2014 09:44 AM, Jan Kaluža wrote:

On 12/11/2014 03:05 PM, Plüm, Rüdiger, Vodafone Group wrote:




-Original Message-
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Thursday, December 11, 2014 14:40
To: dev@httpd.apache.org
Subject: Re: [PATCH] Balancers, VirtualHost and ProxyPass

On 12/11/2014 08:47 AM, Jan Kaluža wrote:

On 12/10/2014 08:21 PM, Ruediger Pluem wrote:



On 12/10/2014 02:21 PM, Jan Kaluža wrote:

On 12/10/2014 01:49 PM, Plüm, Rüdiger, Vodafone Group wrote:

But this way we lose the base ones that are not touched in the
virtual host and e.g. are only used by rewriterules.
So we should transfer the base ones to the merged array in any case
and update them where needed.


Hm, you are right. Check the new version attached to this email.


But this one changes the parent configuration. So if you have two
virtual hosts and in each you change a different
parameter of the parent the later one has *both* changes, not only
one. So you would need to do a copy of these
balancers first and adjust the copy. Then only add the adjusted copy
to the result. For the base balancers not matching
the override balancers just add them to the result untouched. Same for
the override ones, as you already do


That makes sense, thanks. I will work on it and send updated patch.


I have to admit that this feature is starting to be more time-consuming
than I thought :). The attached patch should address all the issues
pointed by Yann and Rüdiger.


Looks fine in general. Details:


I hope I've finally fixed everything now :), see the attached patch please. I 
really appreciate your reviews here!


Looks good. One thing though: I think apr_array_cat is still wrong since we 
only copied a *pointer* to the array with
*b2 = *b1. So apr_array_cat adjusts the original array from the base. I guess 
we should do:

apr_array_cat(tmp.errstatuses, b2->errstatuses);
b2->errstatuses = tmp.errstatuses;

Of course this is only fine when the order of the errstatuses array doesn't 
matter. Otherwise:

  b2->errstatuses = apr_array_append(b2->errstatuses, tmp.errstatuses);

And from order perspective, maybe:

return apr_array_append(p, tocopy, overrides);

If order doesn't matter I guess

apr_array_cat(overrides, tocopy);
return overrides;

would be also fine.


You are right :(. Fix attached. Order of errstatuses should not be a 
problem. Its values are checked against r->status to determine the status 
code on which a balancer worker is marked as being in error.


Jan Kaluza


Regards

Rüdiger



Index: modules/proxy/mod_proxy.c
===
--- modules/proxy/mod_proxy.c	(revision 1644346)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -306,6 +306,7 @@
 }
 else
 balancer->s->sticky_separator = *val;
+balancer->s->sticky_separator_set = 1;
 }
 else if (!strcasecmp(key, "nofailover")) {
 /* If set to 'on' the session will break
@@ -318,6 +319,7 @@
 balancer->s->sticky_force = 0;
 else
 return "failover must be On|Off";
+balancer->s->sticky_force_set = 1;
 }
 else if (!strcasecmp(key, "timeout")) {
 /* Balancer timeout in seconds.
@@ -348,6 +350,7 @@
 if (provider) {
 balancer->lbmethod = provider;
 if (PROXY_STRNCPY(balancer->s->lbpname, val) == APR_SUCCESS) {
+balancer->lbmethod_set = 1;
 return NULL;
 }
 else {
@@ -367,6 +370,7 @@
 balancer->s->scolonsep = 0;
 else
 return "scolonpathdelim must be On|Off";
+balancer->s->scolonsep_set = 1;
 }
 else if (!strcasecmp(key, "failonstatus")) {
 char *val_split;
@@ -397,6 +401,7 @@
 balancer->failontimeout = 0;
 else
 return "failontimeout must be On|Off";
+balancer->failontimeout_set = 1;
 }
 else if (!strcasecmp(key, "nonce")) {
 if (!strcasecmp(val, "None")) {
@@ -407,6 +412,7 @@
 return "Provided nonce is too large";
 }
 }
+balancer->s->nonce_set = 1;
 }
 else if (!strcasecmp(key, "growth")) {
 ival = atoi(val);
@@ -413,6 +419,7 @@
 if (ival < 1 || ival > 100)   /* arbitrary limit here */
 return "growth must be between 1 and 100";
 balancer->growth = ival;
+balancer->growth_set = 1;
 }
 else if (!strcasecmp(key, "forcerecovery")) {
 if (!strcasecmp(val, "on"))
@@ -421,6 +428,7 @@
 balancer->s->forcerecovery = 0;
 else
 return "forcerecovery must be On|Off";
+balancer->s->forcerecovery_set = 1;
 }
 else {
 return "unknown Balancer parameter";
@@ -1272,6 +1280,99 @@
 return ps;
 }
 
+static apr_array_header_t *merge_balancers(apr_pool_t *p,
+   apr_array_header_t *base

Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-12 Thread Jan Kaluža

On 12/12/2014 02:17 PM, Yann Ylavic wrote:

On Fri, Dec 12, 2014 at 2:09 PM, Plüm, Rüdiger, Vodafone Group
ruediger.pl...@vodafone.com wrote:



-----Original Message-----
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Friday, December 12, 2014 14:00
To: dev@httpd.apache.org
Subject: Re: [PATCH] Balancers, VirtualHost and ProxyPass

You are right :(. Fix attached. Order of errstatuses should not be a
problem. Its values are checked against r->status to determine the status
code on which a balancer worker is marked as being in error.


Guess it is fine now :-)


+1


Thank you both, that one was painful :(.

Jan Kaluza


Regards,
Yann.





Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-11 Thread Jan Kaluža

On 12/10/2014 08:21 PM, Ruediger Pluem wrote:



On 12/10/2014 02:21 PM, Jan Kaluža wrote:

On 12/10/2014 01:49 PM, Plüm, Rüdiger, Vodafone Group wrote:

But this way we lose the base ones that are not touched in the virtual host and 
e.g. are only used by rewriterules.
So we should transfer the base ones to the merged array in any case and update 
them where needed.


Hm, you are right. Check the new version attached to this email.


But this one changes the parent configuration. So if you have two virtual hosts 
and in each you change a different
parameter of the parent the later one has *both* changes, not only one. So you 
would need to do a copy of these
balancers first and adjust the copy. Then only add the adjusted copy to the 
result. For the base balancers not matching
the override balancers just add them to the result untouched. Same for the 
override ones, as you already do


That makes sense, thanks. I will work on it and send updated patch.

Jan Kaluza


Regards

Rüdiger





Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-11 Thread Jan Kaluža

On 12/11/2014 08:47 AM, Jan Kaluža wrote:

On 12/10/2014 08:21 PM, Ruediger Pluem wrote:



On 12/10/2014 02:21 PM, Jan Kaluža wrote:

On 12/10/2014 01:49 PM, Plüm, Rüdiger, Vodafone Group wrote:

But this way we lose the base ones that are not touched in the
virtual host and e.g. are only used by rewriterules.
So we should transfer the base ones to the merged array in any case
and update them where needed.


Hm, you are right. Check the new version attached to this email.


But this one changes the parent configuration. So if you have two
virtual hosts and in each you change a different
parameter of the parent the later one has *both* changes, not only
one. So you would need to do a copy of these
balancers first and adjust the copy. Then only add the adjusted copy
to the result. For the base balancers not matching
the override balancers just add them to the result untouched. Same for
the override ones, as you already do


That makes sense, thanks. I will work on it and send updated patch.


I have to admit that this feature is starting to be more time-consuming 
than I thought :). The attached patch should address all the issues 
pointed out by Yann and Rüdiger.


When a balancer is found in both the overrides and base arrays, a copy of 
the base balancer replaces the one in the override array, but the original 
override's options, which can be set by ProxyPass, are kept.


If the balancer exists only in the overrides array, it is kept untouched.

If the balancer exists only in the base array, it is appended to the 
override array and returned untouched.



Jan Kaluza


Regards

Rüdiger





Regards,
Jan Kaluza

Index: modules/proxy/mod_proxy.c
===
--- modules/proxy/mod_proxy.c	(revision 1644346)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -306,6 +306,7 @@
 }
 else
 balancer->s->sticky_separator = *val;
+balancer->s->sticky_separator_set = 1;
 }
 else if (!strcasecmp(key, "nofailover")) {
 /* If set to 'on' the session will break
@@ -318,6 +319,7 @@
 balancer->s->sticky_force = 0;
 else
 return "failover must be On|Off";
+balancer->s->sticky_force_set = 1;
 }
 else if (!strcasecmp(key, "timeout")) {
 /* Balancer timeout in seconds.
@@ -348,6 +350,7 @@
 if (provider) {
 balancer->lbmethod = provider;
 if (PROXY_STRNCPY(balancer->s->lbpname, val) == APR_SUCCESS) {
+balancer->lbmethod_set = 1;
 return NULL;
 }
 else {
@@ -367,6 +370,7 @@
 balancer->s->scolonsep = 0;
 else
 return "scolonpathdelim must be On|Off";
+balancer->s->scolonsep_set = 1;
 }
 else if (!strcasecmp(key, "failonstatus")) {
 char *val_split;
@@ -397,6 +401,7 @@
 balancer->failontimeout = 0;
 else
 return "failontimeout must be On|Off";
+balancer->failontimeout_set = 1;
 }
 else if (!strcasecmp(key, "nonce")) {
 if (!strcasecmp(val, "None")) {
@@ -407,6 +412,7 @@
 return "Provided nonce is too large";
 }
 }
+balancer->s->nonce_set = 1;
 }
 else if (!strcasecmp(key, "growth")) {
 ival = atoi(val);
@@ -413,6 +419,7 @@
 if (ival < 1 || ival > 100)   /* arbitrary limit here */
 return "growth must be between 1 and 100";
 balancer->growth = ival;
+balancer->growth_set = 1;
 }
 else if (!strcasecmp(key, "forcerecovery")) {
 if (!strcasecmp(val, "on"))
@@ -421,6 +428,7 @@
 balancer->s->forcerecovery = 0;
 else
 return "forcerecovery must be On|Off";
+balancer->s->forcerecovery_set = 1;
 }
 else {
 return "unknown Balancer parameter";
@@ -1272,6 +1280,94 @@
@@ -1272,6 +1280,94 @@
 return ps;
 }
 
+static apr_array_header_t *merge_balancers(apr_pool_t *p,
+   apr_array_header_t *base,
+   apr_array_header_t *overrides)
+{
+proxy_balancer *b1;
+proxy_balancer *b2;
+proxy_balancer tmp;
+int x, y, found;
+apr_array_header_t *tocopy = apr_array_make(p, 1, sizeof(proxy_balancer));
+
+/* Check if the balancer is defined in both override and base configs:
+ * a) If it is, Create copy of base balancer and change the configuration
+ *which can be changed by ProxyPass.
+ * b) Otherwise, copy the balancer to tocopy array and merge it later.
+ */
+b1 = (proxy_balancer *) base-elts;
+for (y = 0; y  base-nelts; y++) {
+b2 = (proxy_balancer *) overrides-elts;
+for (x = 0, found = 0; x  overrides-nelts; x++) {
+if (b1-hash.def == b2-hash.def  b1-hash.fnv == b2-hash.fnv) {
+tmp = *b2;
+*b2 = *b1;
+b2-s = tmp.s;
+if ((!tmp.s-sticky || *tmp.s-sticky == 0)
+ b1-s-sticky  *b1-s

[PATCH] Balancers, VirtualHost and ProxyPass

2014-12-10 Thread Jan Kaluža

Hi,

I've found out that the following configuration does not work as expected:

<Proxy balancer://a>
   ...
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://a stickysession=JSESSIONID|jsessionid
</VirtualHost>

In this case, two proxy_balancers are created: the first one in the <Proxy>
section in the main config without stickysession, and the second one in
the vhost section with stickysession set.


Because of merge_proxy_config behaviour, the one from the main config is 
always preferred and therefore you cannot set stickysession (and other 
options) this way.


The attached patch fixes that by changing the merge strategy for the
balancers array so that options set by ProxyPass are merged.


I think we would need the same for proxy_worker too, but before I spend an
afternoon working on it, I wanted to ask: do you think this is the right
way to fix this?


Regards,
Jan Kaluza
Index: modules/proxy/mod_proxy.c
===================================================================
--- modules/proxy/mod_proxy.c	(revision 1644346)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -306,6 +306,7 @@
         }
         else
             balancer->s->sticky_separator = *val;
+        balancer->s->sticky_separator_set = 1;
     }
     else if (!strcasecmp(key, "nofailover")) {
         /* If set to 'on' the session will break
@@ -318,6 +319,7 @@
             balancer->s->sticky_force = 0;
         else
             return "failover must be On|Off";
+        balancer->s->sticky_force_set = 1;
     }
     else if (!strcasecmp(key, "timeout")) {
         /* Balancer timeout in seconds.
@@ -348,6 +350,7 @@
         if (provider) {
             balancer->lbmethod = provider;
             if (PROXY_STRNCPY(balancer->s->lbpname, val) == APR_SUCCESS) {
+                balancer->lbmethod_set = 1;
                 return NULL;
             }
             else {
@@ -367,6 +370,7 @@
             balancer->s->scolonsep = 0;
         else
             return "scolonpathdelim must be On|Off";
+        balancer->s->scolonsep_set = 1;
     }
     else if (!strcasecmp(key, "failonstatus")) {
         char *val_split;
@@ -397,6 +401,7 @@
             balancer->failontimeout = 0;
         else
             return "failontimeout must be On|Off";
+        balancer->failontimeout_set = 1;
     }
     else if (!strcasecmp(key, "nonce")) {
         if (!strcasecmp(val, "None")) {
@@ -407,6 +412,7 @@
                 return "Provided nonce is too large";
             }
         }
+        balancer->s->nonce_set = 1;
     }
     else if (!strcasecmp(key, "growth")) {
         ival = atoi(val);
@@ -413,6 +419,7 @@
         if (ival < 1 || ival > 100)   /* arbitrary limit here */
             return "growth must be between 1 and 100";
         balancer->growth = ival;
+        balancer->growth_set = 1;
     }
     else if (!strcasecmp(key, "forcerecovery")) {
         if (!strcasecmp(val, "on"))
@@ -421,6 +428,7 @@
             balancer->s->forcerecovery = 0;
         else
             return "forcerecovery must be On|Off";
+        balancer->s->forcerecovery_set = 1;
     }
     else {
         return "unknown Balancer parameter";
@@ -1272,6 +1280,89 @@
     return ps;
 }
 
+static apr_array_header_t *merge_balancers(apr_pool_t *p,
+                                           apr_array_header_t *base,
+                                           apr_array_header_t *overrides)
+{
+    proxy_balancer *b1;
+    proxy_balancer *b2;
+    proxy_balancer *copy;
+    int x, y, found;
+    apr_array_header_t *merged = apr_array_make(p, base->nelts,
+                                                sizeof(proxy_balancer));
+
+    /* Check if the balancer is defined in both override and base configs:
+     * a) If it is, merge the changes which can be done by ProxyPass.
+     * b) Otherwise, copy the balancer to merged array.
+     */
+    b2 = (proxy_balancer *) overrides->elts;
+    for (x = 0; x < overrides->nelts; x++) {
+        b1 = (proxy_balancer *) base->elts;
+        for (y = 0, found = 0; y < base->nelts; y++) {
+            if (b1->hash.def == b2->hash.def && b1->hash.fnv == b2->hash.fnv) {
+                if (b2->s->sticky && *b2->s->sticky) {
+                    PROXY_STRNCPY(b1->s->sticky_path, b2->s->sticky_path);
+                    PROXY_STRNCPY(b1->s->sticky, b2->s->sticky);
+                }
+                if (b2->s->sticky_separator_set) {
+                    b1->s->sticky_separator_set = b2->s->sticky_separator_set;
+                    b1->s->sticky_separator = b2->s->sticky_separator;
+                }
+                if (b2->s->timeout) {
+                    b1->s->timeout = b2->s->timeout;
+                }
+                if (b2->s->max_attempts_set) {
+                    b1->s->max_attempts_set = b2->s->max_attempts_set;
+                    b1->s->max_attempts = b2->s->max_attempts;
+                }
+                if (b2->lbmethod_set) {
+                    b1->lbmethod_set = b2->lbmethod_set;
+                    b1->lbmethod = b2->lbmethod;
+                    PROXY_STRNCPY(b1->s->lbpname, b2->s->lbpname);
+                }
+                if 

Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-10 Thread Jan Kaluža

On 12/10/2014 01:49 PM, Plüm, Rüdiger, Vodafone Group wrote:

But this way we lose the base ones that are not touched in the virtual host and 
e.g. are only used by rewriterules.
So we should transfer the base ones to the merged array in any case and update 
them where needed.


Hm, you are right. Check the new version attached to this email.


Isn't the config merge on a critical path with every request? So double for 
loops always worry me a little bit from performance point of view.


I think that happens only in ap_fixup_virtual_hosts call, which is 
executed only during the httpd start. From this point of view, double 
for loop should be OK.


Regards,
Jan Kaluza


Regards

Rüdiger


-Original Message-
From: Jan Kaluža [mailto:jkal...@redhat.com]
Sent: Mittwoch, 10. Dezember 2014 13:26
To: httpd
Subject: [PATCH] Balancers, VirtualHost and ProxyPass

Hi,

I've found out that the following configuration does not work as expected:

<Proxy balancer://a>
 ...
</Proxy>
<VirtualHost *:80>
  ProxyPass / balancer://a stickysession=JSESSIONID|jsessionid
</VirtualHost>

In this case, two proxy_balancers are created: the first one in the <Proxy>
section in the main config without stickysession, and the second one in
the vhost section with stickysession set.

Because of merge_proxy_config behaviour, the one from the main config is
always preferred and therefore you cannot set stickysession (and other
options) this way.

The attached patch fixes that by changing the merge strategy for the
balancers array so that options set by ProxyPass are merged.

I think we would need the same for proxy_worker too, but before I spend an
afternoon working on it, I wanted to ask: do you think this is the right
way to fix this?

Regards,
Jan Kaluza


Index: modules/proxy/mod_proxy.c
===================================================================
--- modules/proxy/mod_proxy.c	(revision 1644346)
+++ modules/proxy/mod_proxy.c	(working copy)
@@ -306,6 +306,7 @@
         }
         else
             balancer->s->sticky_separator = *val;
+        balancer->s->sticky_separator_set = 1;
     }
     else if (!strcasecmp(key, "nofailover")) {
         /* If set to 'on' the session will break
@@ -318,6 +319,7 @@
             balancer->s->sticky_force = 0;
         else
             return "failover must be On|Off";
+        balancer->s->sticky_force_set = 1;
     }
     else if (!strcasecmp(key, "timeout")) {
         /* Balancer timeout in seconds.
@@ -348,6 +350,7 @@
         if (provider) {
             balancer->lbmethod = provider;
             if (PROXY_STRNCPY(balancer->s->lbpname, val) == APR_SUCCESS) {
+                balancer->lbmethod_set = 1;
                 return NULL;
             }
             else {
@@ -367,6 +370,7 @@
             balancer->s->scolonsep = 0;
         else
             return "scolonpathdelim must be On|Off";
+        balancer->s->scolonsep_set = 1;
     }
     else if (!strcasecmp(key, "failonstatus")) {
         char *val_split;
@@ -397,6 +401,7 @@
             balancer->failontimeout = 0;
         else
             return "failontimeout must be On|Off";
+        balancer->failontimeout_set = 1;
     }
     else if (!strcasecmp(key, "nonce")) {
         if (!strcasecmp(val, "None")) {
@@ -407,6 +412,7 @@
                 return "Provided nonce is too large";
             }
         }
+        balancer->s->nonce_set = 1;
     }
     else if (!strcasecmp(key, "growth")) {
         ival = atoi(val);
@@ -413,6 +419,7 @@
         if (ival < 1 || ival > 100)   /* arbitrary limit here */
             return "growth must be between 1 and 100";
         balancer->growth = ival;
+        balancer->growth_set = 1;
     }
     else if (!strcasecmp(key, "forcerecovery")) {
         if (!strcasecmp(val, "on"))
@@ -421,6 +428,7 @@
             balancer->s->forcerecovery = 0;
         else
             return "forcerecovery must be On|Off";
+        balancer->s->forcerecovery_set = 1;
     }
     else {
         return "unknown Balancer parameter";
@@ -1272,6 +1280,86 @@
     return ps;
 }
 
+static apr_array_header_t *merge_balancers(apr_pool_t *p,
+                                           apr_array_header_t *base,
+                                           apr_array_header_t *overrides)
+{
+    proxy_balancer *b1;
+    proxy_balancer *b2;
+    int x, y, found;
+    apr_array_header_t *tocopy = apr_array_make(p, 1, sizeof(proxy_balancer));
+
+    /* Check if the balancer is defined in both override and base configs:
+     * a) If it is, merge the changes which can be done by ProxyPass.
+     * b) Otherwise, copy the balancer to merged array.
+     */
+    b2 = (proxy_balancer *) overrides->elts;
+    for (x = 0; x < overrides->nelts; x++) {
+        b1 = (proxy_balancer *) base->elts;
+        for (y = 0, found = 0; y < base->nelts; y++) {
+            if (b1->hash.def == b2->hash.def && b1->hash.fnv == b2->hash.fnv) {
+                if (b2->s->sticky && *b2->s->sticky) {
+                    PROXY_STRNCPY(b1->s->sticky_path, b2->s->sticky_path);
+                    PROXY_STRNCPY(b1->s->sticky, b2->s->sticky

Re: [PATCH] Balancers, VirtualHost and ProxyPass

2014-12-10 Thread Jan Kaluža

On 12/10/2014 02:50 PM, Yann Ylavic wrote:

On Wed, Dec 10, 2014 at 2:21 PM, Jan Kaluža jkal...@redhat.com wrote:

On 12/10/2014 01:49 PM, Plüm, Rüdiger, Vodafone Group wrote:

Isn't the config merge on a critical path with every request? So double
for loops always worry me a little bit from performance point of view.


I think that happens only in ap_fixup_virtual_hosts call, which is executed
only during the httpd start. From this point of view, double for loop should
be OK.


Right, but since there will possibly more balancers/workers (main and
vhosts ones), the child_init() of mod_proxy and mod_proxy_balancer may
take longer to initialize all the balancers/workers.
This is not at request time still.


That's true, but I'm not sure I see a different way to do it without
changing the way the balancers are stored.


I could, for example, create a temporary apr_hash which would allow
iterating the override balancers just once, but I'm not sure it's so much
better than the current way. (Btw, is there a reason why we are storing
balancers in an array?)
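
For the record, a rough sketch of that one-pass idea (assuming apr_hash.h and
the usual mod_proxy types; using the balancer name as the key is an
assumption, since the patches above compare hash.def/hash.fnv instead):

/* Sketch only: index the override balancers by name so the base array is
 * walked once; entries found only in base are collected into "tocopy" and
 * appended afterwards, as in the patches above. */
static void merge_balancers_hashed(apr_pool_t *p,
                                   apr_array_header_t *base,
                                   apr_array_header_t *overrides)
{
    int i;
    apr_hash_t *index = apr_hash_make(p);
    apr_array_header_t *tocopy = apr_array_make(p, 1, sizeof(proxy_balancer));
    proxy_balancer *b = (proxy_balancer *) overrides->elts;

    for (i = 0; i < overrides->nelts; i++) {
        apr_hash_set(index, b[i].s->name, APR_HASH_KEY_STRING, &b[i]);
    }

    b = (proxy_balancer *) base->elts;
    for (i = 0; i < base->nelts; i++) {
        proxy_balancer *o = apr_hash_get(index, b[i].s->name,
                                         APR_HASH_KEY_STRING);
        if (o == NULL) {
            /* only in base: keep it untouched */
            *(proxy_balancer *) apr_array_push(tocopy) = b[i];
        }
        else {
            /* defined in both: apply the ProxyPass overrides to a copy of
             * the base entry, as discussed earlier in the thread */
        }
    }
    /* finally append tocopy to overrides (omitted) */
}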



Also, what will the balancer_handler() do now, manage main
workers/balancers only?


Hm, this should not change anything done by balancer_handler().

The patch only sets the balancer->s before the post_config is called.
The vhost still has its own proxy_balancer instance, as well as the main
config. The only thing changed by the patch is that if you set some 
option in vhost config section, it will get set in proxy_balancer too.


The question here is if it is a valid use-case to use balancer with the 
same name in the main config and also in the vhost, but treat them as 
two independent balancers. If that's valid use-case, then the patch is 
wrong, because it changes the main balancer according to things done in 
vhost.


If you use BalancerPersist, it (imho) does not matter what you do with 
balancer->s before post_config, because after the post_config, it will 
get replaced by the stored shared memory.



Regards,
Yann.




Regards,
Jan Kaluza


Re: server/util_expr_(parse|scan) generated sources files

2014-12-08 Thread Jan Kaluža

On 12/08/2014 10:29 PM, Yann Ylavic wrote:

Finally committed in r1643929, generated by bison-2.7.1.


Thanks Yann!

Regards,
Jan Kaluza



On Mon, Dec 8, 2014 at 9:12 PM, Yann Ylavic ylavic@gmail.com wrote:

Reverted in r1643901.

Someone with a bison version above 2.7.12-4996 should do this (mine is 2.5).

On Mon, Dec 8, 2014 at 8:48 PM, Yann Ylavic ylavic@gmail.com wrote:

OK, thanks.

Done in r1643900.

On Mon, Dec 8, 2014 at 7:33 PM, Gregg Smith g...@gknw.net wrote:

Hi Yann,

Commit the newly generated files please.

Regards,

Gregg



On 12/8/2014 9:18 AM, Yann Ylavic wrote:


Hi,

commit r1642154 modified server/util_expr_parse.y, hence
server/util_expr_parse.c and server/util_expr_parse.h are re-generated
by bison during make.
However these .[ch] are also committed in the svn repository...

There is also the same issue with server/util_expr_scan.l wrt
server/util_expr_scan.c (generated by flex), should the former be
modified.

How are we supposed to handle these files upon change of the
bison/flex sources, commit the newly generated files (using the same
bison and flex tools which are not required by the build system
currently), or should we remove the generated files from the svn and
sowehow require any lex/yacc parser at configure time?

Regards,
Yann.







Re: Systemd support in 2.4

2014-12-07 Thread Jan Kaluža

On 12/06/2014 01:40 PM, Jeff Trawick wrote:

On Fri, Dec 5, 2014 at 6:59 AM, Jan Kaluža jkal...@redhat.com wrote:

On 12/02/2014 02:08 PM, Jeff Trawick wrote:

On Wed, Sep 17, 2014 at 9:22 AM, Jeff Trawick traw...@gmail.com wrote:

 On Mon, Sep 15, 2014 at 2:00 AM, Jan Kaluža jkal...@redhat.com wrote:

 On 09/14/2014 01:21 PM, Martynas Bendorius wrote:

 Hello,

 Is there any special reason why mod_systemd and
mod_journald
 (available
 in trunk) are not backported to 2.4 yet?


 Hi,

 I think mod_systemd could be proposed for 2.4 branch
(maybe even
 with the changes adding socket activation), but for
 mod_journald, we would have to backport modular
logging, which
 breaks the API/ABI and therefore I'm afraid that won't
happen in
 2.4 branch :(.


 I have an old patch somewhere that doesn't break the
API/ABI, and
 accommodates such differences as syslog being built-in in
2.4.x.  I
 didn't realize that anybody besides me actually cared.

 I'll try to find time to see how it fits in 2.4.x-HEAD.


I've simply attached it from its state one year ago, not having
time to
play with it.  I don't think it is necessary to break the ABI.
syslog
support remains part of core logging instead of in a module.


I've created my version of the patch based on yours. It includes
also more recent commits from trunk related to errorlog providers.
Can you try it? I presume you have done that backport, because you
are running that somewhere :).


Once upon a time...  What remains in my possession is a test module I
wrote for testing the interface (attached).

Anyway, is this feature from ap_errorlog_provider lost intentionally?

+/** Checks syntax of ErrorLog directive argument.
+ * @param cmd The config directive
+ * @param arg ErrorLog directive argument (or the empty string if
+ * no argument was provided)
+ * @return Error message or NULL on success
+ * @remark The argument will be stored in the error_fname field
+ * of server_rec for access later.
+ */
+const char * (*parse_errorlog_arg)(cmd_parms *cmd, const char *arg);

(and code to call it in set_errorlog)



Thanks, I've missed that one when backporting. I've also checked that 
now my patch is the same as yours :).


Regards,
Jan Kaluza



Regards,
Jan Kaluza


 Jan Kaluza


 As we have a lot of distributions already using
systemd by
 default
 (CentOS/RHEL 7, Fedora, Arch Linux, CoreOS,
openSUSE), and
 more of them
 are going to use systemd by default (Debian 8 (Jessie),
 Ubuntu), it
 requires manual patching of apache for the support of
 systemd/journald.

 Thank you!




--
Born in Roswell... married an alien...
http://emptyhammock.com/



Index: docs/manual/mod/core.xml
===================================================================
--- docs/manual/mod/core.xml	(revision 1643223)
+++ docs/manual/mod/core.xml	(working copy)
@@ -1315,6 +1315,9 @@
 
 <highlight language="config">ErrorLog syslog:user</highlight>
 
+<p>Additional modules can provide their own ErrorLog providers. The syntax
+is similar to <code>syslog</code> example above.</p>
+
 <p>SECURITY: See the <a
 href="../misc/security_tips.html#serverroot">security tips</a>
 document for details on why your security could be compromised
Index: include/http_core.h
===================================================================
--- include/http_core.h	(revision 1643223)
+++ include/http_core.h	(working copy)
@@ -835,8 +835,54 @@
 
     /** message format */
     const char *format;
+    /** 1 if logging using provider, 0 otherwise */
+    int using_provider;
 } ap_errorlog_info;
 
+#define AP_ERRORLOG_PROVIDER_GROUP "error_log_writer"
+#define AP_ERRORLOG_PROVIDER_VERSION "0"
+#define AP_ERRORLOG_DEFAULT_PROVIDER "file"
+
+/** add APR_EOL_STR to the end of log message */
+#define AP_ERRORLOG_PROVIDER_ADD_EOL_STR   1
+
+typedef struct ap_errorlog_provider ap_errorlog_provider;
+
+struct ap_errorlog_provider {
+    /** Initializes the error log writer.
+     * @param p The pool to create any storage from
+     * @param s Server for which the logger is initialized
+     * @return Pointer to handle passed later to writer() function
+     * @note On success, the provider must return non

Re: Systemd support in 2.4

2014-12-05 Thread Jan Kaluža

On 12/02/2014 02:08 PM, Jeff Trawick wrote:

On Wed, Sep 17, 2014 at 9:22 AM, Jeff Trawick traw...@gmail.com wrote:

On Mon, Sep 15, 2014 at 2:00 AM, Jan Kaluža jkal...@redhat.com wrote:

On 09/14/2014 01:21 PM, Martynas Bendorius wrote:

Hello,

Is there any special reason why mod_systemd and mod_journald
(available
in trunk) are not backported to 2.4 yet?


Hi,

I think mod_systemd could be proposed for 2.4 branch (maybe even
with the changes adding socket activation), but for
mod_journald, we would have to backport modular logging, which
breaks the API/ABI and therefore I'm afraid that won't happen in
2.4 branch :(.


I have an old patch somewhere that doesn't break the API/ABI, and
accommodates such differences as syslog being built-in in 2.4.x.  I
didn't realize that anybody besides me actually cared.

I'll try to find time to see how it fits in 2.4.x-HEAD.


I've simply attached it from its state one year ago, not having time to
play with it.  I don't think it is necessary to break the ABI.  syslog
support remains part of core logging instead of in a module.


I've created my version of the patch based on yours. It includes also 
more recent commits from trunk related to errorlog providers. Can you 
try it? I presume you have done that backport, because you are running 
that somewhere :).


Regards,
Jan Kaluza


Jan Kaluza


As we have a lot of distributions already using systemd by
default
(CentOS/RHEL 7, Fedora, Arch Linux, CoreOS, openSUSE), and
more of them
are going to use systemd by default (Debian 8 (Jessie),
Ubuntu), it
requires manual patching of apache for the support of
systemd/journald.

Thank you!





--
Born in Roswell... married an alien...
http://emptyhammock.com/



Index: docs/manual/mod/core.xml
===================================================================
--- docs/manual/mod/core.xml	(revision 1643223)
+++ docs/manual/mod/core.xml	(working copy)
@@ -1315,6 +1315,9 @@
 
 <highlight language="config">ErrorLog syslog:user</highlight>
 
+<p>Additional modules can provide their own ErrorLog providers. The syntax
+is similar to <code>syslog</code> example above.</p>
+
 <p>SECURITY: See the <a
 href="../misc/security_tips.html#serverroot">security tips</a>
 document for details on why your security could be compromised
Index: include/http_core.h
===================================================================
--- include/http_core.h	(revision 1643223)
+++ include/http_core.h	(working copy)
@@ -835,8 +835,44 @@
 
     /** message format */
     const char *format;
+    /** 1 if logging using provider, 0 otherwise */
+    int using_provider;
 } ap_errorlog_info;
 
+#define AP_ERRORLOG_PROVIDER_GROUP "error_log_writer"
+#define AP_ERRORLOG_PROVIDER_VERSION "0"
+#define AP_ERRORLOG_DEFAULT_PROVIDER "file"
+
+/** add APR_EOL_STR to the end of log message */
+#define AP_ERRORLOG_PROVIDER_ADD_EOL_STR   1
+
+typedef struct ap_errorlog_provider ap_errorlog_provider;
+
+struct ap_errorlog_provider {
+    /** Initializes the error log writer.
+     * @param p The pool to create any storage from
+     * @param s Server for which the logger is initialized
+     * @return Pointer to handle passed later to writer() function
+     * @note On success, the provider must return non-NULL, even if
+     * the handle is not necessary when the writer() function is
+     * called.  On failure, the provider should log a startup error
+     * message and return NULL to abort httpd startup.
+     */
+    void * (*init)(apr_pool_t *p, server_rec *s);
+
+    /** Logs the error message to external error log.
+     * @param info Context of the error message
+     * @param handle Handle created by init() function
+     * @param errstr Error message
+     * @param len Length of the error message
+     */
+    apr_status_t (*writer)(const ap_errorlog_info *info, void *handle,
+                           const char *errstr, apr_size_t len);
+
+    /** a combination of the AP_ERRORLOG_PROVIDER_* flags */
+    unsigned int flags;
+};
+
 /**
  * callback function prototype for a external errorlog handler
  * @note To avoid unbounded memory usage, these functions must not allocate
Index: include/httpd.h
===================================================================
--- include/httpd.h	(revision 1643223)
+++ include/httpd.h	(working copy)
@@ -1248,6 +1248,10 @@
     apr_file_t *error_log;
     /** The log level configuration */
     struct ap_logconf log;
+    /** External error log writer provider */
+    struct ap_errorlog_provider *errorlog_provider;
+    /** Handle to be passed to external log provider's logging method */
+    void *errorlog_provider_handle;
 
     /* Module-specific configuration for server

Re: svn commit: r1642154 - in /httpd/httpd/trunk: docs/manual/expr.xml include/ap_expr.h server/util_expr_eval.c server/util_expr_parse.y

2014-12-05 Thread Jan Kaluža

On 12/05/2014 02:26 PM, Eric Covener wrote:

On Thu, Nov 27, 2014 at 8:46 AM,  jkal...@apache.org wrote:

* ap_exr: Add replace(string, from, to) function.


Is it possible to evaluate this from ap_expr_str_exec()?


Hm, it worked for me like this:

Require expr replace(%{REQUEST_METHOD}, "E", "O") == "GOT"

This internally uses ap_expr_str_exec().


I am stuck on trying to get it to work. I think we cannot get to multi-valued
functions from  the %{func:arg} syntax used when starting at string.


I think I'm lost here. Checking for ap_expr documentation, I don't see 
this syntax. Can you describe this more. I'm sorry, but was checking 
ap_expr for first time while doing this patch, so there might be things 
I overlooked.


Regards,
Jan Kaluza



Re: svn commit: r1642154 - in /httpd/httpd/trunk: docs/manual/expr.xml include/ap_expr.h server/util_expr_eval.c server/util_expr_parse.y

2014-12-05 Thread Jan Kaluža

On 12/05/2014 05:09 PM, Eric Covener wrote:

On Fri, Dec 5, 2014 at 11:01 AM, Jan Kaluža jkal...@redhat.com wrote:

On 12/05/2014 02:26 PM, Eric Covener wrote:


On Thu, Nov 27, 2014 at 8:46 AM,  jkal...@apache.org wrote:


* ap_exr: Add replace(string, from, to) function.



Is it possible to evaluate this from ap_expr_str_exec()?



Hm, it worked for me like this:

Require expr replace(%{REQUEST_METHOD}, "E", "O") == "GOT"


I think this uses ap_expr_exec / aka boolean result:

static authz_status expr_check_authorization(request_rec *r,
                                             const char *require_line,
                                             const void *parsed_require_line)
{
    const char *err = NULL;
    const struct require_expr_info *info = parsed_require_line;
    int rc = ap_expr_exec(r, info->expr, &err);

    if (rc < 0) {
        ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(02320)
                      "Error evaluating expression in 'Require expr': %s",
                      err);
        return AUTHZ_GENERAL_ERROR;
    }
    else if (rc == 0) {
        if (info->want_user)
            return AUTHZ_DENIED_NO_USER;
        else
            return AUTHZ_DENIED;
    }
    else {
        return AUTHZ_GRANTED;
    }
}



This internally uses ap_expr_str_exec().


I am stuck on trying to get it to work. I think we cannot get to
multi-valued
functions from  the %{func:arg} syntax used when starting at string.



I think I'm lost here. Checking for ap_expr documentation, I don't see this
syntax. Can you describe this more. I'm sorry, but was checking ap_expr for
first time while doing this patch, so there might be things I overlooked.


I think it is the first function of its kind so it may not be easy.

When you actually want a string as the net result of an expression,
you start at "string" in the ap_expr doc.

string      ::= stringpart
              | string stringpart

stringpart  ::= cstring
              | variable
              | rebackref

variable    ::= "%{" varname "}"
              | "%{" funcname ":" funcargs "}"

Full disclosure -- I don't understand this stuff so well, but I have
been learning on the job trying to make a dent in documenting some
different flavors of expressions in at least if and header set Foo
expr= so there is a starting point.


I see what you mean now, thanks. I will need some time to actually think 
about possible solutions.


Right now, I'm curious why we can't add:

stringpart  ::= function

But there may be some problem with that I don't see right now. Well, I 
will try to play with it and let you know.
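
For reference, this is roughly how a string-valued expression would be parsed
and evaluated with the trunk ap_expr API (ap_expr_parse_cmd,
AP_EXPR_FLAG_STRING_RESULT, ap_expr_str_exec). Whether a multi-argument
function like replace() can appear in string context at all is exactly the
open question here, so this sketch may simply fail at parse time:

/* Sketch only, not working trunk behaviour. */
static const char *eval_replace_example(cmd_parms *cmd, request_rec *r)
{
    const char *err = NULL;
    const ap_expr_info_t *expr;
    const char *result;

    expr = ap_expr_parse_cmd(cmd, "replace(%{REQUEST_METHOD}, \"E\", \"O\")",
                             AP_EXPR_FLAG_STRING_RESULT, &err, NULL);
    if (err != NULL) {
        return NULL;   /* parse error: replace() not reachable from "string" */
    }

    result = ap_expr_str_exec(r, expr, &err);
    return err ? NULL : result;
}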


Jan Kaluza



Re: svn commit: r1642154 - in /httpd/httpd/trunk: docs/manual/expr.xml include/ap_expr.h server/util_expr_eval.c server/util_expr_parse.y

2014-12-03 Thread Jan Kaluža

Thanks for reviewing that commit. I've fixed both issues in r1643094.

Regards,
Jan Kaluza

On 12/02/2014 09:55 PM, Ruediger Pluem wrote:



On 11/27/2014 02:46 PM, jkal...@apache.org wrote:

Author: jkaluza
Date: Thu Nov 27 13:46:11 2014
New Revision: 1642154

URL: http://svn.apache.org/r1642154
Log:
* ap_exr: Add replace(string, from, to) function.

Modified:
 httpd/httpd/trunk/docs/manual/expr.xml
 httpd/httpd/trunk/include/ap_expr.h
 httpd/httpd/trunk/server/util_expr_eval.c
 httpd/httpd/trunk/server/util_expr_parse.y




Modified: httpd/httpd/trunk/server/util_expr_eval.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/server/util_expr_eval.c?rev=1642154&r1=1642153&r2=1642154&view=diff
==
--- httpd/httpd/trunk/server/util_expr_eval.c (original)
+++ httpd/httpd/trunk/server/util_expr_eval.c Thu Nov 27 13:46:11 2014



@@ -183,13 +186,29 @@ static const char *ap_expr_eval_string_f
                                          const ap_expr_t *info,
                                          const ap_expr_t *arg)
  {
-    ap_expr_string_func_t *func = (ap_expr_string_func_t *)info->node_arg1;
      const void *data = info->node_arg2;

      AP_DEBUG_ASSERT(info->node_op == op_StringFuncInfo);
-    AP_DEBUG_ASSERT(func != NULL);
+    AP_DEBUG_ASSERT(info->node_arg1 != NULL);
      AP_DEBUG_ASSERT(data != NULL);
-    return (*func)(ctx, data, ap_expr_eval_word(ctx, arg));
+    if (arg->node_op == op_ListElement) {
+        /* Evaluate the list elements and store them in apr_array_header. */
+        ap_expr_string_list_func_t *func =
+            (ap_expr_string_list_func_t *)info->node_arg1;
+        apr_array_header_t *args = apr_array_make(ctx->p, 1, sizeof(char *));


Shouldn't we use at least 2 instead of 1? I guess we expect at least 2 elements.


+        do {
+            const ap_expr_t *val = arg->node_arg1;
+            const char **new = apr_array_push(args);
+            *new = ap_expr_eval_word(ctx, val);
+
+            arg = arg->node_arg2;
+        } while (arg != NULL);
+
+        return (*func)(ctx, data, args);
+    }
+    else {
+        ap_expr_string_func_t *func = (ap_expr_string_func_t *)info->node_arg1;
+        return (*func)(ctx, data, ap_expr_eval_word(ctx, arg));
+    }
  }

  static int intstrcmp(const char *s1, const char *s2)



@@ -1071,6 +1110,59 @@ static const char *ldap_func(ap_expr_eva
  }
  #endif

+static int replace_func_parse_arg(ap_expr_lookup_parms *parms)
+{
+    const char *original = parms->arg;
+    const apr_strmatch_pattern *pattern;
+
+    if (!parms->arg) {
+        *parms->err = apr_psprintf(parms->ptemp, "replace() function needs "
+                                   "exactly 3 arguments");
+        return !OK;
+    }
+    pattern = apr_strmatch_precompile(parms->pool, original, 0);
+    *parms->data = pattern;
+    return OK;
+}
+
+static const char *replace_func(ap_expr_eval_ctx_t *ctx, const void *data,
+                                const apr_array_header_t *args)
+{
+    char *buff, *original, *replacement;
+    struct ap_varbuf vb;
+    apr_size_t repl_len;
+    const char *repl;
+    apr_size_t bytes;
+    apr_size_t len;
+    const apr_strmatch_pattern *pattern = data;
+    if (args->nelts != 3) {
+        *ctx->err = apr_psprintf(ctx->p, "replace() function needs "
+                                 "exactly 3 arguments");
+        return "";
+    }
+
+    buff = APR_ARRAY_IDX(args, 2, char *);
+    original = APR_ARRAY_IDX(args, 1, char *);
+    replacement = APR_ARRAY_IDX(args, 0, char *);
+    repl_len = strlen(replacement);
+    bytes = strlen(buff);
+
+    ap_varbuf_init(ctx->p, &vb, 0);
+    vb.strlen = 0;
+
+    while ((repl = apr_strmatch(pattern, buff, bytes))) {
+        len = (apr_size_t) (repl - buff);
+        ap_varbuf_strmemcat(&vb, buff, len);
+        ap_varbuf_strmemcat(&vb, replacement, repl_len);
+
+        len += repl_len;


This goes wrong if replacement string and original string are of different
size. IMHO you need to add strlen(original) to len above.


+        bytes -= len;
+        buff += len;
+    }
+
+    return ap_varbuf_pdup(ctx->p, &vb, NULL, 0, buff, bytes, &len);
+}
+
  #define MAX_FILE_SIZE 10*1024*1024
  static const char *file_func(ap_expr_eval_ctx_t *ctx, const void *data,
                               char *arg)
@@ -1657,6 +1749,7 @@ static const struct expr_provider_single

Regards

Rüdiger





Re: svn commit: r1640495 - /httpd/httpd/trunk/modules/proxy/mod_proxy_fcgi.c

2014-12-03 Thread Jan Kaluža

On 12/01/2014 02:15 PM, Yann Ylavic wrote:

On Wed, Nov 19, 2014 at 8:19 AM,  jkal...@apache.org wrote:

Author: jkaluza
Date: Wed Nov 19 07:19:13 2014
New Revision: 1640495

URL: http://svn.apache.org/r1640495
Log:
* mod_proxy_fcgi: Ignore body data from backend for 304 responses. PR 57198.

Modified:
 httpd/httpd/trunk/modules/proxy/mod_proxy_fcgi.c


[]

--- httpd/httpd/trunk/modules/proxy/mod_proxy_fcgi.c (original)
+++ httpd/httpd/trunk/modules/proxy/mod_proxy_fcgi.c Wed Nov 19 07:19:13 2014
@@ -369,7 +369,7 @@ static apr_status_t dispatch(proxy_conn_
                              const char **err)
  {
      apr_bucket_brigade *ib, *ob;
-    int seen_end_of_headers = 0, done = 0;
+    int seen_end_of_headers = 0, done = 0, ignore_body = 0;
      apr_status_t rv = APR_SUCCESS;
      int script_error_status = HTTP_OK;
      conn_rec *c = r->connection;
@@ -579,9 +579,16 @@ recv_again:
                          APR_BRIGADE_INSERT_TAIL(ob, tmp_b);
                          r->status = status;
                          ap_pass_brigade(r->output_filters, ob);
-                        ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(01070)
-                                      "Error parsing script headers");
-                        rv = APR_EINVAL;
+                        if (status == HTTP_NOT_MODIFIED) {
+                            /* The 304 response MUST NOT contain
+                             * a message-body, ignore it. */
+                            ignore_body = 1;
+                        }
+                        else {
+                            ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, APLOGNO(01070)
+                                          "Error parsing script headers");
+                            rv = APR_EINVAL;
+                        }
                          break;
                      }



There seems to be another case below this point where we "Send the
part of the body that we read while reading the headers" (as the
comment says), with no !ignore_body check.


I agree with this part and I will fix it in my next commit.


Also, the backend may use a "Status: 304 Not Modified" header, which
would not result in ap_scan_script_header_err_brigade_ex() returning
that status but setting it to r->status.


In this case, it's up to the backend not to send the body, and if it sends
some body together with the 304 status, we don't have to care about that,
right? I can also implement a check for r->status and set ignore_body=1,
but I thought it's not needed on the httpd side.
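
Something along these lines, for example (a sketch only, reusing the names
from the dispatch() hunk quoted above, not a committed change):

/* Ignore the body for 304 regardless of whether the backend sent
 * "HTTP/1.1 304" directly or a CGI-style "Status: 304" header that
 * ap_scan_script_header_err_brigade_ex() folded into r->status. */
if (status == HTTP_NOT_MODIFIED || r->status == HTTP_NOT_MODIFIED) {
    ignore_body = 1;
}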



So, more generally, shouldn't we check both
ap_scan_script_header_err_brigade_ex()'s returned status AND r->status
against 304 to ignore the body (ie. bypass ap_pass_brigade() for
anything but EOS)?

Regards,
Yann.



Regards,
Jan Kaluza



Re: svn commit: r1609680 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h proxy_util.c

2014-11-27 Thread Jan Kaluža

On 11/24/2014 01:37 PM, Eric Covener wrote:

please check r1641381


Anyone against proposing r1609680 (commit from the subject) + r1641381 
for 2.4.x?


Regards,
Jan Kaluza


On Sun, Nov 23, 2014 at 9:59 PM, Eric Covener cove...@gmail.com wrote:

On Sun, Nov 23, 2014 at 9:57 PM, Eric Covener cove...@gmail.com wrote:

On Fri, Jul 11, 2014 at 6:36 AM,  jkal...@apache.org wrote:

static int ap_proxy_strcmp_ematch(const char *str, const char *expected)
+{
+    apr_size_t x, y;
+
+    for (x = 0, y = 0; expected[y]; ++y, ++x) {
+        if ((!str[x]) && (expected[y] != '$'
+                          || !apr_isdigit(expected[y + 1])))
+            return -1;
+        if (expected[y] == '$' && apr_isdigit(expected[y + 1])) {
+            while (expected[y] == '$' && apr_isdigit(expected[y + 1]))
+                y += 2;
+            if (!expected[y])
+                return 0;
+            while (str[x]) {
+                int ret;
+                if ((ret = ap_proxy_strcmp_ematch(&str[x++], &expected[y])) != 1)
+                    return ret;
+            }
+            return -1;
+        }
+        else if (expected[y] == '\\') {
+            /* NUL is an invalid char! */
+            if (!expected[++y])
+                return -2;
+        }
+        if (str[x] != expected[y])
+            return 1;
+    }
+    return (str[x] != '\0');
+}




Sorry, stray keystroke (tab?) made gmail send early.

This is breaking the common PHP-FPM recipes using unix domain sockets
in trunk e.g.

 ProxyPassMatch ^/info.php$
unix:/var/run/php5-fpm.sock|fcgi://localhost/home/covener/SRC/httpd-trunk/built/htdocs/

The old test accepted the worker URL being a prefix of the worker:

 strncmp(url_copy, worker->s->name, worker_name_length) == 0)

but now that doesn't happen for ProxyPassMatch.  This seems to be
due to the last return expecting str[x] to have been totally consumed
by the expected (worker) string.
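
In other words (illustrative only; "worker" stands for worker->s->name and
"url" for the request URL):

#include <string.h>

/* prefix_match() mirrors the old strncmp() test; full_match() mirrors what
 * ap_proxy_strcmp_ematch() effectively requires when the pattern contains
 * no $N references.  With a worker of
 * "unix:/var/run/php5-fpm.sock|fcgi://localhost/..." and a longer URL,
 * prefix_match() succeeds while full_match() fails, which is the
 * ProxyPassMatch regression described above. */
static int prefix_match(const char *url, const char *worker)
{
    return strncmp(url, worker, strlen(worker)) == 0;
}

static int full_match(const char *url, const char *worker)
{
    return strcmp(url, worker) == 0;
}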


--
Eric Covener
cove...@gmail.com








Re: mod_ssl FakeBasicAuth, the colon problem (PR 52644)

2014-11-24 Thread Jan Kaluža

On 06/26/2014 09:22 AM, Ruediger Pluem wrote:



Joe Orton wrote:

I've had a user hit this: with FakeBasicAuth the client DN gets
translated into a Basic auth blob of base64(username:password), which
then fails when the username part contains a : colon character.

At minimum mod_ssl could/should catch and fail auth under FakeBasicAuth
when DN is seen with a :, that's easy enough.  We *could* also try
escaping the colon, but that introduces an inevitable ambiguity since
there is no escaping standard.

One approach would be to escape any colon in the DN by replacing with
some unusual character sequence ( or whatever) and then only fail
for unescaped DNs which contain that sequence to avoid ambiguity
problems.

Any opinions before I hack something up?

Probably the correct way to approach this problem is using Graham's
nice hacks in the trunk to allow users to construct an appropriate
username:password blob based on expressions:

   http://svn.apache.org/viewvc?view=revision&revision=r1457471


+1 as this being the real fix at least where the expression parser is available.
Maybe just document the colon problem with FakeBasicAuth and point the user to 
AuthBasicFake
to do its own escaping of the colon with the expression parser (at best with an 
example).
But I just realize that a simple search and replace function is missing in the 
expression parser.


The attached patch implements that. You can test the patch, for example, like
this:


Require expr replace(%{REQUEST_METHOD}, "E", "O") == "GOT"

If there won't be any -1, I will commit it (+ docs) to trunk later this 
week.



So maybe hack that up an then go the way above?

Regards

Rüdiger



Regards,
Jan Kaluza



httpd-trunk-ap_expr-replace.patch
Description: application/download


Re: svn commit: r1609680 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h proxy_util.c

2014-11-24 Thread Jan Kaluža

On 11/24/2014 03:59 AM, Eric Covener wrote:

On Sun, Nov 23, 2014 at 9:57 PM, Eric Covener cove...@gmail.com wrote:

On Fri, Jul 11, 2014 at 6:36 AM,  jkal...@apache.org wrote:

static int ap_proxy_strcmp_ematch(const char *str, const char *expected)
+{
+    apr_size_t x, y;
+
+    for (x = 0, y = 0; expected[y]; ++y, ++x) {
+        if ((!str[x]) && (expected[y] != '$'
+                          || !apr_isdigit(expected[y + 1])))
+            return -1;
+        if (expected[y] == '$' && apr_isdigit(expected[y + 1])) {
+            while (expected[y] == '$' && apr_isdigit(expected[y + 1]))
+                y += 2;
+            if (!expected[y])
+                return 0;
+            while (str[x]) {
+                int ret;
+                if ((ret = ap_proxy_strcmp_ematch(&str[x++], &expected[y])) != 1)
+                    return ret;
+            }
+            return -1;
+        }
+        else if (expected[y] == '\\') {
+            /* NUL is an invalid char! */
+            if (!expected[++y])
+                return -2;
+        }
+        if (str[x] != expected[y])
+            return 1;
+    }
+    return (str[x] != '\0');
+}




Sorry, stray keystroke (tab?) made gmail send early.

This is breaking the common PHP-FPM recipes using unix domain sockets
in trunk e.g.

 ProxyPassMatch ^/info.php$
unix:/var/run/php5-fpm.sock|fcgi://localhost/home/covener/SRC/httpd-trunk/built/htdocs/

The old test accepted the worker URL being a prefix of the worker:

 strncmp(url_copy, worker->s->name, worker_name_length) == 0)

but now that doesn't happen for ProxyPassMatch.  This seems to be
due to the last return expecting str[x] to have been totally consumed
by the expected (worker) string.




Conflict discovered in file 'proxy/proxy_util.c'.

^ You were faster, thanks for fixing this issue :).

Regards,
Jan Kaluza



PR 53435, r101624, mod_ssl: error strings can't be loaded again once?

2014-11-11 Thread Jan Kaluža

Hi,

latest comment in PR 53435 shows that memory leak in mod_ssl which 
happens during graceful restarts can be caused by r101624. Since this 
commit is 11 years old, I wanted to ask people here, if following is 
still true with current OpenSSL:



@@ -255,7 +255,11 @@ static apr_status_t ssl_cleanup_pre_config(void *data)
 #endif
 #endif
     ERR_remove_state(0);
-    ERR_free_strings();
+
+    /* Don't call ERR_free_strings here; ERR_load_*_strings only
+     * actually load the error strings once per process due to static
+     * variable abuse in OpenSSL. */
+
     /*
      * TODO: determine somewhere we can safely shove out diagnostics
      * (when enabled) at this late stage in the game:


Last comment in PR 53435 showed that leaks disappeared after reverting 
this patch and it does not seem to break anything here.


Regards,
Jan Kaluza


Re: PR 53435, r101624, mod_ssl: error strings can't be loaded again once?

2014-11-11 Thread Jan Kaluža

On 11/12/2014 07:16 AM, Kaspar Brand wrote:

On 12.11.2014 03:28, Dr Stephen Henson wrote:

I just checked the sources and this was fixed in OpenSSL 0.9.7m just over 7
years ago...


For 0.9.8, it was fixed with 0.9.8e:

https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=900f7a87760d1053127976480efcd71371787d6e

I.e., given that for 2.4.x, we still support 0.9.8a (and for 2.2.x,
even 0.9.6 something), the ERR_free_strings() call should probably be
wrapped by a suitable #if (OPENSSL_VERSION_NUMBER >= 


Thanks for the info. I will do the patch and commit it to trunk and 
propose for 2.4.x.
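
Something like the following, presumably (a sketch; the exact version
constant for 0.9.8e is my assumption about the OpenSSL version encoding, and
the 0.9.7m case would need its own branch):

/* Only free the error strings where OpenSSL can load them again on a
 * graceful restart (broken before 0.9.7m / 0.9.8e per this thread). */
#if OPENSSL_VERSION_NUMBER >= 0x0090805fL   /* 0.9.8e, assumed encoding */
    ERR_free_strings();
#endif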



Kaspar



Regards,
Jan Kaluza



Re: svn commit: r1622450 - /httpd/httpd/trunk/support/ab.c

2014-11-03 Thread Jan Kaluža

On 11/02/2014 05:09 PM, Yann Ylavic wrote:

Hi,

On Thu, Sep 4, 2014 at 12:52 PM,  jkal...@apache.org wrote:

Author: jkaluza
Date: Thu Sep  4 10:52:24 2014
New Revision: 1622450

URL: http://svn.apache.org/r1622450
Log:
ab: increase request and response header size to 8192 bytes,
fix potential buffer-overflow in Server: header handling.

Modified:
 httpd/httpd/trunk/support/ab.c

Modified: httpd/httpd/trunk/support/ab.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/support/ab.c?rev=1622450&r1=1622449&r2=1622450&view=diff
==
--- httpd/httpd/trunk/support/ab.c (original)
+++ httpd/httpd/trunk/support/ab.c Thu Sep  4 10:52:24 2014

[snip]

@@ -1516,12 +1516,14 @@ static void read_connection(struct conne
                  * this is first time, extract some interesting info
                  */
                 char *p, *q;
+                size_t len = 0;
                 p = strstr(c->cbuff, "Server:");
                 q = servername;
                 if (p) {
                     p += 8;
-                    while (*p > 32)
-                        *q++ = *p++;
+                    /* -1 to not overwrite last '\0' byte */
+                    while (*p > 32 && len++ < sizeof(servername) - 1)


Maybe ++len above (instead of len++) since we need to leave room for
the final '\0' below?
Otherwise we may still overflow when writing it to
servername[sizeof(servername)]...


I think technically that code is OK. It writes sizeof(servername) - 1 
characters to servername and keeps the last byte for zero. It could be 
rewritten as ++len < sizeof(servername), but the result is the same 
and since gcc optimizes that, it even generates the same code.


Just to be really sure, I wrote following test code:

#include <stdio.h>
#include <stdlib.h>

#define BUFF_SIZE 10

int main(int argc, char **argv) {
    char *servername = malloc(BUFF_SIZE);
    char original[] = "Something_longer_than_10_bytes";
    char *p = original, *q = servername;
    size_t len = 0;
    while (*p > 32 && len++ < BUFF_SIZE - 1)
        *q++ = *p++;
    *q = 0;
    printf("'%s'\n", servername);
    return 0;
}

Running that in valgrind looks OK too.

$ gcc test.c
$ valgrind -q ./a.out
'Something'
$

Am I missing something?

Regards,
Jan Kaluza


+                        *q++ = *p++;
                 }
                 *q = 0;
             }



Regards,
Yann.





Fix DirectoryMatch to not match regular files?

2014-10-29 Thread Jan Kaluža

Hi,

I was trying to fix PR41867 using attached patch. While the patch seems 
to work, I'm thinking if the behaviour change introduced by the patch 
can bring some problems.


Currently, DirectoryMatch ^/var/www/html/private matches also 
/var/www/html/private.txt even it is a regular file and not a 
directory. With the patch, DirectoryMatch won't match private.txt in 
this case, because it's a file.


While I think this is expected behaviour of DirectoryMatch, I'm not sure 
if it's an acceptable change in the 2.4.x branch (or even trunk?). What do you 
think?
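
For context, the rough shape of the idea (the actual attachment isn't shown
here; the helper name and its placement in the directory walk are
hypothetical):

/* Sketch only: when the walked path resolves to a regular file, refuse to
 * apply a <DirectoryMatch> (regex) section to it. */
if (sec_is_directory_match              /* hypothetical flag for regex sections */
    && r->finfo.filetype != APR_NOFILE  /* stat info is available */
    && r->finfo.filetype != APR_DIR) {
    continue;   /* e.g. /var/www/html/private.txt is left alone */
}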


Regards,
Jan Kaluza


httpd-trunk-directorymatch-no-files-match.patch
Description: application/download


Re: Systemd support in 2.4

2014-09-17 Thread Jan Kaluža

On 09/15/2014 08:00 AM, Jan Kaluža wrote:

On 09/14/2014 01:21 PM, Martynas Bendorius wrote:

Hello,

Is there any special reason why mod_systemd and mod_journald (available
in trunk) are not backported to 2.4 yet?


Hi,

I think mod_systemd could be proposed for 2.4 branch (maybe even with
the changes adding socket activation), but for mod_journald, we would
have to backport modular logging, which breaks the API/ABI and
therefore I'm afraid that won't happen in 2.4 branch :(.


mod_systemd and socket activation proposed in r1625531.

Jan Kaluza


Jan Kaluza


As we have a lot of distributions already using systemd by default
(CentOS/RHEL 7, Fedora, Arch Linux, CoreOS, openSUSE), and more of them
are going to use systemd by default (Debian 8 (Jessie), Ubuntu), it
requires manual patching of apache for the support of systemd/journald.

Thank you!







Re: Systemd support in 2.4

2014-09-15 Thread Jan Kaluža

On 09/14/2014 01:21 PM, Martynas Bendorius wrote:

Hello,

Is there any special reason why mod_systemd and mod_journald (available
in trunk) are not backported to 2.4 yet?


Hi,

I think mod_systemd could be proposed for 2.4 branch (maybe even with 
the changes adding socket activation), but for mod_journald, we would 
have to backport modular logging, which breaks the API/ABI and 
therefore I'm afraid that won't happen in 2.4 branch :(.


Jan Kaluza


As we have a lot of distributions already using systemd by default
(CentOS/RHEL 7, Fedora, Arch Linux, CoreOS, openSUSE), and more of them
are going to use systemd by default (Debian 8 (Jessie), Ubuntu), it
requires manual patching of apache for the support of systemd/journald.

Thank you!





Re: Systemd support in 2.4

2014-09-14 Thread Jan Kaluža

On 09/14/2014 01:35 PM, Reindl Harald wrote:


Am 14.09.2014 um 13:21 schrieb Martynas Bendorius:

Is there any special reason why mod_systemd and mod_journald (available in 
trunk) are not backported to 2.4 yet?

As we have a lot of distributions already using systemd by default (CentOS/RHEL 
7, Fedora, Arch Linux, CoreOS,
openSUSE), and more of them are going to use systemd by default (Debian 8 
(Jessie), Ubuntu), it requires manual
patching of apache for the support of systemd/journald


in most setups you have access logs for each virtual host for log-analyzers
and sometimes the customer itself has access to this logs - not sure why
you would have that in the systemlogs



I think you can still have access_log per virtualhost with mod_journald, 
but from the performance reasons, I would not advice anyone to store 
access_log with mod_journald.


Jan Kaluza



Re: mod_proxy: PHP SCRIPT_FILENAME (PHP-FPM using UDS) and Apache documentation

2014-09-11 Thread Jan Kaluža

On 09/10/2014 07:17 PM, Jim Jagielski wrote:

I know that PHP is current doing a LOT of fixes on
hPHP-FPM...


I've recently come to https://bugs.php.net/bug.php?id=65641 and was 
thinking if we can do anything about it.


Jan Kaluza


On Sep 10, 2014, at 12:00 PM, Martynas Bendorius marty...@martynas.it wrote:


Hello,

http://httpd.apache.org/docs/current/mod/mod_proxy.html#handler breaks PHP-FPM 
SCRIPT_FILENAME. It contains double // at the beginning of it like:
SCRIPT_FILENAME: //home/admin/domains/testing.tld/public_html/test.php

While it should be:
SCRIPT_FILENAME: /home/admin/domains/testing.tld/public_html/test.php

Replacing localhost/ to just localhost fixes the problem (removing / from 
the end).

I mean:
SetHandler "proxy:unix:/path/to/app.sock|fcgi://localhost"

Instead of:
SetHandler "proxy:unix:/path/to/app.sock|fcgi://localhost/"

Should it be considered a typo in Apache documentation or a bug in the way 
PHP-FPM SAPI translates the path?

Thank you!

--
Best regards,
Martynas Bendorius







Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-08-26 Thread Jan Kaluža

On 08/19/2014 12:39 PM, Jan Kaluža wrote:

@@ -3206,6 +3277,10 @@ static int event_pre_config(apr_pool_t *
                      "atomics not working as expected - add32 of negative number");
         return HTTP_INTERNAL_SERVER_ERROR;
     }
+    retained->idle_spawn_rate = apr_palloc(pconf, sizeof(int) * num_buckets);
+    for (i = 0; i < num_buckets; i++) {
+        retained->idle_spawn_rate[i] = 1;
+    }
     rv = apr_pollset_create(&event_pollset, 1, plog,
                             APR_POLLSET_THREADSAFE | APR_POLLSET_NOCOPY);
     if (rv != APR_SUCCESS) {


Shouldn't this be done outside of the if (!one_process && !foreground)
condition? Otherwise in the case of -DFOREGROUND, it will crash because
retained->idle_spawn_rate is NULL.


Done in r1620569.

Jan Kaluza


Jan Kaluza





Re: svn commit: r1618555 - /httpd/httpd/trunk/server/mpm/prefork/prefork.c

2014-08-21 Thread Jan Kaluža

On 08/18/2014 04:31 PM, Jan Kaluža wrote:

On 08/18/2014 02:20 PM, Ruediger Pluem wrote:



jkal...@apache.org wrote:

Author: jkaluza
Date: Mon Aug 18 07:43:43 2014
New Revision: 1618555

URL: http://svn.apache.org/r1618555
Log:
prefork: Ignore SIGINT in child. This fixes race-condition in signals
handling
when httpd is running on foreground and user hits ctrl+c. In this
case, SIGINT
is sent to all children followed by SIGTERM from the main process, which
interrupts the SIGINT handler and leads to inconsistency (process
freezes
or crashes).

Modified:
 httpd/httpd/trunk/server/mpm/prefork/prefork.c



Don't we need to do this for the other MPM's as well?


I haven't tried to reproduce it with different MPMs, but it looks like
good idea. The code looks very similar when it comes to signal handling,
so the same problem can be there too. I will try to reproduce this bug
and commit fix also for other MPMs eventually.


I was not able to reproduce the bug with event or worker MPM. That does 
not mean the problem is not there, but I don't want to fix it blindly 
without seeing the actual bug.


Jan Kaluza


Jan Kaluza


Regards

Rüdiger







Re: svn commit: r1599531 - in /httpd/httpd/trunk: CHANGES include/ap_listen.h server/listen.c server/mpm/event/event.c server/mpm/prefork/prefork.c server/mpm/worker/worker.c server/mpm_unix.c

2014-08-19 Thread Jan Kaluža

@@ -3206,6 +3277,10 @@ static int event_pre_config(apr_pool_t *
                      "atomics not working as expected - add32 of negative number");
         return HTTP_INTERNAL_SERVER_ERROR;
     }
+    retained->idle_spawn_rate = apr_palloc(pconf, sizeof(int) * num_buckets);
+    for (i = 0; i < num_buckets; i++) {
+        retained->idle_spawn_rate[i] = 1;
+    }
     rv = apr_pollset_create(&event_pollset, 1, plog,
                             APR_POLLSET_THREADSAFE | APR_POLLSET_NOCOPY);
     if (rv != APR_SUCCESS) {


Shouldn't this be done outside of the if (!one_process && !foreground) 
condition? Otherwise in the case of -DFOREGROUND, it will crash because 
retained->idle_spawn_rate is NULL.


Jan Kaluza



Re: prefork mpm crashes on SIGINT [possible patch?]

2014-08-18 Thread Jan Kaluža

On 08/14/2014 04:22 PM, Jan Kaluža wrote:

Hi,

I'm resurrecting this old thread, because I hesitate to do changes in
signal handling without any ack of someone else.


Committed in r1618555.

Jan Kaluza



This bug is more problematic in the context of docker [1] where people
tend to run httpd -DFOREGROUND, because in docker, you should run
applications on foreground and docker will manage the application itself.

Note that from time to time, depending on luck, this is reproducible even
without any load on the server. Just httpd -DFOREGROUND and then ctrl+c.

[1] https://www.docker.com/

Is anyone against the patch I sent last year in previous email in this
thread below?

Regards,
Jan Kaluza

On 05/09/2013 09:03 AM, Jan Kaluža wrote:

I think I have proper solution for the crashes mentioned in previous
mail (see the attached patch):

1. Ignore SIGINT and SIGTERM in clean_child_exit. Handlers of these two
signals would call clean_child_exit again and there is no reason to
handle them once we are exiting.

2. Ignore SIGINT completely in children. I'm not sure I see valid
use-case in sending SIGINT to child process, but it's causing crashes
when running with -DFOREGROUND. See the comment in attached patch for
more info.

I was not able to reproduce the crash with this patch.
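
For reference, the child-side change boils down to something like this (a
sketch; the actual r1618555 diff is not quoted here):

/* In the child (prefork child_main()): the parent already stops children
 * with SIGTERM; a SIGINT delivered to the whole foreground process group
 * would otherwise interrupt that handler mid-cleanup. */
apr_signal(SIGINT, SIG_IGN);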

On 05/07/2013 08:37 AM, Jan Kaluža wrote:

Hi,

to reproduce it, I start httpd-2.4.4 like this:

/usr/sbin/httpd -k start -DFOREGROUND

and in another terminal I run ab like this:

ab -n 15 -c 4 http://127.0.0.1/index.php

Index.php is just dummy script with phpinfo.

When ab is benchmarking, I kill httpd using ctrl+c. In 4/10 tries it
crashes with following backtrace [1].
If gdb does not lie, there is race condition in signal handling. I'm not
sure if anything changed in this code between 2.2 and 2.4, but it looks
like 2.2 was handling this situation somehow better. I think this
situation can lead to various problems like deadlocks.

I've been able to workaround this using the attached patch, but I don't
think this is the best way how to fix this problem. It would be better
to find out why there are more signals called in a row as a result of
single SIGINT signal sent to main process.

I will try to do that myself, but I was thinking there could be people
more familiar with this part of code and with httpd's children lifecycle
reading this list.

[1]

#0  0x7fe74b7d2ba5 in raise () from /lib64/libc.so.6
#1  0x7fe74b7d4358 in abort () from /lib64/libc.so.6
#2  0x7fe74b81259b in __libc_message () from /lib64/libc.so.6
#3  0x7fe74b819a8e in _int_free () from /lib64/libc.so.6
#4  0x7fe74bf8e85c in allocator_free (node=0x7fe74bb4e798
<main_arena+88>, allocator=0x7fe74e7ab670) at
memory/unix/apr_pools.c:430
#5  apr_pool_destroy (pool=0x7fe74e7ab768) at
memory/unix/apr_pools.c:856
#6  0x7fe74308c1be in clean_child_exit (code=code@entry=0) at
prefork.c:218
#7  0x7fe74308c9cb in just_die (sig=<optimized out>) at
prefork.c:344
#8  <signal handler called>
#9  0x7fe74d06d541 in _dl_fini () from /lib64/ld-linux-x86-64.so.2
#10 0x7fe74b7d5df1 in __run_exit_handlers () from /lib64/libc.so.6
#11 0x7fe74b7d5e75 in exit () from /lib64/libc.so.6
#12 0x7fe74308c1db in clean_child_exit (code=code@entry=0) at
prefork.c:227
#13 0x7fe74308c5f5 in child_main
(child_num_arg=child_num_arg@entry=2) at prefork.c:725
#14 0x7fe74308c94c in make_child (s=0x7fe74e5a5368,
slot=slot@entry=2) at prefork.c:800
#15 0x7fe74308c9a6 in startup_children (number_to_start=3) at
prefork.c:818
#16 0x7fe74308d856 in prefork_run (_pconf=<optimized out>,
plog=0x7fe74e5d74f8, s=0x7fe74e5a5368) at prefork.c:976
#17 0x7fe74d2a5d4e in ap_run_mpm ()
#18 0x7fe74d29f3fa in main ()

Thanks for help,
Jan Kaluza


Regards,
Jan Kaluza






Re: svn commit: r1618579 - in /httpd/httpd/trunk: docs/log-message-tags/next-number docs/manual/mod/mod_systemd.xml modules/arch/unix/mod_systemd.c

2014-08-18 Thread Jan Kaluža

On 08/18/2014 02:23 PM, Ruediger Pluem wrote:



jkal...@apache.org wrote:

Author: jkaluza
Date: Mon Aug 18 10:48:41 2014
New Revision: 1618579

URL: http://svn.apache.org/r1618579
Log:
mod_systemd: Add IdleShutdown - number of seconds in idle-state after which
httpd is shutdown. This is useful in a combination with socket activation.
Add mod_systemd documentation.

Added:
 httpd/httpd/trunk/docs/manual/mod/mod_systemd.xml   (with props)
Modified:
 httpd/httpd/trunk/docs/log-message-tags/next-number
 httpd/httpd/trunk/modules/arch/unix/mod_systemd.c

Modified: httpd/httpd/trunk/docs/log-message-tags/next-number
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/docs/log-message-tags/next-number?rev=1618579&r1=1618578&r2=1618579&view=diff
==
--- httpd/httpd/trunk/docs/log-message-tags/next-number (original)
+++ httpd/httpd/trunk/docs/log-message-tags/next-number Mon Aug 18 10:48:41 2014
@@ -1 +1 @@
-2804
+2805

Added: httpd/httpd/trunk/docs/manual/mod/mod_systemd.xml
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/docs/manual/mod/mod_systemd.xml?rev=1618579&view=auto
==
Binary file - no diff available.

Propchange: httpd/httpd/trunk/docs/manual/mod/mod_systemd.xml
--
 svn:mime-type = application/xml



Why is this a binary file and no text file?


Hm, it seems that subversion treats XML as application/xml by default in 
Fedora. I've fixed that in r1618595.


Jan Kaluza


Regards

Rüdiger





Re: svn commit: r1618555 - /httpd/httpd/trunk/server/mpm/prefork/prefork.c

2014-08-18 Thread Jan Kaluža

On 08/18/2014 02:20 PM, Ruediger Pluem wrote:



jkal...@apache.org wrote:

Author: jkaluza
Date: Mon Aug 18 07:43:43 2014
New Revision: 1618555

URL: http://svn.apache.org/r1618555
Log:
prefork: Ignore SIGINT in child. This fixes race-condition in signals handling
when httpd is running on foreground and user hits ctrl+c. In this case, SIGINT
is sent to all children followed by SIGTERM from the main process, which
interrupts the SIGINT handler and leads to inconsistency (process freezes
or crashes).

Modified:
 httpd/httpd/trunk/server/mpm/prefork/prefork.c



Don't we need to do this for the other MPM's as well?


I haven't tried to reproduce it with different MPMs, but it looks like 
good idea. The code looks very similar when it comes to signal handling, 
so the same problem can be there too. I will try to reproduce this bug 
and commit fix also for other MPMs eventually.


Jan Kaluza


Regards

Rüdiger





Re: prefork mpm crashes on SIGINT [possible patch?]

2014-08-14 Thread Jan Kaluža

Hi,

I'm resurrecting this old thread, because I hesitate to do changes in 
signal handling without any ack of someone else.


This bug is more problematic in the context of docker [1] where people 
tend to run httpd -DFOREGROUND, because in docker, you should run 
applications on foreground and docker will manage the application itself.


Note that from time to time, depending on luck, this is reproducible even 
without any load on the server. Just httpd -DFOREGROUND and then ctrl+c.


[1] https://www.docker.com/

Is anyone against the patch I sent last year in previous email in this 
thread below?


Regards,
Jan Kaluza

On 05/09/2013 09:03 AM, Jan Kaluža wrote:

I think I have proper solution for the crashes mentioned in previous
mail (see the attached patch):

1. Ignore SIGINT and SIGTERM in clean_child_exit. Handlers of these two
signals would call clean_child_exit again and there is no reason to
handle them once we are exiting.

2. Ignore SIGINT completely in children. I'm not sure I see valid
use-case in sending SIGINT to child process, but it's causing crashes
when running with -DFOREGROUND. See the comment in attached patch for
more info.

I was not able to reproduce the crash with this patch.

On 05/07/2013 08:37 AM, Jan Kaluža wrote:

Hi,

to reproduce it, I start httpd-2.4.4 like this:

/usr/sbin/httpd -k start -DFOREGROUND

and in another terminal I run ab like this:

ab -n 15 -c 4 http://127.0.0.1/index.php

Index.php is just dummy script with phpinfo.

When ab is benchmarking, I kill httpd using ctrl+c. In 4/10 tries it
crashes with following backtrace [1].
If gdb does not lie, there is race condition in signal handling. I'm not
sure if anything changed in this code between 2.2 and 2.4, but it looks
like 2.2 was handling this situation somehow better. I think this
situation can lead to various problems like deadlocks.

I've been able to workaround this using the attached patch, but I don't
think this is the best way how to fix this problem. It would be better
to find out why there are more signals called in a row as a result of
single SIGINT signal sent to main process.

I will try to do that myself, but I was thinking there could be people
more familiar with this part of the code and with httpd's child lifecycle
reading this list.

[1]

#0  0x7fe74b7d2ba5 in raise () from /lib64/libc.so.6
#1  0x7fe74b7d4358 in abort () from /lib64/libc.so.6
#2  0x7fe74b81259b in __libc_message () from /lib64/libc.so.6
#3  0x7fe74b819a8e in _int_free () from /lib64/libc.so.6
#4  0x7fe74bf8e85c in allocator_free (node=0x7fe74bb4e798 <main_arena+88>, allocator=0x7fe74e7ab670) at memory/unix/apr_pools.c:430
#5  apr_pool_destroy (pool=0x7fe74e7ab768) at memory/unix/apr_pools.c:856
#6  0x7fe74308c1be in clean_child_exit (code=code@entry=0) at prefork.c:218
#7  0x7fe74308c9cb in just_die (sig=<optimized out>) at prefork.c:344
#8  <signal handler called>
#9  0x7fe74d06d541 in _dl_fini () from /lib64/ld-linux-x86-64.so.2
#10 0x7fe74b7d5df1 in __run_exit_handlers () from /lib64/libc.so.6
#11 0x7fe74b7d5e75 in exit () from /lib64/libc.so.6
#12 0x7fe74308c1db in clean_child_exit (code=code@entry=0) at prefork.c:227
#13 0x7fe74308c5f5 in child_main (child_num_arg=child_num_arg@entry=2) at prefork.c:725
#14 0x7fe74308c94c in make_child (s=0x7fe74e5a5368, slot=slot@entry=2) at prefork.c:800
#15 0x7fe74308c9a6 in startup_children (number_to_start=3) at prefork.c:818
#16 0x7fe74308d856 in prefork_run (_pconf=<optimized out>, plog=0x7fe74e5d74f8, s=0x7fe74e5a5368) at prefork.c:976
#17 0x7fe74d2a5d4e in ap_run_mpm ()
#18 0x7fe74d29f3fa in main ()

Thanks for help,
Jan Kaluza


Regards,
Jan Kaluza




Re: [VOTE] Release Apache httpd 2.4.10 as GA

2014-07-17 Thread Jan Kaluža

On 07/15/2014 07:20 PM, Jim Jagielski wrote:

The pre-release test tarballs for Apache httpd 2.4.10 can be found
at the usual place:

http://httpd.apache.org/dev/dist/

I'm calling a VOTE on releasing these as Apache httpd 2.4.10 GA.

[ ] +1: Good to go
[ ] +0: meh
[ ] -1: Danger Will Robinson. And why.

Vote will last the normal 72 hrs.

NOTE: The *-deps are only there for convenience.



+1, it works correctly for me on Fedora 20.

Jan Kaluza



Re: svn commit: r1610339 - in /httpd/httpd/trunk: docs/manual/mod/mod_journald.xml modules/loggers/config.m4 modules/loggers/mod_journald.c

2014-07-14 Thread Jan Kaluža

On 07/14/2014 09:52 AM, Ruediger Pluem wrote:



jkal...@apache.org wrote:

Author: jkaluza
Date: Mon Jul 14 05:52:45 2014
New Revision: 1610339

URL: http://svn.apache.org/r1610339
Log:
mod_journald: New module implementing error_log provider for systemd-journald.

Added:
 httpd/httpd/trunk/docs/manual/mod/mod_journald.xml   (with props)
 httpd/httpd/trunk/modules/loggers/mod_journald.c
Modified:
 httpd/httpd/trunk/modules/loggers/config.m4




Added: httpd/httpd/trunk/modules/loggers/mod_journald.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/loggers/mod_journald.c?rev=1610339&view=auto
==
--- httpd/httpd/trunk/modules/loggers/mod_journald.c (added)
+++ httpd/httpd/trunk/modules/loggers/mod_journald.c Mon Jul 14 05:52:45 2014



+static apr_status_t journald_log_writer(request_rec *r,
+   void *handle,
+   const char **strs,
+   int *strl,
+   int nelts,
+   apr_size_t len)
+
+{
+char *str;
+char *s;
+int i;
+apr_status_t rv = APR_SUCCESS;
+
+str = apr_palloc(r->pool, len + 1);


Why +1?


That's taken from ap_default_log_writer(...) and it's also in 
ap_buffered_log_writer(...). Thinking about it now, it's probably 
useless, because len is the sum of strlen() of each string in strs, so it 
does not include the '\0', but my log_writer (and also 
ap_default_log_writer/ap_buffered_log_writer) does not actually use/set 
that last zero byte.


I think we can remove that len + 1 in all three cases then?
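
For illustration, a sketch of what such a writer ends up doing with 
strs/strl/len (hypothetical code, not the committed mod_journald writer; 
the journal field name is made up). It also shows the only situation where 
the extra byte matters: when the joined buffer is handed to a %s-style API.

#include <string.h>
#include <systemd/sd-journal.h>
#include "httpd.h"

/* Join the pre-formatted log pieces into one buffer. 'len' is the sum of
 * strlen() of all pieces; the +1 and the explicit terminator are needed
 * here only because the result is passed to sd_journal_send() as a C
 * string -- the core writers never touch that last byte. */
static apr_status_t journald_writer_sketch(request_rec *r, void *handle,
                                           const char **strs, int *strl,
                                           int nelts, apr_size_t len)
{
    char *str = apr_palloc(r->pool, len + 1);
    char *s = str;
    int i;

    (void)handle;  /* unused in this sketch */

    for (i = 0; i < nelts; ++i) {
        memcpy(s, strs[i], strl[i]);
        s += strl[i];
    }
    *s = '\0';

    sd_journal_send("MESSAGE=%s", str,
                    "REQUEST_HOSTNAME=%s", r->hostname ? r->hostname : "-",
                    NULL);
    return APR_SUCCESS;
}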

Jan Kaluza


Regards

Rüdiger





Re: [PATCH] Make error logging modular

2014-07-11 Thread Jan Kaluža

On 09/18/2013 02:12 PM, Jim Jagielski wrote:

+1!
On Sep 18, 2013, at 8:09 AM, Eric Covener cove...@gmail.com wrote:


On Wed, Sep 18, 2013 at 8:01 AM, Jan Kaluža jkal...@redhat.com wrote:

On 07/22/2013 08:02 AM, Jan Kaluza wrote:


- Original Message -


Hello Jan,

Is there any reason we shouldn't do this in trunk?



I don't see any reason. This patch was intended for trunk, but I don't
have svn commit access, so I'm sending patches to this list :). It's also
better that someone reviews my code, because I don't have that much
experience with httpd development.



If there's nobody against this change, I will commit the first two patches
(+ documentation) to trunk at the end of the week. I think we should hold off
on mod_journald a bit until journald's performance gets better, but if you
think it would be useful to have mod_journald in trunk too, let me know.



+1, and I don't think mod_journald would hurt.





Hi,

I've updated mod_journald to latest trunk and added documentation. You 
can check the patch against trunk at 
http://people.apache.org/~jkaluza/patches/mod_journald/0001-mod_journald.patch.


I'm going to commit it to trunk next week.

Regards,
Jan Kaluza



Re: [PATCH] Fix settings options with ProxyPassMatch

2014-07-11 Thread Jan Kaluža

On 07/10/2014 03:57 PM, Yann Ylavic wrote:

On Thu, Jul 10, 2014 at 9:12 AM, Jan Kaluža jkal...@redhat.com wrote:

On 07/09/2014 04:26 PM, Yann Ylavic wrote:

I forgot proxysection(), why not handle the
ap_proxy_define_match_worker() case there too?



I'm not sure I see what you mean. There's a match_worker created in
proxysection(), or do you mean something else?


Indeed I missed the code in proxysection()...

That's +1 for me.



Committed in r1609680.

Jan Kaluza



Re: [PATCH] Make error logging modular

2014-07-11 Thread Jan Kaluža

On 07/11/2014 12:53 PM, Yann Ylavic wrote:

Hi Jan,

On Fri, Jul 11, 2014 at 11:17 AM, Jan Kaluža jkal...@redhat.com wrote:

I've updated mod_journald to latest trunk and added documentation. You can
check the patch against trunk at
http://people.apache.org/~jkaluza/patches/mod_journald/0001-mod_journald.patch.


+static apr_pool_t *journald_info_get_pool(const ap_errorlog_info *info)
+{
+if (info->s && info->s->process && info->s->process->pool)
+return info->s->process->pool;
+if (info->pool)
+return info->pool;
+if (info->c && info->c->pool)
+return info->c->pool;
+if (info->r && info->r->pool)
+return info->r->pool;
+return 0;
+}

Shouldn't this be in the reverse order?


Hm, I think my original intention here was to try first the pools 
which have the biggest chance of being available. Since I'm creating a 
subpool of the pool returned by this method, it should be OK to use even 
s->process->pool. Maybe it's useless to try to optimize things...


Jan Kaluza


Re: svn commit: r1609680 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h proxy_util.c

2014-07-11 Thread Jan Kaluža

On 07/11/2014 12:59 PM, Yann Ylavic wrote:

On Fri, Jul 11, 2014 at 12:36 PM,  jkal...@apache.org wrote:

Author: jkaluza
Date: Fri Jul 11 10:36:15 2014
New Revision: 1609680

URL: http://svn.apache.org/r1609680
Log:
mod_proxy: add ap_proxy_define_match_worker() and use it for ProxyPassMatch
and ProxyMatch section to distinguish between normal workers and workers
with regex substitutions in the name. Implement handling of such workers
in ap_proxy_get_worker(). PR 43513


[...]

--- httpd/httpd/trunk/modules/proxy/mod_proxy.c (original)
+++ httpd/httpd/trunk/modules/proxy/mod_proxy.c Fri Jul 11 10:36:15 2014
@@ -1647,15 +1647,30 @@ static const char *
  new->balancer = balancer;
  }
  else {
-proxy_worker *worker = ap_proxy_get_worker(cmd->temp_pool, NULL, conf,
de_socketfy(cmd->pool, r));
+proxy_worker *worker = ap_proxy_get_worker(cmd->temp_pool, NULL, conf,
new->real);
  int reuse = 0;
  if (!worker) {
-const char *err = ap_proxy_define_worker(cmd->pool, &worker, NULL,
conf, r, 0);
+const char *err;
+if (use_regex) {
+err = ap_proxy_define_match_worker(cmd->pool, &worker, NULL,
+   conf, r, 0);
+}
+else {
+err = ap_proxy_define_worker(cmd->pool, &worker, NULL,
+ conf, r, 0);
+}
  if (err)
  return apr_pstrcat(cmd->temp_pool, "ProxyPass ", err, NULL);

  PROXY_COPY_CONF_PARAMS(worker, conf);
-} else {
+}
+else if ((use_regex != 0) ^ (worker->s->is_name_matchable)) {


Maybe (worker->s->is_name_matchable != 0)?


Done in r1609688. Thanks.


+return apr_pstrcat(cmd->temp_pool, "ProxyPass/<Proxy> and "
+   "ProxyPassMatch/<ProxyMatch> can't be used "
+   "altogether with the same worker name ",
+   "(", worker->s->name, ")", NULL);
+}
+else {
  reuse = 1;
  ap_log_error(APLOG_MARK, APLOG_INFO, 0, cmd->server,
APLOGNO(01145)
   "Sharing worker '%s' instead of creating new worker '%s'",

[...]

@@ -2354,12 +2371,24 @@ static const char *proxysection(cmd_parm
  worker = ap_proxy_get_worker(cmd->temp_pool, NULL, sconf,
   de_socketfy(cmd->temp_pool,
(char*)conf->p));
  if (!worker) {
-err = ap_proxy_define_worker(cmd->pool, &worker, NULL,
-  sconf, conf->p, 0);
+if (use_regex) {
+err = ap_proxy_define_match_worker(cmd->pool, &worker,
NULL,
+   sconf, conf->p, 0);
+}
+else {
+err = ap_proxy_define_worker(cmd->pool, &worker, NULL,
+ sconf, conf->p, 0);
+}
  if (err)
  return apr_pstrcat(cmd->temp_pool, thiscmd->name,
  " ", err, NULL);
  }
+else if ((use_regex != 0) ^ (worker->s->is_name_matchable)) {


Likewise?


+return apr_pstrcat(cmd->temp_pool, "ProxyPass/<Proxy> and "
+   "ProxyPassMatch/<ProxyMatch> can't be used "
+   "altogether with the same worker name ",
+   "(", worker->s->name, ")", NULL);
+}




Re: svn commit: r1609680 - in /httpd/httpd/trunk/modules/proxy: mod_proxy.c mod_proxy.h proxy_util.c

2014-07-11 Thread Jan Kaluža

On 07/11/2014 01:38 PM, Jim Jagielski wrote:


On Jul 11, 2014, at 6:36 AM, jkal...@apache.org wrote:


Author: jkaluza
Date: Fri Jul 11 10:36:15 2014
New Revision: 1609680

URL: http://svn.apache.org/r1609680
Log:
mod_proxy: add ap_proxy_define_match_worker() and use it for ProxyPassMatch
and ProxyMatch section to distinguish between normal workers and workers
with regex substitutions in the name. Implement handling of such workers
in ap_proxy_get_worker(). PR 43513

Modified:
httpd/httpd/trunk/modules/proxy/mod_proxy.c
httpd/httpd/trunk/modules/proxy/mod_proxy.h
httpd/httpd/trunk/modules/proxy/proxy_util.c

Modified: httpd/httpd/trunk/modules/proxy/mod_proxy.c
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/mod_proxy.c?rev=1609680&r1=1609679&r2=1609680&view=diff
==
--- httpd/httpd/trunk/modules/proxy/mod_proxy.c (original)
+++ httpd/httpd/trunk/modules/proxy/mod_proxy.c Fri Jul 11 10:36:15 2014
@@ -1647,15 +1647,30 @@ static const char *
 new->balancer = balancer;
 }
 else {
-proxy_worker *worker = ap_proxy_get_worker(cmd->temp_pool, NULL, conf,
de_socketfy(cmd->pool, r));
+proxy_worker *worker = ap_proxy_get_worker(cmd->temp_pool, NULL, conf,
new->real);


Why do we no longer de_socketfy? Looks like this breaks UDS.


This should be ok, because new->real is already de_socketfy()-ed (see 
line 1601)


http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/mod_proxy.c?revision=1609688&view=markup#l1601


 int reuse = 0;
 if (!worker) {
-const char *err = ap_proxy_define_worker(cmd->pool, &worker, NULL,
conf, r, 0);
+const char *err;
+if (use_regex) {
+err = ap_proxy_define_match_worker(cmd->pool, &worker, NULL,
+   conf, r, 0);
+}
+else {
+err = ap_proxy_define_worker(cmd->pool, &worker, NULL,
+ conf, r, 0);
+}
 if (err)
 return apr_pstrcat(cmd->temp_pool, "ProxyPass ", err, NULL);

 PROXY_COPY_CONF_PARAMS(worker, conf);
-} else {
+}
+else if ((use_regex != 0) ^ (worker->s->is_name_matchable)) {


Minor nit: I always disliked using bitwise xor for boolean xor :/


That's from Yann's part of the patch, but I can change it if needed.


We need an mmn bump.



Will do it in the next commit.

Regards,
Jan Kaluza



Re: [PATCH] Make error logging modular

2014-07-11 Thread Jan Kaluža

On 07/11/2014 01:23 PM, Yann Ylavic wrote:

On Fri, Jul 11, 2014 at 1:07 PM, Jan Kaluža jkal...@redhat.com wrote:

On 07/11/2014 12:53 PM, Yann Ylavic wrote:


Hi Jan,

On Fri, Jul 11, 2014 at 11:17 AM, Jan Kaluža jkal...@redhat.com wrote:


I've updated mod_journald to latest trunk and added documentation. You
can
check the patch against trunk at

http://people.apache.org/~jkaluza/patches/mod_journald/0001-mod_journald.patch.



+static apr_pool_t *journald_info_get_pool(const ap_errorlog_info *info)
+{
+if (info->s && info->s->process && info->s->process->pool)
+return info->s->process->pool;
+if (info->pool)
+return info->pool;
+if (info->c && info->c->pool)
+return info->c->pool;
+if (info->r && info->r->pool)
+return info->r->pool;
+return 0;
+}

Shouldn't this be in the reverse order?



Hm, I think my original intention here was to try first the pools which
have the biggest chance of being available. Since I'm creating a subpool of the
pool returned by this method, it should be OK to use even s->process->pool.
Maybe it's useless to try to optimize things...


When a subpool is created, a lock is acquired/released on the parent
pool, so maybe for threaded MPMs here we'd better not hold it process
wide for every log?



Didn't know that. I've updated the patch and the order is now reversed. 
I'm thinking about not creating a subpool when r->pool is 
available, but I'm not sure the more complicated code is worth it 
(not sure whether the performance penalty of creating a subpool of r->pool 
is big enough). I will do some measurements with CustomLog.
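
Roughly, the reversed order looks like this (a sketch of the updated 
journald_info_get_pool(), assuming the same ap_errorlog_info fields as in 
the snippet above, not the final patch):

#include "httpd.h"
#include "http_core.h"   /* ap_errorlog_info */

/* Prefer the shortest-lived pool that is available for this log entry,
 * so that creating a subpool does not take the lock of the process-wide
 * pool, which is shared by all threads in threaded MPMs. */
static apr_pool_t *journald_info_get_pool(const ap_errorlog_info *info)
{
    if (info->r && info->r->pool)
        return info->r->pool;
    if (info->c && info->c->pool)
        return info->c->pool;
    if (info->pool)
        return info->pool;
    if (info->s && info->s->process && info->s->process->pool)
        return info->s->process->pool;
    return NULL;
}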




Re: svn commit: r1609709 - /httpd/httpd/trunk/include/ap_mmn.h

2014-07-11 Thread Jan Kaluža

On 07/11/2014 03:54 PM, Jim Jagielski wrote:


On Jul 11, 2014, at 9:02 AM, Ruediger Pluem rpl...@apache.org wrote:




jkal...@apache.org wrote:

Author: jkaluza
Date: Fri Jul 11 12:49:54 2014
New Revision: 1609709

URL: http://svn.apache.org/r1609709
Log:
bump mmn because of r1609680

Modified:
httpd/httpd/trunk/include/ap_mmn.h

Modified: httpd/httpd/trunk/include/ap_mmn.h
URL: 
http://svn.apache.org/viewvc/httpd/httpd/trunk/include/ap_mmn.h?rev=1609709&r1=1609708&r2=1609709&view=diff
==
--- httpd/httpd/trunk/include/ap_mmn.h (original)
+++ httpd/httpd/trunk/include/ap_mmn.h Fri Jul 11 12:49:54 2014
@@ -464,6 +464,8 @@
  * 20140611.1 (2.5.0-dev)  Add ap_proxy_connect_uds().
  * 20140627.0 (2.5.0-dev)  Revert 20140611.0 change.
  * 20140627.1 (2.5.0-dev)  add last_backend_conn to util_ldap_connection_t
+ * 20140627.2 (2.5.0-dev)  Added is_name_matchable to proxy_worker_shared.


In this case I think this is no issue, but wouldn't this require a major bump 
once it causes ALIGNED_PROXY_WORKER_SHARED_SIZE to increase?



Historically, every time we change proxy_worker_shared, we bump mmn
(see previous bumps).

Plus, we also added ap_proxy_define_match_worker() as well,
which is part of the bump (and should be part of the comment
as well, now that I think of it).



It is part of the comment, but on the second line, which is not pasted 
above :).


Jan Kaluza



Re: [PATCH] Fix settings options with ProxyPassMatch

2014-07-10 Thread Jan Kaluža

On 07/09/2014 04:26 PM, Yann Ylavic wrote:

On Wed, Jul 9, 2014 at 4:14 PM, Yann Ylavic ylavic@gmail.com wrote:

On Wed, Jul 9, 2014 at 3:03 PM, Jan Kaluža jkal...@redhat.com wrote:

Hi,

could you please check the patch I've attached to this email?


Looks good to me.



It changes the following parts of Yann's patch:

1. keep only a single name of the worker stored in shared memory.

2. when ProxyPassMatch is used, wshared->is_name_matchable = 1.

3. if is_name_matchable == 1, ap_proxy_get_worker() uses
ap_proxy_strcmp_ematch() which treats $N as '*'.


Much simpler, no need to store the pattern.

Would it be possible to handle some escaping character (eg. \), so
that $ can be expressed without being interpolated?
(There is still the comment in ap_proxy_strcmp_ematch(), but not the
code anymore).
AFAICT, $ is a legitimate URL character that need not be %-escaped.


Done in attached patch.


I forgot proxysection(), why not handle the
ap_proxy_define_match_worker() case there too?



I'm not sure I see what you mean. There's a match_worker created in 
proxysection(), or do you mean something else?


Jan Kaluza



httpd-trunk-proxy-matchv3.patch
Description: application/download


Re: [PATCH] Fix settings options with ProxyPassMatch

2014-07-09 Thread Jan Kaluža

On 04/29/2014 03:51 PM, Jim Jagielski wrote:


On Apr 29, 2014, at 8:41 AM, Jan Kaluža jkal...@redhat.com wrote:


Because later we have to match the URL of request with some proxy_worker.

If you configure ProxyPassMatch like this:
ProxyPassMatch ^/test/(\d+)/foo.jpg http://x/$1/foo.jpg

Then the proxy_worker name would be "http://x/$1/foo.jpg".

If you receive a request with URL "http://x/something/foo.jpg", ap_proxy_get_worker() will 
have to find the worker with name "http://x/$1/foo.jpg". The question here is how it 
would do that?

The answer used in the patch is that we change the worker name to "http://x/*/foo.jpg" and 
check if the URL ("http://x/something/foo.jpg" in our case) matches that worker.

If we store the original name with $N, we will have to find a different way 
to match the worker (probably emulating wildcard pattern matching).

It would be possible to store only the original name (with the $N variables), store a flag that the 
proxy worker is using a regex and change the ap_proxy_strcmp_ematch() function to treat $N as 
"*", but I don't see any real advantage here.



In Yann's suggested patch we don't store match_name where it
belongs; so we'd need to put it in shm, which means more
memory. Instead, we store as is and add a simple char flag
which sez if the stored name is a regex. Much savings.


Hi,

could you please check the patch I've attached to this email?

It changes the following parts of Yann's patch:

1. keep only a single name of the worker stored in shared memory.

2. when ProxyPassMatch is used, wshared->is_name_matchable = 1.

3. if is_name_matchable == 1, ap_proxy_get_worker() uses 
ap_proxy_strcmp_ematch() which treats $N as '*'.


I've tested this patch a bit and it seems to work for me.
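
To illustrate point 3 only, here is a sketch of the matching idea (not the 
real ap_proxy_strcmp_ematch() from the patch, which also deals with 
escaping): it returns 0 when the URL matches the stored worker name, with 
'*' standing in for the substituted $N parts.

/* Sketch: 0 = match, 1 = no match, like the ap_strcmp_match() family.
 * '*' in 'pattern' (the worker name with $N replaced) matches any run
 * of characters in 'url'. */
static int ematch_sketch(const char *url, const char *pattern)
{
    while (*pattern) {
        if (*pattern == '*') {
            const char *rest = pattern + 1;
            if (!*rest)
                return 0;                /* trailing '*' matches the rest */
            for (; *url; ++url)          /* try every possible split      */
                if (ematch_sketch(url, rest) == 0)
                    return 0;
            return 1;
        }
        if (*url != *pattern)
            return 1;
        ++url;
        ++pattern;
    }
    return *url ? 1 : 0;
}

For example, ematch_sketch("http://x/something/foo.jpg", "http://x/*/foo.jpg") 
returns 0, which a plain string comparison against the stored name never could.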

Regards,
Jan Kaluza


And I have no idea why storing with $1 -> * somehow makes
things easier or implies a different way to match the worker.

Finally, let's think about this deeper...

Assume we do have

ProxyPassMatch ^/test/(\d+)/foo.jpg http://x/$1/foo.jpg
ProxyPassMatch ^/zippy/(\d+)/bar.jpg http://x/$1/omar/propjoe.gif

is the intent/desire to have 2 workers or 1? A worker is, in
some ways, simply a nickname for the socket related to a host and port.
Maybe, in the interests of efficiency and speed, since regexes
are slow as it is, a condition could be specified (a limitation,
as it were), that when using PPM, only everything up to
the 1st potential substitution is considered a unique worker.





httpd-trunk-proxy-matchv2.patch
Description: application/download


Re: svn commit: r1608703 - /httpd/httpd/trunk/server/listen.c

2014-07-08 Thread Jan Kaluža

On 07/08/2014 02:00 PM, Yann Ylavic wrote:

On Tue, Jul 8, 2014 at 11:42 AM,  jkal...@apache.org wrote:


@@ -279,8 +279,35 @@ static apr_status_t close_listeners_on_e

  #ifdef HAVE_SYSTEMD

+static int find_systemd_socket(process_rec * process, apr_port_t port) {
+int fdcount, fd;
+int sdc = sd_listen_fds(0);
+
+if (sdc < 0) {
+ap_log_perror(APLOG_MARK, APLOG_CRIT, sdc, process->pool,
APLOGNO(02486)
+  "find_systemd_socket: Error parsing enviroment, sd_listen_fds "
"returned %d",
+  sdc);
+return 1;
+}
+
+if (sdc == 0) {
+ap_log_perror(APLOG_MARK, APLOG_CRIT, sdc, process->pool,
APLOGNO(02487)
+  "find_systemd_socket: At least one socket must be set.");
+return 1;
+}


Shouldn't these returns be -1?



Yes, good catch, thanks. Fixed in r1608744.
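
For reference, a simplified sketch of the lookup with the corrected error 
handling (not the committed listen.c code; logging omitted, and the function 
name is only illustrative):

#include <sys/socket.h>          /* AF_UNSPEC */
#include <systemd/sd-daemon.h>   /* sd_listen_fds(), sd_is_socket_inet() */
#include "apr_network_io.h"      /* apr_port_t */

/* Walk the sockets systemd passed to us (fds start at SD_LISTEN_FDS_START)
 * and return the one listening on 'port', or -1 on error / no match. */
static int find_systemd_socket_sketch(apr_port_t port)
{
    int sdc = sd_listen_fds(0);
    int fd;

    if (sdc <= 0)
        return -1;               /* error, or no sockets were passed */

    for (fd = SD_LISTEN_FDS_START; fd < SD_LISTEN_FDS_START + sdc; ++fd) {
        /* AF_UNSPEC/0 = any family and type, 1 = must be listening */
        if (sd_is_socket_inet(fd, AF_UNSPEC, 0, 1, port) > 0)
            return fd;
    }
    return -1;
}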

Jan Kaluza



Re: [PATCH] Fix settings options with ProxyPassMatch

2014-04-29 Thread Jan Kaluža

On 04/29/2014 01:04 PM, Jim Jagielski wrote:


On Apr 24, 2014, at 8:57 PM, Yann Ylavic ylavic@gmail.com wrote:


Hi Jan,

sorry for the late.

On Tue, Apr 22, 2014 at 3:39 PM, Jan Kaluža jkal...@redhat.com wrote:

Hi again,

the patch has been here for some time already. I hesitate to commit it to
trunk without any review, because it changes the core code in mod_proxy and
I'm afraid that there could exist more corner-cases I'm not aware of.

On the other side (and to motivate someone with deeper knowledge of
mod_proxy :) ), it would be great to have UDS support also for
ProxyPassMatch and fix some old bugs related to setting options together
with ProxyPassMatch (like PR 43513 and PR 54973).

So, anyone who would have time to review this patch, please? :)


I slightly modified your patch with the following changes :
1. Use \-escaping instead of %-escaping for wildcard_match so to
consume less space and unescape quickly.


Wouldn't this cause confusion/issue with normal escaping?


2. Fix the recursion in ap_proxy_wildcard_match(), which was calling
the legacy ap_strcmp_match(), and unescape according to 1.


+1


2'. Shouldn't this function be made iterative (it is not final
recursive currently)?


+1


3. Always try to get the worker with its de_socketfy()ed name, and
only if not found use ap_proxy_define[_wildcard]_worker() with the
configured URL (add_pass() and proxysection() were not consistent
about ap_proxy_get_worker()'s given name/URL).


Makes sense.


4. Don't match both the normal name with strcnmp() and the wildcard
name with strcmp_match() in ap_proxy_get_worker(), since the worker is
either wildcard or not (this enforcement is also added to add_pass()
and proxysection() so that it is not possible to declare the same
worker name once with a Match, again without, or vice versa).
4'. Does it make sense to use the longest match_name or should the
first matching wildcard worker (if any) win?


Longest match name, imo.


5. Rename ap_proxy_wildcard_match() as ap_proxy_strcmp_ematch() to
suggest it could go to server/util.ch (if/when the associated
ap_matchexp_escape() and ap_proxy_strcasecmp_ematch() are coded).


Why?


6. Move proxy_worker_shared { char
wildcard_name[PROXY_WORKER_MAX_NAME_SIZE]; } to proxy_worker { char
*match_name; } to save shm space and avoid length issues (I don't see
how this could be updated at runtime, for now, balancer-manager can't
declare Match workers).


Future effort, so -1 on the moving.


6. Keep ap_proxy_escape_wildcard_url() static (private) since it is
not used outside proxy_util (now).


Sounds good.


This is not very tested (yet), I'll come back if I detect issues, but
first I'd like your feedbacks on this modifications...

Patch attached.



Just a comment: I am really concerned that with such a major change,
we will likely break things in ways we are unaware of... We should
focus on fixing the actual problem w/o also combining it with
other fixes that are tangentially involved. For example, maybe
a simply flag when creating the worker that sez worker name is a regex
is the way to approach it, and thus ap_proxy_get_worker() could
add additional checks for finding the right worker when it
knows it needs to allow for regex substitutions. That creates
a additional logic path which is self-contained and isolated, instead
of monkeying around with changing current paths...



That's what we do with the current patch, I think, don't we? In the patch, we 
create a char *match_name which is NULL when the worker_name is not a 
regex and contains the escaped name if a regex is used (with $N replaced 
by '*').


ap_proxy_get_worker() later checks the match_name and if it's not NULL, 
it tries regex matching using ap_proxy_strcmp_ematch().


The actual problem is that ap_proxy_get_worker() cannot find the proper 
proxy_worker, because this method is not regex aware. To make it regex 
aware, we have to distinguish regex workers during creation, replace the 
regex variables ($N -> *) in some safe manner and allow 
ap_proxy_get_worker() to find the right worker using this new info. And 
that's what the patch does.
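
Conceptually (a sketch, not the actual ap_proxy_get_worker() code, and 
assuming the ap_proxy_strcmp_ematch() and is_name_matchable field added by 
the patch), the lookup just picks the comparison based on that flag:

#include <string.h>
#include "mod_proxy.h"

/* Sketch: workers defined via ProxyPassMatch/<ProxyMatch> carry
 * is_name_matchable = 1 and their stored name already has $N replaced
 * by '*', so they are compared with the ematch routine; plain workers
 * keep the usual prefix comparison. */
static int worker_matches_sketch(const proxy_worker *worker, const char *url)
{
    if (worker->s->is_name_matchable)
        return ap_proxy_strcmp_ematch(url, worker->s->name) == 0;

    return strncmp(url, worker->s->name, strlen(worker->s->name)) == 0;
}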


Regards,
Jan Kaluza



Re: [PATCH] Fix settings options with ProxyPassMatch

2014-04-29 Thread Jan Kaluža

On 04/29/2014 02:22 PM, Jim Jagielski wrote:


On Apr 29, 2014, at 7:41 AM, Jan Kaluža jkal...@redhat.com wrote:


That's what we do with the current patch, I think, don't we? In the patch, we create a char 
*match_name which is NULL when the worker_name is not a regex and contains the escaped name if 
a regex is used (with $N replaced by '*').

ap_proxy_get_worker() later checks the match_name and if it's not NULL, it 
tries regex matching using ap_proxy_strcmp_ematch().

The actual problem is that ap_proxy_get_worker() cannot find the proper 
proxy_worker, because this method is not regex aware. To make it regex aware, 
we have to distinguish regex workers during creation,


right, hence the flag.


replace the regex variables ($N -> *) in some safe manner and allow 
ap_proxy_get_worker() to find the right worker using this new info. And that's 
what the patch does.


Why replace? Why can't we store the name as is?


Because later we have to match the URL of request with some proxy_worker.

If you configure ProxyPassMatch like this:
ProxyPassMatch ^/test/(\d+)/foo.jpg http://x/$1/foo.jpg

Then the proxy_worker name would be "http://x/$1/foo.jpg".

If you receive a request with URL "http://x/something/foo.jpg", 
ap_proxy_get_worker() will have to find the worker with name 
"http://x/$1/foo.jpg". The question here is how it would do that?


The answer used in the patch is that we change the worker name to 
"http://x/*/foo.jpg" and check if the URL ("http://x/something/foo.jpg" 
in our case) matches that worker.


If we store the original name with $N, we will have to find a 
different way to match the worker (probably emulating wildcard 
pattern matching).


It would be possible to store only the original name (with the $N 
variables), store a flag that the proxy worker is using a regex and 
change the ap_proxy_strcmp_ematch() function to treat $N as "*", but I 
don't see any real advantage here.



The version I looked at had some 'use_regex' logic outside of
ap_proxy_get_worker(), right before we call it, in fact. Maybe
I'm seeing an older version of the patch?



I think that's the correct patch. It has use_regex only for creating the 
proxy_worker, to distinguish between a normal worker and a regex worker.


Note that use_regex does not influence the call of 
ap_proxy_get_worker(). As I said, it's used only to distinguish 
whether the newly created worker is a normal worker or a regex worker.


Regards,
Jan Kaluza



Re: [PATCH] Fix settings options with ProxyPassMatch

2014-04-29 Thread Jan Kaluža

On 04/29/2014 03:29 PM, Jim Jagielski wrote:


On Apr 29, 2014, at 8:41 AM, Jan Kaluža jkal...@redhat.com wrote:


Because later we have to match the URL of request with some proxy_worker.

If you configure ProxyPassMatch like this:
ProxyPassMatch ^/test/(\d+)/foo.jpg http://x/$1/foo.jpg

Then the proxy_worker name would be "http://x/$1/foo.jpg".

If you receive a request with URL "http://x/something/foo.jpg", ap_proxy_get_worker() will 
have to find the worker with name "http://x/$1/foo.jpg". The question here is how it 
would do that?

The answer used in the patch is that we change the worker name to "http://x/*/foo.jpg" and 
check if the URL ("http://x/something/foo.jpg" in our case) matches that worker.

If we store the original name with $N, we will have to find a different way 
to match the worker (probably emulating wildcard pattern matching).

It would be possible to store only the original name (with the $N variables), store a flag that the 
proxy worker is using a regex and change the ap_proxy_strcmp_ematch() function to treat $N as 
"*", but I don't see any real advantage here.


Huh???

I thought we were talking about this simple, streamlined patch that
does JUST matching/awareness of regex's, not the one which seems
to fold in a BUNCH of other semi-related stuff.



Sorry, I was talking about Yann's latest patch all the time :(.








The version I looked at had some 'use_regex' logic outside of
ap_proxy_get_worker(), right before we call it, in fact. Maybe
I'm seeing an older version of the patch?



I think that's the correct patch. It has use_regex only for creating the proxy_worker, to 
distinguish between a normal worker and a regex worker.

Note that use_regex does not influence the call of ap_proxy_get_worker(). As I said, it's used 
only to distinguish whether the newly created worker is a normal worker or a regex worker.

Regards,
Jan Kaluza






Re: [PATCH] Fix settings options with ProxyPassMatch

2014-04-28 Thread Jan Kaluža

On 04/25/2014 02:57 AM, Yann Ylavic wrote:

Hi Jan,

sorry for the late.


No problem :).



On Tue, Apr 22, 2014 at 3:39 PM, Jan Kaluža jkal...@redhat.com wrote:

Hi again,

the patch has been here for some time already. I hesitate to commit it to
trunk without any review, because it changes the core code in mod_proxy and
I'm afraid that there could exist more corner-cases I'm not aware of.

On the other side (and to motivate someone with deeper knowledge of
mod_proxy :) ), it would be great to have UDS support also for
ProxyPassMatch and fix some old bugs related to setting options together
with ProxyPassMatch (like PR 43513 and PR 54973).

So, anyone who would have time to review this patch, please? :)


I slightly modified your patch with the following changes :
1. Use \-escaping instead of %-escaping for wildcard_match so to
consume less space and unescape quickly.


Hm, I'm not sure right now if that's right. The corner case here is the 
following configuration:


ScriptAlias /cgi?bin/ /usr/local/apache2/cgi-bin/
ProxyPassMatch ^/x/(.*\.x)$ http://127.0.0.1/cgi?bin/$1 timeout=1

I'm not sure how valid this configuration is, but it leads to the following 
request:


"http://127.0.0.1/cgi%3Fbin/printenv.x"

while the proxy_worker has the following escaped string:

"http://127.0.0.1/cgi\\?bin/*"

So the "right" proxy_worker is not matched in this case (not sure if 
"right" is really right in this case :) ).



2. Fix the recursion in ap_proxy_wildcard_match(), which was calling
the legacy ap_strcmp_match(), and unescape according to 1.
2'. Shouldn't this function be made iterative (it is not final
recursive currently)?
3. Always try to get the worker with its de_socketfy()ed name, and
only if not found use ap_proxy_define[_wildcard]_worker() with the
configured URL (add_pass() and proxysection() were not consistent
about ap_proxy_get_worker()'s given name/URL).


+1 to this.


4. Don't match both the normal name with strcnmp() and the wildcard
name with strcmp_match() in ap_proxy_get_worker(), since the worker is
either wildcard or not (this enforcement is also added to add_pass()
and proxysection() so that it is not possible to declare the same
worker name once with a Match, again without, or vice versa).
4'. Does it make sense to use the longest match_name or should the
first matching wildcard worker (if any) win?


Hm, I think it should be the longest match_name, if you mean that 
"http://127.0.0.1/cgi-bin/*" should be matched if there exist two 
workers like "http://127.0.0.1/*" and "http://127.0.0.1/cgi-bin/*".


But if I'm right, in this case there will be just one *shared* worker. 
Can you write an example of two non-shared workers with matching 
wildcard names?



5. Rename ap_proxy_wildcard_match() as ap_proxy_strcmp_ematch() to
suggest it could go to server/util.ch (if/when the associated
ap_matchexp_escape() and ap_proxy_strcasecmp_ematch() are coded).
6. Move proxy_worker_shared { char
wildcard_name[PROXY_WORKER_MAX_NAME_SIZE]; } to proxy_worker { char
*match_name; } to save shm space and avoid length issues (I don't see
how this could be updated at runtime, for now, balancer-manager can't
declare Match workers).
6. Keep ap_proxy_escape_wildcard_url() static (private) since it is
not used outside proxy_util (now).


+1 to all above.


This is not very tested (yet), I'll come back if I detect issues, but
first I'd like your feedbacks on this modifications...


I've tested the patch and generally it works for me (except the things I 
wrote above).



Patch attached.

Regards,
Yann.



Regards,
Jan Kaluza


