Re: what to expect from fcgid

2014-02-24 Thread Lazy
https://github.com/hollow/mod_fastcgi_handler is quite easy to
configure; unfortunately, it is abandoned.

2014-02-21 17:56 GMT+01:00 Антон Панков :
> Dear All.
>
> Please, explain what functionality is now in trunk and what the plans
> are for mod_fcgid. I miss the FastCGIExternalServer feature.
>
> The problem is to allow different site parts to be processed by
> different FastCGI servers (frankly speaking, by different php-fpm
> pools). There is also a need for an external access checker.
>
> This type of configuration has the following problems with Apache
> 2.4.x:
>
> 1. With mod_fastcgi and some magic [1] it is possible to achieve the
> desired configuration, but mod_fastcgi doesn't compile with Apache 2.4.
>
> 2. mod_fcgid lacks the FastCGIExternalServer feature.
>
> 3. mod_proxy_fcgi has rough configuration. It can't be tied to
> handlers, so it is assigned to a directory, not to specific files.
> Also, it avoids Unix sockets (I know, they are in trunk).
>
> So, the question is whether the things that will help me are already in
> trunk. Does it make sense to wait for a future Apache httpd release?
>
> Maybe I am doing things wrong and somebody can direct me to the right way.
>
>
> Appendices
> 1. Magic to run different
> #- PHP startup as unprivileged user (general purpose script) -
> #- php-fpm (empty file) treated as a pipe to the php-fpm standard pool
> FastCgiExternalServer /var/fdwww/outroot/php-fpm -socket /var/run/fastcgi/php-fpm-common
> FastCgiExternalServer /var/fdwww/outroot/php-fpm-wxsf -socket /var/run/fastcgi/php-fpm-common
>
> #   PHP regular fastcgi starter
> Action php-fc /internal/doscript
> Alias /internal/doscript  /var/fdwww/outroot/php-fpm
>
> #-BOX realm ---
> # Access checker for BOX realm
> FastCgiServer /usr/fdsys/acheckers/boxcheck.pl -processes 2
> # PHP startup as boxscript user
> #- php-fpm-box (empty file) treated as a pipe to the php-fpm boxscript pool
> FastCgiExternalServer /var/fdwww/outroot/php-fpm-box -socket /var/run/fastcgi/php-fpm-box
> # PHP magic for Box starter
> Action php-fc-box /internal/doscript-box
> Alias /internal/doscript-box  /var/fdwww/outroot/php-fpm-box
>
> --
> Best regards,
>  Anthony  mailto:ant_m...@inbox.ru
>


Re: Adding AddHandler support for mod_proxy

2014-02-16 Thread Lazy
2014-02-06 ryo takatsuki :
> Hi,
>
> I have an improvement request to suggest, but I would like to first provide
> some background to justify it; I apologise for the long email :).
>
> I'm actively using mod_proxy to forward PHP file requests to PHP-FPM. My
> current approach is to use a RewriteRule with the 'P' flag because (in most
> of the cases) it plays nicely with other rules configured by the
> applications I'm configuring, as well as allowing per-Directory
> configurations.
>
> To make it work properly I must ensure the proxy RewriteRule is the last
> one to be evaluated. The problem is that from time to time I encounter
> corner cases in which the previously executed rules include an [L] flag
> that aborts evaluation of the following rules, skipping the proxy one and
> making Apache serve the PHP source as plain text. This can be solved by
> tweaking the rules, but it is a tedious process and it is hard to determine
> all the scenarios in which the rewrites could go wrong.

IMHO this is a good idea; a handler is more compatible with .htaccess
files created for mod_php, and it fits shared hosting environments.

>
> Thinking about what my goal with all of this was at the beginning, I realised
> I only wanted a way of configuring a handler for all my PHP files, which in
> this case is PHP-FPM, without having to worry about what happens before the
> resource is served. This made me think about the possibility of adding this
> functionality to mod_proxy itself, allowing a proxy worker to be defined as
> a handler for certain types of files. Something like:
>
>  AddHandler "proxy:unix:/path/to/app.sock|fcgi://localhost/" .php

AddHandler might be tricky from a security point of view: most CMS software,
for example, checks only the last extension before writing uploaded files,
but this AddHandler would also pass test.php.jpeg to PHP, which might
execute it.
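To illustrate (a standalone toy sketch, not mod_mime's actual code; the
lookup table below is invented): each dot-separated extension is examined
on its own, and a handler picked up from an earlier extension survives
unless a later extension maps to a different one, which is why
test.php.jpeg still reaches the .php handler.

#include <stdio.h>
#include <string.h>

/* toy stand-in for an AddHandler mapping */
static const char *handler_for_ext(const char *ext)
{
    if (strcmp(ext, "php") == 0)
        return "proxy:unix:/path/to/app.sock|fcgi://localhost/";
    return NULL;                    /* .jpeg etc. carry no handler */
}

/* walk every extension; a later one only overrides if it has a mapping */
static const char *pick_handler(char *filename)
{
    const char *handler = NULL;
    char *dot = strchr(filename, '.');

    while (dot) {
        char *next = strchr(dot + 1, '.');
        const char *h;
        if (next)
            *next = '\0';           /* isolate this extension */
        h = handler_for_ext(dot + 1);
        if (h)
            handler = h;
        if (next)
            *next = '.';            /* restore the name */
        dot = next;
    }
    return handler;
}

int main(void)
{
    char name[] = "test.php.jpeg";
    /* prints the php handler even though the final extension is .jpeg */
    printf("%s -> %s\n", name, pick_handler(name));
    return 0;
}

The usual mitigation is an anchored match (e.g. a FilesMatch on \.php$
combined with SetHandler) so that only the final extension counts.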

> I made a quick POC; it is a really small change, and for those in my
> situation it could really simplify the configuration of their apps. Of
> course, I'm open to criticisms and alternative solutions :).
>
>
> The code that adds the new functionality is inserted at the beginning of
> mod_proxy's proxy_handler. The conditions are a little weird because I only
> wanted to check the handler if it is not a proxy request already.
>
> diff --git a/modules/proxy/mod_proxy.c b/modules/proxy/mod_proxy.c
> index 9d7c92f..49f3bdc 100644
> --- a/modules/proxy/mod_proxy.c
> +++ b/modules/proxy/mod_proxy.c
> @@ -927,8 +927,20 @@ static int proxy_handler(request_rec *r)
>  struct dirconn_entry *list = (struct dirconn_entry *)conf->dirconn->elts;
>
>  /* is this for us? */
> -if (!r->proxyreq || !r->filename || strncmp(r->filename, "proxy:", 6) != 0)
> +if (!r->filename)
> +  return DECLINED;
> +
> +if (!r->proxyreq) {
> +  if (r->handler && strncmp(r->handler, "proxy:", 6) == 0 && strncmp(r->filename, "proxy:", 6) != 0) {
> +r->proxyreq = PROXYREQ_REVERSE;
> +r->filename = apr_pstrcat(r->pool, r->handler, r->filename, NULL);
> +apr_table_setn(r->notes, "rewrite-proxy", "1");
> +  } else {
>  return DECLINED;
> +  }
> +} else if (strncmp(r->filename, "proxy:", 6) != 0) {
> +  return DECLINED;
> +}
>
>  /* handle max-forwards / OPTIONS / TRACE */
>  if ((str = apr_table_get(r->headers_in, "Max-Forwards"))) {


Re: URL scanning by bots

2013-04-30 Thread Lazy
2013/4/30 Graham Leggett 

> On 30 Apr 2013, at 12:03 PM, André Warnier  wrote:
>
> > The only cost would be a relatively small change to the Apache
> > webservers, which is what my suggestion consists of: adding a variable
> > delay (say between 100 ms and 2000 ms) to any 404 response.
>
> This would have no real effect.
>
> Bots are patient, slowing them down isn't going to inconvenience a bot in
> any way. The simple workaround if the bot does take too long is to simply
> send the requests in parallel. At the same time, slowing down 404s would
> break real websites, as 404 isn't necessarily an error, but rather simply a
> notice that says the resource isn't found.
>
> Regards,
> Graham
> --
>
>
If you want to slow down the bots I would suggest using

mod_security + simple scripts + ipset + iptables TARPIT in the raw table.

This way you would be able to block a very large number of IP addresses
efficiently; TARPIT will take care of delaying new bot connections at
minimal cost (much lower than delaying the request in userspace, or even
returning some error code).

http://ipset.netfilter.org/
http://serverfault.com/questions/113796/setting-up-tarpit-technique-in-iptables
http://www.modsecurity.org/documentation/modsecurity-apache/1.9.3/html-multipage/05-actions.html
-- 
Michal Grzedzicki


Re: [patch] Fix cross-user symlink race condition vulnerability

2012-11-04 Thread Lazy
2012/10/31 Eric Jacobs :
> On 10/31/2012 06:00 AM, Eric Covener wrote:
>>
>> In general that is the proper form -- but this particular issue is
>> documented as a limitation:
>>
>> "Omitting this option should not be considered a security restriction,
>> since symlink testing is subject to race conditions that make it
>> circumventable."
>
>
> Some users (like Bluehost) require the functionality of symlinks without the
> possibility of server side vulnerabilities. Having the vulnerability
> documented doesn't keep servers safe. The patch I submitted allows httpd to
> use symlinks in a protected fashion that doesn't allow for users to serve
> arbitrary files.
>
> I'll go ahead and submit a more detailed email to the security list. More
> feedback from the devs is appreciated.

On some systems, at least on Linux, you can use a grsecurity kernel
patch feature which prevents those races and is cheaper performance-wise:

+config GRKERNSEC_SYMLINKOWN
+   bool "Kernel-enforced SymlinksIfOwnerMatch"
+   default y if GRKERNSEC_CONFIG_AUTO && GRKERNSEC_CONFIG_SERVER
+   help
+ Apache's SymlinksIfOwnerMatch option has an inherent race condition
+ that prevents it from being used as a security feature.  As Apache
+ verifies the symlink by performing a stat() against the target of
+ the symlink before it is followed, an attacker can setup a symlink
+ to point to a same-owned file, then replace the symlink with one
+ that targets another user's file just after Apache "validates" the
+ symlink -- a classic TOCTOU race.  If you say Y here, a complete,
+ race-free replacement for Apache's "SymlinksIfOwnerMatch" option
+ will be in place for the group you specify. If the sysctl option
+ is enabled, a sysctl option with name "enforce_symlinksifowner" is
+ created.

There is probably something similar on the *BSDs, and if there isn't, it
wouldn't be hard to make.

Your patch checks for race conditions every time, even if symlinks
aren't allowed. It also references a configuration-dependent directory
like /usr/local/apache/htdocs.
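
For comparison, the usual userspace way to shrink that window is to check
ownership on the descriptor that was actually opened instead of stat()ing
the path first. A generic sketch (not the submitted patch and not httpd
code):

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Open the target, then verify the owner of whatever really got opened.
 * Swapping the symlink between a stat() and the open() no longer helps,
 * because the check happens on the already-open descriptor. */
static int open_if_owner_matches(const char *path, uid_t expected_owner)
{
    struct stat st;
    int fd = open(path, O_RDONLY);

    if (fd == -1)
        return -1;
    if (fstat(fd, &st) == -1 || st.st_uid != expected_owner) {
        close(fd);
        return -1;
    }
    return fd;      /* serve from this fd, never by reopening the path */
}

Reproducing SymlinksIfOwnerMatch exactly (comparing the link's owner to
the target's) still needs an lstat() on the path, which reintroduces a
smaller race; that is why the kernel-side enforcement is attractive.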

-- 
Michal Grzedzicki


Re: Re: Re: Re: mod_fcgid concurrency bottleneck, issue#53693

2012-08-28 Thread Lazy
2012/8/28 pqf :
> So what can mod_fcgid do when overloaded like this?
> 1. mod_fcgid gets a request
> 2. mod_fcgid can't apply a free FCGI handler slot
> 3. mod_fcgid sends a spawn request to the PM
> 4. The PM denies the request (too many processes already)
> 5. Now
>    for( i=1; i<64; i++)
>    {
>       a) mod_fcgid delays a while, and then sends another spawn request
>          to the PM and tries to apply a free slot again.
>       b) mod_fcgid sends another spawn request at once, even though the
>          last request was denied.
>       c) ??
>       (currently it is a; b is maybe not a good idea; any new ideas?)
>    }
>
> I think the bottleneck is too many requests and too few FCGI handlers.
> httpd (or mod_fcgid) either drops client connections or delays a while;
> is there no other way out?

My idea is to add an availability number to each class. If a wait fails
it will be decreased, and increased if the wait is successful. Let's say
we want to wait at most 100 times 250 ms:


int connected = 0;

/* note: multiply before dividing, otherwise integer division makes the
   bound 0 for anything below full availability */
for (i = 0; !connected && i < (get_class_avail() * 100) / MAX_AVAIL; i++) {
    /* Apply a process slot */
    bucket_ctx->procnode = apply_free_procnode(r, &fcgi_request);

    if (!bucket_ctx->procnode) {
        /* Send a spawn request if I can't get a process slot;
           procmgr_send_spawn_cmd() returns APR_SUCCESS if a
           process is created */
        if (procmgr_send_spawn_cmd(&fcgi_request, r) != APR_SUCCESS)
            apr_sleep(apr_time_from_msec(250));
        else
            /* a process was created, apply a slot again */
            bucket_ctx->procnode = apply_free_procnode(r, &fcgi_request);
    }

    if (bucket_ctx->procnode) {
        if (proc_connect_ipc(bucket_ctx->procnode,
                             &bucket_ctx->ipc) != APR_SUCCESS) {
            proc_close_ipc(&bucket_ctx->ipc);
            bucket_ctx->procnode->diewhy = FCGID_DIE_CONNECT_ERROR;
            return_procnode(r, bucket_ctx->procnode, 1 /* has error */);
            bucket_ctx->procnode = NULL;
            decrease_avail();
        }
        else {
            increase_avail();
            connected = 1;
        }
    }
}

if (!connected) {
    decrease_avail();
    ap_log_rerror(APLOG_MARK, APLOG_WARNING, 0, r,
                  "mod_fcgid: can't apply process slot for %s",
                  cmd_conf->cmdline);
    return HTTP_SERVICE_UNAVAILABLE;
}

decrease_avail() might halve availability each time called.


Availability should be dynamic, maybe controlled by the process manager
and returned to the threads handling connections by
procmgr_send_spawn_cmd(); it could depend on the total number of denied
spawn requests for a specific class, in a similar way to the score.
Without this, connections will get a 503 no sooner than 25 seconds,
which is IMHO still too long. Another improvement would be to make the
wait time shorter for classes that are not overloaded, to keep the
penalty of a denied spawn as low as possible.
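
A minimal sketch of that increase/decrease idea (the names, MAX_AVAIL,
the recovery step and keeping one value per class in the shared
scoreboard are all assumptions, not existing mod_fcgid code):

#define MAX_AVAIL 1000              /* full availability for a class */

static int class_avail = MAX_AVAIL; /* in reality one value per class,
                                       kept in the shared scoreboard */

static void decrease_avail(void)
{
    class_avail /= 2;               /* each failed wait halves the budget */
}

static void increase_avail(void)
{
    class_avail += MAX_AVAIL / 10;  /* successful waits recover gradually */
    if (class_avail > MAX_AVAIL)
        class_avail = MAX_AVAIL;
}

static int get_class_avail(void)
{
    return class_avail;
}

With the loop bound above scaled by this value, a class that keeps
failing quickly drops to zero retries and its requests get a 503
immediately, while a healthy class keeps the full wait budget.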


I plan to work on that later.

>
>
> Another question: is it necessary to call procmgr_init_spawn_cmd() from
> inside the for loop?
> I took a brief look; it seems not to be necessary. I will move it out of
> the loop and test.
>
> 2012-08-28
> 
> pqf
> 
> From: Lazy
> Sent: 2012-08-27 21:47
> Subject: Re: Re: Re: mod_fcgid concurrency bottleneck, issue#53693
> To: "dev"
> Cc:
>
> 2012/8/16 pqf :
>> How about this:
>> 1. procmgr_post_spawn_cmd() now returns a status code from the PM, so the
>> process handler now knows whether the spawn request was denied or not.
>> 2. if a new process is created, no sleep is needed.
>> 3. if no process is created, sleep for a while
>
> sorry for the late reply,
>
> in the old code there was no sleep() between procmgr_post_spawn_cmd()
> and apply_free_procnode()
>
> sleep() was invoked only if there was no free procnode.
>
> This happened only if we were denied spawning a new process, or in some
> cases if some other thread managed to use that procnode before us.
>
> Your change addresses the case where some other thread stole "our" newly
> spawned fcgi process: the old code was waiting 1s before trying to spawn
> another/recheck, the new code doesn't. I guess this is the original issue
> in stress tests when the total number of simultaneous connections doesn't
> exceed the max fcgi processes. But when spawning is denied, the recovery
> time is still a long 1s.
>
>
> I was referring to cases when the spawn is denied.
>
> If a vhost is overloaded or someone added sleep(60) in the code,
> mod_fcgid blocks all requests to that vhost
> for over a minute and it is possible to occupy 1000 threads using
> under 20 new connections per second to the slow vhost

Re: Re: Re: mod_fcgid concurrency bottleneck, issue#53693

2012-08-27 Thread Lazy
2012/8/16 pqf :
> How about this:
> 1. procmgr_post_spawn_cmd() now returns a status code from the PM, so the
> process handler now knows whether the spawn request was denied or not.
> 2. if a new process is created, no sleep is needed.
> 3. if no process is created, sleep for a while

sorry for the late reply,

in the old code there was no sleep() between procmgr_post_spawn_cmd()
and apply_free_procnode()

sleep() was invoked only if there was no free procnode.

This happened only if we were denied spawning a new process, or in some
cases if some other thread managed to use that procnode before us.

Your change addresses the case where some other thread stole "our" newly
spawned fcgi process: the old code was waiting 1s before trying to spawn
another/recheck, the new code doesn't. I guess this is the original issue
in stress tests when the total number of simultaneous connections doesn't
exceed the max fcgi processes. But when spawning is denied, the recovery
time is still a long 1s.


I was referring to cases when the spawn is denied.

If a vhost is overloaded or someone added sleep(60) in the code,
mod_fcgid blocks all requests to that vhost for over a minute, and it is
possible to occupy 1000 threads using under 20 new connections per
second to the slow vhost. This can be mitigated by adding availability,
which will impact the time spent waiting for a free process. An
overloaded vhost will start to drop connections faster, preventing the
web server from reaching the MaxClients
limit.

Another question: is it necessary to call procmgr_init_spawn_cmd()
from inside the for loop?


>
> 2012-08-16
> 
> pqf
> 
> From: Lazy
> Sent: 2012-08-16 16:47
> Subject: Re: Re: mod_fcgid concurrency bottleneck, issue#53693
> To: "dev"
> Cc:
>
> 2012/8/16 pqf :
>> Hi, Michal
>> My solution does "add availability to each class", which is what the
>> procmgr_post_spawn_cmd() call in each loop does.
>> The sleep() call was introduced for a stress test without warm-up time; in
>> this case, mod_fcgid will create more processes than a slow-start one (each
>> process handler can't apply a free slot at the very beginning, so it sends
>> a request to the process manager to create one; it's easy to reach the max
>> process limit during httpd startup, but the idle processes will be killed
>> later), so the sleep() call is a little like a "server-side warm-up delay".
>> But since someone said that after removing this sleep() the server works
>> fine without the bottleneck (maybe he didn't notice the warm-up issue?), I
>> thought removing the sleep() was a good idea. But reducing the sleep() time
>> is fine with me too.
>
> I was referring to the case where all processes are busy. Without
> sleep(), handle_request() will quickly send spawn requests, which will
> be denied by the process manager; with sleep(), handle_request() will
> always wait quite a long time,
> occupying slots
>
> --
> Michal Grzedzicki
>
>


Re: Re: mod_fcgid concurrency bottleneck, issue#53693

2012-08-16 Thread Lazy
2012/8/16 pqf :
> Hi, Michal
> My solution does "add availability to each class", which is what the
> procmgr_post_spawn_cmd() call in each loop does.
> The sleep() call was introduced for a stress test without warm-up time; in
> this case, mod_fcgid will create more processes than a slow-start one (each
> process handler can't apply a free slot at the very beginning, so it sends
> a request to the process manager to create one; it's easy to reach the max
> process limit during httpd startup, but the idle processes will be killed
> later), so the sleep() call is a little like a "server-side warm-up delay".
> But since someone said that after removing this sleep() the server works
> fine without the bottleneck (maybe he didn't notice the warm-up issue?), I
> thought removing the sleep() was a good idea. But reducing the sleep() time
> is fine with me too.

I was referring to the case where all processes are busy. Without
sleep(), handle_request() will quickly send spawn requests, which will
be denied by the process manager; with sleep(), handle_request() will
always wait quite a long time,
occupying slots

-- 
Michal Grzedzicki


Re: mod_fcgid concurrency bottleneck, issue#53693

2012-08-15 Thread Lazy
2012/8/15 pqf :
> Hi, all
> I prefer the following solution, can anyone review it?
> procmgr_post_spawn_cmd() will block until the process manager creates a new
> fcgid process; the worst case is that someone else takes the newly created
> process before I do, and I have to post another spawn command to the PM
> again. The extreme case is looping FCGID_APPLY_TRY_COUNT times but getting
> no process slot.
>
>
> Index: fcgid_bridge.c
> ===
> --- fcgid_bridge.c  (revision 1373226)
> +++ fcgid_bridge.c  (working copy)
> @@ -30,7 +30,7 @@
>  #include "fcgid_spawn_ctl.h"
>  #include "fcgid_protocol.h"
>  #include "fcgid_bucket.h"
> -#define FCGID_APPLY_TRY_COUNT 2
> +#define FCGID_APPLY_TRY_COUNT 4
>  #define FCGID_REQUEST_COUNT 32
>  #define FCGID_BRIGADE_CLEAN_STEP 32
>
> @@ -447,19 +447,13 @@
>  if (bucket_ctx->procnode)
>  break;
>
> -/* Avoid sleeping the very first time through if there are no
> -   busy processes; the problem is just that we haven't spawned
> -   anything yet, so waiting is pointless */
> -if (i > 0 || j > 0 || count_busy_processes(r, &fcgi_request)) {
> -apr_sleep(apr_time_from_sec(1));
> -
> -bucket_ctx->procnode = apply_free_procnode(r,
> &fcgi_request);
> -if (bucket_ctx->procnode)
> -break;
> -}
> -
>  /* Send a spawn request if I can't get a process slot */
>  procmgr_post_spawn_cmd(&fcgi_request, r);
> +
> +/* Try again */
> +bucket_ctx->procnode = apply_free_procnode(r, &fcgi_request);
> +if (bucket_ctx->procnode)
> +break;
>  }
>
>  /* Connect to the fastcgi server */


If you get rid of the sleep, Apache will not wait for a free process if
all of them are busy; this will lead to 503 errors.

Currently mod_fcgid waits up to FCGID_APPLY_TRY_COUNT * FCGID_REQUEST_COUNT
* 1 second, which is usually 64 seconds. This means that if you have an
overloaded vhost with a low FcgidMaxProcessesPerClass it can bring the
whole server down: each thread waits 64 seconds, so it doesn't take long
before all threads are occupied.

In my setup we lowered the wait time and FCGID_REQUEST_COUNT to lower
the impact of an overloaded class, but I think the best solution will be
to add availability to each class. The total wait time will be related
to it (by changing the sleep time and FCGID_APPLY_TRY_COUNT). If a
request is unsuccessful, availability will be halved, so next time the
wait time will be shorter. This way a congested class will get 0%
availability, and new connections will instantly get a 503 if there are
no free slots. A successful wait will increase availability.


Regards,

Michal Grzedzicki


mod_fcgid graceful restarts

2012-04-10 Thread Lazy
Hi All,

Currently a graceful restart while using mod_fcgid just kills all
subprocesses; this is not safe for applications and slows down
reloads.

John Lightsey provided a patch to implement real graceful restarts in
mod_fcgid; the graceful part is now separated out, as requested in the
bug report:

https://issues.apache.org/bugzilla/show_bug.cgi?id=48769

I am running with this patch on over 10 machines doing 24+ graceful
restarts daily, so far without any issues (reloads are faster, and
there are no orphans left behind).



Regards,

Michal Grzedzicki


long timeout on overloaded mod_fcgid

2011-12-29 Thread Lazy
Hi,

When some vhost's scripts exhaust all the process slots available to it
(FcgidMaxProcessesPerClass),
subsequent requests wait for over 60 seconds before a 503 error is issued.

I came across this while modifying suexec to use cgroups to provide
better resource separation for use in our shared hosting environment.
When some vhost got its CPU resources sharply reduced, its incoming
traffic quickly occupied all available connection slots, rendering the
whole web server unavailable.

This can lead to a DoS situation where an unresponsive FastCGI
application occupies a large number of connection slots.

mod_fcgid could use process statistics to detect these situations and
return a 503 error sooner.

Maybe check the active and idle times, or allow only a limited number
of clients to wait for a single FCGI process class if all of its
processes are busy.
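
A standalone illustration of that last idea, capping the number of
clients allowed to wait on one class (the structure and the limit below
are invented; the real thing would live in mod_fcgid's shared scoreboard
and would use apr_atomic or the scoreboard mutex):

#include <stdatomic.h>

#define MAX_WAITERS_PER_CLASS 16

struct class_stat {
    atomic_int waiters;     /* requests currently stuck in the retry loop */
};

/* returns 1 if the caller may enter the wait loop, 0 if it should answer
 * 503 right away because too many requests already wait on this class */
static int try_begin_wait(struct class_stat *cs)
{
    if (atomic_fetch_add(&cs->waiters, 1) >= MAX_WAITERS_PER_CLASS) {
        atomic_fetch_sub(&cs->waiters, 1);
        return 0;
    }
    return 1;
}

static void end_wait(struct class_stat *cs)
{
    atomic_fetch_sub(&cs->waiters, 1);
}

That alone would keep one slow class from tying up every server thread,
independently of how long the individual waits are.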

http://svn.apache.org/repos/asf/httpd/mod_fcgid/trunk/modules/fcgid/fcgid_bridge.c

#define FCGID_REQUEST_COUNT 32
#define FCGID_APPLY_TRY_COUNT 2

handle_request(request_rec * r, int role, fcgid_cmd_conf *cmd_conf,
   apr_bucket_brigade * output_brigade)
{
...
/* Try to get a connected ipc handle */
for (i = 0; i < FCGID_REQUEST_COUNT; i++) {
/* Apply a free process slot, send a spawn request if I can't get one */
for (j = 0; j < FCGID_APPLY_TRY_COUNT; j++) {
/* Init spawn request */
procmgr_init_spawn_cmd(&fcgi_request, r, cmd_conf);
^^
do we have to do it on every iteration?

if yes, I think it can be moved to just before
procmgr_post_spawn_cmd(&fcgi_request, r);
so if there is a free procnode we don't call procmgr_init_spawn_cmd() at all


bucket_ctx->ipc.connect_timeout =
fcgi_request.cmdopts.ipc_connect_timeout;
bucket_ctx->ipc.communation_timeout =
fcgi_request.cmdopts.ipc_comm_timeout;

/* Apply a process slot */
bucket_ctx->procnode = apply_free_procnode(r, &fcgi_request);
if (bucket_ctx->procnode)
break;

/* Avoid sleeping the very first time through if there are no
   busy processes; the problem is just that we haven't spawned
   anything yet, so waiting is pointless */
if (i > 0 || j > 0 || count_busy_processes(r, &fcgi_request)) {
apr_sleep(apr_time_from_sec(1));

bucket_ctx->procnode = apply_free_procnode(r, &fcgi_request);
if (bucket_ctx->procnode)
break;
}

/* Send a spawn request if I can't get a process slot */
procmgr_post_spawn_cmd(&fcgi_request, r);
}


-- 
Michal Grzedzicki


Re: PATCH mod_fcgid compile fails

2011-10-05 Thread Lazy
2011/10/5 stefan novak :
>> This is only a warning; are you sure httpd fails to build?
>
> yes, httpd fails to build with these lines :(
> maybe another compile flag will help, I'm not sure...

I ran a similar test on my system without problems (only this warning):
# ./configure --enable-fcgid
# make
...
/usr/share/apr-1.0/build/libtool --silent --mode=link
i486-linux-gnu-gcc -pthread-o httpd  modules.lo buildmark.o
-export-dynamic server/libmain.la modules/aaa/libmod_authn_file.la
modules/aaa/libmod_authn_default.la modules/aaa/libmod_authz_host.la
modules/aaa/libmod_authz_groupfile.la modules/aaa/libmod_authz_user.la
modules/aaa/libmod_authz_default.la modules/aaa/libmod_auth_basic.la
modules/fcgid/libmod_fcgid.la modules/filters/libmod_include.la
modules/filters/libmod_filter.la modules/loggers/libmod_log_config.la
modules/metadata/libmod_env.la modules/metadata/libmod_setenvif.la
modules/metadata/libmod_version.la modules/http/libmod_http.la
modules/http/libmod_mime.la modules/generators/libmod_status.la
modules/generators/libmod_autoindex.la
modules/generators/libmod_asis.la modules/generators/libmod_cgi.la
modules/mappers/libmod_negotiation.la modules/mappers/libmod_dir.la
modules/mappers/libmod_actions.la modules/mappers/libmod_userdir.la
modules/mappers/libmod_alias.la modules/mappers/libmod_so.la
server/mpm/prefork/libprefork.la os/unix/libos.la -lm
/usr/src/test_ap/httpd-2.2.21/srclib/pcre/libpcre.la
/usr/lib/libaprutil-1.la /usr/lib/libapr-1.la -luuid -lrt -lcrypt
-lpthread -ldl
modules/fcgid/.libs/libmod_fcgid.a(fcgid_mutex_unix.o): In function
`fcgid_mutex_create':
fcgid_mutex_unix.c:(.text+0x47): warning: the use of `tmpnam' is
dangerous, better use `mkstemp'
make[1]: Leaving directory `/obr/src/test_ap/httpd-2.2.21'

# ./httpd -l|grep fcgid
  mod_fcgid.c

So httpd builds correctly.

If it still fails for you, please paste more of the make log and your configure options.

-- 
Michal Grzedzicki


Re: PATCH mod_fcgid compile fails

2011-10-05 Thread Lazy
2011/10/5 stefan novak :
> Hello!
> When you want to compile mod_fcgid as a built-in static module it fails with:
> modules/fcgid/.libs/libmod_fcgid.a(fcgid_mutex_unix.o): In function
> `fcgid_mutex_create':
> fcgid_mutex_unix.c:(.text+0x65): warning: the use of `tmpnam' is dangerous,
> better use `mkstemp'
> make[1]: Leaving directory `/root/rpmbuild/SOURCES/httpd-2.2.21'

This is only a warning; are you sure httpd fails to build?

> The following patch helps, but I don't know if it's the right solution.
> 1129#centos6-build:diff modules/fcgid/fcgid_mutex_unix.c
> ../mod_fcgid-2.3.6/modules/fcgid/fcgid_mutex_unix.c
> 118c118
> <     mkstemp(lockfile);
> ---
>>     tmpnam(lockfile);
> Can someone check it?

I think this won't work

From the man pages:

The  mkstemp()  function generates a unique temporary filename from
template, creates and opens the file, and returns an open file
descriptor for the file.

The  tmpnam() function returns a pointer to a string that is a valid
filename, and such that a file with this name did not exist at some
point in time

tmpnam() puts a temporary filename into lockfile, which is then used by
apr_global_mutex_create().

mkstemp() expects a template ending in XXXXXX; called on the existing
buffer as in the patch it will fail and won't touch lockfile, so
apr_global_mutex_create() will always get the same empty string. I think
this might work on some platforms where the default lock mechanism
doesn't need a filename.
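
For reference, a hedged sketch of what a working mkstemp()-based variant
might look like (not the mod_fcgid code, and the /tmp path is only an
example): mkstemp() needs a template ending in XXXXXX, rewrites it in
place, and returns a descriptor that has to be closed if only the
generated name is wanted.

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int make_lockfile_name(char *lockfile, size_t len)
{
    static const char tmpl[] = "/tmp/fcgid.lock.XXXXXX";
    int fd;

    if (len < sizeof(tmpl))
        return -1;
    memcpy(lockfile, tmpl, sizeof(tmpl));

    fd = mkstemp(lockfile);   /* creates the file and fills in XXXXXX */
    if (fd == -1)
        return -1;
    close(fd);                /* only the generated name is needed for
                                 apr_global_mutex_create() */
    return 0;
}

Whether to leave the created file around or unlink it before handing the
name to APR is a separate question; the sketch only covers the name
generation.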

-- 
Michal Grzedzicki


Re: DoS with mod_deflate & range requests

2011-08-23 Thread Lazy
2011/8/23 Lazy :
> 2011/8/23 Stefan Fritsch :
>> http://seclists.org/fulldisclosure/2011/Aug/175
>>
>> I haven't looked into it so far. And I am not sure I will have time today.
>>
>
> it is sending HEAD requests with lots of  ranges
> HEAD / HTTP/1.1
> Host: 
> Range:bytes=0-,5-1,5-2,5-3,.
>
> the code in
> ap_byterange_filter()
> http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/modules/http/byterange_filter.c
> creates a bucket for every range element,
>
> the number of buckets is limited by the size of the document in the
> published code, but I think the attack can be enhanced by
> using 1-2,1-3,..1-doc_size,2-1,2-2, 2-doc_size
>
> does a Range header in a HEAD request make any sense at all?

The quick fix below made it immune to this DoS:

diff -ru modules/http/byterange_filter.c.org modules/http/byterange_filter.c
--- byterange_filter.c  2011-02-13 15:32:19.0 +0100
+++ modules/http/byterange_filter.c 2011-08-23 15:54:37.0 +0200
@@ -320,6 +320,7 @@
 const char *if_range;
 const char *match;
 const char *ct;
+char * tmp;
 int num_ranges;

 if (r->assbackwards) {
@@ -373,14 +374,13 @@
 }
 }

-if (!ap_strchr_c(range, ',')) {
-/* a single range */
-num_ranges = 1;
-}
-else {
-/* a multiple range */
-num_ranges = 2;
-}
+/* count ranges, exit if more than 10 */
+tmp=range+6;
+num_ranges=1;
+while(*++tmp)
+if(*tmp == ',')
+   if(++num_ranges > 10)
+   return 0;

 r->status = HTTP_PARTIAL_CONTENT;
 r->range = range + 6;


Re: DoS with mod_deflate & range requests

2011-08-23 Thread Lazy
2011/8/23 Stefan Fritsch :
> http://seclists.org/fulldisclosure/2011/Aug/175
>
> I haven't looked into it so far. And I am not sure I will have time today.
>

it is sending HEAD requests with lots of  ranges
HEAD / HTTP/1.1
Host: 
Range:bytes=0-,5-1,5-2,5-3,.

the code in
ap_byterange_filter()
http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/modules/http/byterange_filter.c
creates a bucket for every range element,

the number of buckets is limited by the size of the document in the
published code, but I think the attack can be enhanced by
using 1-2,1-3,..1-doc_size,2-1,2-2, 2-doc_size

Does a Range header in a HEAD request make any sense at all?


Apache stuck on sendmsg() and recvmsg()

2011-06-30 Thread Lazy
Hi,

I'm trying to fix an issue in a custom MPM called peruser. More or less
it's a prefork with pools of processes running as different users.
An additional pool of processes called Multiplexers accepts connections
and sends them to Workers. Each worker pool has its own pair of sockets
(socketpair(PF_UNIX, SOCK_STREAM)), one for the Multiplexers and the
other for the Workers. A Multiplexer sends the socket and request data
to a Worker using a blocking sendmsg(); Workers use a non-blocking
recvmsg().

The code looks like this

in Workers
receive_from_multiplexer()
...
// Don't block
ret = recvmsg(ctrl_sock_fd, &msg, MSG_DONTWAIT);

if (ret == -1 && errno == EAGAIN) {
_DBG("receive_from_multiplexer recvmsg() EAGAIN, someone was faster");

return APR_EAGAIN;
}
else if (ret == -1) {
_DBG("recvmsg failed with error \"%s\"", strerror(errno));
return APR_EGENERAL;
}
else _DBG("recvmsg returned %d", ret);

in Multiplexers

if ((rv = sendmsg(processor->senv->output, &msg, 0)) == -1)
{
apr_pool_destroy(r->pool);
ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, ap_server_conf,
 "Writing message failed %d %d", rv, errno);
return -1;
}


The problem is that sometimes a Multiplexer is stuck in sendmsg() and a
Worker is stuck in recvmsg().
The OS is Linux 2.6.32 on amd64.

sendmsg(74, {msg_name(0)=NULL, msg_iov(5)=[{"y\1\0\0\0\0\0\0", 8},
{"\0\0\0\0\0\0\0\0", 8},
{"\230\322\265\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\364\351
\0\0\2\0\0\0\20\0\0\0\4\0\0\0\20\0\0\0\0\0\0\0l\324\265\0\0\0\0\0\0\0\0\0\0\0\0\0\2\0\351\364C\303p
\0\0\0\0\0\0\0\0\10\0\0\0\0\0\0\0F\251\266\0\0\0\0\0@\24
5\256\2\0\0\0\0\370\222\266\0\0\0\0\0`\30\252\366\377\177\0\0\320\30\252\366\377\177\0\0\5\0\0\0\0\0\0\0\364\230\254\242a\177\0\0\6\0\0\0\1\0\0\0\5\0\0\0\1\
0\0\0\4\0\0\0\1\0\0\0\3\0\0\0\1\0\1\0\213\0\0\0\1\0\0\0\220\361\5\0\0\0\0\0",
192}, {"GET /oglx.html HTTP/1.0\r\nHost: xxx \r\nU
ser-Agent: Mozilla/5.0 (compatible; Yahoo! Slurp;
http://help.yahoo.com/help/us/ysearch/slurp)\r\nAccept:
text/xml,application/xml,application/xhtml+xml,tex
t/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nAccept-Language:
en-us,en;q=0.5\r\nAccept-Encoding: gzip\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=
0.7\r\n\r\n\0", 378}, {"", 0}], msg_controllen=20, {cmsg_len=20,
cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, {151}},
msg_flags=MSG_PROXY|MSG_DONTWAIT}, 0 <
unfinished ...>

Killing destination Workers frees all Multiplexers.

I think that the problem might be in receive_from_multiplexer(): if a
message gets, for example, half received, the code doesn't go back to
re-read the rest. receive_from_multiplexer() is called after apr_poll()
in multiple Workers, so there's no guarantee that the same Worker goes
back to re-read the message, and this blocks the socket for other
messages.
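
A self-contained sketch of the re-read idea (not peruser code; the
fixed-size-header framing and the function below are assumptions): read
the header in full, picking up the passed descriptor from the first
segment, instead of giving up after one short recvmsg().

#include <sys/socket.h>
#include <sys/uio.h>
#include <errno.h>
#include <string.h>

/* Read exactly hdr_len bytes plus the SCM_RIGHTS fd that rides on the
 * first segment.  A short read keeps reading instead of leaving the rest
 * of the message in the stream for another Worker to stumble over. */
static int read_full_header(int sock, void *hdr, size_t hdr_len, int *passed_fd)
{
    struct iovec iov = { hdr, hdr_len };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg;
    struct cmsghdr *c;
    ssize_t n;
    size_t got;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    do {                          /* the passed fd arrives with this chunk */
        n = recvmsg(sock, &msg, 0);
    } while (n == -1 && errno == EINTR);
    if (n <= 0)
        return -1;

    *passed_fd = -1;
    c = CMSG_FIRSTHDR(&msg);
    if (c && c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS)
        memcpy(passed_fd, CMSG_DATA(c), sizeof(int));

    for (got = (size_t)n; got < hdr_len; ) {      /* finish a partial read */
        ssize_t m = recv(sock, (char *)hdr + got, hdr_len - got, 0);
        if (m == -1 && errno == EINTR)
            continue;
        if (m <= 0)
            return -1;
        got += (size_t)m;
    }
    return 0;
}

Even then, two Workers doing recvmsg() on the same stream socket can
interleave pieces of different messages, so the readers probably also
need to be serialized (or each message sent as a single SOCK_SEQPACKET
datagram).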

I know that this is not httpd code, but the peruser mailing list is dead,
and I don't have any other ideas where to go with this.

--
Michal Grzedzicki


Re: [RFC] A new hook: invoke_handler and web-application security

2009-04-12 Thread Lazy


On 2009-04-09 at 18:19, Stefan Fritsch wrote:



On Thursday 09 April 2009, Graham Dumpleton wrote:

Only you would know that. But then, I could be pointing you at the
wrong MPM. There is from memory another by another name developed
outside of ASF which intends to do the same think. The way it is
implemented is probably going to be different and may be the one I
am actually thinking of. I can't remember the name of it right now.


Maybe you mean MPM itk, which can change to different users for
different vhosts?

http://mpm-itk.sesse.net/



Or the peruser MPM.

__
Lazy

Re: [VOTE] Release Apache HTTP server 2.2.11

2008-12-07 Thread Lazy
2008/12/6 Ruediger Pluem <[EMAIL PROTECTED]>:
> Test tarballs for Apache httpd 2.2.11 are available at:
>
>http://httpd.apache.org/dev/dist/

builds/runs (default config) on OSX 10.5.5

-- 
Michal Grzedzicki


AllowOverride Options vs AllowOverride Options= reloaded

2008-05-11 Thread Lazy
Hello to all,

In https://issues.apache.org/bugzilla/show_bug.cgi?id=44262 I was told
to ask about this issue on [EMAIL PROTECTED].

Long story short (I guess my last mail was too long to be comprehensible ;)

In Apache 1.3/2.0 there was no way to select which options can or
can't be set in .htaccess files.

At some point in 2.2.x, AllowOverride Options=[option1,...] was introduced.

From my point of view it needlessly breaks compatibility with previous
Apache versions, because now AllowOverride Options only allows setting
the *'All' options, which isn't really all of them, because 'All' leaves
out OPT_INCNOEXEC, OPT_SYM_OWNER and OPT_MULTI. AllowOverride All allows
these 3 extra options, and so does 'AllowOverride Options=', which
should be a syntax error but instead leaves the overridable options
unaltered; the default overridables (from the same patch) are OPT_ALL +
OPT_INCNOEXEC + OPT_SYM_OWNER + OPT_MULTI.


*) 'All' as in Options All, which really isn't all options


AllowOverride  |  1.3/2.0                        |  2.2.x
---------------+---------------------------------+--------------------------------
Options        |  All + Multiviews and 2 others  |  only All
Options=       |  n/a                            |  All + Multiviews and 2 others



If it's designed to work this way there should be something in the docs,
but then what does Options= mean?


--
Michał Grzędzicki


Re: User/group security without CGI (SuEXEC)

2008-05-05 Thread Lazy
2008/5/5 Jille Timmermans <[EMAIL PROTECTED]>:
>
>  Hello hackers!
>
>  I was thinking of creating a more secure environment for running
>  webscripts (mod_php in my case);
>  I want to run PHP scripts as their owner.
>
>  I thought of the following schemes:
>  http://junk.quis.cx/fViKmLRi/apache-user-scheme-p1.png
>  http://junk.quis.cx/bPkxwAbI/apache-user-scheme-p2.png
>
>  And a setting:
>  ExecutiveUser %n # This should run php scripts as $script-owner
>  ExecutiveUser www-%n # this should run php scripts as www-$scriptowner
>  ExecutiveGroup www
>  ExecutiveGroup www-%n
>  (%n meaning the script-owners username, and eg %u for the script-owners
> uid)
>
>  This would (eg) enable me to:
>  [EMAIL PROTECTED]:~# id
>  uid=1000(quis) gid=1000(users) groups=1000(users),1(www-quis)
>  [EMAIL PROTECTED]:~# id www-quis
>  uid=1(www-quis) gid=1(www-quis) groups=1(www-quis)
>  [EMAIL PROTECTED]:~# chown quis:www-quis public_html
>  [EMAIL PROTECTED]:~# chmod 750 public_html
>
>  So only 'my' apache-runas user can access my scripts.
>
>  What do you think about this idea?
>  It does decrease the performance a bit (Workers should parse the
>  request, put it in some shm, and the Executive should pick it up from
>  the shm and really run the php script; see the links above for the
>  terms Worker and Executive).
>  But if the option is not specified it is possible to do it 'the old way'.
>  Would it be possible to implement this as an MPM, or a module?
>  (I don't know enough (yet) about Apache to say that.)
>  If that is possible there is no loss when it is disabled.

Take a look at peruser (http://www.telana.com/peruser.php).

It supports SSL, keep-alive, chroot and chuid per vhost.

In simple configurations it seems to work out of the box, with some quirks:
1) graceful restarts segfault (Apache continues to work)
2) on machines with multiple processors it hangs badly on graceful restarts
3) some minor issues with the SSL cache

Last week I think I ironed out 1 & 2; gracefuls now work flawlessly on a
busy web server (2x dual-core Opteron, around 300 different users with
many more vhosts).

Sadly the support list for peruser seems to be dead and the latest patch
is based on 2.2.3.

I fixed 2 race conditions, added limited SSL support for
NameVirtualHosts and did some minor patches.

All without an answer, so I guess peruser isn't in active development anymore.

There is still a memory leak to plug; maybe my patches did something
wrong, but for now it's not a big headache.

Peruser is now quite usable for me, and I have some ideas to improve it.
I will do them anyway because I need this for my work.

Somebody told me to fork it, but will anyone care?

-- 
Michal Grzedzicki


AllowOverride Options= vs Options issues see bug 44262

2008-05-03 Thread Lazy
see https://issues.apache.org/bugzilla/show_bug.cgi?id=44262

The docs for 2.0 say about AllowOverride Options:
(http://httpd.apache.org/docs/2.0/mod/core.html#allowoverride)
"
Options Allow use of the directives controlling specific directory
features (Options and XBitHack).
"

and for 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html#allowoverride)
"
Options[=Option,...]Allow use of the directives controlling specific
directory features (Options and XBitHack). An equal sign may be given
followed by a comma (but no spaces) separated lists of options that
may be set using the Options command.
"

In 2.0 they don't specify which Options can be overridden, and in 2.2
=Option,... is optional, so I expect Options without =Option,... to be
compatible with the old AllowOverride Options.

The catch is that Options All isn't really all the options.
From include/http_core.h:
"
/** No directives */
#define OPT_NONE 0
/** Indexes directive */
#define OPT_INDEXES 1
/**  Includes directive */
#define OPT_INCLUDES 2
/**  FollowSymLinks directive */
#define OPT_SYM_LINKS 4
/**  ExecCGI directive */
#define OPT_EXECCGI 8
/**  directive unset */
#define OPT_UNSET 16
/**  IncludesNOEXEC directive */
#define OPT_INCNOEXEC 32
/** SymLinksIfOwnerMatch directive */
#define OPT_SYM_OWNER 64
/** MultiViews directive */
#define OPT_MULTI 128
/**  All directives */
#define OPT_ALL (OPT_INDEXES | OPT_INCLUDES | OPT_SYM_LINKS | OPT_EXECCGI)
"
So OPT_INCNOEXEC, OPT_SYM_OWNER and OPT_MULTI are omitted from All.


In 2.2.x:

AllowOverride Options=
allows everything, including MultiViews and the others.

AllowOverride Options
allows only All, and that is only OPT_INDEXES | OPT_INCLUDES |
OPT_SYM_LINKS | OPT_EXECCGI.

The bug is in server/core.c:1286; l is the string after "Options=",
which in this case is just an empty string.

static const char *set_allow_opts(cmd_parms *cmd, allow_options_t *opts,
                                  const char *l)
{
    allow_options_t opt;
    int first = 1;

    char *w, *p = (char *) l;
    char *tok_state;

    while ((w = apr_strtok(p, ",", &tok_state)) != NULL) {

        if (first) {
            p = NULL;
            *opts = OPT_NONE;
            first = 0;
        }

        if (!strcasecmp(w, "Indexes")) {
            opt = OPT_INDEXES;
        }
        else if (!strcasecmp(w, "Includes")) {
            opt = OPT_INCLUDES;
        }
...
        else if (!strcasecmp(w, "None")) {
            opt = OPT_NONE;
        }
        else if (!strcasecmp(w, "All")) {
            opt = OPT_ALL;
        }
        else {
            return apr_pstrcat(cmd->pool, "Illegal option ", w, NULL);
        }

        *opts |= opt;
    }

    (*opts) &= (~OPT_UNSET);

    return NULL;
}

In the case of Options= the while loop isn't executed and the options are
set with
    (*opts) &= (~OPT_UNSET);
which is just the previous options with the OPT_UNSET bit cleared (only
that bit changes, since OPT_UNSET is not 0), so they are left effectively
unchanged.


opts by default are

server/core.c:99 static void *create_core_dir_config(apr_pool_t *a, char *dir)
...
conf->override_opts = OPT_UNSET | OPT_ALL | OPT_INCNOEXEC | OPT_SYM_OWNER
 | OPT_MULTI;

and in server/core.c, set_override():

    ...
    d->override = OR_NONE;
    while (l[0]) {
        w = ap_getword_conf(cmd->pool, &l);

        k = w;
        v = strchr(k, '=');
        if (v) {
            *v++ = '\0';
        }

        if (!strcasecmp(w, "Limit")) {
            d->override |= OR_LIMIT;
        }
        else if (!strcasecmp(k, "Options")) {
            // in the case of Options=   v == ""
            d->override |= OR_OPTIONS;
            if (v)
                // in the case of "AllowOverride Options=", set_allow_opts()
                // leaves d->override_opts unchanged
                set_allow_opts(cmd, &(d->override_opts), v);
            else
                // in the case of "AllowOverride Options", sets ONLY!!! OPT_ALL,
                // omitting OPT_MULTI (MultiViews) and the others
                d->override_opts = OPT_ALL;

This is at least inconsistent; Options= should be a syntax error, or
OPT_NONE plus a warning.

In pre-Options= releases (it bit me hard when I upgraded from 2.0.61 to
2.2.8), AllowOverride Options allowed overriding any options, not only
those in OPT_ALL; now it suddenly stopped, but allows the old behaviour
if I set AllowOverride Options=. We might even call it a regression,
because an old, correct config stops working (the new docs don't mention
any changes, so I expect new versions to be compatible).

I will produce a patch when it gets decided what to do with an empty
"Options=" (in the docs it is defined as Options[=Option,...], so there
must be at least one option set).
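
For what it's worth, a sketch of the OPT_NONE-plus-warning variant
against the set_override() branch quoted above (untested; whether to
warn or to reject with an error string is exactly the open question):

    else if (!strcasecmp(k, "Options")) {
        d->override |= OR_OPTIONS;
        if (v && *v) {
            /* explicit list, e.g. Options=Indexes,MultiViews: unchanged */
            set_allow_opts(cmd, &(d->override_opts), v);
        }
        else if (v) {
            /* bare "Options=": warn and allow nothing, instead of silently
             * keeping the (wider) defaults */
            ap_log_error(APLOG_MARK, APLOG_WARNING, 0, cmd->server,
                         "AllowOverride Options= lists no options; "
                         "treating it as Options=None");
            d->override_opts = OPT_NONE;
        }
        else {
            /* plain "Options": behave like 1.3/2.0 and allow everything
             * that is overridable, not just OPT_ALL */
            d->override_opts = OPT_ALL | OPT_INCNOEXEC | OPT_SYM_OWNER
                               | OPT_MULTI;
        }
    }

The syntax-error alternative would simply return an error string from
set_override() when *v is '\0'.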

-- 
Michał Grzędzicki