[users@httpd] Re: users Digest 30 Jan 2019 15:52:55 -0000 Issue 5789

2019-02-28 Thread Luca Toscano
Hi Jan,

I checked the tar.gz but I only see trace files (from strace, I suppose)
and an error log, whereas what I meant in my last email was a core dump
file. I am afraid that with what you provided it is a bit difficult to
understand what the problem is.
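
In case it is useful, here is a minimal sketch of how a backtrace can be
extracted once a core dump is available (the binary and core file paths are
assumptions, adjust them to your installation):

    gdb /usr/local/apache2/bin/httpd /tmp/apache-coredumps/core
    (gdb) bt full                    # backtrace of the crashing thread
    (gdb) thread apply all bt full   # backtraces of all threads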

Luca

On Wed, Feb 27, 2019 at 09:58,  wrote:
>
> Hi Luca,
>
> Thank you for your information.
> I tried to debug this situation as described in the article.
> But unfortunately I was only able to get a core dump from this crash. I'm
> sending you this dump file. Are you able to identify in which module my
> problem is?
>
> Thanks
> Jan
>
>
>
>
>
>
> - Message from Luca Toscano  on Wed, 30 Jan 2019 
> 07:52:37 -0800 -
> To:
> users@httpd.apache.org
> Subject:
> Re: [users@httpd] Segmentation fault when builded with openssl 1.1.1
>
> Hi!
>
> On Mon, Jan 28, 2019 at 06:36,  wrote:
> >
> > Hi,
> >
> > I have an issue with httpd version 2.4.38 when it is built with openssl
> > 1.1.1 and mod_cluster.
> > There are repeated error messages in error.log:
> >
> > AH00052: child pid  exit signal Segmentation fault (11)
> >
> > These error messages are suppressed when mod_ssl is disabled
> >
> > Built on Linux 3.10.0-693.1.1.el7.x86_64
> >
> > httpd server built with:
> >
> > APR=apr-1.6.5.tar.gz
> > APR_UTIL=apr-util-1.6.1.tar.gz
> > HTTPD=httpd-2.4.38.tar.gz
> > LIBXML=libxml2-2.9.7.tar.gz
> > M4=m4-1.4.18.tar.gz
> > MOD_CLUSTER=mod_cluster-1.3.10.Final.tar.gz
> > MOD_SECURITY=modsecurity-2.9.2.tar.gz
> > NGHTTP2=nghttp2-1.36.0.tar.gz
> > OPENSSL=openssl-1.1.1a.tar.gz
> > PCRE=pcre-8.42.tar.gz
> >
> > Do you have any idea what could be wrong, or is it possibly a bug in the
> > current version of the httpd server? Because when I tried to build httpd 2.4.37
> > the same way, with the same prerequisites and also with openssl 1.1.1, there is
> > no problem with this combination.
>
> could you follow http://httpd.apache.org/dev/debugging.html#crashes
> and report back the stacktrace of the segmentation fault? It would be
> very useful to understand which module is triggering this. mod_cluster
> is not part of the standard httpd distribution, so in case of issues
> with it I'd suggest following up with its maintainers to get more info
> about how to fix the issue.
>
> Thanks!
>
> Luca
>
>




Re: [users@httpd] Apache upgrade 2.2 -> 2.4 and "PerlAuthenHandler Authen::Simple::IMAP"

2019-02-19 Thread Luca Toscano
Hi again,

Can you use something like LogLevel warn perl:trace8 and see if you
get more info in the error log?
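
For reference, a minimal sketch of what I mean (to be placed in the main
config or in the relevant VirtualHost):

    # Keep the global level at warn, but log mod_perl at trace8
    LogLevel warn perl:trace8

The per-module syntax is documented at
https://httpd.apache.org/docs/2.4/mod/core.html#loglevel.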

Luca

On Wed, Feb 20, 2019 at 02:23, Jobst Schmalenbach
 wrote:
>
> On Mon, Feb 18, 2019 at 07:47:20AM +0100, Luca Toscano 
> (toscano.l...@gmail.com) wrote:
> > Hi Jobst,
> >
> > On Mon, Feb 18, 2019 at 04:05, Jobst Schmalenbach
> >  wrote:
> >
> > Have you installed the Perl Authen::Simple::IMAP library? This seems
> > more a Perl issue than an httpd one. I'd follow up with the mod_perl
> > community support (the module is not part of the standard httpd
> > distribution - https://perl.apache.org/maillist/index.html).
>
> I did via yum.
> I tested it via a small perl script, too, nothing wrong with that.
>
> The problem is I cannot see any error messages in the logs, nothing.
> If I could, I would have some info, but it's plain nothing.
>
> Jobst
>
>
>
> --
> f u cn rd ths, u cn gt a gd jb n cmptr prgmmng. [Anon]
>
>   | |0| |   Jobst Schmalenbach, General Manager
>   | | |0|   Barrett & Sales Essentials
>   |0|0|0|   +61 3 9533 , POBox 277, Caulfield South, 3162, Australia
>
>




Re: [users@httpd] Port 80 error still there even though listening on different port

2019-02-17 Thread Luca Toscano
Hi Mike,
On Mon, Feb 18, 2019 at 04:08, Mike Starr
 wrote:
>
> Hi, I am still getting the ubiquitous Port 80 blocked error when starting 
> Apache even though I am listening on port 8080 in httpd.conf.
>
> What am I doing wrong?

I assume your OS is Linux, so what I'd do is run a command like "sudo
netstat -nltp" to see what process owns port 80. If it is httpd, I'd
check whether httpd.conf contains a "Listen 80" statement (if your
distribution splits config files across multiple places you might have a
"default" config somewhere that you are not aware of, for example).

Hope that helps!

Luca




Re: [users@httpd] Apache upgrade 2.2 -> 2.4 and "PerlAuthenHandler Authen::Simple::IMAP"

2019-02-17 Thread Luca Toscano
Hi Jobst,

On Mon, Feb 18, 2019 at 04:05, Jobst Schmalenbach
 wrote:
>
> Hi
>
> I have just started upgrading all of my CentOS servers from 6.X to 7.X.
> With that Apache gets upgraded from 2.2 to 2.4.
>
> While I have fixed most of the issues, one that I cannot solve is the
> "PerlAuthenHandler Authen::Simple::IMAP" directive in .htaccess files.
>
> I use this frequently on many machines as it is really easy for me to look
> after.
>
> Using apache 2.2 this used to work like a charm with an .htaccess file in the 
> directory to protect:
>
> satisfy any
> Order deny,allow
> deny from all
>
> AuthName "Protected by IMAP credentials"
> AuthType Basic
> require user USER1 USER2
> PerlAuthenHandler Authen::Simple::IMAP
> PerlSetVarAuthenSimpleIMAP_host 
> "CENTRAL.IMAPS.SERVER.HOST.NAME"
> PerlSetVarAuthenSimpleIMAP_protocol "IMAPS"
>
> allow from localhost
> allow from THESERVER
>
> I rewrote this for apache 2.4 (not repeating the perl stuff), with the same
> .htaccess file:
>
>
>  Require user USER1 USER2
>  # do not turn this off, or else this will not work.
>  Require ip 127.0.0.1
>  Require host localhost
>  Require host THESERVER
>
>
> In the server's httpd.conf file I have:
>
>PerlRequire /etc/httpd/conf/startup.pl
>
> which contains this:
>
>#!/bin/env /usr/bin/perl
>use strict;
>use warnings;
>use Authen::Simple::IMAP;
>1;
>
> This loads with no error messages.
>
> The problem really is:
>
>   ==> error_log <==
>   failed to resolve handler Authen::Simple::IMAP
>   failed to resolve handler Authen::Simple::IMAP
>   failed to resolve handler Authen::Simple::IMAP
>   failed to resolve handler Authen::Simple::IMAP

Have you installed the Perl Authen::Simple::IMAP library? This seems
more a Perl issue than an httpd one. I'd follow up with the mod_perl
community support (the module is not part of the standard httpd
distribution - https://perl.apache.org/maillist/index.html).

Hope that helps!

Luca




Re: [users@httpd] Stupid question time - VirtualHost

2019-02-02 Thread Luca Toscano
Hi Jeff!

On Fri, Feb 1, 2019 at 16:02, Jeff Cauhape
 wrote:
>
> My usage of Apache has been pretty plain vanilla, and now I am required to
>
> add a virtual host to a system, and I'm wondering what I'm doing wrong. My hunch
>
> is that it’s obvious to others.
>
>
>
> I am using Apache 2.4.6 as reported by httpd -v
>
>
>
> In my httpd.conf file I have:
>
> …
>
> Listen web1e.detr.nv:80
>
> Listen web1e.detr.nv:280
>
> …
>
> and
>
> 
>
> ServerName survey.nvdetr.org
>
> UseCanonicalName Off
>
> DocumentRoot "/var/www/html/survey/"
>
> ScriptAlias /cgi-bin/ "/var/www/cig-bin/survey/cgi-bin/"
>
> …
>
> 
>
>
>
> Question: Isn’t it true that I must have a Listen directive for each 
> VirtualHost?
>
>
>
> However, if I try to start the apache server configured like this I get an 
> error message that
>
> the port 8090 (or any other number I choose) is already in use and not 
> available. This causes
>
> apache to fail to start.
>
>
>
> # lsof -I :280
>
>
>
> and
>
>
>
> # netstat -ltnp
>
>
>
> Do not show the port in use by anything. I can change the port number to 
> anything I choose
>
> and the results are the same. This suggests to me that the problem is in 
> apache config somewhere.
>
>
>
> If I comment out the Listen directive for the VirtualHost, I don't get the 
> error, but I don’t see any
>
> process listening on the port either.
>
>
>
> Ideas? Suggestions?

Did you check https://httpd.apache.org/docs/2.4/vhosts/examples.html ?
There are useful examples in there; they should clarify all doubts.
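
For instance, a minimal sketch (the ServerName and paths are taken from your
mail, the rest is illustrative): Listen is normally given just a port or an
IP:port pair, one per port, and every VirtualHost then binds to that port:

    Listen 80
    Listen 280

    <VirtualHost *:280>
        ServerName survey.nvdetr.org
        DocumentRoot "/var/www/html/survey/"
    </VirtualHost>

Two Listen directives for the same port (for example one in a distribution
default config and one added by hand) are a common cause of the "already in
use" error at startup.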

Hope that helps!

Luca




Re: [users@httpd] Segmentation fault when builded with openssl 1.1.1

2019-01-30 Thread Luca Toscano
Hi!

On Mon, Jan 28, 2019 at 06:36,  wrote:
>
> Hi,
>
> I have an issue with httpd version 2.4.38 when it is built with openssl
> 1.1.1 and mod_cluster.
> There are repeated error messages in error.log:
>
> AH00052: child pid  exit signal Segmentation fault (11)
>
> These error messages are suppressed when mod_ssl is disabled
>
> Built on Linux 3.10.0-693.1.1.el7.x86_64
>
> httpd server built with:
>
> APR=apr-1.6.5.tar.gz
> APR_UTIL=apr-util-1.6.1.tar.gz
> HTTPD=httpd-2.4.38.tar.gz
> LIBXML=libxml2-2.9.7.tar.gz
> M4=m4-1.4.18.tar.gz
> MOD_CLUSTER=mod_cluster-1.3.10.Final.tar.gz
> MOD_SECURITY=modsecurity-2.9.2.tar.gz
> NGHTTP2=nghttp2-1.36.0.tar.gz
> OPENSSL=openssl-1.1.1a.tar.gz
> PCRE=pcre-8.42.tar.gz
>
> Do you have any idea what could be wrong, or is it possibly a bug in the
> current version of the httpd server? Because when I tried to build httpd 2.4.37
> the same way, with the same prerequisites and also with openssl 1.1.1, there is
> no problem with this combination.

could you follow http://httpd.apache.org/dev/debugging.html#crashes
and report back the stacktrace of the segmentation fault? It would be
very useful to understand which module is triggering this. mod_cluster
is not part of the standard httpd distribution, so in case of issues
with it I'd suggest following up with its maintainers to get more info
about how to fix the issue.
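
As a concrete starting point, a minimal sketch of what that page describes
(the directory is an example; the run-time user depends on your setup):

    # httpd.conf
    CoreDumpDirectory /tmp/apache-coredumps

    # shell, before (re)starting httpd
    mkdir -p /tmp/apache-coredumps
    chown daemon /tmp/apache-coredumps   # use the User your httpd runs as
    ulimit -c unlimited
    apachectl restart

Once a child crashes, the resulting core file can be inspected with gdb to
obtain the stacktrace.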

Thanks!

Luca




Re: [users@httpd] Developing Private Cache Module

2018-09-16 Thread Luca Toscano
On Fri, Sep 14, 2018 at 06:39, Yann Ylavic
 wrote:
>
> Hi Thomas,
>
> On Fri, Sep 14, 2018 at 4:18 AM Thomas Salemy  wrote:
> >
> > I want to redevelop the shared object cache that is used to filter
> > HTTP requests. Specifically, I want to serve requests even faster by
> > replacing the module's current structure with this concurrency
> > mechanism.
>
> Could you please be more explicit on what "module" you are talking
> about, which source file/code?
>
> >
> > I am hoping that one of you might be able to describe to me more in
> > detail how this module works and the details of the current
> > concurrency mechanism supporting it so that I may redesign it and
> > prove the value of transactional memory in my research project.
>
> Looks interesting, but for now a bit too vague...
> Also, I think this discussion about design/code/implementation would
> better fit the d...@httpd.apache.org mailing list, for a wider
> developers/technical audience.
>
> Thanks for reaching out to us anyway!

Other than what Yann mentioned, I'd suggest reading these documents to
get an idea of the httpd module and filter architecture (if you
haven't already done so):

https://httpd.apache.org/docs/2.4/developer/
https://httpd.apache.org/docs/2.4/developer/modguide.html
https://httpd.apache.org/docs/2.4/developer/output-filters.html
(and others)
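
To give a first flavour of the module API described in the guides above,
here is a minimal, hypothetical content-handler skeleton (mod_example is
purely illustrative, not a shipped module):

    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"
    #include "ap_config.h"

    /* Content handler: only fires when "SetHandler example-handler" is set */
    static int example_handler(request_rec *r)
    {
        if (!r->handler || strcmp(r->handler, "example-handler") != 0) {
            return DECLINED;
        }
        ap_set_content_type(r, "text/plain");
        ap_rputs("Hello from mod_example\n", r);
        return OK;
    }

    /* Register the handler hook with the core */
    static void example_register_hooks(apr_pool_t *p)
    {
        ap_hook_handler(example_handler, NULL, NULL, APR_HOOK_MIDDLE);
    }

    /* Module declaration: no per-directory or per-server config in this sketch */
    module AP_MODULE_DECLARE_DATA example_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        example_register_hooks
    };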

Hope that helps!

Luca




Re: [users@httpd] Apache 2.4 mod_ratelimit breaks mod_autoindex

2018-08-30 Thread Luca Toscano
More info: https://bz.apache.org/bugzilla/show_bug.cgi?id=62568

Luca

2018-08-29 23:02 GMT+02:00 Eric Covener :
> mod_ratelimit is broken in the current release and due to be fixed in
> the next update.
> On Wed, Aug 29, 2018 at 4:07 PM Aram Akhavan  wrote:
>>
>> I'm trying to use mod_ratelimit to enable bandwidth limiting on my entire 
>> apache server.
>>
>> If I add the following to my apache2.conf:
>>
>> SetOutputFilter RATE_LIMIT
>> SetEnv rate-limit 1024
>> SetEnv rate-initial-burst 1024
>>
>> my indexes stop working. The same thing happens if I add it within a 
>> . I'm using fancy-index, and have copied the contents of its 
>> .htaccess into my mods-enabled/autoindex.conf
>>
>> The html that the server returns when I access an index is
>>
>> 
>> 
>> 
>>   
>>   
>>   
>>   
>>   
>> 
>> 
>>
>> If I instead add those rate limit lines to a  directive, then the 
>> indexing is fixed, and rate limiting works on that folder. However, I'm 
>> trying to use this to rate-limit Nextcloud downloads, and applying the limit 
>> to that virtual host or the root directory doesn't seem to work at all, 
>> hence my desire to apply the rate-limiting to the entire server.
>>
>> Thanks,
>>
>> Aram
>
>
>
> --
> Eric Covener
> cove...@gmail.com
>
>




Re: [users@httpd] mod_wsgi in Apache 2.4

2018-06-01 Thread Luca Toscano
Hi!

2018-06-01 15:38 GMT+02:00 Stormy :

> To support Python code, it appears that mod_wsgi is necessary, or at least
> desirable. It appears to function correctly within Apache 2.4, but I cannot
> find it in the *Apache* documentation <https://httpd.apache.org/docs/2.4/mod/> (the developer's documentation is easily available.)
>
> Does anyone have any experience, thoughts, caveats, recommendations?
>
>
mod_wsgi is a third party module not shipped with httpd, but recently
https://httpd.apache.org/docs/current/mod/mod_proxy_uwsgi.html was donated
to the project and now it is available for 2.4 too.
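
As an example, a minimal sketch of proxying to a uWSGI backend with it (the
address, port and path are assumptions):

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so

    # Forward /app/ to a uWSGI server speaking the uwsgi protocol
    ProxyPass "/app/" "uwsgi://127.0.0.1:3031/"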

Hope that helps,

Luca


Re: [users@httpd] Re: mod_suexec with mod_userdir and fcgid (webapps in subdirs with separated user context)

2018-05-23 Thread Luca Toscano
Hi Jonas,

2018-05-10 0:59 GMT+02:00 Jonas Meurer :
>
>
> Thanks a ton. I'm still not 100% sure whether I'm doing it the right way,
> but it seems to me as if I just discovered two bugs in Apache2 suExec that
> make crazy workarounds necessary.
>
> What do you think?
>

Sorry for the lag in answering. I reviewed the code a bit and found out
that this is a pretty common use case (searching Google for AP_USERDIR_SUFFIX
and suexec revealed a ton of material). suexec is compiled separately
from httpd, since, as you can see from the source, it gets a main() of its
own. This means that whatever you set in httpd's config will not affect
AP_USERDIR_SUFFIX, which is a parameter compiled into suexec (you can tune
it via httpd's configure at build time, but once you create the suexec
binary it is fixed). As far as I can see there are suexec variants shipped
with some distributions that allow a suexec config file, but I don't have a
lot of experience with systems like these.

On this list there should be people who have run into the same issue that you
encountered; let's see if another ping triggers some answers :)

Hope that helps!

Luca


Re: [users@httpd] Running Lua Script using mod_lua

2018-05-17 Thread Luca Toscano
Hi,

2018-05-16 12:22 GMT+02:00 Hemant Chaudhary 
:

> Hi,
>
> While running lua_script using mod_lua, I am getting this error in
> error_log. What does it mean
> "PANIC: unprotected error in call to Lua API (core and library have
> incompatible numeric types)"
>
>
What version of Lua are you using? You can quickly check with something
like ldd /usr/local/apache2/modules/mod_lua.so. It might be a mismatch
between what you are trying to execute (the script) and what you are using
as the interpreter (mod_lua), but I haven't investigated it very deeply so I
might be wrong :)
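
For example (the module path is an assumption, adjust it to your layout):

    # Which liblua was mod_lua linked against?
    ldd /usr/local/apache2/modules/mod_lua.so | grep -i lua
    # Which interpreter are you using to develop/test the script?
    lua -v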

Hope that helps,

Luca


Re: [users@httpd] Missing headers on 403 pages

2018-05-09 Thread Luca Toscano
Hi Gradus,

2018-05-09 9:18 GMT+02:00 Gradus Kooistra :

> Dear Sir/Madam,
>
> We set up apache to set headers, like X-Frame-Options.
> But this doesn't work for the 403 pages; only
> Strict-Transport-Security works. On non-error pages, the headers are
> showing correctly in the browser/security scans.
>
> The headers are set to the virtual hosts and later also to the global
> apache configuration, without any luck.
>
> The problem is that some security scans show warnings to the customers and
> they think the sites are unsafe.
>
> Is it possible to set the headers so that 403 pages also deliver them
> to the browsers?
>
>
Have you tried the "always" condition of the Header directive
("Header always set ...")? Ref:
https://httpd.apache.org/docs/current/mod/mod_headers.html#header
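
For example, a minimal sketch (the header values are illustrative):

    # "always" also applies the header to non-2xx responses such as 403
    Header always set X-Frame-Options "SAMEORIGIN"
    Header always set X-Content-Type-Options "nosniff"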

Hope that helps,

Luca


Re: [users@httpd] mod_ratelimit working by steps ?

2018-05-08 Thread Luca Toscano
2018-04-22 21:15 GMT+02:00 :

> Hi,
>
> I created a 4MB file and rate limited its directory container in the
>> httpd's conf, and tested 8/20/30/etc.. settings as you suggested with
>> curl:
>>
>> curl http://localhost/test.txt > /dev/null (in this way I drop the
>> returned response but keep the curl's connection metadata summary).
>>
>> In every case I get the expected result (average Dload speed).
>>
>
> Thanks a bunch for testing this, and confirming that something is wrong on
> my side.
> After more tests, I'm pretty sure the problem comes from a bad interaction
> between mod_ratelimit and mod_proxy.
> (sorry, I forgot to mention that the path I was trying to rate-limit is
> indeed a tomcat app behind mod_proxy).
>
> Did you execute your performance tests in localhost? And also, did you
>> use another tool other than Firefox? I'd be curious to know your
>> results with curl executed in localhost.
>>
>
> I've tried the following (Excerpts from my config at the end of this mail):
>
> 0) rate-limit on tomcat app proxied through mod_proxy (previous mail)
>=> rate-limit works in steps, and does not limit anything if rate-limit
> > 40
>(tried locally with wget)
> 1) rate-limit on a true folder, served by apache :
>=> rate-limit is working as expected
> 2) rate-limit on file served through python's SimpleHttpServer, proxied
> by mod_proxy
>=> rate-limit works by step.
>
> In conclusion, tomcat is not at fault, since python's SimpleHttpServer
> also has a problem, and the trouble comes from my reverse proxy.
> In the case of a reverse proxy, I'm not sure which part of the connection
> gets rate-limited?
> Is that a known problem?
> Or am I trying to do something totally bogus here?
> Any ideas to achieve my goal? (that is, limiting the bandwidth used by the
> tomcat app)
>
>
I opened a tracking task at
https://bz.apache.org/bugzilla/show_bug.cgi?id=62362, where I tried to
write an explanation of what I think is happening.

Thanks for the report!

Luca


Re: [users@httpd] Re: mod_suexec with mod_userdir and fcgid (webapps in subdirs with separated user context)

2018-04-24 Thread Luca Toscano
Hi Jonas,

2018-04-23 15:40 GMT+02:00 Jonas Meurer :

> Hello again,
>
> maybe my previous mail was too verbose, or maybe simply nobody has an
> idea. Still I'd like to give it a second try:
>
> Do you have a good idea why php-cgi7.0 throws the following error when
> used with mod_fcgid, mod_usermod and mod_suexec?
>
> uid: (1002/webapp1) gid: (1002/webapp1) cmd: php-fcgi-starter cannot get
> docroot information (/var/www/webapp1)
>
> $ ls -al /var/www/webapp1
> drwxr-xr-x 9 root root 4096 Jun 29  2014 .
> drwxr-x---  2 webapp1 webapp1  4096 Nov  7 15:14 php-fcgi
> drwxr-x---  2 webapp1 webapp1  4096 Apr 11  2015 www
> [...]
>
> The same setup works perfectly fine without mod_usermod (i.e. when the
> whole VHost has a dedicated suexec user). Only with mod_usermod, we get
> this strange error.


Premise: I am super ignorant about suexec & C, but this snippet of code in
suexec.c seems to be the one returning the error:

    if (getcwd(cwd, AP_MAXPATH) == NULL) {
        log_err("cannot get current working directory\n");
        exit(111);
    }

    if (userdir) {
        if (((chdir(target_homedir)) != 0) ||
            ((chdir(AP_USERDIR_SUFFIX)) != 0) ||
            ((getcwd(dwd, AP_MAXPATH)) == NULL) ||
            ((chdir(cwd)) != 0)) {
            log_err("cannot get docroot information (%s)\n",
                    target_homedir);
            exit(112);
        }
    }

As far as I can see, this is what it tries to do:

- save the current working dir to 'cwd'
- change dir to "target_homedir", which in this case should be
/var/www/webapp1
- change dir to AP_USERDIR_SUFFIX, which, if not redefined, should be
"public_html" (#define AP_USERDIR_SUFFIX "public_html" in suexec.h)
- set the variable 'dwd' (docroot working directory) to the above
- change dir back to cwd (the current working directory)

So I'd try to add a public_html directory and see how it goes.
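
For example, a minimal sketch (ownership and permissions may need adjusting
to your setup):

    mkdir /var/www/webapp1/public_html
    chown webapp1:webapp1 /var/www/webapp1/public_html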

Hope that helps!

Luca


Re: [users@httpd] mod_ratelimit working by steps ?

2018-04-21 Thread Luca Toscano
Hi,

2018-04-19 13:47 GMT+02:00 :

> Hello all,
>
> I'm using Apache 2.4.24 on Debian 9 Stable, behind a DSL connection, with
> an estimated upload capacity of ~130kB/s.
> I'm trying to limit the bandwidth available to my users (per-connection
> limit is fine).
> However, it seems to me that the rate-limit parameter is coarsely grained :
>
> - if I set it to 8, users are limited to 8 kB/s
> - if I set it to 20, or 30, users are limited to 40 kB/s
> - if I set it to 50, 60 or 80, users are limited to my BW, so ~120 kB/s
>
> (tests were done by changing the parameter, restarting apache, then
> downloading a file using Firefox. Observed speed is the one given by
> Firefox).
>
> I found at least one person with a similar problem :
> https://webmasters.stackexchange.com/questions/101988/strang
> e-behaviour-with-apache-mod-ratelimit
>
> It seems to me that rate-limit is working in steps, rather than using the
> provided value. Is this expected? Could something else on my network stack be
> the culprit?
>
> Thanks for reading !
>

I created a 4MB file, rate limited its parent directory in the httpd
conf, and tested the 8/20/30/etc. settings as you suggested with curl:

curl http://localhost/test.txt > /dev/null (in this way I drop the returned
response but keep curl's connection metadata summary).

In every case I get the expected result (average Dload speed).

Did you execute your performance tests on localhost? Also, did you use a
tool other than Firefox? I'd be curious to know your results with curl
executed on localhost.
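
For reference, a minimal sketch of the test configuration I used (path and
value are illustrative):

    <Directory "/usr/local/apache2/htdocs/ratelimit-test">
        SetOutputFilter RATE_LIMIT
        SetEnv rate-limit 30
    </Directory>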

Thanks!

Luca


Re: [users@httpd] Require directives

2018-04-17 Thread Luca Toscano
Hi Robert,

2018-04-17 16:27 GMT+02:00 Robert Schweikert :

> Hi,
>
> Configuration question.
>
> Apache version 2.4.23
>
> What I am trying to do is have users authenticate but only allow access
> to that authentication method from known IP ranges. To this effect I
> have a config file that sets:
>
> 
> Options +Indexes +FollowSymLinks
> IndexOptions +NameWidth=*
>
> PerlAuthenHandler THE::PERL::MODULE
> AuthName MODULE
> AuthType Basic
> Require valid-user
> Require expr %{REQUEST_URI} =~ m#^/SOME_EXCEPTION/.*#
>
> Require ip A_VERY_LONG_LIST_OF_IP_RANGES
> Require ip ANOTHER_VERY_LONG_LIST_OF_IP_RANGES
> 
>
> The observed behavior is what could be described as "or" behavior.
> Meaning even traffic from outside the specified IP ranges is allowed to
> hit the auth handler, i.e. the user gets a username/password request
> when accessing a path that is not in the "SOME_EXCEPTION" path.
>
> What I am trying to achieve is that Apache blocks any access if the
> traffic originates from outside the specified IP ranges.
>
> Is there a potential that I am hitting some limit of the number of IP
> ranges specified and thus the whole mechanism of limiting by IP is ignored?
>
> Am I simply mis-interpreting the documentation and I need to structure
> the restrictions differently?
>
> Is there some "and" directive to tie the requires together in an "and"
> fashion to ensure all "Require" directives are considered?


This might be useful:
https://httpd.apache.org/docs/2.4/mod/mod_authz_core.html#logic. By default,
multiple Require directives act as if wrapped in RequireAny, whereas you
probably need RequireAll.
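
For example, a minimal sketch of combining the two conditions (the IP
ranges are placeholders):

    <RequireAll>
        # the request must come from one of the allowed ranges...
        <RequireAny>
            Require ip 192.0.2.0/24
            Require ip 198.51.100.0/24
        </RequireAny>
        # ...and it must be an authenticated user or the exception path
        <RequireAny>
            Require valid-user
            Require expr %{REQUEST_URI} =~ m#^/SOME_EXCEPTION/.*#
        </RequireAny>
    </RequireAll>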

Hope that helps!

Luca


Re: [users@httpd] ProxyErrorOverride on with PHP-FPM

2018-04-14 Thread Luca Toscano
Hi Matthias,

2018-04-11 11:34 GMT+02:00 Matthias Leopold :

> Hi,
>
> I'm trying to get rid of the message
>
> [proxy_fcgi:error] ... AH01071: Got error 'Primary script unknown\n'
>
> in error logs (LogLevel notice) when proxying to a php-fpm daemon and the
> requested php file doesn't exist.
>
> php-fpm config in VirtualHost is
>
> 
> SetHandler  "proxy:unix:/run/php-fpm/www.sock|fcgi://foobar/"
> 
>
> When I set "ProxyErrorOverride on" the error in browser changes from "File
> not found." to "Not Found
>
> The requested URL /bla.php was not found on this server."
>


ProxyErrorOverride only controls which error page is displayed for certain
HTTP error conditions, for example to avoid relaying what the backend
returns to the external client (so internal errors wouldn't be seen,
etc.).


>
> In error.log I still get the "Primary script unknown" message. Is there a
> way to suppress this (without fiddling with LogLevel)?
>

IIUC you've set LogLevel to 'notice', but according to
https://httpd.apache.org/docs/2.4/mod/core.html#loglevel that is not enough
to suppress the 'error' level messages. You could probably try 'crit', but
bear in mind that in this way you wouldn't see any log raised at 'error'
level anymore (some of them might be important to collect/read).
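
A possibly less drastic option (a sketch, untested in this exact setup) is
to raise the threshold only for mod_proxy_fcgi using the per-module LogLevel
syntax:

    # keep 'notice' globally, silence proxy_fcgi messages below 'crit'
    LogLevel notice proxy_fcgi:crit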

Hope that helps!

Luca


Re: [users@httpd] proxy_fcgi - force flush to client

2018-03-03 Thread Luca Toscano
2018-02-19 12:07 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

> Hello,
>
>
> On 19.02.2018 at 10:11, Hajo Locke wrote:
>
> Hello,
>
> On 08.02.2018 at 19:33, Luca Toscano wrote:
>
>
>
> 2018-02-02 12:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
>
>>
>>
>> On 02.02.2018 at 07:05, Luca Toscano wrote:
>>
>> Hello Hajo,
>>
>> 2018-02-01 13:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
>>
>>> Hello Luca,
>>>
>>> On 01.02.2018 at 09:10, Hajo Locke wrote:
>>>
>>> Hello Luca,
>>>
>>> On 01.02.2018 at 04:46, Luca Toscano wrote:
>>>
>>> Hi Hajo,
>>>
>>> 2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:
>>>
>>>> Hello List,
>>>>
>>>> currently i compare features and behaviour of proxy_fcgi to classical
>>>> methods like mod_fastcgi/mod_php.
>>>>
>>>> mod_php/fastcgi have options to send every output from backend
>>>> immediately to client. So it is possible to see progressing output in
>>>> browser and not complete websiteoutput at once.
>>>>
>>>> Here is an example script:
>>>> https://pastebin.com/4drpgBMq
>>>>
>>>> if you ran this with php-cli or adjusted mod_php/mod_fastcgi you see
>>>> progress in browser and numbers 0 1 2 appear one after another.
>>>> If you run this with proxy_fcgi you will see no progress, but complete
>>>> output at once.
>>>>
>>>> mod_proxy knows about worker parameter flushpackets, but the docs say
>>>> this is in effect only for AJP. I can confirm that this and related options
>>>> have no effect.
>>>> There are some workarounds posted in the web, but only one worked for
>>>> me. If i add following line to the script, i also see a progress with
>>>> proxy_fcgi in browser:
>>>>
>>>> header('Content-Encoding: none');
>>>>
>>>> Somebody knows a working workaround which works without scriptediting?
>>>> some workarounds tell about using "SetEnv no-gzip 1". This was not working
>>>> for me and iam not please to disable content-compression.
>>>> Is it planned to support >>flushpackets<< also to proxy_fcgi?
>>>>
>>>> May be this is not important for typical website but some
>>>> service/monitoring scripts.
>>>>
>>>>
>>> The functionality is committed to trunk but never backported to 2.4.x
>>> because I was not sure about its importance, it looks like some users might
>>> benefit from it :)
>>>
>>> The trunk patch is http://svn.apache.org/r1802040, it should apply to
>>> 2.4.x if you want to test it and give me some feedback.
>>>
>>> Thanks!
>>>
>>> I tried this and it works great. I see same behaviour as expected with
>>> other methods. I think some users might benefit from this. I saw some
>>> discussion related to this topic and people just ended up by ungainly
>>> workaround.
>>> Great news!
>>>
>>> Unfortunately i spoke too soon. I was too euphoric when reading your
>>> answer ;)
>>> Behaviour is definitively more then expected, but it seems there is
>>> still a minimum-limit for the buffer to flush. I suppose this limit is 4096
>>> bytes.
>>> you can comprehend this with pastebinexample above.
>>> Change line 2 from "$string_length = 14096;" to "$string_length = 1331;"
>>> When calling this php-file you will see no progress. All output appears
>>> at once.
>>> Change scriptline to "$string_length = 1332;", you will see at least 2
>>> steps of output, because first step seems to break this 4096 bufferlimit.
>>> increasing $string_length more and more results in more steps of output.
>>> So current mod_proxy_fcgi.c from svn with configured "flushpackets=On"
>>> seems to work exaktly like "flushpackets=auto iobuffersize=4096".
>>> setting iobuffersize to lower numbers has no effect.
>>> What do you think? Is there still a hard-coded limit or do i have a
>>> problem in my configuration?
>>> I would be really glad, if you could take a look at this issue.
>>>
>>
>> I am far from being an expert in PHP, but I added "ob_flush();" right
>> before "flush()" in your script and the 1331 use case seems flushing
>> correctly. Do you mind to check

Re: [users@httpd] proxy_fcgi - force flush to client

2018-02-08 Thread Luca Toscano
2018-02-02 12:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

>
>
> On 02.02.2018 at 07:05, Luca Toscano wrote:
>
> Hello Hajo,
>
> 2018-02-01 13:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
>
>> Hello Luca,
>>
>> On 01.02.2018 at 09:10, Hajo Locke wrote:
>>
>> Hello Luca,
>>
>> On 01.02.2018 at 04:46, Luca Toscano wrote:
>>
>> Hi Hajo,
>>
>> 2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:
>>
>>> Hello List,
>>>
>>> currently i compare features and behaviour of proxy_fcgi to classical
>>> methods like mod_fastcgi/mod_php.
>>>
>>> mod_php/fastcgi have options to send every output from backend
>>> immediately to client. So it is possible to see progressing output in
>>> browser and not complete websiteoutput at once.
>>>
>>> Here is an example script:
>>> https://pastebin.com/4drpgBMq
>>>
>>> if you ran this with php-cli or adjusted mod_php/mod_fastcgi you see
>>> progress in browser and numbers 0 1 2 appear one after another.
>>> If you run this with proxy_fcgi you will see no progress, but complete
>>> output at once.
>>>
>>> mod_proxy knows about worker parameter flushpackets, but the docs say
>>> this is in effect only for AJP. I can confirm that this and related options
>>> have no effect.
>>> There are some workarounds posted in the web, but only one worked for
>>> me. If i add following line to the script, i also see a progress with
>>> proxy_fcgi in browser:
>>>
>>> header('Content-Encoding: none');
>>>
>>> Somebody knows a working workaround which works without scriptediting?
>>> some workarounds tell about using "SetEnv no-gzip 1". This was not working
>>> for me and iam not please to disable content-compression.
>>> Is it planned to support >>flushpackets<< also to proxy_fcgi?
>>>
>>> May be this is not important for typical website but some
>>> service/monitoring scripts.
>>>
>>>
>> The functionality is committed to trunk but never backported to 2.4.x
>> because I was not sure about its importance, it looks like some users might
>> benefit from it :)
>>
>> The trunk patch is http://svn.apache.org/r1802040, it should apply to
>> 2.4.x if you want to test it and give me some feedback.
>>
>> Thanks!
>>
>> I tried this and it works great. I see same behaviour as expected with
>> other methods. I think some users might benefit from this. I saw some
>> discussion related to this topic and people just ended up by ungainly
>> workaround.
>> Great news!
>>
>> Unfortunately i spoke too soon. I was too euphoric when reading your
>> answer ;)
>> Behaviour is definitively more then expected, but it seems there is still
>> a minimum-limit for the buffer to flush. I suppose this limit is 4096 bytes.
>> you can comprehend this with pastebinexample above.
>> Change line 2 from "$string_length = 14096;" to "$string_length = 1331;"
>> When calling this php-file you will see no progress. All output appears
>> at once.
>> Change scriptline to "$string_length = 1332;", you will see at least 2
>> steps of output, because first step seems to break this 4096 bufferlimit.
>> increasing $string_length more and more results in more steps of output.
>> So current mod_proxy_fcgi.c from svn with configured "flushpackets=On"
>> seems to work exaktly like "flushpackets=auto iobuffersize=4096".
>> setting iobuffersize to lower numbers has no effect.
>> What do you think? Is there still a hard-coded limit or do i have a
>> problem in my configuration?
>> I would be really glad, if you could take a look at this issue.
>>
>
> I am far from being an expert in PHP, but I added "ob_flush();" right
> before "flush()" in your script and the 1331 use case seems flushing
> correctly. Do you mind to check and let me know what do you get on your
> testing environment? As far as I can see in the mod_proxy_fcgi's code the
> iobuffersize variable is taken into account..
>
> It seems that i was additional mocked by my browser. There is no need to
> edit this script, just using the right browser ;)
> I think your new mod_proxy_fcgi.c did it and my testing was incorrect. I
> think we can go into weekend..
>


Full list of commits is: svn merge -c 1802040,1807876,1808014,1805490
^/httpd/httpd/trunk .

mod_proxy_fcgi.c only patch:
http://people.apache.org/~elukey/httpd_2.4.x-mod_proxy_fcgi-force_flush.patch

If you want to give it another round of testing it would be really
appreciated; if everything is fine I'll propose it for backport to
2.4.x :)
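
For reference, a minimal sketch of applying it to a 2.4.x source tree
(paths are assumptions; the -p level depends on how the diff was generated):

    cd httpd-2.4.x
    curl -O http://people.apache.org/~elukey/httpd_2.4.x-mod_proxy_fcgi-force_flush.patch
    patch -p0 < httpd_2.4.x-mod_proxy_fcgi-force_flush.patch
    # rebuild with mod_proxy/mod_proxy_fcgi enabled, then restart httpd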

Luca


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-04 Thread Luca Toscano
2018-02-05 2:41 GMT+01:00 Eric Covener <cove...@gmail.com>:

> On Sun, Feb 4, 2018 at 8:27 PM, Luca Toscano <toscano.l...@gmail.com>
> wrote:
> > Hi Hajo,
> >
> >
> > 2018-02-01 3:58 GMT+01:00 Luca Toscano <toscano.l...@gmail.com>:
> >>
> >> Hi Hajo,
> >>
> >> 2018-01-31 2:37 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:
> >>>
> >>> Hello,
> >>>
> >>>
> >>> On 22.01.2018 at 11:54, Hajo Locke wrote:
> >>>
> >>> Hello,
> >>>
> >>> On 19.01.2018 at 15:48, Luca Toscano wrote:
> >>>
> >>> Hi Hajo,
> >>>
> >>> 2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
> >>>>
> >>>> Hello,
> >>>>
> >>>> thanks Daniel and Stefan. This is a good point.
> >>>> I did the test with a static file and this test was successfully done
> >>>> within only a few seconds.
> >>>>
> >>>> finished in 20.06s, 4984.80 req/s, 1.27GB/s
> >>>> requests: 10 total, 10 started, 10 done, 10
> succeeded, 0
> >>>> failed, 0 errored, 0 timeout
> >>>>
> >>>> so problem seems to be not h2load and basic apache. may be i should
> look
> >>>> deeper into proxy_fcgi configuration.
> >>>> php-fpm configuration is unchanged and was successfully used with
> >>>> classical fastcgi-benchmark, so i think i have to doublecheck the
> proxy.
> >>>>
> >>>> now i did this change in proxy:
> >>>>
> >>>> from
> >>>> enablereuse=on
> >>>> to
> >>>> enablereuse=off
> >>>>
> >>>> this change leads to a working h2load testrun:
> >>>> finished in 51.74s, 1932.87 req/s, 216.05MB/s
> >>>> requests: 10 total, 10 started, 10 done, 10
> succeeded, 0
> >>>> failed, 0 errored, 0 timeout
> >>>>
> >>>> iam surprised by that. i expected a higher performance when reusing
> >>>> backend connections rather then creating new ones.
> >>>> I did some further tests and changed some other php-fpm/proxy values,
> >>>> but once "enablereuse=on" is set, the problem returns.
> >>>>
> >>>> Should i just run the proxy with enablereuse=off? Or do you have an
> >>>> other suspicion?
> >>>
> >>>
> >>>
> >>> Before giving up I'd check two things:
> >>>
> >>> 1) That the same results happen with a regular localhost socket rather
> >>> than a unix one.
> >>>
> >>> I changed my setup to use tcp-sockets in php-fpm and proxy-fcgi.
> >>> Currently i see the same behaviour.
> >>>
> >>> 2) What changes on the php-fpm side. Are there more busy workers when
> >>> enablereuse is set to on? I am wondering how php-fpm handles FCGI
> requests
> >>> happening on the same socket, as opposed to assuming that 1 connection
> == 1
> >>> FCGI request.
> >>>
> >>> If "enablereuse=off" is set i see a lot of running php-workerprocesses
> >>> (120-130) and high load. Behaviour is like expected.
> >>> When set "enablereuse=on" i can see a big change. number of running
> >>> php-workers is really low (~40). The test is running some time and
> then it
> >>> stucks.
> >>> I can see that php-fpm processes are still active and waiting for
> >>> connections, but proxy_fcgi is not using them nor it is establishing
> new
> >>> connections. loadavg is low and benchmarktest is not able to finalize.
> >>>
> >>> I did some further tests to solve this issue. I set ttl=1 for this
> Proxy
> >>> and achieved good performance and high number of working childs. But
> this is
> >>> paradoxical.
> >>> proxy_fcgi knows about inactive connection to kill it, but not reenable
> >>> this connection for working.
> >>> May be this is helpful to others.
> >>>
> >>> May be a kind of communicationproblem and checking health/busy status
> of
> >>> php-processes.
> >>> Whole proxy configuration is  this:
> >>>
> >>> 
> >>> ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
> >>> 
> >>> 
> >>> 

Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-04 Thread Luca Toscano
Hi Hajo,

2018-02-01 3:58 GMT+01:00 Luca Toscano <toscano.l...@gmail.com>:

> Hi Hajo,
>
> 2018-01-31 2:37 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:
>
>> Hello,
>>
>>
>> On 22.01.2018 at 11:54, Hajo Locke wrote:
>>
>> Hello,
>>
>> On 19.01.2018 at 15:48, Luca Toscano wrote:
>>
>> Hi Hajo,
>>
>> 2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
>>
>>> Hello,
>>>
>>> thanks Daniel and Stefan. This is a good point.
>>> I did the test with a static file and this test was successfully done
>>> within only a few seconds.
>>>
>>> finished in 20.06s, 4984.80 req/s, 1.27GB/s
>>> requests: 10 total, 10 started, 10 done, 10 succeeded, 0
>>> failed, 0 errored, 0 timeout
>>>
>>> so problem seems to be not h2load and basic apache. may be i should look
>>> deeper into proxy_fcgi configuration.
>>> php-fpm configuration is unchanged and was successfully used with
>>> classical fastcgi-benchmark, so i think i have to doublecheck the proxy.
>>>
>>> now i did this change in proxy:
>>>
>>> from
>>> enablereuse=on
>>> to
>>> enablereuse=off
>>>
>>> this change leads to a working h2load testrun:
>>> finished in 51.74s, 1932.87 req/s, 216.05MB/s
>>> requests: 10 total, 10 started, 10 done, 10 succeeded, 0
>>> failed, 0 errored, 0 timeout
>>>
>>> iam surprised by that. i expected a higher performance when reusing
>>> backend connections rather then creating new ones.
>>> I did some further tests and changed some other php-fpm/proxy values,
>>> but once "enablereuse=on" is set, the problem returns.
>>>
>>> Should i just run the proxy with enablereuse=off? Or do you have an
>>> other suspicion?
>>>
>>
>>
>> Before giving up I'd check two things:
>>
>> 1) That the same results happen with a regular localhost socket rather
>> than a unix one.
>>
>> I changed my setup to use tcp-sockets in php-fpm and proxy-fcgi.
>> Currently i see the same behaviour.
>>
>> 2) What changes on the php-fpm side. Are there more busy workers when
>> enablereuse is set to on? I am wondering how php-fpm handles FCGI requests
>> happening on the same socket, as opposed to assuming that 1 connection == 1
>> FCGI request.
>>
>> If "enablereuse=off" is set i see a lot of running php-workerprocesses
>> (120-130) and high load. Behaviour is like expected.
>> When set "enablereuse=on" i can see a big change. number of running
>> php-workers is really low (~40). The test is running some time and then it
>> stucks.
>> I can see that php-fpm processes are still active and waiting for
>> connections, but proxy_fcgi is not using them nor it is establishing new
>> connections. loadavg is low and benchmarktest is not able to finalize.
>>
>> I did some further tests to solve this issue. I set ttl=1 for this Proxy
>> and achieved good performance and high number of working childs. But this
>> is paradoxical.
>> proxy_fcgi knows about inactive connection to kill it, but not reenable
>> this connection for working.
>> May be this is helpful to others.
>>
>> May be a kind of communicationproblem and checking health/busy status of
>> php-processes.
>> Whole proxy configuration is  this:
>>
>> 
>> ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
>> 
>> 
>>SetHandler "proxy:fcgi://php70fpm"
>> 
>>
>>
> Thanks a lot for following up and reporting these interesting results!
> Yann opened a thread[1] on dev@ to discuss the issue, let's follow up in
> there so we don't keep two conversations open.
>
> Luca
>
> [1]: https://lists.apache.org/thread.html/a9586dab96979bf45550c9714b36c4
> 9aa73526183998c5354ca9f1c8@%3Cdev.httpd.apache.org%3E
>
>
Reporting here what I think is happening in your test environment
when enablereuse is set to on. Recap of your settings:

/etc/apache2/conf.d/limits.conf
StartServers  10
MaxClients  500
MinSpareThreads  450
MaxSpareThreads  500
ThreadsPerChild  150
MaxRequestsPerChild   0
Serverlimit 500


ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500


   SetHandler "proxy:fcgi://php70fpm/"


request_terminate_timeout = 7200
listen = /dev/shm/php70fpm.sock
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000

By default mod_proxy allows a connection pool of ThreadsPerChild
connections 

Re: [users@httpd] stable version of 2.4 running in production?

2018-02-02 Thread Luca Toscano
Hi,

2018-02-02 16:50 GMT+01:00 renee ko :

> I am planning to upgrade Apache from 2.2 to 2.4 on RHEL 6.6.
>
> I am looking for best practice: should I perform an upgrade from 2.2 or
> install 2.4?
>

https://httpd.apache.org/docs/current/upgrading.html is a good starting
point :)

Luca


Re: [users@httpd] proxy_fcgi - force flush to client

2018-02-01 Thread Luca Toscano
Hello Hajo,

2018-02-01 13:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

> Hello Luca,
>
> On 01.02.2018 at 09:10, Hajo Locke wrote:
>
> Hello Luca,
>
> On 01.02.2018 at 04:46, Luca Toscano wrote:
>
> Hi Hajo,
>
> 2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:
>
>> Hello List,
>>
>> currently i compare features and behaviour of proxy_fcgi to classical
>> methods like mod_fastcgi/mod_php.
>>
>> mod_php/fastcgi have options to send every output from backend
>> immediately to client. So it is possible to see progressing output in
>> browser and not complete websiteoutput at once.
>>
>> Here is an example script:
>> https://pastebin.com/4drpgBMq
>>
>> if you ran this with php-cli or adjusted mod_php/mod_fastcgi you see
>> progress in browser and numbers 0 1 2 appear one after another.
>> If you run this with proxy_fcgi you will see no progress, but complete
>> output at once.
>>
>> mod_proxy knows about worker parameter flushpackets, but the docs say
>> this is in effect only for AJP. I can confirm that this and related options
>> have no effect.
>> There are some workarounds posted in the web, but only one worked for me.
>> If i add following line to the script, i also see a progress with
>> proxy_fcgi in browser:
>>
>> header('Content-Encoding: none');
>>
>> Somebody knows a working workaround which works without scriptediting?
>> some workarounds tell about using "SetEnv no-gzip 1". This was not working
>> for me and iam not please to disable content-compression.
>> Is it planned to support >>flushpackets<< also to proxy_fcgi?
>>
>> May be this is not important for typical website but some
>> service/monitoring scripts.
>>
>>
> The functionality is committed to trunk but never backported to 2.4.x
> because I was not sure about its importance, it looks like some users might
> benefit from it :)
>
> The trunk patch is http://svn.apache.org/r1802040, it should apply to
> 2.4.x if you want to test it and give me some feedback.
>
> Thanks!
>
> I tried this and it works great. I see same behaviour as expected with
> other methods. I think some users might benefit from this. I saw some
> discussion related to this topic and people just ended up by ungainly
> workaround.
> Great news!
>
> Unfortunately i spoke too soon. I was too euphoric when reading your
> answer ;)
> Behaviour is definitely more than expected, but it seems there is still
> a minimum limit for the buffer to flush. I suppose this limit is 4096 bytes.
> You can verify this with the pastebin example above.
> Change line 2 from "$string_length = 14096;" to "$string_length = 1331;"
> When calling this php-file you will see no progress. All output appears at
> once.
> Change scriptline to "$string_length = 1332;", you will see at least 2
> steps of output, because first step seems to break this 4096 bufferlimit.
> increasing $string_length more and more results in more steps of output.
> So current mod_proxy_fcgi.c from svn with configured "flushpackets=On"
> seems to work exactly like "flushpackets=auto iobuffersize=4096".
> setting iobuffersize to lower numbers has no effect.
> What do you think? Is there still a hard-coded limit or do i have a
> problem in my configuration?
> I would be really glad, if you could take a look at this issue.
>

I am far from being an expert in PHP, but I added "ob_flush();" right
before "flush()" in your script and the 1331 use case seems to flush
correctly. Do you mind checking and letting me know what you get in your
testing environment? As far as I can see in mod_proxy_fcgi's code the
iobuffersize variable is taken into account.

Luca


Re: [users@httpd] virtual host gives unexpected network read error

2018-01-31 Thread Luca Toscano
Hi David,

2018-01-29 19:45 GMT-08:00 David Mehler :

> Hello,
>
> Can someone take a look at the below virtual host configuration?
> Whenever I put it in my apache 2.4 the server returns an alert
> unexpected network read error connection aborted message. If I take it
> out the server behaves normally. Of course nothing is in any of the
> logs; I've got LogLevel set to warn. An apachectl -t says the files are
> syntactically correct.
>
> Any ideas?
>
> Thanks.
> Dave.
>
> #
> # Virtual host file
> #
>
> # The example.com http  and https virtual host
> 
>
> SSLCertificateFile "/usr/local/etc/ssl/acme/example.com/fullchain.pem"
> SSLCertificateKeyFile "/usr/local/etc/ssl/acme/private/
> example.com/privkey.pem"
> SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-
> RSA-AES128-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-
> RSA-AES128-GCM-SHA128:DHE-RSA-AES128-GCM-SHA384:DHE-RSA-
> AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA128:ECDHE-RSA-
> AES128-SHA384:ECDHE-RSA-AES128-SHA128:ECDHE-RSA-
> AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA128:DHE-
> RSA-AES128-SHA128:DHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:
> ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-
> SHA384:AES128-GCM-SHA128:AES128-SHA128:AES128-SHA128:
> AES128-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!
> EXPORT:!DES:!MD5:!PSK:!RC4:!3DES
> SSLEngine on
>
>
In this way you are enabling SSL/TLS on both port 80 and port 443; I don't
believe that will work (even if I didn't test it properly). Any reason
why you have this setting? Can you try without the "*:80"? In your case
I'd simply create a *:80 vhost to force an http->https redirect, and then
apply the SSL settings only to *:443.
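
For example, a minimal sketch (the ServerName and certificate paths reuse
the ones from your config; the redirect form is illustrative):

    <VirtualHost *:80>
        ServerName example.com
        # plain-http site only redirects to https
        Redirect permanent "/" "https://example.com/"
    </VirtualHost>

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        SSLCertificateFile "/usr/local/etc/ssl/acme/example.com/fullchain.pem"
        SSLCertificateKeyFile "/usr/local/etc/ssl/acme/private/example.com/privkey.pem"
    </VirtualHost>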

Luca


Re: [users@httpd] proxy_fcgi - force flush to client

2018-01-31 Thread Luca Toscano
Hi Hajo,

2018-01-31 1:27 GMT-08:00 Hajo Locke :

> Hello List,
>
> currently i compare features and behaviour of proxy_fcgi to classical
> methods like mod_fastcgi/mod_php.
>
> mod_php/fastcgi have options to send every output from backend immediately
> to client. So it is possible to see progressing output in browser and not
> complete websiteoutput at once.
>
> Here is an example script:
> https://pastebin.com/4drpgBMq
>
> if you ran this with php-cli or adjusted mod_php/mod_fastcgi you see
> progress in browser and numbers 0 1 2 appear one after another.
> If you run this with proxy_fcgi you will see no progress, but complete
> output at once.
>
> mod_proxy knows about worker parameter flushpackets, but the docs say this
> is in effect only for AJP. I can confirm that this and related options have
> no effect.
> There are some workarounds posted in the web, but only one worked for me.
> If i add following line to the script, i also see a progress with
> proxy_fcgi in browser:
>
> header('Content-Encoding: none');
>
> Does somebody know a working workaround that doesn't require editing the script?
> Some workarounds mention using "SetEnv no-gzip 1". This was not working
> for me and I am not pleased to disable content compression.
> Is it planned to support >>flushpackets<< also to proxy_fcgi?
>
> May be this is not important for typical website but some
> service/monitoring scripts.
>
>
The functionality is committed to trunk but was never backported to 2.4.x
because I was not sure about its importance; it looks like some users might
benefit from it :)

The trunk patch is http://svn.apache.org/r1802040; it should apply to 2.4.x
if you want to test it and give me some feedback.

Thanks!

Luca


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-31 Thread Luca Toscano
Hi Hajo,

2018-01-31 2:37 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:

> Hello,
>
>
> On 22.01.2018 at 11:54, Hajo Locke wrote:
>
> Hello,
>
> On 19.01.2018 at 15:48, Luca Toscano wrote:
>
> Hi Hajo,
>
> 2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
>
>> Hello,
>>
>> thanks Daniel and Stefan. This is a good point.
>> I did the test with a static file and this test was successfully done
>> within only a few seconds.
>>
>> finished in 20.06s, 4984.80 req/s, 1.27GB/s
>> requests: 10 total, 10 started, 10 done, 10 succeeded, 0
>> failed, 0 errored, 0 timeout
>>
>> so problem seems to be not h2load and basic apache. may be i should look
>> deeper into proxy_fcgi configuration.
>> php-fpm configuration is unchanged and was successfully used with
>> classical fastcgi-benchmark, so i think i have to doublecheck the proxy.
>>
>> now i did this change in proxy:
>>
>> from
>> enablereuse=on
>> to
>> enablereuse=off
>>
>> this change leads to a working h2load testrun:
>> finished in 51.74s, 1932.87 req/s, 216.05MB/s
>> requests: 10 total, 10 started, 10 done, 10 succeeded, 0
>> failed, 0 errored, 0 timeout
>>
>> iam surprised by that. i expected a higher performance when reusing
>> backend connections rather then creating new ones.
>> I did some further tests and changed some other php-fpm/proxy values, but
>> once "enablereuse=on" is set, the problem returns.
>>
>> Should i just run the proxy with enablereuse=off? Or do you have an other
>> suspicion?
>>
>
>
> Before giving up I'd check two things:
>
> 1) That the same results happen with a regular localhost socket rather
> than a unix one.
>
> I changed my setup to use tcp-sockets in php-fpm and proxy-fcgi. Currently
> i see the same behaviour.
>
> 2) What changes on the php-fpm side. Are there more busy workers when
> enablereuse is set to on? I am wondering how php-fpm handles FCGI requests
> happening on the same socket, as opposed to assuming that 1 connection == 1
> FCGI request.
>
> If "enablereuse=off" is set i see a lot of running php-workerprocesses
> (120-130) and high load. Behaviour is like expected.
> When set "enablereuse=on" i can see a big change. number of running
> php-workers is really low (~40). The test is running some time and then it
> stucks.
> I can see that php-fpm processes are still active and waiting for
> connections, but proxy_fcgi is not using them nor it is establishing new
> connections. loadavg is low and benchmarktest is not able to finalize.
>
> I did some further tests to solve this issue. I set ttl=1 for this Proxy
> and achieved good performance and high number of working childs. But this
> is paradoxical.
> proxy_fcgi knows about inactive connection to kill it, but not reenable
> this connection for working.
> May be this is helpful to others.
>
> May be a kind of communicationproblem and checking health/busy status of
> php-processes.
> Whole proxy configuration is  this:
>
> 
> ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
> 
> 
>SetHandler "proxy:fcgi://php70fpm"
> 
>
>
Thanks a lot for following up and reporting these interesting results! Yann
opened a thread[1] on dev@ to discuss the issue, let's follow up in there
so we don't keep two conversations open.

Luca

[1]:
https://lists.apache.org/thread.html/a9586dab96979bf45550c9714b36c49aa73526183998c5354ca9f1c8@%3Cdev.httpd.apache.org%3E


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-20 Thread Luca Toscano
2018-01-20 20:23 GMT+01:00 Luca Toscano <toscano.l...@gmail.com>:

> Hi Yann,
>
> 2018-01-19 17:40 GMT+01:00 Yann Ylavic <ylavic@gmail.com>:
>
>> On Fri, Jan 19, 2018 at 5:14 PM, Yann Ylavic <ylavic@gmail.com>
>> wrote:
>> > On Fri, Jan 19, 2018 at 1:46 PM, Daniel <dferra...@gmail.com> wrote:
>> >> I vaguely recall some issue with reuse when using unix socket files so
>> >> it was deliberately set to off by default, but yes, perhaps someone
>> >> experienced enough with mod_proxy_fcgi inner workings can shed some
>> >> light on this and the why yes/not.
>> >>
>> >> With socket files I never tried to enable "enablereuse=on" and got
>> >> much successful results, so perhaps it's safer to keep it off until
>> >> someone clarifies this issue, after all when dealing with unix sockets
>> >> the access delays are quite low.
>> >
>> > {en,dis}ablereuse has no effect on Unix Domain Sockets in mod_proxy,
>> > they are never reused.
>>
>> Well, actually it shouldn't, but while the code clearly doesn't reuse
>> sockets (creates a new one for each request), nothing seems to tell
>> the recycler that it should close them unconditionally at the end of
>> the request.
>>
>
> Would you mind to point me to the snippet of code that does this? I am
> trying to reproduce the issue and see if there is a fd leak but didn't
> manage to so far..
>

I am now able to reproduce with Hajo's settings, and indeed with
enablereuse=on I can see a lot of fds leaked via lsof:

httpd 3230 3481www-data   93u unix 0x9ada0cf60400  0t0
   406770 type=STREAM
httpd 3230 3481www-data   94u unix 0x9ada0cf60800  0t0
   406773 type=STREAM
httpd 3230 3481www-data   95u unix 0x9ada0cf66400  0t0
   406776 type=STREAM
[..]

With Yann's patch I cannot see them anymore, and h2load does not stop at
50%/60% but completes without any issue. I am still not able to understand
why this happens reading the proxy_util.c code, though :)

Luca


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-20 Thread Luca Toscano
Hi Yann,

2018-01-19 17:40 GMT+01:00 Yann Ylavic :

> On Fri, Jan 19, 2018 at 5:14 PM, Yann Ylavic  wrote:
> > On Fri, Jan 19, 2018 at 1:46 PM, Daniel  wrote:
> >> I vaguely recall some issue with reuse when using unix socket files so
> >> it was deliberately set to off by default, but yes, perhaps someone
> >> experienced enough with mod_proxy_fcgi inner workings can shed some
> >> light on this and the why yes/not.
> >>
> >> With socket files I never tried to enable "enablereuse=on" and got
> >> much successful results, so perhaps it's safer to keep it off until
> >> someone clarifies this issue, after all when dealing with unix sockets
> >> the access delays are quite low.
> >
> > {en,dis}ablereuse has no effect on Unix Domain Sockets in mod_proxy,
> > they are never reused.
>
> Well, actually it shouldn't, but while the code clearly doesn't reuse
> sockets (creates a new one for each request), nothing seems to tell
> the recycler that it should close them unconditionally at the end of
> the request.
>

Would you mind pointing me to the snippet of code that does this? I am
trying to reproduce the issue and see if there is an fd leak, but I haven't
managed to so far.

Thanks!

Luca


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Luca Toscano
Hi Hajo,

2018-01-19 13:23 GMT+01:00 Hajo Locke :

> Hello,
>
> thanks Daniel and Stefan. This is a good point.
> I did the test with a static file and this test was successfully done
> within only a few seconds.
>
> finished in 20.06s, 4984.80 req/s, 1.27GB/s
> requests: 10 total, 10 started, 10 done, 10 succeeded, 0
> failed, 0 errored, 0 timeout
>
> so problem seems to be not h2load and basic apache. may be i should look
> deeper into proxy_fcgi configuration.
> php-fpm configuration is unchanged and was successfully used with
> classical fastcgi-benchmark, so i think i have to doublecheck the proxy.
>
> now i did this change in proxy:
>
> from
> enablereuse=on
> to
> enablereuse=off
>
> this change leads to a working h2load testrun:
> finished in 51.74s, 1932.87 req/s, 216.05MB/s
> requests: 10 total, 10 started, 10 done, 10 succeeded, 0
> failed, 0 errored, 0 timeout
>
> iam surprised by that. i expected a higher performance when reusing
> backend connections rather then creating new ones.
> I did some further tests and changed some other php-fpm/proxy values, but
> once "enablereuse=on" is set, the problem returns.
>
> Should i just run the proxy with enablereuse=off? Or do you have an other
> suspicion?
>


Before giving up I'd check two things:

1) That the same results happen with a regular localhost socket rather than
a unix one.
2) What changes on the php-fpm side. Are there more busy workers when
enablereuse is set to on? I am wondering how php-fpm handles FCGI requests
happening on the same socket, as opposed to assuming that 1 connection == 1
FCGI request.
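
For 1), a minimal sketch of the TCP variant (port and pattern are only
examples, and php-fpm would need to listen on 127.0.0.1:9000 as well):

    <FilesMatch "\.php$">
        # hand PHP files to php-fpm over TCP via mod_proxy_fcgi
        SetHandler "proxy:fcgi://127.0.0.1:9000"
    </FilesMatch>
    # explicit worker so that enablereuse/max can be tuned
    <Proxy "fcgi://127.0.0.1:9000" enablereuse=on max=10>
    </Proxy>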

Luca


Re: [users@httpd] Redirect only a specific index.php page to new location

2018-01-19 Thread Luca Toscano
Hi Kory,

2018-01-18 5:53 GMT+01:00 Kory Wheatley :

> When someone types to go to http://sftpinterface/deptblogs/  or a link I
> need it to redirect to http://intranet/template_departments.cfm.  Which I
> was able to accomplish in the index.php header content with
>
> <?php
>  /* Redirect browser */
>  header("Location: http://intranet/template_departments.cfm");
>
> /* Make sure that code below does not get executed when we redirect. */
> exit;
> ?>
>
> But the problem is all pages underneath http://sftpinterface/deptblogs
> redirect to  http://intranet/template_departments.cfm.  Like I don't want
> http://sftpinterface/deptblogs/nursing to be redirected to
> http://intranet/template_departments.cfm.  I want it to stay on that page
> along with the others.  Only http://sftpinterface/deptblogs or
> http://sftpinterface/deptblogs/index.php  needs to be redirect to
> http://intranet/template_departments.cfm and not the sub directory sites
> underneath /deptblogs.  What's the possible way of doing this.
>
>
have you checked
https://httpd.apache.org/docs/current/mod/mod_alias.html#redirectmatch ?
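
Something along these lines might do it (untested, URLs taken from your mail):

    # redirect only /deptblogs and /deptblogs/index.php, not /deptblogs/nursing etc.
    RedirectMatch permanent "^/deptblogs/?(index\.php)?$" "http://intranet/template_departments.cfm"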

Luca


Re: [users@httpd] SFTP JAIL

2018-01-16 Thread Luca Toscano
Hi Rodrigo,

2018-01-16 14:51 GMT+01:00 Rodrigo Cunha :

> Hi everyone,
> I have a problem with setup sftp access.My sftp user can't  jaule.
> I configure setup with this procedures:
> https://wiki.archlinux.org/index.php/SFTP_chroot
> But when i setup my user webmaster in group sftponly my client is not work.
>
> Any feedback would be greatly appreciated.Tks
>

You have probably got the wrong list, this is for the Apache httpd
webserver :)

Luca


Re: [users@httpd] SSL checker reports server vulnerable to BEAST attack

2018-01-16 Thread Luca Toscano
Hi Robert,

2018-01-16 10:21 GMT+01:00 Robert S :

> Hi.
>
> I have run a server test on
> https://cryptoreport.rapidssl.com/checker/views/certCheck.jsp.  It
> reports that my certificate is installed correctly but the server is
> vulnerable to a BEAST attack.  It says "Make sure you have the TLSv1.2
> protocol enabled on your server. Disable the RC4, MD5, and DES
> algorithms. Contact your web server vendor for assistance."
>
> I believe that I have disabled these protocols - here are the relevant
> lines in my config:
>
> SSLEngine on
> SSLProtocol ALL -SSLv2 -SSLv3
> SSLCipherSuite "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:
> ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:
> ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-
> AES256-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES128-SHA256:
> ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-
> ECDSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:
> AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:AES:
> CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!
> PSK:!EDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-
> SHA:!KRB5-DES-CBC3-SHA"
> SSLHonorCipherOrder On
>
> Can anyone help here?
>

IIRC a permanent solution for BEAST was to disable TLS 1.0, but I'd check
https://mozilla.github.io/server-side-tls/ssl-config-generator/ and see how
the above SSLCipherSuite setting can be changed to be up to date.
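
For example, something like the following should leave only TLS 1.2 enabled
on a recent httpd/openssl combination (check first that none of your clients
still needs TLS 1.0/1.1):

    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1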

Hope that helps,

Luca


Re: [users@httpd] Reverse proxy not working

2018-01-02 Thread Luca Toscano
Hi,

2017-12-31 10:25 GMT+01:00 Noor Mohammad :

> I have an application correctly working on locahost:8080 and I am setting
> up a reverse proxy as follows but on a remote browser, when using the
> proxy, i am getting local links as if apache is ignoring the reverse proxy.
> The definition of the proxy is as follows:
>
> LoadModule proxy_module modules/mod_proxy.so
> LoadModule proxy_http_module modules/mod_proxy_http.so
> ProxyPreserveHost On
> ProxyPass /marmotta/ http://localhost:8080/marmotta/
> ProxyPassReverse /marmotta/ http://localhost:8080/marmotta/
>
>
> Any idea why this is not working ?
>

Can you explain a bit more what do you mean with "I am getting local links"
?


Re: [users@httpd] How to connect Apache and Tomcat using http2 protocol

2017-12-12 Thread Luca Toscano
Hi!

2017-12-12 6:48 GMT+01:00 Ananya Dey :

> Hi
>
> I am trying to connect Apache and Tomcat using HTTP2 protocol.
> 1. These are the changes that I have made in my server.xml.
>  maxThreads="150" SSLEnabled="false"
>sslImplementationName="org.apache.tomcat.util.net.
> openssl.OpenSSLImplementation">
>  />
> 
>   certificateFile="/home/
> ananya/tomcat_server1/apache-tomcat-8.5.23/conf/server.crt"/>
> 
> 
> 2. In my httpd.conf,
> I have added
> Protocols h2
> LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
> LoadModule http2_module modules/mod_http2.so
>
> and ProxyPass "http://15.213.91.33:10905/"
>
> But through the various tests, it is mostly concluded that it supports
> websocket protocol using http proxy. But not supporting http2.
> Can someone please help me out??
>

I think that you'd need to try
https://httpd.apache.org/docs/2.4/mod/mod_proxy_http2.html.

Hope that helps!

Luca


Re: [users@httpd] ProxyPassReverse rewrites Location header where it should not

2017-11-30 Thread Luca Toscano
Hi Vlad,

2017-11-29 20:54 GMT+01:00 Vlad Liapko :

> I have below config, non essential stuff removed
> 
> ProxyPassReverse http://backendhost.com
> ProxyPassReverse  /
> 
>
> It happens that backend sends Location header already correctly pointing
> to the front end, no need to rewrite, like this
> Location:https://frontendhost.com/test/
>
> Apache directive
> ProxyPassReverse  /  is suppose to handle use cases when a back end server
> is not adding any host in Location, just a relative URI, but what happens
> it converts
> Location:https://frontendhost.com/test/ into
> Location:https://frontendhost.com/test/test/
>
>

As far as I know the second ProxyPassReverse is translated to something
like the following (since it is used in a Location block):

ProxyPassReverse /test/ /

So it seems to behave correctly. What happens if you remove it and use only
"ProxyPassReverse http://backendhost.com; ?

Luca


Re: [users@httpd] Using variables with mod_substitute to rewrite dynamically

2017-11-24 Thread Luca Toscano
Hi Vlad,

2017-11-23 16:29 GMT+01:00 Vlad Liapko :

> Hi,
>
> I’m trying to substitute a server name dynamically in xml responses
> Substitute s|http://blah.com|${SERVER_NAME}|n
> to now success. Apache complains conf variable is not defined, but it is
> there in VirtualHost.
>
> So far I was able only to use  and put a specific substitute for a
> specific host. The reason I need it dynamically is that I have thousands of
> proxy rules through  directives, I want to be able move those
> rules from one environment to another without updating a hardcoded value.
>
>
I don't think that mod_substitute works with expressions, have you tried
https://httpd.apache.org/docs/2.4/mod/mod_proxy_html.html#proxyhtmlurlmap ?

Luca


Re: [users@httpd] Apache creates Semaphore

2017-11-02 Thread Luca Toscano
Hi Hemant,

as indicated in https://httpd.apache.org/docs/2.4/mod/core.html#mutex you
can use different kinds of mutex implementations and experiment with them.
From your description though it seems to me that your approach of killing
httpd leads to semaphore leaking, something that would be avoided with a
graceful shutdown. I would encourage you to not use httpd -X but to
configure your mpm to spawn one process only (note that you might need
svn.apache.org/r1748336 or httpd >= 2.4.21).
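
As a sketch (untested, numbers purely illustrative), a single-child
mpm_event or mpm_worker configuration would look like:

    ServerLimit          1
    StartServers         1
    ThreadsPerChild      25
    MaxRequestWorkers    25

and, if the SysV semaphores are the real problem, a different mechanism can
be tested just for the accept mutex (availability of the mechanisms depends
on the platform):

    Mutex posixsem mpm-accept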

Hope that helps,

Luca

2017-11-02 12:08 GMT+01:00 Hemant Chaudhary 
:

> Hi Yann,
>
> For my product I need to run apache as single process. As httpd -X works
> for me. But the issue is to stop httpd -X, we need to kill process. While
> killing the process, semaphore exists in kernel directory.
> If I repeat for 10-15 times, then it will give error like no space
> available on device.
> To solve this error, either I need to explicitly kill each semaphore id
> and then start. So I thought not to create semaphore itself.
> Therefore I commented following lines in worker.c
>
> if ((rv = SAFE_ACCEPT((apr_snprintf(id, sizeof id, "%i",
> i),ap_proc_mutex_create(_buckets[i].mutex,NULL,
> AP_ACCEPT_MUTEX_TYPE,  id, s, pconf, 0) {
>ap_log_error(APLOG_MARK, APLOG_CRIT | level_flags, rv, (startup
> ? NULL : s),  "could not create accept mutex");
>  return !OK;
>
>  After commenting above line, when I am starting httpd -X, it is giving
> signal 31 error. When starting with httpd, its only starting parent and
> killing child processes with signal 31 error.
>
> What should I change if I want to start apache in debug mode and not to
> create semaphore ?
>
> Thanks
> Hemant
>
>
>
> On Thu, Nov 2, 2017 at 4:01 PM, Yann Ylavic  wrote:
>
>> Hi Hemant,
>>
>> On Thu, Nov 2, 2017 at 5:47 AM, Hemant Chaudhary
>>  wrote:
>> >
>> > Semaphore is used in multi process environment to share resources within
>> > processes. But when I am starting apache in debug mode i:e single
>> process
>> > then still it creates semaphore. May I know the reason why it is
>> creating
>> > semaphore in debug mode also.
>>
>> The debug mode has no particular code optimization or walkout,
>> precisely because we want to be able to diagnose potential bugs for
>> the non-debug case.
>>
>> Regards,
>> Yann.
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>> For additional commands, e-mail: users-h...@httpd.apache.org
>>
>>
>


Re: [users@httpd] rpmbuild of httpd-2.4.29

2017-10-24 Thread Luca Toscano
2017-10-24 4:06 GMT+02:00 kohmoto :

> Hi,
>
> I have finished  rpmbuild of httpd-2.4.29 perfectly and installed it
> successfully using the rpm.
> Thank you all relative to this release.
>
> CentOS 7.4
> kernel: 3.10.0-693.5.2
>
> Yours truly,
> Kazuhiko Kohmoto


 Thanks for the feedback!

Luca


Re: [users@httpd] SSL hooks

2017-10-19 Thread Luca Toscano
Hi,

2017-10-19 1:06 GMT+02:00 Adi Mallikarjuna Reddy V <
adimallikarjunare...@gmail.com>:

> Hi
>
> I am looking at this file https://github.com/apache/httpd/blob/trunk/
> modules/ssl/mod_ssl_openssl.h and see that there are 3 hooks defined for
> handling SSL connections. Are these available for modules/handlers to use?
>
> Can my module register to thees hooks and manipulate SSL context?
>
>
From the git blame:
https://github.com/apache/httpd/commit/6fd55ccc770c5b898d0c612584c9eedf8a8c5378#diff-8517096c9c992f986d308655575f8e7d

"mod_ssl: Add hooks to allow other modules to perform processing at
several stages of initialization and connection handling.  See
mod_ssl_openssl.h.

This is enough to allow implementation of Certificate Transparency
outside of mod_ssl."

So I'd say yes, but bear in mind that those hooks are executed way before
the (content) handler. I'd suggest to play with them and figure out if they
are enough for your needs. mod_md (https://github.com/icing/mod_md) could
also be a module to take as example for mod_ssl interactions.

Hope that helps!

Luca


Re: [users@httpd] Apache load module path

2017-10-19 Thread Luca Toscano
Hi,

2017-10-18 21:14 GMT+02:00 renee ko :

> Team,
>
> I have LoadModules configured under the default RedHat httpd directory.
>
> Example:
> LoadModule proxy_module /usr/lib64/httpd/modules/mod_proxy.
>
> I would like the modules to be changed to another
> directory(/usr/local/apache2/modules).
>
> Should I just update the new modules path in the httpd.conf file and
> restart httpd?
>
> Please let me know if it is correct. Are there any additional steps
> required?
>

From https://httpd.apache.org/docs/2.4/mod/mod_so.html#loadmodule I'd also
check the ServerRoot settings. Please also try this change in a testing
environment first to figure out if everything works as expected :)
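
For example, with a relative path the module is looked up under ServerRoot
(paths below are only illustrative):

    ServerRoot "/usr/local/apache2"
    LoadModule proxy_module modules/mod_proxy.so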

Luca


Re: [users@httpd] how to exit a C Apache module

2017-10-12 Thread Luca Toscano
Hi!

2017-10-12 12:42 GMT+02:00 eeadev dev :

> I tried with the C exit() but it returns a page with this content:
>
>
>
>
>
>
>
> *The connection was resetThe connection to the server was reset while the
> page was loading.The site could be temporarily unavailable or too busy.
> Try again in a few moments.If you are unable to load any pages, check
> your computer’s network connection.If your computer or network is
> protected by a firewall or proxy, make sure that Firefox is permitted to
> access the Web.*
>
> What should I use instead for exiting my module without doing anything
> else?
>

I suggest to start reading some documentation about how to write a module,
starting from the following:

- https://httpd.apache.org/docs/2.4/developer/modguide.html
- The Apache Modules Book, written by Nick Kew

There are a lot of gotchas and information to learn in my opinion before
doing any development attempt, otherwise it will be a loong and painful
process :)

If you want to experiment with something less heavy than C but still very
powerful, I also suggest you to check
https://httpd.apache.org/docs/current/mod/mod_lua.html

Luca


Re: [users@httpd] mod_authz_core and http response 451

2017-09-06 Thread Luca Toscano
Hi Galen,

2017-09-05 22:02 GMT+02:00 Galen Johnson :

> Hello,
>
> I've googled a bit and I can't find a way to handle this without using a
> rewrite rule.
>
> I'm setting up a rule using mod_geoip to block embargoed countries.  I set
> up the config as follows:
>
> 
>   # Blocking a client based on country
>   SetEnvIf GEOIP_COUNTRY_CODE CU BlockCountry
>   SetEnvIf GEOIP_COUNTRY_CODE IR BlockCountry
>   SetEnvIf GEOIP_COUNTRY_CODE KP BlockCountry
>   SetEnvIf GEOIP_COUNTRY_CODE SY BlockCountry
>
>   
> Require all granted
> 
>   Require env BlockCountry
> 
>   
> 
>
> This works but returns a 403.  I'd like for it to return a 451.  Is this
> possible?  Or am I going to have to stick with using a rewrite rule
> (without the require block)?
>
> 
> RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(CU|IR|KP|SY)$
> RewriteRule ^(.*)$ https://example.com/$1 [NE,R=451,L]
> 
>
> If there is a preferred way to handle this, I'd be interested in that as
> well.
>
>
probably the rewrite rule is more flexible for your use case, I am not
aware of any way of returning something different than 403 after breaching
Require constraints.

Luca


Re: [users@httpd] MPM Modules Rule of Thumb

2017-09-05 Thread Luca Toscano
Hi Tony,

usually httpd itself consumes very little memory; if it is behaving in
that way it is probably due to some module like mod_php. Can you give us a
bit more info about your mpm used and the list of modules loaded? For
example, the most common use case that we see is mpm-prefork and mod_php
causing a ton of RAM consumed (each httpd process allocates memory for a
PHP interpreter), whereas a solution like mpm-worker|event +
mod_proxy_fcgi + php-fpm works way better.

My suggestion would be to narrow down what module is really causing your
memory to saturate before tuning the mpm.

Luca


2017-09-06 1:33 GMT+02:00 Tony DiLoreto <t...@miglioretechnologies.com>:

> Hi Luca,
>
> Basically my server runs out of free memory and freezes. On AWS I have to
> stop/start it again to be able to SSH in. What I'd really like is a
> MAX_PERCENTAGE_AVAILABLE_MEMORY directive that limits Apache to <= some %
> of free memory. That way it can never halt my system.
>
> Hope this helps.
>
> On Tue, Sep 5, 2017 at 1:16 PM Luca Toscano <toscano.l...@gmail.com>
> wrote:
>
>> Hi Tony,
>>
>> 2017-08-31 23:43 GMT+02:00 Tony DiLoreto <t...@miglioretechnologies.com>:
>>
>>> Hi All,
>>>
>>> I've been scouring the internet for best practices or heuristics for
>>> specifying parameter values of the MPM directives. My server seems to lock
>>> up regardless of the values I enter. Are there "rules of thumb" for each
>>> MPM type (prefork, worker, event)?
>>>
>>>
>> Can you tell us what do you mean with "lock up"?
>>
>> Luca
>>
> --
> Tony DiLoreto
> President & CEO
> Migliore Technologies Inc
>
> 716.997.2396
> t...@miglioretechnologies.com
>
>
>
> miglioretechnologies.com
> *The best in the business...period!*
>


Re: [users@httpd] MPM Modules Rule of Thumb

2017-09-05 Thread Luca Toscano
Hi Tony,

2017-08-31 23:43 GMT+02:00 Tony DiLoreto :

> Hi All,
>
> I've been scouring the internet for best practices or heuristics for
> specifying parameter values of the MPM directives. My server seems to lock
> up regardless of the values I enter. Are there "rules of thumb" for each
> MPM type (prefork, worker, event)?
>
>
Can you tell us what do you mean with "lock up"?

Luca


Re: [users@httpd] mod_rewrite + proxy + unix socket results in 400 bad request

2017-08-31 Thread Luca Toscano
Hi David,

2017-08-29 17:41 GMT+02:00 David Mugnai :

> Hi,
>
> I'm trying to configure a virtual host that, based on the host name,
> forwards the request on a backend server listening on an unix socket.
>
> My apache version is 2.4.18 as shipped by Ubuntu 16.04
>
> The configuration I've tried so far is:
>
> 
>ServerAdmin webmaster@localhost
>DocumentRoot /var/www/html
>LogLevel trace2
>
>UseCanonicalName Off
>
>RewriteEngine On
>RewriteCond %{HTTP_HOST} ^(.+)\.example.com
>RewriteRule "(.*)" "unix:/home/user/%1/server.sock|http://127.0.0.1$1
> [P,NE]
>
>ErrorLog ${APACHE_LOG_DIR}/error.log
>CustomLog ${APACHE_LOG_DIR}/access.log combined
> 
>
> The rewrite module works as expected (in the log file I can see the full
> path to the unix socket), but trying to access the web server results in
> a "400 Bad Request" *without* the involvment of the backend server.
>
> I made a test with ProxyPass directive, and it works, but obviously is
> not what I want:
>
> 
>ServerAdmin webmaster@localhost
>DocumentRoot /var/www/html
>LogLevel trace2
>
>UseCanonicalName Off
>
>ProxyPass / unix:/home/user/subdomain1/server.sock|
> http://127.0.0.1/
>ProxyPassReverse / unix:/home/user/subdomain1/server.sock|
> http://127.0.0.1/
>
>ErrorLog ${APACHE_LOG_DIR}/error.log
>CustomLog ${APACHE_LOG_DIR}/access.log combined
> 
>
> How can I fix it?
>

Didn't have much time to try this use case manually but I have a couple of
suggestions:

1) Do you find any log in the error_log that could give us some clue about
the 400 returned? (maybe increasing the LogLevel to debug or trace)
2) Have you tried
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassmatch ?
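
For 2), a sketch of what it could look like with the socket path from your
mail (note that ProxyPassMatch alone cannot reproduce the dynamic %1 host
capture, so this is only useful to test one backend):

    ProxyPassMatch "^/(.*)$" "unix:/home/user/subdomain1/server.sock|http://127.0.0.1/$1"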

Luca


Re: [users@httpd] Problems with Http11NioProtocol and proxy server

2017-08-31 Thread Luca Toscano
Hi Marco,

2017-08-29 11:07 GMT+02:00 :

> Hi,
>
> we've build a web application with JSF 2.1 and RichFaces 4.5.13.Final
> running on JBoss EAP 6.4.12. We're also using a Apache HTTP server 2.4.7 as
> a HTTPS/WSS proxy to access the application for customers.
>
> After we've changed the EAP http connector to 
> "org.apache.coyote.http11.Http11NioProtocol"
> (for using websockets/push), ajax requests are often answered by HTTP 400
> (Bad Request) or HTTP 500 (Bad Gateway).
>
> We're switched back to the EAP http connector "HTTP/1.1" and everything is
> working again as expected.
>
> If we're accessing the EAP directly (not via proxy) there is also no
> problem. It looks like that there is a problem using Http11NioProtocol with
> Apache HTTP server as a proxy.
>
> Is there anyone who run into the same problem or has an idea how we can
> solove it?
>

We'd need to know a couple of things to help:

1) Is there any specific error in the httpd error_log that gets logged when
the 400/500s are returned?
2) What kind of requests are you issuing to httpd when you get a 400 and a
500? Are they different?
3) Can you share your configuration? (at least the one for the proxy)
4) Do you find anything useful in the Tomcat logs when httpd returns the
errors?

Luca


Re: [users@httpd] MPM_Worker main process

2017-08-31 Thread Luca Toscano
Hi Hemant,

2017-08-30 13:05 GMT+02:00 Hemant Chaudhary 
:

> Hi Luca,
>
> Thanks for reply.
> Actually I want to use apache web server for some transaction where I
> can't afford any type of failure. That's why I was trying If by mistake
> someone killed or something happen to my parent process then still apache
> can handle requests and serve. Thats why I have query that if parent
> process is not there then what are the functionalities will I lost ?
>
>
If I were you I'd concentrate my efforts on alerting, reliability (multiple
httpds behind a load balancer for example) and basic prevention of
mistakes like the one you described (limiting access to the production
environment / sudo commands, etc..).

Luca


Re: [users@httpd] Build apache without mpm

2017-08-31 Thread Luca Toscano
Hi Hemant,

2017-08-31 7:32 GMT+02:00 Hemant Chaudhary :

> Hi
>
> By which configuration I can build apache without threaded> I dont want to
> sue mpm.
>

The mpm is mandatory and you can choose between prefork (not threaded) and
worker/event. If you need more info about the httpd's internal you can
check the docs and the wonderful "The Apache Modules Book".

Hope that helps,

Luca


Re: [users@httpd] MPM_Worker main process

2017-08-30 Thread Luca Toscano
Hi Hemant,

2017-08-30 8:26 GMT+02:00 Hemant Chaudhary :

> Hi folks,
>
> I have my apache-2.4.25 with worker mpm. For testing, I have killed the
> master/main process and send simultaneous requests from apache j-meter and
> my apache serves all the requests. What I have observed is that  even with
> loads number of  worker threads are same, it means I lost  forking
> capability because of main process.
>
> My query is without Master process, what functionalities will I loose?
>

afaik anything that requires the master process (graceful reload, restart,
spawn of new processes due to MaxConnectionsPerChild being reached, etc..) will not
work anymore. I'd strongly suggest not to play with the master process or
attempt to get rid of it for some reason, but I am not sure what is the
goal of your test :)

Luca


Re: [users@httpd] RewriteRule: Pattern matching and grouping part of the URL expands to its local filesystem path

2017-08-29 Thread Luca Toscano
Hi Gustau,

2017-08-22 9:01 GMT+02:00 Gustau Perez :

>
>Hello everybody,
>
>I’ve checking all kinds of sources of information so far without
> success, I hope I didn’t miss anything.
>
>I have a very simple RewriteRule which should take the requested
> resource part. What I want to achieve is to prepend an string before that
> matched path. Something like:
>
> RewriteRule   ^(.*)$ http://myserver/special_path/$1 [R=301]
>
>I’d say that should take the requested resource path and redirect the
> client to a new location. It does work in some places, but I’d like to use
> under a conditional . When I do
> that, the match is instead expanded to the local filesystem path, if I
> request the “/“ (root) of the server, the client is being redirected to:
>
>   http://myserver/special_path/var/www/html
>
>Never mind about the DocumentRoot path. Anyway, that’s not the desired
> behaviour.
>
>Perhaps we’re doing something wrong, but what puzzles me is the fact
> that if move the redirect logic outside the  block, then it apparently
> works fine.
>

I'd suggest to try Redirect or RedirectMatch for these use cases, they are
less complicated and more straightforward:

https://httpd.apache.org/docs/2.4/mod/mod_alias.html#redirect
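
For example, something like this (untested, names taken from your mail):

    # the negative lookahead avoids a loop if myserver is this same vhost
    RedirectMatch 301 "^/(?!special_path/)(.*)$" "http://myserver/special_path/$1"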

As you can see in https://httpd.apache.org/docs/current/mod/core.html#if
the <If> block accepts only directives that are usable in the directory
context (
https://httpd.apache.org/docs/current/mod/directive-dict.html#Context), and
this is a special note in the RewriteRule docs that might clarify what's
happening:

"In per-directory context (Directory and .htaccess), the Pattern is matched
against only a partial path, for example a request of "/app1/index.html"
may result in comparison against "app1/index.html" or "index.html"
depending on where the RewriteRule is defined."

If I am right, a note in the docs might clarify doubts like yours, but I
need to double check first :)

Luca


Re: [users@httpd] ''AH00288: scoreboard is full, not at MaxRequestWorkers'

2017-08-29 Thread Luca Toscano
Hi,

2017-08-29 2:07 GMT+02:00 :

>
> Some malicious persons are flooding our server ( Server
> Version: Apache/2.4.27 (cPanel) OpenSSL/1.0.2k mod_bwlimited/1.4
> Server MPM: worker Server Built: Aug 17 2017 00:51:40 ) with bogus
> traffic.  It's been going down every few hours, often posting AH00288
> errors first.
> What does this error mean?  Any suggestions for preventing
> this?
>

there is a (hopefully) good explanation in
https://httpd.apache.org/docs/2.4/mod/event.html#how-it-works "Graceful
process termination and Scoreboard usage", in which it is explained what it
means and how mpm-event tried to solve the problem. I would try to switch
to mpm-event if possible to see if things improve.
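
On distributions that ship the MPMs as loadable modules the switch is
usually just a matter of something like (module paths are illustrative):

    #LoadModule mpm_worker_module modules/mod_mpm_worker.so
    LoadModule mpm_event_module modules/mod_mpm_event.so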

Hope that helps,

Luca


Re: [users@httpd] Honouring the DNS ttl in proxy-pass

2017-08-28 Thread Luca Toscano
Hi Gustau,

2017-08-23 12:47 GMT+02:00 Gustau Perez :

>Hi,
>
>We’re trying to set a bunch of Apaches 2.4.18 to proxy pass the
> requests it receives to our partner's upstream server. Our partner uses
> Amazon’s Elastic Load Balancing and thus the only we know about their
> servers is its DNS names.
>
>The TTL of the DNS records is 60 seconds and I’d like to know if Apache
> can honour that ttl, keeping the connection alive as long as the DNS record
> is valid and then requesting the translation when the TTL has expired.
>
>Using mod_proxy DisableReuse = on forces opening a new connection every
> time a resource is needed upstream. That would do the trick as long as the
> underneath operating system does the DNS TTL caching. If not, every time a
> new resource is needed Apache will force a new DNS request, increasing the
> response time.
>
>I’ve thought of playing with the mod_proxy ttl and timeout parameters,
> but I think I’m not correctly solving the problem. According to the docs,
> the mod_proxy’s timeout parameter controls the time a socket will wait for
> data from upstream, but I’m not sure if the Apache instance will close the
> connection an open a new one. Also, playing with the timeout is error
> prone, because a lower value may sent an wrong answer to the client.
>
>I’ve spend a few time trying to tackle this setup with no joy.  Is
> there any special setup to cover that scenario? Or perhaps I’ve skipped
> something? Any help would be appreciated.
>

one of the side effects of reusing the backend connections is to force the
Apache child process to cache the DNS resolution for its life (that is
until a main restart happens or when MaxConnectionsPerChild is met), so I
am afraid that if you need something more flexible you'd need to deploy
something like https://www.unbound.net on the host running Apache to reduce
the DNS resolution latencies (and the pressure to your DNS resolvers).

Hope it helps,

Luca


Re: [users@httpd] Two questions on httpd tuning

2017-08-18 Thread Luca Toscano
Hi Martin,

2017-08-18 10:09 GMT+02:00 Martin Knoblauch :
>
>
>  Lets say I wanted to increase MaxRequestWorkers to e.g. 800. One of the
> several solutions would be to up ServerLimit to 32 and leave
> ThreadsPerChild at 25. But I could also leave ServerLimit at 16 and up
> ThreadsPerChild to 50. Or I could choose any combination where
> ServerLimit*ThreadsPerChild is bigger than 800. What I would like to know
> is which way to go. And why. What gives the best and most reliable results.
> Assumption is that there is plenty of free memory, CPU and network
> bandwidth available.
>

my personal recommendation would be to prefer increasing the number of
threads rather than the number of processes (threads are supposed to share
more and be less heavy to context switch for the OS, but on Linux this can
generate long discussions and flames so I will not go there :), but
you'd probably need to perform some load tests with different
configurations and see which one suits your system more. From the
reliability point of view I don't see much difference between the two
approaches, again testing would probably give you more datapoints to choose
the best approach for you use cases.

Interesting to read is also what mpm_event offers over mpm_worker and what
use cases it solves: https://httpd.apache.org/docs/2.4/mod/event.html

One thing that might also be good to check is the following:
https://httpd.apache.org/docs/current/mod/mpm_common.html#listencoresbucketsratio
When you read it please keep in mind that each process in mpm_worker/event
allocates one thread as listener, so there is contention for the listen
socket.
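
Just to make the first option concrete, one possible shape for ~800 workers
(numbers only an example, to be validated with load tests):

    ServerLimit          16
    ThreadLimit          50
    ThreadsPerChild      50
    MaxRequestWorkers    800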

Luca


Re: [users@httpd] Two questions on httpd tuning

2017-08-18 Thread Luca Toscano
Hi Martin,

2017-08-17 17:40 GMT+02:00 Martin Knoblauch :

> Hi,
>
>  this is for httpd-2.4.26 with the mpm_worker_module. I have one practical
> and one more theoretical question.
>
> First, is there a way to determine the maximum number of concurrent
> requests that have been processed at any time since the last server
> (re)start?
>

I would periodically poll
https://httpd.apache.org/docs/2.4/mod/mod_status.html and push metrics to a
backend like graphite/prometheus/etc.
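
A minimal way to expose it only locally for the poller would be something
like:

    ExtendedStatus On
    <Location "/server-status">
        SetHandler server-status
        Require local
    </Location>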


>
> Second, if I have the default configuration and I ever want to increase
> MaxRequestWorkers above 400 my understanding is that I need to either
> increase ServerLimit or ThreadsPerChild (or adjust both). Which should I
> touch first (why :-)?
>

>From https://httpd.apache.org/docs/2.4/mod/worker.html:

"ServerLimit is a hard limit on the number of active child processes, and
must be greater than or equal to the MaxRequestWorkers directive divided by
the ThreadsPerChild directive."

Basically mpm_worker creates some processes (maximum ServerLimit) that in
turn spawn ThreadsPerChild threads each, so you need to tune them to reach
MaxRequestWorkers.

Hope that helps :)

Luca


Re: [users@httpd] How to different SSLProtocol for each of the conf files

2017-07-25 Thread Luca Toscano
As Eric pointed out earlier on:

> The file names don't matter very much. What matters is whether they
> are separate IP:PORT based vhosts. If they're not, they can't have
> separate SSL configurations.

In all files you have the same <VirtualHost IP:PORT> and you use a different
ServerName to differentiate. I am not a big expert but I believe that what
Eric is saying is that if you want to use a different SSL configuration on
one VirtualHost you can, with the constraint that the IP:PORT (stated in
<VirtualHost>) is unique and not used in another VirtualHost block.
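
As a sketch of the constraint (addresses purely illustrative, and the exact
protocol keywords depend on your httpd/openssl build): only when xyz.com gets
its own IP:PORT can it carry its own SSLProtocol:

    <VirtualHost 192.0.2.10:443>
        ServerName abc.com
        SSLEngine on
        SSLProtocol all -SSLv2 -SSLv3
        # SSLCertificateFile / SSLCertificateKeyFile as usual
    </VirtualHost>

    <VirtualHost 192.0.2.11:443>
        ServerName xyz.com
        SSLEngine on
        SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
        # SSLCertificateFile / SSLCertificateKeyFile as usual
    </VirtualHost>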

Luca

2017-07-25 12:01 GMT+02:00 chetan jain <cpjai...@gmail.com>:

> Hi Luca,
>
> I have uploaded the content :
>
> https://apaste.info/t5ez
>
> Please review.
>
> --Chetan
>
> On Tue, Jul 25, 2017 at 4:17 AM, Luca Toscano <toscano.l...@gmail.com>
> wrote:
>
>> Hi,
>>
>> we'd need to get your vhost configuration before helping further on, as
>> Eric mentioned you have probably some overlapping but it is very difficult
>> to debug only from your description. If you can put your configuration in
>> https://apaste.info/ it would be great, otherwise I'd suggest to reach
>> out to the folks in #httpd (IRC Freenode) to get some live help.
>>
>> Luca
>>
>>
>> 2017-07-25 6:45 GMT+02:00 chetan jain <cpjai...@gmail.com>:
>>
>>> Hi All,
>>>
>>> Any more input on this?
>>>
>>> --Chetan
>>>
>>> On 21 Jul 2017 10:40 p.m., "chetan jain" <cpjai...@gmail.com> wrote:
>>>
>>>> Hi Eric,
>>>>
>>>> Thanks for the reply.
>>>> We have a different server alias for each of the host, It does get
>>>> honoured that is how requests go to correct sites.
>>>>
>>>> It's just that something with the SSLProtocol, i read somewhere after
>>>> googling that SSLProtocol are taken from the first virtual host which is
>>>> loaded and rest are ignored, trying to seek confirmation if that is
>>>> correct...and what can be done to achieve the needful
>>>>
>>>> On 21 Jul 2017 5:09 p.m., "Eric Covener" <cove...@gmail.com> wrote:
>>>>
>>>>> On Fri, Jul 21, 2017 at 2:37 AM, chetan jain <cpjai...@gmail.com>
>>>>> wrote:
>>>>> > Hi All,
>>>>> >
>>>>> > We have an Apache WebServer (2.2.15) setup on CentOS 6 where in
>>>>> httpd,conf
>>>>> > we have included conf.d/*.conf files which has configuration for all
>>>>> the
>>>>> > virtual hosts.
>>>>> >
>>>>> > In conf.d we have respective .conf file for each of the virtual
>>>>> hosts like :
>>>>> >
>>>>> > abc_com.conf for abc.com
>>>>> > xyz_com.conf for xyz.com
>>>>> >
>>>>> > etc
>>>>> >
>>>>> > now I want to disable the TLSv1.0 and SSLv3 request only for one of
>>>>> this
>>>>> > virtual hosts, but even if i put the values like :
>>>>> >
>>>>> > SSLProtocol   ALL -SSLv3 -SSLv2 -TLSv1 -TLSv1.1  in
>>>>> xyz_com.conf
>>>>> > file TLSv1.0 and 1.1 are still enabled for xyz.com
>>>>> >
>>>>> > to disable it, I have to put the same value in abc_com.conf file as
>>>>> well,
>>>>> > then only it get disabled for xyz.com as well (even if i remove the
>>>>> paramter
>>>>> > from xyz_com.conf in that case it is still disabled)
>>>>> >
>>>>> > can't we have different SSLProtocol for different virtual hosts?
>>>>> >
>>>>> > I can not disable it for all the websites, have to do it for only
>>>>> one of
>>>>> > them, how can i achieve this?
>>>>>
>>>>> The file names don't matter very much. What matters is whether they
>>>>> are separate IP:PORT based vhosts. If they're not, they can't have
>>>>> separate SSL configurations.
>>>>>
>>>>>
>>>>> --
>>>>> Eric Covener
>>>>> cove...@gmail.com
>>>>>
>>>>> -
>>>>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>>>>> For additional commands, e-mail: users-h...@httpd.apache.org
>>>>>
>>>>>
>>
>


Re: [users@httpd] How to different SSLProtocol for each of the conf files

2017-07-25 Thread Luca Toscano
Hi,

we'd need to get your vhost configuration before helping further on, as
Eric mentioned you have probably some overlapping but it is very difficult
to debug only from your description. If you can put your configuration in
https://apaste.info/ it would be great, otherwise I'd suggest to reach out
to the folks in #httpd (IRC Freenode) to get some live help.

Luca


2017-07-25 6:45 GMT+02:00 chetan jain :

> Hi All,
>
> Any more input on this?
>
> --Chetan
>
> On 21 Jul 2017 10:40 p.m., "chetan jain"  wrote:
>
>> Hi Eric,
>>
>> Thanks for the reply.
>> We have a different server alias for each of the host, It does get
>> honoured that is how requests go to correct sites.
>>
>> It's just that something with the SSLProtocol, i read somewhere after
>> googling that SSLProtocol are taken from the first virtual host which is
>> loaded and rest are ignored, trying to seek confirmation if that is
>> correct...and what can be done to achieve the needful
>>
>> On 21 Jul 2017 5:09 p.m., "Eric Covener"  wrote:
>>
>>> On Fri, Jul 21, 2017 at 2:37 AM, chetan jain  wrote:
>>> > Hi All,
>>> >
>>> > We have an Apache WebServer (2.2.15) setup on CentOS 6 where in
>>> httpd,conf
>>> > we have included conf.d/*.conf files which has configuration for all
>>> the
>>> > virtual hosts.
>>> >
>>> > In conf.d we have respective .conf file for each of the virtual hosts
>>> like :
>>> >
>>> > abc_com.conf for abc.com
>>> > xyz_com.conf for xyz.com
>>> >
>>> > etc
>>> >
>>> > now I want to disable the TLSv1.0 and SSLv3 request only for one of
>>> this
>>> > virtual hosts, but even if i put the values like :
>>> >
>>> > SSLProtocol   ALL -SSLv3 -SSLv2 -TLSv1 -TLSv1.1  in
>>> xyz_com.conf
>>> > file TLSv1.0 and 1.1 are still enabled for xyz.com
>>> >
>>> > to disable it, I have to put the same value in abc_com.conf file as
>>> well,
>>> > then only it get disabled for xyz.com as well (even if i remove the
>>> paramter
>>> > from xyz_com.conf in that case it is still disabled)
>>> >
>>> > can't we have different SSLProtocol for different virtual hosts?
>>> >
>>> > I can not disable it for all the websites, have to do it for only one
>>> of
>>> > them, how can i achieve this?
>>>
>>> The file names don't matter very much. What matters is whether they
>>> are separate IP:PORT based vhosts. If they're not, they can't have
>>> separate SSL configurations.
>>>
>>>
>>> --
>>> Eric Covener
>>> cove...@gmail.com
>>>
>>> -
>>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>>> For additional commands, e-mail: users-h...@httpd.apache.org
>>>
>>>


Re: [users@httpd] configure apache2 on ubuntu 16.04 vps to use php-fpm is not leading to the desired outcome

2017-07-25 Thread Luca Toscano
Hi Dino,


2017-07-23 1:32 GMT+02:00 Dino Vliet :
>
> Modified this file:
>
> /etc/apache2/sites-available/000-default.conf to now have this inside:
>
>
>  
>
>   Require all granted
>
>   
>
>   
>
>   AddHandler php7-fcgi .php
>
>   Action php7-fcgi /php7-fcgi virtual
>
>   Alias /php7-fcgi /usr/lib/cgi-bin/php7-fcgi
>
>   FastCgiExternalServer /usr/lib/cgi-bin/php7-fcgi -socket
> /var/run/php/php7.0-fpm.sock -pass-header Authorization
>
>   
>

In here it seems that you are using mod_fastcgi (configured to not manage
FCGI processes afaict)

>
> Also modified this file /etc/apache2/conf-available and now it contains
>
>
> # Redirect to local php-fpm if mod_php is not available
>
> 
>
>   # Enable http authorization headers
>
>   SetEnvIfNoCase ^Authorization$ "(.+)" HTTP_AUTHORIZATION=$1
>
>
>   
>
>   SetHandler "proxy:unix:/run/php/php7.0-fpm.sock|fcgi://localhost"
>
>   
>

And in here mod_proxy_fcgi? Are you sure that this is what you wanted to
achieve?


> However, when I look at the output of the info.php page I have created in
> the document root I see Server API --> Apache 2.0 Handler in stead of what
> I expected after fiddling with the configuration. I expected Server API -->
> FPM/FastCGI
>
>
> What have I missed and what should I do to have apache2 run with FastCGI?
>

I'd suggest to follow
https://httpd.apache.org/docs/2.4/mod/mod_proxy_fcgi.html and
https://wiki.apache.org/httpd/php to gather more info :)
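
If mod_proxy_fcgi is the goal (your second snippet), a minimal sketch with
the mod_fastcgi/Action bits removed would be (socket path taken from your
mail):

    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php7.0-fpm.sock|fcgi://localhost"
    </FilesMatch>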

Hope that helps!

Luca


Re: [users@httpd] Apache Struts Vulnerability - CVE-2017-9791

2017-07-21 Thread Luca Toscano
Hi,

2017-07-21 18:35 GMT+02:00 Chunduru, Krishnachaithanya <
krishnachaithanya.chund...@broadridge.com>:

> Hi All,
>
>
> Can someone please confirm if Apache 2.4.10 is vulnerable to the 
> CVE-2017-9791.
> We came to know that Apache which is having Apache Struts version 2.3.x
> with Struts 1 plugin and Struts 1 action is highly vulnerable . If
> exploited, this vulnerability would allow a remote code execution attack.
>

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-9791 seems to be
related to Apache Struts only (that is a JEE framework) with no connection
with httpd, so it would probably be worth following up with the project's
user email list in my opinion: https://struts.apache.org/mail.html

Luca


Re: [users@httpd] How does Apache detects a stopped Tomcat JVM?

2017-07-20 Thread Luca Toscano
Hello,

2017-07-18 15:48 GMT+02:00 Suvendu Sekhar Mondal :

> Hello Folks,
>
> I am new to Apache httpd world and wanted to know more about it. :)
>
> Reason I got interested in this is that, in our case, we are running
> multiple Tomcat JVMs under a single Apache cluster. If we shut down
> all the JVMs except one, sometime we get 503s. If we increase the
> retry interval to 180(from retry=10), problem goes away. That bring me
> to this question, how does Apache detects a stopped Tomcat JVM? If I
> have a cluster which contains multiple JVMs and some of them are down,
> how Apache finds that one out? Somewhere I read, Apache uses a real
> request to determine health of a back end JVM. In that case, will that
> request failed(with 5xx) if JVM is stopped? Why higher retry value is
> making the difference?
>
> If someone can explain a bit or point me to some doc, that would be
> awesome.
>
> We are using Apache 2.4.10, byrequests LB algorithm, sticky session,
> keepalive is on and ttl=300 for all balancer member.
>

https://httpd.apache.org/docs/2.4/mod/mod_proxy_hcheck.html is surely a
good read for your use case. Moreover be really careful with sticky
sessions: they have the downside of tying requests to a specific backend,
altering the work of the LB algorithm.
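
A sketch of what an active health check could look like (hostnames, URIs and
intervals purely illustrative; mod_proxy_hcheck needs a reasonably recent
2.4.x plus mod_watchdog):

    ProxyHCExpr ok234 {%{REQUEST_STATUS} =~ /^[234]/}
    <Proxy "balancer://mycluster">
        BalancerMember "http://tomcat1.example.com:8080" hcmethod=GET hcexpr=ok234 hcinterval=10
        BalancerMember "http://tomcat2.example.com:8080" hcmethod=GET hcexpr=ok234 hcinterval=10
    </Proxy>
    ProxyPass "/app" "balancer://mycluster"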

Luca


Re: [users@httpd] Apache server response very very slow from chrome/ firefox and works fine from Safari - User-Agent issue

2017-07-20 Thread Luca Toscano
Hi Kumar,

2017-07-18 9:14 GMT+02:00 Kumar Devarakonda :

> Hi,
>
> We have a strange issue recently with Apache. When we request some
> webpages (running on apache web server) from our server, if we make the
> request from Safari, they are loaded instantly. If we load the web page
> from Chrome or Firefox, it takes approximately 10 minutes to get the
> response. The same behavior is observed with curl too. After much research,
> we found that if the User-Agent header has "Mozilla" String in it, the
> requests are taking time. Response time is normal with below command curl
> -i -v -H "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:51.0)"
> ourserverurl.com curl -i -v -H "User-Agent: Gecko/20100101"
> ourserverurl.com
>
> Response time is very slow (approximately 3 to 10 minutes): curl -i -v -H
> "User-Agent: Gecko/20100101 Firefox/51.0" ourserverurl.com curl -i -v -H
> "User-Agent: Firefox/51.0" ourserverurl.com
>
> We have not upgraded anything recently. But this issue popped up suddenly.
> Any pointers to resolve the issue will help. Thank you in advance.
>
> It is the same results (same kind of slowness) for all kinds of resources
> (that can be compressed). We have tried multiple things like adding
> compress/ gzip header/ removing it etc.. Still the same. It just worked
> fine if we remove user-agent "Mozilla" string. We have multiple
> applications running through this server and all of them behaved the same
> way (same kind of slowness)..
>

the best place to start is the logs: what does the error log say for the
slow requests? Moreover, do you have any specific config for the "slow" user
agents in httpd.conf? Can you share your config? Without more details it is
really difficult to help.

Luca


Re: [users@httpd] Crashes in CentOS 7

2017-07-20 Thread Luca Toscano
Hi Bruno,

2017-07-20 16:33 GMT+02:00 Bruno Dorchain :

> We got the following crash when under load:
> *** Error in `/usr/sbin/httpd': double free or corruption (!prev):
> 0x7f19a010cf80 ***
> === Backtrace: =
> /lib64/libc.so.6(+0x7c503)[0x7f19ce15c503]
> /lib64/libapr-1.so.0(apr_pool_destroy+0x1a7)[0x7f19ce8da2d7]
> /lib64/libapr-1.so.0(apr_pool_destroy+0x55)[0x7f19ce8da185]
> /etc/httpd/modules/mod_ssl.so(+0x164d0)[0x7f19c89844d0]
> /etc/httpd/modules/mod_ssl.so(+0x1307a)[0x7f19c898107a]
> /usr/sbin/httpd(ap_process_request_after_handler+0x5d)[0x7f19cfc3766d]
> /usr/sbin/httpd(ap_process_request+0x14)[0x7f19cfc382e4]
> /usr/sbin/httpd(+0x52c32)[0x7f19cfc34c32]
> /usr/sbin/httpd(ap_run_process_connection+0x40)[0x7f19cfc2cc90]
> /etc/httpd/modules/mod_mpm_event.so(+0x6bf5)[0x7f19cc9f1bf5]
> /lib64/libpthread.so.0(+0x7dc5)[0x7f19ce6acdc5]
> /lib64/libc.so.6(clone+0x6d)[0x7f19ce1d776d]
>
> May be linked to "ModSecurity: collection_retrieve_ex: Failed deleting
> collection (name ..." appearing on a regular basis.
>
> Any hint to troubleshoot that?
>


I would start from https://httpd.apache.org/dev/debugging.html#crashes and
see if you can get more detail from the stack-traces. Are you running a
recent version of httpd / mod-security (which is not maintained by the
httpd dev community, it is a third party product) ?

It would be also great if you could find how to reproduce this issue (maybe
checking the requests logged around the time of the crashes, etc..).
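
In particular, following that page, something like the following (plus
"ulimit -c unlimited" for the httpd user and an existing directory writable
by it) should produce a core file that can be inspected with gdb (the path
is only an example):

    CoreDumpDirectory /tmp/apache-coredumps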

Hope that helps!

Luca


Re: [users@httpd] 2.4.27 installed, no con fig change, but web site down!

2017-07-19 Thread Luca Toscano
Hi Tom,

2017-07-19 3:33 GMT+02:00 Tom Browder :

> I installed 2.4.27, along with the latest openssl. no config was changed,
> but my server isn't serving.
>
> I show no errors in the error log.
>
> I will try to go back to previous versions to see if I can recover, but
> wonder if anyone can guess what has happened.
>

you'd need to provide more info to get help, your description is really
generic. For example:

1) What was the previous version of httpd and openssl?
2) How did you upgrade? Was it manually or via distribution upgrade? Did
you test it before?
3) What do you mean with "my server isn't serving" ? Are requests timing
out? Are they returning 503s? Are all of them failing or only some of them?
4) What is your httpd configuration?

I could go ahead with other questions but you have probably got what we'd
need to help :)

Thanks!

Luca


Re: [users@httpd] virtual host double slash effect, need solution

2017-07-16 Thread Luca Toscano
Hi David,

2017-07-15 3:11 GMT+02:00 David Mehler :

> Hello,
>
> I'm running Apache 2.4 on a FreeBSD 10.3 system, with several virtual
> hosts. My goal is to have all of them completely ssl, except for the
> .well-known area needed for letsencrypt.
>
> 
> ServerName example.com
> RewriteEngine On
> RewriteRule ^/?(.*) http://www.example.com$1 [R,L]
> # This line also produces the double slash effect
> # RewriteRule ^/?(.*) http://www.example.com/$1 [R,L]
> 
>
>
Have you tried with RewriteRule ^(.*)$ https://www.example.com$1 [R=301,L]
? (note also the https, IIUC you need to force TLS).
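
And since you mentioned the letsencrypt exception, a sketch (untested) of
keeping /.well-known on plain http while forcing TLS for everything else:

    RewriteEngine On
    RewriteCond %{REQUEST_URI} !^/\.well-known/
    RewriteRule ^(.*)$ https://www.example.com$1 [R=301,L]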

Hope that helps,

Luca


Re: [users@httpd] rpmbuild of httpd-2.4.27 is successful

2017-07-12 Thread Luca Toscano
Hello David,

we don't have much control over the CentOS release schedule, so I would
suggest following up in their support mailing lists :)

Thanks!

Luca

2017-07-12 11:03 GMT+02:00 David Goudet :

> Hello,
>
> This is great news, thank you for the job.
>
> Currently in Centos7 (release 7.3.1611) repo (centos-7-updates-x86_64) we
> have httpd-2.4.6-45.el7.centos.4.
> When and how httpd-2.4.27 will be available on Centos7? (in SCL repository
> or in centos-7-updates-x86_64).
>
> Thank you for precisions
>
> BR,
>
> - Original Message -
> From: "kohmoto" 
> To: "httpd mailing list" 
> Sent: Wednesday, July 12, 2017 9:49:48 AM
> Subject: [users@httpd] rpmbuild of httpd-2.4.27 is successful
>
> Dear,
>
> I would like to report the rpmbuild of httpd-2.4.27 has been finished
> beautifully and successfully on CentOS Linux release 7.3.1611.
>
> Thank you all for the efforts to release this.
>
> Yours truly,
> Kazuhiko Kohmoto
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
> --
> David GOUDET
>
> LYRA NETWORK
> IT Operations service
> Tel : +33 (0)5 32 09 09 74 | Poste : 574
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


Re: [users@httpd] [ANNOUNCEMENT] Apache HTTP Server 2.4.27 Released

2017-07-11 Thread Luca Toscano
Also a more in depth explanation from the dev@ mailing list:

https://lists.apache.org/thread.html/bae472cadaeeb761b88bb4569cc0b7d87bc2dcb2fbcbf472d895f32e@%3Cdev.httpd.apache.org%3E

Luca

2017-07-11 15:56 GMT+02:00 Luca Toscano <toscano.l...@gmail.com>:

> Hi David,
>
> https://bz.apache.org/bugzilla/show_bug.cgi?id=61237 contains the
> background that brought to this decision :)
>
> Luca
>
>
> 2017-07-11 15:41 GMT+02:00 David Copeland <david.copel...@jsidata.ca>:
>
>> I'm wondering what the reason for this is?
>>
>> Thanks.
>>
>> On 11/07/17 09:04 AM, Jim Jagielski wrote:
>> >Apache HTTP Server 2.4.27 Released
>> >
>> >
>> > o HTTP/2 will not be negotiated when using the Prefork MPM
>> >
>>
>> --
>> David Copeland
>> JSI Data Systems Limited
>> 613-727-9353
>> www.jsidata.ca
>>
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>> For additional commands, e-mail: users-h...@httpd.apache.org
>>
>>
>


Re: [users@httpd] [ANNOUNCEMENT] Apache HTTP Server 2.4.27 Released

2017-07-11 Thread Luca Toscano
Hi David,

https://bz.apache.org/bugzilla/show_bug.cgi?id=61237 contains the
background that brought to this decision :)

Luca

2017-07-11 15:41 GMT+02:00 David Copeland :

> I'm wondering what the reason for this is?
>
> Thanks.
>
> On 11/07/17 09:04 AM, Jim Jagielski wrote:
> >Apache HTTP Server 2.4.27 Released
> >
> >
> > o HTTP/2 will not be negotiated when using the Prefork MPM
> >
>
> --
> David Copeland
> JSI Data Systems Limited
> 613-727-9353
> www.jsidata.ca
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


Re: [users@httpd] Graphical representation of serer status

2017-06-23 Thread Luca Toscano
2017-06-23 15:56 GMT+02:00 Hemant Chaudhary 
:

> Hi
>
> I want to have graphical representation of my apache server. Any module
> available to acheive this. I am working on httpd -2.4.25
>
>
There is the awesome https://github.com/Humbedooh/server-status that was
recently donated to the httpd project (
https://svn.apache.org/repos/asf/httpd/httpd/branches/2.4.x/docs/server-status
).

More info about how to use it in
https://svn.apache.org/repos/asf/httpd/httpd/branches/2.4.x/docs/server-status/README.md

Luca


Re: [users@httpd] server-statut ACC value and MaxConnectionsPerChild

2017-06-22 Thread Luca Toscano
Hi Bertrand,

2017-06-20 15:54 GMT+02:00 Bertrand Lods :

> Hi
>
> [root@fusion ~]# httpd -V
> Server version: Apache/2.4.6 (CentOS)
> Server built:   Apr 12 2017 21:03:28
> Server's Module Magic Number: 20120211:24
> Server loaded:  APR 1.4.8, APR-UTIL 1.5.2
> Compiled using: APR 1.4.8, APR-UTIL 1.5.2
> Architecture:   64-bit
> Server MPM: prefork
>   threaded: no
> forked: yes (variable process count)
> Server compiled with
>  -D APR_HAS_SENDFILE
>  -D APR_HAS_MMAP
>  -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
>  -D APR_USE_SYSVSEM_SERIALIZE
>  -D APR_USE_PTHREAD_SERIALIZE
>  -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
>  -D APR_HAS_OTHER_CHILD
>  -D AP_HAVE_RELIABLE_PIPED_LOGS
>  -D DYNAMIC_MODULE_LIMIT=256
>  -D HTTPD_ROOT="/etc/httpd"
>  -D SUEXEC_BIN="/usr/sbin/suexec"
>  -D DEFAULT_PIDLOG="/run/httpd/httpd.pid"
>  -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
>  -D DEFAULT_ERRORLOG="logs/error_log"
>  -D AP_TYPES_CONFIG_FILE="conf/mime.types"
>  -D SERVER_CONFIG_FILE="conf/httpd.conf"
>
> This my apache conf for mpm_prefork_module
>
> 
> StartServers 5
> MinSpareServers  5
> MaxSpareServers 10
> MaxRequestWorkers   32
> MaxConnectionsPerChild  350
> 
>
> When i look at server-status web interface for AccNumber of accesses (this
> connection / this child / this slot), I notice that ACC this child value
> don't tie in with my MaxConnectionsPerChild.
>
> The process don't die when AccNumber of accesses this child reaches 350
>
> Is this normal?
>
>
>
Have you verified if the PID changes or not (even with ps -aux etc..)? Does
the counter keep increasing or just stop at a given value bigger than 350?
I'd also make some tests with a more recent version of httpd.

Thanks!

Luca


Re: [users@httpd] 'require' directive result

2017-06-21 Thread Luca Toscano
Hi Andrei,

2017-06-16 15:23 GMT+02:00 Andrei Ivanov :

> Hi,
> Now that I've managed to configure my 'require' directive, I have a
> requirement to log some details to syslog in case the request is not
> authorized.
>
> 
>   Require expr ""
>   // if expression is false, log details about the request and maybe
> the SSL certificate to syslog
> 
>
> I've searched around, but I can't find how I could do that.
>

sorry for what might be a trivial question, but have you tried <If> / <Else> etc.. ?

https://httpd.apache.org/docs/2.4/mod/core.html#if
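
Another option that might work (untested sketch, and the expression below is
only a placeholder for whatever you use in Require expr) is to flag the
request with SetEnvIfExpr and log it with a dedicated CustomLog, possibly
piped to logger to reach syslog:

    SetEnvIfExpr "!(-R '10.0.0.0/8')" denied_request
    CustomLog "|/usr/bin/logger -t httpd-denied" "%h %{SSL_CLIENT_S_DN}x %r" env=denied_request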

Luca


Re: [users@httpd] if directive not being respected in Apache 2.4.6

2017-06-21 Thread Luca Toscano
Hi Chuck,

2017-06-09 18:36 GMT+02:00 Day, Chuck :

> While trying to set a conditional parameter for the OpenIDC apache module,
> it seems the directive is not being respected at run-time. For example:
>
>
>
> 
>
>Define locale1 fr-FR
>
> 
>
> 
>
>Define locale1 en-UK
>
> 
>
> OIDCAuthRequestParams locale=${locale1}
>
>
>
>
>
> The value of locale is set to en-UK. Have tried string match(i.e.
> -strmatch) with same results.
>
>
>
> Anyone successfully using the if directive in Apache 2.4 for a similar
> use-case? Thank You.
>
>
>
not sure if you are still working on this issue but would it be possible
for you to turn logs to trace8 (
https://httpd.apache.org/docs/2.4/mod/core.html#loglevel) and verify what
gets written in the error log? Moreover it would be interesting if you
could check again with httpd 2.4.26.

Thanks!

Luca


Re: [users@httpd] How does apache2.4 maintains php7.0 opcache in prefork model

2017-06-09 Thread Luca Toscano
Hi,

2017-06-09 17:42 GMT+02:00 Kalyana sundaram :

> Does each apache2.4 child processes maintain their own opcache or is there
> a global opcache shared by all children?
>

if you are talking about a prefork model with mod_php then I'd say that the
opcache is one per child and not shared.

Luca


Re: [users@httpd] Vendor Connection via Proxy to SNI Server response 403 Forbidden

2017-06-08 Thread Luca Toscano
Hi Reid,

 while re-reading the logs I noticed one thing:

2017-06-07 2:42 GMT+02:00 Reid Watson :

>
> [Wed Jun 07 11:54:28.887001 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(1086): [remote 54.230.144.17:443] SNI
> extension for SSL Proxy request set to 'Internal-site.test.com'
> [Wed Jun 07 11:54:28.887011 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1788): [remote 54.230.144.17:443]
> OpenSSL: Handshake: start
>
[..]

> [Wed Jun 07 11:54:29.302044 2017] [proxy_http:trace3] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1424): [client 10.0.0.1:19478] Status
> from backend: 403
> [Wed Jun 07 11:54:29.302056 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1099): [client 10.0.0.1:19478] Headers
> received from backend:
> [Wed Jun 07 11:54:29.302063 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478] Server:
> CloudFront
> [Wed Jun 07 11:54:29.302068 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478] Date:
> Tue, 06 Jun 2017 23:54:29 GMT
> [Wed Jun 07 11:54:29.302075 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478]
> Content-Type: text/html
> [Wed Jun 07 11:54:29.302078 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478]
> Content-Length: 555
> [Wed Jun 07 11:54:29.302082 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478]
> Connection: close
> [Wed Jun 07 11:54:29.302085 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478] X-Cache:
> Error from cloudfront
> [Wed Jun 07 11:54:29.302089 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478] Via: 1.1
> 515297ac55a7ae01bf8c7d03df4fecb1.cloudfront.net (CloudFront)
> [Wed Jun 07 11:54:29.302092 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478]
> X-Amz-Cf-Id: 
> [Wed Jun 07 11:54:29.302103 2017] [proxy_http:trace3] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1687): [client 10.0.0.1:19478] start
> body send
>

There is a clear indication that the SNI is wrong:

SNI extension for SSL Proxy request set to 'Internal-site.test.com'

So my understanding is that you correctly perform the TLS handshake to
Amazon CloudFront (used as CDN), but since the SNI is wrong you get a 403
from the backend. Can you try to replace your Rewrite rules with
mod_proxy_http and ProxyPass (
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html) and see if anything
changes (namely if the SNI is set to the one that you expect) ?
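Something along these lines is what I have in mind (an untested sketch,
simply reusing the hostnames already mentioned in this thread):

SSLProxyEngine On
ProxyPass        "/path/" "https://vendor-site.com/"
ProxyPassReverse "/path/" "https://vendor-site.com/"

The idea is to check whether mod_proxy_http then sets the SNI to the vendor
hostname instead of the local ServerName.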

Luca


Re: [users@httpd] Vendor Connection via Proxy to SNI Server response 403 Forbidden

2017-06-07 Thread Luca Toscano
2017-06-07 2:42 GMT+02:00 Reid Watson :

> Hi Luca,
>
> I think the vendor is might be putting me down the wrong path because I
> receive
>
> "[Wed Jun 07 11:54:29.302145 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1807): [remote 54.230.144.17:443]
> OpenSSL: Write: SSL negotiation finished successfully"
>
> I thought I would receive "SNI Hostname Error” if I had a mismatch
>
> auckland.collegescheduler.com (54.230.144.17) = External Vendor
>
> Log Snippet
>
> [Wed Jun 07 11:54:28.750881 2017] [proxy:debug] [pid 9177:tid
> 140532624602880] proxy_util.c(2394): [client 10.0.0.1:19478] AH00947:
> connected /api/institutiondata//COHORTS to
> auckland.collegescheduler.com:443
> [Wed Jun 07 11:54:28.886833 2017] [proxy:debug] [pid 9177:tid
> 140532624602880] proxy_util.c(2771): AH02824: HTTPS: connection established
> with 54.230.144.17:443 (*)
> [Wed Jun 07 11:54:28.886887 2017] [proxy:debug] [pid 9177:tid
> 140532624602880] proxy_util.c(2923): AH00962: HTTPS: connection complete to
> 54.230.144.17:443 (auckland.collegescheduler.com)
> [Wed Jun 07 11:54:28.886897 2017] [ssl:info] [pid 9177:tid
> 140532624602880] [remote 54.230.144.17:443] AH01964: Connection to child
> 0 established (server Internal-site.test.com:80)
> [Wed Jun 07 11:54:28.886921 2017] [ssl:trace2] [pid 9177:tid
> 140532624602880] ssl_engine_rand.c(124): Seeding PRNG with 144 bytes of
> entropy
> [Wed Jun 07 11:54:28.886985 2017] [ssl:trace4] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(1489): [remote 54.230.144.17:443]
> coalesce: have 0 bytes, adding 776 more
> [Wed Jun 07 11:54:28.886993 2017] [ssl:trace4] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(1551): [remote 54.230.144.17:443]
> coalesce: passing on 545 bytes
> [Wed Jun 07 11:54:28.887001 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(1086): [remote 54.230.144.17:443] SNI
> extension for SSL Proxy request set to 'Internal-site.test.com'
> [Wed Jun 07 11:54:28.887011 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1788): [remote 54.230.144.17:443]
> OpenSSL: Handshake: start
> [Wed Jun 07 11:54:28.887022 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1797): [remote 54.230.144.17:443]
> OpenSSL: Loop: before/connect initialization
> [Wed Jun 07 11:54:28.887040 2017] [ssl:trace4] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(2050): [remote 54.230.144.17:443]
> OpenSSL: write 277/277 bytes to BIO#7fd04400ad80 [mem: 7fd044021b10] (BIO
> dump follows)
>
> [Wed Jun 07 11:54:28.887149 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1797): [remote 54.230.144.17:443]
> OpenSSL: Loop: SSLv2/v3 write client hello A
> [Wed Jun 07 11:54:28.887154 2017] [core:trace6] [pid 9177:tid
> 140532624602880] core_filters.c(527): [remote 54.230.144.17:443]
> core_output_filter: flushing because of FLUSH bucket
> [Wed Jun 07 11:54:29.024967 2017] [ssl:trace4] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(2050): [remote 54.230.144.17:443]
> OpenSSL: read 7/7 bytes from BIO#7fd044019290 [mem: 7fd00c024be0] (BIO dump
> follows)
>
> [Wed Jun 07 11:54:29.165225 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1797): [remote 54.230.144.17:443]
> OpenSSL: Loop: SSLv3 read finished A
> [Wed Jun 07 11:54:29.165239 2017] [ssl:trace3] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1792): [remote 54.230.144.17:443]
> OpenSSL: Handshake: done
> [Wed Jun 07 11:54:29.165269 2017] [ssl:debug] [pid 9177:tid
> 140532624602880] ssl_engine_kernel.c(1841): [remote 54.230.144.17:443]
> AH02041: Protocol: TLSv1.2, Cipher: ECDHE-RSA-AES128-GCM-SHA256 (128/128
> bits)
> [Wed Jun 07 11:54:29.165288 2017] [ssl:trace4] [pid 9177:tid
> 140532624602880] ssl_engine_io.c(2050): [remote 54.230.144.17:443]
> OpenSSL: write 574/574 bytes to BIO#7fd04400ad80 [mem: 7fd00c02cd33] (BIO
> dump follows)
>
> [Wed Jun 07 11:54:29.302044 2017] [proxy_http:trace3] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1424): [client 10.0.0.1:19478] Status
> from backend: 403
> [Wed Jun 07 11:54:29.302056 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1099): [client 10.0.0.1:19478] Headers
> received from backend:
> [Wed Jun 07 11:54:29.302063 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478] Server:
> CloudFront
> [Wed Jun 07 11:54:29.302068 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478] Date:
> Tue, 06 Jun 2017 23:54:29 GMT
> [Wed Jun 07 11:54:29.302075 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478]
> Content-Type: text/html
> [Wed Jun 07 11:54:29.302078 2017] [proxy_http:trace4] [pid 9177:tid
> 140532624602880] mod_proxy_http.c(1101): [client 10.0.0.1:19478]
> Content-Length: 555
> [Wed Jun 07 11:54:29.302082 2017] [proxy_http:trace4] [pid 9177:tid
> 

Re: [users@httpd] Vendor Connection via Proxy to SNI Server response 403 Forbidden

2017-06-05 Thread Luca Toscano
Hi Reid,

2017-06-03 3:11 GMT+02:00 Reid Watson :

> Hi Everyone,
>
> There are few posts going around and I was wondering if any one had some
> advice or experienced a similar issues
>
> Current Apache Version: httpd-2.4.12
>
> Issue
>
> - External Vendor WebServer enables SNI check
> - I currently connect to vendor via proxy (from Http to Https)
> - I disable ssl checks on the certificate
> - Each time we make a connection I’m returned 403, the reason is the
> vendor enables SNI check and within the Client Hello (SSL Handshake) packet
> we set ServerName from vHost “Internal-site.test.com”
>
> Basic config
>
> 
>
>  ServerName Internal-site.test.com
>
>   SSLProxyCheckPeerName off
>   SSLProxyCheckPeerCN off
>   SSLProxyCheckPeerExpire off
>
>  RewriteCond %{REQUEST_URI} ^/path
>  RewriteRule ^/path/(.*) https://vendor-site.com/$1 [P,L,E=
> vendor-site.com]
>
> 
>
> Does any one have any advice on the current issue or a trick / workaround
> with mod_ssl / mod_proxy
>
> for example would I attempt to overwrite the environment variable "SetEnv
> SSL_TLS_SNI vendor-site.com” ?
>

My understanding is that you want to have a (reverse) http proxy that
responds to Internal-site.test.com with the content of vendor-site.com,
leaving httpd the responsibility of setting the "right" TLS SNI domain (in
this case the one that you want is vendor-site.com).

Is my understanding correct? Can you please turn loglevel to trace8 (
https://httpd.apache.org/docs/2.4/mod/core.html#loglevel) and show us what
httpd logs during a request that returns 403?

Luca


Re: [users@httpd] Apache 2.4.25 with openssl 1.1.0e

2017-06-05 Thread Luca Toscano
Hi,

2017-06-05 8:52 GMT+02:00 Hemant Chaudhary :

> Hi
>
> I am trying to build httpd-2.4.25 with openssl-1.1.0e. But getting error
> in SSLv2_Client_Method, CRYPTO_malloc_init functions .
>
> Whether anyone encountered the same problem?
> Does apache-2.4.25 support openssl 1.1.0e?
>

The support has been added only recently, so 2.4.25 does not support it.
You can try to apply the following patch though:

svn.apache.org/r1787728 (commit msg is a bit misleading, probably related
to another change)

Luca


Re: Re[2]: [users@httpd] apache in proxy mode introduces extra delay for sockjs in xhr poll mode

2017-06-01 Thread Luca Toscano
Hi Stepan,

Have you tried to explicitly set ProxyTimeout? If your environment is a
testing one, would it be possible for you to raise the LogLevel to trace8
and send us the logs (
https://httpd.apache.org/docs/2.4/mod/core.html#loglevel) ? I am assuming
that you have httpd 2.4, but which version? Have you tried the same with
the most recent release?
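For reference, this is the kind of tuning I have in mind (the values are only
placeholders to experiment with, reusing the backend address from your
config):

ProxyTimeout 5
ProxyPass        "/" "http://127.0.0.1:8080/" timeout=5 disablereuse=On
ProxyPassReverse "/" "http://127.0.0.1:8080/"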

Thanks!

Luca

2017-05-29 13:49 GMT+02:00 Stepan Yakovenko 
:

> Tried disablereuse=On, nothing changes.
>
>
> Пятница, 26 мая 2017, 14:12 +03:00 от Daniel :
>
> Have you tried with disabling reuse to see if the problem persists?
>
> 2017-05-26 12:54 GMT+02:00 Stepan Yakovenko  invalid>:
> > I need to handle users disconnecting from my sockjs application running
> in
> > xhr-polling mode. When I connect to localhost, everything works as
> expected.
> > When I put apache between nodejs and browser, I get ~20 sec delay between
> > closed browser and disconnect event inside nodejs. My apache proxy
> config is
> > following:
> >
> > 
> > ProxyPass http://127.0.0.1:8080/
> > ProxyPassReverse http://127.0.0.1:8080/
> > 
> >
> > The rest of the file is default, you can see it here. I tried playing
> with
> > ttl=2 and timeout=2 options, but either nothing changes, or I get
> > reconnected each 2 seconds without closing browser. How can I reduce
> > additional disconnect timeout, introduced, but apache, somewhere in its
> > defaults?
> >
> >
> >
> >
> > --
> > Stepan Yakovenko
>
>
>
> --
> Daniel Ferradal
> IT Specialist
>
> email dferradal at gmail.com
> linkedin es.linkedin.com/in/danielferradal
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>
>
> --
> Stepan Yakovenko
>


Re: [users@httpd] http/2 vs. Headername

2017-05-23 Thread Luca Toscano
Hi Hajo,

any chance that you could download/build/test the latest release of
https://github.com/icing/mod_h2/releases ?

Luca

2017-05-23 11:30 GMT+02:00 Hajo Locke :

> Hello,
>
> no one has an idea? Currently i believe this is a kind of apache bug.
> I compiled curl with http2 support to view more debug details:
>
> curl -v --http2 https://example.com/
>
> *   Trying ip.ip.ip.ip...
> * Connected to example.com (ip.ip.ip.ip) port 443 (#0)
> * found 173 certificates in /etc/ssl/certs/ca-certificates.crt
> * found 696 certificates in /etc/ssl/certs
> * ALPN, offering h2
> * ALPN, offering http/1.1
> * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
> *server certificate verification OK
> *server certificate status verification SKIPPED
> *common name: example.com (matched)
> *server certificate expiration date OK
> *server certificate activation date OK
> *certificate public key: RSA
> *certificate version: #3
> *subject: CN=example.com
> *start date: Mon, 22 May 2017 05:04:00 GMT
> *expire date: Sun, 20 Aug 2017 05:04:00 GMT
> *issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
> *compression: NULL
> * ALPN, server accepted to use h2
> * Using HTTP2, server supports multi-use
> * Connection state changed (HTTP/2 confirmed)
> * TCP_NODELAY set
> * Copying HTTP/2 data in stream buffer to connection buffer after upgrade:
> len=0
> * Using Stream ID: 1 (easy handle 0x55c2a77b4bd0)
> > GET / HTTP/1.1
> > Host: example.com
> > User-Agent: curl/7.47.0
> > Accept: */*
> >
> * Connection state changed (MAX_CONCURRENT_STREAMS updated)!
> * HTTP/2 stream 1 was not closed cleanly: error_code = 1
> * Closing connection 0
> curl: (16) HTTP/2 stream 1 was not closed cleanly: error_code = 1
>
> Same problem as in webbrowsers.
> problem can be avoided by disabling http2 modul or by removing Headername
> from .htaccess. Both is not intended.
> Somebody can confirm this problem?
>
> Thanks,
> Hajo
>
>
> Am 22.05.2017 um 09:22 schrieb Hajo Locke:
>
> Apache 2.4.25
>
> Hello,
>
> i have a small .htaccess with following content to view Foldercontents:
> ###
> Options +Indexes
> Headername /foo/bar.htm
> ###
> This is working by http, but fails in https if browser uses http/2.
> Chrome Message: ERR_SPDY_PROTOCOL_ERROR
> Firefox: Secure Connection Failed
>
> I dont see any error in my logs, http/2 Browsers just stop loading.
> When disabling http/2, also https is working.
> What to do now?
>
> Thanks,
> Hajo
>
>
>


Re: [users@httpd] Apache HTTP Server - 2.4.15-mod_prefork module

2017-05-23 Thread Luca Toscano
Hi!

Probably you have another LoadModule for an MPM in one of the included files
(look for Include directives in your httpd config); can you double check?

Luca

2017-05-23 10:07 GMT+02:00 Velmurugan Dhakshnamoorthy :

> But,  I am loading only one,  others are commented out.
>
> Thanks.
>
> On May 23, 2017 14:36, "Daniel"  wrote:
>
>> Of course, you should not load different mpm modules, only one at a
>> time. Load only the one you need.
>>
>> 2017-05-23 6:02 GMT+02:00 Velmurugan Dhakshnamoorthy > >:
>> > thanks, I installed the preform using "--enable-mpms-shared=all" , now
>> when
>> > I try to start the apache,
>> >
>> >  it throws an error "AH00534: httpd: Configuration error: More than one
>> MPM
>> > loaded"
>> >
>> > LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
>> > #LoadModule mpm_worker_module modules/mod_mpm_worker.so
>> > #LoadModule mpm_event_module modules/mod_mpm_event.so
>> >
>> >
>> >
>> > Regards,
>> > Velmurugan Dhakshnamoorthy (Vel)
>> >
>> >
>> > On Mon, May 22, 2017 at 7:49 PM, Daniel  wrote:
>> >>
>> >> You should ask the maintainer of the installation source from where
>> >> you got that Apache installation.
>> >>
>> >> If you compiled it yourself make sure to have this options with
>> configure:
>> >> --enable-mpms-shared=all
>> >>
>> >> 2017-05-22 10:04 GMT+02:00 Velmurugan Dhakshnamoorthy
>> >> :
>> >> > Dear,
>> >> > Any help how do I explicitly  install and enable mod_prefork module
>> for
>> >> > Apache 2.4.15 proxy.
>> >> >
>> >> > When I installed Apache proxy,  chose mod_modules to all,  but
>> prefork
>> >> > is
>> >> > not installed,  cannot see it in modules folder.
>> >> >
>> >> > Please help.
>> >> >
>> >> > Regards,
>> >> > Vel
>> >>
>> >>
>> >>
>> >> --
>> >> Daniel Ferradal
>> >> IT Specialist
>> >>
>> >> email dferradal at gmail.com
>> >> linkedin es.linkedin.com/in/danielferradal
>> >>
>> >> -
>> >> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>> >> For additional commands, e-mail: users-h...@httpd.apache.org
>> >>
>> >
>>
>>
>>
>> --
>> Daniel Ferradal
>> IT Specialist
>>
>> email dferradal at gmail.com
>> linkedin es.linkedin.com/in/danielferradal
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>> For additional commands, e-mail: users-h...@httpd.apache.org
>>
>>


Re: [users@httpd] apache 2.4 includes vi .swp files

2017-05-11 Thread Luca Toscano
2017-05-09 11:40 GMT+02:00 Nick Kew :

>
> > But i wonder if apache should basically tries to include a file
> > "beginning with dot"/"ending with swp" which generelly indicates a
> > temporary/hidden file.
>
> Once you start excluding files by convention (which may be
> entirely different and inappropriate on another platform),
> it's a minefield.


+1, I completely agree with Nick, I think that it is safer just to use a
more specific syntax for loading (like something/*.conf) than to implement
heuristics.
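For example, something like the following in the main configuration (the
directory name is only a placeholder):

# only pick up files that are really meant to be configuration
IncludeOptional conf.d/*.conf
# instead of a catch-all such as: Include conf.d/*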

Luca


Re: [users@httpd] cgi script error output logging

2017-05-11 Thread Luca Toscano
Hi Sandro,

have you checked
https://httpd.apache.org/docs/2.4/mod/core.html#errorlogformat ? What is
the current format that you are using? Also, what version of httpd?
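For reference, the directive takes a format string like the one below; this
is only to illustrate the syntax, whether the raw stderr lines coming from
fcgid actually go through it is exactly what your logs would tell us:

# prefix error-log entries with timestamp, module:level, pid and virtual host
ErrorLogFormat "[%{cu}t] [%-m:%-l] [pid %P] %v: %M"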

Luca

2017-05-11 10:07 GMT+02:00 KASPAR Sandro :

> Hi suomi,
>
>
> Thank you for your answer. Unfortunately I am not using php-fpm but fcgid.
> As far as I know there is no such possibility in fcgid.
>
> Any other ideas?
>
>
> Sandro
>
>
> --
> *From:* fedora 
> *Sent:* Thursday, May 11, 2017 6:54 AM
> *To:* users@httpd.apache.org
> *Subject:* Re: [users@httpd] cgi script error output logging
>
> Hi Sandro
>
> are you using php-fpm as a cgi frontend? If yes: the stdout and stderr
> are both redirectet to the php-fpm log (/var/log/php-fpm/*) if you have
> in /etc/php-fpm/www.conf:
>
> catch_workers_output = yes
>
> I don't think this will solve all your problems, but it is a good
> starting point.
>
> suomi
>
> On 05/10/2017 03:59 PM, KASPAR Sandro wrote:
> >
> > Hi,
> >
> > According to this documentation http://httpd.apache.org/docs/
> 2.4/logs.html everything a cgi script sends to stderr is written to the
> apache  error log file.
> >
> > Unfortunately I can not control those scripts running on my server and
> often garbage is sent to stderr and then written to my error log. Because
> there isn't even a timestamp or any other useful information on those
> lines, I can't find out, which vhost created  the error. In addition those
> log lines often lack a "new line" at the end, and it the next log message
> gets appended to the current line instead of being written to a new line.
> Because of this, the next line which would have been in the correct format
> is also lost, because it can't be parsed automatically anymore.
> >
> > Is there any way to change this behaviour? What I would like to achieve
> is to have the cgi errors in a seperate logfile or even better change the
> log format of these lines. For example prefix the log line with the vhost
> and a timestamp.
> > Because I don't have control over the cgi scripts, I would need to
> configure this in apaches main or vhost configuration. Unfortunately I
> could not find anything about cgi script output logging in the apache
> documentation (except that everything sent to  stderr is written to
> httpd-error.log) or mailing list archives.
> >
> > Any ideas appreciated. Thanks in advance!
> >
> > Sandro Kaspar
> > -
> > To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> > For additional commands, e-mail: users-h...@httpd.apache.org
> >
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


Re: [users@httpd] I need help figuring out a 500 response code

2017-05-05 Thread Luca Toscano
Hi John,

can you share with us your error log (redacting IPs and personal info) so
we can check as well?

Otherwise I'd suggest reaching out to the #httpd Freenode IRC channel for
a quicker response; there are a lot of people in there that might help you.

Luca

2017-05-04 19:33 GMT+02:00 John Covici :

> Hi again.  Is there any way I can get help on my problem?  I am pretty
> desperate -- I have shared hundreds of links and they are all no good
> till I get this working again.
>
> On Wed, 03 May 2017 09:08:35 -0400,
> Daniel wrote:
> >
> > [1  ]
> > [2  ]
> > Perhaps you should also add how you are configuring httpd to handle the
> interpretation of PHP files.
> >
> > That is, if you are, for example using mod_proxy_fcgi to send php file
> requests to php-fpm you should see your 500 detailed errors there instead
> of Apache.
> >
> > Apache will always log 500status errors, so maybe you should make sure
> you are checking the correct login if you are not using the case I describe
> above.
> >
> > If you are using the dreaded mod_php you should check for php directives
> you can specify for more verbose logging onto why your php scripts fail.
> >
> > I use owncloud too, so if you want I can show you a configuration
> snippet on how to set apache with mod_proxy_fcgi reverse proxy php requests
> to a php-fpm pool
> >
> > 2017-05-03 11:21 GMT+02:00 John Covici :
> >
> >  The error_log just had one line or in debug mode a lot of information
> >  about ssl and several lines about requireall granted, but no further
> >  information about the error.
> >
> >  On Wed, 03 May 2017 02:55:28 -0400,
> >  Dr James Smith wrote:
> >  >
> >  > Is there an error.log in the same directory? This is usually in
> >  > the same directory this should contain some information about why
> >  > the system failed.
> >  >
> >  >
> >  > On 03/05/2017 07:41, John Covici wrote:
> >  > > Hi. I am having major problems figuring out a 500 response code I am
> >  > > getting on my hserver.
> >  > >
> >  > > I am using apache 2.4.25 on gentoo linux up to date as of a few days
> >  > > ago.
> >  > >
> >  > > So, I have installed owncloud which is a cloud server written in php
> and
> >  > > it has worked for a long time, but for a few days I have gotten 500
> >  > > when I try to access it. Now, I am using https normally to access
> and
> >  > > when I look at the error_log, I get just one line like this:
> >  > >
> >  > > [Wed May 03 02:14:37.074791 2017] [ssl:info] [pid 22312] [client
> >  > > 192.168.0.2:56613] AH01964: Connection to child 0 established
> (server
> >  > > ccs.covici.com:443)
> >  > >
> >  > > If I change the loglevel to debug, I get all kinds of ssl
> information
> >  > > and the lines saying that requireall was granted, but nothing about
> >  > > the error.
> >  > >
> >  > > Now, if I change to http access, on my access_log I get lines like
> the
> >  > > following:
> >  > >
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET /owncloud
> HTTP/1.1"
> >  > > 301 295
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET /owncloud
> HTTP/1.1"
> >  > > 301 295 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0;
> >  > > rv:11.0) like Gecko"
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET /owncloud
> HTTP/1.1"
> >  > > 301 295 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0;
> >  > > rv:11.0) like Gecko"
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET /owncloud/
> HTTP/1.1"
> >  > > 302 -
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET /owncloud/
> HTTP/1.1"
> >  > > 302 - "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0;
> rv:11.0)
> >  > > like Gecko"
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET /owncloud/
> HTTP/1.1"
> >  > > 302 - "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0;
> rv:11.0)
> >  > > like Gecko"
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET
> >  > > /owncloud/index.php/login HTTP/1.1" 500 -
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET
> >  > > /owncloud/index.php/login HTTP/1.1" 500 - "-" "Mozilla/5.0 (Windows
> NT
> >  > > 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
> >  > > 192.168.0.2 - - [03/May/2017:02:33:38 -0400] "GET
> >  > > /owncloud/index.php/login HTTP/1.1" 500 - "-" "Mozilla/5.0 (Windows
> NT
> >  > > 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
> >  > >
> >  > > Now, owncloud has their own log, but I get nothing in it.
> >  > >
> >  > > So, my question is how to find out more about why I am getting the
> 500
> >  > > response and what I can do about it.
> >  > >
> >  > > Thanks in advance for any suggestions.
> >  > >
> >  >
> >  >
> >  >
> >  > --
> >  > The Wellcome Trust Sanger Institute is operated by Genome
> >  > Research Limited, a charity registered in England with number
> >  > 1021457 and a company registered in England with number 2742969,
> >  > whose registered office is 215 Euston Road, London, NW1 2BE.
> >  > 

Re: [users@httpd] Apache + Squid Proxy: AH01991: SSL input filter read failed

2017-05-03 Thread Luca Toscano
Hi,

2017-05-02 19:18 GMT+02:00 chiasa.men :

> Hi,
> my apache is behind a squid proxy which is configured like that:
> https_port 3128 accel cert=/cert.pem key=/cert.key defaultsite=
> ww1.example.com
> vhost
> acl server20_domains dstdomain ww1.example.com ww2.example.com
> http_access allow server20_domains
> cache_peer server20 parent 443 0 no-query originserver name=server20
> login=PASSTHRU ssl sslversion=6
> cache_peer_access server20 allow server20_domains
> cache_peer_access server20 deny all
>
> The idea was to send ww1 and ww2 to server20 which is hosting an apache
> webservice for both sites.
> It works but each time I visit one of those sites the following messages
> appear in apache's logs:
>
> [00:00:39.641665] ---
> [00:00:44.641883] [ssl:info] ssl_engine_io.c(675): (70007)The timeout
> specified has expired: [client wwwclient:47122] AH01991: SSL input filter
> read
> failed.
> [00:00:44.642170] [ssl:info] ssl_engine_io.c(675): (70007)The timeout
> specified has expired: [client wwwclient:47120] AH01991: SSL input filter
> read
> failed.
> [00:00:44.642442] [ssl:info] ssl_engine_io.c(675): (70007)The timeout
> specified has expired: [client wwwclient:47118] AH01991: SSL input filter
> read
> failed.
> [00:00:44.642570] [ssl:info] ssl_engine_io.c(675): (70007)The timeout
> specified has expired: [client wwwclient:47124] AH01991: SSL input filter
> read
> failed.
> [00:00:44.642977] [ssl:debug] ssl_engine_io.c(1016): -: [client wwwclient:
> 47118] AH02001: Connection closed to child 11 with standard shutdown
> (server
> ww1.example.com:443)
> [00:00:44.643241] [ssl:debug] ssl_engine_io.c(1016): -: [client wwwclient:
> 47124] AH02001: Connection closed to child 6 with standard shutdown (server
> ww1.example.com:443)
> [00:00:44.643373] [ssl:debug] ssl_engine_io.c(1016): -: [client wwwclient:
> 47120] AH02001: Connection closed to child 5 with standard shutdown (server
> ww1.example.com:443)
> [00:00:44.643560] [ssl:debug] ssl_engine_io.c(1016): -: [client wwwclient:
> 47122] AH02001: Connection closed to child 8 with standard shutdown (server
> ww1.example.com:443)
> [00:00:44.647119] [ssl:info] ssl_engine_io.c(675): (70007)The timeout
> specified has expired: [client wwwclient:47116] AH01991: SSL input filter
> read
> failed.
> [00:00:44.647347] [ssl:debug] ssl_engine_io.c(1016): -: [client wwwclient:
> 47116] AH02001: Connection closed to child 3 with standard shutdown (server
> ww1.example.com:443)
>
> The corresponding squid access.log entries would be:
> [00:00:39] "GET https://ww1.example.com/a/ HTTP/1.1" 503 4033 "-" "ua"
> TCP_MISS:FIRSTUP_PARENT
> [00:00:39] "GET https://ww1.example.com/some.js HTTP/1.1" 304 240
> "https://
> ww1.example.com/a/" "ua" TCP_MISS:FIRSTUP_PARENT
> [00:00:39] "GET https://ww1.example.com/someother.js HTTP/1.1" 304 239
> "https://ww1.example.com/a/; "ua" TCP_MISS:FIRSTUP_PARENT
> [00:00:39] "GET https://ww1.example.com/more.js HTTP/1.1" 304 241
> "https://
> ww1.example.com/a/" "ua" TCP_MISS:FIRSTUP_PARENT
> [00:00:39] "GET https://ww1.example.com/some.css HTTP/1.1" 304 277
> "https://
> ww1.example.com/a/" "ua" TCP_MISS:FIRSTUP_PARENT
> [00:00:39] "GET https://ww1.example.com/someother.css HTTP/1.1" 304 277
> "https://ww1.example.com/a/; "ua" TCP_MISS:FIRSTUP_PARENT
> [00:00:39] "GET https://ww1.example.com/a.png HTTP/1.1" 304 241 "https://
> ww1.example.com/a/" "ua" TCP_MISS:FIRSTUP_PARENT
>
>
> You can see that approximately after 5s the timeout happens. Is it a
> message
> to worry about? (it is just "info" labled) Why does it occur?
>
> I sent basically the same problem to squid's mailing list because I
> supposed
> squid was the problematic part here. But since they suggested apache could
> be
> the weirdo, I'm asking here
> Thanks for your help
>

I'd need to ask you a couple of questions since I am not familiar with
Squid:

1) Does Squid terminate TLS/SSL or is it proxied to httpd in some way? Can
you describe your setup a bit more?
2) Can you share your httpd configuration? Do you have any timeout set in
httpd or Squid that might explain this (check the default timeouts too)?
3) Not super familiar with Squid, but from the logs it seems that a 503 is
logged for https://ww1.example.com/a/. Is that normal?

Luca


Re: [users@httpd] Apache 2.4 with Mysql authentication

2017-05-02 Thread Luca Toscano
Hi David,

2017-05-01 23:39 GMT+02:00 David Mehler :

>
> Can someone take a look at my mysql setup and tell me if I have any
> mistakes in it?
>

Can you tell us what is the issue that you are seeing? Anything relevant in
the error_log? What version of httpd?

Thanks,

Luca


Re: [users@httpd] how to enable TLS v1.1 and TLS v1.2 alone in Apache 2.4.10 ?

2017-05-02 Thread Luca Toscano
Hi,

I'd suggest reaching out to the #httpd IRC channel on Freenode: a lot of
people in there can help you more quickly than a users@ email thread,
especially since your issue will require a lot of details not yet provided.
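Just for reference, the usual way to restrict the protocols looks like the
line below, but it only works if mod_ssl was built against an OpenSSL that
already knows about TLS 1.1/1.2 (1.0.1 or later); otherwise the TLSv1.1 and
TLSv1.2 keywords are rejected at startup:

SSLProtocol -all +TLSv1.1 +TLSv1.2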

Luca

2017-05-01 15:20 GMT+02:00 Chunduru, Krishnachaithanya <
krishnachaithanya.chund...@broadridge.com>:

> Hi,
>
>
>
> Thanks for the info.
>
>
>
> I have already tried this, but was getting fatal mod_ssl error while
> enabling TLSv1.1 or 1.2.
>
>
>
> *Regards,*
>
> *Krishna*
>
>
>
> *From:* K R [mailto:kp0...@gmail.com]
> *Sent:* Saturday, April 29, 2017 9:28 AM
>
> *To:* users@httpd.apache.org
> *Subject:* Re: [users@httpd] how to enable TLS v1.1 and TLS v1.2 alone in
> Apache 2.4.10 ?
>
>
>
> https://serverfault.com/questions/314858/how-to-
> enable-tls-1-1-and-1-2-with-openssl-and-apache
>
>
>
> On Wed, Apr 19, 2017 at 7:37 AM, Chunduru, Krishnachaithanya <
> krishnachaithanya.chund...@broadridge.com> wrote:
>
> Hi Eric/All,
>
> Can you please help me with the below.
>
> Regards,
> Krishna
>
>
> -Original Message-
> From: Chunduru, Krishnachaithanya [mailto:Krishnachaithanya.
> chund...@broadridge.com]
> Sent: Monday, April 17, 2017 6:34 PM
> To: users@httpd.apache.org
> Subject: RE: [users@httpd] how to enable TLS v1.1 and TLS v1.2 alone in
> Apache 2.4.10 ?
>
> Hi Eric,
>
> We used the openssl version is 1.0.1.515 while installing the Apache
> 2.4.10.
>
> Regards,
> Krishna
>
> -Original Message-
> From: Eric Covener [mailto:cove...@gmail.com]
> Sent: Monday, April 17, 2017 6:18 PM
> To: users@httpd.apache.org
> Subject: Re: [users@httpd] how to enable TLS v1.1 and TLS v1.2 alone in
> Apache 2.4.10 ?
>
> On Mon, Apr 17, 2017 at 6:59 AM, Chunduru, Krishnachaithanya <
> krishnachaithanya.chund...@broadridge.com> wrote:
> > Is TLS v1.1 and v1.2 not supported in Apache 2.4.10 running with
> > Openssl
> > 1.0.2.1000 ? your suggestions are highly appreciated as this is
> > pending in my account from long time.
>
> It probably depends what openssl  build your httpd was built against, not
> just what's loaded at runtime.
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>
> This message and any attachments are intended only for the use of the
> addressee and may contain information that is privileged and confidential.
> If the reader of the message is not the intended recipient or an authorized
> representative of the intended recipient, you are hereby notified that any
> dissemination of this communication is strictly prohibited. If you have
> received this communication in error, please notify us immediately by
> e-mail and delete the message and any attachments from your system.
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>
>
>


Re: [users@httpd] Apache as HTTP Proxy: GZIP compression handling configuration question

2017-05-01 Thread Luca Toscano
Hi Markus,

from your previous emails I understood a different picture, namely that you
didn't want to send compressed requests to the backend to keep it as simple
as possible.

To solve your problem you might try to use SetOutputFilter INFLATE inside a
dedicated Proxy section
(https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxy) and leave the
rest as it is (so httpd does not try to compress any response to the client).
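Something like the following is what I have in mind (the backend hostname is
only a placeholder for yours):

<Proxy "https://backend.example.com/*">
    # decompress what comes back from this backend before it reaches the client
    SetOutputFilter INFLATE
</Proxy>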

Hope that helps!

Luca

2017-04-28 17:36 GMT+02:00 Markus Gausling <markusgausl...@googlemail.com>:

> I am not sure exactly how I can configure that, i.e. when I used the
> following:
>
> 
>  RequestHeader set Accept-Encoding gzip
>  SetOutputFilter INFLATE
>
>  SetOutputFilter DEFLATE
> 
>
> How would Apache know that the content when going to the backend shall be
> compressed while the content provided to the clients shall be decompressed?
>
> Forward Proxy Use Case:
>
> Client             Forward Proxy             Backend
>   |                      |                         |
>   |  HTTP POST           |                         |
>   |  (Body uncompressed) |                         |
>   |--------------------->|                         |
>   |                      |  HTTP POST              |
>   |                      |  Content-Encoding: gzip |
>   |                      |  Accept-Encoding: gzip  |
>   |                      |------------------------>|
>   |                      |                         |
>   |                      |  200 OK                 |
>   |                      |  (Resp. body might be   |
>   |                      |   compressed)           |
>   |                      |<------------------------|
>   |                      |                         |
>   |  200 OK              |                         |
>   |  (Body uncompressed) |                         |
>   |<---------------------|                         |
>   |                      |                         |
>
>
>
> The GET requests for Forward Proxy and Reverse Proxy are similar.
> Proxy adds "Accept-Encoding: gzip" header to each client request. If content
> from Backend comes compressed it will be decompressed and returned with
> correct header type to client:
>
> Client             Reverse Proxy             Backend
>   |                      |                         |
>   |  HTTP GET            |                         |
>   |  (Body uncompressed) |                         |
>   |--------------------->|                         |
>   |                      |  HTTP GET               |
>   |                      |  Accept-Encoding: gzip  |
>   |                      |------------------------>|
>   |                      |                         |
>   |                      |  200 OK                 |
>   |                      |  Content-Encoding: gzip |
>   |                      |<------------------------|
>   |                      |                         |
>   |  200 OK              |                         |
>   |  (Body uncompressed) |                         |
>   |<---------------------|                         |
>   |                      |                         |
>
>
>
> 2017-04-28 15:57 GMT+02:00 Luca Toscano <toscano.l...@gmail.com>:
>
>> Hi Markus,
>>
>> 2017-04-26 12:21 GMT+02:00 Markus Gausling <markusgausl...@googlemail.com
>> >:
>>
>>> Hello,
>>>
>>> I am using Apache (2.4.10) as an HTTP Proxy with two virtual hosts
>>> listening
>>> on different ports:
>>> - Forward Proxy
>>> - Reverse Proxy
>>>
>>> Depending on the use case applications either use the Forward Proxy or
>>> the
>>> Reverse Proxy.
>>>
>>> Now I want to make sure that for both virtual hosts the proxy does
>>> handles
>>> content compression (using gzip). Basically there are two use cases that
>>> need to be configured:
>>> - Use Case 1 - Request compressed content and decompress received content
>>> - Use Case 2 - Compress outgoing traffic (HTTP POST)
>>>
>>> This is to ensure that applications using any of the two HTTP Proxies do
>>> not
>>> need to handle content inflation/deflation (besides other things the
>>> proxies are configured to do).
>>> The applications are

Re: [users@httpd] Apache as HTTP Proxy: GZIP compression handling configuration question

2017-04-28 Thread Luca Toscano
Hi Markus,

2017-04-26 12:21 GMT+02:00 Markus Gausling :

> Hello,
>
> I am using Apache (2.4.10) as an HTTP Proxy with two virtual hosts
> listening
> on different ports:
> - Forward Proxy
> - Reverse Proxy
>
> Depending on the use case applications either use the Forward Proxy or the
> Reverse Proxy.
>
> Now I want to make sure that for both virtual hosts the proxy does handles
> content compression (using gzip). Basically there are two use cases that
> need to be configured:
> - Use Case 1 - Request compressed content and decompress received content
> - Use Case 2 - Compress outgoing traffic (HTTP POST)
>
> This is to ensure that applications using any of the two HTTP Proxies do
> not
> need to handle content inflation/deflation (besides other things the
> proxies are configured to do).
> The applications are basically simple libcurl programs that shall be kept
> as
> simple as possible, which is the reason of this exercise.
>
> Use Case 1 works fine when I add the "Accept-Encoding: gzip" header to
> each
> outgoing request and when I inflate received content. This is achieved by
> adding the following to the Virtual Host section of each proxy:
>
> 
> RequestHeader set Accept-Encoding gzip
> SetOutputFilter INFLATE
> 
>
> My problem is that I am not able to configure the Virtual Hosts so that
> each
> HTTP POST request from an application (with uncompressed body) gets
> deflated
> in the HTTP Proxy before being sent to the Web Server.
>
> So my questions are:
> - Is this supported by mod_deflate anyway?
> - How would I need to configure mod_deflate for this?
> - Do I need to handle the Forward and the Reverse Proxy separately or is
> the
>  configuration the same?
>

I am probably missing something important but I'd have just used
SetOutputFilter DEFLATE for your use cases. My assumption is that the
reverse proxy can handle request compression and does not pass any
(compressed) requests to the backend as they are, because it shouldn't
assume anything about the backend (like being able to handle compression).
The HTTP POST outgoing traffic should be deflated with the SetOutputFilter
directive as every response returned by the proxy (or maybe a subset of
them, depending on the config).

Luca


Re: [users@httpd] Re: Error trying to use 'mod_auth_form' and 'mod_dbd' with sqlite3

2017-04-28 Thread Luca Toscano
Hi Tom,

2017-04-28 1:16 GMT+02:00 Tom Donovan <donov...@bellatlantic.net>:

> On 04/26/2017 06:49 AM, Tom Browder wrote:
>
>> On Wed, Apr 26, 2017 at 05:06 Tom Browder <tom.brow...@gmail.com > tom.brow...@gmail.com>> wrote:
>>
>> On Wed, Apr 26, 2017 at 04:04 Luca Toscano <toscano.l...@gmail.com
>> <mailto:toscano.l...@gmail.com>> wrote:
>>
>> > I think I just discovered I what the problem is: I'm using
>> harp.js to
>> > build my site and the  is compiling incorrectly.
>>
>> Well, that wasn't the problem.
>>
>> The error is still:
>>
>> [dbd:error] [pid 18921:tid 140512673658624] (20014)Internal
>> error:
>>  AH00632: failed to prepare SQL statements: near
>> "authn_query": syntax error
>>
>>
>> I have no clue as to why dbd isn't initializing. Until someone can tell me
>> exactly how to get it
>> working, I'm going to try the file method. Bummer!
>>
>> -Tom
>>
>
> I am able to reproduce your error, and it does seem to be a bug in mod_dbd
> when prepared statements are used inside  blocks for
> authentication.
>
> Instead of using DBDPrepareSQL to create a prepared statement, please try
> removing the DBDPrepareSQL line entirely and put your SQL statement
> directly in your AuthDBDUserPWQuery directive, like this:
>
>  AuthDBDUserPWQuery "SELECT password FROM authn WHERE user = %s"
>
> Note that Luca's advice was correct:  no single-quotes around the %s, and
> no terminating semicolon should be used in the SQL for httpd configuration
> directives, even though you would use them in interactive sqlite3 SQL
> commands.
>
> There should be no performance penalty for doing it this way.
> AuthDBDUserPWQuery automatically generates prepared statements.  Your
> original directives seem reasonable per the current documentation, so it's
> a bug - although I'm not sure (yet) if it's a doc bug or a code bug.



Thanks a lot for your review, I didn't check where the prepared statement
was used. As far as I can tell from
https://httpd.apache.org/docs/2.4/mod/mod_authn_dbd.html#authdbduserpwquery,
 AuthDBDUserPWQuery requires a SQL statement and not a label, so I think
that the error message "failed to prepare SQL statements: near
"authn_query": syntax error" is related to the fact that mod_dbd can't make
any prepared statement out of the raw string "authn_query".

Am I missing something in the docs that talks about prepared statements and
AuthDBDUserPWQuery? (apologies if so)

About the syntax of the prepared statement: httpd leverages APR's dbd SQL
syntax, that is outlined in:
https://apr.apache.org/docs/apr/2.0/group___a_p_r___util___d_b_d.html#gacf21412447c4357c64d1e9200a0f5eec
(so no vendor specific statements).
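For completeness, this is the shape of the configuration that Tom describes
above (the database path is only a placeholder; the driver matches the
sqlite3 setup of this thread):

DBDriver sqlite3
DBDParams "/path/to/site.db"
AuthDBDUserPWQuery "SELECT password FROM authn WHERE user = %s"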

The documentation definitely needs some improvement :/

Luca


Re: [users@httpd] Re: Reg : Limiting http connections at Apache 2.4.25

2017-04-27 Thread Luca Toscano
Hi!

The closest option seems to be
http://mod-qos.sourceforge.net/#connectionlevelcontrol but I don't have a
lot of experience with it, so you'll need to run some tests :)
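As a very rough starting point (the directive names come from the mod_qos
documentation linked above, so please double-check them against the version
you compiled; the numbers are only examples):

# cap the whole server at ~100 concurrent TCP connections
QS_SrvMaxConn 100
# optionally start closing keep-alive connections a bit earlier
QS_SrvMaxConnClose 80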

Luca

2017-04-26 13:29 GMT+02:00 Velmurugan Dhakshnamoorthy <dvel@gmail.com>:

> Thanks Luca,  your help is much appreciated.
>
> I was able to compile and load mod_qos in Apache 2.4.25 proxy.
> There are many parameters related to QOS,  I am trying to implement a
> parameter to restrict HTTP session beyond specified limit,  example I would
> like to allow only maximum 100 connections,  beyond that should throw an
> error.
>
> Regards,
> Vel
>
> On Apr 21, 2017 18:52, "Luca Toscano" <toscano.l...@gmail.com> wrote:
>
>> Hi,
>>
>> I think that you'd just need to install httpd without any reference to
>> mod_qos (that is a third party module, so configure is not aware of it) and
>> finally use apxs to compile/install the new module (more info in
>> https://httpd.apache.org/docs/2.4/programs/apxs.html).
>>
>> I'd also suggest to test it with httpd 2.4.25 rather than 2.2.27 :)
>>
>> Hope that helps!
>>
>> Luca
>>
>> 2017-04-21 8:29 GMT+02:00 Velmurugan Dhakshnamoorthy <dvel@gmail.com>
>> :
>>
>>> Hi Luca,
>>> I am trying to use the mod_qos mod_qos-11.39. Is this applicable for
>>> Apache 2.4.25 reverse proxy in Red Hat Enterprise Linux 7.2. I followed the
>>> below command. however, mod_qos.so file is not getting created though I
>>> don't see any error with below commands. I am using non-root privileged
>>> account.
>>>
>>> I tried compiling it using "apxs -i -c mod_qos.c -lcrypto -lpcre" , but
>>> could not succeed.
>>> any help ?
>>>
>>> tar xfz httpd-2.2.27.tar.gz
>>> tar xfz mod_qos-11.39-src.tar.gz
>>> ln -s httpd-2.2.27 httpd
>>> cd httpd
>>> mkdir modules/qos
>>> cp ../mod_qos-11.39/apache2/* modules/qos
>>> ./buildconf
>>> ./configure --with-mpm=worker --enable-so --enable-qos=shared
>>> --enable-ssl --enable-unique-id
>>> make
>>>
>>>
>>> Regards,
>>> Velmurugan Dhakshnamoorthy (Vel)
>>> Singapore.
>>>
>>> On Tue, Apr 18, 2017 at 2:51 PM, Luca Toscano <toscano.l...@gmail.com>
>>> wrote:
>>>
>>>> Not sure what is the status of mod_qos (third party module), but you
>>>> might want to give it a try and see if it fits your needs!
>>>>
>>>> http://mod-qos.sourceforge.net/#requestlevelcontrol
>>>>
>>>> Luca
>>>>
>>>> 2017-04-17 3:08 GMT+02:00 Velmurugan Dhakshnamoorthy <
>>>> dvel@gmail.com>:
>>>>
>>>>> Dear All,
>>>>> Any specific setup to cut and disallow the the further HTTP
>>>>> connections after specified limit (ex: 50 sessions?).
>>>>>
>>>>> My requirement is to allow only 50 users and 51st user should get a
>>>>> custom error message to login after sometime.
>>>>>
>>>>> Regards,
>>>>> Vel
>>>>>
>>>>> On Mar 16, 2017 21:30, "Velmurugan Dhakshnamoorthy" <
>>>>> dvel@gmail.com> wrote:
>>>>>
>>>>>> Thanks for response,
>>>>>>
>>>>>> Yes my requirement is to completely restrict/disalllow any further
>>>>>> connections, example I want to allow only 50 sessions,  51st connection
>>>>>> should get an error message to login later  after certain period of time.
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Vel
>>>>>>
>>>>>> On Mar 16, 2017 18:58, "Nick Kew" <n...@apache.org> wrote:
>>>>>>
>>>>>>> On Thu, 2017-03-16 at 02:05 +0100, Daniel wrote:
>>>>>>> > See about mpm settings/directives such as MaxRequestWorkers, which
>>>>>>> > will limit the number of concurrent requests your server can take.
>>>>>>>
>>>>>>> Indeed, but I don't think that's what the OP is looking for in an
>>>>>>> apache proxy.  Rather the proxy reply with a "too busy" error page
>>>>>>> than not take the connection at all, right?
>>>>>>>
>>>>>>> The proxy balancer would be a place to look: that offers various
>>>>>>> ways to determine how much traffic to send to a backend.  If that
>>>>>>> doesn't meet your needs, there are several third-party traffic-
>>>>>>> limiting modules.
>>>>>>>
>>>>>>> --
>>>>>>> Nick Kew
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 
>>>>>>> -
>>>>>>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>>>>>>> For additional commands, e-mail: users-h...@httpd.apache.org
>>>>>>>
>>>>>>>
>>>>
>>>
>>


Re: [users@httpd] Looking for direction: porting server from Apache 2.2.2 to 2.4.6 - ProxyHTMLURLMap ?

2017-04-27 Thread Luca Toscano
Hi Jeff,

2017-04-26 19:35 GMT+02:00 Jeff Cauhape :

> Hi,
>
>
>
> I’ve been given the task of moving a website from Apache 2.2.2 on Solaris
> to Apache 2.4.6 on Linux.
>
>
>
> So far, so good, but I’m running into a ‘syntax’ error when a config file
> uses the ProxyHTMLURLMap
>
> function. Apache 2.4.6 does not recognize this, and I don’t have the
> module (proxy_html_module /
>
> mod_proxy_html.so) required to support it.
>
>
>
> So… should I substitute calls to ProxyHTMLURLMap to some other function in
> 2.4.6 (if so which),
>
> or if not, where can I get the compiled function or source code?  From my
> take on the doc, proxy_html_module
>
> is supposed to be available, but is not included. It would be handy to
> know where I can get the source
>
> and compile my own, if that’s the way to go.
>
>
>
> Thanks in advance for any tips or suggestions,
>

mod_proxy_html is included in the official httpd release so you can easily
find it in https://httpd.apache.org. I think that your distribution might
offer it under a separate package, but it is difficult to say without more
info.
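Once the module is available, the 2.4 configuration usually looks something
like this (the backend hostname is only a placeholder; on 2.4 you normally
also need mod_xml2enc):

LoadModule xml2enc_module    modules/mod_xml2enc.so
LoadModule proxy_html_module modules/mod_proxy_html.so

ProxyHTMLEnable On
ProxyHTMLURLMap http://backend.example.com/ /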

Hope that helps!

Luca

>


Re: [users@httpd] Re: Error trying to use 'mod_auth_form' and 'mod_dbd' with sqlite3

2017-04-26 Thread Luca Toscano
Hi Tom,

2017-04-26 3:23 GMT+02:00 Tom Browder :

> On Tue, Apr 25, 2017 at 14:47 Tom Browder  wrote:
> >
> > On Tue, Apr 25, 2017 at 12:03 PM, Tom Browder 
> wrote:
> > > Host: httpd version 2.4.25, Debian 8, 64-bit
> > >
> > > I am so close but getting the following error:
> > ...
> >
> > I think I just discovered I what the problem is: I'm using harp.js to
> > build my site and the  is compiling incorrectly.
>
>
> Well, that wasn't the problem.
>
> The error is still:
>
> [dbd:error] [pid 18921:tid 140512673658624] (20014)Internal error:
>   AH00632: failed to prepare SQL statements: near "authn_query": syntax
> error
>

 > DBDPrepareSQL "SELECT password FROM authn WHERE user = '%s';" authn_query

Not a big expert with dbd, but looking at
https://httpd.apache.org/docs/current/mod/mod_session_dbd.html#dbdconfig it
seems that your prepared SQL statement is malformed. Can you try something
like:

DBDPrepareSQL "SELECT password FROM authn WHERE user = %s" authn_query

?

Luca


Re: [users@httpd] Re: Reg : Limiting http connections at Apache 2.4.25

2017-04-21 Thread Luca Toscano
Hi,

I think that you'd just need to install httpd without any reference to
mod_qos (that is a third party module, so configure is not aware of it) and
finally use apxs to compile/install the new module (more info in
https://httpd.apache.org/docs/2.4/programs/apxs.html).

I'd also suggest to test it with httpd 2.4.25 rather than 2.2.27 :)

Hope that helps!

Luca

2017-04-21 8:29 GMT+02:00 Velmurugan Dhakshnamoorthy <dvel@gmail.com>:

> Hi Luca,
> I am trying to use the mod_qos mod_qos-11.39. Is this applicable for
> Apache 2.4.25 reverse proxy in Red Hat Enterprise Linux 7.2. I followed the
> below command. however, mod_qos.so file is not getting created though I
> don't see any error with below commands. I am using non-root privileged
> account.
>
> I tried compiling it using "apxs -i -c mod_qos.c -lcrypto -lpcre" , but
> could not succeed.
> any help ?
>
> tar xfz httpd-2.2.27.tar.gz
> tar xfz mod_qos-11.39-src.tar.gz
> ln -s httpd-2.2.27 httpd
> cd httpd
> mkdir modules/qos
> cp ../mod_qos-11.39/apache2/* modules/qos
> ./buildconf
> ./configure --with-mpm=worker --enable-so --enable-qos=shared --enable-ssl
> --enable-unique-id
> make
>
>
> Regards,
> Velmurugan Dhakshnamoorthy (Vel)
> Singapore.
>
> On Tue, Apr 18, 2017 at 2:51 PM, Luca Toscano <toscano.l...@gmail.com>
> wrote:
>
>> Not sure what is the status of mod_qos (third party module), but you
>> might want to give it a try and see if it fits your needs!
>>
>> http://mod-qos.sourceforge.net/#requestlevelcontrol
>>
>> Luca
>>
>> 2017-04-17 3:08 GMT+02:00 Velmurugan Dhakshnamoorthy <dvel@gmail.com>
>> :
>>
>>> Dear All,
>>> Any specific setup to cut and disallow the the further HTTP connections
>>> after specified limit (ex: 50 sessions?).
>>>
>>> My requirement is to allow only 50 users and 51st user should get a
>>> custom error message to login after sometime.
>>>
>>> Regards,
>>> Vel
>>>
>>> On Mar 16, 2017 21:30, "Velmurugan Dhakshnamoorthy" <dvel@gmail.com>
>>> wrote:
>>>
>>>> Thanks for response,
>>>>
>>>> Yes my requirement is to completely restrict/disalllow any further
>>>> connections, example I want to allow only 50 sessions,  51st connection
>>>> should get an error message to login later  after certain period of time.
>>>>
>>>>
>>>> Regards,
>>>> Vel
>>>>
>>>> On Mar 16, 2017 18:58, "Nick Kew" <n...@apache.org> wrote:
>>>>
>>>>> On Thu, 2017-03-16 at 02:05 +0100, Daniel wrote:
>>>>> > See about mpm settings/directives such as MaxRequestWorkers, which
>>>>> > will limit the number of concurrent requests your server can take.
>>>>>
>>>>> Indeed, but I don't think that's what the OP is looking for in an
>>>>> apache proxy.  Rather the proxy reply with a "too busy" error page
>>>>> than not take the connection at all, right?
>>>>>
>>>>> The proxy balancer would be a place to look: that offers various
>>>>> ways to determine how much traffic to send to a backend.  If that
>>>>> doesn't meet your needs, there are several third-party traffic-
>>>>> limiting modules.
>>>>>
>>>>> --
>>>>> Nick Kew
>>>>>
>>>>>
>>>>>
>>>>> -
>>>>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>>>>> For additional commands, e-mail: users-h...@httpd.apache.org
>>>>>
>>>>>
>>
>


Re: [users@httpd] Virtual hosts, include php.conf, DirectoryIndex failure

2017-04-20 Thread Luca Toscano
Hi Marc,

+1 to what Rick is saying: if you could avoid mod_php it would be much
better (more info at https://wiki.apache.org/httpd/php).
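A minimal sketch of that setup, assuming php-fpm listening on a local unix
socket (the path is a placeholder) and httpd >= 2.4.10 with mod_proxy and
mod_proxy_fcgi loaded:

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost/"
</FilesMatch>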

> But when I included the php7.file in the global http.conf or in the
global default-server.conf files then it works!

Sorry for the extra question, but are you sure that you were hitting
the correct virtual host while running your tests? I am asking because it
is easy to forget about the Host header and end up hitting the default
virtual host instead.

If not, how did you check that DirectoryIndex wasn't working? What request
did you make?

Luca

2017-04-20 14:28 GMT+02:00 Houser, Rick :

> I suggest you remove the php stuff from Apache, switch it over to php-fpm
> (which basically turns it into an appserver).  If you do so, consider
> changing your MPM over to either event or worker at that point.
>
>
> Rick Houser
> Web Administration
>
> > -Original Message-
> > From: Marc Chamberlin [mailto:m...@marcchamberlin.com]
> > Sent: Wednesday, April 19, 2017 15:58
> > To: users@httpd.apache.org
> > Subject: [users@httpd] Virtual hosts, include php.conf, DirectoryIndex
> failure
> >
> > EXTERNAL EMAIL
> >
> >
> > Hi -  While I have a work-around for this issue, I thought I would post
> > it here to see what, if any, feedback I might get. Perhaps I am doing
> > something wrong?  I run a rather complex server environment where I use
> > both Apache HTTPD and Apache Tomcat servers in combination and host a
> > number of virtual hosts (named and all using the same IP address) On one
> > of my virtual hosts I recently installed WordPress and this required me
> > to also add in PHP support, since WordPress uses mostly PHP scripts.
> > Since this is only needed on one of my virtual hosts, I tried to
> > configure just that virtual host to include the additional stuff needed
> > to support PHP. My attempts to do so failed and the workaround was to
> > include the PHP configuration stuff (php7.conf) at a global level that
> > will affect all the virtual hosts that I am supporting. For now, that is
> > OK and won't bother anything but I am wondering why my initial approach
> > failed and if there is something that I am missing or don't understand.
> >
> > Basically, the symptoms, of what happened, was that the DirectoryIndex
> > index.php  setting fails when I just included the php7.conf file in the
> > configuration file for the virtual host. But when I included the
> > php7.file in the global http.conf or in the global default-server.conf
> > files then it works! Error logs do not show anything other than the fact
> > that an index file could not be found when referencing a directory that
> > does indeed have an index.php file in it. I will show the pertinent
> > config files (urls obscured) below in the configuration that fails, and
> > I hope this is not overwhelming. First here is the version info for
> > Apache just so we are all on the same page -
> >
> > httpd -V
> > Server version: Apache/2.4.23 (Linux/SUSE)
> > Server built:   2017-03-22 14:54:04.0 +
> > Server's Module Magic Number: 20120211:61
> > Server loaded:  APR 1.5.1, APR-UTIL 1.5.3
> > Compiled using: APR 1.5.1, APR-UTIL 1.5.3
> > Architecture:   64-bit
> > Server MPM: prefork
> >threaded: no
> >  forked: yes (variable process count)
> > Server compiled with
> >   -D APR_HAS_SENDFILE
> >   -D APR_HAS_MMAP
> >   -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
> >   -D APR_USE_PROC_PTHREAD_SERIALIZE
> >   -D APR_USE_PTHREAD_SERIALIZE
> >   -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
> >   -D APR_HAS_OTHER_CHILD
> >   -D AP_HAVE_RELIABLE_PIPED_LOGS
> >   -D DYNAMIC_MODULE_LIMIT=256
> >   -D HTTPD_ROOT="/srv/www"
> >   -D SUEXEC_BIN="/usr/sbin/suexec"
> >   -D DEFAULT_PIDLOG="/run/httpd.pid"
> >   -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
> >   -D DEFAULT_ERRORLOG="/var/log/apache2/error_log"
> >   -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
> >   -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
> >
> > 
> >
> > Here is php7.conf that I am including -
> >
> > cat php7.conf
> > 
> > 
> > SetHandler application/x-httpd-php
> > 
> > 
> > SetHandler application/x-httpd-php-source
> > 
> >  DirectoryIndex index.php4
> >  DirectoryIndex index.php5
> >  DirectoryIndex index.php
> > 
> >
> > 
> >
> > And this is my virtual host configuration  (comments removed and URLs
> > obscured)-
> >
> > cat myvirtualhost.conf
> > 
> >  ServerAdmin m...@mydomain.com
> >  ServerName www.myvirtualhost.org
> >  ServerAlias myvirtualhost.org
> >  DocumentRoot "/srv/tomcat/myvirtualhost_webapps/ROOT"
> >  ErrorLog "/var/log/apache2/myvirtualhost.org-error_log"
> >  TransferLog "/var/log/apache2/myvirtualhost.org-access_log"

Re: Re: [users@httpd] Reg: Custom error message at Apache 2.4.25

2017-04-20 Thread Luca Toscano
Hi!

I checked your httpd config and you are using mod_weblogic, not mod_proxy,
so the ProxyErrorOverride option will not be effective :)

Luca

2017-04-20 3:18 GMT+02:00 Velmurugan Dhakshnamoorthy <dvel@gmail.com>:

> Hi,
> Any help  to identify and correct  what is the issue in my setting to
> re-write  the 500 error by Apache Proxy 2.4.25
>
> Regards,
> Vel
> -- Forwarded message --
> From: "Velmurugan Dhakshnamoorthy" <dvel@gmail.com>
> Date: Apr 18, 2017 16:03
> Subject: Re: [users@httpd] Reg: Custom error message at Apache 2.4.25
> To: <users@httpd.apache.org>
> Cc:
>
> Hi Luca,
>> Is it possible to pinpoint what is the wrong in my setting. I am still
>> unable to display the custom error message.
>>
>> *The actual message from weblogic 12c in browser*
>>
>> Error 500--Internal Server Error
>> From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
>> 10.5.1 500 Internal Server Error
>> The server encountered an unexpected condition which prevented it from
>> fulfilling the request.
>>
>> *Apache Proxy 2.4.25 setting in httpd.conf*
>>
>> *Configuration to forward request from Apache to Weblogic 12c*
>> 
>>
>>SetHandler weblogic-handler
>>WebLogicHost hawley760
>>   WebLogicPort 8062
>>Debug ON
>>WLLogFile /opt/app/bea/apache2.4/httpd-2.4.25/logs/RPS-8060.log
>>   
>> 
>>
>> *config related to error document in httpd.conf*
>>
>> DocumentRoot "/opt/app/bea/apache2.4/httpd-2.4.25/htdocs"
>> ProxyPreserveHost On
>> ProxyPass /error !
>> ProxyErrorOverride On
>> Alias /error /opt/app/bea/apache2.4/httpd-2.4.25/htdocs
>> ErrorDocument 500 /error/500.html
>>
>> I tried to setup this in virtual host as well, but cannot re-write the
>> default 500 error message. I am also attaching my httpd.conf file.
>>
>> Appreciate if you can tell me what I am doing wrong, it would be much
>> appreciated.
>>
>> Regards,
>> Vel
>>
>>
>>
>>
>>
>>
>> Regards,
>> Velmurugan Dhakshnamoorthy (Vel)
>> Singapore.
>>
>> On Tue, Apr 18, 2017 at 6:56 AM, Velmurugan Dhakshnamoorthy <
>> dvel@gmail.com> wrote:
>>
>>> Thanks again for your valuable inputs. I am actually restricting the number
>>> of HTTP sessions at the weblogic layer; beyond the specified limit, weblogic
>>> throws a 500 error message, which is not very useful to users. I want only
>>> the 500 error page to be re-written by the Apache proxy with a simple message
>>> (ex: server is busy, login after some time). I want only the generic 500 error
>>> message to be re-written; I don't want to re-write any other content from the
>>> back-end server.
>>>
>>> Regards,
>>> Vel
>>>
>>> On Apr 18, 2017 00:19, "Luca Toscano" <toscano.l...@gmail.com> wrote:
>>>
>>>> Hi!
>>>>
>>>> As Nick mentioned there are a couple of options:
>>>>
>>>> 1) https://httpd.apache.org/docs/2.4/mod/mod_substitute.html or
>>>> https://httpd.apache.org/docs/current/mod/mod_proxy_html.html in case
>>>> you want to replace some parts of the response coming from the backend with
>>>> your content.
>>>>
>>>> 2) Write your own content output filter to modify the backend response
>>>> as you wish before flushing it out to the client. I'd suggest to follow
>>>> https://httpd.apache.org/docs/2.4/mod/mod_lua.html#modifying_buckets
>>>> if you want to attempt this road since using Lua instead of C is generally
>>>> easier for people not used to write Apache code.
>>>>
>>>> My personal suggestion is to not use any of the above but to re-think
>>>> about why you want to force the proxy to do this work. A proxy should be as
>>>> lightweight as possible and ideally should mask backend failures with
>>>> pre-defined error pages.
>>>>
>>>> Hope that helps!
>>>>
>>>> Luca
>>>>
>>>> 2017-04-17 9:57 GMT+02:00 Velmurugan Dhakshnamoorthy <
>>>> dvel@gmail.com>:
>>>>
>>>>> Hi Nick,
>>>>> yes exactly,  I want the error message produced by back-end weblogic
>>>>> server to be re-written by Apache proxy and then display custom message to
>>>>> user.
>>>>>
>>>>> Regard

Re: [users@httpd] Help: Apache Crashing Everyday

2017-04-20 Thread Luca Toscano
Hi!

2017-04-19 8:41 GMT+02:00 Jayaram Ponnusamy :

> Hi Luca,
>
> Thanks for the details.
> 1. our server's ulimit values are:
> ]$ ulimit -a
> max user processes  (-u) 1024
>
> Please let me know whether the values are sufficient to allow at least 500
> concurrent connections.
>

To be sure you should check /proc/$pid/limits (where $pid is one of the
Apache processes), but I'd say that your original issue (quoting "and when
the Total Children value is reached 999 the Apache is not responding") is
related to this limit being enforced.
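For example, a quick way to check (a sketch, assuming a Linux shell and that
the binary is called httpd; adjust the name if it is apache2 on your system):

    PID=$(pgrep httpd | head -1)          # any running httpd process
    grep -i "max processes" /proc/$PID/limits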


>
> 2. Yes I checked mod_jk log when hang happens, and getting below errors
> continuously.
>
> [Wed Apr 19 02:00:38 2017]loadbalancer www.cmsp1.com 24.843284
> [Wed Apr 19 02:00:38 2017][16313:3878614784] [info]
> ajp_process_callback::jk_ajp_common.c (1788): Writing to client aborted
> or client network problems
> [Wed Apr 19 02:00:38 2017][16313:3878614784] [info]
> ajp_service::jk_ajp_common.c (2447): (qu_prod_live_svr1) sending request to
> tomcat failed (unrecoverable), because of client write error (attempt=1)
> [Wed Apr 19 02:00:38 2017][16313:3878614784] [info]
> service::jk_lb_worker.c (1384): service failed, worker qu_prod_live_svr1 is
> in local error state
> [Wed Apr 19 02:00:38 2017][16313:3878614784] [info]
> service::jk_lb_worker.c (1403): unrecoverable error 200, request failed.
> Client failed in the middle of request, we can't recover to another
> instance.
> [Wed Apr 19 02:00:38 2017]loadbalancer www.cmsp1.com 19.170901
> [Wed Apr 19 02:00:38 2017][16313:3878614784] [info]
> jk_handler::mod_jk.c (2608): Aborting connection for worker=loadbalancer
> [Wed Apr 19 02:00:39 2017][16261:3878614784] [warn]
> map_uri_to_worker_ext::jk_uri_worker_map.c (962): Uri * is invalid. Uri
> must start with /
> [Wed Apr 19 02:00:40 2017][16308:3878614784] [warn]
> map_uri_to_worker_ext::jk_uri_worker_map.c (962): Uri * is invalid. Uri
> must start with /
>

Was Apache asked to reload for log rotation before this? Or did you see an
increase in traffic?


>
> 3. We will upgrade to 2.4.25, could you please share optimal configuration
> for mpm-event to allow more concurrent users, please.
>

I'd suggest starting from https://httpd.apache.org/docs/2.4/mod/event.html,
but every server has its own set of requirements and a proper configuration
needs a bit of testing, so I suggest setting up a staging environment that
mirrors production and playing with 2.4.25 there first.
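Just to illustrate the knobs involved, something along these lines is a
typical starting point; the numbers are placeholders to be tuned for your
hardware and traffic, not a recommendation:

    <IfModule mpm_event_module>
        StartServers              3
        ServerLimit              16
        ThreadsPerChild          25
        MaxRequestWorkers       400   # ServerLimit * ThreadsPerChild
        MinSpareThreads          25
        MaxSpareThreads          75
        MaxConnectionsPerChild    0
    </IfModule>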

Please also check https://httpd.apache.org/docs/current/upgrading.html:
upgrading to 2.4 is not super difficult, but you might be required to
make some changes to your config.

Hope that helps!

Luca


Re: [users@httpd] Problem with Apache2 after upgrade from Ubuntu14.04 to 16.04

2017-04-18 Thread Luca Toscano
Hi!

2017-04-18 13:35 GMT+02:00 Purvez :

> Hi
>
> Newbie to the forum here so I hope I'm doing this right.  If not please
> would someone guide me.  Thx in advance.
>
> As the subject line says Apache2 is not working at all / satisfactorily
> since the Ubuntu upgrade.  The details follow:
>
> ===
>
> Here is my full post on askubuntu:
>
> http://askubuntu.com/questions/904042/upgrade-to-16-04-lts-has-broken-apache
>
> Currently the biggest help I could get would be if someone would decipher
> what the following output means when I do :
>
> systemctl status apache2.service
>
> output:
> ===
> *Code:*
>
> purvez@127:~$ systemctl status apache2.service
> ● apache2.service - LSB: Apache2 web server
>Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
>   Drop-In: /lib/systemd/system/apache2.service.d
>└─apache2-systemd.conf
>Active: inactive (dead) since Thu 2017-04-13 10:01:02 BST; 11s ago
>  Docs: man:systemd-sysv-generator(8)
>   Process: 6997 ExecStop=/etc/init.d/apache2 stop (code=exited,
> status=0/SUCCESS)
>   Process: 6978 ExecStart=/etc/init.d/apache2 start (code=exited,
> status=0/SUCCESS)
>
> Apr 13 10:01:02 127.0.1.1purvez-Aspire-5750 apache2[6978]: (98)Address
> already in use: AH00072: make_sock: could not bind to address [::]:80
> Apr 13 10:01:02 127.0.1.1purvez-Aspire-5750 apache2[6978]: (98)Address
> already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
> Apr 13 10:01:02 127.0.1.1purvez-Aspire-5750 apache2[6978]: no listening
> sockets available, shutting down
>

It seems that you have another process holding TCP port 80, so you need to
kill/stop it first. You can use something like netstat -nlpt (needs super
user to list all the info in this case) to find your target.
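For example (assuming a Linux box; ss from iproute2 works too if netstat is
not installed):

    sudo netstat -nlpt | grep ':80 '      # or: sudo ss -nlpt 'sport = :80'
    # then stop/kill the process shown as holding the port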

Hope that helps!

Luca


Re: [users@httpd] Help: Apache Crashing Everyday

2017-04-18 Thread Luca Toscano
Hi,

Some suggestions:

1) Check the RHEL ulimits applied to httpd: the error message "Resource
temporarily unavailable: setuid: unable to change to uid" could mean that the
maximum number of processes allowed by the OS has been reached. Raising that
limit should allow httpd to spawn more processes (see the sketch after the
list).

2) Have you checked when the "hang" happens? If you have long lived
connections and your httpd server reloads (for example for log rotation)
then it might hang a bit while waiting for the remaining connections to
drain.

3) If possible I'd consider upgrading httpd to >= 2.4.25 and using mpm-event
(rather than prefork).
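For point 1, a quick way to check, and if needed raise, the per-user process
limit on RHEL; a sketch only, the user name and values are assumptions to
adapt to your setup:

    # current soft limit for the user the httpd children run as
    su -s /bin/bash apache -c 'ulimit -u'

    # to raise it, add something like this to /etc/security/limits.conf
    # (or a file under /etc/security/limits.d/):
    #   apache  soft  nproc  4096
    #   apache  hard  nproc  4096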

Hope that helps!

Luca

2017-04-16 13:18 GMT+02:00 Jayaram Ponnusamy :

> Dear All,
>
> We were running our site in a PHP based CMS tool earlier, and normally 20-30K
> users will access our sites daily. But in new system with Tomcat, we are
> facing performance and availability issue frequently, when i access the
> tomcat url directly the page is loading within 3seconds, but if we access
> webServer URL then its taking more than 9seconds.
>
> Also, Each day I am seeing more and more of these in my error_logs, and
> when the Total Children value is reached 999 the Apache is not responding
> and Server reboot only help to bring the site back. Every day atleast 4-5
> times we are facing this issue (we are using mod_jk to connect with tomcat).
>
> Kindly please help on this.
>
> Usually I am seeing this on my error_log:
> [Sat Apr 15 20:49:33 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 8 children, there
> are 4 idle, and 31 total children
> [Sat Apr 15 20:51:14 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 8 children, there
> are 0 idle, and 20 total children
> [Sat Apr 15 20:51:15 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 16 children, there
> are 0 idle, and 28 total children
> [Sat Apr 15 20:51:16 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 44 total children
> We are using two Apache Nodes and Connected with Two Tomcat (at
> Application Level Clustering).
> Apache Servers:
> 4 Core 64-bit, Rhel System running on 16GB RAM (Both Servers)
> Server version: Apache/2.2.21 (Unix)
>
> *httpd.conf*
> KeepAlive On
> Timeout 300
> MaxKeepAliveRequests 100
> KeepAliveTimeout 15
> 
> StartServers 80
> ServerLimit 3500
> MaxClients 3500
> MaxRequestsPerChild  0
> 
>
> *workers.properties*
> worker.list=loadbalancer,status
> worker.qu_prod_live_svr.type=ajp13
> worker.qu_prod_live_svr.host=cmsp1
> worker.qu_prod_live_svr.port=8009
> worker.qu_prod_live_svr.socket_keepalive=1
> worker.qu_prod_live_svr.socket_timeout=300
> worker.qu_prod_live_svr1.type=ajp13
> worker.qu_prod_live_svr1.host=cmsp2
> worker.qu_prod_live_svr1.port=8009
> worker.qu_prod_live_svr1.socket_keepalive=1
> worker.qu_prod_live_svr1.socket_timeout=300
> worker.qu_prod_live_svr.lbfactor=1
> worker.qu_prod_live_svr1.lbfactor=1
> worker.loadbalancer.type=lb
> worker.loadbalancer.balance_workers=qu_prod_live_svr,qu_prod_live_svr1
> worker.status.type=status
>
> *Tomcat Servers:*
> 4 Core 64-bit, Rhel System running on 16GB RAM (Both Servers)
> Server version: Apache Tomcat/7.0.42
>  URIEncoding="UTF-8" emptySessionPath="true" maxThreads="500"
> minSpareThreads="10" connectionTimeout="-1" />
>  URIEncoding="UTF-8" />
>
> *error_log:*
> [Sat Apr 15 21:52:36 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 839 total children
> [Sat Apr 15 21:52:37 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 871 total children
> [Sat Apr 15 21:52:38 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 903 total children
> [Sat Apr 15 21:52:39 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 935 total children
> [Sat Apr 15 21:52:40 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 967 total children
> [Sat Apr 15 21:52:41 2017] [info] server seems busy, (you may need to
> increase StartServers, or Min/MaxSpareServers), spawning 32 children, there
> are 0 idle, and 999 total children
> [Sat Apr 15 21:52:41 2017] [alert] (11)Resource temporarily unavailable:
> setuid: unable to change to uid: 2
> [Sat Apr 15 21:52:41 2017] [alert] (11)Resource temporarily unavailable:
> setuid: unable to change to uid: 2
> [Sat Apr 15 21:52:41 2017] [alert] (11)Resource temporarily unavailable:
> setuid: 

Re: [users@httpd] Re: Reg : Limiting http connections at Apache 2.4.25

2017-04-18 Thread Luca Toscano
Not sure what is the status of mod_qos (third party module), but you might
want to give it a try and see if it fits your needs!

http://mod-qos.sourceforge.net/#requestlevelcontrol
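If it does fit, something along these lines might match the "50 users max,
the 51st gets a friendly page" requirement. This is a rough sketch only;
please double-check the directive names and defaults against the mod_qos
documentation:

    # limit concurrent requests for the whole site and serve a custom
    # page when the limit is exceeded
    QS_LocRequestLimit /   50
    QS_ErrorPage       /error/busy.html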

Luca

2017-04-17 3:08 GMT+02:00 Velmurugan Dhakshnamoorthy :

> Dear All,
> Any specific setup to cut off and disallow further HTTP connections
> after a specified limit (ex: 50 sessions)?
>
> My requirement is to allow only 50 users and 51st user should get a custom
> error message to login after sometime.
>
> Regards,
> Vel
>
> On Mar 16, 2017 21:30, "Velmurugan Dhakshnamoorthy" 
> wrote:
>
>> Thanks for response,
>>
>> Yes, my requirement is to completely restrict/disallow any further
>> connections. For example, I want to allow only 50 sessions; the 51st connection
>> should get an error message asking the user to log in again after a certain
>> period of time.
>>
>>
>> Regards,
>> Vel
>>
>> On Mar 16, 2017 18:58, "Nick Kew"  wrote:
>>
>>> On Thu, 2017-03-16 at 02:05 +0100, Daniel wrote:
>>> > See about mpm settings/directives such as MaxRequestWorkers, which
>>> > will limit the number of concurrent requests your server can take.
>>>
>>> Indeed, but I don't think that's what the OP is looking for in an
>>> apache proxy.  Rather the proxy reply with a "too busy" error page
>>> than not take the connection at all, right?
>>>
>>> The proxy balancer would be a place to look: that offers various
>>> ways to determine how much traffic to send to a backend.  If that
>>> doesn't meet your needs, there are several third-party traffic-
>>> limiting modules.
>>>
>>> --
>>> Nick Kew
>>>
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>>> For additional commands, e-mail: users-h...@httpd.apache.org
>>>
>>>


Re: [users@httpd] Reg: Custom error message at Apache 2.4.25

2017-04-17 Thread Luca Toscano
Hi!

As Nick mentioned there are a couple of options:

1) https://httpd.apache.org/docs/2.4/mod/mod_substitute.html or
https://httpd.apache.org/docs/current/mod/mod_proxy_html.html in case you
want to replace some parts of the response coming from the backend with
your own content (a minimal sketch follows below).

2) Write your own content output filter to modify the backend response as
you wish before flushing it out to the client. I'd suggest to follow
https://httpd.apache.org/docs/2.4/mod/mod_lua.html#modifying_buckets if you
want to attempt this road since using Lua instead of C is generally easier
for people not used to write Apache code.
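A minimal sketch of option 1 with mod_substitute; the location, backend URL
and replacement text are placeholders, not taken from your config:

    <Location "/app">
        ProxyPass "http://backend.example.com/app"
        AddOutputFilterByType SUBSTITUTE text/html
        Substitute "s/Internal Server Error/The server is busy, please try again later/ni"
    </Location>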

My personal suggestion is to not use any of the above but to re-think about
why you want to force the proxy to do this work. A proxy should be as
lightweight as possible and ideally should mask backend failures with
pre-defined error pages.

Hope that helps!

Luca

2017-04-17 9:57 GMT+02:00 Velmurugan Dhakshnamoorthy :

> Hi Nick,
> yes exactly,  I want the error message produced by back-end weblogic
> server to be re-written by Apache proxy and then display custom message to
> user.
>
> Regards,
> Vel
>
>
> On Apr 17, 2017 15:34, "Nick Kew"  wrote:
>
> On Mon, 2017-04-17 at 09:04 +0800, Velmurugan Dhakshnamoorthy wrote:
>
> >
> > Thanks Luca,  I tried setting proxyerroroverride and error
> > document  in virtual host, however,  the 500 error produced by
> > content server is displayed as it is via Apache proxy. Any
> > further help?
>
> Are you saying you want an error message coming from the backend
> but modified by the proxy?  That would imply using a content filter
> (such as mod_proxy_html, mod_sed, or mod_substitute) to rewrite
> the response from the backend.
>
> --
> Nick Kew
>
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>
>


Re: [users@httpd] Help with conditional ProxyPassMatch

2017-04-13 Thread Luca Toscano
Hi!

2017-04-12 20:42 GMT+02:00 Gryzli Bugbear :

> Hi to all,
>
> I want to make a conditional forward proxy within Apache, based on a request
> header: if a given request header exists, I want to proxy the request; if
> not, do not proxy.
> I also need to do this NOT with RewriteRule and [P] flags.
>
> I can't find how to define conditional proxying based on some environment
> variable or request header, to achieve the following:
>
> 
> ProxyPassMatch / http://127.0.0.1:8080
> 
>

You should be able to use something like:

  <If "-n req('Some_Header')">   # or maybe req('Some_Header') -eq 'xyz'
    ProxyPass etc..
  </If>


More info about available operators and functions in
http://httpd.apache.org/docs/2.4/en/expr.html#functions

Hope that helps!

Luca


Re: [users@httpd] Apache substitute issue

2017-04-10 Thread Luca Toscano
Hi!

2017-04-10 8:24 GMT+02:00 Hemalatha A :

> Hi,
>
> I am facing 2 issues with Apache mod_proxy and substitute.
>
> 1. I have a substitute like say:
>  Substitute "s/http/https/ni"
>  It works perfectly fine when I do curl. But on browser, it somehow
> doesn't seem to apply the substitute, it still remains http. What could be
> the reason, how to debug this?
>

Can you give us more info about how to reproduce the issue? For example, an
HTML snippet that is not rewritten as you expect; otherwise it is really
difficult to help :)

Please also check
https://httpd.apache.org/docs/current/mod/mod_proxy_html.html that might be
more suitable/flexible/complete for what you are trying to do.
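In case mod_proxy_html turns out to be the better fit, a minimal sketch of
how such a mapping is usually declared (module path and location are
assumptions, not taken from your setup):

    LoadModule proxy_html_module modules/mod_proxy_html.so
    <Location "/app">
        ProxyHTMLEnable On
        ProxyHTMLURLMap http:// https://
    </Location>

Note that mod_proxy_html rewrites URLs inside the HTML markup, while
mod_substitute does plain text replacement on the whole body, so pick the one
that matches what you actually need to change.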


>
> 2. I have a reverse proxy server on machine M1 for a backend server (http)
> running on the local machine; if that service is down, it redirects me to the
> http service of backend machine M2, which also has a reverse proxy running.
>
> M1 proxyserver  --> http(M1) --> http (M2)
>
> If the backend on M1 is down, I want the redirection to go to https on the
> backend machine instead of http, or for M1 to also act as proxy for the M2
> backend when the M1 backend is down.
> How can this be done?
>
>
What is the configuration that you currently use for the redirection? Can
you share your httpd config?

Luca

