Re: [users@httpd] Fwd: Apache 2.4.39 update for Ubuntu 14.04

2019-05-15 Thread Hajo Locke

Hello,
On 15.05.2019 at 13:43, Frank Gingras wrote:

Nitin,

You will need to ask the folks that maintain your
distribution/repository for help with updated packages.

On Tue, 14 May 2019 at 01:18, Nitin Kadam <nitinkadam1...@gmail.com> wrote:

Hello Team,

I have an Ubuntu 14.04 web server with the Apache 2.4.33 package, and with
the latest release being 2.4.39, internal security has asked to update it
ASAP.


Ubuntu 14.04 is end of life.
You should upgrade your OS; the distribution will then provide an updated
version or patch the current one.


When I run apt-cache policy, it shows the installed version as 2.4.33 and
the candidate also as 2.4.33.

Can you please help here? It's a production application server.

apache2:
  Installed: 2.4.33-1+ubuntu14.04.1+deb.sury.org+1
  Candidate: 2.4.33-1+ubuntu14.04.1+deb.sury.org+1
  Version table:
 *** 2.4.33-1+ubuntu14.04.1+deb.sury.org+1 0
        100 /var/lib/dpkg/status
--
Regards
Nitin Kadam




Hajo


[users@httpd] how to put geodata into $_SERVER for php-fpm using proxy_fcgi

2019-05-15 Thread Hajo Locke

Hello List,

we use the latest Apache 2.4.39 and various PHP versions connected via
proxy_fcgi.

Previously we used mod_fastcgi to bind php-fpm to Apache. Viewing a
phpinfo() in that scenario also showed a complete geodata section.
mod_geoip is installed, and mod_fastcgi put its variables into the $_SERVER
environment for php-fpm, so geodata was easy to use in scripts.

Now, with the new method using proxy_fcgi, this geo section is lost. As an
alternative we could install the GeoIP extension for PHP, but this requires
changing scripts in standard software, which is not possible for every user.
Maybe this is more a feature request than a question:
Is it possible to add geodata from mod_geoip to the environment of php-fpm
connected via proxy_fcgi?
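
For illustration, an untested sketch of the kind of workaround we have in
mind: re-export the mod_geoip variables with mod_rewrite's E= flag so they
(hopefully) end up in the FastCGI parameter set. The variable name
MY_GEOIP_COUNTRY is made up, and whether GEOIP_COUNTRY_CODE is already
populated at rewrite time is an assumption to verify:

GeoIPEnable On
RewriteEngine On
RewriteRule .* - [E=MY_GEOIP_COUNTRY:%{ENV:GEOIP_COUNTRY_CODE}]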

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] ssl stapling error - sectigo

2019-04-25 Thread Hajo Locke

Hello,

Thanks to Tom, who informed me off-list about this. It seems the problem
was triggered by some kind of maintenance:
https://sectigo.status.io/pages/history/5938a0dbef3e6af26b001921#

Currently it is working again for us.

Such unexpected problems with OCSP URLs are really annoying for visitors
and admins; the only remedy is to deactivate SSL stapling. We had
really slow webpages and also complete page-load errors.
Is it possible to change the way the validation process is integrated into
request processing? The delivery speed of a website should not be affected
by OCSP problems.
Tom and I would be happy to have a fix for this case ;)
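
In the meantime we are considering directives along these lines to soften
the impact of a flaky responder (just a sketch; the timeout and cache
values are assumptions on our side, not tested recommendations):

SSLUseStapling On
SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling(256)
SSLStaplingResponderTimeout 2
SSLStaplingReturnResponderErrors off
SSLStaplingFakeTryLater off
SSLStaplingErrorCacheTimeout 600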

Thanks,
Hajo


On 25.04.2019 at 11:43, Hajo Locke wrote:

Hello,

On 25.04.2019 at 09:51, Stefan Eissing wrote:



On 24.04.2019 at 16:22, Hajo Locke wrote:

Hello List,

Apache is 2.4.39, System is Ubuntu 18.04 and 16.04

Since yesterday evening we have had massive mod_ssl problems with SSL
stapling:

Apr 24 11:20:59 myhostname apache2[16094]: [ssl:error] [pid 16094]
AH01941: stapling_renew_response: responder error

We had complaints about slow webpages; this forced us to deactivate
stapling on all our servers.

Sorry to hear that.


Affected are certificates from Sectigo (previously Comodo) with the OCSP URL
http://ocsp.sectigo.com
I can't confirm for other providers; we use Comodo/Sectigo the most.

But it seems there is no basic problem on our system/network, because I
can manually confirm the OCSP status with openssl on the affected machines:

# openssl ocsp -issuer bundle -cert crt -url http://ocsp.sectigo.com
WARNING: no nonce in response
Response verify OK
crt: good
 This Update: Apr 22 12:46:48 2019 GMT
 Next Update: Apr 26 12:46:48 2019 GMT

I am trying to figure out on which side the problem is. We use basic SSL
stapling directives in /etc/apache2/mods-enabled/ssl.conf;
these have been unchanged for months:

SSLUseStapling On
SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling(256)
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off

Can somebody confirm this behaviour and explain what
happens?

AFAIK, there have been no (intentional) changes regarding OCSP
stapling in recent versions. Are you doing the openssl test on the
same machine that the affected servers run on?


Yes, same server. The Apache log produces the stapling errors, while
manual confirmation with openssl works.
Today it seems the problems are over, but we are afraid of re-enabling it.
The main problem for website owners/visitors is a significant, noticeable
delay when requesting a site. I think the OCSP stapling process is part of
request processing and stalls the whole request if the OCSP URL is not
acting as expected.
Unfortunately I have no technical contact at Sectigo who could
re-establish my trust in SSL stapling.


- Stefan
-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org





-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] ssl stapling error - sectigo

2019-04-25 Thread Hajo Locke

Hello,

On 25.04.2019 at 09:51, Stefan Eissing wrote:



On 24.04.2019 at 16:22, Hajo Locke wrote:

Hello List,

Apache is 2.4.39, System is Ubuntu 18.04 and 16.04

Since yesterday evening we have had massive mod_ssl problems with SSL stapling:

Apr 24 11:20:59 myhostname apache2[16094]: [ssl:error] [pid 16094]
AH01941: stapling_renew_response: responder error

We had complaints about slow webpages; this forced us to deactivate
stapling on all our servers.

Sorry to hear that.


Affected are certificates from Sectigo (previously Comodo) with the OCSP URL
http://ocsp.sectigo.com
I can't confirm for other providers; we use Comodo/Sectigo the most.

But it seems there is no basic problem on our system/network, because I
can manually confirm the OCSP status with openssl on the affected machines:

# openssl ocsp -issuer bundle -cert crt -url http://ocsp.sectigo.com
WARNING: no nonce in response
Response verify OK
crt: good
 This Update: Apr 22 12:46:48 2019 GMT
 Next Update: Apr 26 12:46:48 2019 GMT

I am trying to figure out on which side the problem is. We use basic SSL
stapling directives in /etc/apache2/mods-enabled/ssl.conf;
these have been unchanged for months:

SSLUseStapling On
SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling(256)
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off

Can somebody confirm this behaviour and explain what happens?

AFAIK, there have been no (intentional) changes regarding OCSP stapling in
recent versions. Are you doing the openssl test on the same machine that the
affected servers run on?


Yes, same server. The Apache log produces the stapling errors, while
manual confirmation with openssl works.
Today it seems the problems are over, but we are afraid of re-enabling it.
The main problem for website owners/visitors is a significant, noticeable
delay when requesting a site. I think the OCSP stapling process is part of
request processing and stalls the whole request if the OCSP URL is not
acting as expected.
Unfortunately I have no technical contact at Sectigo who could
re-establish my trust in SSL stapling.


- Stefan
-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] ssl stapling error - sectigo

2019-04-24 Thread Hajo Locke

Hello List,

Apache is 2.4.39, System is Ubuntu 18.04 and 16.04

Since yesterday evening we have had massive mod_ssl problems with SSL stapling:

Apr 24 11:20:59 myhostname apache2[16094]: [ssl:error] [pid 16094]
AH01941: stapling_renew_response: responder error

We had complaints about slow webpages; this forced us to deactivate
stapling on all our servers.
Affected are certificates from Sectigo (previously Comodo) with the OCSP URL
http://ocsp.sectigo.com
I can't confirm for other providers; we use Comodo/Sectigo the most.

But it seems there is no basic problem on our system/network, because I
can manually confirm the OCSP status with openssl on the affected machines:

# openssl ocsp -issuer bundle -cert crt -url http://ocsp.sectigo.com
WARNING: no nonce in response
Response verify OK
crt: good
    This Update: Apr 22 12:46:48 2019 GMT
    Next Update: Apr 26 12:46:48 2019 GMT

I am trying to figure out on which side the problem is. We use basic SSL
stapling directives in /etc/apache2/mods-enabled/ssl.conf;
these have been unchanged for months:

SSLUseStapling On
SSLStaplingCache shmcb:${APACHE_RUN_DIR}/ssl_stapling(256)
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off

Can somebody confirm this behaviour and explain what happens?

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] Re: CVE-2019-0211 - Apache 2.2

2019-04-03 Thread Hajo Locke

Hello,

On 03.04.2019 at 11:06, Rainer Canavan wrote:

On Wed, Apr 3, 2019 at 10:18 AM LuKreme wrote:

On Apr 3, 2019, at 02:05, Hajo Locke wrote:

Is Apache 2.2 exploitable via CVE-2019-0211?
The description says the first affected version is 2.4.17, but maybe 2.2 was
not analyzed.

“Apache HTTP Server 2.4 releases 2.4.17 to 2.4.38” seems clear.

Since Apache httpd 2.2 is not supported anymore, it is quite possible
that nobody has checked whether 2.2 is affected. However, it looks like
Red Hat has checked their old RHEL releases that ship with 2.2, and they
appear to be unaffected:
https://access.redhat.com/security/cve/cve-2019-0211

rainer

Thanks Rainer, I hoped but did not know that some LTS distributions
still support 2.2.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] CVE-2019-0211 - Apache 2.2

2019-04-03 Thread Hajo Locke

Hello,

I still have a bunch of Apache 2.2 servers. ;(
Is Apache 2.2 exploitable via CVE-2019-0211?
The description says the first affected version is 2.4.17, but maybe 2.2
was not analyzed.

Thanks,
Hajo



Re: [users@httpd] Re: HTTP Method Patch

2019-02-21 Thread Hajo Locke

Hello,

On 21.02.2019 at 05:29, Christophe JAILLET wrote:

On 20/02/2019 at 14:58, Hajo Locke wrote:

Hello list,

this is Apache 2.4.34

I was asked if Apache supports the HTTP request method PATCH.
To be honest, I did not really find anything useful on the web.
Does Apache support this method, and is an additional module required?
I am not aware of allowing or forbidding PATCH in httpd.conf.

Thanks,
Hajo



Hi,

httpd "understand" the PATCH method (i.e. directive such as 
"AllowMethods PATCH" is accepted), but will do nothing with it.

No modules is provided with httpd to handle the PATCH method.

It is likely that a 405 error ("Method Not Allowed") will be returned, 
unless a third party module able to handle such a method is installed.
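
For example, to let PATCH through to a location where some backend can
process it, a sketch using mod_allowmethods (the directory path is only an
illustration):

<Directory "/var/www/api">
    AllowMethods GET POST PATCH
</Directory>

This only controls which methods are allowed; something behind httpd
(e.g. a proxied application) still has to implement PATCH.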

Thank you.



(sorry for the double post, I replied in the wrong list)

CJ


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org




Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] HTTP Method Patch

2019-02-20 Thread Hajo Locke

Hello list,

this is Apache 2.4.34

I was asked if Apache supports the HTTP request method PATCH.
To be honest, I did not really find anything useful on the web.
Does Apache support this method, and is an additional module required?
I am not aware of allowing or forbidding PATCH in httpd.conf.

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] ErrorDocument with URL containing URL encoded chars

2019-01-09 Thread Hajo Locke

Hello,

On 09.01.2019 at 09:48, Hajo Locke wrote:

Hello List,

I have an interesting problem here.
I have a .htaccess with an ErrorDocument containing text to be displayed:

ErrorDocument 404 "not existing"

This works with standard URLs like http://example.com/fubar.htm
I get a 404 response, and the text displayed in the browser is correct.

Now I try URLs like this: http://example.com/%2ffubar
The URL-encoded part of the URL seems to be a problem for the error
document. I still get the 404 response, but the displayed text has changed.
In place of "not existing", Apache answers with "The requested URL
//fubar was not found on this server."
So Apache decodes %2f to / and uses the decoded URL for the response text
in place of "not existing".


I get a change of behaviour if I put the ErrorDocument directive
directly into the vhost instead of .htaccess.
In this case the ErrorDocument works as expected, also with URLs
with URL-encoded parts.


Apache 2.2 and 2.4 behave the same.
What is the problem here, and how can it be solved?

We solved it with the AllowEncodedSlashes directive.
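
For reference, a minimal sketch of what went into the vhost (whether On or
NoDecode is the better argument depends on how the application should see
the encoded slashes):

AllowEncodedSlashes NoDecode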


Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] ErrorDocument with URL containing URL encoded chars

2019-01-09 Thread Hajo Locke

Hello List,

I have an interesting problem here.
I have a .htaccess with an ErrorDocument containing text to be displayed:

ErrorDocument 404 "not existing"

This works with standard URLs like http://example.com/fubar.htm
I get a 404 response, and the text displayed in the browser is correct.

Now I try URLs like this: http://example.com/%2ffubar
The URL-encoded part of the URL seems to be a problem for the error
document. I still get the 404 response, but the displayed text has changed.
In place of "not existing", Apache answers with "The requested URL
//fubar was not found on this server."
So Apache decodes %2f to / and uses the decoded URL for the response text
in place of "not existing".


I get a change of behaviour if I put the ErrorDocument directive directly
into the vhost instead of .htaccess.
In this case the ErrorDocument works as expected, also with URLs
with URL-encoded parts.


Apache 2.2 and 2.4 behave the same.
What is the problem here, and how can it be solved?

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] define variables by vhost only

2018-11-05 Thread Hajo Locke

Hello,

thanks for your answer.

On 05.11.2018 at 14:00, Gillis J. de Nijs wrote:
Alternatively, you can just put the AddHandler in the VirtualHost
directly, and not bother with the .htaccess files.
Yes, I have a preconfigured AddHandler in the vhost which fits most
needs. These parts of the vhost configurations are created automatically by
our own customer menu. The AddHandler in the .htaccess file should help
people with some special requirements.
We moved from classic fastcgi to mod_proxy_fcgi, and we try to keep the
userspace configuration unchanged, but that seems to be impossible.
Maybe we should say goodbye to our former use of AddHandler to choose
PHP versions and only use the modern way. But that is not easy for
support people; it is harder to support machines with mixed setups.
The use of "Define" was our closest attempt, but it also seems to be off
the track.


On Mon, Nov 5, 2018 at 9:43 AM Hajo Locke <hajo.lo...@gmx.de> wrote:


Hello List,

I am looking for a way to use Define to create variables limited to
vhosts (Apache 2.4).
Currently I have some vhosts and use this syntax:

Define myvar mycontent

The variable name is the same in all vhosts; "mycontent" is different
and vhost-related. Later I use this variable in .htaccess files
for users:

AddHandler ${myvar} .php

Unfortunately the Define directive defines the variable for the complete
server and not for the vhost only, so the content of "myvar" gets
overwritten by every following vhost config.
So if user A uses this variable, he sees the content of the variable
created in the vhost for user Z.

Is there a possibility to use variables limited to a vhost that can
be used the same way in .htaccess files? I think SetEnv is not suitable
for this.

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Thanks,
Hajo


Re: [users@httpd] define variables by vhost only

2018-11-05 Thread Hajo Locke

Hello,

Thanks for your answer.

On 05.11.2018 at 13:54, David Spector wrote:
Just in case it wasn't obvious, the message I just sent assumes that
your server is managed by WHM/cPanel. If not, just use Include
directives in your conf file.
Sorry, I don't understand. Is this a documented feature? Currently I use
multiple files for vhosts, but I don't see how this helps to reduce the
scope of variables created by the Define directive to a particular vhost only.




David Spector
Springtime Software

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org




Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] define variables by vhost only

2018-11-05 Thread Hajo Locke

Hello List,

I am looking for a way to use Define to create variables limited to
vhosts (Apache 2.4).

Currently I have some vhosts and use this syntax:

Define myvar mycontent

The variable name is the same in all vhosts; "mycontent" is different
and vhost-related. Later I use this variable in .htaccess files for users:


AddHandler ${myvar} .php

Unfortunately the Define directive defines the variable for the complete
server and not for the vhost only, so the content of "myvar" gets
overwritten by every following vhost config.
So if user A uses this variable, he sees the content of the variable
created in the vhost for user Z.


Is there a possibility to use variables limited to a vhost that can be used
the same way in .htaccess files? I think SetEnv is not suitable for this.
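
To illustrate the clash with a sketch (server names and handler names are
made up): Define has no vhost scope, so the last definition parsed wins
for every user.

# vhost for user A
<VirtualHost *:80>
    ServerName a.example.com
    Define myvar php71-cgi
</VirtualHost>

# vhost for user Z, parsed later, silently overwrites the same name
<VirtualHost *:80>
    ServerName z.example.com
    Define myvar php56-cgi
</VirtualHost>

# every user's .htaccess now expands to the last definition:
AddHandler ${myvar} .php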


Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] Configuration help - addhandler <> mod_proxy_fcgi

2018-08-17 Thread Hajo Locke

Hello,


On 08.03.2018 at 08:54, Hajo Locke wrote:

Hello,

On 11.09.2017 at 14:58, Eric Covener wrote:

On Mon, Sep 11, 2017 at 4:28 AM, Hajo Locke wrote:

Hello List,

currently I use classic mod_fastcgi (FastCgiExternalServer) with php-fpm,
which is quite reliable.
A disadvantage of this setup is that not every response header set by
.htaccess is really sent to the client.
The current setup is something like this:


 AddHandler myphp-cgi .php
 Action myphp-cgi /cgi-fpm/php71-fpm


The big advantage is that my users are able to use AddHandler via .htaccess
to choose any provided PHP version.

Now I try to switch from mod_fastcgi to the newly recommended mod_proxy_fcgi.

The basic variants with SetHandler work easily:

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/dev/shm/php70fpm.sock|fcgi://localhost/"
</FilesMatch>

Now I want to use AddHandler again, so my users' .htaccess files will
automatically work the former way and choose the proper PHP version.
Unfortunately I was not able to combine AddHandler, Action and the proxy in
a working way:

AddHandler php-mycgi .php
Action php-mycgi "proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/"

When enabling this in the global conf, every request to PHP files results
in a 400 response:
[Mon Sep 11 10:10:09.375597 2017] [core:error] [pid 23826] [client
x.x.x.x:53050] AH00126: Invalid URI in request GET /phpinfo.php HTTP/1.1

Please give me a hint towards a working configuration. All my attempts were
unsuccessful.


Action could be tricky here. Are you using php-fpm? Have you
considered allowing users to point at different sockets for different
fpm pools?
I have to continue on this problem. Unfortunately I did not find any
useful solution. This seems to be my last problem in switching from
mod_fastcgi to proxy_fcgi.

Maybe I should describe my problem again with better wording.

The current setup is apache -> mod_fastcgi -> php-fpm.
php-fpm is installed in multiple versions and provides a socket for
each version for each user.
Every user can choose a fitting PHP version for his script simply via
.htaccess:

AddHandler php71 .php
or
AddHandler php56 .php

Every usable AddHandler has a fitting Action directive in the global conf,
so mod_fastcgi knows which PHP socket to connect to.


I would like to offer the same service after switching to proxy_fcgi.
Unfortunately I did not find any useful setup. It seems that the
Action directive of mod_actions does not understand the proxy notation;
the following is not working:


AddHandler php-mycgi .php
Action php-mycgi "proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/"

This results in a 400 error:
[Mon Sep 11 10:10:09.375597 2017] [core:error] [pid 23826] [client
x.x.x.x:53050] AH00126: Invalid URI in request GET /phpinfo.php HTTP/1.1


Basically this would work via .htaccess:
AddHandler "proxy:unix:///dev/shm/php70fpm.sock|fcgi://localhost/" .php
But there are two big disadvantages:
- Our users have thousands of .htaccess files with the traditional
AddHandler notation. We can't require that all users rewrite their
.htaccess files.
- This AddHandler does not work if a proxy for this file extension is
already established in the vhost. But that is a requirement, because not
all users use .htaccess files.


So a user is not able to tie a file extension to any php-fpm
version on his own.


So my objective is to make former .htaccess entries like "AddHandler php71
.php" work with a proxy_fcgi setup, without the need to change
hundreds of thousands of .htaccess files.

I was not able to find a good solution for this.

Please help and give me some tips to get this working.

Thanks,
Hajo
An old problem, still without a good solution. This is the closest I have
currently found:

We define a proxy:

<Proxy "fcgi://user1-php72fpm/">
    ProxySet ..
</Proxy>

After that we define a variable:
Define php72-cgi
"proxy:unix:/dev/shm/user1-php72fpm.sock|fcgi://user1-php72fpm/"

Now we are able to use this for AddHandler, also in .htaccess:
AddHandler ${php72-cgi} .php

This is the closest to the former "AddHandler php72-cgi .php" that I have
found. Do you think this is a suitable solution?

I am not sure; it does not look really good...
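
Putting the pieces together, a sketch of the whole workaround (socket
path, pool name and the ProxySet parameter are illustrative assumptions):

# global conf, generated per user and per PHP version:
<Proxy "fcgi://user1-php72fpm/">
    ProxySet timeout=3600
</Proxy>
Define php72-cgi "proxy:unix:/dev/shm/user1-php72fpm.sock|fcgi://user1-php72fpm/"

# user .htaccess, unchanged apart from the ${} expansion:
AddHandler ${php72-cgi} .php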

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org








Re: [users@httpd] Configuration help - addhandler <> mod_proxy_fcgi

2018-03-07 Thread Hajo Locke

Hello,

On 11.09.2017 at 14:58, Eric Covener wrote:

On Mon, Sep 11, 2017 at 4:28 AM, Hajo Locke <hajo.lo...@gmx.de> wrote:

Hello List,

currently I use classic mod_fastcgi (FastCgiExternalServer) with php-fpm,
which is quite reliable.
A disadvantage of this setup is that not every response header set by
.htaccess is really sent to the client.
The current setup is something like this:


 AddHandler myphp-cgi .php
 Action myphp-cgi /cgi-fpm/php71-fpm


The big advantage is that my users are able to use AddHandler via .htaccess
to choose any provided PHP version.

Now I try to switch from mod_fastcgi to the newly recommended mod_proxy_fcgi.

The basic variants with SetHandler work easily:

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/dev/shm/php70fpm.sock|fcgi://localhost/"
</FilesMatch>

Now I want to use AddHandler again, so my users' .htaccess files will
automatically work the former way and choose the proper PHP version.
Unfortunately I was not able to combine AddHandler, Action and the proxy in
a working way:

AddHandler php-mycgi .php
Action php-mycgi "proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/"

When enabling this in the global conf, every request to PHP files results
in a 400 response:
[Mon Sep 11 10:10:09.375597 2017] [core:error] [pid 23826] [client
x.x.x.x:53050] AH00126: Invalid URI in request GET /phpinfo.php HTTP/1.1

Please give me a hint towards a working configuration. All my attempts were
unsuccessful.


Action could be tricky here. Are you using php-fpm? Have you
considered allowing users to point at different sockets for different
fpm pools?
I have to continue on this problem. Unfortunately I did not find any
useful solution. This seems to be my last problem in switching from
mod_fastcgi to proxy_fcgi.

Maybe I should describe my problem again with better wording.

The current setup is apache -> mod_fastcgi -> php-fpm.
php-fpm is installed in multiple versions and provides a socket for each
version for each user.
Every user can choose a fitting PHP version for his script simply via
.htaccess:

AddHandler php71 .php
or
AddHandler php56 .php

Every usable AddHandler has a fitting Action directive in the global conf,
so mod_fastcgi knows which PHP socket to connect to.


I would like to offer the same service after switching to proxy_fcgi.
Unfortunately I did not find any useful setup. It seems that the
Action directive of mod_actions does not understand the proxy notation;
the following is not working:


AddHandler php-mycgi .php
Action php-mycgi "proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/"

This results in a 400 error:
[Mon Sep 11 10:10:09.375597 2017] [core:error] [pid 23826] [client
x.x.x.x:53050] AH00126: Invalid URI in request GET /phpinfo.php HTTP/1.1


Basically this would work via .htaccess:
AddHandler "proxy:unix:///dev/shm/php70fpm.sock|fcgi://localhost/" .php
But there are two big disadvantages:
- Our users have thousands of .htaccess files with the traditional
AddHandler notation. We can't require that all users rewrite their
.htaccess files.
- This AddHandler does not work if a proxy for this file extension is
already established in the vhost. But that is a requirement, because not
all users use .htaccess files.


So a user is not able to tie a file extension to any php-fpm version
on his own.


So my objective is to make former .htaccess entries like "AddHandler php71
.php" work with a proxy_fcgi setup, without the need to change
hundreds of thousands of .htaccess files.

I was not able to find a good solution for this.

Please help and give me some tips to get this working.

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org






Re: [users@httpd] proxy_fcgi - force flush to client

2018-02-19 Thread Hajo Locke

Hello,

On 08.02.2018 at 19:33, Luca Toscano wrote:



2018-02-02 12:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:




On 02.02.2018 at 07:05, Luca Toscano wrote:

Hello Hajo,

2018-02-01 13:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

Hello Luca,

On 01.02.2018 at 09:10, Hajo Locke wrote:

Hello Luca,

On 01.02.2018 at 04:46, Luca Toscano wrote:

Hi Hajo,

2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:

Hello List,

currently I compare features and behaviour of
proxy_fcgi to classical methods like mod_fastcgi/mod_php.

mod_php/fastcgi have options to send every output from
the backend immediately to the client, so it is possible
to see progressing output in the browser and not the
complete website output at once.

Here is an example script:
https://pastebin.com/4drpgBMq

If you run this with php-cli or adjusted
mod_php/mod_fastcgi, you see progress in the browser and
the numbers 0 1 2 appear one after another.
If you run this with proxy_fcgi you will see no
progress, but the complete output at once.

mod_proxy knows the worker parameter flushpackets,
but the docs say this is in effect only for AJP. I can
confirm that this and related options have no effect.
There are some workarounds posted on the web, but only
one worked for me. If I add the following line to the
script, I also see progress with proxy_fcgi in the browser:

header('Content-Encoding: none');

Does somebody know a working workaround without
script editing? Some workarounds mention using
"SetEnv no-gzip 1". This was not working for me, and I am
not pleased to disable content compression.
Is it planned to support flushpackets also for
proxy_fcgi?

Maybe this is not important for a typical website, but
for some service/monitoring scripts it is.


The functionality is committed to trunk but was never
backported to 2.4.x because I was not sure about its
importance; it looks like some users might benefit from it :)

The trunk patch is http://svn.apache.org/r1802040;
it should apply to 2.4.x
if you want to test it and give me some feedback.

Thanks!

I tried this and it works great. I see the same behaviour as
expected with the other methods. I think some users might
benefit from this. I saw some discussion related to this
topic, and people just ended up with ungainly workarounds.
Great news!

Unfortunately I spoke too soon. I was too euphoric when
reading your answer ;)
The behaviour is definitely better than before, but it seems
there is still a minimum limit for the buffer to flush. I
suppose this limit is 4096 bytes.
You can reproduce this with the pastebin example above.
Change line 2 from "$string_length = 14096;" to
"$string_length = 1331;"
When calling this PHP file you will see no progress. All
output appears at once.
Change the script line to "$string_length = 1332;" and you
will see at least 2 steps of output, because the first step
seems to break this 4096 buffer limit. Increasing
$string_length more and more results in more steps of output.
So the current mod_proxy_fcgi.c from svn with configured
"flushpackets=On" seems to work exactly like
"flushpackets=auto iobuffersize=4096".
Setting iobuffersize to lower numbers has no effect.
What do you think? Is there still a hard-coded limit, or do I
have a problem in my configuration?
I would be really glad if you could take a look at this issue.


I am far from being an expert in PHP, but I added "ob_flush();"
right before "flush()" in your script, and the 1331 use case
seems to flush correctly. Do you mind checking and letting me
know what you get in your testing environment? As far as I can
see in mod_proxy_fcgi's code, the iobuffersize variable is taken
into account..

It seems that I was additionally fooled by my browser. There is no
need to edit the script, just use the right browser ;)
I think your new mod_proxy_fcgi.c did it and my testing was
incorrect. I think we can go into the weekend..



Full list of commits is: svn merge -c 1802040,1807876,1808014,1805490 
^/httpd/httpd/trunk .


mod_proxy_fcgi.c only patch: 
http://people.apache.org/~elukey/httpd_2.4.x-mod_proxy_fcgi-force_flush.p

Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-02-05 Thread Hajo Locke

Hello Luca,

On 05.02.2018 at 02:27, Luca Toscano wrote:

Hi Hajo,

2018-02-01 3:58 GMT+01:00 Luca Toscano <toscano.l...@gmail.com>:


Hi Hajo,

2018-01-31 2:37 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:

Hello,


On 22.01.2018 at 11:54, Hajo Locke wrote:

Hello,

On 19.01.2018 at 15:48, Luca Toscano wrote:

Hi Hajo,

2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

Hello,

thanks Daniel and Stefan. This is a good point.
I did the test with a static file, and it completed
successfully within only a few seconds.

finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 10 total, 10 started, 10 done,
10 succeeded, 0 failed, 0 errored, 0 timeout

so the problem seems not to be h2load and basic Apache;
maybe I should look deeper into the proxy_fcgi configuration.
The php-fpm configuration is unchanged and was successfully
used with the classical fastcgi benchmark, so I think I have
to double-check the proxy.

Now I did this change in the proxy:

from
enablereuse=on
to
enablereuse=off

this change leads to a working h2load test run:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 10 total, 10 started, 10 done,
10 succeeded, 0 failed, 0 errored, 0 timeout

I am surprised by that. I expected higher performance
when reusing backend connections rather than creating
new ones.
I did some further tests and changed some other
php-fpm/proxy values, but once "enablereuse=on" is set,
the problem returns.

Should I just run the proxy with enablereuse=off? Or do
you have another suspicion?



Before giving up I'd check two things:

1) That the same results happen with a regular localhost
socket rather than a unix one.

I changed my setup to use TCP sockets in php-fpm and
proxy_fcgi. Currently I see the same behaviour.

2) What changes on the php-fpm side. Are there more busy
workers when enablereuse is set to on? I am wondering how
php-fpm handles FCGI requests happening on the same socket,
as opposed to assuming that 1 connection == 1 FCGI request.

If "enablereuse=off" is set i see a lot of running
php-workerprocesses (120-130) and high load. Behaviour is
like expected.
When set "enablereuse=on" i can see a big change. number of
running php-workers is really low (~40). The test is running
some time and then it stucks.
I can see that php-fpm processes are still active and waiting
for connections, but proxy_fcgi is not using them nor it is
establishing new connections. loadavg is low and
benchmarktest is not able to finalize.

I did some further tests to solve this issue. I set ttl=1 for
this proxy and achieved good performance and a high number of
working children. But this is paradoxical:
proxy_fcgi knows the connection is inactive and kills it, but
does not re-enable this connection for work.
Maybe this is helpful to others.

Maybe it is a kind of communication problem when checking the
health/busy status of the php processes.
The whole proxy configuration is this:

<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm"
</FilesMatch>




Thanks a lot for following up and reporting these interesting
results! Yann opened a thread [1] on dev@ to discuss the issue;
let's follow up there so we don't keep two conversations open.

Luca

[1]:

https://lists.apache.org/thread.html/a9586dab96979bf45550c9714b36c49aa73526183998c5354ca9f1c8@%3Cdev.httpd.apache.org%3E



reporting here what I think is happening in your test
environment when enablereuse is set to on. Recap of your settings:


/etc/apache2/conf.d/limits.conf
StartServers          10
MaxClients          500
MinSpareThreads      450
MaxSpareThreads      500
ThreadsPerChild      150
MaxRequestsPerChild   0
ServerLimit 500

<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>

request_terminate_timeout = 7200
listen = /dev/shm/php70fpm.sock
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000

By default mod_proxy allows a connection pool of Thr

Re: [users@httpd] proxy_fcgi - force flush to client

2018-02-02 Thread Hajo Locke



On 02.02.2018 at 07:05, Luca Toscano wrote:

Hello Hajo,

2018-02-01 13:20 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello Luca,

On 01.02.2018 at 09:10, Hajo Locke wrote:

Hello Luca,

On 01.02.2018 at 04:46, Luca Toscano wrote:

Hi Hajo,

2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:

Hello List,

currently I compare features and behaviour of proxy_fcgi to
classical methods like mod_fastcgi/mod_php.

mod_php/fastcgi have options to send every output from
the backend immediately to the client, so it is possible to see
progressing output in the browser and not the complete website
output at once.

Here is an example script:
https://pastebin.com/4drpgBMq

If you run this with php-cli or adjusted mod_php/mod_fastcgi,
you see progress in the browser and the numbers 0 1 2 appear
one after another.
If you run this with proxy_fcgi you will see no progress,
but the complete output at once.

mod_proxy knows the worker parameter flushpackets, but the
docs say this is in effect only for AJP. I can confirm that
this and related options have no effect.
There are some workarounds posted on the web, but only one
worked for me. If I add the following line to the script, I
also see progress with proxy_fcgi in the browser:

header('Content-Encoding: none');

Does somebody know a working workaround without
script editing? Some workarounds mention using "SetEnv
no-gzip 1". This was not working for me, and I am not pleased
to disable content compression.
Is it planned to support flushpackets also for proxy_fcgi?

Maybe this is not important for a typical website, but for some
service/monitoring scripts it is.


The functionality is committed to trunk but was never backported to
2.4.x because I was not sure about its importance; it looks like
some users might benefit from it :)

The trunk patch is http://svn.apache.org/r1802040; it should
apply to 2.4.x if you want to test it and give me some feedback.

Thanks!

I tried this and it works great. I see the same behaviour as
expected with the other methods. I think some users might
benefit from this. I saw some discussion related to this topic,
and people just ended up with ungainly workarounds.
Great news!

Unfortunately I spoke too soon. I was too euphoric when reading
your answer ;)
The behaviour is definitely better than before, but it seems there
is still a minimum limit for the buffer to flush. I suppose this
limit is 4096 bytes.
You can reproduce this with the pastebin example above.
Change line 2 from "$string_length = 14096;" to "$string_length =
1331;"
When calling this PHP file you will see no progress. All output
appears at once.
Change the script line to "$string_length = 1332;" and you will
see at least 2 steps of output, because the first step seems to
break this 4096 buffer limit. Increasing $string_length more and
more results in more steps of output.
So the current mod_proxy_fcgi.c from svn with configured
"flushpackets=On" seems to work exactly like "flushpackets=auto
iobuffersize=4096".
Setting iobuffersize to lower numbers has no effect.
What do you think? Is there still a hard-coded limit, or do I have
a problem in my configuration?
I would be really glad if you could take a look at this issue.


I am far from being an expert in PHP, but I added "ob_flush();" right
before "flush()" in your script, and the 1331 use case seems to flush
correctly. Do you mind checking and letting me know what you get on
your testing environment? As far as I can see in mod_proxy_fcgi's
code, the iobuffersize variable is taken into account..
It seems that I was additionally fooled by my browser. There is no need
to edit the script, just use the right browser ;)
I think your new mod_proxy_fcgi.c did it and my testing was incorrect. I
think we can go into the weekend...


Luca


Thanks,
Hajo



Re: [users@httpd] proxy_fcgi - force flush to client

2018-02-01 Thread Hajo Locke

Hello Luca,

On 01.02.2018 at 09:10, Hajo Locke wrote:

Hello Luca,

On 01.02.2018 at 04:46, Luca Toscano wrote:

Hi Hajo,

2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello List,

currently I compare features and behaviour of proxy_fcgi to
classical methods like mod_fastcgi/mod_php.

mod_php/fastcgi have options to send every output from the backend
immediately to the client, so it is possible to see progressing
output in the browser and not the complete website output at once.

Here is an example script:
https://pastebin.com/4drpgBMq

If you run this with php-cli or adjusted mod_php/mod_fastcgi, you
see progress in the browser and the numbers 0 1 2 appear one after another.
If you run this with proxy_fcgi you will see no progress, but
the complete output at once.

mod_proxy knows the worker parameter flushpackets, but the docs
say this is in effect only for AJP. I can confirm that this and
related options have no effect.
There are some workarounds posted on the web, but only one worked
for me. If I add the following line to the script, I also see
progress with proxy_fcgi in the browser:

header('Content-Encoding: none');

Does somebody know a working workaround without script editing?
Some workarounds mention using "SetEnv no-gzip 1". This was not
working for me, and I am not pleased to disable content compression.
Is it planned to support flushpackets also for proxy_fcgi?

Maybe this is not important for a typical website, but for some
service/monitoring scripts it is.


The functionality is committed to trunk but was never backported to 2.4.x
because I was not sure about its importance; it looks like some users
might benefit from it :)


The trunk patch is http://svn.apache.org/r1802040; it should apply to
2.4.x if you want to test it and give me some feedback.


Thanks!
I tried this and it works great. I see the same behaviour as expected with
the other methods. I think some users might benefit from this. I saw some
discussion related to this topic, and people just ended up with ungainly
workarounds.

Great news!
Unfortunately I spoke too soon. I was too euphoric when reading your
answer ;)
The behaviour is definitely better than before, but it seems there is
still a minimum limit for the buffer to flush. I suppose this limit is
4096 bytes.

You can reproduce this with the pastebin example above.
Change line 2 from "$string_length = 14096;" to "$string_length = 1331;"
When calling this PHP file you will see no progress. All output appears
at once.
Change the script line to "$string_length = 1332;" and you will see at
least 2 steps of output, because the first step seems to break this 4096
buffer limit. Increasing $string_length more and more results in more
steps of output.
So the current mod_proxy_fcgi.c from svn with configured "flushpackets=On"
seems to work exactly like "flushpackets=auto iobuffersize=4096".

Setting iobuffersize to lower numbers has no effect.
What do you think? Is there still a hard-coded limit, or do I have a
problem in my configuration?

I would be really glad if you could take a look at this issue.


Luca



Thank you,
Hajo



Re: [users@httpd] proxy_fcgi - force flush to client

2018-02-01 Thread Hajo Locke

Hello Luca,

On 01.02.2018 at 04:46, Luca Toscano wrote:

Hi Hajo,

2018-01-31 1:27 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello List,

currently I compare features and behaviour of proxy_fcgi to
classical methods like mod_fastcgi/mod_php.

mod_php/fastcgi have options to send every output from the backend
immediately to the client, so it is possible to see progressing output
in the browser and not the complete website output at once.

Here is an example script:
https://pastebin.com/4drpgBMq

If you run this with php-cli or adjusted mod_php/mod_fastcgi, you
see progress in the browser and the numbers 0 1 2 appear one after another.
If you run this with proxy_fcgi you will see no progress, but
the complete output at once.

mod_proxy knows the worker parameter flushpackets, but the docs
say this is in effect only for AJP. I can confirm that this and
related options have no effect.
There are some workarounds posted on the web, but only one worked
for me. If I add the following line to the script, I also see
progress with proxy_fcgi in the browser:

header('Content-Encoding: none');

Does somebody know a working workaround without script editing?
Some workarounds mention using "SetEnv no-gzip 1". This was not
working for me, and I am not pleased to disable content compression.
Is it planned to support flushpackets also for proxy_fcgi?

Maybe this is not important for a typical website, but for some
service/monitoring scripts it is.


The functionality is committed to trunk but was never backported to 2.4.x
because I was not sure about its importance; it looks like some users
might benefit from it :)


The trunk patch is http://svn.apache.org/r1802040; it should apply to
2.4.x if you want to test it and give me some feedback.


Thanks!
I tried this and it works great. I see the same behaviour as expected with
the other methods. I think some users might benefit from this. I saw some
discussion related to this topic, and people just ended up with ungainly
workarounds.

Great news!


Luca


Thanks,
Hajo


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-31 Thread Hajo Locke

Hello,

On 22.01.2018 at 11:54, Hajo Locke wrote:

Hello,

On 19.01.2018 at 15:48, Luca Toscano wrote:

Hi Hajo,

2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello,

thanks Daniel and Stefan. This is a good point.
I did the test with a static file, and it completed successfully
within only a few seconds.

finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 10 total, 10 started, 10 done, 10
succeeded, 0 failed, 0 errored, 0 timeout

so the problem seems not to be h2load and basic Apache; maybe I
should look deeper into the proxy_fcgi configuration.
The php-fpm configuration is unchanged and was successfully used with
the classical fastcgi benchmark, so I think I have to double-check
the proxy.

Now I did this change in the proxy:

from
enablereuse=on
to
enablereuse=off

this change leads to a working h2load test run:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 10 total, 10 started, 10 done, 10
succeeded, 0 failed, 0 errored, 0 timeout

I am surprised by that. I expected higher performance when
reusing backend connections rather than creating new ones.
I did some further tests and changed some other php-fpm/proxy
values, but once "enablereuse=on" is set, the problem returns.

Should I just run the proxy with enablereuse=off? Or do you have
another suspicion?



Before giving up I'd check two things:

1) That the same results happen with a regular localhost socket
rather than a unix one.
I changed my setup to use TCP sockets in php-fpm and proxy_fcgi.
Currently I see the same behaviour.
2) What changes on the php-fpm side. Are there more busy workers when
enablereuse is set to on? I am wondering how php-fpm handles FCGI
requests happening on the same socket, as opposed to assuming that 1
connection == 1 FCGI request.
If "enablereuse=off" is set, I see a lot of running php worker processes
(120-130) and high load. The behaviour is as expected.
When "enablereuse=on" is set, I can see a big change: the number of
running php workers is really low (~40). The test runs for some time and
then it gets stuck.
I can see that php-fpm processes are still active and waiting for
connections, but proxy_fcgi is neither using them nor establishing new
connections. The loadavg is low and the benchmark test is not able to finish.
I did some further tests to solve this issue. I set ttl=1 for this proxy
and achieved good performance and a high number of working children. But
this is paradoxical:
proxy_fcgi knows the connection is inactive and kills it, but does not
re-enable this connection for work.

Maybe this is helpful to others.
Maybe it is a kind of communication problem when checking the health/busy
status of the php processes.

The whole proxy configuration is this:

<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm"
</FilesMatch>




Luca


Altogether I have collected interesting results. This should be
remarkable for Stefan, because some results are not as expected. I
will show these results in a separate mail, to not mix them up with this
proxy problem.



Thanks,
Hajo



Re: [users@httpd] minimal custom module with no functionality

2018-01-31 Thread Hajo Locke

Hello List,

On 29.01.2018 at 11:32, Hajo Locke wrote:

Hello List,

I am trying to remove mod_php and switch to php-cgi with proxy_fcgi and
mpm_event.
An example setup is running well. But after removing libphp7.so I want
to keep support for php_value/php_flag directives in .htaccess.
This is done by the PHP htscanner extension. But for a working
php-htscanner extension, Apache needs to know about these
directives.
(A thread-safe compiled libphp7.so is currently no option because of
other problems.)


So, following this tutorial, I "created" a custom module which just
registers my needed directives and does nothing else:

https://httpd.apache.org/docs/2.4/developer/modguide.html

I reduced the example to minimum. Please look here:
https://pastebin.com/gEDqJYLR

Compiling and using it are successful. Apache knows about
php_flag/php_value, and my .htaccess works together with
htscanner; my php.ini settings are edited as expected.


This is a minimal Apache module; I simply refrained from registering any
hooks.
My question is: is this safe to use? I did not notice any error,
but I am no programmer.


Please take a short look at the code and tell me your opinion.
It seems we have no programmers here. I think I will start a small
question on the dev list.



Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] proxy_fcgi - force flush to client

2018-01-31 Thread Hajo Locke

Hello List,

currently I compare features and behaviour of proxy_fcgi to classical
methods like mod_fastcgi/mod_php.


mod_php/fastcgi have options to send every output from the backend
immediately to the client, so it is possible to see progressing output in
the browser and not the complete website output at once.


Here is an example script:
https://pastebin.com/4drpgBMq

If you run this with php-cli or adjusted mod_php/mod_fastcgi, you see
progress in the browser and the numbers 0 1 2 appear one after another.
If you run this with proxy_fcgi you will see no progress, but the complete
output at once.


mod_proxy knows the worker parameter flushpackets, but the docs say
this is in effect only for AJP. I can confirm that this and related
options have no effect.
There are some workarounds posted on the web, but only one worked for
me. If I add the following line to the script, I also see progress with
proxy_fcgi in the browser:


header('Content-Encoding: none');

Does somebody know a working workaround without script editing?
Some workarounds mention using "SetEnv no-gzip 1". This was not
working for me, and I am not pleased to disable content compression.

Is it planned to support flushpackets also for proxy_fcgi?

Maybe this is not important for a typical website, but for some
service/monitoring scripts it is.
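
For context, this is the kind of worker configuration I have been trying
(a sketch; as said above, the docs describe flushpackets as effective only
for AJP, and it indeed had no effect here):

<Proxy "fcgi://localhost/">
    ProxySet flushpackets=on
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/dev/shm/php70fpm.sock|fcgi://localhost/"
</FilesMatch>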


Thank you,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] minimal custom module with no functionality

2018-01-29 Thread Hajo Locke

Hello List,

I am trying to remove mod_php and switch to php-cgi with proxy_fcgi and
mpm_event.
An example setup is running well. But after removing libphp7.so I want to
keep support for php_value/php_flag directives in .htaccess.
This is done by the PHP htscanner extension. But for a working
php-htscanner extension, Apache needs to know about these directives.
(A thread-safe compiled libphp7.so is currently no option because of
other problems.)


So, following this tutorial, I "created" a custom module which just
registers my needed directives and does nothing else:

https://httpd.apache.org/docs/2.4/developer/modguide.html

I reduced the example to minimum. Please look here:
https://pastebin.com/gEDqJYLR

Compiling and using it are successful. Apache knows about
php_flag/php_value, and my .htaccess works together with htscanner;
my php.ini settings are edited as expected.


This is a minimal Apache module; I simply refrained from registering any
hooks.
My question is: is this safe to use? I did not notice any error, but
I am no programmer.


Please take a short look at the code and tell me your opinion.

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] h2load http/2 benchmarking results using different mpm/php configurations

2018-01-22 Thread Hajo Locke



On 22.01.2018 at 14:38, Eric Covener wrote:

but I never expected that my winner in this test would be mod_php; it also
had the lowest loadavg.

I don't think the motivation to pull the PHP interpreter out of the
webserver process is performance -- that's one of the costs.
Yes, my statement was made in the context of http2 > mpm_prefork >
mod_php > "H2MaxWorkers 1".
Basically mod_php is a fast technique and, depending on the purpose, a
good solution.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org





-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] h2load http/2 benchmarkingresults using different mpm/php configurations

2018-01-22 Thread Hajo Locke

Hello List,

Separately from the other mail about the proxy_fcgi/enablereuse problem, I want to report my results. This is quite interesting.

System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, php 7.0.25

All tests were started with these params: h2load -n100000 -c100 -m10 https://example.com/infophp.php

Tests used for this example were 100% successful:
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
I ran 10 tests for every configuration; I quote just one significant result per setup as an example.


First test is mod_php with mpm_prefork
finished in 42.87s, 2332.39 req/s, 253.63MB/s load average: 47,69

2nd test is php-fpm with classic mod_fastcgi configuration using 
FastCGIExternalServer

finished in 51.28s, 1950.25 req/s, 227.16MB/s load average: 60,70

3rd test is php-fpm using fcgi_proxy configuration 
(unixsocket/enablereuse=off) and mpm_event

finished in 47.54s, 2103.41 req/s, 225.35MB/s load average: 61,50
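The same numbers side by side:

Setup                              Time     req/s     MB/s     loadavg
mod_php + mpm_prefork              42.87s   2332.39   253.63   47,69
php-fpm + mod_fastcgi              51.28s   1950.25   227.16   60,70
php-fpm + proxy_fcgi (mpm_event)   47.54s   2103.41   225.35   61,50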

I expected that proxy_fcgi/mpm_event would be quicker than mod_fastcgi, but I never expected the winner of this test to be mod_php; it also produced the lowest load average.
This is especially remarkable because I use an unofficial patch to activate http2 with mpm_prefork, along with "H2MaxWorkers 1" to avoid segfaults.
Version 2.4.27 dropped support for http2 with mpm_prefork due to performance problems reported by users.

http://httpd.markmail.org/search/?q=Apache%20HTTP%20Server%202.4.27%20Released#query:Apache%20HTTP%20Server%202.4.27%20Released+page:1+mid:nsnewcr74hg6527f+state:results

I wonder why my test showed this result. Maybe mass-requesting phpinfo() is not comparable to a real production server, but altogether I am surprised.

How should I understand this result?

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-22 Thread Hajo Locke

Hello,

Am 19.01.2018 um 15:48 schrieb Luca Toscano:

Hi Hajo,

2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello,

thanks Daniel and Stefan. This is a good point.
I did the test with a static file and this test was successfully
done within only a few seconds.

finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 100000 total, 100000 started, 100000 done, 100000
succeeded, 0 failed, 0 errored, 0 timeout

so problem seems to be not h2load and basic apache. may be i
should look deeper into proxy_fcgi configuration.
php-fpm configuration is unchanged and was successfully used with
classical fastcgi-benchmark, so i think i have to doublecheck the
proxy.

now i did this change in proxy:

from
enablereuse=on
to
enablereuse=off

this change leads to a working h2load testrun:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 100000 total, 100000 started, 100000 done, 100000
succeeded, 0 failed, 0 errored, 0 timeout

iam surprised by that. i expected a higher performance when
reusing backend connections rather then creating new ones.
I did some further tests and changed some other php-fpm/proxy
values, but once "enablereuse=on" is set, the problem returns.

Should i just run the proxy with enablereuse=off? Or do you have
an other suspicion?



Before giving up I'd check two things:

1) That the same results happen with a regular localhost socket rather 
than a unix one.
I changed my setup to use TCP sockets in php-fpm and proxy_fcgi. Currently I see the same behaviour.
2) What changes on the php-fpm side. Are there more busy workers when 
enablereuse is set to on? I am wondering how php-fpm handles FCGI 
requests happening on the same socket, as opposed to assuming that 1 
connection == 1 FCGI request.
If "enablereuse=off" is set i see a lot of running php-workerprocesses 
(120-130) and high load. Behaviour is like expected.
When set "enablereuse=on" i can see a big change. number of running 
php-workers is really low (~40). The test is running some time and then 
it stucks.
I can see that php-fpm processes are still active and waiting for 
connections, but proxy_fcgi is not using them nor it is establishing new 
connections. loadavg is low and benchmarktest is not able to finalize.
May be a kind of communicationproblem and checking health/busy status of 
php-processes.

The whole proxy configuration is this:

<Proxy "fcgi://php70fpm">
    ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm"
</FilesMatch>




Luca


Altogether I have collected interesting results. They should be remarkable for Stefan, because some of them are not as expected. I will show these results in a separate mail, to not mix them up with this proxy problem.


Thanks,
Hajo


Re: [users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Hajo Locke

Hello,

Thanks Daniel and Stefan, this is a good point.
I did the test with a static file, and that test completed successfully within only a few seconds.


finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout


So the problem seems to be neither h2load nor basic apache; maybe I should look deeper into the proxy_fcgi configuration.
The php-fpm configuration is unchanged and was used successfully with the classical fastcgi benchmark, so I think I have to double-check the proxy.


Now I made this change in the proxy:

from
enablereuse=on
to
enablereuse=off

This change leads to a working h2load test run:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout


I am surprised by that; I expected higher performance when reusing backend connections rather than creating new ones.
I did some further tests and changed some other php-fpm/proxy values, but as soon as "enablereuse=on" is set, the problem returns.


Should I just run the proxy with enablereuse=off? Or do you have another suspicion?


Thanks,
Hajo


Am 19.01.2018 um 12:45 schrieb Daniel:

which are the results exactly and which are the results to a non-php
file such as a gif or similar?

2018-01-19 12:38 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

Hello list,

i do some http/2 benchmarks on my machine and have problems to finish at
least one test.

System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event

I start h2load with standard-params:

h2load -n100000 -c100 -m10 https://example.com/phpinfo.php

The first steps are really quick and I can see progress up to 50-70%, but after that the requests from h2load to the server decrease dramatically.
It seems that h2load stops sending requests to the server, but I don't see any reason for that on the server side. I can start a second h2load and it starts furiously again while the first one is stuck with no progress, so I can't believe there is a server problem.

all serverconfigs are really high, to avoid any kind of bottleneck.

/etc/apache2/conf.d/limits.conf
StartServers  10
MaxClients  500
MinSpareThreads  450
MaxSpareThreads  500
ThreadsPerChild  150
MaxRequestsPerChild   0
Serverlimit 500

my test-vhost just has some default values like servername, docroot etc.;
additionally there is the proxy_fcgi config:

<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>

fpm-config also has high limits to serve every incoming connection:
request_terminate_timeout = 7200
security.limit_extensions = no
listen = /dev/shm/php70fpm.sock
listen.owner = myuser
listen.group = mygroup
listen.mode = 0660
user = myuser
group = mygroup
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000
catch_workers_output = yes

Currently I have no explanation for this: a really fast start, then a decline to low activity, but I cannot see that limits are reached or that processes stop responding.
Could this be a problem in h2load, or a hidden problem in my configuration? Is there another recommended way to do http/2 speed benchmarking?

Before using proxy_fcgi I used classical mod_fastcgi with FastCGIExternalServer and did not have this kind of problem; with mod_fastcgi the test could complete.
Currently I am stumped and need a hint, please.

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org







-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] problems benchmarking php-fpm/proxy_fcgi with h2load

2018-01-19 Thread Hajo Locke

Hello list,

I am doing some http/2 benchmarks on my machine and have problems finishing even one test.


System is Ubuntu16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event

I start h2load with standard-params:

h2load -n100000 -c100 -m10 https://example.com/phpinfo.php

The first steps are really quick and I can see progress up to 50-70%, but after that the requests from h2load to the server decrease dramatically.
It seems that h2load stops sending requests to the server, but I don't see any reason for that on the server side. I can start a second h2load and it starts furiously again while the first one is stuck with no progress, so I can't believe there is a server problem.


All server limits are set really high, to avoid any kind of bottleneck.

/etc/apache2/conf.d/limits.conf
StartServers  10
MaxClients  500
MinSpareThreads  450
MaxSpareThreads  500
ThreadsPerChild  150
MaxRequestsPerChild   0
Serverlimit 500

my test-vhost just has some default values like servername, docroot etc.;
additionally there is the proxy_fcgi config:

<Proxy "fcgi://php70fpm/">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>


fpm-config also has high limits to serve every incoming connection:
request_terminate_timeout = 7200
security.limit_extensions = no
listen = /dev/shm/php70fpm.sock
listen.owner = myuser
listen.group = mygroup
listen.mode = 0660
user = myuser
group = mygroup
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000
catch_workers_output = yes

Currently I have no explanation for this: a really fast start, then a decline to low activity, but I cannot see that limits are reached or that processes stop responding.
Could this be a problem in h2load, or a hidden problem in my configuration? Is there another recommended way to do http/2 speed benchmarking?


Before using proxy_fcgi I used classical mod_fastcgi with FastCGIExternalServer and did not have this kind of problem; with mod_fastcgi the test could complete.

Currently I am stumped and need a hint, please.

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] high count h2 idle streams

2017-10-09 Thread Hajo Locke

Hello,


Am 09.10.2017 um 12:33 schrieb Hajo Locke:

Hello List,

Today I found an abnormality in the apache status on some servers.
There are a lot of "h2 idle, streams" entries in the apache status. It looks like this:


14-0 28241 0/41/41 K  0.25 128 1 0.0 0.10 0.10  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
15-0 28242 0/11/11 K  0.25 120 1 0.0 0.61 0.61  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
16-0 28243 0/15/15 K  0.22 8 1 0.0 0.39 0.39  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
17-0 28245 0/25/25 K  0.40 278 1 0.0 1.13 1.13  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
18-0 28246 0/46/46 K  0.52 35 54 0.0 1.53 1.53  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
19-0 28250 0/7/7 K  0.12 58 0 0.0 0.02 0.02  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
20-0 28277 0/3/3 K  0.24 243 66 0.0 0.23 0.23  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
21-0 28278 0/8/8 K  0.15 102 1 0.0 0.29 0.29  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
22-0 28280 0/5/5 K  0.12 18 1 0.0 0.31 0.31  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)


Some servers have hundreds of these; I never noticed this before.
These connections have status K or W. Is this a kind of attack trying to reach MaxRequestWorkers?
It seems the number of these connections can be reduced by lowering H2MaxWorkerIdleSeconds.

Apache version is 2.4.27.
What should I do now?
It seems that I found the problem. It looks like a standard DoS with slowloris; I think I was just confused by the mod_http2 output. Deactivating http2 shows the same problem with http/1.1.

mod_qos is a really good helper for this kind of problem.
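For anyone hitting the same thing, this is the flavour of mod_qos limits that helped here (a sketch with example values, not tuned recommendations):

<IfModule qos_module>
    # cap concurrent TCP connections per client IP
    QS_SrvMaxConnPerIP 30
    # enforce a minimum data rate per connection (low/high watermark),
    # which drops slowloris-style connections
    QS_SrvMinDataRate 120 1200
</IfModule>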


Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org





-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] high count h2 idle streams

2017-10-09 Thread Hajo Locke

Hello List,

Today I found an abnormality in the apache status on some servers.
There are a lot of "h2 idle, streams" entries in the apache status. It looks like this:


14-0 28241 0/41/41 K  0.25 128 1 0.0 0.10 0.10  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
15-0 28242 0/11/11 K  0.25 120 1 0.0 0.61 0.61  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
16-0 28243 0/15/15 K  0.22 8 1 0.0 0.39 0.39  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
17-0 28245 0/25/25 K  0.40 278 1 0.0 1.13 1.13  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
18-0 28246 0/46/46 K  0.52 35 54 0.0 1.53 1.53  ip.ip.ip.ip h2 idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
19-0 28250 0/7/7 K  0.12 58 0 0.0 0.02 0.02  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
20-0 28277 0/3/3 K  0.24 243 66 0.0 0.23 0.23  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
21-0 28278 0/8/8 K  0.15 102 1 0.0 0.29 0.29  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)
22-0 28280 0/5/5 K  0.12 18 1 0.0 0.31 0.31  ip.ip.ip.ip h2  idle, 
streams: 0/0/0/0/0 (open/recv/resp/push/rst)


Some servers have hundreds of these; I never noticed this before.
These connections have status K or W. Is this a kind of attack trying to reach MaxRequestWorkers?
It seems the number of these connections can be reduced by lowering H2MaxWorkerIdleSeconds.

Apache version is 2.4.27.
What should I do now?

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] Configuration help - addhandler <> mod_proxy_fcgi

2017-09-11 Thread Hajo Locke

Hello,

Am 11.09.2017 um 14:58 schrieb Eric Covener:

On Mon, Sep 11, 2017 at 4:28 AM, Hajo Locke <hajo.lo...@gmx.de> wrote:

Hello List,

currently i use classic mod_fastcgi (fastcgiexternalserver) with php-fpm,
which is quite reliable.
A disadvantage of this setup is, that not every response-header set by
.htaccess will really send to client.
Something like this is the current setup:


 AddHandler myphp-cgi .php
 Action myphp-cgi /cgi-fpm/php71-fpm


The big advantage is, that my users are able to use addhandler by .htaccess
to choose any provided php-version.

Now i try to switch from mod_fastcgi to new recommend way of mod_proxy_fcgi

The basic variants with SetHandler are working easily:

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/dev/shm/php70fpm.sock|fcgi://localhost/"
</FilesMatch>

Now i want to use AddHandler again, so .htaccess files of my users will
automatically work former way and choose proper php-version.
Unfortunately i was not able to combine AddHandler, Action and the proxy in
a working way:

Addhandler php-mycgi .php
Action php-mycgi "proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/"

When enabling this in global conf every request to php files results in a
400 response:
[Mon Sep 11 10:10:09.375597 2017] [core:error] [pid 23826] [client
x.x.x.x:53050] AH00126: Invalid URI in request GET /phpinfo.php HTTP/1.1

Please give me a hint to a working configuration. All my attempts were not
successful.


Action could be tricky here. Are you using php-fpm? Have you
considered allowing users to point at different sockets for diffrenent
fpm pools?
Thanks for your answer. Yes, I want to proxy to a socket provided by php-fpm. I run several php versions and allow my users to choose the fitting version with:


AddHandler php70 .php
or
Addhandler php71 .php

With classical mod_fastcgi this is easy: every user has his own socket, provided by the php-fpm pool configuration and used in that user's vhosts.


I investigated for some time and found this:
http://grokbase.com/t/apache/dev/1426dr36b5/adding-addhandler-support-for-mod-proxy

To my surprise this code is already present in apache 2.4.27.
This works:
AddHandler "proxy:unix:///dev/shm/php70fpm.sock|fcgi://localhost/" .php

It seems the problem is rather mod_actions-related: Action does not support the proxy notation. This would not be a problem for a pure server-side configuration.
But in this case all my users would have to rewrite their .htaccess; with that many users in hosting, this is not possible.
I have to find a way where a notation like "AddHandler php71 .php" also works with an fcgi proxy, i.e. create a plain handler name and bind it to a proxy action.

But unfortunately I can't achieve this on my own.
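One direction that might work (an untested sketch per vhost; mod_rewrite's documented H flag forces a handler, socket path as in my pools):

RewriteEngine On
# send .php in this user's vhost to his php-fpm socket via proxy_fcgi
RewriteRule \.php$ - [H=proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/]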

Thanks for your help.



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org




Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Configuration help - addhandler <> mod_proxy_fcgi

2017-09-11 Thread Hajo Locke

Hello List,

Currently I use classic mod_fastcgi (FastCGIExternalServer) with php-fpm, which is quite reliable.
A disadvantage of this setup is that not every response header set by .htaccess is actually sent to the client.

Something like this is the current setup:


    AddHandler myphp-cgi .php
    Action myphp-cgi /cgi-fpm/php71-fpm


The big advantage is that my users are able to use AddHandler in .htaccess to choose any provided php version.


Now i try to switch from mod_fastcgi to new recommend way of mod_proxy_fcgi

The basic variants with SetHandler are working easily:

<FilesMatch "\.php$">
    SetHandler "proxy:unix:/dev/shm/php70fpm.sock|fcgi://localhost/"
</FilesMatch>

Now I want to use AddHandler again, so my users' .htaccess files keep working the former way and choose the proper php version.
Unfortunately I was not able to combine AddHandler, Action and the proxy in a working way:


Addhandler php-mycgi .php
Action php-mycgi "proxy:unix:/dev/shm/php71fpm.sock|fcgi://localhost/"

When this is enabled in the global conf, every request to a php file results in a 400 response:
[Mon Sep 11 10:10:09.375597 2017] [core:error] [pid 23826] [client x.x.x.x:53050] AH00126: Invalid URI in request GET /phpinfo.php HTTP/1.1


Please give me a hint towards a working configuration; none of my attempts were successful.


Thanks,
Hajo





-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] [ANNOUNCEMENT] Apache HTTP Server 2.4.27 Released

2017-07-11 Thread Hajo Locke

Hello,

Am 11.07.2017 um 16:08 schrieb David Copeland:

On 11/07/17 09:58 AM, Eric Covener wrote:

On Tue, Jul 11, 2017 at 9:41 AM, David Copeland
 wrote:

o HTTP/2 will not be negotiated when using the Prefork MPM

I'm wondering what the reason for this is?

In the previous release, HTTP2 made prefork run multi-threaded. People
often chose prefork due to non-threadsafe code running in the server.



Right, understood.

Just looking at the HTTP/2 HowTo
(https://httpd.apache.org/docs/trunk/howto/http2.html). It suggests
setting H2MinWorkers will make it possible anyway if one wishes to take
the risk and try it. Is this not correct?

Thanks.

This was the answer to my question? I think I will try it, if H2MinWorkers restores the old behaviour. We use mod_php + http/2 on a lot of production servers. Maybe another MPM would perform better, but we never noticed serious problems.
We also use php-fpm, which basically runs with worker etc., but mod_php has a bigger range of functions and we can't replace it the easy way.


Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] [ANNOUNCEMENT] Apache HTTP Server 2.4.27 Released

2017-07-11 Thread Hajo Locke

Hello,

Am 11.07.2017 um 15:58 schrieb Eric Covener:

On Tue, Jul 11, 2017 at 9:41 AM, David Copeland
 wrote:

o HTTP/2 will not be negotiated when using the Prefork MPM

I'm wondering what the reason for this is?

In the previous release, HTTP2 made prefork run multi-threaded. People
often chose prefork due to non-threadsafe code running in the server.


So we can't use http/2 in 2.4.27 when using mpm_prefork? It is not configurable?

We use mpm_prefork because of mod_php.

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] http/2 vs. Headername

2017-05-23 Thread Hajo Locke

Hello,

Thanks for this hint. I compiled v1.10.5 (https://github.com/icing/mod_h2/releases/tag/v1.10.5) and replaced the version bundled with apache 2.4.25.

It seems to work now; browsers and curl show the webpage again.
So I think this bugfix will already be included in the next officially released version.


Thanks,
Hajo

Am 23.05.2017 um 11:36 schrieb Luca Toscano:

Hi Hajo,

any chance that you could download/build/test the latest release of 
https://github.com/icing/mod_h2/releases ?


Luca

2017-05-23 11:30 GMT+02:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello,

No one has an idea? Currently I believe this is a kind of apache bug.
I compiled curl with http/2 support to view more debug details:

curl -v --http2 https://example.com/

*   Trying ip.ip.ip.ip...
* Connected to example.com (ip.ip.ip.ip) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 696 certificates in /etc/ssl/certs
* ALPN, offering h2
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*server certificate verification OK
*server certificate status verification SKIPPED
*common name: example.com (matched)
*server certificate expiration date OK
*server certificate activation date OK
*certificate public key: RSA
*certificate version: #3
*subject: CN=example.com
*start date: Mon, 22 May 2017 05:04:00 GMT
*expire date: Sun, 20 Aug 2017 05:04:00 GMT
*issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
*compression: NULL
* ALPN, server accepted to use h2
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* TCP_NODELAY set
* Copying HTTP/2 data in stream buffer to connection buffer after
upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55c2a77b4bd0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.47.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* HTTP/2 stream 1 was not closed cleanly: error_code = 1
* Closing connection 0
curl: (16) HTTP/2 stream 1 was not closed cleanly: error_code = 1

Same problem as in web browsers.
The problem can be avoided by disabling the http2 module or by removing HeaderName from the .htaccess. Neither is intended.
Can somebody confirm this problem?

    Thanks,
Hajo


Am 22.05.2017 um 09:22 schrieb Hajo Locke:

Apache 2.4.25

Hello,

I have a small .htaccess with the following content to view folder contents:
###
Options +Indexes
HeaderName /foo/bar.htm
###
This works over http but fails over https if the browser uses http/2.
Chrome message: ERR_SPDY_PROTOCOL_ERROR
Firefox: Secure Connection Failed

I don't see any error in my logs; http/2 browsers just stop loading.
When http/2 is disabled, https works as well.
What should I do now?

Thanks,
Hajo








Re: [users@httpd] http/2 vs. Headername

2017-05-23 Thread Hajo Locke

Hello,

No one has an idea? Currently I believe this is a kind of apache bug.
I compiled curl with http/2 support to view more debug details:

curl -v --http2 https://example.com/

*   Trying ip.ip.ip.ip...
* Connected to example.com (ip.ip.ip.ip) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 696 certificates in /etc/ssl/certs
* ALPN, offering h2
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*server certificate verification OK
*server certificate status verification SKIPPED
*common name: example.com (matched)
*server certificate expiration date OK
*server certificate activation date OK
*certificate public key: RSA
*certificate version: #3
*subject: CN=example.com
*start date: Mon, 22 May 2017 05:04:00 GMT
*expire date: Sun, 20 Aug 2017 05:04:00 GMT
*issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
*compression: NULL
* ALPN, server accepted to use h2
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* TCP_NODELAY set
* Copying HTTP/2 data in stream buffer to connection buffer after 
upgrade: len=0

* Using Stream ID: 1 (easy handle 0x55c2a77b4bd0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.47.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* HTTP/2 stream 1 was not closed cleanly: error_code = 1
* Closing connection 0
curl: (16) HTTP/2 stream 1 was not closed cleanly: error_code = 1

Same problem as in web browsers.
The problem can be avoided by disabling the http2 module or by removing HeaderName from the .htaccess. Neither is intended.

Can somebody confirm this problem?

Thanks,
Hajo

Am 22.05.2017 um 09:22 schrieb Hajo Locke:

Apache 2.4.25

Hello,

I have a small .htaccess with the following content to view folder contents:
###
Options +Indexes
HeaderName /foo/bar.htm
###
This works over http but fails over https if the browser uses http/2.
Chrome message: ERR_SPDY_PROTOCOL_ERROR
Firefox: Secure Connection Failed

I don't see any error in my logs; http/2 browsers just stop loading.
When http/2 is disabled, https works as well.
What should I do now?

Thanks,
Hajo





[users@httpd] http/2 vs. Headername

2017-05-22 Thread Hajo Locke

Apache 2.4.25

Hello,

I have a small .htaccess with the following content to view folder contents:
###
Options +Indexes
HeaderName /foo/bar.htm
###
This works over http but fails over https if the browser uses http/2.
Chrome message: ERR_SPDY_PROTOCOL_ERROR
Firefox: Secure Connection Failed

I don't see any error in my logs; http/2 browsers just stop loading.
When http/2 is disabled, https works as well.
What should I do now?

Thanks,
Hajo



[users@httpd] apache 2.4 includes vi .swp files

2017-05-09 Thread Hajo Locke

Hello,

I found an interesting difference between the include behaviour of apache 2.2 and 2.4.


Have an include in apache2.conf:

Include /etc/apache2/conf.d/

When editing a conf file in this folder with vi, vi creates a swap file: if I edit logging.conf, vi creates .logging.conf.swp.

When running "apachectl configtest" at this particular time, apache 2.4 
tries to include the .logging.conf.swp which fails, because 
.logging.conf.swp is binary and invalid.

This prevents apache 2.4 from sucessfully start and leads to downtime.

Apache 2.2 does not try to include this .swp file and restarts successfully. The Include line is the same as above (Include /etc/apache2/conf.d/).


A quick fix is to include only *.conf files:

Include /etc/apache2/conf.d/*.conf
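On 2.4, IncludeOptional with the same wildcard also works and does not complain when nothing matches (a sketch):

# only real config files match; .logging.conf.swp is ignored
IncludeOptional /etc/apache2/conf.d/*.conf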

But I wonder whether apache should try at all to include a file beginning with a dot or ending in .swp, which generally indicates a temporary/hidden file.

In my opinion the include behaviour of apache 2.2 was more practice-oriented.

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] http/2 Misdirected Request

2017-04-11 Thread Hajo Locke

Apache 2.4.25

Hello,

I have an issue with http/2 and the response "421 Misdirected Request".
I read this page to learn about issues with multiple hosts and the same certificate: https://httpd.apache.org/docs/2.4/mod/mod_http2.html

Unfortunately I can't solve my problem on my own.
Involved are two subdomains, www.foobar.com and en.foobar.com.
The HTML from https://www.foobar.com contains a link to https://en.foobar.com.
After clicking fast enough, we see the 421 response and in the error log:
AH02032: Hostname www.foobar.com provided via SNI and hostname en.foobar.com provided via HTTP have no compatible SSL setup


As I understand it, apache returns 421 if the client wants to reuse an already established connection but apache detects differences in the SSL setup.

The problem is finding those differences...
Both subdomains use the same wildcard certificate, which matches all hosts. Both use default-ssl.conf; there are no individual SSL settings.
The only difference I see are the filenames for SSLCertificateKeyFile/SSLCertificateFile in the vhosts: both subdomains use the same certificate, but each subdomain has the cert stored in different files. Could that be a detected difference and the reason for the 421? First tests seem to confirm it.
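The test change was to point both vhosts at the identical files (a sketch, only the SSL lines shown; paths are examples):

<VirtualHost *:443>
    ServerName www.foobar.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/wildcard.foobar.com.pem
    SSLCertificateKeyFile /etc/ssl/private/wildcard.foobar.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName en.foobar.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/wildcard.foobar.com.pem
    SSLCertificateKeyFile /etc/ssl/private/wildcard.foobar.com.key
</VirtualHost>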
I tried to debug SSL (LogLevel info ssl:trace5), but I only see a lot of openssl messages and no further explanation of the SSL differences.


What should I do now?

Why does Chrome react in this harsh way and stop browsing? Firefox receives the same response and just opens a new connection.


Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] apache 2.4 handling of subdomains with unallowed characters

2017-01-23 Thread Hajo Locke

Hello,

Am 24.01.2017 um 07:01 schrieb Nick Kew:

On Mon, 2017-01-23 at 21:26 +, Darryl Philip Baker wrote:

DNS doesn’t allow underscore in host and domain names so how a URL
with an underscore would have ever worked is beyond me.

Yeah, but is it the webserver's role to enforce that?

Old answer: be liberal in what you accept.
New answer: enforce HTTP much more strictly to pre-empt the next
security alert based on smuggling something through.

In reply to the OP, HttpProtocolOptions may be what you're looking for, though I haven't verified it.

Yes, HttpProtocolOptions is the option I was looking for, thanks. The invalid subdomain is working again.
I am aware of the dangers of setting this to Unsafe. I will try to avoid it and eliminate these invalid hosts.
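For the record, the relaxed setting looks like this (global or per vhost; Unsafe turns off the strict RFC checks, so treat it as a stopgap only):

HttpProtocolOptions Unsafe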


Thanks,
Hajo


[users@httpd] apache 2.4 handling of subdomains with unallowed characters

2017-01-23 Thread Hajo Locke

Hello list,

I have some subdomains with disallowed characters, in my case the underscore.

In apache 2.2 subdomains like this worked: sub_domain.domain.com
In apache 2.4 this produces a 400 servererror (bad request)

It seems that apache 2.4's handling of allowed/disallowed characters is stricter.


Is there a config option to relax this behaviour to the 2.2 standard? I looked but did not find a proper directive.

Otherwise I will stop using disallowed characters.

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] HTTPD 2.4.25 crash in mod_proxy (ajp)

2017-01-02 Thread Hajo Locke

Hello,

Am 02.01.2017 um 12:47 schrieb Yann Ylavic:

On Mon, Jan 2, 2017 at 12:43 PM, Yann Ylavic <ylavic@gmail.com> wrote:

On Mon, Jan 2, 2017 at 12:41 PM, Yann Ylavic <ylavic@gmail.com> wrote:

Hi Hajo,

On Mon, Jan 2, 2017 at 11:54 AM, Hajo Locke <hajo.lo...@gmx.de> wrote:

sorry guys. i think i have lost overview. Has this resulted in a public
patch?

This patch: http://svn.apache.org/r1775775

Or here as plaintext:
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/proxy_util.c?r1=1775775=1775774=1775775=patch

Also, it seems that new ApacheLounge binaries include this fix:
http://www.apachelounge.com/viewtopic.php?p=34723


Thank you all for your help. I think I've got it now; I included the patch in my build process.



Regards,
Yann.

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] HTTPD 2.4.25 crash in mod_proxy (ajp)

2017-01-02 Thread Hajo Locke

Hello list,

Sorry guys, I think I have lost the overview. Has this resulted in a public patch?


Thanks,
Hajo

Am 23.12.2016 um 13:18 schrieb Konstantin Kolinko:

BCC: Steffen

I did quick tests to verify whether shutdown issues are related to
mod_proxy.  They are not related.

2016-12-23 15:01 GMT+03:00 Konstantin Kolinko :

2. Oddities at shutdown that I also mentioned are still there.

I mean the following:
- On Windows 7 (running as service, complex configuration):
"AH00431: Parent: Forcing termination of child process" log message

I do not see such message in old logs from 2.4.23.

Maybe the process is still broken, although it did not crash?

Quick test:

1) Start server service, Stop server service   (No HTTPS requests served)

No issue.

[Fri Dec 23 15:06:22.542629 2016] [mpm_winnt:notice] [pid 2636:tid
364] AH00422: Parent: Received shutdown signal -- Shutting down the
server.
[Fri Dec 23 15:06:24.570633 2016] [mpm_winnt:notice] [pid 3996:tid
256] AH00364: Child: All worker threads have exited.
[Fri Dec 23 15:06:24.648633 2016] [mpm_winnt:notice] [pid 2636:tid
364] AH00430: Parent: Child process 3996 exited successfully.

2) Start server service, Request a static page (root page of the
site), Stop server service.

The child process does not stop, is terminated forcedly.

[Fri Dec 23 15:07:02.353899 2016] [mpm_winnt:notice] [pid 3084:tid
364] AH00422: Parent: Received shutdown signal -- Shutting down the
server.
[Fri Dec 23 15:07:32.368352 2016] [mpm_winnt:notice] [pid 3084:tid
364] AH00431: Parent: Forcing termination of child process 5564

So this issue is real, but it is not related to mod_proxy.



- On Windows 10 (running as console, simple configuration example - GitHub):

Before I hit Ctrl+C the error.log file is as follows:
(I added additional line breaks to separate lines that are wrapped in e-mail.)
...
After I hit Ctrl+C in HTTPD console window, it becomes:
(I added additional line breaks to separate lines that are wrapped in e-mail.)
...

The "Apache server interrupted..." line appears in the middle of the
file, overwriting some of existing text.


Quick test:

1) Start server service, Stop server service   (No HTTPS requests served)

This issue is observed.
("Apache server interrupted..." line appears in the middle of the file).

So this oddity is real, but it is not related to mod_proxy, not
related to processing of HTTP requests.

Maybe this is not a real issue, just an oddity.


Best regards,
Konstantin Kolinko

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org





-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] apache 2.4 wildcardsubdomains

2016-09-13 Thread Hajo Locke

Hello,

Am 13.09.2016 um 13:00 schrieb Daniel:
Always define a ServerName. AFAIK *.example.com would not be valid in 2.2 either, even if it let you define it without error. ServerName should always have a valid resolvable name, at least from the client that will query it.


Note that if you have several virtualhosts ServerName is important so 
httpd will know exactly to which virtualhost it must deliver each request.


In your case, since you want to match all subdomains, just add one of your main subdomain names to the ServerName directive, e.g.: ServerName main.example.com
Ok, thanks, so we will generate a unique name for ServerName. Because the wildcard subdomain host should only trigger for non-existing subdomains, we cannot choose main.example.com as the name; we have to make sure it is not already in use.
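So the vhost would look like this (a sketch; the ServerName is just a reserved, otherwise unused name):

<VirtualHost *:80>
    ServerName wildcard-catchall.example.com
    ServerAlias *.example.com
    DocumentRoot /var/www/wildcardexample/public_html
</VirtualHost>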


2016-09-13 11:36 GMT+02:00 Hajo Locke <hajo.lo...@gmx.de>:


Hello List,

in apache 2.2 we had a typical vhost like this to realize
wildcardsubdomains:


<VirtualHost *:80>
ServerName *.example.com
ServerAlias *.example.com
DocumentRoot /var/www/wildcardexample/public_html
</VirtualHost>


In apache 2.4 wildcards are not allowed in ServerName. Is it ok to just comment out ServerName and run this vhost only with "ServerAlias *.example.com"?
It seems that ServerName is not a mandatory directive; apache 2.4 starts without problems.
Or is there a better way to realize this?

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org




--
Daniel Ferradal
IT Specialist

email dferradal at gmail.com
linkedin es.linkedin.com/in/danielferradal


Thanks,
Hajo


[users@httpd] apache 2.4 wildcardsubdomains

2016-09-13 Thread Hajo Locke

Hello List,

in apache 2.2 we had a typical vhost like this to realize 
wildcardsubdomains:



<VirtualHost *:80>
ServerName *.example.com
ServerAlias *.example.com
DocumentRoot /var/www/wildcardexample/public_html
</VirtualHost>


In apache 2.4 wildcards are not allowed in ServerName. Is it ok to just comment out ServerName and run this vhost only with "ServerAlias *.example.com"?
It seems that ServerName is not a mandatory directive; apache 2.4 starts without problems.

Or is there a better way to realize this?

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] unexpected behaviour of default host

2015-12-29 Thread Hajo Locke

Hello List,

We have used apache 2.2 for years; now I am trying to upgrade to 2.4 and am doing some config tests.

I noticed an unexpected behaviour of the default host.
As suggested here, I use a minimal default vhost:
https://httpd.apache.org/docs/2.4/vhosts/examples.html

DocumentRoot"/www/default"


In apache 2.2 we additionally used "ServerName *", but in 2.4 wildcards are not allowed in the ServerName directive.
So we leave it empty, as suggested in the docs; the first valid vhost loaded becomes the default host.
I think the missing ServerName is evaluated internally and filled with the hostname of the local machine; I did a lot of tests, and this is my only conclusion.
This leads to problems when the local hostname is used as ServerName in another vhost, which can now result in wrong DocumentRoots. We do it this way on a couple of thousand servers. I think this also happened to the guy who commented at the bottom of https://httpd.apache.org/docs/2.4/vhosts/examples.html
To avoid this it seems necessary to give the default vhost a ServerName with a non-existing hostname, e.g. "ServerName non.existing_host.noTld".
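A sketch of that workaround:

<VirtualHost *:80>
    # loaded first, so it acts as the default host;
    # the name must never collide with a real host
    ServerName non.existing_host.noTld
    DocumentRoot "/www/default"
</VirtualHost>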


Altogether I think the 2.4 solution for default hosts is quite unfortunate. Maybe a reserved default ServerName should be introduced to mark the default host.
At least the docs should be updated: admins should be told that they are "losing" a usable ServerName and that this is a potential source of error.


Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] unexpected behaviour of default host

2015-12-29 Thread Hajo Locke



Am 29.12.2015 um 20:07 schrieb Eric Covener:

On Tue, Dec 29, 2015 at 2:05 PM, Hajo Locke <hajo.lo...@gmx.de> wrote:

In Apache 2.2 we used additional "Servername *", but with 2.4 it is not
allowed to use wildcards with Servername-Directive.

I think it was treated as a literal * in 2.2. It's just a shorter/more
confusing version of non.existing_host.noTld.

Using a hostname which is already declared in the same conf is not less confusing.

The problem with shortcuts is that some people get lost in the thicket.
Maybe the cleanest way would be a directive like "UseThisAsDefault true".
But as long as the docs point out the presented problem, I am satisfied.

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] spdy/http/2 and mod_php

2015-06-30 Thread Hajo Locke

Hello,

I am planning to upgrade my apache 2.2 to 2.4; I have 2 questions first, where I need your help.


The former SPDY implementation conflicts with non-threadsafe modules like mod_php; to use SPDY it is necessary to use the worker MPM and php-cgi.
Now HTTP/2 is the new standard, and I would like to know whether the HTTP/2 implementation has the same conflicts with non-threadsafe modules like mod_php. As far as I know, HTTP/2 is based on SPDY.


I have some non-standard modules compiled and packaged for apache 2.2. Is it possible to use these modules again on apache 2.4, or is it necessary to compile them all anew for the new apache version?


Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] mod_rewrite vs. mod_jk

2015-05-06 Thread Hajo Locke

Hello,

I have a small mod_jk.conf and want to use mod_rewrite as well:

JkMount /* ajp13
JkUnmount /test/* ajp13
RewriteEngine On
RewriteRule ^/$ /java_app/ [L]

Rewriting with mod_rewrite only works for URLs that are unmounted via JkUnmount, so the rule above does not work: the request is immediately passed to the java worker.
Is there a way to change this behaviour, so that all mod_rewrite processing happens first and passing to the java worker comes last?

Or is unmounting mandatory for this?

Thanks,
Hajo

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] strange 32bit apache-problem

2014-09-15 Thread Hajo Locke

Hello,

I upgraded one of my machines to ubuntu 14.04 32bit;
apache 2.2.27 is running on it (not from the ubuntu repo).
I have a text file which is 512 bytes long; it contains just some characters, one long line with a line break at the end.


If I request this file by wget from the same machine, everything looks fine and readable.
If I request this file from another machine, the file appears corrupted; the response header and file size are still ok, but the content looks as if I had requested some binary content.

The content looks like this (just the first 8 bytes):
^@^@^@^@

If I reduce the length of the line, at some point the file is ok and readable again. What could be the problem here?
Other files with shorter lines are also ok; the special thing here seems to be that the line is very long with only one line break at the end.
I have other machines upgraded the same way with the same software; the only difference is that they are 64bit machines, and they work without problems.
I have never seen something like this; other, not yet upgraded 32bit machines still have no problem with the original file.


What to do now?

Thanks,
Hajo


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] strange 32bit apache-problem

2014-09-15 Thread Hajo Locke

Hello,

Am 15.09.2014 um 11:57 schrieb Fiedler Roman:

Hi,


Von: Hajo Locke [mailto:hajo.lo...@gmx.de]

Hello,

one of my machines i upgraded tu ubuntu 14.04 32bit.
there is a apache 2.2.27 running on it (non ubuntu-repo).
i have a textfile which is 512byte long, it contains just some chars,
just one long line with a linebreak at the end.

If i request this file by wget from the same machine, all is looking
fine and readable.
If i request this file from a other machine, then file seems to be
corrupted. response-header and filesize are still ok. file contents
looks like i would have requested some binary content.
content looks like this (just first 8 bytes):
^@^@^@^@

if i reduce length of line, then at a point file is ok again and
readable. What could be the problem here?
Other files with shorter lines are also ok. it seems to be special in
here, that file is very long and only one linebreak at the end
i have other machines which are upgraded the same way with same
software, only difference is that they are 64bit machines and they are
working without problems.
Never had something like this, some other not upgraded 32bit machines
still have no problem with original file.

What to do now?

Did you look at it in an editor? If yes, editors might be tricky.

Could you post xxd of original file, file retrieved locally and on remote
machine?

Roman
I used different editors, graphical and text-based; the display differs, but in all cases the file is corrupt.

For testing I created a 512-byte file containing a lot of a's.
I post the xxd output of the files; the first one is the file a.html retrieved from the same machine, which is ok:



000: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
010: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
020: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
030: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
040: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
050: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
060: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
070: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
080: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
090: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
0a0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
0b0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
0c0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
0d0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
0e0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
0f0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
100: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
110: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
120: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
130: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
140: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
150: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
160: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
170: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
180: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
190: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
1a0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
1b0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
1c0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
1d0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
1e0: 6161 6161 6161 6161 6161 6161 6161 6161  aaaaaaaaaaaaaaaa
1f0: 6161 6161 6161 6161 6161 6161 6161 610a  aaaaaaaaaaaaaaa.



Now comes the xxd output of the same file retrieved by another client:

000: 0000 0000 0000 0000 0000 0000 0000 0000  ................
010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
020: 0000 0000 0000 0000 0000 0000 0000 0000  ................
030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
040: 0000 0000 0000 0000 0000 0000 0000 0000  ................
050: 0000 0000 0000 0000 0000 0000 0000 0000  ................
060: 0000 0000 0000 0000 0000 0000 0000 0000  ................
070: 0000 0000 0000 0000 0000 0000 0000 0000  ................
080: 0000 0000 0000 0000 0000 0000 0000 0000  ................
090: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0a0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0b0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0c0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0d0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0e0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
0f0: 0000 0000 0000 0000 0000 0000 0000 0000  ................
100: 0000 0000 0000 0000 0000 0000 0000 0000  ................
110:

Re: [users@httpd] strange 32bit apache-problem

2014-09-15 Thread Hajo Locke

Hello,

Am 15.09.2014 um 13:51 schrieb Eric Covener:

On Mon, Sep 15, 2014 at 5:30 AM, Hajo Locke hajo.lo...@gmx.de wrote:

If i request this file by wget from the same machine, all is looking fine
and readable.
If i request this file from a other machine, then file seems to be
corrupted. response-header and filesize are still ok. file contents looks
like i would have requested some binary content.
content looks like this (just first 8 bytes):
^@^@^@^@

Try EnableSendfile off?

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org


I tried to disable sendfile, but unfortunately there is no change; the file is still damaged. :(


Thanks,
Hajo



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] strange 32bit apache-problem

2014-09-15 Thread Hajo Locke

Hello,

Am 15.09.2014 um 13:51 schrieb Eric Covener:

On Mon, Sep 15, 2014 at 5:30 AM, Hajo Locke hajo.lo...@gmx.de wrote:

If i request this file by wget from the same machine, all is looking fine
and readable.
If i request this file from a other machine, then file seems to be
corrupted. response-header and filesize are still ok. file contents looks
like i would have requested some binary content.
content looks like this (just first 8 bytes):
^@^@^@^@

Try EnableSendfile off?

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org


Sorry, EnableSendfile off is the solution after all. Earlier I had a problem restarting httpd, so EnableSendfile off could not take effect.

The problem is solved.
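For completeness, the one-liner in the global config (it needs a successful restart to take effect, which is what bit me here):

EnableSendfile Off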

Thanks,
Hajo



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] filesmatch suspends AccessFileName?

2013-04-05 Thread Hajo Locke

Hello,

Interesting thing here. Is this a bug or expected?
Apache is 2.2.23.

A customer uses a .htaccess containing some SetEnvIfNoCase directives to filter bad bots.

The Allow,Deny directives are placed within a FilesMatch section.
Example:

SetEnvIfNoCase user-agent hallohallo bad_bot=1

<FilesMatch "(.*)">
Order Allow,Deny
Allow from all
Deny from env=bad_bot
</FilesMatch>


The regex in the FilesMatch directive is quite useless, but it leads to the problem that the .htaccess file itself can be requested over http in a browser, showing all of its contents:

http://example.com/.htaccess

It seems quite simple for a user to disclose his .htaccess contents with a simple FilesMatch directive which suddenly overrides the protection I expected from the AccessFileName directive.

Is this a bug or expected?

Thanks,
Hajo 



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Re: filesmatch suspends AccessFileName?

2013-04-05 Thread Hajo Locke

Hello,


I have the following in the httpd.conf:



<FilesMatch "^\.ht">
   Order allow,deny
   Deny from all
   Satisfy All
</FilesMatch>



Don't you have something similar?


i have this:

<Files ~ "^\.ht">
   Order allow,deny
   Deny from all
</Files>

but this is overwritten by the customer's .htaccess.
I thought .htaccess is always protected by the AccessFileName directive; this was my fallacy, because AccessFileName has another meaning, as Paul mentioned.


So thanks to all, case solved,
Hajo 



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] german umlauts in filename

2012-03-20 Thread Hajo Locke

Hello List,

I have some files with German umlauts (ö ä ü) in the filename and want to request them via http.
The filenames are encoded in latin1; in the console, over ftp etc. everything works and looks good.

When requesting the file ü.txt I see this error in the log:
File does not exist: \xc3\xbc.txt

It only works if I recode the filename's charset to utf8 with convmv:
convmv -f latin1 -t utf8 ü.txt --notest

I can't do this for all my files.
Is there a way to help apache find the file even if its filename is not encoded in utf8?


Thanks,
Hajo 



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] german umlauts in filename

2012-03-20 Thread Hajo Locke



Use links that are URL-encoded with the proper bytes so clients don't
have to choose the codepage to request in.


Hmm, but when typing the URL directly into the browser, in most cases the browser sends utf8.

How to solve this?

Thanks,
Hajo 



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] german umlauts in filename

2012-03-20 Thread Hajo Locke

You could try rewriting utf-8 representation of umlaut [or other
common char people type into the URL directly] into your local
codepage representation.


This works. It is not my preferred solution, but it works for now. I will fix my ftp server to store filenames in utf8.


Thanks,
Hajo 



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] mod_status, disable server-status for users

2012-03-05 Thread Hajo Locke

Hello List,

Is there any possibility to hide the server-status page provided by mod_status from my users?
Every user with a .htaccess is able to use SetHandler and view the complete status.

How can I disable this?
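(One possible lever, an untested sketch: SetHandler in .htaccess is only honoured where AllowOverride includes FileInfo, so dropping FileInfo from the user directories would block it; but my users rely on AddHandler and friends, which also need FileInfo:)

<Directory "/var/www/users">
    # without FileInfo, SetHandler/AddHandler in .htaccess are ignored
    AllowOverride AuthConfig Indexes Limit
</Directory>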

Thanks,
Hajo 



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] mod_status, disable server-status for users

2012-03-05 Thread Hajo Locke

Hello,


I'm afraid the only way to disable this is to disable mod_status.
I don't know of any other way and I that's why I don't use mod_status.


Which module are you using? I can't do without a status page for my server.


Thanks,
Hans


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] keepalivetimeout - odd behaviour

2011-07-13 Thread Hajo Locke

Apache 2.2.14

Hello,

I am trying to link-check my domain with http://validator.w3.org/checklink
The link checker reports in some cases that my server answered with 500: Error: 500 Server closed connection without sending any data back.
All I see in the log is no error, just a successful request to /robots.txt from klink.w3.org.
When changing KeepAliveTimeout from 1 to 3, the error is gone and every run of the link checker shows a correct analysis.

When changing back to 1, again only 50% of the requests are successful.
Sounds strange to me...
Does somebody have an explanation?

Thanks,
Hajo 



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
 from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] reload separate fcgid-application

2011-04-19 Thread Hajo Locke

Hello,

You could issue a kill pid of your fcgi-wrapper process that handles your specific vhost (I distinguish mine through the use of different users via suexec, so I can do a pkill -u username); apache will spawn a new process when it receives the next request. However, note that this is not graceful: any tasks the process is busy with will be stopped.

That seems gentler on the CPU than reloading all applications.
Do you often have trouble with your users when killing the processes, or are these kills barely noticeable?


Thanks,
Hajo


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
 from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] reload separate fcgid-application

2011-04-14 Thread Hajo Locke
I am not very familiar with mod_fcgid, but what you want is possible with what I am running:


httpd 2.3.12-dev with mod_proxy_fcgi
PHP 5.3.7-dev with php-fpm

Interesting, but not an option for production systems.
Killing the user processes, as suggested by Björn, isn't a nice solution either, but it is more practicable.
On well-visited servers, too many php processes starting in parallel can overload the whole machine.
I cannot find any hints from the developer on how to avoid the CPU impact after reloading apache.

Hmm, no officially recommended procedure?

Thanks,
Hajo 



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
 from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] reload separate fcgid-application

2011-04-13 Thread Hajo Locke

Hello,

Is there a possibility to reload a single fcgid application (mod_fcgid) when something has changed?
Maybe the php.ini for my wrapper script has changed and I want to reload the application for that vhost without disturbing other apps.
Is this possible? I think a reload of apache stops all fcgid applications and forces them to restart; is that observation correct? There are a lot of defunct httpd processes in the process list after reloading apache.
I am afraid of killed apps and a CPU overload if a lot of applications start at the same time.


Thanks,
Hajo 



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
 from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] loadbalancing apache/tomcat

2011-03-25 Thread Hajo Locke

Hello List,

The situation: I have one Apache which is connected by mod_jk to multiple 
Tomcat servers.
Now it seems to be necessary to balance the Apache applications as well.

What is best practice in my case?
I am thinking about nginx in the first line, connected to the Apache servers 
and Tomcat servers as backends.
In the nginx conf I should be able to route requests to the appropriate 
servers; mod_jk is not needed any more because nginx is connected directly 
to the Tomcats.

Is this a good setup, or should it be put into practice in another way?
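
An alternative that stays within httpd, as a minimal sketch only:
mod_proxy_balancer (available since 2.2) can balance the Tomcat backends
directly over AJP, so the mod_jk layer can go away without adding an nginx
tier. The hostnames and the /app path are hypothetical:

<Proxy balancer://mytomcats>
    BalancerMember ajp://tomcat1.example.com:8009
    BalancerMember ajp://tomcat2.example.com:8009
</Proxy>
ProxyPass /app balancer://mytomcats
ProxyPassReverse /app balancer://mytomcats

This needs mod_proxy, mod_proxy_ajp and mod_proxy_balancer loaded; balancing
several Apache frontends themselves would still need something in front of
them, be it nginx or another httpd with mod_proxy.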

Thanks,
Hajo 






[users@httpd] Re: ssl-vhost-mixing issue

2011-02-22 Thread Hajo Locke

Hello,

Krist wrote:

You don't have a NameVirtualHost directive?
What happens if you enter https://ip2.ip2.ip2.ip2 in your browser?


We use

NameVirtualHost *:80

in httpd.conf.
I did some tests setting NameVirtualHost to the base IP of the server, 
ip0.ip0.ip0.ip0, but nothing changed.


ip1 and ip2 are used specifically for the SSL hosts.
It makes no difference whether I call https://ip2.ip2.ip2.ip2 or 
https://ip1.ip1.ip1.ip1.

In both cases I see the data of the cert that comes first in httpd.conf.
But the two different vhosts really are requested separately: I added a 
CustomLog directive for both vhosts and there was no mistake.
If I call https://ip2.ip2.ip2.ip2, log2 is written, and if I call 
https://ip1.ip1.ip1.ip1, log1 is written.

I have read this:
http://wiki.apache.org/httpd/NameBasedSSLVHosts
Apache ignores the config of the second host if the IP was already used for 
an SSL host. But in my case all the IPs used are different.

Did I understand this correctly?
If yes, maybe Apache gets confused when reading the certificates and finding 
the same hostname in both certificates...


Eric Covener wrote:

It's hard to tell which IP-based vhost you should have hit, or did
hit, since you didn't specify which IP you connected to and you didn't
log separately or show _all_ of your vhosts.


I don't find any fault in my conf. Logging separately did show the requests 
separately, but with the same cert content.

Either this is a tricky conf thing or a bug.

Thanks,
Hajo 






Re: [users@httpd] Re: ssl-vhost-mixing issue

2011-02-22 Thread Hajo Locke





See https://issues.apache.org/bugzilla/show_bug.cgi?id=43218#c5

It will work if you use a different ServerName (even varying the port
would fix it) in the vhost with a different cert.

Regards, Joe



Ahh, a bug.
Changing the port to a non-standard one would solve this problem but cause 
others...
I did some scripting and am now always writing the vhost with the active IP 
as the first one in the conf.

This solves the problem for me...
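
For the archive, a minimal sketch of the workaround from the bug comment:
the clash goes away as soon as the two ServerName values differ, and varying
only the port inside ServerName is already enough (IPs and cert names as in
the original post):

<VirtualHost ip1.ip1.ip1.ip1:443>
    ServerName example.com:443
    SSLCertificateFile crt1
</VirtualHost>

<VirtualHost ip2.ip2.ip2.ip2:443>
    # same hostname, different port: enough to keep the vhosts apart
    ServerName example.com:444
    SSLCertificateFile crt2
</VirtualHost>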

Thanks,
Hajo 






[users@httpd] Re: ssl-vhost-mixing issue

2011-02-21 Thread Hajo Locke

Hello,


Apache 2.2.14

Hello List,

I have a question about SSL and two vhosts.

I have 2 IP-based vhosts for enabling SSL for one domain in httpd.conf:

<VirtualHost ip1.ip1.ip1.ip1:443>
    ServerName example.com
    SSLCertificateFile crt1
</VirtualHost>

<VirtualHost ip2.ip2.ip2.ip2:443>
    ServerName example.com
    SSLCertificateFile crt2
</VirtualHost>

DocumentRoot and ServerName for the two vhosts are identical. I do this to 
switch the domain to a new IP and a new certificate at the same time without 
downtime by DNS.
ip1 and crt1 are the new ones.
Now I can observe an odd behaviour.

I call https://example.com, which still points to the old ip2 and the old 
certificate crt2. Now I view the details of the certificate in the browser 
and wonder that I can see the details of crt1, although crt1 belongs to the 
other vhost with the other IP.
It seems that always the crt from the first vhost with the same ServerName 
is loaded. If I turn around the order of the two vhosts and ip2 comes before 
ip1 in httpd.conf, then all is ok and the details of crt2 are displayed.
Is this expected behaviour? It seems to me that Apache is mixing some vhost 
params in this case. Bug or expected?



Nobody has an opinion about this issue? I think this is critical: either a 
bug in Apache or a bug in my conf. My conf seems clean; I cannot solve this. 
It should be impossible for Apache to mix vhost-specific directives. I can 
reproduce this on demand.


Hajo







[users@httpd] ssl-vhost-mixing issue

2011-02-15 Thread Hajo Locke

Apache 2.2.14

Hello List,

I have a question about SSL and two vhosts.

I have 2 IP-based vhosts for enabling SSL for one domain in httpd.conf:

<VirtualHost ip1.ip1.ip1.ip1:443>
   ServerName example.com
   SSLCertificateFile crt1
</VirtualHost>

<VirtualHost ip2.ip2.ip2.ip2:443>
   ServerName example.com
   SSLCertificateFile crt2
</VirtualHost>

DocumentRoot and ServerName for the two vhosts are identical. I do this to 
switch the domain to a new IP and a new certificate at the same time without 
downtime by DNS.
ip1 and crt1 are the new ones.
Now I can observe an odd behaviour.

I call https://example.com, which still points to the old ip2 and the old 
certificate crt2. Now I view the details of the certificate in the browser 
and wonder that I can see the details of crt1, although crt1 belongs to the 
other vhost with the other IP.
It seems that always the crt from the first vhost with the same ServerName 
is loaded. If I turn around the order of the two vhosts and ip2 comes before 
ip1 in httpd.conf, then all is ok and the details of crt2 are displayed.
Is this expected behaviour? It seems to me that Apache is mixing some vhost 
params in this case. Bug or expected?


Thanks,
Hajo 






[us...@httpd] webdav antivir

2011-01-04 Thread Hajo Locke

Hello,

I would like to activate virus scanning and block infected uploads for my 
WebDAV clients.

Is there a practicable way to do this?
Is someone using mod_clamav for Apache?
http://software.othello.ch/mod_clamav/

It seems to be not very up to date; the last version is from 2009.
Are there other solutions that stop infected files from being uploaded to 
the WebDAV share?


Thanks,
Hajo 






[us...@httpd] Re: mod_dav - practical use

2010-10-12 Thread Hajo Locke

Hello,


http://wiki.apache.org/httpd/ExtendingPrivilegeSeparation



Ahh, thanks a lot for your help. Now I can go on...

Thanks,
Hajo




[us...@httpd] Re: mod_dav - practical use

2010-10-10 Thread hajo . locke

 It's not so much a trick... You reverse-proxy DAV (write) requests to a
 back-end which is running on an unprivileged port, as an unprivileged
 user, who has permission to do writes on the FS.

Ahh, sure... but I would need a new backend for every DAV user. Which 
software is recommended for this kind of backend? I have no idea at the 
moment.

Thanks,
Hajo




[us...@httpd] Re: mod_dav - practical use

2010-10-10 Thread hajo . locke

Thanks for your help.
 The obvious answer, of course, is to run httpd ;)
Maybe I misunderstood something... In the first line is Apache, which 
reverse-proxies DAV requests coming in on a special port or alias to a 
backend that is able to read/write within the user's folder (doing the DAV 
stuff itself). This backend should run with the rights of my special user.
I can run multiple httpds? One as a main httpd for the general stuff and 
several more under unprivileged users? I have never tried this...
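
A minimal sketch of the front-end piece of that idea, assuming a per-user
backend httpd doing the DAV work on an unprivileged port (path, port and
address are hypothetical):

# in the main (front-end) httpd:
ProxyPass /dav/userA/ http://127.0.0.1:8081/
ProxyPassReverse /dav/userA/ http://127.0.0.1:8081/

# the backend httpd on port 8081 runs as userA and serves the
# DAV share from that user's directory via mod_dav

This needs mod_proxy and mod_proxy_http in the front end, and indeed one
backend instance (and port) per DAV user.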

But if you're looking for something more light-weight, you may want
to take a look at Yaws:

http://yaws.hyber.org/yman.yaws?page=yaws.conf

I will have a look at this.


Thanks,
Hajo



[us...@httpd] mod_dav - practical use

2010-10-08 Thread Hajo Locke

Hello List,

A question about mod_dav: some providers offer mod_dav to edit files which 
are also editable/writeable by the FTP user.
In most cases the FTP user and the Apache user are different, to avoid 
security problems. What's the trick to make this possible without a 
security risk?
I could imagine a special user/group setup, but all my solutions result in 
security problems through too much readability.


Thanks,
Hajo 






[us...@httpd] Re: recommended setup apache/php

2010-07-27 Thread Hajo Locke

PHP in CGI mode consumes a lot of memory because a PHP interpreter is
started for every request, so your memory is filled with N PHP
interpreters executing the same PHP code, where N is the number of
online users.
There is no way to avoid this except using a different method like
fastcgi/mod_php.

FastCGI (also fcgid, it's the same thing) is better compared to
mod_php because it gives more security, i.e. the PHP interpreter is
not embedded into apache; the PHP interpreter runs separately.
The PHP interpreter, once started, isn't killed by mod_{fastcgi,fcgid}
at the end of the request as in CGI, but that is configurable.
Once a request finishes, the PHP interpreter keeps running, and as
soon as another request is received, the running PHP interpreter is
used to process the PHP file.
So there's no overhead of initiating the process again and again.
Also, mod_fcgid (not mod_fastcgi) caches the compiled code in memory,
so you don't need opcode caching mechanisms to accelerate PHP
performance (eaccelerator, xcache, etc.) - this reduces the memory
used by those extensions.



I personally use mod_fcgid on my server and am happy with it. It gives
stunning performance.



You should try out mod_fcgid.


Sounds good. I did some tests with mod_fcgid; CPU load is higher than with 
mod_php, but not as high as expected.

Is this the way the big ones do it?
Is it possible to show your config?

There is one sentence in the docs which sounds strange:
###
Warning
Currently, only one FastCGI application of any type (AAA or handler) can be 
used for a particular request URI. Otherwise, the wrong FastCGI application 
may be invoked for one or more phases of request processing.

###
Does this mean that only one fcgid script per location is possible? I would 
like to have .php4 files bound to a php4 wrapper and .php5 files bound to a 
php5 wrapper in the same directory. I did some tests but could not confirm 
or rule it out.
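
A hedged sketch of the per-extension binding asked about above: mod_fcgid's
wrapper directive takes an optional extension argument, so two wrappers can
coexist in the same directory (the wrapper paths are hypothetical):

AddHandler fcgid-script .php4 .php5
FcgidWrapper /var/www/cgi-bin/php4-wrapper .php4
FcgidWrapper /var/www/cgi-bin/php5-wrapper .php5

(Older mod_fcgid releases spell the directive FCGIWrapper.)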


Thanks,
Hajo









[us...@httpd] Re: recommended setup apache/php

2010-07-27 Thread Hajo Locke

On 2010-07-27 10:15, Nilesh Govindarajan wrote:

If I understood your question properly, you're asking that
/htdocs/a.php is one fastcgi app and /htdocs/b.php is another.
If you want it this way, then you will have to add the shebang (#!)
line to all of your scripts before <?php starts, which is not a viable
solution if you have many php scripts which directly interact with the
public.

I don't use that method, see my config below. .php is processed
without any shebang stuff.

FcgidMaxProcesses 100
FcgidMaxProcessesPerClass 50
FcgidFixPathInfo 1
FcgidPassHeader HTTP_AUTHORIZATION
FcgidMaxRequestsPerProcess 100
FcgidOutputBufferSize 1048576
FcgidProcessLifeTime 60
FcgidMinProcessesPerClass 0
FcgidIOTimeout 120

ExpiresActive On
ExpiresDefault "access plus 1 month"

# The config below ensures that php is processed without a shebang line

DirectoryIndex index.html index.php
AddType text/html .php
AddHandler php-fastcgi .php
Action php-fastcgi /cgi-bin/php.fcgi

<FilesMatch "\.php$">
Options +ExecCGI
ExpiresActive Off
</FilesMatch>

And the source code for /cgi-bin/php.fcgi:

#!/bin/bash
export PHPRC=/usr/local/etc/php PHP_FCGI_CHILDREN=0
exec /usr/local/bin/php-cgi "$@"


I wouldn't put that in your /cgi-bin if I were you, or anywhere it could
be invoked directly. It looks unsafe.



Well, it doesn't seem to work that way; see this:
http://www.itech7.com/cgi-bin/php.fcgi


But maybe your users have FTP access to this file and can change the path 
to the binary?


Btw, thanks for your help above. I am a little bit surprised you are not 
using the directives

AddHandler fcgid-script .php
and
FCGIWrapper
as shown in the docs.
Action is part of mod_actions. I thought FCGIWrapper is a must-have 
directive to point to the binary.

Did you also do some tests with prefork vs. worker?
Thanks,
Hans 






[us...@httpd] recommended setup apache/php

2010-07-26 Thread Hajo Locke

Hello List,

I am looking for the most recommended setup of Apache/PHP for my purposes. 
I want to provide dynamic webspace for some users and their moderate-volume 
pages.
I know Apache in combination with mod_php is the fastest setup, but I want 
to avoid mod_php for several reasons.
I could imagine a setup like this: PHP is provided as FastCGI using 
mod_fcgid or mod_fastcgi (different versions possible). Which threading 
model should Apache use, worker or prefork?

Some pages recommend the worker MPM for faster requests.
What's the opinion of the experts? Which kind of setup is most recommended 
for my purposes? Some people say that PHP in CGI mode consumes a lot more 
CPU than mod_php. Is that correct? How can it be avoided?


Thanks,
Hajo 


