Re: Backporting 1823047 for 2.4.30 / 2.4.3x?

2018-02-28 Thread Christian Folini
Hello Yann,

In the meantime we have tested httpd 2.4.30 in combination with mod_qos 11.51 in
production.  The evil scans persist, but blocking works again and there are no
segfaults anymore.

So the problem is solved.

In the end, it was as you supposed: the fix had already been backported to
2.4.

Thank you for your good work!

Christian



On Fri, Feb 16, 2018 at 12:56:40PM +0100, Yann Ylavic wrote:
> On Fri, Feb 16, 2018 at 12:54 PM, Yann Ylavic <ylavic@gmail.com> wrote:
> > On Fri, Feb 16, 2018 at 11:47 AM, Christian Folini
> > <christian.fol...@netnea.com> wrote:
> >>
> >> We have just been told that a regression affecting several production
> >> servers is fixed by
> >> http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?r1=1822535&r2=1823047&diff_format=h#l1065
> >
> > Are you sure that r1823047 is the commit fixing the issue?
> > I would have thought more of r1820796 (already backported to 2.4.30)
> > which looks more related to third-party modules.
> 
> Or was it a regression between the two maybe?
> 
> >
> >>
> >> It's an interaction between mod_qos, mod_reqtimeout and the event mpm that
> >> led to segfault in our case (triggered by aggressive ssl scanners setting
> >> off alarms in mod_qos). The qos  developers states it's been introduced in
> >> 2.4.29 and the above patch fixes httpd's part of the problem. He will issue
> >> a new release as well.
> >
> > Do you have more details on the issue and/or relevent commit on the
> > mod_qos side?
> >
> >>
> >> So if you could backport this for 2.4.30 or a following release, it would
> >> be very welcome.
> >
> > Real tests and fixes certainly help backports ;)
> > It would be nice to be sure about the right fix, though.
> >
> >
> > Regards,
> > Yann.

-- 
Christian Folini - <christian.fol...@netnea.com>


Re: Backporting 1823047 for 2.4.30 / 2.4.3x?

2018-02-18 Thread Christian Folini
Hey Yann,

On Fri, Feb 16, 2018 at 12:56:40PM +0100, Yann Ylavic wrote:
> On Fri, Feb 16, 2018 at 12:54 PM, Yann Ylavic <ylavic@gmail.com> wrote:
> > On Fri, Feb 16, 2018 at 11:47 AM, Christian Folini
> > <christian.fol...@netnea.com> wrote:
> >>
> >> We have just been told that a regression affecting several production
> >> servers is fixed by
> >> http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?r1=1822535&r2=1823047&diff_format=h#l1065
> >
> > Are you sure that r1823047 is the commit fixing the issue?
> > I would have thought more of r1820796 (already backported to 2.4.30)
> > which looks more related to third-party modules.
> 
> Or was it a regression between the two maybe?

Honestly, I cannot really tell. It's what the mod_qos dev (in CC) told me.

@Pascal can you chime in on that?


On Fri, Feb 16, 2018 at 12:54:32PM +0100, Yann Ylavic wrote:
> > It's an interaction between mod_qos, mod_reqtimeout and the event mpm that
> > led to segfault in our case (triggered by aggressive ssl scanners setting
> > off alarms in mod_qos). The qos  developers states it's been introduced in
> > 2.4.29 and the above patch fixes httpd's part of the problem. He will issue
> > a new release as well.
> 
> Do you have more details on the issue and/or relevent commit on the
> mod_qos side?

There is a very aggressive ssl scanner opening several thousand connections in
a few seconds (we have not found out which one). mod_qos tries to close the
connections, the new 2.4 httpd ignores that, then mod_reqtimeout kicks in and
segfaults.

> > So if you could backport this for 2.4.30 or a following release, it would
> > be very welcome.
> 
> Real tests and fixes certainly help backports ;)
> It would be nice to be sure about the right fix, though.

We'd be glad to help. A functioning workaround is in place. Replacing it with a
patched httpd in prod is no problem.

Cheers,

Christian


-- 
No man is more unhappy than he who never faces adversity. 
For he is not permitted to prove himself.
-- Seneca


Backporting 1823047 for 2.4.30 / 2.4.3x?

2018-02-16 Thread Christian Folini
Hello,

You guys seem to be close to releasing 2.4.30, so this might be too late this
time around.

We have just been told that a regression affecting several production servers
is fixed by
http://svn.apache.org/viewvc/httpd/httpd/trunk/server/mpm/event/event.c?r1=1822535&r2=1823047&diff_format=h#l1065

It's an interaction between mod_qos, mod_reqtimeout and the event MPM that
led to a segfault in our case (triggered by aggressive ssl scanners setting
off alarms in mod_qos). The mod_qos developer states it was introduced in
2.4.29 and the above patch fixes httpd's part of the problem. He will issue
a new release as well.

So if you could backport this for 2.4.30 or a following release, it would
be very welcome.

Best regards,

Christian Folini

-- 
https://www.feistyduck.com/training/modsecurity-training-course
https://www.feistyduck.com/books/modsecurity-handbook/
mailto:christian.fol...@netnea.com
twitter: @ChrFolini



Re: 2.4.27

2017-07-06 Thread Christian Folini
Thank you Jim.

On Wed, Jul 05, 2017 at 12:48:48PM -0400, Jim Jagielski wrote:
> These are just the fixes/regressions noted in CHANGES:
> 
> Changes with Apache 2.4.27
> 
>   *) mod_lua: Improve compatibility with Lua 5.1, 5.2 and 5.3.
>  PR58188, PR60831, PR61245. [Rainer Jung]
>   
>   *) mod_http2: disable and give warning when mpm_prefork is encountered.
>  The server will continue to work, but HTTP/2 will no longer be
>  negotiated. [Stefan Eissing]
>   
>   *) Allow single-char field names inadvertently disallowed in 2.4.25.
>  PR 61220. [Yann Ylavic]
> 
>   *) htpasswd / htdigest: Do not apply the strict permissions of the temporary
>  passwd file to a possibly existing passwd file. PR 61240.
>  [Ruediger Pluem]
> 
>   *) mod_proxy_fcgi: Revert to 2.4.20 FCGI behavior for the default
>  ProxyFCGIBackendType, fixing a regression with PHP-FPM. PR 61202.
>  [Jacob Champion, Jim Jagielski]
> 
>   *) core: Avoid duplicate HEAD in Allow header.
>  This is a regression in 2.4.24 (unreleased), 2.4.25 and 2.4.26.
>  PR 61207. [Christophe Jaillet]
> 
> > On Jul 3, 2017, at 1:39 PM, Christian Folini <christian.fol...@netnea.com> 
> > wrote:
> > 
> > On Mon, Jul 03, 2017 at 07:33:01AM -0400, Jim Jagielski wrote:
> >> Anyone opposed to a quick T and release of 2.4.27 within
> >> the next week?
> > 
> > Will this be a release primarily addressing the open FastCGI regression
> > or are there additional security concerns with 2.4.26?
> > 
> > A quick note would help with the holiday schedule.
> > 
> > Regards,
> > 
> > Christian Folini
> > 
> > -- 
> > Christian Folini - <christian.fol...@netnea.com>

-- 
Christian Folini - <christian.fol...@netnea.com>


Re: 2.4.27

2017-07-03 Thread Christian Folini
On Mon, Jul 03, 2017 at 07:33:01AM -0400, Jim Jagielski wrote:
> Anyone opposed to a quick T and release of 2.4.27 within
> the next week?

Will this be a release primarily addressing the open FastCGI regression
or are there additional security concerns with 2.4.26?

A quick note would help with the holiday schedule.

Regards,

Christian Folini

-- 
Christian Folini - <christian.fol...@netnea.com>


Re: Tool to analyze and minimize loaded modules.

2017-05-18 Thread Christian Folini
Hello Mike,

This is probably more of a users-ML question, but I have a little
script to do this in the 2nd of my Apache/ModSecurity tutorials at
https://www.netnea.com/cms/apache-tutorial-2_minimal-apache-configuration/
-> Step 9

Cheers,

Christian


On Mon, May 15, 2017 at 09:12:52AM -0700, Mike Rumph wrote:
> Hello all,
> 
> I was wondering if there is any tool available that can analyze the
> directives in an httpd instance's configuration files and determine which
> loaded modules are not being used.
> If not, maybe such a tool could be quite useful for reducing the memory
> footprint.
> 
> Thanks,
> 
> Mike Rumph

-- 
Christian Folini - <christian.fol...@netnea.com>


Re: HTTP/1.1 strict ruleset

2016-08-03 Thread Christian Folini
On Wed, Aug 03, 2016 at 06:58:26PM -0500, William A Rowe Jr wrote:
> > I see a lot of value in logging when not applying the strict parsing,
> > so you can passively assess your traffic for a day/week/month.
> 
> That requires additional CPU, and significantly more code complexity.
> In fact, I wonder whether such 'logging-only' behavior shouldn't simply
> be a no-choice default? I also wonder if those tools or others such as
> mod_security won't already provide such an option and we can wash
> our hands of this 'extra effort'?

ModSecurity Core Rules committer here.

As you know, with ModSecurity it's all in the rules, and the
OWASP ModSecurity Core Rules (CRS) are the most widespread ruleset
on the net.

We block by default, but all the checks can run log-only.

They are listed in these rulefiles:
https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/v3.0.0-rc1/rules/REQUEST-911-METHOD-ENFORCEMENT.conf
https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/v3.0.0-rc1/rules/REQUEST-920-PROTOCOL-ENFORCEMENT.conf
https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/v3.0.0-rc1/rules/REQUEST-921-PROTOCOL-ATTACK.conf

The default policy definitions: 
https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/v3.0.0-rc1/modsecurity_crs_10_setup.conf.example

(Links are for the upcoming major release 3.0, RC1 will be out within
days now).

Overall, I think the rules are not overly aggressive. Apache has been
liberal so far and we try to avoid too many false positives due to
crazy clients and bad implementations. Missing Accept headers,
silly Range headers and numerical Host headers spring to mind as
frequent sources of false positives.

Also, I think the coverage is not very systematic. Joining forces and
providing systematic coverage of all aspects of RFC 2068 for
CRS 3.1 would be very interesting for our project. If referring users
to ModSecurity and the CRS would simplify the httpd code base, the
CRS could profit a lot from the endorsement (and from the httpd-dev
experience brought to our rules, resulting in a higher security level
overall).

A possible issue is the fact that ModSecurity runs fairly late in the
request lifecycle. In fact, the default hook for the first ModSecurity rule
phase was shifted back a few years ago. I take it an httpd
implementation of protocol enforcement rules would run immediately after
receiving the request line and then as the headers come in. ModSecurity
would definitely run later. However, there have been discussions in the
past about introducing additional rule phase(s) into the ModSecurity
engine / module, and if there is a need from the Apache project, then the
development might be open in this regard (but it would certainly take
quite a while to get this out the door).

Cheers,

Christian Folini

-- 
https://www.feistyduck.com/training/modsecurity-training-course
mailto:christian.fol...@netnea.com
twitter: @ChrFolini


Re: Allow SSLProxy* config in context?

2016-04-13 Thread Christian Folini
Rainer,

There is a commercial Apache-based reverse proxy in Switzerland
(with substantial market share) which is able to use / create
a client certificate _per_ session.

So the client connects to the RP and performs authentication. When
creating the session server-side, the RP creates a client cert,
fills it with information received from the client, and binds this
cert to the session. Then it connects to the backend and uses this
dynamic client cert in the handshake.

I realise this is way beyond what Apache is capable of doing. But
when looking into the limitations of SSLProxy..., one might consider
an architecture that would allow this. Maybe not immediately, but
sometime down the road.

Best,

Christian


-- 
Seek simplicity, and distrust it.
-- Alfred North Whitehead


Re: reverse proxy wishlist

2015-12-05 Thread Christian Folini
On Sat, Dec 05, 2015 at 11:01:54AM +, Tim Bannister wrote:
> ProxyErrorOverride is a good starting point. Often I want to let through only 
> some error pages: the ones explicitly coded to be shown to this website's 
> visitors. If the backend fails and produces an unstyled page of jargon and 
> diagnostics, I want httpd to intervene.

I'd like to follow up on that. The last time I checked,
ProxyErrorOverride was silent in the logs. A notice- or warning-level
message when it intervenes would be helpful in many situations.  The
typical conversation around ProxyErrorOverride starts with "It's the
error page of the proxy, so the proxy must have caused the error." That
discussion could be cut short with a log message stating
~"ProxyErrorOverride applied after receiving status XXX from the
backend."
 
> The application could signal to httpd that its response has a user-friendly 
> body via a special header.

I have thought about this before. I think a more general, more flexible
approach would be very helpful. In the end it boils down to something
like rewrite rules on the response.
You can do this with ModSecurity, but that is too late for
ProxyErrorOverride AFAICT.

Ahoj,

Christian Folini

-- 
Christian Folini - <christian.fol...@netnea.com>


Re: "httpd -X" segfaults with 2.4.17

2015-10-16 Thread Christian Folini
On Fri, Oct 16, 2015 at 01:58:17PM +0200, Jan Kaluža wrote:
> httpd 2.4.17 segfaults when used with prefork MPM (and probably also
> with other MPMs) and -X option since r1705492.
> 
> The crash happens in the following call in prefork.c (and probably
> also worker.c and so on):

Works fine here with event. At least so far.

Ahoj,

Christian Folini


-- 
The test of every religious, political, or educational system is the 
man which it forms.
-- Henri-Frédéric Amiel


Re: Expression Parser: search and replace with s/PATTERN/REPLACEMENT/FLAGS

2015-10-01 Thread Christian Folini
On Thu, Oct 01, 2015 at 01:55:40PM +0200, Rainer Jung wrote:
> Something different. Example:
> 
> Header set X-USER "expr=%{REMOTE_USER} =~ s/([^@]*)@.*/$1/"
> 
> ...
> 
> The example might be artificial and mod_header might support doing
> this in another way, but IMHO it would be a nice general feature for
> the expression parser which would work without cooperation from the
> modules/directives that use the expression parser.

This would be really neat. We have a few recipes where we abuse
ModSecurity or mod_rewrite to achieve this. Having it available
within the expression parser would simplify things a lot
(and get rid of timing and hook precedence issues).
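For comparison, the transformation in Rainer's example can be sketched in a few lines of Python. This only mirrors the intended semantics of the proposed `s/PATTERN/REPLACEMENT/` feature (strip the domain part from `user@domain`), it is not the expression parser itself:

```python
import re

# Mirror of the proposed expr syntax: %{REMOTE_USER} =~ s/([^@]*)@.*/$1/
# i.e. keep only the part before the "@" of the authenticated user.
def strip_domain(remote_user):
    return re.sub(r"([^@]*)@.*", r"\1", remote_user)

print(strip_domain("alice@example.com"))  # -> alice
print(strip_domain("bob"))                # no match: value passes through unchanged
```

The no-match case matters: like `re.sub`, the expression-parser variant would presumably leave a value without an `@` untouched.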

Ahoj,

Christian Folini

-- 
Christian Folini - <christian.fol...@netnea.com>


Re: mod_lua: Accessing multiple Set-Cookie response headers

2015-05-18 Thread Christian Folini
Daniel,

Thank you for your swift response.

It seems I did not make myself clear and made it look like this
is a users@ question, but I think it is not.

I am not developing an application. I have the application and I
am now using mod_lua on the reverse proxy in front of the
application.

So I am running a LuaOutputFilter and want to access the
Set-Cookie headers in the response, which have been created
by the application on the backend.

lua getcookie is based on ap_cookie_read, which reads the
Cookie request header. But I need to read the Set-Cookie
response headers. All of them. And r.headers_out is only
giving me one of them, while the application issued
multiple cookies in multiple Set-Cookie headers.

Any hint is still appreciated, but I really doubt users@
could help me. It's a special case not covered by the 
mod_lua documentation, AFAIK.
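The underlying problem is generic, not mod_lua-specific: any name-keyed table collapses repeated header fields. A minimal Python illustration (explicitly not mod_lua's actual API) of why a dict-like view loses duplicate Set-Cookie headers while the raw header pairs preserve them:

```python
# Raw header pairs as a backend might send them: Set-Cookie appears twice.
raw_headers = [
    ("Content-Type", "text/html"),
    ("Set-Cookie", "session=abc; Path=/"),
    ("Set-Cookie", "lang=en; Path=/"),
]

# A name-keyed table keeps only the last value per name ...
as_table = dict(raw_headers)
print(as_table["Set-Cookie"])   # only one cookie survives

# ... whereas iterating the pairs yields every Set-Cookie header.
cookies = [v for k, v in raw_headers if k == "Set-Cookie"]
print(cookies)                  # both cookies
```

Any fix on the mod_lua side would amount to exposing the pair-iteration view instead of only the keyed lookup.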

Best,

Christian


On Mon, May 18, 2015 at 06:58:15PM +0200, Daniel Gruno wrote:
> This should really go to users@, but anyway...
> You might want to take a look at:
> 
> http://modlua.org/api/builtin#getcookie
> http://modlua.org/api/builtin#setcookie
> 
> With regards,
> Daniel.
> 
> On 2015-05-18 16:53, Christian Folini wrote:
> > Hello,
> > 
> > Mod_lua gave me a few quick wins when I started to play around
> > with cookies on a reverse proxy. But then the obvious happened:
> > the backend started to issue multiple Set-Cookie response headers
> > in the same http response.
> > 
> > mod_lua returns the headers via r.headers_out, but while the
> > documentation states the return value is of lua-type table,
> > it is actually of lua-type userdata and I cannot see my way
> > around accessing more than a single Set-Cookie header per
> > request. The latter is done via r.headers_out['Set-Cookie'],
> > but now I got stuck.
> > 
> > Any ideas?
> > 
> > Best,
> > 
> > Christian Folini

-- 
Christian Folini - christian.fol...@netnea.com


2.2.25 build problem (was: Re: svn commit: r1497466 - in /httpd/httpd/branches/2.2.x: CHANGES STATUS modules/ssl/ssl_engine_io.c)

2013-07-09 Thread Christian Folini
On Wed, Jul 03, 2013 at 01:04:54PM -0400, Eric Covener wrote:
> A user on IRC reported that the SSL_PROTOCOL_SSLV2 here caused a build
> break on his debian system. Does it need to be wrapped in a
> OPENSSL_NO_SSL2 macro?

I have the same build problem for 2.2.25 on Ubuntu 12.04.1 LTS.
Is this going to be fixed before the release?

Rainer's proposed patch worked here.

Regs,

Christian Folini

-- 
Christian Folini - christian.fol...@netnea.com


Re: URL scanning by bots

2013-05-02 Thread Christian Folini
On Fri, May 03, 2013 at 09:39:44AM +1000, Noel Butler wrote:
> > real-time blacklist lookup (-> ModSecurity's @rbl operator).
> 
> Try using that on busy servers (webhosts/ISP's)... might be fine for a
> SOHO, but in a larger commercial world, forget it, the impact is far
> far worse than the other suggestions.

Certainly. But if we run 100% https anyway, enable a local DNS cache
or even cache the results within Apache, would it still be as
dangerous? So far my answer has been yes. But I would be interested
to hear a response from somebody who was crazy enough to enable it.

regs,

Christian

-- 
Complexity is the worst enemy of security, and the Internet -- 
and the computers and processes connected to it -- is getting
more complex all the time.
-- Bruce Schneier


Re: URL scanning by bots

2013-05-01 Thread Christian Folini
André,

On Wed, May 01, 2013 at 02:47:55AM +0200, André Warnier wrote:
> With respect, I think that you misunderstood the purpose of the proposal.
> It is not a protection mechanism for any server in particular.
> And installing the delay on one server is not going to achieve much.

In fact I did understand the purpose, but I wanted to get
my point across without writing a lengthy message on the
merits and flaws of your theory.

My point is: ModSecurity has all you need to do this
right now. All that is missing is enough people configuring
their servers as you propose.

Like many others, I do not think this will work. If it really
bothers you (and your bandwidth), then I would try a
real-time blacklist lookup (-> ModSecurity's @rbl operator).
Given the work of the spam defenders, these blacklists should
contain the IP addresses of the scanning bots as well.
I do not have this configured, but I would be really
interested to see the effect on average load, connection
use and number of scanning attempts on a server.

Interesting discussion by the way. Maybe a bit hot, though.

Best,

Christian Folini

-- 
We have to remember that what we observe is not nature herself, but
nature exposed to our method of questioning.  
-- Werner Heisenberg


Re: URL scanning by bots

2013-04-30 Thread Christian Folini
> The suggestion is based on the observation that there is a dichotomy
> between this kind of access by bots, and the kind of access made by
> legitimate HTTP users/clients: legitimate users/clients (including the
> good bots) are accessing mostly links which work, so they rarely get
> 404 Not Found responses. Malicious URL-scanning bots on the other hand,
> by the very nature of what they are scanning for, are getting many
> 404 Not Found responses.
> 
> As a general idea thus, anything which impacts the delay to obtain a
> 404 response should impact these bots much more than it impacts
> legitimate users/clients.
> 
> How much?
> 
> Let us imagine for a moment that this suggestion is implemented in the
> Apache webservers, and is enabled in the default configuration. And
> let's imagine that after a while, 20% of the Apache webservers deployed
> on the Internet have this feature enabled, and are now delaying any 404
> response by an average of 1000 ms.
> And let's re-use the numbers above, and redo the calculation.
> The same botnet of 10,000 bots is thus still scanning 300 Million
> webservers, each bot scanning 10 servers at a time for 20 URLs per
> server. Previously, this took about 6000 seconds.
> However now, instead of an average delay of 10 ms to obtain a 404
> response, in 20% of the cases (60 Million webservers) they will
> experience an average 1000 ms additional delay per URL scanned.
> This adds (60,000,000 / 10 * 20 URLs * 1000 ms) 120,000,000 seconds
> to the scan.
> Divided by 10,000 bots, this is 12,000 additional seconds per bot
> (roughly 3 1/2 hours).
> 
> So with a small change to the code, no add-ons, no special
> configuration skills on the part of the webserver administrator, no
> firewalls, no filtering, no need for updates to any list of URLs or
> bot characteristics, little inconvenience to legitimate users/clients,
> and a very partial adoption over time, it seems that this scheme could
> more than double the cost for bots to acquire the same number of
> targets. Or, seen another way, it could more than halve the number of
> webservers being scanned every day.
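The delay arithmetic in the quoted proposal can be sanity-checked with a short script (all numbers taken from the text above; the formula simply mirrors the one given there):

```python
servers_with_delay = 60_000_000   # 20% of 300 million webservers
servers_per_bot_batch = 10        # each bot scans 10 servers at a time
urls_per_server = 20
extra_delay_s = 1.0               # 1000 ms additional delay per 404
bots = 10_000

# Mirrors the formula in the text: servers / parallelism * URLs * delay.
extra_total_s = (servers_with_delay / servers_per_bot_batch
                 * urls_per_server * extra_delay_s)
extra_per_bot_s = extra_total_s / bots

print(extra_total_s)    # total extra time: 120,000,000 seconds
print(extra_per_bot_s)  # per bot: 12,000 seconds (~3.5 hours)
```

So the quoted figures of 120,000,000 seconds overall and roughly 3 1/2 hours per bot are internally consistent.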
 
> I know that this is a hard sell. The basic idea sounds a bit too
> simple to be effective.
> It will not kill the bots, and it will not stop the bots from scanning
> Internet servers in other ways that they use. It does not miraculously
> protect any single server against such scans, and the benefit of any
> one server implementing this is diluted over all webservers on the
> Internet.
> But it is also not meant as an absolute weapon. It is targeted
> specifically at a particular type of scan done by a particular type of
> bot for a particular purpose, and is just a scheme to make this more
> expensive for them. It may or may not discourage these bots from
> continuing with this type of scan (if it does, that would be a very
> big result).
> But at the same time, compared to any other kind of tool that can be
> used against these scans, this one seems really cheap to implement, it
> does not seem to be easy to circumvent, and it seems to have at least
> a potential of bringing big benefits to the WWW at large.
> 
> If there are reasonable objections to it, I am quite prepared to
> accept that, and drop it. I have already floated the idea in a couple
> of other places, and gotten what could be described as tepid
> responses. But it seems to me that most of the negative-leaning
> responses which I received so far were more of the a-priori "it will
> never work" kind, rather than real objections based on real facts.
> 
> So my hope here is that someone has the patience to read through this,
> and would have the additional patience to examine the idea
> professionally.

-- 
Christian Folini - christian.fol...@netnea.com


Re: Add bandwidth information to access_log

2013-01-18 Thread Christian Folini
Hi there,

On Fri, Jan 18, 2013 at 08:31:25AM +, Chau Pham wrote:
> I would like to add some bandwidth information to http server log file:
> access_log,

The Apache Security Book by Ivan Ristic has a recipe doing that with a
former version of ModSecurity. ModSec has since changed its timestamps
but it is still possible to get a value which more or less represents
up- and downstream bandwidth. Still, you should not trust it too much.

Regs,

Christian Folini

-- 
Christian Folini - christian.fol...@netnea.com


Re: Add bandwidth information to access_log

2013-01-18 Thread Christian Folini
Hey!

You should look up the individual values in the mod_log_config
documentation. The byte count does not constitute the
bandwidth. You have to take the time into your calculation
as well. ModSecurity can give you those timings.
You should look it up there and maybe turn to the mod-security
mailing list for help. This list is for httpd development.
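To make the point concrete, here is the calculation the byte count alone cannot give you. The numbers are hypothetical; the duration is exactly the timing a tool like ModSecurity (or mod_log_config's %D, the request duration in microseconds) would have to supply:

```python
# Hypothetical log values: bytes sent and time taken to serve the request.
bytes_sent = 2019496          # a %b-style byte count from the access log
duration_us = 1_500_000       # request duration in microseconds (e.g. %D)

# Bandwidth is bytes over time, so without the duration there is no bandwidth.
bandwidth_bytes_per_s = bytes_sent / (duration_us / 1_000_000)
print(round(bandwidth_bytes_per_s))  # -> 1346331 bytes/s (~1.35 MB/s)
```

Note this is a per-response average only; it says nothing about concurrent transfers or sustained link bandwidth.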

Cheers,

Christian Folini

On Fri, Jan 18, 2013 at 09:33:04AM +, Chau Pham wrote:
> Thank you, I saw this line below in access log while it was playing m3u3
> file, one of chunk below. 172.16.33.168 - - [18/Jan/2013:16:28:38 +0900] GET
> /data/That_is_love-46.ts HTTP/1.1 200 2019496 The number 2019496, does it
> stand for network traffic? I think it is in byte count, Can I consider that
> as bandwidth?
> 
> > Date: Fri, 18 Jan 2013 09:37:01 +0100
> > From: christian.fol...@netnea.com
> > To: dev@httpd.apache.org
> > Subject: Re: Add bandwidth information to access_log
> > 
> > Hi there,
> > 
> > On Fri, Jan 18, 2013 at 08:31:25AM +, Chau Pham wrote:
> > > I would like to add some bandwidth information to http server log file:
> > > access_log,
> > 
> > The Apache Security Book by Ivan Ristic has a recipe doing that with a
> > former version of ModSecurity. ModSec has since changed its timestamps
> > but it is still possible to get a value which more or less represents
> > up- and downstream bandwidth. Still, you should not trust it too much.
> > 
> > Regs,
> > 
> > Christian Folini
> > 
> > -- 
> > Christian Folini - christian.fol...@netnea.com

-- 
Christian Folini - christian.fol...@netnea.com


Re: Rethinking be liberal in what you accept

2012-11-08 Thread Christian Folini
On Thu, Nov 08, 2012 at 11:47:31AM +0100, Apache Lounge wrote:
> What about mod_security, has a lot of similar checks and even more.

ModSec can perform all these checks via regexes, but it bears a
certain overhead in performance and administration. The protocol
checks are part of bigger rulesets, and positives will be mixed
in the logs with other security findings of varying severity.

The standard, state-of-the-art ModSec deployment with the official
Core-Ruleset works with a scoring mechanism that does not block
a request instantly. So depending on the combination of violations
in a request, a bogus request line may pass beneath the threshold
of the Core-Rules.
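The scoring mechanism described above can be sketched as follows. This is a toy model, not actual CRS rule logic, and the rule names and scores are made up for illustration: each matching rule adds its severity score, and the request is only blocked once the sum reaches the configured threshold:

```python
# Toy anomaly-scoring model: (rule name, score) pairs for rules that matched.
# Names and scores are hypothetical, not real CRS rule IDs.
matched_rules = [
    ("bogus_request_line", 5),      # protocol violation
    ("missing_accept_header", 2),   # protocol anomaly
]
threshold = 10  # inbound anomaly score limit

total = sum(score for _, score in matched_rules)
blocked = total >= threshold

print(total, blocked)  # 7 False -> the bogus request line passes through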

A simple, single directive to stop any protocol violations once 
and for all is preferable in my eyes.

regs,

Christian Folini

 
> -----Original Message----- From: Stefan Fritsch
> Sent: Wednesday, November 7, 2012 12:26 Newsgroups: gmane.comp.apache.devel
> To: dev@httpd.apache.org
> Subject: Rethinking be liberal in what you accept
> 
> Hi,
> 
> considering the current state of web security, the old principle of be
> liberal in what you accept seems increasingly inadequate for web servers.
> It causes lots of issues like response splitting, header injection, cross
> site scripting, etc. The book Tangled Web by Michal Zalewski is a good
> read on this topic, the chapter on HTTP is available for free download at
> http://nostarch.com/tangledweb .
> 
> Also, nowadays performance bottle necks are usually in other places than
> request parsing. A few more cycles spent for additional checks won't make
> much difference. Therefore, I think it would make sense to integrate some
> sanity checks right into the httpd core. For a start, these would need to
> be enabled in the configuration.
> 
> Examples for such checks [RFC 2616 sections in brackets]:
> 
> Request line:
> - Don't interpret all kinds of junk as HTTP/1.0 (like HTTP/ab or
>   FOO) [3.1]
> - If a method is not registered, bail out early.
>   This would prevent CGIs from answering requests to strange methods like
>   HELO or http://foo/bar;. This must be configurable or there must be
>   at least a directive to easily register custom methods.  Otherwise, at
>   least forbid strange characters in the method. [The method is a token,
>   which should not contain control characters and separators; 2.2, 5.1]
> - Forbid control characters in URL
> - Forbid fragment parts in the URL (i.e. #... which should never be sent
>   by the browser)
> - Forbid special characters in the scheme part of absoluteURL requests,
>   e.g. 
> 
> Request headers:
> - In Host header, only allow reasonable characters, i.e. no control
>   characters, no . Maybe: only allow ascii letters, digits, and
>   -_.:[]
> - Maybe replace the Host header with the request's hostname, if they are
>   different. In:
>  GET http://foo/ HTTP/1.1
>  Host: bar
>   The Host: bar MUST be ignored by RFC 2616 [5.2]. As many webapps likely
>   don't do that, we could replace the Host header to avoid any confusion.
> - Don't accept requests with multiple Content-Length headers. [4.2]
> - Don't accept control characters in header values (in particular
>   single CRs, which we don't treat specially, but other proxies may. [4.2]
> 
> Response headers:
> - Maybe error out if an output header value or name contains CR/LF/NUL (or
>   all control characters?) [4.2]
> - Check that some headers appear only once, e.g. Content-Length.
> - Potentially check in some headers (e.g. Content-Disposition) that
>   key=value pairs appear only once (this may go too far / or be too
>   expensive).
> 
> Other:
> - Maybe forbid control characters in username + password (after base64
>   decoding)
> 
> As a related issue, it should be possible to disable HTTP 0.9.
> 
> The dividing line to modules like mod_security should be that we only
> check things that are forbidden by some standard and that we only look at
> the protocol and not the body.  Also, I would only allow to switch the
> checks on and off, no further configurability. And the checks should be
> implemented efficiently, i.e. don't parse things several times to do the
> checks, normally don't use regexes, etc.
> 
> What do you think?
> 
> Cheers,
> Stefan

-- 
Christian Folini - christian.fol...@netnea.com


Re: Proposal: adoption of mod_firehose subproject

2011-12-13 Thread Christian Folini
Graham,

Mod_firehose sounds very helpful. I like the record/replay
options. It would be great if you could convince the
developers.

It is possible to do similar stuff with mod_security,
though not in a very easy way, but mod_security still
helps for debugging purposes.

One thing you cannot do with mod_security, though, is
the following: log encrypted connections between a
reverse proxy and the backend applications.

So far it is very hard to prove that the Apache proxy
sends the right stuff if you cannot get hold of the
backend application's logs.

Now I wonder if mod_firehose could solve this problem
too.

Regards,

Christian Folini

-- 
First you make it, then it works, then you invite people to
make it better.
-- Eben Moglen, Free Software Foundation


A timestamp for mod_log_forensic (?)

2011-03-30 Thread Christian Folini
Hi there,

Mod_log_forensic is saving my day while debugging a crashing 
apache. But matching the right request with the crash and its 
corefile is difficult.

Ideally the log would show me only the active requests 
at the moment the server died. But in my case things are a bit
more difficult. The delta between incoming requests and those
finished is bigger.

So I matched the entries of the finished requests with the
access log entries to get a more or less accurate timestamp
for all those requests that never finished, so I could match
them with the crash. But that is very complicated of course.

So, is there anything speaking against a timestamp for the 
forensic log?

The format right now looks as follows:

+yQtJf8CoAB4AAFNXBIEA|GET /manual/de/images/down.gif HTTP/1.1|Host:localhost%3a8080| etc.

A format with a microtimestamp could look as follows:

+956166333.123456|yQtJf8CoAB4AAFNXBIEA|GET /manual/de/ ...

or

+yQtJf8CoAB4AAFNXBIEA|956166333.123456|GET /manual/de/ ...

or

+yQtJf8CoAB4AAFNXBIEA|GET /manual/de/ ... |956166333.123456|


Best regards,

Christian Folini

-- 
Christian Folini - christian.fol...@netnea.com


Re: A timestamp for mod_log_forensic (?)

2011-03-30 Thread Christian Folini
On Wed, Mar 30, 2011 at 03:32:27PM +0200, Graham Leggett wrote:
> Have you taken a look at Jeff's mod_whatkilledus?
> 
> http://people.apache.org/~trawick/exception_hook.html

mod_whatkilledus will be one of the next steps in my debugging,
if mod_log_forensic won't do.

Still, I think I could add a small timestamp patch to
mod_log_forensic for future convenience.

regs,

Christian

-- 
Christian Folini - christian.fol...@netnea.com


Re: [PATCH] Logging the handler in the access log

2010-02-01 Thread Christian Folini
On Tue, Feb 02, 2010 at 12:06:33AM +0200, Graham Leggett wrote:
> On 01 Feb 2010, at 10:59 PM, Christian Folini wrote:
> 
> > Sure. Here you go:
> 
> Committed to trunk, and proposed for backport to v2.2. Thanks for this.

My pleasure. Thank you.

Best,

Christian

-- 
We must be diligent, we must keep learning, we will prevail.
-- Jeremiah Grossman


[PATCH] Logging the handler in the access log

2010-01-31 Thread Christian Folini
Hello all,

In a heterogeneous setup with multiple servers and reverse
proxies, life can be a burden. At times, the access log could help
by sharing some insight into the handler that produced
the response.

Unfortunately, mod_log_config does not give an easy way to log
this information.

Therefore I am proposing a tiny patch to add this functionality:

Patch against branches/2.2.x
$ svn diff
Index: mod_log_config.c
===================================================================
--- mod_log_config.c	(revision 903198)
+++ mod_log_config.c	(working copy)
@@ -380,6 +380,10 @@
 {
     return pfmt(r->pool, r->status);
 }
+static const char *log_handler(request_rec *r, char *a)
+{
+    return ap_escape_logitem(r->pool, r->handler);
+}
 
 static const char *clf_log_bytes_sent(request_rec *r, char *a)
 {
@@ -1516,6 +1520,7 @@
         log_pfn_register(p, "T", log_request_duration, 1);
         log_pfn_register(p, "U", log_request_uri, 1);
         log_pfn_register(p, "s", log_status, 1);
+        log_pfn_register(p, "R", log_handler, 1);
     }
 
     return OK;


So this adds an item named %R to the LogFormat directive.
(I'm happy with any alternative letter, of course.)
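To illustrate the intended use (assuming the patch is applied): the directives below are standard mod_log_config; only the %R item is new, and the format nickname is made up.

```apacheconf
# Common Log Format plus the handler that served the request (%R)
LogFormat "%h %l %u %t \"%r\" %>s %b %R" handlerlog
CustomLog logs/access_log handlerlog
```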


I reckon most admins know their handlers quite well. But the real world
is complicated at times, and this tiny enhancement could be helpful
for those in need.

Is there any opinion on this?

Best,

Christian


-- 
When there are too many policemen, there can be no liberty. When there
are too many soldiers, there can be no peace. When there are too many
lawyers, there can be no justice.
-- Lin Yutang


Re: [Fwd: Slowloris]

2009-06-22 Thread Christian Folini
On Mon, Jun 22, 2009 at 02:23:12PM +0200, Dirk-Willem van Gulik wrote:
> - Seriously rewrite apache / add a worker which mimics the
>   accept_filter.ko of FreeBSD somewhat, in that it is a single-threaded
>   async select() loop which buffers things up until they are cooked
>   enough (i.e. the client has enough skin in the game) to hand off to
>   a real worker.

Isn't this mechanism limited to HTTP, missing HTTPS? So I
do not think it can be a general solution.

I am not an Apache developer, but wouldn't the event MPM be of
some use in this case?

Otherwise, I see a lack of granular timeout values. RSnake's
latest take can be fought with a low KeepAliveTimeout
(see http://ha.ckers.org/blog/20090620/http-longevity-during-dos/).
One should be able to assign timeouts to the other request phases too.
And it should be possible to set these timeouts in a way that a
subsequent header or a single POST payload byte does not reset
them to zero again.
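As an aside: phase-specific timeouts of exactly this kind are what the later mod_reqtimeout provides. A hedged sketch of such a configuration follows; the directive syntax is as documented for mod_reqtimeout, the values are illustrative:

```apacheconf
# Give the client 20 seconds to send the headers, extendable to at
# most 40 seconds as long as data keeps arriving at 500 bytes/second
# or more; apply the same idea to the request body.
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
```

A slow trickle of header or body bytes thus no longer resets the clock to zero; it merely buys time proportional to the data actually sent.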

Just my 2 cents

Christian Folini

-- 
If you shut your door to all errors truth will be shut out.
--- Rabindranath Tagore


Problems with SSL environment variable SSL_CLIENT_CERT as http header

2005-12-12 Thread Christian Folini
Hello,

This question has been sent to the users mailing list first
without provoking a reply. From the beginning I thought
it was rather a question for the developers.

In fact I am not sure whether I encountered a bug or a missing
feature.

I am configuring Apache 2.0.54 as a reverse proxy that handles
authentication based on client certificates.
Now my customer running the backend application wants to see
the client certificate as a whole.

After googling around I stumbled over a new method mentioned
by Brian Hughes on the users mailing list:
http://mail-archives.apache.org/mod_mbox/httpd-users/200506.mbox/[EMAIL PROTECTED]

This works fine for almost all the SSL variables mentioned
at http://www.modssl.org/docs/2.8/ssl_reference.html#ToC24

However, I only get the first line of the certificate in
SSL_CLIENT_CERT, while the client certificate has multiple lines.
Unfortunately, the user wants to have exactly this item and
not the single-line variables...

Maybe I am not really used to certificates. Maybe I expect
too much of mod_rewrite. But generally I thought HTTP headers
could be repeated, so it should basically be possible to get
the whole file into the headers.

So the question is: Is this a missing feature or a bug? Does it
ring a bell? Or can someone point out a better way to pass the
certificate on to the backend application?
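For context, a hedged sketch of one way this is commonly attempted, using mod_ssl and mod_headers directives as documented (the header name is illustrative). The multi-line PEM body is precisely what breaks here, since a header value must be a single line:

```apacheconf
# Make the full client certificate available as SSL_CLIENT_CERT
SSLOptions +ExportCertData

# Forward it to the backend; the PEM's embedded newlines are what
# gets lost when the value is placed in a request header.
RequestHeader set X-Client-Cert "%{SSL_CLIENT_CERT}s"
```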

Best regards,

Christian

-- 
Christian Folini - [EMAIL PROTECTED]