Re: Slow http denial of service

2015-03-16 Thread Robert Klemme
On Sun, Mar 15, 2015 at 10:07 AM, Aurélien Terrestris aterrest...@gmail.com
 wrote:

 I agree with the NIO connector, which gives good results for this
 problem. Also, on Linux you can configure the iptables firewall to limit
 the number of connections from one IP (
 http://unix.stackexchange.com/questions/139285/limit-max-connections-per-ip-address-and-new-connections-per-second-with-iptable
 )


What I find difficult about this approach is that, because of NAT, the
number of individual machines (and hence the number of reasonable
connections) behind a single IP can vary vastly. What value will you pick
so as not to discriminate against large organizations?
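
For illustration, the connlimit approach from the linked answer boils
down to something like this (a sketch; the threshold of 20 concurrent
connections is an arbitrary placeholder, and picking it is exactly the
hard part):

# Reject new TCP connections to port 80 from any source IP that
# already holds more than 20 open connections:
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset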

Kind regards

robert


Re: Maximum number of JSP ?

2014-05-05 Thread Robert Klemme
Hi Sylvain,

thank you for sharing all these details!

On Mon, May 5, 2014 at 3:22 PM, Sylvain Goulmy sygou...@gmail.com wrote:
 Hi Christopher,

 Thank you for your contribution to this thread. I think we have made
 good progress on the subject; here are some elements I'd like to share:

 - The fact that the response time was increasing with the number of JSPs
 loaded was linked to our monitoring tool... This tool didn't have the same
 impact with WebSphere. Without monitoring, the response time remains stable
 no matter how many JSPs are already loaded in the permgen.

Can you disclose what monitoring tool you use and explain how it
impacted the measured value?

 - There is no permgen defined in the IBM JVM running WebSphere, and I was
 wondering how much space the JVM was allocating to host this huge number of
 JSPs. The memory footprint of the process on the system was quite big: Xmx
 1.5 GB, memory footprint of the JVM 3.5 GB. This leads me to think that
 WebSphere allocates a large space to host these JSPs, so I accordingly
 increased the permgen size of my JVM to 1 GB.

And, did it make a difference?

 - I finally noticed that when the permgen is undersized (i.e. it cannot host
 all the JSPs of my application and has to unload classes), the CPU impact is
 much bigger with the CMS garbage collection policy than with the parallel GC.

By parallel GC, do you mean the default stop-the-world collector?

 Our main concern so far was the CPU consumption; we finally solved this by
 tuning our monitoring tool correctly and by increasing the size of the
 permgen.

Did you also try the G1 GC? I'd be curious to learn how well it did with
your workload, and especially how well it manages to keep GC pause times
within a given limit via -XX:MaxGCPauseMillis.
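
Something along these lines would enable it (a sketch; the 200 ms pause
target is just an example value, not a recommendation):

# Enable G1 and ask it to aim for pauses below 200 ms:
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200"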

Kind regards

robert


-- 
[guy, jim].each {|him| remember.him do |as, often| as.you_can - without end}
http://blog.rubybestpractices.com/




Re: Maximum number of JSP ?

2014-04-11 Thread Robert Klemme
On Fri, Apr 11, 2014 at 3:41 PM, Mikolaj Rydzewski m...@ceti.pl wrote:
 On 11.04.2014 15:31, André Warnier wrote:

 As far as I understand such things, each of these JSPs gets
 compiled into a servlet, and the code of that servlet is held in
 memory for an extended period of time, even if unused at any
 particular moment. So this is 16000 servlets probably coexisting
 (un-)happily inside that JVM. No wonder..


 I'm pretty sure that's the problem.
 Servlets generated from JSPs contain a bunch of println statements and logic
 dependent on any tag libraries being used.
 They will all reside in memory for the lifetime of the application.
 For that huge number of pages I strongly recommend using a templating engine
 (there are plenty of them).

JSP _is_ a templating mechanism. In what way do you expect another
templating mechanism to help here? All the strings (among other stuff)
need to be stored somewhere in memory anyway.

I think André is on to something when he points to GC. With that
large number of classes I would try to increase the permanent
generation size with -XX:MaxPermSize. Before that, an attempt with
-Xnoclassgc might be worthwhile, because that will tell you whether the
permanent generation runs out of space and an increase is in order. And
then of course GC logging or monitoring via jvisualvm and similar tools
is also a good idea.
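
For example (a sketch assuming a HotSpot JVM; the 512m value is a
placeholder to be tuned):

# First, check whether class unloading is the bottleneck:
CATALINA_OPTS="$CATALINA_OPTS -Xnoclassgc"
# If permgen then fills up, enlarge it and watch GC behaviour in the logs:
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=512m -verbose:gc -XX:+PrintGCDetails"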

Kind regards

robert


-- 
[guy, jim].each {|him| remember.him do |as, often| as.you_can - without end}
http://blog.rubybestpractices.com/




Re: Maximum number of JSP ?

2014-04-11 Thread Robert Klemme
On Fri, Apr 11, 2014 at 5:35 PM, Mikolaj Rydzewski m...@ceti.pl wrote:
 On 11.04.2014 17:22, Robert Klemme wrote:

 JSP _is_ a templating mechanism. In what way do you expect another
 templating mechanism to help here? All the strings (among other stuff)
 need to be stored somewhere in memory anyway.

 Well, IMHO JSP is not only a templating mechanism. It's also a compiler and
 deployer :-(
 All the strings are already stored on disk; I see no reason to store them in
 memory as well.

Then you underestimate the cost of IO and parsing.

 A similar case applies to various CMS systems out there - content is stored
 in a database, so there is no reason to keep it permanently in memory.

It makes a whole lot of sense to keep data that is repeatedly needed
closer to where the content is created for clients. (Side note: there
was even a webserver that kept preformatted TCP packets in memory to
be able to serve client requests faster. IIRC this was done by folks
at Sun.)

 My point was to consider using a templating engine like e.g. Velocity. There's
 one servlet that is capable of serving any page. Compare that to 16000
 servlets.

You still face the same issue: either you load every template every
time it is needed, which incurs the cost of IO as well as parsing of
the template, or you keep all templates in memory (if most are used
anyway, it does not really matter whether you load them on demand or at
start time), where you pay the memory cost. You get a little more
control with Velocity because you could devise your own caching scheme.
OTOH you then also need to ensure it's fast for concurrent access etc.
The JVM does all that already with its GC.
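
For what it's worth, Velocity's built-in template cache makes that
trade-off configurable in velocity.properties (a sketch from memory;
check the property names against the Velocity version in use):

# Keep parsed templates in memory instead of re-reading/re-parsing them:
file.resource.loader.cache = true
# Re-check templates on disk at most every 60 seconds (placeholder value):
file.resource.loader.modificationCheckInterval = 60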

 I mean: anything that reads/processes content on the fly and does not
 keep it forever in memory is good.

I beg to differ: careful analysis must show where the problem lies;
then one can come up with a proper solution. Generally, leaving things
out of memory is certainly not good advice.

Kind regards

robert


-- 
[guy, jim].each {|him| remember.him do |as, often| as.you_can - without end}
http://blog.rubybestpractices.com/




Re: [OT] HeartBleed bug

2014-04-09 Thread Robert Klemme
On Wed, Apr 9, 2014 at 2:53 PM, Christopher Schultz
ch...@christopherschultz.net wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Ognjen,

 On 4/9/14, 3:30 AM, Ognjen Blagojevic wrote:
 On 9.4.2014 9:49, André Warnier wrote:
 I wonder if I may ask this list-OT question to the SSH experts on
 the list:

 I run some 25 webservers (Apache httpd-only, Tomcat-only, or
 Apache httpd + Tomcat). I do not use HTTPS on any of them. But I
 use SSH (OpenSSH) to connect to them over the Internet for
 support purposes, with authorized_keys on the servers. Are my
 servers affected by this bug? Or is this (mainly) an
 HTTPS-related affair?

 I mean: I will update OpenSSH on all my servers anyway. But do
 I have to consider that, with a non-negligible probability, the
 keys stored on my servers are already compromised?

 This is an OpenSSL 1.0.1--1.0.1f vulnerability, so any protocol using
 the OpenSSL implementation of the TLS/SSL protocol (if the OpenSSL
 library version is in the mentioned range) is vulnerable.

 Not necessarily. SSH, for instance, does not utilize the heartbeat
 feature of SSL and so is theoretically safe. I suppose you could have
 used the same server key for both SSH and HTTPS, but that would have
 been pretty silly.

Isn't that exactly what Ognjen said? This quote of his was not
included in your email:

 The SSH protocol does not use TLS/SSL, so it is not vulnerable to the Heartbleed bug.

 My recommendation would be to treat everything OpenSSL touches as
 tainted and re-key anyway.

That may be a costly recommendation, because one might buy more new
certificates and revoke more old ones than necessary.

Cheers

robert


-- 
[guy, jim].each {|him| remember.him do |as, often| as.you_can - without end}
http://blog.rubybestpractices.com/




Re: Nessus scan claims vulnerability in Tomcat 6

2013-02-26 Thread Robert Klemme
Hi Mark,

thank you for the feedback!

On Tue, Feb 26, 2013 at 2:27 AM, Mark Thomas ma...@apache.org wrote:
 On 25/02/2013 08:42, Robert Klemme wrote:

 Hi there,

 I have been confronted with a Nessus scan result which claims
 vulnerability to the TLS CRIME exploit. Plugin 62565 allegedly has found
 this and the report states:

 The remote service has one of two configurations that are known to be
 required for the CRIME attack:
 - SSL / TLS compression is enabled.

 It is this one.

That's what I figured.

 - TLS advertises the SPDY protocol earlier than version 4.

 There is no SPDY support in any released Tomcat version.

OK, that confirms what I was able to dig up.

 We have in server.xml:

 <Connector SSLCertificateFile="/path" SSLCipherSuite="***"
   protocol="HTTP/1.1" connectionTimeout="2"
   SSLCertificateKeyFile="/path" secure="true" scheme="https"
   maxThreads="500" port="4712" maxSavePostSize="0" server="***"
   SSLProtocol="TLSv1" maxPostSize="2048" URIEncoding="UTF-8"
   SSLEnabled="true" />


 That is the APR/native HTTPS connector.

So one solution would be to remove the APR lib from the system. Another
would be to change the above to

<Connector SSLCertificateFile="/path" SSLCipherSuite="***"
  protocol="org.apache.coyote.http11.Http11Protocol" connectionTimeout="2"
  SSLCertificateKeyFile="/path" secure="true" scheme="https"
  maxThreads="500" port="4712" maxSavePostSize="0" server="***"
  SSLProtocol="TLSv1" maxPostSize="2048" URIEncoding="UTF-8"
  SSLEnabled="true" />

and add all the necessary configuration to make that work. And I guess a
third option is to use

export OPENSSL_NO_DEFAULT_ZLIB=1

before starting the JVM.
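
In a standard Tomcat installation that export could go into
bin/setenv.sh, which catalina.sh sources on startup if it exists (a
sketch):

# $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh if present
export OPENSSL_NO_DEFAULT_ZLIB=1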

 Now, what to make of this? To me it seems only compression could be
 the culprit, but is there any other way to enable compression for HTTPS
 than the compression attribute? Or does the TLS negotiation ignore
 the compression setting? I could not find indication of any option to
 control compression in the Javadocs

 http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/package-summary.html


 You won't. My recollection is that Java does not support compression.

OK, then it's no surprise that they do not mention it in the Javadocs. :-)

 APR/native does. An option was recently added. See:
 https://issues.apache.org/bugzilla/show_bug.cgi?id=54324

I found that but wasn't aware that this is actually used in Tomcat.

 There is no 6.0.x release with the necessary options yet.

Do you know whether there will be?

Kind regards

robert

-- 
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/




Re: Nessus scan claims vulnerability in Tomcat 6

2013-02-26 Thread Robert Klemme
On Tue, Feb 26, 2013 at 4:04 PM, Mark Thomas ma...@apache.org wrote:
 On 26/02/2013 03:09, Robert Klemme wrote:

 So one solution would be to remove APR lib from the system.

 Yes, although you will see performance for SSL drop.

Yes, of course.  That's not important in our case.

 export OPENSSL_NO_DEFAULT_ZLIB=1

 before starting the JVM.

 I don't know if OpenSSL will honour that.

I'll let you know once I find out.

 There is no 6.0.x release with the necessary options yet.

 Do you know whether there will be?

 There will be but I'm not aware of any planned timing at this point. The
 changelog isn't that long but it has been a while since the last release so
 I guess we should start thinking about it.

Good!  Thanks for the update!

Kind regards

robert

-- 
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/




Nessus scan claims vulnerability in Tomcat 6

2013-02-25 Thread Robert Klemme
Hi there,

I have been confronted with a Nessus scan result which claims
vulnerability to the TLS CRIME exploit. Plugin 62565 allegedly has found
this and the report states:

The remote service has one of two configurations that are known to be
required for the CRIME attack:
- SSL / TLS compression is enabled.
- TLS advertises the SPDY protocol earlier than version 4.

...

CVE-2012-4929 CVE-2012-4930


We have in server.xml:

<Connector SSLCertificateFile="/path" SSLCipherSuite="***"
  protocol="HTTP/1.1" connectionTimeout="2"
  SSLCertificateKeyFile="/path" secure="true" scheme="https"
  maxThreads="500" port="4712" maxSavePostSize="0" server="***"
  SSLProtocol="TLSv1" maxPostSize="2048" URIEncoding="UTF-8"
  SSLEnabled="true" />

(paths and some other info replaced by dummies)

The XML attribute compression is not present, which according to the
docs means off. I cannot find any indication that SPDY even exists in
Tomcat 6.

I also could not find anything in the list of vulnerabilities at
http://tomcat.apache.org/security-6.html, nor could I find anything by
searching for combinations of Tomcat with the CVE numbers given above.

Now, what to make of this? To me it seems only compression could be
the culprit, but is there any other way to enable compression for HTTPS
than the compression attribute? Or does the TLS negotiation ignore
the compression setting? I could not find indication of any option to
control compression in the Javadocs at
http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/package-summary.html
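
One way to check what the server actually negotiates is an openssl
client probe (a sketch; "myhost" is a placeholder for our server):

# "Compression: zlib compression" in the output means TLS compression
# is on; "Compression: NONE" means it is off.
openssl s_client -connect myhost:4712 -tls1 < /dev/null | grep -i compression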

Kind regards

robert

-- 
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
