On 22/11/2018 23:05, Christopher Schultz wrote:
On 11/22/18 17:52, Mark Thomas wrote:
On 22/11/2018 22:32, Christopher Schultz wrote:
On 11/22/18 16:43, Mark Thomas wrote:

<snip/>

Syncs on encrypt() and decrypt() seem to have done the trick.
That was just a quick hack to confirm a suspicion - it isn't
the right long-term fix.

If we want this to be performant under load, I'd lean towards
using a Queue for encryption ciphers and another for decryption
ciphers, along the lines of how SessionIdGeneratorBase
handles SecureRandom.
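
For illustration, something along these lines - a minimal sketch only,
with placeholder class and method names rather than anything from an
actual patch:

import java.security.GeneralSecurityException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.crypto.Cipher;

// Sketch of a Queue-backed Cipher pool, modelled on the SecureRandom
// pooling in SessionIdGeneratorBase. The pool grows on demand (a new
// Cipher is created when the queue is empty) and never shrinks (used
// instances are always offered back).
public class CipherPool {

    private final Queue<Cipher> pool = new ConcurrentLinkedQueue<>();
    private final String transformation;

    public CipherPool(String transformation) {
        this.transformation = transformation;
    }

    public Cipher borrowCipher() throws GeneralSecurityException {
        Cipher cipher = pool.poll();
        if (cipher == null) {
            // Pool is empty: create a new instance rather than blocking.
            cipher = Cipher.getInstance(transformation);
        }
        return cipher;
    }

    public void returnCipher(Cipher cipher) {
        // Always hand the instance back so the pool grows to match
        // peak concurrency.
        pool.offer(cipher);
    }
}

The caller would still init() the borrowed Cipher with a fresh IV for
each message and return it in a finally block.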

We should probably handle SecureRandom the same way.

I'll start working on a patch.

Hmm... I was under the impression that the message-sending
operations were single-threaded (and similarly for the receiving
operations).

There are locks in other Interceptors which are consistent with
them being multi-threaded.

I've read that calling Cipher.init() is expensive, but since
it's being done for every outgoing (and incoming) message,
perhaps there is no reason to re-use Cipher objects. I'd be
interested to see a micro-benchmark showing whether all that
complexity (Cipher object pool) is really worth it. It probably
isn't, given that the code without that complexity would be super
clear and compact.
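
For illustration, a crude single-threaded micro-benchmark along those
lines - it only compares Cipher.getInstance() plus init() per message
against init() alone on a reused instance, assumes AES/CBC/PKCS5Padding,
and a proper comparison would use JMH with warm-up and multiple forks:

import java.security.SecureRandom;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class CipherCreationBenchmark {

    private static final int ITERATIONS = 100_000;
    private static final String TRANSFORMATION = "AES/CBC/PKCS5Padding";

    public static void main(String[] args) throws Exception {
        SecureRandom random = new SecureRandom();
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        byte[] iv = new byte[16];

        // Variant 1: new Cipher instance for every message.
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            random.nextBytes(iv);
            Cipher c = Cipher.getInstance(TRANSFORMATION);
            c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        }
        long createPerMessage = System.nanoTime() - start;

        // Variant 2: one reused Cipher instance, re-initialised per message.
        Cipher reused = Cipher.getInstance(TRANSFORMATION);
        start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            random.nextBytes(iv);
            reused.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        }
        long reuseWithInit = System.nanoTime() - start;

        System.out.printf("getInstance+init: %d ms, reuse+init: %d ms%n",
                createPerMessage / 1_000_000, reuseWithInit / 1_000_000);
    }
}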

Good point. I'll try that. It will depend on how expensive creating
the object is.

Even with the pools, the code is pretty clean, but I take the point
that it would be (even) cleaner without.

I have a patch ready to go but I'll do some work on the benchmark
first.

I have a patch below. It passes all my unit tests, but I don't have any
multi-threaded ones written at this point.

I'd appreciate a review before I commit. I'm going to change the
cipher-pool-size to be configurable via an XML attribute, etc.

The patch is hard to read. Can you upload it to people.a.o? (Or maybe we should create a BZ issue for this.)

My benchmark shows similar results to yours. Pooling is certainly worthwhile.

I have a patch too. Available here:
http://people.apache.org/~markt/patches/20181122-encryption-interceptor-concurrency-v1.patch

It does a little more than just add pooling.

It uses the same principle as elsewhere in Tomcat - the pool grows as needed and doesn't shrink, so there is no need for additional configuration options.

I expect we'll pull bits from both patches, but I'm going to call it a day now and come back to this tomorrow.

Mark
