Re: [commons-net] issue: FTP over HTTP with Proxy-Authorization fails

2024-02-13 Thread sebb
On Tue, 13 Feb 2024 at 00:40, sebb  wrote:
>
> On Mon, 12 Feb 2024 at 16:40, Elliotte Rusty Harold  
> wrote:
> >
> > Be careful with this one. I don't have full context, but this looks
> > likely to be a real bug on some code paths and perhaps not a bug on
> > others. We'll need to make sure that the patch for the broken code
> > path doesn't break a currently working path. Specifically I'm worried
> > about where \r\n might and might not show up after the if block shown
> > here.
>
> The existing code appears to have the correct number of CRLFs only if
> the conditional is true.
>
> So I think the line
>
> output.write(CRLF);
>
> should be moved into the conditional, rather than being added to the
> conditional, as that would result in an extra CRLF.
>
> i.e.
>
> https://github.com/apache/commons-net/pull/217
>

Ignore that; the 'extra' CRLF seems to be the separator for the end of
the headers, so the original fix in this thread does look correct.
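
For context, a minimal sketch of the intended CRLF layout (not the actual
FTPHTTPClient source; the surrounding CONNECT/Host lines are simplified
assumptions about a standard HTTP CONNECT handshake): each header line ends
with CRLF, and one extra CRLF terminates the header block.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TunnelHandshakeSketch {

    private static final byte[] CRLF = {'\r', '\n'};

    static byte[] buildConnect(final String host, final int port,
            final String proxyUsername, final String proxyPassword) throws IOException {
        final Charset charset = StandardCharsets.ISO_8859_1;
        final ByteArrayOutputStream output = new ByteArrayOutputStream();
        output.write(("CONNECT " + host + ":" + port + " HTTP/1.1").getBytes(charset));
        output.write(CRLF);
        output.write(("Host: " + host + ":" + port).getBytes(charset));
        output.write(CRLF);
        if (proxyUsername != null && proxyPassword != null) {
            final String auth = proxyUsername + ":" + proxyPassword;
            final String header = "Proxy-Authorization: Basic "
                    + Base64.getEncoder().encodeToString(auth.getBytes(charset));
            output.write(header.getBytes(charset));
            output.write(CRLF); // terminates the Proxy-Authorization header line (the fix below)
        }
        output.write(CRLF); // blank line marking the end of the headers
        return output.toByteArray();
    }
}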

> > On Mon, Feb 12, 2024 at 10:15 AM Емельянов Юрий Владимирович
> >  wrote:
> > >
> > > see FTPHTTPClient.tunnelHandshake
> > >
> > > current code is:
> > >
> > >     if (proxyUsername != null && proxyPassword != null) {
> > >         final String auth = proxyUsername + ":" + proxyPassword;
> > >         final String header = "Proxy-Authorization: Basic " +
> > >                 Base64.getEncoder().encodeToString(auth.getBytes(charset));
> > >         output.write(header.getBytes(charset));
> > >     }
> > >
> > > correct code is:
> > >
> > >     if (proxyUsername != null && proxyPassword != null) {
> > >         final String auth = proxyUsername + ":" + proxyPassword;
> > >         final String header = "Proxy-Authorization: Basic " +
> > >                 Base64.getEncoder().encodeToString(auth.getBytes(charset));
> > >         output.write(header.getBytes(charset));
> > >         output.write(CRLF);  // <-- added line
> > >     }
> > >
> >
> >
> > --
> > Elliotte Rusty Harold
> > elh...@ibiblio.org
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
> > For additional commands, e-mail: dev-h...@commons.apache.org
> >

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [LOGGING] 2.0

2024-02-13 Thread Piotr P. Karwasz
Hi Gary,

On Sat, 10 Feb 2024 at 17:26, Gary Gregory  wrote:
> The package would change from org.apache.commons.logging to
> org.apache.commons.logging2.
> The Maven coordinates would change from
> commons-logging:commons-logging to org.apache.commons:commons-logging2

The only case in which such a change would be useful is if all the
logging API maintainers can sit down around a table and decide to
adopt `org.apache.commons.logging2.Logger` as the common denominator
of their APIs.

Currently logging faces new challenges that could be solved with a new
(minimal) API, such as:

 * tracing should be an integral part of the API,
 * thread-bound contexts are problematic, so there should be an easy
way to retrieve context data from the processing flow (Spring Reactor
Flux, Akka actors, etc.) rather than from the current thread (see the
sketch below).
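
A purely illustrative sketch of what such a minimal, context-aware API could
look like - every name here is hypothetical, not an actual
org.apache.commons.logging2 proposal:

import java.util.Map;

public interface Logger {

    enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    /** Context passed explicitly with each call instead of being bound to the current thread. */
    interface Context {
        /** e.g. trace/span ids, request id, tenant - supplied by the processing pipeline. */
        Map<String, String> entries();
    }

    boolean isEnabled(Level level);

    /** The context travels with the event, so reactive pipelines can supply their own. */
    void log(Level level, Context context, String message, Throwable thrown);
}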

If such a thing is even possible, it would be nice if we could get
`jakarta.logging` as the package prefix.

Piotr

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: (commons-net) branch master updated: Add Javadoc @return comments

2024-02-13 Thread Gary Gregory
Note that I try to always make the @since tag the last one. YMMV.
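
For illustration (just a style preference, mirroring one of the methods in
the commit below with its tags reordered), that looks like:

/**
 * Gets whether a new SSL session may be established by this socket.
 *
 * @return true if a new SSL session may be established.
 * @since 3.11.0
 */
protected boolean isCreation() {
    return isCreation;
}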

Gary

On Tue, Feb 13, 2024 at 6:55 AM  wrote:
>
> This is an automated email from the ASF dual-hosted git repository.
>
> sebb pushed a commit to branch master
> in repository https://gitbox.apache.org/repos/asf/commons-net.git
>
>
> The following commit(s) were added to refs/heads/master by this push:
>  new 2d557a20 Add Javadoc @return comments
> 2d557a20 is described below
>
> commit 2d557a2089ae40b00fa29f46b7bbe9d775f55ea8
> Author: Sebb 
> AuthorDate: Tue Feb 13 11:55:24 2024 +
>
> Add Javadoc @return comments
> ---
>  src/main/java/org/apache/commons/net/ftp/FTPSClient.java | 5 +
>  1 file changed, 5 insertions(+)
>
> diff --git a/src/main/java/org/apache/commons/net/ftp/FTPSClient.java 
> b/src/main/java/org/apache/commons/net/ftp/FTPSClient.java
> index 8c19c320..70b08397 100644
> --- a/src/main/java/org/apache/commons/net/ftp/FTPSClient.java
> +++ b/src/main/java/org/apache/commons/net/ftp/FTPSClient.java
> @@ -716,6 +716,7 @@ public class FTPSClient extends FTPClient {
>   * Gets the use client mode flag. The {@link #getUseClientMode()} method 
> gets the value from the socket while
>   * this method gets its value from this instance's config.
>   * @since 3.11.0
> + * @return True If the socket should start its first handshake in 
> "client" mode.
>   */
>  protected boolean isClientMode() {
>  return isClientMode;
> @@ -724,6 +725,7 @@ public class FTPSClient extends FTPClient {
>  /**
>   * Gets whether a new SSL session may be established by this socket. 
> Default true
>   * @since 3.11.0
> + * @return True if session may be established
>   */
>  protected boolean isCreation() {
>  return isCreation;
> @@ -744,6 +746,7 @@ public class FTPSClient extends FTPClient {
>  /**
>   * Gets the security mode. (True - Implicit Mode / False - Explicit Mode)
>   * @since 3.11.0
> + * @return True if enabled, false if not.
>   */
>  protected boolean isImplicit() {
>  return isImplicit;
> @@ -753,6 +756,7 @@ public class FTPSClient extends FTPClient {
>   * Gets the need client auth flag. The {@link #getNeedClientAuth()} 
> method gets the value from the socket while
>   * this method gets its value from this instance's config.
>   * @since 3.11.0
> + * @return True if enabled, false if not.
>   */
>  protected boolean isNeedClientAuth() {
>  return isNeedClientAuth;
> @@ -762,6 +766,7 @@ public class FTPSClient extends FTPClient {
>   * Gets the want client auth flag. The {@link #getWantClientAuth()} 
> method gets the value from the socket while
>   * this method gets its value from this instance's config.
>   * @since 3.11.0
> + * @return True if enabled, false if not.
>   */
>  protected boolean isWantClientAuth() {
>  return isWantClientAuth;
>

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [pool] Recovering from transient factory outages

2024-02-13 Thread Romain Manni-Bucau
Hi Phil,

What I have used in the past for this kind of thing was to rely on the
timeout of the pool, plus a healthcheck - external to the pool - with some
trigger (the simplest was "if 5 healthchecks fail without any success in
between", for example). That trigger spawns a task (think thread, even if it
uses an executor, but with a guaranteed slot for this task) which retries at
a faster pace (instead of every 30s it is 5 times in a row - the number was
tunable but 5 was my default).
If the database is still detected as down - as opposed to merely overloaded
or similar - the task considers it down and spawns another task which retries
every 30 seconds. When the database comes back - I added some business checks;
the idea is not just to check the connection but that the tables are
accessible, because after such a downtime the db often does not come back all
at once - it simply destroys/recreates the pool.
The destroy/recreate was handled using a DataSource proxy in front of the
pool, swapping the delegate.
Indeed it is not magic inside the pool, but it can only work better than a
pool-internal solution because you can integrate with your already existing
checks and add more advanced ones - if you have JPA, just run a fast query on
any table to validate that the db is back, for example.
In the end the code is pretty simple and has another big advantage: you can
circuit-break the database completely while you consider it down, letting
through only 10% - or whatever ratio you want - of the requests (a kind of
canary testing which avoids too much pressure on the pool).
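
A rough sketch of that external watchdog, under my reading of the above -
every class name, threshold and query here is made up for illustration;
nothing in it exists in [pool] or [dbcp]:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;
import javax.sql.DataSource;

public class PoolWatchdog {

    private final Supplier<DataSource> poolFactory;     // rebuilds the pool from scratch
    private final AtomicReference<DataSource> delegate; // the DataSource proxy reads this reference
    private final AtomicInteger consecutiveFailures = new AtomicInteger();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    public PoolWatchdog(final Supplier<DataSource> poolFactory, final AtomicReference<DataSource> delegate) {
        this.poolFactory = poolFactory;
        this.delegate = delegate;
        // Normal cadence: one healthcheck every 30 seconds.
        scheduler.scheduleWithFixedDelay(this::healthCheck, 30, 30, TimeUnit.SECONDS);
    }

    private void healthCheck() {
        if (ping()) {
            consecutiveFailures.set(0);
            return;
        }
        // Trigger: 5 failed healthchecks without a success in between.
        if (consecutiveFailures.incrementAndGet() == 5) {
            scheduler.execute(this::recover);
        }
    }

    private void recover() {
        // Faster pace first: 5 quick retries in a row.
        for (int attempt = 0; attempt < 5 && !ping(); attempt++) {
            sleep(1_000);
        }
        // Still down: consider the database down and probe every 30 seconds.
        while (!ping()) {
            sleep(30_000);
        }
        // Database is back: destroy/recreate the pool and swap the proxy's delegate
        // (closing the old pool is omitted here).
        delegate.set(poolFactory.get());
        consecutiveFailures.set(0);
    }

    private boolean ping() {
        // Business-level check, not just a connection: verify a table is reachable.
        try (Connection c = delegate.get().getConnection()) {
            c.createStatement().executeQuery("SELECT 1 FROM health_probe").close();
            return true;
        } catch (final SQLException e) {
            return false;
        }
    }

    private static void sleep(final long millis) {
        try {
            Thread.sleep(millis);
        } catch (final InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}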

I guess it was not exactly the answer you expected, but I think it can be a
good solution and ultimately could sit in a new package in dbcp or similar?

Best,
Romain Manni-Bucau
@rmannibucau | Blog | Old Blog | Github | LinkedIn | Book



On Tue, 13 Feb 2024 at 21:11, Phil Steitz  wrote:

> POOL-407 tracks a basic liveness problem that we have never been able to
> solve:
>
> A factory "goes down" resulting in either failed object creation or failed
> validation during the outage.  The pool has capacity to create, but the
> factory fails to serve threads as they arrive, so they end up parked
> waiting on the idle object pool.  After a possibly very brief interruption,
> the factory heals itself (maybe a database comes back up) and the waiting
> threads can be served, but until other threads arrive, get served and
> return instances to the pool, the parked threads remain blocked.
> Configuring minIdle and pool maintenance (timeBetweenEvictionRuns > 0) can
> improve the situation, but running the evictor at high enough frequency to
> handle every transient failure is not a great solution.
>
> I am stuck on how to improve this.  I have experimented with the idea of a
> ResilientFactory, placing the responsibility on the factory to know when it
> is down and when it comes back up and when it does, to keep calling it's
> pool's create as long as it has take waiters and capacity; but I am not
> sure that is the best approach.  The advantage of this is that
> resource-specific failure and recovery-detection can be implemented.
>
> Another option that I have played with is to have the pool keep track of
> factory failures and when it observes enough failures over a long enough
> time, it starts a thread to do some kind of exponential backoff to keep
> retrying the factory.  Once the factory comes back, the recovery thread
> creates as many instances as it can without exceeding capacity and adds
> them to the pool.
>
> I don't really like either of these.  Anyone have any better ideas?
>
> Phil
>


Re: [pool] Recovering from transient factory outages

2024-02-13 Thread Phil Steitz
Thanks, Romain, this is awesome.  I would really like to find a way to get
this kind of thing implemented in [pool] or via enhanced factories.  See
more on that below.

On Tue, Feb 13, 2024 at 1:27 PM Romain Manni-Bucau 
wrote:

> Hi Phil,
>
> What I used by the past for this kind of thing was to rely on the timeout
> of the pool plus in the healthcheck - external to the pool - have some
> trigger (the simplest was "if 5 healthchecks fail without any success in
> between" for ex), such trigger will spawn a task (think thread even if it
> uses an executor but guarantee to have a place for this task) which will
> retry but at a faster pace (instead of every 30s it is 5 times in a run for
> - number was tunable but 5 was my default).
> If still detected as down - vs not overloaded or alike - it will consider
> the database down and spawn a task which will retry every 30 seconds, if
> the database comes back - I added some business check but idea is not just
> check the connection but the tables are accessible cause often after such a
> downtime the db does not come at once - just destroy/recreate the pool.
> The destroy/recreate was handled using a DataSource proxy in front of the
> pool and change the delegate.
>

It seems to me that all of this might be possible using what I was calling
a ResilientFactory.  The factory could implement the health-checking itself,
using pluggable strategies for how to check, how often, what constitutes an
outage, etc.  And the factory could (if so configured and in the right state)
bounce the pool.  I like the model of escalating concern.
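
Not an existing [pool] class, but a rough sketch of that ResilientFactory
idea - delegating to a real org.apache.commons.pool2.PooledObjectFactory and
adding a pluggable notion of "the backing resource is down" (the recovery
component that would react to the flag is not shown):

import java.util.function.BooleanSupplier;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.PooledObjectFactory;

public class ResilientFactory<T> implements PooledObjectFactory<T> {

    private final PooledObjectFactory<T> delegate;
    private final BooleanSupplier healthCheck; // pluggable outage-detection strategy
    private volatile boolean down;

    public ResilientFactory(final PooledObjectFactory<T> delegate, final BooleanSupplier healthCheck) {
        this.delegate = delegate;
        this.healthCheck = healthCheck;
    }

    /** A recovery component could poll this and refill the pool once it flips back to false. */
    public boolean isDown() {
        return down;
    }

    @Override
    public PooledObject<T> makeObject() throws Exception {
        try {
            final PooledObject<T> p = delegate.makeObject();
            down = false;
            return p;
        } catch (final Exception e) {
            // Creation failed: ask the pluggable strategy whether the resource is really down.
            down = !healthCheck.getAsBoolean();
            throw e;
        }
    }

    @Override
    public boolean validateObject(final PooledObject<T> p) {
        // Fast-fail validation while the resource is known to be down.
        return !down && delegate.validateObject(p);
    }

    @Override
    public void activateObject(final PooledObject<T> p) throws Exception {
        delegate.activateObject(p);
    }

    @Override
    public void passivateObject(final PooledObject<T> p) throws Exception {
        delegate.passivateObject(p);
    }

    @Override
    public void destroyObject(final PooledObject<T> p) throws Exception {
        delegate.destroyObject(p);
    }
}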


> Indeed it is not magic inside the pool but can only better work than the
> pool solution cause you can integrate to your already existing checks and
> add more advanced checks - if you have jpa just do a fast query on any
> table to validate db is back for ex.
> At the end code is pretty simple and has another big advantage: you can
> circuit break the database completely while you consider the db is down
> just letting passing 10% of whatever ratio you want - of the requests (kind
> of canary testing which avoids too much pressure on the pool).
>
> I guess it was not exactly the answer you expected but think it can be a
> good solution and ultimately can site in a new package in dbcp or alike?
>

I don't see anything here that is really specific to database connections
(other than the proxy setup to gracefully handle bounces), so I want to
keep thinking about how to solve the general problem by somehow enhancing
factories and/or pools.

Phil

>
> Best,
> Romain Manni-Bucau
> @rmannibucau  |  Blog
>  | Old Blog
>  | Github <
> https://github.com/rmannibucau> |
> LinkedIn  | Book
> <
> https://www.packtpub.com/application-development/java-ee-8-high-performance
> >
>
>
> Le mar. 13 févr. 2024 à 21:11, Phil Steitz  a
> écrit :
>
> > POOL-407 tracks a basic liveness problem that we have never been able to
> > solve:
> >
> > A factory "goes down" resulting in either failed object creation or
> failed
> > validation during the outage.  The pool has capacity to create, but the
> > factory fails to serve threads as they arrive, so they end up parked
> > waiting on the idle object pool.  After a possibly very brief
> interruption,
> > the factory heals itself (maybe a database comes back up) and the waiting
> > threads can be served, but until other threads arrive, get served and
> > return instances to the pool, the parked threads remain blocked.
> > Configuring minIdle and pool maintenance (timeBetweenEvictionRuns > 0)
> can
> > improve the situation, but running the evictor at high enough frequency
> to
> > handle every transient failure is not a great solution.
> >
> > I am stuck on how to improve this.  I have experimented with the idea of
> a
> > ResilientFactory, placing the responsibility on the factory to know when
> it
> > is down and when it comes back up and when it does, to keep calling it's
> > pool's create as long as it has take waiters and capacity; but I am not
> > sure that is the best approach.  The advantage of this is that
> > resource-specific failure and recovery-detection can be implemented.
> >
> > Another option that I have played with is to have the pool keep track of
> > factory failures and when it observes enough failures over a long enough
> > time, it starts a thread to do some kind of exponential backoff to keep
> > retrying the factory.  Once the factory comes back, the recovery thread
> > creates as many instances as it can without exceeding capacity and adds
> > them to the pool.
> >
> > I don't really like either of these.  Anyone have any better ideas?
> >
> > Phil
> >
>


[pool] Recovering from transient factory outages

2024-02-13 Thread Phil Steitz
POOL-407 tracks a basic liveness problem that we have never been able to
solve:

A factory "goes down" resulting in either failed object creation or failed
validation during the outage.  The pool has capacity to create, but the
factory fails to serve threads as they arrive, so they end up parked
waiting on the idle object pool.  After a possibly very brief interruption,
the factory heals itself (maybe a database comes back up) and the waiting
threads can be served, but until other threads arrive, get served and
return instances to the pool, the parked threads remain blocked.
Configuring minIdle and pool maintenance (timeBetweenEvictionRuns > 0) can
improve the situation, but running the evictor at high enough frequency to
handle every transient failure is not a great solution.

I am stuck on how to improve this.  I have experimented with the idea of a
ResilientFactory, placing the responsibility on the factory to know when it
is down and when it comes back up and, when it does, to keep calling its
pool's create as long as the pool has take-waiters and capacity; but I am not
sure that is the best approach.  The advantage of this is that
resource-specific failure and recovery detection can be implemented.

Another option that I have played with is to have the pool keep track of
factory failures and when it observes enough failures over a long enough
time, it starts a thread to do some kind of exponential backoff to keep
retrying the factory.  Once the factory comes back, the recovery thread
creates as many instances as it can without exceeding capacity and adds
them to the pool.
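
A rough sketch of that second option - a recovery task backing off
exponentially against the factory and then refilling the pool up to capacity.
It uses the real GenericObjectPool API, but the failure-tracking trigger and
the wiring are hypothetical:

import org.apache.commons.pool2.impl.GenericObjectPool;

public class FactoryRecoveryTask<T> implements Runnable {

    private final GenericObjectPool<T> pool;

    public FactoryRecoveryTask(final GenericObjectPool<T> pool) {
        this.pool = pool;
    }

    @Override
    public void run() {
        long delayMillis = 1_000;
        // Exponential backoff until the factory can create an object again.
        while (true) {
            try {
                pool.addObject();
                break;
            } catch (final Exception e) {
                try {
                    Thread.sleep(delayMillis);
                } catch (final InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
                delayMillis = Math.min(delayMillis * 2, 60_000);
            }
        }
        // Factory is back: create as many instances as capacity allows so that
        // parked take-waiters can be served from the idle pool.
        try {
            while (pool.getNumIdle() + pool.getNumActive() < pool.getMaxTotal()) {
                pool.addObject();
            }
        } catch (final Exception e) {
            // The factory went down again; a real implementation would re-arm the backoff.
        }
    }
}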

I don't really like either of these.  Anyone have any better ideas?

Phil


Re: [dbcp] Force close connections on fatal SQL Exceptions

2024-02-13 Thread Phil Steitz
Thanks, Gary.  I agree with everything below.  I think it's best to just
leave things as they are.

Phil
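
(For reference, a minimal sketch of the one check Gary describes below as
reasonably reliable - treating a connection as broken only when a socket/IO
exception is the root cause.  This is an illustration, not an existing [dbcp]
API.)

import java.io.IOException;
import java.sql.SQLException;

public final class NetworkBreakage {

    private NetworkBreakage() {
    }

    /** Walks the cause chain looking for an I/O failure (SocketException is an IOException). */
    public static boolean isNetworkBreakage(final SQLException e) {
        Throwable cause = e;
        while (cause != null) {
            if (cause instanceof IOException) {
                return true;
            }
            final Throwable next = cause.getCause();
            cause = next == cause ? null : next; // guard against self-referential causes
        }
        return false;
    }
}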

On Mon, Feb 12, 2024 at 7:25 PM Gary Gregory  wrote:

> I've used many JDBC drivers from different vendors, FOSS and
> proprietary, and if I've learned one thing, it is that each is its own
> world within the universe of the DBMS it operates in.
>
> It is impossible to write a generic tool; they all end up providing
> plugins for DB-specific features, sure, but those plugins invariably
> also account for behavioral differences between drivers.
>
> You can't even rely on all the JDBC APIs one would imagine to be "core"
> to the functionality being implemented.  I've seen such APIs
> just stubbed to throw exceptions. This is especially common for
> metadata-related APIs.
>
> Relying on SQL states is pretty hopeless IMO. I've had to allow custom
> configs in apps to try and figure out the driver and connection state
> from exception class names and exception message contents because you
> can't rely on SQL states, and if you do then you realize that
> different drivers use different states in similar contexts, so then
> you allow for customization of _that_ too, bleh.
>
> All of this is to say that it feels dangerous to remove any hook we
> provide.
>
> I just can't see a reliable way to detect a broken Connection unless
> it's a detectable network breakage (if a socket or IO exception is the
> root cause).
>
> The bottom line is that connections are really expensive to create,
> and should only be thrown away if you know for sure one has gone bad. I'd
> never want to throw away a connection that is fine from the server's POV,
> and in fact still reusable, just because the driver throws exception X
> for whatever reason.
>
> HTH,
> Gary
>
> On Mon, Feb 12, 2024 at 2:42 PM Phil Steitz  wrote:
> >
> > In DBCP-595, a change is suggested to force close connections when a
> fatal
> > SQL exception has occurred.  As of Version 2.2 of DBCP, fatal exceptions
> > are tracked and the fastFailValidation property can be set to fast fail
> > validations when a fatal exception has occurred on a connection.  This
> > change would obsolete that property, as it would make the pool close the
> > connection immediately.
> >
> > I see two pros for this change and one con.
> >
> > Pros:
> > 0) Bad connection is destroyed immediately
> > 1) Works when validation is turned off
> >
> > Con:
> > Incorrect SQL states returned by drivers or transient failures may cause
> > over-zealous purging of connections.
> >
> > I vaguely recall the "Con" as the reason why we implemented
> > fastFailValidation instead of direct close on these failures, but I can't
> > find the discussion in the archives.
> >
> > Phil
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
> For additional commands, e-mail: dev-h...@commons.apache.org
>
>


Re: [dbcp] Force close connections on fatal SQL Exceptions

2024-02-13 Thread Bernd Eckenfels
Phil Steitz wrote on 13. Feb 2024 20:46 (GMT +01:00):
> Thanks, Gary.  I agree with everything below.  I think it's best to just
> leave things as they are.

If it's pluggable, the project might not have to care.
But then how do we fix the reported problem? Do we have an idea what's causing it?

As an extension to that: when using a connection pool you expect it to abstract
and handle all those idiosyncrasies of different drivers, platforms and JDBC
interpretations. Just giving up is not the best option.

In my experience with a proprietary pool (but a very robust one, since it
handles only a small number of drivers), throwing away connections on concrete
suspicion turned out to be helpful - either by configurable SQL state,
substrings of error messages, or actual SQLException subtypes.
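
A small sketch of what such "concrete suspicion" checks could look like - the
sample states, substrings and subtypes are examples only, not recommended
defaults:

import java.sql.SQLException;
import java.sql.SQLNonTransientConnectionException;
import java.sql.SQLRecoverableException;
import java.util.Set;

public final class SuspicionCheck {

    // SQLSTATE class 08 = connection exception.
    private static final Set<String> FATAL_SQL_STATE_PREFIXES = Set.of("08");
    private static final Set<String> FATAL_MESSAGE_SUBSTRINGS = Set.of("broken pipe", "connection reset");

    private SuspicionCheck() {
    }

    public static boolean shouldDiscard(final SQLException e) {
        // Actual SQLException subtypes that indicate the connection is unusable.
        if (e instanceof SQLNonTransientConnectionException || e instanceof SQLRecoverableException) {
            return true;
        }
        // Configurable SQL states.
        final String state = e.getSQLState();
        if (state != null && FATAL_SQL_STATE_PREFIXES.stream().anyMatch(state::startsWith)) {
            return true;
        }
        // Substrings of error messages, as a last resort.
        final String message = String.valueOf(e.getMessage()).toLowerCase();
        return FATAL_MESSAGE_SUBSTRINGS.stream().anyMatch(message::contains);
    }
}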

But maybe additional validation is also fine, as long as it uses strict
timeouts and some control against paralysis and starvation. If I understand
correctly, that is what happens in the reported problem case, so perhaps a
timeout is missing.

Having said that, as a tip to the OP: use TCP keepalive and lower the
timeouts. It saved my ass more than once, especially with VIPs in place.

Gruss
Bernd
— 
https://bernd.eckenfels.net

-
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org



Re: [dbcp] Force close connections on fatal SQL Exceptions

2024-02-13 Thread Phil Steitz
On Tue, Feb 13, 2024 at 1:03 PM Bernd Eckenfels 
wrote:

> Phil Steitz wrote on 13. Feb 2024 20:46 (GMT +01:00):
> > Thanks, Gary.  I agree with everything below.  I think it's best to just
> > leave things as they are.
>
> If it’s plugable the project might not have to care,
> But then how to fix the reported problem? Do we have an idea what’s
> causing it?
>

At this point, my best guess is that the app is not closing connections on
some exception paths.  The fact that force-killing connections that would get
killed anyway on return improves the situation supports that idea, but I
have not dug into the application code.

>
> And a extension to that, using a conn3crion pool you expect it abstracts
> and handles all
> Those idiosyncrasies of different drivers, platforms and JDBC
> interpretations. Just giving
> Up is not the best option.
>

We decided years ago that we were not going to try to add special code for
every driver, and I think that was a good decision.  The code is already
plenty complex.  I think the fast-fail-validation solution implemented now is
sufficient for this case.  I don't think waiting to close the connection
until it is returned is actually causing the reported problem.

>
> In my experience with a properitary (but very robust pool since it handles
> a small number of drivers)
> throwing away connections on concrete suspicion turned out to be helpful.
> Either by configurable state, substrings of error messages or actual
> sqlexception subtypes.
>
> But maybe additional validation is also fine as long as it uses strict
> timeouts, some control for paralysis and starvation. If I understand this
> would happen in the mentioned problem case, so is there a timeout missing.
>
> Having said that as a tip to OP use tcpkeepalive and lower the timeouts.
> It saved my ass more than once, especially with VIPs in place.
>

The liveness problem is a [pool] concern, per my recent post.  I think
getting support in [pool] for managing transient factory outages would
really help.  Any ideas that you have on how to do that would be much
appreciated.

Phil

>
> Gruss
> Bernd
> —
> https://bernd.eckenfels.net
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
> For additional commands, e-mail: dev-h...@commons.apache.org
>
>


Re: [pool] Recovering from transient factory outages

2024-02-13 Thread Romain Manni-Bucau
Hi Phil,

You are right that it can be done in [pool] - I'm not sure it is the right
level (for instance, in my previous example it would need to expose some
"getCircuitBreakerState" to see whether the pool can be used or not), but
maybe I'm too used to decorators ;).
The key point for [pool] is the last one, the proxying.
[Pool] can't do that itself since it manages generic, interchangeable
instances, but if you add the notion of a proxy factory, falling back on a
JRE proxy when only interfaces are involved, it will work.
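
A sketch of that JRE-proxy fallback for interface-typed delegates - a proxy
in front that always resolves the current delegate, so the pool (or whatever
sits behind it) can be swapped without callers noticing. The names are made
up for illustration:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

public final class SwappableProxy {

    private SwappableProxy() {
    }

    @SuppressWarnings("unchecked")
    public static <T> T of(final Class<T> iface, final Supplier<T> currentDelegate) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                (proxy, method, args) -> {
                    try {
                        // Every call is routed to whatever the supplier returns right now.
                        return method.invoke(currentDelegate.get(), args);
                    } catch (final InvocationTargetException e) {
                        throw e.getCause(); // unwrap so callers see the original exception
                    }
                });
    }
}

e.g. javax.sql.DataSource ds = SwappableProxy.of(DataSource.class, delegateRef::get);
where delegateRef is an AtomicReference<DataSource> that the watchdog swaps.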

Romain Manni-Bucau
@rmannibucau | Blog | Old Blog | Github | LinkedIn | Book



On Tue, 13 Feb 2024 at 22:38, Phil Steitz  wrote:

> Thanks, Romain, this is awesome.  I would really like to find a way to get
> this kind of thing implemented in [pool] or via enhanced factories.  See
> more on that below.
>
> On Tue, Feb 13, 2024 at 1:27 PM Romain Manni-Bucau 
> wrote:
>
> > Hi Phil,
> >
> > What I used by the past for this kind of thing was to rely on the timeout
> > of the pool plus in the healthcheck - external to the pool - have some
> > trigger (the simplest was "if 5 healthchecks fail without any success in
> > between" for ex), such trigger will spawn a task (think thread even if it
> > uses an executor but guarantee to have a place for this task) which will
> > retry but at a faster pace (instead of every 30s it is 5 times in a run
> for
> > - number was tunable but 5 was my default).
> > If still detected as down - vs not overloaded or alike - it will consider
> > the database down and spawn a task which will retry every 30 seconds, if
> > the database comes back - I added some business check but idea is not
> just
> > check the connection but the tables are accessible cause often after
> such a
> > downtime the db does not come at once - just destroy/recreate the pool.
> > The destroy/recreate was handled using a DataSource proxy in front of the
> > pool and change the delegate.
> >
>
> It seems to me that all of this might be possible using what I was calling
> a ReslientFactory.  The factory could implement the health-checking itself,
> using pluggable strategies for how to check, how often, what means outage,
> etc.  And the factory could (if so configured and in the right state)
> bounce the pool.  I like the model of escalating concern.
>
>
> > Indeed it is not magic inside the pool but can only better work than the
> > pool solution cause you can integrate to your already existing checks and
> > add more advanced checks - if you have jpa just do a fast query on any
> > table to validate db is back for ex.
> > At the end code is pretty simple and has another big advantage: you can
> > circuit break the database completely while you consider the db is down
> > just letting passing 10% of whatever ratio you want - of the requests
> (kind
> > of canary testing which avoids too much pressure on the pool).
> >
> > I guess it was not exactly the answer you expected but think it can be a
> > good solution and ultimately can site in a new package in dbcp or alike?
> >
>
> I don't see anything here that is specific really to database connections
> (other than the proxy setup to gracefully handle bounces), so I want to
> keep thinking about how to solve the general problem by somehow enhancing
> factories and/or pools.
>
> Phil
>
> >
> > Best,
> > Romain Manni-Bucau
> > @rmannibucau  |  Blog
> >  | Old Blog
> >  | Github <
> > https://github.com/rmannibucau> |
> > LinkedIn  | Book
> > <
> >
> https://www.packtpub.com/application-development/java-ee-8-high-performance
> > >
> >
> >
> > Le mar. 13 févr. 2024 à 21:11, Phil Steitz  a
> > écrit :
> >
> > > POOL-407 tracks a basic liveness problem that we have never been able
> to
> > > solve:
> > >
> > > A factory "goes down" resulting in either failed object creation or
> > failed
> > > validation during the outage.  The pool has capacity to create, but the
> > > factory fails to serve threads as they arrive, so they end up parked
> > > waiting on the idle object pool.  After a possibly very brief
> > interruption,
> > > the factory heals itself (maybe a database comes back up) and the
> waiting
> > > threads can be served, but until other threads arrive, get served and
> > > return instances to the pool, the parked threads remain blocked.
> > > Configuring minIdle and pool maintenance (timeBetweenEvictionRuns > 0)
> > can
> > > improve the situation, but running the evictor at high enough frequency
> > to
> > > handle every transient failure is not a great solution.
> > >
> > > I am stuck on how to improve this.  I have experimented with the idea
> of
> > a
> > > ResilientFactory,