This is some really ugly code, but I just wanted to share what I was
doing in the meantime to get a reliable connection.  This is in my app
that's trying to grab a connection from the Excalibur datasource.

/**
 * Gets a database connection from the pool, retrying briefly while
 * the pool reports that it is exhausted.
 */
protected Connection getConnection() {
    int attempts = 0;
    while (attempts < 100) {
        try {
            return datasource.getConnection();
        } catch (SQLException e) {
            // Excalibur doesn't throw a dedicated exception type, so the
            // message text is the only way to detect pool exhaustion.
            // (equals() is called on the constant to avoid an NPE if the
            // message is null.)
            if ("Could not create enough Components to service your request."
                    .equals(e.getMessage())) {
                // at the pool limit; back off and retry
                try {
                    Thread.sleep(50);
                } catch (InterruptedException ie) {
                    // ignore
                }
                attempts++;
            } else {
                throw new CascadingRuntimeException(
                    "An exception occurred getting a database connection.", e);
            }
        }
    }
    throw new RuntimeException(
        "Failed to get a connection after " + attempts + " attempts");
}

Ideally you wouldn't be checking the SQL exception's message to see if
you've hit the max limit, but Excalibur isn't throwing special exception
types, so there's not much choice.  5 seconds (100 attempts x 50ms) may not
be enough time, especially if new connections are being created.
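If the flat 50ms sleep turns out to be too short, the retry could back off
exponentially instead.  Here's a rough sketch of that as a generic helper --
this is not part of Excalibur, just an illustration in modern Java (the
names Retry/withBackoff are made up), and the predicate stands in for the
message check above:

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

public class Retry {
    /**
     * Retries the action, sleeping with exponential backoff between
     * attempts, as long as the failure matches the retryable predicate.
     */
    public static <T> T withBackoff(Callable<T> action,
                                    Predicate<Exception> retryable,
                                    int maxAttempts,
                                    long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (!retryable.test(e)) {
                    throw e; // not pool exhaustion: fail immediately
                }
                last = e;
                try {
                    Thread.sleep(delay);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                delay = Math.min(delay * 2, 1000); // cap the backoff
            }
        }
        throw new RuntimeException(
            "Gave up after " + maxAttempts + " attempts", last);
    }
}
```

The backoff cap keeps the worst-case wait bounded while still polling
quickly when a connection is likely to free up soon.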

Serge Knystautas
Loki Technologies - Unstoppable Websites
http://www.lokitech.com/
----- Original Message -----
From: "Serge Knystautas" <[EMAIL PROTECTED]>
To: "Avalon Developers List" <[EMAIL PROTECTED]>
Sent: Friday, November 02, 2001 11:21 AM
Subject: Re: Throttling Excalibur


> ----- Original Message -----
> From: "Berin Loritsch" <[EMAIL PROTECTED]>
>
>
> > My initial attempts to provide a BlockingHardResourceLimitingPool (i.e.
> > one that waits until something is released) resulted in DeadLock in
> > certain circumstances.  This is IMO a bad thing--because once you have
> > deadlock, you can't get ANY more connection objects.
> >
> > For the interim solution, I have opted for the fail early approach.  I
> > have not had the time to debug the Blocking version.
>
> I wrote the connection pooler for Town (which is why I originally used it
> for James), so I'll see if the logic used there has any mappings for this
> blocking dilemma.
>
> In Town's, I had the pool try to grab a connection 20-100 (?) times,
> sleeping 50ms between attempts.  While trying, if I saw that new
> connections were being created, I would wait/retry longer since that
> usually translates into a delay before a connection would be available.
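That Town-style adaptive retry could be sketched roughly like this -- the
Pool interface here is an assumption for illustration, not the real
Town/Excalibur API:

```java
public class AdaptiveRetry {
    /** Illustrative pool interface, not a real Excalibur type. */
    public interface Pool {
        Object tryAcquire();             // returns null when exhausted
        boolean isCreatingConnections(); // true while a new conn is opening
    }

    /**
     * Polls the pool, sleeping between attempts.  If the pool is busy
     * opening a fresh connection, a slot should free up soon, so the
     * retry budget is extended rather than failing early.
     */
    public static Object acquire(Pool pool, long sleepMs)
            throws InterruptedException {
        int maxAttempts = 20;
        for (int attempts = 0; attempts < maxAttempts; attempts++) {
            Object conn = pool.tryAcquire();
            if (conn != null) {
                return conn;
            }
            if (pool.isCreatingConnections()) {
                maxAttempts = 100; // wait longer: relief is on the way
            }
            Thread.sleep(sleepMs);
        }
        return null; // caller decides how to fail
    }
}
```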
>
> > > On an unrelated note, I see in Jdbc2Connection (and Jdbc3Connection)
> > > there's an m_num_uses which seems to cap the number of times a
> > > connection is used to 15.  What's the rationale behind this?  I'd
> > > rather go through hundreds of thousands of statements before it's
> > > closed...if my SQL is good, it might take 1ms to run, and some servers
> > > take 1000ms to connect.  So by capping num uses to 15, I've gone from
> > > 1ms/query to 67ms/query.  Just seems like a weird approach.
> >
> > :)
> >
> > I know it seems to be a weird approach, however I have found in dealing
> > with Blobs that most JDBC drivers are buggy.  For instance, if you
> > happen to look for a Blob, and receive an exception because you tried
> > to download a null blob, the Connection is now useless.
>
> Well, I've used Oracle's JDBC driver in the past years, so I can relate to
> JDBC drivers being problematic with BLOBs.  However, I use Inet Software's
> to connect to MS SQL 2K, and I can go days without needing to close a
> connection that's returning almost exclusively blobs.  I would think this
> would be easy to leave as configurable, if not just follow the oradb flag
> and cap the num uses based on that. :)
>
> > Also, if there is some inconsistency in your code where all the JDBC
> > objects are not properly closed by your application, you will find that
> > the Connection object again becomes unusable.
> >
> > It is exhausting work to audit your own code, much less everyone
> > else's.  I could set a flag so that when the Connection produces an
> > Exception, it gets recycled--but that means if your SQL is bad, you get
> > a new connection every time you execute it.  Not optimal either.
> >
> > Eventually, I should make this cut-off configurable--but with testing
> > in my environment, the setting I have works for everything.
>
> This is what I did in Town for tracking down applications where JDBC
> connections weren't closed... when the pool got a connection, I created a
> dummy exception and attached it to that conn object, which in your case
> would be in Jdbc2/3Connection.  If I successfully recycled the
> connection, I would delete the exception.  However, if finalize() was
> called without a recycle, that told me the object was gc()'d, and the
> application wasn't properly closing the connection.  So then I could
> dump/log the stack trace in finalize(), and presto, I knew where the
> connection was grabbed and could track down the offending application
> that wasn't closing the connection.
>
> It actually didn't create as much overhead as you might expect, but you
> could make this configurable (such as a debug flag), so you only create
> these dummy exceptions-for-stack-traces while developing.
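The leak-tracing trick described above could be sketched as a wrapper like
the one below.  The class and method names are illustrative, not the real
Jdbc2Connection code, and newer Java would use a Cleaner or PhantomReference
instead of overriding finalize():

```java
public class TracedConnection {
    // Dummy exception captured at acquisition time; its stack trace
    // records where the connection was handed out.
    private Exception acquiredAt = new Exception("connection acquired here");
    private boolean closed = false;

    /** Marks the connection as recycled and discards the trace. */
    public void close() {
        closed = true;
        acquiredAt = null;
    }

    /** True if the connection was never properly closed. */
    public boolean isLeaked() {
        return !closed;
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            if (!closed && acquiredAt != null) {
                // Collected without close(): dump the stack trace that
                // shows where the leaked connection was acquired.
                acquiredAt.printStackTrace();
            }
        } finally {
            super.finalize();
        }
    }
}
```

The overhead is just one Exception allocation per checkout, which is why
gating it behind a debug flag keeps production cost near zero.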
>
> > If you have suggestions, patches, example code, etc. I would be more
> > than happy to evaluate them and incorporate them in the pooling code.
> > Eventually, I want to pool the PreparedStatement objects as well (since
> > the JDBC driver specs state that it is possible and provides additional
> > performance).
>
> That sounds good too.  Although I'm still just hoping to get Excalibur
> reliable enough that I don't have to call jdbc-support experimental in
> James, so performance optimizations are secondary to me for now.
>
> Serge Knystautas
> Loki Technologies  - Unstoppable Websites
> http://www.lokitech.com/


