The following comment has been added to this issue:

     Author: Gianny DAMOUR
    Created: Tue, 7 Oct 2003 5:07 AM
       Body:
Thanks for this feedback. I had a quick look at the preview of an Interceptor-based implementation of a ConnectionManager, and I am not very happy with it. Most of the defined interceptors exist only to show what an Interceptor implementation looks like. Moreover, the "Connection Management" section of the JCA specification deals only with connection pooling capabilities, while the preview focuses on everything else.

Before describing the reasons behind my position in more detail, I will 
comment on why I have decided to have a single(ton) ConnectionManager. The 
reason is very simple: how many WorkManagers will be started during the 
bootstrapping of G? I hope only one. If, instead, a WorkManager is created 
each time a resource adapter is deployed, I doubt the efficiency of that 
approach. So my conclusion was simple: one WorkManager and one 
ConnectionManager. Moreover, during an orderly shutdown, it will be pretty easy 
to stop the single service, instead of iterating over all the existing 
ConnectionManagers (unless they have a common parent, which is, at the end of 
the day, a single ConnectionManager).

You understood the design pretty well, so there is no misinterpretation in 
your analysis, and I thank you for your feedback.

Now, let's talk about the Interceptor implementation.

Useless interceptors.

I've got the following stack:

(ProxyConnectionManager ->) SubjectInterceptor -> 
TransactionEnlistingInterceptor -> XAResourceInsertionInterceptor -> 
NewConnectionHandleInterceptor -> MCFConnectionInterceptor

What is the benefit of having two interceptors whose only job is to retrieve 
the current transaction and security contexts? There is no additional 
flexibility; it only wastes resources.

Suppose that I need to add the values "un", "deux" and "trois" to a 
collection. Which is the more natural approach: this,

Collection test = new ArrayList();
test.add("un");
test.add("deux");
test.add("trois");

or to implement an Interceptor stack which does the same thing?
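To make the comparison concrete, here is a hypothetical sketch (not taken from the actual code-base) of what the interceptor version of those same three add() calls would look like:

```java
import java.util.ArrayList;
import java.util.Collection;

// Hypothetical sketch: the same three add() calls expressed as an
// interceptor chain, to illustrate the overhead being discussed.
interface Interceptor {
    void invoke(Collection<String> target);
}

class AddInterceptor implements Interceptor {
    private final String value;
    private final Interceptor next;

    AddInterceptor(String value, Interceptor next) {
        this.value = value;
        this.next = next;
    }

    // Each interceptor adds its value, then delegates down the stack.
    public void invoke(Collection<String> target) {
        target.add(value);
        if (next != null) {
            next.invoke(target);
        }
    }
}

public class InterceptorStackDemo {
    public static void main(String[] args) {
        // Three objects and three delegating calls, where three plain
        // add() calls would have done the same job.
        Interceptor stack =
            new AddInterceptor("un",
                new AddInterceptor("deux",
                    new AddInterceptor("trois", null)));
        Collection<String> test = new ArrayList<String>();
        stack.invoke(test);
        System.out.println(test); // prints [un, deux, trois]
    }
}
```

Same result, but with three extra objects and a chain of delegating calls in between.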

This is exactly the same question for NewConnectionHandleInterceptor and 
MCFConnectionInterceptor. If you have decided to have two interceptors in 
order to prepare for the connection sharing implementation, I believe it is 
pointless, because I have the feeling that connection sharing must be handled 
on the way to the pool, not on the way back from it.


Lack of management hooks

For now, nothing is manageable. For instance, it is impossible to obtain the 
number of ManagedConnections currently enrolled in a local or XA transaction, 
currently active, idle, et cetera.

Another example: I assume that TransactionCachingInterceptor is in charge of 
caching transacted connections across method invocations. The current 
implementation does not provide a hook to force the collection of such 
connections based on a configurable policy. It would be great to be able to 
configure distinct collection policies based on various criteria.
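A minimal sketch of the kind of management hook being asked for: a small state-count tracker a ConnectionManager could expose, so that connection counts per state become observable. All names here are illustrative, not from the actual code-base.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of a management hook: counts of
// ManagedConnections that are idle, active, or enrolled in a local or
// XA transaction, observable at runtime.
public class ConnectionStats {
    public enum State { IDLE, ACTIVE, LOCAL_TX, XA_TX }

    private final Map<State, Integer> counts =
        new EnumMap<State, Integer>(State.class);

    // Record a ManagedConnection moving between states; from == null
    // means the connection has just been created.
    public synchronized void transition(State from, State to) {
        if (from != null) {
            counts.put(from, count(from) - 1);
        }
        counts.put(to, count(to) + 1);
    }

    public synchronized int count(State state) {
        Integer n = counts.get(state);
        return n == null ? 0 : n.intValue();
    }
}
```

A configurable collection policy could then be driven off these counts, e.g. forcing collection when the number of transacted connections crosses a threshold.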

Multiplexing of ConnectionEventListener -> ConnectionReturnAction

This multiplexing is inelegant.

Moreover, I have the feeling that there will be only one type of 
ConnectionEventListener. Not very handy. Consider the following scenario:

A local transaction is running, and the start of an XA transaction is then 
requested. To treat this request, you will have to code various tests on the 
ConnectionInfo class in order to detect that this is an invalid request. You 
will end up with something very like TransactionCachingInterceptor.returnConnection, 
which is only "if then, if then". However, perhaps you intend to add to 
ConnectionInfo various information, such as a strategy to be executed when a 
specific connection event is received.
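A hypothetical sketch of that strategy idea: instead of "if then, if then" tests on ConnectionInfo, each connection carries a strategy describing which events are legal in its current state. The types and names here are illustrative, not from the actual code-base.

```java
// Hypothetical sketch: per-state event strategies instead of
// conditional tests on ConnectionInfo.
interface ConnectionEventStrategy {
    void connectionClosed();
    void localTransactionStarted();
    void xaTransactionStarted();
}

// Strategy installed while a local transaction is in progress.
class LocalTransactedStrategy implements ConnectionEventStrategy {
    public void connectionClosed() {
        // return the connection handle to the pool
    }
    public void localTransactionStarted() {
        // already started: nothing to do
    }
    public void xaTransactionStarted() {
        // A local transaction is running, so this request is invalid.
        throw new IllegalStateException(
            "cannot start an XA transaction inside a local transaction");
    }
}
```

The invalid scenario above is then rejected by the strategy itself, with no type tests scattered through the listener.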

I am sorry, but I still see the Partition-based implementation as superior. I 
even think that an interceptor implementation is a subset of what I have 
proposed.

For instance, as you know, matchManagedConnections is a point of contention. I 
have explained that one could distribute a set of latches across the available 
ManagedConnections in order to scale. Actually, you do not need to: create an 
idle partition which contains N idle sub-partitions and distribute the 
allocation requests across them.
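A minimal sketch of that sub-partition idea, under the assumption that each sub-partition is guarded independently so only one of them is locked per request (names are illustrative, not from the actual code-base):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the idle partition split into N sub-partitions,
// with allocation requests spread across them so that
// matchManagedConnections is no longer a single point of contention.
public class PartitionedIdlePool<C> {
    private final Deque<C>[] subPartitions;
    private final AtomicInteger counter = new AtomicInteger();

    @SuppressWarnings("unchecked")
    public PartitionedIdlePool(int n) {
        subPartitions = (Deque<C>[]) new Deque[n];
        for (int i = 0; i < n; i++) {
            subPartitions[i] = new ArrayDeque<C>();
        }
    }

    // Return an idle ManagedConnection to one of the sub-partitions.
    public void release(C connection) {
        Deque<C> p = subPartitions[nextIndex()];
        synchronized (p) {
            p.addLast(connection);
        }
    }

    // Serve an allocation request; at most one sub-partition is locked
    // at a time, and requests start from rotating positions.
    public C allocate() {
        int start = nextIndex();
        for (int i = 0; i < subPartitions.length; i++) {
            Deque<C> p = subPartitions[(start + i) % subPartitions.length];
            synchronized (p) {
                C c = p.pollFirst();
                if (c != null) {
                    return c;
                }
            }
        }
        return null; // empty: the caller falls back to the factory partition
    }

    private int nextIndex() {
        return Math.floorMod(counter.getAndIncrement(), subPartitions.length);
    }
}
```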

By the way, ResourceAdapter and ManagedConnectionFactory MUST be MBeans to be 
JSR-77 compliant, and I was fully aware of it.

Thanks for your review.
---------------------------------------------------------------------
View the issue:

  http://jira.codehaus.org/secure/ViewIssue.jspa?key=GERONIMO-97


Here is an overview of the issue:
---------------------------------------------------------------------
        Key: GERONIMO-97
    Summary: JCA - Connection Management
       Type: New Feature

     Status: Assigned
   Priority: Major

 Time Spent: Unknown
  Remaining: Unknown

    Project: Apache Geronimo
 Components: 
             core

   Assignee: David Jencks
   Reporter: Gianny DAMOUR

    Created: Thu, 2 Oct 2003 5:10 AM
    Updated: Mon, 6 Oct 2003 6:58 PM

Description:
This is the final proposal for the Connection Management section of the JCA 
specification. It is a "clean" version of GERONIMO-90, plus a minimalist 
deployer which allows testing such an implementation within a running server.

When I write "clean", it is evident that some work still needs to be done; 
however, given the size of the code-base - 30 classes - it would be great to 
reflect seriously on this proposal.

The content by package is:

*********************************************************
org.apache.geronimo.connector.connection:
*********************************************************
The ConnectionManager SPI interface has been implemented and delegates the 
allocation of connection handles to a pool of ManagedConnections. For now, the 
ConnectionManager is really simple: it delegates directly to the pool. 
However, one still needs to hook the Transaction and Security services into 
the allocateConnection method.
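A minimal sketch of that delegation, using local stand-ins for the javax.resource.spi types so it stays self-contained (the real SPI signatures take more parameters; all names here are illustrative):

```java
// Hypothetical sketch: allocateConnection simply forwards to the pool.
// The Transaction and Security services would be hooked in around this
// call.
interface ManagedConnection {
    Object getConnection();
}

interface ManagedConnectionPool {
    ManagedConnection allocate();
}

class SimpleConnectionManager {
    private final ManagedConnectionPool pool;

    SimpleConnectionManager(ManagedConnectionPool pool) {
        this.pool = pool;
    }

    public Object allocateConnection() {
        // Hook-in point for the Transaction and Security services.
        return pool.allocate().getConnection();
    }
}
```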

*********************************************************
org.apache.geronimo.connector.connection.partition:
*********************************************************
The specification does not define how connection pooling should be 
implemented. However, some non-prescriptive guidelines have been provided; one 
of them is to partition the pool. This is basically what I have decided to 
implement: the pool is partitioned on a per-ManagedConnectionFactory basis. 
For now, it is further partitioned on an idle, active, factory, and destroy 
basis. The general idea of this design is to define distinct sets of behaviors 
depending on the kind of partition.

Examples: 
The factory partition is in charge of creating/allocating new connection 
handles. When its allocateConnection method is called, it decides whether a 
new ManagedConnection should be created or an existing one can be re-used. 
The XA partition (to be implemented) is in charge of creating/allocating new 
transacted connection handles. When its allocateConnection method is called, 
it enlists the ManagedConnection with our TM and then gets a connection handle 
from this enlisted ManagedConnection.

Inter-partition events can be propagated via an AWT-like event model. This 
mechanism is used, for example, by the factory partition: it monitors the idle 
and destroy partitions in order to decide how to serve a new allocation 
request. More precisely, if a ManagedConnection is added to the idle 
partition, then a permit to try matchManagedConnections is added; if a 
ManagedConnection is added to the destroy partition, then a permit to create a 
new ManagedConnection is added.
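The permit accounting above can be sketched with java.util.concurrent semaphores (the actual mechanism may differ; names are illustrative):

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of the permit accounting: an idle-partition add
// grants a permit to try matchManagedConnections; a destroy-partition
// add grants a permit to create a new ManagedConnection.
public class FactoryPermits {
    private final Semaphore matchPermits = new Semaphore(0);
    private final Semaphore createPermits;

    public FactoryPermits(int maxPoolSize) {
        // Initially the whole pool capacity is available for creation.
        createPermits = new Semaphore(maxPoolSize);
    }

    // Event: a ManagedConnection was added to the idle partition.
    public void connectionIdled() {
        matchPermits.release();
    }

    // Event: a ManagedConnection was added to the destroy partition.
    public void connectionDestroyed() {
        createPermits.release();
    }

    // The factory partition first tries to match an idle connection,
    // then falls back to creating a new one.
    public boolean tryMatch() {
        return matchPermits.tryAcquire();
    }

    public boolean tryCreate() {
        return createPermits.tryAcquire();
    }
}
```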

*********************************************************
org.apache.geronimo.connector.connection.recycler:
*********************************************************
Partitions may be recycled. For instance, if a ManagedConnection sits idle for 
too long, then it may be eligible for recycling (destruction, in the case of 
an idle ManagedConnection).
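One such recycling policy might be sketched as follows (a hypothetical example; the names and the timeout-based criterion are illustrative, not from the actual code-base):

```java
// Hypothetical sketch: a ManagedConnection that has sat idle longer
// than a configured timeout becomes eligible for recycling.
public class IdleTimeoutPolicy {
    private final long maxIdleMillis;

    public IdleTimeoutPolicy(long maxIdleMillis) {
        this.maxIdleMillis = maxIdleMillis;
    }

    // idleSinceMillis: timestamp at which the connection became idle.
    public boolean isEligibleForRecycling(long idleSinceMillis, long nowMillis) {
        return nowMillis - idleSinceMillis >= maxIdleMillis;
    }
}
```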


*********************************************************
org.apache.geronimo.connector.connection.logging:
*********************************************************
The inner workings of ManagedConnectionFactory and ManagedConnection can be 
tracked via a PrintWriter. LoggerFactory defines the contract to obtain a 
PrintWriter factory backed by various output streams. 
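A minimal sketch of such a contract, with the writer backed by an in-memory buffer for illustration (the interface shape is an assumption, not the actual LoggerFactory):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical sketch: a factory that hands out PrintWriters backed by
// an arbitrary stream, here an in-memory buffer.
interface LoggerFactory {
    PrintWriter getLogWriter(String category);
}

class BufferLoggerFactory implements LoggerFactory {
    final StringWriter buffer = new StringWriter();

    public PrintWriter getLogWriter(String category) {
        // autoflush so tracked calls appear immediately
        PrintWriter writer = new PrintWriter(buffer, true);
        writer.println("[" + category + "] log writer created");
        return writer;
    }
}
```

A ManagedConnectionFactory would receive its PrintWriter from such a factory via setLogWriter, so its inner workings end up on whichever stream the factory was configured with.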

*********************************************************
org.apache.geronimo.connector.deploy:
*********************************************************
A minimalist deployer for the outbound-resourceadapter nodes is implemented.

Gianny


---------------------------------------------------------------------
JIRA INFORMATION:
This message is automatically generated by JIRA.
