On Nov 21, 2006, at 10:05 AM, Ignacio Silva-Lepe wrote:
Hi Jim,
I've been taking a look at o.a.t.service.persistence.store.Store to keep track of component instances in the conversational scope container. It does look to me like Store should be usable as the single artifact to keep track of component instances, whether instances need to be maintained persistently or not. The container would use the Store interface to get at its instances, and the actual implementation of the interface that is used (e.g., MemoryStore or JDBCStore) could be configured via some mechanism, perhaps yet to be defined.
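For concreteness, here is a rough sketch of the kind of contract being discussed. The names and signatures are illustrative only, not the checked-in o.a.t.service.persistence.store.Store API, and the per-record expiration could just as easily be a store-wide setting:

// Hypothetical Store contract, for discussion only; the real
// o.a.t.service.persistence.store.Store may differ in names and signatures.
public interface Store {

    // Persist (or update) the instance keyed by owner and conversation id,
    // keeping it no longer than the expiration interval (in milliseconds).
    void writeRecord(Object owner, String conversationId, Object instance, long expiration)
            throws StoreException;

    // Return the instance for the given owner and conversation id, or null
    // if no record is present.
    Object readRecord(Object owner, String conversationId) throws StoreException;

    // Remove a single record.
    void removeRecord(Object owner, String conversationId) throws StoreException;

    // Remove every record belonging to a conversation (see point (2) below).
    void removeRecords(String conversationId) throws StoreException;
}

// Hypothetical checked exception used by the sketches in this thread.
class StoreException extends Exception {
    StoreException(String message) { super(message); }
    StoreException(String message, Throwable cause) { super(message, cause); }
}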
We will probably need the conversational scope container to "know" about at least two (I can't think of more right now) Store types: durable vs. non-durable. Some runtime configurations may have both types, others just one (probably the non-durable one, but I could also see a runtime configured with all conversations as durable). These would be autowired to the scope container. We can initially do this with a marker interface for durable types, but I would ultimately like to support this via autowire intents (part of policy). For example, the scope container should be able to say "wire this reference to a Store that is transactional and durable", i.e. an intent. I would also like to have the spec modified so that we can specify a conversation must be durable or should be durable (the default would be non-durable) using SCA intents as well. I was planning on modifying the autowiring capability to work with Felix's intent engine to support this.
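As a first cut, the marker-interface approach might look something like this. DurableStore and the constructor injection below are illustrative assumptions building on the Store sketch above, not an agreed API:

// Hypothetical marker interface identifying durable Store implementations;
// an autowire intent would eventually replace this.
public interface DurableStore extends Store {
}

// Illustrative autowiring target in the scope container; the field names and
// injection mechanism are assumptions, not the current Tuscany code.
public class ConversationalScopeContainer {

    private final Store nonDurableStore;     // e.g. a MemoryStore
    private final DurableStore durableStore; // e.g. a JDBC or journal store

    public ConversationalScopeContainer(Store nonDurableStore, DurableStore durableStore) {
        this.nonDurableStore = nonDurableStore;
        this.durableStore = durableStore;
    }
}

With intents, the same wiring would instead be expressed as "a Store that is transactional and durable" and resolved by the intent engine rather than by the Java type.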
This would also include handling of instance timeouts via the Store's expiration parameter, although when the Store returns null from readRecord because an instance has been removed due to a timeout, a mechanism external to the interface may be needed to determine whether the null means the instance used to be there but was timed out, or the instance never existed.
Hmm, we will need to think about this. My initial reaction is the store should not have to track which things were created in the past and have expired. Something external, probably the scope container, will, and the time it needs to track particular ones has to be bounded.
Alternatively, it may be possible for the Store to indicate this distinction as part of the interface.
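Purely as a hypothetical sketch of that alternative (neither the status enum nor the result type exists today, and note it would require the store to remember expired ids for some bounded period):

// Hypothetical way for a Store to distinguish "expired" from "never written";
// the competing option is for the scope container to track expirations itself.
public final class RecordResult {

    public enum Status {
        FOUND,     // instance present and returned to the caller
        EXPIRED,   // instance existed but was removed after its expiration passed
        NOT_FOUND  // no record was ever written for this id
    }

    private final Status status;
    private final Object instance; // non-null only when status == FOUND

    public RecordResult(Status status, Object instance) {
        this.status = status;
        this.instance = instance;
    }

    public Status getStatus() { return status; }
    public Object getInstance() { return instance; }
}

readRecord could then return a RecordResult instead of a bare instance, at the cost of an extra object creation per read.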
That being said, there are a couple of details that the Store may be able to help with too.
(1) The ScopeContainer.register operation is currently implemented by the conversational scope container in terms of its instance map. This way the container can signal whether a component is not registered at getInstance. If Store defines a register operation then MemoryStore could have a straightforward implementation. It's not clear, though, what JDBCStore's implementation would be.
I think the scope container will need to track the AtomicComponents registered but will not have instance wrappers. It will delegate to the Store for the instance. If the store returns null, it will call AtomicComponent.create() and hand the new instance back to the store. This is similar to the way InstanceWrappers work in the other scope containers. I think we should avoid creating an InstanceWrapper that hides this since it involves an unnecessary object creation, and I don't think the store should be dependent on it since it could be used by other services in the runtime to persist things.
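In rough pseudo-Java, the getInstance path described above would look something like this. The Store methods are the hypothetical ones sketched earlier, and AtomicComponent.create() is used as named above (the exact signature may differ):

// Sketch only: the scope container tracks registered components and delegates
// instance storage to the Store, without wrapping instances in an InstanceWrapper.
public class ConversationalScopeContainerSketch {

    private final java.util.Set<AtomicComponent> registeredComponents =
            new java.util.HashSet<AtomicComponent>();
    private final Store store;
    private final long expirationInterval;

    public ConversationalScopeContainerSketch(Store store, long expirationInterval) {
        this.store = store;
        this.expirationInterval = expirationInterval;
    }

    public void register(AtomicComponent component) {
        registeredComponents.add(component);
    }

    public Object getInstance(AtomicComponent component, String conversationId)
            throws StoreException {
        if (!registeredComponents.contains(component)) {
            // Unregistered components can still be signalled without an instance map.
            throw new IllegalStateException("Component not registered");
        }
        Object instance = store.readRecord(component, conversationId);
        if (instance == null) {
            // Nothing persisted yet: create the instance and hand it to the store.
            instance = component.create(); // method name taken from the text above
            store.writeRecord(component, conversationId, instance, expirationInterval);
        }
        return instance;
    }
}

AtomicComponent here is the existing runtime interface; everything else is a sketch.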
(2) The conversational scope container currently removes all instances for a given id, for any component, when a conversation end event is received. It would be helpful for the Store to also provide this functionality, which it would be able to implement more efficiently.
Yep
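For instance, a JDBC-backed store could turn the conversation-end cleanup into a single statement, roughly like the following (the table and column names are made up for illustration, and the DataSource wiring is assumed):

// Hypothetical bulk removal in a JDBC-backed store: one DELETE instead of the
// scope container removing each component's instance individually.
public class JDBCStoreCleanupSketch {

    private final javax.sql.DataSource dataSource;

    public JDBCStoreCleanupSketch(javax.sql.DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void removeRecords(String conversationId) throws StoreException {
        String sql = "DELETE FROM CONVERSATION_STORE WHERE CONVERSATION_ID = ?";
        java.sql.Connection connection = null;
        java.sql.PreparedStatement statement = null;
        try {
            connection = dataSource.getConnection();
            statement = connection.prepareStatement(sql);
            statement.setString(1, conversationId);
            statement.executeUpdate(); // removes every component's instance for the id
        } catch (java.sql.SQLException e) {
            throw new StoreException("Error removing conversation " + conversationId, e);
        } finally {
            if (statement != null) { try { statement.close(); } catch (java.sql.SQLException ignore) { } }
            if (connection != null) { try { connection.close(); } catch (java.sql.SQLException ignore) { } }
        }
    }
}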
Thoughts?
I'm sure the Store API will need to change some as we figure this out. I was planning on (at least) three implementations: a memory store, a JDBC store, and a journal store. The first two I checked in and the last I have on my laptop waiting to check in. The memory store will be very fast but non-durable and non-reliable. The JDBC store will be dog slow ;-) but very reliable since it persists each instance in its own transaction without any boxcaring, batching or SQL reordering. We could implement a JPA-based store which would give us batching and SQL reordering, but then we lose some reliability since writes are not necessarily flushed (i.e. forced) before success is reported to the client. At that point, it seems like the overhead of using a database is superfluous given we would have guaranteed "ACI" but not "D".
The journal store (based on HOWL) will hopefully be fast and reliable. Basically, it has an in-memory cache of persistent instances. When an instance is persisted, it will be serialized, and a header and a series of blocks containing the object byte array will be written to a log file asynchronously (i.e. non-forced). The final block write will be forced and will commit synchronously. When an instance is requested, the Store will just pull it from its in-memory cache and return it. After a crash, during recovery, the log file will be replayed and the in-memory state (cache) will be reconstituted. This will give us reliability and crash recovery as well as being really fast (limited forced disk writes, no database overhead, reads from memory). The only catch with this is that the log files must be larger than the space required to persist all of the *active* instances in a runtime instance. This shouldn't be a big deal since a database table also needs to be large enough in the JDBC store case. This is only a requirement for active instances, though, since at some point a log file will fill, the journal will copy active records to a second log, and the first will be recycled, freeing the space occupied by expired instances. The other catch is that all of the instances need to be small enough to fit in memory in a JVM. I also don't think this is too big a deal since IMO EJB2-style passivation sucks for most cases :-)
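Purely to illustrate the append-and-force pattern (this is not HOWL's API, just the underlying idea sketched with java.nio and a simple map as the cache; log rollover and recovery are omitted):

// Illustration of the journal idea only, not HOWL: serialize the instance,
// append the bytes to the log without forcing, then force so the record is
// durable before the call returns. Reads are served from the in-memory cache.
public final class JournalSketch {

    private final java.util.Map<String, Object> cache =
            new java.util.concurrent.ConcurrentHashMap<String, Object>();
    private final java.nio.channels.FileChannel log;

    public JournalSketch(java.io.File logFile) throws java.io.IOException {
        this.log = new java.io.RandomAccessFile(logFile, "rw").getChannel();
    }

    public void persist(String conversationId, Object instance) throws java.io.IOException {
        java.io.ByteArrayOutputStream bytes = new java.io.ByteArrayOutputStream();
        java.io.ObjectOutputStream out = new java.io.ObjectOutputStream(bytes);
        out.writeObject(instance); // the instance must be Serializable
        out.close();

        // Append the record; this write is not forced to disk...
        log.write(java.nio.ByteBuffer.wrap(bytes.toByteArray()), log.size());

        // ...only the final force makes it durable, and then the call returns.
        log.force(false);

        cache.put(conversationId, instance);
    }

    public Object read(String conversationId) {
        return cache.get(conversationId); // no disk access on the read path
    }
}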
I also believe we are going to need to clarify a few things in the spec. The first thing is that we can support @Init, but I don't want to require support for @Destroy. If we require @Destroy to be supported, a persisted instance (e.g. in the JDBC store) would need to be deserialized just to invoke the "destructor" method. That is a mistake EJB 2 made and hopefully we can avoid it.
The other addition I believe we are going to need is a way for service implementors to signal the boundaries at which changes to a conversational service should be atomic, reliable, and persisted. This seems a lot like a transaction, and perhaps we could just do this using declarative transaction intents? For example, a method on a client could be marked as transactional, and when it completes, any changes to conversational services made by the client invoking other services would be persisted (completed). Otherwise, we will wind up in a strange situation where we have to "autocommit" all changes. In other words, if client A calls B and C, every invocation of B and C would be persisted as an independent operation. It would be nice if a developer could instead specify that the calls to B and C be done in one operation.
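As a purely hypothetical illustration of what that might look like on the client side (the @Transactional annotation and the service interfaces below are placeholders I made up, not from the SCA spec or Tuscany):

// Hypothetical sketch only; the annotation and service interfaces are placeholders.
@java.lang.annotation.Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
@java.lang.annotation.Target(java.lang.annotation.ElementType.METHOD)
@interface Transactional {
}

interface ServiceB { void update(String state); }

interface ServiceC { void update(String state); }

public class ClientA {

    private final ServiceB b; // conversational reference (assumed injected)
    private final ServiceC c; // conversational reference (assumed injected)

    public ClientA(ServiceB b, ServiceC c) {
        this.b = b;
        this.c = c;
    }

    @Transactional // hypothetical intent marking the persistence boundary
    public void doWork() {
        b.update("step 1");
        c.update("step 2");
        // On return, the conversational state changed in B and C would be
        // persisted as one operation instead of one write per invocation.
    }
}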
Jim