Hi Frank,

On Fri, Mar 23, 2012 at 4:02 AM, frank <frank.as...@congrace.de> wrote:
> [..]
>> In your experience, were you working with already-transactional
>> resources (via JTA?) As mentioned on the call, I think if we attempt
>> to implement transactions ourselves, there's all kinds of opportunity
>> for failure. But if we can "wrap" already-transactional resources
>> while still keeping the ability to integrate non-transactional blob
>> storage, that seems more palatable to me.
> In this test I had a service write to a datasource connected via
> Hibernate and to a filesystem within a transaction. The way I did this
> was quite straightforward. I defined an Action class which held all the
> necessary information about the atomic operations (e.g. one Action can
> be: write object to datasource, or: write XML to filesystem) in a
> LinkedList. I kept an index in the Transaction telling the system which
> Action is the current one, and if some error occurs, the system iterates
> back up the LinkedList, undoing any Actions it encounters.
> So the system used Hibernate's org.hibernate.Session and Transaction for
> handling transactions on the datasource level, but it uses simple
> handwritten logic for handling transactions on the filesystem.
> This logic has all been wrapped into a PlatformTransactionManager from
> Spring, which I weaved into the service using @Transactional annotations
> and a Spring bean configuration for the datasource, filesystem and the
> transaction manager.
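To make sure I'm reading the Action/undo scheme right, here's a minimal sketch of how I picture it (the `Action`, `apply`, and `undo` names are my guesses, not your actual API): apply each Action in order, track the index of the last one that completed, and on failure walk back down the list reversing each completed Action.

```java
import java.util.ArrayList;
import java.util.List;

// One reversible unit of work, e.g. "write object to datasource"
// or "write XML to filesystem".
interface Action {
    void apply() throws Exception;
    void undo();
}

class SimpleTransaction {
    private final List<Action> actions = new ArrayList<>();
    private int current = -1; // index of the last successfully applied Action

    void add(Action a) {
        actions.add(a);
    }

    // Apply actions in order; on any failure, undo the completed
    // ones in reverse order and rethrow.
    void commit() throws Exception {
        try {
            for (Action a : actions) {
                a.apply();
                current++;
            }
        } catch (Exception e) {
            rollback();
            throw e;
        }
    }

    // Walk back from the current index, reversing each completed Action.
    void rollback() {
        for (int i = current; i >= 0; i--) {
            actions.get(i).undo();
        }
        current = -1;
    }
}
```

If that's roughly the shape of it, then my question stands: each `undo()` has to be possible, which is what makes me curious about the staging-area vs. append-only design of the filesystem side.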
Hmm... I'd be curious to see how you wired it all together. I assume your
Filesystem API was designed such that all Actions *could* be reversed. So
was there some kind of staging area, or was your Filesystem interface
append-only (where it generated ids for new items)?

>> The original point made in the paper was that there was a way to not
>> *force* locking to occur (via optimistic concurrency control) if the
>> storage interface provided a way to declare the previously-seen state
>> with each request.
> But this would mean fetching the existing objects from the storage layer
> before applying any update in order to be able to compare the versions,
> which also introduces quite an overhead, depending on the object's
> complexity. And when dealing with large datastreams it seems quite
> inefficient to compare the currently stored version with one version
> supplied with a put request in order to overwrite it with yet another
> version given in the put request.

Yes, that could get expensive, and I wasn't sure it would really make
sense to force it at this level. In any case, I think this will be easier
to reason about with more experimentation, particularly with regard to
transactional and locking capabilities.

- Chris

_______________________________________________
Fedora-commons-developers mailing list
Fedora-commons-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/fedora-commons-developers