Re: [Hibernate] Re: [Hibernate-devel] distributed caching
Hey Christian!

What I am trying to achieve is a distributed read-write cache. I didn't explicitly mention it in my post, but from the context (I was mentioning writes) it could be guessed. I think Gavin understood me better because we had a conversation about distributed read-write caches before.

Sorry if I wasn't precise enough.

regards
christoph

- Original Message -
From: "Christian Meunier" <[EMAIL PROTECTED]>
To: "Christoph Sturm" <[EMAIL PROTECTED]>;
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, September 04, 2002 5:50 PM
Subject: Re: [Hibernate] Re: [Hibernate-devel] distributed caching

> I must be missing something here.
>
> I don't understand why, in a read-only cache (what Christoph is trying to
> achieve), you need transaction-aware distributed caching.
> As I understand read-only cache behaviour: when you read an object from the
> database, cache it; when you update/delete an object, flush it from the
> cache, right?
>
> Gavin, could you please show me an example where transaction isolation
> could be broken without transaction-aware distributed caching?
>
> Regards
> Christian Meunier
[Hibernate] Re: [Hibernate-devel] distributed caching
Hi Christoph, sorry about the slow ping time. It takes a while to collect thoughts and write a response to some of these things.

>>> Yesterday I was thinking about implementing a distributed cache for Hibernate. I want each node to have its own cache, but if one node writes to its db the data should be invalidated in all caches. You once mentioned that hibernate would need a transaction aware distributed cache to support distributed caching. I dont get why this is necessary. Can you tell me what kind of problems you think i will run into when trying to implement such a beast, and where you think I could start? I was thinking about using jcs as cache, and when a session is committed, just invalidate all written objects in the cache. <<<

If you have a look over cirrus.hibernate.ReadWriteCache you'll see that there's some interesting logic that ensures transaction isolation is preserved. A cache entry carries around with it:

(0) the cached data item, if the item is fresh
(1) the time it was cached
(2) a lock count, if any transactions are currently attempting to update the item
(3) the time at which all locks had been released, for a stale item

All transactions lock an item before attempting to update it and unlock it after transaction completion.

i.e. the item has a lifecycle like this:

              lock
              <--
 --> fresh --> locked --> stale
 put      lock        release

(actually the item may be locked and released multiple times while in the "locked" state until the lock count hits zero, but the difficulty of representing that surpasses my minimal ascii-art skills.)

A transaction may read an item of data from the cache if the transaction start time is AFTER the time at which the item was cached. (If not, the transaction must go to the database to see what state the database thinks that transaction should see.)

A transaction may put an item into the cache if

(a) there is no item in the cache for that id
OR
(b) the item is not fresh AND
(c) the item in the cache with that id is unlocked AND
(d) the time it was unlocked is BEFORE the transaction start time

So what all this means is that when doing a put, when locking, and when releasing, the transaction has to grab the current cache entry, modify it, and put it back in the cache _as_an_atomic_operation_.

If you look at ReadWriteCache, atomicity is enforced by making each of these methods synchronized (a rare use of synchronized blocks in Hibernate). However, in a distributed environment you would need some other kind of method of synchronizing access from multiple servers.

I imagine you would implement this using something like the following:

* Create a new implementation of CacheConcurrencyStrategy - DistributedCacheConcurrencyStrategy
* DistributedCacheConcurrencyStrategy would delegate its functionality to ReadWriteCache, which in turn delegates to JCSCache (which must be a distributed JCS cache, so all servers see the same lock count + timestamps)
* implement a LockServer process that would sit somewhere on the network and hand out very-short-duration locks on a particular id
* DistributedCacheConcurrencyStrategy would use the LockServer to synchronize access to the JCS cache between the multiple servers

Locks would be expired on the same timescale as the cache timeout (which is assumed in all this to be >> the transaction timeout) to allow for misbehaving processes, server failures, etc.

Of course, any kind of distributed synchronization has a *very* major impact upon system scalability.
I think this would be a very *fun* kind of thing to implement and would be practical for some systems. It would also be a great demonstration of the flexibility of our approach, because clearly this is exactly the kind of thing that Hibernate was never meant to be good for! :)

Gavin
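To make the entry lifecycle described above concrete, here is a minimal Java sketch of the state and checks Gavin outlines. The class and method names are invented for illustration (this is not the actual cirrus.hibernate.ReadWriteCache code), and the LockServer interface at the end is just one possible shape for the proposed lock-server idea, not an existing API.

// Illustrative sketch only -- names are invented, this is not the actual
// cirrus.hibernate.ReadWriteCache code. It models the entry state described
// above: freshness, cache timestamp, lock count, and unlock timestamp.
public class CachedEntrySketch {

    private Object value;          // (0) the cached data item, meaningful only while fresh
    private long freshTimestamp;   // (1) the time the item was cached
    private int lockCount;         // (2) number of transactions currently updating the item
    private long unlockTimestamp;  // (3) the time all locks were released, for a stale item
    private boolean fresh;

    public CachedEntrySketch(Object value, long txTimestamp) {
        this.value = value;
        this.freshTimestamp = txTimestamp;
        this.fresh = true;
    }

    // A transaction may read the cached item only if it started AFTER the time
    // the item was cached; otherwise it must go to the database.
    public synchronized Object get(long txStartTime) {
        if (fresh && txStartTime > freshTimestamp) return value;
        return null;
    }

    // A transaction may overwrite a stale entry only if the entry is unlocked
    // and was unlocked BEFORE the transaction started. (If there is no entry
    // at all for the id, a put is always allowed -- that case lives in the
    // surrounding cache, not in the entry itself.)
    public synchronized boolean isPuttable(long txStartTime) {
        return !fresh && lockCount == 0 && unlockTimestamp < txStartTime;
    }

    // Every transaction locks the item before attempting to update it...
    public synchronized void lock() {
        fresh = false;
        lockCount++;
    }

    // ...and releases the lock after transaction completion. The entry stays
    // "stale" once the lock count hits zero, and records when that happened.
    public synchronized void release(long now) {
        lockCount--;
        if (lockCount == 0) {
            unlockTimestamp = now;
        }
    }
}

In ReadWriteCache the synchronized methods are enough because all sessions share one JVM; the distributed variant would have to make the same read-modify-write cycles atomic across servers, which is what the proposed LockServer would provide:

// Purely hypothetical interface for the proposed lock server -- nothing like
// this exists in Hibernate; it only illustrates "hand out very-short-duration
// locks on a particular id".
public interface LockServer {

    // Block until a short-lived lock on the given cache key is granted.
    void acquire(java.io.Serializable id) throws InterruptedException;

    // Release the lock explicitly; the server would also expire locks on the
    // same timescale as the cache timeout, to survive crashed clients.
    void release(java.io.Serializable id);
}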
Re: [Hibernate] Re: [Hibernate-devel] distributed caching
The way you described the cache behaviour you wanted ("I want each node to
have its own cache, but if one node writes to its db the data should be
invalidated in all caches") led me to think we were talking about a read-only
cache ;)
Now it's clear ;)
Thanks
Chris
- Original Message -
From: "Christoph Sturm" <[EMAIL PROTECTED]>
To: "Christian Meunier" <[EMAIL PROTECTED]>;
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, September 04, 2002 5:57 PM
Subject: Re: [Hibernate] Re: [Hibernate-devel] distributed caching
> Hey Christian!
>
> What I am trying to achieve is a distributed read-write cache. I didn't
> explicitly mention it in my post, but from the context (I was mentioning
> writes) it could be guessed. I think Gavin understood me better because we
> had a conversation about distributed read-write caches before.
>
> Sorry if I wasn't precise enough.
>
> regards
> christoph
[Hibernate-devel] New functionality in CVS
(1) I implemented Validatable. It would be cool if people could try this out and see if anything extra is required here.

(2) You may now use the declaration to specify properties of the persistent class itself as the id properties. The form of the declaration is: [...]

To load() an instance, simply instantiate an instance of the class, set its id properties and call:

session.load(object, object);

(3) After fielding so much user confusion over the semantics of cascade="all", I found a way to extend the functionality consistent with the current semantics and (hopefully) without any risk of breaking existing code. I will need some feedback about the details here, though:

Essentially I have scrapped the notion of a cascaded update(). The functionality performed by cascaded updates is now performed every time the session is flushed. This means that any lifecycle objects are automatically updated or saved as soon as they are detected (in a collection or many-to-one association). The choice of update() or save() is still made on the basis of whether or not the id property is null.

There are still some remaining wrinkles here:

* objects with primitive ids can't have null ids. Should we interpret 0 as null?
* the treatment of a transient object with a non-null id is different between save() and flush(). A cascaded save() would cause the object to be save()d, whereas if discovered at flush-time it would be update()d. Should we tweak the semantics of cascaded saves? (Such a change has a fairly large potential to break existing code.) Alternatively, should we reintroduce cascaded update as a concept and tweak the new behaviour of flush() to be consistent with save()? Or is the new behaviour actually fine?

Everyone should be aware that, even though this looks a lot like "persistence by reachability", it still isn't, because delete() remains a manual process. I have very many misgivings about introducing any kind of reference-counting scheme to solve the garbage collection problem...

Gavin

P.S. After this email, I will be receiving mail mainly at [EMAIL PROTECTED] (thanks to Anton + to Christoph, who also offered me the use of a mail server)
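To illustrate the load-by-id-properties call in (2), here is a short hypothetical example: the OrderLine class and its properties are invented, and only the session.load(object, object) call shape comes from the description above.

// Hypothetical persistent class whose own properties form the identifier;
// the class name and properties are invented for this example.
public class OrderLine implements java.io.Serializable {

    private String orderNumber;   // id property
    private int lineNumber;       // id property
    private String product;       // ordinary property, loaded from the database

    public void setOrderNumber(String orderNumber) { this.orderNumber = orderNumber; }
    public void setLineNumber(int lineNumber) { this.lineNumber = lineNumber; }
    public String getProduct() { return product; }
    // ... remaining getters/setters omitted
}

// With an open Session named "session":
OrderLine line = new OrderLine();
line.setOrderNumber("A-42");    // set the id properties on a fresh instance...
line.setLineNumber(3);
session.load(line, line);       // ...then pass it as both the object and the id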
[Hibernate] Very cool...
Just been playing with the brand new Hibernate XDoclet extension written by Sébastien Guimont. Very, very nice. I think there's a bit of work still to go before it's all fully documented and in CVS, but I'm really looking forward to using the finished product.

I'm most impressed by just how concise it is. A little tag in the class comment (approximately) like:

@hibernate.bean table-name="TABLE" discriminator-value="X"

and one for each property, something like:

@hibernate.field column-name="COL" column-length="20"

It's actually a really nice way to handle metadata. I know a lot of people have asked about this functionality, so many thanks to Sébastien for going off and doing all the work on this.
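For a sense of how those tags might sit in source, here is a hypothetical class marked up with them. The mail itself says the tag names are approximate, and the class, table and column names below are invented; the point is just that the mapping metadata lives in the Javadoc and is generated from there rather than being written by hand.

/**
 * Hypothetical persistent class using the (approximate) XDoclet tags
 * quoted above; class, table and column names are invented.
 *
 * @hibernate.bean table-name="CATS" discriminator-value="C"
 */
public class Cat {

    private String name;

    /**
     * @hibernate.field column-name="NAME" column-length="20"
     */
    public String getName() { return name; }

    public void setName(String name) { this.name = name; }
}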
