From: Brandon Goodin <[EMAIL PROTECTED]>

> You have to store every object you desire to manage in a location,
> along with every object nested within that object, identified by a key.
> So, you have to configure ahead of time what the key will be. For each
> record in your resultset you would need to see whether it already
> exists in your object layer by the key and update it accordingly.
> Also, within your transaction you get a version of that object to use
> within your thread's execution. If you make a change to that object,
> you have to be aware that each change to that object may result in an
> update to the global object and to the database in order to keep all
> the other objects in sync with it. An update to an object while
> another object is checked out will result in a stale data error of
> some sort (assuming you are maintaining version checking). I may be
> incorrect about each thread having its own copy of an object. But,
> regardless, the need to constantly "check" and update the object to
> keep items in the same application in sync shows that there is still
> potential for application collisions. If your database is the common
> share point of an enterprise system, I think it is the best dirty
> checker to date. It gives you what you want from the source.
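The stale data error Brandon mentions ("assuming you are maintaining version checking") is the classic optimistic-locking pattern: an update only succeeds if the row's version is still the one seen at checkout. A minimal sketch in plain Java; the class and method names are illustrative, not from any real ORM:

```java
import java.util.HashMap;
import java.util.Map;

public class VersionedStore {
    static class Record {
        String value;
        int version;
        Record(String value, int version) { this.value = value; this.version = version; }
    }

    private final Map<String, Record> rows = new HashMap<>();

    public void insert(String key, String value) {
        rows.put(key, new Record(value, 0));
    }

    // Returns the version seen at checkout time.
    public int checkout(String key) {
        return rows.get(key).version;
    }

    // Fails if another writer bumped the version in between:
    // the "stale data error" from the text.
    public boolean update(String key, String newValue, int expectedVersion) {
        Record r = rows.get(key);
        if (r.version != expectedVersion) {
            return false; // stale data: someone else updated the object
        }
        r.value = newValue;
        r.version++;
        return true;
    }

    public static void main(String[] args) {
        VersionedStore store = new VersionedStore();
        store.insert("order:1", "pending");

        int v1 = store.checkout("order:1"); // thread A checks out
        int v2 = store.checkout("order:1"); // thread B checks out the same row

        System.out.println(store.update("order:1", "shipped", v1));   // succeeds
        System.out.println(store.update("order:1", "cancelled", v2)); // stale: fails
    }
}
```

In SQL terms this corresponds to `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?` and treating an update count of zero as the stale-data case.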
My understanding is that ORM tools like Hibernate allow you to retrieve an object, do certain stuff to it and save it back. Before issuing actual SQL statements they check whether anything has changed and, if so, what. This is pretty easy when you just keep a copy of the retrieved object. Then only the required SQL statements have to be issued.

Seen this way, the db is still the entity that assures consistency. Once you read an object inside a transaction, you either get read locks if your db does pessimistic locking, or get a copy if the db has a multi-version approach like Oracle. With a decent isolation level this guarantees that the object you have read inside the transaction still holds exactly the same values you would get if you refreshed it from the db. Thus there is no conceptual problem implementing dirty checking, even if other applications access the same data. Am I wrong?

As I agreed, caching is a problem, but you could still use restricted lifetimes for those caches or even some sort of invalidation notification.

> My philosophy is to use the database. The database is my friend.

It's my friend as well, but also a busy and slow one at times ;)

Oliver
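P.S. The snapshot-and-compare dirty checking described above can be sketched in a few lines of plain Java: keep a copy of the state at load time, and at flush time emit SET clauses only for the columns that actually changed. Names here are illustrative; Hibernate's real internals differ:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DirtyChecker {
    private final Map<String, String> snapshot; // column -> value at load time

    public DirtyChecker(Map<String, String> loaded) {
        this.snapshot = new HashMap<>(loaded); // defensive copy taken on load
    }

    // Compare current state against the snapshot; return SET clauses
    // only for columns whose values differ from the loaded values.
    public List<String> dirtyColumns(Map<String, String> current) {
        List<String> changed = new ArrayList<>();
        for (Map.Entry<String, String> e : current.entrySet()) {
            if (!e.getValue().equals(snapshot.get(e.getKey()))) {
                changed.add(e.getKey() + " = '" + e.getValue() + "'");
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("name", "Alice");
        row.put("city", "Berlin");

        DirtyChecker checker = new DirtyChecker(row); // snapshot on load

        row.put("city", "Hamburg"); // the application changes one field

        List<String> dirty = checker.dirtyColumns(row);
        if (!dirty.isEmpty()) {
            System.out.println("UPDATE person SET " + String.join(", ", dirty));
        }
        // prints: UPDATE person SET city = 'Hamburg'
    }
}
```

Only the changed column reaches the UPDATE, which is exactly why "keep a copy of the retrieved object" is enough for this kind of dirty checking.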