Hi Danilo,

IMO loading all the objects in order to remove them from the cache is the only way, because the cache does not know which objects will be deleted by the query.
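
Just to illustrate what I mean, the per-object invalidation would look roughly like this (only a sketch; it assumes your OJB version exposes PersistenceBroker.removeFromCache(Object), please double-check):

    // load the candidates first, because only then do we know their identities
    Iterator it = broker.getIteratorByQuery( query );   // extra SELECT
    while ( it.hasNext() ) {
        broker.removeFromCache( it.next() );   // evict each matching object from the cache
    }
    broker.deleteByQuery( query );   // then issue the single DELETE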

The UPDATE statements in the 3rd block will have no effect on the database; this is, from my point of view, a rare but potentially dangerous BUG!!!

Could you please be more specific about this one?



Jakob


Danilo Tommasina wrote:

Hello,

I noticed an odd behaviour when using *broker.deleteByQuery*. This issue seems to be 
known (see the developer mailing list, msg 652, "[VOTE] deleteByQuery leaves Cache in an 
inconsistent state"), however there is still no info in the javadoc, nor does a solution 
seem to be available.
Have a look at this code:

       PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
       //Insert entries
       try {
           broker.beginTransaction();
           UserAttrs ua;
           //Columns:        userid, attrName, attrValue
           //Primary Key:       x  ,    x
           ua= new UserAttrs( "id1", "attr1", "test1" );
           broker.store( ua );
           ua= new UserAttrs( "id1", "attr2", "test2" );
           broker.store( ua );
           broker.commitTransaction();
       } catch (Throwable t) {
           broker.abortTransaction();
           t.printStackTrace();
       }

       //Delete all entries with userID = "id1"
       try {
           UserAttrs ua= new UserAttrs();
           ua.setUserid( "id1" );
           Query q = new QueryByCriteria(ua);
           broker.beginTransaction();
           broker.deleteByQuery( q );
           broker.commitTransaction();
       } catch (Throwable t) {
           broker.abortTransaction();
           t.printStackTrace();
       }

       //Re-Insert entries
       try {
           broker.beginTransaction();
           UserAttrs ua;
           //Columns:        userid, attrName, attrValue
           //Primary Key:       x  ,    x
           ua= new UserAttrs( "id1", "attr1", "test1" );
           broker.store( ua );
           ua= new UserAttrs( "id1", "attr2", "test2" );
           broker.store( ua );
           broker.commitTransaction();
       } catch (Throwable t) {
           broker.abortTransaction();
           t.printStackTrace();
       }

On the first execution this generates the following SQL:

SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1' AND ATTR_NAME = 'attr1'
INSERT INTO USER_ATTRS (USERID,ATTR_NAME,ATTR_VALUE) VALUES ('id1','attr1','test1')
SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1' AND ATTR_NAME = 'attr2'
INSERT INTO USER_ATTRS (USERID,ATTR_NAME,ATTR_VALUE) VALUES ('id1','attr2','test2')
-> commit

SELECT A0.ATTR_NAME,A0.USERID,A0.ATTR_VALUE FROM USER_ATTRS A0 WHERE A0.USERID = 'id1'
DELETE FROM USER_ATTRS WHERE USERID = 'id1'
-> commit

SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1' AND ATTR_NAME = 'attr1'
UPDATE USER_ATTRS SET ATTR_VALUE='test1' WHERE USERID = 'id1' AND ATTR_NAME = 'attr1'
SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1' AND ATTR_NAME = 'attr2'
UPDATE USER_ATTRS SET ATTR_VALUE='test2' WHERE USERID = 'id1' AND ATTR_NAME = 'attr2'
-> commit

The UPDATE statements in the 3rd block will have no effect on the database; this is, from my point of view, a rare but potentially dangerous BUG!!!
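
A quick way to see the effect (just a sketch, reusing the UserAttrs example above and assuming PersistenceBroker.getCount(Query) is available in your OJB version):

    UserAttrs example = new UserAttrs();
    example.setUserid( "id1" );
    Query check = new QueryByCriteria( example );
    // the UPDATEs above matched no rows, so nothing was actually written back
    System.out.println( "rows for id1: " + broker.getCount( check ) );   // prints 0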

There is a simple workaround until the code is fixed: simply call 
broker.clearCache() after the deleteByQuery transaction has been committed.
However, this is a performance killer if you call deleteByQuery very often.
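
In code that simply means (a sketch, based on the delete block above):

    try {
        broker.beginTransaction();
        broker.deleteByQuery( q );
        broker.commitTransaction();
    } catch (Throwable t) {
        broker.abortTransaction();
        t.printStackTrace();
    }
    broker.clearCache();   // drop all cached objects so no stale copies survive the delete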
I adopted the following solution, but since I am an OJB newbie I'd like to know whether you see 
a better solution that does not require re-implementing the ObjectCacheImpl class.
I extended PersistenceBrokerImpl with a new class, overrode the deleteByQuery method, 
and then declared this new class in OJB.properties in the PersistenceBrokerClass property.
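For example (the package prefix is just a placeholder; use whatever package you put the class in):

    # OJB.properties
    PersistenceBrokerClass=my.pkg.SafeDeleteByQueryPBImpl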
Here is the code:

// Note: the imports below assume the standard OJB 1.0 package layout.
import java.util.Iterator;

import org.apache.ojb.broker.Identity;
import org.apache.ojb.broker.PBKey;
import org.apache.ojb.broker.PersistenceBrokerException;
import org.apache.ojb.broker.core.PersistenceBrokerFactoryIF;
import org.apache.ojb.broker.core.PersistenceBrokerImpl;
import org.apache.ojb.broker.query.Query;

public class SafeDeleteByQueryPBImpl extends PersistenceBrokerImpl {

    protected SafeDeleteByQueryPBImpl() {
        super();
    }

    public SafeDeleteByQueryPBImpl(PBKey key, PersistenceBrokerFactoryIF pbf) {
        super( key, pbf );
    }

    /**
     * Bug workaround:
     * added code for clearing matching objects from the cache when executing
     * PersistenceBrokerImpl.deleteByQuery(query).
     * @see org.apache.ojb.broker.PersistenceBroker#deleteByQuery(Query)
     */
    public void deleteByQuery(Query query) throws PersistenceBrokerException {
        //Clear cached objects: list all objects affected by the query
        Iterator it = super.getIteratorByQuery( query );
        while ( it.hasNext() ) {
            //Remove matching objects from the cache
            super.objectCache.remove( new Identity( it.next(), this ) );
        }
        //Delegate deleteByQuery to the super class
        super.deleteByQuery( query );
    }
}

Calling the method causes an extra SELECT statement to be executed and all matching 
objects to be loaded into memory; however, this should still be faster than executing 
single deletes or clearing the whole cache each time.
Is there a better solution to that?
Thanks, and sorry for the long message.
Danilo Tommasina
