Brian McCallister wrote:
Speaking of second level cache, we may want to look into making the second level cache backing store pluggable. If we move to storing identity keyed hashmaps with serializables in them (any jdbc type) then we can push that out to coherence, memcached, ehcache, whirlycache, etc -- allowing for much more tunable 2nd level caching, and not having to implement it ourselves.
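To illustrate the idea (the interface name and methods below are purely hypothetical and not part of OJB): such a backing store only needs to accept an Identity key and a flat map of serializable values, so adapters for Coherence, memcached, EHCache, whirlycache, etc. could all be written against one small contract, something like:

import java.util.Map;

import org.apache.ojb.broker.Identity;

// Hypothetical contract for a pluggable second-level backing store
// (illustration only, not an existing OJB interface). The cached value
// is a flat map of field-name -> serializable (JDBC-type) value, which
// any external cache (Coherence, memcached, EHCache, ...) can store.
public interface SecondLevelStore
{
    void put(Identity oid, Map fieldValues);

    Map get(Identity oid);   // returns null on a cache miss

    void remove(Identity oid);

    void clear();
}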
This is already how the TLCacheImpl works; you can specify the second-level cache in the object-cache tag as a custom attribute:
<object-cache class="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl">
    <!-- for the meaning of the attributes, please see the docs section on caching -->
    <!-- common attributes -->
    <attribute attribute-name="cacheExcludes" attribute-value=""/>
    <!-- ObjectCacheTwoLevelImpl attributes -->
    <attribute attribute-name="applicationCache" attribute-value="org.apache.ojb.broker.cache.ObjectCacheDefaultImpl"/>
    <attribute attribute-name="copyStrategy" attribute-value="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl$CopyStrategyImpl"/>
    <!-- ObjectCacheDefaultImpl attributes -->
    <attribute attribute-name="timeout" attribute-value="900"/>
    <attribute attribute-name="autoSync" attribute-value="true"/>
    <attribute attribute-name="cachingKeyType" attribute-value="0"/>
    <attribute attribute-name="useSoftReferences" attribute-value="true"/>
</object-cache>
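For reference, a minimal sketch of what a custom application-level cache plugged in via the applicationCache attribute could look like. It assumes OJB's ObjectCache interface (cache/lookup/remove/clear) and the (PersistenceBroker, Properties) constructor used by the bundled implementations; please verify both against the OJB release in use. A real adapter would delegate to EHCache, memcached, etc. instead of a plain map.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.ojb.broker.Identity;
import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.cache.ObjectCache;

// Sketch of a custom second-level/application cache. Replace the map
// with calls to an external store (EHCache, memcached, ...) as needed.
public class MapBackedObjectCache implements ObjectCache
{
    private final Map store = Collections.synchronizedMap(new HashMap());

    // OJB instantiates cache implementations reflectively; the bundled
    // caches use a (PersistenceBroker, Properties) constructor.
    public MapBackedObjectCache(PersistenceBroker broker, Properties prop)
    {
    }

    public void cache(Identity oid, Object obj)
    {
        store.put(oid, obj);
    }

    public Object lookup(Identity oid)
    {
        return store.get(oid);
    }

    public void remove(Identity oid)
    {
        store.remove(oid);
    }

    public void clear()
    {
        store.clear();
    }
}

A class like this could then be referenced from the descriptor above, e.g. attribute-value="my.pkg.MapBackedObjectCache" on the applicationCache attribute (my.pkg.MapBackedObjectCache is just a placeholder name).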
Armin
-brian
On Mar 11, 2005, at 9:44 AM, Armin Waibel wrote:
Brian McCallister wrote:
On Mar 10, 2005, at 3:57 PM, Armin Waibel wrote:
The basic problem is how we can make an image or copy of a persistent object; in other words, how do we copy the object's fields?
At the OJB java-field-type level, a field of a persistent class could be any kind of class, because the user can declare a field-conversion in the field-descriptor; thus we don't know the field type in the persistent object.
So it's not possible to image/copy field values at this level, because the fields don't have to implement Serializable or Cloneable.
Backwards-incompatible option: provide a copy function on field conversions. Provide an AbstractFieldConversion which keeps a flat, field-wise copy of the custom object but can be replaced by a more intelligent version. I like this option less than the next...
I had the same in mind (it could be an option for 1.1). Additionally, we should add an equals(obj1, obj2) method to FieldConversion to compare two fields at the java-field level; in AbstractFieldConversion we can do the field-conversion and use the equals(...) of the assigned FieldType.
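A rough sketch of such an extension, assuming FieldConversion's javaToSql/sqlToJava methods as in OJB; the copy/equals methods and the abstract base class below are hypothetical and not part of the existing API.

import org.apache.ojb.broker.accesslayer.conversions.ConversionException;
import org.apache.ojb.broker.accesslayer.conversions.FieldConversion;

// Hypothetical extension of FieldConversion as discussed above.
public interface FieldConversionWithCopy extends FieldConversion
{
    // return a copy of the given value at java-field level
    Object copy(Object javaValue) throws ConversionException;

    // compare two values at java-field level
    boolean equals(Object value1, Object value2) throws ConversionException;
}

// Naive default implementation: copy via a java -> sql -> java
// round-trip, compare via the converted (sql-level) values.
abstract class AbstractFieldConversion implements FieldConversionWithCopy
{
    public Object copy(Object javaValue) throws ConversionException
    {
        return sqlToJava(javaToSql(javaValue));
    }

    public boolean equals(Object value1, Object value2) throws ConversionException
    {
        Object sql1 = javaToSql(value1);
        Object sql2 = javaToSql(value2);
        return sql1 == null ? sql2 == null : sql1.equals(sql2);
    }
}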
If we convert the fields to the sql-field-type using the javaToSql field-conversion, we know the type of each field (a possible performance issue when using complex field-conversions?), because it is declared in the field-descriptor and we use the JDBC-type / Java-type mapping of the JDBC specification:
VARCHAR --> String
VARBINARY --> byte[]
DATE --> Date
Caching the jdbc-type values, and going ahead and doing the conversion up front, makes the most sense to me. I don't think the second-level cache should keep entity instances around, just the sql values. Running them through the conversion process is still much cheaper than hitting the db.
Great note, Brian! Agreed, this makes sense, and it exposes a bug in the current TLCacheImpl. Currently the second-level cache caches "flat" objects, but it would indeed be better to use a HashMap and cache the sql-type values by field name.
This will also prevent data corruption if someone uses different metadata mappings (via the "per thread mode" of the MetadataManager) for the same class with different field-conversions.
Armin
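A rough sketch of the HashMap-of-sql-values approach discussed above; the class below and the way conversions are passed in are illustrative only, not the actual ObjectCacheTwoLevelImpl/CopyStrategy code, and every cached field is assumed to have a FieldConversion (in OJB the default one is a pass-through). On store, each field value is run through its javaToSql conversion and kept by field name; on lookup, the map is replayed through sqlToJava to rebuild a fresh instance.

import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.ojb.broker.accesslayer.conversions.FieldConversion;

// Illustrative only: caches the javaToSql-converted field values in a
// plain HashMap keyed by field name, instead of caching the entity
// instance itself. Rebuilding the object replays sqlToJava per field.
public class SqlValueImage
{
    // field name -> sql-level (JDBC-type) value
    private final Map sqlValues = new HashMap();

    // Build the flat image; 'conversions' maps field name -> FieldConversion.
    public SqlValueImage(Object entity, Map conversions) throws Exception
    {
        for (Iterator it = conversions.entrySet().iterator(); it.hasNext();)
        {
            Map.Entry e = (Map.Entry) it.next();
            String fieldName = (String) e.getKey();
            FieldConversion conv = (FieldConversion) e.getValue();

            Field f = entity.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);
            // store the converted value, whose type is known from the JDBC mapping
            sqlValues.put(fieldName, conv.javaToSql(f.get(entity)));
        }
    }

    // Materialize a fresh instance from the cached sql values.
    public Object materialize(Class target, Map conversions) throws Exception
    {
        Object obj = target.newInstance();
        for (Iterator it = sqlValues.entrySet().iterator(); it.hasNext();)
        {
            Map.Entry e = (Map.Entry) it.next();
            String fieldName = (String) e.getKey();
            FieldConversion conv = (FieldConversion) conversions.get(fieldName);

            Field f = target.getDeclaredField(fieldName);
            f.setAccessible(true);
            f.set(obj, conv.sqlToJava(e.getValue()));
        }
        return obj;
    }
}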
