The decode map will not hold the DataRows themselves, only the "legend"
needed to decode them, so it is a flyweight (as in the "flyweight pattern"). E.g.:
Artist Decode Map:
    "ARTIST_ID" -> 0
    "ARTIST_NAME" -> 1
    "DATE_OF_BIRTH" -> 2

Artist DataRow:
    [1, 'Dali', '19...']
    decodeMap // pointer to the shared decode map
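The scheme above could be sketched roughly like this (class and method names
are hypothetical illustrations, not the actual Cayenne API):

```java
import java.util.HashMap;
import java.util.Map;

// Shared per-entity "legend": column name -> position in the values array.
// One instance is shared by all DataRows of an entity (flyweight pattern).
class DecodeMap {
    private final Map<String, Integer> positions = new HashMap<>();

    DecodeMap(String... columns) {
        for (int i = 0; i < columns.length; i++) {
            positions.put(columns[i], i);
        }
    }

    int positionOf(String column) {
        Integer p = positions.get(column);
        if (p == null) {
            throw new IllegalArgumentException("Unknown column: " + column);
        }
        return p;
    }
}

// A row stores only an Object[] of values plus a pointer to the shared
// legend, instead of repeating the key set in a full Map per row.
class CompactDataRow {
    private final DecodeMap decodeMap;
    private final Object[] values;

    CompactDataRow(DecodeMap decodeMap, Object[] values) {
        this.decodeMap = decodeMap;
        this.values = values;
    }

    Object get(String column) {
        return values[decodeMap.positionOf(column)];
    }
}

public class FlyweightDemo {
    public static void main(String[] args) {
        DecodeMap artistMap =
                new DecodeMap("ARTIST_ID", "ARTIST_NAME", "DATE_OF_BIRTH");
        CompactDataRow dali =
                new CompactDataRow(artistMap, new Object[] {1, "Dali", "19..."});
        System.out.println(dali.get("ARTIST_NAME")); // prints "Dali"
    }
}
```

The savings come from replacing per-row HashMap entries (key reference,
hash, bucket node per column) with a single Object[] slot per column.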
Andrus
On Mar 7, 2008, at 12:43 PM, Aristedes Maniatis wrote:
On 06/03/2008, at 12:44 AM, Andrus Adamchik wrote:
This got me thinking about DataRow memory/creation efficiency
throughout the framework. We are wasting lots of space on repeating
information. Essentially a DataRow for each entity has a well-defined
set of keys, so ideally we can normalize the storage of
DataRows internally, saving an Object[] of values with a reference
to a shared "decode map", one per entity. Such a shared map would
have DbAttribute names for the keys and array positions for the
values. What we'll lose is the ability to serialize DataRows (e.g.
for remote notifications), but maybe we can work around it somehow.
How does this interact with the DataDomain snapshot cache? You've
explained that this cache is a Map<ObjectId, DataRow>, but it has an
LRU expiry policy. What happens to a DataRow that is expired from
the DataDomain but still tied to the 'decode map'? Would it be
possible to merge the two concepts (snapshot cache and decode map),
given a more sophisticated expiry policy?
The big benefit to reducing memory usage is that users will be able
to create larger caches and improve performance.
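One way to picture the cache question (a minimal sketch, assuming a plain
LinkedHashMap-based LRU cache; this is not Cayenne's actual snapshot cache
implementation): rows can be evicted independently, while the per-entity
decode map is owned by the entity and is unaffected by eviction.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU snapshot cache: evicts the least-recently-accessed entry
// once capacity is exceeded. Evicting a row frees only that row's
// Object[] of values; the shared decode map ("legend") lives with the
// entity and never needs to participate in the expiry policy.
class SnapshotCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SnapshotCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true gives LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Since the legend holds no row data and is negligible in size, the two
concepts do not strictly need merging: rows expire, the legend persists.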
Ari
-------------------------->
ish
http://www.ish.com.au
Level 1, 30 Wilson Street Newtown 2042 Australia
phone +61 2 9550 5001 fax +61 2 9550 4001
GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A