On 19.02.2011 12:12, Ivan Zhakov wrote:
> On Sat, Feb 19, 2011 at 13:47, <stef...@apache.org> wrote:
>> Author: stefan2
>> Date: Sat Feb 19 10:47:16 2011
>> New Revision: 1072302
>>
>> URL: http://svn.apache.org/viewvc?rev=1072302&view=rev
>> Log:
>> Merge all changes (r1068724, r1068739) from the
>> integrate-cache-item-serialization branch.
>>
>> These patches introduce a very simple, limited purpose
>> serialization framework and use that to switch the
>> caching API from data copying to the more generally
>> applicable (de-)serialization mechanism.
>>
>> This makes all types of caches usable with all types
>> of cacheable items.

> Hi Stefan,
>
> Why is the serialize/deserialize mechanism better than a data-copying
> function?
This is not (de-)serialization in the sense of "write / parse some
ASCII text" but something more basic: "combine everything into a
single, movable binary chunk of data".

Basically, I simply concatenate all structs (memcpy) and replace
all pointers with local offsets. This requires some state management
during serialization but saves on allocations, which makes the
serialization part about as fast as standard copy code.

De-serialization is much faster, though: Only a single, aligned
allocation and memcpy followed by straightforward pointer
fix-up is necessary. The latter is equivalent to just setting the
pointer values during ordinary copying. Hence, we save greatly
on the allocation and copying part.
> For me, implementing a dup function is much easier and better
> for performance than serialize/deserialize. Is it possible to keep
> both mechanisms in the in-process cache?
It is possible but not beneficial. We would have to keep code for
redundant functionality around that is generally slower and limited
in applicability, since it works for the in-process cache only.

-- Stefan^2.
