I've been running tests with the Protobuf and Protostuff APIs to see
which is a better fit for our problem. We are going to have multiple
clients- which may hold multiple versions of objects- accessing
serialized object data in a cache. We have been looking into
Protobuf/Protostuff as a possible solution to prevent data loss.
Protobuf seems to be more resilient to data loss, since it retains
version information (which is a key part of our problem). Protostuff,
on the other hand, is optimized for simplicity and speed, and with its
runtime schemas, working it into our existing code will be very easy
(which is the other part). The downside is that Protostuff does not
retain version information, and we lose added fields when going from
older to newer versions of objects. If anyone has used the Protostuff
API and can offer some insight into this issue, please let me know.
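For reference, this is roughly the Protostuff round trip we've been testing - a minimal sketch, assuming the `io.protostuff` packages (older releases shipped under `com.dyuproject.protostuff`) and a hypothetical `User` class standing in for one of our cached objects:

```java
import io.protostuff.LinkedBuffer;
import io.protostuff.ProtostuffIOUtil;
import io.protostuff.Schema;
import io.protostuff.runtime.RuntimeSchema;

public class ProtostuffDemo {

    // Hypothetical cached object. Protostuff builds a schema for it at
    // runtime via reflection, so no .proto file or generated code is needed.
    static class User {
        String name;
        int id;
    }

    public static void main(String[] args) {
        Schema<User> schema = RuntimeSchema.getSchema(User.class);

        User in = new User();
        in.name = "alice";
        in.id = 7;

        // Serialize; the buffer is reusable but must be cleared after use.
        LinkedBuffer buffer = LinkedBuffer.allocate(512);
        byte[] bytes;
        try {
            bytes = ProtostuffIOUtil.toByteArray(in, schema, buffer);
        } finally {
            buffer.clear();
        }

        // Deserialize into a fresh instance. Fields in the wire data that
        // this version of User does not declare are dropped here - which is
        // exactly the version-skew loss described above.
        User out = schema.newMessage();
        ProtostuffIOUtil.mergeFrom(bytes, out, schema);
        System.out.println(out.name + " " + out.id);
    }
}
```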
So far, we have been abstracting the generated code from the clients
with wrapper/factory classes. This helps with ease of implementation,
as the factory classes transfer information from the POJOs to the
messages and builders. The datastore manager then handles the
de/serialization logic using whichever API we are testing. Is this a
fairly standard practice, or is there a more efficient way?
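To make the question concrete, here is a sketch of the shape of that abstraction. All class names (`Person`, `CacheSerializer`, `DatastoreManager`) are hypothetical, and plain Java serialization stands in for the Protobuf/Protostuff-backed implementation that would actually be swapped in:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical POJO standing in for one of our cached domain objects.
class Person implements Serializable {
    final String name;
    final int id;
    Person(String name, int id) { this.name = name; this.id = id; }
}

// The wrapper/factory boundary: clients hand the manager plain POJOs
// and never touch generated message/builder classes directly.
interface CacheSerializer<T> {
    byte[] serialize(T value) throws IOException;
    T deserialize(byte[] data) throws IOException;
}

// Stand-in implementation; a Protobuf- or Protostuff-backed serializer
// would implement the same interface and copy POJO fields into the
// corresponding builders.
class JavaSerializer implements CacheSerializer<Person> {
    public byte[] serialize(Person p) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(p);
        }
        return bos.toByteArray();
    }
    public Person deserialize(byte[] data) throws IOException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Person) ois.readObject();
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }
}

// Datastore manager: owns the de/serialization logic, so switching the
// API under test is a one-line change at construction time.
class DatastoreManager {
    private final CacheSerializer<Person> serializer;
    private final Map<String, byte[]> cache = new HashMap<>();
    DatastoreManager(CacheSerializer<Person> s) { this.serializer = s; }
    void put(String key, Person p) throws IOException {
        cache.put(key, serializer.serialize(p));
    }
    Person get(String key) throws IOException {
        return serializer.deserialize(cache.get(key));
    }
}

public class CacheDemo {
    public static void main(String[] args) throws IOException {
        DatastoreManager mgr = new DatastoreManager(new JavaSerializer());
        mgr.put("p1", new Person("alice", 7));
        Person out = mgr.get("p1");
        System.out.println(out.name + " " + out.id); // prints "alice 7"
    }
}
```

The clients only ever see the POJOs and the manager, which is what lets us test both APIs behind the same interface.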
Everything is being written in Java.
Thanks for any input,
You received this message because you are subscribed to the Google Groups
"Protocol Buffers" group.