Should I see smaller serialization sizes going from 2.0.x to 2.3? I
was hoping to, and I compiled a sample message to compare
serialization sizes between versions. The size was the same. The
sample message has a number of different data types.
I notice in the changelog that string serialization …
No revision of protobufs is ever likely to change the serialized size of
existing messages, because doing so would presumably break backwards
compatibility. A revision might introduce a new encoding mechanism that is
more compact (like packed encoding did), but this is unusual, since there is
not …
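The stability of the wire format can be sketched in pure Python. This is a minimal, hedged illustration (the field numbers and values are made up): varint encoding is fixed by the spec, so the same message always serializes to the same bytes, and packed encoding only shrinks things by dropping per-element tags.

```python
# Minimal sketch of protobuf varint encoding and the packed-vs-unpacked
# size difference. Field numbers and values here are illustrative only.

def encode_varint(value):
    """Encode a non-negative int as a base-128 varint (7 bits per byte)."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_field_varint(field_number, value):
    """One tag (field number, wire type 0 = varint) plus the value."""
    tag = (field_number << 3) | 0
    return encode_varint(tag) + encode_varint(value)

# Unpacked repeated field: every element repeats the tag byte.
values = [3, 270, 86942]
unpacked = b"".join(encode_field_varint(4, v) for v in values)

# Packed repeated field (wire type 2): one tag, one length, raw varints.
payload = b"".join(encode_varint(v) for v in values)
packed = encode_varint((4 << 3) | 2) + encode_varint(len(payload)) + payload
```

For these three values the unpacked form is 9 bytes and the packed form 8; the saving grows with the number of elements, but the encoding of each value itself never changes between releases.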
On Tue, Apr 27, 2010 at 2:04 PM, Kenton Varda ken...@google.com wrote:
Note that protobufs only encode structure. They do not do any compression.
You should apply compression separately on top of your data if you need it.
Note that this will add considerable CPU cost, so you must decide if …
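Applying compression separately on top of serialized data, as suggested above, can be done with the standard-library `zlib` module. A minimal sketch (the payload bytes are a stand-in for real serialized protobuf output):

```python
import zlib

# Stand-in for a serialized protobuf payload with repetitive content.
serialized = b"user:alice;role:admin;" * 50

# Compress on top of the serialized bytes; level trades CPU for size.
compressed = zlib.compress(serialized, level=6)
restored = zlib.decompress(compressed)

assert restored == serialized
ratio = len(compressed) / len(serialized)
```

Round-tripping through `compress`/`decompress` leaves the serialized bytes untouched, so this layers cleanly on top of any encoding; the `level` argument (1 to 9) is where the CPU-versus-size trade-off mentioned above is made.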
On Apr 27, 2010, at 15:04, Kenton Varda wrote:
The optimizations mentioned in the changelog are CPU speed or
memory usage optimizations, not encoded size optimizations.
Totally unrelated, but this reminds me that I think there may still be
one optimization possible with Java protocol …
On Tue, Apr 27, 2010 at 12:38 PM, sheila miguez she...@pobox.com wrote:
I wasn't expecting a good compression
ratio for protobuf messages since they are decently packed already,
but was happy to see that result.
Yep, Protobufs are a compact encoding, but compression can still work well …
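Whether compression pays off depends on how much redundancy survives the encoding. A rough sketch with synthetic data: a payload full of repeated length-delimited fields compresses dramatically, while high-entropy bytes (already-dense data) gain nothing and even pick up a little overhead.

```python
import os
import zlib

# Many identical length-delimited fields, e.g. a repeated string "alice".
repetitive = b"\x0a\x05alice" * 200
# High-entropy bytes of the same length: nothing left to squeeze.
dense = os.urandom(len(repetitive))

rep_ratio = len(zlib.compress(repetitive)) / len(repetitive)
dense_ratio = len(zlib.compress(dense)) / len(dense)
```

This is why compression on top of protobufs is worthwhile for messages dominated by repeated strings, but not for payloads that are mostly numeric varints or already-compressed blobs.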
In the case of repeated strings etc. (excluding the enum case), I've been
toying with whether something is possible by associating certain objects /
values with unique identifiers on the wire. Potentially this would also
allow graph (rather than tree) serialization.
This is obviously well into the hazy …
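The idea floated above can be sketched as a simple string-interning table. Everything below is hypothetical framing, not part of protobuf: each distinct value is written once and later occurrences are replaced by a small integer id, which is also what would let two fields reference the same object (graph rather than tree serialization).

```python
# Hypothetical sketch: replace repeated values with ids into a one-time
# table. Names and framing are illustrative, not a protobuf feature.

def encode_with_table(strings):
    """Return (table, ids): each distinct string once, plus id per slot."""
    table = {}
    ids = []
    for s in strings:
        if s not in table:
            table[s] = len(table)
        ids.append(table[s])
    ordered = sorted(table, key=table.get)  # id -> string, in first-seen order
    return ordered, ids

def decode_with_table(ordered, ids):
    """Rebuild the original list by looking each id up in the table."""
    return [ordered[i] for i in ids]

names = ["alice", "bob", "alice", "alice", "bob"]
table, ids = encode_with_table(names)
assert decode_with_table(table, ids) == names
```

On the wire, the table entries carry the string bytes once and each id is typically a one-byte varint, so the saving scales with how often values repeat; the same indirection is what would permit shared (graph) references.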