Even if the stream of bytes has no semantic meaning without
the .proto, its "format" is still protobuf binary, so the MIME type
makes some sense even if it is not sufficient.
Putting a ref to the appropriate .proto in the HTTP headers REST-style
seems sensible - loosely similar to declaring a schema
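For illustration, such a response might carry both the MIME type and a schema reference (the `X-Protobuf-Schema` header name and the URL are hypothetical, not any standard - a sketch of the idea, not an established convention):

```
HTTP/1.1 200 OK
Content-Type: application/x-protobuf
X-Protobuf-Schema: https://example.com/schemas/person.proto
Content-Length: 53
```

The body would then be the raw protobuf binary, and a client that fetches the referenced .proto could decode it fully.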
I have a question regarding the future direction of protocol buffers.
Is Google planning on adding features or changing the encoding of data
types in any way that would break backwards compatibility? I've read
through the posts and it appears that the developers will try to
maintain compatibility a
We will absolutely maintain backwards compatibility of the wire format in
future versions. A version of protocol buffers that wasn't backwards
compatible would be thoroughly useless.
However, our idea of "compatibility" means that newer versions of the code
can successfully parse messages produced by older versions.
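The reason this works is visible in the wire format itself: every field is prefixed with a tag (field number plus wire type), so a parser that meets a field number it doesn't know can still skip over it. A minimal sketch of that mechanism, hand-encoding varints with no protobuf library (the field numbers and values are made up for the demo):

```java
import java.io.ByteArrayOutputStream;

// Sketch: why adding a field does not break an old parser.
// We hand-encode field 1 (known to the "old" schema) and field 2
// (added by a "new" schema), both as varints; the old-style parser
// recognizes field 1 and silently skips the unknown field 2.
public class WireCompatDemo {
    // Write a varint (little-endian base-128, as in the protobuf encoding).
    static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    // A "new schema" message: field 1 = 150, plus new field 2 = 42.
    static byte[] encodeNewMessage() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, (1 << 3) | 0); // tag: field 1, wire type 0 (varint)
        writeVarint(out, 150);
        writeVarint(out, (2 << 3) | 0); // tag: field 2, wire type 0 (varint)
        writeVarint(out, 42);
        return out.toByteArray();
    }

    // Read a varint starting at idx[0], advancing idx[0] past it.
    static long readVarint(byte[] b, int[] idx) {
        long result = 0;
        int shift = 0;
        while (true) {
            int x = b[idx[0]++] & 0xFF;
            result |= (long) (x & 0x7F) << shift;
            if ((x & 0x80) == 0) return result;
            shift += 7;
        }
    }

    // "Old schema" parser: only knows field 1; unknown varint fields
    // are decoded and discarded (this sketch handles wire type 0 only).
    static long parseOldSchema(byte[] msg) {
        int[] i = {0};
        long field1 = -1;
        while (i[0] < msg.length) {
            long tag = readVarint(msg, i);
            int fieldNum = (int) (tag >>> 3);
            long value = readVarint(msg, i);
            if (fieldNum == 1) field1 = value; // known field
            // any other field number is simply skipped
        }
        return field1;
    }

    public static void main(String[] args) {
        System.out.println(parseOldSchema(encodeNewMessage())); // prints 150
    }
}
```

The real library generalizes this: unknown fields of every wire type are skipped (and in C++/Java even preserved for re-serialization), which is exactly what makes adding fields a compatible change.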
I've made two similar tests in Java, comparing Thrift and Protocol
Buffers, and here is the result.
Without optimize_for = SPEED
Thrift loop:      10,000,000
Get object:       14,394 msec
Serdes Thrift:    37,671 msec
Objs per second:  265,456
Total bytes:      1,130,000,000
ProtoBuf loop:    10,000,00
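For anyone reproducing this: the "optimize_for = SPEED" being compared is a file-level option in the .proto itself, which tells protoc to generate code tuned for runtime speed rather than generated-code size (this is the actual protobuf option; the surrounding file is just a fragment for illustration):

```proto
// At the top level of the .proto file.
// SPEED generates fast specialized code; the alternatives are
// CODE_SIZE (smaller, reflection-based) and LITE_RUNTIME.
option optimize_for = SPEED;
```

Rerunning the benchmark with and without this line is what produces the two sets of numbers.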
Thanks for getting back to me on this. It's been a while, but I
believe I've seen several posts that use something akin to the
following:
message A {
  ...
}
message B {
  ...
}
message wrapper {
  required fixed32 size = 1;
  required fixed32 type = 2;
  optional A a = 3;
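The size/type prefix in that wrapper is essentially manual framing: since protobuf messages are not self-delimiting, each one on a stream is preceded by its length and a type tag so the reader knows how much to consume and which message to parse. The same idea can be sketched with plain streams, no protobuf library (the type constants and frame layout here are illustrative, not the wrapper's exact bytes - DataOutputStream writes big-endian ints, whereas protobuf fixed32 is little-endian):

```java
import java.io.*;

// Sketch of size+type framing for a stream of mixed message types:
// each frame is a 4-byte size, a 4-byte type tag, then `size` payload
// bytes. The reader dispatches on the type tag.
public class FrameDemo {
    static final int TYPE_A = 1, TYPE_B = 2; // hypothetical type tags

    static void writeFrame(DataOutputStream out, int type, byte[] payload)
            throws IOException {
        out.writeInt(payload.length); // size prefix
        out.writeInt(type);           // type tag
        out.write(payload);           // the serialized message would go here
    }

    // Reads one frame; returns "type:payload" as a string for the demo.
    static String readFrame(DataInputStream in) throws IOException {
        int size = in.readInt();
        int type = in.readInt();
        byte[] payload = new byte[size];
        in.readFully(payload); // consume exactly `size` bytes
        return type + ":" + new String(payload, "UTF-8");
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        writeFrame(out, TYPE_A, "hello".getBytes("UTF-8"));
        writeFrame(out, TYPE_B, "world".getBytes("UTF-8"));

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readFrame(in)); // 1:hello
        System.out.println(readFrame(in)); // 2:world
    }
}
```

In the real wrapper-message version, the payload bytes would be the serialized A or B, and the reader would call the matching generated parser based on the type field.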
Attached is a patch which changes Mutex to handle the initialization
ordering problem where Lock() can be called before the constructor
has run.
On Wed, Apr 15, 2009 at 1:41 PM, Wink Saville wrote:
> Fair enough on the Mutex, I'll try to get a new patch to you soon,
> but if you get there first,