There are several important metrics for choosing an RPC framework, including:
performance, multi-language support, version compatibility, usability, and
product maturity.
PB does well in almost all of these aspects, so I think that may be why the
community chose it.
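
As one illustration of the version-compatibility point: because protobuf
identifies fields by tag number on the wire, a newer .proto can add fields
without breaking older readers. A minimal sketch (the message and field
names here are made up):

    // v1
    message Task {
      required string id = 1;
    }

    // v2 -- an old reader simply skips the unknown field 2
    message Task {
      required string id   = 1;
      optional int32 retry = 2;
    }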

Thanks,

Junping

----- Original Message -----
From: "Ted Dunning" <[email protected]>
To: [email protected]
Sent: Wednesday, January 9, 2013 3:27:36 PM
Subject: Re: Question about protocol buffer RPC

Avro and Thrift both work well for RPC implementations.

I have lately been using protobufs with protobuf-rpc-pro and have been very
happy with it.  It has much of the debuggability of Thrift, but with
protobufs.

See http://code.google.com/p/protobuf-rpc-pro/
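
To make the "framework vs. implementation" distinction from the question
quoted below concrete: declaring a service in a .proto file only gives you
generated abstract stubs; the actual transport (an RpcChannel) has to come
from somewhere else, e.g. protobuf-rpc-pro or an RPC layer you build
yourself. A rough sketch (the service and message names are made up):

    // echo.proto (proto2)
    option java_generic_services = true;

    message EchoRequest  { required string payload = 1; }
    message EchoResponse { required string payload = 1; }

    // protoc generates an abstract EchoService plus a client-side Stub
    // that must be wired to a caller-supplied RpcChannel; no socket or
    // wire-transport code is emitted.
    service EchoService {
      rpc Echo (EchoRequest) returns (EchoResponse);
    }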



On Tue, Jan 8, 2013 at 8:44 PM, Hangjun Ye <[email protected]> wrote:

> Our project is facing a similar problem: choosing an RPC framework.
> So I want to know whether Avro/Thrift have any drawbacks that led Hadoop
> not to use them.
>
> I would appreciate it if anyone could share insights on this!
>
>
> 2013/1/9 Hangjun Ye <[email protected]>
>
> > Hi there,
> >
> > It looks like Hadoop is using Google's protocol buffers for its RPC
> > (correct me if I'm wrong).
> >
> > Avro/Thrift do the same thing, support more languages, and ship with a
> > complete RPC implementation. It seems Google's protocol buffer RPC only
> > defines a framework but doesn't include an implementation on top of a
> > concrete network transport.
> >
> > So I'm just curious about the rationale behind this.
> >
> > --
> > Hangjun Ye
> >
>
>
>
> --
> Hangjun Ye
>
