I was checking the RPC code, and it seems that compression / decompression is always managed by the RpcServer. Am I correct?

So we can't skip the decompression step (for example, if we want to send the compressed data directly to the user)?

Simon

On Thu, Jun 6, 2013 at 3:29 AM, Stack <[email protected]> wrote:

> On Wed, Jun 5, 2013 at 4:21 PM, Ted Yu <[email protected]> wrote:
>
> > For thrift, there is already such support.
> >
> > Take a look at (0.94 codebase):
> > src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
> >
> > * HRegionThriftServer - this class starts up a Thrift server in the same
> > * JVM where the RegionServer is running. It inherits most of the
> > * functionality from the standard ThriftServer.
>
> The embedded thrift server is abandoned functionality and should be
> deprecated [1].
>
> The AVRO server has been removed because it was unmaintained.
>
> Regarding REST, we already run a webserver in each regionserver, so it
> would make sense to attach the REST context to the already-running http
> server context. If you could make that happen, Simon, I for one would be
> interested in getting it in.
>
> vertx.io looks nice. Our server currently has a bunch of what it calls
> 'Shared Data' that we'd have to make fit the verticles (doesn't look too
> easy -- smile).
>
> Thanks,
> St.Ack
>
> 1. http://search-hadoop.com/m/GAFt6B9CRZ/karthik+thrift&subj=Re+Thrift2+interface
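To make the pass-through idea concrete, here is a minimal sketch written against the plain servlet API rather than HBase internals: it streams bytes that are already gzip-compressed straight to the client and merely declares the encoding, so the server never inflates them. The loadCompressedValue() helper is invented for the sketch, and nothing below reflects what RpcServer actually does today; that is exactly the step the question asks about skipping.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: serves already-gzipped bytes as-is and tells the
// client how they are encoded, so decompression happens client-side.
public class PassThroughServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    resp.setContentType("application/octet-stream");
    resp.setHeader("Content-Encoding", "gzip");
    try (InputStream in = loadCompressedValue(req.getPathInfo());
         OutputStream out = resp.getOutputStream()) {
      byte[] buf = new byte[8192];
      for (int n; (n = in.read(buf)) != -1; ) {
        out.write(buf, 0, n);  // raw compressed bytes, no inflate step
      }
    }
  }

  // Invented placeholder: would fetch the compressed bytes for a key.
  private InputStream loadCompressedValue(String key) {
    throw new UnsupportedOperationException("placeholder for the sketch");
  }
}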
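For reference on the embedded Thrift server Ted mentions: in the 0.94 codebase the regionserver only brings HRegionThriftServer up when a boolean config key is set. The sketch below paraphrases that startup gate; the key name hbase.regionserver.export.thrift and the constructor signature are recalled from the 0.94 source, not copied, so verify them there before relying on this.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.regionserver.HRegionServer;
import org.apache.hadoop.hbase.regionserver.HRegionThriftServer;

// Paraphrase of the 0.94 startup gate, not a verbatim copy of
// HRegionServer: the embedded thrift server only starts when the
// (assumed) key hbase.regionserver.export.thrift is true.
public class EmbeddedThriftGate {
  static void maybeStartEmbeddedThrift(HRegionServer rs, Configuration conf)
      throws Exception {
    if (conf.getBoolean("hbase.regionserver.export.thrift", false)) {
      HRegionThriftServer thrift = new HRegionThriftServer(rs, conf);
      thrift.start();  // runs inside the regionserver's own JVM
    }
  }
}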

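On the vert.x point, a minimal verticle against the 2.x Java API shows the 'Shared Data' facility Stack refers to. The map name and port are arbitrary choices for the sketch; the relevant constraint is that shared-map values are immutable, so updates are replace-on-write, which is part of why fitting HBase's mutable server-side state to verticles doesn't look easy.

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.core.shareddata.ConcurrentSharedMap;
import org.vertx.java.platform.Verticle;

// Minimal vert.x (2.x API) verticle using SharedData. Shared maps hold
// only immutable values shared across verticles in the same JVM.
public class SharedDataVerticle extends Verticle {
  @Override
  public void start() {
    final ConcurrentSharedMap<String, Integer> hits =
        vertx.sharedData().getMap("hit.counts");

    vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
      @Override
      public void handle(HttpServerRequest req) {
        // Replace-on-write: values cannot be mutated in place.
        Integer n = hits.get(req.path());
        hits.put(req.path(), n == null ? 1 : n + 1);
        req.response().end("hits: " + hits.get(req.path()));
      }
    }).listen(8080);
  }
}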