Yes, we will... the issue is, our product collects data from various sources and then stuffs it en masse into HBase. It makes no sense to do one PUT at a time because we would be opening 1,000 TCP connections. Which brings me to another question: what about persistent connections? Are they possible? (Compression from the client side would be really nice; we could write our own wrapper, I guess, to make transmission more efficient.)
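Something like this is the kind of wrapper I have in mind on our end (just a sketch, not tested against the gateway: the host, table, and payload below are made up, and per your note the server would still need a filter that actually unwraps Content-Encoding: gzip on request bodies):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPOutputStream;

public class GzipPutClient {
  public static void main(String[] args) throws IOException {
    // stand-in for the real batch payload (a CellSet document in our case)
    byte[] body = "<CellSet>...</CellSet>".getBytes("UTF-8");

    // made-up host and table; the row in the path is just a placeholder
    URL url = new URL("http://resthost:8080/mytable/batchrow");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "text/xml");
    // declare that the request body is gzip-compressed
    conn.setRequestProperty("Content-Encoding", "gzip");

    // compress the body on the way out
    OutputStream out = new GZIPOutputStream(conn.getOutputStream());
    out.write(body);
    out.close();  // finishes the gzip stream and sends the request

    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}

If we reuse the client between batches, HttpURLConnection should keep the connection alive by default, which would also cover the persistent-connection part.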
-Jack

On Wed, Nov 24, 2010 at 10:46 PM, Andrew Purtell <[email protected]> wrote:
>> Btw, does it mean, I can send in a compressed query? Or only receive
>> compressed data from REST or both?
>
> Jetty's GzipFilter only wraps response handling.
>
> I tested to see if Jetty has some built in support for Content-Encoding: gzip
> for PUT or POST and it appears not:
>
> Error 415 Unsupported Media Type
>
> You going to be posting large amounts of compressible data?
>
> - Andy
>
> --- On Wed, 11/24/10, Jack Levin <[email protected]> wrote:
>
>> From: Jack Levin <[email protected]>
>> Subject: Re: REST compression support (was Re: question about meta data query intensity)
>> To: [email protected], [email protected]
>> Date: Wednesday, November 24, 2010, 2:21 PM
>>
>> Btw, does it mean, I can send in a compressed query? Or only receive
>> compressed data from REST or both?
>>
>> -Jack
>>
>> On Wed, Nov 24, 2010 at 10:15 AM, Andrew Purtell <[email protected]> wrote:
>> > Regards compressing the HTTP transactions between the REST server and
>> > REST client we punted on this back when Stargate had a WAR target so
>> > we could push that off to the servlet container configuration. Thanks
>> > for the question, which reminded me... I have just committed HBASE-3275,
>> > which is a trivial patch to support Accept-Encoding: gzip,deflate
>> >
>> > Index: src/main/java/org/apache/hadoop/hbase/rest/Main.java
>> > ===================================================================
>> > --- src/main/java/org/apache/hadoop/hbase/rest/Main.java  (revision 1038732)
>> > +++ src/main/java/org/apache/hadoop/hbase/rest/Main.java  (working copy)
>> > @@ -37,6 +37,7 @@
>> >  import org.mortbay.jetty.Server;
>> >  import org.mortbay.jetty.servlet.Context;
>> >  import org.mortbay.jetty.servlet.ServletHolder;
>> > +import org.mortbay.servlet.GzipFilter;
>> >
>> >  import com.sun.jersey.spi.container.servlet.ServletContainer;
>> >
>> > @@ -132,6 +133,7 @@
>> >    // set up context
>> >    Context context = new Context(server, "/", Context.SESSIONS);
>> >    context.addServlet(sh, "/*");
>> > +  context.addFilter(GzipFilter.class, "/*", 0);
>> >
>> >    server.start();
>> >    server.join();
>> >
>> > Regards interactions between HBase client and server, there is no option
>> > available for compressing Hadoop RPC.
>> >
>> > - Andy
>> >
>> > --- On Wed, 11/24/10, Jack Levin <[email protected]> wrote:
>> >
>> >> From: Jack Levin <[email protected]>
>> >> Subject: Re: question about meta data query intensity
>> >> To: [email protected], [email protected]
>> >> Date: Wednesday, November 24, 2010, 9:25 AM
>> >>
>> >> Yes, but that does not alleviate CPU contention should there be too
>> >> many queries to a single region server. On a separate topic, is
>> >> 'compression' in the works for REST gateway? Similar to
>> >> mysql_client_compression? We plan to drop in 500K or
>> >> more queries at a time into the REST, and it would be interesting
>> >> to see the performance gain against uncompressed data.
>> >>
>> >> Thanks.
>> >>
>> >> -Jack
