On Sat, Oct 1, 2016 at 11:44 AM, Andrew Purtell <[email protected]> wrote:
> Slower? Hmm. For what it's worth, after the merge (Monday?) I will take the result through the YCSB workload set with security active and pull out JFR trace files and GC logs. A Phoenix perf test would also be interesting, but I'm not sure how much time I'd have getting it to compile against and behave well running on branch-2, so maybe not.
>

I just ran the same small-context loading test w/o the patch and perf was the same (Poor! We need to do a bit of work on the master branch, it looks like).

Thanks for the offer, Andrew. I'd be interested in whatever you might find. I'll be having a go at perf again myself in the coming weeks. We can compare notes.

St.Ack

> > On Oct 1, 2016, at 10:57 AM, Stack <[email protected]> wrote:
> >
> >> On Sat, Oct 1, 2016 at 9:41 AM, Stack <[email protected]> wrote:
> >>
> >>> On Fri, Sep 30, 2016 at 8:55 PM, Sean Busbey <[email protected]> wrote:
> >>>
> >>> have we experimentally confirmed that wire compatibility is maintained? I saw one mention of expecting wire compatibility to be fine, but nothing with someone using e.g. the clusterdock work or something to mix servers / clients or do replication.
> >>>
> >>
> >> I tried it out a few times in the small: using a 1.2.0 shell to do ACL ops against a patched master server; i.e. a branch-1 client Coprocessor Endpoint works against a patched branch-2 server. More tests to follow of course...
> >>
> >
> > I reread the above. You were asking about more than Coprocessor Endpoint compatibility (Pardon me; I have been a little fixated on CPEPs of late). Yeah, seems fine. I can do load tests from a branch-1 client against my patched master node. Ram and Anoop have also spent a bunch of time w/ PB3 serialization and claim it compatible. This is apart from the claims by the protobuf crew that it is supposed to be.
> >
> > Thanks,
> > M
> >
> > P.S. Just retried it just in case and it seems to all work (if slow... I need to look into that)
> >
> >> St.Ack
> >>
> >>
> >>>> On Fri, Sep 30, 2016 at 6:30 PM, Stack <[email protected]> wrote:
> >>>> I intend to do a mass commit late this weekend that moves us on to a shaded protobuf-3.1.0, either Sunday night or Monday morning.
> >>>>
> >>>> If objection, please speak up, or if you need more time for consideration/review, just shout.
> >>>>
> >>>> I want to merge the branch HBASE-16264 into master (it is running here up on jenkins: https://builds.apache.org/view/H-L/view/HBase/job/HBASE-16264/). The branch at HBASE-16264 has three significant bodies-of-work that unfortunately are tangled and can only go in of a piece.
> >>>>
> >>>> * HBASE-16264 <https://issues.apache.org/jira/browse/HBASE-16264> The shading of our protobuf usage so we can upgrade and/or run with a patched protobuf WITHOUT breaking REST, Spark, and in particular, Coprocessor Endpoints.
> >>>> * HBASE-16567 <https://issues.apache.org/jira/browse/HBASE-16567> A move up on to (shaded) protobuf-3.1.0.
> >>>> * HBASE-16741 <https://issues.apache.org/jira/browse/HBASE-16741> An amendment of our generate-protobufs step to include shading and a bundling of the protobuf src (with a means of calling a patch-srcs hook).
> >>>>
> >>>> Together we're talking about 40MB of change, mostly made up of the movement of generated files or the application of a pattern that alters where we get imports from. When done, you should notice no difference and should be able to go about your business as per usual. Upside is that we will be able to avoid coming onheap doing protobuf marshalling/unmarshalling as protobuf 2.5.0 requires. Downside is that we repeat a good portion of our internal protos, once non-shaded so Coprocessor Endpoints can keep working and then again as shaded for internal use.
> >>>>
> >>>> I provide some more overview below on the changes. See the shading doc here: https://docs.google.com/document/d/1H4NgLXQ9Y9KejwobddCqaVMEDCGbyDcXtdF5iAfDIEk/edit# for more detail (Patches are up on review board -- except the latest HBASE-16264, which is too big for JIRA and RB). I am currently working on a devs chapter for the book on protobuf going forward that will go in as part of this patch.
> >>>>
> >>>> Thanks,
> >>>> St.Ack
> >>>>
> >>>> Items of note:
> >>>>
> >>>> * Two new modules; one named hbase-protocol-shaded that is used by hbase core. It has in it a shaded (and later patched) protobuf. The other new module is hbase-endpoint, which goes after hbase-server and has those bundled endpoints that I was able to break out of core (there are a few that are hopelessly entangled that need to be undone as CPEPs but fortunately belong in core: Auth, Access, MultiRow).
> >>>> * I've tested running a branch-1 CPEP against a master with these patches in place, and stuff like ACL (a CPEP) run from the branch-1 shell works against the branch-2 server.
> >>>>
> >>>>
> >>>>> On Mon, Aug 22, 2016 at 5:20 PM, Stack <[email protected]> wrote:
> >>>>>
> >>>>> This project goes on. I updated HBASE-1563 "Shade protobuf" with some doc on a final approach. We need to be able to refer to both shaded and non-shaded protobuf so we can support sending HDFS old-school pb Messages but also so Coprocessor Endpoints keep working even though internally protobufs have been relocated. Funny you should ask, but yes, there are some downsides (as predicted by contributors on the JIRA). I'd be interested to hear if they are too burdensome. In particular, your IDE experience gets a little convoluted as you will need to add to your build path a jar with the relocated pbs. A pain.
> >>>>>
> >>>>> Thanks,
> >>>>> St.Ack
> >>>>>
> >>>>>
> >>>>>> On Wed, Apr 13, 2016 at 6:09 AM, Stack <[email protected]> wrote:
> >>>>>>
> >>>>>> On Tue, Apr 12, 2016 at 9:26 PM, Sean Busbey <[email protected]> wrote:
> >>>>>>
> >>>>>>>> On Tue, Apr 12, 2016 at 6:17 PM, Stack <[email protected]> wrote:
> >>>>>>>>
> >>>>>>>> On an initial pass, the only difficult part seems to be interaction with HDFS in asyncwal (might just pull in the HDFS messages).
> >>>>>>>>
> >>>>>>>
> >>>>>>> I have some idea how we can make this work either by pushing asyncwal upstream to HDFS or through some maven tricks, depending on how much time we have.
> >>>>>>>
> >>>>>>
> >>>>>> Maven tricks? Tell us more. Here or drop a note up in the issue.
> >>>>>> Thanks Sean,
> >>>>>> St.Ack
> >>>>>
> >>>>
> >>>
> >>> --
> >>> busbey
> >>
>
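A note on the "pattern that alters where we get imports from" in the merge announcement above. The sketch below is illustrative only: the class is hypothetical and the relocation prefix org.apache.hadoop.hbase.shaded.com.google.protobuf is an assumption about what the shade step in hbase-protocol-shaded produces, not code lifted from the HBASE-16264 branch. What it shows is the split the thread describes: internal HBase code compiles against the relocated protobuf runtime, while Coprocessor Endpoint code keeps compiling against plain com.google.protobuf so existing branch-1 endpoint jars and clients continue to work.

    /**
     * Hypothetical illustration of the shaded/non-shaded split; the
     * relocation prefix below is assumed, not taken from the branch.
     */
    public class ShadedVsNonShadedExample {

      /** Internal HBase code would compile against the relocated runtime
       *  bundled in hbase-protocol-shaded. */
      static org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString internalWrap(byte[] b) {
        return org.apache.hadoop.hbase.shaded.com.google.protobuf.ByteString.copyFrom(b);
      }

      /** Coprocessor Endpoints keep using plain com.google.protobuf, so
       *  generated CPEP stubs built against branch-1 are left alone. */
      static com.google.protobuf.ByteString endpointWrap(byte[] b) {
        return com.google.protobuf.ByteString.copyFrom(b);
      }

      public static void main(String[] args) {
        byte[] payload = "row-key".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        // The two ByteString types are unrelated classes at runtime; that
        // separation is what lets core move to a patched pb3 without
        // breaking endpoints that still reference com.google.protobuf.
        System.out.println(internalWrap(payload).size() + " / " + endpointWrap(payload).size());
      }
    }

This is also why the thread mentions repeating a good portion of the internal protos, and why the IDE needs the relocated-pb jar on its build path: after relocation the shaded and non-shaded types share nothing but their source.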

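On the "avoid coming onheap" upside claimed for the protobuf-3.1.0 move: the sketch below shows the ByteBuffer-aware entry points in a stock protobuf 3.x runtime (UnsafeByteOperations and the ByteBuffer overloads, which I believe arrived around 3.0/3.1), whereas with 2.5.0 you generally end up going through on-heap byte[] copies. It uses the plain com.google.protobuf package purely to stay self-contained; HBase's own use would go through the relocated package names and, per the design doc, a patched runtime, so treat this as the general mechanism rather than the actual HBase calls.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    import com.google.protobuf.ByteString;
    import com.google.protobuf.CodedInputStream;
    import com.google.protobuf.UnsafeByteOperations;

    /** Rough sketch of the ByteBuffer-aware protobuf-3.x entry points. */
    public class OffHeapPbSketch {
      public static void main(String[] args) throws Exception {
        // Stand-in for data already sitting in an off-heap (direct) buffer.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(64);
        offHeap.put("payload".getBytes(StandardCharsets.UTF_8));
        offHeap.flip();

        // Wrap the direct buffer as a ByteString without copying it onto
        // the heap; with 2.5.0 the option is ByteString.copyFrom, a copy.
        ByteString wrapped = UnsafeByteOperations.unsafeWrap(offHeap.duplicate());
        System.out.println("wrapped " + wrapped.size() + " bytes, no copy");

        // Parsing can likewise read straight from a ByteBuffer.
        CodedInputStream in = CodedInputStream.newInstance(offHeap.duplicate());
        System.out.println("at end? " + in.isAtEnd());
      }
    }

The "unsafe" in unsafeWrap is the contract that the caller will not mutate the wrapped buffer while the ByteString is in use; that promise is the price of skipping the copy.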