Yes, the jar file contains all the required classes.
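For anyone following along, a quick way to double-check that is to list the jar's contents; the jar and class names below are placeholders, not the actual ones:

  $ jar tf foo.jar | grep SupportingClass
  com/foo/SupportingClass.class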
On Mon, Oct 27, 2014 at 5:23 PM, Gary Helmling <[email protected]> wrote:

> If you are configuring the coprocessor via
> hbase.coprocessor.region.classes, then it is a region endpoint.
>
> For the moment, only table-configured coprocessors support loading
> from a jar file in HDFS. Coprocessors configured in hbase-site.xml
> need to be resolvable on the regionserver's classpath.
>
> I can't say exactly why you're only getting the ClassNotFoundException
> when invoking the endpoint. Is the class that you need packaged in
> your jar file on HDFS?
>
> On Mon, Oct 27, 2014 at 4:00 PM, Tom Brown <[email protected]> wrote:
> > I tried to attach the coprocessor directly to a table, and it is able to
> > load the coprocessor class. Unfortunately, when I try and use the
> > coprocessor I get a ClassNotFoundException on one of the supporting
> > classes required by the coprocessor.
> >
> > It's almost as if the ClassLoader used to load the coprocessor
> > initially is not in use when the coprocessor is actually invoked.
> >
> > --Tom
> >
> > On Mon, Oct 27, 2014 at 3:42 PM, Tom Brown <[email protected]> wrote:
> >
> >> I'm not sure how to tell if it is a region endpoint or a region server
> >> endpoint.
> >>
> >> I have not had to explicitly associate the coprocessor with the table
> >> before (it is loaded via "hbase.coprocessor.region.classes" in
> >> hbase-site.xml), so it might be a region server endpoint. However, the
> >> coprocessor code knows to which table the request applies, so it
> >> might be a region endpoint.
> >>
> >> If it helps, this is a 0.94.x cluster (and upgrading isn't doable
> >> right now).
> >>
> >> Can both types of endpoint be loaded from HDFS, or just the
> >> table-based one?
> >>
> >> --Tom
> >>
> >> On Mon, Oct 27, 2014 at 3:31 PM, Gary Helmling <[email protected]> wrote:
> >>
> >>> Hi Tom,
> >>>
> >>> First off, are you talking about a region endpoint (vs. master
> >>> endpoint or region server endpoint)?
> >>>
> >>> As long as you are talking about a region endpoint, the endpoint
> >>> coprocessor can be configured as a table coprocessor, the same as a
> >>> RegionObserver. You can see an example and description in the HBase
> >>> guide: http://hbase.apache.org/book/ch13s03.html
> >>>
> >>> From the HBase shell:
> >>>
> >>> hbase> alter 't1',
> >>> 'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
> >>>
> >>> The arguments are: HDFS path, classname, priority, key=value
> >>> parameters. Arguments are separated by a '|' character.
> >>>
> >>> Using this configuration, your endpoint class should be loaded from
> >>> the jar file in HDFS. If it's not loaded, you can check the
> >>> regionserver log of any of the servers hosting the table's regions.
> >>> Just search for your endpoint classname and you should find an error
> >>> message of what went wrong.
> >>>
> >>> On Mon, Oct 27, 2014 at 2:03 PM, Tom Brown <[email protected]> wrote:
> >>> > Is it possible to deploy an endpoint coprocessor via HDFS or must I
> >>> > distribute the jar file to each regionserver individually?
> >>> >
> >>> > In my testing, it appears the endpoint coprocessors cannot be
> >>> > loaded from HDFS, though I'm not at all sure I'm doing it right
> >>> > (are delimiters ":" or "|", when I use "hdfs:///" does that map to
> >>> > the root hdfs path or the hbase hdfs path, etc).
> >>> >
> >>> > I have attempted to google this, and have not found any clear answer.
> >>> >
> >>> > Thanks in advance!
> >>> >
> >>> > --Tom
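For the archives, the rough end-to-end sequence matching Gary's description would look something like the following. The jar and class names are placeholders, and on 0.94 the table may need to be disabled before the alter unless online schema updates are enabled:

  $ hadoop fs -put foo-endpoint.jar /foo-endpoint.jar

  hbase> disable 't1'
  hbase> alter 't1',
  'coprocessor'=>'hdfs:///foo-endpoint.jar|com.foo.FooEndpoint|1001|arg1=1,arg2=2'
  hbase> enable 't1'
  hbase> describe 't1'

As far as I can tell, the "hdfs:///" URI resolves against the root of the HDFS filesystem rather than the HBase root directory, which is why the jar above is put at /foo-endpoint.jar. The describe output should show the coprocessor as a table attribute if the alter took effect.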
