Are you attempting to add the coprocessor via hbase-site.xml or via the shell?
If the HBase Shell, check out chapter 13.3.2 in the ref guide[1]. It walks through loading an example. The given file path is just a URL that gets handed directly to Hadoop's Path class, so the example you gave would map to the root of the HDFS filesystem. You'll have to make sure the user running the individual region servers can read the given HDFS path.

[1]: http://hbase.apache.org/book.html#d0e14025

On Mon, Oct 27, 2014 at 4:03 PM, Tom Brown <[email protected]> wrote:
> Is it possible to deploy an endpoint coprocessor via HDFS or must I
> distribute the jar file to each regionserver individually?
>
> In my testing, it appears the endpoint coprocessors cannot be loaded from
> HDFS, though I'm not at all sure I'm doing it right (are delimiters ":" or
> "|", when I use "hdfs:///" does that map to the root hdfs path or the hbase
> hdfs path, etc).
>
> I have attempted to google this, and have not found any clear answer.
>
> Thanks in advance!
>
> --Tom

--
Sean
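For the archives: the shell invocation looks roughly like the sketch below. The coprocessor table attribute is pipe-delimited ("|", not ":"), in the form `<jar path>|<class name>|<priority>|<args>`. Table, jar path, and class names here are made up for illustration; check the ref guide section above for the authoritative syntax.

```
hbase> disable 'mytable'
hbase> alter 'mytable', METHOD => 'table_att',
  'coprocessor' => 'hdfs:///user/hbase/coprocessors/my-endpoint.jar|com.example.MyEndpointImpl|1001|'
hbase> enable 'mytable'
hbase> describe 'mytable'   # verify the coprocessor attribute was set
```

Again, since the path is handed to Hadoop's Path class, `hdfs:///...` resolves against the root of HDFS (per `fs.defaultFS`), not HBase's root directory, so spell out the full path to the jar.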
