One way: create an NFS-mountable directory for your cluster and mount it on all of the DataNodes (DNs). You can then either place a symbolic link to the jar in /usr/lib/hadoop/lib or add the jar to the classpath in /etc/hadoop/conf/hadoop-env.sh (assuming Cloudera).
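A rough sketch of that approach, to be run on each DataNode/RegionServer host. The NFS server name, export path, mount point, and jar name below (nfs-server, /export/hadoop-shared, /mnt/hadoop-shared, my-filters.jar) are all placeholder examples, not anything standard:

```shell
# Mount the shared NFS export that holds the custom filter jar
# (paths and host are hypothetical -- substitute your own)
mkdir -p /mnt/hadoop-shared
mount -t nfs nfs-server:/export/hadoop-shared /mnt/hadoop-shared

# Option 1: symlink the jar into the Hadoop lib directory, which is
# already on the daemon classpath at startup
ln -s /mnt/hadoop-shared/my-filters.jar /usr/lib/hadoop/lib/my-filters.jar

# Option 2: append the jar to the classpath via hadoop-env.sh instead
echo 'export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/mnt/hadoop-shared/my-filters.jar"' \
  >> /etc/hadoop/conf/hadoop-env.sh
```

Either way, the region servers still have to be restarted once to pick up the new classpath entry; the NFS mount just saves you from copying the jar to every node on each update.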
On Jun 27, 2012, at 12:47 PM, Evan Pollan wrote:

> What're the current best practices for making custom Filter implementation
> classes available to the region servers? My cluster is running 0.90.4 from
> the CDH3U3 distribution, FWIW.
>
> I searched around and didn't find anything other than "add your filter to
> the region server's classpath." I'm hoping there's support for something
> that doesn't involve actually installing jar files on each region server,
> updating each region server's configuration, and doing a rolling restart of
> the whole cluster...
>
> I did find this still-outstanding bug requesting parity between HDFS-based
> co-processor class loading and filter class loading:
> https://issues.apache.org/jira/browse/HBASE-1936.
>
> How are folks handling this?
>
> The stock filters are fairly limited, especially without the ability (at
> least AFAIK) to combine the existing filters together via basic boolean
> algebra, so I can't do much without writing my own filter(s).
>
> thanks,
> Evan
