AFAIK it is possible — just make sure the RegionServers can see the Hadoop
jars on their classpath (which is true by default, since HBase itself runs
on top of Hadoop). Actually, you can call pretty much anything from these
methods ;)
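For example, a sketch along these lines might work (untested; assumes the
HBase 0.92-era coprocessor API, and the family/qualifier names `info:fileuri`
and the class name are made up for illustration). It hooks `preDelete`, reads
the recorded URI before the row is gone, and removes the file via the plain
Hadoop `FileSystem` API:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class FileCleanupObserver extends BaseRegionObserver {

  // Hypothetical column holding the HDFS URI of this row's file.
  private static final byte[] FAMILY = Bytes.toBytes("info");
  private static final byte[] QUALIFIER = Bytes.toBytes("fileuri");

  @Override
  public void preDelete(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Delete delete, WALEdit edit, boolean writeToWAL) throws IOException {
    RegionCoprocessorEnvironment env = ctx.getEnvironment();

    // Read the recorded URI before the row disappears.
    Get get = new Get(delete.getRow());
    get.addColumn(FAMILY, QUALIFIER);
    Result result = env.getRegion().get(get, null);
    byte[] value = result.getValue(FAMILY, QUALIFIER);
    if (value == null) {
      return; // no file recorded for this row
    }

    // The RegionServer's Configuration already points at the cluster's
    // HDFS, so FileSystem.get() should resolve to the right filesystem.
    Path path = new Path(Bytes.toString(value));
    FileSystem fs = FileSystem.get(env.getConfiguration());
    fs.delete(path, false); // false = don't delete recursively
  }
}
```

One caveat: doing a blocking HDFS call inside the delete path adds latency
to every delete on that table, so you may prefer to just record the path
somewhere and clean up asynchronously.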

On Tue, Feb 14, 2012 at 9:15 AM, NNever <[email protected]> wrote:
> As we know, in HBase coprocessor methods such as prePut we can operate on
> an HTable from the ObserverContext<RegionCoprocessorEnvironment>...
> But in many situations there will be tables with a qualifier that records
> a file URI. Then, when we delete a row and trigger some operations in the
> coprocessor, we also need to delete the real file in HDFS via the
> recorded URI...
>
> So my question is: *can we use the Hadoop API to operate on HDFS from a
> coprocessor*?
> If it's possible, what would the code look like?
>
> Thanks!
