Thanks Sanel. I tried *FileSystem fs = FileSystem.get(HBaseConfiguration.create()); fs.delete(new Path(...))* in the coprocessor's preDelete method. There is no exception, but the file at the target path still has not been deleted after that code runs. I don't know why... It's late at night here now; I'll try again tomorrow morning to see whether I did anything wrong. Thanks for your reply...
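A quick way to probe this kind of silent no-op (a diagnostic sketch, not from the thread; the class, method, LOG, and path names are placeholders): FileSystem.delete() returns false rather than throwing when the path does not exist or the delete fails, and FileSystem.get(HBaseConfiguration.create()) resolves fs.defaultFS from whatever *-site.xml files are on the classpath, so it can silently hand back the local filesystem instead of HDFS.

import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HdfsDeleteCheck {
  private static final Log LOG = LogFactory.getLog(HdfsDeleteCheck.class);

  // Hedged sketch: confirm which filesystem was resolved and whether
  // delete() actually reported success. The URI argument is a placeholder.
  public static void checkedDelete(String uri) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // If this logs file:/// rather than hdfs://..., the classpath is
    // missing core-site.xml and the delete hit the local filesystem.
    LOG.info("Resolved filesystem: " + fs.getUri());
    Path target = new Path(uri);
    // delete() returns false (no exception) when the path is absent or
    // the delete failed, which matches the "silent" behavior described.
    if (!fs.delete(target, false)) {
      LOG.warn("delete() returned false for " + target
          + "; exists=" + fs.exists(target));
    }
  }
}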
2012/2/14 Sanel Zukan <[email protected]>

> AFAIK it is possible; just make sure the regionservers can see the Hadoop
> jar (which is true by default). Actually, you can call anything from these
> methods ;)
>
> On Tue, Feb 14, 2012 at 9:15 AM, NNever <[email protected]> wrote:
> > As we know, in HBase coprocessor methods such as prePut we can operate
> > on the HTable from the ObserverContext<RegionCoprocessorEnvironment>...
> > But in many situations there will be some tables with a qualifier that
> > records a file URI. Then when we delete a row and trigger some
> > operations in the coprocessor, we also need to delete the real file in
> > HDFS through the recorded URI...
> >
> > So my question is: *can we use the Hadoop API to operate on HDFS in a
> > coprocessor*?
> > If that's possible, what would the code look like?
> >
> > Thanks!
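A minimal sketch of what that could look like against the 0.92-era coprocessor API that was current when this thread was written (not code from the thread): the observer class name and the column family "f" / qualifier "uri" that hold the file URI are assumptions for illustration. Note that an HDFS delete inside preDelete is not transactional with the row delete; if the region server fails between the two, they can diverge.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical observer that removes the HDFS file recorded in column
// "f:uri" of a row before that row is deleted.
public class FileCleanupObserver extends BaseRegionObserver {

  private static final byte[] FAMILY = Bytes.toBytes("f");
  private static final byte[] QUALIFIER = Bytes.toBytes("uri");

  @Override
  public void preDelete(ObserverContext<RegionCoprocessorEnvironment> ctx,
                        Delete delete, WALEdit edit, boolean writeToWAL)
      throws IOException {
    // Read the row before it disappears to recover the stored file URI.
    Get get = new Get(delete.getRow());
    get.addColumn(FAMILY, QUALIFIER);
    Result r = ctx.getEnvironment().getRegion().get(get, null);
    byte[] uri = r.getValue(FAMILY, QUALIFIER);
    if (uri == null) {
      return; // no file recorded for this row
    }
    // Use the region server's own configuration so fs.defaultFS points at
    // the cluster HDFS rather than whatever happens to be on the classpath.
    Configuration conf = ctx.getEnvironment().getConfiguration();
    Path target = new Path(Bytes.toString(uri));
    FileSystem fs = target.getFileSystem(conf);
    // delete() returns false instead of throwing, so surface the failure.
    if (!fs.delete(target, false)) {
      throw new IOException("Failed to delete " + target);
    }
  }
}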
