Could you file a JIRA for the issue?
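
For anyone following the thread, the step under discussion is the completebulkload tool described in the HBase book page linked below. A minimal sketch of the invocation; the jar name, HDFS path, and table name are placeholders you would replace with your own:

```shell
# Load the HFiles produced by ImportTsv / HFileOutputFormat into an
# existing HBase table. Requires a running Hadoop + HBase cluster.
# hdfs:///user/me/bulkload-output and mytable are placeholder values.
hadoop jar hbase-0.94.11.jar completebulkload \
    hdfs:///user/me/bulkload-output mytable
```

When the output directory and the cluster share the same filesystem, this step is expected to rename (move) the store files into the table's region directories rather than copy them.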

On Sep 14, 2013, at 11:10 PM, "M. BagherEsmaeily" <[email protected]> wrote:

> Hi Rajesh,
> 
> I use HBase 0.94.11 and Hadoop 1.2.1. The file system of the bulkload
> output directory and the HBase cluster is the same, too.
> 
> I've also coded a MapReduce job using HFileOutputFormat. When I use
> LoadIncrementalHFiles to move the output of my job to an HBase table,
> it still copies the files instead of moving them.
> 
> Thanks
> 
> 
> On Sat, Sep 14, 2013 at 2:50 PM, rajesh babu Chintaguntla <
> [email protected]> wrote:
> 
>> Hi BagherEsmaeily,
>> 
>>       Which version of HBase are you using? Is the file system of the
>> bulkload output directory the same as that of the HBase cluster?
>> 
>>       If you are using an HBase version older than 0.94.5, the StoreFiles
>> generated by importtsv are copied instead of moved even if the file system
>> of the bulkload output directory and the HBase cluster is the same.
>> 
>>       It's a bug, fixed in 0.94.5 (HBASE-5498).
>> 
>> Thanks.
>> Rajeshbabu
>> 
>> On Sat, Sep 14, 2013 at 12:01 PM, M. BagherEsmaeily <[email protected]
>>> wrote:
>> 
>>> Hello,
>>> 
>>> I was using the HBase complete bulk load to transfer the output of
>>> ImportTsv to a table in HBase, and I noticed that it copies the output
>>> instead of moving it. This takes a long time for my gigabytes of data.
>>> 
>>> In the HBase documentation (
>>> http://hbase.apache.org/book/ops_mgt.html#completebulkload) I read that
>>> the files would be moved, not copied. Can anyone help me with this?
>>> 
>>> Kind Regards
>> 