Hi,

Have you seen the reference guide
<http://hbase.apache.org/book.html#_upgrade_paths> to make sure that the
environment is ready for the upgrade?
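For example, a quick check of what each side is actually running (run on both
clusters; it should print 0.94.27 and 1.2.1 respectively):

  hbase version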
Perhaps you could try to copy the contents of /data/ExportedFiles to the
HBase 1.2.1 cluster using distcp before importing the data, instead of
pointing Import at "hdfs://<IP>:8020/data/ExportedFiles" directly.
Then create the table on the HBase 1.2.1 cluster using the HBase Shell. The
column families must be identical to those of the table on the old cluster.
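For example, if the old table has a single column family, something like the
following (the family name 'cf' is only a placeholder; describe on the old
cluster shows the real names):

  # On the old cluster, list the column families:
  echo "describe 'SPDBRebuild'" | hbase shell

  # On the new cluster, recreate the table with identical families
  # ('cf' stands in for whatever describe printed):
  echo "create 'test_table', 'cf'" | hbase shell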
Finally, import data from /data/ExportedFiles on the HBase 1.2.1 cluster.
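That import would look much like the command you already ran, now pointing at
the copied directory (assuming distcp landed it at the same path on the new
cluster):

  hbase org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles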


Best Regards.

2017-10-24 1:27 GMT+08:00 Manjeet Singh <[email protected]>:

> Hi All,
>
> Can anyone help?
>
> Adding a few more findings: I have moved all the files to the destination
> cluster's HDFS and run the command below:
>
> sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table hdfs://<IP>:8020/data/ExportedFiles
>
> I am getting the error below:
>
> 17/10/23 16:13:50 INFO mapreduce.Job: Task Id : attempt_1505781444745_0070_m_000003_0, Status : FAILED
> Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
>         at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
>         at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
>         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
>         at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>         at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>
> Can anyone suggest how to migrate the data?
>
> Thanks
> Manjeet Singh
>
> Hi All,
>
> I have a query regarding HBase data migration from one cluster to another
> in the same network, but with different HBase versions: the source cluster
> runs 0.94.27 and the destination cluster runs 1.2.1.
>
> I used the command below to take a backup of the HBase table on the source
> cluster:
>
>  ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild /data/backupData/
>
> The files below were generated by the above command:
>
>
> drwxr-xr-x 3 root root        4096 Dec  9  2016 _logs
> -rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-00000
> -rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-00001
> -rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-00002
> -rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-00003
> -rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-00004
> -rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-00005
> -rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-00006
> -rw-r--r-- 1 root root           0 Dec 16  2016 _SUCCESS
>
>
> In order to restore these files, I am assuming I have to move them to the
> destination cluster and run the command below:
>
> hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> /data/backupData/
>
> Please suggest whether I am headed in the correct direction, and whether
> anyone has another option. I tried this with test data, but the above
> command took a very long time and failed at the end:
>
> 17/10/23 11:54:21 INFO mapred.JobClient:  map 0% reduce 0%
> 17/10/23 12:04:24 INFO mapred.JobClient: Task Id : attempt_201710131340_0355_m_000002_0, Status : FAILED
> Task attempt_201710131340_0355_m_000002_0 failed to report status for 600 seconds. Killing!
>
>
> Thanks
> Manjeet Singh
>
> --
> luv all
>
