[
https://issues.apache.org/jira/browse/HBASE-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Manjeet Singh updated HBASE-19071:
----------------------------------
Resolution: Fixed
Status: Resolved (was: Patch Available)
By performing the steps below, we can migrate data from an older HBase version into a higher version even if the Hadoop versions are different.
#Step 1
The first step is to export the HBase table data from the source table to an HDFS path, using the command shown below:
sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Export <table_name> <hdfs_path>
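For illustration, a run with a hypothetical table named my_table exported to the HDFS directory /data/ExportedFiles (both names are placeholders, not taken from the report) would look like:
sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Export my_table /data/ExportedFiles
The Export tool also accepts optional trailing arguments for the number of cell versions and a start/end timestamp, e.g. appending 1 if only the latest version of each cell is needed.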
#Step 2
Now pull these files down to the local Linux filesystem and copy them to the destination cluster with the scp command, as sketched below (this step is required because the Hadoop versions differ; otherwise distcp could copy directly from one HDFS location to the other).
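A minimal sketch of this copy step, assuming the export landed in /data/ExportedFiles and the destination cluster is reachable as user@dest-host (both hypothetical):
# on the source cluster: pull the exported SequenceFiles out of HDFS
hadoop fs -copyToLocal /data/ExportedFiles /tmp/ExportedFiles
# ship them to the destination cluster over ssh
scp -r /tmp/ExportedFiles user@dest-host:/tmp/ExportedFiles
# on the destination cluster: push them back into HDFS
sudo -u hdfs hadoop fs -copyFromLocal /tmp/ExportedFiles /data/ExportedFiles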
#Step 3
Create the HBase table on the destination cluster with the same schema as in the source cluster.
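As a sketch, assuming the hypothetical table my_table has a single column family cf, creating it in the HBase shell on the destination cluster would be:
hbase shell
create 'my_table', 'cf'
The column families (and any non-default settings such as TTL or compression) should match the source table's schema.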
#Step 4
With the exported files now on the destination cluster's HDFS, run the command below.
bin/hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import <tableName> hdfs://<ip>:8020/<HDFS location>
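Continuing the hypothetical example, with my_table already created and the exported files sitting under /data/ExportedFiles on the destination HDFS, the import would be:
sudo -u hdfs hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import my_table hdfs://<ip>:8020/data/ExportedFiles
The -Dhbase.import.version=0.94 property tells the Import job to expect the older 0.94 export format; without it the job appears to fail with the IOException shown in the report below.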
Data will be imported!
Thanks
Manjeet Singh
> Import from Hbase version 0.94.27 to higher version 1.2.1 not working
> ----------------------------------------------------------------------
>
> Key: HBASE-19071
> URL: https://issues.apache.org/jira/browse/HBASE-19071
> Project: HBase
> Issue Type: Bug
> Components: hbase
> Affects Versions: 1.2.1
> Reporter: Manjeet Singh
> Fix For: 1.2.1
>
>
> Data migration from one cluster to another cluster on the same network is not working when the HBase versions differ: the source cluster runs HBase 0.94.27 and the destination cluster runs HBase 1.2.1.
> I used the command below to take a backup of the HBase table on the source cluster:
> ./hbase org.apache.hadoop.hbase.mapreduce.Export <TableName> /data/backupData/
> As a result, the command above generated the following files:
> drwxr-xr-x 3 root root 4096 Dec 9 2016 _logs
> -rw-r--r-- 1 root root 788227695 Dec 16 2016 part-m-00000
> -rw-r--r-- 1 root root 1098757026 Dec 16 2016 part-m-00001
> -rw-r--r-- 1 root root 906973626 Dec 16 2016 part-m-00002
> -rw-r--r-- 1 root root 1981769314 Dec 16 2016 part-m-00003
> -rw-r--r-- 1 root root 2099785782 Dec 16 2016 part-m-00004
> -rw-r--r-- 1 root root 4118835540 Dec 16 2016 part-m-00005
> -rw-r--r-- 1 root root 14217981341 Dec 16 2016 part-m-00006
> -rw-r--r-- 1 root root 0 Dec 16 2016 _SUCCESS
> I copied the above files to the destination cluster using the scp command and put them into the destination cluster's HDFS (this is because the two clusters run different Hadoop versions: the source cluster has Hadoop 1.2.1 and the destination has Hadoop 2.0). First I fetched the HDFS files to the local Linux filesystem, then used scp to transfer them to the destination cluster.
> I then import these files into the other HBase version (1.2.1).
> To restore these files, I am assuming I have to move them to the destination cluster and run the command below:
> hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <hdfs path on same cluster>
> sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table hdfs://<IP>:8020/data/ExportedFiles
> I am getting the error below:
> 17/10/23 16:13:50 INFO mapreduce.Job: Task Id : attempt_1505781444745_0070_m_000003_0, Status : FAILED
> Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
> at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
> at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
> at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
> at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)