[ https://issues.apache.org/jira/browse/HBASE-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated HBASE-16672:
---------------------------
    Description: 
While working on HBASE-14417, I found that taking a full backup of a table which 
had received bulk loaded hfiles and later gone through an incremental restore 
failed with the following exception:
{code}
2016-09-21 11:33:06,340 WARN  [member: '10.22.9.171,53915,1474482617015' subprocedure-pool7-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(347): Got Exception in SnapshotSubprocedurePool
java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File does not exist: hdfs://localhost:53901/user/tyu/test-data/48928d44-9757-4923-af29-60288fb8a553/data/ns1/test-1474482646700/d1a93d05a75c596533ba4331a378fb3a/f/3dfb3a37cf6b4b519769b3116d8ab35a_SeqId_205_
  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
  at org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool.waitForOutstandingTasks(RegionServerSnapshotManager.java:323)
  at org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.flushSnapshot(FlushSnapshotSubprocedure.java:139)
  at org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.insideBarrier(FlushSnapshotSubprocedure.java:159)
{code}
The cause was that the bulk loaded hfiles in the source table were renamed during 
the bulk load step of the incremental restore.

To support incrementally restoring to multiple destinations, this issue adds an 
option to always copy hfile(s) during bulk load.
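
As an illustration of the proposed behaviour, here is a minimal sketch of the 
bulk load file-placement step with a copy-vs-rename switch. The configuration 
key and the helper method below are assumptions made for this example, not the 
actual option name or API added by this patch:
{code}
// Illustrative sketch only -- the config key and helper are assumptions,
// not the actual API introduced by HBASE-16672.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class BulkLoadHFilePlacement {
  // Hypothetical option name; when true, copy instead of rename.
  static final String ALWAYS_COPY_FILES = "hbase.bulkload.always.copy.files";

  /** Places a staged hfile into the destination column family directory. */
  static Path placeHFile(Configuration conf, Path srcHFile, Path familyDir)
      throws IOException {
    FileSystem srcFs = srcHFile.getFileSystem(conf);
    FileSystem dstFs = familyDir.getFileSystem(conf);
    Path dst = new Path(familyDir, srcHFile.getName());
    if (conf.getBoolean(ALWAYS_COPY_FILES, false)) {
      // Copy: the source hfile stays in place, so a backup that still
      // references the original path keeps working.
      FileUtil.copy(srcFs, srcHFile, dstFs, dst, false /* deleteSource */, conf);
    } else {
      // Rename: a fast metadata move, but the original file path disappears,
      // which is what broke the full backup shown in the exception above.
      dstFs.rename(srcHFile, dst);
    }
    return dst;
  }
}
{code}
Copying costs extra I/O and disk space, but it leaves the source hfiles intact 
so the same files can feed restores to multiple destinations.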

  was:
While working on HBASE-14417, I found that taking a full backup of a table which 
had received bulk loaded hfiles and later gone through an incremental restore 
failed with the following exception:
{code}
2016-09-21 11:33:06,340 WARN  [member: '10.22.9.171,53915,1474482617015' subprocedure-pool7-thread-1] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(347): Got Exception in SnapshotSubprocedurePool
java.util.concurrent.ExecutionException: java.io.FileNotFoundException: File does not exist: hdfs://localhost:53901/user/tyu/test-data/48928d44-9757-4923-af29-60288fb8a553/data/ns1/test-1474482646700/d1a93d05a75c596533ba4331a378fb3a/f/3dfb3a37cf6b4b519769b3116d8ab35a_SeqId_205_
  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
  at org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool.waitForOutstandingTasks(RegionServerSnapshotManager.java:323)
  at org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.flushSnapshot(FlushSnapshotSubprocedure.java:139)
  at org.apache.hadoop.hbase.regionserver.snapshot.FlushSnapshotSubprocedure.insideBarrier(FlushSnapshotSubprocedure.java:159)
{code}
The cause was that the bulk loaded hfiles were renamed during the bulk load step 
of the incremental restore.

To support incrementally restoring to multiple destinations, this issue adds an 
option to always copy hfile(s) during bulk load.


> Add option for bulk load to copy hfile(s) instead of renaming
> -------------------------------------------------------------
>
>                 Key: HBASE-16672
>                 URL: https://issues.apache.org/jira/browse/HBASE-16672
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Ted Yu
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
