[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220356#comment-17220356
 ] 

Hudson commented on HBASE-25206:


Results for branch branch-2
[build #83 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/83/]:
 (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/83/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/83/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/83/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2/83/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Data loss can happen if a cloned table loses original split region(delete 
> table)
> 
>
> Key: HBASE-25206
> URL: https://issues.apache.org/jira/browse/HBASE-25206
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment, snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7
>
>
> Steps to reproduce are as follows:
> 1. Create a table and put some data into the table:
> {code:java}
> create 'test1','cf'
> put 'test1','r1','cf','v1'
> put 'test1','r2','cf','v2'
> put 'test1','r3','cf','v3'
> put 'test1','r4','cf','v4'
> put 'test1','r5','cf','v5'
> {code}
> 2. Take a snapshot for the table:
> {code:java}
> snapshot 'test1','snap_test'
> {code}
> 3. Clone the snapshot to another table
> {code:java}
> clone_snapshot 'snap_test','test2'
> {code}
> 4. Split the original table
> {code:java}
> split 'test1','r3'
> {code}
> 5. Drop the original table
> {code:java}
> disable 'test1'
> drop 'test1'
> {code}
> After that, we see an error like the following in the RS log when opening the
> regions of the cloned table:
> {code:java}
> 2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: Failed initialize of region= test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back memstore
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> at org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
> at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
> at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
> at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
> at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
> at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
> at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
> at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
> at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ...
> {code}
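The "Unable to open link" failure above can be sketched with a minimal, self-contained simulation. This is not HBase's actual HFileLink implementation; the helper name `resolve_hfile_link` is hypothetical, and only the probe order (data, .tmp, mobdir, archive) and the path layout follow the log:

```python
from pathlib import Path
import tempfile

# Candidate locations an HFileLink probes, in the order shown in the log:
# data, .tmp, mobdir, archive (all under the HBase root dir).
LINK_DIRS = ["data", ".tmp/data", "mobdir/data", "archive/data"]

def resolve_hfile_link(root: Path, table: str, region: str,
                       family: str, hfile: str) -> Path:
    """Return the first existing candidate location of the linked HFile,
    or raise FileNotFoundError (the Java side wraps this in IOException)."""
    candidates = [root / d / "default" / table / region / family / hfile
                  for d in LINK_DIRS]
    for p in candidates:
        if p.exists():
            return p
    raise FileNotFoundError(f"Unable to open link: locations={candidates}")

root = Path(tempfile.mkdtemp())
args = ("test1", "349b766b1b38e21f627ed4e441ae643c",
        "cf", "b6e39865710345c8998dec0bcc94cc89")

# While the original table's file survives somewhere (e.g. archive),
# the clone's link resolves fine:
archived = root / "archive/data/default" / args[0] / args[1] / args[2] / args[3]
archived.parent.mkdir(parents=True)
archived.touch()
assert resolve_hfile_link(root, *args) == archived

# Once the file is deleted without being archived (the bug in this issue),
# the link dangles and region open fails:
archived.unlink()
try:
    resolve_hfile_link(root, *args)
    raise AssertionError("expected a dangling-link failure")
except FileNotFoundError:
    pass
```

In other words, the clone's regions hold only links, so they stay readable exactly as long as at least one of the four candidate paths still has the original store file.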

[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220348#comment-17220348
 ] 

Hudson commented on HBASE-25206:


Results for branch branch-2.3
[build #89 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/89/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/89/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/89/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.3/89/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}



[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220291#comment-17220291
 ] 

Hudson commented on HBASE-25206:


Results for branch master
[build #105 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/105/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/105/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/105/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/105/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}



[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-24 Thread Toshihiro Suzuki (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220211#comment-17220211
 ] 

Toshihiro Suzuki commented on HBASE-25206:
--

[~zhangduo] Thank you for reviewing and committing this!


[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-24 Thread Toshihiro Suzuki (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220210#comment-17220210
 ] 

Toshihiro Suzuki commented on HBASE-25206:
--

[~anoop.hbase] Yes, this happens whether or not the snapshot is cloned to a new table.


[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220209#comment-17220209
 ] 

Hudson commented on HBASE-25206:


Results for branch branch-2.2
[build #106 on 
builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}



[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-22 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17219457#comment-17219457
 ] 

Anoop Sam John commented on HBASE-25206:


Thanks, [~brfrn169].
So whether or not the snapshot is cloned to a new table, this case will cause data loss from the snapshot.

> Data loss can happen if a cloned table loses original split region(delete 
> table)
> 
>
> Key: HBASE-25206
> URL: https://issues.apache.org/jira/browse/HBASE-25206
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> Steps to reproduce are as follows:
> 1. Create a table and put some data into the table:
> {code:java}
> create 'test1','cf'
> put 'test1','r1','cf','v1'
> put 'test1','r2','cf','v2'
> put 'test1','r3','cf','v3'
> put 'test1','r4','cf','v4'
> put 'test1','r5','cf','v5'
> {code}
> 2. Take a snapshot for the table:
> {code:java}
> snapshot 'test1','snap_test'
> {code}
> 3. Clone the snapshot to another table
> {code:java}
> clone_snapshot 'snap_test','test2'
> {code}
> 4. Split the original table
> {code:java}
> split 'test1','r3'
> {code}
> 5. Drop the original table
> {code:java}
> disable 'test1'
> drop 'test1'
> {code}
> After that, we see the error like the following in RS log when opening the 
> regions of the cloned table:
> {code:java}
> 2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed initialize of region= 
> test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back 
> memstore
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: 
> Unable to open link: org.apache.hadoop.hbase.io.HFileLink 
> locations=[hdfs:// HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs:// HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs:// HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs:// HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open 
> link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 

[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-22 Thread Toshihiro Suzuki (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17219440#comment-17219440
 ] 

Toshihiro Suzuki commented on HBASE-25206:
--

[~anoop.hbase] Sorry. I just noticed your comment.

This issue happens because, when deleting a table, DeleteTableProcedure archives 
only the table's online regions, which excludes regions in OFFLINE or SPLIT 
state (such as split parents).

The timeline of this issue is as follows:

1. Create a table that has RegionA
{code:java}
Original table:
RegionA
{code}
2. Take a snapshot for the table
{code:java}
Original table:
RegionA

Snapshot:
Reference to RegionA
{code}
3. Clone the snapshot to another table
{code:java}
Original table:
RegionA

Cloned table:
Reference to RegionA

Snapshot:
Reference to RegionA
{code}
4. Split the original table. The state of RegionA becomes SPLIT, and RegionB 
and RegionC are the daughter regions of RegionA:
{code:java}
Original table:
RegionA <- SPLIT state
RegionB
RegionC

Cloned table:
Reference to RegionA

Snapshot:
Reference to RegionA
{code}
5. Drop the original table. Due to the bug, only RegionB and RegionC are 
archived, while RegionA's files are deleted outright. As a result, the cloned 
table loses data, and the snapshot becomes corrupted:
{code:java}
Cloned table:
Reference to RegionA

Snapshot:
Reference to RegionA

Archive:
RegionB
RegionC
{code}
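The timeline above can be condensed into a toy simulation. This is plain Python, not HBase code: the {{Cluster}} class and all of its methods are invented purely for illustration of the archiving bug.

```python
# Toy simulation of the HBASE-25206 scenario. NOT HBase code: every class
# and method name below is invented for illustration. It models why
# archiving only ONLINE regions on table delete (skipping the SPLIT
# parent) leaves the cloned table's HFileLinks dangling.

class Cluster:
    def __init__(self):
        self.regions = {}  # region name -> {"table", "state", "files"}
        self.archive = {}  # region name -> files preserved after delete
        self.links = []    # (clone region, source region) HFileLink pairs

    def create_region(self, table, name):
        self.regions[name] = {"table": table, "state": "ONLINE",
                              "files": [name + "/hfile1"]}

    def clone_snapshot(self, source_region, clone_region, clone_table):
        # A clone copies no data; it only records a link to the source files.
        self.create_region(clone_table, clone_region)
        self.regions[clone_region]["files"] = []
        self.links.append((clone_region, source_region))

    def split(self, parent, daughter1, daughter2):
        # The parent moves to SPLIT state but its files stay referenced.
        self.regions[parent]["state"] = "SPLIT"
        table = self.regions[parent]["table"]
        self.create_region(table, daughter1)
        self.create_region(table, daughter2)

    def delete_table(self, table, buggy=True):
        for name, region in list(self.regions.items()):
            if region["table"] != table:
                continue
            # The bug: only ONLINE regions get archived, so the SPLIT
            # parent's files are dropped instead of moved to the archive.
            if not buggy or region["state"] == "ONLINE":
                self.archive[name] = region["files"]
            del self.regions[name]

    def open_link(self, clone_region):
        # Resolving an HFileLink: the source files must exist either in
        # a live table directory or in the archive.
        return any(src in self.regions or src in self.archive
                   for clone, src in self.links if clone == clone_region)

cluster = Cluster()
cluster.create_region("test1", "RegionA")
cluster.clone_snapshot("RegionA", "RegionA-clone", "test2")
cluster.split("RegionA", "RegionB", "RegionC")
cluster.delete_table("test1", buggy=True)
print(cluster.open_link("RegionA-clone"))  # False: the clone's link dangles
```

With {{buggy=False}}, the SPLIT parent is archived as well and the link still resolves, which is essentially the guarantee the fix has to provide.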


[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-22 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17218830#comment-17218830
 ] 

Anoop Sam John commented on HBASE-25206:


The data loss (files deleted instead of archived) will also happen if we take 
a snapshot, keep only the snapshot, and delete the original table, right? That 
is, the user did not clone a new table from the snapshot initially, but may 
want to do so at some later point. Just confirming.

> Data loss can happen if a cloned table loses original split region(delete 
> table)
> 
>
> Key: HBASE-25206
> URL: https://issues.apache.org/jira/browse/HBASE-25206
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> Steps to reproduce are as follows:
> 1. Create a table and put some data into the table:
> {code:java}
> create 'test1','cf'
> put 'test1','r1','cf','v1'
> put 'test1','r2','cf','v2'
> put 'test1','r3','cf','v3'
> put 'test1','r4','cf','v4'
> put 'test1','r5','cf','v5'
> {code}
> 2. Take a snapshot for the table:
> {code:java}
> snapshot 'test1','snap_test'
> {code}
> 3. Clone the snapshot to another table
> {code:java}
> clone_snapshot 'snap_test','test2'
> {code}
> 4. Delete the snapshot
> {code:java}
> delete_snapshot 'snap_test'
> {code}
> 5. Split the original table
> {code:java}
> split 'test1','r3'
> {code}
> 6. Drop the original table
> {code:java}
> disable 'test1'
> drop 'test1'
> {code}