[jira] [Updated] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-24 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25206:
--
Component/s: snapshots
 Region Assignment
 proc-v2

> Data loss can happen if a cloned table loses original split region(delete 
> table)
> 
>
> Key: HBASE-25206
> URL: https://issues.apache.org/jira/browse/HBASE-25206
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment, snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> Steps to reproduce are as follows:
> 1. Create a table and put some data into the table:
> {code:java}
> create 'test1','cf'
> put 'test1','r1','cf','v1'
> put 'test1','r2','cf','v2'
> put 'test1','r3','cf','v3'
> put 'test1','r4','cf','v4'
> put 'test1','r5','cf','v5'
> {code}
> 2. Take a snapshot of the table:
> {code:java}
> snapshot 'test1','snap_test'
> {code}
> 3. Clone the snapshot to another table
> {code:java}
> clone_snapshot 'snap_test','test2'
> {code}
> 4. Split the original table
> {code:java}
> split 'test1','r3'
> {code}
> 5. Drop the original table
> {code:java}
> disable 'test1'
> drop 'test1'
> {code}
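> For reference, the same steps can also be driven through the Java client API. The sketch below is illustrative only: it uses the standard org.apache.hadoop.hbase.client Admin/Table calls, the class name is made up, and the connection settings are assumed to come from the local hbase-site.xml:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class CloneThenDropRepro {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = HBaseConfiguration.create();
>     try (Connection conn = ConnectionFactory.createConnection(conf);
>          Admin admin = conn.getAdmin()) {
>       TableName test1 = TableName.valueOf("test1");
>       // 1. Create the table and load a few rows (empty qualifier, matching
>       //    the shell's bare 'cf' column).
>       admin.createTable(TableDescriptorBuilder.newBuilder(test1)
>           .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf")).build());
>       try (Table t = conn.getTable(test1)) {
>         for (int i = 1; i <= 5; i++) {
>           t.put(new Put(Bytes.toBytes("r" + i))
>               .addColumn(Bytes.toBytes("cf"), Bytes.toBytes(""), Bytes.toBytes("v" + i)));
>         }
>       }
>       // 2. Snapshot the table, 3. clone the snapshot to a new table.
>       admin.snapshot("snap_test", test1);
>       admin.cloneSnapshot("snap_test", TableName.valueOf("test2"));
>       // 4. Split the original table (the split request is asynchronous,
>       //    just like the shell command above).
>       admin.split(test1, Bytes.toBytes("r3"));
>       // 5. Disable and drop the original table.
>       admin.disableTable(test1);
>       admin.deleteTable(test1);
>     }
>   }
> }
> {code}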
> After that, we see an error like the following in the RS log when opening the
> regions of the cloned table:
> {code:java}
> 2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
> Failed initialize of region= 
> test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back 
> memstore
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: 
> Unable to open link: org.apache.hadoop.hbase.io.HFileLink 
> locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open 
> link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
>  hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> at 
> org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 

[jira] [Updated] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-24 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-25206:
--
Fix Version/s: 2.2.7
   2.4.0
   2.3.3
   3.0.0-alpha-1

> Data loss can happen if a cloned table loses original split region(delete 
> table)
> 
>
> Key: HBASE-25206
> URL: https://issues.apache.org/jira/browse/HBASE-25206
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, Region Assignment, snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7
>

[jira] [Updated] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-22 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-25206:
-
Description: 
Steps to reproduce are as follows:

1. Create a table and put some data into the table:
{code:java}
create 'test1','cf'
put 'test1','r1','cf','v1'
put 'test1','r2','cf','v2'
put 'test1','r3','cf','v3'
put 'test1','r4','cf','v4'
put 'test1','r5','cf','v5'
{code}
2. Take a snapshot of the table:
{code:java}
snapshot 'test1','snap_test'
{code}
3. Clone the snapshot to another table
{code:java}
clone_snapshot 'snap_test','test2'
{code}
4. Split the original table
{code:java}
split 'test1','r3'
{code}
5. Drop the original table
{code:java}
disable 'test1'
drop 'test1'
{code}
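The data loss comes from the fact that the cloned table never gets its own copies of test1's hfiles: clone_snapshot creates HFileLink store files in test2 that merely point at test1's files, and opening such a link probes a fixed set of candidate directories (the four locations listed in the exception below). The following is an illustrative sketch only (the class name is made up and <HOST> stands for the elided namenode address); it just spells out those candidate paths for the file named in the log:
{code:java}
public class HFileLinkCandidates {
  public static void main(String[] args) {
    String rootDir = "hdfs://<HOST>:8020/hbase";             // HBase root dir from the log
    String suffix = "data/default/test1/"                    // source table of the link
        + "349b766b1b38e21f627ed4e441ae643c/cf/"             // source region
        + "b6e39865710345c8998dec0bcc94cc89";                // source hfile
    // Candidate locations probed when the link is opened, in the order they
    // appear in the FileNotFoundException below:
    String[] candidates = {
        rootDir + "/" + suffix,          // live data directory
        rootDir + "/.tmp/" + suffix,     // temporary area
        rootDir + "/mobdir/" + suffix,   // MOB storage
        rootDir + "/archive/" + suffix,  // archive kept for snapshots/clones
    };
    for (String path : candidates) {
      System.out.println(path);
    }
    // After test1 is split and then dropped, none of these paths exists any
    // more, so test2's region cannot open and its data is effectively lost.
  }
}
{code}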
After that, we see an error like the following in the RS log when opening the
regions of the cloned table:
{code:java}
2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
Failed initialize of region= 
test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back 
memstore
java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable 
to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open 
link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
at 
org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
at 
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: java.io.FileNotFoundException: Unable to open link: 
org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
at 
org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.tryOpen(FileLink.java:322)
at 

[jira] [Updated] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)

2020-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-25206:
-
Description: 
Steps to reproduce are as follows:

1. Create a table and put some data into the table:
{code:java}
create 'test1','cf'
put 'test1','r1','cf','v1'
put 'test1','r2','cf','v2'
put 'test1','r3','cf','v3'
put 'test1','r4','cf','v4'
put 'test1','r5','cf','v5'
{code}
2. Take a snapshot of the table:
{code:java}
snapshot 'test1','snap_test'
{code}
3. Clone the snapshot to another table
{code:java}
clone_snapshot 'snap_test','test2'
{code}
4. Delete the snapshot
{code:java}
delete_snapshot 'snap_test'
{code}
5. Split the original table
{code:java}
split 'test1','r3'
{code}
6. Drop the original table
{code:java}
disable 'test1'
drop 'test1'
{code}
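Because the cloned table's regions can no longer open once test1's files are gone (see the log below), client reads against test2 are not expected to succeed. A minimal verification sketch (the class name is made up; standard org.apache.hadoop.hbase.client API; the Get would be expected to fail once the client's retries are exhausted rather than return 'v1'):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadCloneAfterDrop {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table test2 = conn.getTable(TableName.valueOf("test2"))) {
      // The row was written with the bare column 'cf' (empty qualifier) in step 1.
      Result result = test2.get(new Get(Bytes.toBytes("r1")));
      // With test2's regions failing to open, this point is normally never
      // reached; the get gives up with a retries-exhausted error instead.
      System.out.println("r1/cf = "
          + Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes(""))));
    }
  }
}
{code}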
After that, we see an error like the following in the RS log when opening the
regions of the cloned table:
{code:java}
2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: 
Failed initialize of region= 
test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back 
memstore
java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable 
to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
at 
org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
at 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
at 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open 
link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
at 
org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
at 
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
Caused by: java.io.FileNotFoundException: Unable to open link: 
org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
 hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
at