[GitHub] [hbase-thirdparty] Apache-HBase commented on pull request #36: HBASE-24802 make a drop-in compatible impl of htrace APIs that does not do anything
Apache-HBase commented on pull request #36:
URL: https://github.com/apache/hbase-thirdparty/pull/36#issuecomment-716096432

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 59s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. |
| +0 :ok: | spotbugs | 0m 0s | spotbugs executables are not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 23s | master passed |
| +1 :green_heart: | compile | 0m 10s | master passed |
| +1 :green_heart: | checkstyle | 0m 30s | master passed |
| +1 :green_heart: | javadoc | 0m 5s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 5s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 16s | the patch passed |
| +1 :green_heart: | javac | 0m 16s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 31s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 0m 10s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 27s | hbase-noop-htrace in the patch passed. |
| +1 :green_heart: | unit | 0m 31s | root in the patch passed. |
| +1 :green_heart: | asflicense | 0m 9s | The patch does not generate ASF License warnings. |
| | | | 4m 54s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-Thirdparty-PreCommit/job/PR-36/32/artifact/yetus-precommit-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase-thirdparty/pull/36 |
| Optional Tests | dupname asflicense shellcheck shelldocs javac javadoc unit xml compile spotbugs findbugs checkstyle |
| uname | Linux 59c5e25d79f4 5.4.0-1025-aws #25~18.04.1-Ubuntu SMP Fri Sep 11 12:03:04 UTC 2020 x86_64 GNU/Linux |
| Build tool | maven |
| git revision | master / 661f647 |
| Default Java | Oracle Corporation-1.8.0_265-b01 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-Thirdparty-PreCommit/job/PR-36/32/testReport/ |
| Max. process+thread count | 365 (vs. ulimit of 1000) |
| modules | C: hbase-noop-htrace . U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-Thirdparty-PreCommit/job/PR-36/32/console |
| versions | git=2.20.1 shellcheck=0.5.0 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220210#comment-17220210 ]

Toshihiro Suzuki edited comment on HBASE-25206 at 10/24/20, 11:19 PM:
--
{quote}So even if snapshot cloned to new table or not, this case will cause data loss from snapshot.{quote}
[~anoop.hbase] Yes, this will happen whether or not the snapshot has been cloned to a new table.

was (Author: brfrn169):
[~anoop.hbase] Yes, this will happen even if snapshot cloned to new table or not.

> Data loss can happen if a cloned table loses original split region (delete table)
>
> Key: HBASE-25206
> URL: https://issues.apache.org/jira/browse/HBASE-25206
> Project: HBase
> Issue Type: Bug
> Components: proc-v2, Region Assignment, snapshots
> Reporter: Toshihiro Suzuki
> Assignee: Toshihiro Suzuki
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7
>
> Steps to reproduce are as follows:
> 1. Create a table and put some data into the table:
> {code:java}
> create 'test1','cf'
> put 'test1','r1','cf','v1'
> put 'test1','r2','cf','v2'
> put 'test1','r3','cf','v3'
> put 'test1','r4','cf','v4'
> put 'test1','r5','cf','v5'
> {code}
> 2. Take a snapshot of the table:
> {code:java}
> snapshot 'test1','snap_test'
> {code}
> 3. Clone the snapshot to another table:
> {code:java}
> clone_snapshot 'snap_test','test2'
> {code}
> 4. Split the original table:
> {code:java}
> split 'test1','r3'
> {code}
> 5. Drop the original table:
> {code:java}
> disable 'test1'
> drop 'test1'
> {code}
> After that, an error like the following appears in the RegionServer log when opening the regions of the cloned table:
> {code:java}
> 2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: Failed initialize of region= test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back memstore
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs:// HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs:// HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs:// HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs:// HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
>   at org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
>   at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
>   at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs:// HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs:// HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs:// HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89, hdfs:// HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
>   at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
>   at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
>   at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
>   at
> {code}
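The four candidate paths in the FileNotFoundException above illustrate the failure mode: a cloned table's store file is an HFileLink that names the *original* table's region, family, and file, and the reader probes a fixed set of locations under the HBase root directory for that file. Once the original table is split and dropped before the clone's regions open, none of the candidates exist. The sketch below is illustrative only, not HBase's actual `HFileLink` code; `candidatePaths` is a hypothetical helper whose four locations are taken from the log above:

```java
import java.util.Arrays;
import java.util.List;

public class HFileLinkCandidates {

    // Hypothetical helper (for illustration): build the four root-relative
    // locations probed for a linked HFile, matching the order in the log:
    // live data, tables under construction, MOB storage, and the archive.
    static List<String> candidatePaths(String rootDir, String table,
                                       String region, String family, String file) {
        String suffix = "/default/" + table + "/" + region + "/" + family + "/" + file;
        return Arrays.asList(
            rootDir + "/data" + suffix,          // live table data
            rootDir + "/.tmp/data" + suffix,     // tables being created/cloned
            rootDir + "/mobdir/data" + suffix,   // MOB files
            rootDir + "/archive/data" + suffix   // archived (dropped/compacted) files
        );
    }

    public static void main(String[] args) {
        // Values copied from the stack trace in the issue description.
        List<String> paths = candidatePaths("/hbase", "test1",
                "349b766b1b38e21f627ed4e441ae643c", "cf",
                "b6e39865710345c8998dec0bcc94cc89");
        paths.forEach(System.out::println);
    }
}
```

The fix for this class of problem is to keep the original region's files reachable (normally via the `archive` location) as long as a snapshot or clone still references them; here the split-parent region's files were lost when the table was dropped.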
[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220211#comment-17220211 ]

Toshihiro Suzuki commented on HBASE-25206:
--
[~zhangduo] Thank you for reviewing and committing this!
[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220210#comment-17220210 ]

Toshihiro Suzuki commented on HBASE-25206:
--
[~anoop.hbase] Yes, this will happen whether or not the snapshot is cloned to a new table.
[jira] [Commented] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220209#comment-17220209 ]

Hudson commented on HBASE-25206:
--
Results for branch branch-2.2 [build #106 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106/]:
(x) *{color:red}-1 overall{color}*
details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106//General_Nightly_Build_Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106//JDK8_Nightly_Build_Report_(Hadoop2)/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/branch-2.2/106//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}
[jira] [Resolved] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang resolved HBASE-25206.
-------------------------------
    Hadoop Flags: Reviewed
      Resolution: Fixed

Pushed to all active branches. Thanks [~brfrn169] for contributing.

> Data loss can happen if a cloned table loses original split region(delete table)
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-25206
>                 URL: https://issues.apache.org/jira/browse/HBASE-25206
>             Project: HBase
>          Issue Type: Bug
>          Components: proc-v2, Region Assignment, snapshots
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7
>
> Steps to reproduce are as follows:
> 1. Create a table and put some data into the table:
> {code:java}
> create 'test1','cf'
> put 'test1','r1','cf','v1'
> put 'test1','r2','cf','v2'
> put 'test1','r3','cf','v3'
> put 'test1','r4','cf','v4'
> put 'test1','r5','cf','v5'
> {code}
> 2. Take a snapshot of the table:
> {code:java}
> snapshot 'test1','snap_test'
> {code}
> 3. Clone the snapshot to another table:
> {code:java}
> clone_snapshot 'snap_test','test2'
> {code}
> 4. Split the original table:
> {code:java}
> split 'test1','r3'
> {code}
> 5. Drop the original table:
> {code:java}
> disable 'test1'
> drop 'test1'
> {code}
> After that, we see an error like the following in the RS log when opening the regions of the cloned table:
> {code:java}
> 2020-10-20 13:32:18,415 WARN org.apache.hadoop.hbase.regionserver.HRegion: Failed initialize of region=test2,,1603200595702.bebdc4f740626206eeccad96b7643261., starting to roll back memstore
> java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
> hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
> hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
> hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> 	at org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1095)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:943)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:899)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7246)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7204)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7176)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7134)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7085)
> 	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:283)
> 	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> 	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://<HOST>:8020/hbase/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
> hdfs://<HOST>:8020/hbase/.tmp/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
> hdfs://<HOST>:8020/hbase/mobdir/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89,
> hdfs://<HOST>:8020/hbase/archive/data/default/test1/349b766b1b38e21f627ed4e441ae643c/cf/b6e39865710345c8998dec0bcc94cc89]
> 	at org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:590)
> 	at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:557)
> 	at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:303)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
> 	at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
> 	at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
> 	at
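The four candidate paths in the log above show how an HFileLink is resolved: the cloned table's store file is a link back into the original table, and HBase probes the live data, .tmp, mobdir, and archive directories in turn. Once the original table's split parent is dropped and none of the four locations exists, region open fails with the FileNotFoundException above. A minimal Python sketch of that lookup order (the function and path layout are illustrative, not HBase's actual HFileLink code):

```python
import os

def resolve_hfile_link(root, table, region, family, hfile):
    # Candidate locations, in the same probe order as the RS log above:
    # live data dir, temp dir, mob dir, then archive.
    candidates = [
        os.path.join(root, "data", "default", table, region, family, hfile),
        os.path.join(root, ".tmp", "data", "default", table, region, family, hfile),
        os.path.join(root, "mobdir", "data", "default", table, region, family, hfile),
        os.path.join(root, "archive", "data", "default", table, region, family, hfile),
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    # Every location is missing: in HBase this surfaces as the
    # "Unable to open link" FileNotFoundException seen in the log.
    raise FileNotFoundError("Unable to open link: locations=%s" % candidates)
```

This is the crux of the bug: deleting the original table is only safe if its store files were archived first, so that the last candidate still resolves; otherwise the clone's links dangle and data is lost.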
[jira] [Updated] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-25206:
------------------------------
    Component/s: snapshots
                 Region Assignment
                 proc-v2
[jira] [Updated] (HBASE-25206) Data loss can happen if a cloned table loses original split region(delete table)
[ https://issues.apache.org/jira/browse/HBASE-25206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-25206:
------------------------------
    Fix Version/s: 2.2.7
                   2.4.0
                   2.3.3
                   3.0.0-alpha-1
[jira] [Commented] (HBASE-24552) Replica region needs to check if primary region directory exists at file system in TransitRegionStateProcedure
[ https://issues.apache.org/jira/browse/HBASE-24552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220127#comment-17220127 ]

Duo Zhang commented on HBASE-24552:
-----------------------------------

Found this when backporting HBASE-25206. Do we really need this check, or is it just for safety? The comment seems to indicate that we still have some bugs which lead to this problem. In general, if we do this, we will leave a region state node in memory in a corrupted state, which means we need to fix it through HBCK2? What is the suggested way to fix this problem? I think we need to add comments about all these things so that users are not confused... Thanks.

> Replica region needs to check if primary region directory exists at file system in TransitRegionStateProcedure
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-24552
>                 URL: https://issues.apache.org/jira/browse/HBASE-24552
>             Project: HBase
>          Issue Type: Bug
>          Components: read replicas
>    Affects Versions: 2.3.0
>            Reporter: Huaxiang Sun
>            Assignee: Huaxiang Sun
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.3.0
>
> In hbase-1, we always run into the situation where the primary region has been closed/removed while the replica region still stays in the master's in-memory db and remains open on one of the region servers. The balancer can move this replica region to a new region server. During region open, the replica region does not check whether the primary region has been removed, and moves forward. During store open, it recreates the primary region directory on HDFS, causing inconsistency.
>
> In hbase-2, things are much better. To prevent the above inconsistency from happening, it adds more checks for a replica region, i.e., whether the primary region's directory exists and there is a .regioninfo file under it.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
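The guard discussed in HBASE-24552 amounts to: before opening a replica, verify that the primary region's directory and its .regioninfo marker still exist on the filesystem, so that opening the replica cannot silently recreate a directory for a region that was already removed. A hedged Python sketch of that check (the function name and path layout are hypothetical, not HBase's actual TransitRegionStateProcedure code):

```python
import os

def primary_region_intact(table_dir, primary_encoded_name):
    """Return True only if the primary region's directory exists under the
    table directory AND contains a .regioninfo marker file. If either is
    missing, opening a replica would recreate the directory on the
    filesystem and leave master state and HDFS inconsistent."""
    region_dir = os.path.join(table_dir, primary_encoded_name)
    return (os.path.isdir(region_dir)
            and os.path.isfile(os.path.join(region_dir, ".regioninfo")))
```

Duo Zhang's question above is about what happens when this check fails: the replica's in-memory region state node is left behind, which is why he asks whether HBCK2 is then the expected repair path.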
[GitHub] [hbase] Apache9 merged pull request #2569: HBASE-25206 Data loss can happen if a cloned table loses original spl…
Apache9 merged pull request #2569: URL: https://github.com/apache/hbase/pull/2569 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-25193) Add support for row prefix and type in the WAL Pretty Printer and some minor fixes
[ https://issues.apache.org/jira/browse/HBASE-25193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220090#comment-17220090 ]

Hudson commented on HBASE-25193:
--------------------------------

Results for branch master [build #104 on builds.a.o|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/104/]: (/) *{color:green}+1 overall{color}*

details (if available):
(/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/104/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/104/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hadoop.apache.org/job/HBase/job/HBase%20Nightly/job/master/104/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Add support for row prefix and type in the WAL Pretty Printer and some minor fixes
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-25193
>                 URL: https://issues.apache.org/jira/browse/HBASE-25193
>             Project: HBase
>          Issue Type: Improvement
>          Components: wal
>            Reporter: Sandeep Pal
>            Assignee: Sandeep Pal
>            Priority: Minor
>             Fix For: 3.0.0-alpha-1, 2.3.3, 1.7.0, 2.4.0
>
> Currently, the WAL Pretty Printer has an option to filter keys by an exact row match. However, it is sometimes very useful to filter by a row-key prefix instead of an exact match.
> A prefix can act as a full-match filter as well, due to the nature of prefixes.
> Secondly, the cell type is not shown by the WAL Pretty Printer in any of the branches.
> Lastly, the rowkey-only option prints additional information as well.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
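The claim in the description that "a prefix can act as a full-match filter as well" is easy to see in code: any row is a prefix of itself, so a prefix filter subsumes exact matching (modulo longer rows that share the prefix). A minimal Python sketch of such a filter (hypothetical helper, not the actual WALPrettyPrinter implementation):

```python
def match_row(row, row_filter=None, prefix_filter=None):
    """Decide whether a WAL cell with the given row key passes the filters.
    row_filter requires an exact match; prefix_filter keeps any row that
    starts with the prefix. Passing the full row key as prefix_filter
    behaves like an exact filter, except that longer rows sharing the
    prefix also survive -- which is why exact match remains the stricter
    special case."""
    if row_filter is not None and row != row_filter:
        return False
    if prefix_filter is not None and not row.startswith(prefix_filter):
        return False
    return True
```

With no filters configured, every cell passes, which mirrors the default print-everything behaviour of a pretty printer.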