[jira] [Commented] (HBASE-26670) HFileLinkCleaner should be added even if snapshot is disabled
[ https://issues.apache.org/jira/browse/HBASE-26670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17505986#comment-17505986 ]

Yi Mei commented on HBASE-26670:
Pushed to branch-2.4+. Thanks [~zhangduo] and [~apurtell] for reviewing.

> HFileLinkCleaner should be added even if snapshot is disabled
> -------------------------------------------------------------
>
> Key: HBASE-26670
> URL: https://issues.apache.org/jira/browse/HBASE-26670
> Project: HBase
> Issue Type: Bug
> Components: snapshots
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Critical
> Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.11
>
> We encountered a problem in our cluster:
> 1. The cluster has many snapshots, which makes the archive directory very large.
> 2. We deleted some snapshots, but the cleaner ran slowly because there is a race on the synchronized method of SnapshotHFileCleaner.
> 3. We deleted all snapshots and disabled the snapshot feature (hbase.snapshot.enabled=false), so the cleaner skips the synchronized method in SnapshotHFileCleaner.
> 4. After the cleaner ran, some back-reference and data files under the archive directory were deleted, but they were still used by some restored tables. This does not meet expectations.
> One solution is to add HFileLinkCleaner even if snapshot is disabled.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
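The fix described above amounts to registering the link cleaner unconditionally instead of only alongside the snapshot cleaner. A minimal sketch of that chain-building logic follows; the class name and plain cleaner names are illustrative stand-ins, not HBase's actual master code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed fix: build the HFile cleaner chain so that the
// link cleaner is present regardless of the snapshot feature flag.
public class CleanerChainSketch {

    static List<String> buildCleaners(boolean snapshotEnabled) {
        List<String> cleaners = new ArrayList<>();
        cleaners.add("TimeToLiveHFileCleaner");
        // Before the fix, HFileLinkCleaner was effectively tied to the snapshot
        // feature; with snapshots disabled, archived files still referenced by
        // restored tables could be deleted. Now it is always registered.
        cleaners.add("HFileLinkCleaner");
        if (snapshotEnabled) {
            cleaners.add("SnapshotHFileCleaner"); // snapshot-specific checks only
        }
        return cleaners;
    }

    public static void main(String[] args) {
        System.out.println(buildCleaners(false));
        System.out.println(buildCleaners(true));
    }
}
```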
[jira] [Updated] (HBASE-26670) HFileLinkCleaner should be added even if snapshot is disabled
[ https://issues.apache.org/jira/browse/HBASE-26670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei updated HBASE-26670:
Resolution: Fixed
Status: Resolved (was: Patch Available)
[jira] [Assigned] (HBASE-26827) RegionServer JVM crash when compact mob table
[ https://issues.apache.org/jira/browse/HBASE-26827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei reassigned HBASE-26827:
Assignee: Yi Mei

> RegionServer JVM crash when compact mob table
> ---------------------------------------------
>
> Key: HBASE-26827
> URL: https://issues.apache.org/jira/browse/HBASE-26827
> Project: HBase
> Issue Type: Bug
> Components: Compaction
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Major
>
> When compacting a mob table, the RS JVM may crash or fail to complete the compaction, as in the following logs:
> {code:java}
> 2022-03-11T16:18:44,089 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45525-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(675): Compaction failed Request=regionName=t1,,1646986716811.964618e679a2434aa7d27018baef8154., storeName=A, fileCount=2, fileSize=2.0 M (1010.2 K, 1010.2 K), priority=1, time=1646986723135
> java.io.IOException: Mob compaction failed for region: 964618e679a2434aa7d27018baef8154
> 	at org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.performCompaction(DefaultMobStoreCompactor.java:574) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:365) ~[classes/:?]
> 	at org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.compact(DefaultMobStoreCompactor.java:225) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:125) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1141) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2442) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:656) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:702) ~[classes/:?]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_292]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_292]
> 	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
> Caused by: java.io.IOException: Added a key not lexically larger than previous. Current cell = org.apache.hadoop.hbase.PrivateCellUtil$ValueAndTagRewriteByteBufferExtendedCell@565d5bac, prevCell = user/A:filed01/1646986721047/Put/vlen=0/mvcc=0
> 	at org.apache.hadoop.hbase.util.BloomContext.sanityCheck(BloomContext.java:63) ~[classes/:?]
> 	at org.apache.hadoop.hbase.util.BloomContext.writeBloom(BloomContext.java:54) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.StoreFileWriter.appendGeneralBloomfilter(StoreFileWriter.java:296) ~[classes/:?]
> 	at org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(StoreFileWriter.java:315) ~[classes/:?]
> 	at org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.performCompaction(DefaultMobStoreCompactor.java:464) ~[classes/:?]
> 	... 10 more
> {code}
> It is the same problem as [HBASE-25929|https://issues.apache.org/jira/browse/HBASE-25929], because DefaultMobStoreCompactor overrides the performCompaction method of DefaultCompactor.
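The root cause in the trace above is the bloom-filter sanity check: a store file writer must receive keys in sorted order, and an out-of-order key produces the "Added a key not lexically larger than previous" error. The self-contained sketch below mirrors that invariant; it is not HBase's BloomContext implementation, and it throws an unchecked exception for brevity where HBase throws IOException:

```java
// Sketch of the ordering invariant enforced when appending cells/keys to a
// store file writer: each new key must not sort before the previous one
// under unsigned lexicographic byte comparison.
public class KeyOrderSketch {

    private byte[] prevKey = null;

    // Throws if the new key sorts before the previously appended key,
    // mimicking the sanity check that failed in the stack trace above.
    void append(byte[] key) {
        if (prevKey != null && compare(prevKey, key) > 0) {
            throw new IllegalStateException(
                "Added a key not lexically larger than previous");
        }
        prevKey = key;
    }

    // Unsigned lexicographic byte comparison (what "lexically larger" means).
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```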
[jira] [Created] (HBASE-26827) RegionServer JVM crash when compact mob table
Yi Mei created HBASE-26827:
Summary: RegionServer JVM crash when compact mob table
Key: HBASE-26827
URL: https://issues.apache.org/jira/browse/HBASE-26827
Project: HBase
Issue Type: Bug
Components: Compaction
Reporter: Yi Mei
[jira] [Created] (HBASE-26670) HFileLinkCleaner should be added even if snapshot is disabled
Yi Mei created HBASE-26670:
Summary: HFileLinkCleaner should be added even if snapshot is disabled
Key: HBASE-26670
URL: https://issues.apache.org/jira/browse/HBASE-26670
Project: HBase
Issue Type: Bug
Reporter: Yi Mei
[jira] [Resolved] (HBASE-26646) WALPlayer should obtain token from filesystem
[ https://issues.apache.org/jira/browse/HBASE-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei resolved HBASE-26646.
Fix Version/s: 2.5.0
               3.0.0-alpha-3
               2.4.10
Resolution: Fixed

> WALPlayer should obtain token from filesystem
> ---------------------------------------------
>
> Key: HBASE-26646
> URL: https://issues.apache.org/jira/browse/HBASE-26646
> Project: HBase
> Issue Type: Bug
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10
>
> When we use WALPlayer, we get the following exception:
> {code:java}
> 2021-12-27 17:20:13,388 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "c4-hadoop-tst-st95.bj/10.132.18.11"; destination host is: "c4-hadoop-tst-ct01.bj":58300;
> 	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:775)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1488)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> 	at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:807)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:249)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:107)
> 	at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2115)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1221)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1217)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1233)
> 	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64)
> 	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168)
> 	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:332)
> 	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:314)
> 	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:302)
> 	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:444)
> 	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(AbstractFSWALProvider.java:497)
> 	at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.openReader(WALInputFormat.java:161)
> 	at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:154)
> 	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1885)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {code}
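The idea behind the fix is that, before submitting the MapReduce job, WALPlayer should acquire delegation tokens for every filesystem the job reads or writes (in Hadoop MapReduce this is typically done with TokenCache.obtainTokensForNamenodes, passing the job's paths). The helper below merely computes the distinct filesystem roots a job would need tokens for; it is an illustrative sketch, not WALPlayer's actual code:

```java
import java.net.URI;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: collect the distinct filesystem roots (scheme://authority) touched
// by a job's input and output paths. Each root identifies a namenode whose
// delegation token must be in the job's credentials before the map tasks run,
// otherwise they fail with "Client cannot authenticate via:[TOKEN, KERBEROS]".
public class TokenTargetsSketch {

    static Set<String> filesystemRoots(String... paths) {
        Set<String> roots = new LinkedHashSet<>();
        for (String p : paths) {
            URI u = URI.create(p);
            roots.add(u.getScheme() + "://" + u.getAuthority());
        }
        return roots;
    }
}
```

With WAL input and bulk-output directories on the same namenode, one token suffices; a WAL directory on a different cluster adds a second target.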
[jira] [Commented] (HBASE-26646) WALPlayer should obtain token from filesystem
[ https://issues.apache.org/jira/browse/HBASE-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17470500#comment-17470500 ]

Yi Mei commented on HBASE-26646:
Pushed to branch-2.4+. Thanks [~zhangduo] and [~shahrs87] for reviewing.
[jira] [Assigned] (HBASE-26646) WALPlayer should obtain token from filesystem
[ https://issues.apache.org/jira/browse/HBASE-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei reassigned HBASE-26646:
Assignee: Yi Mei
[jira] [Created] (HBASE-26646) WALPlayer should obtain token from filesystem
Yi Mei created HBASE-26646:
Summary: WALPlayer should obtain token from filesystem
Key: HBASE-26646
URL: https://issues.apache.org/jira/browse/HBASE-26646
Project: HBase
Issue Type: Bug
Reporter: Yi Mei
[jira] [Commented] (HBASE-26625) ExportSnapshot tool failed to copy data files for tables with merge region
[ https://issues.apache.org/jira/browse/HBASE-26625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17466683#comment-17466683 ]

Yi Mei commented on HBASE-26625:
Pushed to branch-2.4+. Thanks [~zhangduo] for reviewing.

> ExportSnapshot tool failed to copy data files for tables with merge region
> --------------------------------------------------------------------------
>
> Key: HBASE-26625
> URL: https://issues.apache.org/jira/browse/HBASE-26625
> Project: HBase
> Issue Type: Bug
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10
>
> When exporting a snapshot of a table with merged regions, we saw the following exception:
> {code:java}
> 2021-12-24 17:14:41,563 INFO [main] snapshot.ExportSnapshot: Finalize the Snapshot Export
> 2021-12-24 17:14:41,589 INFO [main] snapshot.ExportSnapshot: Verify snapshot integrity
> 2021-12-24 17:14:41,683 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
> org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 043a9fe8aa7c469d8324956a57849db5.8e935527eb39a2cf9bf0f596754b5853 path=A/a=t42=8e935527eb39a2cf9bf0f596754b5853-043a9fe8aa7c469d8324956a57849db5
> 	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:232)
> 	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:195)
> 	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:172)
> 	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:156)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.verifySnapshot(ExportSnapshot.java:851)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.doWork(ExportSnapshot.java:1096)
> 	at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:280)
> 	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1144)
> {code}
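The "missing parent hfile" name in the trace above appears to follow the reference-file naming convention, where the file name encodes both the referenced hfile and the encoded name of the parent region it came from (here, a region that was merged away). A small parser for that convention, as an illustration only; HBase's real handling lives in classes such as StoreFileInfo and HFileLink, and the exact convention should be checked against those:

```java
// Sketch: split a reference-style store file name of the assumed form
//   <referenced-hfile>.<encoded-parent-region-name>
// into its two components. Illustrative only, not HBase's parser.
public class RefNameSketch {

    static String[] parse(String refFileName) {
        int dot = refFileName.indexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException("not a reference file name: " + refFileName);
        }
        // [0] = hfile in the parent region, [1] = encoded parent region name
        return new String[] { refFileName.substring(0, dot), refFileName.substring(dot + 1) };
    }
}
```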
[jira] [Resolved] (HBASE-26625) ExportSnapshot tool failed to copy data files for tables with merge region
[ https://issues.apache.org/jira/browse/HBASE-26625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei resolved HBASE-26625.
Fix Version/s: 2.5.0
               3.0.0-alpha-3
               2.4.10
Resolution: Fixed
[jira] [Assigned] (HBASE-26625) ExportSnapshot tool failed to copy data files for tables with merge region
[ https://issues.apache.org/jira/browse/HBASE-26625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei reassigned HBASE-26625:
Assignee: Yi Mei
[jira] [Updated] (HBASE-26625) ExportSnapshot tool failed to copy data files for tables with merge region
[ https://issues.apache.org/jira/browse/HBASE-26625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-26625: --- Summary: ExportSnapshot tool failed to copy data files for tables with merge region (was: ExportSnapshot tool fail to copy data files for tables with merge region) > ExportSnapshot tool failed to copy data files for tables with merge region > -- > > Key: HBASE-26625 > URL: https://issues.apache.org/jira/browse/HBASE-26625 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Priority: Minor > > When export snapshot for a table with merge regions, we found following > exceptions: > {code:java} > 2021-12-24 17:14:41,563 INFO [main] snapshot.ExportSnapshot: Finalize the > Snapshot Export > 2021-12-24 17:14:41,589 INFO [main] snapshot.ExportSnapshot: Verify snapshot > integrity > 2021-12-24 17:14:41,683 ERROR [main] snapshot.ExportSnapshot: Snapshot export > failed > org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent > hfile for: 043a9fe8aa7c469d8324956a57849db5.8e935527eb39a2cf9bf0f596754b5853 > path=A/a=t42=8e935527eb39a2cf9bf0f596754b5853-043a9fe8aa7c469d8324956a57849db5 > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:232) > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:195) > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:172) > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:156) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot.verifySnapshot(ExportSnapshot.java:851) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot.doWork(ExportSnapshot.java:1096) > at > org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > 
org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:280) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1144) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26625) ExportSnapshot tool fail to copy data files for tables with merge region
Yi Mei created HBASE-26625: -- Summary: ExportSnapshot tool fail to copy data files for tables with merge region Key: HBASE-26625 URL: https://issues.apache.org/jira/browse/HBASE-26625 Project: HBase Issue Type: Bug Reporter: Yi Mei When export snapshot for a table with merge regions, we found following exceptions: {code:java} 2021-12-24 17:14:41,563 INFO [main] snapshot.ExportSnapshot: Finalize the Snapshot Export 2021-12-24 17:14:41,589 INFO [main] snapshot.ExportSnapshot: Verify snapshot integrity 2021-12-24 17:14:41,683 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 043a9fe8aa7c469d8324956a57849db5.8e935527eb39a2cf9bf0f596754b5853 path=A/a=t42=8e935527eb39a2cf9bf0f596754b5853-043a9fe8aa7c469d8324956a57849db5 at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:232) at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:195) at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:172) at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:156) at org.apache.hadoop.hbase.snapshot.ExportSnapshot.verifySnapshot(ExportSnapshot.java:851) at org.apache.hadoop.hbase.snapshot.ExportSnapshot.doWork(ExportSnapshot.java:1096) at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:280) at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1144) {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
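From the exception message above, the missing file is named in the form `<hfile>.<encodedParentRegionName>` (here `043a9fe8...` is the hfile and `8e935527...` the merge parent region), so the export tool must resolve such reference names back to a parent region that may no longer exist in the live table. A minimal illustrative sketch of splitting that name — this is hypothetical helper code, not HBase's actual ExportSnapshot implementation:

```java
// Sketch only: split a snapshot reference-file name of the form
// "<hfile>.<encodedParentRegionName>", as seen in the
// CorruptedSnapshotException message above.
class RefNameSketch {
    static String[] parse(String refName) {
        int dot = refName.lastIndexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException("not a reference file name: " + refName);
        }
        return new String[] { refName.substring(0, dot), refName.substring(dot + 1) };
    }

    public static void main(String[] args) {
        String[] parts =
            parse("043a9fe8aa7c469d8324956a57849db5.8e935527eb39a2cf9bf0f596754b5853");
        // Prints the hfile name and the merge parent's encoded region name.
        System.out.println("hfile=" + parts[0] + ", parent region=" + parts[1]);
    }
}
```

The verification failure occurs when the second component points at a merge parent whose files were never copied.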
[jira] [Commented] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
[ https://issues.apache.org/jira/browse/HBASE-26615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17464470#comment-17464470 ] Yi Mei commented on HBASE-26615: Pushed to branch-2.4+. Thanks [~zhangduo] and [~huangzhuoyue] for reviewing. > Snapshot referenced data files are deleted when delete a table with merge > regions > - > > Key: HBASE-26615 > URL: https://issues.apache.org/jira/browse/HBASE-26615 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10 > > > In our cluster, we have a feature: take a snapshot when delete a table. > But when we restore the snapshot, we found that some data files are deleted. > The problem is that, when delete a table with merge regions, HBase only > archive regions in meta, and merged parent regions are deleted in file system > which contain data files in snapshot . > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
[ https://issues.apache.org/jira/browse/HBASE-26615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-26615. Fix Version/s: 2.5.0 3.0.0-alpha-3 2.4.10 Resolution: Fixed > Snapshot referenced data files are deleted when delete a table with merge > regions > - > > Key: HBASE-26615 > URL: https://issues.apache.org/jira/browse/HBASE-26615 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10 > > > In our cluster, we have a feature: take a snapshot when delete a table. > But when we restore the snapshot, we found that some data files are deleted. > The problem is that, when delete a table with merge regions, HBase only > archive regions in meta, and merged parent regions are deleted in file system > which contain data files in snapshot . > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
[ https://issues.apache.org/jira/browse/HBASE-26615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-26615: -- Assignee: Yi Mei > Snapshot referenced data files are deleted when delete a table with merge > regions > - > > Key: HBASE-26615 > URL: https://issues.apache.org/jira/browse/HBASE-26615 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > In our cluster, we have a feature: take a snapshot when delete a table. > But when we restore the snapshot, we found that some data files are deleted. > The problem is that, when delete a table with merge regions, HBase only > archive regions in meta, and merged parent regions are deleted in file system > which contain data files in snapshot . > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
[ https://issues.apache.org/jira/browse/HBASE-26615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-26615: --- Description: In our cluster, we have a feature: take a snapshot when delete a table. But when we restore the snapshot, we found that some data files are deleted. The problem is that, when delete a table with merge regions, HBase only archive regions in meta, and merged parent regions are deleted in file system which contain data files in snapshot . was: In our cluster, we have a feature: take a snapshot when delete a table. But when we restore the snapshot, we found that some data files are deleted. The problem is that, when delete a table, HBase only archive regions in meta. > Snapshot referenced data files are deleted when delete a table with merge > regions > - > > Key: HBASE-26615 > URL: https://issues.apache.org/jira/browse/HBASE-26615 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Priority: Major > > In our cluster, we have a feature: take a snapshot when delete a table. > But when we restore the snapshot, we found that some data files are deleted. > The problem is that, when delete a table with merge regions, HBase only > archive regions in meta, and merged parent regions are deleted in file system > which contain data files in snapshot . > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
Yi Mei created HBASE-26615: -- Summary: Snapshot referenced data files are deleted when delete a table with merge regions Key: HBASE-26615 URL: https://issues.apache.org/jira/browse/HBASE-26615 Project: HBase Issue Type: Bug Reporter: Yi Mei In our cluster, we have a feature that takes a snapshot when a table is deleted. But when we restored the snapshot, we found that some data files had been deleted. The problem is that, when deleting a table, HBase only archives the regions listed in meta. -- This message was sent by Atlassian Jira (v8.20.1#820001)
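The failure mode described above can be modeled as a set difference: only regions still present in meta are archived on table delete, so merge parents (already purged from meta but still holding hfiles referenced by a snapshot) are deleted outright. A toy model of that logic — hypothetical, not HBase's actual delete-table code:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the bug: archive only regions still listed in meta;
// everything else on the filesystem is deleted, even if a snapshot
// still references its files.
class DeleteTableSketch {
    static Set<String> archived(Set<String> regionsOnFs, Set<String> regionsInMeta) {
        Set<String> kept = new HashSet<>(regionsOnFs);
        kept.retainAll(regionsInMeta); // only these survive into the archive
        return kept;
    }

    public static void main(String[] args) {
        Set<String> onFs = Set.of("merged", "parentA", "parentB"); // parents still hold hfiles
        Set<String> inMeta = Set.of("merged");                     // parents already removed from meta
        // parentA/parentB are lost, breaking snapshot references to their files.
        System.out.println(archived(onFs, inMeta)); // [merged]
    }
}
```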
[jira] [Comment Edited] (HBASE-26261) Store configuration loss when use update_config
[ https://issues.apache.org/jira/browse/HBASE-26261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17418546#comment-17418546 ] Yi Mei edited comment on HBASE-26261 at 9/22/21, 11:33 AM: --- Pushed to branch-2.3+. Thanks [~zhangduo] for reviewing. was (Author: yi mei): Pushed to branch-2.3. Thanks [~zhangduo] for reviewing. > Store configuration loss when use update_config > --- > > Key: HBASE-26261 > URL: https://issues.apache.org/jira/browse/HBASE-26261 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.7 > > > When use update_config shell command, some store configuration is loss. > When initialize store, the conf is set by: > {code:java} > this.conf = new CompoundConfiguration() > .add(confParam) > .addBytesMap(region.getTableDescriptor().getValues()) > .addStringMap(family.getConfiguration()) > .addBytesMap(family.getValues()); > {code} > when change configuration, the conf is set by: > {code:java} > this.conf = new CompoundConfiguration() > .add(conf) > .addBytesMap(getColumnFamilyDescriptor().getValues()); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26261) Store configuration loss when use update_config
[ https://issues.apache.org/jira/browse/HBASE-26261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-26261. Fix Version/s: 2.4.7 2.3.7 3.0.0-alpha-2 2.5.0 Resolution: Fixed Pushed to branch-2.3. Thanks [~zhangduo] for reviewing. > Store configuration loss when use update_config > --- > > Key: HBASE-26261 > URL: https://issues.apache.org/jira/browse/HBASE-26261 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.7 > > > When use update_config shell command, some store configuration is loss. > When initialize store, the conf is set by: > {code:java} > this.conf = new CompoundConfiguration() > .add(confParam) > .addBytesMap(region.getTableDescriptor().getValues()) > .addStringMap(family.getConfiguration()) > .addBytesMap(family.getValues()); > {code} > when change configuration, the conf is set by: > {code:java} > this.conf = new CompoundConfiguration() > .add(conf) > .addBytesMap(getColumnFamilyDescriptor().getValues()); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-26270) Provide getConfiguration method for Region and Store interface
[ https://issues.apache.org/jira/browse/HBASE-26270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-26270: --- Release Note: Provide 'getReadOnlyConfiguration' method for Store and Region interface (was: Provide a 'getReadOnlyConfiguration' for Store and Region interface) > Provide getConfiguration method for Region and Store interface > -- > > Key: HBASE-26270 > URL: https://issues.apache.org/jira/browse/HBASE-26270 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.7 > > > In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261], > [~zhangduo] suggest that we should provide getConfiguration method for Region > and Store interface -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26270) Provide getConfiguration method for Region and Store interface
[ https://issues.apache.org/jira/browse/HBASE-26270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-26270. Fix Version/s: 2.4.7 2.3.7 3.0.0-alpha-2 2.5.0 Release Note: Provide a 'getReadOnlyConfiguration' for Store and Region interface Resolution: Fixed > Provide getConfiguration method for Region and Store interface > -- > > Key: HBASE-26270 > URL: https://issues.apache.org/jira/browse/HBASE-26270 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.7 > > > In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261], > [~zhangduo] suggest that we should provide getConfiguration method for Region > and Store interface -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-26270) Provide getConfiguration method for Region and Store interface
[ https://issues.apache.org/jira/browse/HBASE-26270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17418392#comment-17418392 ] Yi Mei commented on HBASE-26270: Pushed to branch-2.3+. Thanks all for reviewing. > Provide getConfiguration method for Region and Store interface > -- > > Key: HBASE-26270 > URL: https://issues.apache.org/jira/browse/HBASE-26270 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261], > [~zhangduo] suggested that we should provide a getConfiguration method for Region > and Store interface -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26270) Provide getConfiguration method for Region and Store interface
[ https://issues.apache.org/jira/browse/HBASE-26270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-26270: -- Assignee: Yi Mei > Provide getConfiguration method for Region and Store interface > -- > > Key: HBASE-26270 > URL: https://issues.apache.org/jira/browse/HBASE-26270 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261], > [~zhangduo] suggest that we should provide getConfiguration method for Region > and Store interface -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-26270) Provide getConfiguration method for Region and Store interface
Yi Mei created HBASE-26270: -- Summary: Provide getConfiguration method for Region and Store interface Key: HBASE-26270 URL: https://issues.apache.org/jira/browse/HBASE-26270 Project: HBase Issue Type: Improvement Reporter: Yi Mei In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261], [~zhangduo] suggested that we should provide a getConfiguration method for the Region and Store interfaces. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-26261) Store configuration loss when use update_config
[ https://issues.apache.org/jira/browse/HBASE-26261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-26261: -- Assignee: Yi Mei > Store configuration loss when use update_config > --- > > Key: HBASE-26261 > URL: https://issues.apache.org/jira/browse/HBASE-26261 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > When use update_config shell command, some store configuration is loss. > When initialize store, the conf is set by: > {code:java} > this.conf = new CompoundConfiguration() > .add(confParam) > .addBytesMap(region.getTableDescriptor().getValues()) > .addStringMap(family.getConfiguration()) > .addBytesMap(family.getValues()); > {code} > when change configuration, the conf is set by: > {code:java} > this.conf = new CompoundConfiguration() > .add(conf) > .addBytesMap(getColumnFamilyDescriptor().getValues()); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-26261) Store configuration loss when use update_config
Yi Mei created HBASE-26261: -- Summary: Store configuration loss when use update_config Key: HBASE-26261 URL: https://issues.apache.org/jira/browse/HBASE-26261 Project: HBase Issue Type: Bug Reporter: Yi Mei When using the update_config shell command, some store configuration is lost. When the store is initialized, the conf is set by: {code:java} this.conf = new CompoundConfiguration() .add(confParam) .addBytesMap(region.getTableDescriptor().getValues()) .addStringMap(family.getConfiguration()) .addBytesMap(family.getValues()); {code} When the configuration is changed, the conf is set by: {code:java} this.conf = new CompoundConfiguration() .add(conf) .addBytesMap(getColumnFamilyDescriptor().getValues()); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
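The two snippets above layer configuration sources in different orders, and since later layers override earlier ones, rebuilding the conf without the table- and family-level layers silently drops store overrides. A toy illustration of the overlay semantics, using a hypothetical stand-in class (not HBase's actual CompoundConfiguration, and the property name is only an example):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a compound configuration: add() overlays
// a layer, and later layers win on key conflicts.
class LayeredConf {
    private final Map<String, String> merged = new HashMap<>();

    LayeredConf add(Map<String, String> layer) {
        merged.putAll(layer);
        return this;
    }

    String get(String key) {
        return merged.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> base = Map.of("hbase.hstore.blockingStoreFiles", "16");
        Map<String, String> familyConf = Map.of("hbase.hstore.blockingStoreFiles", "32");

        // Store initialization: base config plus the column-family override.
        LayeredConf atInit = new LayeredConf().add(base).add(familyConf);
        // Reload path that skips the family layer: the override disappears.
        LayeredConf afterUpdate = new LayeredConf().add(base);

        System.out.println(atInit.get("hbase.hstore.blockingStoreFiles"));      // 32
        System.out.println(afterUpdate.get("hbase.hstore.blockingStoreFiles")); // 16
    }
}
```

This mirrors why the reload path must re-apply every layer used at initialization, not just the descriptor values.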
[jira] [Commented] (HBASE-24734) RegionInfo#containsRange should support check meta table
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17385394#comment-17385394 ] Yi Mei commented on HBASE-24734: Push to branch-2.3+. Thanks all for review. > RegionInfo#containsRange should support check meta table > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Assignee: Yi Mei >Priority: Major > Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5 > > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator > Collections.addAll(set, keys); > // This will output the keys incorrectly. > boolean assertion = false; > int count = 0; > try { > for (Cell k: set) { > assertTrue("count=" + count + ", " + k.toString(), count++ == > k.getTimestamp()); > } > } catch (AssertionError e) { > // Expected > assertion = true; > } > assertTrue(assertion); > // Make set with good
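The quoted test exercises meta-style row keys of the form `<table>,<startKey>,<id>`, which sort incorrectly under the plain cell comparator. A simplified sketch of why a raw lexicographic compare and a delimiter-aware compare disagree — this is illustrative only, not HBase's CellComparatorImpl or meta comparator:

```java
// Sketch: meta-style row keys "<table>,<startKey>,<id>". A raw byte/char
// compare orders "a,\u0001,3" before "a,,1" because 0x01 < ',' (0x2C),
// while a delimiter-aware compare treats the empty start key as smallest.
class ComparatorSketch {
    static int rawCompare(String a, String b) {
        return a.compareTo(b);
    }

    static int metaStyleCompare(String a, String b) {
        String[] pa = a.split(",", 3), pb = b.split(",", 3);
        int c = pa[0].compareTo(pb[0]);     // table name
        if (c != 0) return c;
        c = pa[1].compareTo(pb[1]);         // start key: "" sorts first
        if (c != 0) return c;
        return pa[2].compareTo(pb[2]);      // region id
    }

    public static void main(String[] args) {
        String k1 = "a,\u0001,3", k2 = "a,,1";
        System.out.println(rawCompare(k1, k2) < 0);       // true: wrong order for meta
        System.out.println(metaStyleCompare(k1, k2) > 0); // true: empty start key first
    }
}
```

The bug report's point is exactly this mismatch: opening hbase:meta with the plain comparator produces region ranges that look invalid.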
[jira] [Assigned] (HBASE-24734) RegionInfo#containsRange should support check meta table
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-24734: -- Assignee: Yi Mei > RegionInfo#containsRange should support check meta table > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Assignee: Yi Mei >Priority: Major > Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5 > > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator > Collections.addAll(set, keys); > // This will output the keys incorrectly. > boolean assertion = false; > int count = 0; > try { > for (Cell k: set) { > assertTrue("count=" + count + ", " + k.toString(), count++ == > k.getTimestamp()); > } > } catch (AssertionError e) { > // Expected > assertion = true; > } > assertTrue(assertion); > // Make set with good comparator > set = new
[jira] [Resolved] (HBASE-24734) RegionInfo#containsRange should support check meta table
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-24734. Fix Version/s: 2.4.5 3.0.0-alpha-2 2.3.6 2.5.0 Resolution: Fixed > RegionInfo#containsRange should support check meta table > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Priority: Major > Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5 > > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator > Collections.addAll(set, keys); > // This will output the keys incorrectly. > boolean assertion = false; > int count = 0; > try { > for (Cell k: set) { > assertTrue("count=" + count + ", " + k.toString(), count++ == > k.getTimestamp()); > } > } catch (AssertionError e) { > // Expected > assertion = true; > } > assertTrue(assertion); > //
[jira] [Updated] (HBASE-24734) RegionInfo#containsRange should support check meta table
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-24734: --- Summary: RegionInfo#containsRange should support check meta table (was: Wrong comparator opening Region when 'split-to-WAL' enabled.) > RegionInfo#containsRange should support check meta table > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Priority: Major > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator > Collections.addAll(set, keys); > // This will output the keys incorrectly. > boolean assertion = false; > int count = 0; > try { > for (Cell k: set) { > assertTrue("count=" + count + ", " + k.toString(), count++ == > k.getTimestamp()); > } > } catch (AssertionError e) { > // Expected > assertion = true; > } > assertTrue(assertion); > // Make set with good comparator > set = new
[jira] [Commented] (HBASE-24734) Wrong comparator opening Region when 'split-to-WAL' enabled.
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17381808#comment-17381808 ] Yi Mei commented on HBASE-24734: Hi [~anoop.hbase] when splitting the meta WAL to HFiles, it already uses the META comparator, see BoundedRecoveredHFilesOutputSink#createRecoveredHFileWriter and BoundedRecoveredHFilesOutputSink#append. So the recovered HFile for meta is correct. Then, when moving the recovered HFile from the tmp dir to the CF dir, it checks whether the first and last keys of the HFile are within the region range and whether first key <= last key (see HStore#tryCommitRecoveredHFile); this is done by the containsRange method, which does not consider the meta table. > Wrong comparator opening Region when 'split-to-WAL' enabled. > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Priority: Major > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator >
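The mis-ordering the quoted test exercises comes down to raw byte comparison versus delimiter-aware comparison of meta-style row keys ("table,startkey,regionid"): any start-key byte below the ',' delimiter (0x2C) sorts the wrong way under plain byte order. A toy sketch of the contrast (plain Java; this is deliberately simplified and is not HBase's actual CellComparatorImpl.META_COMPARATOR logic):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class MetaKeyOrdering {

  // Plain lexicographic byte comparison, as Bytes.compareTo would do.
  static int rawCompare(byte[] a, byte[] b) {
    return Arrays.compare(a, b);
  }

  // Delimiter-aware comparison: split "table,startkey,regionid" and
  // compare field by field, so the empty start key sorts first.
  static int metaStyleCompare(String a, String b) {
    String[] pa = a.split(",", 3);
    String[] pb = b.split(",", 3);
    int c = pa[0].compareTo(pb[0]);   // table name
    if (c != 0) return c;
    c = pa[1].compareTo(pb[1]);       // embedded start key
    if (c != 0) return c;
    return pa[2].compareTo(pb[2]);    // region id part
  }

  public static void main(String[] args) {
    byte[] k1 = "a,,1".getBytes(StandardCharsets.UTF_8);
    byte[] k2 = "a,\u0001,3".getBytes(StandardCharsets.UTF_8);
    // Raw bytes: 0x01 < ',' (0x2C), so k2 sorts before k1 -> wrong for meta.
    System.out.println(rawCompare(k1, k2) > 0);                      // true
    // Delimiter-aware: the empty start key sorts first, so k1 before k2.
    System.out.println(metaStyleCompare("a,,1", "a,\u0001,3") < 0);  // true
  }
}
```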
[jira] [Commented] (HBASE-24734) Wrong comparator opening Region when 'split-to-WAL' enabled.
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17381711#comment-17381711 ] Yi Mei commented on HBASE-24734: Maybe the containsRange method should consider both normal tables and the meta table. I added a patch for this; please check whether the fix is OK. [~stack] [~anoop.hbase] > Wrong comparator opening Region when 'split-to-WAL' enabled. > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Priority: Major > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator > Collections.addAll(set, keys); > // This will output the keys incorrectly. > boolean assertion = false; > int count = 0; > try { > for (Cell k: set) { > assertTrue("count=" + count + ", " + k.toString(), count++ == > k.getTimestamp()); > } > } catch (AssertionError e) { > // Expected > assertion = true; > } >
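One way to picture the proposed change (a hypothetical simplified sketch, not the actual patch to RegionInfoBuilder$MutableRegionInfo): make the range check comparator-agnostic, so hbase:meta can supply a meta-aware comparator while normal tables keep plain byte order. The empty end key is treated as +infinity, as for HBase regions:

```java
import java.util.Arrays;
import java.util.Comparator;

public class RangeCheckSketch {

  /**
   * Illustrative containsRange: the containment logic is independent of
   * how keys compare, so hbase:meta could plug in a meta-aware comparator.
   * An empty regionEnd means "+infinity".
   */
  static boolean containsRange(byte[] regionStart, byte[] regionEnd,
      byte[] first, byte[] last, Comparator<byte[]> cmp) {
    if (cmp.compare(first, last) > 0) {
      throw new IllegalArgumentException("Invalid range: first > last");
    }
    boolean firstOk = cmp.compare(regionStart, first) <= 0;
    boolean lastOk = regionEnd.length == 0 || cmp.compare(last, regionEnd) < 0;
    return firstOk && lastOk;
  }

  public static void main(String[] args) {
    Comparator<byte[]> raw = Arrays::compare; // normal-table ordering
    // Region covering everything contains ["a", "b"].
    System.out.println(containsRange(new byte[0], new byte[0],
        "a".getBytes(), "b".getBytes(), raw)); // true
  }
}
```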
[jira] [Resolved] (HBASE-25929) RegionServer JVM crash when compaction
[ https://issues.apache.org/jira/browse/HBASE-25929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25929. Fix Version/s: 2.4.4 2.3.6 2.5.0 3.0.0-alpha-1 Resolution: Fixed > RegionServer JVM crash when compaction > -- > > Key: HBASE-25929 > URL: https://issues.apache.org/jira/browse/HBASE-25929 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.3 >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.4 > > Attachments: hs_err_pid27712.log, hs_err_pid28814.log > > > In our cluster, we found region servers may be crashed in several cases. > In hs_err_pid27712.log: > {code:java} > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > J 2687 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f85c987eda7 [0x7f85c987ed40+0x67] > J 5884 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (62 bytes) @ 0x7f85c93fd904 [0x7f85c93fd780+0x184] > J 4274 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V > (73 bytes) @ 0x7f85c9d57a94 [0x7f85c9d574a0+0x5f4] > J 5211 C2 > org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V > (69 bytes) @ 0x7f85ca039a34 [0x7f85ca0399a0+0x94] > J 5985 C1 > org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I > (59 bytes) @ 0x7f85c9296a34 [0x7f85c92964c0+0x574] > J 6011 C1 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 > bytes) @ 0x7f85c913e094 [0x7f85c913d4c0+0xbd4] > J 6004 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String; > (211 bytes) @ 0x7f85c93737b4 [0x7f85c93722e0+0x14d4] > J 6000 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String; > (10 
bytes) @ 0x7f85c9854d14 [0x7f85c9854ba0+0x174] > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(Lorg/apache/hadoop/hbase/CellComparator;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/io/hfile/HFileContext;)Lorg/apache/hadoop/hbase/Cell;+132 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock()V+102 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.checkBlockBoundary()V+32 > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(Lorg/apache/hadoop/hbase/Cell;)V+77 > j > org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(Lorg/apache/hadoop/hbase/Cell;)V+20 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$FileDetails;Lorg/apache/hadoop/hbase/regionserver/InternalScanner;Lorg/apache/hadoop/hbase/regionserver/CellSink;JZLorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;ZI)Z+318 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$InternalScannerFactory;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$CellSinkFactory;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+221 > j > org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+12 > j > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+16 > j > 
org.apache.hadoop.hbase.regionserver.HStore.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionContext;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+194 > {code} > In hs_err_pid28814.log: > {code:java} > Stack: [0x7f6d8e69b000,0x7f6d8e6dc000], sp=0x7f6d8e6d9e88, free > space=251k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > V [libjvm.so+0x747fa0] > J 2989 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f751db756e1 [0x7f751db75600+0xe1] > j >
[jira] [Commented] (HBASE-25929) RegionServer JVM crash when compaction
[ https://issues.apache.org/jira/browse/HBASE-25929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17356309#comment-17356309 ] Yi Mei commented on HBASE-25929: Pushed to branch-2.3+. Thanks [~anoop.hbase] and [~zhangduo] for reviewing. > RegionServer JVM crash when compaction > -- > > Key: HBASE-25929 > URL: https://issues.apache.org/jira/browse/HBASE-25929 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.3 >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Critical > Attachments: hs_err_pid27712.log, hs_err_pid28814.log > > > In our cluster, we found region servers may be crashed in several cases. > In hs_err_pid27712.log: > {code:java} > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > J 2687 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f85c987eda7 [0x7f85c987ed40+0x67] > J 5884 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (62 bytes) @ 0x7f85c93fd904 [0x7f85c93fd780+0x184] > J 4274 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V > (73 bytes) @ 0x7f85c9d57a94 [0x7f85c9d574a0+0x5f4] > J 5211 C2 > org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V > (69 bytes) @ 0x7f85ca039a34 [0x7f85ca0399a0+0x94] > J 5985 C1 > org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I > (59 bytes) @ 0x7f85c9296a34 [0x7f85c92964c0+0x574] > J 6011 C1 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 > bytes) @ 0x7f85c913e094 [0x7f85c913d4c0+0xbd4] > J 6004 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String; > (211 bytes) @ 0x7f85c93737b4 [0x7f85c93722e0+0x14d4] > J 6000 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String; > (10 bytes) @ 
0x7f85c9854d14 [0x7f85c9854ba0+0x174] > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(Lorg/apache/hadoop/hbase/CellComparator;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/io/hfile/HFileContext;)Lorg/apache/hadoop/hbase/Cell;+132 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock()V+102 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.checkBlockBoundary()V+32 > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(Lorg/apache/hadoop/hbase/Cell;)V+77 > j > org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(Lorg/apache/hadoop/hbase/Cell;)V+20 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$FileDetails;Lorg/apache/hadoop/hbase/regionserver/InternalScanner;Lorg/apache/hadoop/hbase/regionserver/CellSink;JZLorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;ZI)Z+318 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$InternalScannerFactory;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$CellSinkFactory;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+221 > j > org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+12 > j > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+16 > j > 
org.apache.hadoop.hbase.regionserver.HStore.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionContext;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+194 > {code} > In hs_err_pid28814.log: > {code:java} > Stack: [0x7f6d8e69b000,0x7f6d8e6dc000], sp=0x7f6d8e6d9e88, free > space=251k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > V [libjvm.so+0x747fa0] > J 2989 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f751db756e1 [0x7f751db75600+0xe1] > j > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36 > j >
[jira] [Commented] (HBASE-25929) RegionServer JVM crash when compaction
[ https://issues.apache.org/jira/browse/HBASE-25929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352284#comment-17352284 ] Yi Mei commented on HBASE-25929: I added a UT to reproduce the problem and will upload a patch later. > RegionServer JVM crash when compaction > -- > > Key: HBASE-25929 > URL: https://issues.apache.org/jira/browse/HBASE-25929 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.3 >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: hs_err_pid27712.log, hs_err_pid28814.log > > > In our cluster, we found region servers may be crashed in several cases. > In hs_err_pid27712.log: > {code:java} > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > J 2687 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f85c987eda7 [0x7f85c987ed40+0x67] > J 5884 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (62 bytes) @ 0x7f85c93fd904 [0x7f85c93fd780+0x184] > J 4274 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V > (73 bytes) @ 0x7f85c9d57a94 [0x7f85c9d574a0+0x5f4] > J 5211 C2 > org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V > (69 bytes) @ 0x7f85ca039a34 [0x7f85ca0399a0+0x94] > J 5985 C1 > org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I > (59 bytes) @ 0x7f85c9296a34 [0x7f85c92964c0+0x574] > J 6011 C1 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 > bytes) @ 0x7f85c913e094 [0x7f85c913d4c0+0xbd4] > J 6004 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String; > (211 bytes) @ 0x7f85c93737b4 [0x7f85c93722e0+0x14d4] > J 6000 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String; > (10 bytes) @ 
0x7f85c9854d14 [0x7f85c9854ba0+0x174] > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(Lorg/apache/hadoop/hbase/CellComparator;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/io/hfile/HFileContext;)Lorg/apache/hadoop/hbase/Cell;+132 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock()V+102 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.checkBlockBoundary()V+32 > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(Lorg/apache/hadoop/hbase/Cell;)V+77 > j > org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(Lorg/apache/hadoop/hbase/Cell;)V+20 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$FileDetails;Lorg/apache/hadoop/hbase/regionserver/InternalScanner;Lorg/apache/hadoop/hbase/regionserver/CellSink;JZLorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;ZI)Z+318 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$InternalScannerFactory;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$CellSinkFactory;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+221 > j > org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+12 > j > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+16 > j > 
org.apache.hadoop.hbase.regionserver.HStore.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionContext;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+194 > {code} > In hs_err_pid28814.log: > {code:java} > Stack: [0x7f6d8e69b000,0x7f6d8e6dc000], sp=0x7f6d8e6d9e88, free > space=251k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > V [libjvm.so+0x747fa0] > J 2989 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f751db756e1 [0x7f751db75600+0xe1] > j > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36 > j >
[jira] [Created] (HBASE-25929) RegionServer JVM crash when compaction
Yi Mei created HBASE-25929: -- Summary: RegionServer JVM crash when compaction Key: HBASE-25929 URL: https://issues.apache.org/jira/browse/HBASE-25929 Project: HBase Issue Type: Bug Components: Compaction Affects Versions: 2.4.3, 2.3.5, 3.0.0-alpha-1, 2.5.0 Reporter: Yi Mei Assignee: Yi Mei Attachments: hs_err_pid27712.log, hs_err_pid28814.log In our cluster, we found that region servers may crash in several cases. In hs_err_pid27712.log: {code:java} Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) J 2687 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x7f85c987eda7 [0x7f85c987ed40+0x67] J 5884 C1 org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V (62 bytes) @ 0x7f85c93fd904 [0x7f85c93fd780+0x184] J 4274 C1 org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V (73 bytes) @ 0x7f85c9d57a94 [0x7f85c9d574a0+0x5f4] J 5211 C2 org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V (69 bytes) @ 0x7f85ca039a34 [0x7f85ca0399a0+0x94] J 5985 C1 org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I (59 bytes) @ 0x7f85c9296a34 [0x7f85c92964c0+0x574] J 6011 C1 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 bytes) @ 0x7f85c913e094 [0x7f85c913d4c0+0xbd4] J 6004 C1 org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String; (211 bytes) @ 0x7f85c93737b4 [0x7f85c93722e0+0x14d4] J 6000 C1 org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String; (10 bytes) @ 0x7f85c9854d14 [0x7f85c9854ba0+0x174] j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(Lorg/apache/hadoop/hbase/CellComparator;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/io/hfile/HFileContext;)Lorg/apache/hadoop/hbase/Cell;+132 j 
org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock()V+102 j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.checkBlockBoundary()V+32 j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(Lorg/apache/hadoop/hbase/Cell;)V+77 j org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(Lorg/apache/hadoop/hbase/Cell;)V+20 j org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$FileDetails;Lorg/apache/hadoop/hbase/regionserver/InternalScanner;Lorg/apache/hadoop/hbase/regionserver/CellSink;JZLorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;ZI)Z+318 j org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$InternalScannerFactory;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$CellSinkFactory;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+221 j org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+12 j org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+16 j org.apache.hadoop.hbase.regionserver.HStore.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionContext;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+194 {code} In hs_err_pid28814.log: {code:java} Stack: [0x7f6d8e69b000,0x7f6d8e6dc000], sp=0x7f6d8e6d9e88, free space=251k Native frames: (J=compiled Java code, 
j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x747fa0] J 2989 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x7f751db756e1 [0x7f751db75600+0xe1] j org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36 j org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V+69 j org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V+39 j org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+31 J 12082 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 bytes) @ 0x7f751ef15fbc [0x7f751ef15dc0+0x1fc] J 16584 C2
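Both attached crash stacks bottom out in Unsafe.copyMemory while materializing qualifier bytes from a ByteBufferKeyValue, i.e. copying cell data out of a backing block buffer. If that block has already been returned to a pool (or freed, for direct memory), the copy reads recycled or unmapped memory and the JVM can SIGSEGV. The safe pattern for this hazard class is to deep-copy whatever must outlive the block before the block is released; a hedged illustration only, not the actual HBASE-25929 patch:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class PooledBlockCopy {

  /** Deep-copy len bytes at offset out of a (possibly pooled) block. */
  static byte[] copyOut(ByteBuffer block, int offset, int len) {
    byte[] out = new byte[len];
    ByteBuffer dup = block.duplicate(); // independent position/limit
    dup.position(offset);
    dup.get(out, 0, len);
    return out;
  }

  public static void main(String[] args) {
    ByteBuffer block = ByteBuffer.allocate(8);
    block.put("qualbyte".getBytes(StandardCharsets.UTF_8));
    byte[] safe = copyOut(block, 0, 4); // copied BEFORE "release"
    // Simulate the pool recycling the block for another writer.
    block.clear();
    block.put("XXXXXXXX".getBytes(StandardCharsets.UTF_8));
    // The deep copy is unaffected by the recycled block's new contents.
    System.out.println(new String(safe, StandardCharsets.UTF_8)); // qual
  }
}
```

A reference that merely points back into the pooled buffer would instead observe the recycled contents (or crash, with direct memory), which matches the Unsafe.copyMemory frames in hs_err_pid27712.log and hs_err_pid28814.log.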
[jira] [Resolved] (HBASE-25747) Remove unused getWriteAvailable method in OperationQuota
[ https://issues.apache.org/jira/browse/HBASE-25747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25747. Fix Version/s: 2.4.3 2.5.0 3.0.0-alpha-1 Resolution: Fixed > Remove unused getWriteAvailable method in OperationQuota > > > Key: HBASE-25747 > URL: https://issues.apache.org/jira/browse/HBASE-25747 > Project: HBase > Issue Type: Improvement > Components: Quotas >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > The getWriteAvailable method is unused in OperationQuota, because for write > operation, the size is accurate. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25747) Remove unused getWriteAvailable method in OperationQuota
[ https://issues.apache.org/jira/browse/HBASE-25747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317598#comment-17317598 ] Yi Mei commented on HBASE-25747: Pushed to branch-2.4+. Thanks [~stack] for reviewing. > Remove unused getWriteAvailable method in OperationQuota > > > Key: HBASE-25747 > URL: https://issues.apache.org/jira/browse/HBASE-25747 > Project: HBase > Issue Type: Improvement > Components: Quotas >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > The getWriteAvailable method is unused in OperationQuota, because for write > operation, the size is accurate. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25747) Remove unused getWriteAvailable method in OperationQuota
Yi Mei created HBASE-25747: -- Summary: Remove unused getWriteAvailable method in OperationQuota Key: HBASE-25747 URL: https://issues.apache.org/jira/browse/HBASE-25747 Project: HBase Issue Type: Improvement Components: Quotas Reporter: Yi Mei Assignee: Yi Mei The getWriteAvailable method is unused in OperationQuota, because for write operations the size is known exactly in advance. -- This message was sent by Atlassian Jira (v8.3.4#803005)
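The asymmetry the description alludes to can be sketched as follows: a write's size is known before the operation runs and can be charged exactly, while a read's size must be estimated up front and settled afterwards, which is why only the read side needs an "available" probe. This is a hypothetical, simplified model; the class and method names do not match the real OperationQuota API.

```java
// Toy quota model (illustrative names, not HBase's OperationQuota API).
public class SimpleQuota {
    private long readAvailable;

    public SimpleQuota(long readAvailable) {
        this.readAvailable = readAvailable;
    }

    // Writes consume their exact size immediately; no "available" probe needed,
    // which is why getWriteAvailable() had no caller.
    public void grabWrite(long exactSize) {
        // charge exactSize against the write quota (elided in this sketch)
    }

    // Reads reserve an estimate first, then settle once the real size is known.
    public long grabReadEstimate(long estimate) {
        long granted = Math.min(estimate, readAvailable);
        readAvailable -= granted;
        return granted;
    }

    public void adjustRead(long granted, long actual) {
        // refund the over-estimate (or charge the shortfall)
        readAvailable += granted - actual;
    }

    public long getReadAvailable() {
        return readAvailable;
    }
}
```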
[jira] [Assigned] (HBASE-25736) Scan should be limited by read capacity unit quota if read size quota is not set
[ https://issues.apache.org/jira/browse/HBASE-25736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-25736: -- Assignee: Yi Mei > Scan should be limited by read capacity unit quota if read size quota is not > set > > > Key: HBASE-25736 > URL: https://issues.apache.org/jira/browse/HBASE-25736 > Project: HBase > Issue Type: Improvement > Components: Quotas >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > Scan is currently limited by the available size of the quota, and the quota size only > considers the READ_SIZE type: > {code:java} > long maxQuotaResultSize = Math.min(maxScannerResultSize, > quota.getReadAvailable()); > {code} > If the read size quota is not set, we should limit the result size by the read capacity > unit to avoid exceeding the quota. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25736) Scan should be limited by read capacity unit quota if read size quota is not set
Yi Mei created HBASE-25736: -- Summary: Scan should be limited by read capacity unit quota if read size quota is not set Key: HBASE-25736 URL: https://issues.apache.org/jira/browse/HBASE-25736 Project: HBase Issue Type: Improvement Components: Quotas Reporter: Yi Mei Scan is currently limited by the available size of the quota, and the quota size only considers the READ_SIZE type: {code:java} long maxQuotaResultSize = Math.min(maxScannerResultSize, quota.getReadAvailable()); {code} If the read size quota is not set, we should limit the result size by the read capacity unit to avoid exceeding the quota. -- This message was sent by Atlassian Jira (v8.3.4#803005)
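A minimal sketch of the proposed fallback: when no read size quota is configured, convert the available read capacity units into a byte budget and clamp the scanner result size against that instead. The 1024-bytes-per-unit figure mirrors HBase's default capacity unit size (configurable via hbase.quota.read.capacity.unit); the class and method names here are illustrative, not the real quota API.

```java
// Sketch of the fallback described above (illustrative names).
public class ScanLimit {
    static final long BYTES_PER_READ_CAPACITY_UNIT = 1024;

    // readSizeAvailable < 0 signals "no read size quota configured".
    static long maxQuotaResultSize(long maxScannerResultSize,
                                   long readSizeAvailable,
                                   long readCapacityUnitsAvailable) {
        long quotaLimit = readSizeAvailable >= 0
            ? readSizeAvailable                                        // current behavior
            : readCapacityUnitsAvailable * BYTES_PER_READ_CAPACITY_UNIT; // proposed fallback
        return Math.min(maxScannerResultSize, quotaLimit);
    }
}
```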
[jira] [Updated] (HBASE-25636) Expose HBCK report as metrics
[ https://issues.apache.org/jira/browse/HBASE-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-25636: --- Component/s: metrics Fix Version/s: 2.4.3 2.5.0 3.0.0-alpha-1 Release Note: Expose HBCK report results in metrics, including: "orphanRegionsOnRS", "orphanRegionsOnFS", "inconsistentRegions", "holes", "overlaps", "unknownServerRegions" and "emptyRegionInfoRegions". > Expose HBCK report as metrics > - > > Key: HBASE-25636 > URL: https://issues.apache.org/jira/browse/HBASE-25636 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > Currently, we have a HBCK Report page in master UI to show the problems of > HBCK Chore report and CatalogJanitor Consistency report. We can expose these > problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
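The release note above lists seven report categories; the metric for each category is simply the number of problems it contains. A toy sketch of that mapping, where a plain map stands in for HBase's real metrics registry:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: turn each HBCK/CatalogJanitor report category
// (name -> list of problem regions) into a gauge value (name -> count)
// that a metrics system could publish and an alert could watch.
public class HbckReportMetrics {
    public static Map<String, Integer> toGauges(Map<String, List<String>> report) {
        Map<String, Integer> gauges = new LinkedHashMap<>();
        report.forEach((name, problems) -> gauges.put(name, problems.size()));
        return gauges;
    }
}
```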
[jira] [Resolved] (HBASE-25636) Expose HBCK report as metrics
[ https://issues.apache.org/jira/browse/HBASE-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25636. Hadoop Flags: Reviewed Resolution: Fixed > Expose HBCK report as metrics > - > > Key: HBASE-25636 > URL: https://issues.apache.org/jira/browse/HBASE-25636 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > Currently, we have a HBCK Report page in master UI to show the problems of > HBCK Chore report and CatalogJanitor Consistency report. We can expose these > problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25636) Expose HBCK report as metrics
[ https://issues.apache.org/jira/browse/HBASE-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17299385#comment-17299385 ] Yi Mei commented on HBASE-25636: Pushed to branch-2.4+. Thanks [~zhangduo] for reviewing. > Expose HBCK report as metrics > - > > Key: HBASE-25636 > URL: https://issues.apache.org/jira/browse/HBASE-25636 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > Currently, we have a HBCK Report page in master UI to show the problems of > HBCK Chore report and CatalogJanitor Consistency report. We can expose these > problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25636) Expose HBCK report as metrics
[ https://issues.apache.org/jira/browse/HBASE-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-25636: --- Summary: Expose HBCK report as metrics (was: Expost HBCK report as metrics) > Expose HBCK report as metrics > - > > Key: HBASE-25636 > URL: https://issues.apache.org/jira/browse/HBASE-25636 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > Currently, we have a HBCK Report page in master UI to show the problems of > HBCK Chore report and CatalogJanitor Consistency report. We can expose these > problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-25636) Expost HBCK report as metrics
[ https://issues.apache.org/jira/browse/HBASE-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-25636: -- Assignee: Yi Mei > Expost HBCK report as metrics > - > > Key: HBASE-25636 > URL: https://issues.apache.org/jira/browse/HBASE-25636 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > Currently, we have a HBCK Report page in master UI to show the problems of > HBCK Chore report and CatalogJanitor Consistency report. We can expose these > problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25636) Expost HBCK report as metrics
Yi Mei created HBASE-25636: -- Summary: Expost HBCK report as metrics Key: HBASE-25636 URL: https://issues.apache.org/jira/browse/HBASE-25636 Project: HBase Issue Type: Improvement Reporter: Yi Mei Currently, we have a HBCK Report page in master UI to show the problems of HBCK Chore report and CatalogJanitor Consistency report. We can expose these problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25097) Wrong RIT page number in Master UI
[ https://issues.apache.org/jira/browse/HBASE-25097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25097. Fix Version/s: 2.2.7 2.4.0 2.3.3 3.0.0-alpha-1 Assignee: Yi Mei Resolution: Fixed > Wrong RIT page number in Master UI > -- > > Key: HBASE-25097 > URL: https://issues.apache.org/jira/browse/HBASE-25097 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7 > > Attachments: 1.png, 2.png > > > In the following pictures, there are 71 RITs in total with 10 per page, so there > should be 8 pages rather than 15: > !1.png! > !2.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-25097) Wrong RIT page number in Master UI
[ https://issues.apache.org/jira/browse/HBASE-25097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-25097: --- Component/s: UI > Wrong RIT page number in Master UI > -- > > Key: HBASE-25097 > URL: https://issues.apache.org/jira/browse/HBASE-25097 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Yi Mei >Priority: Minor > Attachments: 1.png, 2.png > > > In the following pictures, there are 71 RITs in total with 10 per page, so there > should be 8 pages rather than 15: > !1.png! > !2.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-25097) Wrong RIT page number in Master UI
[ https://issues.apache.org/jira/browse/HBASE-25097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17202054#comment-17202054 ] Yi Mei commented on HBASE-25097: Pushed to branch-2.2+. Thanks [~vjasani] for reviewing. > Wrong RIT page number in Master UI > -- > > Key: HBASE-25097 > URL: https://issues.apache.org/jira/browse/HBASE-25097 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Priority: Minor > Attachments: 1.png, 2.png > > > In the following pictures, there are 71 RITs in total with 10 per page, so there > should be 8 pages rather than 15: > !1.png! > !2.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25097) Wrong RIT page number in Master UI
Yi Mei created HBASE-25097: -- Summary: Wrong RIT page number in Master UI Key: HBASE-25097 URL: https://issues.apache.org/jira/browse/HBASE-25097 Project: HBase Issue Type: Bug Reporter: Yi Mei Attachments: 1.png, 2.png In the following pictures, there are 71 RITs in total with 10 per page, so there should be 8 pages rather than 15: !1.png! !2.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
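The correct page count is plain ceiling division: with 71 RITs at 10 per page, ceil(71 / 10) = 8 pages. As a sketch (the class name is illustrative, not the actual Master UI template code):

```java
// Ceiling division for pagination, done in integer arithmetic.
public class Pagination {
    // Number of pages needed to show `total` items at `pageSize` per page.
    static int numPages(int total, int pageSize) {
        return (total + pageSize - 1) / pageSize;
    }
}
```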
[jira] [Created] (HBASE-25048) [HBCK2] Bypassed parent procedures are not updated in store
Yi Mei created HBASE-25048: -- Summary: [HBCK2] Bypassed parent procedures are not updated in store Key: HBASE-25048 URL: https://issues.apache.org/jira/browse/HBASE-25048 Project: HBase Issue Type: Bug Reporter: Yi Mei See code in [ProcedureExecutor|https://github.com/apache/hbase/blob/master/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java#L980]: {code:java} Procedure current = procedure; while (current != null) { LOG.debug("Bypassing {}", current); current.bypass(getEnvironment()); store.update(procedure); // update current procedure long parentID = current.getParentProcId(); current = getProcedure(parentID); } {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
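The bug in the quoted loop is the store.update(procedure) call: bypass() flips the in-memory flag on every procedure walked up the chain, but the store is handed the same original leaf procedure on every iteration, so the bypassed parents are never persisted. A self-contained toy simulation of both behaviors (the types are stand-ins, not the real procedure-v2 classes); the fix amounts to calling store.update(current) instead:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the ProcedureExecutor bypass loop quoted above.
public class BypassDemo {
    static class Proc {
        final long id;
        final Proc parent;
        boolean bypassed;
        Proc(long id, Proc parent) { this.id = id; this.parent = parent; }
    }

    // Walk up the parent chain, recording which procedure ids reach the store.
    // fixed=false mimics store.update(procedure); fixed=true mimics the fix,
    // store.update(current).
    static Set<Long> bypassChain(Proc procedure, boolean fixed) {
        Set<Long> persisted = new HashSet<>();
        Proc current = procedure;
        while (current != null) {
            current.bypassed = true;                     // in-memory bypass
            persisted.add(fixed ? current.id : procedure.id); // store.update(...)
            current = current.parent;
        }
        return persisted;
    }
}
```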
[jira] [Created] (HBASE-25047) WAL split edits number is negative in RegionServerUI
Yi Mei created HBASE-25047: -- Summary: WAL split edits number is negative in RegionServerUI Key: HBASE-25047 URL: https://issues.apache.org/jira/browse/HBASE-25047 Project: HBase Issue Type: Bug Reporter: Yi Mei Attachments: 2020-09-16 11-38-13屏幕截图.png !2020-09-16 11-38-13屏幕截图.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-18659) Use HDFS ACL to give user the ability to read snapshot directly on HDFS
[ https://issues.apache.org/jira/browse/HBASE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158081#comment-17158081 ] Yi Mei commented on HBASE-18659: [~ndimiduk] sure, I will add it. > Use HDFS ACL to give user the ability to read snapshot directly on HDFS > --- > > Key: HBASE-18659 > URL: https://issues.apache.org/jira/browse/HBASE-18659 > Project: HBase > Issue Type: New Feature >Reporter: Duo Zhang >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > On the dev meetup notes in Shenzhen after HBaseCon Asia, there is a topic > about the permission to read hfiles on HDFS directly. > {quote} > For client-side scanner going against hfiles directly; is there a means of > being able to pass the permissions from hbase to hdfs? > {quote} > And at Xiaomi we also face the same problem. {{SnapshotScanner}} is much > faster and consumes fewer resources, but only a super user has the ability to > read hfiles directly on HDFS. > So here we want to use HDFS ACL to address this problem. > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_File_System_API > The basic idea is to set the ACL and default ACL on the ns/table/cf directory on > HDFS for the users who have the permission to read the table on HBase. > Suggestions are welcomed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
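The mechanism the proposal builds on can be modeled in a few lines: an HDFS *default* ACL on a directory is copied onto newly created children, so granting once at the ns/table/cf level keeps later files readable without per-file grants. The code below is a toy in-memory model, not the real HDFS API (which would go through FileSystem.modifyAclEntries with AclEntryScope.DEFAULT entries, per the permissions guide linked above):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of HDFS default-ACL inheritance (not the real Hadoop API).
public class DefaultAclModel {
    final Map<String, Set<String>> access = new HashMap<>();   // path -> users who can read it
    final Map<String, Set<String>> defaults = new HashMap<>(); // path -> users inherited by children

    // Grant a user on a directory, both for access and for inheritance.
    void setDefaultAcl(String dir, String user) {
        access.computeIfAbsent(dir, k -> new HashSet<>()).add(user);
        defaults.computeIfAbsent(dir, k -> new HashSet<>()).add(user);
    }

    // Creating a child copies the parent's default ACL, so new files under an
    // ns/table/cf directory stay readable with no further grants.
    void mkChild(String parent, String child) {
        Set<String> inherited = defaults.getOrDefault(parent, Set.of());
        access.computeIfAbsent(child, k -> new HashSet<>()).addAll(inherited);
        defaults.computeIfAbsent(child, k -> new HashSet<>()).addAll(inherited);
    }

    boolean canRead(String path, String user) {
        return access.getOrDefault(path, Set.of()).contains(user);
    }
}
```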
[jira] [Created] (HBASE-24653) Show snapshot owner on Master WebUI
Yi Mei created HBASE-24653: -- Summary: Show snapshot owner on Master WebUI Key: HBASE-24653 URL: https://issues.apache.org/jira/browse/HBASE-24653 Project: HBase Issue Type: Improvement Reporter: Yi Mei The Master UI now shows a lot of snapshot information, and the owner is also useful for finding out who created a snapshot. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
[ https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112034#comment-17112034 ] Yi Mei commented on HBASE-24364: Pushed to branch-2. > [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction > -- > > Key: HBASE-24364 > URL: https://issues.apache.org/jira/browse/HBASE-24364 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5 > > > I found the following exception when I run ITBLL: > {code:java} > 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception > performing action: > java.lang.IllegalArgumentException: There is no data block encoder for given > id '6' > at > org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) > at > org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:748) > {code} > Because PREFIX_TREE is removed in DataBlockEncoding: > {code:java} > /** Disable data block encoding. 
*/ > NONE(0, null), > // id 1 is reserved for the BITSET algorithm to be added later > PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), > DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), > FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), > // id 5 is reserved for the COPY_KEY algorithm for benchmarking > // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), > // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), > ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
[ https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-24364. Fix Version/s: 2.2.5 2.3.0 3.0.0-alpha-1 Resolution: Fixed > [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction > -- > > Key: HBASE-24364 > URL: https://issues.apache.org/jira/browse/HBASE-24364 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5 > > > I found the following exception when I run ITBLL: > {code:java} > 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception > performing action: > java.lang.IllegalArgumentException: There is no data block encoder for given > id '6' > at > org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) > at > org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:748) > {code} > Because PREFIX_TREE is removed in DataBlockEncoding: > {code:java} > /** Disable data block encoding. 
*/ > NONE(0, null), > // id 1 is reserved for the BITSET algorithm to be added later > PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), > DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), > FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), > // id 5 is reserved for the COPY_KEY algorithm for benchmarking > // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), > // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), > ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
[ https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107916#comment-17107916 ] Yi Mei commented on HBASE-24364: Thanks [~janh] for reviewing. Pushed to branch-2.2, branch-2.3 and master. > [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction > -- > > Key: HBASE-24364 > URL: https://issues.apache.org/jira/browse/HBASE-24364 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > I found the following exception when I run ITBLL: > {code:java} > 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception > performing action: > java.lang.IllegalArgumentException: There is no data block encoder for given > id '6' > at > org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) > at > org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:748) > {code} > Because PREFIX_TREE is removed in DataBlockEncoding: > {code:java} > /** Disable data block encoding. 
*/ > NONE(0, null), > // id 1 is reserved for the BITSET algorithm to be added later > PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), > DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), > FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), > // id 5 is reserved for the COPY_KEY algorithm for benchmarking > // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), > // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), > ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
[ https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106800#comment-17106800 ] Yi Mei commented on HBASE-24364: The PR is: https://github.com/apache/hbase/pull/1707 > [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction > -- > > Key: HBASE-24364 > URL: https://issues.apache.org/jira/browse/HBASE-24364 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > I found the following exception when I run ITBLL: > {code:java} > 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception > performing action: > java.lang.IllegalArgumentException: There is no data block encoder for given > id '6' > at > org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) > at > org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:748) > {code} > Because PREFIX_TREE is removed in DataBlockEncoding: > {code:java} > /** Disable data block encoding. 
*/ > NONE(0, null), > // id 1 is reserved for the BITSET algorithm to be added later > PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), > DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), > FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), > // id 5 is reserved for the COPY_KEY algorithm for benchmarking > // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), > // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), > ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
[ https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-24364: -- Assignee: Yi Mei > [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction > -- > > Key: HBASE-24364 > URL: https://issues.apache.org/jira/browse/HBASE-24364 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > I found the following exception when I run ITBLL: > {code:java} > 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception > performing action: > java.lang.IllegalArgumentException: There is no data block encoder for given > id '6' > at > org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) > at > org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:748) > {code} > Because PREFIX_TREE is removed in DataBlockEncoding: > {code:java} > /** Disable data block encoding. 
*/ > NONE(0, null), > // id 1 is reserved for the BITSET algorithm to be added later > PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), > DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), > FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), > // id 5 is reserved for the COPY_KEY algorithm for benchmarking > // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), > // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), > ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
Yi Mei created HBASE-24364: -- Summary: [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction Key: HBASE-24364 URL: https://issues.apache.org/jira/browse/HBASE-24364 Project: HBase Issue Type: Bug Reporter: Yi Mei I found the following exception when I run ITBLL: {code:java} 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception performing action: java.lang.IllegalArgumentException: There is no data block encoder for given id '6' at org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) at org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) at org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) at org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) at org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) at org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) at java.lang.Thread.run(Thread.java:748) {code} Because PREFIX_TREE is removed in DataBlockEncoding: {code:java} /** Disable data block encoding. */ NONE(0, null), // id 1 is reserved for the BITSET algorithm to be added later PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), // id 5 is reserved for the COPY_KEY algorithm for benchmarking // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
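A sketch of the fix in ChangeEncodingAction: rather than drawing a raw id from the 0..7 range and mapping it through getEncodingById (which throws for the removed PREFIX_TREE slot, id 6, and the reserved ids 1 and 5), pick only from the ids that still map to an encoder. The id list mirrors the enum quoted above; the class name is a stand-in for the real chaos-monkey action.

```java
import java.util.Random;

// Pick a random data block encoding id only from the slots that still exist.
public class SafeEncodingPicker {
    // NONE=0, PREFIX=2, DIFF=3, FAST_DIFF=4, ROW_INDEX_V1=7
    // (1 and 5 are reserved, 6 was PREFIX_TREE and is removed)
    static final int[] VALID_IDS = {0, 2, 3, 4, 7};

    static int randomValidEncodingId(Random rng) {
        return VALID_IDS[rng.nextInt(VALID_IDS.length)];
    }
}
```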
[jira] [Commented] (HBASE-24114) [Flakey Tests] TestSnapshotScannerHDFSAclController
[ https://issues.apache.org/jira/browse/HBASE-24114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077750#comment-17077750 ] Yi Mei commented on HBASE-24114: Thank you for taking a look [~huaxiangsun]. We have a fix in our Xiaomi branch: set the ACL on the tmp snapshot directory before the region servers snapshot their regions. In this way the ACLs are inherited and do not need to be set afterwards; once the snapshot is done, all the ACLs are ready. Let me upload this patch later. > [Flakey Tests] TestSnapshotScannerHDFSAclController > --- > > Key: HBASE-24114 > URL: https://issues.apache.org/jira/browse/HBASE-24114 > Project: HBase > Issue Type: Test > Components: acl >Affects Versions: 2.3.0, master, 2.4.0 >Reporter: Huaxiang Sun >Priority: Major > > Still see the following error from the test. > {code:java} > --- > Test set: > org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController > --- > Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 342.32 s <<< > FAILURE! - in > org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController > org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController.testGrantTable > Time elapsed: 6.82 s <<< FAILURE! > java.lang.AssertionError: expected:<6> but was:<-1> > at > org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController.testGrantTable(TestSnapshotScannerHDFSAclController.java:349) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24103) [Flakey Tests] TestSnapshotScannerHDFSAclController
[ https://issues.apache.org/jira/browse/HBASE-24103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17073486#comment-17073486 ] Yi Mei commented on HBASE-24103: I found that the failed UTs timed out waiting for the global grant to finish. Maybe too many tables are created in this test. > [Flakey Tests] TestSnapshotScannerHDFSAclController > --- > > Key: HBASE-24103 > URL: https://issues.apache.org/jira/browse/HBASE-24103 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > According to HBASE-24097, TestSnapshotScannerHDFSAclController is still > flakey: > https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-2/5950/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24103) [Flakey Tests] TestSnapshotScannerHDFSAclController
[ https://issues.apache.org/jira/browse/HBASE-24103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-24103: -- Assignee: Yi Mei > [Flakey Tests] TestSnapshotScannerHDFSAclController > --- > > Key: HBASE-24103 > URL: https://issues.apache.org/jira/browse/HBASE-24103 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > According to HBASE-24097, TestSnapshotScannerHDFSAclController is still > flakey: > https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-2/5950/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24103) [Flakey Tests] TestSnapshotScannerHDFSAclController
Yi Mei created HBASE-24103: -- Summary: [Flakey Tests] TestSnapshotScannerHDFSAclController Key: HBASE-24103 URL: https://issues.apache.org/jira/browse/HBASE-24103 Project: HBase Issue Type: Bug Reporter: Yi Mei According to HBASE-24097, TestSnapshotScannerHDFSAclController is still flakey: https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-2/5950/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24097) [Flakey Tests] TestSnapshotScannerHDFSAclController#testRestoreSnapshot
[ https://issues.apache.org/jira/browse/HBASE-24097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17072411#comment-17072411 ] Yi Mei commented on HBASE-24097: +1 on the PR > [Flakey Tests] TestSnapshotScannerHDFSAclController#testRestoreSnapshot > --- > > Key: HBASE-24097 > URL: https://issues.apache.org/jira/browse/HBASE-24097 > Project: HBase > Issue Type: Bug > Components: flakies >Reporter: Michael Stack >Priority: Major > > Fails on occasion, 15% of the time according to flakie report. I can > reproduce it failing locally. A single method fails. I don't follow how it is > supposed to work (what looks wrong to me passes...). I noticed that if I ran > testRestoreSnapshot on its own, it passed but failed when run as part of the > test suite so I broke it out into its own suite. Now both old and new suites > pass for me locally after 20 repeats. Let me push it up. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23824) TestSnapshotScannerHDFSAclController is flakey
[ https://issues.apache.org/jira/browse/HBASE-23824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17033484#comment-17033484 ] Yi Mei commented on HBASE-23824: All the failed UTs are related to table deletion. The reason is that after a table is deleted, its table descriptor is null, so SnapshotScannerHDFSAclController skips removing the table-related ACLs. > TestSnapshotScannerHDFSAclController is flakey > -- > > Key: HBASE-23824 > URL: https://issues.apache.org/jira/browse/HBASE-23824 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > See > [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2/lastSuccessfulBuild/artifact/dashboard.html] > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23824) TestSnapshotScannerHDFSAclController is flakey
Yi Mei created HBASE-23824: -- Summary: TestSnapshotScannerHDFSAclController is flakey Key: HBASE-23824 URL: https://issues.apache.org/jira/browse/HBASE-23824 Project: HBase Issue Type: Bug Reporter: Yi Mei Assignee: Yi Mei See [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2/lastSuccessfulBuild/artifact/dashboard.html] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23553) Snapshot referenced data files are deleted in some case
[ https://issues.apache.org/jira/browse/HBASE-23553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993143#comment-16993143 ] Yi Mei commented on HBASE-23553: Pushed to master, branch-2 and branch-2.2. The patch has some conflicts with branch-2.1; it needs some time to check. > Snapshot referenced data files are deleted in some case > --- > > Key: HBASE-23553 > URL: https://issues.apache.org/jira/browse/HBASE-23553 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > We scan snapshot in our cluster and got following exception: > {code:java} > java.io.IOException: java.io.IOException: java.io.FileNotFoundException: > Unable to open link: org.apache.hadoop.hbase.io.HFileLink > locations=[hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, > > hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/.tmp/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, > > hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/archive/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f] > > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:867) > > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:778) > at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:749) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5306) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5271) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5243) > at > org.apache.hadoop.hbase.client.ClientSideRegionScanner.(ClientSideRegionScanner.java:72) > > at > org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:239) > > 
at > org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:150) > > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552) > at {code} > I checked the namenode logs and found that this file was deleted by the hbase > cleaner although a snapshot still referenced it. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23555) TestQuotaThrottle is broken
[ https://issues.apache.org/jira/browse/HBASE-23555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16992282#comment-16992282 ] Yi Mei commented on HBASE-23555: The UT fails because a test function does not remove its quotas before returning, so the next test function breaks. I also moved these tests to TestQuotaAdmin, because they test the admin API rather than throttle behaviour. > TestQuotaThrottle is broken > --- > > Key: HBASE-23555 > URL: https://issues.apache.org/jira/browse/HBASE-23555 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > > TestQuotaThrottle is broken now, and it is annotated as Ignore because it is > flakey, so the Jenkins test cannot report it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
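The isolation problem described above is generic: a test mutates shared cluster state (quotas) and must restore it before returning, or the next test inherits the leftovers. A minimal self-contained sketch, with `QuotaTestIsolation` and its in-memory `quotas` map standing in for the real cluster quota state:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the test-isolation bug: one "test" populates a shared
// quota table and must clear it before returning, or the next "test" breaks.
public class QuotaTestIsolation {
    static final Map<String, Long> quotas = new HashMap<>();

    static void testSetQuota() {
        quotas.put("u1", 10L);
        // ... assertions on the throttle behaviour would go here ...
        quotas.clear(); // the missing step in the broken test: remove quotas before returning
    }

    static boolean testNoQuota() {
        // this "test" assumes a clean slate
        return quotas.isEmpty();
    }

    public static void main(String[] args) {
        testSetQuota();
        System.out.println(testNoQuota()); // true only because testSetQuota cleaned up
    }
}
```

In a real JUnit suite the `quotas.clear()` step would usually live in an `@After` method so it runs even when assertions fail.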
[jira] [Created] (HBASE-23555) TestQuotaThrottle is broken
Yi Mei created HBASE-23555: -- Summary: TestQuotaThrottle is broken Key: HBASE-23555 URL: https://issues.apache.org/jira/browse/HBASE-23555 Project: HBase Issue Type: Bug Reporter: Yi Mei Assignee: Yi Mei TestQuotaThrottle is broken now, and it is annotated as Ignore because it is flakey, so the Jenkins test cannot report it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23553) Snapshot referenced data files are deleted in some case
[ https://issues.apache.org/jira/browse/HBASE-23553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16992237#comment-16992237 ] Yi Mei commented on HBASE-23553: Snapshot referenced data files are deleted in this case: # Take a snapshot of a table which contains a merged region. The merged region files are linked to the parent region files in the \{.} format. # The merged region is compacted and the parent regions are archived by CatalogJanitor. # The cleaner deletes files which are not referenced by snapshots, but SnapshotHFileCleaner only recognizes file links in the \{table=region-hfile} format. > Snapshot referenced data files are deleted in some case > --- > > Key: HBASE-23553 > URL: https://issues.apache.org/jira/browse/HBASE-23553 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > We scan snapshot in our cluster and got following exception: > {code:java} > java.io.IOException: java.io.IOException: java.io.FileNotFoundException: > Unable to open link: org.apache.hadoop.hbase.io.HFileLink > locations=[hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, > > hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/.tmp/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, > > hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/archive/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f] > > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:867) > > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:778) > at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:749) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5306) > at > 
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5271) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5243) > at > org.apache.hadoop.hbase.client.ClientSideRegionScanner.(ClientSideRegionScanner.java:72) > > at > org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:239) > > at > org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:150) > > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552) > at {code} > I checked to namenode logs and found that this file is deleted by hbase > cleaner although a snapshot still referenced to this file. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
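The blind spot described in the comment above (the cleaner only recognizing one link-name format) can be sketched with a toy recognizer. The exact file-name patterns here are assumptions for illustration, not the real HBase patterns: a "full" HFileLink name like `table=region-hfile` is matched, while a reference file in any other naming scheme is not, so the file it points at looks unreferenced to the cleaner.

```java
import java.util.regex.Pattern;

// Sketch of the cleaner's blind spot. The patterns are illustrative
// assumptions, not the actual HBase HFileLink regexes.
public class LinkNameSketch {
    // assumed HFileLink name shape: <table>=<region>-<hfile>
    static final Pattern LINK = Pattern.compile("([\\w.]+)=(\\w+)-(\\w+)");

    static boolean isRecognizedLink(String name) {
        return LINK.matcher(name).matches();
    }

    public static void main(String[] args) {
        // recognized: the format the cleaner knows about
        System.out.println(isRecognizedLink("mytable=06dd90d8540b-4a6cf05f")); // true
        // a hypothetical reference file in another format is not recognized,
        // so a cleaner keyed only on the pattern above treats its target as garbage
        System.out.println(isRecognizedLink("mytable.06dd90d8540b.parent"));   // false
    }
}
```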
[jira] [Assigned] (HBASE-23553) Snapshot referenced data files are deleted in some case
[ https://issues.apache.org/jira/browse/HBASE-23553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-23553: -- Assignee: Yi Mei > Snapshot referenced data files are deleted in some case > --- > > Key: HBASE-23553 > URL: https://issues.apache.org/jira/browse/HBASE-23553 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > We scan snapshot in our cluster and got following exception: > {code:java} > java.io.IOException: java.io.IOException: java.io.FileNotFoundException: > Unable to open link: org.apache.hadoop.hbase.io.HFileLink > locations=[hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, > > hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/.tmp/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, > > hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/archive/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f] > > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:867) > > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:778) > at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:749) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5306) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5271) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5243) > at > org.apache.hadoop.hbase.client.ClientSideRegionScanner.(ClientSideRegionScanner.java:72) > > at > org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:239) > > at > 
org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:150) > > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552) > at {code} > I checked to namenode logs and found that this file is deleted by hbase > cleaner although a snapshot still referenced to this file. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23553) Snapshot referenced data files are deleted in some case
Yi Mei created HBASE-23553: -- Summary: Snapshot referenced data files are deleted in some case Key: HBASE-23553 URL: https://issues.apache.org/jira/browse/HBASE-23553 Project: HBase Issue Type: Bug Reporter: Yi Mei We scanned a snapshot in our cluster and got the following exception: {code:java} java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/.tmp/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/archive/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f] at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:867) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:778) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:749) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5306) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5271) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5243) at org.apache.hadoop.hbase.client.ClientSideRegionScanner.<init>(ClientSideRegionScanner.java:72) at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:239) at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:150) at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552) at {code} I checked the namenode logs and found that this file was deleted by the hbase cleaner although a snapshot still referenced it. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-23042. Fix Version/s: 2.2.3 2.3.0 3.0.0 Resolution: Fixed > Parameters are incorrect in procedures jsp > -- > > Key: HBASE-23042 > URL: https://issues.apache.org/jira/browse/HBASE-23042 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3 > > Attachments: 1.png > > > In procedures jps, the parameters of table name, region start end keys are > wrong, please see the first picture. > This is because all bytes params are encoded in base64. It is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955699#comment-16955699 ] Yi Mei commented on HBASE-23042: Pushed to master, branch-2, branch-2.2. Thanks to [~zghao] for reviewing. > Parameters are incorrect in procedures jsp > -- > > Key: HBASE-23042 > URL: https://issues.apache.org/jira/browse/HBASE-23042 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: 1.png > > > In procedures jps, the parameters of table name, region start end keys are > wrong, please see the first picture. > This is because all bytes params are encoded in base64. It is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
[ https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-23170. Fix Version/s: 2.2.3 2.3.0 3.0.0 Resolution: Fixed > Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME > - > > Key: HBASE-23170 > URL: https://issues.apache.org/jira/browse/HBASE-23170 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3 > > > Admin#getRegionServers returns the server names. > ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and > metrics, while the metrics are not useful for Admin#getRegionServers method. > Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] > for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
[ https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954200#comment-16954200 ] Yi Mei commented on HBASE-23170: Pushed to master, branch-2, branch-2.2. Thanks to [~anoop.hbase], [~zhangduo] and [~zghao] for reviewing. > Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME > - > > Key: HBASE-23170 > URL: https://issues.apache.org/jira/browse/HBASE-23170 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > Admin#getRegionServers returns the server names. > ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and > metrics, while the metrics are not useful for Admin#getRegionServers method. > Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] > for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953458#comment-16953458 ] Yi Mei commented on HBASE-23042: The cause is that in org.apache.hbase.thirdparty.com.google.protobuf.util.JsonFormat: {code:java} case BYTES: generator.print("\""); generator.print(BaseEncoding.base64().encode(((ByteString) value).toByteArray())); generator.print("\""); break; {code} > Parameters are incorrect in procedures jsp > -- > > Key: HBASE-23042 > URL: https://issues.apache.org/jira/browse/HBASE-23042 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: 1.png > > > In procedures jps, the parameters of table name, region start end keys are > wrong, please see the first picture. > This is because all bytes params are encoded in base64. It is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
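The quoted JsonFormat branch explains the symptom: protobuf renders BYTES fields as base64, so a table name such as "t1" appears as "dDE=" in the jsp. A small self-contained sketch of the round trip, using only the JDK's `java.util.Base64` (the method names here are hypothetical helpers, not HBase APIs):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of why the jsp values look wrong: JsonFormat prints BYTES fields
// as base64; decoding restores the human-readable value.
public class Base64ParamSketch {
    // mirrors the quoted JsonFormat behaviour for BYTES fields
    static String encodeLikeJsonFormat(byte[] value) {
        return Base64.getEncoder().encodeToString(value);
    }

    // what a friendlier jsp could do instead: decode before display
    static String decodeForDisplay(String base64) {
        return new String(Base64.getDecoder().decode(base64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String rendered = encodeLikeJsonFormat("t1".getBytes(StandardCharsets.UTF_8));
        System.out.println(rendered);                   // dDE=
        System.out.println(decodeForDisplay(rendered)); // t1
    }
}
```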
[jira] [Assigned] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-23042: -- Assignee: Yi Mei > Parameters are incorrect in procedures jsp > -- > > Key: HBASE-23042 > URL: https://issues.apache.org/jira/browse/HBASE-23042 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: 1.png > > > In procedures jps, the parameters of table name, region start end keys are > wrong, please see the first picture. > This is because all bytes params are encoded in base64. It is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
Yi Mei created HBASE-23170: -- Summary: Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME Key: HBASE-23170 URL: https://issues.apache.org/jira/browse/HBASE-23170 Project: HBase Issue Type: Improvement Reporter: Yi Mei Assignee: Yi Mei Admin#getRegionServers returns the server names. ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and metrics, while the metrics are not useful for Admin#getRegionServers method. Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-23042: --- Attachment: 1.png > Parameters are incorrect in procedures jsp > -- > > Key: HBASE-23042 > URL: https://issues.apache.org/jira/browse/HBASE-23042 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Priority: Major > Attachments: 1.png > > > In procedures jps, the parameters of table name, region start end keys are > wrong, please see the first picture. > This is because all bytes params are encoded in base64. It is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23042) Parameters are incorrect in procedures jsp
Yi Mei created HBASE-23042: -- Summary: Parameters are incorrect in procedures jsp Key: HBASE-23042 URL: https://issues.apache.org/jira/browse/HBASE-23042 Project: HBase Issue Type: Bug Reporter: Yi Mei In the procedures jsp, the parameters for table name and region start/end keys are wrong; please see the first picture. This is because all bytes params are encoded in base64, which is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23039) HBCK2 bypass -r command does not work
[ https://issues.apache.org/jira/browse/HBASE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931970#comment-16931970 ] Yi Mei commented on HBASE-23039: Pushed to master. Thanks [~Apache9] for reviewing. > HBCK2 bypass -r command does not work > - > > Key: HBASE-23039 > URL: https://issues.apache.org/jira/browse/HBASE-23039 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-23039.001.patch > > > > The recursiveFlag is wrong: > {code:java} > boolean overrideFlag = commandLine.hasOption(override.getOpt()); > boolean recursiveFlag = commandLine.hasOption(override.getOpt()); > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23039) HBCK2 bypass -r command does not work
[ https://issues.apache.org/jira/browse/HBASE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-23039: --- Resolution: Fixed Status: Resolved (was: Patch Available) > HBCK2 bypass -r command does not work > - > > Key: HBASE-23039 > URL: https://issues.apache.org/jira/browse/HBASE-23039 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-23039.001.patch > > > > The recursiveFlag is wrong: > {code:java} > boolean overrideFlag = commandLine.hasOption(override.getOpt()); > boolean recursiveFlag = commandLine.hasOption(override.getOpt()); > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23039) HBCK2 bypass -r command does not work
[ https://issues.apache.org/jira/browse/HBASE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-23039: --- Attachment: HBASE-23039.001.patch > HBCK2 bypass -r command does not work > - > > Key: HBASE-23039 > URL: https://issues.apache.org/jira/browse/HBASE-23039 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-23039.001.patch > > > > The recursiveFlag is wrong: > {code:java} > boolean overrideFlag = commandLine.hasOption(override.getOpt()); > boolean recursiveFlag = commandLine.hasOption(override.getOpt()); > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23039) HBCK2 bypass -r command does not work
[ https://issues.apache.org/jira/browse/HBASE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-23039: --- Status: Patch Available (was: Open) > HBCK2 bypass -r command does not work > - > > Key: HBASE-23039 > URL: https://issues.apache.org/jira/browse/HBASE-23039 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: HBASE-23039.001.patch > > > > The recursiveFlag is wrong: > {code:java} > boolean overrideFlag = commandLine.hasOption(override.getOpt()); > boolean recursiveFlag = commandLine.hasOption(override.getOpt()); > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-23039) HBCK2 bypass -r command does not work
[ https://issues.apache.org/jira/browse/HBASE-23039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-23039: -- Assignee: Yi Mei > HBCK2 bypass -r command does not work > - > > Key: HBASE-23039 > URL: https://issues.apache.org/jira/browse/HBASE-23039 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > > The recursiveFlag is wrong: > {code:java} > boolean overrideFlag = commandLine.hasOption(override.getOpt()); > boolean recursiveFlag = commandLine.hasOption(override.getOpt()); > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23039) HBCK2 bypass -r command does not work
Yi Mei created HBASE-23039: -- Summary: HBCK2 bypass -r command does not work Key: HBASE-23039 URL: https://issues.apache.org/jira/browse/HBASE-23039 Project: HBase Issue Type: Bug Reporter: Yi Mei The recursiveFlag is wrong: {code:java} boolean overrideFlag = commandLine.hasOption(override.getOpt()); boolean recursiveFlag = commandLine.hasOption(override.getOpt()); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
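The bug in the quoted snippet is a copy-paste error: both flags read the override option, so `-r` never takes effect. A minimal sketch of the bug and the fix; `hasOption` here is a hypothetical stand-in for commons-cli's `CommandLine.hasOption`, used so the example is self-contained:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the copy-paste bug in HBCK2 bypass and its fix.
public class BypassFlagsSketch {
    // hypothetical stand-in for commons-cli CommandLine.hasOption
    static boolean hasOption(List<String> args, String opt) {
        return args.contains("-" + opt);
    }

    public static void main(String[] args) {
        List<String> cmd = Arrays.asList("-r"); // bypass invoked with -r only

        // buggy: the recursive flag reads the override option, so -r is ignored
        boolean recursiveFlagBuggy = hasOption(cmd, "o");

        // fixed: the recursive flag reads its own option
        boolean recursiveFlagFixed = hasOption(cmd, "r");

        System.out.println(recursiveFlagBuggy); // false: -r never took effect
        System.out.println(recursiveFlagFixed); // true
    }
}
```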
[jira] [Commented] (HBASE-23009) TestSnapshotScannerHDFSAclController is broken on branch-2
[ https://issues.apache.org/jira/browse/HBASE-23009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927210#comment-16927210 ] Yi Mei commented on HBASE-23009: The code of CleanerChore in branch-2 is different from master: {code:java} branch-2: boolean deleted = allFilesDeleted && allSubDirsDeleted; master: boolean deleted = allFilesDeleted && allSubDirsDeleted && isEmptyDirDeletable(dir); {code} Let me fix it. > TestSnapshotScannerHDFSAclController is broken on branch-2 > -- > > Key: HBASE-23009 > URL: https://issues.apache.org/jira/browse/HBASE-23009 > Project: HBase > Issue Type: Bug > Components: snapshots >Affects Versions: 2.3.0 >Reporter: Peter Somogyi >Priority: Major > Fix For: 2.3.0 > > > TestSnapshotScannerHDFSAclController.testCleanArchiveTableDir always fails on > branch-2. > {noformat} > java.lang.AssertionError at > org.apache.hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController.testCleanArchiveTableDir(TestSnapshotScannerHDFSAclController.java:745) > {noformat} > Test run: > [https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-2/4148/testReport/junit/org.apache.hadoop.hbase.security.access/TestSnapshotScannerHDFSAclController/testCleanArchiveTableDir/] -- This message was sent by Atlassian Jira (v8.3.2#803003)
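The branch difference quoted above means master gates deletion of an empty directory behind one extra predicate. A self-contained sketch of the master-side logic; `isEmptyDirDeletable` here is a stand-in predicate with made-up criteria, only meant to show how the extra guard changes the decision:

```java
import java.util.Collections;
import java.util.List;

// Sketch of the master-branch CleanerChore decision: an empty directory is
// deleted only if it is also deemed deletable. The predicate body is a
// hypothetical example, not the real HBase check.
public class CleanerChoreSketch {
    static boolean isEmptyDirDeletable(String dir) {
        // illustrative rule: keep empty dirs under archive (e.g. for HDFS ACL logic)
        return !dir.contains("/archive/");
    }

    static boolean shouldDelete(String dir, List<String> files, List<String> subDirs) {
        boolean allFilesDeleted = files.isEmpty();
        boolean allSubDirsDeleted = subDirs.isEmpty();
        // master behaviour; branch-2 lacked the final term
        return allFilesDeleted && allSubDirsDeleted && isEmptyDirDeletable(dir);
    }

    public static void main(String[] args) {
        List<String> none = Collections.emptyList();
        System.out.println(shouldDelete("/hbase/archive/data/t1", none, none)); // false
        System.out.println(shouldDelete("/hbase/oldWALs/x", none, none));       // true
    }
}
```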
[jira] [Resolved] (HBASE-22878) Show table throttle quotas in table jsp
[ https://issues.apache.org/jira/browse/HBASE-22878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22878. Resolution: Fixed > Show table throttle quotas in table jsp > --- > > Key: HBASE-22878 > URL: https://issues.apache.org/jira/browse/HBASE-22878 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.1 > > Attachments: 1.png, 2.png > > > Currently, table jsp shows space quotas but has no throttle quotas. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (HBASE-22878) Show table throttle quotas in table jsp
[ https://issues.apache.org/jira/browse/HBASE-22878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921108#comment-16921108 ] Yi Mei commented on HBASE-22878: Thanks [~zghaobac] for reviewing. Pushed to master, branch-2 and branch-2.2 > Show table throttle quotas in table jsp > --- > > Key: HBASE-22878 > URL: https://issues.apache.org/jira/browse/HBASE-22878 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: 1.png, 2.png > > > Currently, table jsp shows space quotas but has no throttle quotas. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (HBASE-22878) Show table throttle quotas in table jsp
[ https://issues.apache.org/jira/browse/HBASE-22878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-22878: --- Fix Version/s: 2.2.1 2.3.0 3.0.0 > Show table throttle quotas in table jsp > --- > > Key: HBASE-22878 > URL: https://issues.apache.org/jira/browse/HBASE-22878 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.1 > > Attachments: 1.png, 2.png > > > Currently, table jsp shows space quotas but has no throttle quotas. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Assigned] (HBASE-22878) Show table throttle quotas in table jsp
[ https://issues.apache.org/jira/browse/HBASE-22878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei reassigned HBASE-22878: -- Assignee: Yi Mei > Show table throttle quotas in table jsp > --- > > Key: HBASE-22878 > URL: https://issues.apache.org/jira/browse/HBASE-22878 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Attachments: 1.png, 2.png > > > Currently, table jsp shows space quotas but has no throttle quotas. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (HBASE-22946) Fix TableNotFound when grant/revoke if AccessController is not loaded
[ https://issues.apache.org/jira/browse/HBASE-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-22946: --- Affects Version/s: 2.2.2 2.3.0 3.0.0 > Fix TableNotFound when grant/revoke if AccessController is not loaded > - > > Key: HBASE-22946 > URL: https://issues.apache.org/jira/browse/HBASE-22946 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.3.0, 2.2.2 >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > When doing grant, revoke..., a TableNotFoundException will occur if > AccessController is not configured. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (HBASE-22946) Fix TableNotFound when grant/revoke if AccessController is not loaded
[ https://issues.apache.org/jira/browse/HBASE-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei updated HBASE-22946: --- Fix Version/s: 2.2.2 2.3.0 3.0.0 Affects Version/s: (was: 2.2.2) (was: 2.3.0) (was: 3.0.0) > Fix TableNotFound when grant/revoke if AccessController is not loaded > - > > Key: HBASE-22946 > URL: https://issues.apache.org/jira/browse/HBASE-22946 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2 > > > When doing grant, revoke..., a TableNotFoundException will occur if > AccessController is not configured. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (HBASE-22946) Fix TableNotFound when grant/revoke if AccessController is not loaded
[ https://issues.apache.org/jira/browse/HBASE-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22946. Resolution: Fixed > Fix TableNotFound when grant/revoke if AccessController is not loaded > - > > Key: HBASE-22946 > URL: https://issues.apache.org/jira/browse/HBASE-22946 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2 > > > When doing grant, revoke..., a TableNotFoundException will occur if > AccessController is not configured. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (HBASE-22946) Fix TableNotFound when grant/revoke if AccessController is not loaded
[ https://issues.apache.org/jira/browse/HBASE-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16920682#comment-16920682 ] Yi Mei commented on HBASE-22946: Thanks [~zghaobac] and @stack for reviewing. Pushed to master, branch-2, branch-2.2 > Fix TableNotFound when grant/revoke if AccessController is not loaded > - > > Key: HBASE-22946 > URL: https://issues.apache.org/jira/browse/HBASE-22946 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > When doing grant, revoke..., a TableNotFoundException will occur if > AccessController is not configured. -- This message was sent by Atlassian Jira (v8.3.2#803003)