[jira] [Updated] (HBASE-20886) [Auth] Support keytab login in hbase client
[ https://issues.apache.org/jira/browse/HBASE-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reid Chan updated HBASE-20886:
------------------------------
    Status: Patch Available  (was: Open)

> [Auth] Support keytab login in hbase client
> -------------------------------------------
>
>                 Key: HBASE-20886
>                 URL: https://issues.apache.org/jira/browse/HBASE-20886
>             Project: HBase
>          Issue Type: Improvement
>          Components: asyncclient, Client, security
>            Reporter: Reid Chan
>            Assignee: Reid Chan
>            Priority: Critical
>         Attachments: HBASE-20886.master.001.patch
>
> There are many questions on the user mailing list and Slack channel about how to connect to a kerberized HBase cluster through the hbase-client API.
> {{hbase.client.keytab.file}} and {{hbase.client.keytab.principal}} already exist in the code base, but they are only used in {{Canary}}.
> This issue makes use of those two configs to support client-side keytab-based login. Once it is resolved, hbase-client should connect directly to a kerberized cluster without changing any code, as long as {{hbase.client.keytab.file}} and {{hbase.client.keytab.principal}} are specified.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
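Once this lands, client-side login should need only the two properties in the client's configuration. A hypothetical hbase-site.xml fragment (the keytab path and principal are placeholders, not values from the patch):

```xml
<!-- Client-side Kerberos keytab login; both values are illustrative placeholders -->
<property>
  <name>hbase.client.keytab.file</name>
  <value>/etc/security/keytabs/hbase-client.keytab</value>
</property>
<property>
  <name>hbase.client.keytab.principal</name>
  <value>hbase-client@EXAMPLE.COM</value>
</property>
```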
[jira] [Updated] (HBASE-20886) [Auth] Support keytab login in hbase client
[ https://issues.apache.org/jira/browse/HBASE-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reid Chan updated HBASE-20886:
------------------------------
    Attachment: HBASE-20886.master.001.patch
[jira] [Commented] (HBASE-20886) [Auth] Support keytab login in hbase client
[ https://issues.apache.org/jira/browse/HBASE-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544039#comment-16544039 ]

Reid Chan commented on HBASE-20886:
-----------------------------------
FYI [~elserj]
[jira] [Created] (HBASE-20886) [Auth] Support keytab login in hbase client
Reid Chan created HBASE-20886:
------------------------------
             Summary: [Auth] Support keytab login in hbase client
                 Key: HBASE-20886
                 URL: https://issues.apache.org/jira/browse/HBASE-20886
             Project: HBase
          Issue Type: Improvement
          Components: asyncclient, Client, security
            Reporter: Reid Chan
            Assignee: Reid Chan
[jira] [Updated] (HBASE-20882) Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
[ https://issues.apache.org/jira/browse/HBASE-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-20882:
------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

Pushed to branch-2.0. Thanks [~brfrn169] for contributing.

> Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
>
>                 Key: HBASE-20882
>                 URL: https://issues.apache.org/jira/browse/HBASE-20882
>             Project: HBase
>          Issue Type: Sub-task
>          Components: backport
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Major
>         Attachments: HBASE-20882.branch-2.0.001.patch
>
> Backport parent issue to branch-2.0.
[jira] [Commented] (HBASE-20865) CreateTableProcedure is stuck in retry loop in CREATE_TABLE_WRITE_FS_LAYOUT state
[ https://issues.apache.org/jira/browse/HBASE-20865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544013#comment-16544013 ]

Duo Zhang commented on HBASE-20865:
-----------------------------------
[~brfrn169] Could you please provide a patch for branch-2.0 as well? This is a bug fix, so it should also be committed to 2.0. Thanks.

> CreateTableProcedure is stuck in retry loop in CREATE_TABLE_WRITE_FS_LAYOUT state
>
>                 Key: HBASE-20865
>                 URL: https://issues.apache.org/jira/browse/HBASE-20865
>             Project: HBase
>          Issue Type: Bug
>          Components: amv2
>            Reporter: Toshihiro Suzuki
>            Assignee: Toshihiro Suzuki
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0, 2.1.1
>
>         Attachments: HBASE-20865.master.001.patch
>
> Similar to HBASE-20616, CreateTableProcedure gets stuck in a retry loop in the CREATE_TABLE_WRITE_FS_LAYOUT state when writing to HDFS fails.
[jira] [Commented] (HBASE-20882) Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
[ https://issues.apache.org/jira/browse/HBASE-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544010#comment-16544010 ]

Duo Zhang commented on HBASE-20882:
-----------------------------------
+1. Let me commit.
[jira] [Commented] (HBASE-20460) Doc offheap write-path
[ https://issues.apache.org/jira/browse/HBASE-20460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544001#comment-16544001 ]

Anoop Sam John commented on HBASE-20460:
----------------------------------------
Oh sure.. If you can help, most welcome.. :-) Or else I will work on it next week.

> Doc offheap write-path
>
>                 Key: HBASE-20460
>                 URL: https://issues.apache.org/jira/browse/HBASE-20460
>             Project: HBase
>          Issue Type: Bug
>          Components: documentation, Offheaping
>            Reporter: stack
>            Priority: Critical
>             Fix For: 2.2.0
>
> We have an empty section in the refguide that needs filling in on how to enable the offheap write-path, how to know whether you've set it up correctly, how to tune it, and how it relates to direct memory allocation and the offheap read-path.
[jira] [Commented] (HBASE-20876) Improve docs style in HConstants
[ https://issues.apache.org/jira/browse/HBASE-20876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543997#comment-16543997 ]

Reid Chan commented on HBASE-20876:
-----------------------------------
Could you please provide a patch in git format? That way the contributor's information is kept in the commit.

> Improve docs style in HConstants
>
>                 Key: HBASE-20876
>                 URL: https://issues.apache.org/jira/browse/HBASE-20876
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Reid Chan
>            Assignee: Wei-Chiu Chuang
>            Priority: Minor
>              Labels: beginner, beginners, newbie
>         Attachments: HBASE-20876.master.001.patch
>
> In {{HConstants}}, there's a docs snippet:
> {code}
> /** Don't use it! This'll get you the wrong path in a secure cluster.
>  * Use FileSystem.getHomeDirectory() or
>  * "/user/" + UserGroupInformation.getCurrentUser().getShortUserName() */
> {code}
> The style is ugly. Let's improve these docs to the following form:
> {code}
> /**
>  * Description
>  */
> {code}
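Applied to the snippet above, the requested style might look like the following sketch (the class name, constant name, and value are placeholders of mine, not the actual HConstants field):

```java
public class DocStyleExample {
  /**
   * Don't use it! This'll get you the wrong path in a secure cluster.
   * Use FileSystem.getHomeDirectory() or
   * "/user/" + UserGroupInformation.getCurrentUser().getShortUserName()
   */
  public static final String EXAMPLE_USER_PATH = "/user/";

  public static void main(String[] args) {
    // The Javadoc block above follows the one-delimiter-per-line style the issue asks for.
    System.out.println(EXAMPLE_USER_PATH);
  }
}
```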
[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543987#comment-16543987 ]

Sean Busbey commented on HBASE-20649:
-------------------------------------
I've got this ready to go locally. FYI [~balazs.meszaros], I've got this staged with [~psomogyi] as author and you as amending-author.
[~zyork] / [~Apache9] let me know if y'all would like to be listed as signed-off-by on this in addition to me. I'm not sure whether your supportive statements above should be taken as specific reviews.

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
>
>                 Key: HBASE-20649
>                 URL: https://issues.apache.org/jira/browse/HBASE-20649
>             Project: HBase
>          Issue Type: New Feature
>            Reporter: Peter Somogyi
>            Assignee: Balazs Meszaros
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HBASE-20649.master.001.patch, HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, HBASE-20649.master.004.patch, HBASE-20649.master.005.patch, HBASE-20649.master.006.patch
>
> HBASE-20592 adds a tool to check that column families on the cluster do not have PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the HFiles have not been rewritten yet, we need a tool that can verify the content of HFiles in the cluster.
[jira] [Commented] (HBASE-20873) Update doc for Endpoint-based Export
[ https://issues.apache.org/jira/browse/HBASE-20873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543978#comment-16543978 ]

Hadoop QA commented on HBASE-20873:
-----------------------------------
(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 5m 14s | master passed |
| 0 | refguide | 5m 15s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. |
|| || || || Patch Compile Tests ||
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | refguide | 5m 3s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. |
|| || || || Other Tests ||
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | 16m 23s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20873 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12931635/HBASE-20873.master.001.patch |
| Optional Tests | asflicense refguide |
| uname | Linux 767fc9298808 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 2997b6d071 |
| maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/13624/artifact/patchprocess/branch-site/book.html |
| refguide | https://builds.apache.org/job/PreCommit-HBASE-Build/13624/artifact/patchprocess/patch-site/book.html |
| Max. process+thread count | 83 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/13624/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.

> Update doc for Endpoint-based Export
>
>                 Key: HBASE-20873
>                 URL: https://issues.apache.org/jira/browse/HBASE-20873
>             Project: HBase
>          Issue Type: Improvement
>          Components: documentation
>    Affects Versions: 2.0.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Minor
>         Attachments: HBASE-20873.master.001.patch
>
> The current documentation on the usage is a little vague. I'd like to take a stab at expanding it, based on my experience.
[jira] [Updated] (HBASE-20873) Update doc for Endpoint-based Export
[ https://issues.apache.org/jira/browse/HBASE-20873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HBASE-20873:
------------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HBASE-20873) Update doc for Endpoint-based Export
[ https://issues.apache.org/jira/browse/HBASE-20873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HBASE-20873:
------------------------------------
    Attachment: HBASE-20873.master.001.patch
[jira] [Commented] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543885#comment-16543885 ]

Andrew Purtell commented on HBASE-20883:
----------------------------------------
bq. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI

This won't scale. At 100 nodes the list will be unwieldy. What happens when you have a cluster of 500? 1000? 2000?

bq. also expose the Read/Write/Total requests per sec information in the HMaster JMX API.

We could do this, since ClusterStatus information is already available in the master. Note, however, that all production deployments should be exporting metrics to a metrics collection system and database (such as OpenTSDB or Argus); once you do that, a simple query of the metrics DB will give you the same information.

> HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
>
>                 Key: HBASE-20883
>                 URL: https://issues.apache.org/jira/browse/HBASE-20883
>             Project: HBase
>          Issue Type: Improvement
>          Components: Admin, master, metrics, monitoring, UI, Usability
>    Affects Versions: 1.1.2
>            Reporter: Hari Sekhon
>            Priority: Major
>
> HMaster currently shows Requests Per Second per RegionServer under the HMaster UI's /master-status page -> Region Servers -> Base Stats section.
> Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests-per-second information in the HMaster JMX API.
> This will make it easier to find read or write hotspotting in HBase, since a combined total minimizes and masks differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec per RegionServer, so write skew is masked: it doesn't show a significant enough difference in the much larger combined Total Requests Per Second stat.
> For now I've written a Python tool to calculate this info from the RegionServers' JMX read/write/total request counts, but since HMaster is collecting this info anyway it shouldn't be a big change to also show Reads / Writes Per Sec as well as Total.
> Find my tools for more granular Read/Write Requests Per Sec per RegionServer and also per Region at my [PyTools github repo|https://github.com/harisekhon/pytools], along with a selection of other HBase tools I've used for performance debugging over the years.
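The per-RegionServer rate the reporter computes externally is just a delta of cumulative JMX counters over the polling interval. A minimal sketch of that calculation (class and method names are mine, not from any HBase API):

```java
// Computes a per-interval rate from a monotonically increasing counter,
// e.g. readRequestCount as exposed by a RegionServer's JMX endpoint.
public class RequestRateTracker {
    private long lastCount = -1;
    private long lastTimeMs = -1;

    /** Returns requests/second since the previous sample (0.0 on the first call). */
    public double update(long cumulativeCount, long nowMs) {
        double rate = 0.0;
        if (lastTimeMs >= 0 && nowMs > lastTimeMs) {
            // Cumulative counters only grow, so the rate is delta / elapsed time.
            rate = (cumulativeCount - lastCount) * 1000.0 / (nowMs - lastTimeMs);
        }
        lastCount = cumulativeCount;
        lastTimeMs = nowMs;
        return rate;
    }
}
```

Sampling read and write counters separately with two such trackers yields exactly the split the issue asks the HMaster UI to display.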
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543850#comment-16543850 ]

Hudson commented on HBASE-20884:
--------------------------------
Results for branch branch-2 [build #979 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979/]: (/) +1 overall

details (if available):
(/) +1 general checks -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//General_Nightly_Build_Report/]
(/) +1 jdk8 hadoop2 checks -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) +1 jdk8 hadoop3 checks -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) +1 source release artifact -- See build output for details.
(/) +1 client integration test

> Replace usage of our Base64 implementation with java.util.Base64
>
>                 Key: HBASE-20884
>                 URL: https://issues.apache.org/jira/browse/HBASE-20884
>             Project: HBase
>          Issue Type: Task
>            Reporter: Mike Drob
>            Assignee: Mike Drob
>            Priority: Major
>             Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 1.4.6, 2.0.2, 2.1.1
>
>         Attachments: HBASE-20884.branch-1.001.patch, HBASE-20884.branch-1.002.patch, HBASE-20884.master.001.patch
>
> We have a public-domain implementation of Base64 that is copied into our code base and infrequently receives updates. We should replace usage of it with the new Java 8 java.util.Base64 where possible.
> For the migration, I propose a phased approach:
> * Deprecate on 1.x and 2.x to signal to users that this is going away.
> * Replace usages on branch-2 and master with j.u.Base64.
> * Delete our implementation of Base64 on master.
> Does this seem in line with our API compatibility requirements?
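The migration target is plain JDK 8 API. A minimal, self-contained sketch of the encode/decode calls that replace the bundled implementation (the class name is mine; the actual patch touches many call sites):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] raw = "HBase".getBytes(StandardCharsets.UTF_8);

        // Encoding: java.util.Base64 replaces the copied public-domain encoder.
        String encoded = Base64.getEncoder().encodeToString(raw);
        System.out.println(encoded); // SEJhc2U=

        // Decoding round-trips back to the original bytes.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // HBase
    }
}
```

`Base64.getUrlEncoder()` and `Base64.getMimeEncoder()` cover the URL-safe and MIME variants the old class handled via option flags.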
[jira] [Commented] (HBASE-20879) Compacting memstore config should handle lower case
[ https://issues.apache.org/jira/browse/HBASE-20879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543851#comment-16543851 ]

Hudson commented on HBASE-20879:
--------------------------------
Results for branch branch-2 [build #979 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979/]: (/) +1 overall

details (if available):
(/) +1 general checks -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//General_Nightly_Build_Report/]
(/) +1 jdk8 hadoop2 checks -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) +1 jdk8 hadoop3 checks -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) +1 source release artifact -- See build output for details.
(/) +1 client integration test

> Compacting memstore config should handle lower case
>
>                 Key: HBASE-20879
>                 URL: https://issues.apache.org/jira/browse/HBASE-20879
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.0.1
>            Reporter: Tushar
>            Assignee: Ted Yu
>            Priority: Major
>             Fix For: 2.2.0
>
>         Attachments: 20879.v2.txt
>
> Tushar reported seeing the following in the region server log when entering 'basic' for the compacting memstore type:
> {code}
> 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-0] handler.OpenRegionHandler: Failed open of region=usertable,user6379,1531182972304.69abd81a44e9cc3ef9e150709f4f69ab., starting to roll back the global memstore size.
> java.io.IOException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.hbase.MemoryCompactionPolicy.basic
>         at org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035)
>         at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900)
>         at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.hbase.MemoryCompactionPolicy.basic
>         at java.lang.Enum.valueOf(Enum.java:238)
>         at org.apache.hadoop.hbase.MemoryCompactionPolicy.valueOf(MemoryCompactionPolicy.java:26)
>         at org.apache.hadoop.hbase.regionserver.HStore.getMemstore(HStore.java:331)
>         at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:271)
>         at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5531)
>         at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:999)
>         at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:996)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         ... 3 more
> 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-1] handler.OpenRegionHandler: Failed open of region=temp,,1530511278693.0be48eedc68b9358aa475946d00571f1., starting to roll back the global memstore size.
> java.io.IOException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.hbase.MemoryCompactionPolicy.basic
>         at org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035)
>         at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900)
>         at >
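The failure above occurs because `Enum.valueOf` is case-sensitive. A minimal, self-contained sketch of the usual remedy, normalizing case before the lookup (the enum here is a stand-in reproduced for illustration; the real one lives in `org.apache.hadoop.hbase`, and the actual fix in the attached patch may differ):

```java
import java.util.Locale;

public class MemstoreConfigCase {
    // Stand-in for org.apache.hadoop.hbase.MemoryCompactionPolicy; values reproduced for illustration.
    enum MemoryCompactionPolicy { NONE, BASIC, EAGER }

    // Enum.valueOf is case-sensitive, so "basic" from hbase-site.xml throws
    // IllegalArgumentException unless the string is normalized first.
    static MemoryCompactionPolicy parse(String configured) {
        return MemoryCompactionPolicy.valueOf(configured.toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(parse("basic")); // BASIC
        try {
            // What the region server effectively did before the fix:
            MemoryCompactionPolicy.valueOf("basic");
        } catch (IllegalArgumentException e) {
            System.out.println("raw lookup failed: " + e.getMessage());
        }
    }
}
```

`Locale.ROOT` avoids surprises from locale-dependent case mappings (e.g. the Turkish dotless i).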
[jira] [Commented] (HBASE-20865) CreateTableProcedure is stuck in retry loop in CREATE_TABLE_WRITE_FS_LAYOUT state
[ https://issues.apache.org/jira/browse/HBASE-20865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543849#comment-16543849 ]

Hudson commented on HBASE-20865:
--------------------------------
Results for branch branch-2 [build #979 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979/]: (/) +1 overall

details (if available):
(/) +1 general checks -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//General_Nightly_Build_Report/]
(/) +1 jdk8 hadoop2 checks -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) +1 jdk8 hadoop3 checks -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/979//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) +1 source release artifact -- See build output for details.
(/) +1 client integration test
[jira] [Commented] (HBASE-20672) Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at every monitoring interval
[ https://issues.apache.org/jira/browse/HBASE-20672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543833#comment-16543833 ]

Hadoop QA commented on HBASE-20672:
-----------------------------------
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 35s | Docker mode activated. |
|| || || || Prechecks ||
| 0 | findbugs | 0m 1s | Findbugs executables are not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-1 Compile Tests ||
| 0 | mvndep | 0m 35s | Maven dependency ordering for branch |
| +1 | mvninstall | 2m 45s | branch-1 passed |
| +1 | compile | 1m 3s | branch-1 passed with JDK v1.8.0_172 |
| +1 | compile | 0m 59s | branch-1 passed with JDK v1.7.0_181 |
| +1 | checkstyle | 1m 42s | branch-1 passed |
| +1 | shadedjars | 2m 38s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 50s | branch-1 passed with JDK v1.8.0_172 |
| +1 | javadoc | 0m 58s | branch-1 passed with JDK v1.7.0_181 |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 38s | the patch passed |
| +1 | compile | 0m 53s | the patch passed with JDK v1.8.0_172 |
| +1 | javac | 0m 53s | the patch passed |
| +1 | compile | 1m 1s | the patch passed with JDK v1.7.0_181 |
| +1 | javac | 1m 1s | the patch passed |
| +1 | checkstyle | 1m 39s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 2m 39s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 1m 45s | Patch does not cause any errors with Hadoop 2.7.4. |
| +1 | javadoc | 0m 44s | the patch passed with JDK v1.8.0_172 |
| +1 | javadoc | 1m 1s | the patch passed with JDK v1.7.0_181 |
|| || || || Other Tests ||
| +1 | unit | 0m 24s | hbase-hadoop-compat in the patch passed. |
| +1 | unit | 0m 29s | hbase-hadoop2-compat in the patch passed. |
| -1 | unit | 159m 51s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 56s | The patch does not generate ASF License warnings. |
| | | 186m
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543826#comment-16543826 ] Hudson commented on HBASE-20884: Results for branch branch-2.0 [build #545 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/545/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/545//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/545//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/545//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 1.4.6, 2.0.2, 2.1.1 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.branch-1.002.patch, HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
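The migration above swaps HBase's bundled public-domain Base64 class for the JDK's. A minimal sketch of the java.util.Base64 API the patch moves to (the class name here is illustrative, not taken from the patch):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] raw = "hbase".getBytes(StandardCharsets.UTF_8);

        // Encode/decode with the JDK implementation rather than the
        // copied public-domain org.apache.hadoop.hbase.util.Base64.
        String encoded = Base64.getEncoder().encodeToString(raw);
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);                                      // aGJhc2U=
        System.out.println(new String(decoded, StandardCharsets.UTF_8));  // hbase
    }
}
```

Unlike the copied implementation, the JDK's Encoder and Decoder instances are documented as safe for concurrent use, which is one reason the drop-in replacement proposed here is straightforward.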
[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543816#comment-16543816 ] Hadoop QA commented on HBASE-15320: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 22s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-assembly . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 9s{color} | {color:red} root generated 3 new + 1293 unchanged - 0 fixed = 1296 total (was 1293) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 21s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 57s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hbase-kafka-model . hbase-assembly {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}201m 2s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}256m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.replication.TestSyncReplicationStandbyKillRS | \\ \\ ||
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20884: -- Fix Version/s: (was: 2.1.0) 2.1.1 > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0, 1.5.0, 1.2.7, 1.3.3, 1.4.6, 2.0.2, 2.1.1 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.branch-1.002.patch, HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543720#comment-16543720 ] Hadoop QA commented on HBASE-20649: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 4m 54s{color} | {color:blue} branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. 
{color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 20s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 4m 50s{color} | {color:blue} patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 21s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}266m 17s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}333m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker |
[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters
[ https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543716#comment-16543716 ] Hudson commented on HBASE-18477: Results for branch HBASE-18477 [build #263 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/263/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/263//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/263//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/263//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} --Failed when running client tests on top of Hadoop 2. [see log for details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/263//artifact/output-integration/hadoop-2.log]. (note that this means we didn't run on Hadoop 3) > Umbrella JIRA for HBase Read Replica clusters > - > > Key: HBASE-18477 > URL: https://issues.apache.org/jira/browse/HBASE-18477 > Project: HBase > Issue Type: New Feature >Reporter: Zach York >Assignee: Zach York >Priority: Major > Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase > Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope > doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf > > > Recently, changes (such as HBASE-17437) have unblocked HBase to run with a > root directory external to the cluster (such as in Amazon S3). 
This means > that the data is stored outside of the cluster and can be accessible after > the cluster has been terminated. One use case that is often asked about is > pointing multiple clusters to one root directory (sharing the data) to have > read resiliency in the case of a cluster failure. > > This JIRA is an umbrella JIRA to contain all the tasks necessary to create a > read-replica HBase cluster that is pointed at the same root directory. > > This requires making the Read-Replica cluster Read-Only (no metadata > operation or data operations). > Separating the hbase:meta table for each cluster (Otherwise HBase gets > confused with multiple clusters trying to update the meta table with their ip > addresses) > Adding refresh functionality for the meta table to ensure new metadata is > picked up on the read replica cluster. > Adding refresh functionality for HFiles for a given table to ensure new data > is picked up on the read replica cluster. > > This can be used with any existing cluster that is backed by an external > filesystem. > > Please note that this feature is still quite manual (with the potential for > automation later). > > More information on this particular feature can be found here: > https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-20884: --- Fix Version/s: 2.0.2 1.4.6 1.3.3 1.2.7 1.5.0 2.1.0 > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0, 2.1.0, 1.5.0, 1.2.7, 1.3.3, 1.4.6, 2.0.2 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.branch-1.002.patch, HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
[ https://issues.apache.org/jira/browse/HBASE-20885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543659#comment-16543659 ] Sakthi commented on HBASE-20885: I'm not sure if this was done on purpose (for some kind of optimization), but the leftover entry causes problems when a Space quota is later set on the same table (see the parent issue for reference). > Remove entry for RPC quota from hbase:quota when RPC quota is removed. > -- > > Key: HBASE-20885 > URL: https://issues.apache.org/jira/browse/HBASE-20885 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > > When an RPC quota is removed (using LIMIT => 'NONE'), the entry in the > hbase:quota table is not completely removed. For example, see below: > {noformat} > hbase(main):005:0> create 't2','cf1' > Created table t2 > Took 0.8000 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.1024 seconds > hbase(main):007:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0622 seconds > hbase(main):008:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1531513014463, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 >\x05 \x02 > 1 row(s) > Took 0.0453 seconds > hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' > Took 0.0097 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0338 seconds > hbase(main):011:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1531513039505, > value=PBUF\x12\x00 > 1 row(s) > Took 0.0066 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
[ https://issues.apache.org/jira/browse/HBASE-20885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-20885 started by Sakthi. -- > Remove entry for RPC quota from hbase:quota when RPC quota is removed. > -- > > Key: HBASE-20885 > URL: https://issues.apache.org/jira/browse/HBASE-20885 > Project: HBase > Issue Type: Sub-task >Reporter: Sakthi >Assignee: Sakthi >Priority: Minor > > When an RPC quota is removed (using LIMIT => 'NONE'), the entry in the > hbase:quota table is not completely removed. For example, see below: > {noformat} > hbase(main):005:0> create 't2','cf1' > Created table t2 > Took 0.8000 seconds > => Hbase::Table - t2 > hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => > '10M/sec' > Took 0.1024 seconds > hbase(main):007:0> list_quotas > OWNER QUOTAS > TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => > REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE > 1 row(s) > Took 0.0622 seconds > hbase(main):008:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1531513014463, > value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 >\x05 \x02 > 1 row(s) > Took 0.0453 seconds > hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' > Took 0.0097 seconds > hbase(main):010:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0338 seconds > hbase(main):011:0> scan 'hbase:quota' > ROW COLUMN+CELL > t.t2 column=q:s, timestamp=1531513039505, > value=PBUF\x12\x00 > 1 row(s) > Took 0.0066 seconds > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20885) Remove entry for RPC quota from hbase:quota when RPC quota is removed.
Sakthi created HBASE-20885: -- Summary: Remove entry for RPC quota from hbase:quota when RPC quota is removed. Key: HBASE-20885 URL: https://issues.apache.org/jira/browse/HBASE-20885 Project: HBase Issue Type: Sub-task Reporter: Sakthi Assignee: Sakthi When an RPC quota is removed (using LIMIT => 'NONE'), the entry in the hbase:quota table is not completely removed. For example, see below: {noformat} hbase(main):005:0> create 't2','cf1' Created table t2 Took 0.8000 seconds => Hbase::Table - t2 hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => '10M/sec' Took 0.1024 seconds hbase(main):007:0> list_quotas OWNER QUOTAS TABLE => t2 TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE 1 row(s) Took 0.0622 seconds hbase(main):008:0> scan 'hbase:quota' ROW COLUMN+CELL t.t2 column=q:s, timestamp=1531513014463, value=PBUF\x12\x0B\x12\x09\x08\x04\x10\x80\x80\x80 \x05 \x02 1 row(s) Took 0.0453 seconds hbase(main):009:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 'NONE' Took 0.0097 seconds hbase(main):010:0> list_quotas OWNER QUOTAS 0 row(s) Took 0.0338 seconds hbase(main):011:0> scan 'hbase:quota' ROW COLUMN+CELL t.t2 column=q:s, timestamp=1531513039505, value=PBUF\x12\x00 1 row(s) Took 0.0066 seconds {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HBASE-20401) Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable
[ https://issues.apache.org/jira/browse/HBASE-20401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tak Lon (Stephen) Wu reassigned HBASE-20401: Assignee: Tak Lon (Stephen) Wu > Make `MAX_WAIT` and `waitIfNotFinished` in CleanerContext configurable > -- > > Key: HBASE-20401 > URL: https://issues.apache.org/jira/browse/HBASE-20401 > Project: HBase > Issue Type: Improvement > Components: master >Affects Versions: 3.0.0, 1.5.0, 2.0.0-beta-1, 1.4.4, 2.0.0 >Reporter: Tak Lon (Stephen) Wu >Assignee: Tak Lon (Stephen) Wu >Priority: Minor > Labels: beginner > > When backporting HBASE-18309 in HBASE-20352, deleteFiles calls > CleanerContext.java#getResult with a waitIfNotFinished timeout to wait for > notification (notify) from the fs.delete file thread. There are two > situations that might require tuning MAX_WAIT in CleanerContext or waitIfNotFinished > when LogCleaner calls getResult: > # fs.delete never completes (strange but possible), in which case we wait for > a maximum of 60 seconds; 60 seconds might be too long. > # getResult waits in periods of 500 milliseconds, but fs.delete > has already completed and setFromClear is set without yet calling notify(); one might want to > tune the 500 milliseconds down to 200 or less. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
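The two tuning situations above can be read against a minimal sketch of the timed wait/notify pattern the issue describes. This is a simplified stand-in, not the actual CleanerContext source; the constants mirror the 60-second and 500-millisecond figures from the issue text, and the field names follow the issue's wording:

```java
// Illustrative sketch only: a caller polls getResult() in bounded steps,
// while the fs.delete thread calls setResult() and notifies.
public class CleanerContextSketch {
    private static final long MAX_WAIT = 60_000L;           // hard cap; the issue argues this may be too long
    private static final long WAIT_IF_NOT_FINISHED = 500L;  // per-iteration wait; candidate for tuning down

    private boolean result;
    private boolean setFromClear;  // set by the fs.delete thread

    public synchronized boolean getResult() {
        long waitedMs = 0;
        while (!setFromClear && waitedMs < MAX_WAIT) {
            try {
                wait(WAIT_IF_NOT_FINISHED);  // woken early by notifyAll(), or times out
                waitedMs += WAIT_IF_NOT_FINISHED;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return result;
    }

    public synchronized void setResult(boolean res) {
        this.result = res;
        this.setFromClear = true;
        notifyAll();  // wake the waiting cleaner thread
    }
}
```

Making both constants configurable, as the title proposes, lets operators shorten the 60-second cap for situation 1 and the 500 ms step for situation 2.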
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543639#comment-16543639 ] Hadoop QA commented on HBASE-20884: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 47s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 41s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s{color} | {color:red} hbase-common: The patch generated 1 new + 
27 unchanged - 1 fixed = 28 total (was 28) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 2m 41s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 1m 39s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:1f3957d | | JIRA Issue | HBASE-20884 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12931596/HBASE-20884.branch-1.002.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle
[jira] [Updated] (HBASE-20879) Compacting memstore config should handle lower case
[ https://issues.apache.org/jira/browse/HBASE-20879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20879: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) > Compacting memstore config should handle lower case > --- > > Key: HBASE-20879 > URL: https://issues.apache.org/jira/browse/HBASE-20879 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.1 >Reporter: Tushar >Assignee: Ted Yu >Priority: Major > Fix For: 2.2.0 > > Attachments: 20879.v2.txt > > > Tushar reported seeing the following in region server log when entering > 'basic' for compacting memstore type: > {code} > 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-0] > handler.OpenRegionHandler: Failed open of > region=usertable,user6379,1531182972304.69abd81a44e9cc3ef9e150709f4f69ab., > starting to roll back the global memstore size. > java.io.IOException: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109) > at > 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hbase.MemoryCompactionPolicy.valueOf(MemoryCompactionPolicy.java:26) > at > org.apache.hadoop.hbase.regionserver.HStore.getMemstore(HStore.java:331) > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:271) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5531) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:999) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:996) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 3 more > 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-1] > handler.OpenRegionHandler: Failed open of > region=temp,,1530511278693.0be48eedc68b9358aa475946d00571f1., starting to > roll back the global memstore size. 
> java.io.IOException: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109) > at >
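The `No enum constant` failures in the stack traces above come from `Enum.valueOf`, which matches constant names case-sensitively, so a configured value of `basic` never matches the `BASIC` constant. A minimal sketch of the fix shape, using a stand-in enum rather than HBase's actual `MemoryCompactionPolicy` class: normalize the configured string before the lookup.

```java
import java.util.Locale;

public class MemstorePolicyParse {
    // Stand-in for org.apache.hadoop.hbase.MemoryCompactionPolicy.
    enum MemoryCompactionPolicy { NONE, BASIC, EAGER, ADAPTIVE }

    // Accept "basic", "Basic", "BASIC", etc., instead of throwing
    // IllegalArgumentException on anything but the exact constant name.
    static MemoryCompactionPolicy parse(String configured) {
        return MemoryCompactionPolicy.valueOf(configured.trim().toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(parse("basic")); // BASIC
        System.out.println(parse("EAGER")); // EAGER
    }
}
```

`Locale.ROOT` keeps the uppercasing locale-independent, which matters for config strings (the classic example being the Turkish dotless i).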
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20884: -- Attachment: HBASE-20884.branch-1.002.patch > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.branch-1.002.patch, HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
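For clients migrating off the removed utility class, the JDK replacement named in the description is a drop-in for the common cases. A small sketch of the `java.util.Base64` round trip; the old-class calls mentioned in the comments are the typical iharder-style entry points being replaced, shown here as an assumption about typical usage:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] raw = "hbase:meta".getBytes(StandardCharsets.UTF_8);

        // Roughly replaces org.apache.hadoop.hbase.util.Base64.encodeBytes(raw)
        String encoded = Base64.getEncoder().encodeToString(raw);

        // Roughly replaces org.apache.hadoop.hbase.util.Base64.decode(encoded)
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded); // aGJhc2U6bWV0YQ==
        System.out.println(new String(decoded, StandardCharsets.UTF_8)); // hbase:meta
    }
}
```

`java.util.Base64` also offers `getUrlEncoder()` and `getMimeEncoder()` variants, so code that used the old class's URL-safe or line-wrapped modes has direct equivalents.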
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543622#comment-16543622 ] Mike Drob commented on HBASE-20884: --- Made an attempt at release noting this change. > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20672) Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at every monitoring interval
[ https://issues.apache.org/jira/browse/HBASE-20672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20672: --- Status: Patch Available (was: Reopened) > Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at > every monitoring interval > - > > Key: HBASE-20672 > URL: https://issues.apache.org/jira/browse/HBASE-20672 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Ankit Jain >Assignee: Ankit Jain >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20672.branch-1.001.patch, > HBASE-20672.branch-1.002.patch, HBASE-20672.branch-2.001.patch, > HBASE-20672.master.001.patch, HBASE-20672.master.002.patch, > HBASE-20672.master.003.patch, hits1vs2.4.40.400.png > > > HBase currently provides cumulative read/write request counters (ReadRequestCount, > WriteRequestCount). Since counters that reset only after a restart of the service are not easy to use, > we would like to expose 2 new metrics in > HBase to provide ReadRequestRate and WriteRequestRate at the region server level. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
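The rate metrics proposed here can be derived from the existing cumulative counters. A minimal sketch of that computation (class and method names are hypothetical, not HBase's metrics code): sample the counter once per monitoring interval and report the delta divided by elapsed seconds, so the reported value effectively resets every interval instead of growing for the life of the process.

```java
public class RequestRateSampler {
    private long lastCount;
    private long lastTimestampMs;

    RequestRateSampler(long initialCount, long nowMs) {
        this.lastCount = initialCount;
        this.lastTimestampMs = nowMs;
    }

    // Called once per monitoring interval with the current cumulative
    // counter value (e.g. readRequestCount); returns requests/second
    // over just that interval.
    double sample(long currentCount, long nowMs) {
        long deltaCount = currentCount - lastCount;
        double deltaSec = (nowMs - lastTimestampMs) / 1000.0;
        lastCount = currentCount;
        lastTimestampMs = nowMs;
        return deltaSec > 0 ? deltaCount / deltaSec : 0.0;
    }

    public static void main(String[] args) {
        RequestRateSampler reads = new RequestRateSampler(1000, 0);
        System.out.println(reads.sample(1500, 10_000)); // 50.0 req/s over 10s
        System.out.println(reads.sample(1500, 20_000)); // 0.0 over an idle interval
    }
}
```

One sampler per counter (reads, writes) at each region server would yield the two proposed metrics.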
[jira] [Updated] (HBASE-20672) Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at every monitoring interval
[ https://issues.apache.org/jira/browse/HBASE-20672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Jain updated HBASE-20672: --- Attachment: HBASE-20672.branch-1.002.patch > Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at > every monitoring interval > - > > Key: HBASE-20672 > URL: https://issues.apache.org/jira/browse/HBASE-20672 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Ankit Jain >Assignee: Ankit Jain >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20672.branch-1.001.patch, > HBASE-20672.branch-1.002.patch, HBASE-20672.branch-2.001.patch, > HBASE-20672.master.001.patch, HBASE-20672.master.002.patch, > HBASE-20672.master.003.patch, hits1vs2.4.40.400.png > > > Hbase currently provides counter read/write requests (ReadRequestCount, > WriteRequestCount). That said it is not easy to use counter that reset only > after a restart of the service, we would like to expose 2 new metrics in > HBase to provide ReadRequestRate and WriteRequestRate at region server level. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543610#comment-16543610 ] Hadoop QA commented on HBASE-20884: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. 
{color} | || || || || {color:brown} branch-1 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 26s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 51s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 8s{color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 9s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 2s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 57s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s{color} | {color:red} hbase-rest in the patch failed with JDK v1.8.0_172. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 19s{color} | {color:red} hbase-rest in the patch failed with JDK v1.8.0_172. 
{color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 21s{color} | {color:red} hbase-client in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 31s{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s{color} | {color:red} hbase-thrift in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s{color} | {color:red} hbase-rest in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} hbase-examples in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 21s{color} | {color:red} hbase-client in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 31s{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s{color} | {color:red} hbase-thrift in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s{color} | {color:red} hbase-rest in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 18s{color} | {color:red} hbase-examples in the patch failed with JDK v1.7.0_181. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s{color} | {color:red} hbase-common: The patch generated 1 new + 27 unchanged - 1 fixed = 28 total (was 28) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} The patch
[jira] [Updated] (HBASE-20672) Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at every monitoring interval
[ https://issues.apache.org/jira/browse/HBASE-20672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Jain updated HBASE-20672: --- Attachment: HBASE-20672.branch-2.001.patch > Create new HBase metrics ReadRequestRate and WriteRequestRate that reset at > every monitoring interval > - > > Key: HBASE-20672 > URL: https://issues.apache.org/jira/browse/HBASE-20672 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Ankit Jain >Assignee: Ankit Jain >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20672.branch-1.001.patch, > HBASE-20672.branch-2.001.patch, HBASE-20672.master.001.patch, > HBASE-20672.master.002.patch, HBASE-20672.master.003.patch, > hits1vs2.4.40.400.png > > > Hbase currently provides counter read/write requests (ReadRequestCount, > WriteRequestCount). That said it is not easy to use counter that reset only > after a restart of the service, we would like to expose 2 new metrics in > HBase to provide ReadRequestRate and WriteRequestRate at region server level. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20884: -- Hadoop Flags: Incompatible change, Reviewed Release Note: Class org.apache.hadoop.hbase.util.Base64 has been removed in its entirety from HBase 2+. In HBase 1, unused methods have been removed from the class and the audience was changed from Public to Private. This class was originally intended as an internal utility class from its inception, and should not have been advertised as public to end-users. This represents an incompatible change for users who relied on this implementation. An alternative implementation for affected clients is available at java.util.Base64 when using Java 8 or newer. For clients seeking to restore this specific implementation, it is available in the public domain for download at http://iharder.sourceforge.net/current/java/base64/ > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543567#comment-16543567 ] Andrew Purtell commented on HBASE-20884: +1 for the branch-1s > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543563#comment-16543563 ] Mike Drob commented on HBASE-20884: --- Pushed to branch-2.0+, going to wait for a precommit run on branch-1 patch before applying there as well. > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20884: -- Attachment: HBASE-20884.branch-1.001.patch > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.branch-1.001.patch, > HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20879) Compacting memstore config should handle lower case
[ https://issues.apache.org/jira/browse/HBASE-20879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543555#comment-16543555 ] Josh Elser commented on HBASE-20879: +1 easy enough. > Compacting memstore config should handle lower case > --- > > Key: HBASE-20879 > URL: https://issues.apache.org/jira/browse/HBASE-20879 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.1 >Reporter: Tushar >Assignee: Ted Yu >Priority: Major > Attachments: 20879.v2.txt > > > Tushar reported seeing the following in region server log when entering > 'basic' for compacting memstore type: > {code} > 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-0] > handler.OpenRegionHandler: Failed open of > region=usertable,user6379,1531182972304.69abd81a44e9cc3ef9e150709f4f69ab., > starting to roll back the global memstore size. > java.io.IOException: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hbase.MemoryCompactionPolicy.valueOf(MemoryCompactionPolicy.java:26) > at > org.apache.hadoop.hbase.regionserver.HStore.getMemstore(HStore.java:331) > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:271) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5531) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:999) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:996) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 3 more > 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-1] > handler.OpenRegionHandler: Failed open of > region=temp,,1530511278693.0be48eedc68b9358aa475946d00571f1., starting to > roll back the global memstore size. 
> java.io.IOException: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at >
[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543540#comment-16543540 ] Ted Yu commented on HBASE-15320: Attaching patch v16 from review board. > HBase connector for Kafka Connect > - > > Key: HBASE-15320 > URL: https://issues.apache.org/jira/browse/HBASE-15320 > Project: HBase > Issue Type: New Feature > Components: Replication >Reporter: Andrew Purtell >Assignee: Mike Wingert >Priority: Major > Labels: beginner > Fix For: 3.0.0 > > Attachments: 15320.master.16.patch, HBASE-15320.master.1.patch, > HBASE-15320.master.10.patch, HBASE-15320.master.11.patch, > HBASE-15320.master.12.patch, HBASE-15320.master.14.patch, > HBASE-15320.master.15.patch, HBASE-15320.master.2.patch, > HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, > HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, > HBASE-15320.master.7.patch, HBASE-15320.master.8.patch, > HBASE-15320.master.8.patch, HBASE-15320.master.9.patch, HBASE-15320.pdf, > HBASE-15320.pdf > > > Implement an HBase connector with source and sink tasks for the Connect > framework (http://docs.confluent.io/2.0.0/connect/index.html) available in > Kafka 0.9 and later. > See also: > http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines > An HBase source > (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task) > could be implemented as a replication endpoint or WALObserver, publishing > cluster wide change streams from the WAL to one or more topics, with > configurable mapping and partitioning of table changes to topics. > An HBase sink task > (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would > persist, with optional transformation (JSON? Avro?, map fields to native > schema?), Kafka SinkRecords into HBase tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-15320: --- Attachment: 15320.master.16.patch > HBase connector for Kafka Connect > - > > Key: HBASE-15320 > URL: https://issues.apache.org/jira/browse/HBASE-15320 > Project: HBase > Issue Type: New Feature > Components: Replication >Reporter: Andrew Purtell >Assignee: Mike Wingert >Priority: Major > Labels: beginner > Fix For: 3.0.0 > > Attachments: 15320.master.16.patch, HBASE-15320.master.1.patch, > HBASE-15320.master.10.patch, HBASE-15320.master.11.patch, > HBASE-15320.master.12.patch, HBASE-15320.master.14.patch, > HBASE-15320.master.15.patch, HBASE-15320.master.2.patch, > HBASE-15320.master.3.patch, HBASE-15320.master.4.patch, > HBASE-15320.master.5.patch, HBASE-15320.master.6.patch, > HBASE-15320.master.7.patch, HBASE-15320.master.8.patch, > HBASE-15320.master.8.patch, HBASE-15320.master.9.patch, HBASE-15320.pdf, > HBASE-15320.pdf > > > Implement an HBase connector with source and sink tasks for the Connect > framework (http://docs.confluent.io/2.0.0/connect/index.html) available in > Kafka 0.9 and later. > See also: > http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines > An HBase source > (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task) > could be implemented as a replication endpoint or WALObserver, publishing > cluster wide change streams from the WAL to one or more topics, with > configurable mapping and partitioning of table changes to topics. > An HBase sink task > (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would > persist, with optional transformation (JSON? Avro?, map fields to native > schema?), Kafka SinkRecords into HBase tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
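The "configurable mapping and partitioning of table changes to topics" part of the source-side design can be sketched independently of the Connect and HBase APIs. Everything below is hypothetical illustration, not code from the attached patches: a routing rule that maps an HBase table to a Kafka topic, falling back to a prefixed default when no explicit mapping exists.

```java
import java.util.HashMap;
import java.util.Map;

public class TopicRouter {
    private final Map<String, String> tableToTopic;
    private final String defaultTopicPrefix;

    TopicRouter(Map<String, String> tableToTopic, String defaultTopicPrefix) {
        this.tableToTopic = tableToTopic;
        this.defaultTopicPrefix = defaultTopicPrefix;
    }

    // WAL entries for `table` would be published to the returned topic.
    String topicFor(String table) {
        return tableToTopic.getOrDefault(table, defaultTopicPrefix + table);
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new HashMap<>();
        mapping.put("usertable", "hbase.users");
        TopicRouter router = new TopicRouter(mapping, "hbase.wal.");
        System.out.println(router.topicFor("usertable")); // hbase.users
        System.out.println(router.topicFor("orders"));    // hbase.wal.orders
    }
}
```

In a real source task this lookup would sit between the replication endpoint reading the WAL and the Connect framework's record emission; the mapping itself would come from connector configuration.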
[jira] [Comment Edited] (HBASE-20866) HBase 1.x scan performance degradation compared to 0.98 version
[ https://issues.apache.org/jira/browse/HBASE-20866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543498#comment-16543498 ] Andrew Purtell edited comment on HBASE-20866 at 7/13/18 5:41 PM: - +1 for this patch for branch-1, branch-1.3, and branch-1.4. After we commit it, let's resolve this JIRA as Fixed and open another JIRA for the forward port of these changes to branch-2 and up. As [~vik.karma] mentions the code is different so it may take some time, and there is no need to make users of branch-1 wait for the improvement in the meantime. was (Author: apurtell): +1 for this patch for branch-1, branch-1.3, and branch-1.4. After we commit it, let's close this JIRA and open another JIRA for the forward port of these changes to branch-2 and up. As [~vik.karma] mentions the code is different so it may take some time, and there is no need to make users of branch-1 wait for the improvement in the meantime. > HBase 1.x scan performance degradation compared to 0.98 version > --- > > Key: HBASE-20866 > URL: https://issues.apache.org/jira/browse/HBASE-20866 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vikas Vishwakarma >Assignee: Vikas Vishwakarma >Priority: Critical > Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6 > > Attachments: HBASE-20866.branch-1.3.001.patch, > HBASE-20866.branch-1.3.002.patch, HBASE-20866.branch-1.3.003.patch > > > Internally while testing 1.3 as part of migration from 0.98 to 1.3 we > observed perf degradation in scan performance for phoenix queries varying > from few 10's to upto 200% depending on the query being executed. 
We tried a > simple native HBase scan and there also we saw up to 40% degradation in > performance when the number of column qualifiers is high (40-50+). > To identify the root cause of the performance diff between 0.98 and 1.3 we > carried out a lot of experiments with profiling and git bisect iterations; > however, we were not able to identify any particular source of scan > performance degradation and it looked like this is an accumulated degradation > of 5-10% over various enhancements and refactoring. > We identified a few major enhancements like partialResult handling, > ScannerContext with heartbeat processing, time/size limiting, RPC > refactoring, etc. that could have contributed to small degradations in > performance which put together could be leading to a large overall degradation. > One of the changes is > [HBASE-11544|https://jira.apache.org/jira/browse/HBASE-11544] which > implements partialResult handling. In ClientScanner.java the results received > from the server are cached on the client side by converting the result array into > an ArrayList. This function gets called in a loop depending on the number of > rows in the scan result. For example, for tens of millions of rows scanned, this > can be called in the order of millions of times. > In almost all cases (99% of the time, except for handling partial results, > etc.) we are just taking the resultsFromServer, converting it into an ArrayList > resultsToAddToCache in addResultsToList(..) and then iterating over the list > again and adding it to the cache in loadCache(..) as given in the code path below: > In ClientScanner → loadCache(..) → getResultsToAddToCache(..) → > addResultsToList(..) → > {code:java} > loadCache() { > ... > List<Result> resultsToAddToCache = > getResultsToAddToCache(values, callable.isHeartbeatMessage()); > ... > … > for (Result rs : resultsToAddToCache) { > rs = filterLoadedCell(rs); > cache.add(rs); > ... > } > } > getResultsToAddToCache(..) { > .. 
> final boolean isBatchSet = scan != null && scan.getBatch() > 0; > final boolean allowPartials = scan != null && > scan.getAllowPartialResults(); > .. > if (allowPartials || isBatchSet) { > addResultsToList(resultsToAddToCache, resultsFromServer, 0, > (null == resultsFromServer ? 0 : resultsFromServer.length)); > return resultsToAddToCache; > } > ... > } > private void addResultsToList(List<Result> outputList, Result[] inputArray, > int start, int end) { > if (inputArray == null || start < 0 || end > inputArray.length) return; > for (int i = start; i < end; i++) { > outputList.add(inputArray[i]); > } > }{code} > > It looks like we can avoid the result array to ArrayList conversion > (resultsFromServer --> resultsToAddToCache) for the first case, which is also > the most frequent case, and instead directly take the values array returned > by the callable and add it to the cache without converting it into an ArrayList. > I have taken both these flags allowPartials and isBatchSet out in loadCache() > and I am directly adding values to the scanner cache if the above condition
[jira] [Commented] (HBASE-20866) HBase 1.x scan performance degradation compared to 0.98 version
[ https://issues.apache.org/jira/browse/HBASE-20866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543498#comment-16543498 ] Andrew Purtell commented on HBASE-20866: +1 for this patch for branch-1, branch-1.3, and branch-1.4. After we commit it, let's close this JIRA and open another JIRA for the forward port of these changes to branch-2 and up. As [~vik.karma] mentions the code is different so it may take some time, and there is no need to make users of branch-1 wait for the improvement in the meantime. > HBase 1.x scan performance degradation compared to 0.98 version > --- > > Key: HBASE-20866 > URL: https://issues.apache.org/jira/browse/HBASE-20866 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vikas Vishwakarma >Assignee: Vikas Vishwakarma >Priority: Critical > Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6 > > Attachments: HBASE-20866.branch-1.3.001.patch, > HBASE-20866.branch-1.3.002.patch, HBASE-20866.branch-1.3.003.patch > > > Internally while testing 1.3 as part of migration from 0.98 to 1.3 we > observed perf degradation in scan performance for phoenix queries varying > from few 10's to upto 200% depending on the query being executed. We tried > simple native HBase scan and there also we saw upto 40% degradation in > performance when the number of column qualifiers are high (40-50+) > To identify the root cause of performance diff between 0.98 and 1.3 we > carried out lot of experiments with profiling and git bisect iterations, > however we were not able to identify any particular source of scan > performance degradation and it looked like this is an accumulated degradation > of 5-10% over various enhancements and refactoring. > We identified few major enhancements like partialResult handling, > ScannerContext with heartbeat processing, time/size limiting, RPC > refactoring, etc that could have contributed to small degradation in > performance which put together could be leading to large overall degradation. 
> One of the changes is > [HBASE-11544|https://jira.apache.org/jira/browse/HBASE-11544] which > implements partialResult handling. In ClientScanner.java the results received > from the server are cached on the client side by converting the result array > into an ArrayList. This function gets called in a loop depending on the number > of rows in the scan result. For example, for tens of millions of rows scanned, > this can be called on the order of millions of times. > In almost all cases (99% of the time, except for handling partial results, > etc.) we are just taking resultsFromServer, converting it into an ArrayList > resultsToAddToCache in addResultsToList(..), and then iterating over the list > again and adding it to the cache in loadCache(..), as given in the code path > below: > In ClientScanner → loadCache(..) → getResultsToAddToCache(..) → > addResultsToList(..) → > {code:java} > loadCache() { > ... > List<Result> resultsToAddToCache = > getResultsToAddToCache(values, callable.isHeartbeatMessage()); > ... > for (Result rs : resultsToAddToCache) { > rs = filterLoadedCell(rs); > cache.add(rs); > ... > } > } > getResultsToAddToCache(..) { > .. > final boolean isBatchSet = scan != null && scan.getBatch() > 0; > final boolean allowPartials = scan != null && > scan.getAllowPartialResults(); > .. > if (allowPartials || isBatchSet) { > addResultsToList(resultsToAddToCache, resultsFromServer, 0, > (null == resultsFromServer ? 0 : resultsFromServer.length)); > return resultsToAddToCache; > } > ... 
> } > private void addResultsToList(List<Result> outputList, Result[] inputArray, > int start, int end) { > if (inputArray == null || start < 0 || end > inputArray.length) return; > for (int i = start; i < end; i++) { > outputList.add(inputArray[i]); > } > }{code} > > It looks like we can avoid the result array to ArrayList conversion > (resultsFromServer --> resultsToAddToCache) for the first case, which is also > the most frequent case, and instead directly take the values array returned > by the callable and add it to the cache without converting it into an > ArrayList. > I have taken both these flags, allowPartials and isBatchSet, out into > loadCache() and I am directly adding values to the scanner cache if the above > condition passes, instead of converting it into an ArrayList by calling > getResultsToAddToCache(). For example: > {code:java} > protected void loadCache() throws IOException { > Result[] values = null; > .. > final boolean isBatchSet = scan != null && scan.getBatch() > 0; > final boolean allowPartials = scan != null && scan.getAllowPartialResults(); > .. > for (;;) { > try { > values = call(callable, caller, scannerTimeout); > .. > } catch
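The optimization described above — skipping the intermediate ArrayList when neither partial results nor batching is in play — can be sketched with plain collections. This is only an illustrative sketch; the class and method names below are hypothetical stand-ins, not the actual ClientScanner API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

public class ScanCacheSketch {
    // Old path: copy the server results into a temporary list, then drain it into the cache.
    static void loadCacheWithCopy(Queue<String> cache, String[] resultsFromServer) {
        if (resultsFromServer == null) return;
        List<String> resultsToAddToCache = new ArrayList<>(Arrays.asList(resultsFromServer));
        for (String r : resultsToAddToCache) {
            cache.add(r);
        }
    }

    // Proposed path: add the returned array directly to the cache, no intermediate list.
    static void loadCacheDirect(Queue<String> cache, String[] resultsFromServer) {
        if (resultsFromServer == null) return;
        for (String r : resultsFromServer) {
            cache.add(r);
        }
    }

    public static void main(String[] args) {
        String[] values = {"row1", "row2", "row3"};
        Queue<String> a = new ArrayDeque<>();
        Queue<String> b = new ArrayDeque<>();
        loadCacheWithCopy(a, values);
        loadCacheDirect(b, values);
        // Both paths cache the same rows; the direct path saves one allocation and
        // one extra iteration per RPC, which adds up over millions of calls.
        System.out.println(a.size() + " " + b.size());
    }
}
```

The saving per call is tiny, which is consistent with the "accumulated 5-10% degradations" observation: it only shows up when the loop runs millions of times.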
[jira] [Commented] (HBASE-16636) Regions in Transition counts wrong (zero) in HMaster /jmx, prevents detecting Regions Stuck in Transition
[ https://issues.apache.org/jira/browse/HBASE-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543495#comment-16543495 ] Andrew Purtell commented on HBASE-16636: If we need more fixes for RIT metrics would you be interested in supplying a patch for this problem [~harisekhon]? > Regions in Transition counts wrong (zero) in HMaster /jmx, prevents detecting > Regions Stuck in Transition > - > > Key: HBASE-16636 > URL: https://issues.apache.org/jira/browse/HBASE-16636 > Project: HBase > Issue Type: Bug > Components: UI >Affects Versions: 1.1.2 > Environment: HDP 2.3.2 >Reporter: Hari Sekhon >Priority: Major > Attachments: Regions_in_Transition_UI.png, ritCountOverThreshold.png > > > I've discovered that the Region in Transition counts are wrong in the HMaster > UI /jmx page. > The /master-status page clearly shows 3 regions stuck in transition but the > /jmx page I was monitoring reported 0 for ritCountOverThreshold. > {code} > }, { > "name" : "Hadoop:service=HBase,name=Master,sub=AssignmentManger", > "modelerType" : "Master,sub=AssignmentManger", > "tag.Context" : "master", > ... > "ritOldestAge" : 0, > "ritCountOverThreshold" : 0, > ... > "ritCount" : 0, > {code} > I have a nagios plugin I wrote which was checking this which I've since had > to rewrite to parse the /master-status page instead (the code is in > check_hbase_regions_stuck_in_transition.py at > https://github.com/harisekhon/nagios-plugins). > I'm attaching screenshots of both /master-status and /jmx to show the > difference in the 2 pages on the HMaster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
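A monitoring check against the /jmx endpoint ultimately pulls one numeric field out of the JSON bean shown above. As a rough sketch of what such a check reads (a real plugin would fetch the URL and use a proper JSON parser; the class, method, and sample payload here are hypothetical):

```java
public class RitMetricSketch {
    // Naive extraction of a numeric attribute from a /jmx JSON payload.
    // Illustrates only what a monitoring check reads, and why a stale 0 here
    // can disagree with the regions-in-transition table on /master-status.
    static long readMetric(String json, String name) {
        String key = "\"" + name + "\" : ";
        int i = json.indexOf(key);
        if (i < 0) throw new IllegalArgumentException("metric not found: " + name);
        int start = i + key.length();
        int end = start;
        while (end < json.length() && Character.isDigit(json.charAt(end))) end++;
        return Long.parseLong(json.substring(start, end));
    }

    public static void main(String[] args) {
        // Sample bean fragment shaped like the one quoted in the issue.
        String sample = "{ \"ritOldestAge\" : 0, \"ritCountOverThreshold\" : 0, \"ritCount\" : 0 }";
        System.out.println(readMetric(sample, "ritCountOverThreshold"));
    }
}
```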
[jira] [Comment Edited] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543489#comment-16543489 ] Andrew Purtell edited comment on HBASE-20884 at 7/13/18 5:34 PM: - +1 Regarding branch-1, despite the annotation, this is a utility class meant for internal project use. Remove the unused methods in Base64 on branch-1, change its annotation to Private, and note the change with a release note. Do it for all branch-1 releases. Nobody should be using this class in their application. It shouldn't be public. Yes this breaks our compatibility guidelines but we shouldn't allow policy to straitjacket best judgement about code hygiene and potential safety risks. This class isn't maintained like a public library. I would accept and commit a patch that does this. was (Author: apurtell): +1 Regarding branch-1, despite the annotation, this is a utility class meant for internal project use. Remove the unused methods in Base64 on branch-1, change its annotation to Private, and note the change with a release note. Do it for all branch-1 releases. Nobody should be using this class in their application. It shouldn't be public. Yes this breaks our compatibility guidelines but we shouldn't allow policy to straitjacket best judgement about code hygiene and potential safety risks. This class isn't maintained like a public library. > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. 
> * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
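For callers migrating off the internal class, the j.u.Base64 replacement is mechanical; a short sketch of the standard-library calls that cover typical encode/decode usage (the internal method names mentioned in comments are approximate, not an exact mapping):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] raw = "hello hbase".getBytes(StandardCharsets.UTF_8);

        // Replaces encodeBytes(byte[])-style calls on the internal class.
        String encoded = Base64.getEncoder().encodeToString(raw);

        // Replaces decode(String)-style calls on the internal class.
        byte[] decoded = Base64.getDecoder().decode(encoded);

        // URL-safe variant, for encoded values that end up in file names or URLs.
        String urlSafe = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);

        System.out.println(encoded);   // aGVsbG8gaGJhc2U=
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
        System.out.println(urlSafe);
    }
}
```

java.util.Base64 has been available since Java 8, so branch-2 and master (which require Java 8) can switch without adding any dependency.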
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543489#comment-16543489 ] Andrew Purtell commented on HBASE-20884: +1 Regarding branch-1, despite the annotation, this is a dumb utility class meant for internal project use. Remove the unused methods in Base64 on branch-1, change its annotation to Private, and note the change with a release note. Do it for all branch-1 releases. Nobody should be using this class in their application. It shouldn't be public. > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20856) PITA having to set WAL provider in two places
[ https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543472#comment-16543472 ] Tak Lon (Stephen) Wu commented on HBASE-20856: -- I was trying to work on this issue and related code in [WALFactory|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java#L253] (master branch and branch-2) and found that the default is AsyncFSWALProvider if {{hbase.wal.meta_provider}} is not set. So, may I ask what `Operator` means here? > PITA having to set WAL provider in two places > - > > Key: HBASE-20856 > URL: https://issues.apache.org/jira/browse/HBASE-20856 > Project: HBase > Issue Type: Improvement > Components: Operability, wal >Reporter: stack >Priority: Minor > Fix For: 2.0.2, 2.2.0, 2.1.1 > > > Courtesy of [~elserj], I learn that to change the WAL we need to set two > places... both hbase.wal.meta_provider and hbase.wal.provider. The operator > should only have to set it in one place; hbase.wal.meta_provider should pick > up the general setting unless hbase.wal.meta_provider is explicitly set. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
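Until the fallback described above lands, switching WAL implementations means setting both keys explicitly. A hypothetical hbase-site.xml fragment illustrating the current two-place setup (the property names come from the issue; the provider value "filesystem" is just one example choice):

```xml
<!-- hbase-site.xml: today both keys must be set to switch WAL implementations. -->
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
<property>
  <name>hbase.wal.meta_provider</name>
  <value>filesystem</value>
</property>
<!-- After this issue, hbase.wal.meta_provider would default to hbase.wal.provider. -->
```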
[jira] [Commented] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543392#comment-16543392 ] Hadoop QA commented on HBASE-20884: --- (x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
Prechecks
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
master Compile Tests
| 0 | mvndep | 0m 23s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 40s | master passed |
| +1 | compile | 3m 6s | master passed |
| +1 | checkstyle | 2m 13s | master passed |
| +1 | shadedjars | 4m 29s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 4m 9s | master passed |
| +1 | javadoc | 1m 45s | master passed |
Patch Compile Tests
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 43s | the patch passed |
| +1 | compile | 3m 4s | the patch passed |
| -1 | javac | 0m 30s | hbase-mapreduce generated 1 new + 157 unchanged - 2 fixed = 158 total (was 159) |
| +1 | checkstyle | 0m 22s | hbase-common: The patch generated 0 new + 0 unchanged - 40 fixed = 0 total (was 40) |
| +1 | checkstyle | 0m 30s | The patch hbase-client passed checkstyle |
| +1 | checkstyle | 0m 19s | hbase-mapreduce: The patch generated 0 new + 115 unchanged - 1 fixed = 115 total (was 116) |
| +1 | checkstyle | 0m 29s | The patch hbase-thrift passed checkstyle |
| -1 | checkstyle | 0m 17s | hbase-rest: The patch generated 8 new + 116 unchanged - 8 fixed = 124 total (was 124) |
| +1 | checkstyle | 0m 13s | The patch hbase-examples passed checkstyle |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 40s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 10m 42s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| +1 | findbugs | 5m 14s | the patch passed |
| +1 | javadoc | 1m 57s | the patch passed |
Other Tests
| +1 | unit | 2m 26s | hbase-common in the patch passed. |
| +1 | unit | 2m 54s | hbase-client in the patch passed. |
| +1 | unit | 12m 11s |
[jira] [Commented] (HBASE-20460) Doc offheap write-path
[ https://issues.apache.org/jira/browse/HBASE-20460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543389#comment-16543389 ] Josh Elser commented on HBASE-20460: [~anoop.hbase], I like the content you've put together here. Any chance I can copy-edit this and convert this into a patch for the book? > Doc offheap write-path > -- > > Key: HBASE-20460 > URL: https://issues.apache.org/jira/browse/HBASE-20460 > Project: HBase > Issue Type: Bug > Components: documentation, Offheaping >Reporter: stack >Priority: Critical > Fix For: 2.2.0 > > > We have an empty section in refguide that needs filling in on how to enable > offheap write-path, how to know you've set it up right or not, how to tune > it, and how it relates to direct memory allocation and offheap read-path. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20869) Endpoint-based Export use incorrect user to write to destination
[ https://issues.apache.org/jira/browse/HBASE-20869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543378#comment-16543378 ] Ted Yu commented on HBASE-20869: lgtm > Endpoint-based Export use incorrect user to write to destination > > > Key: HBASE-20869 > URL: https://issues.apache.org/jira/browse/HBASE-20869 > Project: HBase > Issue Type: Bug > Components: Coprocessors >Affects Versions: 2.0.0 > Environment: Hadoop 3.0.0 + HBase 2.0.0, Kerberos. >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-20869.master.001.patch, > HBASE-20869.master.002.patch > > > HBASE-15806 implemented an endpoint based export. It gets the caller's HDFS > delegation token, and the RegionServer is supposed to write out exported > files as the caller. > Everything works fine if you run the export as the hbase user. However, once > you use a different user to export, it fails. > To reproduce, > Add to configuration key hbase.coprocessor.region.classes the coprocessor > class org.apache.hadoop.hbase.coprocessor.Export. 
> create a table t1, assign permission to a user foo: > > {noformat} > hbase(main):004:0> user_permission 't1' > User Namespace,Table,Family,Qualifier:Permission > hbase default,t1,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN] > foo default,t1,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]{noformat} > > As user foo, execute the following command: > > {noformat} > $ hdfs dfs -mkdir /tmp/export_hbase2 > $ hbase org.apache.hadoop.hbase.coprocessor.Export t1 /tmp/export_hbase2/t2/ > > 18/07/10 14:03:59 INFO client.RpcRetryingCallerImpl: Call exception, tries=6, > retries=6, started=4457 ms ago, cancelled=false, > msg=org.apache.hadoop.security.AccessControlException: Permission denied: > user=hbase, access=WRITE, > inode="/tmp/export_hbase2/t2":foo:supergroup:drwxr-xr-x > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1846) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1830) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1789) > at > org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:316) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2411) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2343) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:764) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:451) > at > 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) > at sun.reflect.GeneratedConstructorAccessor25.newInstance(Unknown Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) > at > org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:278) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1195) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1174) > at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1112) > at > org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:462) > at > org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:459) > at >
[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543356#comment-16543356 ] Sean Busbey commented on HBASE-20649: - All the failed tests are timeouts. I'll try rerunning precommit since it's not clear to me how this patchset could impact those jobs. the docs change in v6 works well enough for me. If anyone else would like to see more please give a shout. > Validate HFiles do not have PREFIX_TREE DataBlockEncoding > - > > Key: HBASE-20649 > URL: https://issues.apache.org/jira/browse/HBASE-20649 > Project: HBase > Issue Type: New Feature >Reporter: Peter Somogyi >Assignee: Balazs Meszaros >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20649.master.001.patch, > HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, > HBASE-20649.master.004.patch, HBASE-20649.master.005.patch, > HBASE-20649.master.006.patch > > > HBASE-20592 adds a tool to check column families on the cluster do not have > PREFIX_TREE encoding. > Since it is possible that DataBlockEncoding was already changed but HFiles > are not rewritten yet we would need a tool that can verify the content of > hfiles in the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20865) CreateTableProcedure is stuck in retry loop in CREATE_TABLE_WRITE_FS_LAYOUT state
[ https://issues.apache.org/jira/browse/HBASE-20865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20865: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 2.0.2) Status: Resolved (was: Patch Available) Thanks for the patch, Toshihiro Thanks for the review, Duo > CreateTableProcedure is stuck in retry loop in CREATE_TABLE_WRITE_FS_LAYOUT > state > - > > Key: HBASE-20865 > URL: https://issues.apache.org/jira/browse/HBASE-20865 > Project: HBase > Issue Type: Bug > Components: amv2 >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.1.1 > > Attachments: HBASE-20865.master.001.patch > > > Similar to HBASE-20616, CreateTableProcedure gets stuck in retry loop in > CREATE_TABLE_WRITE_FS_LAYOUT state when writing HDFS fails. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543292#comment-16543292 ] Hadoop QA commented on HBASE-20649: --- (x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
Prechecks
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
master Compile Tests
| 0 | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 32s | master passed |
| +1 | compile | 7m 4s | master passed |
| +1 | checkstyle | 1m 58s | master passed |
| 0 | refguide | 4m 54s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. |
| +1 | shadedjars | 4m 23s | branch has no errors when building our shaded downstream artifacts. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 2m 0s | master passed |
| +1 | javadoc | 2m 56s | master passed |
Patch Compile Tests
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 23s | the patch passed |
| +1 | compile | 6m 42s | the patch passed |
| +1 | javac | 6m 42s | the patch passed |
| +1 | checkstyle | 1m 58s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | refguide | 4m 33s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. |
| +1 | shadedjars | 4m 5s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 9m 26s | Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 2m 17s | the patch passed |
| +1 | javadoc | 3m 10s | the patch passed |
Other Tests
| -1 | unit | 230m 42s | root in the patch failed. |
| +1 | asflicense | 0m 52s | The patch does not generate ASF License warnings. |
| | | 297m 17s | |

|| Reason || Tests ||
| Failed junit tests |
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20884: -- Assignee: Mike Drob Status: Patch Available (was: Open) > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Assignee: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
[ https://issues.apache.org/jira/browse/HBASE-20884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-20884: -- Attachment: HBASE-20884.master.001.patch > Replace usage of our Base64 implementation with java.util.Base64 > > > Key: HBASE-20884 > URL: https://issues.apache.org/jira/browse/HBASE-20884 > Project: HBase > Issue Type: Task >Reporter: Mike Drob >Priority: Major > Fix For: 3.0.0 > > Attachments: HBASE-20884.master.001.patch > > > We have a public domain implementation of Base64 that is copied into our code > base and infrequently receives updates. We should replace usage of that with > the new Java 8 java.util.Base64 where possible. > For the migration, I propose a phased approach. > * Deprecate on 1.x and 2.x to signal to users that this is going away. > * Replace usages on branch-2 and master with j.u.Base64 > * Delete our implementation of Base64 on master. > Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20884) Replace usage of our Base64 implementation with java.util.Base64
Mike Drob created HBASE-20884: - Summary: Replace usage of our Base64 implementation with java.util.Base64 Key: HBASE-20884 URL: https://issues.apache.org/jira/browse/HBASE-20884 Project: HBase Issue Type: Task Reporter: Mike Drob Fix For: 3.0.0 We have a public domain implementation of Base64 that is copied into our code base and infrequently receives updates. We should replace usage of that with the new Java 8 java.util.Base64 where possible. For the migration, I propose a phased approach. * Deprecate on 1.x and 2.x to signal to users that this is going away. * Replace usages on branch-2 and master with j.u.Base64 * Delete our implementation of Base64 on master. Does this seem in line with our API compatibility requirements? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
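The replacement proposed above is the standard JDK API. A minimal sketch of the round-trip that call sites of the copied implementation would migrate to:

```java
import java.util.Base64;

// Minimal sketch of the proposed migration: the same encode/decode
// round-trip through java.util.Base64 (available since Java 8).
public class Base64Migration {
    public static String encode(byte[] data) {
        return Base64.getEncoder().encodeToString(data);
    }

    public static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        // Round-trip a sample string through the JDK codec.
        System.out.println(encode("HBase".getBytes())); // prints SEJhc2U=
    }
}
```

`java.util.Base64` also offers URL-safe and MIME variants (`getUrlEncoder()`, `getMimeEncoder()`), which may matter when auditing individual call sites during the migration.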
[jira] [Commented] (HBASE-19572) RegionMover should use the configured default port number and not the one from HConstants
[ https://issues.apache.org/jira/browse/HBASE-19572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543182#comment-16543182 ] Hudson commented on HBASE-19572: Results for branch master [build #395 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/395/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/395//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/395//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/395//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionMover should use the configured default port number and not the one > from HConstants > - > > Key: HBASE-19572 > URL: https://issues.apache.org/jira/browse/HBASE-19572 > Project: HBase > Issue Type: Bug >Reporter: Esteban Gutierrez >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.1.1 > > Attachments: HBASE-19572.master.001.patch, > HBASE-19572.master.001.patch, HBASE-19572.master.003.patch, > HBASE-19572.master.004.patch, HBASE-19572.master.004.patch, > HBASE-19572.master.005.patch > > > The issue I ran into in HBASE-19499 was due to RegionMover not using the port > configured in {{hbase-site.xml}}. The tool should use the value from the > configuration before falling back to the hardcoded value > {{HConstants.DEFAULT_REGIONSERVER_PORT}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
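The fix described above is a configured-value-first lookup with a constant fallback. A sketch of the idea, using `java.util.Properties` as a stand-in for Hadoop's `Configuration` (an assumption made to keep the example self-contained; the real code would call `conf.getInt("hbase.regionserver.port", HConstants.DEFAULT_REGIONSERVER_PORT)`):

```java
import java.util.Properties;

// Sketch: prefer the port from configuration, fall back to the hardcoded
// default only when no value is configured. Properties stands in for
// Hadoop's Configuration here.
public class RegionServerPort {
    // Matches HConstants.DEFAULT_REGIONSERVER_PORT in HBase 1.0+.
    static final int DEFAULT_REGIONSERVER_PORT = 16020;

    public static int resolvePort(Properties conf) {
        String configured = conf.getProperty("hbase.regionserver.port");
        return configured != null
                ? Integer.parseInt(configured)
                : DEFAULT_REGIONSERVER_PORT;
    }
}
```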
[jira] [Commented] (HBASE-20880) Fix for warning It would fail on the following input in hbase-spark
[ https://issues.apache.org/jira/browse/HBASE-20880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543181#comment-16543181 ] Hudson commented on HBASE-20880: Results for branch master [build #395 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/395/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/395//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/395//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/395//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Fix for warning It would fail on the following input in hbase-spark > --- > > Key: HBASE-20880 > URL: https://issues.apache.org/jira/browse/HBASE-20880 > Project: HBase > Issue Type: Bug > Environment: {code:java} > Maven home: /opt/apache-maven-3.5.3 > Java version: 1.8.0_172, vendor: Oracle Corporation > Java home: > /Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home/jre > Default locale: en_US, platform encoding: UTF-8 > OS name: "mac os x", version: "10.13.5", arch: "x86_64", family: "mac"{code} > last commit: 3fc23fe930aa93e8755cf2bd478bd9907f719fd2 >Reporter: Artem Ervits >Assignee: Artem Ervits >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20880.v01.patch > > > compiling hbase-spark module returns a warning > {code:java} > [WARNING] > /.../hbase/hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/TableOutputFormatSuite.scala:117: > warning: match may not be exhaustive. 
> [WARNING] It would fail on the following input: Failure((x: Throwable forSome > x not in Exception)) > [WARNING] Try { > [WARNING] ^ > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
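The Scala warning above says the `match` on `Try` only handles `Exception`, while `Failure` can carry any `Throwable` (including `Error`s). The same gap rendered in Java terms, as an illustrative sketch:

```java
// The compiler warning boils down to: a handler for Exception alone misses
// Throwables that are not Exceptions (i.e. Errors). Which handler sees what:
public class CatchGap {
    public static String handle(Throwable t) {
        try {
            throw t; // precise rethrow: both branches below cover Throwable
        } catch (Exception e) {
            return "exception-handler";
        } catch (Throwable rest) {
            return "throwable-handler"; // e.g. StackOverflowError lands here
        }
    }
}
```

The usual fix on the Scala side is to match `Failure(t: Throwable)` (or add a wildcard case) rather than only `Failure(e: Exception)`.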
[jira] [Comment Edited] (HBASE-14644) Region in transition metric is broken
[ https://issues.apache.org/jira/browse/HBASE-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543016#comment-16543016 ] Hari Sekhon edited comment on HBASE-14644 at 7/13/18 12:51 PM: --- Did anyone check if ritCountOverThreshold was actually fixed, and not just ritCount as I definitely saw ritCountOverThreshold was showing zero while HMaster UI showed regions stuck in transition: See https://issues.apache.org/jira/browse/HBASE-16636? was (Author: harisekhon): Did anyone check under the scenario of having regions stuck in transition if ritCountOverThreshold was actually fixed as documented in https://issues.apache.org/jira/browse/HBASE-16636? > Region in transition metric is broken > - > > Key: HBASE-14644 > URL: https://issues.apache.org/jira/browse/HBASE-14644 > Project: HBase > Issue Type: Bug >Reporter: Elliott Clark >Assignee: huaxiang sun >Priority: Major > Fix For: 1.3.0, 1.2.2, 2.0.0 > > Attachments: HBASE-14644-v001.patch, HBASE-14644-v002-addendum.patch, > HBASE-14644-v002.patch, HBASE-14644-v002.patch, branch-1.diff > > > ritCount stays 0 no matter what -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-14644) Region in transition metric is broken
[ https://issues.apache.org/jira/browse/HBASE-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543016#comment-16543016 ] Hari Sekhon commented on HBASE-14644: - Did anyone check under the scenario of having regions stuck in transition if ritCountOverThreshold was actually fixed as documented in https://issues.apache.org/jira/browse/HBASE-16636? > Region in transition metric is broken > - > > Key: HBASE-14644 > URL: https://issues.apache.org/jira/browse/HBASE-14644 > Project: HBase > Issue Type: Bug >Reporter: Elliott Clark >Assignee: huaxiang sun >Priority: Major > Fix For: 1.3.0, 1.2.2, 2.0.0 > > Attachments: HBASE-14644-v001.patch, HBASE-14644-v002-addendum.patch, > HBASE-14644-v002.patch, HBASE-14644-v002.patch, branch-1.diff > > > ritCount stays 0 no matter what -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542919#comment-16542919 ] Guangxu Cheng commented on HBASE-20883: --- Duplicate of HBASE-20626? > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster currently shows Requests Per Second per RegionServer under HMaster > UI's /master-status page -> Region Servers -> Base Stats section in the Web > UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. > For now I've written a Python tool to calculate this info from RegionServers > JMX read/write/total request counts but since HMaster is collecting this info > anyway it shouldn't be a big change to improve it to also show Reads / Writes > Per Sec as well as Total. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also Per Region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
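The per-second figures the reporter computes externally come from cumulative JMX counters (the RegionServer read/write request counts), so a rate requires two samples taken a known interval apart. A sketch of that calculation, with the sample numbers taken from the issue's own example:

```java
// Sketch: per-second rate from two samples of a cumulative counter, as a
// tool polling RegionServer JMX read/write request counts would compute it.
public class RequestRates {
    // earlierCount and laterCount are cumulative; intervalSec is the gap
    // between the two samples in seconds.
    public static double perSec(long earlierCount, long laterCount, long intervalSec) {
        return (laterCount - earlierCount) / (double) intervalSec;
    }

    public static void main(String[] args) {
        // Reads went from 1,200,000 to 1,500,000 over 10s -> 30000 reads/sec,
        // the magnitude cited in the issue description.
        System.out.println(perSec(1_200_000L, 1_500_000L, 10)); // prints 30000.0
    }
}
```

Surfacing the same split in the HMaster UI would avoid the sampling step entirely, which is the improvement the issue requests.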
[jira] [Updated] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Description: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers JMX read/write/total request counts but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec as well as Total. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also Per Region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. was: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. 
For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers JMX read/write/total request counts but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec as well as Total. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster currently shows Requests Per Second per RegionServer under HMaster > UI's /master-status page -> Region Servers -> Base Stats section in the Web > UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. 
> For now I've written a Python tool to calculate this info from RegionServers > JMX read/write/total request counts but since HMaster is collecting this info > anyway it shouldn't be a big change to improve it to also show Reads / Writes > Per Sec as well as Total. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also Per Region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Description: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers JMX read/write/total request counts but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec as well as Total. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. was: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. 
For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers JMX read/write/total request counts but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster currently shows Requests Per Second per RegionServer under HMaster > UI's /master-status page -> Region Servers -> Base Stats section in the Web > UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. 
> For now I've written a Python tool to calculate this info from RegionServers > JMX read/write/total request counts but since HMaster is collecting this info > anyway it shouldn't be a big change to improve it to also show Reads / Writes > Per Sec as well as Total. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Description: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers JMX read/write/total request counts but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. was: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. 
For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster currently shows Requests Per Second per RegionServer under HMaster > UI's /master-status page -> Region Servers -> Base Stats section in the Web > UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. 
> For now I've written a Python tool to calculate this info from RegionServers > JMX read/write/total request counts but since HMaster is collecting this info > anyway it shouldn't be a big change to improve it to also show Reads / Writes > Per Sec. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Description: HMaster currently shows Requests Per Second per RegionServer under HMaster UI's /master-status page -> Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. was: HMaster UI currently shows Requests Per Second per RegionServer under /master-status Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. 
For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster currently shows Requests Per Second per RegionServer under HMaster > UI's /master-status page -> Region Servers -> Base Stats section in the Web > UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. > For now I've written a Python tool to calculate this info from RegionServers > but since HMaster is collecting this info anyway it shouldn't be a big change > to improve it to also show Reads / Writes Per Sec. 
> Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster Read+Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Summary: HMaster Read+Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec (was: HMaster UI/JMX Read+Write Requests per sec across RegionServers) > HMaster Read+Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > --- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster UI currently shows Requests Per Second per RegionServer under > /mater-status Region Servers -> Base Stats section in the Web UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. > For now I've written a Python tool to calculate this info from RegionServers > but since HMaster is collecting this info anyway it shouldn't be a big change > to improve it to also show Reads / Writes Per Sec. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Summary: HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec (was: HMaster Read+Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec) > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster UI currently shows Requests Per Second per RegionServer under > /mater-status Region Servers -> Base Stats section in the Web UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. > For now I've written a Python tool to calculate this info from RegionServers > but since HMaster is collecting this info anyway it shouldn't be a big change > to improve it to also show Reads / Writes Per Sec. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster Read / Write Requests Per Sec across RegionServers, currently only Total Requests Per Sec
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Description: HMaster UI currently shows Requests Per Second per RegionServer under /master-status Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. was: HMaster UI currently shows Requests Per Second per RegionServer under /mater-status Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. 
For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. > HMaster Read / Write Requests Per Sec across RegionServers, currently only > Total Requests Per Sec > -- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster UI currently shows Requests Per Second per RegionServer under > /master-status Region Servers -> Base Stats section in the Web UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. > For now I've written a Python tool to calculate this info from RegionServers > but since HMaster is collecting this info anyway it shouldn't be a big change > to improve it to also show Reads / Writes Per Sec. 
> Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-20883) HMaster UI Read+Write Requests per sec across RegionServers
Hari Sekhon created HBASE-20883: --- Summary: HMaster UI Read+Write Requests per sec across RegionServers Key: HBASE-20883 URL: https://issues.apache.org/jira/browse/HBASE-20883 Project: HBase Issue Type: Improvement Components: Admin, master, metrics, monitoring, UI, Usability Affects Versions: 1.1.2 Reporter: Hari Sekhon HMaster UI currently shows Requests Per Second per RegionServer under /master-status Region Servers -> Base Stats section in the Web UI. Please add Reads Per Second and Writes Per Second per RegionServer alongside this in the HMaster UI, and also expose the Read/Write/Total requests per sec information in the HMaster JMX API. This will make it easier to find read or write hotspotting on HBase as a combined total will minimize and mask differences between RegionServers. For example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, so write skew will be masked as it won't show enough significant difference in the much larger combined Total Requests Per Second stat. For now I've written a Python tool to calculate this info from RegionServers but since HMaster is collecting this info anyway it shouldn't be a big change to improve it to also show Reads / Writes Per Sec. Find my tools for more granular Read/Write Requests Per Sec Per Regionserver and also per region at my [PyTools github repo|https://github.com/harisekhon/pytools] along with a selection of other HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20883) HMaster UI/JMX Read+Write Requests per sec across RegionServers
[ https://issues.apache.org/jira/browse/HBASE-20883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sekhon updated HBASE-20883: Summary: HMaster UI/JMX Read+Write Requests per sec across RegionServers (was: HMaster UI Read+Write Requests per sec across RegionServers) > HMaster UI/JMX Read+Write Requests per sec across RegionServers > --- > > Key: HBASE-20883 > URL: https://issues.apache.org/jira/browse/HBASE-20883 > Project: HBase > Issue Type: Improvement > Components: Admin, master, metrics, monitoring, UI, Usability >Affects Versions: 1.1.2 >Reporter: Hari Sekhon >Priority: Major > > HMaster UI currently shows Requests Per Second per RegionServer under > /master-status Region Servers -> Base Stats section in the Web UI. > Please add Reads Per Second and Writes Per Second per RegionServer alongside > this in the HMaster UI, and also expose the Read/Write/Total requests per sec > information in the HMaster JMX API. > This will make it easier to find read or write hotspotting on HBase as a > combined total will minimize and mask differences between RegionServers. For > example, we do 30,000 reads/sec but only 900 writes/sec to each RegionServer, > so write skew will be masked as it won't show enough significant difference > in the much larger combined Total Requests Per Second stat. > For now I've written a Python tool to calculate this info from RegionServers > but since HMaster is collecting this info anyway it shouldn't be a big change > to improve it to also show Reads / Writes Per Sec. > Find my tools for more granular Read/Write Requests Per Sec Per Regionserver > and also per region at my [PyTools github > repo|https://github.com/harisekhon/pytools] along with a selection of other > HBase tools I've used for performance debugging over the years. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
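The per-RegionServer Read/Write figures requested above can be derived from two samples of the cumulative request counters each RegionServer already exposes (e.g. readRequestsCount / writeRequestsCount over JMX). A minimal sketch of that rate computation; the metric names and the sample values below are illustrative assumptions, not part of this issue:

```java
// Sketch: per-second rates from two samples of a cumulative counter,
// the same arithmetic HMaster would need to surface Reads/Writes Per Sec.
public class RequestRates {

    /** Per-second rate from two cumulative counter samples taken intervalSeconds apart. */
    public static double ratePerSec(long earlierCount, long laterCount, double intervalSeconds) {
        if (intervalSeconds <= 0) {
            throw new IllegalArgumentException("sampling interval must be positive");
        }
        return (laterCount - earlierCount) / intervalSeconds;
    }

    public static void main(String[] args) {
        // Hypothetical counter samples for one RegionServer, taken 10 seconds apart.
        long reads0 = 1_000_000L, reads1 = 1_300_000L;
        long writes0 = 50_000L, writes1 = 59_000L;
        System.out.printf("reads/sec: %.0f, writes/sec: %.0f%n",
                ratePerSec(reads0, reads1, 10),
                ratePerSec(writes0, writes1, 10));
    }
}
```

With 10-second sampling the numbers above reproduce the report's example of 30,000 reads/sec vs 900 writes/sec, a skew that a combined 30,900 requests/sec total would mask.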
[jira] [Commented] (HBASE-19572) RegionMover should use the configured default port number and not the one from HConstants
[ https://issues.apache.org/jira/browse/HBASE-19572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542889#comment-16542889 ] Hudson commented on HBASE-19572: Results for branch branch-2.1 [build #55 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/55/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/55//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/55//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/55//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionMover should use the configured default port number and not the one > from HConstants > - > > Key: HBASE-19572 > URL: https://issues.apache.org/jira/browse/HBASE-19572 > Project: HBase > Issue Type: Bug >Reporter: Esteban Gutierrez >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.1.1 > > Attachments: HBASE-19572.master.001.patch, > HBASE-19572.master.001.patch, HBASE-19572.master.003.patch, > HBASE-19572.master.004.patch, HBASE-19572.master.004.patch, > HBASE-19572.master.005.patch > > > The issue I ran into HBASE-19499 was due RegionMover not using the port used > by {{hbase-site.xml}}. The tool should use the value used in the > configuration before falling back to the hardcoded value > {{HConstants.DEFAULT_REGIONSERVER_PORT}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
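The fix described in the quoted issue is essentially a lookup-order change: read the port from the loaded configuration first and only then fall back to the hardcoded constant. A stand-alone sketch, not the actual patch; a plain Map stands in for Hadoop's Configuration, and the key name and default mirror HBase's hbase.regionserver.port / 16020 but should be treated as illustrative here:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "configuration first, constant second" port resolution.
public class RegionServerPort {
    // Stand-ins for values that really live in org.apache.hadoop.hbase.HConstants.
    static final String REGIONSERVER_PORT_KEY = "hbase.regionserver.port";
    static final int DEFAULT_REGIONSERVER_PORT = 16020;

    /** The Map plays the role of a Configuration loaded from hbase-site.xml. */
    public static int resolvePort(Map<String, String> conf) {
        String configured = conf.get(REGIONSERVER_PORT_KEY);
        return configured != null ? Integer.parseInt(configured) : DEFAULT_REGIONSERVER_PORT;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(resolvePort(conf));      // nothing configured: falls back to constant
        conf.put(REGIONSERVER_PORT_KEY, "16021");
        System.out.println(resolvePort(conf));      // hbase-site.xml value wins
    }
}
```

In the real tool the same effect comes from asking the Configuration for the key with the constant as the default, so a port set in hbase-site.xml always takes precedence.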
[jira] [Commented] (HBASE-20866) HBase 1.x scan performance degradation compared to 0.98 version
[ https://issues.apache.org/jira/browse/HBASE-20866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542870#comment-16542870 ] Ted Yu commented on HBASE-20866: Thanks for the new performance numbers. Have a nice weekend. > HBase 1.x scan performance degradation compared to 0.98 version > --- > > Key: HBASE-20866 > URL: https://issues.apache.org/jira/browse/HBASE-20866 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vikas Vishwakarma >Assignee: Vikas Vishwakarma >Priority: Critical > Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6 > > Attachments: HBASE-20866.branch-1.3.001.patch, > HBASE-20866.branch-1.3.002.patch, HBASE-20866.branch-1.3.003.patch > > > Internally while testing 1.3 as part of migration from 0.98 to 1.3 we > observed perf degradation in scan performance for phoenix queries varying > from a few tens of percent up to 200% depending on the query being executed. We tried a > simple native HBase scan and there we also saw up to 40% degradation in > performance when the number of column qualifiers is high (40-50+). > To identify the root cause of the performance diff between 0.98 and 1.3 we > carried out a lot of experiments with profiling and git bisect iterations; > however, we were not able to identify any particular source of scan > performance degradation and it looked like an accumulated degradation > of 5-10% over various enhancements and refactoring.
> We identified a few major enhancements like partialResult handling, > ScannerContext with heartbeat processing, time/size limiting, RPC > refactoring, etc. that could each have contributed a small degradation in > performance which put together could be leading to a large overall degradation. > One of the changes is > [HBASE-11544|https://jira.apache.org/jira/browse/HBASE-11544] which > implements partialResult handling. In ClientScanner.java the results received > from the server are cached on the client side by converting the result array into > an ArrayList. This function gets called in a loop depending on the number of > rows in the scan result. For example, for tens of millions of rows scanned, this > can be called on the order of millions of times. > In almost all cases (99% of the time, except for handling partial results, > etc.) we are just taking the resultsFromServer, converting it into an ArrayList > resultsToAddToCache in addResultsToList(..), and then iterating over the list > again and adding it to the cache in loadCache(..), as given in the code path below: > In ClientScanner → loadCache(..) → getResultsToAddToCache(..) → > addResultsToList(..) → > {code:java} > loadCache() { > ... > List<Result> resultsToAddToCache = > getResultsToAddToCache(values, callable.isHeartbeatMessage()); > ... > … > for (Result rs : resultsToAddToCache) { > rs = filterLoadedCell(rs); > cache.add(rs); > ... > } > } > getResultsToAddToCache(..) { > .. > final boolean isBatchSet = scan != null && scan.getBatch() > 0; > final boolean allowPartials = scan != null && > scan.getAllowPartialResults(); > .. > if (allowPartials || isBatchSet) { > addResultsToList(resultsToAddToCache, resultsFromServer, 0, > (null == resultsFromServer ? 0 : resultsFromServer.length)); > return resultsToAddToCache; > } > ... > } > private void addResultsToList(List<Result> outputList, Result[] inputArray, > int start, int end) { > if (inputArray == null || start < 0 || end > inputArray.length) return; > for (int i = start; i < end; i++) { > outputList.add(inputArray[i]); > } > }{code} > > It looks like we can avoid the result array to ArrayList conversion > (resultsFromServer --> resultsToAddToCache) for the first case, which is also > the most frequent case, and instead directly take the values array returned > by callable and add it to the cache without converting it into an ArrayList.
> I have taken both these flags allowPartials and isBatchSet out in loadCache() > and I am directly adding values to the scanner cache if the above condition > passes, instead of converting it into an ArrayList by calling > getResultsToAddToCache(). For example: > {code:java} > protected void loadCache() throws IOException { > Result[] values = null; > .. > final boolean isBatchSet = scan != null && scan.getBatch() > 0; > final boolean allowPartials = scan != null && scan.getAllowPartialResults(); > .. > for (;;) { > try { > values = call(callable, caller, scannerTimeout); > .. > } catch (DoNotRetryIOException | NeedUnmanagedConnectionException e) { > .. > } > if (allowPartials || isBatchSet) { // DIRECTLY COPY values TO CACHE > if (values != null) { > for (int v = 0; v < values.length; v++) { > Result rs = values[v]; > > cache.add(rs); > ... > } else { // DO ALL THE REGULAR PARTIAL RESULT HANDLING .. >
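The optimization quoted above boils down to replacing a two-pass copy (server array → temporary ArrayList → cache) with a single pass straight into the cache. A self-contained sketch of the two paths; String stands in for HBase's Result class, and the method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: the current two-pass copy vs the proposed direct copy into the
// client scanner cache. Same resulting cache contents either way.
public class DirectCacheAdd {

    /** Current path: copy array into a temporary list, then copy again into the cache. */
    public static void viaIntermediateList(List<String> cache, String[] values) {
        List<String> toAdd = new ArrayList<>(Arrays.asList(values)); // extra allocation + copy
        for (String r : toAdd) {
            cache.add(r);
        }
    }

    /** Proposed path: one pass over the server's array, no temporary list. */
    public static void direct(List<String> cache, String[] values) {
        if (values == null) return;
        for (String r : values) {
            cache.add(r);
        }
    }

    public static void main(String[] args) {
        String[] values = {"row1", "row2", "row3"};
        List<String> a = new ArrayList<>();
        List<String> b = new ArrayList<>();
        viaIntermediateList(a, values);
        direct(b, values);
        System.out.println(a.equals(b)); // true: identical cache contents, one less copy
    }
}
```

Behavior is identical for the common case; the saving is the skipped temporary ArrayList allocation and second iteration, which adds up when loadCache() runs millions of times during a large scan.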
[jira] [Commented] (HBASE-19572) RegionMover should use the configured default port number and not the one from HConstants
[ https://issues.apache.org/jira/browse/HBASE-19572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542868#comment-16542868 ] Hudson commented on HBASE-19572: Results for branch branch-2 [build #977 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/977/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/977//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/977//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/977//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionMover should use the configured default port number and not the one > from HConstants > - > > Key: HBASE-19572 > URL: https://issues.apache.org/jira/browse/HBASE-19572 > Project: HBase > Issue Type: Bug >Reporter: Esteban Gutierrez >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.1.1 > > Attachments: HBASE-19572.master.001.patch, > HBASE-19572.master.001.patch, HBASE-19572.master.003.patch, > HBASE-19572.master.004.patch, HBASE-19572.master.004.patch, > HBASE-19572.master.005.patch > > > The issue I ran into HBASE-19499 was due RegionMover not using the port used > by {{hbase-site.xml}}. The tool should use the value used in the > configuration before falling back to the hardcoded value > {{HConstants.DEFAULT_REGIONSERVER_PORT}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20866) HBase 1.x scan performance degradation compared to 0.98 version
[ https://issues.apache.org/jira/browse/HBASE-20866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542866#comment-16542866 ] Vikas Vishwakarma commented on HBASE-20866: --- [~yuzhih...@gmail.com] I am not seeing much difference in RandomReadTest and SequentialReadTest, probably because these are mostly gets. RandomSeekScanTest took 2013537ms without the patch and 1908920ms with the patch, which is a 5-6% improvement. filterScan and scanRange1 were taking a long time to complete; I will leave a test iteration running over the weekend and report the results once completed. The above test failures in the server module again don't look related to my change, probably some issue with the build. Locally mvn test -P runDevTests passed for me. I will leave mvn test -P runAllTests running over the weekend. > HBase 1.x scan performance degradation compared to 0.98 version > --- > > Key: HBASE-20866 > URL: https://issues.apache.org/jira/browse/HBASE-20866 > Project: HBase > Issue Type: Bug >Affects Versions: 1.3.2 >Reporter: Vikas Vishwakarma >Assignee: Vikas Vishwakarma >Priority: Critical > Fix For: 1.5.0, 1.2.7, 1.3.3, 1.4.6 > > Attachments: HBASE-20866.branch-1.3.001.patch, > HBASE-20866.branch-1.3.002.patch, HBASE-20866.branch-1.3.003.patch > > > Internally while testing 1.3 as part of migration from 0.98 to 1.3 we > observed perf degradation in scan performance for phoenix queries varying > from a few tens of percent up to 200% depending on the query being executed. We tried a > simple native HBase scan and there we also saw up to 40% degradation in > performance when the number of column qualifiers is high (40-50+). > To identify the root cause of the performance diff between 0.98 and 1.3 we > carried out a lot of experiments with profiling and git bisect iterations; > however, we were not able to identify any particular source of scan > performance degradation and it looked like an accumulated degradation > of 5-10% over various enhancements and refactoring.
> We identified a few major enhancements like partialResult handling, > ScannerContext with heartbeat processing, time/size limiting, RPC > refactoring, etc. that could each have contributed a small degradation in > performance which put together could be leading to a large overall degradation. > One of the changes is > [HBASE-11544|https://jira.apache.org/jira/browse/HBASE-11544] which > implements partialResult handling. In ClientScanner.java the results received > from the server are cached on the client side by converting the result array into > an ArrayList. This function gets called in a loop depending on the number of > rows in the scan result. For example, for tens of millions of rows scanned, this > can be called on the order of millions of times. > In almost all cases (99% of the time, except for handling partial results, > etc.) we are just taking the resultsFromServer, converting it into an ArrayList > resultsToAddToCache in addResultsToList(..), and then iterating over the list > again and adding it to the cache in loadCache(..), as given in the code path below: > In ClientScanner → loadCache(..) → getResultsToAddToCache(..) → > addResultsToList(..) → > {code:java} > loadCache() { > ... > List<Result> resultsToAddToCache = > getResultsToAddToCache(values, callable.isHeartbeatMessage()); > ... > … > for (Result rs : resultsToAddToCache) { > rs = filterLoadedCell(rs); > cache.add(rs); > ... > } > } > getResultsToAddToCache(..) { > .. > final boolean isBatchSet = scan != null && scan.getBatch() > 0; > final boolean allowPartials = scan != null && > scan.getAllowPartialResults(); > .. > if (allowPartials || isBatchSet) { > addResultsToList(resultsToAddToCache, resultsFromServer, 0, > (null == resultsFromServer ? 0 : resultsFromServer.length)); > return resultsToAddToCache; > } > ...
> } > private void addResultsToList(List<Result> outputList, Result[] inputArray, > int start, int end) { > if (inputArray == null || start < 0 || end > inputArray.length) return; > for (int i = start; i < end; i++) { > outputList.add(inputArray[i]); > } > }{code} > > It looks like we can avoid the result array to ArrayList conversion > (resultsFromServer --> resultsToAddToCache) for the first case, which is also > the most frequent case, and instead directly take the values array returned > by callable and add it to the cache without converting it into an ArrayList. > I have taken both these flags allowPartials and isBatchSet out in loadCache() > and I am directly adding values to the scanner cache if the above condition > passes, instead of converting it into an ArrayList by calling > getResultsToAddToCache(). For example: > {code:java} > protected void loadCache() throws
[jira] [Commented] (HBASE-19572) RegionMover should use the configured default port number and not the one from HConstants
[ https://issues.apache.org/jira/browse/HBASE-19572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542808#comment-16542808 ] Hudson commented on HBASE-19572: Results for branch branch-2.0 [build #543 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/543/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/543//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/543//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/543//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > RegionMover should use the configured default port number and not the one > from HConstants > - > > Key: HBASE-19572 > URL: https://issues.apache.org/jira/browse/HBASE-19572 > Project: HBase > Issue Type: Bug >Reporter: Esteban Gutierrez >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.1.1 > > Attachments: HBASE-19572.master.001.patch, > HBASE-19572.master.001.patch, HBASE-19572.master.003.patch, > HBASE-19572.master.004.patch, HBASE-19572.master.004.patch, > HBASE-19572.master.005.patch > > > The issue I ran into HBASE-19499 was due RegionMover not using the port used > by {{hbase-site.xml}}. The tool should use the value used in the > configuration before falling back to the hardcoded value > {{HConstants.DEFAULT_REGIONSERVER_PORT}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20882) Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
[ https://issues.apache.org/jira/browse/HBASE-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542796#comment-16542796 ] Toshihiro Suzuki commented on HBASE-20882: -- [~Apache9] Could you please take a look at the patch? > Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in > TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0 > --- > > Key: HBASE-20882 > URL: https://issues.apache.org/jira/browse/HBASE-20882 > Project: HBase > Issue Type: Sub-task > Components: backport >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20882.branch-2.0.001.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542794#comment-16542794 ] Balazs Meszaros commented on HBASE-20649: - I added some extra documentation. > Validate HFiles do not have PREFIX_TREE DataBlockEncoding > - > > Key: HBASE-20649 > URL: https://issues.apache.org/jira/browse/HBASE-20649 > Project: HBase > Issue Type: New Feature >Reporter: Peter Somogyi >Assignee: Balazs Meszaros >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20649.master.001.patch, > HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, > HBASE-20649.master.004.patch, HBASE-20649.master.005.patch, > HBASE-20649.master.006.patch > > > HBASE-20592 adds a tool to check column families on the cluster do not have > PREFIX_TREE encoding. > Since it is possible that DataBlockEncoding was already changed but HFiles > are not rewritten yet we would need a tool that can verify the content of > hfiles in the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Balazs Meszaros updated HBASE-20649: Attachment: HBASE-20649.master.006.patch > Validate HFiles do not have PREFIX_TREE DataBlockEncoding > - > > Key: HBASE-20649 > URL: https://issues.apache.org/jira/browse/HBASE-20649 > Project: HBase > Issue Type: New Feature >Reporter: Peter Somogyi >Assignee: Balazs Meszaros >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-20649.master.001.patch, > HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, > HBASE-20649.master.004.patch, HBASE-20649.master.005.patch, > HBASE-20649.master.006.patch > > > HBASE-20592 adds a tool to check column families on the cluster do not have > PREFIX_TREE encoding. > Since it is possible that DataBlockEncoding was already changed but HFiles > are not rewritten yet we would need a tool that can verify the content of > hfiles in the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20882) Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
[ https://issues.apache.org/jira/browse/HBASE-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542788#comment-16542788 ] Hadoop QA commented on HBASE-20882: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 20s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 58s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} branch-2.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} branch-2.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} hbase-server: The patch generated 0 new + 64 unchanged - 1 fixed = 64 total (was 65) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 54s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 11m 3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}154m 37s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}195m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 | | JIRA Issue | HBASE-20882 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12931439/HBASE-20882.branch-2.0.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux ab17ad9020b4 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | branch-2.0 / 59a02c3978 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13616/testReport/ | | Max. process+thread count | 4131 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/13616/console | | Powered by | Apache Yetus 0.7.0
[jira] [Updated] (HBASE-20879) Compacting memstore config should handle lower case
[ https://issues.apache.org/jira/browse/HBASE-20879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-20879: --- Reporter: Tushar (was: Tushar Sharma) > Compacting memstore config should handle lower case > --- > > Key: HBASE-20879 > URL: https://issues.apache.org/jira/browse/HBASE-20879 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.1 >Reporter: Tushar >Assignee: Ted Yu >Priority: Major > Attachments: 20879.v2.txt > > > Tushar reported seeing the following in region server log when entering > 'basic' for compacting memstore type: > {code} > 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-0] > handler.OpenRegionHandler: Failed open of > region=usertable,user6379,1531182972304.69abd81a44e9cc3ef9e150709f4f69ab., > starting to roll back the global memstore size. > java.io.IOException: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hbase.MemoryCompactionPolicy.valueOf(MemoryCompactionPolicy.java:26) > at > org.apache.hadoop.hbase.regionserver.HStore.getMemstore(HStore.java:331) > at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:271) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5531) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:999) > at > org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:996) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ... 3 more > 2018-07-10 19:43:45,944 ERROR [RS_OPEN_REGION-regionserver/c01s22:16020-1] > handler.OpenRegionHandler: Failed open of > region=temp,,1530511278693.0be48eedc68b9358aa475946d00571f1., starting to > roll back the global memstore size. 
> java.io.IOException: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hbase.MemoryCompactionPolicy.basic > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeStores(HRegion.java:1035) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:900) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:872) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7048) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7006) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6977) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6933) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6884) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284) > at > org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:109) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at
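The root cause in the traces above is that {{Enum.valueOf}} is case-sensitive, so a configured value of 'basic' never matches the constant {{BASIC}}. A minimal sketch of a case-insensitive lookup (using an illustrative copy of the enum, not HBASE-20879's actual patch):

```java
import java.util.Locale;

public class PolicyParse {
    // Illustrative copy of org.apache.hadoop.hbase.MemoryCompactionPolicy,
    // not the real HBase enum.
    enum MemoryCompactionPolicy { NONE, BASIC, EAGER, ADAPTIVE }

    // Enum.valueOf is case-sensitive, which is what throws
    // "No enum constant ...MemoryCompactionPolicy.basic" above. Trimming and
    // upper-casing the configured value first accepts 'basic' as well as 'BASIC'.
    static MemoryCompactionPolicy parse(String configured) {
        return MemoryCompactionPolicy.valueOf(
            configured.trim().toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(parse("basic"));   // BASIC
        System.out.println(parse(" Eager ")); // EAGER
    }
}
```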
[jira] [Commented] (HBASE-20876) Improve docs style in HConstants
[ https://issues.apache.org/jira/browse/HBASE-20876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542654#comment-16542654 ] Reid Chan commented on HBASE-20876: --- +1. Thanks for taking up this one. I will commit it tomorrow. > Improve docs style in HConstants > > > Key: HBASE-20876 > URL: https://issues.apache.org/jira/browse/HBASE-20876 > Project: HBase > Issue Type: Improvement >Reporter: Reid Chan >Assignee: Wei-Chiu Chuang >Priority: Minor > Labels: beginner, beginners, newbie > Attachments: HBASE-20876.master.001.patch > > > In {{HConstants}}, there's a docs snippet: > {code} > /** Don't use it! This'll get you the wrong path in a secure cluster. > * Use FileSystem.getHomeDirectory() or > * "/user/" + UserGroupInformation.getCurrentUser().getShortUserName() */ > {code} > It's an ugly style. > Let's improve the docs with the following > {code} > /** > * Description > */ > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20686) Asyncfs should retry upon RetryStartFileException
[ https://issues.apache.org/jira/browse/HBASE-20686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542638#comment-16542638 ] Duo Zhang commented on HBASE-20686: --- Maybe we could make use of Proxy.newProxyInstance? The InvocationHandler does not need to know the number of parameters at compile time? Just saying. > Asyncfs should retry upon RetryStartFileException > - > > Key: HBASE-20686 > URL: https://issues.apache.org/jira/browse/HBASE-20686 > Project: HBase > Issue Type: Bug > Components: asyncclient >Affects Versions: 2.0.0-beta-1 > Environment: HBase 2.0, Hadoop 3 with at-rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-20686.master.001.patch, > HBASE-20686.master.002.patch > > > In Hadoop-2.6 and above, the HDFS client retries on RetryStartFileException when > the NameNode experiences an encryption zone related issue. The code exists in > DFSOutputStream#newStreamForCreate(). (HDFS-6970) > In HBase-2's asyncfs implementation, > FanOutOneBlockAsyncDFSOutputHelper#createOutput() is somewhat an imitation of > HDFS's DFSOutputStream#newStreamForCreate(). However, it does not retry upon > RetryStartFileException, so it is less resilient to such issues. > Also, DFSOutputStream#newStreamForCreate() unwraps RemoteExceptions, but > asyncfs does not. Therefore, HBase gets different exceptions than before. > Filing this jira to get this corrected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
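The Proxy.newProxyInstance suggestion can be sketched as follows. {{FakeClientProtocol}} is a hypothetical stand-in for HDFS's {{ClientProtocol}}, with two {{create()}} overloads of different arity standing in for the signature differences across Hadoop versions:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Hypothetical stand-in for HDFS's ClientProtocol: two create() overloads
    // whose parameter counts differ, as they do across Hadoop versions.
    interface FakeClientProtocol {
        String create(String src, int replication);
        String create(String src, int replication, long blockSize);
    }

    // One dynamic proxy covers both overloads: invoke() receives the arguments
    // as an Object[], so the handler never needs to know the parameter list at
    // compile time.
    static FakeClientProtocol stub() {
        InvocationHandler handler =
            (proxy, method, args) -> method.getName() + "/" + args.length;
        return (FakeClientProtocol) Proxy.newProxyInstance(
            FakeClientProtocol.class.getClassLoader(),
            new Class<?>[] { FakeClientProtocol.class },
            handler);
    }

    public static void main(String[] args) {
        FakeClientProtocol p = stub();
        System.out.println(p.create("/f", 3));       // create/2
        System.out.println(p.create("/f", 3, 64L));  // create/3
    }
}
```

This is why the proxy route avoids the per-version stubbing problem Wei-Chiu describes: the handler dispatches on the runtime argument array rather than a compile-time signature.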
[jira] [Commented] (HBASE-20686) Asyncfs should retry upon RetryStartFileException
[ https://issues.apache.org/jira/browse/HBASE-20686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542633#comment-16542633 ] Wei-Chiu Chuang commented on HBASE-20686: - Thanks [~Apache9], yeah I've tried that, but mocking a reflection-based method doesn't seem trivial. Ultimately you'd need to stub {{ClientProtocol.create()}}, but there are two versions of {{ClientProtocol.create()}} with different parameter lengths depending on the Hadoop version, and {{Mockito.anyVararg()}} doesn't work in this case. So the test code would need to somehow maintain multiple versions of {{ClientProtocol.create()}} and it gets messier from there. > Asyncfs should retry upon RetryStartFileException > - > > Key: HBASE-20686 > URL: https://issues.apache.org/jira/browse/HBASE-20686 > Project: HBase > Issue Type: Bug > Components: asyncclient >Affects Versions: 2.0.0-beta-1 > Environment: HBase 2.0, Hadoop 3 with at-rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-20686.master.001.patch, > HBASE-20686.master.002.patch > > > In Hadoop-2.6 and above, the HDFS client retries on RetryStartFileException when > the NameNode experiences an encryption zone related issue. The code exists in > DFSOutputStream#newStreamForCreate(). (HDFS-6970) > In HBase-2's asyncfs implementation, > FanOutOneBlockAsyncDFSOutputHelper#createOutput() is somewhat an imitation of > HDFS's DFSOutputStream#newStreamForCreate(). However, it does not retry upon > RetryStartFileException, so it is less resilient to such issues. > Also, DFSOutputStream#newStreamForCreate() unwraps RemoteExceptions, but > asyncfs does not. Therefore, HBase gets different exceptions than before. > Filing this jira to get this corrected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20686) Asyncfs should retry upon RetryStartFileException
[ https://issues.apache.org/jira/browse/HBASE-20686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542627#comment-16542627 ] Hadoop QA commented on HBASE-20686: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 41s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s{color} | {color:red} hbase-server: The patch generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 36s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 10m 29s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}120m 49s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}162m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20686 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12931428/HBASE-20686.master.002.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 89dc80bcb996 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / ce82fd0f47 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC3 | | checkstyle | https://builds.apache.org/job/PreCommit-HBASE-Build/13613/artifact/patchprocess/diff-checkstyle-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13613/testReport/ | | Max. process+thread count | 4805 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output |
[jira] [Commented] (HBASE-20860) Merged region's RIT state may not be cleaned after master restart
[ https://issues.apache.org/jira/browse/HBASE-20860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542598#comment-16542598 ] Duo Zhang commented on HBASE-20860: --- Please also push it to branch-2? > Merged region's RIT state may not be cleaned after master restart > - > > Key: HBASE-20860 > URL: https://issues.apache.org/jira/browse/HBASE-20860 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.1.0, 2.0.1 >Reporter: Allan Yang >Assignee: Allan Yang >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: HBASE-20860.branch-2.0.002.patch, > HBASE-20860.branch-2.0.003.patch, HBASE-20860.branch-2.0.004.patch, > HBASE-20860.branch-2.0.005.patch, HBASE-20860.branch-2.0.patch > > > In MergeTableRegionsProcedure, we issue UnassignProcedures to offline the regions > to merge. But if we restart the master just after MergeTableRegionsProcedure > has finished these two UnassignProcedures and before it can delete their meta > entries, the new master will find that these two regions are CLOSED but no > procedures are attached to them. They will be regarded as RIT regions and > nobody will clean the RIT state for them later. > A quick way to resolve this stuck situation in a production env is > restarting the master again, since the meta entries are deleted in > MergeTableRegionsProcedure. Here, I offer a fix for this problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20860) Merged region's RIT state may not be cleaned after master restart
[ https://issues.apache.org/jira/browse/HBASE-20860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-20860: -- Fix Version/s: 2.2.0 > Merged region's RIT state may not be cleaned after master restart > - > > Key: HBASE-20860 > URL: https://issues.apache.org/jira/browse/HBASE-20860 > Project: HBase > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.1.0, 2.0.1 >Reporter: Allan Yang >Assignee: Allan Yang >Priority: Major > Fix For: 3.0.0, 2.0.2, 2.2.0, 2.1.1 > > Attachments: HBASE-20860.branch-2.0.002.patch, > HBASE-20860.branch-2.0.003.patch, HBASE-20860.branch-2.0.004.patch, > HBASE-20860.branch-2.0.005.patch, HBASE-20860.branch-2.0.patch > > > In MergeTableRegionsProcedure, we issue UnassignProcedures to offline the regions > to merge. But if we restart the master just after MergeTableRegionsProcedure > has finished these two UnassignProcedures and before it can delete their meta > entries, the new master will find that these two regions are CLOSED but no > procedures are attached to them. They will be regarded as RIT regions and > nobody will clean the RIT state for them later. > A quick way to resolve this stuck situation in a production env is > restarting the master again, since the meta entries are deleted in > MergeTableRegionsProcedure. Here, I offer a fix for this problem. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20686) Asyncfs should retry upon RetryStartFileException
[ https://issues.apache.org/jira/browse/HBASE-20686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542586#comment-16542586 ] Duo Zhang commented on HBASE-20686: --- I think you can use Mockito to create a mocked DistributedFileSystem? It is also not easy, as we will get the ClientProtocol and call it directly, but I think this way is cleaner as the FILE_CREATOR is 'static final'... > Asyncfs should retry upon RetryStartFileException > - > > Key: HBASE-20686 > URL: https://issues.apache.org/jira/browse/HBASE-20686 > Project: HBase > Issue Type: Bug > Components: asyncclient >Affects Versions: 2.0.0-beta-1 > Environment: HBase 2.0, Hadoop 3 with at-rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HBASE-20686.master.001.patch, > HBASE-20686.master.002.patch > > > In Hadoop-2.6 and above, the HDFS client retries on RetryStartFileException when > the NameNode experiences an encryption zone related issue. The code exists in > DFSOutputStream#newStreamForCreate(). (HDFS-6970) > In HBase-2's asyncfs implementation, > FanOutOneBlockAsyncDFSOutputHelper#createOutput() is somewhat an imitation of > HDFS's DFSOutputStream#newStreamForCreate(). However, it does not retry upon > RetryStartFileException, so it is less resilient to such issues. > Also, DFSOutputStream#newStreamForCreate() unwraps RemoteExceptions, but > asyncfs does not. Therefore, HBase gets different exceptions than before. > Filing this jira to get this corrected. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20882) Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
[ https://issues.apache.org/jira/browse/HBASE-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki updated HBASE-20882: - Attachment: HBASE-20882.branch-2.0.001.patch > Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in > TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0 > --- > > Key: HBASE-20882 > URL: https://issues.apache.org/jira/browse/HBASE-20882 > Project: HBase > Issue Type: Sub-task > Components: backport >Reporter: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20882.branch-2.0.001.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HBASE-20882) Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0
[ https://issues.apache.org/jira/browse/HBASE-20882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiro Suzuki updated HBASE-20882: - Assignee: Toshihiro Suzuki Status: Patch Available (was: Open) > Backport HBASE-20616 "TruncateTableProcedure is stuck in retry loop in > TRUNCATE_TABLE_CREATE_FS_LAYOUT state" to branch-2.0 > --- > > Key: HBASE-20882 > URL: https://issues.apache.org/jira/browse/HBASE-20882 > Project: HBase > Issue Type: Sub-task > Components: backport >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Attachments: HBASE-20882.branch-2.0.001.patch > > > Backport parent issue to branch-2.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HBASE-20876) Improve docs style in HConstants
[ https://issues.apache.org/jira/browse/HBASE-20876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542575#comment-16542575 ] Hadoop QA commented on HBASE-20876: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 20s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 21s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 42s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b | | JIRA Issue | HBASE-20876 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12931437/HBASE-20876.master.001.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux a504bbe10bc2 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / ce82fd0f47 | | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) | | Default Java | 1.8.0_171 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/13615/testReport/ | | Max. process+thread count | 293 (vs. ulimit of 1) | | modules | C: hbase-common U: hbase-common | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/13615/console | | Powered by |