[jira] [Commented] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843093#comment-16843093
 ] 

Duo Zhang commented on HBASE-22440:
---

Maybe it is better to override this method in HMaster?
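
A minimal sketch of that suggestion (the signature and return type are assumed for illustration, not taken from the patch): since the master cannot host user regions under this configuration, HMaster could simply report an empty result instead of touching the never-initialized replicationSourceHandler.

{code:java}
// Hypothetical override in HMaster; method name kept, signature assumed.
@Override
public Map<String, ReplicationStatus> getWalGroupsReplicationStatus() {
  // No user regions are hosted here, so there is nothing to report.
  return new HashMap<>();
}
{code}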

> HRegionServer#getWalGroupsReplicationStatus() throws NPE
> 
>
> Key: HBASE-22440
> URL: https://issues.apache.org/jira/browse/HBASE-22440
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: puleya7
>Assignee: puleya7
>Priority: Major
> Attachments: HBASE-22440.branch-1.001.patch, 
> HBASE-22440.branch-2.patch, HBASE-22440.master.patch
>
>
> Precondition:
> hbase.balancer.tablesOnMaster = true
> hbase.balancer.tablesOnMaster.systemTablesOnly = true
>  
> Opening the RS page of the master throws a NullPointerException, because 
> replicationSourceHandler is never initialized.
> HRegionServer#getWalGroupsReplicationStatus() needs to check [is HMaster && can't 
> host user regions].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843090#comment-16843090
 ] 

Andrew Purtell commented on HBASE-22413:


In my opinion the casting should be removed. 

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843088#comment-16843088
 ] 

Hudson commented on HBASE-22184:


Results for branch branch-1
[build #837 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/837/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/837//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/837//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/837//console].




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As the title reads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843084#comment-16843084
 ] 

Duo Zhang commented on HBASE-22413:
---

So what's the decision here? Provide a new patch to remove the casting?

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache9 commented on a change in pull request #237: HBASE-22408 add dead and unknown server open regions metric to AM 01

2019-05-17 Thread GitBox
Apache9 commented on a change in pull request #237: HBASE-22408 add dead and 
unknown server open regions metric to AM 01
URL: https://github.com/apache/hbase/pull/237#discussion_r285327808
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
 ##
 @@ -907,6 +906,20 @@ public boolean isServerOnline(ServerName serverName) {
     return serverName != null && onlineServers.containsKey(serverName);
   }
 
+  public enum ServerLiveState {
+    LIVE,
+    DEAD,
+    UNKNOWN
+  }
+
+  /**
+   * @return whether the server is online, dead, or unknown.
+   */
+  public synchronized ServerLiveState isServerKnownAndOnline(ServerName serverName) {
 
 Review comment:
   Oh, we also checked the deadServers. That's fine.
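
For context, a hedged sketch of what the reviewed method is understood to do based on the exchange above (the deadservers lookup is an assumption, not quoted from the patch):

{code:java}
public synchronized ServerLiveState isServerKnownAndOnline(ServerName serverName) {
  if (onlineServers.containsKey(serverName)) {
    return ServerLiveState.LIVE;
  }
  // The reviewer's point: dead servers are also consulted, so a server that was
  // known but has died reports DEAD rather than UNKNOWN.
  return deadservers.isDeadServer(serverName) ? ServerLiveState.DEAD : ServerLiveState.UNKNOWN;
}
{code}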


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22433) Corrupt hfile data

2019-05-17 Thread Anoop Sam John (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842896#comment-16842896
 ] 

Anoop Sam John commented on HBASE-22433:


Seems like a related issue.. When checking whether an item is already cached, we get the 
existing block, compare it, and in the finally block return the block. An unwanted 
refCount decrement seems to be happening here, which causes premature eviction of the 
block and so corrupts the HFile block buffer. 
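
For readers following along, a heavily simplified sketch of the pattern being described (all names here are illustrative, not the actual BucketCache code):

{code:java}
// Hypothetical sketch of the "check whether a block is already cached" path described above.
void cacheBlockIfAbsent(BlockCacheKey key, Cacheable newBlock) {
  Cacheable existing = getBlock(key);        // assumed to increment the block's refCount
  if (existing != null) {
    try {
      compareContent(existing, newBlock);    // verify we are not caching a different payload
    } finally {
      returnBlock(key, existing);            // must decrement exactly once; an extra
                                             // decrement can drive refCount to zero and
                                             // evict a block that readers still hold
    }
    return;
  }
  doCache(key, newBlock);
}
{code}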

> Corrupt hfile data
> --
>
> Key: HBASE-22433
> URL: https://issues.apache.org/jira/browse/HBASE-22433
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: binlijin
>Priority: Critical
>
> We use version 2.2.0 and encountered corrupt cell data.
> {code}
> 2019-05-15 22:53:59,354 ERROR 
> [regionserver/hb-mbasedata-14:16020-longCompactions-1557048533421] 
> regionserver.CompactSplit: Compaction failed 
> region=mktdm_id_src,9990,1557681762973.255e9adde013e370deb595c59a7285c3., 
> storeName=o, priority=196, startTime=1557931927314
> java.lang.IllegalStateException: Invalid currKeyLen 1700752997 or 
> currValueLen 2002739568. Block offset: 70452918, block length: 66556, 
> position: 42364 (without header).
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkKeyValueLen(HFileReaderImpl.java:1182)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:628)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1080)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
>  at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:386)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
>  at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
>  at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1429)
>  at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2231)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:629)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:671)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> 2019-05-15 23:14:24,143 ERROR 
> [regionserver/hb-mbasedata-14:16020-longCompactions-1557048533422] 
> regionserver.CompactSplit: Compaction failed 
> region=mktdm_id_src,9fdee4,1557681762973.1782aebb83eae551e7bdfc2bfa13eb3d., 
> storeName=o, priority=194, startTime=1557932726849
> java.lang.RuntimeException: Unknown code 98
>  at org.apache.hadoop.hbase.KeyValue$Type.codeToType(KeyValue.java:274)
>  at org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(CellUtil.java:1307)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(HFileWriterImpl.java:383)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock(HFileWriterImpl.java:343)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:603)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:376)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.abortWriter(DefaultCompactor.java:98)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.abortWriter(DefaultCompactor.java:42)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:335)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
>  at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
>  at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1429)
>  at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2231)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:629)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:671)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> 

[jira] [Updated] (HBASE-22289) WAL-based log splitting resubmit threshold may result in a task being stuck forever

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22289:
--
Attachment: HBASE-22289.branch-2.1.001.patch

> WAL-based log splitting resubmit threshold may result in a task being stuck 
> forever
> ---
>
> Key: HBASE-22289
> URL: https://issues.apache.org/jira/browse/HBASE-22289
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.5.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.1.5
>
> Attachments: HBASE-22289.01-branch-2.1.patch, 
> HBASE-22289.02-branch-2.1.patch, HBASE-22289.03-branch-2.1.patch, 
> HBASE-22289.branch-2.1.001.patch
>
>
> Not sure if this is handled better in procedure based WAL splitting; in any 
> case it affects versions before that.
> The problem is not in ZK as such but in internal state tracking in master, it 
> seems.
> Master:
> {noformat}
> 2019-04-21 01:49:49,584 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Resubmitting task 
> .1555831286638
> {noformat}
> worker-rs, split fails 
> {noformat}
> 
> 2019-04-21 02:05:31,774 INFO  
> [RS_LOG_REPLAY_OPS-regionserver/:17020-1] wal.WALSplitter: 
> Processed 24 edits across 2 regions; edits skipped=457; log 
> file=.1555831286638, length=2156363702, corrupted=false, progress 
> failed=true
> {noformat}
> Master (not sure about the delay of the acquired-message; at any rate it 
> seems to detect the failure fine from this server)
> {noformat}
> 2019-04-21 02:11:14,928 INFO  [main-EventThread] 
> coordination.SplitLogManagerCoordination: Task .1555831286638 acquired 
> by ,17020,139815097
> 2019-04-21 02:19:41,264 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Skipping resubmissions of task 
> .1555831286638 because threshold 3 reached
> {noformat}
> After that, this task is stuck in limbo forever with the old worker and is 
> never resubmitted. 
> The RS never logs anything else for this task.
> Killing the RS on the worker unblocked the task and some other server did the 
> split very quickly, so it seems like the master doesn't clear the worker name in its 
> internal state when hitting the threshold... The master was never restarted, so 
> restarting the master might have also cleared it.
> This is extracted from splitlogmanager log messages, note the times.
> {noformat}
> 2019-04-21 02:2   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20, 
> 
> 2019-04-22 11:1   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20}
> {noformat}
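
To make the suspected mechanism concrete, here is a hedged sketch of the resubmit path being described (field and method names are illustrative, not the actual SplitLogManagerCoordination code):

{code:java}
// Illustrative only: when the resubmit threshold is hit, resubmission is skipped but
// the task's current worker is never cleared, so the task stays parked on that worker.
boolean resubmit(Task task) {
  if (task.unforcedResubmits >= resubmitThreshold) {
    return false;               // skip resubmission; task.curWorker is left untouched
  }
  task.curWorker = null;        // clearing the worker would let another RS acquire the task
  task.unforcedResubmits++;
  rescanTaskNode();             // hypothetical: prompt workers to re-acquire the task
  return true;
}
{code}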



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22169) Open region failed cause memory leak

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842820#comment-16842820
 ] 

HBase QA commented on HBASE-22169:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HBASE-22169 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969054/0001-HBASE-22169-Open-region-failed-cause-memory-leak.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/348/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Open region failed cause memory leak
> 
>
> Key: HBASE-22169
> URL: https://issues.apache.org/jira/browse/HBASE-22169
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.5, 2.1.4
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Critical
> Fix For: 3.0.0, 2.2.1, 2.1.6
>
> Attachments: 
> 0001-HBASE-22169-Open-region-failed-cause-memory-leak.patch, 
> HBASE-22169-master-v1.patch, HBASE-22169-master-v2.patch
>
>
> In some cases (for example, when the coprocessor path is wrong) region open fails, 
> but MetricsRegionWrapperImpl has already been initialized and is never closed, causing a memory leak:
> {code:java}
> 2019-02-21 15:41:32,929 ERROR 
> [RS_OPEN_REGION-hb-2zedsc3fxjn12dl6u-005:16020-7] 
> regionserver.RegionCoprocessorHost(362): Failed to load coprocessor 
> org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService
> java.lang.IllegalArgumentException: java.net.UnknownHostException: emr-cluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:165)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:194)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:352)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:240)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:749)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:657)
> at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6727)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7037)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7009)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6965)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6916)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129){code}
>  
>  
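
A hedged illustration of the guard the description implies (this is a sketch of the idea, not the attached patch; the constructor call mirrors the stack trace above, everything else is assumed):

{code:java}
MetricsRegionWrapperImpl wrapper = new MetricsRegionWrapperImpl(this);
boolean initialized = false;
try {
  // Mirrors the failing call in the stack trace above; may throw if a coprocessor cannot load.
  this.coprocessorHost = new RegionCoprocessorHost(this, rsServices, conf);
  initialized = true;
} finally {
  if (!initialized) {
    try {
      wrapper.close();   // without this, the wrapper's scheduled metrics task keeps running -> leak
    } catch (IOException e) {
      LOG.warn("Failed to close metrics wrapper after failed region open", e);  // LOG is a placeholder
    }
  }
}
{code}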



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22169) Open region failed cause memory leak

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22169:
--
Fix Version/s: (was: 2.1.5)
   2.1.6

> Open region failed cause memory leak
> 
>
> Key: HBASE-22169
> URL: https://issues.apache.org/jira/browse/HBASE-22169
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.5, 2.1.4
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Critical
> Fix For: 3.0.0, 2.2.1, 2.1.6
>
> Attachments: 
> 0001-HBASE-22169-Open-region-failed-cause-memory-leak.patch, 
> HBASE-22169-master-v1.patch, HBASE-22169-master-v2.patch
>
>
> In some cases (for example, when the coprocessor path is wrong) region open fails, 
> but MetricsRegionWrapperImpl has already been initialized and is never closed, causing a memory leak:
> {code:java}
> 2019-02-21 15:41:32,929 ERROR 
> [RS_OPEN_REGION-hb-2zedsc3fxjn12dl6u-005:16020-7] 
> regionserver.RegionCoprocessorHost(362): Failed to load coprocessor 
> org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService
> java.lang.IllegalArgumentException: java.net.UnknownHostException: emr-cluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:165)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:194)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:352)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:240)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:749)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:657)
> at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6727)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7037)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7009)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6965)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6916)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129){code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22169) Open region failed cause memory leak

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22169:
--
Attachment: 0001-HBASE-22169-Open-region-failed-cause-memory-leak.patch

> Open region failed cause memory leak
> 
>
> Key: HBASE-22169
> URL: https://issues.apache.org/jira/browse/HBASE-22169
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.5, 2.1.4
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Critical
> Fix For: 3.0.0, 2.1.5, 2.2.1
>
> Attachments: 
> 0001-HBASE-22169-Open-region-failed-cause-memory-leak.patch, 
> HBASE-22169-master-v1.patch, HBASE-22169-master-v2.patch
>
>
> In some cases (for example, when the coprocessor path is wrong) region open fails, 
> but MetricsRegionWrapperImpl has already been initialized and is never closed, causing a memory leak:
> {code:java}
> 2019-02-21 15:41:32,929 ERROR 
> [RS_OPEN_REGION-hb-2zedsc3fxjn12dl6u-005:16020-7] 
> regionserver.RegionCoprocessorHost(362): Failed to load coprocessor 
> org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService
> java.lang.IllegalArgumentException: java.net.UnknownHostException: emr-cluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:165)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:194)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:352)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:240)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:749)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:657)
> at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6727)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7037)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7009)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6965)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6916)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129){code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22169) Open region failed cause memory leak

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842812#comment-16842812
 ] 

stack commented on HBASE-22169:
---

I tried applying this. It seemed ok on branch-2.1 though it would fail on occasion. I 
ran the test on the master branch and it did this:

{code}
 ---
 Test set: org.apache.hadoop.hbase.regionserver.TestHRegion
 ---
 Tests run: 60, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 780.443 s <<< 
FAILURE! - in org.apache.hadoop.hbase.regionserver.TestHRegion
 org.apache.hadoop.hbase.regionserver.TestHRegion  Time elapsed: 717.422 s  <<< 
ERROR!
 org.junit.runners.model.TestTimedOutException: test timed out after 780 seconds
   at 
app//org.apache.hadoop.hbase.regionserver.TestHRegion.testOpenRegionFailedMemoryLeak(TestHRegion.java:6306)
{code}

Let me revert the commits on branch-2.1 and branch-2.2 for now until the above is 
resolved. I'll put up what I cherry-picked.

> Open region failed cause memory leak
> 
>
> Key: HBASE-22169
> URL: https://issues.apache.org/jira/browse/HBASE-22169
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.5, 2.1.4
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Critical
> Fix For: 3.0.0, 2.1.5, 2.2.1
>
> Attachments: HBASE-22169-master-v1.patch, HBASE-22169-master-v2.patch
>
>
> In some cases (for example, when the coprocessor path is wrong) region open fails, 
> but MetricsRegionWrapperImpl has already been initialized and is never closed, causing a memory leak:
> {code:java}
> 2019-02-21 15:41:32,929 ERROR 
> [RS_OPEN_REGION-hb-2zedsc3fxjn12dl6u-005:16020-7] 
> regionserver.RegionCoprocessorHost(362): Failed to load coprocessor 
> org.apache.kylin.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService
> java.lang.IllegalArgumentException: java.net.UnknownHostException: emr-cluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.init(CoprocessorClassLoader.java:165)
> at 
> org.apache.hadoop.hbase.util.CoprocessorClassLoader.getClassLoader(CoprocessorClassLoader.java:250)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:194)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:352)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.(RegionCoprocessorHost.java:240)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:749)
> at org.apache.hadoop.hbase.regionserver.HRegion.(HRegion.java:657)
> at sun.reflect.GeneratedConstructorAccessor13.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:6727)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7037)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7009)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6965)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6916)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:362)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129){code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22437) HBOSS: Add Hadoop 2 / 3 profiles

2019-05-17 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842807#comment-16842807
 ] 

Sean Mackrory edited comment on HBASE-22437 at 5/17/19 11:41 PM:
-

I wondered if maybe it was a difference in the parent class, 
FileSystemContractBaseTest. Diffing the 2 versions, there are a few additional 
tests and minor tweaks, the tests are annotated with @Test in Hadoop 3 but not 
in Hadoop 2 (which shouldn't be a factor here, as I'm also annotating my versions of 
the failing tests with that), and there's a Timeout rule and an 
ExpectedException.none rule. Neither strikes me as an obvious factor here...

edit: I actually added identical rules to that test and ran it with -Phadoop2, 
and had the same issue. So I don't see any differences in the parent class from 
Hadoop 2 to Hadoop 3 that are even remotely suspicious.
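
For reference, the two rules being compared look roughly like this in JUnit 4 (field names and the timeout value are illustrative, not copied from Hadoop 3's FileSystemContractBaseTest):

{code:java}
import org.junit.Rule;
import org.junit.rules.ExpectedException;
import org.junit.rules.Timeout;

public class MyContractTest {
  // Fails any single test method that runs longer than the configured timeout.
  @Rule
  public Timeout globalTimeout = Timeout.seconds(180);

  // Lets a test declare the exception it expects instead of try/catch boilerplate.
  @Rule
  public ExpectedException thrown = ExpectedException.none();
}
{code}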


was (Author: mackrorysd):
I wondered if maybe it was a difference in the parent class, 
FileSystemContractBaseTest. Diffing the 2 versions, there's a few additional 
tests and minor tweaks, the tests are annotated with @Test in Hadoop 3 but not 
in Hadoop 2 (shouldn't be a factor here, as I'm also annotating my versions of 
the failing tests with that), and there's a Timeout rule and an 
ExpectedException.none rule. Neither strikes me as an obvious factor here...

> HBOSS: Add Hadoop 2 / 3 profiles
> 
>
> Key: HBASE-22437
> URL: https://issues.apache.org/jira/browse/HBASE-22437
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>  Labels: HBOSS
> Attachments: 0001-HBASE-22437-HBOSS-Add-Hadoop-2-3-profiles.patch
>
>
> The original discussion on HBASE-22149 indicated interest in running HBOSS on Hadoop 
> 2, and HBase itself maintains profiles for Hadoop 2 and 3. There's no 
> fundamental reason we can't - there are some minor incompatibilities in the 
> code, but no fundamental mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22289) WAL-based log splitting resubmit threshold may result in a task being stuck forever

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842808#comment-16842808
 ] 

stack commented on HBASE-22289:
---

Patch looks good [~sershe]. Let me try and fix the FB..

> WAL-based log splitting resubmit threshold may result in a task being stuck 
> forever
> ---
>
> Key: HBASE-22289
> URL: https://issues.apache.org/jira/browse/HBASE-22289
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 1.5.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.1.5
>
> Attachments: HBASE-22289.01-branch-2.1.patch, 
> HBASE-22289.02-branch-2.1.patch, HBASE-22289.03-branch-2.1.patch
>
>
> Not sure if this is handled better in procedure based WAL splitting; in any 
> case it affects versions before that.
> The problem is not in ZK as such but in internal state tracking in master, it 
> seems.
> Master:
> {noformat}
> 2019-04-21 01:49:49,584 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Resubmitting task 
> .1555831286638
> {noformat}
> worker-rs, split fails 
> {noformat}
> 
> 2019-04-21 02:05:31,774 INFO  
> [RS_LOG_REPLAY_OPS-regionserver/:17020-1] wal.WALSplitter: 
> Processed 24 edits across 2 regions; edits skipped=457; log 
> file=.1555831286638, length=2156363702, corrupted=false, progress 
> failed=true
> {noformat}
> Master (not sure about the delay of the acquired-message; at any rate it 
> seems to detect the failure fine from this server)
> {noformat}
> 2019-04-21 02:11:14,928 INFO  [main-EventThread] 
> coordination.SplitLogManagerCoordination: Task .1555831286638 acquired 
> by ,17020,139815097
> 2019-04-21 02:19:41,264 INFO  
> [master/:17000.splitLogManager..Chore.1] 
> coordination.SplitLogManagerCoordination: Skipping resubmissions of task 
> .1555831286638 because threshold 3 reached
> {noformat}
> After that, this task is stuck in limbo forever with the old worker and is 
> never resubmitted. 
> The RS never logs anything else for this task.
> Killing the RS on the worker unblocked the task and some other server did the 
> split very quickly, so it seems like the master doesn't clear the worker name in its 
> internal state when hitting the threshold... The master was never restarted, so 
> restarting the master might have also cleared it.
> This is extracted from splitlogmanager log messages, note the times.
> {noformat}
> 2019-04-21 02:2   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20, 
> 
> 2019-04-22 11:1   1555831286638=last_update = 1555837874928 last_version = 11 
> cur_worker_name = ,17020,139815097 status = in_progress 
> incarnation = 3 resubmits = 3 batch = installed = 24 done = 3 error = 20}
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22437) HBOSS: Add Hadoop 2 / 3 profiles

2019-05-17 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842807#comment-16842807
 ] 

Sean Mackrory commented on HBASE-22437:
---

I wondered if maybe it was a difference in the parent class, 
FileSystemContractBaseTest. Diffing the 2 versions, there are a few additional 
tests and minor tweaks, the tests are annotated with @Test in Hadoop 3 but not 
in Hadoop 2 (which shouldn't be a factor here, as I'm also annotating my versions of 
the failing tests with that), and there's a Timeout rule and an 
ExpectedException.none rule. Neither strikes me as an obvious factor here...

> HBOSS: Add Hadoop 2 / 3 profiles
> 
>
> Key: HBASE-22437
> URL: https://issues.apache.org/jira/browse/HBASE-22437
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>  Labels: HBOSS
> Attachments: 0001-HBASE-22437-HBOSS-Add-Hadoop-2-3-profiles.patch
>
>
> The original discussion on HBASE-22149 indicated interest in running HBOSS on Hadoop 
> 2, and HBase itself maintains profiles for Hadoop 2 and 3. There's no 
> fundamental reason we can't - there are some minor incompatibilities in the 
> code, but no fundamental mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22146) SpaceQuotaViolationPolicy Disable is not working in Namespace level

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22146:
--
Fix Version/s: (was: 2.1.5)
   2.1.6

> SpaceQuotaViolationPolicy Disable is not working in Namespace level
> ---
>
> Key: HBASE-22146
> URL: https://issues.apache.org/jira/browse/HBASE-22146
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0
>Reporter: Uma Maheswari
>Assignee: Nihal Jain
>Priority: Major
>  Labels: Quota, space
> Fix For: 3.0.0, 2.0.6, 2.2.1, 2.1.6
>
>
> SpaceQuotaViolationPolicy Disable is not working at the namespace level.
> Steps to reproduce:
>  * Create a namespace and set the quota violation policy to Disable
>  * Create tables under the namespace and violate the quota
> Expected result: tables get disabled
> Actual result: tables are not getting disabled
> Note: mutation operations are not allowed on the table



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21981) MMaped bucket cache IOEngine does not work with persistence

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21981:
--
Fix Version/s: (was: 2.1.5)
   2.1.6

> MMaped bucket cache IOEngine does not work with persistence
> ---
>
> Key: HBASE-21981
> URL: https://issues.apache.org/jira/browse/HBASE-21981
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: Anoop Sam John
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-21981.patch, HBASE-21981.patch
>
>
> The MMap-based IOEngine does not retrieve the data back when 
> 'hbase.bucketcache.persistent.path' is enabled. FileIOEngine works fine; 
> only FileMMapEngine has this problem.
> The reason is that we don't get the byte buffers back in the proper order 
> when reading back from the file in the persistence case.
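
A generic, hedged illustration of the ordering issue (not the FileMMapEngine code; bufferCount, bufferSize and fileChannel are placeholders):

{code:java}
// Offsets recorded at cache-write time are only valid if the mapped buffers are
// re-created in the same order on restart; mapping chunk i into slot i preserves
// them, while any other order makes offset/bufferSize point at the wrong buffer.
List<ByteBuffer> buffers = new ArrayList<>();
for (int i = 0; i < bufferCount; i++) {
  long start = (long) i * bufferSize;
  buffers.add(fileChannel.map(FileChannel.MapMode.READ_WRITE, start, bufferSize));
}
{code}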



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21981) MMaped bucket cache IOEngine does not work with persistence

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842572#comment-16842572
 ] 

stack commented on HBASE-21981:
---

Moving out of 2.1.5

> MMaped bucket cache IOEngine does not work with persistence
> ---
>
> Key: HBASE-21981
> URL: https://issues.apache.org/jira/browse/HBASE-21981
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.1.3
>Reporter: ramkrishna.s.vasudevan
>Assignee: Anoop Sam John
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-21981.patch, HBASE-21981.patch
>
>
> The MMap-based IOEngine does not retrieve the data back when 
> 'hbase.bucketcache.persistent.path' is enabled. FileIOEngine works fine; 
> only FileMMapEngine has this problem.
> The reason is that we don't get the byte buffers back in the proper order 
> when reading back from the file in the persistence case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21751) WAL creation fails during region open may cause region assign forever fail

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21751:
--
Fix Version/s: (was: 2.1.5)
   2.1.6

> WAL creation fails during region open may cause region assign forever fail
> --
>
> Key: HBASE-21751
> URL: https://issues.apache.org/jira/browse/HBASE-21751
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.2, 2.0.4
>Reporter: Allan Yang
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-21751-branch-2.1-v1.patch, 
> HBASE-21751-branch-2.1-v2.patch, HBASE-21751.patch, HBASE-21751.v2.patch, 
> HBASE-21751v2.patch
>
>
> When the first region opens on the RS, WALFactory will create a WAL file, 
> but if the WAL creation fails, in some cases HDFS will leave an empty file in 
> the dir (e.g. the disk is full, or the file is created successfully but block 
> allocation fails). We have a check in AbstractFSWAL that throws an error if a 
> WAL belonging to the same factory already exists. Thus, the region can never 
> be opened on this RS later.
> {code:java}
> 2019-01-17 02:15:53,320 ERROR [RS_OPEN_META-regionserver/server003:16020-0] 
> handler.OpenRegionHandler(301): Failed open of region=hbase:meta,,1.1588230740
> java.io.IOException: Target WAL already exists within directory 
> hdfs://cluster/hbase/WALs/server003.hbase.hostname.com,16020,1545269815888
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.(AbstractFSWAL.java:382)
> at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.(AsyncFSWAL.java:210)
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:72)
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:47)
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
> at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:264)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2085)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21751) WAL creation fails during region open may cause region assign forever fail

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842571#comment-16842571
 ] 

stack commented on HBASE-21751:
---

Moving out to 2.1.6.

> WAL creation fails during region open may cause region assign forever fail
> --
>
> Key: HBASE-21751
> URL: https://issues.apache.org/jira/browse/HBASE-21751
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.2, 2.0.4
>Reporter: Allan Yang
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-21751-branch-2.1-v1.patch, 
> HBASE-21751-branch-2.1-v2.patch, HBASE-21751.patch, HBASE-21751.v2.patch, 
> HBASE-21751v2.patch
>
>
> When the first region opens on the RS, WALFactory will create a WAL file, 
> but if the WAL creation fails, in some cases HDFS will leave an empty file in 
> the dir (e.g. the disk is full, or the file is created successfully but block 
> allocation fails). We have a check in AbstractFSWAL that throws an error if a 
> WAL belonging to the same factory already exists. Thus, the region can never 
> be opened on this RS later.
> {code:java}
> 2019-01-17 02:15:53,320 ERROR [RS_OPEN_META-regionserver/server003:16020-0] 
> handler.OpenRegionHandler(301): Failed open of region=hbase:meta,,1.1588230740
> java.io.IOException: Target WAL already exists within directory 
> hdfs://cluster/hbase/WALs/server003.hbase.hostname.com,16020,1545269815888
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.(AbstractFSWAL.java:382)
> at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.(AsyncFSWAL.java:210)
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:72)
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:47)
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
> at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:264)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2085)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21751) WAL creation fails during region open may cause region assign forever fail

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842570#comment-16842570
 ] 

stack commented on HBASE-21751:
---

I'd change this message on commit.. it's a little confusing: 2135 
abort("may lead to meta region stuck in failed open state", ex);

Why is the added Exception Serializable? We do not usually do this (look 
around).

So now, we construct the WAL and then have to call init on it. Does init need to be 
added to the WAL interface, or is it enough for it to be just in the abstract class?

Just write out success rather than having it be succ.

In the finally below, if there is an exception, we do not try to close the WAL. Should we?

158   } finally {
159 if (!succ) {
160   try {
161 walCopy.close();
162   } catch (Throwable t) {
163 throw new FailedCloseWALAfterInitializedErrorException(
164   "Failed close after init wal failed.", t);
165   }
166 }
167   }

Thanks.
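
For context, the construct-then-init pattern under discussion looks roughly like this (provider and conf are placeholders, and the exception handling is simplified; the quoted snippet above is the patch's actual finally block):

{code:java}
WAL walCopy = provider.createWAL(conf);   // step 1: construct the WAL instance
boolean success = false;
try {
  walCopy.init();                         // step 2: initialize; this is the call that may throw
  success = true;
  return walCopy;
} finally {
  if (!success) {
    // init failed: close the half-constructed WAL so no stale file/registration is left behind
    walCopy.close();
  }
}
{code}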


> WAL creation fails during region open may cause region assign forever fail
> --
>
> Key: HBASE-21751
> URL: https://issues.apache.org/jira/browse/HBASE-21751
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.2, 2.0.4
>Reporter: Allan Yang
>Assignee: Bing Xiao
>Priority: Major
> Fix For: 2.0.6, 2.1.5, 2.2.1
>
> Attachments: HBASE-21751-branch-2.1-v1.patch, 
> HBASE-21751-branch-2.1-v2.patch, HBASE-21751.patch, HBASE-21751.v2.patch, 
> HBASE-21751v2.patch
>
>
> When the first region opens on the RS, WALFactory will create a WAL file, 
> but if the WAL creation fails, in some cases HDFS will leave an empty file in 
> the dir (e.g. the disk is full, or the file is created successfully but block 
> allocation fails). We have a check in AbstractFSWAL that throws an error if a 
> WAL belonging to the same factory already exists. Thus, the region can never 
> be opened on this RS later.
> {code:java}
> 2019-01-17 02:15:53,320 ERROR [RS_OPEN_META-regionserver/server003:16020-0] 
> handler.OpenRegionHandler(301): Failed open of region=hbase:meta,,1.1588230740
> java.io.IOException: Target WAL already exists within directory 
> hdfs://cluster/hbase/WALs/server003.hbase.hostname.com,16020,1545269815888
> at 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.(AbstractFSWAL.java:382)
> at 
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.(AsyncFSWAL.java:210)
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:72)
> at 
> org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:47)
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
> at 
> org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
> at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:264)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2085)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:284)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
> at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22075) Potential data loss when MOB compaction fails

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842504#comment-16842504
 ] 

stack edited comment on HBASE-22075 at 5/17/19 8:29 PM:


Moving out of 2.1.5 to 2.1.6. Still WIP it seems.


was (Author: stack):
Moving out. Still WIP it seems.

> Potential data loss when MOB compaction fails
> -
>
> Key: HBASE-22075
> URL: https://issues.apache.org/jira/browse/HBASE-22075
> Project: HBase
>  Issue Type: Bug
>  Components: mob
>Affects Versions: 2.1.0, 2.0.0, 2.0.1, 2.1.1, 2.0.2, 2.0.3, 2.1.2, 2.0.4, 
> 2.1.3
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: compaction, mob
> Fix For: 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-22075-v1.patch, HBASE-22075-v2.patch, 
> ReproMOBDataLoss.java
>
>
> When MOB compaction fails during the last step (the bulk load of a newly created 
> reference file) there is a high chance of data loss due to a partially loaded 
> reference file whose cells refer to a (now) non-existent MOB file. The 
> newly created MOB file is deleted automatically in case of a MOB compaction 
> failure, but some cells with references to this file might already have been 
> loaded into HBase. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22075) Potential data loss when MOB compaction fails

2019-05-17 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842504#comment-16842504
 ] 

stack commented on HBASE-22075:
---

Moving out. Still WIP it seems.

> Potential data loss when MOB compaction fails
> -
>
> Key: HBASE-22075
> URL: https://issues.apache.org/jira/browse/HBASE-22075
> Project: HBase
>  Issue Type: Bug
>  Components: mob
>Affects Versions: 2.1.0, 2.0.0, 2.0.1, 2.1.1, 2.0.2, 2.0.3, 2.1.2, 2.0.4, 
> 2.1.3
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: compaction, mob
> Fix For: 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-22075-v1.patch, HBASE-22075-v2.patch, 
> ReproMOBDataLoss.java
>
>
> When MOB compaction fails during the last step (the bulk load of a newly created 
> reference file) there is a high chance of data loss due to a partially loaded 
> reference file whose cells refer to a (now) non-existent MOB file. The 
> newly created MOB file is deleted automatically in case of a MOB compaction 
> failure, but some cells with references to this file might already have been 
> loaded into HBase. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22075) Potential data loss when MOB compaction fails

2019-05-17 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22075:
--
Fix Version/s: (was: 2.1.5)
   2.1.6

> Potential data loss when MOB compaction fails
> -
>
> Key: HBASE-22075
> URL: https://issues.apache.org/jira/browse/HBASE-22075
> Project: HBase
>  Issue Type: Bug
>  Components: mob
>Affects Versions: 2.1.0, 2.0.0, 2.0.1, 2.1.1, 2.0.2, 2.0.3, 2.1.2, 2.0.4, 
> 2.1.3
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: compaction, mob
> Fix For: 2.0.6, 2.2.1, 2.1.6
>
> Attachments: HBASE-22075-v1.patch, HBASE-22075-v2.patch, 
> ReproMOBDataLoss.java
>
>
> When MOB compaction fails during the last step (the bulk load of a newly created 
> reference file) there is a high chance of data loss due to a partially loaded 
> reference file whose cells refer to a (now) non-existent MOB file. The 
> newly created MOB file is deleted automatically in case of a MOB compaction 
> failure, but some cells with references to this file might already have been 
> loaded into HBase. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842485#comment-16842485
 ] 

Hudson commented on HBASE-22184:


Results for branch branch-1.4
[build #802 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/802/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/802//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/802//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/802//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- Something went wrong with this stage, [check relevant console 
output|${BUILD_URL}/console].


> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As the title reads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842436#comment-16842436
 ] 

Hudson commented on HBASE-21991:


Results for branch branch-2.2
[build #265 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic of non-eligible meters*: 
> Under certain conditions, we might end up storing/exposing all the meters 
> rather than top-k-ish
>  # MetaMetrics can throw an NPE, resulting in the RS aborting, because of a 
> *Race Condition*.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB depending on the number of 
> regions. It's better to use *lossy counting to maintain top-k for region 
> metrics* as well (a short sketch of the idea follows this description).
>  # As the lossy meters do not represent actual counts, I think it'll be 
> better to *rename the meters to include "lossy" in the name*. It would be 
> more informative while monitoring the metrics and there would be less 
> confusion between actual counts and lossy counts.
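
For context, a minimal sketch of the lossy-counting idea referenced in improvement #1 above. Class and member names here are made up for illustration only; this is not the MetaMetrics code. Counts are pruned at fixed bucket boundaries so only frequently updated keys survive, which keeps the exposed meter set roughly top-k instead of one meter per region.

{code:java}
// Simplified lossy counting: prune infrequently seen keys at each bucket boundary so the
// map stays small. Illustrative only; not the MetaMetrics implementation.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class LossyCounterSketch {
  private final Map<String, Long> counts = new HashMap<>();
  private final int bucketSize;     // roughly 1 / allowed-error
  private long itemsSeen = 0;
  private long currentBucket = 1;

  public LossyCounterSketch(int bucketSize) {
    this.bucketSize = bucketSize;
  }

  public synchronized void add(String key) {
    counts.merge(key, 1L, Long::sum);
    if (++itemsSeen % bucketSize == 0) {
      currentBucket++;
      // Drop meters whose count is too low to still be a top-k candidate.
      Iterator<Map.Entry<String, Long>> it = counts.entrySet().iterator();
      while (it.hasNext()) {
        if (it.next().getValue() < currentBucket) {
          it.remove();
        }
      }
    }
  }

  public synchronized Map<String, Long> snapshot() {
    return new HashMap<>(counts);
  }
}
{code}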



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20970) Update hadoop check versions for hadoop3 in hbase-personality

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842435#comment-16842435
 ] 

Hudson commented on HBASE-20970:


Results for branch branch-2.2
[build #265 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Update hadoop check versions for hadoop3 in hbase-personality
> -
>
> Key: HBASE-20970
> URL: https://issues.apache.org/jira/browse/HBASE-20970
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-20970.branch-2.0.001.patch, 
> HBASE-20970.master.001.patch, HBASE-20970.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22424) Interactions in RSGroup test classes will cause TestRSGroupsAdmin2.testMoveServersAndTables and TestRSGroupsBalance.testGroupBalance flaky

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842434#comment-16842434
 ] 

Hudson commented on HBASE-22424:


Results for branch branch-2.2
[build #265 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Interactions in RSGroup test classes will cause 
> TestRSGroupsAdmin2.testMoveServersAndTables and 
> TestRSGroupsBalance.testGroupBalance flaky  
> 
>
> Key: HBASE-22424
> URL: https://issues.apache.org/jira/browse/HBASE-22424
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.2.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22424.master.001.patch
>
>
> When running the rsgroup test class folder to run all the UTs together, 
> TestRSGroupsAdmin2.testMoveServersAndTables and 
> TestRSGroupsBalance.testGroupBalance will be flaky.
> This is because TestRSGroupsAdmin1, TestRSGroupsAdmin2 and TestRSGroupsBalance 
> all extend TestRSGroupsBase, which has a static variable INIT controlling 
> the initialization of the 'master' group and the number of RSes in the 'default' rsgroup. 
> The output errors of TestRSGroupsBalance.testGroupBalance are shown in 
> HBASE-22420, and TestRSGroupsAdmin2.testMoveServersAndTables will encounter an 
> NPE in 
> ```rsGroupAdmin.getRSGroupInfo("master").containsServer(server.getAddress())```
> because the `master` group has not been added.
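
The NPE in miniature: {{getRSGroupInfo}} returns null when the group was never added, so the chained {{containsServer}} call dereferences null. Below is a hedged sketch of a defensive check; the real fix may instead ensure the 'master' group is created during test setup.

{code:java}
// Illustrative only: guard against a missing group instead of chaining the calls.
import java.io.IOException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RSGroupCheckSketch {
  public static boolean isInGroup(RSGroupAdmin rsGroupAdmin, String group, Address address)
      throws IOException {
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);   // null if the group was never added
    return info != null && info.containsServer(address);
  }

  private RSGroupCheckSketch() {
  }
}
{code}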



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22429) hbase-vote download step requires URL to end with '/'

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842432#comment-16842432
 ] 

Hudson commented on HBASE-22429:


Results for branch branch-2.2
[build #265 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> hbase-vote download step requires URL to end with '/' 
> --
>
> Key: HBASE-22429
> URL: https://issues.apache.org/jira/browse/HBASE-22429
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0, 1.3.5
>
>
> The hbase-vote script's download step requires the sourcedir URL be 
> terminated with a path separator or else the retrieval will escape the 
> candidate's directory and mirror way too much.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22430) hbase-vote should tee build and test output to console

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842433#comment-16842433
 ] 

Hudson commented on HBASE-22430:


Results for branch branch-2.2
[build #265 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/265//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> hbase-vote should tee build and test output to console
> --
>
> Key: HBASE-22430
> URL: https://issues.apache.org/jira/browse/HBASE-22430
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0, 1.3.5
>
> Attachments: HBASE-22430.patch, HBASE-22430.patch
>
>
> The hbase-vote script should tee the build and test output to console in 
> addition to the output file so the user does not become suspicious about 
> progress. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842394#comment-16842394
 ] 

Andrew Purtell commented on HBASE-22413:


Ok, I misunderstood the nature of the problem. Let's remove these static blocks 
on branch-1 in this patch then. If this causes the test to fail, @Ignore it and 
file a follow up. 

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842391#comment-16842391
 ] 

Sean Busbey commented on HBASE-22413:
-

bq. It's a bit nontrivial because some tests need to do it as part of their 
function if I recall correctly, and you don't want to make the changes 
universally in a resource shared among all tests.

These are changed in a static initializer and are never set back. As it is, tests will 
get inconsistent log levels depending on whether or not they run after this test 
in the same JVM. At least with a shared resource they'd have a consistent 
environment.

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842389#comment-16842389
 ] 

Sean Busbey commented on HBASE-22413:
-

The problem we're hitting is that Hadoop changed what DFSClient.LOG is between 
Hadoop 2.7 and Hadoop 2.8+. We're expecting it to be a commons-logging Log4j 
impl, but instead it's an SLF4J adapter because Hadoop moved to SLF4J.
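
For what it's worth, here is a sketch of how those test log levels could be raised without casting the logger to a concrete commons-logging class, assuming Log4j 1.x is still the backing framework on branch-1. This is an illustration only, not necessarily what the final patch does.

{code:java}
// Go through the Log4j 1.x API by logger name instead of casting DFSClient.LOG / HFileSystem.LOG.
import org.apache.hadoop.hbase.fs.HFileSystem;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.log4j.Level;
import org.apache.log4j.LogManager;

public final class VerboseDfsLogging {
  public static void enable() {
    LogManager.getLogger(DFSClient.class.getName()).setLevel(Level.ALL);
    LogManager.getLogger(HFileSystem.class.getName()).setLevel(Level.ALL);
  }

  private VerboseDfsLogging() {
  }
}
{code}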

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842382#comment-16842382
 ] 

Andrew Purtell commented on HBASE-22413:


bq. We shouldn't be doing this. If we need to change these log levels we should 
do it via src/test/resources

There are a couple other tests that reach in and twiddle directly with log 
levels too. I propose this: For this patch, provide a branch-1 patch that uses 
Apache Commons Logging and Log4J instead of SLF4J. SLF4J is a branch-2 thing. 
Then file a follow up for removing these hacks from unit tests. It's a bit 
nontrivial because some tests need to do it as part of their function if I 
recall correctly, and you don't want to make the changes universally in a 
resource shared among all tests.

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842370#comment-16842370
 ] 

Hudson commented on HBASE-21991:


Results for branch branch-1.4
[build #801 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/801/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/801//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/801//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/801//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic of non-eligible meters*: 
> Under certain conditions, we might end up storing/exposing all the meters 
> rather than top-k-ish
>  # MetaMetrics can throw an NPE, resulting in the RS aborting, because of a 
> *Race Condition*.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB depending on the number of 
> regions. It's better to use *lossy counting to maintain top-k for region 
> metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be 
> better to *rename the meters to include "lossy" in the name*. It would be 
> more informative while monitoring the metrics and there would be less 
> confusion between actual counts and lossy counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842363#comment-16842363
 ] 

Hudson commented on HBASE-21991:


Results for branch branch-2
[build #1897 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic of non-eligible meters*: 
> Under certain conditions, we might end up storing/exposing all the meters 
> rather than top-k-ish
>  # MetaMetrics can throw an NPE, resulting in the RS aborting, because of a 
> *Race Condition*.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB depending on the number of 
> regions. It's better to use *lossy counting to maintain top-k for region 
> metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be 
> better to *rename the meters to include "lossy" in the name*. It would be 
> more informative while monitoring the metrics and there would be less 
> confusion between actual counts and lossy counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842361#comment-16842361
 ] 

Hudson commented on HBASE-22184:


Results for branch branch-2
[build #1897 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1897//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-16290:
--
Component/s: Scheduler

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability, Scheduler
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Major
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: DebugDump_screenshot.png, HBASE-16290.master.001.patch, 
> HBASE-16290.master.002.patch, HBASE-16290.master.003.patch, 
> HBASE-16290.master.004.patch, HBASE-16290.master.005.patch, 
> HBASE-16290.master.006.patch, HBASE-16290.master.007.patch, 
> HBASE-16290.master.008.patch, Sample Summary.txt
>
>
> Being able to get a clue about what is in a backed-up callQueue could give insight 
> into what is going on on a jacked-up server. It just needs to summarize counts, sizes, 
> and call types. Useful for debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-12790) Support fairness across parallelized scans

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-12790:
--
Component/s: Scheduler

> Support fairness across parallelized scans
> --
>
> Key: HBASE-12790
> URL: https://issues.apache.org/jira/browse/HBASE-12790
> Project: HBase
>  Issue Type: New Feature
>  Components: Scheduler
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Major
>  Labels: Phoenix
> Attachments: AbstractRoundRobinQueue.java, HBASE-12790.patch, 
> HBASE-12790_1.patch, HBASE-12790_5.patch, HBASE-12790_callwrapper.patch, 
> HBASE-12790_trunk_1.patch, PHOENIX_4.5.3-HBase-0.98-2317-SNAPSHOT.zip
>
>
> Some HBase clients parallelize the execution of a scan to reduce latency in 
> getting back results. This can lead to starvation with a loaded cluster and 
> interleaved scans, since the RPC queue will be ordered and processed on a 
> FIFO basis. For example, suppose there are two clients, A & B, that submit largish 
> scans at the same time. Say each scan is broken down into 100 scans by the 
> client (broken down into equal depth chunks along the row key), and the 100 
> scans of client A are queued first, followed immediately by the 100 scans of 
> client B. In this case, client B will be starved out of getting any results 
> back until the scans for client A complete.
> One solution to this is to use the attached AbstractRoundRobinQueue instead 
> of the standard FIFO queue. The queue to be used could be (maybe it already 
> is) configurable based on a new config parameter. Using this queue would 
> require the client to have the same identifier for all of the 100 parallel 
> scans that represent a single logical scan from the clients point of view. 
> With this information, the round robin queue would pick off a task from the 
> queue in a round robin fashion (instead of a strictly FIFO manner) to prevent 
> starvation over interleaved parallelized scans.
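
A minimal illustration of the round-robin idea described above: each logical client scan gets its own sub-queue keyed by a shared identifier, and the dispatcher serves one task per group in rotation. This is a simplified sketch only, not the attached AbstractRoundRobinQueue.

{code:java}
// Round-robin across groups: one FIFO per group id, groups served in rotation so one client's
// burst of parallel scans cannot starve another client's scans. Illustrative sketch only.
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

public class RoundRobinQueueSketch<E> {
  private final Map<String, Queue<E>> groups = new LinkedHashMap<>();

  public synchronized void offer(String groupId, E task) {
    groups.computeIfAbsent(groupId, k -> new ArrayDeque<>()).add(task);
  }

  /** Takes one task from the group at the head of the rotation, then moves that group to the back. */
  public synchronized E poll() {
    Iterator<Map.Entry<String, Queue<E>>> it = groups.entrySet().iterator();
    if (!it.hasNext()) {
      return null;
    }
    Map.Entry<String, Queue<E>> head = it.next();
    String groupId = head.getKey();
    Queue<E> q = head.getValue();
    E task = q.poll();
    it.remove();
    if (!q.isEmpty()) {
      groups.put(groupId, q);   // re-append so the next poll serves a different group first
    }
    return task;
  }
}
{code}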



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-17088:
--
Component/s: Scheduler

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc, Scheduler
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 1.4.0, 2.0.0
>
> Attachments: HBASE-17088-branch-1.patch, HBASE-17088-v1.patch, 
> HBASE-17088-v2.patch, HBASE-17088-v3.patch, HBASE-17088-v3.patch, 
> HBASE-17088-v4.patch, HBASE-17088-v4.patch
>
>
> 1. The RWQueueRpcExecutor has eight constructor methods and the longest one 
> has ten parameters. But it is only used in SimpleRpcScheduler, and it is easy to 
> get confused when reading the code.
> 2. There are duplicate method implementations in RWQueueRpcExecutor and 
> BalancedQueueRpcExecutor. They can be implemented in their parent class 
> RpcExecutor.
> 3. SimpleRpcScheduler reads many configs to construct a new RpcExecutor. But the 
> CALL_QUEUE_SCAN_SHARE_CONF_KEY is only needed by RWQueueRpcExecutor. And 
> CALL_QUEUE_CODEL_TARGET_DELAY, CALL_QUEUE_CODEL_INTERVAL and 
> CALL_QUEUE_CODEL_LIFO_THRESHOLD are only needed by AdaptiveLifoCoDelCallQueue.
> So I think we can refactor it. Suggestions are welcome.
> Review board: https://reviews.apache.org/r/53726/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14331) a single callQueue related improvements

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842302#comment-16842302
 ] 

HBase QA commented on HBASE-14331:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-14331 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-14331 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12767759/HBASE-14331-V6.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/347/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> a single callQueue related improvements
> ---
>
> Key: HBASE-14331
> URL: https://issues.apache.org/jira/browse/HBASE-14331
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, Performance, Scheduler
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Major
> Attachments: BlockingQueuesPerformanceTestApp-output.pdf, 
> BlockingQueuesPerformanceTestApp-output.txt, 
> BlockingQueuesPerformanceTestApp.java, CallQueuePerformanceTestApp.java, 
> HBASE-14331-V2.patch, HBASE-14331-V3.patch, HBASE-14331-V4.patch, 
> HBASE-14331-V5.patch, HBASE-14331-V6.patch, HBASE-14331-V6.patch, 
> HBASE-14331.patch, HBASE-14331.patch, SemaphoreBasedBlockingQueue.java, 
> SemaphoreBasedLinkedBlockingQueue.java, 
> SemaphoreBasedPriorityBlockingQueue.java
>
>
> {{LinkedBlockingQueue}} well separates locks between the {{take}} method and 
> the {{put}} method, but not between takers, and not between putters. These 
> methods are implemented to take locks almost at the beginning of their logic. 
> HBASE-11355 introduces multiple call-queues to reduce such possible 
> congestion, but I doubt that it is required to stick to {{BlockingQueue}}.
> There are other shortcomings to using {{BlockingQueue}}. When using 
> multiple queues, since {{BlockingQueue}} blocks threads, it is required to 
> prepare enough threads for each queue. It is possible that there is a queue 
> starving for threads while there is another queue where threads are idle. 
> Even if you can tune parameters to avoid such situations, the tuning is not 
> so trivial.
> I suggest using a single {{ConcurrentLinkedQueue}} with {{Semaphore}}.
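
A minimal sketch of that suggestion, assuming a single lock-free queue gated by a semaphore (illustrative only; the attached SemaphoreBased*Queue files are the actual proposal). Putters never block, and takers block on the semaphore rather than on a queue lock, so takers and putters do not contend on the same lock.

{code:java}
// Permits are released only after an element is enqueued, so a successful acquire()
// guarantees poll() will find an element.
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

public class SemaphoreCallQueueSketch<E> {
  private final ConcurrentLinkedQueue<E> queue = new ConcurrentLinkedQueue<>();
  private final Semaphore available = new Semaphore(0);

  public void put(E element) {
    queue.add(element);      // lock-free enqueue
    available.release();     // wake at most one waiting taker
  }

  public E take() throws InterruptedException {
    available.acquire();     // block until at least one enqueued element is unclaimed
    return queue.poll();
  }
}
{code}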



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-11724) Add to RWQueueRpcExecutor the ability to split get and scan handlers

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-11724:
--
Component/s: Scheduler

> Add to RWQueueRpcExecutor the ability to split get and scan handlers
> 
>
> Key: HBASE-11724
> URL: https://issues.apache.org/jira/browse/HBASE-11724
> Project: HBase
>  Issue Type: New Feature
>  Components: IPC/RPC, Scheduler
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-11724-v0.patch, HBASE-11724-v1.patch, 
> HBASE-11724-v2.patch
>
>
> RWQueueRpcExecutor has the division between read and write requests, but we 
> can also split small-reads and long-reads. This can be useful to force a 
> deprioritization of scans on the RS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-14331) a single callQueue related improvements

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-14331:
--
Component/s: Scheduler

> a single callQueue related improvements
> ---
>
> Key: HBASE-14331
> URL: https://issues.apache.org/jira/browse/HBASE-14331
> Project: HBase
>  Issue Type: Improvement
>  Components: IPC/RPC, Performance, Scheduler
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Major
> Attachments: BlockingQueuesPerformanceTestApp-output.pdf, 
> BlockingQueuesPerformanceTestApp-output.txt, 
> BlockingQueuesPerformanceTestApp.java, CallQueuePerformanceTestApp.java, 
> HBASE-14331-V2.patch, HBASE-14331-V3.patch, HBASE-14331-V4.patch, 
> HBASE-14331-V5.patch, HBASE-14331-V6.patch, HBASE-14331-V6.patch, 
> HBASE-14331.patch, HBASE-14331.patch, SemaphoreBasedBlockingQueue.java, 
> SemaphoreBasedLinkedBlockingQueue.java, 
> SemaphoreBasedPriorityBlockingQueue.java
>
>
> {{LinkedBlockingQueue}} well separates locks between the {{take}} method and 
> the {{put}} method, but not between takers, and not between putters. These 
> methods are implemented to take locks almost at the beginning of their logic. 
> HBASE-11355 introduces multiple call-queues to reduce such possible 
> congestion, but I doubt that it is required to stick to {{BlockingQueue}}.
> There are other shortcomings to using {{BlockingQueue}}. When using 
> multiple queues, since {{BlockingQueue}} blocks threads, it is required to 
> prepare enough threads for each queue. It is possible that there is a queue 
> starving for threads while there is another queue where threads are idle. 
> Even if you can tune parameters to avoid such situations, the tuning is not 
> so trivial.
> I suggest using a single {{ConcurrentLinkedQueue}} with {{Semaphore}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16341) Missing bit on "Regression: Random Read/WorkloadC slower in 1.x than 0.98"

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-16341:
--
Component/s: Scheduler

> Missing bit on "Regression: Random Read/WorkloadC slower in 1.x than 0.98"
> --
>
> Key: HBASE-16341
> URL: https://issues.apache.org/jira/browse/HBASE-16341
> Project: HBase
>  Issue Type: Bug
>  Components: rpc, Scheduler
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HBASE-16341.master.001.patch, HBASE-16341.patch, 
> HBASE-16341.patch
>
>
> [~larsgeorge] found a missing bit in HBASE-15971 "Regression: Random 
> Read/WorkloadC slower in 1.x than 0.98" Let me fix here. Let me quote the man:
> {code}
> BTW, in constructor we do this
> String callQueueType = conf.get(CALL_QUEUE_TYPE_CONF_KEY,
>     CALL_QUEUE_TYPE_FIFO_CONF_VALUE);
>
> but in onConfigurationChange() we do
> String callQueueType = conf.get(CALL_QUEUE_TYPE_CONF_KEY,
>   CALL_QUEUE_TYPE_DEADLINE_CONF_VALUE);
> {code}
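
In other words, the missing bit is that {{onConfigurationChange()}} falls back to a different default than the constructor. Below is a sketch of the shape of the fix, with illustrative constant values (not the actual patch): resolve the type in one place so both call sites share the same FIFO default.

{code:java}
// Both the constructor and onConfigurationChange() should fall back to the same default,
// otherwise a config reload can silently flip the call queue type. Illustrative constants.
import org.apache.hadoop.conf.Configuration;

public final class CallQueueTypeSketch {
  public static final String CALL_QUEUE_TYPE_CONF_KEY = "hbase.ipc.server.callqueue.type";
  public static final String CALL_QUEUE_TYPE_FIFO_CONF_VALUE = "fifo";

  /** Single place that resolves the call queue type, used from both call sites. */
  public static String callQueueType(Configuration conf) {
    return conf.get(CALL_QUEUE_TYPE_CONF_KEY, CALL_QUEUE_TYPE_FIFO_CONF_VALUE);
  }

  private CallQueueTypeSketch() {
  }
}
{code}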



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-17808) FastPath for RWQueueRpcExecutor

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842297#comment-16842297
 ] 

HBase QA commented on HBASE-17808:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-17808 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-17808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859712/HBASE-17808.v2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/346/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> FastPath for RWQueueRpcExecutor
> ---
>
> Key: HBASE-17808
> URL: https://issues.apache.org/jira/browse/HBASE-17808
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc, Scheduler
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-17808.patch, HBASE-17808.v2.patch
>
>
> FastPath for the FIFO rpcscheduler was introduced in HBASE-16023. But it is 
> not implemented for RW queues. In this issue, I use 
> FastPathBalancedQueueRpcExecutor in RW queues. So anyone who wants to isolate 
> their read/write requests can also benefit from the fastpath.
> I haven't tested the performance yet. But since I haven't changed any of the 
> core implementation of FastPathBalancedQueueRpcExecutor, it should have the 
> same performance as in HBASE-16023.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15703) Deadline scheduler needs to return to the client info about skipped calls, not just drop them

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-15703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-15703:
--
Component/s: Scheduler

> Deadline scheduler needs to return to the client info about skipped calls, 
> not just drop them
> -
>
> Key: HBASE-15703
> URL: https://issues.apache.org/jira/browse/HBASE-15703
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC, Scheduler
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
>Priority: Critical
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HBASE-15703-branch-1.3.v1.patch, 
> HBASE-15703-branch-1.3.v2.patch
>
>
> In AdaptiveLifoCodelCallQueue we drop calls when we think we're 
> overloaded; we should instead return a CallDroppedException to the client or 
> something.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15971) Regression: Random Read/WorkloadC slower in 1.x than 0.98

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-15971:
--
Component/s: Scheduler

> Regression: Random Read/WorkloadC slower in 1.x than 0.98
> -
>
> Key: HBASE-15971
> URL: https://issues.apache.org/jira/browse/HBASE-15971
> Project: HBase
>  Issue Type: Sub-task
>  Components: rpc, Scheduler
>Affects Versions: 1.3.0, 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 1.3.0, 2.0.0
>
> Attachments: 098.hits.png, 098.png, HBASE-15971.branch-1.001.patch, 
> HBASE-15971.branch-1.002.patch, Screen Shot 2016-06-10 at 5.08.24 PM.png, 
> Screen Shot 2016-06-10 at 5.08.26 PM.png, branch-1.hits.png, branch-1.png, 
> flight_recording_10172402220203_28.branch-1.jfr, 
> flight_recording_10172402220203_29.09820.0.98.20.jfr, handlers.fp.png, 
> hits.fp.png, hits.patched1.0.vs.unpatched1.0.vs.098.png, run_ycsb.sh
>
>
> branch-1 is slower than 0.98 doing YCSB random read/workloadC. It seems to be 
> doing about 1/2 the throughput of 0.98.
> In branch-1, we have low handler occupancy compared to 0.98. Hacking in a 
> reader thread occupancy metric, it is about the same in both. In the parent issue, 
> hacking out the scheduler, I am able to get branch-1 to go 3x faster, so I will 
> dig in here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16063) Race condition in new FIFO fastpath from HBASE-16023

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-16063:
--
Component/s: Scheduler

> Race condition in new FIFO fastpath from HBASE-16023
> 
>
> Key: HBASE-16063
> URL: https://issues.apache.org/jira/browse/HBASE-16063
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scheduler
>Affects Versions: 1.3.0
>Reporter: stack
>Priority: Major
>
> From [~ikeda] over in HBASE-16023 at 
> https://issues.apache.org/jira/browse/HBASE-16023?focusedCommentId=15331172=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15331172
> {quote}
> A concrete example of the race condition:
> 1. Worker checks no task.
> 2. Reader checks no ready handler.
> 3. Worker pushes itself as a ready handler and waits on the semaphore.
> 4. Reader queues a task to the queue, without directly passing it to the 
> ready handler nor releasing the semaphore.
> (1,3) and (2,4) should be executed exclusively. That depends on luck, and it 
> might not be severe
> {quote}
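
A compact sketch of the fastpath hand-off pattern those steps describe, with the racy window marked. Names and structure are simplified for illustration; this is not the HBase implementation.

{code:java}
// Simplified fastpath hand-off: a handler with no work advertises itself and parks on its own
// semaphore; the reader either hands a call directly to a parked handler or enqueues it.
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

public class FastPathRaceSketch {
  static final class Handler {
    final Semaphore ready = new Semaphore(0);
    volatile Runnable directTask;
  }

  private final ConcurrentLinkedQueue<Runnable> callQueue = new ConcurrentLinkedQueue<>();
  private final ConcurrentLinkedDeque<Handler> readyHandlers = new ConcurrentLinkedDeque<>();

  void handlerLoopOnce(Handler h) throws InterruptedException {
    Runnable task = callQueue.poll();        // (1) worker checks the queue and sees no task
    if (task == null) {
      readyHandlers.push(h);                 // (3) worker pushes itself as a ready handler...
      h.ready.acquire();                     // ...and waits on its semaphore
      task = h.directTask != null ? h.directTask : callQueue.poll();
      h.directTask = null;
    }
    if (task != null) {
      task.run();
    }
  }

  void dispatch(Runnable call) {
    Handler h = readyHandlers.poll();        // (2) reader checks and sees no ready handler
    if (h != null) {
      h.directTask = call;
      h.ready.release();                     // fast path: hand the call straight to the handler
    } else {
      callQueue.add(call);                   // (4) reader queues the call, releasing nobody
    }
  }
  // If (2) and (4) run in the window between (1) and (3), the call sits in the queue while
  // the handler parks on its semaphore until some later dispatch happens to wake it up.
}
{code}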



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17808) FastPath for RWQueueRpcExecutor

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-17808:
--
Component/s: Scheduler

> FastPath for RWQueueRpcExecutor
> ---
>
> Key: HBASE-17808
> URL: https://issues.apache.org/jira/browse/HBASE-17808
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc, Scheduler
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-17808.patch, HBASE-17808.v2.patch
>
>
> FastPath for the FIFO rpcscheduler was introduced in HBASE-16023. But it is 
> not implemented for RW queues. In this issue, I use 
> FastPathBalancedQueueRpcExecutor in RW queues. So anyone who wants to isolate 
> their read/write requests can also benefit from the fastpath.
> I haven't tested the performance yet. But since I haven't changed any of the 
> core implementation of FastPathBalancedQueueRpcExecutor, it should have the 
> same performance as in HBASE-16023.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22184:
--
Hadoop Flags: Reviewed
Release Note: Support get log level and set log level in HTTPS mode

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22184:
--
Resolution: Implemented
Status: Resolved  (was: Patch Available)

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22437) HBOSS: Add Hadoop 2 / 3 profiles

2019-05-17 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842261#comment-16842261
 ] 

Wellington Chevreuil commented on HBASE-22437:
--

I faced the same problem with the hadoop2 profile. I noticed a different junit code path 
being executed between the two profiles. In hadoop2, here's the stack trace for 
*TestHBOSSContract.testMkdirsWithUmask:*

{noformat}
org.apache.hadoop.hbase.oss.contract.TestHBOSSContract.testMkdirsWithUmask(70)
sun.reflect.NativeMethodAccessorImpl.invoke0(-2)
sun.reflect.NativeMethodAccessorImpl.invoke(62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(43)
java.lang.reflect.Method.invoke(498)
junit.framework.TestCase.runTest(176)
junit.framework.TestCase.runBare(141)
junit.framework.TestResult$1.protect(122)
junit.framework.TestResult.runProtected(142)
junit.framework.TestResult.run(125)
junit.framework.TestCase.run(129)
junit.framework.TestSuite.runTest(252)
junit.framework.TestSuite.run(247)
org.junit.internal.runners.JUnit38ClassRunner.run(86)
org.apache.maven.surefire.junit4.JUnit4Provider.execute(365)
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(273)
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(238)
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(159)
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(384)
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(345)
org.apache.maven.surefire.booter.ForkedBooter.execute(126)
org.apache.maven.surefire.booter.ForkedBooter.main(418)
{noformat}

For hadoop3, we have:

{noformat}
org.apache.hadoop.hbase.oss.contract.TestHBOSSContract.testMkdirsWithUmask(70)
sun.reflect.NativeMethodAccessorImpl.invoke0(-2)
sun.reflect.NativeMethodAccessorImpl.invoke(62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(43)
java.lang.reflect.Method.invoke(498)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(50)
org.junit.internal.runners.model.ReflectiveCallable.run(12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(47)
org.junit.internal.runners.statements.InvokeMethod.evaluate(17)
org.junit.internal.runners.statements.RunBefores.evaluate(26)
org.junit.internal.runners.statements.RunAfters.evaluate(27)
org.junit.rules.TestWatcher$1.evaluate(55)
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(298)
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(292)
java.util.concurrent.FutureTask.run(266)
java.lang.Thread.run(748)
{noformat}

So my first thought was that the hadoop2 dependencies could be bringing in a different 
junit dependency version, but I couldn't find any. I also tried setting exclusions 
for junit on all hadoop dependencies in the pom, but it actually made no 
difference.

> HBOSS: Add Hadoop 2 / 3 profiles
> 
>
> Key: HBASE-22437
> URL: https://issues.apache.org/jira/browse/HBASE-22437
> Project: HBase
>  Issue Type: Improvement
>  Components: hboss
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>  Labels: HBOSS
> Attachments: 0001-HBASE-22437-HBOSS-Add-Hadoop-2-3-profiles.patch
>
>
> Original discussion on HBASE-22149 indicated interest in running HBOSS on Hadoop 
> 2, and HBase itself maintains profiles for Hadoop 2 and 3. There's no 
> fundamental reason we can't - there are some minor incompatibilities in the 
> code, but no fundamental mismatch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22184:
--
Fix Version/s: 2.3.0
   1.4.10
   1.5.0
   3.0.0

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Fix For: 3.0.0, 1.5.0, 1.4.10, 2.3.0
>
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842257#comment-16842257
 ] 

Reid Chan commented on HBASE-22184:
---

Pushed to branch-1 and branch-1.4 as well.

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As title read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842251#comment-16842251
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #227 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/227/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/227//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/227//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/227//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/227//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16089) Add on FastPath for CoDel

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-16089:
--
Component/s: Scheduler
 regionserver

> Add on FastPath for CoDel
> -
>
> Key: HBASE-16089
> URL: https://issues.apache.org/jira/browse/HBASE-16089
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Scheduler
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Major
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HBASE-16089.patch, HBASE-16089.v1.patch, 
> HBASE-16089.v2.patch, HBASE-16089.v3.patch, 
> baselineFifo_codel_codelPlusPatch.png, v3.png
>
>
> If this is all that awesome, we should have it on CoDel too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-10993:
--
Component/s: regionserver
 IPC/RPC

> Deprioritize long-running scanners
> --
>
> Key: HBASE-10993
> URL: https://issues.apache.org/jira/browse/HBASE-10993
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC, regionserver, Scheduler
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10993-v0.patch, HBASE-10993-v1.patch, 
> HBASE-10993-v2.patch, HBASE-10993-v3.patch, HBASE-10993-v4.patch, 
> HBASE-10993-v4.patch, HBASE-10993-v5.patch
>
>
> Currently we have a single call queue that serves all the "normal user" 
> requests, and the requests are executed in FIFO order.
> When running map-reduce jobs and user-queries on the same machine, we want to 
> prioritize the user-queries.
> Without changing too much code, and without having the user give hints, we can 
> add a “vtime” field to the scanner to keep track of how long it has been running. 
> And we can replace the callQueue with a priorityQueue. In this way we can 
> deprioritize long-running scans: the longer a scan request lives, the less 
> priority it gets.
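
A minimal sketch of the “vtime” idea, assuming the accumulated running time is tracked per scanner and used as the ordering key of the priority queue. Names are illustrative, not taken from the attached patches.

{code:java}
// Requests from scanners that have already been running longer sort later, so fresh
// user queries are served ahead of long-running scans. Illustrative sketch only.
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class VtimeCallQueueSketch {
  public static final class ScanCall {
    final long vtimeMillis;   // how long this scanner has been running so far
    final Runnable work;

    public ScanCall(long vtimeMillis, Runnable work) {
      this.vtimeMillis = vtimeMillis;
      this.work = work;
    }
  }

  // Lower accumulated vtime is polled first; long-running scans drift to the back.
  private final PriorityBlockingQueue<ScanCall> callQueue =
      new PriorityBlockingQueue<>(64, Comparator.comparingLong((ScanCall c) -> c.vtimeMillis));

  public void offer(ScanCall call) {
    callQueue.offer(call);
  }

  public ScanCall take() throws InterruptedException {
    return callQueue.take();
  }
}
{code}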



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-10993) Deprioritize long-running scanners

2019-05-17 Thread Biju Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Biju Nair updated HBASE-10993:
--
Component/s: Scheduler

> Deprioritize long-running scanners
> --
>
> Key: HBASE-10993
> URL: https://issues.apache.org/jira/browse/HBASE-10993
> Project: HBase
>  Issue Type: Sub-task
>  Components: Scheduler
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: HBASE-10993-v0.patch, HBASE-10993-v1.patch, 
> HBASE-10993-v2.patch, HBASE-10993-v3.patch, HBASE-10993-v4.patch, 
> HBASE-10993-v4.patch, HBASE-10993-v5.patch
>
>
> Currently we have a single call queue that serves all the "normal user" 
> requests, and the requests are executed in FIFO order.
> When running map-reduce jobs and user-queries on the same machine, we want to 
> prioritize the user-queries.
> Without changing too much code, and without having the user give hints, we can 
> add a “vtime” field to the scanner to keep track of how long it has been running. 
> And we can replace the callQueue with a priorityQueue. In this way we can 
> deprioritize long-running scans: the longer a scan request lives, the less 
> priority it gets.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842212#comment-16842212
 ] 

Sean Busbey commented on HBASE-22413:
-

{code}
  static {
((Log4JLogger) DFSClient.LOG).getLogger().setLevel(Level.ALL);
((Log4JLogger) HFileSystem.LOG).getLogger().setLevel(Level.ALL);
  }

{code}

We shouldn't be doing this. If we need to change these log levels we should do 
it via {{src/test/resources}}
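
For example, a test-only {{log4j.properties}} under {{src/test/resources}} could 
carry the same intent (sketch only; the logger names assume those classes log 
under their fully-qualified class names):

{code}
# Sketch: raise these loggers to ALL for tests, instead of casting in Java code.
log4j.logger.org.apache.hadoop.hdfs.DFSClient=ALL
log4j.logger.org.apache.hadoop.hbase.fs.HFileSystem=ALL
{code}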

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842209#comment-16842209
 ] 

Sean Busbey commented on HBASE-22413:
-

weird.

{code}

Caused by: java.lang.ClassCastException: org.slf4j.impl.Log4jLoggerAdapter 
cannot be cast to org.apache.commons.logging.impl.Log4JLogger
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.(TestBlockReorder.java:81)

{code}
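
One possible way to set those levels without the commons-logging cast (sketch 
only, assuming log4j 1.x is still the backing implementation on branch-1) is to 
look the loggers up by name through log4j directly:

{code:java}
  static {
    // Sketch: no cast to Log4JLogger needed, regardless of which logging
    // facade DFSClient.LOG and HFileSystem.LOG happen to wrap.
    org.apache.log4j.LogManager.getLogger("org.apache.hadoop.hdfs.DFSClient")
        .setLevel(org.apache.log4j.Level.ALL);
    org.apache.log4j.LogManager.getLogger("org.apache.hadoop.hbase.fs.HFileSystem")
        .setLevel(org.apache.log4j.Level.ALL);
  }
{code}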

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842198#comment-16842198
 ] 

Sean Busbey commented on HBASE-22413:
-

XML failures on branch-1 are always false positives (YETUS-693). Whitespace can 
get fixed on commit.

The unit test failure looks related:
{code:java}
[INFO] Running org.apache.hadoop.hbase.fs.TestBlockReorder
[ERROR] Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.464 s 
<<< FAILURE! - in org.apache.hadoop.hbase.fs.TestBlockReorder
[ERROR] testBlockLocationReorder(org.apache.hadoop.hbase.fs.TestBlockReorder)  
Time elapsed: 0.433 s  <<< ERROR!
java.lang.ExceptionInInitializerError
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.(TestBlockReorder.java:81)

[ERROR] testBlockLocation(org.apache.hadoop.hbase.fs.TestBlockReorder)  Time 
elapsed: 0.007 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.fs.TestBlockReorder

[ERROR] testHBaseCluster(org.apache.hadoop.hbase.fs.TestBlockReorder)  Time 
elapsed: 0.005 s  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.fs.TestBlockReorder{code}

Let me see if I can figure out which class it can't find.

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842195#comment-16842195
 ] 

HBase QA commented on HBASE-22184:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
46s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 52s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.coprocessor.TestMetaTableMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/345/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22184 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969013/HBASE-22184.branch-1.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | 

[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842187#comment-16842187
 ] 

Sean Busbey commented on HBASE-21991:
-

Updated the release note. Thanks for pushing on this, [~zghaobac]!

> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic for non-eligible meters*: 
> under certain conditions, we might end up storing/exposing all the meters 
> rather than only roughly the top k.
>  # MetaMetrics can throw an NPE because of a *Race Condition*, resulting in the 
> RS aborting.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB, depending on the 
> number of regions. It's better to use *lossy counting to maintain top-k for 
> region metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be better 
> to *rename the meters to include "lossy" in the name*. It would be more 
> informative when monitoring the metrics, and there would be less confusion 
> between actual counts and lossy counts.
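
For reference, the lossy-counting idea behind the top-k meters looks roughly 
like the sketch below (a simplified illustration only, not the HBase 
{{LossyCounting}} class; the error-rate handling and names are made up):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified lossy counting: track approximate counts and periodically sweep
// out meters whose count has fallen behind the current bucket, so only the
// (approximate) heavy hitters survive.
public class LossyCountingSketch {
  private final Map<String, Long> counts = new ConcurrentHashMap<>();
  private final long bucketSize;      // roughly 1 / errorRate
  private long observed = 0;
  private long currentBucket = 1;

  public LossyCountingSketch(double errorRate) {
    this.bucketSize = (long) Math.ceil(1.0 / errorRate);
  }

  public synchronized void add(String meterName) {
    counts.merge(meterName, 1L, Long::sum);
    if (++observed % bucketSize == 0) {
      currentBucket++;
      counts.entrySet().removeIf(e -> e.getValue() < currentBucket);
    }
  }
}
{code}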



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-21991:

Release Note: The class LossyCounting was unintentionally marked Public but 
was never intended to be part of our public API. This oversight has been 
corrected and LossyCounting is now marked as Private and going forward may be 
subject to additional breaking changes or removal without notice. If you have 
taken a dependency on this class we recommend cloning it locally into your 
project before upgrading to this release.  (was: Moving IA.Public class 
LossyCounting to IA.Private.)

> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic for non-eligible meters*: 
> under certain conditions, we might end up storing/exposing all the meters 
> rather than only roughly the top k.
>  # MetaMetrics can throw an NPE because of a *Race Condition*, resulting in the 
> RS aborting.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB, depending on the 
> number of regions. It's better to use *lossy counting to maintain top-k for 
> region metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be better 
> to *rename the meters to include "lossy" in the name*. It would be more 
> informative when monitoring the metrics, and there would be less confusion 
> between actual counts and lossy counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842171#comment-16842171
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #102 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/102/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/102//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/102//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/102//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the HFile into an on-heap 
> byte[], then copy that on-heap byte[] to the off-heap bucket cache 
> asynchronously. In my 100% get performance test, I also observed frequent 
> young GCs; the largest memory footprint in the young gen should be the 
> on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of 
> into a byte[] to reduce young GC pressure. We did not implement this before 
> because the older HDFS client had no ByteBuffer reading interface, but 2.7+ 
> supports it, so we can fix this now, I think.
> Will provide a patch and some perf comparisons for this.
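
A minimal sketch of the direct-to-ByteBuffer read (illustrative only; it assumes 
the underlying stream supports Hadoop's ByteBufferReadable read, and the names 
below are not the actual patch):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferBlockReadSketch {
  // Sketch: fill a (possibly direct/off-heap) ByteBuffer straight from the
  // stream, avoiding the intermediate on-heap byte[] copy in the read path.
  static ByteBuffer readBlock(FSDataInputStream is, long offset, int onDiskSizeWithHeader)
      throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(onDiskSizeWithHeader);
    is.seek(offset);
    while (buf.hasRemaining()) {
      if (is.read(buf) < 0) {  // ByteBuffer-based read added in newer HDFS clients
        throw new IOException("Unexpected EOF while reading block at offset " + offset);
      }
    }
    buf.flip();
    return buf;
  }
}
{code}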



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842142#comment-16842142
 ] 

HBase QA commented on HBASE-22440:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
15s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.0.3 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}136m 
57s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/344/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12969005/HBASE-22440.branch-2.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 760b1b799f1b 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed Feb 
13 15:00:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / f5486efdfe |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/344/testReport/ |
| Max. process+thread count | 5291 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Updated] (HBASE-22441) BucketCache NullPointerException in cacheBlock

2019-05-17 Thread binlijin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-22441:
-
Component/s: BucketCache

> BucketCache NullPointerException in cacheBlock
> --
>
> Key: HBASE-22441
> URL: https://issues.apache.org/jira/browse/HBASE-22441
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.2.0
>Reporter: binlijin
>Priority: Major
>
> There is no synchronization around the check in cacheBlock, and we see a 
> NullPointerException in a production cluster.
> {code}
> 2019-05-17 18:17:21,299 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=20,queue=7,port=16020] ipc.RpcServer: 
> Unexpected throwable object
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.returnBlock(BucketCache.java:1665)
> at 
> org.apache.hadoop.hbase.io.hfile.BlockCacheUtil.shouldReplaceExistingCacheBlock(BlockCacheUtil.java:250)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlockWithWait(BucketCache.java:426)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlock(BucketCache.java:412)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.cacheBlock(CombinedBlockCache.java:67)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.lambda$readBlock$2(HFileReaderImpl.java:1501)
> at java.util.Optional.ifPresent(Optional.java:159)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1499)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:577)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekBefore(HFileReaderImpl.java:869)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:515)
> at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:135)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6612)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6776)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6549)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3183)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3428)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}
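
As a generic illustration of the check-then-act race described above (this is 
not BucketCache code; the key/value types are placeholders), the existence 
check and the insert need to happen under the same lock:

{code:java}
import java.util.concurrent.ConcurrentHashMap;

public class AtomicCacheSketch {
  private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();

  // compute() holds the per-key lock, so no other thread can evict or replace
  // the entry between the "is it already cached?" check and the insert.
  public void cacheBlock(String key, byte[] block) {
    cache.compute(key, (k, existing) -> existing != null ? existing : block);
  }
}
{code}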



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22441) BucketCache NullPointerException in cacheBlock

2019-05-17 Thread binlijin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-22441:
-
Affects Version/s: 2.2.0

> BucketCache NullPointerException in cacheBlock
> --
>
> Key: HBASE-22441
> URL: https://issues.apache.org/jira/browse/HBASE-22441
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: binlijin
>Priority: Major
>
> There is no synchronization around the check in cacheBlock, and we see a 
> NullPointerException in a production cluster.
> {code}
> 2019-05-17 18:17:21,299 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=20,queue=7,port=16020] ipc.RpcServer: 
> Unexpected throwable object
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.returnBlock(BucketCache.java:1665)
> at 
> org.apache.hadoop.hbase.io.hfile.BlockCacheUtil.shouldReplaceExistingCacheBlock(BlockCacheUtil.java:250)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlockWithWait(BucketCache.java:426)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlock(BucketCache.java:412)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.cacheBlock(CombinedBlockCache.java:67)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.lambda$readBlock$2(HFileReaderImpl.java:1501)
> at java.util.Optional.ifPresent(Optional.java:159)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1499)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:577)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekBefore(HFileReaderImpl.java:869)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:515)
> at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:135)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
> at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6612)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6776)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6549)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3183)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3428)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22441) BucketCache NullPointerException in cacheBlock

2019-05-17 Thread binlijin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-22441:
-
Description: 
There is no synchronization around the check in cacheBlock, and we see a 
NullPointerException in a production cluster.
{code}
2019-05-17 18:17:21,299 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=20,queue=7,port=16020] ipc.RpcServer: 
Unexpected throwable object
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.returnBlock(BucketCache.java:1665)
at 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil.shouldReplaceExistingCacheBlock(BlockCacheUtil.java:250)
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlockWithWait(BucketCache.java:426)
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlock(BucketCache.java:412)
at 
org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.cacheBlock(CombinedBlockCache.java:67)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.lambda$readBlock$2(HFileReaderImpl.java:1501)
at java.util.Optional.ifPresent(Optional.java:159)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1499)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:577)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekBefore(HFileReaderImpl.java:869)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:515)
at 
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:135)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6612)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6776)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6549)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3183)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3428)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
{code}

  was:
{code}
2019-05-17 18:17:21,299 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=20,queue=7,port=16020] ipc.RpcServer: 
Unexpected throwable object
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.returnBlock(BucketCache.java:1665)
at 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil.shouldReplaceExistingCacheBlock(BlockCacheUtil.java:250)
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlockWithWait(BucketCache.java:426)
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlock(BucketCache.java:412)
at 
org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.cacheBlock(CombinedBlockCache.java:67)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.lambda$readBlock$2(HFileReaderImpl.java:1501)
at java.util.Optional.ifPresent(Optional.java:159)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1499)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:577)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekBefore(HFileReaderImpl.java:869)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:515)
at 
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:135)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6612)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6776)
at 

[jira] [Commented] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842114#comment-16842114
 ] 

HBase QA commented on HBASE-22440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
26s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 38s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.0.3 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}141m 17s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.TestSyncReplicationStandbyKillRS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/342/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968996/HBASE-22440.master.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b67604ef8462 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / cb32f4faf0 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/342/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Created] (HBASE-22441) BucketCache NullPointerException in cacheBlock

2019-05-17 Thread binlijin (JIRA)
binlijin created HBASE-22441:


 Summary: BucketCache NullPointerException in cacheBlock
 Key: HBASE-22441
 URL: https://issues.apache.org/jira/browse/HBASE-22441
 Project: HBase
  Issue Type: Bug
Reporter: binlijin


{code}
2019-05-17 18:17:21,299 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=20,queue=7,port=16020] ipc.RpcServer: 
Unexpected throwable object
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.returnBlock(BucketCache.java:1665)
at 
org.apache.hadoop.hbase.io.hfile.BlockCacheUtil.shouldReplaceExistingCacheBlock(BlockCacheUtil.java:250)
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlockWithWait(BucketCache.java:426)
at 
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.cacheBlock(BucketCache.java:412)
at 
org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.cacheBlock(CombinedBlockCache.java:67)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.lambda$readBlock$2(HFileReaderImpl.java:1501)
at java.util.Optional.ifPresent(Optional.java:159)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1499)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:340)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:577)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekBefore(HFileReaderImpl.java:869)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:515)
at 
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:135)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6612)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6776)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6549)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3183)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3428)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842107#comment-16842107
 ] 

HBase QA commented on HBASE-22440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
26s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 11s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.security.access.TestNamespaceCommands |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/343/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22440 |
| JIRA Patch URL | 

[jira] [Updated] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22184:
--
Attachment: HBASE-22184.branch-1.002.patch

> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.branch-1.002.patch, HBASE-22184.master.001.patch, 
> HBASE-22184.master.002.patch, HBASE-22184.master.003.patch
>
>
> As the title reads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842083#comment-16842083
 ] 

Hudson commented on HBASE-21991:


Results for branch branch-1
[build #835 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/835/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/835//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/835//console].


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/835//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic for non-eligible meters*: 
> under certain conditions, we might end up storing/exposing all the meters 
> rather than only roughly the top k.
>  # MetaMetrics can throw an NPE because of a *Race Condition*, resulting in the 
> RS aborting.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB, depending on the 
> number of regions. It's better to use *lossy counting to maintain top-k for 
> region metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be better 
> to *rename the meters to include "lossy" in the name*. It would be more 
> informative when monitoring the metrics, and there would be less confusion 
> between actual counts and lossy counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842074#comment-16842074
 ] 

Hudson commented on HBASE-21991:


Results for branch master
[build #1012 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1012/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic for non-eligible meters*: 
> under certain conditions, we might end up storing/exposing all the meters 
> rather than only roughly the top k.
>  # MetaMetrics can throw an NPE because of a *Race Condition*, resulting in the 
> RS aborting.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB, depending on the 
> number of regions. It's better to use *lossy counting to maintain top-k for 
> region metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be better 
> to *rename the meters to include "lossy" in the name*. It would be more 
> informative when monitoring the metrics, and there would be less confusion 
> between actual counts and lossy counts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842073#comment-16842073
 ] 

Hudson commented on HBASE-22184:


Results for branch master
[build #1012 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1012/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1012//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> [security] Support get|set LogLevel in HTTPS mode
> -
>
> Key: HBASE-22184
> URL: https://issues.apache.org/jira/browse/HBASE-22184
> Project: HBase
>  Issue Type: New Feature
>  Components: logging, security, website
>Reporter: Reid Chan
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: security
> Attachments: HBASE-22184.branch-1.001.patch, 
> HBASE-22184.master.001.patch, HBASE-22184.master.002.patch, 
> HBASE-22184.master.003.patch
>
>
> As the title reads.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache-HBase commented on issue #244: HBASE-22226 Incorrect level for headings in asciidoc

2019-05-17 Thread GitBox
Apache-HBase commented on issue #244: HBASE-22226 Incorrect level for headings 
in asciidoc
URL: https://github.com/apache/hbase/pull/244#issuecomment-493402461
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 22 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 257 | master passed |
   | 0 | refguide | 434 | branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 241 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | 0 | refguide | 435 | patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 15 | The patch does not generate ASF License warnings. |
   | | | 1486 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-244/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/244 |
   | Optional Tests |  dupname  asflicense  refguide  |
   | uname | Linux 117dd2450f31 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / cb32f4faf0 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-244/1/artifact/out/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-244/1/artifact/out/patch-site/book.html
 |
   | Max. process+thread count | 76 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-244/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842052#comment-16842052
 ] 

HBase QA commented on HBASE-22413:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
21s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
7s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red}  0m  0s{color} | 
{color:red} The patch has 1 ill-formed XML file(s). {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
53s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
4m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 
2.9.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 31s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. {color} |

[GitHub] [hbase] syedmurtazahassan opened a new pull request #244: HBASE-22226 Incorrect level for headings in asciidoc

2019-05-17 Thread GitBox
syedmurtazahassan opened a new pull request #244: HBASE-6 Incorrect level 
for headings in asciidoc
URL: https://github.com/apache/hbase/pull/244
 
 
   Changes made and warnings are gone. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread puleya7 (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

puleya7 updated HBASE-22440:

Description: 
Precondition:

hbase.balancer.tablesOnMaster = true

hbase.balancer.tablesOnMaster.systemTablesOnly = true

 

Opening the rs page of the master throws a NullPointerException, because 
replicationSourceHandler is never initialized.

HRegionServer#getWalGroupsReplicationStatus() needs to check [is HMaster && CAN'T 
host user regions].

  was:
Condition:

hbase.balancer.tablesOnMaster = true

hbase.balancer.tablesOnMaster.systemTablesOnly = true

 

Opening the rs page of the master throws a NullPointerException, because 
replicationSourceHandler is never initialized.

HRegionServer#getWalGroupsReplicationStatus() needs to check 
isMasterNoTableOrSystemTableOnly.


> HRegionServer#getWalGroupsReplicationStatus() throws NPE
> 
>
> Key: HBASE-22440
> URL: https://issues.apache.org/jira/browse/HBASE-22440
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: puleya7
>Assignee: puleya7
>Priority: Major
> Attachments: HBASE-22440.branch-1.001.patch, 
> HBASE-22440.branch-2.patch, HBASE-22440.master.patch
>
>
> Precondition:
> hbase.balancer.tablesOnMaster = true
> hbase.balancer.tablesOnMaster.systemTablesOnly = true
>  
> Opening the rs page of the master throws a NullPointerException, because 
> replicationSourceHandler is never initialized.
> HRegionServer#getWalGroupsReplicationStatus() needs to check [is HMaster && CAN'T 
> host user regions].
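
A minimal sketch of the kind of guard described above. The class, interface, and method names here are hypothetical stand-ins, not the attached patch or HBase source:

{code:java}
import java.util.Collections;
import java.util.Map;

// Hedged sketch, not HBase code: return an empty result instead of dereferencing a
// replication handler that was never initialized (e.g. on an HMaster that cannot
// host user regions).
public class WalGroupsStatusGuard {

  /** Stand-in for the real replication source service (assumed interface). */
  interface ReplicationSourceService {
    Map<String, String> walGroupsReplicationStatus();
  }

  private final ReplicationSourceService replicationSourceHandler; // may be null on a master

  WalGroupsStatusGuard(ReplicationSourceService replicationSourceHandler) {
    this.replicationSourceHandler = replicationSourceHandler;
  }

  /** Null-safe variant: an empty map instead of a NullPointerException. */
  Map<String, String> getWalGroupsReplicationStatus() {
    if (replicationSourceHandler == null) {
      return Collections.emptyMap();
    }
    return replicationSourceHandler.walGroupsReplicationStatus();
  }
}
{code}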



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread puleya7 (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

puleya7 updated HBASE-22440:

Attachment: HBASE-22440.branch-2.patch

> HRegionServer#getWalGroupsReplicationStatus() throws NPE
> 
>
> Key: HBASE-22440
> URL: https://issues.apache.org/jira/browse/HBASE-22440
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: puleya7
>Assignee: puleya7
>Priority: Major
> Attachments: HBASE-22440.branch-1.001.patch, 
> HBASE-22440.branch-2.patch, HBASE-22440.master.patch
>
>
> Condition:
> hbase.balancer.tablesOnMaster = true
> hbase.balancer.tablesOnMaster.systemTablesOnly = true
>  
> Opening the rs page of the master throws a NullPointerException, because 
> replicationSourceHandler is never initialized.
> HRegionServer#getWalGroupsReplicationStatus() needs to check 
> isMasterNoTableOrSystemTableOnly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread puleya7 (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

puleya7 updated HBASE-22440:

Attachment: HBASE-22440.branch-1.001.patch

> HRegionServer#getWalGroupsReplicationStatus() throws NPE
> 
>
> Key: HBASE-22440
> URL: https://issues.apache.org/jira/browse/HBASE-22440
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: puleya7
>Assignee: puleya7
>Priority: Major
> Attachments: HBASE-22440.branch-1.001.patch, HBASE-22440.master.patch
>
>
> Condition:
> hbase.balancer.tablesOnMaster = true
> hbase.balancer.tablesOnMaster.systemTablesOnly = true
>  
> Opening the rs page of the master throws a NullPointerException, because 
> replicationSourceHandler is never initialized.
> HRegionServer#getWalGroupsReplicationStatus() needs to check 
> isMasterNoTableOrSystemTableOnly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread puleya7 (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16842003#comment-16842003
 ] 

puleya7 commented on HBASE-22440:
-

{code:java}
2019-05-17 15:03:49,080 WARN  [qtp1577067350-284] servlet.ServletHandler: 
/rs-status

java.lang.NullPointerException

        at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getWalGroupsReplicationStatus(HRegionServer.java:3012)

        at 
org.apache.hadoop.hbase.tmpl.regionserver.ReplicationStatusTmplImpl.renderNoFlush(ReplicationStatusTmplImpl.java:38)

        at 
org.apache.hadoop.hbase.tmpl.regionserver.ReplicationStatusTmpl.renderNoFlush(ReplicationStatusTmpl.java:119)

        at 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(RSStatusTmplImpl.java:160)

        at 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl.renderNoFlush(RSStatusTmpl.java:226)

        at 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl.render(RSStatusTmpl.java:217)

        at 
org.apache.hadoop.hbase.regionserver.RSStatusServlet.doGet(RSStatusServlet.java:58)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)

        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)

        at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)

        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)

        at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)

        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)

        at 
org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)

        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)

        at 
org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1391)

        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)

        at 
org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)

        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)

        at 
org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)

        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)

        at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)

        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)

        at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)

        at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)

        at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)

        at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)

        at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)

        at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)

        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)

        at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)

        at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)

        at org.eclipse.jetty.server.Server.handle(Server.java:539)

        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)

        at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)

        at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)

        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)

        at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)

        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)

        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)

        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)

        at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)

        at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)

        at java.base/java.lang.Thread.run(Thread.java:834)
{code}

> HRegionServer#getWalGroupsReplicationStatus() throws NPE
> 
>
> Key: HBASE-22440
> URL: https://issues.apache.org/jira/browse/HBASE-22440
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: puleya7
>

[jira] [Commented] (HBASE-22184) [security] Support get|set LogLevel in HTTPS mode

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841999#comment-16841999
 ] 

HBase QA commented on HBASE-22184:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-1 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1 passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
20s{color} | {color:red} hbase-server: The patch generated 3 new + 1 unchanged 
- 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 32s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/340/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22184 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968985/HBASE-22184.branch-1.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e4cbbc23359a 4.4.0-138-generic 

[jira] [Updated] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread puleya7 (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

puleya7 updated HBASE-22440:

  Assignee: puleya7
Attachment: HBASE-22440.master.patch
Status: Patch Available  (was: Open)

> HRegionServer#getWalGroupsReplicationStatus() throws NPE
> 
>
> Key: HBASE-22440
> URL: https://issues.apache.org/jira/browse/HBASE-22440
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.4
>Reporter: puleya7
>Assignee: puleya7
>Priority: Major
> Attachments: HBASE-22440.master.patch
>
>
> Condition:
> hbase.balancer.tablesOnMaster = true
> hbase.balancer.tablesOnMaster.systemTablesOnly = true
>  
> Opening the rs page of the master throws a NullPointerException, because 
> replicationSourceHandler is never initialized.
> HRegionServer#getWalGroupsReplicationStatus() needs to check 
> isMasterNoTableOrSystemTableOnly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22440) HRegionServer#getWalGroupsReplicationStatus() throws NPE

2019-05-17 Thread puleya7 (JIRA)
puleya7 created HBASE-22440:
---

 Summary: HRegionServer#getWalGroupsReplicationStatus() throws NPE
 Key: HBASE-22440
 URL: https://issues.apache.org/jira/browse/HBASE-22440
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.1.4
Reporter: puleya7


Condition:

hbase.balancer.tablesOnMaster = true

hbase.balancer.tablesOnMaster.systemTablesOnly = true

 

Opening the rs page of the master throws a NullPointerException, because 
replicationSourceHandler is never initialized.

HRegionServer#getWalGroupsReplicationStatus() needs to check 
isMasterNoTableOrSystemTableOnly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841987#comment-16841987
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #226 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/226/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/226//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/226//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/226//console].


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/226//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22439) After restarting hbase, a specific node sometimes ends up with Num.Regions=0

2019-05-17 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-22439.
---
Resolution: Invalid

Please subscribe and send an email to u...@hbase.apache.org to ask.

And I think this is a question, not a bug. If you restart the whole cluster, it 
is fine that some region servers do not hold any regions; the load balancer 
will balance the regions later.

And please use English if possible. Not all the people here can read Chinese...
 (The same reply, originally in Chinese:) 

Just subscribe to u...@hbase.apache.org and send your question there.

From the description, I don't think this is a bug. After restarting the whole cluster it is indeed possible that some region servers hold no regions; once the balancer is on, they will even out after a while.

Also, please try to ask and reply in English; not everyone in the community can read Chinese...

> After restarting hbase, a specific node sometimes ends up with Num.Regions=0
> -
>
> Key: HBASE-22439
> URL: https://issues.apache.org/jira/browse/HBASE-22439
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.4.9
> Environment: centos 6.9  
> zk 4.6.11
> hadoop2.7.7
> hbase 1.4.9
>Reporter: sylvanas
>Priority: Major
>  Labels: hbase
>
> Every time hbase starts, a specific node (whose total disk is 50% of the other nodes', with usage around 25%) has a chance (greater than 50%) of ending up with Num.Regions=0; its regions are spread across the other nodes, and neither the master nor the regionserver logs anything useful during the whole process.
> The log output on the machine that ended up with regions=0 is as follows:
> {code:java}
> // code placeholder
> ...
> 2019-05-17 10:38:29,894 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.ReplicationSourceManager: Current list of replicators: 
> [gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
> gladslave3,16020,1558060700753, gladslave1,16020,1558060697694] other RSs: 
> [gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
> gladslave3,16020,1558060700753, gladslave1,16020,1558060697694]
> 2019-05-17 10:38:29,936 INFO [SplitLogWorker-gladslave3:16020] 
> regionserver.SplitLogWorker: SplitLogWorker gladslave3,16020,1558060700753 
> starting
> 2019-05-17 10:38:29,936 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
> 2019-05-17 10:38:29,939 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.HRegionServer: Serving as gladslave3,16020,1558060700753, 
> RpcServer on gladslave3/10.86.10.103:16020, sessionid=0x401e4be60870092
> 2019-05-17 10:38:30,467 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> quotas.RegionServerQuotaManager: Quota support disabled
> 2019-05-17 10:38:35,699 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, 
> prefix=gladslave3%2C16020%2C1558060700753, suffix=, 
> logDir=hdfs://haservice/hbase/WALs/gladslave3,16020,1558060700753, 
> archiveDir=hdfs://haservice/hbase/oldWALs
> 2019-05-17 10:38:37,122 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: Slow sync cost: 369 ms, current pipeline: []
> 2019-05-17 10:38:37,123 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: New WAL 
> /hbase/WALs/gladslave3,16020,1558060700753/gladslave3%2C16020%2C1558060700753.1558060715708
> {code}
>  
> However, after changing the log level to debug, the Num.Regions=0 situation never happened again (20 repeated restart tests found no problem). Still, every time hbase restarts, the number of regions on each regionserver changes and is not the count it had at the previous shutdown.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22439) After restarting hbase, a specific node sometimes ends up with Num.Regions=0

2019-05-17 Thread sylvanas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sylvanas updated HBASE-22439:
-
Labels: hbase  (was: regionserver)

> After restarting hbase, a specific node sometimes ends up with Num.Regions=0
> -
>
> Key: HBASE-22439
> URL: https://issues.apache.org/jira/browse/HBASE-22439
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.4.9
> Environment: centos 6.9  
> zk 4.6.11
> hadoop2.7.7
> hbase 1.4.9
>Reporter: sylvanas
>Priority: Major
>  Labels: hbase
>
> Every time hbase starts, a specific node (whose total disk is 50% of the other nodes', with usage around 25%) has a chance (greater than 50%) of ending up with Num.Regions=0; its regions are spread across the other nodes, and neither the master nor the regionserver logs anything useful during the whole process.
> The log output on the machine that ended up with regions=0 is as follows:
> {code:java}
> // code placeholder
> ...
> 2019-05-17 10:38:29,894 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.ReplicationSourceManager: Current list of replicators: 
> [gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
> gladslave3,16020,1558060700753, gladslave1,16020,1558060697694] other RSs: 
> [gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
> gladslave3,16020,1558060700753, gladslave1,16020,1558060697694]
> 2019-05-17 10:38:29,936 INFO [SplitLogWorker-gladslave3:16020] 
> regionserver.SplitLogWorker: SplitLogWorker gladslave3,16020,1558060700753 
> starting
> 2019-05-17 10:38:29,936 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
> 2019-05-17 10:38:29,939 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.HRegionServer: Serving as gladslave3,16020,1558060700753, 
> RpcServer on gladslave3/10.86.10.103:16020, sessionid=0x401e4be60870092
> 2019-05-17 10:38:30,467 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> quotas.RegionServerQuotaManager: Quota support disabled
> 2019-05-17 10:38:35,699 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, 
> prefix=gladslave3%2C16020%2C1558060700753, suffix=, 
> logDir=hdfs://haservice/hbase/WALs/gladslave3,16020,1558060700753, 
> archiveDir=hdfs://haservice/hbase/oldWALs
> 2019-05-17 10:38:37,122 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: Slow sync cost: 369 ms, current pipeline: []
> 2019-05-17 10:38:37,123 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: New WAL 
> /hbase/WALs/gladslave3,16020,1558060700753/gladslave3%2C16020%2C1558060700753.1558060715708
> {code}
>  
> However, after changing the log level to debug, the Num.Regions=0 situation never happened again (20 repeated restart tests found no problem). Still, every time hbase restarts, the number of regions on each regionserver changes and is not the count it had at the previous shutdown.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22439) After restarting hbase, a specific node sometimes ends up with Num.Regions=0

2019-05-17 Thread sylvanas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sylvanas updated HBASE-22439:
-
Labels: regionserver  (was: )

> After restarting hbase, a specific node sometimes ends up with Num.Regions=0
> -
>
> Key: HBASE-22439
> URL: https://issues.apache.org/jira/browse/HBASE-22439
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.4.9
> Environment: centos 6.9  
> zk 4.6.11
> hadoop2.7.7
> hbase 1.4.9
>Reporter: sylvanas
>Priority: Major
>  Labels: regionserver
>
> Every time hbase starts, a specific node (whose total disk is 50% of the other nodes', with usage around 25%) has a chance (greater than 50%) of ending up with Num.Regions=0; its regions are spread across the other nodes, and neither the master nor the regionserver logs anything useful during the whole process.
> The log output on the machine that ended up with regions=0 is as follows:
> {code:java}
> // code placeholder
> ...
> 2019-05-17 10:38:29,894 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.ReplicationSourceManager: Current list of replicators: 
> [gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
> gladslave3,16020,1558060700753, gladslave1,16020,1558060697694] other RSs: 
> [gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
> gladslave3,16020,1558060700753, gladslave1,16020,1558060697694]
> 2019-05-17 10:38:29,936 INFO [SplitLogWorker-gladslave3:16020] 
> regionserver.SplitLogWorker: SplitLogWorker gladslave3,16020,1558060700753 
> starting
> 2019-05-17 10:38:29,936 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
> 2019-05-17 10:38:29,939 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> regionserver.HRegionServer: Serving as gladslave3,16020,1558060700753, 
> RpcServer on gladslave3/10.86.10.103:16020, sessionid=0x401e4be60870092
> 2019-05-17 10:38:30,467 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> quotas.RegionServerQuotaManager: Quota support disabled
> 2019-05-17 10:38:35,699 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, 
> prefix=gladslave3%2C16020%2C1558060700753, suffix=, 
> logDir=hdfs://haservice/hbase/WALs/gladslave3,16020,1558060700753, 
> archiveDir=hdfs://haservice/hbase/oldWALs
> 2019-05-17 10:38:37,122 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: Slow sync cost: 369 ms, current pipeline: []
> 2019-05-17 10:38:37,123 INFO [regionserver/gladslave3/10.86.10.103:16020] 
> wal.FSHLog: New WAL 
> /hbase/WALs/gladslave3,16020,1558060700753/gladslave3%2C16020%2C1558060700753.1558060715708
> {code}
>  
> However, after changing the log level to debug, the Num.Regions=0 situation never happened again (20 repeated restart tests found no problem). Still, every time hbase restarts, the number of regions on each regionserver changes and is not the count it had at the previous shutdown.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22433) Corrupt hfile data

2019-05-17 Thread binlijin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841970#comment-16841970
 ] 

binlijin commented on HBASE-22433:
--

Looks like the NullPointerException is another concurrency problem.

> Corrupt hfile data
> --
>
> Key: HBASE-22433
> URL: https://issues.apache.org/jira/browse/HBASE-22433
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: binlijin
>Priority: Critical
>
> We use version 2.2.0 and encounter corrupt cell data.
> {code}
> 2019-05-15 22:53:59,354 ERROR 
> [regionserver/hb-mbasedata-14:16020-longCompactions-1557048533421] 
> regionserver.CompactSplit: Compaction failed 
> region=mktdm_id_src,9990,1557681762973.255e9adde013e370deb595c59a7285c3., 
> storeName=o, priority=196, startTime=1557931927314
> java.lang.IllegalStateException: Invalid currKeyLen 1700752997 or 
> currValueLen 2002739568. Block offset: 70452918, block length: 66556, 
> position: 42364 (without header).
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.checkKeyValueLen(HFileReaderImpl.java:1182)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.readKeyValueLen(HFileReaderImpl.java:628)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl._next(HFileReaderImpl.java:1080)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.next(HFileReaderImpl.java:1097)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:208)
>  at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:120)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:644)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:386)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
>  at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
>  at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1429)
>  at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2231)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:629)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:671)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> 2019-05-15 23:14:24,143 ERROR 
> [regionserver/hb-mbasedata-14:16020-longCompactions-1557048533422] 
> regionserver.CompactSplit: Compaction failed 
> region=mktdm_id_src,9fdee4,1557681762973.1782aebb83eae551e7bdfc2bfa13eb3d., 
> storeName=o, priority=194, startTime=1557932726849
> java.lang.RuntimeException: Unknown code 98
>  at org.apache.hadoop.hbase.KeyValue$Type.codeToType(KeyValue.java:274)
>  at org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(CellUtil.java:1307)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(HFileWriterImpl.java:383)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock(HFileWriterImpl.java:343)
>  at 
> org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:603)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:376)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.abortWriter(DefaultCompactor.java:98)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.abortWriter(DefaultCompactor.java:42)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:335)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
>  at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
>  at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1429)
>  at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2231)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:629)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:671)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22439) After restarting hbase, a specific node sometimes ends up with Num.Regions=0

2019-05-17 Thread sylvanas (JIRA)
sylvanas created HBASE-22439:


 Summary: After restarting hbase, a specific node sometimes ends up with Num.Regions=0
 Key: HBASE-22439
 URL: https://issues.apache.org/jira/browse/HBASE-22439
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.4.9
 Environment: centos 6.9  

zk 4.6.11

hadoop2.7.7

hbase 1.4.9
Reporter: sylvanas


Every time hbase starts, a specific node (whose total disk is 50% of the other nodes', with usage around 25%) has a chance (greater than 50%) of ending up with Num.Regions=0; its regions are spread across the other nodes, and neither the master nor the regionserver logs anything useful during the whole process.

The log output on the machine that ended up with regions=0 is as follows:
{code:java}
// code placeholder

...
2019-05-17 10:38:29,894 INFO [regionserver/gladslave3/10.86.10.103:16020] 
regionserver.ReplicationSourceManager: Current list of replicators: 
[gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
gladslave3,16020,1558060700753, gladslave1,16020,1558060697694] other RSs: 
[gladslave4,16020,1558060698511, gladslave2,16020,1558060697919, 
gladslave3,16020,1558060700753, gladslave1,16020,1558060697694]
2019-05-17 10:38:29,936 INFO [SplitLogWorker-gladslave3:16020] 
regionserver.SplitLogWorker: SplitLogWorker gladslave3,16020,1558060700753 
starting
2019-05-17 10:38:29,936 INFO [regionserver/gladslave3/10.86.10.103:16020] 
regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
2019-05-17 10:38:29,939 INFO [regionserver/gladslave3/10.86.10.103:16020] 
regionserver.HRegionServer: Serving as gladslave3,16020,1558060700753, 
RpcServer on gladslave3/10.86.10.103:16020, sessionid=0x401e4be60870092
2019-05-17 10:38:30,467 INFO [regionserver/gladslave3/10.86.10.103:16020] 
quotas.RegionServerQuotaManager: Quota support disabled
2019-05-17 10:38:35,699 INFO [regionserver/gladslave3/10.86.10.103:16020] 
wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, 
prefix=gladslave3%2C16020%2C1558060700753, suffix=, 
logDir=hdfs://haservice/hbase/WALs/gladslave3,16020,1558060700753, 
archiveDir=hdfs://haservice/hbase/oldWALs
2019-05-17 10:38:37,122 INFO [regionserver/gladslave3/10.86.10.103:16020] 
wal.FSHLog: Slow sync cost: 369 ms, current pipeline: []
2019-05-17 10:38:37,123 INFO [regionserver/gladslave3/10.86.10.103:16020] 
wal.FSHLog: New WAL 
/hbase/WALs/gladslave3,16020,1558060700753/gladslave3%2C16020%2C1558060700753.1558060715708
{code}
 

However, after changing the log level to debug, the Num.Regions=0 situation never happened again (20 repeated restart tests found no problem). Still, every time hbase restarts, the number of regions on each regionserver changes and is not the count it had at the previous shutdown.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22412) Improve the metrics in ByteBuffAllocator

2019-05-17 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-22412:
-
Description: 
Address the comment in HBASE-22387: 
bq. The ByteBuffAllocator#getFreeBufferCount will be O(N) complexity, because the buffers here is a ConcurrentLinkedQueue. It's worth filing an issue for this.

Also I think we should use the allocated bytes instead of the allocation count to evaluate the heap allocation percent, so that we can decide whether the ByteBuffer is too small and whether we will have higher GC pressure. Assume this case: the buffer size is 64KB, and each block we get is 65KB; then each block causes one heap allocation (1KB) and one pool allocation (64KB). If we only consider the allocation count, the heap allocation ratio will be 1 / (1 + 1) = 50%, but if we consider the allocated bytes, the ratio will be 1KB / 65KB, about 1.5%.

If the heap allocation percent is less than hbase.ipc.server.reservoir.minimal.allocating.size / hbase.ipc.server.allocator.buffer.size, then the allocator works fine; otherwise it is overloaded. 

  was:
Address the comment in HBASE-22387: 
bq. The ByteBuffAllocator#getFreeBufferCount will be O(N) complexity, because the buffers here is a ConcurrentLinkedQueue. It's worth filing an issue for this.

Also I think we should use the allocated bytes instead of the allocation count to evaluate the heap allocation percent, so that we can decide whether the ByteBuffer is too small and whether we will have higher GC pressure. Assume this case: the buffer size is 64KB, and each block we get is 65KB; then each block causes one heap allocation (1KB) and one pool allocation (64KB). If we only consider the allocation count, the heap allocation ratio will be 1 / (1 + 1) = 50%, but if we consider the allocated bytes, the ratio will be 1KB / 65KB, about 1.5%.

If the heap allocation percent is less than hbase.ipc.server.reservoir.minimal.allocating.size / hbase.ipc.server.allocator.buffer.size, then the allocator works fine; otherwise it is overloaded. 


> Improve the metrics in ByteBuffAllocator
> 
>
> Key: HBASE-22412
> URL: https://issues.apache.org/jira/browse/HBASE-22412
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22412.HBASE-21879.v1.patch, 
> HBASE-22412.HBASE-21879.v2.patch, HBASE-22412.HBASE-21879.v3.patch, JMX.png, 
> web-UI.png
>
>
> Address the comment in HBASE-22387: 
> bq. The ByteBuffAllocator#getFreeBufferCount will be O(N) complexity, because the buffers here is a ConcurrentLinkedQueue. It's worth filing an issue for this.
> Also I think we should use the allocated bytes instead of the allocation count to evaluate the heap allocation percent, so that we can decide whether the ByteBuffer is too small and whether we will have higher GC pressure. Assume this case: the buffer size is 64KB, and each block we get is 65KB; then each block causes one heap allocation (1KB) and one pool allocation (64KB). If we only consider the allocation count, the heap allocation ratio will be 1 / (1 + 1) = 50%, but if we consider the allocated bytes, the ratio will be 1KB / 65KB, about 1.5%.
> If the heap allocation percent is less than hbase.ipc.server.reservoir.minimal.allocating.size / hbase.ipc.server.allocator.buffer.size, then the allocator works fine; otherwise it is overloaded. 
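
A tiny, self-contained illustration of the byte-based ratio from the 64KB/65KB example above (hypothetical class and variable names; this is not the ByteBuffAllocator metrics code):

{code:java}
public class AllocationRatioExample {
  public static void main(String[] args) {
    long poolBufferSize = 64 * 1024L;   // pooled ByteBuffer size: 64KB
    long blockSize = 65 * 1024L;        // incoming block size: 65KB

    // One pooled allocation (64KB) plus one heap allocation for the 1KB overflow.
    long poolAllocatedBytes = poolBufferSize;
    long heapAllocatedBytes = blockSize - poolBufferSize;

    double ratioByCount = 1.0 / (1 + 1);                 // 1 heap alloc out of 2 allocs = 50%
    double ratioByBytes = (double) heapAllocatedBytes
        / (heapAllocatedBytes + poolAllocatedBytes);     // 1KB / 65KB, about 1.5%

    System.out.printf("heap ratio by count: %.1f%%, by bytes: %.1f%%%n",
        ratioByCount * 100, ratioByBytes * 100);
  }
}
{code}

With these numbers the count-based ratio reports 50% while the byte-based ratio reports roughly 1.5%, matching the description above.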



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22424) Interactions in RSGroup test classes will cause TestRSGroupsAdmin2.testMoveServersAndTables and TestRSGroupsBalance.testGroupBalance to be flaky

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841964#comment-16841964
 ] 

Hudson commented on HBASE-22424:


Results for branch master
[build #1011 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1011/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Interactions in RSGroup test classes will cause 
> TestRSGroupsAdmin2.testMoveServersAndTables and 
> TestRSGroupsBalance.testGroupBalance to be flaky  
> 
>
> Key: HBASE-22424
> URL: https://issues.apache.org/jira/browse/HBASE-22424
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.2.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22424.master.001.patch
>
>
> When running all the UTs in the rsgroup test folder together, 
> TestRSGroupsAdmin2.testMoveServersAndTables and 
> TestRSGroupsBalance.testGroupBalance become flaky.
> This is because TestRSGroupsAdmin1, TestRSGroupsAdmin2 and TestRSGroupsBalance 
> all extend TestRSGroupsBase, which has a static variable INIT controlling 
> the initialization of the 'master' group and the number of region servers in the 'default' rsgroup. 
> The output errors of TestRSGroupsBalance.testGroupBalance are shown in 
> HBASE-22420, and TestRSGroupsAdmin2.testMoveServersAndTables encounters an 
> NPE in 
> ```rsGroupAdmin.getRSGroupInfo("master").containsServer(server.getAddress())```
>  because the `master` group has not been added.
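
A hedged sketch of the shared-base-class pattern described above (hypothetical class and method names, assuming JUnit 4; not the actual HBase test code). A static flag means the bootstrap runs at most once per JVM, so whichever test class runs first decides what state the later ones see:

{code:java}
import org.junit.BeforeClass;

// Sketch only: subclasses share this base; the static flag makes the group
// bootstrap run once per JVM, so test classes executed later depend on state
// created (or skipped) by whichever class happened to run first.
public abstract class SharedRSGroupTestBase {
  private static boolean initialized = false;

  @BeforeClass
  public static synchronized void setUpSharedGroups() throws Exception {
    if (initialized) {
      return; // a previously run test class already did (or skipped) the setup
    }
    createMasterGroupAndMoveServers(); // hypothetical bootstrap helper
    initialized = true;
  }

  private static void createMasterGroupAndMoveServers() {
    // placeholder: create the 'master' rsgroup and move servers into it
  }
}
{code}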



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22429) hbase-vote download step requires URL to end with '/'

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841962#comment-16841962
 ] 

Hudson commented on HBASE-22429:


Results for branch master
[build #1011 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1011/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> hbase-vote download step requires URL to end with '/' 
> --
>
> Key: HBASE-22429
> URL: https://issues.apache.org/jira/browse/HBASE-22429
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Tak Lon (Stephen) Wu
>Priority: Trivial
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0, 1.3.5
>
>
> The hbase-vote script's download step requires the sourcedir URL be 
> terminated with a path separator or else the retrieval will escape the 
> candidate's directory and mirror way too much.
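
A hedged sketch of the guard in question, written in Java purely for illustration (the real hbase-vote is a shell script); the URL shown is a made-up placeholder:

{code:java}
public class SourceDirUrl {
  /** Ensure the source directory URL ends with '/' before handing it to a recursive download. */
  static String withTrailingSlash(String sourceDirUrl) {
    return sourceDirUrl.endsWith("/") ? sourceDirUrl : sourceDirUrl + "/";
  }

  public static void main(String[] args) {
    // Hypothetical release-candidate URL, for illustration only.
    System.out.println(withTrailingSlash("https://dist.apache.org/repos/dist/dev/hbase/x.y.zRC0"));
  }
}
{code}

Appending the separator up front keeps a recursive fetch confined to the candidate directory instead of mirroring its parent.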



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22430) hbase-vote should tee build and test output to console

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841963#comment-16841963
 ] 

Hudson commented on HBASE-22430:


Results for branch master
[build #1011 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1011/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> hbase-vote should tee build and test output to console
> --
>
> Key: HBASE-22430
> URL: https://issues.apache.org/jira/browse/HBASE-22430
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0, 1.3.5
>
> Attachments: HBASE-22430.patch, HBASE-22430.patch
>
>
> The hbase-vote script should tee the build and test output to console in 
> addition to the output file so the user does not become suspicious about 
> progress. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20970) Update hadoop check versions for hadoop3 in hbase-personality

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841966#comment-16841966
 ] 

Hudson commented on HBASE-20970:


Results for branch master
[build #1011 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1011/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Update hadoop check versions for hadoop3 in hbase-personality
> -
>
> Key: HBASE-20970
> URL: https://issues.apache.org/jira/browse/HBASE-20970
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-20970.branch-2.0.001.patch, 
> HBASE-20970.master.001.patch, HBASE-20970.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22425) Balance shell command broken in HBase-3.0.0

2019-05-17 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841965#comment-16841965
 ] 

Hudson commented on HBASE-22425:


Results for branch master
[build #1011 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1011/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//console].


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1011//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Balance shell command broken in HBase-3.0.0
> ---
>
> Key: HBASE-22425
> URL: https://issues.apache.org/jira/browse/HBASE-22425
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-22425.v1.patch, HBASE-22425.v2.patch, 
> HBASE-22425.v3.patch
>
>
> Please see: 
> https://issues.apache.org/jira/browse/HBASE-22387?focusedCommentId=16837386=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16837386
> {code}
> hbase(main):001:0> balancer
> (eval):3: warning: instance variable @shell not initialized
> Exception `NoMethodError' at (eval):2 - undefined method `command' for 
> nil:NilClass
> ERROR: undefined method `command' for nil:NilClass
> Backtrace: (eval):2:in `balancer'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/hbase/admin.rb:192:in
>  `balancer'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/shell/commands/balancer.rb:47:in
>  `command'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/shell/commands.rb:49:in
>  `block in command_safe'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/shell/commands.rb:122:in
>  `translate_hbase_exceptions'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/shell/commands.rb:49:in
>  `command_safe'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/shell.rb:148:in
>  `internal_command'
>
> /home/huzheng/.minos/packages/hbase/7b5bccdecd0600d98ad9a28d826e289fb6d58f46-20190510-165336/hbase-3.0.0-SNAPSHOT/lib/ruby/shell.rb:140:in
>  `command'
>(eval):2:in `balancer'
>(hbase):1:in `'
>org/jruby/RubyKernel.java:994:in `eval'
>
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/workspace.rb:87:in 
> `evaluate'
>
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:380:in 
> `evaluate'
>uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:489:in 
> `block in eval_input'
>uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:623:in 
> `signal_status'
>uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:486:in 
> `block in eval_input'
>
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:246:in 
> `block in each_top_level_statement'
>org/jruby/RubyKernel.java:1292:in `loop'
>
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:232:in 
> `block in each_top_level_statement'
>org/jruby/RubyKernel.java:1114:in `catch'
>
> uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/ruby-lex.rb:231:in 
> `each_top_level_statement'
>uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:485:in 
> `eval_input'
>
> 
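# Editorial sketch, hypothetical and heavily simplified (the class and method
# below are illustrative only, not the actual HBase shell source): the two
# messages in the trace above are the classic symptom of an eval-generated
# wrapper method delegating to an instance variable that was never assigned
# on its receiver, so the receiver of #command is nil.
class FakeAdmin
  def balancer
    # @shell was never set on this object, so it evaluates to nil and the
    # call raises NoMethodError, matching the trace above.
    @shell.command(:balancer)
  end
end

begin
  FakeAdmin.new.balancer
rescue NoMethodError => e
  puts e.message  # e.g. undefined method `command' for nil:NilClass
end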

[jira] [Commented] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16841953#comment-16841953
 ] 

HBase QA commented on HBASE-22413:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HBASE-Build/341/console in case of 
problems.


> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21991) Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements

2019-05-17 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-21991.

  Resolution: Fixed
Release Note: Moving IA.Public class LossyCounting to IA.Private.

> Fix MetaMetrics issues - [Race condition, Faulty remove logic], few 
> improvements
> 
>
> Key: HBASE-21991
> URL: https://issues.apache.org/jira/browse/HBASE-21991
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, metrics
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.3.0
>
> Attachments: hbase-21991.addendum.patch, 
> hbase-21991.branch-1.001.patch, hbase-21991.branch-1.002.patch, 
> hbase-21991.master.001.patch, hbase-21991.master.002.patch, 
> hbase-21991.master.003.patch, hbase-21991.master.004.patch, 
> hbase-21991.master.005.patch, hbase-21991.master.006.patch
>
>
> Here is a list of the issues related to the MetaMetrics implementation:
> +*Bugs*+:
>  # [_Lossy counting for top-k_] *Faulty remove logic of non-eligible meters*: 
> Under certain conditions, we might end up storing/exposing all the meters 
> rather than top-k-ish
>  # MetaMetrics can throw an NPE because of a *Race Condition*, which results 
> in the RS aborting.
> +*Improvements*+:
>  # With a high number of regions in the cluster, exposing metrics for each 
> region blows up the JMX output from ~140 KB to 100+ MB depending on the 
> number of regions. It's better to use *lossy counting to maintain top-k for 
> region metrics* as well.
>  # As the lossy meters do not represent actual counts, I think it'll be 
> better to *rename the meters to include "lossy" in the name*. It would be 
> more informative while monitoring the metrics, and there would be less 
> confusion between actual counts and lossy counts.
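
(Editorial aside: below is a minimal, self-contained sketch of the generic 
lossy-counting idea referenced in the items above, to make the "remove 
non-eligible meters" sweep concrete. It is an illustration only -- the names 
are hypothetical and this is not the HBase LossyCounting class, which is a 
Java utility -- and, as the comments note, a plain Hash is not thread-safe, 
so concurrent use from multiple handler threads would need synchronization.)

{code}
# Generic lossy counting sketch (hypothetical names, illustration only):
# keep approximate counts for keys seen in a stream and, at every bucket
# boundary, prune keys whose count is too small to still be "frequent".
class LossyCounter
  attr_reader :counts

  def initialize(error_rate)
    @bucket_size = (1.0 / error_rate).ceil  # e.g. 0.02 => buckets of 50
    @counts = Hash.new(0)                   # NOTE: not thread-safe
    @total_seen = 0
    @current_bucket = 1
  end

  def add(key)
    @counts[key] += 1
    @total_seen += 1
    return unless (@total_seen % @bucket_size).zero?
    # The sweep: remove entries that can no longer reach the frequency
    # threshold, so the map stays close to a top-k set instead of growing
    # with every key ever seen.
    @counts.delete_if { |_key, count| count <= @current_bucket }
    @current_bucket += 1
  end
end

counter = LossyCounter.new(0.02)
10_000.times { counter.add(rand < 0.9 ? 'hot-key' : "cold-#{rand(1_000)}") }
puts counter.counts.keys.inspect  # mostly just 'hot-key' survives the sweeps
{code}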



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22413) Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove the 2.7.x hadoop checks' to branch-1

2019-05-17 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22413:
--
Attachment: HBASE-22413-branch-1-v3.patch

> Backport 'HBASE-22399 Change default hadoop-two.version to 2.8.x and remove 
> the 2.7.x hadoop checks' to branch-1
> 
>
> Key: HBASE-22413
> URL: https://issues.apache.org/jira/browse/HBASE-22413
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22413-branch-1-v1.patch, 
> HBASE-22413-branch-1-v1.patch, HBASE-22413-branch-1-v2.patch, 
> HBASE-22413-branch-1-v3.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch, HBASE-22413-branch-1.patch, 
> HBASE-22413-branch-1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

