[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-20 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23177:
--
Attachment: HBASE-23177.branch-1.001.patch

> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch, HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out. 
> The Master side reports a failed open of the Region; the RegionServer side 
> logs an FNFE for a seemingly random file. Better to also dump out the 
> Reference data; it helps in figuring out what has gone wrong. Otherwise it is 
> hard to tie the FNFE to its root cause.
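A minimal sketch of the idea (hypothetical names, not the committed patch): wrap the FNFE so the RegionServer log makes plain that a Reference was being opened, instead of surfacing an FNFE for a seemingly unrelated file.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class ReferenceOpenSketch {
    // Hypothetical stand-in for HBase's Reference metadata (parent store file).
    static class Reference {
        final String referencedFile;
        Reference(String referencedFile) { this.referencedFile = referencedFile; }
        @Override
        public String toString() { return "Reference{file=" + referencedFile + "}"; }
    }

    // Wrap the FNFE so the log names the Reference whose underlying file is missing.
    static IOException annotate(FileNotFoundException fnfe, Reference ref) {
        return new IOException("Failed to open underlying store file for " + ref, fnfe);
    }

    public static void main(String[] args) {
        Reference ref = new Reference("/hbase/data/default/t1/parent/cf/abc123");
        IOException wrapped = annotate(new FileNotFoundException("abc123"), ref);
        System.out.println(wrapped.getMessage());
    }
}
```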



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-20 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955766#comment-16955766
 ] 

Michael Stack commented on HBASE-23177:
---

Retry



[jira] [Commented] (HBASE-23172) HBase Canary region success count metrics reflect column family successes, not region successes

2019-10-20 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955765#comment-16955765
 ] 

Michael Stack commented on HBASE-23172:
---

Retry

> HBase Canary region success count metrics reflect column family successes, 
> not region successes
> ---
>
> Key: HBASE-23172
> URL: https://issues.apache.org/jira/browse/HBASE-23172
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
> Attachments: HBASE-23172.branch-1.000.patch, 
> HBASE-23172.branch-2.000.patch, HBASE-23172.master.000.patch, 
> HBASE-23172.master.000.patch, HBASE-23172.master.000.patch
>
>
> HBase Canary reads once per column family per region. The current "region 
> success count" should actually be "column family success count," which means 
> we need another metric that actually reflects region success count. 
> Additionally, the region read and write latencies only store the latencies of 
> the last column family of the region read. Instead of a map of regions to a 
> single latency value and success value, we should map each region to a list 
> of such values.
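The proposed bookkeeping can be sketched as follows (illustrative names, not the Canary's actual classes): map each region to a list of per-column-family results, count a region as successful only when every column-family read succeeded, and keep the column-family count as a separate metric.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CanaryMetricsSketch {
    // One latency/success entry per column-family read within a region.
    static class ReadResult {
        final long latencyMs;
        final boolean success;
        ReadResult(long latencyMs, boolean success) {
            this.latencyMs = latencyMs;
            this.success = success;
        }
    }

    // Region -> list of per-CF results, instead of a single overwritten value.
    private final Map<String, List<ReadResult>> perRegion = new HashMap<>();

    void record(String region, long latencyMs, boolean success) {
        perRegion.computeIfAbsent(region, r -> new ArrayList<>())
                 .add(new ReadResult(latencyMs, success));
    }

    // A region read succeeds only if every column-family read succeeded.
    long regionSuccessCount() {
        return perRegion.values().stream()
            .filter(results -> results.stream().allMatch(r -> r.success))
            .count();
    }

    long columnFamilySuccessCount() {
        return perRegion.values().stream()
            .flatMap(List::stream)
            .filter(r -> r.success)
            .count();
    }

    public static void main(String[] args) {
        CanaryMetricsSketch m = new CanaryMetricsSketch();
        m.record("region-a", 5, true);
        m.record("region-a", 7, false); // second CF of region-a fails
        m.record("region-b", 3, true);
        System.out.println(m.columnFamilySuccessCount() + " CF successes, "
            + m.regionSuccessCount() + " region success"); // 2 CF successes, 1 region success
    }
}
```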





[jira] [Updated] (HBASE-23172) HBase Canary region success count metrics reflect column family successes, not region successes

2019-10-20 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23172:
--
Attachment: HBASE-23172.master.000.patch



[GitHub] [hbase] saintstack commented on a change in pull request #735: HBASE-22679 : Revamping CellUtil

2019-10-20 Thread GitBox
saintstack commented on a change in pull request #735: HBASE-22679 : Revamping 
CellUtil
URL: https://github.com/apache/hbase/pull/735#discussion_r336845824
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
 ##
 @@ -714,39 +418,13 @@ public boolean advance() {
 };
   }
 
-  /**
-   * @param left
-   * @param right
-   * @return True if the rows in left and right Cells match
-   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0.
-   * Instead use {@link #matchingRows(Cell, Cell)}
-   */
-  @Deprecated
-  public static boolean matchingRow(final Cell left, final Cell right) {
-    return matchingRows(left, right);
-  }
-
-  /**
-   * @deprecated As of release 2.0.0, this will be removed in HBase 3.0.0.
-   * Instead use {@link #matchingRows(Cell, byte[])}
-   */
-  @Deprecated
-  public static boolean matchingRow(final Cell left, final byte[] buf) {
-    return matchingRows(left, buf);
-  }
-
   public static boolean matchingRows(final Cell left, final byte[] buf) {
     if (buf == null) {
       return left.getRowLength() == 0;
     }
     return PrivateCellUtil.matchingRows(left, buf, 0, buf.length);
   }
 
-  public static boolean matchingRow(final Cell left, final byte[] buf, final int offset,
 
 Review comment:
   This is not deprecated?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #735: HBASE-22679 : Revamping CellUtil

2019-10-20 Thread GitBox
saintstack commented on a change in pull request #735: HBASE-22679 : Revamping 
CellUtil
URL: https://github.com/apache/hbase/pull/735#discussion_r336846259
 
 

 ##
 File path: hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
 ##
 
 Review comment:
   It is ok to remove this?




[jira] [Commented] (HBASE-23169) Random region server aborts while clearing Old Wals

2019-10-20 Thread Karthick (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955751#comment-16955751
 ] 

Karthick commented on HBASE-23169:
--

[~wchevreuil] We have 1.4.10 deployed on our production clusters. We checked 
for conflicts between 1.4.10 and the patch in 
[HBASE-22784|https://jira.apache.org/jira/browse/HBASE-22784] and, since there 
were none, we applied the patch. Please note that the region server aborts 
happen randomly. At the moment we have restart mechanisms, but because of this 
issue we are not able to apply the patch on all our clusters.

> Random region server aborts while clearing Old Wals
> ---
>
> Key: HBASE-23169
> URL: https://issues.apache.org/jira/browse/HBASE-23169
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication, wal
>Affects Versions: 1.4.10, 1.4.11
>Reporter: Karthick
>Assignee: Wellington Chevreuil
>Priority: Blocker
>  Labels: patch
>
> After applying the patch given in 
> [HBASE-22784|https://jira.apache.org/jira/browse/HBASE-22784], random region 
> server aborts were noticed. This happens in the ReplicationSourceShipperThread 
> while writing the replication WAL position.
> {code:java}
> 2019-10-05 08:17:28,132 FATAL 
> [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
>  regionserver.HRegionServer: ABORTING region server 
> 172.20.20.20,16020,1570193969775: Failed to write replication wal position 
> (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, 
> position=127494739)
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
> for 
> /hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327)
>   at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868)
>   at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:698)
>   at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:551)
> 2019-10-05 08:17:28,133 FATAL 
> [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
>  regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
> {code}





[jira] [Updated] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes that NOT implements CanUnbuffer but its parents class implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HBASE-23195:
-
Summary: FSDataInputStreamWrapper unbuffer can NOT invoke the classes that 
NOT implements CanUnbuffer but its parents class implements CanUnbuffer   (was: 
FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement 
CanUnbuffer but parents implements CanUnbuffer )

> FSDataInputStreamWrapper unbuffer can NOT invoke the classes that NOT 
> implements CanUnbuffer but its parents class implements CanUnbuffer 
> --
>
> Key: HBASE-23195
> URL: https://issues.apache.org/jira/browse/HBASE-23195
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>
> FSDataInputStreamWrapper unbuffer can NOT invoke unbuffer() on classes that 
> do NOT themselves implement CanUnbuffer but whose parent class does.
> For example: there is an interface I1, a class PC1 that implements I1, and a 
> class C1 that extends PC1.
> If we want to invoke C1's unbuffer() method, the FSDataInputStreamWrapper 
> unbuffer can NOT do that.
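The behavior can be reproduced with plain reflection (illustrative names; `CanUnbuffer` here stands in for `org.apache.hadoop.fs.CanUnbuffer`): `Class.getInterfaces()` returns only the directly declared interfaces, so a check based on it misses C1, while `instanceof` walks the whole class hierarchy.

```java
public class UnbufferCheckSketch {
    interface CanUnbuffer { }           // stand-in for org.apache.hadoop.fs.CanUnbuffer
    static class PC1 implements CanUnbuffer { }
    static class C1 extends PC1 { }     // does NOT re-declare CanUnbuffer

    // A check based on the class's directly declared interfaces misses C1,
    // because getInterfaces() does not include interfaces inherited via PC1.
    static boolean declaresDirectly(Class<?> clazz) {
        for (Class<?> i : clazz.getInterfaces()) {
            if (i == CanUnbuffer.class) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Object stream = new C1();
        System.out.println(declaresDirectly(stream.getClass())); // false: direct interfaces only
        System.out.println(stream instanceof CanUnbuffer);       // true: walks the hierarchy
    }
}
```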





[jira] [Updated] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes that NOT implements CanUnbuffer but its parents class implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yi Ming updated HBASE-23195:
-
Description: 
FSDataInputStreamWrapper unbuffer can NOT invoke unbuffer() on classes that do 
NOT themselves implement CanUnbuffer but whose parent class does.

For example: there is an interface I1, a class PC1 that implements I1, and a 
class C1 that extends PC1.

If we want to invoke C1's unbuffer() method, the FSDataInputStreamWrapper 
unbuffer can NOT do that.

 

 

 

  was:
FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement 
CanUnbuffer but parents implements CanUnbuffer

For example:

There are 1 interface I1 and one class implements I1 named PC1 and the class C1 
extends from PC1

If we want to invoke the C1 unbuffer() method the FSDataInputStreamWrapper 
unbuffer  can NOT do that. 

 

 

 




[jira] [Comment Edited] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955736#comment-16955736
 ] 

Zhao Yi Ming edited comment on HBASE-23195 at 10/21/19 4:01 AM:


[~weichiu] It is related to the HDFS issue 
https://issues.apache.org/jira/browse/HDFS-14308

But this one should be an HBase issue; we can NOT ask C1 to implement I1 again, 
because PC1 already implements it.

[~zhangduo] Sure, once I commit a PR I will give a link here. Thanks for the reminder!


was (Author: zhaoyim):
[~weichiu] It is related with HDFS issue 
https://issues.apache.org/jira/browse/HDFS-14308

But this one should be Hbase issue, we can NOT ask C1 implements the I1, 
because PC1 already implements the I1.

 

[~zhangduo] Sure, one commit a PR will give a link here. Thanks for reminder!

> FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement 
> CanUnbuffer but parents implements CanUnbuffer 
> --
>
> Key: HBASE-23195
> URL: https://issues.apache.org/jira/browse/HBASE-23195
> Project: HBase
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.0.2
>Reporter: Zhao Yi Ming
>Assignee: Zhao Yi Ming
>Priority: Critical
>
> FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement 
> CanUnbuffer but parents implements CanUnbuffer
> For example:
> There are 1 interface I1 and one class implements I1 named PC1 and the class 
> C1 extends from PC1
> If we want to invoke the C1 unbuffer() method the FSDataInputStreamWrapper 
> unbuffer  can NOT do that. 
>  
>  
>  





[jira] [Commented] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955737#comment-16955737
 ] 

Zhao Yi Ming commented on HBASE-23195:
--

[~weichiu] I updated the HDFS issue 
https://issues.apache.org/jira/browse/HDFS-14308 after I finished the test; I 
will submit the HDFS fix in HDFS-14308.



[jira] [Commented] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955736#comment-16955736
 ] 

Zhao Yi Ming commented on HBASE-23195:
--

[~weichiu] It is related to the HDFS issue 
https://issues.apache.org/jira/browse/HDFS-14308

But this one should be an HBase issue; we can NOT ask C1 to implement I1, 
because PC1 already implements it.

[~zhangduo] Sure, once I commit a PR I will give a link here. Thanks for the reminder!



[jira] [Created] (HBASE-23196) The IndexChunkPool’s percentage is hard-coded to 0.1

2019-10-20 Thread chenxu (Jira)
chenxu created HBASE-23196:
--

 Summary: The IndexChunkPool’s percentage is hard-coded to 0.1
 Key: HBASE-23196
 URL: https://issues.apache.org/jira/browse/HBASE-23196
 Project: HBase
  Issue Type: Bug
Reporter: chenxu


Code in ChunkCreator#initialize
{code:java}
public static ChunkCreator initialize(...) {
  if (instance != null) {
    return instance;
  }
  instance = new ChunkCreator(chunkSize, offheap, globalMemStoreSize, poolSizePercentage,
      initialCountPercentage, heapMemoryManager,
      MemStoreLABImpl.INDEX_CHUNK_PERCENTAGE_DEFAULT);
  return instance;
}
{code}
When MSLAB is enabled, the IndexChunkPool’s percentage is hard-coded to 
INDEX_CHUNK_PERCENTAGE_DEFAULT. When we use IndexType#ARRAY_MAP rather than 
IndexType#CHUNK_MAP, we should be able to set the IndexChunkPool’s size to 0, 
or there will be a waste of memory space.
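A sketch of the suggested direction (the config key below is hypothetical; the real HBase key and plumbing would differ): read the index-chunk percentage from configuration instead of hard-coding the default, so ARRAY_MAP users can set it to 0 and reclaim that memory.

```java
import java.util.HashMap;
import java.util.Map;

public class ChunkPoolSketch {
    // Hypothetical config key for illustration only.
    static final String INDEX_CHUNK_PCT_KEY = "hbase.regionserver.index.chunk.percentage";
    static final float INDEX_CHUNK_PCT_DEFAULT = 0.1f;

    // Resolve the percentage from configuration, falling back to the default,
    // rather than always passing INDEX_CHUNK_PERCENTAGE_DEFAULT to initialize().
    static float indexChunkPercentage(Map<String, String> conf) {
        String v = conf.get(INDEX_CHUNK_PCT_KEY);
        return v == null ? INDEX_CHUNK_PCT_DEFAULT : Float.parseFloat(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(indexChunkPercentage(conf));  // default: 0.1
        conf.put(INDEX_CHUNK_PCT_KEY, "0");              // ARRAY_MAP users opt out
        System.out.println(indexChunkPercentage(conf));  // 0.0
    }
}
```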





[GitHub] [hbase] Apache9 commented on issue #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
Apache9 commented on issue #732: HBASE-23187 Update parent region state to 
SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#issuecomment-544337346
 
 
   Or we could check for the OFFLINE state, but at least for split, we should 
check the field in RegionInfo.




[GitHub] [hbase] Apache9 commented on issue #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
Apache9 commented on issue #732: HBASE-23187 Update parent region state to 
SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#issuecomment-544337279
 
 
   The problem here is that we need to check the split and offline field in 
RegionInfo, not the state...




[jira] [Assigned] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-23195:
-

Assignee: Zhao Yi Ming



[jira] [Commented] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955727#comment-16955727
 ] 

Duo Zhang commented on HBASE-23195:
---

Just create a PR on GitHub; make sure that you have "HBASE-23195" as part 
of the PR title so the system will automatically link the PR here.



[jira] [Commented] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955726#comment-16955726
 ] 

Wei-Chiu Chuang commented on HBASE-23195:
-

I wonder if this is a HDFS issue instead ...



[jira] [Commented] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955724#comment-16955724
 ] 

Zhao Yi Ming commented on HBASE-23195:
--

Can anyone help grant me HBase contributor permissions and assign this issue 
to me? I already have a fix for it. Thanks!



[jira] [Created] (HBASE-23195) FSDataInputStreamWrapper unbuffer can NOT invoke the classes NOT implement CanUnbuffer but parents implements CanUnbuffer

2019-10-20 Thread Zhao Yi Ming (Jira)
Zhao Yi Ming created HBASE-23195:


 Summary: FSDataInputStreamWrapper unbuffer can NOT invoke the 
classes NOT implement CanUnbuffer but parents implements CanUnbuffer 
 Key: HBASE-23195
 URL: https://issues.apache.org/jira/browse/HBASE-23195
 Project: HBase
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.2
Reporter: Zhao Yi Ming


FSDataInputStreamWrapper unbuffer can NOT invoke unbuffer() on classes that do 
NOT themselves implement CanUnbuffer but whose parent class does.

For example: there is an interface I1, a class PC1 that implements I1, and a 
class C1 that extends PC1.

If we want to invoke C1's unbuffer() method, the FSDataInputStreamWrapper 
unbuffer can NOT do that.

 

 

 





[GitHub] [hbase] binlijin commented on issue #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
binlijin commented on issue #732: HBASE-23187 Update parent region state to 
SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#issuecomment-544333908
 
 
  * Utility. Whether to include region in list of regions. Default is to
  * weed out split and offline regions.
  * @return True if we should include the node (do not include
  * if split or offline unless offline is set to true).
   
   As the comment says, when offline==true the region will be included even if 
it is split or offline. Do you know when this is needed?  
   @Apache9 




[jira] [Resolved] (HBASE-23042) Parameters are incorrect in procedures jsp

2019-10-20 Thread Yi Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei resolved HBASE-23042.

Fix Version/s: 2.2.3
   2.3.0
   3.0.0
   Resolution: Fixed

> Parameters are incorrect in procedures jsp
> --
>
> Key: HBASE-23042
> URL: https://issues.apache.org/jira/browse/HBASE-23042
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: 1.png
>
>
> In the procedures jsp, the parameters for table name and region start/end 
> keys are wrong; please see the attached picture. This is because all bytes 
> params are encoded in base64, which is confusing.
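A readable rendering could be produced loosely in the spirit of HBase's `Bytes.toStringBinary` (this helper is illustrative only, not the committed fix): decode the base64 param and print printable bytes as characters, escaping the rest.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ProcedureParamSketch {
    // Render a base64-encoded byte[] parameter as readable text, escaping any
    // non-printable bytes as \xNN instead of showing raw base64 to the user.
    static String readable(String base64) {
        byte[] raw = Base64.getDecoder().decode(base64);
        StringBuilder sb = new StringBuilder();
        for (byte b : raw) {
            if (b >= 32 && b < 127) {
                sb.append((char) b);
            } else {
                sb.append(String.format("\\x%02X", b));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String encoded = Base64.getEncoder()
            .encodeToString("myTable".getBytes(StandardCharsets.UTF_8));
        System.out.println(readable(encoded)); // myTable
    }
}
```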





[jira] [Commented] (HBASE-23042) Parameters are incorrect in procedures jsp

2019-10-20 Thread Yi Mei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955699#comment-16955699
 ] 

Yi Mei commented on HBASE-23042:


Pushed to master, branch-2, branch-2.2. Thanks to [~zghao] for reviewing.



[GitHub] [hbase] Apache9 commented on issue #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
Apache9 commented on issue #732: HBASE-23187 Update parent region state to 
SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#issuecomment-544326077
 
 
   Anyway, I think we could persist the SPLIT state, but the problem is that 
when closing the parent region we will set the state to CLOSED. So to fix this 
issue, I think we could add a check here
   
   
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStates.java#L349
   
   which returns false if RegionInfo.isSplit is true.
   
   And we can file another issue to just remove the usage of SPLIT, SPLITTING and 
SPLITTING_NEW, as the state of SplitTableProcedure is enough?
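The guard proposed above can be sketched as follows. RegionInfo here is a minimal stand-in for the real hbase-client class, and the method name is hypothetical; the sketch only shows the shape of the check, not the actual RegionStates code:

```java
public class SplitCheckSketch {
    // Minimal stand-in for the relevant bit of RegionInfo (the real class
    // lives in hbase-client); only the split flag matters here.
    static final class RegionInfo {
        private final boolean split;
        RegionInfo(boolean split) { this.split = split; }
        boolean isSplit() { return split; }
    }

    // Sketch of the proposed RegionStates guard: the parent of a completed
    // split carries the split flag and must not be picked up for reopening.
    static boolean shouldBeAssigned(RegionInfo regionInfo) {
        return !regionInfo.isSplit();
    }

    public static void main(String[] args) {
        System.out.println(shouldBeAssigned(new RegionInfo(true)));  // false
        System.out.println(shouldBeAssigned(new RegionInfo(false))); // true
    }
}
```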


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on issue #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
Apache9 commented on issue #732: HBASE-23187 Update parent region state to 
SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#issuecomment-544324200
 
 
   I'm also not very familiar with the details of SPLIT/MERGE processing. Let me 
take a look at it; we need to find the correct way to fix this.




[GitHub] [hbase] mymeiyi merged pull request #728: HBASE-23042 Parameters are incorrect in procedures jsp

2019-10-20 Thread GitBox
mymeiyi merged pull request #728: HBASE-23042 Parameters are incorrect in 
procedures jsp
URL: https://github.com/apache/hbase/pull/728
 
 
   




[jira] [Commented] (HBASE-23157) WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used

2019-10-20 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955691#comment-16955691
 ] 

Lijin Bin commented on HBASE-23157:
---

Yes, we must make sure that all the WAL entries have been flushed out before 
calling startCacheFlush.
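The required ordering can be sketched with stubbed WAL calls; the class and method bodies below are illustrative stand-ins, not the real AbstractFSWAL code:

```java
import java.util.ArrayList;
import java.util.List;

public class FlushOrderSketch {
    // Records the order of the stubbed WAL calls so the required ordering is visible.
    static final List<String> calls = new ArrayList<>();

    // Stand-ins for the real WAL methods discussed in this thread.
    static void sync() { calls.add("sync"); }
    static void startCacheFlush(String encodedRegionName) { calls.add("startCacheFlush"); }

    // The ordering the comment argues for: with ASYNC_WAL, sync the WAL first so
    // every buffered entry is accounted, and only then let startCacheFlush drop
    // the region's lowest-unflushed sequence ids.
    static List<String> prepareFlush(String encodedRegionName) {
        sync();
        startCacheFlush(encodedRegionName);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(prepareFlush("region")); // [sync, startCacheFlush]
    }
}
```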

> WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used
> 
>
> Key: HBASE-23157
> URL: https://issues.apache.org/jira/browse/HBASE-23157
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 2.2.1
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Attachments: HBASE-23157-master-v1.patch
>
>
> Durability.ASYNC_WAL does not wait for the WAL sync and commits the MVCC 
> ahead of it. So when the region starts a flush it may get a large flushedSeqId, 
> and later the WAL may process a buffered entry and put a small 
> unflushedSequenceId for this region again.





[GitHub] [hbase] binlijin commented on a change in pull request #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
binlijin commented on a change in pull request #732: HBASE-23187 Update parent 
region state to SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#discussion_r336816984
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
 ##
 @@ -1580,6 +1580,7 @@ public static void splitRegion(Connection connection, 
RegionInfo parent, long pa
   Put putParent = 
makePutFromRegionInfo(RegionInfoBuilder.newBuilder(parent)
 .setOffline(true)
 .setSplit(true).build(), time);
+  addRegionStateToPut(putParent, State.SPLIT);
 
 Review comment:
   The SPLIT_PARENT state is added back when enabling the table; the code is in: 
   
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/EnableTableProcedure.java#L114
   
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStates.java#L329
   
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStates.java#L349
   And I am not familiar with all this logic.




[jira] [Commented] (HBASE-23157) WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used

2019-10-20 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955688#comment-16955688
 ] 

Duo Zhang commented on HBASE-23157:
---

OK, so the problem is that in startCacheFlush we will remove entries from 
lowestUnflushedSequenceIds, so we must make sure that all the WAL entries have 
been flushed out before calling startCacheFlush?

Maybe we could do something when writing out the close marker...

> WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used
> 
>
> Key: HBASE-23157
> URL: https://issues.apache.org/jira/browse/HBASE-23157
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 2.2.1
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Attachments: HBASE-23157-master-v1.patch
>
>
> Durability.ASYNC_WAL does not wait for the WAL sync and commits the MVCC 
> ahead of it. So when the region starts a flush it may get a large flushedSeqId, 
> and later the WAL may process a buffered entry and put a small 
> unflushedSequenceId for this region again.





[jira] [Comment Edited] (HBASE-23157) WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used

2019-10-20 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955685#comment-16955685
 ] 

Lijin Bin edited comment on HBASE-23157 at 10/21/19 1:51 AM:
-

[~zhangduo] 
wal.sync is called after wal.startCacheFlush(encodedRegionName, 
flushedFamilyNamesToSeq); we need to call it before wal.startCacheFlush.
Also, this can be reproduced in a test; see the one I uploaded in the patch.


was (Author: aoxiang):
[~zhangduo] 
wal.sync is called after wal.startCacheFlush(encodedRegionName, 
flushedFamilyNamesToSeq); we need to call it before wal.startCacheFlush.


> WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used
> 
>
> Key: HBASE-23157
> URL: https://issues.apache.org/jira/browse/HBASE-23157
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 2.2.1
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Attachments: HBASE-23157-master-v1.patch
>
>
> Durability.ASYNC_WAL does not wait for the WAL sync and commits the MVCC 
> ahead of it. So when the region starts a flush it may get a large flushedSeqId, 
> and later the WAL may process a buffered entry and put a small 
> unflushedSequenceId for this region again.





[jira] [Commented] (HBASE-23157) WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used

2019-10-20 Thread Lijin Bin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955685#comment-16955685
 ] 

Lijin Bin commented on HBASE-23157:
---

[~zhangduo] 
wal.sync is called after wal.startCacheFlush(encodedRegionName, 
flushedFamilyNamesToSeq); we need to call it before wal.startCacheFlush.


> WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used
> 
>
> Key: HBASE-23157
> URL: https://issues.apache.org/jira/browse/HBASE-23157
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 2.2.1
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Attachments: HBASE-23157-master-v1.patch
>
>
> Durability.ASYNC_WAL does not wait for the WAL sync and commits the MVCC 
> ahead of it. So when the region starts a flush it may get a large flushedSeqId, 
> and later the WAL may process a buffered entry and put a small 
> unflushedSequenceId for this region again.





[GitHub] [hbase] Apache-HBase commented on issue #737: HBASE-23194 : Remove unused methods from TokenUtil

2019-10-20 Thread GitBox
Apache-HBase commented on issue #737: HBASE-23194 : Remove unused methods from 
TokenUtil
URL: https://github.com/apache/hbase/pull/737#issuecomment-544289661
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 13s |  master passed  |
   | :green_heart: |  compile  |   0m 55s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 30s |  master passed  |
   | :green_heart: |  shadedjars  |   5m  0s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 36s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 30s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 21s |  the patch passed  |
   | :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 27s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 58s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m  2s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 151m 38s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 28s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 214m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-737/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/737 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux e7e72920a091 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-737/out/precommit/personality/provided.sh
 |
   | git revision | master / 4d414020bb |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-737/1/testReport/
 |
   | Max. process+thread count | 4427 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-737/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"

2019-10-20 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955545#comment-16955545
 ] 

Michael Stack commented on HBASE-23181:
---

Ugly hack -- not for commit, just parking here -- that forces a PURGE of the 
region reference from SequenceIdAccounting when we trip over the rare condition 
where we are asked to flush a region that has been closed. It helps, but too much 
damage has been done by the time we get to this stage. Studying it, I see the 
region being closed but the reference in SequenceIdAccounting remains in place. 
Trying to figure out how that can happen. Detail to follow.

> Blocked WAL archive: "LogRoller: Failed to schedule flush of 
> 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
> --
>
> Key: HBASE-23181
> URL: https://issues.apache.org/jira/browse/HBASE-23181
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Michael Stack
>Priority: Major
>
> On a heavily loaded cluster, WAL count keeps rising and we can get into a 
> state where we are not rolling the logs off fast enough. In particular, there 
> is this interesting state at the extreme where we pick a region to flush 
> because 'Too many WALs' but the region is actually not online. As the WAL 
> count rises, we keep picking a region-to-flush that is no longer on the 
> server. This condition blocks our being able to clear WALs; eventually WALs 
> climb into the hundreds and the RS goes zombie with a full Call queue that 
> starts throwing CallQueueTooLargeExceptions (bad if this server is the one 
> carrying hbase:meta): i.e. clients fail to access the RegionServer.
> One symptom is a fast spike in WAL count for the RS. A restart of the RS will 
> break the bind.
> Here is how it looks in the log:
> {code}
> # Here is region closing
> 2019-10-16 23:10:55,897 INFO 
> org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 
> 8ee433ad59526778c53cc85ed3762d0b
> 
> # Then soon after ...
> 2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
> Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is 
> not online on us
> 2019-10-16 23:11:45,006 INFO 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
> count=45, max=32; forcing flush of 1 regions(s): 
> 8ee433ad59526778c53cc85ed3762d0b
> ...
> # Later...
> 2019-10-16 23:20:25,427 INFO 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
> count=542, max=32; forcing flush of 1 regions(s): 
> 8ee433ad59526778c53cc85ed3762d0b
> 2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
> Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is 
> not online on us
> {code}
> I've seen these runaway WALs on 2.2.1. I've also seen runaway WALs regularly in 
> a 1.2.x version that had the HBASE-16721 fix in it, but can't say yet if it was 
> for the same reason as above.





[GitHub] [hbase] virajjasani commented on issue #735: HBASE-22679 : Revamping CellUtil

2019-10-20 Thread GitBox
virajjasani commented on issue #735: HBASE-22679 : Revamping CellUtil
URL: https://github.com/apache/hbase/pull/735#issuecomment-544271043
 
 
   The above unit test failures are flaky; the tests work fine locally.
   Please review @saintstack 




[jira] [Updated] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-23194:
-
Description: Cleanup TokenUtil: remove unused methods from TokenUtil. For 
util methods to obtain Authentication tokens, ClientTokenUtil should be used 
where possible (in absence of hbase-server dependency)  (was: Cleanup 
TokenUtil: remove unused methods from TokenUtil)

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[jira] [Updated] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-23194:
-
Fix Version/s: 3.0.0
   Status: Patch Available  (was: In Progress)

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
> Fix For: 3.0.0
>
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[jira] [Work started] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-20 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23194 started by Viraj Jasani.

> Remove unused methods from TokenUtil
> 
>
> Key: HBASE-23194
> URL: https://issues.apache.org/jira/browse/HBASE-23194
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>
> Cleanup TokenUtil: remove unused methods from TokenUtil. For util methods to 
> obtain Authentication tokens, ClientTokenUtil should be used where possible 
> (in absence of hbase-server dependency)





[GitHub] [hbase] virajjasani opened a new pull request #737: HBASE-23194 : Remove unused methods from TokenUtil

2019-10-20 Thread GitBox
virajjasani opened a new pull request #737: HBASE-23194 : Remove unused methods 
from TokenUtil
URL: https://github.com/apache/hbase/pull/737
 
 
   




[GitHub] [hbase] brfrn169 commented on issue #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on issue #722: HBASE-23065 [hbtop] Top-N heavy hitter user 
and client drill downs
URL: https://github.com/apache/hbase/pull/722#issuecomment-544267271
 
 
   And don't we need ClientModeTest?




[jira] [Created] (HBASE-23194) Remove unused methods from TokenUtil

2019-10-20 Thread Viraj Jasani (Jira)
Viraj Jasani created HBASE-23194:


 Summary: Remove unused methods from TokenUtil
 Key: HBASE-23194
 URL: https://issues.apache.org/jira/browse/HBASE-23194
 Project: HBase
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Cleanup TokenUtil: remove unused methods from TokenUtil





[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783647
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/ClientModeStrategy.java
 ##
 @@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.hbtop.mode;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.UserMetrics;
+import org.apache.hadoop.hbase.hbtop.Record;
+import org.apache.hadoop.hbase.hbtop.RecordFilter;
+import org.apache.hadoop.hbase.hbtop.field.Field;
+import org.apache.hadoop.hbase.hbtop.field.FieldInfo;
+import org.apache.hadoop.hbase.hbtop.field.FieldValue;
+import org.apache.hadoop.hbase.hbtop.field.FieldValueType;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation for {@link ModeStrategy} for client Mode.
+ */
+@InterfaceAudience.Private public final class ClientModeStrategy implements 
ModeStrategy {
+
+  private final List<FieldInfo> fieldInfos = Arrays
+      .asList(new FieldInfo(Field.CLIENT, 0, true), new FieldInfo(Field.USER_COUNT, 5, true),
+          new FieldInfo(Field.REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.READ_REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.WRITE_REQUEST_COUNT_PER_SECOND, 10, true));
+  private final Map<String, RequestCountPerSecond> requestCountPerSecondMap = new HashMap<>();
+
+  ClientModeStrategy() {
+  }
+
+  @Override public List<FieldInfo> getFieldInfos() {
+    return fieldInfos;
+  }
+
+  @Override public Field getDefaultSortField() {
+    return Field.REQUEST_COUNT_PER_SECOND;
+  }
+
+  @Override public List<Record> getRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> records = createRecords(clusterMetrics, pushDownFilters);
+    return aggregateRecordsAndAddDistinct(
+        ModeStrategyUtils.applyFilterAndGet(records, pushDownFilters), Field.CLIENT, Field.USER,
+        Field.USER_COUNT);
+  }
+
+  List<Record> createRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> ret = new ArrayList<>();
+    for (ServerMetrics ServerMetrics : clusterMetrics.getLiveServerMetrics().values()) {
+      long lastReportTimestamp = ServerMetrics.getLastReportTimestamp();
+      ServerMetrics.getUserMetrics().entrySet().forEach(
+        um -> um.getValue().getClientMetrics().values().forEach(clientMetrics -> ret.add(
+          createRecord(um.getValue().getNameAsString(), clientMetrics, lastReportTimestamp))));
+    }
+    return ret;
+  }
+
+  /**
+   * Aggregate the records and count the unique values for the given 
distinctField
+   *
+   * @param records   records to be processed
+   * @param groupBy   Field on which group by needs to be done
+   * @param distinctField Field whose unique values needs to be counted
+   * @param uniqueCountAssignedTo a target field to which the unique count is 
assigned to
+   * @return aggregated records
+   */
+  List<Record> aggregateRecordsAndAddDistinct(List<Record> records, Field groupBy,
+      Field distinctField, Field uniqueCountAssignedTo) {
+    List<Record> result = new ArrayList<>();
+    records.stream().collect(Collectors.groupingBy(r -> r.get(groupBy))).entrySet()
 
 Review comment:
   Small nit: We can also use values() instead of entrySet() here.
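The suggestion can be illustrated with a self-contained stream example (plain strings stand in for Record, and the method name is hypothetical): when the grouping key is never read back, values() expresses the intent more directly than entrySet():

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class GroupingDemo {
    // Groups records (plain strings here) by a key and keeps only the grouped
    // lists; a TreeMap map-factory is used so the group order is deterministic.
    static List<List<String>> groupByFirstChar(List<String> records) {
        return new ArrayList<>(records.stream()
            .collect(Collectors.groupingBy(s -> s.charAt(0), TreeMap::new, Collectors.toList()))
            .values()); // values() is enough: the grouping key is discarded anyway
    }

    public static void main(String[] args) {
        System.out.println(groupByFirstChar(Arrays.asList("ant", "apple", "bee")));
    }
}
```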




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783942
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/screen/top/TopScreenPresenter.java
 ##
 @@ -327,4 +327,5 @@ public ScreenView goToFilterDisplayMode(Screen screen, 
Terminal terminal, int ro
 return new FilterDisplayModeScreenView(screen, terminal, row, 
topScreenModel.getFilters(),
   topScreenView);
   }
+
 
 Review comment:
   Small nit: We don't need this line :)




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336784728
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/screen/top/TopScreenModel.java
 ##
 @@ -155,11 +158,13 @@ public boolean addFilter(String filterString, boolean 
ignoreCase) {
 }
 
 filters.add(filter);
+decomposePushDownFilter();
 
 Review comment:
   Maybe we don't need `decomposePushDownFilter()` here. We never need to 
update `pushDownFilters` here I think.




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783898
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/screen/top/TopScreenModel.java
 ##
 @@ -113,7 +116,7 @@ public void refreshMetricsData() {
   }
 
   private void refreshSummary(ClusterMetrics clusterMetrics) {
-String currentTime = DateFormatUtils.ISO_8601_EXTENDED_TIME_FORMAT
+String currentTime = DateFormatUtils.ISO_DATE_FORMAT
 
 Review comment:
   `ISO_DATE_FORMAT` is deprecated and will be replaced by 
`ISO_8601_EXTENDED_DATE_FORMAT`. We should use `ISO_8601_EXTENDED_TIME_FORMAT`?
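For illustration, the distinction the review draws, sketched with java.time rather than commons-lang (that ISO_8601_EXTENDED_TIME_FORMAT corresponds to the "HH:mm:ss" pattern is an assumption of this sketch, as is the method name):

```java
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;

public class TimeFormatDemo {
    // The ISO-8601 extended *time* format ("HH:mm:ss") is what a refreshing
    // summary line wants, as opposed to the *date* format ("yyyy-MM-dd").
    static String summaryTime(LocalTime t) {
        return t.format(DateTimeFormatter.ofPattern("HH:mm:ss"));
    }

    public static void main(String[] args) {
        System.out.println(summaryTime(LocalTime.of(23, 10, 55))); // 23:10:55
    }
}
```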




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783503
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/ClientModeStrategy.java
 ##
 @@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.hbtop.mode;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.UserMetrics;
+import org.apache.hadoop.hbase.hbtop.Record;
+import org.apache.hadoop.hbase.hbtop.RecordFilter;
+import org.apache.hadoop.hbase.hbtop.field.Field;
+import org.apache.hadoop.hbase.hbtop.field.FieldInfo;
+import org.apache.hadoop.hbase.hbtop.field.FieldValue;
+import org.apache.hadoop.hbase.hbtop.field.FieldValueType;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation for {@link ModeStrategy} for client Mode.
+ */
+@InterfaceAudience.Private public final class ClientModeStrategy implements 
ModeStrategy {
+
+  private final List<FieldInfo> fieldInfos = Arrays
+      .asList(new FieldInfo(Field.CLIENT, 0, true), new FieldInfo(Field.USER_COUNT, 5, true),
+          new FieldInfo(Field.REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.READ_REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.WRITE_REQUEST_COUNT_PER_SECOND, 10, true));
+  private final Map<String, RequestCountPerSecond> requestCountPerSecondMap = new HashMap<>();
+
+  ClientModeStrategy() {
+  }
+
+  @Override public List<FieldInfo> getFieldInfos() {
+    return fieldInfos;
+  }
+
+  @Override public Field getDefaultSortField() {
+    return Field.REQUEST_COUNT_PER_SECOND;
+  }
+
+  @Override public List<Record> getRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> records = createRecords(clusterMetrics, pushDownFilters);
+    return aggregateRecordsAndAddDistinct(
+        ModeStrategyUtils.applyFilterAndGet(records, pushDownFilters), Field.CLIENT, Field.USER,
+        Field.USER_COUNT);
+  }
+
+  List<Record> createRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> ret = new ArrayList<>();
+    for (ServerMetrics ServerMetrics : clusterMetrics.getLiveServerMetrics().values()) {
+      long lastReportTimestamp = ServerMetrics.getLastReportTimestamp();
+      ServerMetrics.getUserMetrics().entrySet().forEach(
+        um -> um.getValue().getClientMetrics().values().forEach(clientMetrics -> ret.add(
+          createRecord(um.getValue().getNameAsString(), clientMetrics, lastReportTimestamp))));
 
 Review comment:
   Small nit: As this doesn't use um.getKey(), we can use values() here like the following:
   ```
   ServerMetrics.getUserMetrics().values().forEach(
       um -> um.getClientMetrics().values().forEach(cm -> ret.add(
           createRecord(um.getNameAsString(), cm, lastReportTimestamp))));
   ```
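The reviewer's suggestion generalizes: when the map key is never consulted, iterating `Map.values()` reads better than `entrySet()` plus `getValue()`. A minimal self-contained sketch with plain JDK types (the names `ValuesIteration`, `describeAll`, and `metricsByUser` are illustrative, not hbtop APIs):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ValuesIteration {
  // Build one derived entry per map value; the key is never used,
  // so values() is clearer than entrySet() + getValue().
  public static List<String> describeAll(Map<String, Integer> metricsByUser) {
    List<String> ret = new ArrayList<>();
    metricsByUser.values().forEach(count -> ret.add("count=" + count));
    return ret;
  }

  public static void main(String[] args) {
    Map<String, Integer> m = new LinkedHashMap<>(); // insertion-ordered for a stable demo
    m.put("alice", 3);
    m.put("bob", 5);
    System.out.println(describeAll(m)); // prints [count=3, count=5]
  }
}
```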


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336784148
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/ClientModeStrategy.java
 ##
 @@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.hbtop.mode;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.UserMetrics;
+import org.apache.hadoop.hbase.hbtop.Record;
+import org.apache.hadoop.hbase.hbtop.RecordFilter;
+import org.apache.hadoop.hbase.hbtop.field.Field;
+import org.apache.hadoop.hbase.hbtop.field.FieldInfo;
+import org.apache.hadoop.hbase.hbtop.field.FieldValue;
+import org.apache.hadoop.hbase.hbtop.field.FieldValueType;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation for {@link ModeStrategy} for client Mode.
+ */
+@InterfaceAudience.Private public final class ClientModeStrategy implements ModeStrategy {
+
+  private final List<FieldInfo> fieldInfos = Arrays
+      .asList(new FieldInfo(Field.CLIENT, 0, true), new FieldInfo(Field.USER_COUNT, 5, true),
+          new FieldInfo(Field.REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.READ_REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.WRITE_REQUEST_COUNT_PER_SECOND, 10, true));
+  private final Map<String, RequestCountPerSecond> requestCountPerSecondMap = new HashMap<>();
+
+  ClientModeStrategy() {
+  }
+
+  @Override public List<FieldInfo> getFieldInfos() {
+    return fieldInfos;
+  }
+
+  @Override public Field getDefaultSortField() {
+    return Field.REQUEST_COUNT_PER_SECOND;
+  }
+
+  @Override public List<Record> getRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> records = createRecords(clusterMetrics, pushDownFilters);
+    return aggregateRecordsAndAddDistinct(
+        ModeStrategyUtils.applyFilterAndGet(records, pushDownFilters), Field.CLIENT, Field.USER,
+        Field.USER_COUNT);
+  }
+
+  List<Record> createRecords(ClusterMetrics clusterMetrics, List<RecordFilter> pushDownFilters) {
 
 Review comment:
   It looks like we don't use `pushDownFilters` here. We can remove this 
argument.




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783677
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/ClientModeStrategy.java
 ##
 @@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.hbtop.mode;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.UserMetrics;
+import org.apache.hadoop.hbase.hbtop.Record;
+import org.apache.hadoop.hbase.hbtop.RecordFilter;
+import org.apache.hadoop.hbase.hbtop.field.Field;
+import org.apache.hadoop.hbase.hbtop.field.FieldInfo;
+import org.apache.hadoop.hbase.hbtop.field.FieldValue;
+import org.apache.hadoop.hbase.hbtop.field.FieldValueType;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation for {@link ModeStrategy} for client Mode.
+ */
+@InterfaceAudience.Private public final class ClientModeStrategy implements ModeStrategy {
+
+  private final List<FieldInfo> fieldInfos = Arrays
+      .asList(new FieldInfo(Field.CLIENT, 0, true), new FieldInfo(Field.USER_COUNT, 5, true),
+          new FieldInfo(Field.REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.READ_REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.WRITE_REQUEST_COUNT_PER_SECOND, 10, true));
+  private final Map<String, RequestCountPerSecond> requestCountPerSecondMap = new HashMap<>();
+
+  ClientModeStrategy() {
+  }
+
+  @Override public List<FieldInfo> getFieldInfos() {
+    return fieldInfos;
+  }
+
+  @Override public Field getDefaultSortField() {
+    return Field.REQUEST_COUNT_PER_SECOND;
+  }
+
+  @Override public List<Record> getRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> records = createRecords(clusterMetrics, pushDownFilters);
+    return aggregateRecordsAndAddDistinct(
+        ModeStrategyUtils.applyFilterAndGet(records, pushDownFilters), Field.CLIENT, Field.USER,
+        Field.USER_COUNT);
+  }
+
+  List<Record> createRecords(ClusterMetrics clusterMetrics, List<RecordFilter> pushDownFilters) {
+    List<Record> ret = new ArrayList<>();
+    for (ServerMetrics ServerMetrics : clusterMetrics.getLiveServerMetrics().values()) {
+      long lastReportTimestamp = ServerMetrics.getLastReportTimestamp();
+      ServerMetrics.getUserMetrics().entrySet().forEach(
+          um -> um.getValue().getClientMetrics().values().forEach(clientMetrics -> ret.add(
+              createRecord(um.getValue().getNameAsString(), clientMetrics, lastReportTimestamp))));
+    }
+    return ret;
+  }
+
+  /**
+   * Aggregate the records and count the unique values for the given distinctField
+   *
+   * @param records               records to be processed
+   * @param groupBy               Field on which group by needs to be done
+   * @param distinctField         Field whose unique values needs to be counted
+   * @param uniqueCountAssignedTo a target field to which the unique count is assigned to
+   * @return aggregated records
+   */
+  List<Record> aggregateRecordsAndAddDistinct(List<Record> records, Field groupBy,
+      Field distinctField, Field uniqueCountAssignedTo) {
+    List<Record> result = new ArrayList<>();
+    records.stream().collect(Collectors.groupingBy(r -> r.get(groupBy))).entrySet()
+        .forEach(entry -> {
+          Set<FieldValue> distinctValues = new HashSet<>();
+          Map<Field, FieldValue> map = new HashMap();
+          for (Record record : entry.getValue()) {
+            for (Map.Entry<Field, FieldValue> field : record.entrySet()) {
+              if (distinctField.equals(field.getKey())) {
+                //We will not be adding the field in the new record whose distinct count is required
+                distinctValues.add(record.get(distinctField));
+              } else {
+                if (field.getKey().getFieldValueType() == FieldValueType.STRING) {
+                  map.put(field.getKey(), field.getValue());
+                } else {
+
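Stripped of the hbtop `Record`/`Field` machinery, the group-by-then-count-distinct shape of `aggregateRecordsAndAddDistinct` can be expressed directly with stream collectors. This is an illustrative reduction over plain strings (class and method names are hypothetical, not the actual implementation):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DistinctCount {
  // For each group key, count how many distinct secondary values appear:
  // the analogue of grouping records by Field.USER and counting Field.CLIENT.
  public static Map<String, Long> distinctPerGroup(List<String[]> rows) {
    return rows.stream().collect(Collectors.groupingBy(
        r -> r[0],                            // group-by field (e.g. user)
        Collectors.mapping(r -> r[1],         // distinct field (e.g. client)
            Collectors.collectingAndThen(Collectors.toSet(), s -> (long) s.size()))));
  }

  public static void main(String[] args) {
    List<String[]> rows = List.of(
        new String[] {"alice", "c1"},
        new String[] {"alice", "c1"},   // duplicate client, counted once
        new String[] {"alice", "c2"},
        new String[] {"bob", "c1"});
    Map<String, Long> counts = distinctPerGroup(rows);
    if (counts.get("alice") != 2L || counts.get("bob") != 1L) {
      throw new AssertionError(counts);
    }
  }
}
```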

[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783841
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/UserModeStrategy.java
 ##
 @@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.hbtop.mode;
+
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.hbtop.Record;
+import org.apache.hadoop.hbase.hbtop.RecordFilter;
+import org.apache.hadoop.hbase.hbtop.field.Field;
+import org.apache.hadoop.hbase.hbtop.field.FieldInfo;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation for {@link ModeStrategy} for User Mode.
+ */
+@InterfaceAudience.Private public final class UserModeStrategy implements ModeStrategy {
+
+  private final List<FieldInfo> fieldInfos = Arrays
+      .asList(new FieldInfo(Field.USER, 0, true),
+          new FieldInfo(Field.CLIENT_COUNT, 5, true),
+          new FieldInfo(Field.REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.READ_REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.WRITE_REQUEST_COUNT_PER_SECOND, 10, true));
+  private final ClientModeStrategy clientModeStrategy = new ClientModeStrategy();
+
+  UserModeStrategy() {
+  }
+
+  @Override public List<FieldInfo> getFieldInfos() {
+    return fieldInfos;
+  }
+
+  @Override public Field getDefaultSortField() {
+    return Field.REQUEST_COUNT_PER_SECOND;
+  }
+
+  @Override public List<Record> getRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> records = clientModeStrategy.createRecords(clusterMetrics, pushDownFilters);
+    return clientModeStrategy.aggregateRecordsAndAddDistinct(
+        ModeStrategyUtils.applyFilterAndGet(records, pushDownFilters), Field.USER, Field.CLIENT,
+        Field.CLIENT_COUNT);
+  }
+
+  @Override public DrillDownInfo drillDown(Record selectedRecord) {
+    //Drill down to client and using selected USER as a filter
+    List<RecordFilter> initialFilters = Arrays
+        .asList(RecordFilter.newBuilder(Field.USER).doubleEquals(selectedRecord.get(Field.USER)));
 
 Review comment:
   Small nit: We can use Collections.singletonList() instead of Arrays.asList() 
here.
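For reference, the two calls produce equal one-element lists; `Collections.singletonList` simply avoids the varargs array allocation and returns a dedicated immutable single-element list. A tiny standalone sketch (not the hbtop code):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SingletonListDemo {
  public static void main(String[] args) {
    // Arrays.asList allocates a varargs array even for one element;
    // Collections.singletonList returns an immutable one-element list.
    List<String> a = Arrays.asList("user-filter");
    List<String> b = Collections.singletonList("user-filter");
    if (!a.equals(b)) {
      throw new AssertionError("lists should be equal");
    }
    System.out.println(b.size()); // prints 1
  }
}
```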




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336784393
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/ModeStrategy.java
 ##
 @@ -33,6 +34,7 @@
 interface ModeStrategy {
   List<FieldInfo> getFieldInfos();
   Field getDefaultSortField();
-  List<Record> getRecords(ClusterMetrics clusterMetrics);
+  List<Record> getRecords(ClusterMetrics clusterMetrics,
+      @Nullable List<RecordFilter> pushDownFilters);
 
 Review comment:
   Do we need `@Nullable` here? It looks like `pushDownFilters` is always not 
null.
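The point behind dropping `@Nullable` is a contract question: if callers always pass a (possibly empty) list, the parameter can be declared non-null and the implementation needs no null check. A hedged sketch with hypothetical names (not the hbtop interface):

```java
import java.util.Collections;
import java.util.List;

public class NonNullContract {
  // Contract: filters is never null; callers pass Collections.emptyList()
  // when there is nothing to filter, so no null check is needed here.
  static int apply(List<String> filters) {
    return filters.size(); // safe by contract
  }

  public static void main(String[] args) {
    if (apply(Collections.emptyList()) != 0) {
      throw new AssertionError("empty list should yield 0");
    }
  }
}
```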




[GitHub] [hbase] brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N heavy hitter user and client drill downs

2019-10-20 Thread GitBox
brfrn169 commented on a change in pull request #722: HBASE-23065 [hbtop] Top-N 
heavy hitter user and client drill downs
URL: https://github.com/apache/hbase/pull/722#discussion_r336783145
 
 

 ##
 File path: 
hbase-hbtop/src/main/java/org/apache/hadoop/hbase/hbtop/mode/ClientModeStrategy.java
 ##
 @@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.hbtop.mode;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import org.apache.hadoop.hbase.ClusterMetrics;
+import org.apache.hadoop.hbase.ServerMetrics;
+import org.apache.hadoop.hbase.UserMetrics;
+import org.apache.hadoop.hbase.hbtop.Record;
+import org.apache.hadoop.hbase.hbtop.RecordFilter;
+import org.apache.hadoop.hbase.hbtop.field.Field;
+import org.apache.hadoop.hbase.hbtop.field.FieldInfo;
+import org.apache.hadoop.hbase.hbtop.field.FieldValue;
+import org.apache.hadoop.hbase.hbtop.field.FieldValueType;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Implementation for {@link ModeStrategy} for client Mode.
+ */
+@InterfaceAudience.Private public final class ClientModeStrategy implements ModeStrategy {
+
+  private final List<FieldInfo> fieldInfos = Arrays
+      .asList(new FieldInfo(Field.CLIENT, 0, true), new FieldInfo(Field.USER_COUNT, 5, true),
+          new FieldInfo(Field.REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.READ_REQUEST_COUNT_PER_SECOND, 10, true),
+          new FieldInfo(Field.WRITE_REQUEST_COUNT_PER_SECOND, 10, true));
+  private final Map<String, RequestCountPerSecond> requestCountPerSecondMap = new HashMap<>();
+
+  ClientModeStrategy() {
+  }
+
+  @Override public List<FieldInfo> getFieldInfos() {
+    return fieldInfos;
+  }
+
+  @Override public Field getDefaultSortField() {
+    return Field.REQUEST_COUNT_PER_SECOND;
+  }
+
+  @Override public List<Record> getRecords(ClusterMetrics clusterMetrics,
+      List<RecordFilter> pushDownFilters) {
+    List<Record> records = createRecords(clusterMetrics, pushDownFilters);
+    return aggregateRecordsAndAddDistinct(
+        ModeStrategyUtils.applyFilterAndGet(records, pushDownFilters), Field.CLIENT, Field.USER,
+        Field.USER_COUNT);
+  }
+
+  List<Record> createRecords(ClusterMetrics clusterMetrics, List<RecordFilter> pushDownFilters) {
+    List<Record> ret = new ArrayList<>();
+    for (ServerMetrics ServerMetrics : clusterMetrics.getLiveServerMetrics().values()) {
+      long lastReportTimestamp = ServerMetrics.getLastReportTimestamp();
+      ServerMetrics.getUserMetrics().entrySet().forEach(
+          um -> um.getValue().getClientMetrics().values().forEach(clientMetrics -> ret.add(
+              createRecord(um.getValue().getNameAsString(), clientMetrics, lastReportTimestamp))));
+    }
+    return ret;
+  }
+
+  /**
+   * Aggregate the records and count the unique values for the given distinctField
+   *
+   * @param records               records to be processed
+   * @param groupBy               Field on which group by needs to be done
+   * @param distinctField         Field whose unique values needs to be counted
+   * @param uniqueCountAssignedTo a target field to which the unique count is assigned to
+   * @return aggregated records
+   */
+  List<Record> aggregateRecordsAndAddDistinct(List<Record> records, Field groupBy,
+      Field distinctField, Field uniqueCountAssignedTo) {
+    List<Record> result = new ArrayList<>();
+    records.stream().collect(Collectors.groupingBy(r -> r.get(groupBy))).entrySet()
+        .forEach(entry -> {
+          Set<FieldValue> distinctValues = new HashSet<>();
+          Map<Field, FieldValue> map = new HashMap();
 
 Review comment:
   Small nit: need a diamond operator (<>) with HashMap here.
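The nit refers to Java 7+ diamond inference: with `new HashMap<>()` the type arguments are inferred from the declared variable type, while raw `new HashMap()` compiles but produces an unchecked-assignment warning. For example:

```java
import java.util.HashMap;
import java.util.Map;

public class DiamondDemo {
  public static void main(String[] args) {
    // Diamond operator: <String, Integer> is inferred from the declaration,
    // avoiding both repetition and the raw-type unchecked warning.
    Map<String, Integer> counts = new HashMap<>();
    counts.put("requests", 42);
    if (counts.get("requests") != 42) {
      throw new AssertionError();
    }
  }
}
```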




[GitHub] [hbase] Apache-HBase commented on issue #735: HBASE-22679 : Revamping CellUtil

2019-10-20 Thread GitBox
Apache-HBase commented on issue #735: HBASE-22679 : Revamping CellUtil
URL: https://github.com/apache/hbase/pull/735#issuecomment-544260780
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 15 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 41s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 12s |  master passed  |
   | :green_heart: |  compile  |   2m 45s |  master passed  |
   | :green_heart: |  checkstyle  |   3m 22s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   2m 14s |  master passed  |
   | :blue_heart: |  spotbugs  |   1m 29s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   8m 10s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   4m 55s |  the patch passed  |
   | :green_heart: |  compile  |   2m 44s |  the patch passed  |
   | :green_heart: |  javac  |   2m 44s |  the patch passed  |
   | :green_heart: |  checkstyle  |   0m 29s |  hbase-common: The patch 
generated 0 new + 92 unchanged - 38 fixed = 92 total (was 130)  |
   | :green_heart: |  checkstyle  |   0m 33s |  hbase-client: The patch 
generated 0 new + 113 unchanged - 1 fixed = 113 total (was 114)  |
   | :green_heart: |  checkstyle  |   1m 18s |  hbase-server: The patch 
generated 0 new + 95 unchanged - 12 fixed = 95 total (was 107)  |
   | :green_heart: |  checkstyle  |   0m 19s |  The patch passed checkstyle in 
hbase-mapreduce  |
   | :green_heart: |  checkstyle  |   0m 40s |  The patch passed checkstyle in 
hbase-thrift  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 33s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 26s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   2m  8s |  the patch passed  |
   | :green_heart: |  findbugs  |   8m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   3m  7s |  hbase-common in the patch passed.  |
   | :green_heart: |  unit  |   1m 50s |  hbase-client in the patch passed.  |
   | :broken_heart: |  unit  | 153m 52s |  hbase-server in the patch failed.  |
   | :broken_heart: |  unit  |  18m 52s |  hbase-mapreduce in the patch failed. 
 |
   | :green_heart: |  unit  |   3m 35s |  hbase-thrift in the patch passed.  |
   | :green_heart: |  asflicense  |   2m 58s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 261m  5s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestAsyncTableGetMultiThreaded |
   |   | hadoop.hbase.snapshot.TestExportSnapshotNoCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/735 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 58119121604f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-735/out/precommit/personality/provided.sh
 |
   | git revision | master / 4d414020bb |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/3/artifact/out/patch-unit-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/3/artifact/out/patch-unit-hbase-mapreduce.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/3/testReport/
 |
   | Max. process+thread count | 5413 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-server hbase-mapreduce 
hbase-thrift U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) 

[jira] [Updated] (HBASE-11062) hbtop

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-11062:
-
Release Note: Introduces hbtop that's a real-time monitoring tool for HBase 
like Unix's top command. See the ref guide for the details: 
https://hbase.apache.org/book.html#hbtop  (was: Introduces hbtop that's a 
real-time monitoring tool for HBase like Unix's top command. See README for the 
details: https://github.com/apache/hbase/blob/master/hbase-hbtop/README.md)

> hbtop
> -
>
> Key: HBASE-11062
> URL: https://issues.apache.org/jira/browse/HBASE-11062
> Project: HBase
>  Issue Type: New Feature
>  Components: hbtop
>Reporter: Andrew Kyle Purtell
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: HBASE-11062-master-addendum-v1.patch
>
>
> A top-like monitor could be useful for testing, debugging, operations of 
> clusters of moderate size, and possibly for diagnosing issues in large 
> clusters.
> Consider a curses interface like the one presented by atop 
> (http://www.atoptool.nl/images/screenshots/genericw.png) - with aggregate 
> metrics collected over a monitoring interval in the upper portion of the 
> pane, and a listing of discrete measurements sorted and filtered by various 
> criteria in the bottom part of the pane. One might imagine a cluster overview 
> with cluster aggregate metrics above and a list of regionservers sorted by 
> utilization below; and a regionserver view with process metrics above and a 
> list of metrics by operation type below, or a list of client connections, or 
> a list of threads, sorted by utilization, throughput, or latency. 
> Generically 'htop' is taken but would be distinctive in the HBase context, a 
> utility org.apache.hadoop.hbase.HTop
> No need necessarily for a curses interface. Could be an external monitor with 
> a web front end as has been discussed before. I do like the idea of a process 
> that runs in a terminal because I interact with dev and test HBase clusters 
> exclusively by SSH. 
> UPDATE:
> The tool name is changed from htop to hbtop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23157) WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used

2019-10-20 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955490#comment-16955490
 ] 

Duo Zhang commented on HBASE-23157:
---

[~binlijin] After checking the code, we do have a wal.sync when preparing a 
flush, see here

https://github.com/apache/hbase/blob/4d414020bb3bfd7f214d2a599426be700df772b2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L2689

So maybe there are other bugs?

Have you enabled in memory compaction?

> WAL unflushed seqId tracking may wrong when Durability.ASYNC_WAL is used
> 
>
> Key: HBASE-23157
> URL: https://issues.apache.org/jira/browse/HBASE-23157
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 2.2.1
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Attachments: HBASE-23157-master-v1.patch
>
>
> Durability.ASYNC_WAL does not wait for the WAL sync and commits the MVCC ahead of it. So when
> a region starts a flush it may record a large flushedSeqId, and later the WAL processes a
> buffered entry and puts a small unflushedSequenceId for this region again.



--


[jira] [Commented] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"

2019-10-20 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955484#comment-16955484
 ] 

Duo Zhang commented on HBASE-23181:
---

Do you have in-memory compaction enabled sir? [~stack].

> Blocked WAL archive: "LogRoller: Failed to schedule flush of 
> 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
> --
>
> Key: HBASE-23181
> URL: https://issues.apache.org/jira/browse/HBASE-23181
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.1
>Reporter: Michael Stack
>Priority: Major
>
> On a heavily loaded cluster, WAL count keeps rising and we can get into a 
> state where we are not rolling the logs off fast enough. In particular, there 
> is this interesting state at the extreme where we pick a region to flush 
> because 'Too many WALs' but the region is actually not online. As the WAL 
> count rises, we keep picking a region-to-flush that is no longer on the 
> server. This condition blocks our being able to clear WALs; eventually WALs 
> climb into the hundreds and the RS goes zombie with a full Call queue that 
> starts throwing CallQueueTooLargeExceptions (bad if this servers is the one 
> carrying hbase:meta): i.e. clients fail to access the RegionServer.
> One symptom is a fast spike in WAL count for the RS. A restart of the RS will 
> break the bind.
> Here is how it looks in the log:
> {code}
> # Here is region closing
> 2019-10-16 23:10:55,897 INFO 
> org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 
> 8ee433ad59526778c53cc85ed3762d0b
> 
> # Then soon after ...
> 2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
> Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is 
> not online on us
> 2019-10-16 23:11:45,006 INFO 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
> count=45, max=32; forcing flush of 1 regions(s): 
> 8ee433ad59526778c53cc85ed3762d0b
> ...
> # Later...
> 2019-10-16 23:20:25,427 INFO 
> org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
> count=542, max=32; forcing flush of 1 regions(s): 
> 8ee433ad59526778c53cc85ed3762d0b
> 2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
> Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is 
> not online on us
> {code}
> I've seen this runaway WALs 2.2.1. I've seen runaway WALs in a 1.2.x version 
> regularly that had HBASE-16721 fix in it, but can't say yet if it was for 
> same reason as above.



--


[jira] [Updated] (HBASE-23193) ConnectionImplementation.isTableAvailable can not deal with meta table on branch-2.x

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23193:
--
Summary: ConnectionImplementation.isTableAvailable can not deal with meta 
table on branch-2.x  (was: ConnectionImplementation.isTableAvailable can not 
deal with meta table)

> ConnectionImplementation.isTableAvailable can not deal with meta table on 
> branch-2.x
> 
>
> Key: HBASE-23193
> URL: https://issues.apache.org/jira/browse/HBASE-23193
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.3.0, 2.2.3
>
>
> TestRSGroupKillRS is broken by HBASE-22767, as on master the client library 
> has been reimplemented so Admin.isTableAvailable can be used to test meta 
> table, but on branch-2 and branch-2.2, we will get this
> {noformat}
> java.lang.RuntimeException: java.io.IOException: This method can't be used to 
> locate meta regions; use MetaTableLocator instead
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:143)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:242)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitTableAvailable(HBaseTestingUtility.java:3268)
>   at 
> org.apache.hadoop.hbase.rsgroup.TestRSGroupsKillRS.testLowerMetaGroupVersion(TestRSGroupsKillRS.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: This method can't be used to locate meta 
> regions; use MetaTableLocator instead
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.getTableRegionsAndLocations(MetaTableAccessor.java:615)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.isTableAvailable(ConnectionImplementation.java:643)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:971)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility$9.evaluate(HBaseTestingUtility.java:4269)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
>   ... 30 more
> {noformat}
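The test-side workaround is to stop routing the meta check through Admin.isTableAvailable on branch-2. As a generic, standalone sketch (not HBase's actual Waiter class, and the predicate below is a hypothetical stand-in for a meta-safe check such as one done via MetaTableLocator), a test can poll its own predicate with a timeout:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll `predicate` every `intervalMs` until it is true or `timeoutMs`
    // elapses; returns the final predicate value. Mirrors the shape of a
    // test-utility waitFor without depending on HBase classes.
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier predicate)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (predicate.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return predicate.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in predicate for "meta is online": becomes true after ~100ms.
        boolean ok = waitFor(2000, 10, () -> System.currentTimeMillis() - start > 100);
        System.out.println(ok);
    }
}
```

The point is that the predicate is supplied by the test, so on branch-2 it can use whatever meta-location mechanism is legal there instead of the Admin API.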



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23193) ConnectionImplementation.isTableAvailable can not deal with meta table

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23193:
--
Description: 
TestRSGroupsKillRS is broken by HBASE-22767: on master the client library has 
been reimplemented so Admin.isTableAvailable can be used to check the meta 
table, but on branch-2 and branch-2.2 we get this

{noformat}
java.lang.RuntimeException: java.io.IOException: This method can't be used to 
locate meta regions; use MetaTableLocator instead
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:143)
at 
org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:242)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.waitTableAvailable(HBaseTestingUtility.java:3268)
at 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsKillRS.testLowerMetaGroupVersion(TestRSGroupsKillRS.java:245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: This method can't be used to locate meta 
regions; use MetaTableLocator instead
at 
org.apache.hadoop.hbase.MetaTableAccessor.getTableRegionsAndLocations(MetaTableAccessor.java:615)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.isTableAvailable(ConnectionImplementation.java:643)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:971)
at 
org.apache.hadoop.hbase.HBaseTestingUtility$9.evaluate(HBaseTestingUtility.java:4269)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
... 30 more


{noformat}

  was:
It is broken by HBASE-22767, as on master the client library has been 
reimplemented so Admin.isTableAvailable can be used to test meta table, but on 
branch-2 and branch-2.2, we will get this

{noformat}
java.lang.RuntimeException: java.io.IOException: This method can't be used to 
locate meta regions; use MetaTableLocator instead
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:143)
at 
org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:242)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.waitTableAvailable(HBaseTestingUtility.java:3268)
at 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsKillRS.testLowerMetaGroupVersion(TestRSGroupsKillRS.java:245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Updated] (HBASE-23193) ConnectionImplementation.isTableAvailable can not deal with meta table

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23193:
--
Summary: ConnectionImplementation.isTableAvailable can not deal with meta 
table  (was: TestRSGroupsKillRS is broken on branch-2 and branch-2,2)

> ConnectionImplementation.isTableAvailable can not deal with meta table
> --
>
> Key: HBASE-23193
> URL: https://issues.apache.org/jira/browse/HBASE-23193
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.3.0, 2.2.3
>
>
> It is broken by HBASE-22767: on master the client library has been 
> reimplemented so Admin.isTableAvailable can be used to check the meta table, 
> but on branch-2 and branch-2.2 we get this
> {noformat}
> java.lang.RuntimeException: java.io.IOException: This method can't be used to 
> locate meta regions; use MetaTableLocator instead
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:143)
>   at 
> org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:242)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.waitTableAvailable(HBaseTestingUtility.java:3268)
>   at 
> org.apache.hadoop.hbase.rsgroup.TestRSGroupsKillRS.testLowerMetaGroupVersion(TestRSGroupsKillRS.java:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: This method can't be used to locate meta 
> regions; use MetaTableLocator instead
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.getTableRegionsAndLocations(MetaTableAccessor.java:615)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.isTableAvailable(ConnectionImplementation.java:643)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:971)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility$9.evaluate(HBaseTestingUtility.java:4269)
>   at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
>   ... 30 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23193) TestRSGroupsKillRS is broken on branch-2 and branch-2.2

2019-10-20 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-23193:
-

 Summary: TestRSGroupsKillRS is broken on branch-2 and branch-2.2
 Key: HBASE-23193
 URL: https://issues.apache.org/jira/browse/HBASE-23193
 Project: HBase
  Issue Type: Bug
  Components: rsgroup, test
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.3.0, 2.2.3


It is broken by HBASE-22767: on master the client library has been 
reimplemented so Admin.isTableAvailable can be used to check the meta table, 
but on branch-2 and branch-2.2 we get this

{noformat}
java.lang.RuntimeException: java.io.IOException: This method can't be used to 
locate meta regions; use MetaTableLocator instead
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:143)
at 
org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:242)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.waitTableAvailable(HBaseTestingUtility.java:3268)
at 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsKillRS.testLowerMetaGroupVersion(TestRSGroupsKillRS.java:245)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: This method can't be used to locate meta 
regions; use MetaTableLocator instead
at 
org.apache.hadoop.hbase.MetaTableAccessor.getTableRegionsAndLocations(MetaTableAccessor.java:615)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.isTableAvailable(ConnectionImplementation.java:643)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:971)
at 
org.apache.hadoop.hbase.HBaseTestingUtility$9.evaluate(HBaseTestingUtility.java:4269)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
... 30 more


{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23055) Alter hbase:meta

2019-10-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955461#comment-16955461
 ] 

Hudson commented on HBASE-23055:


Results for branch HBASE-23055
[build #20 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/20/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/20//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/20//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/20//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Alter hbase:meta
> 
>
> Key: HBASE-23055
> URL: https://issues.apache.org/jira/browse/HBASE-23055
> Project: HBase
>  Issue Type: Task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0
>
>
> hbase:meta is currently hardcoded. Its schema cannot be changed.
> This issue is about allowing edits to the hbase:meta schema. It will let us 
> set encodings such as block-with-indexes, which will help quell CPU usage on 
> the host carrying hbase:meta. A dynamic hbase:meta is the first step on the 
> road to being able to split meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2019-10-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955450#comment-16955450
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #154 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/154/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/154//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/154//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/154//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/154//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public. 
> We need to use the Java API RSGroupAdminClient to manage rsgroups, 
> so RSGroupAdminClient should be public.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-21110) Issues with Unsafe and JDK 11

2019-10-20 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955440#comment-16955440
 ] 

Yu Li commented on HBASE-21110:
---

Regarding the warning on the `java.nio.Bits.unaligned()` invocation, I believe 
Apache Spark handles it in a better way 
[here|https://github.com/apache/spark/blob/1a3858a7694a189f434846b132b828e902273620/common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java#L316].
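The pattern Spark uses can be sketched in a standalone class (an illustration, not HBase's UnsafeAvailChecker or Spark's exact Platform code): probe java.nio.Bits#unaligned() reflectively, and if the reflective access fails, fall back to an architecture allow-list instead of letting the failure surface:

```java
import java.lang.reflect.Method;

public class UnalignedCheck {
    // Returns whether the platform supports unaligned memory access.
    static boolean unaligned() {
        try {
            // java.nio.Bits#unaligned() is not public API, so this triggers
            // the illegal-reflective-access warning on JDK 9+ and may throw
            // on newer JDKs where java.base is strongly encapsulated.
            Class<?> bits = Class.forName("java.nio.Bits");
            Method m = bits.getDeclaredMethod("unaligned");
            m.setAccessible(true);
            return (Boolean) m.invoke(null);
        } catch (Throwable t) {
            // Fallback: architectures commonly known to allow unaligned access.
            String arch = System.getProperty("os.arch", "");
            return arch.matches("^(i[3-6]86|x86(_64)?|x64|amd64|aarch64)$");
        }
    }

    public static void main(String[] args) {
        System.out.println("unaligned=" + unaligned());
    }
}
```

The fallback list here is an assumption for illustration; the real fix would pick whatever architecture set the project wants to support.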

> Issues with Unsafe and JDK 11
> -
>
> Key: HBASE-21110
> URL: https://issues.apache.org/jira/browse/HBASE-21110
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Priority: Major
>  Labels: jdk11
> Fix For: 3.0.0
>
>
> Using Java 11 RC 1, I get the following warning, probably need to add the 
> suggested flag to our scripts?
> {noformat}
> mdrob@mdrob-MBP:~/IdeaProjects/hbase$ java -version
> java version "11" 2018-09-25
> Java(TM) SE Runtime Environment 18.9 (build 11+28)
> Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11+28, mixed mode)
> mdrob@mdrob-MBP:~/IdeaProjects/hbase$ bin/start-hbase.sh
> mdrob@mdrob-MBP:~/IdeaProjects/hbase$ cat 
> /Users/mdrob/IdeaProjects/hbase/bin/../logs/hbase-mdrob-master-mdrob-MBP.local.out
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.hadoop.hbase.util.UnsafeAvailChecker 
> (file:/Users/mdrob/IdeaProjects/hbase/hbase-common/target/hbase-common-3.0.0-SNAPSHOT.jar)
>  to method java.nio.Bits.unaligned()
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.hadoop.hbase.util.UnsafeAvailChecker
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #735: HBASE-22679 : Revamping CellUtil

2019-10-20 Thread GitBox
Apache-HBase commented on issue #735: HBASE-22679 : Revamping CellUtil
URL: https://github.com/apache/hbase/pull/735#issuecomment-544237374
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 15 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 40s |  master passed  |
   | :green_heart: |  compile  |   2m 45s |  master passed  |
   | :green_heart: |  checkstyle  |   3m 23s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   2m 11s |  master passed  |
   | :blue_heart: |  spotbugs  |   1m 28s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   8m  2s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   4m 58s |  the patch passed  |
   | :green_heart: |  compile  |   2m 42s |  the patch passed  |
   | :green_heart: |  javac  |   2m 42s |  the patch passed  |
   | :green_heart: |  checkstyle  |   0m 28s |  hbase-common: The patch 
generated 0 new + 92 unchanged - 38 fixed = 92 total (was 130)  |
   | :green_heart: |  checkstyle  |   0m 33s |  hbase-client: The patch 
generated 0 new + 113 unchanged - 1 fixed = 113 total (was 114)  |
   | :green_heart: |  checkstyle  |   1m 20s |  hbase-server: The patch 
generated 0 new + 95 unchanged - 12 fixed = 95 total (was 107)  |
   | :green_heart: |  checkstyle  |   0m 19s |  The patch passed checkstyle in 
hbase-mapreduce  |
   | :green_heart: |  checkstyle  |   0m 39s |  The patch passed checkstyle in 
hbase-thrift  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 30s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 31s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   2m 11s |  the patch passed  |
   | :green_heart: |  findbugs  |   8m 48s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   3m  6s |  hbase-common in the patch passed.  |
   | :green_heart: |  unit  |   1m 51s |  hbase-client in the patch passed.  |
   | :broken_heart: |  unit  |  31m 29s |  hbase-server in the patch failed.  |
   | :green_heart: |  unit  |  18m 48s |  hbase-mapreduce in the patch passed.  
|
   | :green_heart: |  unit  |   3m 23s |  hbase-thrift in the patch passed.  |
   | :green_heart: |  asflicense  |   1m 51s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 137m 42s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.io.hfile.TestHFile |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/735 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux fbee98bf2bf6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-735/out/precommit/personality/provided.sh
 |
   | git revision | master / 4d414020bb |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/2/testReport/
 |
   | Max. process+thread count | 5448 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-server hbase-mapreduce 
hbase-thrift U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-735/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hbase] Apache9 commented on issue #736: HBASE-23182 The create-release scripts are broken

2019-10-20 Thread GitBox
Apache9 commented on issue #736: HBASE-23182 The create-release scripts are 
broken
URL: https://github.com/apache/hbase/pull/736#issuecomment-544225656
 
 
   OK, the shellcheck warnings tell us to use double quotes, but actually 
double quotes are the problem here...
   
   For example, "-Papache-release -Prelease" will be considered as only one 
profile, which is "apache-release -Prelease"...
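The behavior is easy to reproduce in bash (a standalone sketch, not the create-release script itself): a double-quoted expansion passes both -P flags as a single argument, while an array keeps them as separate arguments and still satisfies shellcheck's quoting advice:

```shell
#!/usr/bin/env bash
# Count how many arguments a command receives.
count_args() { echo "$#"; }

profiles="-Papache-release -Prelease"

# Quoted expansion: a single argument, i.e. one bogus profile string.
count_args "$profiles"            # prints 1

# Unquoted expansion relies on word splitting to separate the flags.
count_args $profiles              # prints 2

# An array passes two separate arguments and can be safely double-quoted.
profile_args=(-Papache-release -Prelease)
count_args "${profile_args[@]}"   # prints 2
```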


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #736: HBASE-23182 The create-release scripts are broken

2019-10-20 Thread GitBox
Apache-HBase commented on issue #736: HBASE-23182 The create-release scripts 
are broken
URL: https://github.com/apache/hbase/pull/736#issuecomment-544225068
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :blue_heart: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch 
 |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 10s |  Maven dependency ordering for patch  
|
   | :broken_heart: |  shellcheck  |   0m  4s |  The patch generated 3 new + 
112 unchanged - 0 fixed = 115 total (was 112)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | :blue_heart: |  asflicense  |   0m  1s |  ASF License check generated no 
output?  |
   |  |   |   3m  4s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-736/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/736 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 74d2ccb2db77 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-736/out/precommit/personality/provided.sh
 |
   | git revision | master / 4d414020bb |
   | shellcheck | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-736/1/artifact/out/diff-patch-shellcheck.txt
 |
   | Max. process+thread count | 52 (vs. ulimit of 1) |
   | modules | C:  U:  |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-736/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Work started] (HBASE-23182) The create-release scripts are broken

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23182 started by Duo Zhang.
-
> The create-release scripts are broken
> -
>
> Key: HBASE-23182
> URL: https://issues.apache.org/jira/browse/HBASE-23182
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> Only a few small bugs, but they do make releasing fail...
> Mostly introduced by HBASE-23092.
> Will upload the patch after I have successfully published 2.2.2RC0...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache9 opened a new pull request #736: HBASE-23182 The create-release scripts are broken

2019-10-20 Thread GitBox
Apache9 opened a new pull request #736: HBASE-23182 The create-release scripts 
are broken
URL: https://github.com/apache/hbase/pull/736
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-23168:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Generate CHANGES.md and RELEASENOTES.md for 2.2.2
> -
>
> Key: HBASE-23168
> URL: https://issues.apache.org/jira/browse/HBASE-23168
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.2.2
>
> Attachments: HBASE-23168-branch-2.2.patch
>
>






[jira] [Resolved] (HBASE-22881) Fix non-daemon threads in hbase server implementation

2019-10-20 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-22881.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Let's open a new issue for fixing the remaining non-daemon threads? This issue has 
already been included in several releases...

> Fix non-daemon threads in hbase server implementation
> -
>
> Key: HBASE-22881
> URL: https://issues.apache.org/jira/browse/HBASE-22881
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.1
>
>
> "pool-8-thread-3" #7252 prio=5 os_prio=0 tid=0x7f91040044c0 nid=0xd71e 
> waiting on condition [0x7f8f4d209000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-2" #7251 prio=5 os_prio=0 tid=0x7f910c010be0 nid=0xd71d 
> waiting on condition [0x7f8f4daab000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-1" #7250 prio=5 os_prio=0 tid=0x7f9119d0 nid=0xd71c 
> waiting on condition [0x7f8f4da6a000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-5-thread-3" #7248 prio=5 os_prio=0 tid=0x7f9238005ad0 nid=0xd71a 
> waiting on condition [0x7f8f4cb65000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0ec51e0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
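The "pool-N-thread-M" names in the dump above are the ones produced by the JDK's default thread factory, which creates non-daemon threads; such threads keep the JVM alive after shutdown. The usual remedy is to hand the pool a factory that marks its threads as daemons. A minimal sketch (class and pool names here are illustrative, not HBase's actual code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DaemonThreadFactoryDemo {

  // A ThreadFactory that names its threads and marks each one as a daemon,
  // so the JVM can exit even if the pool is never shut down explicitly.
  static ThreadFactory daemonFactory(String prefix) {
    AtomicInteger count = new AtomicInteger();
    return r -> {
      Thread t = new Thread(r, prefix + "-" + count.incrementAndGet());
      t.setDaemon(true);
      return t;
    };
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool =
        Executors.newFixedThreadPool(3, daemonFactory("my-pool"));
    // Prints a name like "my-pool-1" with daemon=true instead of the
    // default non-daemon "pool-8-thread-1" seen in the dump.
    pool.submit(() -> System.out.println(Thread.currentThread().getName()
        + " daemon=" + Thread.currentThread().isDaemon()));
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
  }
}
```

Threads created this way show up in dumps with a recognizable prefix, which also makes leaks like the one reported here much easier to attribute.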





[GitHub] [hbase] Apache9 commented on a change in pull request #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-20 Thread GitBox
Apache9 commented on a change in pull request #732: HBASE-23187 Update parent 
region state to SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#discussion_r336764481
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
 ##
 @@ -1580,6 +1580,7 @@ public static void splitRegion(Connection connection, 
RegionInfo parent, long pa
   Put putParent = 
makePutFromRegionInfo(RegionInfoBuilder.newBuilder(parent)
 .setOffline(true)
 .setSplit(true).build(), time);
+  addRegionStateToPut(putParent, State.SPLIT);
 
 Review comment:
   I think for split, it should be SPLIT_PARENT? And what is the way we do this for 
branch-1? In the above code, the RegionInfo has been set to offline and also 
split, so I do not think it should come back online again; if it does, then 
there are bugs in other places.

