[jira] [Comment Edited] (HBASE-28076) NPE on initialization error in RecoveredReplicationSourceShipper

2023-09-15 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765761#comment-17765761
 ] 

Karthik Palanisamy edited comment on HBASE-28076 at 9/15/23 6:09 PM:
-

Thanks [~stoty]. 

[~zhangduo]  Aside from this particular issue, there is another race condition 
that results in a NullPointerException and brings down the RegionServer. The 
exact cause of this NPE is currently unknown. It appears to be attempting to 
access a queue that either no longer exists or has been removed before it can 
be accessed.
{code:java}
2023-09-10 20:02:35,365 ERROR 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Unexpected 
exception in ReplicationExecutor
..
..
..
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.cleanOldLogs(ReplicationSourceManager.java:563)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:549)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceInterface.logPositionAndCleanOldLogs(ReplicationSourceInterface.java:202)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.updateLogPosition(ReplicationSourceShipper.java:269)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.shipEdits(ReplicationSourceShipper.java:163)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.run(ReplicationSourceShipper.java:119)
 {code}
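The shape of that race can be sketched outside HBase (class, field, and method names here are illustrative, not the actual ReplicationSourceManager code): a defensive null check covers the window where another thread removes a queue's WAL set between lookup and use.

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of cleanOldLogs-style logic that tolerates a queue
// entry removed concurrently by another thread, instead of throwing an NPE.
public class QueueCleanupSketch {
    static final Map<String, Queue<String>> walsById = new ConcurrentHashMap<>();

    // Removes WALs older than upToLog; returns how many were cleaned,
    // or 0 if the queue has already been removed.
    static int cleanOldLogs(String queueId, String upToLog) {
        Queue<String> wals = walsById.get(queueId);
        if (wals == null) {
            // Guard: the queue may have vanished between enqueue and cleanup.
            return 0;
        }
        int cleaned = 0;
        while (wals.peek() != null && wals.peek().compareTo(upToLog) < 0) {
            wals.poll();
            cleaned++;
        }
        return cleaned;
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();
        q.add("wal.1"); q.add("wal.2"); q.add("wal.3");
        walsById.put("peer-1", q);
        System.out.println(cleanOldLogs("peer-1", "wal.3")); // cleans wal.1, wal.2
        System.out.println(cleanOldLogs("removed", "wal.3")); // missing queue, no NPE
    }
}
```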


was (Author: kpalanisamy):
Thanks [~stoty]. 

[~zhangduo]  Aside from this particular issue, there is another race condition 
occurring that is resulting in a NullPointerException. The exact cause of this 
NPE is currently unknown. It appears to be attempting to access a queue that 
either no longer exists or has been removed from the queue before it can be 
accessed.
{code:java}
2023-09-10 20:02:35,365 ERROR 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Unexpected 
exception in ReplicationExecutor
..
..
..
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.cleanOldLogs(ReplicationSourceManager.java:563)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:549)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceInterface.logPositionAndCleanOldLogs(ReplicationSourceInterface.java:202)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.updateLogPosition(ReplicationSourceShipper.java:269)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.shipEdits(ReplicationSourceShipper.java:163)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.run(ReplicationSourceShipper.java:119)
 {code}

> NPE on initialization error in RecoveredReplicationSourceShipper
> 
>
> Key: HBASE-28076
> URL: https://issues.apache.org/jira/browse/HBASE-28076
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.4.17, 2.5.5
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.6.0, 2.4.18, 2.5.6
>
>
> When we run into problems starting RecoveredReplicationSourceShipper, we try 
> to stop the reader thread which we haven't initialized yet, resulting in an 
> NPE.
> {noformat}
> ERROR org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: 
> Unexpected exception in redacted currentPath=hdfs://redacted
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.terminate(RecoveredReplicationSourceShipper.java:100)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getRecoveredQueueStartPos(RecoveredReplicationSourceShipper.java:87)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getStartPosition(RecoveredReplicationSourceShipper.java:62)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.lambda$tryStartNewShipper$3(ReplicationSource.java:349)
>         at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.tryStartNewShipper(ReplicationSource.java:341)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize(ReplicationSource.java:601)
>         at java.lang.Thread.run(Thread.java:750)
> {noformat}
> A simple null check should fix this.
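A minimal sketch of that null-check fix (class and field names are illustrative, not the exact RecoveredReplicationSourceShipper code): terminate() must tolerate a reader thread that was never created because initialization failed first.

```java
// Hedged sketch: a shipper whose reader thread is only set after a
// successful startup, so terminate() guards against the null case.
public class ShipperSketch {
    Thread entryReader; // remains null if initialization fails early

    // Returns true if a reader was actually interrupted.
    boolean terminate(String reason) {
        if (entryReader != null) { // the null check that avoids the NPE
            entryReader.interrupt();
            return true;
        }
        // Initialization failed before the reader thread existed.
        return false;
    }

    public static void main(String[] args) {
        ShipperSketch s = new ShipperSketch();
        System.out.println(s.terminate("init error")); // no NPE, prints false
        s.entryReader = new Thread(() -> {});
        System.out.println(s.terminate("shutdown"));   // prints true
    }
}
```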



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28076) NPE on initialization error in RecoveredReplicationSourceShipper

2023-09-15 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17765761#comment-17765761
 ] 

Karthik Palanisamy commented on HBASE-28076:


Thanks [~stoty]. 

[~zhangduo]  Aside from this particular issue, there is another race condition 
that results in a NullPointerException. The exact cause of this NPE is 
currently unknown. It appears to be attempting to access a queue that either 
no longer exists or has been removed before it can be accessed.
{code:java}
2023-09-10 20:02:35,365 ERROR 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Unexpected 
exception in ReplicationExecutor
..
..
..
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.cleanOldLogs(ReplicationSourceManager.java:563)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:549)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceInterface.logPositionAndCleanOldLogs(ReplicationSourceInterface.java:202)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.updateLogPosition(ReplicationSourceShipper.java:269)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.shipEdits(ReplicationSourceShipper.java:163)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceShipper.run(ReplicationSourceShipper.java:119)
 {code}

> NPE on initialization error in RecoveredReplicationSourceShipper
> 
>
> Key: HBASE-28076
> URL: https://issues.apache.org/jira/browse/HBASE-28076
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.4.17, 2.5.5
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
> Fix For: 2.6.0, 2.4.18, 2.5.6
>
>
> When we run into problems starting RecoveredReplicationSourceShipper, we try 
> to stop the reader thread which we haven't initialized yet, resulting in an 
> NPE.
> {noformat}
> ERROR org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: 
> Unexpected exception in redacted currentPath=hdfs://redacted
> java.lang.NullPointerException
>         at 
> org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.terminate(RecoveredReplicationSourceShipper.java:100)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getRecoveredQueueStartPos(RecoveredReplicationSourceShipper.java:87)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.RecoveredReplicationSourceShipper.getStartPosition(RecoveredReplicationSourceShipper.java:62)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.lambda$tryStartNewShipper$3(ReplicationSource.java:349)
>         at 
> java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.tryStartNewShipper(ReplicationSource.java:341)
>         at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.initialize(ReplicationSource.java:601)
>         at java.lang.Thread.run(Thread.java:750)
> {noformat}
> A simple null check should fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27497) Add a note for RegionMerge tool.

2022-11-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27497:
---
Description: 
Ref: 
[https://github.com/apache/hbase-operator-tools/blob/master/hbase-tools/README.md]

NOTE: 

Do not perform region merge operations on Phoenix salted tables. It may affect 
the region boundaries and produce incorrect query results. 

 

  was:
Ref: 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-tools/README.md

NOTE: 

Do not perform region merge operations on Phoenix salted tables. It will affect 
the region boundaries and produce incorrect query results. 

 


> Add a note for RegionMerge tool. 
> -
>
> Key: HBASE-27497
> URL: https://issues.apache.org/jira/browse/HBASE-27497
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Trivial
>
> Ref: 
> [https://github.com/apache/hbase-operator-tools/blob/master/hbase-tools/README.md]
> NOTE: 
> Do not perform region merge operations on Phoenix salted tables. It may 
> affect the region boundaries and produce incorrect query results. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27497) Add a note for RegionMerge tool.

2022-11-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27497:
---
Description: 
Ref: 
https://github.com/apache/hbase-operator-tools/blob/master/hbase-tools/README.md

NOTE: 

Do not perform region merge operations on Phoenix salted tables. It will affect 
the region boundaries and produce incorrect query results. 

 

  was:
NOTE: 

Do not perform region merge operations on Phoenix salted tables. It will affect 
the region boundaries and produce incorrect query results. 

 


> Add a note for RegionMerge tool. 
> -
>
> Key: HBASE-27497
> URL: https://issues.apache.org/jira/browse/HBASE-27497
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Trivial
>
> Ref: 
> https://github.com/apache/hbase-operator-tools/blob/master/hbase-tools/README.md
> NOTE: 
> Do not perform region merge operations on Phoenix salted tables. It will 
> affect the region boundaries and produce incorrect query results. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27497) Add a note for RegionMerge tool.

2022-11-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27497:
---
Issue Type: Improvement  (was: Bug)

> Add a note for RegionMerge tool. 
> -
>
> Key: HBASE-27497
> URL: https://issues.apache.org/jira/browse/HBASE-27497
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Trivial
>
> NOTE: 
> Do not perform region merge operations on Phoenix salted tables. It will 
> affect the region boundaries and produce incorrect query results. 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27497) Add a note for RegionMerge tool.

2022-11-18 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-27497:
--

 Summary: Add a note for RegionMerge tool. 
 Key: HBASE-27497
 URL: https://issues.apache.org/jira/browse/HBASE-27497
 Project: HBase
  Issue Type: Bug
  Components: hbck2
Reporter: Karthik Palanisamy


NOTE: 

Do not perform region merge operations on Phoenix salted tables. It will affect 
the region boundaries and produce incorrect query results. 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27147) extraRegionsInMeta does not work If RegionInfo is null

2022-06-22 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557579#comment-17557579
 ] 

Karthik Palanisamy commented on HBASE-27147:


Thank you [~wchevreuil] for the review. I just renamed the Jira title. Please 
review this scenario.

> extraRegionsInMeta does not work If RegionInfo is null
> --
>
> Key: HBASE-27147
> URL: https://issues.apache.org/jira/browse/HBASE-27147
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Major
>
> extraRegionsInMeta will not clean/fix meta if the info:regioninfo column is 
> missing.
>  
> Somehow, the customer has the following empty row in meta as a stale entry: 
> 'I1xx,16332508x.f53609cc1ae366b43205dxxx', 'info:state', 
> 16223
>  
> And no corresponding table "I1xx" exists. 
>  
> We used extraRegionsInMeta but it didn't clean the row. We also created the 
> same table again and used extraRegionsInMeta after removing the HDFS data, 
> but the stale row was never cleaned. It looks like extraRegionsInMeta works 
> only when "info:regioninfo" is present. 
>  
> We need to handle the scenario for the other columns, i.e. info:state, 
> info:server, etc.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27147) extraRegionsInMeta does not work If RegionInfo is null

2022-06-22 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27147:
---
Description: 
extraRegionsInMeta will not clean/fix meta if the info:regioninfo column is 
missing.

 

Somehow, the customer has the following empty row in meta as a stale entry: 
'I1xx,16332508x.f53609cc1ae366b43205dxxx', 'info:state', 
16223
 

And no corresponding table "I1xx" exists. 

 

We used extraRegionsInMeta but it didn't clean the row. We also created the same 
table again and used extraRegionsInMeta after removing the HDFS data, but the 
stale row was never cleaned. It looks like extraRegionsInMeta works only when 
"info:regioninfo" is present. 

 

We need to handle the scenario for the other columns, i.e. info:state, 
info:server, etc.

> extraRegionsInMeta does not work If RegionInfo is null
> --
>
> Key: HBASE-27147
> URL: https://issues.apache.org/jira/browse/HBASE-27147
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Major
>
> extraRegionsInMeta will not clean/fix meta if the info:regioninfo column is 
> missing.
>  
> Somehow, the customer has the following empty row in meta as a stale entry: 
> 'I1xx,16332508x.f53609cc1ae366b43205dxxx', 'info:state', 
> 16223
>  
> And no corresponding table "I1xx" exists. 
>  
> We used extraRegionsInMeta but it didn't clean the row. We also created the 
> same table again and used extraRegionsInMeta after removing the HDFS data, 
> but the stale row was never cleaned. It looks like extraRegionsInMeta works 
> only when "info:regioninfo" is present. 
>  
> We need to handle the scenario for the other columns, i.e. info:state, 
> info:server, etc.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27147) extraRegionsInMeta does not work If RegionInfo is null

2022-06-22 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27147:
---
Description: (was: No alternative option in hbck2 to fix empty regions. 
 hbck1 equivalent is "-fixEmptyMetaCells".  

"Try to fix hbase:meta entries not referencing any region (empty 
REGIONINFO_QUALIFIER rows)"

 

NOTE: This is an inconsistent meta bug. )

> extraRegionsInMeta does not work If RegionInfo is null
> --
>
> Key: HBASE-27147
> URL: https://issues.apache.org/jira/browse/HBASE-27147
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27147) extraRegionsInMeta does not work If RegionInfo is null

2022-06-22 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27147:
---
Summary: extraRegionsInMeta does not work If RegionInfo is null  (was: Add 
a hbck2 option to clear emptyRegion from meta)

> extraRegionsInMeta does not work If RegionInfo is null
> --
>
> Key: HBASE-27147
> URL: https://issues.apache.org/jira/browse/HBASE-27147
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Major
>
> No alternative option in hbck2 to fix empty regions.  hbck1 equivalent is 
> "-fixEmptyMetaCells".  
> "Try to fix hbase:meta entries not referencing any region (empty 
> REGIONINFO_QUALIFIER rows)"
>  
> NOTE: This is an inconsistent meta bug. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Reopened] (HBASE-27147) extraRegionsInMeta does not work If RegionInfo is null

2022-06-22 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy reopened HBASE-27147:


> extraRegionsInMeta does not work If RegionInfo is null
> --
>
> Key: HBASE-27147
> URL: https://issues.apache.org/jira/browse/HBASE-27147
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Major
>
> No alternative option in hbck2 to fix empty regions.  hbck1 equivalent is 
> "-fixEmptyMetaCells".  
> "Try to fix hbase:meta entries not referencing any region (empty 
> REGIONINFO_QUALIFIER rows)"
>  
> NOTE: This is an inconsistent meta bug. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-27147) Add a hbck2 option to clear emptyRegion from meta

2022-06-21 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-27147:
---
Component/s: hbck2

> Add a hbck2 option to clear emptyRegion from meta
> -
>
> Key: HBASE-27147
> URL: https://issues.apache.org/jira/browse/HBASE-27147
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Reporter: Karthik Palanisamy
>Priority: Major
>
> No alternative option in hbck2 to fix empty regions.  hbck1 equivalent is 
> "-fixEmptyMetaCells".  
> "Try to fix hbase:meta entries not referencing any region (empty 
> REGIONINFO_QUALIFIER rows)"
>  
> NOTE: This is an inconsistent meta bug. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (HBASE-27147) Add a hbck2 option to clear emptyRegion from meta

2022-06-21 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-27147:
--

 Summary: Add a hbck2 option to clear emptyRegion from meta
 Key: HBASE-27147
 URL: https://issues.apache.org/jira/browse/HBASE-27147
 Project: HBase
  Issue Type: Bug
Reporter: Karthik Palanisamy


No alternative option in hbck2 to fix empty regions.  hbck1 equivalent is 
"-fixEmptyMetaCells".  

"Try to fix hbase:meta entries not referencing any region (empty 
REGIONINFO_QUALIFIER rows)"

 

NOTE: This is an inconsistent meta bug. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HBASE-23044) CatalogJanitor#cleanMergeQualifier may clean wrong parent regions

2021-02-03 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17278417#comment-17278417
 ] 

Karthik Palanisamy commented on HBASE-23044:


Yes [~rsanwal]. Sometimes this causes silent data loss. Two users reported this 
issue recently, but we somehow recovered the data from the archive.

We requested that users disable NORMALIZATION or any aggressive manual region 
merges.

> CatalogJanitor#cleanMergeQualifier may clean wrong parent regions
> -
>
> Key: HBASE-23044
> URL: https://issues.apache.org/jira/browse/HBASE-23044
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.6, 2.2.1, 2.1.6
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.7, 2.2.2
>
>
> 2019-09-17,19:42:40,539 INFO [PEWorker-1] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223589, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[81b6fc3c560a00692bc7c3cd266a626a], 
> [472500358997b0dc8f0002ec86593dcf] in 2.6470sec
> 2019-09-17,19:59:54,179 INFO [PEWorker-6] 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=1223651, 
> state=SUCCESS; GCMultipleMergedRegionsProcedure 
> child={color:red}647600d28633bb2fe06b40682bab0593{color}, 
> parents:[9c52f24e0a9cc9b4959c1ebdfea29d64], 
> [a623f298870df5581bcfae7f83311b33] in 1.0340sec
> The child is same region {color:red}647600d28633bb2fe06b40682bab0593{color} 
> but the parent regions are different.
> MergeTableRegionProcedure#prepareMergeRegion will try to cleanMergeQualifier 
> for the regions to merge.
> {code:java}
> for (RegionInfo ri: this.regionsToMerge) {
>   if (!catalogJanitor.cleanMergeQualifier(ri)) {
> String msg = "Skip merging " + 
> RegionInfo.getShortNameToLog(regionsToMerge) +
> ", because parent " + RegionInfo.getShortNameToLog(ri) + " has a 
> merge qualifier";
> LOG.warn(msg);
> throw new MergeRegionException(msg);
>   }
> {code}
> If region A and B merge to C, region D and E merge to F. When merge C and F, 
> it will try to cleanMergeQualifier for C and F. 
> catalogJanitor.cleanMergeQualifier for region C succeed but 
> catalogJanitor.cleanMergeQualifier for region F failed as there are 
> references in region F.
> When merge C and F again, it will try to cleanMergeQualifier for C and F 
> again. But MetaTableAccessor.getMergeRegions will get wrong parents now. It 
> use scan with filter to scan result. But region C's MergeQualifier already 
> was deleted before. Then the scan will return a wrong result, may be anther 
> region..
> {code:java}
> public boolean cleanMergeQualifier(final RegionInfo region) throws 
> IOException {
> // Get merge regions if it is a merged region and already has merge 
> qualifier
> List parents = 
> MetaTableAccessor.getMergeRegions(this.services.getConnection(),
> region.getRegionName());
> if (parents == null || parents.isEmpty()) {
>   // It doesn't have merge qualifier, no need to clean
>   return true;
> }
> return cleanMergeRegion(region, parents);
>   }
> public static List getMergeRegions(Connection connection, byte[] 
> regionName)
>   throws IOException {
> return getMergeRegions(getMergeRegionsRaw(connection, regionName));
>   }
> private static Cell [] getMergeRegionsRaw(Connection connection, byte [] 
> regionName)
>   throws IOException {
> Scan scan = new Scan().withStartRow(regionName).
> setOneRowLimit().
> readVersions(1).
> addFamily(HConstants.CATALOG_FAMILY).
> setFilter(new QualifierFilter(CompareOperator.EQUAL,
>   new RegexStringComparator(HConstants.MERGE_QUALIFIER_PREFIX_STR+ 
> ".*")));
> try (Table m = getMetaHTable(connection); ResultScanner scanner = 
> m.getScanner(scan)) {
>   // Should be only one result in this scanner if any.
>   Result result = scanner.next();
>   if (result == null) {
> return null;
>   }
>   // Should be safe to just return all Cells found since we had filter in 
> place.
>   // All values should be RegionInfos or something wrong.
>   return result.rawCells();
> }
>   }
> {code}
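The wrong-parent behavior described above can be illustrated without HBase (a sorted map stands in for the meta table; all names here are illustrative): a one-row-limit scan that starts at a row key whose row was already deleted silently returns the next row, i.e. a different region.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of why scanning meta from a deleted row key can
// return another region's merge qualifiers.
public class MergeQualifierScanSketch {
    static final TreeMap<String, String> meta = new TreeMap<>();

    // Mimics Scan.withStartRow(row).setOneRowLimit(): the first row >= startRow.
    static Map.Entry<String, String> scanOneRowFrom(String startRow) {
        return meta.ceilingEntry(startRow);
    }

    public static void main(String[] args) {
        meta.put("regionC", "mergeA,mergeB"); // C's merge qualifiers
        meta.put("regionF", "mergeD,mergeE"); // F's merge qualifiers
        // First cleanMergeQualifier(C) succeeds and deletes C's row:
        meta.remove("regionC");
        // A later scan from C's row key silently lands on F's row instead:
        Map.Entry<String, String> r = scanOneRowFrom("regionC");
        System.out.println(r.getKey()); // prints "regionF" -- the wrong region
    }
}
```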



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24900) Make retain assignment configurable during SCP

2021-01-28 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17274147#comment-17274147
 ] 

Karthik Palanisamy commented on HBASE-24900:


Thank you [~pankajkumar]. This is a super important Jira. 

I was checking one of my CDP clusters where regions were never retained across 
a graceful HBase stop and start. It looks like this problem started after the 
https://issues.apache.org/jira/browse/HBASE-23035 change. 

On every SCP, the call sets forceNewPlan=true 
(ServerCrashProcedure#assignRegions), which will assign the region to a new RS.
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L536]
{code}
2021-01-26 05:13:02,455 INFO 
org.apache.hadoop.hbase.master.assignment.TransitRegionStateProcedure: Starting 
pid=9, ppid=2, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=atlas_janus, 
region=e4e72a9453107e1487f2fa79a03c8158, ASSIGN; rit=OPEN, location=null; 
forceNewPlan=true, retain=false{code}
Users started reporting data locality problems due to this issue.

> Make retain assignment configurable during SCP
> --
>
> Key: HBASE-24900
> URL: https://issues.apache.org/jira/browse/HBASE-24900
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Affects Versions: 3.0.0-alpha-1, 2.3.1, 2.1.9, 2.2.5
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> HBASE-23035 change the "retain" assignment to round-robin assignment during 
> SCP which will make the failover faster and surely improve the availability, 
> but this will impact the scan performance in non-cloud scenario.
> This jira will make this assignment plan configurable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-17910) Use separated StoreFileReader for streaming read

2020-11-04 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226438#comment-17226438
 ] 

Karthik Palanisamy edited comment on HBASE-17910 at 11/5/20, 1:17 AM:
--

[~anoop.hbase] [~zhangduo] [~busbey] [~elserj]

Recently, one of our users reported high CPU usage on the NameNode. During 
troubleshooting, we found millions of continuous OPEN and getFileInfo calls to 
the NameNode, caused by readType STREAM, which creates multiple scanners. I 
understand we switch the readType to STREAM automatically, but I don't find any 
flag to disable STREAM. I am curious whether that is the expected design.  

The switch happens below:
 * if the scan becomes a get,
 * if the scan has a start row and stop row,
 * if the scan keeps running for a long time, i.e. kv bytesRead > preadMaxBytes 
(the default preadMaxBytes is 4*blockSize, which is 4*64 KB).

This spike could occur on every cluster, but users might not have noticed it 
yet. At the moment, I am trying to work around it with 
"hbase.storescanner.pread.max.bytes" and 
"hbase.cells.scanned.per.heartbeat.check". Will post more updates next week 
with the root cause. 
{code:java}
...
private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo,
int numColumns, long readPt, boolean cacheBlocks, ScanType scanType) {
..
  get = scan.isGetScan();
..
  this.maxRowSize = scanInfo.getTableMaxRowSize();
  if (get) {
this.readType = Scan.ReadType.PREAD;
this.scanUsePread = true;
  }
...




public void shipped() throws IOException {
..
  clearAndClose(scannersForDelayedClose);
  if (this.heap != null) {
..
trySwitchToStreamRead();
  }
}



..
void trySwitchToStreamRead() {
  if (readType != Scan.ReadType.DEFAULT || !scanUsePread || closing ||
  heap.peek() == null || bytesRead < preadMaxBytes) {
return;
  }
  LOG.debug("Switch to stream read (scanned={} bytes) of {}", bytesRead,
  this.store.getColumnFamilyName());
..
}

{code}
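As a rough model of those switch conditions (assumed from the snippet above, not the exact StoreScanner logic), the PREAD-to-STREAM decision can be sketched as a pure predicate:

```java
// Hedged sketch: a scan switches to STREAM only when the read type is
// DEFAULT, pread is in use, and bytesRead reaches preadMaxBytes
// (default 4 * blockSize). Gets are pinned to PREAD and never switch.
public class StreamSwitchSketch {
    static boolean shouldSwitchToStream(boolean defaultReadType, boolean usePread,
                                        long bytesRead, long blockSize) {
        long preadMaxBytes = 4 * blockSize; // assumed default, 4 * 64 KB = 256 KB
        return defaultReadType && usePread && bytesRead >= preadMaxBytes;
    }

    public static void main(String[] args) {
        long blockSize = 64 * 1024;
        System.out.println(shouldSwitchToStream(true, true, 300_000, blockSize));  // true
        System.out.println(shouldSwitchToStream(true, true, 100_000, blockSize));  // false
        // A get forces PREAD (defaultReadType == false here), so no switch:
        System.out.println(shouldSwitchToStream(false, true, 300_000, blockSize)); // false
    }
}
```

Raising "hbase.storescanner.pread.max.bytes", as attempted above, effectively pushes the third condition further out.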


was (Author: kpalanisamy):
[~anoop.hbase] [~zhangduo] [~busbey] [~elserj]

Recently, one of our user reported high CPU usage in namenode. On our 
troubleshooting, found millions of OPEN and GetFileInfo calls continuously to 
namenode, is because of readType STREAM which creates multiple scanners. I 
understand we switch readType to STREAM automatically but I don't find any flag 
to disable STREAM. I am curious if that is the expected design?  

The switch happens below,

If the scan become get. 
 if the scan with startrow and stoprow.
 if the scan keeps running for long time. I.e  kv bytesRead > preadMaxBytes. 
(Default preadMaxBytes is 4*blockSize, which is 4*64KB).

Maybe this spike could be at every cluster but the user might not be noticed 
yet.  At this moment, I am trying to work around with 
"hbase.storescanner.pread.max.bytes" and 
"hbase.cells.scanned.per.heartbeat.check".  Will post more updates next week 
with the root cause. 
{code:java}
this(family, minVersions, maxVersions, ttl, keepDeletedCells, 
timeToPurgeDeletes, comparator,
conf.getLong(HConstants.TABLE_MAX_ROWSIZE_KEY, 
HConstants.TABLE_MAX_ROWSIZE_DEFAULT),
conf.getBoolean("hbase.storescanner.use.pread", false), 
getCellsPerTimeoutCheck(conf),
conf.getBoolean(StoreScanner.STORESCANNER_PARALLEL_SEEK_ENABLE, false),
conf.getLong(StoreScanner.STORESCANNER_PREAD_MAX_BYTES, 4 * blockSize), 
newVersionBehavior);



...
private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo,
int numColumns, long readPt, boolean cacheBlocks, ScanType scanType) {
..
  get = scan.isGetScan();
..
  this.maxRowSize = scanInfo.getTableMaxRowSize();
  if (get) {
this.readType = Scan.ReadType.PREAD;
this.scanUsePread = true;
  }
...{code}

> Use separated StoreFileReader for streaming read
> 
>
> Key: HBASE-17910
> URL: https://issues.apache.org/jira/browse/HBASE-17910
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0
>
>
> For now we have already supportted using private readers for compaction, by 
> creating a new StoreFile copy. I think a better way is to allow creating 
> multiple readers from a single StoreFile instance, thus we can avoid the ugly 
> cloning, and the reader can also be used for streaming scan, not only for 
> compaction.
> The reason we want to do this is that, we found a read amplification when 
> using short circult read. {{BlockReaderLocal}} will use an internal buffer to 
> read data first, the buffer size is based on the configured buffer size and 
> the readahead option in CachingStrategy. For normal pread request, we should 
> just bypass the buffer, this can be achieved by setting readahead to 0. But 
> for streaming read I 

[jira] [Comment Edited] (HBASE-17910) Use separated StoreFileReader for streaming read

2020-11-04 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17226438#comment-17226438
 ] 

Karthik Palanisamy edited comment on HBASE-17910 at 11/5/20, 1:15 AM:
--

[~anoop.hbase] [~zhangduo] [~busbey] [~elserj]

Recently, one of our users reported high CPU usage on the NameNode. During 
troubleshooting, we found millions of continuous OPEN and getFileInfo calls to 
the NameNode, caused by readType STREAM, which creates multiple scanners. I 
understand we switch the readType to STREAM automatically, but I don't find any 
flag to disable STREAM. I am curious whether that is the expected design.  

The switch happens below:
 * if the scan becomes a get,
 * if the scan has a start row and stop row,
 * if the scan keeps running for a long time, i.e. kv bytesRead > preadMaxBytes 
(the default preadMaxBytes is 4*blockSize, which is 4*64 KB).

This spike could occur on every cluster, but users might not have noticed it 
yet. At the moment, I am trying to work around it with 
"hbase.storescanner.pread.max.bytes" and 
"hbase.cells.scanned.per.heartbeat.check". Will post more updates next week 
with the root cause. 
{code:java}
this(family, minVersions, maxVersions, ttl, keepDeletedCells, 
timeToPurgeDeletes, comparator,
conf.getLong(HConstants.TABLE_MAX_ROWSIZE_KEY, 
HConstants.TABLE_MAX_ROWSIZE_DEFAULT),
conf.getBoolean("hbase.storescanner.use.pread", false), 
getCellsPerTimeoutCheck(conf),
conf.getBoolean(StoreScanner.STORESCANNER_PARALLEL_SEEK_ENABLE, false),
conf.getLong(StoreScanner.STORESCANNER_PREAD_MAX_BYTES, 4 * blockSize), 
newVersionBehavior);



...
private StoreScanner(HStore store, Scan scan, ScanInfo scanInfo,
int numColumns, long readPt, boolean cacheBlocks, ScanType scanType) {
..
  get = scan.isGetScan();
..
  this.maxRowSize = scanInfo.getTableMaxRowSize();
  if (get) {
this.readType = Scan.ReadType.PREAD;
this.scanUsePread = true;
  }
...{code}
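There is no flag to disable the STREAM switch outright, but the workaround configs mentioned above can raise the switch threshold so scans stay on pread. A sketch of an hbase-site.xml fragment (the 256 MB value is purely illustrative, not a recommendation):

```xml
<!-- Raise the pread->stream switch threshold
     (default is 4 * blockSize, i.e. 256 KB with 64 KB blocks). -->
<property>
  <name>hbase.storescanner.pread.max.bytes</name>
  <value>268435456</value>
</property>
```

With a threshold this large, kv bytesRead rarely exceeds preadMaxBytes, so the scanner keeps using pread instead of opening new stream readers.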



> Use separated StoreFileReader for streaming read
> 
>
> Key: HBASE-17910
> URL: https://issues.apache.org/jira/browse/HBASE-17910
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.0.0
>
>
> For now we have already supported using private readers for compaction, by 
> creating a new StoreFile copy. I think a better way is to allow creating 
> multiple readers from a single StoreFile instance, thus we can avoid the ugly 
> cloning, and the reader can also be used for streaming scan, not only for 
> compaction.
> The reason we want to do this is that we found read amplification when 
> using short-circuit read. {{BlockReaderLocal}} will use an internal buffer to 
> read data first, the buffer size is based on the configured buffer size and 
> the readahead option in CachingStrategy. For normal pread requests, we should 
> just bypass the buffer, which can be achieved by setting readahead to 0. But 
> for streaming read I think the buffer is still somewhat useful, so we need to 
> use a different FSDataInputStream for pread and streaming read.
> And one more thing: we can also remove the streamLock if streaming 
> read always uses its own reader.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Commented] (HBASE-23915) Backport HBASE-23553 to branch-2.1

2020-03-01 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048517#comment-17048517
 ] 

Karthik Palanisamy commented on HBASE-23915:


I think PR #1230 was automatically added to the HBASE-23553 Jira instead of 
HBASE-23915. It looks like GitHub auto-links based on the Jira number in the 
commit message, but in my case it is a backport request.

[~busbey]  [~meiyi] Can you please review this backport request?

> Backport HBASE-23553 to branch-2.1
> --
>
> Key: HBASE-23915
> URL: https://issues.apache.org/jira/browse/HBASE-23915
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> HBASE-23553 is fixed in branch-2.2 but there are some conflicts in the 
> branch-2.1 backport. HBASE-23553 requires HBASE-6028 (the turning compaction 
> on/off feature), and HBASE-6028 is used only in the test case 
> "TestTableSnapshotScanner#testMergeRegion" for this issue. I thought to modify 
> only the HBASE-23553 test case without backporting HBASE-6028; otherwise we may 
> end up with several other conflicts in the HBASE-6028 backport.





[jira] [Created] (HBASE-23915) Backport HBASE-23553 to branch-2.1

2020-03-01 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23915:
--

 Summary: Backport HBASE-23553 to branch-2.1
 Key: HBASE-23915
 URL: https://issues.apache.org/jira/browse/HBASE-23915
 Project: HBase
  Issue Type: Bug
  Components: snapshots
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


HBASE-23553 is fixed in branch-2.2 but there are some conflicts in the 
branch-2.1 backport. HBASE-23553 requires HBASE-6028 (the turning compaction 
on/off feature), and HBASE-6028 is used only in the test case 
"TestTableSnapshotScanner#testMergeRegion" for this issue. I thought to modify 
only the HBASE-23553 test case without backporting HBASE-6028; otherwise we may 
end up with several other conflicts in the HBASE-6028 backport.





[jira] [Commented] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-10 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993220#comment-16993220
 ] 

Karthik Palanisamy commented on HBASE-23360:


[~busbey] Can we push this small fix for the help guide? 

> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}





[jira] [Comment Edited] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-04 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988351#comment-16988351
 ] 

Karthik Palanisamy edited comment on HBASE-23360 at 12/5/19 1:20 AM:
-

{code:java}
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', LIMIT 
=> '10M/sec'{code}
When users set a quota, they pass THROTTLE_TYPE as either READ or WRITE, but 
internally we transform the given THROTTLE_TYPE into one of the following types 
based on the LIMIT unit.

Write operation THROTTLE_TYPE:
{code:java}
WRITE_NUMBER
WRITE_CAPACITY_UNIT
WRITE_SIZE{code}
 
{code:java}
quotas.rb

def _parse_limit(str_limit, type_cls, type)
  str_limit = str_limit.downcase
  match = /^(\d+)(req|cu|[bkmgtp])\/(sec|min|hour|day)$/.match(str_limit)
  if match
limit = match[1].to_i
if match[2] == 'req'
  type = type_cls.valueOf(type + '_NUMBER')
elsif match[2] == 'cu'
  type = type_cls.valueOf(type + '_CAPACITY_UNIT')
else
  limit = _size_from_str(limit, match[2])
  type = type_cls.valueOf(type + '_SIZE')
end
{code}
So when unsetting a throttle, users will expect to use the same READ/WRITE 
THROTTLE_TYPE, but we don't apply that transformation while unsetting, because 
the LIMIT value is NONE. Here, the user needs to use the exact internal 
THROTTLE_TYPE in order to remove the quota; otherwise the command will fail.
{code:java}
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', LIMIT 
=> NONE{code}
 

Or we need to introduce new logic for LIMIT in order to parse the appropriate 
THROTTLE_TYPE. Thoughts?
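The transformation quoted above can be sketched as a standalone Ruby function (a simplified, hypothetical rewrite; parse_limit and SIZE_UNITS here are illustrative names, not the shell's actual API):

```ruby
# Maps the size letters accepted by the quotas.rb regex to byte multipliers.
SIZE_UNITS = { 'b' => 1, 'k' => 1024, 'm' => 1024**2, 'g' => 1024**3,
               't' => 1024**4, 'p' => 1024**5 }.freeze

# type is the user-facing THROTTLE_TYPE ('READ' or 'WRITE'); returns the
# resolved internal type suffix and the limit in base units.
def parse_limit(str_limit, type)
  match = %r{^(\d+)(req|cu|[bkmgtp])/(sec|min|hour|day)$}.match(str_limit.downcase)
  raise ArgumentError, "invalid limit: #{str_limit}" unless match
  limit = match[1].to_i
  case match[2]
  when 'req' then ["#{type}_NUMBER", limit]                      # request-count throttle
  when 'cu'  then ["#{type}_CAPACITY_UNIT", limit]               # capacity-unit throttle
  else            ["#{type}_SIZE", limit * SIZE_UNITS[match[2]]] # byte-size throttle
  end
end

parse_limit('10M/sec', 'WRITE')   # => ["WRITE_SIZE", 10485760]
parse_limit('100req/min', 'READ') # => ["READ_NUMBER", 100]
```

This illustrates why LIMIT => '10M/sec' with THROTTLE_TYPE => WRITE is stored as WRITE_SIZE, while LIMIT => NONE never reaches this parsing, so the unset command has to name the internal type directly.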

 



> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}









[jira] [Commented] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-04 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988330#comment-16988330
 ] 

Karthik Palanisamy commented on HBASE-23360:


Users may not be aware of how to unset a throttle because we have not provided 
any info about THROTTLE_TYPE. I think we should give examples for all 
THROTTLE_TYPEs in the help command.
{code:java}
Unthrottle number of requests:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_NUMBER, USER => 
'u1', LIMIT => 'NONE'
Unthrottle number of read requests:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => READ_NUMBER, USER => 'u1', 
LIMIT => NONE
Unthrottle number of write requests:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 'u1', 
LIMIT => NONE


Unthrottle data size:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, USER => 'u1', 
LIMIT => 'NONE'
Unthrottle read data size:
hbase> set_quota TYPE => THROTTLE, USER => 'u1', THROTTLE_TYPE => READ_SIZE, 
LIMIT => 'NONE'
Unthrottle write data size:
hbase> set_quota TYPE => THROTTLE, USER => 'u1', THROTTLE_TYPE => WRITE_SIZE, 
LIMIT => 'NONE'


Unthrottle capacity unit:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_CAPACITY_UNIT, USER 
=> 'u1', LIMIT => 'NONE'
Unthrottle read capacity unit:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => READ_CAPACITY_UNIT, USER => 
'u1', LIMIT => 'NONE'
Unthrottle write capacity unit:
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_CAPACITY_UNIT, USER 
=> 'u1', LIMIT => 'NONE'

{code}

> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}





[jira] [Created] (HBASE-23361) [UI] Limit two decimals even for total average load

2019-12-03 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23361:
--

 Summary: [UI] Limit two decimals even for total average load
 Key: HBASE-23361
 URL: https://issues.apache.org/jira/browse/HBASE-23361
 Project: HBase
  Issue Type: Improvement
  Components: UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy
 Attachments: Screen Shot 2019-12-03 at 3.17.19 PM.png

We somehow missed limiting decimal points in the total average load.

 

!Screen Shot 2019-12-03 at 3.17.19 PM.png!





[jira] [Updated] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-03 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23360:
---
Issue Type: Bug  (was: Improvement)

> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}





[jira] [Updated] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-03 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23360:
---
Affects Version/s: 3.0.0

> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}





[jira] [Updated] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-03 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23360:
---
Component/s: shell

> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}





[jira] [Updated] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-03 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23360:
---
Status: Patch Available  (was: Open)

> [CLI] Fix help command "set_quota"
> --
>
> Key: HBASE-23360
> URL: https://issues.apache.org/jira/browse/HBASE-23360
> Project: HBase
>  Issue Type: Improvement
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> To remove a quota by throttle_type,
> {code:java}
> hbase> help "set_quota"
> ...
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', 
> LIMIT => NONE
> 
> {code}
> but the actual command should be, 
> {code:java}
> hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 
> 'u1', LIMIT => NONE{code}





[jira] [Created] (HBASE-23360) [CLI] Fix help command "set_quota"

2019-12-03 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23360:
--

 Summary: [CLI] Fix help command "set_quota"
 Key: HBASE-23360
 URL: https://issues.apache.org/jira/browse/HBASE-23360
 Project: HBase
  Issue Type: Improvement
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


To remove a quota by throttle_type,
{code:java}
hbase> help "set_quota"
...
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', LIMIT 
=> NONE

{code}
but the actual command should be, 
{code:java}
hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE_NUMBER, USER => 'u1', 
LIMIT => NONE{code}





[jira] [Comment Edited] (HBASE-23312) HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-25 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982170#comment-16982170
 ] 

Karthik Palanisamy edited comment on HBASE-23312 at 11/26/19 6:33 AM:
--

[~elserj]  [~krisden] 

Just curious: going forward, do we really want to depend on the Hadoop util 
org.apache.hadoop.security.SecurityUtil? Can we use the HBase util 
org.apache.hadoop.hbase.security.SecurityUtil instead? But yes, we would need 
to introduce a similar function, something like 
[ref|[https://github.com/karthikhw/hbase/blob/HBASE-23343/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java#L54]]

 
{code:java}
public static String getServicePrincipalWithFQDN(String principal) throws 
UnknownHostException {
  if (principal != null && !principal.isEmpty()) {
String[] components = principal.split("[/@]");
    if (components.length == 3 && components[1].equals("_HOST")) {
  return components[0] + '/' + InetAddress.getLocalHost().getHostName() + 
'@' + components[2];
}
  }
  return principal;
}
{code}


was (Author: kpalanisamy):
[~elserj]  [~krisden] 

Just curious. going forward, do we really depend on Hadoop utils 
org.apache.hadoop.security.SecurityUtil? Can we use HBase util 
org.apache.hadoop.hbase.security.SecurityUtil instead? but yeah we need to 
introduce same function something similar 
[ref|[https://github.com/karthikhw/hbase/blob/HBASE-23343/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java#L54]]

> HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible
> 
>
> Key: HBASE-23312
> URL: https://issues.apache.org/jira/browse/HBASE-23312
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 3.0.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.1.7, 
> 2.1.8
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-23312.master.001.patch
>
>
> HBASE-19852 is not backwards compatible since it now requires the SPNEGO 
> thrift configs. I haven't seen anything in Apache HBase about changing this 
> so that the older configs still work with a merged keytab. (fall back to the 
> non SPNEGO specific principal/keytab configs)
> I wrote the original patch in HBASE-19852 and with hindsight being 20/20, I 
> think this section of could be extended to fall back to not requiring the 
> additional configs.
> https://github.com/apache/hbase/blame/master/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java#L78
> Supporting the older configs allows upgrade from HBase 1.x to 2.x without 
> needing to change the configs ahead of time. I'll make sure to log a 
> deprecation warning if the older configs are used.





[jira] [Commented] (HBASE-23312) HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible

2019-11-25 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982170#comment-16982170
 ] 

Karthik Palanisamy commented on HBASE-23312:


[~elserj]  [~krisden] 

Just curious: going forward, do we really want to depend on the Hadoop util 
org.apache.hadoop.security.SecurityUtil? Can we use the HBase util 
org.apache.hadoop.hbase.security.SecurityUtil instead? We would need to 
introduce a similar function, something like 
[ref|[https://github.com/karthikhw/hbase/blob/HBASE-23343/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java#L54]]

> HBase Thrift SPNEGO configs (HBASE-19852) should be backwards compatible
> 
>
> Key: HBASE-23312
> URL: https://issues.apache.org/jira/browse/HBASE-23312
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 3.0.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.6, 2.1.7, 
> 2.1.8
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
> Attachments: HBASE-23312.master.001.patch
>
>
> HBASE-19852 is not backwards compatible since it now requires the SPNEGO 
> thrift configs. I haven't seen anything in Apache HBase about changing this 
> so that the older configs still work with a merged keytab. (fall back to the 
> non SPNEGO specific principal/keytab configs)
> I wrote the original patch in HBASE-19852 and with hindsight being 20/20, I 
> think this section of could be extended to fall back to not requiring the 
> additional configs.
> https://github.com/apache/hbase/blame/master/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java#L78
> Supporting the older configs allows upgrade from HBase 1.x to 2.x without 
> needing to change the configs ahead of time. I'll make sure to log a 
> deprecation warning if the older configs are used.





[jira] [Commented] (HBASE-23343) Thrift Resolve FQDN from SPNEGO principal

2019-11-25 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982156#comment-16982156
 ] 

Karthik Palanisamy commented on HBASE-23343:


My bad; I realized this after hitting the merge conflict. It looks like 
HBASE-23312 was merged today, and this improvement was also fixed as part of 
HBASE-23312. Thanks [~krisden] [~elserj].

> Thrift Resolve FQDN from SPNEGO principal
> -
>
> Key: HBASE-23343
> URL: https://issues.apache.org/jira/browse/HBASE-23343
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> We need to manage different config groups in Ambari to run multiple thrift 
> servers. This is because the hbase thrift server is not able to resolve the 
> _HOST pattern from hbase.thrift.spnego.principal, so we explicitly specify 
> the hostname for each group now.
>  





[jira] [Created] (HBASE-23343) Thrift Resolve FQDN from SPNEGO principal

2019-11-25 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23343:
--

 Summary: Thrift Resolve FQDN from SPNEGO principal
 Key: HBASE-23343
 URL: https://issues.apache.org/jira/browse/HBASE-23343
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


We need to manage different config groups in Ambari to run multiple thrift 
servers. This is because the hbase thrift server is not able to resolve the 
_HOST pattern from hbase.thrift.spnego.principal, so we explicitly specify the 
hostname for each group now.

 





[jira] [Commented] (HBASE-23338) Prevent NPE running rsgroup balancer

2019-11-24 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16981333#comment-16981333
 ] 

Karthik Palanisamy commented on HBASE-23338:


Let me double-check again. It turns green after a service reboot. 

> Prevent NPE running rsgroup balancer
> 
>
> Key: HBASE-23338
> URL: https://issues.apache.org/jira/browse/HBASE-23338
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> I think we should prevent the NPE even if a user triggers the balancer on an 
> rsgroup that contains only a single regionserver. Sometimes users trigger the 
> balancer on all groups without even knowing the group info.
> {code:java}
> 2019-11-24
> 22:50:29,868 ERROR [RpcServer.default.FPBQ.Fifo.handler=22,queue=1,port=16000]
> ipc.RpcServer: Unexpected throwable object
> java.lang.NullPointerException
> {code}
> {code:java}
> hbase(main):002:0> balance_rsgroup 'shva'
> ERROR: java.io.IOException at 
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)Caused
>  by: java.lang.NullPointerException
> For usage try 'help "balance_rsgroup"'{code}





[jira] [Updated] (HBASE-23336) [CLI] Incorrect row(s) count "clear_deadservers"

2019-11-24 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23336:
---
Component/s: shell

> [CLI] Incorrect row(s) count  "clear_deadservers"
> -
>
> Key: HBASE-23336
> URL: https://issues.apache.org/jira/browse/HBASE-23336
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0, 2.1.0, 2.2.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> [HBASE-15849|https://issues.apache.org/jira/browse/HBASE-15849] simplified 
> the format of the command's total-runtime output, but the clear_deadservers 
> caller was not updated, so it prints the current timestamp instead of the 
> number of rows.
>  
> {code:java}
> hbase(main):015:0>  clear_deadservers 
> 'kpalanisamy-apache302.openstacklocal,16020'
> SERVERNAME
> kpalanisamy-apache302.openstacklocal,16020,16020
> 1574585488 row(s)
> Took 0.0145 seconds
> {code}
>  
>  





[jira] [Created] (HBASE-23338) Prevent NPE running rsgroup balancer

2019-11-24 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23338:
--

 Summary: Prevent NPE running rsgroup balancer
 Key: HBASE-23338
 URL: https://issues.apache.org/jira/browse/HBASE-23338
 Project: HBase
  Issue Type: Bug
  Components: rsgroup
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


I think we should prevent the NPE even if a user triggers the balancer on an 
rsgroup that contains only a single regionserver. Sometimes users trigger the 
balancer on all groups without even knowing the group info.
{code:java}
2019-11-24
22:50:29,868 ERROR [RpcServer.default.FPBQ.Fifo.handler=22,queue=1,port=16000]
ipc.RpcServer: Unexpected throwable object
java.lang.NullPointerException
{code}
{code:java}
hbase(main):002:0> balance_rsgroup 'shva'
ERROR: java.io.IOException at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:433) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)Caused 
by: java.lang.NullPointerException
For usage try 'help "balance_rsgroup"'{code}
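The fix suggested above amounts to a defensive null check before using the balancer's plan list. The following is only a sketch with hypothetical names (`plansFor`, `applyPlans`) simulating the situation, not the actual rsgroup balancer code:

```java
import java.util.Collections;
import java.util.List;

public class BalancerGuard {
  // Simulates a balancer that returns null for a single-server group,
  // since there is nothing to move.
  static List<String> plansFor(int serverCount) {
    return serverCount <= 1 ? null : Collections.singletonList("move-region-plan");
  }

  // Guarded caller: treat a null plan list as "already balanced" instead of
  // dereferencing it and throwing a NullPointerException.
  static int applyPlans(int serverCount) {
    List<String> plans = plansFor(serverCount);
    if (plans == null || plans.isEmpty()) {
      return 0; // nothing to balance
    }
    return plans.size();
  }

  public static void main(String[] args) {
    System.out.println(applyPlans(1)); // single-server group: 0 plans, no NPE
    System.out.println(applyPlans(3)); // 1 plan
  }
}
```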





[jira] [Created] (HBASE-23336) [CLI] Incorrect row(s) count "clear_deadservers"

2019-11-24 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23336:
--

 Summary: [CLI] Incorrect row(s) count  "clear_deadservers"
 Key: HBASE-23336
 URL: https://issues.apache.org/jira/browse/HBASE-23336
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.2.0, 2.1.0, 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


[HBASE-15849|https://issues.apache.org/jira/browse/HBASE-15849] simplified the 
format of the command's total-runtime output, but the clear_deadservers caller 
was not updated, so it prints the current timestamp instead of the number of 
rows. 

 
{code:java}
hbase(main):015:0>  clear_deadservers 
'kpalanisamy-apache302.openstacklocal,16020'
SERVERNAME
kpalanisamy-apache302.openstacklocal,16020,16020
1574585488 row(s)
Took 0.0145 seconds
{code}
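The "1574585488" above is recognizably a Unix epoch timestamp in seconds (late November 2019), consistent with a runtime value being fed into the row-count footer. A standalone sketch of that mix-up; the `footer` helper is hypothetical, not the shell's actual formatter:

```java
public class RowCountFooter {
  // Hypothetical helper mirroring the shell's "N row(s)" footer line.
  static String footer(long rows) {
    return rows + " row(s)";
  }

  public static void main(String[] args) {
    long rowCount = 1;                                     // what should be printed
    long epochSeconds = System.currentTimeMillis() / 1000; // what was printed instead
    System.out.println(footer(rowCount));     // "1 row(s)"
    System.out.println(footer(epochSeconds)); // e.g. "1574585488 row(s)"
  }
}
```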
 

 





[jira] [Comment Edited] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-21 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979884#comment-16979884
 ] 

Karthik Palanisamy edited comment on HBASE-23237 at 11/22/19 6:02 AM:
--

Submitted two PRs [~elserj].

Both branch-2.1 and branch-2.2 have conflicts.

PR #865 => branch-2.2.

PR #866 => branch-2.1.


was (Author: kpalanisamy):
Submitter two PRs [~elserj].

Both branch-2.1 and branch-2.2 has conflicts.

PR #865 => branch-2.2.

PR #866 => branch-2.1.

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Comment Edited] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-21 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979884#comment-16979884
 ] 

Karthik Palanisamy edited comment on HBASE-23237 at 11/22/19 6:01 AM:
--

Submitted two PRs [~elserj].

Both branch-2.1 and branch-2.2 have conflicts.

PR #865 => branch-2.2.

PR #866 => branch-2.1.


was (Author: kpalanisamy):
Submitter two PRs [~elserj].

Both branch-2.1 and branch-2.2 has conflicts.

PR #865 => branch-2.2.

PR #866 => branhc-2.1.

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-21 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979884#comment-16979884
 ] 

Karthik Palanisamy commented on HBASE-23237:


Submitted two PRs [~elserj].

Both branch-2.1 and branch-2.2 have conflicts.

PR #865 => branch-2.2.

PR #866 => branch-2.1.

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-21 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979859#comment-16979859
 ] 

Karthik Palanisamy commented on HBASE-23237:


 Thank you very much [~elserj].  I will submit it shortly. 

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-15 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16975624#comment-16975624
 ] 

Karthik Palanisamy commented on HBASE-23237:


Please ignore PR #814. It has a merge conflict with 
[HBASE-23230|https://issues.apache.org/jira/browse/HBASE-23230].

Submitted new PR #834.

 

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-14 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16974768#comment-16974768
 ] 

Karthik Palanisamy commented on HBASE-23237:


[~zhangduo] [~gxcheng] Made the changes. We will now be computing accurate 
metrics for requestPerSecond, readRequestsCount and writeRequestsCount. 

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-12 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972655#comment-16972655
 ] 

Karthik Palanisamy commented on HBASE-23237:


Thank you very much [~zhangduo] and [~gxcheng]. I will do this change today. 

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Commented] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-10 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16971264#comment-16971264
 ] 

Karthik Palanisamy commented on HBASE-23237:


[~stack] [~gxcheng] I think it is good to show the last "requestsPerSecond" 
value during region transition. If we show 0, then we imply there are no active 
requests when in fact there are. In any case, new metrics will be re-computed 
within a short interval (hbase.regionserver.metrics.period=5000ms). I am 
submitting the PR now. 
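For context, a requests-per-second gauge of this kind is typically the delta between two cumulative counter samples divided by the metrics period; if the counter resets or moves backwards (for example during region transition), the raw delta goes negative. A minimal sketch, with hypothetical names, of clamping that case by carrying the last value forward:

```java
public class RequestRateGauge {
  private long lastCount = -1;
  private double lastRate = 0.0;

  // Computes requests/second from a cumulative counter sampled every periodMs.
  // A negative delta (counter reset) keeps the previous rate instead of
  // reporting a negative value, matching the "show the last value" approach.
  double update(long currentCount, long periodMs) {
    if (lastCount >= 0) {
      long delta = currentCount - lastCount;
      if (delta >= 0) {
        lastRate = delta / (periodMs / 1000.0);
      } // else: counter went backwards, keep lastRate
    }
    lastCount = currentCount;
    return lastRate;
  }

  public static void main(String[] args) {
    RequestRateGauge g = new RequestRateGauge();
    System.out.println(g.update(100, 5000)); // first sample: 0.0
    System.out.println(g.update(150, 5000)); // (150-100)/5s = 10.0
    System.out.println(g.update(20, 5000));  // reset detected: stays 10.0
  }
}
```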

 

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Priority: Major
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-23237) Negative 'Requests per Second' counts in UI

2019-11-10 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy reassigned HBASE-23237:
--

Assignee: Karthik Palanisamy

> Negative 'Requests per Second' counts in UI
> ---
>
> Key: HBASE-23237
> URL: https://issues.apache.org/jira/browse/HBASE-23237
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 2.2.2
>Reporter: Michael Stack
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: Screen Shot 2019-10-30 at 9.45.58 PM.png
>
>
> I see request per second showing with negative sign.
>  !Screen Shot 2019-10-30 at 9.45.58 PM.png! 





[jira] [Created] (HBASE-23263) NPE in Quotas.jsp

2019-11-06 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23263:
--

 Summary: NPE in Quotas.jsp
 Key: HBASE-23263
 URL: https://issues.apache.org/jira/browse/HBASE-23263
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


The QuotaManager is started after master initialization. If there are no 
online regionservers, the master will not be initialized, and accessing the 
Quota page will throw an NPE. 
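The natural fix for an NPE like this is to guard the page render on master initialization. The sketch below only simulates that guard pattern; the master/JSP types and the `render` helper are hypothetical stand-ins, not the actual quotas_jsp code:

```java
public class QuotasPageGuard {
  // Renders the quotas page body, bailing out early while the master (and
  // therefore its QuotaManager) is still initializing, instead of
  // dereferencing a null manager and throwing a NullPointerException.
  static String render(boolean masterInitialized) {
    if (!masterInitialized) {
      return "Master is initializing; quotas are not available yet.";
    }
    return "quota table";
  }

  public static void main(String[] args) {
    System.out.println(render(false)); // friendly message, no NPE
    System.out.println(render(true));  // normal page content
  }
}
```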

 

[http://172.26.70.200:16010/quotas.jsp]
{code:java}
HTTP ERROR 500
Problem accessing /quotas.jsp. 
Reason:    
 Server Error
Caused by:java.lang.NullPointerException
 at 
org.apache.hadoop.hbase.generated.master.quotas_jsp._jspService(quotas_jsp.java:58)
 at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
 at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
{code}





[jira] [Updated] (HBASE-23262) Cannot load Master UI

2019-11-05 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23262:
---
Description: 
If there are no online regionservers, the master UI can't be opened. This 
issue occurs when using the RSGroupAdminEndpoint coprocessor (RSGrouping). The 
master home page tries to load rsgroup info from the "hbase:rsgroup" table, but 
no regionservers are up and running. 

 

[http://172.26.70.200:16010|http://172.26.70.200:16010/]
{code:java}
HTTP ERROR 500

Problem accessing /master-status. 
Reason:    
 Server Error
Caused by:
java.io.UncheckedIOException: 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=2, exceptions:
...


Tue Nov
05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
replicaId=0

at
org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)

at
org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)

at
org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)

at
org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)

at
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:346)

at 
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:397)

at
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:388)

at
org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:79)

{code}

  was:
If no online regionservers then master UI can't be opened. This issue occurs 
when using RSGroupAdminEndpoint coprocessor(RSGrouping).  The master home page 
tries to load rsgroup info from "hbase:rsgroup" table but currently no 
regionservers up and running. 

 

[http://172.26.70.200:16010|http://172.26.70.200:16010/]
{code:java}
HTTP ERROR 500Problem accessing /master-status. Reason:    Server ErrorCaused 
by:java.io.UncheckedIOException: 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=2, exceptions:
...


Tue Nov
05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
replicaId=0

at
org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)

at
org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)

at
org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)

at
org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)

at
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:346)

at 
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:397)

at
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:388)

at
org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:79)

{code}


> Cannot load Master UI
> -
>
> Key: HBASE-23262
> URL: https://issues.apache.org/jira/browse/HBASE-23262
> Project: HBase
>  Issue Type: Bug
>  Components: master, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> If there are no online regionservers, the master UI can't be opened. This 
> issue occurs when using the RSGroupAdminEndpoint coprocessor (RSGrouping). 
> The master home page tries to load rsgroup info from the "hbase:rsgroup" 
> table, but no regionservers are up and running. 
>  
> [http://172.26.70.200:16010|http://172.26.70.200:16010/]
> {code:java}
> HTTP ERROR 500
> Problem accessing /master-status. 
> Reason:    
>  Server Error
> Caused by:
> java.io.UncheckedIOException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=2, exceptions:
> ...
> Tue Nov
> 05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
> Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
> replicaId=0
> at
> org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
> at
> org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)
> at
> org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)
> at
> 

[jira] [Created] (HBASE-23262) Cannot load Master UI

2019-11-05 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23262:
--

 Summary: Cannot load Master UI
 Key: HBASE-23262
 URL: https://issues.apache.org/jira/browse/HBASE-23262
 Project: HBase
  Issue Type: Bug
  Components: master, UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


If there are no online regionservers, the master UI can't be opened. This 
issue occurs when using the RSGroupAdminEndpoint coprocessor (RSGrouping). The 
master home page tries to load rsgroup info from the "hbase:rsgroup" table, but 
no regionservers are up and running. 

 

[http://172.26.70.200:16010|http://172.26.70.200:16010/]
{code:java}
HTTP ERROR 500
Problem accessing /master-status. 
Reason:    
 Server Error
Caused by:
java.io.UncheckedIOException: 
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=2, exceptions:
...


Tue Nov
05 23:58:51 UTC 2019, , org.apache.hadoop.hbase.exceptions.TimeoutIOException:
Timeout(9450ms) waiting for region location for hbase:rsgroup, row='',
replicaId=0

at
org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)

at
org.apache.hadoop.hbase.RSGroupTableAccessor.getAllRSGroupInfo(RSGroupTableAccessor.java:59)

at
org.apache.hadoop.hbase.tmpl.master.RSGroupListTmplImpl.renderNoFlush(RSGroupListTmplImpl.java:58)

at
org.apache.hadoop.hbase.tmpl.master.RSGroupListTmpl.renderNoFlush(RSGroupListTmpl.java:150)

at
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:346)

at 
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:397)

at
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:388)

at
org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:79)

{code}





[jira] [Commented] (HBASE-23191) Log spams on Replication

2019-10-28 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961655#comment-16961655
 ] 

Karthik Palanisamy commented on HBASE-23191:


I hope you will get some time to review this, [~zhangduo]. It would be good if 
you either approve or reject it. 

 

 

> Log spams on Replication
> 
>
> Key: HBASE-23191
> URL: https://issues.apache.org/jira/browse/HBASE-23191
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Trivial
>
> If there are no new active writes to the WAL, then *WALEntryStream#hasNext -> 
> ReaderBase -> ProtobufLogReader#readNext* will reach the end of the file. It 
> would be a good idea to change the log level from INFO to DEBUG. 
>  
> {code:java}
> 2019-10-18 22:25:03,572 INFO  
> [RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
>  wal.ProtobufLogReader: Reached the end of file at position 83
> {code}





[jira] [Commented] (HBASE-23208) Unit formatting in Master & RS UI

2019-10-28 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961653#comment-16961653
 ] 

Karthik Palanisamy commented on HBASE-23208:


Trivial change, but it is meaningful :) 

Could someone please review and push this change?

> Unit formatting in Master & RS UI
> -
>
> Key: HBASE-23208
> URL: https://issues.apache.org/jira/browse/HBASE-23208
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Trivial
> Attachments: Screen Shot 2019-10-23 at 2.01.44 PM.png, Screen Shot 
> 2019-10-23 at 2.35.04 PM.png
>
>
> ProcV2 uses 
> org.apache.hadoop.hbase.procedure2.util.StringUtils#humanTimeDiff(long) and 
> humanSize(double), which return human-readable strings. I think it would be 
> good if we format the string before we return it. 
>  
> !Screen Shot 2019-10-23 at 2.01.44 PM.png!
> !Screen Shot 2019-10-23 at 2.35.04 PM.png!
>  
> The same format will apply to master and regionserver logs. I hope no one 
> is concerned about this format change.





[jira] [Updated] (HBASE-23208) Unit formatting in Master & RS UI

2019-10-23 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23208:
---
Description: 
ProcV2 uses 
org.apache.hadoop.hbase.procedure2.util.StringUtils#humanTimeDiff(long) and 
humanSize(double), which return human-readable strings. I think it would be 
good if we format the string before we return it. 

 

!Screen Shot 2019-10-23 at 2.01.44 PM.png!

!Screen Shot 2019-10-23 at 2.35.04 PM.png!

 

The same format will apply to master and regionserver logs. I hope no one is 
concerned about this format change.

  was:
ProcV2 use 
org.apache.hadoop.hbase.procedure2.util.StringUtils#humanTimeDiff(long) and 
humanSize(double) where it returns human readable string. I think it would good 
if we format the string before we retrun it. 

 

!Screen Shot 2019-10-23 at 2.01.44 PM.png!

!Screen Shot 2019-10-23 at 2.35.04 PM.png!

 

the same format will apply to master and regionserver logs.  I hope no one will 
concern about this format change.


> Unit formatting in Master & RS UI
> -
>
> Key: HBASE-23208
> URL: https://issues.apache.org/jira/browse/HBASE-23208
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Trivial
> Attachments: Screen Shot 2019-10-23 at 2.01.44 PM.png, Screen Shot 
> 2019-10-23 at 2.35.04 PM.png
>
>
> ProcV2 uses 
> org.apache.hadoop.hbase.procedure2.util.StringUtils#humanTimeDiff(long) and 
> humanSize(double), which return human-readable strings. I think it would be 
> good if we format the string before we return it. 
>  
> !Screen Shot 2019-10-23 at 2.01.44 PM.png!
> !Screen Shot 2019-10-23 at 2.35.04 PM.png!
>  
> the same format will apply to master and regionserver logs.  I hope no one 
> will be concerned about this format change.





[jira] [Created] (HBASE-23208) Unit formatting in Master & RS UI

2019-10-23 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23208:
--

 Summary: Unit formatting in Master & RS UI
 Key: HBASE-23208
 URL: https://issues.apache.org/jira/browse/HBASE-23208
 Project: HBase
  Issue Type: Improvement
  Components: UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy
 Attachments: Screen Shot 2019-10-23 at 2.01.44 PM.png, Screen Shot 
2019-10-23 at 2.35.04 PM.png

ProcV2 uses 
org.apache.hadoop.hbase.procedure2.util.StringUtils#humanTimeDiff(long) and 
humanSize(double), which return human-readable strings. I think it would be good 
if we format the string before we return it. 

 

!Screen Shot 2019-10-23 at 2.01.44 PM.png!

!Screen Shot 2019-10-23 at 2.35.04 PM.png!

 

the same format will apply to master and regionserver logs.  I hope no one will 
be concerned about this format change.
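The pre-formatting suggested above can be sketched as follows. This is an illustrative stand-in, not the actual StringUtils#humanSize implementation: the class name, unit table, and one-decimal `String.format` choice are assumptions made for the example.

```java
import java.util.Locale;

public class HumanFormat {
    static final String[] UNITS = {"B", "KB", "MB", "GB", "TB", "PB"};

    // Format a byte count with a fixed single decimal, so the UI shows
    // e.g. "1.5 KB" instead of a raw double like "1.5000000".
    static String humanSize(double size) {
        int unit = 0;
        while (size >= 1024 && unit < UNITS.length - 1) {
            size /= 1024;
            unit++;
        }
        // Locale.ROOT keeps the decimal point stable across locales.
        return String.format(Locale.ROOT, "%.1f %s", size, UNITS[unit]);
    }

    public static void main(String[] args) {
        System.out.println(humanSize(1536));     // 1.5 KB
        System.out.println(humanSize(10485760)); // 10.0 MB
    }
}
```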





[jira] [Updated] (HBASE-23203) NPE in RSGroup info

2019-10-22 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23203:
---
Description: 
Rsgroup.jsp calls *Admin#listTableDescriptors((Pattern)null, true)* with a null 
Pattern, but the implementation *RawAsyncHBaseAdmin#listTableDescriptors* does 
not allow null (a Precondition check rejects it). Also, the suggested 
alternative listTables(boolean) has already been deprecated/removed.

 
{code:java}
HTTP ERROR 500
Problem accessing /rsgroup.jsp. Reason:    
Server Error
Caused by:java.lang.NullPointerException: pattern is null. If you don't specify 
a pattern, use listTables(boolean) instead
 at 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897)
 at 
org.apache.hadoop.hbase.client.RawAsyncHBaseAdmin.listTableDescriptors(RawAsyncHBaseAdmin.java:495)
 at 
org.apache.hadoop.hbase.client.AdminOverAsyncAdmin.listTableDescriptors(AdminOverAsyncAdmin.java:137)
 at 
org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:390)
{code}
 

  was:
Rsgroup.jsp calls *Admin#listTableDescriptors((Pattern)null, true)* with ** 
Pattern ** null but implementation *RawAsyncHBaseAdmin#listTableDescriptors* 
don't allow null by Precondition. Also, the suggestion listTables(boolean) is 
removed/deprecated already.

 
{code:java}
HTTP ERROR 500
Problem accessing /rsgroup.jsp. Reason:    
Server Error
Caused by:java.lang.NullPointerException: pattern is null. If you don't specify 
a pattern, use listTables(boolean) instead
 at 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897)
 at 
org.apache.hadoop.hbase.client.RawAsyncHBaseAdmin.listTableDescriptors(RawAsyncHBaseAdmin.java:495)
 at 
org.apache.hadoop.hbase.client.AdminOverAsyncAdmin.listTableDescriptors(AdminOverAsyncAdmin.java:137)
 at 
org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:390)
{code}
 


> NPE in RSGroup info
> ---
>
> Key: HBASE-23203
> URL: https://issues.apache.org/jira/browse/HBASE-23203
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> Rsgroup.jsp calls *Admin#listTableDescriptors((Pattern)null, true)* with a 
> null Pattern, but the implementation *RawAsyncHBaseAdmin#listTableDescriptors* 
> does not allow null (a Precondition check rejects it). Also, the suggested 
> alternative listTables(boolean) has already been deprecated/removed.
>  
> {code:java}
> HTTP ERROR 500
> Problem accessing /rsgroup.jsp. Reason:    
> Server Error
> Caused by:java.lang.NullPointerException: pattern is null. If you don't 
> specify a pattern, use listTables(boolean) instead
>  at 
> org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897)
>  at 
> org.apache.hadoop.hbase.client.RawAsyncHBaseAdmin.listTableDescriptors(RawAsyncHBaseAdmin.java:495)
>  at 
> org.apache.hadoop.hbase.client.AdminOverAsyncAdmin.listTableDescriptors(AdminOverAsyncAdmin.java:137)
>  at 
> org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:390)
> {code}
>  





[jira] [Created] (HBASE-23203) NPE in RSGroup info

2019-10-22 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23203:
--

 Summary: NPE in RSGroup info
 Key: HBASE-23203
 URL: https://issues.apache.org/jira/browse/HBASE-23203
 Project: HBase
  Issue Type: Bug
  Components: rsgroup, UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Rsgroup.jsp calls *Admin#listTableDescriptors((Pattern)null, true)* with a null 
Pattern, but the implementation *RawAsyncHBaseAdmin#listTableDescriptors* does 
not allow null (a Precondition check rejects it). Also, the suggested 
alternative listTables(boolean) has already been deprecated/removed.

 
{code:java}
HTTP ERROR 500
Problem accessing /rsgroup.jsp. Reason:    
Server Error
Caused by:java.lang.NullPointerException: pattern is null. If you don't specify 
a pattern, use listTables(boolean) instead
 at 
org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:897)
 at 
org.apache.hadoop.hbase.client.RawAsyncHBaseAdmin.listTableDescriptors(RawAsyncHBaseAdmin.java:495)
 at 
org.apache.hadoop.hbase.client.AdminOverAsyncAdmin.listTableDescriptors(AdminOverAsyncAdmin.java:137)
 at 
org.apache.hadoop.hbase.generated.master.rsgroup_jsp._jspService(rsgroup_jsp.java:390)
{code}
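One fix direction is for the caller to never hand a null Pattern to the admin API. A minimal sketch of such a guard (the class and method names here are hypothetical, not HBase code):

```java
import java.util.regex.Pattern;

public class PatternGuard {
    // ".*" matches every table name, so it is a safe stand-in for
    // "no filter" when the caller passed null.
    static Pattern orMatchAll(Pattern pattern) {
        return pattern != null ? pattern : Pattern.compile(".*");
    }

    public static void main(String[] args) {
        System.out.println(orMatchAll(null).pattern()); // ".*"
        System.out.println(orMatchAll(Pattern.compile("t.*")).pattern());
    }
}
```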
 





[jira] [Created] (HBASE-23199) Error populating Table-Attribute fields

2019-10-21 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23199:
--

 Summary: Error populating Table-Attribute fields
 Key: HBASE-23199
 URL: https://issues.apache.org/jira/browse/HBASE-23199
 Project: HBase
  Issue Type: Bug
  Components: master, UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy
 Attachments: Screen Shot 2019-10-21 at 3.25.40 PM.png

If quota is enabled, then we fetch table and namespace quota info. It is not 
necessary that both table and namespace have a quota set.  Sometimes users only 
have a table-level quota, only a namespace-level quota, or both unset.  So we 
must add a Quota null check before getting Throttle info (Limit, Type, TimeUnit, 
Scope). 

 

!Screen Shot 2019-10-21 at 3.25.40 PM.png!
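A minimal sketch of the null check described above. The `Throttle` type and `describe` helper are simplified stand-ins for illustration; the real HBase quota classes differ:

```java
public class QuotaNullCheck {
    // Simplified stand-in for a throttle setting.
    static class Throttle {
        final String limit;
        Throttle(String limit) { this.limit = limit; }
    }

    // Only dereference the quota when it is actually set, so a table or
    // namespace without a quota does not trigger an NPE in the UI.
    static String describe(Throttle quota) {
        if (quota == null) {
            return "(no quota set)";
        }
        return quota.limit;
    }

    public static void main(String[] args) {
        System.out.println(describe(null));                    // (no quota set)
        System.out.println(describe(new Throttle("10req/sec")));
    }
}
```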





[jira] [Commented] (HBASE-23198) Documentation and release notes

2019-10-21 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956421#comment-16956421
 ] 

Karthik Palanisamy commented on HBASE-23198:


Sorry [~vrodionov], I mistakenly assigned it to myself. 

> Documentation and release notes
> ---
>
> Key: HBASE-23198
> URL: https://issues.apache.org/jira/browse/HBASE-23198
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Document all the changes: algorithms, new configuration options, obsolete 
> configurations, upgrade procedure and possibility of downgrade.





[jira] [Assigned] (HBASE-23198) Documentation and release notes

2019-10-21 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy reassigned HBASE-23198:
--

Assignee: Vladimir Rodionov  (was: Karthik Palanisamy)

> Documentation and release notes
> ---
>
> Key: HBASE-23198
> URL: https://issues.apache.org/jira/browse/HBASE-23198
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Document all the changes: algorithms, new configuration options, obsolete 
> configurations, upgrade procedure and possibility of downgrade.





[jira] [Assigned] (HBASE-23198) Documentation and release notes

2019-10-21 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy reassigned HBASE-23198:
--

Assignee: Karthik Palanisamy  (was: Vladimir Rodionov)

> Documentation and release notes
> ---
>
> Key: HBASE-23198
> URL: https://issues.apache.org/jira/browse/HBASE-23198
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Karthik Palanisamy
>Priority: Major
>
> Document all the changes: algorithms, new configuration options, obsolete 
> configurations, upgrade procedure and possibility of downgrade.





[jira] [Updated] (HBASE-23191) Log spams on Replication

2019-10-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23191:
---
Issue Type: Improvement  (was: Bug)

> Log spams on Replication
> 
>
> Key: HBASE-23191
> URL: https://issues.apache.org/jira/browse/HBASE-23191
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> If there are no new active writes to the WAL, then *WALEntryStream#hasNext -> 
> ReaderBase -> ProtobufLogReader#readNext* will reach the end of the file. It 
> would be a good idea to change the log level from INFO to DEBUG. 
>  
> {code:java}
> 2019-10-18 22:25:03,572 INFO  
> [RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
>  wal.ProtobufLogReader: Reached the end of file at position 83
> {code}





[jira] [Created] (HBASE-23191) Log spams on Replication

2019-10-18 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23191:
--

 Summary: Log spams on Replication
 Key: HBASE-23191
 URL: https://issues.apache.org/jira/browse/HBASE-23191
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


If there are no new active writes to the WAL, then *WALEntryStream#hasNext -> 
ReaderBase -> ProtobufLogReader#readNext* will reach the end of the file. It 
would be a good idea to change the log level from INFO to DEBUG. 

 
{code:java}
2019-10-18 22:25:03,572 INFO  
[RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
 wal.ProtobufLogReader: Reached the end of file at position 83
{code}
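The proposed level change can be sketched with java.util.logging, whose FINE level plays the role of DEBUG. The logger name and method here are illustrative, not the actual ProtobufLogReader code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class WalEofLogDemo {
    private static final Logger LOG = Logger.getLogger("ProtobufLogReader.demo");

    static void onEndOfFile(long position) {
        // FINE (the DEBUG analogue) instead of INFO: reaching end-of-file
        // is expected when there are no new WAL writes, so it should not
        // spam the regionserver logs.
        LOG.log(Level.FINE, "Reached the end of file at position {0}", position);
    }

    public static void main(String[] args) {
        onEndOfFile(83); // suppressed under the default INFO log level
    }
}
```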





[jira] [Updated] (HBASE-23191) Log spams on Replication

2019-10-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23191:
---
Priority: Trivial  (was: Minor)

> Log spams on Replication
> 
>
> Key: HBASE-23191
> URL: https://issues.apache.org/jira/browse/HBASE-23191
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Trivial
>
> If there are no new active writes to the WAL, then *WALEntryStream#hasNext -> 
> ReaderBase -> ProtobufLogReader#readNext* will reach the end of the file. It 
> would be a good idea to change the log level from INFO to DEBUG. 
>  
> {code:java}
> 2019-10-18 22:25:03,572 INFO  
> [RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
>  wal.ProtobufLogReader: Reached the end of file at position 83
> {code}





[jira] [Commented] (HBASE-23176) delete_all_snapshot does not work with regex

2019-10-15 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952269#comment-16952269
 ] 

Karthik Palanisamy commented on HBASE-23176:


Found the same issue in 
[delete_table_snapshots.rb|https://github.com/apache/hbase/pull/725/commits/fa05907b1b5ff3ac440a43937cae386b3638de5a].

> delete_all_snapshot does not work with regex
> 
>
> Key: HBASE-23176
> URL: https://issues.apache.org/jira/browse/HBASE-23176
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> Delete_all_snapshot.rb uses the deprecated method 
> SnapshotDescription#getTable, but this method has already been removed in 3.0.x.
> {code:java}
> hbase(main):022:0>delete_all_snapshot("t10.*")
> SNAPSHOT TABLE + CREATION 
> TIME ERROR: undefined method `getTable' for 
> #
> {code}





[jira] [Created] (HBASE-23176) delete_all_snapshot does not work with regex

2019-10-14 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23176:
--

 Summary: delete_all_snapshot does not work with regex
 Key: HBASE-23176
 URL: https://issues.apache.org/jira/browse/HBASE-23176
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Delete_all_snapshot.rb uses the deprecated method SnapshotDescription#getTable, 
but this method has already been removed in 3.0.x.
{code:java}
hbase(main):022:0>delete_all_snapshot("t10.*")
SNAPSHOT TABLE + CREATION 
TIME ERROR: undefined method `getTable' for 
#
{code}





[jira] [Commented] (HBASE-23152) Compaction_switch does not work by RegionServer name

2019-10-12 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950094#comment-16950094
 ] 

Karthik Palanisamy commented on HBASE-23152:


Thank you very much [~stack] for merging. 

I think we can skip the merge for branch-2.1 because compaction_switch 
([HBASE-6028|https://issues.apache.org/jira/browse/HBASE-6028]) is only 
available from branch-2.2.0.

> Compaction_switch does not work by RegionServer name
> 
>
> Key: HBASE-23152
> URL: https://issues.apache.org/jira/browse/HBASE-23152
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Compaction
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>  Labels: client
> Fix For: 3.0.0, 2.3.0, 2.2.2
>
>
> Compaction_switch is used to stop running compactions on regionservers. The 
> switch works well via "compaction_switch true/false", but I want to stop 
> compaction only on a particular regionserver. In that case, the switch does 
> not work because the serverName we want to stop is never added to the 
> CompletableFuture list 
> [link|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java#L3156]. 
> So we always get an empty Future list when using an RS name. 





[jira] [Created] (HBASE-23154) list_deadservers return incorrect no of rows

2019-10-11 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23154:
--

 Summary: list_deadservers return incorrect no of rows
 Key: HBASE-23154
 URL: https://issues.apache.org/jira/browse/HBASE-23154
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


The number of rows should equal the number of dead region servers, but the 
current system time was mistakenly included instead.
{code:java}
hbase(main):001:0>list_deadservers
SERVERNAME
apache301.openstacklocal,16020,1570582044467

1570855247 row(s)
{code}
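A minimal sketch of the corrected behavior: the trailing count comes from the size of the dead-server list, not from a timestamp. The class and method names are made up for illustration; the actual fix lives in the shell's Ruby code:

```java
import java.util.List;

public class DeadServerCount {
    // Render the dead-server listing with a row count taken from the list.
    static String summary(List<String> deadServers) {
        StringBuilder out = new StringBuilder("SERVERNAME\n");
        for (String server : deadServers) {
            out.append(server).append('\n');
        }
        out.append(deadServers.size()).append(" row(s)");
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(summary(
            List.of("apache301.openstacklocal,16020,1570582044467")));
    }
}
```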
 





[jira] [Commented] (HBASE-23152) Compaction_switch does not work by RegionServer name

2019-10-11 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949837#comment-16949837
 ] 

Karthik Palanisamy commented on HBASE-23152:


Please find the corresponding ref link, 
[https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java#L3156]

> Compaction_switch does not work by RegionServer name
> 
>
> Key: HBASE-23152
> URL: https://issues.apache.org/jira/browse/HBASE-23152
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Compaction
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>  Labels: client
>
> Compaction_switch is used to stop running compactions on regionservers. The 
> switch works well via "compaction_switch true/false", but I want to stop 
> compaction only on a particular regionserver. In that case, the switch does 
> not work because the serverName we want to stop is never added to the 
> CompletableFuture list 
> [link|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java#L3156]. 
> So we always get an empty Future list when using an RS name. 





[jira] [Created] (HBASE-23152) Compaction_switch does not work by RegionServer name

2019-10-11 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23152:
--

 Summary: Compaction_switch does not work by RegionServer name
 Key: HBASE-23152
 URL: https://issues.apache.org/jira/browse/HBASE-23152
 Project: HBase
  Issue Type: Bug
  Components: Client, Compaction
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Compaction_switch is used to stop running compactions on regionservers. The 
switch works well via "compaction_switch true/false", but I want to stop 
compaction only on a particular regionserver. In that case, the switch does not 
work because the serverName we want to stop is never added to the 
CompletableFuture list 
[link|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java#L3156]. 
So we always get an empty Future list when using an RS name. 
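The shape of the bug can be sketched as follows: when specific server names are requested, each matching server must actually be added to the result list, otherwise the caller always sees an empty list. This is a simplified model with hypothetical names, using strings in place of the real CompletableFuture values:

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionSwitchDemo {
    // Collect the servers the switch should apply to. An empty "requested"
    // list means all servers; otherwise only the named ones. The reported
    // bug was the missing add() on the named-server path.
    static List<String> switchTargets(List<String> liveServers,
                                      List<String> requested) {
        List<String> targets = new ArrayList<>();
        for (String server : liveServers) {
            if (requested.isEmpty() || requested.contains(server)) {
                targets.add(server);
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        System.out.println(switchTargets(
            List.of("rs1", "rs2"), List.of("rs2"))); // [rs2], not []
    }
}
```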





[jira] [Comment Edited] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize

2019-10-10 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16948885#comment-16948885
 ] 

Karthik Palanisamy edited comment on HBASE-23115 at 10/10/19 7:08 PM:
--

[~brfrn169] The change conflicted with HBASE-22543. Fixed it now. 

Please review the new PR #710 for branch-2.2. Thanks again [~brfrn169]


was (Author: kpalanisamy):
[~brfrn169] The change was conflict with HBASE-22543. Fixed it now. 

Please review the new PR #710. Thanks again [~brfrn169]

> Unit change for StoreFileSize and MemStoreSize
> --
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize are usually returned in MBs (link1, link2), 
> but a few jsp pages show an inaccurate unit. The reason is that table.jsp 
> (link3) uses org.apache.hadoop.util.StringUtils.byteDesc(long len), which 
> performs a long-to-string conversion and returns a unit (B, KB, MB, GB, TB, 
> PB) based on the length. The concern here (link4) is that the computation 
> (ByteVal/1024/1024) will always output less than 1 for a store containing 
> only a few bytes or a few KBs. Also, the typecast will not round up to the 
> nearest value.
> I think the best option is changing the unit in table.jsp instead of changing 
> the code; otherwise we may end up doing many refactors across 
> getMemStoreSizeMB, setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, 
> setStorefileSizeMB, ...
> Please find the attachment; a simple example is posted.





[jira] [Commented] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize

2019-10-10 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16948885#comment-16948885
 ] 

Karthik Palanisamy commented on HBASE-23115:


[~brfrn169] The change conflicted with HBASE-22543. Fixed it now. 

Please review the new PR #710. Thanks again [~brfrn169]

> Unit change for StoreFileSize and MemStoreSize
> --
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize are usually returned in MBs (link1, link2), 
> but a few jsp pages show an inaccurate unit. The reason is that table.jsp 
> (link3) uses org.apache.hadoop.util.StringUtils.byteDesc(long len), which 
> performs a long-to-string conversion and returns a unit (B, KB, MB, GB, TB, 
> PB) based on the length. The concern here (link4) is that the computation 
> (ByteVal/1024/1024) will always output less than 1 for a store containing 
> only a few bytes or a few KBs. Also, the typecast will not round up to the 
> nearest value.
> I think the best option is changing the unit in table.jsp instead of changing 
> the code; otherwise we may end up doing many refactors across 
> getMemStoreSizeMB, setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, 
> setStorefileSizeMB, ...
> Please find the attachment; a simple example is posted.





[jira] [Updated] (HBASE-23144) Compact_rs throw wrong number of arguments

2019-10-09 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23144:
---
Description: 
Compact_rs command will call *Admin#compactRegionServer(ServerName, boolean)*, 
but this deprecated method was removed as part of HBASE-22002. 
{code:java}
hbase(main):001:0> compact_rs 'apache303.openstacklocal,16020,157058209'
ERROR: wrong number of arguments (2 for 1)
{code}

  was:
Compact_rs command will call *Admin#compactRegionServer(ServerName, boolean)* 
but this is deprecated method and removed as part of 
[HBASE-22002|[https://issues.apache.org/jira/browse/HBASE-22002]]. **

 

 
{code:java}
hbase(main):001:0> compact_rs 'apache303.openstacklocal,16020,157058209'
ERROR: wrong number of arguments (2 for 1)
{code}
 

 


> Compact_rs throw wrong number of arguments 
> ---
>
> Key: HBASE-23144
> URL: https://issues.apache.org/jira/browse/HBASE-23144
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> Compact_rs command will call *Admin#compactRegionServer(ServerName, 
> boolean)*, but this deprecated method was removed as part of HBASE-22002. 
> {code:java}
> hbase(main):001:0> compact_rs 'apache303.openstacklocal,16020,157058209'
> ERROR: wrong number of arguments (2 for 1)
> {code}





[jira] [Created] (HBASE-23144) Compact_rs throw wrong number of arguments

2019-10-09 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23144:
--

 Summary: Compact_rs throw wrong number of arguments 
 Key: HBASE-23144
 URL: https://issues.apache.org/jira/browse/HBASE-23144
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Compact_rs command will call *Admin#compactRegionServer(ServerName, boolean)*, 
but this deprecated method was removed as part of 
[HBASE-22002|https://issues.apache.org/jira/browse/HBASE-22002].

 

 
{code:java}
hbase(main):001:0> compact_rs 'apache303.openstacklocal,16020,157058209'
ERROR: wrong number of arguments (2 for 1)
{code}
 

 





[jira] [Commented] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize

2019-10-09 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947860#comment-16947860
 ] 

Karthik Palanisamy commented on HBASE-23115:


Thank you very much [~brfrn169]. I will submit PR today for branch 2.1/2.2.

> Unit change for StoreFileSize and MemStoreSize
> --
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Fix For: 3.0.0, 2.3.0
>
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize are usually returned in MBs (link1, link2), 
> but a few jsp pages show an inaccurate unit. The reason is that table.jsp 
> (link3) uses org.apache.hadoop.util.StringUtils.byteDesc(long len), which 
> performs a long-to-string conversion and returns a unit (B, KB, MB, GB, TB, 
> PB) based on the length. The concern here (link4) is that the computation 
> (ByteVal/1024/1024) will always output less than 1 for a store containing 
> only a few bytes or a few KBs. Also, the typecast will not round up to the 
> nearest value.
> I think the best option is changing the unit in table.jsp instead of changing 
> the code; otherwise we may end up doing many refactors across 
> getMemStoreSizeMB, setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, 
> setStorefileSizeMB, ...
> Please find the attachment; a simple example is posted.





[jira] [Updated] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize

2019-10-08 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23115:
---
Description: 
StoreFileSize and MemstoreSize are usually returned in MBs (link1, link2), but 
a few jsp pages show an inaccurate unit. The reason is that table.jsp (link3) 
uses org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on 
the length. The concern here (link4) is that the computation (ByteVal/1024/1024) 
will always output less than 1 for a store containing only a few bytes or a few 
KBs. Also, the typecast will not round up to the nearest value.

I think the best option is changing the unit in table.jsp instead of changing 
the code; otherwise we may end up doing many refactors across getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...

Please find the attachment; a simple example is posted.

  was:
StoreFileSize and MemstoreSize usually returned in MBs (link1, link2)  but 
table.jsp page have inaccurate unit. The reason is table.jsp (link3) use 
org.apache.hadoop.util.StringUtils.byteDesc(long len), this will perform 
longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) based on 
length. The concern here (link4) is computation (ByteVal/1024/1024) will output 
always lesser than 1 for store contains few bytes or few kbs.  Also, typecast 
will not round up to its nearest value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.


> Unit change for StoreFileSize and MemStoreSize
> --
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize are usually returned in MBs (link1, link2), 
> but a few jsp pages show an inaccurate unit. The reason is that table.jsp 
> (link3) uses org.apache.hadoop.util.StringUtils.byteDesc(long len), which 
> performs a long-to-string conversion and returns a unit (B, KB, MB, GB, TB, 
> PB) based on the length. The concern here (link4) is that the computation 
> (ByteVal/1024/1024) will always output less than 1 for a store containing 
> only a few bytes or a few KBs. Also, the typecast will not round up to the 
> nearest value.
> I think the best option is changing the unit in table.jsp instead of changing 
> the code; otherwise we may end up doing many refactors across 
> getMemStoreSizeMB, setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, 
> setStorefileSizeMB, ...
> Please find the attachment; a simple example is posted.
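The truncation concern quoted above can be shown with a worked example. Integer division by 1024 twice drops everything below 1 MB to 0 and never rounds up; the class name below is made up for the demonstration:

```java
public class MbTruncationDemo {
    // Integer division truncates: values under 1 MB report as 0 MB, and
    // 1.75 MB reports as 1 MB rather than rounding to 2.
    static long toMbTruncating(long bytes) {
        return bytes / 1024 / 1024;
    }

    public static void main(String[] args) {
        System.out.println(toMbTruncating(524288));  // 512 KB  -> 0
        System.out.println(toMbTruncating(1835008)); // 1.75 MB -> 1
    }
}
```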





[jira] [Commented] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize

2019-10-08 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947348#comment-16947348
 ] 

Karthik Palanisamy commented on HBASE-23115:


Could someone please review this change?

> Unit change for StoreFileSize and MemStoreSize
> --
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize are usually returned in MBs (link1, link2), 
> but the table.jsp page shows an inaccurate unit. The reason is that table.jsp 
> (link3) uses org.apache.hadoop.util.StringUtils.byteDesc(long len), which 
> performs a long-to-string conversion and returns a unit (B, KB, MB, GB, TB, 
> PB) based on the length. The concern here (link4) is that the computation 
> (ByteVal/1024/1024) will always output less than 1 for a store containing 
> only a few bytes or a few KBs. Also, the typecast will not round up to the 
> nearest value.
> I think the best option is changing the unit in table.jsp instead of changing 
> the code; otherwise we may end up doing many refactors across 
> getMemStoreSizeMB, setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, 
> setStorefileSizeMB, ...
> Please find the attachment; a simple example is posted.





[jira] [Created] (HBASE-23140) Remove unknown table error

2019-10-08 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23140:
--

 Summary: Remove unknown table error
 Key: HBASE-23140
 URL: https://issues.apache.org/jira/browse/HBASE-23140
 Project: HBase
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


"hbase:quota" is created automatically when hbase.quota.enabled is set to true, 
but if this feature is disabled then it should not throw an unknown table error. 
{code:java}
hbase(main):025:0>
describe_namespace 'hbase'
DESCRIPTION
{NAME => 'hbase'}

QUOTAS
ERROR: Unknown table hbase:quota!
For usage try 'help "describe_namespace"'
{code}
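A hypothetical sketch of the proposed behavior (these are illustrative names, not HBase code; the real enablement check in HBase is QuotaUtil.isQuotaEnabled): when quotas are disabled or the table is absent, report an empty result instead of raising the error.

```java
import java.util.Map;

// Hypothetical sketch: if quotas are disabled (or the hbase:quota table does
// not exist), describe_namespace should report "no quotas" rather than raise
// "Unknown table hbase:quota!".
public class QuotaDescribeSketch {
    static String describeQuotas(Map<String, String> conf, boolean quotaTableExists) {
        boolean enabled =
            Boolean.parseBoolean(conf.getOrDefault("hbase.quota.enabled", "false"));
        if (!enabled || !quotaTableExists) {
            return "0 row(s)"; // feature off: nothing to show, and no error
        }
        return scanQuotaTable();
    }

    // Stand-in for scanning the real hbase:quota table.
    static String scanQuotaTable() {
        return "QUOTAS ...";
    }

    public static void main(String[] args) {
        System.out.println(describeQuotas(Map.of(), false)); // prints "0 row(s)"
    }
}
```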
 





[jira] [Commented] (HBASE-23138) Drop_all table by regex fail from Shell - Similar to HBASE-23134

2019-10-08 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16947215#comment-16947215
 ] 

Karthik Palanisamy commented on HBASE-23138:


Please ignore PRs #702 and #703.

> Drop_all table by regex fail from Shell -  Similar to HBASE-23134
> -
>
> Key: HBASE-23138
> URL: https://issues.apache.org/jira/browse/HBASE-23138
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> Initialization error in admin.rb 
> {code:java}
> hbase(main):001:0>  drop_all("t.*")
> ERROR: undefined local variable or method `admin' for 
> #
> Did you mean?  @admin
> For usage try 'help "drop_all"'
> {code}





[jira] [Created] (HBASE-23138) Drop_all table by regex fail from Shell - Similar to HBASE-23134

2019-10-08 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23138:
--

 Summary: Drop_all table by regex fail from Shell -  Similar to 
HBASE-23134
 Key: HBASE-23138
 URL: https://issues.apache.org/jira/browse/HBASE-23138
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Initialization error in admin.rb 
{code:java}
hbase(main):001:0>  drop_all("t.*")
ERROR: undefined local variable or method `admin' for #
Did you mean?  @admin
For usage try 'help "drop_all"'
{code}





[jira] [Created] (HBASE-23134) Enable_all and Disable_all table by Regex fail from Shell

2019-10-08 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23134:
--

 Summary: Enable_all and Disable_all table by Regex fail from Shell
 Key: HBASE-23134
 URL: https://issues.apache.org/jira/browse/HBASE-23134
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Found a few initialization errors in admin.rb; the following errors occur when 
using enable_all and disable_all from the shell.

 
{code:java}
hbase(main):019:0> disable_all 't1*'
ERROR: undefined local variable or method `admin' for #
Did you mean?  @admin
{code}
 

 





[jira] [Updated] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize

2019-10-06 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23115:
---
Summary: Unit change for StoreFileSize and MemStoreSize  (was: Unit change 
for StoreFileSize and MemStoreSize in table.jsp)

> Unit change for StoreFileSize and MemStoreSize
> --
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemStoreSize are usually returned in MB (link1, link2), but the 
> table.jsp page shows an inaccurate unit. The reason is that table.jsp (link3) uses 
> org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
> long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on the 
> length. The concern here (link4) is that the computation (ByteVal/1024/1024) will 
> always output less than 1 for a store that contains only a few bytes or kilobytes. 
> Also, the typecast does not round to the nearest value.
> I think the best option is to change the unit in table.jsp instead of changing the 
> code; otherwise we may end up refactoring many methods: getMemStoreSizeMB, 
> setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...
>  
> Please find the attachment; a simple example is posted.





[jira] [Created] (HBASE-23123) Merge_region fails from shell

2019-10-04 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23123:
--

 Summary: Merge_region fails from shell
 Key: HBASE-23123
 URL: https://issues.apache.org/jira/browse/HBASE-23123
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


The deprecated method *Admin#mergeRegions* was removed in HBase 3.0, but we 
missed updating the Ruby admin script to call the new API.

As a result, merge_region fails with an undefined method error for `mergeRegions'.





[jira] [Commented] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-03 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16944087#comment-16944087
 ] 

Karthik Palanisamy commented on HBASE-23115:


Made all the possible changes.

> Unit change for StoreFileSize and MemStoreSize in table.jsp
> ---
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemStoreSize are usually returned in MB (link1, link2), but the 
> table.jsp page shows an inaccurate unit. The reason is that table.jsp (link3) uses 
> org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
> long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on the 
> length. The concern here (link4) is that the computation (ByteVal/1024/1024) will 
> always output less than 1 for a store that contains only a few bytes or kilobytes. 
> Also, the typecast does not round to the nearest value.
> I think the best option is to change the unit in table.jsp instead of changing the 
> code; otherwise we may end up refactoring many methods: getMemStoreSizeMB, 
> setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...
>  
> Please find the attachment; a simple example is posted.





[jira] [Commented] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-02 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943342#comment-16943342
 ] 

Karthik Palanisamy commented on HBASE-23115:


Found the same issue on another page, rsgroup_jsp. Will post a new commit 
shortly. 

> Unit change for StoreFileSize and MemStoreSize in table.jsp
> ---
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemStoreSize are usually returned in MB (link1, link2), but the 
> table.jsp page shows an inaccurate unit. The reason is that table.jsp (link3) uses 
> org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
> long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on the 
> length. The concern here (link4) is that the computation (ByteVal/1024/1024) will 
> always output less than 1 for a store that contains only a few bytes or kilobytes. 
> Also, the typecast does not round to the nearest value.
> I think the best option is to change the unit in table.jsp instead of changing the 
> code; otherwise we may end up refactoring many methods: getMemStoreSizeMB, 
> setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...
>  
> Please find the attachment; a simple example is posted.





[jira] [Commented] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-02 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16943218#comment-16943218
 ] 

Karthik Palanisamy commented on HBASE-23115:


Sorry for the spam; it looks like the links are not rendering correctly.

link1 : 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]

link2 : 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]

link3 : 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]

link4 : 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]

 

> Unit change for StoreFileSize and MemStoreSize in table.jsp
> ---
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemStoreSize are usually returned in MB (link1, link2), but the 
> table.jsp page shows an inaccurate unit. The reason is that table.jsp (link3) uses 
> org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
> long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on the 
> length. The concern here (link4) is that the computation (ByteVal/1024/1024) will 
> always output less than 1 for a store that contains only a few bytes or kilobytes. 
> Also, the typecast does not round to the nearest value.
> I think the best option is to change the unit in table.jsp instead of changing the 
> code; otherwise we may end up refactoring many methods: getMemStoreSizeMB, 
> setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...
>  
> Please find the attachment; a simple example is posted.





[jira] [Updated] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-02 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23115:
---
Description: 
StoreFileSize and MemStoreSize are usually returned in MB (link1, link2), but the 
table.jsp page shows an inaccurate unit. The reason is that table.jsp (link3) uses 
org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on the 
length. The concern here (link4) is that the computation (ByteVal/1024/1024) will 
always output less than 1 for a store that contains only a few bytes or kilobytes. 
Also, the typecast does not round to the nearest value.

I think the best option is to change the unit in table.jsp instead of changing the 
code; otherwise we may end up refactoring many methods: getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...

 

Please find the attachment; a simple example is posted.

  was:
StoreFileSize and MemstoreSize usually returned in MBs 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668%7Chttps://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java%23L1668]]
 ], [link| 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]]
 ])  but table.jsp page have inaccurate unit. The reason is table.jsp ([link| 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]
 ]) use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will 
perform longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) 
based on length. The concern [here| 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]
 ] is computation (ByteVal/1024/1024) will output always lesser than 1 for 
store contains few bytes or few kbs.  Also, typecast will not round up to its 
nearest value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.


> Unit change for StoreFileSize and MemStoreSize in table.jsp
> ---
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemStoreSize are usually returned in MB (link1, link2), but the 
> table.jsp page shows an inaccurate unit. The reason is that table.jsp (link3) uses 
> org.apache.hadoop.util.StringUtils.byteDesc(long len), which performs a 
> long-to-string conversion and returns a unit (B, KB, MB, GB, TB, PB) based on the 
> length. The concern here (link4) is that the computation (ByteVal/1024/1024) will 
> always output less than 1 for a store that contains only a few bytes or kilobytes. 
> Also, the typecast does not round to the nearest value.
> I think the best option is to change the unit in table.jsp instead of changing the 
> code; otherwise we may end up refactoring many methods: getMemStoreSizeMB, 
> setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB, ...
>  
> Please find the attachment; a simple example is posted.





[jira] [Updated] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-02 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23115:
---
Description: 
StoreFileSize and MemstoreSize usually returned in MBs 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668%7Chttps://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java%23L1668]]
 ], [link| 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]]
 ])  but table.jsp page have inaccurate unit. The reason is table.jsp ([link| 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]
 ]) use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will 
perform longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) 
based on length. The concern [here| 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]
 ] is computation (ByteVal/1024/1024) will output always lesser than 1 for 
store contains few bytes or few kbs.  Also, typecast will not round up to its 
nearest value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.

  was:
StoreFileSize and MemstoreSize usually returned in MBs 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]]
 , 
[link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]])
 but table.jsp page have inaccurate unit. The reason is table.jsp 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488|https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]]])
 use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will perform 
longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) based on 
length. The concern 
[here|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]
 is computation (ByteVal/1024/1024) will output always lesser than 1 for store 
contains few bytes or few kbs.  Also, typecast will not round up to its nearest 
value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.


> Unit change for StoreFileSize and MemStoreSize in table.jsp
> ---
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize usually returned in MBs 
> ([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668%7Chttps://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java%23L1668]]
>  ], [link| 
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]]
>  ])  but table.jsp page have inaccurate unit. The reason is table.jsp ([link| 
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]
>  ]) use 

[jira] [Updated] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-02 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23115:
---
Description: 
StoreFileSize and MemstoreSize usually returned in MBs 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]]
 , 
[link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]])
 but table.jsp page have inaccurate unit. The reason is table.jsp 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488|https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]]])
 use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will perform 
longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) based on 
length. The concern 
[here|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]
 is computation (ByteVal/1024/1024) will output always lesser than 1 for store 
contains few bytes or few kbs.  Also, typecast will not round up to its nearest 
value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.

  was:
StoreFileSize and MemstoreSize usually returned in MBs 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]],
 
[link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]])
 but table.jsp page have inaccurate unit. The reason is table.jsp 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]])
 use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will perform 
longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) based on 
length. The concern 
[here|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]
 is computation (ByteVal/1024/1024) will output always lesser than 1 for store 
contains few bytes or few kbs.  Also, typecast will not round up to its nearest 
value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.


> Unit change for StoreFileSize and MemStoreSize in table.jsp
> ---
>
> Key: HBASE-23115
> URL: https://issues.apache.org/jira/browse/HBASE-23115
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, UI
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Attachments: Units.pdf
>
>
> StoreFileSize and MemstoreSize usually returned in MBs 
> ([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]]
>  , 
> [link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]])
>  but table.jsp page have inaccurate unit. The reason is table.jsp 
> ([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488|https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]]])
>  use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will perform 
> longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) based on 
> length. The concern 
> [here|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]
>  is computation (ByteVal/1024/1024) will output always lesser than 1 for 
> store contains few bytes or few kbs.  Also, typecast will not round up to its 
> nearest value.
> I think the best option is changing unit in table.jsp instead of changing 
> 

[jira] [Created] (HBASE-23115) Unit change for StoreFileSize and MemStoreSize in table.jsp

2019-10-02 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23115:
--

 Summary: Unit change for StoreFileSize and MemStoreSize in 
table.jsp
 Key: HBASE-23115
 URL: https://issues.apache.org/jira/browse/HBASE-23115
 Project: HBase
  Issue Type: Bug
  Components: metrics, UI
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy
 Attachments: Units.pdf

StoreFileSize and MemstoreSize usually returned in MBs 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]],
 
[link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1656]])
 but table.jsp page have inaccurate unit. The reason is table.jsp 
([link|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/resources/hbase-webapps/master/table.jsp#L488]])
 use org.apache.hadoop.util.StringUtils.byteDesc(long len), this will perform 
longtostring conversion and returns its unit(B, KB, MB, GB, TB, PB) based on 
length. The concern 
[here|[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java#L1668]]
 is computation (ByteVal/1024/1024) will output always lesser than 1 for store 
contains few bytes or few kbs.  Also, typecast will not round up to its nearest 
value.

I think the best option is changing unit in table.jsp instead of changing code, 
otherwise we may end up doing many refactors from getMemStoreSizeMB, 
setMemStoreSizeMB, hasMemStoreSizeMB, getStorefileSizeMB, setStorefileSizeMB,..

 

Please find the attachment, a simple example is posted.





[jira] [Updated] (HBASE-23095) Reuse FileStatus in StoreFileInfo

2019-09-30 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23095:
---
Affects Version/s: 1.1.2
   1.2.1
   2.0.0

> Reuse FileStatus in StoreFileInfo
> -
>
> Key: HBASE-23095
> URL: https://issues.apache.org/jira/browse/HBASE-23095
> Project: HBase
>  Issue Type: Improvement
>  Components: mob, snapshots
>Affects Versions: 1.1.2, 1.2.1, 2.0.0, 2.2.1
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>  Labels: performance
> Fix For: 3.0.0
>
> Attachments: PerformanceComparision.pdf
>
>
> Creating a snapshot of a large MOB table is noticeably slow because we make two 
> unnecessary NameNode calls per HFile while building the snapshot manifest. The 
> first NameNode call fetches the StoreFile modification time 
> [link|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java#L139], 
> which is used for metrics, and the second fetches the StoreFile size 
> [link|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java#L132], 
> which is used in the snapshot manifest. Both calls can be avoided, since this 
> information is available from the existing FileStatus 
> [link|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java#L155].
>  
> PFA: a 2x performance improvement is seen after reusing the existing FileStatus.





[jira] [Created] (HBASE-23095) Reuse FileStatus in StoreFileInfo

2019-09-29 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23095:
--

 Summary: Reuse FileStatus in StoreFileInfo
 Key: HBASE-23095
 URL: https://issues.apache.org/jira/browse/HBASE-23095
 Project: HBase
  Issue Type: Improvement
  Components: mob, snapshots
Affects Versions: 2.2.1
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy
 Fix For: 3.0.0
 Attachments: PerformanceComparision.pdf

Creating a snapshot of a large MOB table is noticeably slow because we make two 
unnecessary NameNode calls per HFile while building the snapshot manifest. The 
first NameNode call fetches the StoreFile modification time 
[link|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java#L139], 
which is used for metrics, and the second fetches the StoreFile size 
[link|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java#L132], 
which is used in the snapshot manifest. Both calls can be avoided, since this 
information is available from the existing FileStatus 
[link|https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java#L155].

 

PFA: a 2x performance improvement is seen after reusing the existing FileStatus.
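The pattern behind the fix can be illustrated with stand-in classes (these are not the actual HBase/HDFS types): fetch the file status once and reuse it for both attributes, instead of issuing one NameNode RPC per attribute.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReuseFileStatusDemo {
    static final AtomicInteger RPC_CALLS = new AtomicInteger();

    // Stand-in for org.apache.hadoop.fs.FileStatus.
    static class FileStatusLike {
        final long modificationTime = 1234L;
        final long length = 42L;
    }

    // Stand-in for FileSystem.getFileStatus(path); each call models one NameNode RPC.
    static FileStatusLike getFileStatus(String path) {
        RPC_CALLS.incrementAndGet();
        return new FileStatusLike();
    }

    public static void main(String[] args) {
        // Before: one RPC for the modification time, another for the size.
        long mtime = getFileStatus("/hfile").modificationTime;
        long size = getFileStatus("/hfile").length;

        // After: a single lookup, reused for both attributes.
        FileStatusLike cached = getFileStatus("/hfile");
        long mtime2 = cached.modificationTime;
        long size2 = cached.length;

        // Per HFile: 2 RPCs before, 1 after (3 total in this run).
        System.out.println("RPCs issued: " + RPC_CALLS.get());
    }
}
```

Halving the per-HFile RPCs is consistent with the roughly 2x snapshot speedup reported in the attachment.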





[jira] [Commented] (HBASE-19320) document the mysterious direct memory leak in hbase

2019-09-06 Thread Karthik Palanisamy (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924533#comment-16924533
 ] 

Karthik Palanisamy commented on HBASE-19320:


Thanks [~huaxiang] for the note. Users have started hitting this issue with 
multi-get operations. I suggested bounding the NIO temporary buffer cache. 
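For context, the JVM keeps a per-thread cache of temporary direct buffers (used when heap buffers are written to channels), and since JDK 8u102 it can be bounded with the jdk.nio.maxCachedBufferSize system property. A small sketch to inspect the setting; the 1048576 value mentioned in the comments is only an illustrative example, not a tuning recommendation:

```java
// Prints the configured cap on the per-thread temporary direct-buffer cache.
// Launch with e.g. -Djdk.nio.maxCachedBufferSize=1048576 (bytes) to bound it.
public class MaxCachedBufferSize {
    static String describe(String value) {
        return value == null ? "unbounded (property not set)" : value + " bytes";
    }

    public static void main(String[] args) {
        System.out.println(describe(System.getProperty("jdk.nio.maxCachedBufferSize")));
    }
}
```

For a RegionServer, such a flag would typically be passed via HBASE_REGIONSERVER_OPTS in hbase-env.sh.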

> document the mysterious direct memory leak in hbase 
> 
>
> Key: HBASE-19320
> URL: https://issues.apache.org/jira/browse/HBASE-19320
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.6, 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
>  Labels: Replication
> Attachments: HBASE-19320-master-v001.patch, Screen Shot 2017-11-21 at 
> 4.43.36 PM.png, Screen Shot 2017-11-21 at 4.44.22 PM.png
>
>
> Recently we ran into a direct memory leak case, which took some time to 
> trace and debug. We discussed it internally with [~saint@gmail.com], and we 
> thought we had some findings we wanted to share with the community.
> Basically, it is the issue described in 
> http://www.evanjones.ca/java-bytebuffer-leak.html, and it happened to one of 
> our HBase clusters.
> Creating the Jira first; will fill in more details later.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22075) Potential data loss when MOB compaction fails

2019-05-03 Thread Karthik Palanisamy (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16832954#comment-16832954
 ] 

Karthik Palanisamy commented on HBASE-22075:


[~elserj] Tested [~vrodionov] patch. This fix handled MOB data well safe. No 
data loss is noticed during MOB compaction failure, RS crash, IO issue, and any 
infra issues.

But majorly race condition is noticed w/ or w/o patch in the MOB. We see data 
loss again during MOB-compaction and Major-compaction while those are running 
together. 

As [~vrodionov] already mentioned, there will be a race condition in this case. 
 I think he already working on a new patch.

I have attached a small repro (ReproMOBDataLoss.java) for this race 
condition. It is an aggressive test; the duration is nearly an hour.
 # Settings: region size 200 MB, flush threshold 800 KB.
 # Insert 10 million records.
 # MOB compaction and archiver:
        a) Trigger MOB compaction (every 2 minutes)
        b) Trigger major compaction (every 2 minutes)
        c) Trigger archive cleaner (every 3 minutes)
 # Validate MOB data after the data load completes.

I ran this repro on branch-2.2 and the issue reproduced. 

I also ran it with MOB compaction disabled; no data loss was 
observed.
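
For reference, the trigger cadence in the steps above can be sketched with a plain scheduler. This is a hypothetical skeleton, not the attached ReproMOBDataLoss.java; the Runnables are placeholders standing in for the real HBase admin calls:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CompactionRaceSkeleton {
    public static void main(String[] args) {
        // Placeholders for the real triggers (MOB compaction, major
        // compaction, archive cleaner) that the repro invokes via HBase admin.
        Runnable mobCompaction = () -> System.out.println("trigger MOB compaction");
        Runnable majorCompaction = () -> System.out.println("trigger major compaction");
        Runnable archiveCleaner = () -> System.out.println("trigger archive cleaner");

        // Fire each trigger once up front so this sketch is deterministic.
        mobCompaction.run();
        majorCompaction.run();
        archiveCleaner.run();

        // Then repeat at the cadence from the steps above: both compactions
        // every 2 minutes, archive cleaner every 3 minutes, all overlapping,
        // which is what provokes the race.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(3);
        pool.scheduleAtFixedRate(mobCompaction, 2, 2, TimeUnit.MINUTES);
        pool.scheduleAtFixedRate(majorCompaction, 2, 2, TimeUnit.MINUTES);
        pool.scheduleAtFixedRate(archiveCleaner, 3, 3, TimeUnit.MINUTES);
        pool.shutdownNow(); // demo only: cancel the pending repeats
    }
}
```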

> Potential data loss when MOB compaction fails
> -
>
> Key: HBASE-22075
> URL: https://issues.apache.org/jira/browse/HBASE-22075
> Project: HBase
>  Issue Type: Bug
>  Components: mob
>Affects Versions: 2.1.0, 2.0.0, 2.0.1, 2.1.1, 2.0.2, 2.0.3, 2.1.2, 2.0.4, 
> 2.1.3
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: compaction, mob
> Fix For: 2.0.6, 2.1.5, 2.2.1
>
> Attachments: HBASE-22075-v1.patch, HBASE-22075-v2.patch, 
> ReproMOBDataLoss.java
>
>
> When MOB compaction fails during the last step (bulk load of a newly 
> created reference file), there is a high chance of data loss due to a 
> partially loaded reference file whose cells refer to a (now) non-existent 
> MOB file. The newly created MOB file is deleted automatically on MOB 
> compaction failure, but some cells with references to this file might 
> already be loaded into HBase. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22075) Potential data loss when MOB compaction fails

2019-05-03 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-22075:
---
Attachment: ReproMOBDataLoss.java

> Potential data loss when MOB compaction fails
> -
>
> Key: HBASE-22075
> URL: https://issues.apache.org/jira/browse/HBASE-22075
> Project: HBase
>  Issue Type: Bug
>  Components: mob
>Affects Versions: 2.1.0, 2.0.0, 2.0.1, 2.1.1, 2.0.2, 2.0.3, 2.1.2, 2.0.4, 
> 2.1.3
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Critical
>  Labels: compaction, mob
> Fix For: 2.0.6, 2.1.5, 2.2.1
>
> Attachments: HBASE-22075-v1.patch, HBASE-22075-v2.patch, 
> ReproMOBDataLoss.java
>
>
> When MOB compaction fails during the last step (bulk load of a newly 
> created reference file), there is a high chance of data loss due to a 
> partially loaded reference file whose cells refer to a (now) non-existent 
> MOB file. The newly created MOB file is deleted automatically on MOB 
> compaction failure, but some cells with references to this file might 
> already be loaded into HBase. 





[jira] [Updated] (HBASE-21816) Print source cluster replication config directory

2019-02-07 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-21816:
---
Attachment: HBASE-21816.master.001.patch

> Print source cluster replication config directory
> -
>
> Key: HBASE-21816
> URL: https://issues.apache.org/jira/browse/HBASE-21816
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0, 2.0.0
> Environment: NA
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HBASE-21816-001.patch, HBASE-21816-002.patch, 
> HBASE-21816-003.patch, HBASE-21816.master.001.patch
>
>
> Users may get confused about which HBase configurations are loaded for 
> replication. Sometimes users place the source and destination cluster conf 
> under the "/etc/hbase/conf" directory. This creates uncertainty because our 
> log suggests that all the configurations are co-located.
>  
> Existing Logs, 
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/
> {code}
> But it should be something like,
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/HDP1
> {code}
>  
> This jira is only to change the log line; there is no issue with the 
> functionality. 
> {code:java}
> File confDir = new File(replicationConfDir, replicationClusterId);
> String[] listofConfFiles = FileUtil.list(confDir);
> for (String confFile : listofConfFiles) {
>   if (new File(confDir, confFile).isFile() && confFile.endsWith(XML)) {
>     // Add all the user provided client conf files
>     sourceClusterConf.addResource(new Path(confDir.getPath(), confFile));
>   }
> }
> {code}
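
The fix amounts to logging the per-cluster subdirectory (confDir) rather than the parent replicationConfDir. A minimal standalone sketch, with the directory and cluster id values assumed from the example log lines above:

```java
import java.io.File;

public class ReplicationConfDirLogSketch {
    public static void main(String[] args) {
        // Values assumed from the example log lines above.
        String replicationConfDir = "/etc/hbase/conf";
        String replicationClusterId = "HDP1";
        // The existing log prints the parent replicationConfDir; printing
        // confDir instead yields the per-cluster path the patch proposes.
        File confDir = new File(replicationConfDir, replicationClusterId);
        System.out.println("Loading source cluster " + replicationClusterId
            + " file system configurations from xml files under directory "
            + confDir.getPath());
    }
}
```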





[jira] [Updated] (HBASE-21816) Print source cluster replication config directory

2019-02-01 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-21816:
---
Attachment: HBASE-21816-003.patch

> Print source cluster replication config directory
> -
>
> Key: HBASE-21816
> URL: https://issues.apache.org/jira/browse/HBASE-21816
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0, 2.0.0
> Environment: NA
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-21816-001.patch, HBASE-21816-002.patch, 
> HBASE-21816-003.patch
>
>
> Users may get confused about which HBase configurations are loaded for 
> replication. Sometimes users place the source and destination cluster conf 
> under the "/etc/hbase/conf" directory. This creates uncertainty because our 
> log suggests that all the configurations are co-located.
>  
> Existing Logs, 
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/
> {code}
> But it should be something like,
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/HDP1
> {code}
>  
> This jira is only to change the log line; there is no issue with the 
> functionality. 
> {code:java}
> File confDir = new File(replicationConfDir, replicationClusterId);
> String[] listofConfFiles = FileUtil.list(confDir);
> for (String confFile : listofConfFiles) {
>   if (new File(confDir, confFile).isFile() && confFile.endsWith(XML)) {
>     // Add all the user provided client conf files
>     sourceClusterConf.addResource(new Path(confDir.getPath(), confFile));
>   }
> }
> {code}





[jira] [Commented] (HBASE-21816) Print source cluster replication config directory

2019-02-01 Thread Karthik Palanisamy (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758676#comment-16758676
 ] 

Karthik Palanisamy commented on HBASE-21816:


Please find the new patch.

> Print source cluster replication config directory
> -
>
> Key: HBASE-21816
> URL: https://issues.apache.org/jira/browse/HBASE-21816
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0, 2.0.0
> Environment: NA
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-21816-001.patch, HBASE-21816-002.patch, 
> HBASE-21816-003.patch
>
>
> Users may get confused about which HBase configurations are loaded for 
> replication. Sometimes users place the source and destination cluster conf 
> under the "/etc/hbase/conf" directory. This creates uncertainty because our 
> log suggests that all the configurations are co-located.
>  
> Existing Logs, 
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/
> {code}
> But it should be something like,
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/HDP1
> {code}
>  
> This jira is only to change the log line; there is no issue with the 
> functionality. 
> {code:java}
> File confDir = new File(replicationConfDir, replicationClusterId);
> String[] listofConfFiles = FileUtil.list(confDir);
> for (String confFile : listofConfFiles) {
>   if (new File(confDir, confFile).isFile() && confFile.endsWith(XML)) {
>     // Add all the user provided client conf files
>     sourceClusterConf.addResource(new Path(confDir.getPath(), confFile));
>   }
> }
> {code}





[jira] [Commented] (HBASE-21816) Print source cluster replication config directory

2019-02-01 Thread Karthik Palanisamy (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758674#comment-16758674
 ] 

Karthik Palanisamy commented on HBASE-21816:


Thank you [~elserj] for reviewing it. Yes, this should be a simple change to 
reverse it. I lined edge :)

> Print source cluster replication config directory
> -
>
> Key: HBASE-21816
> URL: https://issues.apache.org/jira/browse/HBASE-21816
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0, 2.0.0
> Environment: NA
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-21816-001.patch, HBASE-21816-002.patch, 
> HBASE-21816-003.patch
>
>
> Users may get confused about which HBase configurations are loaded for 
> replication. Sometimes users place the source and destination cluster conf 
> under the "/etc/hbase/conf" directory. This creates uncertainty because our 
> log suggests that all the configurations are co-located.
>  
> Existing Logs, 
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/
> {code}
> But it should be something like,
> {code:java}
> INFO  [RpcServer.FifoWFPBQ.replication.handler=2,queue=0,port=16020] 
> regionserver.DefaultSourceFSConfigurationProvider: Loading source cluster 
> HDP1 file system configurations from xml files under directory 
> /etc/hbase/conf/HDP1
> {code}
>  
> This jira is only to change the log line; there is no issue with the 
> functionality. 
> {code:java}
> File confDir = new File(replicationConfDir, replicationClusterId);
> String[] listofConfFiles = FileUtil.list(confDir);
> for (String confFile : listofConfFiles) {
>   if (new File(confDir, confFile).isFile() && confFile.endsWith(XML)) {
>     // Add all the user provided client conf files
>     sourceClusterConf.addResource(new Path(confDir.getPath(), confFile));
>   }
> }
> {code}




