[jira] [Created] (HBASE-23004) Update 2.0 Javadoc on website

2019-09-09 Thread Peter Somogyi (Jira)
Peter Somogyi created HBASE-23004:
-

 Summary: Update 2.0 Javadoc on website
 Key: HBASE-23004
 URL: https://issues.apache.org/jira/browse/HBASE-23004
 Project: HBase
  Issue Type: Task
  Components: website
Affects Versions: 2.0.6
Reporter: Peter Somogyi
 Fix For: 2.0.6


Javadoc for 2.0 is still at 2.0.5. Since the 2.0 branch is EOL'd, the up-to-date 
Javadoc should be hosted here: [https://hbase.apache.org/2.0/apidocs/index.html]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (HBASE-22997) Move to SLF4J

2019-09-09 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-22997.
---
Release Note: Added SLF4J binding for LOG4J 2.
  Resolution: Fixed

Merged pull request to master. Thanks for the review [~stack]!
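
For reference, the code-level shape of this move (a minimal sketch, assuming the 
standard SLF4J API plus a Log4j 2 binding such as log4j-slf4j-impl; the HBCK2 class 
name is used only for illustration):

{code}
// Before: logging against the Log4j 2 API directly.
// import org.apache.logging.log4j.LogManager;
// import org.apache.logging.log4j.Logger;
// private static final Logger LOG = LogManager.getLogger(HBCK2.class);

// After: log against the SLF4J API; the Log4j 2 binding routes output at runtime.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HBCK2 {
  private static final Logger LOG = LoggerFactory.getLogger(HBCK2.class);

  public static void main(String[] args) {
    LOG.info("hbase-operator-tools logging via SLF4J");
  }
}
{code}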

> Move to SLF4J
> -
>
> Key: HBASE-22997
> URL: https://issues.apache.org/jira/browse/HBASE-22997
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase-operator-tools
>Affects Versions: operator-1.0.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: operator-1.0.0
>
>
> Currently hbase-operator-tools uses org.apache.logging.log4j while the rest 
> of our projects have SLF4J.
> When building the project with release profile the enforce plugin fails on  
> org.apache.logging.log4j:log4j-api:jar:2.11.1 dependency
> {noformat}
> [INFO] --- maven-enforcer-plugin:1.4:enforce 
> (min-maven-min-java-banned-xerces) @ hbase-hbck2 ---
> [INFO] Restricted to JDK 1.8 yet 
> org.apache.logging.log4j:log4j-api:jar:2.11.1:compile contains 
> META-INF/versions/9/module-info.class targeted to JDK 1.9
> [WARNING] Rule 3: org.apache.maven.plugins.enforcer.EnforceBytecodeVersion 
> failed with message:
> HBase has unsupported dependencies.
>   HBase requires that all dependencies be compiled with version 1.8 or earlier
>   of the JDK to properly build from source.  You appear to be using a newer 
> dependency. You can use
>   either "mvn -version" or "mvn enforcer:display-info" to verify what version 
> is active.
>   Non-release builds can temporarily build with a newer JDK version by 
> setting the
>   'compileSource' property (eg. mvn -DcompileSource=1.8 clean package).
> Found Banned Dependency: org.apache.logging.log4j:log4j-api:jar:2.11.1
> Use 'mvn dependency:tree' to locate the source of the banned dependencies. 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [hbase-operator-tools] petersomogyi merged pull request #25: HBASE-22997 Move to SLF4J

2019-09-09 Thread GitBox
petersomogyi merged pull request #25: HBASE-22997 Move to SLF4J
URL: https://github.com/apache/hbase-operator-tools/pull/25
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-operator-tools] petersomogyi commented on issue #25: HBASE-22997 Move to SLF4J

2019-09-09 Thread GitBox
petersomogyi commented on issue #25: HBASE-22997 Move to SLF4J
URL: 
https://github.com/apache/hbase-operator-tools/pull/25#issuecomment-529792539
 
 
   > Logs that come out look ok?
   
   Yes, log messages are the same as before and the `-d` debug flag works as 
well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-21745) Make HBCK2 be able to fix issues other than region assignment

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21745:
--
Release Note: 
This issue adds via its subtasks:

 * An 'HBCK Report' page in the Master UI, added by 
HBASE-22527+HBASE-22709+HBASE-22723+ (since 2.1.6, 2.2.1, 2.3.0). It lists 
inconsistencies and anomalies found via new hbase:meta consistency-checking 
extensions added to CatalogJanitor (holes, overlaps, bad servers) and by a new 
'HBCK chore' that runs at a lower periodicity and notes filesystem orphans and 
overlaps as well as the following conditions:
 ** Master thought this region opened, but no regionserver reported it.
 ** Master thought this region opened on Server1, but a regionserver reported 
Server2.
 ** More than one regionserver reported having opened this region.
 Both chores can be triggered from the shell to regenerate ‘new’ reports.
 * A means of scheduling a ServerCrashProcedure (HBASE-21393).
 * An ‘offline’ hbase:meta rebuild (HBASE-22680).
 * Offline replacement of hbase.version and hbase.id.
 * Documentation on how to use the completebulkload tool to ‘adopt’ orphaned 
data found by the new HBCK2 ‘filesystem’ check (see below) and the ‘HBCK chore’ 
(HBASE-22859).
 * A ‘holes’ and ‘overlaps’ fix that runs in the master and uses a new 
bulk-merge facility to collapse many overlaps in one go.
 * The hbase-operator-tools HBCK2 client tool got a number of additions:
 ** A specialized 'fix' for the case where operators ran the old hbck 
'offlinemeta' repair and destroyed their hbase:meta; it ties together holes in 
meta with orphaned data in the fs (HBASE-22567).
 ** A ‘filesystem’ command that reports on orphan data as well as bad 
references and hlinks, with a ‘fix’ for the latter two (based on an updated 
hbck1 facility).
 ** Adds back the ‘replication’ fix facility from hbck1 (HBASE-22717).

  was:
This issue adds via its subtasks:

 * An 'HBCK Report' page to the Master UI added by 
HBASE-22527+HBASE-22709+HBASE-22723+ (since 2.1.6, 2.2.1, 2.3.0). Lists 
consistency or anomalies found via new hbase:meta consistency checking 
extensions added to CatalogJanitor (holes, overlaps, bad servers) and by a new 
'HBCK chore' that runs at a lesser periodicity that will note filesystem 
orphans and overlaps as well as the following conditions:

 ** Master thought this region opened, but no regionserver reported it. 
  ** Master thought this region opened on Server1, but regionserver reported 
Server2 
  ** More than one regionservers reported opened this region

 Both chores can be triggered from the shell to regenerate ‘new’ reports.

 * Means of scheduling a ServerCrashProcedure (HBASE-21393).
 * An ‘offline’ hbase:meta rebuild (HBASE-22680).
 * Offline replace of hbase.version and hbase.id
 * Documentation on how to use completebulkload tool to ‘adopt’ orphaned data 
found by new HBCK2 ‘filesystem’ check (see below) and ‘HBCK chore’ (HBASE-22859)
 * A ‘holes’ and ‘overlaps’ fix that runs in the master that uses new 
bulk-merge facility to collapse many overlaps in the one go.
 * hbase-operator-tools HBCK2 client tool got a bunch of additions:
 ** A specialized 'fix' for the case where operators ran old hbck 'offlinemeta' 
repair and destroyed their hbase:meta; it ties together holes in meta with 
orphaned data in the fs (HBASE-22567)
 ** A ‘filesystem’ command that reports on orphan data as well as bad 
references and hlinks with a ‘fix’ for the latter two options (based on hbck1 
facility updated).
 ** Adds back the ‘replication’ fix facility from hbck1 (HBASE-22717)


> Make HBCK2 be able to fix issues other than region assignment
> -
>
> Key: HBASE-21745
> URL: https://issues.apache.org/jira/browse/HBASE-21745
> Project: HBase
>  Issue Type: Umbrella
>  Components: hbase-operator-tools, hbck2
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Critical
>
> This is what [~apurtell] posted on mailing-list, HBCK2 should support
>  * -Rebuild meta from region metadata in the filesystem, aka offline meta 
> rebuild.-
>  * -Fix assignment errors (undeployed regions, double assignments (yes, 
> should not be possible), etc)- (See 
> https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
>  * -Fix region holes, overlaps, and other errors in the region chain- (See 
> HBASE-22796 and HBASE-22771 -- add hole and overlap fixing to the master; the 
> hbck2 client can ask for a fixMeta).
>  * -Fix failed split and merge transactions that have failed to roll back due 
> to some bug (related to previous)- (Previous items 'overlaps' will take care 
> of these).
>  *  -Enumerate store files to determine file level corruption and sideline 
> corrupt files-
>  * -Fix hfile link problems (dangling / broken)-
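
As a rough illustration of how the master-side facilities in the release note 
above are reached from Java, a sketch against the client Hbck interface 
(obtained via Connection#getHbck in 2.3-era clients; the exact method set varies 
by minor version, so treat the calls below as assumptions to verify against your 
release):

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Hbck;

public class FixMetaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Hbck hbck = conn.getHbck()) {
      // Re-run the HBCK chore so the 'HBCK Report' page reflects current state.
      hbck.runHbckChore();
      // Ask the master to repair hbase:meta holes (and, per HBASE-22796, overlaps).
      hbck.fixMeta();
    }
  }
}
{code}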



[jira] [Updated] (HBASE-21745) Make HBCK2 be able to fix issues other than region assignment

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21745:
--
Hadoop Flags: Reviewed
Release Note: 
This issue adds via its subtasks:

 * An 'HBCK Report' page in the Master UI, added by 
HBASE-22527+HBASE-22709+HBASE-22723+ (since 2.1.6, 2.2.1, 2.3.0). It lists 
inconsistencies and anomalies found via new hbase:meta consistency-checking 
extensions added to CatalogJanitor (holes, overlaps, bad servers) and by a new 
'HBCK chore' that runs at a lower periodicity and notes filesystem orphans and 
overlaps as well as the following conditions:

 ** Master thought this region opened, but no regionserver reported it.
  ** Master thought this region opened on Server1, but a regionserver reported 
Server2.
  ** More than one regionserver reported having opened this region.

 Both chores can be triggered from the shell to regenerate ‘new’ reports.

 * A means of scheduling a ServerCrashProcedure (HBASE-21393).
 * An ‘offline’ hbase:meta rebuild (HBASE-22680).
 * Offline replacement of hbase.version and hbase.id.
 * Documentation on how to use the completebulkload tool to ‘adopt’ orphaned 
data found by the new HBCK2 ‘filesystem’ check (see below) and the ‘HBCK chore’ 
(HBASE-22859).
 * A ‘holes’ and ‘overlaps’ fix that runs in the master and uses a new 
bulk-merge facility to collapse many overlaps in one go.
 * The hbase-operator-tools HBCK2 client tool got a number of additions:
 ** A specialized 'fix' for the case where operators ran the old hbck 
'offlinemeta' repair and destroyed their hbase:meta; it ties together holes in 
meta with orphaned data in the fs (HBASE-22567).
 ** A ‘filesystem’ command that reports on orphan data as well as bad 
references and hlinks, with a ‘fix’ for the latter two (based on an updated 
hbck1 facility).
 ** Adds back the ‘replication’ fix facility from hbck1 (HBASE-22717).

> Make HBCK2 be able to fix issues other than region assignment
> -
>
> Key: HBASE-21745
> URL: https://issues.apache.org/jira/browse/HBASE-21745
> Project: HBase
>  Issue Type: Umbrella
>  Components: hbase-operator-tools, hbck2
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Critical
>
> This is what [~apurtell] posted on mailing-list, HBCK2 should support
>  * -Rebuild meta from region metadata in the filesystem, aka offline meta 
> rebuild.-
>  * -Fix assignment errors (undeployed regions, double assignments (yes, 
> should not be possible), etc)- (See 
> https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
>  * -Fix region holes, overlaps, and other errors in the region chain- (See 
> HBASE-22796 and HBASE-22771 -- add hole and overlap fixing to the master; the 
> hbck2 client can ask for a fixMeta).
>  * -Fix failed split and merge transactions that have failed to roll back due 
> to some bug (related to previous)- (Previous items 'overlaps' will take care 
> of these).
>  *  -Enumerate store files to determine file level corruption and sideline 
> corrupt files-
>  * -Fix hfile link problems (dangling / broken)-



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22965) RS Crash due to DBE reference to a reused ByteBuff

2019-09-09 Thread chenxu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-22965:
---
Summary: RS Crash due to DBE reference to a reused ByteBuff  (was: RS 
Crash due to RowIndexEncoderV1 reference to a reused ByteBuff)

> RS Crash due to DBE reference to a reused ByteBuff
> ---
>
> Key: HBASE-22965
> URL: https://issues.apache.org/jira/browse/HBASE-22965
> Project: HBase
>  Issue Type: Bug
>Reporter: chenxu
>Priority: Major
> Attachments: hs_regionserver_err_pid.log
>
>
> After introducing HBASE-21879 into our own branch and enabling data block 
> encoding with ROW_INDEX_V1, the RegionServer crashed (the crash log has been 
> uploaded).
> Reading RowIndexEncoderV1 shows that _lastCell_ may refer to a reused 
> ByteBuff, because the DBE is not a listener of the Shipper.
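
The hazard here is that any Cell the encoder retains across blocks can end up 
pointing into a buffer the pool has already recycled. A hedged sketch of one 
mitigation, copying the retained cell onto the heap (illustrative only, not the 
committed fix; KeyValueUtil.copyToNewKeyValue is a standard HBase utility):

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValueUtil;

final class LastCellHolder {
  private Cell lastCell;

  void update(Cell incoming) {
    // Deep-copy onto the heap so the retained reference cannot point into a
    // pooled ByteBuff that may be handed out to another reader and overwritten.
    this.lastCell = KeyValueUtil.copyToNewKeyValue(incoming);
  }

  Cell get() {
    return lastCell;
  }
}
{code}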



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (HBASE-22965) RS Crash due to DBE reference to a reused ByteBuff

2019-09-09 Thread chenxu (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu reassigned HBASE-22965:
--

Assignee: chenxu

> RS Crash due to DBE reference to a reused ByteBuff
> ---
>
> Key: HBASE-22965
> URL: https://issues.apache.org/jira/browse/HBASE-22965
> Project: HBase
>  Issue Type: Bug
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
> Attachments: hs_regionserver_err_pid.log
>
>
> After introducing HBASE-21879 into our own branch and enabling data block 
> encoding with ROW_INDEX_V1, the RegionServer crashed (the crash log has been 
> uploaded).
> Reading RowIndexEncoderV1 shows that _lastCell_ may refer to a reused 
> ByteBuff, because the DBE is not a listener of the Shipper.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22796) [HBCK2] Add fix of overlaps to fixMeta hbck Service

2019-09-09 Thread Sakthi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926342#comment-16926342
 ] 

Sakthi commented on HBASE-22796:


Reviewing, Stack.

> [HBCK2] Add fix of overlaps to fixMeta hbck Service
> ---
>
> Key: HBASE-22796
> URL: https://issues.apache.org/jira/browse/HBASE-22796
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Sakthi
>Priority: Major
> Attachments: HBASE-22796.master.001.patch, 
> HBASE-22796.master.002.patch, HBASE-22796.master.003.patch, 
> HBASE-22796.master.004.patch, HBASE-22796.master.005.patch
>
>
> fixMeta currently fixes only holes in meta, courtesy of HBASE-22771 which added 
> fixMeta to the hbck Service; the fix for overlaps was still missing. This JIRA 
> is about adding the overlap fix to the general fixMeta call.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-21745) Make HBCK2 be able to fix issues other than region assignment

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21745:
--
Description: 
This is what [~apurtell] posted on mailing-list, HBCK2 should support
 * -Rebuild meta from region metadata in the filesystem, aka offline meta 
rebuild.-
 * -Fix assignment errors (undeployed regions, double assignments (yes, should 
not be possible), etc)- (See 
https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
 * -Fix region holes, overlaps, and other errors in the region chain- (See 
HBASE-22796 and HBASE-22771 -- add hole and overlap fixing to the master; the 
hbck2 client can ask for a fixMeta).
 * - Fix failed split and merge transactions that have failed to roll back due 
to some bug (related to previous)- (Previous items 'overlaps' will take care of 
these).
 *  -Enumerate store files to determine file level corruption and sideline 
corrupt files-
 * -Fix hfile link problems (dangling / broken)-

  was:
This is what [~apurtell] posted on mailing-list, HBCK2 should support
 * -Rebuild meta from region metadata in the filesystem, aka offline meta 
rebuild.-
 * -Fix assignment errors (undeployed regions, double assignments (yes, should 
not be possible), etc)- (See 
https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
 * -Fix region holes, overlaps, and other errors in the region chain- (See 
 * Fix failed split and merge transactions that have failed to roll back due to 
some bug (related to previous)
 *  -Enumerate store files to determine file level corruption and sideline 
corrupt files-
 * -Fix hfile link problems (dangling / broken)-


> Make HBCK2 be able to fix issues other than region assignment
> -
>
> Key: HBASE-21745
> URL: https://issues.apache.org/jira/browse/HBASE-21745
> Project: HBase
>  Issue Type: Umbrella
>  Components: hbase-operator-tools, hbck2
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Critical
>
> This is what [~apurtell] posted on mailing-list, HBCK2 should support
>  * -Rebuild meta from region metadata in the filesystem, aka offline meta 
> rebuild.-
>  * -Fix assignment errors (undeployed regions, double assignments (yes, 
> should not be possible), etc)- (See 
> https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
>  * -Fix region holes, overlaps, and other errors in the region chain- (See 
> HBASE-22796 and HBASE-22771 -- add hole and overlap fixing to the master; the 
> hbck2 client can ask for a fixMeta).
>  * - Fix failed split and merge transactions that have failed to roll back 
> due to some bug (related to previous)- (Previous items 'overlaps' will take 
> care of these).
>  *  -Enumerate store files to determine file level corruption and sideline 
> corrupt files-
>  * -Fix hfile link problems (dangling / broken)-



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-21745) Make HBCK2 be able to fix issues other than region assignment

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21745:
--
Description: 
This is what [~apurtell] posted on mailing-list, HBCK2 should support
 * -Rebuild meta from region metadata in the filesystem, aka offline meta 
rebuild.-
 * -Fix assignment errors (undeployed regions, double assignments (yes, should 
not be possible), etc)- (See 
https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
 * -Fix region holes, overlaps, and other errors in the region chain- (See 
HBASE-22796 and HBASE-22771 -- add hole and overlap fixing to the master; the 
hbck2 client can ask for a fixMeta).
 * -Fix failed split and merge transactions that have failed to roll back due 
to some bug (related to previous)- (Previous items 'overlaps' will take care of 
these).
 *  -Enumerate store files to determine file level corruption and sideline 
corrupt files-
 * -Fix hfile link problems (dangling / broken)-

  was:
This is what [~apurtell] posted on mailing-list, HBCK2 should support
 * -Rebuild meta from region metadata in the filesystem, aka offline meta 
rebuild.-
 * -Fix assignment errors (undeployed regions, double assignments (yes, should 
not be possible), etc)- (See 
https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
 * -Fix region holes, overlaps, and other errors in the region chain- (See 
HBASE-22796 and HBASE-22771 -- adds hole and overlap fixing to master; hbck2 
client can as for a fixMeta).
 * - Fix failed split and merge transactions that have failed to roll back due 
to some bug (related to previous)- (Previous items 'overlaps' will take care of 
these).
 *  -Enumerate store files to determine file level corruption and sideline 
corrupt files-
 * -Fix hfile link problems (dangling / broken)-


> Make HBCK2 be able to fix issues other than region assignment
> -
>
> Key: HBASE-21745
> URL: https://issues.apache.org/jira/browse/HBASE-21745
> Project: HBase
>  Issue Type: Umbrella
>  Components: hbase-operator-tools, hbck2
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Critical
>
> This is what [~apurtell] posted on mailing-list, HBCK2 should support
>  * -Rebuild meta from region metadata in the filesystem, aka offline meta 
> rebuild.-
>  * -Fix assignment errors (undeployed regions, double assignments (yes, 
> should not be possible), etc)- (See 
> https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
>  * -Fix region holes, overlaps, and other errors in the region chain- (See 
> HBASE-22796 and HBASE-22771 -- add hole and overlap fixing to the master; the 
> hbck2 client can ask for a fixMeta).
>  * -Fix failed split and merge transactions that have failed to roll back due 
> to some bug (related to previous)- (Previous items 'overlaps' will take care 
> of these).
>  *  -Enumerate store files to determine file level corruption and sideline 
> corrupt files-
>  * -Fix hfile link problems (dangling / broken)-



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-21745) Make HBCK2 be able to fix issues other than region assignment

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21745:
--
Description: 
This is what [~apurtell] posted on mailing-list, HBCK2 should support
 * -Rebuild meta from region metadata in the filesystem, aka offline meta 
rebuild.-
 * -Fix assignment errors (undeployed regions, double assignments (yes, should 
not be possible), etc)- (See 
https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
 * -Fix region holes, overlaps, and other errors in the region chain- (See 
 * Fix failed split and merge transactions that have failed to roll back due to 
some bug (related to previous)
 *  -Enumerate store files to determine file level corruption and sideline 
corrupt files-
 * -Fix hfile link problems (dangling / broken)-

  was:
This is what [~apurtell] posted on mailing-list, HBCK2 should support
 * -Rebuild meta from region metadata in the filesystem, aka offline meta 
rebuild.-
 * -Fix assignment errors (undeployed regions, double assignments (yes, should 
not be possible), etc)- (See 
https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
 * Fix region holes, overlaps, and other errors in the region chain
 * Fix failed split and merge transactions that have failed to roll back due to 
some bug (related to previous)
 *  -Enumerate store files to determine file level corruption and sideline 
corrupt files-
 * -Fix hfile link problems (dangling / broken)-


> Make HBCK2 be able to fix issues other than region assignment
> -
>
> Key: HBASE-21745
> URL: https://issues.apache.org/jira/browse/HBASE-21745
> Project: HBase
>  Issue Type: Umbrella
>  Components: hbase-operator-tools, hbck2
>Reporter: Duo Zhang
>Assignee: stack
>Priority: Critical
>
> This is what [~apurtell] posted on mailing-list, HBCK2 should support
>  * -Rebuild meta from region metadata in the filesystem, aka offline meta 
> rebuild.-
>  * -Fix assignment errors (undeployed regions, double assignments (yes, 
> should not be possible), etc)- (See 
> https://issues.apache.org/jira/browse/HBASE-21745?focusedCommentId=16888302&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16888302)
>  * -Fix region holes, overlaps, and other errors in the region chain- (See 
>  * Fix failed split and merge transactions that have failed to roll back due 
> to some bug (related to previous)
>  *  -Enumerate store files to determine file level corruption and sideline 
> corrupt files-
>  * -Fix hfile link problems (dangling / broken)-



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-23003) [HBCK2/hbase-operator-tools] Release-making scripts

2019-09-09 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926340#comment-16926340
 ] 

stack commented on HBASE-23003:
---

Ran into HBASE-22997 'Move to SLF4J'; blocked on it for now.

> [HBCK2/hbase-operator-tools] Release-making scripts
> ---
>
> Key: HBASE-23003
> URL: https://issues.apache.org/jira/browse/HBASE-23003
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Major
>
> Make some scripts for creating hbase-operator-tools releases so it is easy to 
> do and not subject to the vagaries of the environment or the RM's 
> attention to detail (or lack thereof).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926326#comment-16926326
 ] 

HBase QA commented on HBASE-22804:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 69m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} yetus {color} | {color:red}  0m 26s{color} 
| {color:red} Unprocessed flag(s): --skip-errorprone {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/889/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979915/HBASE-22804.branch-1.009.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/889/console |
| versions | git=1.9.1 |
| Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |


This message was automatically generated.



> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-1.002.patch, HBASE-22804.branch-1.003.patch, 
> HBASE-22804.branch-1.004.patch, HBASE-22804.branch-1.005.patch, 
> HBASE-22804.branch-1.006.patch, HBASE-22804.branch-1.007.patch, 
> HBASE-22804.branch-1.008.patch, HBASE-22804.branch-1.009.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.branch-2.002.patch, 
> HBASE-22804.branch-2.003.patch, HBASE-22804.branch-2.004.patch, 
> HBASE-22804.branch-2.005.patch, HBASE-22804.branch-2.006.patch, 
> HBASE-22804.master.001.patch, HBASE-22804.master.002.patch, 
> HBASE-22804.master.003.patch, HBASE-22804.master.004.patch, 
> HBASE-22804.master.005.patch, HBASE-22804.master.006.patch
>
>
> At present the HBase Canary tool only prints the successes as part of its logs. 
> Providing an API to get the list of successes, as well as the total number of 
> expected regions, will make it easier to compute a more accurate availability 
> estimate.
>   
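
A sketch of the shape such an accessor could take (names here are hypothetical, 
not the API from the attached patches):

{code}
import java.util.List;

/** Hypothetical result-accessor shape for the Canary; not the patch's actual API. */
interface CanaryRegionResults {
  /** Regions the Canary probed successfully. */
  List<String> getSuccessfulRegionNames();

  /** Total number of regions the Canary expected to probe. */
  long getExpectedRegionCount();

  /** Availability estimate derived from the two values above. */
  default double availabilityEstimate() {
    long expected = getExpectedRegionCount();
    return expected == 0 ? 1.0 : (double) getSuccessfulRegionNames().size() / expected;
  }
}
{code}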



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22888) Use separate classes to deal with streaming read and pread

2019-09-09 Thread chenxu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926325#comment-16926325
 ] 

chenxu commented on HBASE-22888:


The following changes have been made:
(1) make BlockIndexReader stateless, so it can be shared between readers
(2) share the FixedFileTrailer between readers
(3) do computeHDFSBlocksDistribution in the pread reader only
The performance improvement for Scan is obvious in our test environment.
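
A rough sketch of the sharing idea (HFilePreadReader/HFileStreamReader are the 
names proposed in the issue; the constructor shape and fields below are 
illustrative stand-ins, not the patch's actual types):

{code}
// Illustrative only: a stream reader reusing state the pread reader already
// loaded (trailer, block index), instead of re-reading it from the HFile.
class HFilePreadReaderSketch {
  final FixedFileTrailerStub trailer = new FixedFileTrailerStub();
  final BlockIndexReaderStub blockIndex = new BlockIndexReaderStub();
  // Only the pread reader does prefetch and computeHDFSBlocksDistribution.
}

class HFileStreamReaderSketch {
  final FixedFileTrailerStub trailer;
  final BlockIndexReaderStub blockIndex;

  HFileStreamReaderSketch(HFilePreadReaderSketch pread) {
    // Copy references, not bytes: the index is stateless and the trailer is
    // immutable once read, so both can be shared safely across readers.
    this.trailer = pread.trailer;
    this.blockIndex = pread.blockIndex;
  }
}

// Stand-ins for the real HBase types, to keep the sketch self-contained.
class FixedFileTrailerStub {}
class BlockIndexReaderStub {}
{code}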

> Use separate classes to deal with streaming read and pread
> -
>
> Key: HBASE-22888
> URL: https://issues.apache.org/jira/browse/HBASE-22888
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>
> When switching from pread to stream read, a new HFileReaderImpl is created, but 
> the two readers do not share information with each other. Maybe we can divide 
> HFileReaderImpl into two different classes, such as HFilePreadReader and 
> HFileStreamReader. When constructing an HFileStreamReader, it would copy some 
> state (fileInfo, index, etc.) from the already existing reader, with no need to 
> do prefetch operations.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [hbase] chenxu14 commented on issue #581: HBASE-22888 Use separate classes to deal with streaming read and pread

2019-09-09 Thread GitBox
chenxu14 commented on issue #581: HBASE-22888 Use separate classes to deal with 
streaming read and pread
URL: https://github.com/apache/hbase/pull/581#issuecomment-529765999
 
 
   > I think the most important improvement here is that, we can share the 
index and related data when opening new stream readers? Great that finally 
someone implements it.
   > 
   > Will take a look soon when I have time. Great job.
   
I'd be very glad if you could review it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] virajjasani opened a new pull request #600: HBASE-22460 : Reopen regions with very high Store Ref Counts

2019-09-09 Thread GitBox
virajjasani opened a new pull request #600: HBASE-22460 : Reopen regions with 
very high Store Ref Counts
URL: https://github.com/apache/hbase/pull/600
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926321#comment-16926321
 ] 

HBase QA commented on HBASE-22969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
1s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
33s{color} | {color:red} hbase-client: The patch generated 9 new + 0 unchanged 
- 0 fixed = 9 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
59s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 35s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} hbase-client generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s

[jira] [Comment Edited] (HBASE-23002) [HBCK2/hbase-operator-tools] Create an assembly that builds an hbase-operator-tools tgz

2019-09-09 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926306#comment-16926306
 ] 

stack edited comment on HBASE-23002 at 9/10/19 3:51 AM:


Created HBASE-23002 branch on hbase-operator-tools and pushed first cut at a 
patch.


was (Author: stack):
Created HBASE-23002 branch on hbase-operator-tools.

> [HBCK2/hbase-operator-tools] Create an assembly that builds an 
> hbase-operator-tools tgz
> ---
>
> Key: HBASE-23002
> URL: https://issues.apache.org/jira/browse/HBASE-23002
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Major
>
> Was going to build a convenience binary tgz as part of the first release of 
> hbase-operator-tools. Not sure how just yet; best would be if it were a 
> fat jar with all dependencies, but that would also be kind of insane since 
> the tgz would be massive.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-23002) [HBCK2/hbase-operator-tools] Create an assembly that builds an hbase-operator-tools tgz

2019-09-09 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926306#comment-16926306
 ] 

stack commented on HBASE-23002:
---

Created HBASE-23002 branch on hbase-operator-tools.

> [HBCK2/hbase-operator-tools] Create an assembly that builds an 
> hbase-operator-tools tgz
> ---
>
> Key: HBASE-23002
> URL: https://issues.apache.org/jira/browse/HBASE-23002
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Major
>
> Was going to build a convenience binary tgz as part of the first release of 
> hbase-operator-tools. Not sure how just yet; best would be if it were a 
> fat jar with all dependencies, but that would also be kind of insane since 
> the tgz would be massive.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-22804:
-
Attachment: HBASE-22804.branch-1.009.patch

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-1.002.patch, HBASE-22804.branch-1.003.patch, 
> HBASE-22804.branch-1.004.patch, HBASE-22804.branch-1.005.patch, 
> HBASE-22804.branch-1.006.patch, HBASE-22804.branch-1.007.patch, 
> HBASE-22804.branch-1.008.patch, HBASE-22804.branch-1.009.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.branch-2.002.patch, 
> HBASE-22804.branch-2.003.patch, HBASE-22804.branch-2.004.patch, 
> HBASE-22804.branch-2.005.patch, HBASE-22804.branch-2.006.patch, 
> HBASE-22804.master.001.patch, HBASE-22804.master.002.patch, 
> HBASE-22804.master.003.patch, HBASE-22804.master.004.patch, 
> HBASE-22804.master.005.patch, HBASE-22804.master.006.patch
>
>
> At present the HBase Canary tool only prints the successes as part of its logs. 
> Providing an API to get the list of successes, as well as the total number of 
> expected regions, will make it easier to compute a more accurate availability 
> estimate.
>   



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Issue Comment Deleted] (HBASE-21873) region can be assigned to 2 servers due to a timed-out call or an unknown exception

2019-09-09 Thread yuhuiyang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yuhuiyang updated HBASE-21873:
--
Comment: was deleted

(was: I had a similar problem with version 2.1.1. In my case an 
AssignProcedure assigned region A as a subprocedure of a ServerCrashProcedure. 
The region failed to open on regionserver rs1 with a ServerBortingException, 
the open was retried on regionserver rs2, and it finally succeeded there, so 
the AssignProcedure succeeded as well. However, the AssignProcedure then 
received the RegionServerAbortedException from rs1 (five minutes later, due to 
a network problem), which made the region offline again.)

> region can be assigned to 2 servers due to a timed-out call or an unknown 
> exception
> ---
>
> Key: HBASE-21873
> URL: https://issues.apache.org/jira/browse/HBASE-21873
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21862-forUT.patch, HBASE-21862-v1.patch, 
> HBASE-21862-v2.patch, HBASE-21862.patch
>
>
> It's a classic bug, sort of... the call times out to open the region, but RS 
> actually processes it alright. It could also happen if the response didn't 
> make it back due to a network issue.
> As a result region is opened on two servers.
> There are some mitigations possible to narrow down the race window.
> 1) Don't process expired open calls, fail them. Won't help for network issues.
> 2) Don't ignore invalid RS state, kill it (YouAreDead exception) - but that 
> will require fixing other network races where master kills RS, which would 
> require adding state versioning to the protocol.
> The fundamental fix though would require either
> 1) an unknown failure from open to ascertain the state of the region from the 
> server. Again, this would probably require protocol changes to make sure we 
> ascertain the region is not opened, and also that the 
> already-failed-on-master open is NOT going to be processed if it's some queue 
> or even in transit on the network (via a nonce-like mechanism)?
> 2) some form of a distributed lock per region, e.g. in ZK
> 3) some form of 2PC? but the participant list cannot be determined in a 
> manner that's both scalable and guaranteed correct. Theoretically it could be 
> all RSes.
> {noformat}
> 2019-02-08 03:21:31,715 INFO  [PEWorker-7] 
> procedure.MasterProcedureScheduler: Took xlock for pid=260626, ppid=260595, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; 
> TransitRegionStateProcedure table=table, 
> region=d0214809147e43dc6870005742d5d204, ASSIGN
> 2019-02-08 03:21:31,758 INFO  [PEWorker-7] 
> assignment.TransitRegionStateProcedure: Starting pid=260626, ppid=260595, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
> TransitRegionStateProcedure table=table, 
> region=d0214809147e43dc6870005742d5d204, ASSIGN; rit=OPEN, 
> location=server1,17020,1549567999303; forceNewPlan=false, retain=true
> 2019-02-08 03:21:31,984 INFO  [PEWorker-13] assignment.RegionStateStore: 
> pid=260626 updating hbase:meta row=d0214809147e43dc6870005742d5d204, 
> regionState=OPENING, regionLocation=server1,17020,1549623714617
> 2019-02-08 03:22:32,552 WARN  [RSProcedureDispatcher-pool4-t3451] 
> assignment.RegionRemoteProcedureBase: The remote operation pid=260637, 
> ppid=260626, state=RUNNABLE, hasLock=false; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region ... 
> to server server1,17020,1549623714617 failed
> java.io.IOException: Call to server1/...:17020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=27191, 
> waitTime=60145, rpcTimeout=6^M
> at 
> org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:185)^M
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:391)^M
> ...
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=27191, 
> waitTime=60145, rpcTimeout=6^M
> at 
> org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200)^M
> ... 4 more^M
> {noformat}
> RS:
> {noformat}
> hbase-regionserver.log:2019-02-08 03:22:41,131 INFO  
> [RS_OPEN_REGION-regionserver/server1:17020-2] handler.AssignRegionHandler: 
> Open ...d0214809147e43dc6870005742d5d204.
> ...
> hbase-regionserver.log:2019-02-08 03:25:44,751 INFO  
> [RS_OPEN_REGION-regionserver/server1:17020-2] handler.AssignRegionHandler: 
> Opened ...d0214809147e43dc6870005742d5d204.
> {noformat}
> Retry:
> {noformat}
> 2019-02-08 03:22:32,967 INFO  [PEWorker-6] 
> assignment.TransitRegionStateProcedure

[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Attachment: HBASE-22969.0005.patch
Status: Patch Available  (was: Open)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.0005.patch, HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. For simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is client-side filtering. That can mean a lot of 
> unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, to be passed to the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Status: Open  (was: Patch Available)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key: a+b+c+d. For simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is client-side filtering. That can mean a lot of 
> unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, to be passed to the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-21873) region can be assigned to 2 servers due to a timed-out call or an unknown exception

2019-09-09 Thread yuhuiyang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926287#comment-16926287
 ] 

yuhuiyang edited comment on HBASE-21873 at 9/10/19 3:02 AM:


I had a similar problem with version 2.1.1. In my case an AssignProcedure 
assigned region A as a subprocedure of a ServerCrashProcedure. The region 
failed to open on regionserver rs1 with a ServerBortingException, the open was 
retried on regionserver rs2, and it finally succeeded there, so the 
AssignProcedure succeeded as well. However, the AssignProcedure then received 
the RegionServerAbortedException from rs1 (five minutes later, due to a network 
problem), which made the region offline again.


was (Author: yu-huiyang):
I had a similar problem with version 2.1.1. In my case an AssignProcedure 
assigned region A as a subprocedure of a ServerCrashProcedure. The region 
failed to open on regionserver rs1 with a ServerBortingException, the open was 
retried on regionserver rs2, and it finally succeeded there, so the 
AssignProcedure succeeded as well. I think the AssignProcedure should have been 
finished at that point. However, the AssignProcedure then received the 
RegionServerAbortedException from rs1 (five minutes later, due to a network 
problem), which made the region offline again.

> region can be assigned to 2 servers due to a timed-out call or an unknown 
> exception
> ---
>
> Key: HBASE-21873
> URL: https://issues.apache.org/jira/browse/HBASE-21873
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21862-forUT.patch, HBASE-21862-v1.patch, 
> HBASE-21862-v2.patch, HBASE-21862.patch
>
>
> It's a classic bug, sort of... the call times out to open the region, but RS 
> actually processes it alright. It could also happen if the response didn't 
> make it back due to a network issue.
> As a result region is opened on two servers.
> There are some mitigations possible to narrow down the race window.
> 1) Don't process expired open calls, fail them. Won't help for network issues.
> 2) Don't ignore invalid RS state, kill it (YouAreDead exception) - but that 
> will require fixing other network races where master kills RS, which would 
> require adding state versioning to the protocol.
> The fundamental fix though would require either
> 1) an unknown failure from open to ascertain the state of the region from the 
> server. Again, this would probably require protocol changes to make sure we 
> ascertain the region is not opened, and also that the 
> already-failed-on-master open is NOT going to be processed if it's some queue 
> or even in transit on the network (via a nonce-like mechanism)?
> 2) some form of a distributed lock per region, e.g. in ZK
> 3) some form of 2PC? but the participant list cannot be determined in a 
> manner that's both scalable and guaranteed correct. Theoretically it could be 
> all RSes.
> {noformat}
> 2019-02-08 03:21:31,715 INFO  [PEWorker-7] 
> procedure.MasterProcedureScheduler: Took xlock for pid=260626, ppid=260595, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; 
> TransitRegionStateProcedure table=table, 
> region=d0214809147e43dc6870005742d5d204, ASSIGN
> 2019-02-08 03:21:31,758 INFO  [PEWorker-7] 
> assignment.TransitRegionStateProcedure: Starting pid=260626, ppid=260595, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
> TransitRegionStateProcedure table=table, 
> region=d0214809147e43dc6870005742d5d204, ASSIGN; rit=OPEN, 
> location=server1,17020,1549567999303; forceNewPlan=false, retain=true
> 2019-02-08 03:21:31,984 INFO  [PEWorker-13] assignment.RegionStateStore: 
> pid=260626 updating hbase:meta row=d0214809147e43dc6870005742d5d204, 
> regionState=OPENING, regionLocation=server1,17020,1549623714617
> 2019-02-08 03:22:32,552 WARN  [RSProcedureDispatcher-pool4-t3451] 
> assignment.RegionRemoteProcedureBase: The remote operation pid=260637, 
> ppid=260626, state=RUNNABLE, hasLock=false; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region ... 
> to server server1,17020,1549623714617 failed
> java.io.IOException: Call to server1/...:17020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=27191, 
> waitTime=60145, rpcTimeout=6^M
> at 
> org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:185)^M
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:391)^M
> ...
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException

[jira] [Commented] (HBASE-21873) region can be assigned to 2 servers due to a timed-out call or an unknown exception

2019-09-09 Thread yuhuiyang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926287#comment-16926287
 ] 

yuhuiyang commented on HBASE-21873:
---

I had a similar problem with version 2.1.1. In my case an AssignProcedure 
assigns a region A as a sub-procedure of a ServerCrashProcedure. The region open 
fails on regionserver rs1 with ServerBortingException, the open is retried on 
regionserver rs2, and it finally succeeds there, so the AssignProcedure also 
succeeds. I think the AssignProcedure should have been finished at that point. 
However, the AssignProcedure later receives rs1's RegionServerAbortedException 
(five minutes later, due to a network problem), which takes the region offline 
again.

> region can be assigned to 2 servers due to a timed-out call or an unknown 
> exception
> ---
>
> Key: HBASE-21873
> URL: https://issues.apache.org/jira/browse/HBASE-21873
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Blocker
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21862-forUT.patch, HBASE-21862-v1.patch, 
> HBASE-21862-v2.patch, HBASE-21862.patch
>
>
> It's a classic bug, sort of... the call times out to open the region, but RS 
> actually processes it alright. It could also happen if the response didn't 
> make it back due to a network issue.
> As a result region is opened on two servers.
> There are some mitigations possible to narrow down the race window.
> 1) Don't process expired open calls, fail them. Won't help for network issues.
> 2) Don't ignore invalid RS state, kill it (YouAreDead exception) - but that 
> will require fixing other network races where master kills RS, which would 
> require adding state versioning to the protocol.
> The fundamental fix though would require either
> 1) an unknown failure from open to ascertain the state of the region from the 
> server. Again, this would probably require protocol changes to make sure we 
> ascertain the region is not opened, and also that the 
> already-failed-on-master open is NOT going to be processed if it's some queue 
> or even in transit on the network (via a nonce-like mechanism)?
> 2) some form of a distributed lock per region, e.g. in ZK
> 3) some form of 2PC? but the participant list cannot be determined in a 
> manner that's both scalable and guaranteed correct. Theoretically it could be 
> all RSes.
> {noformat}
> 2019-02-08 03:21:31,715 INFO  [PEWorker-7] 
> procedure.MasterProcedureScheduler: Took xlock for pid=260626, ppid=260595, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=false; 
> TransitRegionStateProcedure table=table, 
> region=d0214809147e43dc6870005742d5d204, ASSIGN
> 2019-02-08 03:21:31,758 INFO  [PEWorker-7] 
> assignment.TransitRegionStateProcedure: Starting pid=260626, ppid=260595, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
> TransitRegionStateProcedure table=table, 
> region=d0214809147e43dc6870005742d5d204, ASSIGN; rit=OPEN, 
> location=server1,17020,1549567999303; forceNewPlan=false, retain=true
> 2019-02-08 03:21:31,984 INFO  [PEWorker-13] assignment.RegionStateStore: 
> pid=260626 updating hbase:meta row=d0214809147e43dc6870005742d5d204, 
> regionState=OPENING, regionLocation=server1,17020,1549623714617
> 2019-02-08 03:22:32,552 WARN  [RSProcedureDispatcher-pool4-t3451] 
> assignment.RegionRemoteProcedureBase: The remote operation pid=260637, 
> ppid=260626, state=RUNNABLE, hasLock=false; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region ... 
> to server server1,17020,1549623714617 failed
> java.io.IOException: Call to server1/...:17020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=27191, 
> waitTime=60145, rpcTimeout=6^M
> at 
> org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:185)^M
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:391)^M
> ...
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=27191, 
> waitTime=60145, rpcTimeout=6^M
> at 
> org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200)^M
> ... 4 more^M
> {noformat}
> RS:
> {noformat}
> hbase-regionserver.log:2019-02-08 03:22:41,131 INFO  
> [RS_OPEN_REGION-regionserver/server1:17020-2] handler.AssignRegionHandler: 
> Open ...d0214809147e43dc6870005742d5d204.
> ...
> hbase-regionserver.log:2019-02-08 03:25:44,751 INFO  
> [RS_OPEN_REGION-regionserver/server1:17020-2] handler.AssignRegionHandler: 
> Opened ...d0214809147e43dc6870005742d5d204.
> {noformat}
> Retry:
> {noformat}
> 2019-

[jira] [Work started] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-22988 started by Andrew Purtell.
--
> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22988:
---
Attachment: HBASE-22988-branch-1.patch

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926274#comment-16926274
 ] 

Andrew Purtell commented on HBASE-22988:


Updated patch. Just a nit fix in {{bin/hbase}}

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22988:
---
Attachment: (was: HBASE-22988-branch-1.patch)

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926250#comment-16926250
 ] 

Andrew Purtell edited comment on HBASE-22988 at 9/10/19 2:21 AM:
-

Dropping a WIP patch.

This can't be committed or go through precommit until the Yetus issue is 
addressed. Please don't set to Patch Available status.

Manual tests check out. Every mode seems to work as expected. All unit tests 
pass locally. On that note, the mockito version used by branch-1 does not 
support matchers implemented as Java 8 lambdas, so the affected lines of some 
unit tests are commented out. Bumping mockito is risky given the chance of 
collateral damage; probably should rewrite the affected tests. 
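
For illustration, a hedged sketch of what such a rewrite could look like against 
the older Mockito, using an anonymous ArgumentMatcher instead of a lambda; the 
Sink and Record interfaces below are placeholders for the example, not hbtop 
classes.

{code}
// Illustrative only: the lambda-free matcher style that compiles against the
// older Mockito on branch-1. Sink and Record are placeholder types.
import static org.mockito.Matchers.argThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.mockito.ArgumentMatcher;

public class LambdaFreeMatcherExample {
  interface Record { int getRegionCount(); }
  interface Sink { void accept(Record record); }

  @Test
  public void verifiesWithoutLambda() {
    Sink sink = mock(Sink.class);
    Record record = mock(Record.class);

    sink.accept(record);

    // Java 8 style would be: verify(sink).accept(argThat(r -> r != null));
    verify(sink).accept(argThat(new ArgumentMatcher<Record>() {
      @Override
      public boolean matches(Object argument) {
        return argument instanceof Record; // any non-null Record passes
      }
    }));
  }
}
{code}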


was (Author: apurtell):
Dropping a WIP patch.

This can't be committed or go through precommit until the Yetus issue is 
addressed. Please don't set to Patch Available status.

Manual tests check out. Every mode seems to work as expected. All unit tests 
pass locally. On that note, one remaining thing to do: some parts of some 
unit tests are currently commented out because the mockito and/or hamcrest 
version used by branch-1 does not support matchers implemented as Java 8 
lambdas. Bumping mockito is risky given the chance of collateral damage; 
probably should rewrite the affected tests. 

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926265#comment-16926265
 ] 

HBase QA commented on HBASE-22804:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 37m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} branch-1 passed with JDK v1.7.0_232 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
58s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} branch-1 passed with JDK v1.7.0_232 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_232 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
20s{color} | {color:red} hbase-server: The patch generated 1 new + 67 unchanged 
- 2 fixed = 68 total (was 69) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
5m  3s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 
2.9.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_232 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}151m 55s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/885/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979900/HBASE-22804.branch-1.008.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc

[jira] [Commented] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926264#comment-16926264
 ] 

Andrew Purtell commented on HBASE-22988:


A significant difference from trunk is that there is no filtered reads metric 
available in ClusterStatus, so I dropped that field; this affected some layout 
tests, and those were updated too. 

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22988:
---
Fix Version/s: 1.5.0

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22988:
---
Attachment: HBASE-22988-branch-1.patch

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926261#comment-16926261
 ] 

Duo Zhang commented on HBASE-22969:
---

Hey [~udaikashyap], please generate the patch with LF instead of CRLF line 
endings. The default 'git apply' command cannot deal with different line 
endings, and we all use LF line endings in our code base.

Thanks.

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d, and for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That can mean a lot of 
> unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}
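
For readers following along: a minimal sketch of what such a component 
comparator could look like, assuming it extends ByteArrayComparable as the 
description suggests. This is illustrative only; the patch attached to this 
issue is the authoritative implementation.

{code}
// Illustrative sketch only, not necessarily the attached patch. It compares the
// fixed-width "component" that starts at the given offset within the row/value.
import org.apache.hadoop.hbase.filter.ByteArrayComparable;
import org.apache.hadoop.hbase.util.Bytes;

public class ComponentComparatorSketch extends ByteArrayComparable {
  private final int offset; // offset of the component within the compared byte[]

  public ComponentComparatorSketch(byte[] value, int offset) {
    super(value); // 'value' holds the expected component bytes
    this.offset = offset;
  }

  @Override
  public int compareTo(byte[] actual, int actualOffset, int actualLength) {
    byte[] expected = getValue();
    // Real code should bounds-check actualLength against offset + expected.length.
    return Bytes.compareTo(expected, 0, expected.length,
        actual, actualOffset + this.offset, expected.length);
  }

  @Override
  public byte[] toByteArray() {
    // The real comparator would serialize via protobuf; returning the raw value
    // keeps this sketch self-contained.
    return getValue();
  }
}
{code}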



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926258#comment-16926258
 ] 

HBase QA commented on HBASE-22969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HBASE-22969 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22969 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979909/HBASE-22969.0004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/887/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |


This message was automatically generated.



> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d, and for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That can mean a lot of 
> unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a pow

[jira] [Updated] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22988:
---
Attachment: (was: HBASE-22988-branch-1.patch)

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Attachment: HBASE-22969.0004.patch
Status: Patch Available  (was: Open)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, HBASE-22969.0004.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d, and for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That can mean a lot of 
> unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Status: Open  (was: Patch Available)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d, and for simplicity assume that 
> a, b, c, and d are all 4-byte integers.
> Now, if you want to execute a query which is semantically the same as the 
> following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have is to do client-side filtering. That can mean a lot of 
> unwanted data going through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key to pass the 'Filter' 
> subsystem of the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KV based on partial comparison of 'values' :
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList 
> (MUST_PASS_ALL,rowFilter,partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926250#comment-16926250
 ] 

Andrew Purtell commented on HBASE-22988:


Dropping a WIP patch.

This can't be committed or go through precommit until the Yetus issue is 
addressed. Please don't set to Patch Available status.

Manual tests check out. Every mode seems to work as expected. All unit tests 
pass locally. On that note, one remaining thing to do: some parts of some 
unit tests are currently commented out because the mockito and/or hamcrest 
version used by branch-1 does not support matchers implemented as Java 8 
lambdas. Bumping mockito is risky given the chance of collateral damage; 
probably should rewrite the affected tests. 

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22988) Backport HBASE-11062 "hbtop" to branch-1

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22988:
---
Attachment: HBASE-22988-branch-1.patch

> Backport HBASE-11062 "hbtop" to branch-1
> 
>
> Key: HBASE-22988
> URL: https://issues.apache.org/jira/browse/HBASE-22988
> Project: HBase
>  Issue Type: Sub-task
>  Components: backport, hbtop
>Reporter: Toshihiro Suzuki
>Assignee: Andrew Purtell
>Priority: Major
> Attachments: HBASE-22988-branch-1.patch
>
>
> Backport parent issue to branch-1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22013) SpaceQuotas - getNumRegions() returning wrong number of regions due to region replicas

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926238#comment-16926238
 ] 

Hudson commented on HBASE-22013:


Results for branch master
[build #1419 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1419/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1419//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1419//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1419//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 3. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/master/1419//artifact/output-integration/hadoop-3.log].
 (note that this means we didn't check the Hadoop 3 shaded client)


> SpaceQuotas - getNumRegions() returning wrong number of regions due to region 
> replicas
> --
>
> Key: HBASE-22013
> URL: https://issues.apache.org/jira/browse/HBASE-22013
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Shardul Singh
>Priority: Major
>  Labels: Quota, Space
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
> Attachments: HBASE-22013.branch-2.1.001.patch, 
> HBASE-22013.master.001.patch, HBASE-22013.master.002.patch, 
> HBASE-22013.master.003.patch, hbase-22013.branch-2.001.patch, 
> hbase-22013.branch-2.2.001.patch
>
>
> Space Quota Issue: If a table is created with region replicas then quota 
> calculation does not happen.
> Steps:
> 1: Create a table with 100 regions with region replica 3
> 2: Observe that the 'hbase:quota' table has no usage entry for this table, so 
> the UI shows only the policy limit and policy, not usage and state.
> Reason: 
> It looks like the file system utilization chore sends data for the 100 regions 
> but not for the region replicas, while the quota observer chore counts the 
> total regions (actual regions plus replica regions). So the ratio of reported 
> regions is less than the configured percentRegionsReportedThreshold, and quota 
> calculation does not happen.
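
Roughly, the guard in question looks like the following sketch; the names are 
illustrative and not the actual QuotaObserverChore code. The direction of the 
fix is to count only default replicas in the denominator so replicas no longer 
dilute the ratio.

{code}
// Illustrative sketch of the reported-regions guard described above; method and
// field names are made up, not the actual QuotaObserverChore code.
import java.util.List;
import org.apache.hadoop.hbase.client.RegionInfo;

public class ReportedRegionsCheck {
  // Assumed to come from configuration in the real chore.
  private final double percentRegionsReportedThreshold = 0.95;

  /** Quota usage is only computed when enough regions have reported their size. */
  boolean enoughRegionsReported(List<RegionInfo> allRegions, long reportedRegions) {
    // Only default replicas report file-system usage, so counting every replica
    // in the denominator (the bug described above) keeps the ratio below the
    // threshold and quota calculation never runs.
    long defaultReplicas = allRegions.stream()
        .filter(r -> r.getReplicaId() == RegionInfo.DEFAULT_REPLICA_ID)
        .count();
    if (defaultReplicas == 0) {
      return false;
    }
    return (double) reportedRegions / defaultReplicas >= percentRegionsReportedThreshold;
  }
}
{code}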



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22992) Blog post for hbtop on hbase.apache.org

2019-09-09 Thread Toshihiro Suzuki (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926230#comment-16926230
 ] 

Toshihiro Suzuki commented on HBASE-22992:
--

Thank you very much! [~stack] Will post it to the dev list.

> Blog post for hbtop on hbase.apache.org
> ---
>
> Key: HBASE-22992
> URL: https://issues.apache.org/jira/browse/HBASE-22992
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22013) SpaceQuotas - getNumRegions() returning wrong number of regions due to region replicas

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926229#comment-16926229
 ] 

Hudson commented on HBASE-22013:


Results for branch branch-2.2
[build #590 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/590/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/590//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/590//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/590//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SpaceQuotas - getNumRegions() returning wrong number of regions due to region 
> replicas
> --
>
> Key: HBASE-22013
> URL: https://issues.apache.org/jira/browse/HBASE-22013
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Shardul Singh
>Priority: Major
>  Labels: Quota, Space
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
> Attachments: HBASE-22013.branch-2.1.001.patch, 
> HBASE-22013.master.001.patch, HBASE-22013.master.002.patch, 
> HBASE-22013.master.003.patch, hbase-22013.branch-2.001.patch, 
> hbase-22013.branch-2.2.001.patch
>
>
> Space Quota Issue: If a table is created with region replicas then quota 
> calculation does not happen.
> Steps:
> 1: Create a table with 100 regions with region replica 3
> 2: Observe that the 'hbase:quota' table has no usage entry for this table, so 
> the UI shows only the policy limit and policy, not usage and state.
> Reason: 
> It looks like the file system utilization chore sends data for the 100 regions 
> but not for the region replicas, while the quota observer chore counts the 
> total regions (actual regions plus replica regions). So the ratio of reported 
> regions is less than the configured percentRegionsReportedThreshold, and quota 
> calculation does not happen.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-23002) [HBCK2/hbase-operator-tools] Create an assembly that builds an hbase-operator-tools tgz

2019-09-09 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926224#comment-16926224
 ] 

stack commented on HBASE-23002:
---

Let me make a branch for messing. Will help making this and the related jira, 
HBASE-23003.

> [HBCK2/hbase-operator-tools] Create an assembly that builds an 
> hbase-operator-tools tgz
> ---
>
> Key: HBASE-23002
> URL: https://issues.apache.org/jira/browse/HBASE-23002
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Major
>
> Was going to build a convenience binary tgz as part of the first release of 
> hbase-operator-tools. Not sure how just yet; best would be if it were a 
> fatjar with all dependencies but that'd be kinda insane at same time since 
> the tgz would be massive.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-23002) [HBCK2/hbase-operator-tools] Create an assembly that builds an hbase-operator-tools tgz

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-23002:
--
Description: Was going to build a convenience binary tgz as part of the 
first release of hbase-operator-tools. Not sure how just yet; best would be if 
it were a fatjar with all dependencies but that'd be kinda insane at same time 
since the tgz would be massive.
Environment: (was: Was going to build a convenience binary tgz as part 
of the first release of hbase-operator-tools. Not sure how just yet; best would 
be if it were a fatjar with all dependencies but that'd be kinda insane at same 
time since the tgz would be massive.)

> [HBCK2/hbase-operator-tools] Create an assembly that builds an 
> hbase-operator-tools tgz
> ---
>
> Key: HBASE-23002
> URL: https://issues.apache.org/jira/browse/HBASE-23002
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Major
>
> Was going to build a convenience binary tgz as part of the first release of 
> hbase-operator-tools. Not sure how just yet; best would be if it were a 
> fatjar with all dependencies but that'd be kinda insane at same time since 
> the tgz would be massive.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22859) [HBCK2] Fix the orphan regions on filesystem

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926222#comment-16926222
 ] 

Hudson commented on HBASE-22859:


Results for branch branch-2
[build #2244 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2244/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2244//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2244//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2244//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [HBCK2] Fix the orphan regions on filesystem
> 
>
> Key: HBASE-22859
> URL: https://issues.apache.org/jira/browse/HBASE-22859
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck2
>Reporter: Guanghao Zhang
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: HBASE-22859.master.001.patch, 
> HBASE-22859.master.005.patch
>
>
> Plan to add this feature to HBCK2 tool firstly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22956) [HBCK2/hbase-operator-tools] Make first release, 1.0.0

2019-09-09 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926221#comment-16926221
 ] 

stack commented on HBASE-22956:
---

We need a 1.0 branch [~psomogyi] I think. Also need an RC-making script and an 
assembly to build the tgz. I'm working on the latter (HBASE-23002 for the 
assembly and HBASE-23003 for the release-making scripts).



> [HBCK2/hbase-operator-tools] Make first release, 1.0.0
> --
>
> Key: HBASE-22956
> URL: https://issues.apache.org/jira/browse/HBASE-22956
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbck2
>Reporter: stack
>Priority: Major
> Fix For: hbase-operator-tools-1.0.0
>
>
> Make our first release of hbck2/hbase-operator-tools.
> First release should have the coverage hbck1 had at least. When the parent 
> for this issue is done, we'll be at hbck1+. Let us release then (week or 
> two?).
> A release will help operators who have been struggling having to build hbck2 
> against different hbase versions. The release should be a "fat 
> jar"/completely contained with all dependency satisfied so operator can just 
> fire up hbck2 w/o having to build against an hbase or provide some magic mix 
> of jars to satisfy hbck2 tool needs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (HBASE-23003) [HBCK2/hbase-operator-tools] Release-making scripts

2019-09-09 Thread stack (Jira)
stack created HBASE-23003:
-

 Summary: [HBCK2/hbase-operator-tools] Release-making scripts
 Key: HBASE-23003
 URL: https://issues.apache.org/jira/browse/HBASE-23003
 Project: HBase
  Issue Type: Task
Reporter: stack


Make some scripts for creating hbase-operator-tools releases so it's easy to do 
and not subject to the vagaries of environment or the RM's attention-to-detail (or not).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (HBASE-23002) [HBCK2/hbase-operator-tools] Create an assembly that builds an hbase-operator-tools tgz

2019-09-09 Thread stack (Jira)
stack created HBASE-23002:
-

 Summary: [HBCK2/hbase-operator-tools] Create an assembly that 
builds an hbase-operator-tools tgz
 Key: HBASE-23002
 URL: https://issues.apache.org/jira/browse/HBASE-23002
 Project: HBase
  Issue Type: Task
 Environment: Was going to build a convenience binary tgz as part of 
the first release of hbase-operator-tools. Not sure how just yet; best would be 
if it were a fatjar with all dependencies but that'd be kinda insane at same 
time since the tgz would be massive.
Reporter: stack






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22013) SpaceQuotas - getNumRegions() returning wrong number of regions due to region replicas

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926200#comment-16926200
 ] 

Hudson commented on HBASE-22013:


Results for branch branch-2.1
[build #1579 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1579/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1579//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1579//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1579//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SpaceQuotas - getNumRegions() returning wrong number of regions due to region 
> replicas
> --
>
> Key: HBASE-22013
> URL: https://issues.apache.org/jira/browse/HBASE-22013
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Shardul Singh
>Priority: Major
>  Labels: Quota, Space
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
> Attachments: HBASE-22013.branch-2.1.001.patch, 
> HBASE-22013.master.001.patch, HBASE-22013.master.002.patch, 
> HBASE-22013.master.003.patch, hbase-22013.branch-2.001.patch, 
> hbase-22013.branch-2.2.001.patch
>
>
> Space Quota Issue: If a table is created with region replicas then quota 
> calculation does not happen.
> Steps:
> 1: Create a table with 100 regions with region replica 3
> 2: Observe that the 'hbase:quota' table has no usage entry for this table, so 
> the UI shows only the policy limit and policy, not usage and state.
> Reason: 
> It looks like the file system utilization chore sends data for the 100 regions 
> but not for the region replicas, while the quota observer chore counts the 
> total regions (actual regions plus replica regions). So the ratio of reported 
> regions is less than the configured percentRegionsReportedThreshold, and quota 
> calculation does not happen.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22859) [HBCK2] Fix the orphan regions on filesystem

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926179#comment-16926179
 ] 

Hudson commented on HBASE-22859:


Results for branch branch-2.2
[build #591 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/591/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/591//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/591//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/591//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [HBCK2] Fix the orphan regions on filesystem
> 
>
> Key: HBASE-22859
> URL: https://issues.apache.org/jira/browse/HBASE-22859
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck2
>Reporter: Guanghao Zhang
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: HBASE-22859.master.001.patch, 
> HBASE-22859.master.005.patch
>
>
> Plan to add this feature to HBCK2 tool firstly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926178#comment-16926178
 ] 

HBase QA commented on HBASE-22969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-22969 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22969 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979903/HBASE-22969.0003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/886/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter =

[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Attachment: HBASE-22969.0003.patch
Status: Patch Available  (was: Open)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.0003.patch, 
> HBASE-22969.HBASE-22969.0001.patch, HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList(
> FilterList.Operator.MUST_PASS_ALL, rowFilter, partialValueFilter);
> {code}
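
For context, a minimal sketch of how such a positional comparator could perform 
its comparison is shown below; the class shape and sign convention are 
illustrative assumptions, not the attached patch:
{code}
// Illustrative sketch only (assumed shape, not the attached patch): compare a
// stored component value against the bytes of a row key, starting at the
// configured offset, using unsigned lexicographic byte order.
public class BinaryComponentComparatorSketch {
  private final byte[] value;  // component value supplied by the caller
  private final int offset;    // position of the component within the row key

  public BinaryComponentComparatorSketch(byte[] value, int offset) {
    this.value = value;
    this.offset = offset;
  }

  /** Negative, zero, or positive, depending on how the stored value compares. */
  public int compareTo(byte[] row) {
    for (int i = 0; i < value.length; i++) {
      int diff = (value[i] & 0xff) - (row[offset + i] & 0xff);
      if (diff != 0) {
        return diff;
      }
    }
    return 0;
  }
}
{code}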



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22859) [HBCK2] Fix the orphan regions on filesystem

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926176#comment-16926176
 ] 

Hudson commented on HBASE-22859:


Results for branch branch-2.1
[build #1580 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1580/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1580//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1580//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1580//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [HBCK2] Fix the orphan regions on filesystem
> 
>
> Key: HBASE-22859
> URL: https://issues.apache.org/jira/browse/HBASE-22859
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck2
>Reporter: Guanghao Zhang
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: HBASE-22859.master.001.patch, 
> HBASE-22859.master.005.patch
>
>
> Plan to add this feature to HBCK2 tool firstly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Status: Open  (was: Patch Available)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList(
> FilterList.Operator.MUST_PASS_ALL, rowFilter, partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-22804:
-
Attachment: HBASE-22804.branch-1.008.patch

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-1.002.patch, HBASE-22804.branch-1.003.patch, 
> HBASE-22804.branch-1.004.patch, HBASE-22804.branch-1.005.patch, 
> HBASE-22804.branch-1.006.patch, HBASE-22804.branch-1.007.patch, 
> HBASE-22804.branch-1.008.patch, HBASE-22804.branch-2.001.patch, 
> HBASE-22804.branch-2.002.patch, HBASE-22804.branch-2.003.patch, 
> HBASE-22804.branch-2.004.patch, HBASE-22804.branch-2.005.patch, 
> HBASE-22804.branch-2.006.patch, HBASE-22804.master.001.patch, 
> HBASE-22804.master.002.patch, HBASE-22804.master.003.patch, 
> HBASE-22804.master.004.patch, HBASE-22804.master.005.patch, 
> HBASE-22804.master.006.patch
>
>
> At present the HBase Canary tool only prints the successes as part of its logs. 
> Providing an API to get the list of successes, as well as the total number of 
> expected regions, will make it easier to compute a more accurate availability 
> estimate.
>   
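
One possible shape for such an API is sketched below; the interface and method 
names are assumptions made for illustration, not the attached patches:
{code}
// Hypothetical sketch of the proposed accessor (assumed names, not the attached patches).
import java.util.List;

public interface CanaryAvailabilityReport {
  /** Encoded names of the regions the Canary probed successfully. */
  List<String> getSuccessfulRegions();

  /** Total number of regions the Canary expected to probe. */
  long getExpectedRegions();

  /** Availability estimate: successes / expected, guarding against division by zero. */
  default double availability() {
    long expected = getExpectedRegions();
    return expected == 0 ? 1.0 : (double) getSuccessfulRegions().size() / expected;
  }
}
{code}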



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926136#comment-16926136
 ] 

HBase QA commented on HBASE-22969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HBASE-22969 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22969 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979895/HBASE-22969.HBASE-22969.0001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/884/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilt

[jira] [Assigned] (HBASE-22902) At regionserver start there's a request to roll the WAL

2019-09-09 Thread Sandeep Pal (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Pal reassigned HBASE-22902:
---

Assignee: Sandeep Pal

> At regionserver start there's a request to roll the WAL
> ---
>
> Key: HBASE-22902
> URL: https://issues.apache.org/jira/browse/HBASE-22902
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: David Manning
>Assignee: Sandeep Pal
>Priority: Minor
>
> See HBASE-22301 for logic that requests to roll the WAL if regionserver 
> encounters a slow write pipeline. In the logs, during regionserver start, I 
> see that the WAL is requested to roll once. It's strange that we roll the WAL 
> because it wasn't a slow sync. It appears that when this code executes, we have 
> not yet initialized the {{rollOnSyncNs}} variable used to determine whether a 
> sync was slow. The current pipeline also shows up empty in the logs.
> Disclaimer: I'm experiencing this after backporting this to 1.3.x and 
> building it there - I haven't attempted in 1.5.x, though I'd expect similar 
> results.
> Regionserver logs follow (notice *threshold=0 ms, current pipeline: []*):
> {noformat}
> Tue Aug 20 23:29:50 GMT 2019 Starting regionserver
> ...
> 2019-08-20 23:29:57,824 INFO  wal.FSHLog - WAL configuration: blocksize=256 
> MB, rollsize=243.20 MB, prefix=[truncated]%2C1566343792434, suffix=, 
> logDir=hdfs://[truncated]/hbase/WALs/[truncated],1566343792434, 
> archiveDir=hdfs://[truncated]/hbase/oldWALs
> 2019-08-20 23:29:58,104 INFO  wal.FSHLog - Slow sync cost: 186 ms, current 
> pipeline: []
> 2019-08-20 23:29:58,104 WARN  wal.FSHLog - Requesting log roll because we 
> exceeded slow sync threshold; time=186 ms, threshold=0 ms, current pipeline: 
> []
> 2019-08-20 23:29:58,107 DEBUG regionserver.ReplicationSourceManager - Start 
> tracking logs for wal group [truncated]%2C1566343792434 for peer 1
> 2019-08-20 23:29:58,107 INFO  wal.FSHLog - New WAL 
> /hbase/WALs/[truncated],1566343792434/[truncated]%2C1566343792434.1566343797824
> 2019-08-20 23:29:58,109 DEBUG regionserver.ReplicationSource - Starting up 
> worker for wal group [truncated]%2C1566343792434{noformat}
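
Assuming the root cause is the uninitialized threshold, one possible guard is 
sketched below (illustrative only; the class and method names are assumptions, 
not the actual FSHLog code):
{code}
// Illustrative sketch (not the actual FSHLog code): skip the slow-sync check
// until the roll-on-sync threshold has been initialized, so the very first
// syncs at startup cannot trigger a roll request with threshold=0.
class SlowSyncRollGuard {
  private volatile long rollOnSyncNs = 0L;  // 0 means "not initialized yet"

  void setRollOnSyncNs(long thresholdNs) {
    this.rollOnSyncNs = thresholdNs;
  }

  /** Called after each sync with the measured duration and a roll callback. */
  void postSync(long syncDurationNs, Runnable requestLogRoll) {
    if (rollOnSyncNs <= 0) {
      return;                               // threshold unknown, skip the check
    }
    if (syncDurationNs > rollOnSyncNs) {
      requestLogRoll.run();                 // ask the WAL to roll
    }
  }
}
{code}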



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926132#comment-16926132
 ] 

HBase QA commented on HBASE-22804:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  0m 
14s{color} | {color:red} branch has 10 errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} branch-1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
18s{color} | {color:red} hbase-server: The patch generated 2 new + 66 unchanged 
- 3 fixed = 68 total (was 69) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  0m 
11s{color} | {color:red} patch has 10 errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
4m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 
2.9.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}114m 
12s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/882/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979884/HBASE-22804.branch-1.007.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f13bdf509692 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-1 / 1c0ee31 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.8.0_222 |
| shadedjars | 
https://builds.apache.org/job/PreCommit-HBASE-Build/882/artifact/patchprocess/branch-shadedjars.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/882/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| shadedjars | 
https://builds.apache.org/job/PreCommit-HBASE-Build/882/artifact/patchprocess/patch-shadedjars.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/882/testReport/ |
| Max. process+thread count | 4696 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbas

[jira] [Commented] (HBASE-22013) SpaceQuotas - getNumRegions() returning wrong number of regions due to region replicas

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926123#comment-16926123
 ] 

Hudson commented on HBASE-22013:


Results for branch branch-2
[build #2243 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2243/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2243//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2243//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2243//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> SpaceQuotas - getNumRegions() returning wrong number of regions due to region 
> replicas
> --
>
> Key: HBASE-22013
> URL: https://issues.apache.org/jira/browse/HBASE-22013
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Shardul Singh
>Priority: Major
>  Labels: Quota, Space
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
> Attachments: HBASE-22013.branch-2.1.001.patch, 
> HBASE-22013.master.001.patch, HBASE-22013.master.002.patch, 
> HBASE-22013.master.003.patch, hbase-22013.branch-2.001.patch, 
> hbase-22013.branch-2.2.001.patch
>
>
> Space Quota issue: if a table is created with region replicas, quota 
> calculation does not happen.
> Steps:
> 1: Create a table with 100 regions and region replication 3.
> 2: Observe that the 'hbase:quota' table has no usage entry for this table, 
> so the UI shows only the Limit and Policy but not the Usage and State.
> Reason:
>  It looks like the file system utilization chore reports the sizes of the 100 
> primary regions but not the sizes of the region replicas.
>  The quota observer chore, however, counts the total regions (actual regions + 
> replica regions).
>  So the ratio of reported regions is less than the configured 
> percentRegionsReportedThreshold, and quota calculation never happens.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Attachment: HBASE-22969.HBASE-22969.0001.patch
Status: Patch Available  (was: Open)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.HBASE-22969.0001.patch, 
> HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList(
> FilterList.Operator.MUST_PASS_ALL, rowFilter, partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Status: Open  (was: Patch Available)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList(
> FilterList.Operator.MUST_PASS_ALL, rowFilter, partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926091#comment-16926091
 ] 

HBase QA commented on HBASE-22969:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-22969 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22969 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979889/HBASE-22969.master.0001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/883/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(GREATER, new 
> BinaryComponentComparator

[jira] [Commented] (HBASE-22997) Move to SLF4J

2019-09-09 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926085#comment-16926085
 ] 

Peter Somogyi commented on HBASE-22997:
---

Yes, it is Log4j 2. I kept it, added the SLF4J implementation, and modified the 
enforcer plugin not to fail on Log4j 2's module-info in the linked pull request.
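
For reference, a minimal sketch of the SLF4J pattern the other HBase projects 
follow (the class below is a made-up example, not part of the pull request; it 
assumes a Log4j 2 binding for SLF4J is on the classpath):
{code}
// Minimal example of the SLF4J usage pattern (illustrative class, not from the patch).
// With a Log4j 2 SLF4J binding on the classpath, these calls reach the Log4j 2 backend.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jExample {
  private static final Logger LOG = LoggerFactory.getLogger(Slf4jExample.class);

  public static void main(String[] args) {
    LOG.info("starting");
    LOG.debug("debug output, shown when DEBUG level is enabled");
  }
}
{code}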

> Move to SLF4J
> -
>
> Key: HBASE-22997
> URL: https://issues.apache.org/jira/browse/HBASE-22997
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase-operator-tools
>Affects Versions: operator-1.0.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: operator-1.0.0
>
>
> Currently hbase-operator-tools uses org.apache.logging.log4j while the rest 
> of our projects have SLF4J.
> When building the project with release profile the enforce plugin fails on  
> org.apache.logging.log4j:log4j-api:jar:2.11.1 dependency
> {noformat}
> [INFO] --- maven-enforcer-plugin:1.4:enforce 
> (min-maven-min-java-banned-xerces) @ hbase-hbck2 ---
> [INFO] Restricted to JDK 1.8 yet 
> org.apache.logging.log4j:log4j-api:jar:2.11.1:compile contains 
> META-INF/versions/9/module-info.class targeted to JDK 1.9
> [WARNING] Rule 3: org.apache.maven.plugins.enforcer.EnforceBytecodeVersion 
> failed with message:
> HBase has unsupported dependencies.
>   HBase requires that all dependencies be compiled with version 1.8 or earlier
>   of the JDK to properly build from source.  You appear to be using a newer 
> dependency. You can use
>   either "mvn -version" or "mvn enforcer:display-info" to verify what version 
> is active.
>   Non-release builds can temporarily build with a newer JDK version by 
> setting the
>   'compileSource' property (eg. mvn -DcompileSource=1.8 clean package).
> Found Banned Dependency: org.apache.logging.log4j:log4j-api:jar:2.11.1
> Use 'mvn dependency:tree' to locate the source of the banned dependencies. 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22997) Move to SLF4J

2019-09-09 Thread stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926079#comment-16926079
 ] 

stack commented on HBASE-22997:
---

IIRC, I had log4j 2 in there from when I started out trying it.

> Move to SLF4J
> -
>
> Key: HBASE-22997
> URL: https://issues.apache.org/jira/browse/HBASE-22997
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase-operator-tools
>Affects Versions: operator-1.0.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: operator-1.0.0
>
>
> Currently hbase-operator-tools uses org.apache.logging.log4j while the rest 
> of our projects have SLF4J.
> When building the project with release profile the enforce plugin fails on  
> org.apache.logging.log4j:log4j-api:jar:2.11.1 dependency
> {noformat}
> [INFO] --- maven-enforcer-plugin:1.4:enforce 
> (min-maven-min-java-banned-xerces) @ hbase-hbck2 ---
> [INFO] Restricted to JDK 1.8 yet 
> org.apache.logging.log4j:log4j-api:jar:2.11.1:compile contains 
> META-INF/versions/9/module-info.class targeted to JDK 1.9
> [WARNING] Rule 3: org.apache.maven.plugins.enforcer.EnforceBytecodeVersion 
> failed with message:
> HBase has unsupported dependencies.
>   HBase requires that all dependencies be compiled with version 1.8 or earlier
>   of the JDK to properly build from source.  You appear to be using a newer 
> dependency. You can use
>   either "mvn -version" or "mvn enforcer:display-info" to verify what version 
> is active.
>   Non-release builds can temporarily build with a newer JDK version by 
> setting the
>   'compileSource' property (eg. mvn -DcompileSource=1.8 clean package).
> Found Banned Dependency: org.apache.logging.log4j:log4j-api:jar:2.11.1
> Use 'mvn dependency:tree' to locate the source of the banned dependencies. 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22859) [HBCK2] Fix the orphan regions on filesystem

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926074#comment-16926074
 ] 

Hudson commented on HBASE-22859:


Results for branch master
[build #1418 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1418/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1418//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1418//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1418//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [HBCK2] Fix the orphan regions on filesystem
> 
>
> Key: HBASE-22859
> URL: https://issues.apache.org/jira/browse/HBASE-22859
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation, hbck2
>Reporter: Guanghao Zhang
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
> Attachments: HBASE-22859.master.001.patch, 
> HBASE-22859.master.005.patch
>
>
> Plan to add this feature to HBCK2 tool firstly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22969) A new binary component comparator(BinaryComponentComparator) to perform comparison of arbitrary length and position

2019-09-09 Thread Udai Bhan Kashyap (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Udai Bhan Kashyap updated HBASE-22969:
--
Attachment: HBASE-22969.master.0001.patch
Status: Patch Available  (was: Open)

> A new binary component comparator(BinaryComponentComparator) to perform 
> comparison of arbitrary length and position
> ---
>
> Key: HBASE-22969
> URL: https://issues.apache.org/jira/browse/HBASE-22969
> Project: HBase
>  Issue Type: Improvement
>  Components: Filters
>Reporter: Udai Bhan Kashyap
>Priority: Minor
> Attachments: HBASE-22969.master.0001.patch
>
>
> Let's say you have a composite key a+b+c+d and, for simplicity, assume that 
> a, b, c, and d are all 4-byte integers.
> Now, suppose you want to execute a query that is semantically the same as 
> the following SQL:
> {{"SELECT * from table where a=1 and b > 10 and b < 20 and c > 90 and c < 100 
> and d=1"}}
> The only choice you have today is client-side filtering, which can push a lot 
> of unwanted data through various software components and the network.
> Solution:
> We can create a "component" comparator which takes the value of the 
> "component" and its relative position in the key, and passes them to the 
> 'Filter' subsystem on the server:
> {code}
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> int bOffset = 4;
> byte[] b10 = Bytes.toBytes(10); 
> Filter b10Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(b10,bOffset));
> filterList.addFilter(b10Filter);
> byte[] b20  = Bytes.toBytes(20);
> Filter b20Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(b20,bOffset));
> filterList.addFilter(b20Filter);
> int cOffset = 8;
> byte[] c90  = Bytes.toBytes(90);
> Filter c90Filter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(c90,cOffset));
> filterList.addFilter(c90Filter);
> byte[] c100  = Bytes.toBytes(100);
> Filter c100Filter = new RowFilter(CompareFilter.CompareOp.LESS,
> new BinaryComponentComparator(c100,cOffset));
> filterList.addFilter(c100Filter);
> int dOffset = 12;
> byte[] d1   = Bytes.toBytes(1);
> Filter dFilter  = new RowFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComponentComparator(d1,dOffset));
> filterList.addFilter(dFilter);
> //build start and end key for scan
> int aOffset = 0;
> byte[] startKey = new byte[16]; //key size with four ints
> Bytes.putInt(startKey,aOffset,1); //a=1
> Bytes.putInt(startKey,bOffset,11); //b=11, takes care of b > 10
> Bytes.putInt(startKey,cOffset,91); //c=91, 
> Bytes.putInt(startKey,dOffset,1); //d=1, 
> byte[] endKey = new byte[16];
> Bytes.putInt(endKey,aOffset,1); //a=1
> Bytes.putInt(endKey,bOffset,20); //b=20, takes care of b < 20
> Bytes.putInt(endKey,cOffset,100); //c=100, 
> Bytes.putInt(endKey,dOffset,1); //d=1, 
> //setup scan
> Scan scan = new Scan(startKey,endKey);
> scan.setFilter(filterList);
> //The scanner below now should give only desired rows.
> //No client side filtering is required. 
> ResultScanner scanner = table.getScanner(scan);
> {code}
> The comparator can be used with any filter which makes use of 
> ByteArrayComparable. Most notably it can be used with ValueFilter to filter 
> out KVs based on a partial comparison of 'values':
> {code}
> byte[] partialValue = Bytes.toBytes("partial_value");
> int partialValueOffset = 
> Filter partialValueFilter = new 
> ValueFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(partialValue,partialValueOffset));
> {code}
> Which in turn can be combined with RowFilter to create a powerful predicate:
> {code}
> RowFilter rowFilter = new RowFilter(CompareFilter.CompareOp.GREATER,
> new BinaryComponentComparator(Bytes.toBytes("a"),1));
> FilterList fl = new FilterList(
> FilterList.Operator.MUST_PASS_ALL, rowFilter, partialValueFilter);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22796) [HBCK2] Add fix of overlaps to fixMeta hbck Service

2019-09-09 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926042#comment-16926042
 ] 

HBase QA commented on HBASE-22796:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}167m 
50s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/880/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22796 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12979867/HBASE-22796.master.005.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0598e9acc2f9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / ac8fe1627a |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0

[jira] [Commented] (HBASE-22902) At regionserver start there's a request to roll the WAL

2019-09-09 Thread Sandeep Pal (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926001#comment-16926001
 ] 

Sandeep Pal commented on HBASE-22902:
-

[~apurtell] [~busbey] Can you please assign this to me?
Also, can you please add me to the contributors list so I can assign Jiras to 
myself?

> At regionserver start there's a request to roll the WAL
> ---
>
> Key: HBASE-22902
> URL: https://issues.apache.org/jira/browse/HBASE-22902
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: David Manning
>Priority: Minor
>
> See HBASE-22301 for logic that requests to roll the WAL if regionserver 
> encounters a slow write pipeline. In the logs, during regionserver start, I 
> see that the WAL is requested to roll once. It's strange that we roll the WAL 
> because it wasn't a slow sync. It appears when this code executes, we haven't 
> initialized the {{rollOnSyncNs}} variable to use for determining whether it's 
> a slow sync. Current pipeline also shows empty in the logs.
> Disclaimer: I'm experiencing this after backporting this to 1.3.x and 
> building it there - I haven't attempted in 1.5.x, though I'd expect similar 
> results.
> Regionserver logs follow (notice *threshold=0 ms, current pipeline: []*):
> {noformat}
> Tue Aug 20 23:29:50 GMT 2019 Starting regionserver
> ...
> 2019-08-20 23:29:57,824 INFO  wal.FSHLog - WAL configuration: blocksize=256 
> MB, rollsize=243.20 MB, prefix=[truncated]%2C1566343792434, suffix=, 
> logDir=hdfs://[truncated]/hbase/WALs/[truncated],1566343792434, 
> archiveDir=hdfs://[truncated]/hbase/oldWALs
> 2019-08-20 23:29:58,104 INFO  wal.FSHLog - Slow sync cost: 186 ms, current 
> pipeline: []
> 2019-08-20 23:29:58,104 WARN  wal.FSHLog - Requesting log roll because we 
> exceeded slow sync threshold; time=186 ms, threshold=0 ms, current pipeline: 
> []
> 2019-08-20 23:29:58,107 DEBUG regionserver.ReplicationSourceManager - Start 
> tracking logs for wal group [truncated]%2C1566343792434 for peer 1
> 2019-08-20 23:29:58,107 INFO  wal.FSHLog - New WAL 
> /hbase/WALs/[truncated],1566343792434/[truncated]%2C1566343792434.1566343797824
> 2019-08-20 23:29:58,109 DEBUG regionserver.ReplicationSource - Starting up 
> worker for wal group [truncated]%2C1566343792434{noformat}
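The log lines above show the slow-sync threshold still being zero when the first sync 
completes. A minimal sketch of the kind of guard that avoids it, assuming an 
uninitialized threshold is represented as zero; the class and member names here are 
illustrative only, not the actual FSHLog fields:

{code:java}
import java.util.concurrent.TimeUnit;

// Sketch only: do not treat a sync as "slow" (and do not request a WAL roll)
// while the threshold has not been initialized yet.
public final class SlowSyncRollCheck {
  private volatile long rollOnSyncNs = 0L; // 0 means "not configured yet"

  void initialize(long thresholdMs) {
    this.rollOnSyncNs = TimeUnit.MILLISECONDS.toNanos(thresholdMs);
  }

  boolean shouldRequestRoll(long syncCostNs) {
    // With rollOnSyncNs still 0, every sync would look slow (threshold=0 ms in
    // the log line above), so skip the check until initialization has happened.
    if (rollOnSyncNs <= 0) {
      return false;
    }
    return syncCostNs > rollOnSyncNs;
  }
}
{code}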



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22979) Call ChunkCreator.initialize in TestHRegionWithInMemoryFlush

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926000#comment-16926000
 ] 

Hudson commented on HBASE-22979:


Results for branch branch-2.2
[build #589 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/589/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/589//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/589//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/589//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/589//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Call ChunkCreator.initialize in TestHRegionWithInMemoryFlush
> 
>
> Key: HBASE-22979
> URL: https://issues.apache.org/jira/browse/HBASE-22979
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0, 2.2.1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> TestHRegionWithInMemoryFlush is failing 100% on branch-2.2+.
> Refactor of TestHRegion in HBASE-22896 did not update the overridden 
> initHRegion method in this test.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22985) Gracefully handle invalid ServiceLoader entries

2019-09-09 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925999#comment-16925999
 ] 

Josh Elser commented on HBASE-22985:


{quote}Just that I don't think this is the only place we use ServiceLoader
{quote}
Ah, yeah. That's fair.

I think unless someone comes in strongly in favor of us getting this fix (or 
one like it) in the codebase, I'll just close this as won't fix.

> Gracefully handle invalid ServiceLoader entries
> ---
>
> Key: HBASE-22985
> URL: https://issues.apache.org/jira/browse/HBASE-22985
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-22985.001.patch, HBASE-22985.002.patch
>
>
> Just saw this happen: A RegionServer failed to start because, on the 
> classpath, there was a {{META-INF/services}} entry in a JAR on the classpath 
> that was advertising an implementation of 
> {{org.apache.hadoop.hbase.metrics.MetricsRegistries}} but was an 
> implementation of a completely different class:
> {noformat}
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.metrics.MetricRegistries: Provider 
> org.apache.ratis.metrics.impl.MetricRegistriesImpl not a subtype
>   at java.util.ServiceLoader.fail(ServiceLoader.java:239)
>   at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.getDefinedImplemantations(MetricRegistriesLoader.java:92)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.load(MetricRegistriesLoader.java:50)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries$LazyHolder.(MetricRegistries.java:39)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries.global(MetricRegistries.java:47)
>   at 
> org.apache.hadoop.hbase.metrics.BaseSourceImpl.(BaseSourceImpl.java:122)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:46)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:38)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
>   at org.apache.hadoop.hbase.io.MetricsIO.(MetricsIO.java:35)
>   at org.apache.hadoop.hbase.io.hfile.HFile.(HFile.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
>   ... 10 more{noformat}
> Now, we could catch this and gracefully ignore it; however, this would mean 
> that we're catching an Error which is typically considered a smell.
> It's a pretty straightforward change, so I'm apt to think that it's OK. What 
> do other folks think?
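For reference, a minimal sketch of the kind of lenient loading being discussed, 
assuming we accept catching the error per provider entry; the class and method names 
are illustrative, not the ones in the attached patches:

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

public final class LenientServiceLoading {
  // Iterate the ServiceLoader manually so a single bad META-INF/services entry
  // (e.g. "Provider ... not a subtype") is skipped instead of failing startup.
  static <T> List<T> loadIgnoringBadEntries(Class<T> service) {
    List<T> found = new ArrayList<>();
    Iterator<T> it = ServiceLoader.load(service).iterator();
    while (true) {
      try {
        if (!it.hasNext()) {
          break;
        }
        found.add(it.next());
      } catch (ServiceConfigurationError e) {
        // The smell discussed above: catching an Error. Log it and keep scanning
        // the remaining provider entries.
        System.err.println("Skipping invalid provider entry for "
            + service.getName() + ": " + e.getMessage());
      }
    }
    return found;
  }
}
{code}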



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22985) Gracefully handle invalid ServiceLoader entries

2019-09-09 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925993#comment-16925993
 ] 

Sean Busbey commented on HBASE-22985:
-

{quote}
bq. Why catch just this instance of a ServiceLoader issue and not others?

Didn't want to open a pandora's box of other things that might go wrong. Was 
there something else in particular you had in mind?
{quote}

Just that I don't think this is the only place we use ServiceLoader. i.e. we 
use it for security tokens and for the hadoop compatibility stuff.

It'll be surprising in an operational context if I can safely configure garbage 
for the metrics system but not others.

> Gracefully handle invalid ServiceLoader entries
> ---
>
> Key: HBASE-22985
> URL: https://issues.apache.org/jira/browse/HBASE-22985
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-22985.001.patch, HBASE-22985.002.patch
>
>
> Just saw this happen: A RegionServer failed to start because, on the 
> classpath, there was a {{META-INF/services}} entry in a JAR on the classpath 
> that was advertising an implementation of 
> {{org.apache.hadoop.hbase.metrics.MetricsRegistries}} but was an 
> implementation of a completely different class:
> {noformat}
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.metrics.MetricRegistries: Provider 
> org.apache.ratis.metrics.impl.MetricRegistriesImpl not a subtype
>   at java.util.ServiceLoader.fail(ServiceLoader.java:239)
>   at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.getDefinedImplemantations(MetricRegistriesLoader.java:92)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.load(MetricRegistriesLoader.java:50)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries$LazyHolder.(MetricRegistries.java:39)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries.global(MetricRegistries.java:47)
>   at 
> org.apache.hadoop.hbase.metrics.BaseSourceImpl.(BaseSourceImpl.java:122)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:46)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:38)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
>   at org.apache.hadoop.hbase.io.MetricsIO.(MetricsIO.java:35)
>   at org.apache.hadoop.hbase.io.hfile.HFile.(HFile.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
>   ... 10 more{noformat}
> Now, we could catch this and gracefully ignore it; however, this would mean 
> that we're catching an Error which is typically considered a smell.
> It's a pretty straightforward change, so I'm apt to think that it's OK. What 
> do other folks think?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [hbase-operator-tools] asf-ci commented on issue #27: HBASE-22999 Fix non-varargs compile warnings

2019-09-09 Thread GitBox
asf-ci commented on issue #27: HBASE-22999 Fix non-varargs compile warnings
URL: 
https://github.com/apache/hbase-operator-tools/pull/27#issuecomment-529614258
 
 
   
   Refer to this link for build results (access rights to CI server needed): 
   https://builds.apache.org/job/PreCommit-HBASE-OPERATOR-TOOLS-Build/95/
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22903) alter_status command is broken

2019-09-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22903:
-
Component/s: (was: asyncclient)

> alter_status command is broken
> --
>
> Key: HBASE-22903
> URL: https://issues.apache.org/jira/browse/HBASE-22903
> Project: HBase
>  Issue Type: Bug
>  Components: metrics, shell
>Affects Versions: 3.0.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-22903.master.000.patch, 
> HBASE-22903.master.001.patch, HBASE-22903.master.002.patch, 
> HBASE-22903.master.005.patch, HBASE-22903.master.006.patch
>
>
> This is applicable to master branch only:
> {code:java}
> > alter_status 't1'
> ERROR: undefined method `getAlterStatus' for 
> #
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [hbase] Apache-HBase commented on issue #599: HBASE-22987 Calculate the region servers in default group in foreground

2019-09-09 Thread GitBox
Apache-HBase commented on issue #599: HBASE-22987 Calculate the region servers 
in default group in foreground
URL: https://github.com/apache/hbase/pull/599#issuecomment-529612361
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ HBASE-22514 Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m  1s |  HBASE-22514 passed  |
   | :green_heart: |  compile  |   0m 57s |  HBASE-22514 passed  |
   | :green_heart: |  checkstyle  |   1m 21s |  HBASE-22514 passed  |
   | :green_heart: |  shadedjars  |   5m  6s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 41s |  HBASE-22514 passed  |
   | :blue_heart: |  spotbugs  |   4m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 18s |  HBASE-22514 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   4m 51s |  the patch passed  |
   | :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 20s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 39s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 153m 26s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 31s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 212m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.replication.regionserver.TestWALEntryStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-599/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/599 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 5401a734fe2e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-599/out/precommit/personality/provided.sh
 |
   | git revision | HBASE-22514 / 16861571d5 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-599/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-599/1/testReport/
 |
   | Max. process+thread count | 4620 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-599/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22760) Stop/Resume Snapshot Auto-Cleanup activity with shell command

2019-09-09 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925985#comment-16925985
 ] 

Viraj Jasani commented on HBASE-22760:
--

[~apurtell] I have run the hbase-server and hbase-shell tests for master, branch-2, 
and branch-1 with the latest patch for each branch, and the tests look good on all 
of them.

I'll be careful with the shell module going forward and run TestShell etc. explicitly, 
since it is not run as part of the precommit job / mvn test. Thanks.

> Stop/Resume Snapshot Auto-Cleanup activity with shell command
> -
>
> Key: HBASE-22760
> URL: https://issues.apache.org/jira/browse/HBASE-22760
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, shell, snapshots
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.3.0
>
> Attachments: HBASE-22760.branch-1.000.patch, 
> HBASE-22760.branch-1.001.patch, HBASE-22760.branch-2.000.patch, 
> HBASE-22760.master.003.patch, HBASE-22760.master.004.patch, 
> HBASE-22760.master.005.patch, HBASE-22760.master.008.patch, 
> HBASE-22760.master.009.patch
>
>
> For any scheduled snapshot backup activity, we would like to disable 
> auto-cleaner for snapshot based on TTL. However, as per HBASE-22648 we have a 
> config to disable snapshot auto-cleaner: 
> hbase.master.cleaner.snapshot.disable, which would take effect only upon 
> HMaster restart just similar to any other hbase-site configs.
> For any running cluster, we should be able to stop/resume auto-cleanup 
> activity for snapshot based on shell command. Something similar to below 
> command should be able to stop/start cleanup chore:
> hbase(main):001:0> snapshot_auto_cleanup_switch false    (disable 
> auto-cleaner)
> hbase(main):001:0> snapshot_auto_cleanup_switch true     (enable auto-cleaner)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-22804:
-
Attachment: (was: HBASE-22804.branch-1.007.patch)

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-1.002.patch, HBASE-22804.branch-1.003.patch, 
> HBASE-22804.branch-1.004.patch, HBASE-22804.branch-1.005.patch, 
> HBASE-22804.branch-1.006.patch, HBASE-22804.branch-1.007.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.branch-2.002.patch, 
> HBASE-22804.branch-2.003.patch, HBASE-22804.branch-2.004.patch, 
> HBASE-22804.branch-2.005.patch, HBASE-22804.branch-2.006.patch, 
> HBASE-22804.master.001.patch, HBASE-22804.master.002.patch, 
> HBASE-22804.master.003.patch, HBASE-22804.master.004.patch, 
> HBASE-22804.master.005.patch, HBASE-22804.master.006.patch
>
>
> At present HBase Canary tool only prints the successes as part of logs. 
> Providing an API to get the list of successes, as well as total number of 
> expected regions, will make it easier to get a more accurate availability 
> estimate.
>   
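A rough sketch of what such an API could look like, purely as an illustration; the 
interface and method names below are assumptions, not the ones used in the attached 
patches:

{code:java}
import java.util.List;

// Hypothetical shape of the API discussed above: expose the successful regions
// and the expected total so callers can compute an availability estimate.
public interface CanaryRunResults {
  List<String> getSuccessfulRegionNames();

  int getTotalExpectedRegions();

  default double availabilityEstimate() {
    int total = getTotalExpectedRegions();
    return total == 0 ? 1.0 : (double) getSuccessfulRegionNames().size() / total;
  }
}
{code}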



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-22804:
-
Attachment: HBASE-22804.branch-1.007.patch

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-1.002.patch, HBASE-22804.branch-1.003.patch, 
> HBASE-22804.branch-1.004.patch, HBASE-22804.branch-1.005.patch, 
> HBASE-22804.branch-1.006.patch, HBASE-22804.branch-1.007.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.branch-2.002.patch, 
> HBASE-22804.branch-2.003.patch, HBASE-22804.branch-2.004.patch, 
> HBASE-22804.branch-2.005.patch, HBASE-22804.branch-2.006.patch, 
> HBASE-22804.master.001.patch, HBASE-22804.master.002.patch, 
> HBASE-22804.master.003.patch, HBASE-22804.master.004.patch, 
> HBASE-22804.master.005.patch, HBASE-22804.master.006.patch
>
>
> At present HBase Canary tool only prints the successes as part of logs. 
> Providing an API to get the list of successes, as well as total number of 
> expected regions, will make it easier to get a more accurate availability 
> estimate.
>   



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-22804) Provide an API to get list of successful regions and total expected regions in Canary

2019-09-09 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-22804:
-
Attachment: (was: HBASE-22804.branch-1.007.patch)

> Provide an API to get list of successful regions and total expected regions 
> in Canary
> -
>
> Key: HBASE-22804
> URL: https://issues.apache.org/jira/browse/HBASE-22804
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
>  Labels: Canary
> Attachments: HBASE-22804.branch-1.001.patch, 
> HBASE-22804.branch-1.002.patch, HBASE-22804.branch-1.003.patch, 
> HBASE-22804.branch-1.004.patch, HBASE-22804.branch-1.005.patch, 
> HBASE-22804.branch-1.006.patch, HBASE-22804.branch-1.007.patch, 
> HBASE-22804.branch-2.001.patch, HBASE-22804.branch-2.002.patch, 
> HBASE-22804.branch-2.003.patch, HBASE-22804.branch-2.004.patch, 
> HBASE-22804.branch-2.005.patch, HBASE-22804.branch-2.006.patch, 
> HBASE-22804.master.001.patch, HBASE-22804.master.002.patch, 
> HBASE-22804.master.003.patch, HBASE-22804.master.004.patch, 
> HBASE-22804.master.005.patch, HBASE-22804.master.006.patch
>
>
> At present HBase Canary tool only prints the successes as part of logs. 
> Providing an API to get the list of successes, as well as total number of 
> expected regions, will make it easier to get a more accurate availability 
> estimate.
>   



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22985) Gracefully handle invalid ServiceLoader entries

2019-09-09 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925981#comment-16925981
 ] 

Josh Elser commented on HBASE-22985:


{quote}a terrible code smell
{quote}
Agree, and I'm leaning more towards this not being worth it.
{quote}Does the RS fail correctly and quickly?
{quote}
Yes, it does (without this patch).
{quote}Why catch just this instance of a ServiceLoader issue and not others?
{quote}
Didn't want to open a pandora's box of other things that might go wrong. Was 
there something else in particular you had in mind?

> Gracefully handle invalid ServiceLoader entries
> ---
>
> Key: HBASE-22985
> URL: https://issues.apache.org/jira/browse/HBASE-22985
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-22985.001.patch, HBASE-22985.002.patch
>
>
> Just saw this happen: A RegionServer failed to start because, on the 
> classpath, there was a {{META-INF/services}} entry in a JAR on the classpath 
> that was advertising an implementation of 
> {{org.apache.hadoop.hbase.metrics.MetricsRegistries}} but was an 
> implementation of a completely different class:
> {noformat}
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.metrics.MetricRegistries: Provider 
> org.apache.ratis.metrics.impl.MetricRegistriesImpl not a subtype
>   at java.util.ServiceLoader.fail(ServiceLoader.java:239)
>   at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.getDefinedImplemantations(MetricRegistriesLoader.java:92)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.load(MetricRegistriesLoader.java:50)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries$LazyHolder.(MetricRegistries.java:39)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries.global(MetricRegistries.java:47)
>   at 
> org.apache.hadoop.hbase.metrics.BaseSourceImpl.(BaseSourceImpl.java:122)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:46)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:38)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
>   at org.apache.hadoop.hbase.io.MetricsIO.(MetricsIO.java:35)
>   at org.apache.hadoop.hbase.io.hfile.HFile.(HFile.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
>   ... 10 more{noformat}
> Now, we could catch this and gracefully ignore it; however, this would mean 
> that we're catching an Error which is typically considered a smell.
> It's a pretty straightforward change, so I'm apt to think that it's OK. What 
> do other folks think?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22979) Call ChunkCreator.initialize in TestHRegionWithInMemoryFlush

2019-09-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925962#comment-16925962
 ] 

Hudson commented on HBASE-22979:


Results for branch branch-2
[build #2242 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2242/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2242//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2242//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2242//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Call ChunkCreator.initialize in TestHRegionWithInMemoryFlush
> 
>
> Key: HBASE-22979
> URL: https://issues.apache.org/jira/browse/HBASE-22979
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.0, 2.2.1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2
>
>
> TestHRegionWithInMemoryFlush is failing 100% on branch-2.2+.
> Refactor of TestHRegion in HBASE-22896 did not update the overridden 
> initHRegion method in this test.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (HBASE-22992) Blog post for hbtop on hbase.apache.org

2019-09-09 Thread stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-22992.
---
Fix Version/s: 3.0.0
 Assignee: Toshihiro Suzuki
   Resolution: Fixed

Pushed https://blogs.apache.org/hbase/entry/introduction-hbtop-a-real-time

Made some small edits. Shout if you need more done, [~brfrn169]. If it looks good to 
you, you might want to post a pointer on the dev list (or I could do it for you...)

> Blog post for hbtop on hbase.apache.org
> ---
>
> Key: HBASE-22992
> URL: https://issues.apache.org/jira/browse/HBASE-22992
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22985) Gracefully handle invalid ServiceLoader entries

2019-09-09 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925953#comment-16925953
 ] 

Sean Busbey commented on HBASE-22985:
-

catching a {{ServiceConfigurationError}} is a terrible code smell. Does the RS 
fail correctly and quickly?

Why catch just this instance of a ServiceLoader issue and not others?

> Gracefully handle invalid ServiceLoader entries
> ---
>
> Key: HBASE-22985
> URL: https://issues.apache.org/jira/browse/HBASE-22985
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Attachments: HBASE-22985.001.patch, HBASE-22985.002.patch
>
>
> Just saw this happen: A RegionServer failed to start because, on the 
> classpath, there was a {{META-INF/services}} entry in a JAR on the classpath 
> that was advertising an implementation of 
> {{org.apache.hadoop.hbase.metrics.MetricsRegistries}} but was an 
> implementation of a completely different class:
> {noformat}
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.metrics.MetricRegistries: Provider 
> org.apache.ratis.metrics.impl.MetricRegistriesImpl not a subtype
>   at java.util.ServiceLoader.fail(ServiceLoader.java:239)
>   at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.getDefinedImplemantations(MetricRegistriesLoader.java:92)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistriesLoader.load(MetricRegistriesLoader.java:50)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries$LazyHolder.(MetricRegistries.java:39)
>   at 
> org.apache.hadoop.hbase.metrics.MetricRegistries.global(MetricRegistries.java:47)
>   at 
> org.apache.hadoop.hbase.metrics.BaseSourceImpl.(BaseSourceImpl.java:122)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:46)
>   at 
> org.apache.hadoop.hbase.io.MetricsIOSourceImpl.(MetricsIOSourceImpl.java:38)
>   at 
> org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:84)
>   at org.apache.hadoop.hbase.io.MetricsIO.(MetricsIO.java:35)
>   at org.apache.hadoop.hbase.io.hfile.HFile.(HFile.java:195)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:570)
>   ... 10 more{noformat}
> Now, we could catch this and gracefully ignore it; however, this would mean 
> that we're catching an Error which is typically considered a smell.
> It's a pretty straightforward change, so I'm apt to think that it's OK. What 
> do other folks think?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [hbase] Apache-HBase commented on issue #593: HBASE-22927 Upgrade Mockito version for jdk11

2019-09-09 Thread GitBox
Apache-HBase commented on issue #593: HBASE-22927 Upgrade Mockito version for 
jdk11
URL: https://github.com/apache/hbase/pull/593#issuecomment-529590586
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 44s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   7m  1s |  master passed  |
   | :green_heart: |  compile  |   4m  3s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 50s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   3m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 33s |  the patch passed  |
   | :green_heart: |  compile  |   3m 50s |  the patch passed  |
   | :green_heart: |  javac  |   3m 50s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML file.  
|
   | :broken_heart: |  shadedjars  |   5m 39s |  patch has 10 errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  19m 49s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   3m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 219m 26s |  root in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 43s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 288m 38s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.replication.TestVerifyReplicationCrossDiffHdfs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-593/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/593 |
   | Optional Tests | dupname asflicense javac javadoc unit shadedjars 
hadoopcheck xml compile |
   | uname | Linux a975b62a711d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-593/out/precommit/personality/provided.sh
 |
   | git revision | master / ac8fe1627a |
   | Default Java | 1.8.0_181 |
   | shadedjars | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-593/2/artifact/out/patch-shadedjars.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-593/2/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-593/2/testReport/
 |
   | Max. process+thread count | 5659 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-593/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #598: HBASE-22142 Space quota: If table inside namespace having space quota is dropped, data size usage is still considered for the drop table.

2019-09-09 Thread GitBox
Apache-HBase commented on issue #598: HBASE-22142 Space quota: If table inside 
namespace having space quota is dropped, data size usage is still considered 
for the drop table.
URL: https://github.com/apache/hbase/pull/598#issuecomment-529585338
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   7m 19s |  master passed  |
   | :green_heart: |  compile  |   1m 11s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 45s |  master passed  |
   | :green_heart: |  shadedjars  |   5m 46s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 44s |  master passed  |
   | :blue_heart: |  spotbugs  |   5m 31s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   5m 28s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 23s |  the patch passed  |
   | :green_heart: |  compile  |   1m  7s |  the patch passed  |
   | :green_heart: |  javac  |   1m  7s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 41s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   5m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  20m 19s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   | :green_heart: |  findbugs  |   5m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 167m 58s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 28s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 239m 13s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-598/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/598 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 4d0f83effd0d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-598/out/precommit/personality/provided.sh
 |
   | git revision | master / ac8fe1627a |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-598/1/testReport/
 |
   | Max. process+thread count | 4753 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-598/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (HBASE-23000) Fix all consistently failing tests in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HBASE-23000:
--

Assignee: Rushabh S Shah

> Fix all consistently failing tests in branch-1.3
> 
>
> Key: HBASE-23000
> URL: https://issues.apache.org/jira/browse/HBASE-23000
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Major
>
> Flaky test report: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
> In last 30 builds this test failed all 30 times.
> Here is the stack trace: 
> {noformat}
> Stacktrace
> java.io.IOException: Shutting down
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
> Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
> seconds
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
> {noformat}
> Link to latest jenkins build: 
> https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testBlockLocation/



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (HBASE-23001) TestBlockReorder.testHBaseCluster is failing consistently in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah resolved HBASE-23001.

Resolution: Duplicate

> TestBlockReorder.testHBaseCluster is failing consistently in branch-1.3   
> 
>
> Key: HBASE-23001
> URL: https://issues.apache.org/jira/browse/HBASE-23001
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Rushabh S Shah
>Priority: Major
>
> Flaky test report: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
> In last 30 builds this test failed all 30 times.
> Here is the stack trace:
> {noformat}
> Stacktrace
> java.io.IOException: Shutting down
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
> Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
> seconds
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
> {noformat}
> Link to latest jenkins build: 
> https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testHBaseCluster/
> Stack trace looks same as issue in HBASE-23000 but creating separate jira for 
> better tracking purpose.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-23000) Fix all consistently failing tests in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HBASE-23000:
---
Summary: Fix all consistently failing tests in branch-1.3  (was: 
TestBlockReorder#testBlockLocation is failing consistently in branch-1.3)

> Fix all consistently failing tests in branch-1.3
> 
>
> Key: HBASE-23000
> URL: https://issues.apache.org/jira/browse/HBASE-23000
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Rushabh S Shah
>Priority: Major
>
> Flaky test report: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
> In last 30 builds this test failed all 30 times.
> Here is the stack trace: 
> {noformat}
> Stacktrace
> java.io.IOException: Shutting down
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
> Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
> seconds
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
> {noformat}
> Link to latest jenkins build: 
> https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testBlockLocation/



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (HBASE-22902) At regionserver start there's a request to roll the WAL

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-22902:
--

Assignee: (was: Andrew Purtell)

> At regionserver start there's a request to roll the WAL
> ---
>
> Key: HBASE-22902
> URL: https://issues.apache.org/jira/browse/HBASE-22902
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: David Manning
>Priority: Minor
>
> See HBASE-22301 for logic that requests to roll the WAL if regionserver 
> encounters a slow write pipeline. In the logs, during regionserver start, I 
> see that the WAL is requested to roll once. It's strange that we roll the WAL 
> because it wasn't a slow sync. It appears when this code executes, we haven't 
> initialized the {{rollOnSyncNs}} variable to use for determining whether it's 
> a slow sync. Current pipeline also shows empty in the logs.
> Disclaimer: I'm experiencing this after backporting this to 1.3.x and 
> building it there - I haven't attempted in 1.5.x, though I'd expect similar 
> results.
> Regionserver logs follow (notice *threshold=0 ms, current pipeline: []*):
> {noformat}
> Tue Aug 20 23:29:50 GMT 2019 Starting regionserver
> ...
> 2019-08-20 23:29:57,824 INFO  wal.FSHLog - WAL configuration: blocksize=256 
> MB, rollsize=243.20 MB, prefix=[truncated]%2C1566343792434, suffix=, 
> logDir=hdfs://[truncated]/hbase/WALs/[truncated],1566343792434, 
> archiveDir=hdfs://[truncated]/hbase/oldWALs
> 2019-08-20 23:29:58,104 INFO  wal.FSHLog - Slow sync cost: 186 ms, current 
> pipeline: []
> 2019-08-20 23:29:58,104 WARN  wal.FSHLog - Requesting log roll because we 
> exceeded slow sync threshold; time=186 ms, threshold=0 ms, current pipeline: 
> []
> 2019-08-20 23:29:58,107 DEBUG regionserver.ReplicationSourceManager - Start 
> tracking logs for wal group [truncated]%2C1566343792434 for peer 1
> 2019-08-20 23:29:58,107 INFO  wal.FSHLog - New WAL 
> /hbase/WALs/[truncated],1566343792434/[truncated]%2C1566343792434.1566343797824
> 2019-08-20 23:29:58,109 DEBUG regionserver.ReplicationSource - Starting up 
> worker for wal group [truncated]%2C1566343792434{noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22902) At regionserver start there's a request to roll the WAL

2019-09-09 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925931#comment-16925931
 ] 

Andrew Purtell commented on HBASE-22902:


Going out on vacation for much of September. Unassigning, in case someone else 
is interested in picking it up in the meantime. I'll assign back to myself and 
do it upon return otherwise. 

> At regionserver start there's a request to roll the WAL
> ---
>
> Key: HBASE-22902
> URL: https://issues.apache.org/jira/browse/HBASE-22902
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 3.0.0, 1.5.0, 2.3.0
>Reporter: David Manning
>Priority: Minor
>
> See HBASE-22301 for logic that requests to roll the WAL if regionserver 
> encounters a slow write pipeline. In the logs, during regionserver start, I 
> see that the WAL is requested to roll once. It's strange that we roll the WAL 
> because it wasn't a slow sync. It appears when this code executes, we haven't 
> initialized the {{rollOnSyncNs}} variable to use for determining whether it's 
> a slow sync. Current pipeline also shows empty in the logs.
> Disclaimer: I'm experiencing this after backporting this to 1.3.x and 
> building it there - I haven't attempted in 1.5.x, though I'd expect similar 
> results.
> Regionserver logs follow (notice *threshold=0 ms, current pipeline: []*):
> {noformat}
> Tue Aug 20 23:29:50 GMT 2019 Starting regionserver
> ...
> 2019-08-20 23:29:57,824 INFO  wal.FSHLog - WAL configuration: blocksize=256 
> MB, rollsize=243.20 MB, prefix=[truncated]%2C1566343792434, suffix=, 
> logDir=hdfs://[truncated]/hbase/WALs/[truncated],1566343792434, 
> archiveDir=hdfs://[truncated]/hbase/oldWALs
> 2019-08-20 23:29:58,104 INFO  wal.FSHLog - Slow sync cost: 186 ms, current 
> pipeline: []
> 2019-08-20 23:29:58,104 WARN  wal.FSHLog - Requesting log roll because we 
> exceeded slow sync threshold; time=186 ms, threshold=0 ms, current pipeline: 
> []
> 2019-08-20 23:29:58,107 DEBUG regionserver.ReplicationSourceManager - Start 
> tracking logs for wal group [truncated]%2C1566343792434 for peer 1
> 2019-08-20 23:29:58,107 INFO  wal.FSHLog - New WAL 
> /hbase/WALs/[truncated],1566343792434/[truncated]%2C1566343792434.1566343797824
> 2019-08-20 23:29:58,109 DEBUG regionserver.ReplicationSource - Starting up 
> worker for wal group [truncated]%2C1566343792434{noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-23000) TestBlockReorder#testBlockLocation is failing consistently in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925930#comment-16925930
 ] 

Rushabh S Shah commented on HBASE-23000:


Below is the list of all consistently failing tests with the same stack trace; the 
underlying issue looks to be the same.

- org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation (3 min 23 sec, 489)
- org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster (3 min 23 sec, 489)
- org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad.testAtomicBulkLoad[0] (3 min 26 sec, 828)
- org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad.testAtomicBulkLoad[1] (25 ms, 828)
- org.apache.hadoop.hbase.master.TestMasterFileSystemWithWALDir (3 min 25 sec, 1835)
- org.apache.hadoop.hbase.regionserver.wal.TestLogRollAbort.testRSAbortWithUnflushedEdits

> TestBlockReorder#testBlockLocation is failing consistently in branch-1.3
> 
>
> Key: HBASE-23000
> URL: https://issues.apache.org/jira/browse/HBASE-23000
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Rushabh S Shah
>Priority: Major
>
> Flaky test report: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
> In last 30 builds this test failed all 30 times.
> Here is the stack trace: 
> {noformat}
> Stacktrace
> java.io.IOException: Shutting down
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
> Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
> seconds
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
> {noformat}
> Link to latest jenkins build: 
> https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testBlockLocation/



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HBASE-22978) Online slow response log

2019-09-09 Thread Andrew Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16925929#comment-16925929
 ] 

Andrew Purtell commented on HBASE-22978:


Going out on vacation for much of September. Unassigning, in case someone else 
is interested in picking it up in the meantime. I'll assign back to myself and 
do it upon return otherwise. 

> Online slow response log
> 
>
> Key: HBASE-22978
> URL: https://issues.apache.org/jira/browse/HBASE-22978
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Operability, regionserver, shell
>Reporter: Andrew Purtell
>Priority: Minor
>
> Today when an individual RPC exceeds a configurable time bound we log a 
> complaint by way of the logging subsystem. These log lines look like:
> {noformat}
> 2019-08-30 22:10:36,195 WARN [,queue=15,port=60020] ipc.RpcServer - 
> (responseTooSlow):
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)",
> "starttimems":1567203007549,
> "responsesize":6819737,
> "method":"Scan",
> "param":"region { type: REGION_NAME value: 
> \"tsdb,\\000\\000\\215\\f)o\\024\\302\\220\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000\\000\\006\\000\\000\\000\\000\\000\\005\\000\\000",
> "processingtimems":28646,
> "client":"10.253.196.215:41116",
> "queuetimems":22453,
> "class":"HRegionServer"}
> {noformat}
> Unfortunately we often truncate the request parameters, like in the above 
> example. We do this because the human readable representation is verbose, the 
> rate of too slow warnings may be high, and the combination of these things 
> can overwhelm the log capture system. The truncation is unfortunate because 
> it eliminates much of the utility of the warnings. For example, the region 
> name, the start and end keys, and the filter hierarchy are all important 
> clues for debugging performance problems caused by moderate to low 
> selectivity queries or queries made at a high rate.
> We can maintain an in-memory ring buffer of requests that were judged to be 
> too slow in addition to the responseTooSlow logging. The in-memory 
> representation can be complete and compressed. A new admin API and shell 
> command can provide access to the ring buffer for online performance 
> debugging. A modest sizing of the ring buffer will prevent excessive memory 
> utilization for a minor performance debugging feature by limiting the total 
> number of retained records. There is some chance a high rate of requests will 
> cause information on other interesting requests to be overwritten before it 
> can be read. This is the nature of a ring buffer and an acceptable trade off.
> The write request types do not require us to retain all information submitted 
> in the request. We don't need to retain all key-values in the mutation, which 
> may be too large to comfortably retain. We only need a unique set of row 
> keys, or even a min/max range, and total counts.
> The consumers of this information will be debugging tools. We can afford to 
> apply fast compression to ring buffer entries (if codec support is 
> available), something like snappy or zstandard, and decompress on the fly 
> when servicing the retrieval API request. This will minimize the impact of 
> retaining more information about slow requests than we do today.
> This proposal is for retention of request information only, the same 
> information provided by responseTooSlow warnings. Total size of response 
> serialization, possibly also total cell or row counts, should be sufficient 
> to characterize the response.
> Optionally persist new entries added to the ring buffer into one or more 
> files in HDFS in a write-behind manner. If the HDFS writer blocks or falls 
> behind and we are unable to persist an entry before it is overwritten, that 
> is fine. Response too slow logging is best effort. If we can detect this make 
> a note of it in the log file. Provide a tool for parsing, dumping, filtering, 
> and pretty printing the slow logs written to HDFS. The tool and the shell can 
> share and reuse some utility classes and methods for accomplishing that. 
> —
> New shell commands:
> {{get_slow_responses [ <server1>, ..., <serverN> ] [ , \{ <filter parameters> } ]}}
> Retrieve, decode, and pretty print the contents of the too slow response ring 
> buffer maintained by the given list of servers; or all servers in the cluster 
> if no list is provided. Optionally provide a map of parameters for filtering 
> as additional argument. The TABLE filter, which expects a string containing a 
> table name, will include only entries pertaining to that table. The REGION 
> filter, which expects a string containing an encoded region name, will 
> include only entries pertaining to that region. The CLIENT_IP filter, which 
> expects a string containing an IP address, will include only entries 
> pertaining to that client.
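
A minimal sketch of the in-memory ring buffer proposed above, assuming a plain 
SlowLogRecord holder; the class and field names here are hypothetical 
illustrations, not part of any shipped HBase API, and compression of the stored 
parameters is left out.
{noformat}
import java.util.ArrayList;
import java.util.List;

// Hypothetical record of one "too slow" request; fields mirror the
// responseTooSlow warning shown above.
final class SlowLogRecord {
  final long startTimeMs;
  final long processingTimeMs;
  final long queueTimeMs;
  final long responseSize;
  final String method;
  final String client;
  final String param;   // full, untruncated request parameters

  SlowLogRecord(long startTimeMs, long processingTimeMs, long queueTimeMs,
                long responseSize, String method, String client, String param) {
    this.startTimeMs = startTimeMs;
    this.processingTimeMs = processingTimeMs;
    this.queueTimeMs = queueTimeMs;
    this.responseSize = responseSize;
    this.method = method;
    this.client = client;
    this.param = param;
  }
}

// Bounded ring buffer: the newest entry overwrites the oldest once full.
final class SlowLogRingBuffer {
  private final SlowLogRecord[] ring;
  private long next; // monotonically increasing write index

  SlowLogRingBuffer(int capacity) {
    this.ring = new SlowLogRecord[capacity];
  }

  synchronized void add(SlowLogRecord record) {
    ring[(int) (next % ring.length)] = record;
    next++;
  }

  // Snapshot of the retained entries, oldest first, for a retrieval API.
  synchronized List<SlowLogRecord> snapshot() {
    List<SlowLogRecord> out = new ArrayList<>();
    long start = Math.max(0, next - ring.length);
    for (long i = start; i < next; i++) {
      out.add(ring[(int) (i % ring.length)]);
    }
    return out;
  }
}
{noformat}
A fixed-size array with a monotonically increasing write index keeps memory 
bounded and lets new entries overwrite the oldest ones for free, which is the 
trade-off the proposal accepts; a real implementation could additionally 
compress the param field with snappy or zstandard before storing it.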

[jira] [Assigned] (HBASE-22978) Online slow response log

2019-09-09 Thread Andrew Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-22978:
--

Assignee: (was: Andrew Purtell)

> Online slow response log
> 
>
> Key: HBASE-22978
> URL: https://issues.apache.org/jira/browse/HBASE-22978
> Project: HBase
>  Issue Type: New Feature
>  Components: Admin, Operability, regionserver, shell
>Reporter: Andrew Purtell
>Priority: Minor
>
> Today when an individual RPC exceeds a configurable time bound we log a 
> complaint by way of the logging subsystem. These log lines look like:
> {noformat}
> 2019-08-30 22:10:36,195 WARN [,queue=15,port=60020] ipc.RpcServer - 
> (responseTooSlow):
> {"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)",
> "starttimems":1567203007549,
> "responsesize":6819737,
> "method":"Scan",
> "param":"region { type: REGION_NAME value: 
> \"tsdb,\\000\\000\\215\\f)o\\024\\302\\220\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000\\000\\006\\000\\000\\000\\000\\000\\005\\000\\000",
> "processingtimems":28646,
> "client":"10.253.196.215:41116",
> "queuetimems":22453,
> "class":"HRegionServer"}
> {noformat}
> Unfortunately we often truncate the request parameters, like in the above 
> example. We do this because the human readable representation is verbose, the 
> rate of too slow warnings may be high, and the combination of these things 
> can overwhelm the log capture system. The truncation is unfortunate because 
> it eliminates much of the utility of the warnings. For example, the region 
> name, the start and end keys, and the filter hierarchy are all important 
> clues for debugging performance problems caused by moderate to low 
> selectivity queries or queries made at a high rate.
> We can maintain an in-memory ring buffer of requests that were judged to be 
> too slow in addition to the responseTooSlow logging. The in-memory 
> representation can be complete and compressed. A new admin API and shell 
> command can provide access to the ring buffer for online performance 
> debugging. A modest sizing of the ring buffer will prevent excessive memory 
> utilization for a minor performance debugging feature by limiting the total 
> number of retained records. There is some chance a high rate of requests will 
> cause information on other interesting requests to be overwritten before it 
> can be read. This is the nature of a ring buffer and an acceptable trade off.
> The write request types do not require us to retain all information submitted 
> in the request. We don't need to retain all key-values in the mutation, which 
> may be too large to comfortably retain. We only need a unique set of row 
> keys, or even a min/max range, and total counts.
> The consumers of this information will be debugging tools. We can afford to 
> apply fast compression to ring buffer entries (if codec support is 
> available), something like snappy or zstandard, and decompress on the fly 
> when servicing the retrieval API request. This will minimize the impact of 
> retaining more information about slow requests than we do today.
> This proposal is for retention of request information only, the same 
> information provided by responseTooSlow warnings. Total size of response 
> serialization, possibly also total cell or row counts, should be sufficient 
> to characterize the response.
> Optionally persist new entries added to the ring buffer into one or more 
> files in HDFS in a write-behind manner. If the HDFS writer blocks or falls 
> behind and we are unable to persist an entry before it is overwritten, that 
> is fine. Response too slow logging is best effort. If we can detect this make 
> a note of it in the log file. Provide a tool for parsing, dumping, filtering, 
> and pretty printing the slow logs written to HDFS. The tool and the shell can 
> share and reuse some utility classes and methods for accomplishing that. 
> —
> New shell commands:
> {{get_slow_responses [ <server1>, ..., <serverN> ] [ , \{ <filter parameters> } ]}}
> Retrieve, decode, and pretty print the contents of the too slow response ring 
> buffer maintained by the given list of servers; or all servers in the cluster 
> if no list is provided. Optionally provide a map of parameters for filtering 
> as additional argument. The TABLE filter, which expects a string containing a 
> table name, will include only entries pertaining to that table. The REGION 
> filter, which expects a string containing an encoded region name, will 
> include only entries pertaining to that region. The CLIENT_IP filter, which 
> expects a string containing an IP address, will include only entries 
> pertaining to that client. The USER filter, which expects a string containing 
> a user name, will include only entries pertaining to that user. Filters are 
> additive, for example if b
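
A rough sketch of how the additive filtering described above could be applied 
to retrieved entries; the filter keys and the SlowLogEntry fields are 
assumptions for illustration, not the actual admin API.
{noformat}
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical view of one retrieved slow-response entry.
final class SlowLogEntry {
  String table;
  String encodedRegionName;
  String clientIp;
  String user;
}

final class SlowLogFilters {
  // Keeps only entries matching every filter that was supplied;
  // an absent key places no constraint on that field.
  static List<SlowLogEntry> apply(List<SlowLogEntry> entries, Map<String, String> filters) {
    return entries.stream()
        .filter(e -> matches(filters.get("TABLE"), e.table))
        .filter(e -> matches(filters.get("REGION"), e.encodedRegionName))
        .filter(e -> matches(filters.get("CLIENT_IP"), e.clientIp))
        .filter(e -> matches(filters.get("USER"), e.user))
        .collect(Collectors.toList());
  }

  private static boolean matches(String wanted, String actual) {
    return wanted == null || wanted.equals(actual);
  }
}
{noformat}
Because an absent key places no constraint, supplying both CLIENT_IP and USER 
narrows the result to entries matching both, which is the additive behavior the 
proposal describes.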

[jira] [Created] (HBASE-23001) TestBlockReorder.testHBaseCluster is failing consistently in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)
Rushabh S Shah created HBASE-23001:
--

 Summary: TestBlockReorder.testHBaseCluster is failing consistently 
in branch-1.3   
 Key: HBASE-23001
 URL: https://issues.apache.org/jira/browse/HBASE-23001
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.3.6
Reporter: Rushabh S Shah


Flaky test report: 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
In the last 30 builds, this test failed all 30 times.
Here is the stack trace:
{noformat}
Stacktrace
java.io.IOException: Shutting down
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
{noformat}
The stack trace looks the same as in HBASE-23000, but filing a separate Jira 
for better tracking.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (HBASE-23001) TestBlockReorder.testHBaseCluster is failing consistently in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HBASE-23001:
---
Description: 
Flaky test report: 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
In the last 30 builds, this test failed all 30 times.
Here is the stack trace:
{noformat}
Stacktrace
java.io.IOException: Shutting down
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
{noformat}
Link to latest jenkins build: 
https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testHBaseCluster/

The stack trace looks the same as in HBASE-23000, but filing a separate Jira 
for better tracking.

  was:
Flaky test report: 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
In the last 30 builds, this test failed all 30 times.
Here is the stack trace:
{noformat}
Stacktrace
java.io.IOException: Shutting down
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
{noformat}
The stack trace looks the same as in HBASE-23000, but filing a separate Jira 
for better tracking.


> TestBlockReorder.testHBaseCluster is failing consistently in branch-1.3   
> 
>
> Key: HBASE-23001
> URL: https://issues.apache.org/jira/browse/HBASE-23001
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.3.6
>Reporter: Rushabh S Shah
>Priority: Major
>
> Flaky test report: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
> In the last 30 builds, this test failed all 30 times.
> Here is the stack trace:
> {noformat}
> Stacktrace
> java.io.IOException: Shutting down
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
> Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
> seconds
>   at 
> org.apache.hadoop.hbase.fs.TestBlockReorder.testHBaseCluster(TestBlockReorder.java:261)
> {noformat}
> Link to latest jenkins build: 
> https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testHBaseCluster/
> The stack trace looks the same as in HBASE-23000, but filing a separate Jira 
> for better tracking.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (HBASE-23000) TestBlockReorder#testBlockLocation is failing consistently in branch-1.3

2019-09-09 Thread Rushabh S Shah (Jira)
Rushabh S Shah created HBASE-23000:
--

 Summary: TestBlockReorder#testBlockLocation is failing 
consistently in branch-1.3
 Key: HBASE-23000
 URL: https://issues.apache.org/jira/browse/HBASE-23000
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.3.6
Reporter: Rushabh S Shah


Flaky test report: 
https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-1.3/Flaky_20Test_20Report/dashboard.html#job_2
In the last 30 builds, this test failed all 30 times.
Here is the stack trace: 
{noformat}
Stacktrace
java.io.IOException: Shutting down
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
Caused by: java.lang.RuntimeException: Master not initialized after 20ms 
seconds
at 
org.apache.hadoop.hbase.fs.TestBlockReorder.testBlockLocation(TestBlockReorder.java:428)
{noformat}

Link to latest jenkins build: 
https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-1.3/9351/testReport/org.apache.hadoop.hbase.fs/TestBlockReorder/testBlockLocation/



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (HBASE-22998) Fix NOTICE and LICENSE

2019-09-09 Thread Peter Somogyi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-22998.
---
Resolution: Fixed

Merged PR #26.

> Fix NOTICE and LICENSE
> --
>
> Key: HBASE-22998
> URL: https://issues.apache.org/jira/browse/HBASE-22998
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase-operator-tools
>Affects Versions: operator-1.0.0
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Blocker
> Fix For: operator-1.0.0
>
>
> LICENSE.txt contains only Apache License v2 but the hbase-operator-tools 
> project uses dependencies with different licenses. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[GitHub] [hbase-operator-tools] petersomogyi merged pull request #26: HBASE-22998 Fix NOTICE and LICENSE

2019-09-09 Thread GitBox
petersomogyi merged pull request #26: HBASE-22998 Fix NOTICE and LICENSE
URL: https://github.com/apache/hbase-operator-tools/pull/26
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

