[jira] [Commented] (HBASE-20186) Improve RSGroupBasedLoadBalancer#balanceCluster() to be more efficient when calculating cluster state for each rsgroup

2018-03-14 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399953#comment-16399953
 ] 

Xiang Li commented on HBASE-20186:
--

Thanks Ted for the review!

> Improve RSGroupBasedLoadBalancer#balanceCluster() to be more efficient when 
> calculating cluster state for each rsgroup
> --
>
> Key: HBASE-20186
> URL: https://issues.apache.org/jira/browse/HBASE-20186
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20186.master.000.patch, 
> HBASE-20186.master.001.patch
>
>
> In RSGroupBasedLoadBalancer
> {code}
> public List<RegionPlan> balanceCluster(Map<ServerName, List<RegionInfo>> clusterState)
> {code}
> The second half of the function calculates the region move plan for regions 
> that have already been placed according to the rsgroup assignment, one 
> rsgroup after another.
> The following logic, which checks whether a server belongs to the rsgroup, is 
> not very efficient, because it does not exploit the fact that the servers in 
> RSGroupInfo are kept in a TreeSet.
> {code}
> for (Address sName : info.getServers()) {
>   for(ServerName curr: clusterState.keySet()) {
> if(curr.getAddress().equals(sName)) {
>   groupClusterState.put(curr, correctedState.get(curr));
> }
>   }
> }
> {code}
> Given m region servers in the cluster and n region servers per rsgroup on 
> average, the code above has O(m * n) time complexity, while using TreeSet's 
> contains() would reduce it to O(m * log n).
> Another improvement: we do not need to scan every server for each rsgroup. 
> If we record the servers that have already been processed, we can skip them.
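The lookup change proposed above can be sketched in isolation. This is a simplified illustration with stand-in types (String in place of Address/ServerName, plain List values in place of region lists), not the real RSGroupBasedLoadBalancer code: instead of an inner scan over the whole cluster map for every group member, iterate the cluster map once and test membership with TreeSet.contains(), which costs O(log n) per server.

```java
import java.util.*;

public class GroupLookupSketch {
    // Keep only the cluster-state entries whose server belongs to the group.
    // groupServers is a TreeSet, so contains() is an O(log n) tree lookup.
    static Map<String, List<String>> filterByGroup(
            Map<String, List<String>> clusterState, TreeSet<String> groupServers) {
        Map<String, List<String>> groupClusterState = new HashMap<>();
        for (Map.Entry<String, List<String>> e : clusterState.entrySet()) {
            if (groupServers.contains(e.getKey())) {  // O(log n) per server
                groupClusterState.put(e.getKey(), e.getValue());
            }
        }
        return groupClusterState;
    }

    public static void main(String[] args) {
        Map<String, List<String>> clusterState = new HashMap<>();
        clusterState.put("rs1:16020", Arrays.asList("regionA"));
        clusterState.put("rs2:16020", Arrays.asList("regionB"));
        clusterState.put("rs3:16020", Arrays.asList("regionC"));
        TreeSet<String> group = new TreeSet<>(Arrays.asList("rs1:16020", "rs3:16020"));
        // Only rs1 and rs3 belong to the group, so two entries survive.
        System.out.println(filterByGroup(clusterState, group).size());  // 2
    }
}
```

A single pass over the m cluster servers with a logarithmic membership test gives the O(m * log n) bound mentioned in the description.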



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19770) Add '--return-values' option to Shell to print return values of commands in interactive mode

2018-03-14 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399947#comment-16399947
 ] 

Appy commented on HBASE-19770:
--

bq. I think flipping the default to true is something we could consider if this 
"experiment" is deemed not as helpful as intended.
Sounds right.
Pardon for starting down this road.

> Add '--return-values' option to Shell to print return values of commands in 
> interactive mode
> 
>
> Key: HBASE-19770
> URL: https://issues.apache.org/jira/browse/HBASE-19770
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 1.5.0, 2.0.0-beta-2, 1.4.2
>
> Attachments: HBASE-19770.000.branch-2.addendum.patch, 
> HBASE-19770.001.branch-2.patch, HBASE-19770.002.branch-2.patch, 
> HBASE-19770.003.branch-2.patch, HBASE-19770.004.branch-2.patch, 
> HBASE-19770.ADDENDUM.branch-2.patch
>
>
> Another good find by our Romil.
> {code}
> hbase(main):001:0> list
> TABLE
> a
> 1 row(s)
> Took 0.8385 seconds
> hbase(main):002:0> tables=list
> TABLE
> a
> 1 row(s)
> Took 0.0267 seconds
> hbase(main):003:0> puts tables
> hbase(main):004:0> p tables
> nil
> {code}
> The {{list}} command should be returning {{\['a'\]}} but is not.
> The command class itself appears to be doing the right thing -- maybe the 
> retval is getting lost somewhere else?
> FYI [~stack].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19999) Remove the SYNC_REPLICATION_ENABLED flag

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399931#comment-16399931
 ] 

Hudson commented on HBASE-19999:


Results for branch HBASE-19064
[build #64 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/64/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/64//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/64//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-19064/64//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Remove the SYNC_REPLICATION_ENABLED flag
> 
>
> Key: HBASE-19999
> URL: https://issues.apache.org/jira/browse/HBASE-19999
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19999.HBASE-19064.001.patch, 
> HBASE-19999.HBASE-19064.002.patch, HBASE-19999.HBASE-19064.003.patch, 
> HBASE-19999.HBASE-19064.004.patch, HBASE-19999.HBASE-19064.005.patch
>
>
> It is a bit strange, since we cannot guard all of the sync replication 
> related code with it. We'd better change its name and only use it within the 
> WAL construction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20178) [AMv2] Throw exception if hostile environment

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399921#comment-16399921
 ] 

Hudson commented on HBASE-20178:


Results for branch branch-2.0
[build #40 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> [AMv2] Throw exception if hostile environment
> -
>
> Key: HBASE-20178
> URL: https://issues.apache.org/jira/browse/HBASE-20178
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: 
> 0001-HBASE-20178-AMv2-Throw-exception-if-hostile-environm.patch, 
> HBASE-20178.branch-2.001.patch, HBASE-20178.branch-2.002.patch, 
> HBASE-20178.branch-2.003.patch, HBASE-20178.branch-2.004.patch, 
> HBASE-20178.branch-2.005.patch, HBASE-20178.branch-2.006.patch, 
> HBASE-20178.branch-2.007.patch
>
>
> New pattern: throw an exception on procedure construction if the cluster is 
> going down, the hosting master is stopping, the table is offline, or the 
> table is read-only. Fail fast rather than later, inside the Procedure, so we 
> can flag to the caller that there is a problem.
> Changed the Move/Split/Merge Procedures.
> There is no point queuing a region move for a table that is offline and may 
> never be re-enabled.
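The fail-fast pattern in the description can be sketched as follows. The names here (Env, MoveRegionProcedure, HostileEnvironmentException) are illustrative stand-ins, not the real AMv2 classes: the point is only that the environment check happens in the constructor, before the procedure could ever be queued.

```java
public class FailFastSketch {
    static class HostileEnvironmentException extends Exception {
        HostileEnvironmentException(String msg) { super(msg); }
    }

    // Snapshot of the hostile conditions the description lists.
    static class Env {
        final boolean clusterShutdown, masterStopping, tableOffline, tableReadOnly;
        Env(boolean clusterShutdown, boolean masterStopping,
            boolean tableOffline, boolean tableReadOnly) {
            this.clusterShutdown = clusterShutdown;
            this.masterStopping = masterStopping;
            this.tableOffline = tableOffline;
            this.tableReadOnly = tableReadOnly;
        }
    }

    static class MoveRegionProcedure {
        // Construction itself fails in a hostile environment, so the caller
        // is flagged immediately instead of the procedure failing later.
        MoveRegionProcedure(Env env) throws HostileEnvironmentException {
            if (env.clusterShutdown) throw new HostileEnvironmentException("cluster is going down");
            if (env.masterStopping) throw new HostileEnvironmentException("master is stopping");
            if (env.tableOffline) throw new HostileEnvironmentException("table is offline");
            if (env.tableReadOnly) throw new HostileEnvironmentException("table is read-only");
        }
    }

    public static void main(String[] args) {
        try {
            new MoveRegionProcedure(new Env(false, false, true, false));
        } catch (HostileEnvironmentException e) {
            System.out.println("rejected: " + e.getMessage());  // rejected: table is offline
        }
    }
}
```

Rejecting at construction time keeps a doomed move from ever occupying a slot in the procedure queue.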



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399920#comment-16399920
 ] 

Hudson commented on HBASE-20187:


Results for branch branch-2.0
[build #40 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/40//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Shell startup fails with IncompatibleClassChangeError
> -
>
> Key: HBASE-20187
> URL: https://issues.apache.org/jira/browse/HBASE-20187
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-20187.branch-2.001.patch, 
> HBASE-20187.branch-2.002.patch, HBASE-20187.branch-2.003.patch, 
> HBASE-20187.branch-2.004.patch
>
>
> Starting shell fails with a jline exception.
> Before {{2402f1fd43 - HBASE-20108 Remove jline exclusion from ZooKeeper}} the 
> shell starts up.
> {noformat}
> $ ./bin/hbase shell
> 2018-03-13 13:56:58,975 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> HBase Shell
> Use "help" to get list of supported commands.
> Use "exit" to quit this interactive shell.
> Version 2.0.0-beta-2, rc998e8d5f9ca3013d175ed447116c0734192f36c, Tue Mar 13 
> 13:49:59 CET 2018
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but 
> interface was expected
>   at jline.TerminalFactory.create(TerminalFactory.java:101)
>   at jline.TerminalFactory.get(TerminalFactory.java:159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:438)
>   at 
> org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:360)
>   at 
> org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:40)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:328)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:141)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:145)
>   at 

[jira] [Commented] (HBASE-20178) [AMv2] Throw exception if hostile environment

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399902#comment-16399902
 ] 

Hudson commented on HBASE-20178:


Results for branch branch-2
[build #486 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> [AMv2] Throw exception if hostile environment
> -
>
> Key: HBASE-20178
> URL: https://issues.apache.org/jira/browse/HBASE-20178
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: 
> 0001-HBASE-20178-AMv2-Throw-exception-if-hostile-environm.patch, 
> HBASE-20178.branch-2.001.patch, HBASE-20178.branch-2.002.patch, 
> HBASE-20178.branch-2.003.patch, HBASE-20178.branch-2.004.patch, 
> HBASE-20178.branch-2.005.patch, HBASE-20178.branch-2.006.patch, 
> HBASE-20178.branch-2.007.patch
>
>
> New pattern: throw an exception on procedure construction if the cluster is 
> going down, the hosting master is stopping, the table is offline, or the 
> table is read-only. Fail fast rather than later, inside the Procedure, so we 
> can flag to the caller that there is a problem.
> Changed the Move/Split/Merge Procedures.
> There is no point queuing a region move for a table that is offline and may 
> never be re-enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399901#comment-16399901
 ] 

Hudson commented on HBASE-20187:


Results for branch branch-2
[build #486 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/486//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Shell startup fails with IncompatibleClassChangeError
> -
>
> Key: HBASE-20187
> URL: https://issues.apache.org/jira/browse/HBASE-20187
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-20187.branch-2.001.patch, 
> HBASE-20187.branch-2.002.patch, HBASE-20187.branch-2.003.patch, 
> HBASE-20187.branch-2.004.patch
>
>
> Starting shell fails with a jline exception.
> Before {{2402f1fd43 - HBASE-20108 Remove jline exclusion from ZooKeeper}} the 
> shell starts up.
> {noformat}
> $ ./bin/hbase shell
> 2018-03-13 13:56:58,975 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> HBase Shell
> Use "help" to get list of supported commands.
> Use "exit" to quit this interactive shell.
> Version 2.0.0-beta-2, rc998e8d5f9ca3013d175ed447116c0734192f36c, Tue Mar 13 
> 13:49:59 CET 2018
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but 
> interface was expected
>   at jline.TerminalFactory.create(TerminalFactory.java:101)
>   at jline.TerminalFactory.get(TerminalFactory.java:159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:438)
>   at 
> org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:360)
>   at 
> org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:40)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:328)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:141)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:145)
>   at 

[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399893#comment-16399893
 ] 

Ashish Singhi commented on HBASE-20146:
---

Sorry for the late turnaround; I have been occupied with our internal cluster 
issues, so I couldn't get back here in time. 

Thank you very much [~Apache9] for committing the patches.

> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase 
> service, regions are not getting opened, leading to an HMaster abort because 
> the namespace table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.
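The hang in the jstack above comes from the region-open path appending a region-event marker to the WAL and then blocking on the WALKey's write-entry latch, which nothing ever completes when the WAL is disabled. A minimal sketch of that shape, with stand-in types rather than the real WALKey/WALUtil classes, showing the obvious guard of skipping the marker wait entirely when there is no WAL:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WalMarkerSketch {
    static class WalKey {
        // Completed when a WAL append assigns a sequence id; if the WAL is
        // disabled, no append ever runs and this latch stays at 1 forever.
        final CountDownLatch seqAssigned = new CountDownLatch(1);

        boolean waitForWriteEntry(long ms) throws InterruptedException {
            return seqAssigned.await(ms, TimeUnit.MILLISECONDS);
        }
    }

    // Stand-in for the writeRegionOpenMarker step in the open path.
    static boolean writeRegionOpenMarker(boolean walEnabled) throws InterruptedException {
        if (!walEnabled) {
            return true;  // no WAL: nothing to sync, so skip the marker wait
        }
        WalKey key = new WalKey();
        key.seqAssigned.countDown();  // in the real path, the WAL append does this
        return key.waitForWriteEntry(1000);
    }

    public static void main(String[] args) throws InterruptedException {
        // With the guard, region open proceeds whether or not a WAL exists.
        System.out.println(writeRegionOpenMarker(false));  // true
        System.out.println(writeRegionOpenMarker(true));   // true
    }
}
```

Without the `!walEnabled` short-circuit, the disabled-WAL case would sit in `await()` indefinitely, which is exactly the WAITING state the jstack shows.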



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20166) Make sure the RS/Master can works fine when using table based replication storage layer

2018-03-14 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399886#comment-16399886
 ] 

Zheng Hu edited comment on HBASE-20166 at 3/15/18 4:04 AM:
---

There is a really big problem here if we use table based replication to start 
an HBase cluster.

The HMaster process works as follows:
1. Start active master initialization.
2. The master waits for region servers to report in.
3. The master assigns the meta region to one of the region servers.
4. The master creates the hbase:replication table if it does not exist.

But the RS needs to finish initializing the replication source & sink before it 
finishes startup (and that initialization must finish before opening any 
region, because we need to listen for WAL events; otherwise our replication may 
lose data). When initializing the source & sink, we need to read the 
hbase:replication table, which is not yet available because our master is 
waiting for the RS to be OK, while the RS is waiting for hbase:replication to 
be OK... a dead loop happens again...

After discussing with [~zghaobac] offline, I am considering trying to assign 
all system tables to an RS that only accepts regions of system tables (that RS 
would skip initializing the replication source and sink)...

I tried to start a mini cluster with 
hbase.balancer.tablesOnMaster.systemTablesOnly=true and 
hbase.balancer.tablesOnMaster=true; it does not seem to work, because currently 
we initialize the master logic first and then the region server logic in the 
HMaster process, and it should be ...
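The circular wait described above can be illustrated with a minimal two-thread sketch, with latches standing in for the two startup milestones (this is an illustration of the dependency cycle, not HBase code): the "master" waits for the RS to report in before creating hbase:replication, while the "RS" waits for hbase:replication before reporting in, so neither side ever makes progress.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class StartupDeadlockSketch {
    // Returns true if either side made progress within timeoutMs.
    static boolean runOnce(long timeoutMs) throws InterruptedException {
        CountDownLatch rsReportedIn = new CountDownLatch(1);
        CountDownLatch replicationTableOnline = new CountDownLatch(1);

        Thread master = new Thread(() -> {
            try {
                // Master: wait for the RS, then create hbase:replication.
                if (rsReportedIn.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                    replicationTableOnline.countDown();
                }
            } catch (InterruptedException ignored) { }
        });
        Thread regionServer = new Thread(() -> {
            try {
                // RS: wait for hbase:replication, then report in.
                if (replicationTableOnline.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                    rsReportedIn.countDown();
                }
            } catch (InterruptedException ignored) { }
        });
        master.start();
        regionServer.start();
        master.join();
        regionServer.join();
        return rsReportedIn.getCount() == 0 || replicationTableOnline.getCount() == 0;
    }

    public static void main(String[] args) throws InterruptedException {
        // Both awaits time out: the cycle means no one can move first.
        System.out.println("progress: " + runOnce(500));  // progress: false
    }
}
```

Any fix has to break the cycle on one side, which is exactly what carving out an RS that skips replication-source/sink initialization would do.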
 


> Make sure the RS/Master can works fine when using table based replication 
> storage layer
> ---
>
> Key: HBASE-20166
> URL: https://issues.apache.org/jira/browse/HBASE-20166
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>
> Currently, we cannot set up the HBase cluster, because the master will list 
> peers before finishing its initialization; if the master cannot finish 
> initialization, meta cannot come online, and if meta cannot come online, 
> listing peers will never succeed when using table based replication: 
> a dead loop.
> {code}
> 2018-03-09 15:03:50,531 ERROR [M:0;huzheng-xiaomi:46549] 
> helpers.MarkerIgnoringBase(159): * ABORTING master 
> huzheng-xiaomi,46549,1520579026550: Unhandled exception. Starting shutdown. 
> *
> java.io.UncheckedIOException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the 
> location for replica 0
>   at 
> org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
>   at 
> org.apache.hadoop.hbase.replication.TableReplicationPeerStorage.listPeerIds(TableReplicationPeerStorage.java:124)
>   at 
> org.apache.hadoop.hbase.master.replication.ReplicationPeerManager.create(ReplicationPeerManager.java:335)
>   at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:737)
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:830)
>   at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2014)
>   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:557)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Commented] (HBASE-20166) Make sure the RS/Master can works fine when using table based replication storage layer

2018-03-14 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399886#comment-16399886
 ] 

Zheng Hu commented on HBASE-20166:
--

There's a really big problem if we use table-based replication to start an 
HBase cluster:
1. Start active master initialization.
2. The master waits for region servers to report in.
3. The master assigns the meta region to one of the region servers.
4. The master creates the hbase:replication table if it does not exist.

But an RS needs to finish initializing the replication source & sink before it 
finishes startup (and that initialization must complete before opening any 
region, because we need to listen for WAL events; otherwise our replication may 
lose data). When initializing the source & sink, we need to read the 
hbase:replication table, which is not available yet, because our master is 
waiting for the region servers to be OK while the region servers are waiting 
for hbase:replication to be OK ... a dead loop again ...

After discussing with [~zghaobac] offline, I'm considering trying to assign 
all system tables to an RS that only accepts system-table region assignments 
(that RS would skip initializing the replication source and sink) ...

I've tried to start a mini cluster with 
hbase.balancer.tablesOnMaster.systemTablesOnly=true and 
hbase.balancer.tablesOnMaster=true, but it does not seem to work, because 
currently the HMaster process initializes the master logic first and the 
region server logic afterwards ...
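
The workaround above can be sketched roughly as follows. This is a hypothetical illustration only: the class `SystemTableRouter`, the method `pickServer`, and the server names are invented for this sketch and are not the actual HBase balancer API.

```java
import java.util.List;

public class SystemTableRouter {
    // HBase system tables live in the "hbase:" namespace (e.g. hbase:meta,
    // hbase:replication).
    static boolean isSystemTable(String tableName) {
        return tableName.startsWith("hbase:");
    }

    // Route system-table regions only to dedicated servers, so the RS hosting
    // them never needs the hbase:replication table during its own startup.
    static String pickServer(String tableName, List<String> systemOnlyServers,
                             List<String> normalServers) {
        List<String> candidates =
            isSystemTable(tableName) ? systemOnlyServers : normalServers;
        // floorMod keeps the index non-negative even for negative hash codes.
        return candidates.get(Math.floorMod(tableName.hashCode(), candidates.size()));
    }

    public static void main(String[] args) {
        List<String> sys = List.of("rs-system-1");
        List<String> normal = List.of("rs-1", "rs-2");
        // System table goes to the dedicated server:
        System.out.println(pickServer("hbase:replication", sys, normal)); // rs-system-1
        // User tables go to the normal pool:
        System.out.println(normal.contains(pickServer("usertable", sys, normal))); // true
    }
}
```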
 

> Make sure the RS/Master can works fine when using table based replication 
> storage layer
> ---
>
> Key: HBASE-20166
> URL: https://issues.apache.org/jira/browse/HBASE-20166
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>
> Currently, we cannot set up the HBase cluster, because the master will list 
> peers before finishing its initialization; if the master cannot finish 
> initialization, meta cannot come online, and on the other hand, if meta 
> cannot come online, listing peers will never succeed when using table-based 
> replication. A dead loop.
> {code}
> 2018-03-09 15:03:50,531 ERROR [M:0;huzheng-xiaomi:46549] 
> helpers.MarkerIgnoringBase(159): * ABORTING master 
> huzheng-xiaomi,46549,1520579026550: Unhandled exception. Starting shutdown. 
> *
> java.io.UncheckedIOException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the 
> location for replica 0
>   at 
> org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
>   at 
> org.apache.hadoop.hbase.replication.TableReplicationPeerStorage.listPeerIds(TableReplicationPeerStorage.java:124)
>   at 
> org.apache.hadoop.hbase.master.replication.ReplicationPeerManager.create(ReplicationPeerManager.java:335)
>   at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:737)
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:830)
>   at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2014)
>   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:557)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19389) Limit concurrency of put with dense (hundreds) columns to prevent write handler exhausted

2018-03-14 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-19389:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 2.0.0)
   2.1.0
   3.0.0
 Release Note: After HBASE-19389 we introduced a RegionServer 
self-protection mechanism to prevent write handlers from being exhausted by 
highly concurrent puts with dense columns, mainly through two new properties: 
hbase.region.store.parallel.put.limit.min.column.count decides which puts to 
limit (those with at least this many columns within a single column family; 
100 by default), and hbase.region.store.parallel.put.limit limits the 
concurrency (10 by default). There is another property for advanced users; 
please check the source and javadoc of StoreHotnessProtector for more details. 
   Status: Resolved  (was: Patch Available)

Add release note and close issue. Thanks for the great work, [~chancelq]!

Setting fix version to 2.1.0/3.0.0; please let us know if you'd like to 
include it in 2.0.0/1.4, bosses [~stack] [~apurtell]. Thanks.
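
The throttling described in the release note can be modeled with a semaphore. This is a simplified, hypothetical sketch: the class `StoreHotnessSketch` and the method `tryStartPut` are invented here, and the real logic lives in StoreHotnessProtector; only the two property names and their defaults come from the release note.

```java
import java.util.concurrent.Semaphore;

public class StoreHotnessSketch {
    // Default of hbase.region.store.parallel.put.limit.min.column.count:
    // puts touching fewer columns than this (per column family) are not limited.
    static final int MIN_COLUMN_COUNT = 100;

    // Default of hbase.region.store.parallel.put.limit: at most this many
    // dense puts may run in parallel against one store.
    static final Semaphore PARALLEL_PUTS = new Semaphore(10);

    // Returns true if the put may proceed; a dense put must win a permit.
    static boolean tryStartPut(int columnCount) {
        if (columnCount < MIN_COLUMN_COUNT) {
            return true; // small puts are never throttled
        }
        return PARALLEL_PUTS.tryAcquire();
    }

    public static void main(String[] args) {
        System.out.println(tryStartPut(5));   // true: below the column threshold
        System.out.println(tryStartPut(500)); // true: a permit is still available
    }
}
```

A real implementation would also release the permit when the put finishes; the sketch only shows the admission decision.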

> Limit concurrency of put with dense (hundreds) columns to prevent write 
> handler exhausted
> -
>
> Key: HBASE-19389
> URL: https://issues.apache.org/jira/browse/HBASE-19389
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 2.0.0
> Environment: 2000+ Region Servers
> PCI-E ssd
>Reporter: Chance Li
>Assignee: Chance Li
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: CSLM-concurrent-write.png, 
> HBASE-19389-branch-2-V10.patch, HBASE-19389-branch-2-V2.patch, 
> HBASE-19389-branch-2-V3.patch, HBASE-19389-branch-2-V4.patch, 
> HBASE-19389-branch-2-V5.patch, HBASE-19389-branch-2-V6.patch, 
> HBASE-19389-branch-2-V7.patch, HBASE-19389-branch-2-V8.patch, 
> HBASE-19389-branch-2-V9.patch, HBASE-19389-branch-2.patch, 
> HBASE-19389.master.patch, HBASE-19389.master.v2.patch, metrics-1.png, 
> ycsb-result.png
>
>
> In a large cluster with a large number of clients, we found that the RS's 
> handlers were sometimes all busy. After investigation we found the root 
> cause was CSLM-related, such as heavy load on the compare function. We 
> reviewed the related WALs and found that many columns (more than 1000) were 
> being written at that time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-14 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399879#comment-16399879
 ] 

Reid Chan commented on HBASE-20095:
---

Thanks Mike, you already did the rebase.
Based on v12, v13 aims to address {{javac}} errors.

> Redesign single instance pool in CleanerChore
> -
>
> Key: HBASE-20095
> URL: https://issues.apache.org/jira/browse/HBASE-20095
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20095.master.001.patch, 
> HBASE-20095.master.002.patch, HBASE-20095.master.003.patch, 
> HBASE-20095.master.004.patch, HBASE-20095.master.005.patch, 
> HBASE-20095.master.006.patch, HBASE-20095.master.007.patch, 
> HBASE-20095.master.008.patch, HBASE-20095.master.009.patch, 
> HBASE-20095.master.010.patch, HBASE-20095.master.011.patch, 
> HBASE-20095.master.012.patch, HBASE-20095.master.013.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-14 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-20095:
--
Attachment: HBASE-20095.master.013.patch

> Redesign single instance pool in CleanerChore
> -
>
> Key: HBASE-20095
> URL: https://issues.apache.org/jira/browse/HBASE-20095
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20095.master.001.patch, 
> HBASE-20095.master.002.patch, HBASE-20095.master.003.patch, 
> HBASE-20095.master.004.patch, HBASE-20095.master.005.patch, 
> HBASE-20095.master.006.patch, HBASE-20095.master.007.patch, 
> HBASE-20095.master.008.patch, HBASE-20095.master.009.patch, 
> HBASE-20095.master.010.patch, HBASE-20095.master.011.patch, 
> HBASE-20095.master.012.patch, HBASE-20095.master.013.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20147) Serial replication will be stuck if we create a table with serial replication but add it to a peer after there are region moves

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399862#comment-16399862
 ] 

Hadoop QA commented on HBASE-20147:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
13s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} hbase-client: The patch generated 1 new + 74 unchanged 
- 1 fixed = 75 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hbase-replication generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}106m 
47s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | 

[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399848#comment-16399848
 ] 

Hudson commented on HBASE-20146:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1091 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1091/])
HBASE-20146 Addendum Regions are stuck while opening when WAL is (zhangduo: rev 
3340618b497ad93277bdd0f71b69988a10363421)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java


> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase 
> service, regions are not getting opened, leading to an HMaster abort because 
> the namespace table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.
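
The jstack above shows a thread parked on a CountDownLatch inside WALKey.getWriteEntry(); with the WAL disabled, nothing appears to count the latch down, so the open hangs forever. A minimal model of that failure mode (LatchHangDemo is illustrative, not HBase code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchHangDemo {
    // Models the write-entry latch: nothing ever calls countDown(), so a
    // plain await() would block forever, exactly like the parked RS thread.
    static boolean waitForWriteEntry(long millis) {
        CountDownLatch seqIdAssigned = new CountDownLatch(1);
        try {
            // Only a timed await can return; it reports false on timeout.
            return seqIdAssigned.await(millis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("write entry arrived: " + waitForWriteEntry(100)); // false
    }
}
```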



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399840#comment-16399840
 ] 

Hudson commented on HBASE-20146:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #381 (See 
[https://builds.apache.org/job/HBase-1.3-IT/381/])
HBASE-20146 Addendum Regions are stuck while opening when WAL is (zhangduo: rev 
c6ab36edeb67a3beba6ded84026e039777e6717e)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java


> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase 
> service, regions are not getting opened, leading to an HMaster abort because 
> the namespace table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-20146.
---
Resolution: Fixed

Pushed the addendum to branch-1.2+.

> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.5.0, 1.2.7, 1.4.3, 1.3.2
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase 
> service, regions are not getting opened, leading to an HMaster abort because 
> the namespace table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20045) When running compaction, cache recent blocks.

2018-03-14 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399810#comment-16399810
 ] 

Zach York commented on HBASE-20045:
---

I have seen some interest in adding compacted blocks to the BucketCache when 
cache-on-write is enabled; otherwise read performance can get very bad after 
compactions. See [1] for where this is enabled.

If size is a concern, could we insert a reference block instead? As far as I 
understand, the data won't change with a compaction, only the cache key (which 
depends on the file name). However, a reference would incur an additional read 
from the block cache (or BucketCache).

Sorry to jump in so late in the conversation!

 

[1] 
[https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L1080]
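
Why the cache key changes: the block-cache key is derived from the hfile name (plus offset), and a compaction rewrites data into a new hfile. A tiny sketch of that effect (blockKey is a simplification of HBase's BlockCacheKey, which combines the hfile name and block offset):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheKeyDemo {
    // Simplified cache key: hfile name plus block offset.
    static String blockKey(String hfileName, long offset) {
        return hfileName + "@" + offset;
    }

    public static void main(String[] args) {
        Map<String, byte[]> cache = new HashMap<>();
        // A block of "hfile-A" is cached (e.g. via cache-on-write at flush).
        cache.put(blockKey("hfile-A", 0L), new byte[]{1, 2, 3});
        // A compaction rewrites the same data into "hfile-B": even identical
        // bytes now look up under a brand-new key, so this is a cache miss
        // and the old entry becomes unreachable garbage.
        System.out.println(cache.containsKey(blockKey("hfile-B", 0L))); // false
    }
}
```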
 

> When running compaction, cache recent blocks.
> -
>
> Key: HBASE-20045
> URL: https://issues.apache.org/jira/browse/HBASE-20045
> Project: HBase
>  Issue Type: New Feature
>  Components: BlockCache, Compaction
>Affects Versions: 2.0.0-beta-1
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> HBase already allows caching blocks on flush. This is very useful for use 
> cases where most queries are against recent data. However, as soon as there 
> is a compaction, those blocks are evicted. It would be interesting to have a 
> table-level parameter to say "when compacting, cache blocks less than 24 
> hours old". That way, when running a compaction, all blocks in which some 
> data is less than 24 hours old will be automatically cached. 
>  
> Very useful for table designs where there is a TS in the key but a long 
> history (like a year of sensor data).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-14 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399791#comment-16399791
 ] 

Reid Chan commented on HBASE-20095:
---

I just saw that HBASE-20117 also updated a few lines in HMaster.java; this may 
introduce conflicts.

> Redesign single instance pool in CleanerChore
> -
>
> Key: HBASE-20095
> URL: https://issues.apache.org/jira/browse/HBASE-20095
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20095.master.001.patch, 
> HBASE-20095.master.002.patch, HBASE-20095.master.003.patch, 
> HBASE-20095.master.004.patch, HBASE-20095.master.005.patch, 
> HBASE-20095.master.006.patch, HBASE-20095.master.007.patch, 
> HBASE-20095.master.008.patch, HBASE-20095.master.009.patch, 
> HBASE-20095.master.010.patch, HBASE-20095.master.011.patch, 
> HBASE-20095.master.012.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20193) Basic Replication Web UI - Regionserver

2018-03-14 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399781#comment-16399781
 ] 

Guanghao Zhang commented on HBASE-20193:


Great. This is useful for our replication. I think this UI is enough as a 
basic replication UI for 2.0. [~stack] Any more ideas?

> Basic Replication Web UI - Regionserver 
> 
>
> Key: HBASE-20193
> URL: https://issues.apache.org/jira/browse/HBASE-20193
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Attachments: webui.jpg, webui2.jpg
>
>
> subtask of HBASE-15809. Implementation of replication UI on Regionserver web 
> page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20180) Avoid Class::newInstance

2018-03-14 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399782#comment-16399782
 ] 

Mike Drob commented on HBASE-20180:
---

Good to know you already addressed this. I had only looked at the errors on 
branch-2 previously and didn't realize you had also fixed a bunch of warnings 
there. Resolving this, then!

> Avoid Class::newInstance
> 
>
> Key: HBASE-20180
> URL: https://issues.apache.org/jira/browse/HBASE-20180
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Labels: error-prone
> Fix For: 2.0.0
>
> Attachments: HBASE-20180.patch, HBASE-20180.v2.patch, 
> HBASE-20180.v3.patch
>
>
> Class::newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions. The suggested replacement is 
> {{getDeclaredConstructor().newInstance()}}, which wraps the checked 
> exceptions in InvocationTargetException.
> There's even an error-prone warning about it; we should promote that to an 
> error while we're fixing this.
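
The replacement pattern looks like this in practice (a minimal sketch; NewInstanceDemo and its helpers are illustrative, not HBase code):

```java
import java.lang.reflect.InvocationTargetException;

public class NewInstanceDemo {
    // Preferred replacement for clazz.newInstance(): checked exceptions thrown
    // by the constructor are wrapped in InvocationTargetException instead of
    // being thrown undeclared.
    public static <T> T create(Class<T> clazz) throws ReflectiveOperationException {
        return clazz.getDeclaredConstructor().newInstance();
    }

    // Convenience wrapper that turns the checked reflection exceptions into
    // an unchecked one, for callers that cannot declare them.
    public static Object tryCreate(Class<?> clazz) {
        try {
            return create(clazz);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot instantiate " + clazz, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(tryCreate(StringBuilder.class).getClass().getName()); // java.lang.StringBuilder
    }
}
```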



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20180) Avoid Class::newInstance

2018-03-14 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20180:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

> Avoid Class::newInstance
> 
>
> Key: HBASE-20180
> URL: https://issues.apache.org/jira/browse/HBASE-20180
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Labels: error-prone
> Fix For: 2.0.0
>
> Attachments: HBASE-20180.patch, HBASE-20180.v2.patch, 
> HBASE-20180.v3.patch
>
>
> Class::newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions. The suggested replacement is 
> {{getDeclaredConstructor().newInstance()}}, which wraps the checked 
> exceptions in InvocationTargetException.
> There's even an error-prone warning about it; we should promote that to an 
> error while we're fixing this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399774#comment-16399774
 ] 

Chia-Ping Tsai commented on HBASE-20146:


{quote}Let me commit the addendum.
{quote}
+1

 

> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false 
> to disable the WAL for the complete cluster. After restarting the HBase 
> service, regions are not getting opened, leading to an HMaster abort because 
> the namespace table regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.





[jira] [Commented] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399771#comment-16399771
 ] 

Chia-Ping Tsai commented on HBASE-20119:


v3 - rebase

> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, 
> HBASE-20119.v2.patch, HBASE-20119.v3.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create table with multiple cps, we have to write 
> the ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  





[jira] [Commented] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399769#comment-16399769
 ] 

Duo Zhang commented on HBASE-20146:
---

{quote}
Should we return the seq id rather than -1 according to the docs? addendum LGTM
{quote}

There is no 'txid' for a disabled WAL, I think. Maybe we can change the comment 
to mention this.

Let me commit the addendum.
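The case being discussed, a disabled WAL that has no real txid to hand back, can be sketched with a minimal stand-in. The Wal interface, DisabledWal class, and -1 sentinel below are illustrative only, not HBase's actual classes:

```java
// Minimal illustration (not HBase code): a WAL facade that is disabled
// cluster-wide and therefore has no transaction id to return.
public class DisabledWalSketch {
    interface Wal {
        // Returns the txid of the appended marker, or -1 when the WAL is
        // disabled and no txid exists.
        long appendMarker(String marker);
    }

    static final class DisabledWal implements Wal {
        @Override
        public long appendMarker(String marker) {
            // Nothing is written and no write-entry/latch is created,
            // so callers must not block waiting on one.
            return -1L;
        }
    }

    public static void main(String[] args) {
        Wal wal = new DisabledWal();
        long txid = wal.appendMarker("REGION_OPEN");
        System.out.println("txid=" + txid); // prints txid=-1, i.e. "no txid"
    }
}
```

The key point is that callers must treat -1 as "nothing to wait for" rather than handing it to code that expects a latch, which is what the stuck jstack above shows.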

> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false, 
> to disable the WAL for complete cluster, after restarting HBase service, 
> regions are not getting opened leading to HMaster abort as Namespace table 
> regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.





[jira] [Updated] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-14 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20119:
---
Release Note: 
1) Make all methods in TableDescriptorBuilder follow the setter pattern.
addCoprocessor -> setCoprocessor
addColumnFamily -> setColumnFamily
2) Add CoprocessorDescriptor to carry cp information
3) Add CoprocessorDescriptorBuilder to build CoprocessorDescriptor
4) TableDescriptor disallows setting a negative priority on a coprocessor, since 
parsing the negative value would cause an exception

  was:
1) Make all methods in TableDescriptorBuilder be setter pattern.
addCoprocessor -> setCoprocessor
addColumnFamily -> setColumnFamily
2) add CoprocessorDescriptor to carry cp information
3) add CoprocessorDescriptorBuilder to build CoprocessorDescriptor
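The shape described in the release note above can be sketched as a self-contained mock-up. The class names follow the note, but the method signatures, fields, and TableBuilder class are assumptions for illustration, not HBase's actual API:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Self-contained sketch of the builder shape from the release note;
// the real HBase CoprocessorDescriptor/TableDescriptorBuilder differ in detail.
public class CoprocessorBuilderSketch {
    // POJO carrying coprocessor information (release note item 2).
    static final class CoprocessorDescriptor {
        final String className;
        final int priority;
        CoprocessorDescriptor(String className, int priority) {
            this.className = className;
            this.priority = priority;
        }
    }

    // Builder for the POJO (item 3); rejects negative priority (item 4).
    static final class CoprocessorDescriptorBuilder {
        private final String className;
        private int priority = 0;
        private CoprocessorDescriptorBuilder(String className) { this.className = className; }
        static CoprocessorDescriptorBuilder newBuilder(String className) {
            return new CoprocessorDescriptorBuilder(className);
        }
        CoprocessorDescriptorBuilder setPriority(int priority) {
            if (priority < 0) throw new IllegalArgumentException("negative priority");
            this.priority = priority;
            return this;
        }
        CoprocessorDescriptor build() { return new CoprocessorDescriptor(className, priority); }
    }

    // A table builder accepting many cps at once via the setter pattern
    // (item 1: addCoprocessor -> setCoprocessor).
    static final class TableBuilder {
        private final List<CoprocessorDescriptor> cps = new ArrayList<>();
        TableBuilder setCoprocessors(Collection<CoprocessorDescriptor> descriptors) {
            cps.addAll(descriptors);
            return this;
        }
        int coprocessorCount() { return cps.size(); }
    }

    public static void main(String[] args) {
        CoprocessorDescriptor cp = CoprocessorDescriptorBuilder
            .newBuilder("org.example.MyObserver").setPriority(100).build();
        TableBuilder table = new TableBuilder().setCoprocessors(List.of(cp));
        System.out.println("coprocessors=" + table.coprocessorCount());
    }
}
```

Accepting a Collection is what removes the for-loop from the caller's side: a list of descriptors goes in with one fluent call.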


> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, 
> HBASE-20119.v2.patch, HBASE-20119.v3.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create table with multiple cps, we have to write 
> the ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  





[jira] [Updated] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-14 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20119:
---
Attachment: HBASE-20119.v3.patch

> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, 
> HBASE-20119.v2.patch, HBASE-20119.v3.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create table with multiple cps, we have to write 
> the ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  





[jira] [Commented] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399748#comment-16399748
 ] 

Hadoop QA commented on HBASE-20119:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  2s{color} 
| {color:red} HBASE-20119 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914609/HBASE-20119.v2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11968/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, 
> HBASE-20119.v2.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create table with multiple cps, we have to write 
> the ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  





[jira] [Commented] (HBASE-20105) Allow flushes to target SSD storage

2018-03-14 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399735#comment-16399735
 ] 

Jean-Marc Spaggiari commented on HBASE-20105:
-

[~anoop.hbase] storagePolicy contains the flush-related policy, while 
this.conf.get(ColumnFamilyDescriptorBuilder.STORAGE_POLICY) contains the 
CF-related policy. They can be different, and the latter is the fallback when 
there is nothing specific for this CF. So this section does:
1) Use the flush policy
2) If none, use the CF policy (HBASE-14061)
3) If none, use the global config

Regarding HConstants, I put it there because the other flush-related constants 
are there. Do you prefer it somewhere else? What do you suggest?
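The three-level fallback described in the comment can be sketched as a simple resolver. This is a self-contained illustration; the actual patch wires these values from the column family configuration and HDFS storage policies:

```java
public class StoragePolicyResolverSketch {
    // Resolve the effective policy for a flush: the flush-specific policy
    // wins, then the column family policy (HBASE-14061), then the global
    // default. Nulls mean "not configured at this level".
    static String resolve(String flushPolicy, String cfPolicy, String globalPolicy) {
        if (flushPolicy != null) return flushPolicy;
        if (cfPolicy != null) return cfPolicy;
        return globalPolicy;
    }

    public static void main(String[] args) {
        System.out.println(resolve("ALL_SSD", "ONE_SSD", "HOT")); // prints ALL_SSD
        System.out.println(resolve(null, "ONE_SSD", "HOT"));      // prints ONE_SSD
        System.out.println(resolve(null, null, "HOT"));           // prints HOT
    }
}
```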

> Allow flushes to target SSD storage
> ---
>
> Key: HBASE-20105
> URL: https://issues.apache.org/jira/browse/HBASE-20105
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, regionserver
>Affects Versions: hbase-2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20105-v0.patch, HBASE-20105-v1.patch, 
> HBASE-20105-v2.patch, HBASE-20105-v3.patch, HBASE-20105-v4.patch, 
> HBASE-20105-v5.patch, HBASE-20105-v6.patch
>
>
> On heavy-write use cases, flushes are compacted together pretty quickly. 
> Allowing flushes to go to SSD allows faster flushes and faster first 
> compactions, with subsequent compactions going to regular storage.
>  
> It would be interesting to have an option to target SSD for flushes.





[jira] [Updated] (HBASE-20105) Allow flushes to target SSD storage

2018-03-14 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-20105:

Release Note: 
Introducing the hbase.hstore.flush.storagepolicy column family parameter.
public static final String FLUSH_STORAGE_POLICY = 
"hbase.hstore.flush.storagepolicy"; 

This parameter allows the user to target a specific storage policy for flushes. 

There can be 3 storage policy settings. HBase will use the first one configured:
1) Use the column family flush policy
2) If none, use the column family storage policy
3) If none, use the globally configured storage policy

The following table creation command will instruct HBase to redirect all 
memstore flushes to SSD drives:
create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.flush.storagepolicy' => 'ALL_SSD'}} 



  was:
Introducing hbase.hstore.flush.storagepolicy column family parameter.
public static final String FLUSH_STORAGE_POLICY = 
"hbase.hstore.flush.storagepolicy"; 

This parameters allows the user to target specific storage policy for flushes. 

There can be 3 storage policy settings. HBase will use the first configured.

1) 

The following table creation command will instruct HBase to re-direct all 
memstore flushes into SSD drives:
create 't1', {NAME => 'f1',  CONFIGURATION => {'hba
se.hstore.flush.storagepolicy' => 'ALL_SSD'}} 




> Allow flushes to target SSD storage
> ---
>
> Key: HBASE-20105
> URL: https://issues.apache.org/jira/browse/HBASE-20105
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, regionserver
>Affects Versions: hbase-2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20105-v0.patch, HBASE-20105-v1.patch, 
> HBASE-20105-v2.patch, HBASE-20105-v3.patch, HBASE-20105-v4.patch, 
> HBASE-20105-v5.patch, HBASE-20105-v6.patch
>
>
> On heavy-write use cases, flushes are compacted together pretty quickly. 
> Allowing flushes to go to SSD allows faster flushes and faster first 
> compactions, with subsequent compactions going to regular storage.
>  
> It would be interesting to have an option to target SSD for flushes.





[jira] [Updated] (HBASE-20105) Allow flushes to target SSD storage

2018-03-14 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-20105:

Release Note: 
Introducing hbase.hstore.flush.storagepolicy column family parameter.
public static final String FLUSH_STORAGE_POLICY = 
"hbase.hstore.flush.storagepolicy"; 

This parameters allows the user to target specific storage policy for flushes. 

There can be 3 storage policy settings. HBase will use the first configured.

1) 

The following table creation command will instruct HBase to re-direct all 
memstore flushes into SSD drives:
create 't1', {NAME => 'f1',  CONFIGURATION => {'hba
se.hstore.flush.storagepolicy' => 'ALL_SSD'}} 



  was:
Introducing hbase.hstore.flush.storagepolicy column family parameter.
public static final String FLUSH_STORAGE_POLICY = 
"hbase.hstore.flush.storagepolicy"; 

This parameters allows the user to target specific storage policy for flushes. 

The following table creation command will instruct HBase to re-direct all 
memstore flushes into SSD drives:
create 't1', {NAME => 'f1',  CONFIGURATION => {'hba
se.hstore.flush.storagepolicy' => 'ALL_SSD'}} 




> Allow flushes to target SSD storage
> ---
>
> Key: HBASE-20105
> URL: https://issues.apache.org/jira/browse/HBASE-20105
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, regionserver
>Affects Versions: hbase-2.0.0-alpha-4
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20105-v0.patch, HBASE-20105-v1.patch, 
> HBASE-20105-v2.patch, HBASE-20105-v3.patch, HBASE-20105-v4.patch, 
> HBASE-20105-v5.patch, HBASE-20105-v6.patch
>
>
> On heavy-write use cases, flushes are compacted together pretty quickly. 
> Allowing flushes to go to SSD allows faster flushes and faster first 
> compactions, with subsequent compactions going to regular storage.
>  
> It would be interesting to have an option to target SSD for flushes.





[jira] [Commented] (HBASE-19441) Implement retry logic around starting exclusive backup operation

2018-03-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399704#comment-16399704
 ] 

Ted Yu commented on HBASE-19441:


{code}
+  public final static String BACKUP_EXCLUSIVE_OPERATION_TIMEOUT_KEY =
+  "hbase.backup.exclusive.op.timeout";
{code}
You can add '.second' as a suffix to the key name.

{code}
+  } catch (InterruptedException e1) {
+  }
{code}
Restore the interrupt status in the catch block.

Is it possible to add a test?
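Restoring the interrupt status as suggested, inside a retry/backoff loop, might look like the following self-contained sketch. The tryExclusiveOperation method is an illustrative stand-in for BackupSystemTable#startBackupExclusiveOperation, not the patch's actual code:

```java
public class RetryWithInterruptSketch {
    // Illustrative stand-in for the exclusive-lock acquisition: here it
    // simply succeeds on the given attempt number.
    static boolean tryExclusiveOperation(int attempt, int succeedOn) {
        return attempt >= succeedOn;
    }

    static boolean acquireWithRetry(int maxAttempts, long backoffMillis, int succeedOn) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (tryExclusiveOperation(attempt, succeedOn)) {
                return true;
            }
            try {
                Thread.sleep(backoffMillis);
            } catch (InterruptedException e) {
                // Restore the interrupt status instead of swallowing it,
                // so callers can still observe the interruption.
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(acquireWithRetry(5, 1L, 3)); // prints true: succeeds on 3rd try
        System.out.println(acquireWithRetry(2, 1L, 3)); // prints false: gave up
    }
}
```

Swallowing InterruptedException with an empty catch block (as in the quoted diff) hides the interrupt from the rest of the call stack; re-setting the flag keeps the thread's cancellation contract intact.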

> Implement retry logic around starting exclusive backup operation
> 
>
> Key: HBASE-19441
> URL: https://issues.apache.org/jira/browse/HBASE-19441
> Project: HBase
>  Issue Type: Improvement
>  Components: backuprestore
>Reporter: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-19441-v1.patch
>
>
> {quote}
> Specifically, the client does a checkAndPut to specific coordinates in the 
> backup table and throws an exception when that fails. Remember that backups 
> are client driven (per some design review from a long time ago), so queuing 
> is tough to reason about (we have no "centralized" execution system to use). 
> At a glance, it seems pretty straightforward to add some retry/backoff 
> semantics to BackupSystemTable#startBackupExclusiveOperation().
> {quote}
> While we are in a state in which backup operations cannot be executed in 
> parallel, it would be nice to provide some retry logic + configuration. This 
> would alleviate users from having to build this themselves.





[jira] [Commented] (HBASE-20180) Avoid Class::newInstance

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399702#comment-16399702
 ] 

Hudson commented on HBASE-20180:


Results for branch branch-2.0
[build #39 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Avoid Class::newInstance
> 
>
> Key: HBASE-20180
> URL: https://issues.apache.org/jira/browse/HBASE-20180
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Labels: error-prone
> Attachments: HBASE-20180.patch, HBASE-20180.v2.patch, 
> HBASE-20180.v3.patch
>
>
> Class::newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions. The suggested replacement is 
> {{getDeclaredConstructor().newInstance()}}, which will wrap the checked 
> exceptions in InvocationException.
> There's even an error-prone warning about it, we should promote that to error 
> while we're fixing this.
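The replacement described in the issue can be sketched with plain JDK reflection; note the wrapping exception is java.lang.reflect.InvocationTargetException:

```java
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;

public class NewInstanceSketch {
    public static void main(String[] args) throws Exception {
        // Deprecated since Java 9: ArrayList.class.newInstance() can let a
        // constructor's checked exceptions escape undeclared.
        // Preferred form: checked exceptions thrown by the constructor are
        // wrapped in InvocationTargetException instead.
        try {
            ArrayList<?> list = ArrayList.class.getDeclaredConstructor().newInstance();
            System.out.println("size=" + list.size()); // prints size=0
        } catch (InvocationTargetException e) {
            // The constructor's own exception is available as the cause.
            throw new RuntimeException(e.getCause());
        }
    }
}
```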





[jira] [Commented] (HBASE-20185) Fix ACL check for MasterRpcServices#execProcedure

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399701#comment-16399701
 ] 

Hudson commented on HBASE-20185:


Results for branch branch-2.0
[build #39 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/39//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix ACL check for MasterRpcServices#execProcedure
> -
>
> Key: HBASE-20185
> URL: https://issues.apache.org/jira/browse/HBASE-20185
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20185.master.001.patch
>
>
> Mailing thread ref: 
> [http://mail-archives.apache.org/mod_mbox/hbase-dev/201803.mbox/%3CCAAjhxrriGy_UXpC4iHCSyBB18iAbjU3Y2%2BnjQ-66i9kPPCrPRQ%40mail.gmail.com%3E]
> TLDR; HBASE-19400 messed up perms required for flushing a table.
> 
> Looks like flush and snapshot procedures are already doing permissions check 
> as part of preTableFlush/preSnapshot hooks. However, 
> LogRollMasterProcedureManager is missing access checks ([~elserj], can 
> someone look at it?)
>  
> With that, it makes no sense to put an ADMIN perm requirement which was added 
> by me in HBASE-19400. Removing it.
> However, to make things better for future, i have made few design changes 
> which will ensure 1) perm checks don't slip by mistake, 2) a suitable 
> placeholder for checks for flush & snapshot when we remove AccessController 
> for good.
>  





[jira] [Updated] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-14 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-20119:
---
Attachment: HBASE-20119.v2.patch

> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, 
> HBASE-20119.v2.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create table with multiple cps, we have to write 
> the ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cf.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build){code}
>  
>  





[jira] [Commented] (HBASE-20119) Introduce a pojo class to carry coprocessor information in order to make TableDescriptorBuilder accept multiple cp at once

2018-03-14 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399690#comment-16399690
 ] 

Chia-Ping Tsai commented on HBASE-20119:


v2 - rebase

> Introduce a pojo class to carry coprocessor information in order to make 
> TableDescriptorBuilder accept multiple cp at once
> --
>
> Key: HBASE-20119
> URL: https://issues.apache.org/jira/browse/HBASE-20119
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-20119.v0.patch.patch, HBASE-20119.v1.patch.patch, 
> HBASE-20119.v2.patch
>
>
> The way to add cp to TableDescriptorBuilder is shown below.
> {code:java}
> public TableDescriptorBuilder addCoprocessor(String className) throws 
> IOException {
>   return addCoprocessor(className, null, Coprocessor.PRIORITY_USER, null);
> }
> public TableDescriptorBuilder addCoprocessor(String className, Path 
> jarFilePath,
> int priority, final Map kvs) throws IOException {
>   desc.addCoprocessor(className, jarFilePath, priority, kvs);
>   return this;
> }
> public TableDescriptorBuilder addCoprocessorWithSpec(final String specStr) 
> throws IOException {
>   desc.addCoprocessorWithSpec(specStr);
>   return this;
> }{code}
> When loading our config to create table with multiple cps, we have to write 
> the ugly for-loop.
> {code:java}
> val builder = TableDescriptorBuilder.newBuilder(tableName)
>   .setAAA()
>   .setBBB()
> cps.map(toHBaseCp).foreach(builder.addCoprocessor)
> cfs.map(toHBaseCf).foreach(builder.addColumnFamily)
> admin.createTable(builder.build())
> {code}
> If we introduce a pojo to carry the cp data and add the method accepting 
> multiple cps and cfs, it is easier to exercise the fluent interface of 
> TableDescriptorBuilder.
> {code:java}
> admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
> .addCoprocessor(cps.map(toHBaseCp).asJavaCollection)
> .addColumnFamily(cfs.map(toHBaseCf).asJavaCollection)
> .setAAA()
> .setBBB()
> .build()){code}
>  
>  
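The proposed pojo and collection-accepting builder method can be sketched in plain Java. This is an illustrative sketch only: the names `CoprocessorDescriptor`, `DescriptorBuilder`, and `setCoprocessors` are assumptions standing in for whatever the patch actually introduces, and no HBase classes are used.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Hypothetical pojo carrying coprocessor data (name stands in for the
// real class the patch would add).
final class CoprocessorDescriptor {
    final String className;
    CoprocessorDescriptor(String className) { this.className = className; }
}

// Hypothetical builder: accepting a whole collection keeps the call
// chain fluent instead of forcing a foreach loop between builder calls.
final class DescriptorBuilder {
    private final List<CoprocessorDescriptor> cps = new ArrayList<>();

    DescriptorBuilder setCoprocessors(Collection<CoprocessorDescriptor> descs) {
        cps.addAll(descs);
        return this;
    }

    int coprocessorCount() { return cps.size(); }
}

public class BuilderSketch {
    public static void main(String[] args) {
        DescriptorBuilder b = new DescriptorBuilder()
            .setCoprocessors(Arrays.asList(
                new CoprocessorDescriptor("cp.One"),
                new CoprocessorDescriptor("cp.Two")));
        System.out.println(b.coprocessorCount()); // prints 2
    }
}
```

With a collection-accepting setter, the caller's loop collapses into a single chained call, which is the fluent-interface win the comment describes.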



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19441) Implement retry logic around starting exclusive backup operation

2018-03-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399676#comment-16399676
 ] 

Vladimir Rodionov commented on HBASE-19441:
---

Patch v1 is ready. It adds simple retry logic and a timeout to acquiring the 
backup system table exclusive lock. No procV2 is required.

> Implement retry logic around starting exclusive backup operation
> 
>
> Key: HBASE-19441
> URL: https://issues.apache.org/jira/browse/HBASE-19441
> Project: HBase
>  Issue Type: Improvement
>  Components: backuprestore
>Reporter: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-19441-v1.patch
>
>
> {quote}
> Specifically, the client does a checkAndPut to specific coordinates in the 
> backup table and throws an exception when that fails. Remember that backups 
> are client driven (per some design review from a long time ago), so queuing 
> is tough to reason about (we have no "centralized" execution system to use). 
> At a glance, it seems pretty straightforward to add some retry/backoff 
> semantics to BackupSystemTable#startBackupExclusiveOperation().
> {quote}
> While we are in a state in which backup operations cannot be executed in 
> parallel, it would be nice to provide some retry logic + configuration. This 
> would alleviate users from having to build this themselves.
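The retry/backoff semantics suggested around `BackupSystemTable#startBackupExclusiveOperation()` can be sketched in plain Java. This is a hedged sketch, not the patch itself: the lock attempt is modeled by a `BooleanSupplier`, and the timeout and backoff knobs are assumptions.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Sketch: retry the exclusive-lock acquisition (a checkAndPut in the real
// code) with exponential backoff until it succeeds or a deadline passes.
public class RetrySketch {
    static boolean acquireWithRetry(BooleanSupplier tryAcquire,
                                    long timeoutMs, long initialDelayMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        long delay = initialDelayMs;
        while (System.currentTimeMillis() < deadline) {
            if (tryAcquire.getAsBoolean()) {
                return true;                         // acquisition succeeded
            }
            TimeUnit.MILLISECONDS.sleep(delay);
            delay = Math.min(delay * 2, 1000L);      // capped exponential backoff
        }
        return false;                                // caller reports a timeout
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated lock that becomes available on the third attempt.
        int[] attempts = {0};
        boolean ok = acquireWithRetry(() -> ++attempts[0] >= 3, 5000, 10);
        System.out.println(ok + " after " + attempts[0] + " attempts");
    }
}
```

Because backups are client-driven with no centralized execution system, this loop runs entirely on the client, which is why a simple bounded retry (rather than server-side queuing) fits the design described above.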



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19258) IntegrationTest for Backup and Restore

2018-03-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-19258:
--
Attachment: (was: HBASE-19441-v1.patch)

> IntegrationTest for Backup and Restore
> --
>
> Key: HBASE-19258
> URL: https://issues.apache.org/jira/browse/HBASE-19258
> Project: HBase
>  Issue Type: Test
>  Components: integration tests
>Reporter: Josh Elser
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 3.0.0
>
>
> See chatter at 
> https://docs.google.com/document/d/1xbPlLKjOcPq2LDqjbSkF6uNDAG0mzgOxek6P3POLeMc/edit?usp=sharing
> We need to get an IntegrationTest in place for backup and restore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19441) Implement retry logic around starting exclusive backup operation

2018-03-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-19441:
--
Attachment: HBASE-19441-v1.patch

> Implement retry logic around starting exclusive backup operation
> 
>
> Key: HBASE-19441
> URL: https://issues.apache.org/jira/browse/HBASE-19441
> Project: HBase
>  Issue Type: Improvement
>  Components: backuprestore
>Reporter: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-19441-v1.patch
>
>
> {quote}
> Specifically, the client does a checkAndPut to specific coordinates in the 
> backup table and throws an exception when that fails. Remember that backups 
> are client driven (per some design review from a long time ago), so queuing 
> is tough to reason about (we have no "centralized" execution system to use). 
> At a glance, it seems pretty straightforward to add some retry/backoff 
> semantics to BackupSystemTable#startBackupExclusiveOperation().
> {quote}
> While we are in a state in which backup operations cannot be executed in 
> parallel, it would be nice to provide some retry logic + configuration. This 
> would alleviate users from having to build this themselves.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19258) IntegrationTest for Backup and Restore

2018-03-14 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-19258:
--
Attachment: HBASE-19441-v1.patch

> IntegrationTest for Backup and Restore
> --
>
> Key: HBASE-19258
> URL: https://issues.apache.org/jira/browse/HBASE-19258
> Project: HBase
>  Issue Type: Test
>  Components: integration tests
>Reporter: Josh Elser
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-19441-v1.patch
>
>
> See chatter at 
> https://docs.google.com/document/d/1xbPlLKjOcPq2LDqjbSkF6uNDAG0mzgOxek6P3POLeMc/edit?usp=sharing
> We need to get an IntegrationTest in place for backup and restore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20147) Serial replication will be stuck if we create a table with serial replication but add it to a peer after there are region moves

2018-03-14 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-20147:
--
Attachment: HBASE-20147.patch

> Serial replication will be stuck if we create a table with serial replication 
> but add it to a peer after there are region moves
> ---
>
> Key: HBASE-20147
> URL: https://issues.apache.org/jira/browse/HBASE-20147
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20147.patch, HBASE-20147.patch
>
>
> The start point for serial replication is that, if we are in the first range 
> then we are safe to push. And we will record replication barrier when the 
> replication scope is set to SERIAL even if the table is not contained in any 
> peers. So it could happen that, we record several barriers in the meta 
> already and then we add a peer which contains this table. So when 
> replicating, we will find that we are not in the first range, and then the 
> replication will be stuck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20180) Avoid Class::newInstance

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399658#comment-16399658
 ] 

Hudson commented on HBASE-20180:


Results for branch branch-2
[build #485 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Avoid Class::newInstance
> 
>
> Key: HBASE-20180
> URL: https://issues.apache.org/jira/browse/HBASE-20180
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Labels: error-prone
> Attachments: HBASE-20180.patch, HBASE-20180.v2.patch, 
> HBASE-20180.v3.patch
>
>
> Class::newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions. The suggested replacement is 
> {{getDeclaredConstructor().newInstance()}}, which will wrap the checked 
> exceptions in InvocationTargetException.
> There's even an error-prone warning about it, we should promote that to error 
> while we're fixing this.
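The replacement the issue describes is mechanical. A minimal sketch, using a JDK class rather than any HBase type:

```java
// Class::newInstance is deprecated since Java 9 because it can rethrow a
// constructor's checked exceptions without declaring them. The suggested
// replacement routes construction through Constructor::newInstance, which
// wraps anything the constructor throws in InvocationTargetException.
public class NewInstanceSketch {
    public static void main(String[] args) throws Exception {
        // Old, deprecated form:
        //   StringBuilder sb = StringBuilder.class.newInstance();
        // Suggested form:
        StringBuilder sb = StringBuilder.class
            .getDeclaredConstructor()
            .newInstance();
        sb.append("ok");
        System.out.println(sb);  // prints ok
    }
}
```

The caller now handles `ReflectiveOperationException` (including `InvocationTargetException`) explicitly instead of being surprised by an undeclared checked exception, which is exactly what the error-prone check enforces.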



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20185) Fix ACL check for MasterRpcServices#execProcedure

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399657#comment-16399657
 ] 

Hudson commented on HBASE-20185:


Results for branch branch-2
[build #485 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/485//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix ACL check for MasterRpcServices#execProcedure
> -
>
> Key: HBASE-20185
> URL: https://issues.apache.org/jira/browse/HBASE-20185
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HBASE-20185.master.001.patch
>
>
> Mailing thread ref: 
> [http://mail-archives.apache.org/mod_mbox/hbase-dev/201803.mbox/%3CCAAjhxrriGy_UXpC4iHCSyBB18iAbjU3Y2%2BnjQ-66i9kPPCrPRQ%40mail.gmail.com%3E]
> TLDR; HBASE-19400 messed up perms required for flushing a table.
> 
> Looks like flush and snapshot procedures are already doing permissions check 
> as part of preTableFlush/preSnapshot hooks. However, 
> LogRollMasterProcedureManager is missing access checks ([~elserj], can 
> someone look at it?)
>  
> With that, it makes no sense to put an ADMIN perm requirement which was added 
> by me in HBASE-19400. Removing it.
> However, to make things better for the future, I have made a few design changes 
> which will ensure 1) perm checks don't slip by mistake, 2) a suitable 
> placeholder for checks for flush & snapshot when we remove AccessController 
> for good.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399644#comment-16399644
 ] 

Hadoop QA commented on HBASE-20095:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 26s{color} 
| {color:red} hbase-server generated 2 new + 186 unchanged - 2 fixed = 188 
total (was 188) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
15s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}144m 
52s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20095 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914566/HBASE-20095.master.012.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7d661c89e083 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 
21:23:04 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 67a304d39f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| javac | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11966/artifact/patchprocess/diff-compile-javac-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11966/testReport/ |
| Max. process+thread count | 3859 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399605#comment-16399605
 ] 

Ted Yu commented on HBASE-20090:


[~stack]:
Do you want this to go into branch-2.0 ?

> Properly handle Preconditions check failure in 
> MemStoreFlusher$FlushHandler.run
> ---
>
> Key: HBASE-20090
> URL: https://issues.apache.org/jira/browse/HBASE-20090
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20090-server-61260-01-07.log, 20090.v6.txt, 
> 20090.v7.txt, 20090.v8.txt, 20090.v9.txt
>
>
> Copied the following from a comment since this was better description of the 
> race condition.
> The original description was merged to the beginning of my first comment 
> below.
> With more debug logging, we can see the scenario where the exception was 
> triggered.
> {code}
> 2018-03-02 17:28:30,097 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit: 
> Splitting TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085., 
> compaction_queue=(0:0), split_queue=1
> 2018-03-02 17:28:30,098 DEBUG 
> [RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=16020] 
> regionserver.IncreasingToUpperBoundRegionSplitPolicy: ShouldSplit because 
> info  size=6.9G, sizeToCheck=256.0M, regionsWithCommonTable=1
> 2018-03-02 17:28:30,296 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,297 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush thread woke up because memory above low 
> water=381.5 M
> 2018-03-02 17:28:30,297 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16020] 
> regionserver.MemStoreFlusher: wake up flusher due to ABOVE_ONHEAP_LOWER_MARK
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> TestTable,,1520011528142.0453f29030757eedb6e6a1c57e88c085. with size 400432696
> 2018-03-02 17:28:30,298 DEBUG [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. with size 0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Flush of region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. due to global
>  heap pressure. Flush type=ABOVE_ONHEAP_LOWER_MARKTotal Memstore Heap 
> size=381.9 MTotal Memstore Off-Heap size=0, Region memstore size=0
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: wake up by WAKEUPFLUSH_INSTANCE
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Nothing to flush for 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae.
> 2018-03-02 17:28:30,298 INFO  [MemStoreFlusher.1] 
> regionserver.MemStoreFlusher: Excluding unflushable region 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. -trying to 
> find a different region to flush.
> {code}
> Region 0453f29030757eedb6e6a1c57e88c085 was being split.
> In HRegion#flushcache, the log from else branch can be seen in 
> 20090-server-61260-01-07.log :
> {code}
>   synchronized (writestate) {
> if (!writestate.flushing && writestate.writesEnabled) {
>   this.writestate.flushing = true;
> } else {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("NOT flushing memstore for region " + this
> + ", flushing=" + writestate.flushing + ", writesEnabled="
> + writestate.writesEnabled);
>   }
> {code}
> Meaning, region 0453f29030757eedb6e6a1c57e88c085 couldn't flush, leaving 
> memory pressure at high level.
> When MemStoreFlusher ran to the following call, the region was no longer a 
> flush candidate:
> {code}
>   HRegion bestFlushableRegion =
>   getBiggestMemStoreRegion(regionsBySize, excludedRegions, true);
> {code}
> So the other region, 
> atlas_janus,,1519927429371.fbcb5e495344542daf8b499e4bac03ae. , was examined 
> next. Since the region was not receiving write, the (current) Precondition 
> check failed.
> The proposed fix is to convert the Precondition to normal return.
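The proposed conversion can be sketched in plain Java. This is an illustrative sketch, not the actual `MemStoreFlusher` code: the method and parameter names are assumptions, and the real patch operates on HBase region objects.

```java
// Sketch of the fix: instead of a Preconditions-style check that throws
// IllegalStateException when the selected region turns out to have an
// empty memstore (e.g. it was mid-split and already excluded), return
// normally so the flusher can move on to another candidate region.
public class FlushCheckSketch {
    static boolean flushOneForGlobalPressure(long regionMemstoreSize) {
        // Before: Preconditions.checkState(regionMemstoreSize > 0, ...)
        // would abort the flush handler for a region with size 0.
        if (regionMemstoreSize <= 0) {
            // After: treat it as "nothing to flush" and let the caller
            // pick a different region to relieve memory pressure.
            return false;
        }
        return true;  // proceed with the flush
    }

    public static void main(String[] args) {
        System.out.println(flushOneForGlobalPressure(0L));
        System.out.println(flushOneForGlobalPressure(400432696L));
    }
}
```

Returning `false` mirrors the race described above: the split region became unflushable between candidate selection and the check, so failing softly and retrying is safer than asserting.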



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399601#comment-16399601
 ] 

Hudson commented on HBASE-18864:


Results for branch branch-1.2
[build #267 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20189) Typo in Required Java Version error message while building HBase.

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399602#comment-16399602
 ] 

Hudson commented on HBASE-20189:


Results for branch branch-1.2
[build #267 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/267//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Typo in Required Java Version error message while building HBase.
> -
>
> Key: HBASE-20189
> URL: https://issues.apache.org/jira/browse/HBASE-20189
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Trivial
>  Labels: beginner, beginners
> Fix For: 2.0.0, 2.1.0, 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-20189.master.001.patch
>
>
> Change 'requirs' to 'requires'. See below:
> {code:java}
> $ mvn clean install -DskipTests
> ...
> [WARNING] Rule 2: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
> with message:
> Java is out of date.
>   HBase requirs at least version 1.8 of the JDK to properly build from source.
>   You appear to be using an older version. You can use either "mvn -version" 
> or
>   "mvn enforcer:display-info" to verify what version is active.
>   See the reference guide on building for more information: 
> http://hbase.apache.org/book.html#build
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20204) Add locking to RefreshFileConnections in BucketCache

2018-03-14 Thread Zach York (JIRA)
Zach York created HBASE-20204:
-

 Summary: Add locking to RefreshFileConnections in BucketCache
 Key: HBASE-20204
 URL: https://issues.apache.org/jira/browse/HBASE-20204
 Project: HBase
  Issue Type: Bug
  Components: BucketCache
Reporter: Zach York
Assignee: Zach York


This is a follow-up to HBASE-20141 where [~anoop.hbase] suggested adding 
locking for refreshing channels.

I have also seen this become an issue when a RS has to abort and it locks on 
trying to flush out the remaining data to the cache (since cache on write was 
turned on).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399579#comment-16399579
 ] 

Hadoop QA commented on HBASE-18864:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
22s{color} | {color:red} hbase-server in branch-1 failed with JDK v1.8.0_163. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} hbase-server in branch-1 failed with JDK v1.7.0_171. 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
19s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} branch-1 passed with JDK v1.8.0_163 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-1 passed with JDK v1.7.0_171 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_163. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hbase-server in the patch failed with JDK v1.8.0_163. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_171. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} hbase-server in the patch failed with JDK v1.7.0_171. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
10s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  2m 
59s{color} | {color:red} The patch causes 44 errors with Hadoop v2.4.1. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  3m 
48s{color} | {color:red} The patch causes 44 errors with Hadoop v2.5.2. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_163 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK v1.7.0_171 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 13s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.regionserver.TestGlobalThrottler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-18864 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399555#comment-16399555
 ] 

Hudson commented on HBASE-18864:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #1090 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1090/])
HBASE-18864 (addendum) Fixed unit test failure (apurtell: rev 
3d8f8aea7068b35ea59fa8975db3d3d77e7a106f)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399544#comment-16399544
 ] 

Hudson commented on HBASE-18864:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #380 (See 
[https://builds.apache.org/job/HBase-1.3-IT/380/])
HBASE-18864 (addendum) Fixed unit test failure (apurtell: rev 
63655b54925016d32f037f00d51492746655a156)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Updated] (HBASE-20135) NullPointerException during reading bloom filter when upgraded from hbase-1 to hbase-2

2018-03-14 Thread Sakthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-20135:
---
Description: 
When upgraded from hbase-1 to hbase-2, found following exception logged 
multiple times in the log:
{code:java}
ERROR [StoreFileOpenerThread-test_cf-1] regionserver.StoreFileReader: Error 
reading bloom filter meta for GENERAL_BLOOM_META -- proceeding without
java.io.IOException: Comparator class 
org.apache.hadoop.hbase.KeyValue$RawBytesComparator is not instantiable
        at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:628)
        at 
org.apache.hadoop.hbase.io.hfile.CompoundBloomFilter.<init>(CompoundBloomFilter.java:79)
        at 
org.apache.hadoop.hbase.util.BloomFilterFactory.createFromMeta(BloomFilterFactory.java:104)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileReader.loadBloomfilter(StoreFileReader.java:479)
        at 
org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:425)
        at 
org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:460)
        at 
org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:671)
        at 
org.apache.hadoop.hbase.regionserver.HStore.lambda$openStoreFiles$0(HStore.java:537)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException{code}
 
Analysis from [~anoop.hbase]:
Checking the related code, there seems to be no issue: we are not going
to fail reading the bloom filter. In the 2.0 code base we expect the
comparator class name to be null, but in 1.x we write the old KeyValue-based
raw bytes comparator class name. Reading that back, we resolve the
comparator class to null, and it looks like that is where we get the NPE.
{code:java}
else if (comparatorClassName.equals("org.apache.hadoop.hbase.KeyValue$RawBytesComparator")
    || comparatorClassName.equals("org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator")) {
  // When the comparator to be used is Bytes.BYTES_RAWCOMPARATOR, we just return null from here
  // Bytes.BYTES_RAWCOMPARATOR is not a CellComparator
  comparatorKlass = null;
}
We can add a null check before trying to instantiate the comparator class
so that we avoid these scary error logs.
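The proposed null check can be sketched as follows. This is a stand-alone illustration with simplified, hypothetical names ({{ComparatorGuard}}, {{resolveComparatorClass}}); the real logic lives around FixedFileTrailer#createComparator:

```java
// Hypothetical sketch of the proposed fix: legacy raw-bytes comparator names
// resolve to null, and a null check guards the reflective instantiation so
// that no NullPointerException is thrown.
public class ComparatorGuard {

    static Class<?> resolveComparatorClass(String comparatorClassName) {
        if (comparatorClassName == null
                || comparatorClassName.equals("org.apache.hadoop.hbase.KeyValue$RawBytesComparator")
                || comparatorClassName.equals("org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator")) {
            // Bytes.BYTES_RAWCOMPARATOR is not a CellComparator; no class to load.
            return null;
        }
        try {
            return Class.forName(comparatorClassName);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    // Proposed guard: only attempt instantiation when a class was actually resolved.
    static Object createComparator(String comparatorClassName) {
        Class<?> klass = resolveComparatorClass(comparatorClassName);
        if (klass == null) {
            return null;
        }
        try {
            return klass.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Legacy 1.x name resolves to null instead of triggering an NPE.
        System.out.println(createComparator("org.apache.hadoop.hbase.KeyValue$RawBytesComparator"));
    }
}
```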


[jira] [Commented] (HBASE-20180) Avoid Class::newInstance

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399535#comment-16399535
 ] 

Andrew Purtell commented on HBASE-20180:


> The suggested replacement is {{getDeclaredConstructor().newInstance()}}, 

error-prone flagged this issue in branch-1 code, and wherever I saw it I made 
the replacement in earlier work. [~mdrob], if you can find occurrences I missed, 
by all means fix them.
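For reference, a minimal sketch of the replacement and the wrapping behavior it gives. The demo class and helper are hypothetical, not HBase code; the point is that {{getDeclaredConstructor().newInstance()}} surfaces a constructor's checked exception wrapped in InvocationTargetException, rather than throwing it undeclared as Class::newInstance does:

```java
import java.lang.reflect.InvocationTargetException;

public class NewInstanceDemo {
    // A class whose constructor throws a checked exception; with the old
    // Class::newInstance this IOException would surface *undeclared*.
    public static class Throwing {
        public Throwing() throws java.io.IOException {
            throw new java.io.IOException("boom");
        }
    }

    // Returns the failure cause's class name, showing that the recommended
    // call wraps checked exceptions in InvocationTargetException.
    static String causeOfFailure(Class<?> klass) {
        try {
            klass.getDeclaredConstructor().newInstance();
            return "none";
        } catch (InvocationTargetException e) {
            return e.getCause().getClass().getSimpleName();
        } catch (ReflectiveOperationException e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(causeOfFailure(Throwing.class));  // IOException
    }
}
```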

> Avoid Class::newInstance
> 
>
> Key: HBASE-20180
> URL: https://issues.apache.org/jira/browse/HBASE-20180
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Labels: error-prone
> Attachments: HBASE-20180.patch, HBASE-20180.v2.patch, 
> HBASE-20180.v3.patch
>
>
> Class::newInstance is deprecated starting in Java 9 - 
> https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
> undeclared checked exceptions. The suggested replacement is 
> {{getDeclaredConstructor().newInstance()}}, which will wrap the checked 
> exceptions in InvocationException.
> There's even an error-prone warning about it, we should promote that to error 
> while we're fixing this.





[jira] [Commented] (HBASE-19994) Create a new class for RPC throttling exception, make it retryable.

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399533#comment-16399533
 ] 

Andrew Purtell commented on HBASE-19994:


lgtm, thanks

> Create a new class for RPC throttling exception, make it retryable. 
> 
>
> Key: HBASE-19994
> URL: https://issues.apache.org/jira/browse/HBASE-19994
> Project: HBase
>  Issue Type: Improvement
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Major
> Attachments: HBASE-19994-master-v01.patch
>
>
> Based on a discussion at dev mailing list.
>  
> {code:java}
> Thanks Andrew.
> +1 for the second option, I will create a jira for this change.
> Huaxiang
> On Feb 9, 2018, at 1:09 PM, Andrew Purtell  wrote:
> We have
> public class ThrottlingException extends QuotaExceededException
> public class QuotaExceededException extends DoNotRetryIOException
> Let the storage quota limits throw QuotaExceededException directly (based
> on DNRIOE). That seems fine.
> However, ThrottlingException is thrown as a result of a temporal quota,
> so it is inappropriate for this to inherit from DNRIOE, it should inherit
> IOException instead so the client is allowed to retry until successful, or
> until the retry policy is exhausted.
> We are in a bit of a pickle because we've released with this inheritance
> hierarchy, so to change it we will need a new minor, or we will want to
> deprecate ThrottlingException and use a new exception class instead, one
> which does not inherit from DNRIOE.
> On Feb 7, 2018, at 9:25 AM, Huaxiang Sun  wrote:
> Hi Mike,
>   You are right. For rpc throttling, definitely it is retryable. For storage 
> quota, I think it will be fail faster (non-retryable).
>   We probably need to separate these two types of exceptions, I will do some 
> more research and follow up.
>   Thanks,
>   Huaxiang
> On Feb 7, 2018, at 9:16 AM, Mike Drob  wrote:
> I think, philosophically, there can be two kinds of QEE -
> For throttling, we can retry. The quota is a temporal quota - you have done
> too many operations this minute, please try again next minute and
> everything will work.
> For storage, we shouldn't retry. The quota is a fixed quote - you have
> exceeded your allotted disk space, please do not try again until you have
> remedied the situation.
> Our current usage conflates the two, sometimes it is correct, sometimes not.
> On Wed, Feb 7, 2018 at 11:00 AM, Huaxiang Sun  wrote:
> Hi Stack,
>  I run into a case that a mapreduce job in hive cannot finish because
> it runs into a QEE.
> I need to look into the hive mr task to see if QEE is not handled
> correctly in hbase code or in hive code.
> I am thinking that if  QEE is a retryable exception, then it should be
> taken care of by the hbase code.
> I will check more and report back.
> Thanks,
> Huaxiang
> On Feb 7, 2018, at 8:23 AM, Stack  wrote:
> QEE being a DNRIOE seems right on the face of it.
> But if throttling, a DNRIOE is inappropriate. Where you seeing a QEE in a
> throttling scenario Huaxiang?
> Thanks,
> S
> {code}
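The direction settled on in the thread can be sketched as below. Class and method names here are illustrative stand-ins for the HBase hierarchy, not the committed patch: the throttling exception deliberately extends plain IOException rather than DoNotRetryIOException, so a client retry loop keeps retrying until the quota window passes:

```java
import java.io.IOException;

public class RpcThrottlingDemo {
    // Stand-ins for the existing HBase hierarchy referenced in the thread.
    static class DoNotRetryIOException extends IOException {}
    static class QuotaExceededException extends DoNotRetryIOException {}

    // Proposed new exception: retryable, so it deliberately extends plain
    // IOException instead of DoNotRetryIOException.
    static class RpcThrottlingException extends IOException {
        RpcThrottlingException(String msg) { super(msg); }
    }

    // A client retry loop treats anything not derived from DNRIOE as retryable.
    static boolean isRetryable(IOException e) {
        return !(e instanceof DoNotRetryIOException);
    }

    public static void main(String[] args) {
        System.out.println(isRetryable(new RpcThrottlingException("request quota exceeded")));  // true
        System.out.println(isRetryable(new QuotaExceededException()));  // false
    }
}
```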





[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399521#comment-16399521
 ] 

Umesh Agashe commented on HBASE-18864:
--

Thank you [~apurtell]!

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Reopened] (HBASE-20146) Regions are stuck while opening when WAL is disabled

2018-03-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-20146:


Reopened because there is an addendum in progress and some discussion about it. 
Please commit the addendum as soon as the discussion is settled, or revert 
the original commit. Thanks!

> Regions are stuck while opening when WAL is disabled
> 
>
> Key: HBASE-20146
> URL: https://issues.apache.org/jira/browse/HBASE-20146
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.3.1
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: HBASE-20146-addendum.patch, HBASE-20146.patch, 
> HBASE-20146.v1.patch
>
>
> On a running cluster we had set {{hbase.regionserver.hlog.enabled}} to false, 
> to disable the WAL for complete cluster, after restarting HBase service, 
> regions are not getting opened leading to HMaster abort as Namespace table 
> regions are not getting assigned. 
> jstack for region open:
> {noformat}
> "RS_OPEN_PRIORITY_REGION-BLR106595:16045-1" #159 prio=5 os_prio=0 
> tid=0x7fdfa4341000 nid=0x419d waiting on condition [0x7fdfa0467000]
> java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x87554448> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at org.apache.hadoop.hbase.wal.WALKey.getWriteEntry(WALKey.java:98)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeMarker(WALUtil.java:131)
> at 
> org.apache.hadoop.hbase.regionserver.wal.WALUtil.writeRegionEventMarker(WALUtil.java:88)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.writeRegionOpenMarker(HRegion.java:1026)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6849)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6803)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6774)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6730)
> at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:6681)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> This used to work with HBase 1.0.2 version.
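The jstack above shows the opener parked in WALKey#getWriteEntry: with the WAL disabled, the marker append never completes, so the latch is never counted down. A fail-safe pattern (sketched here with simplified, hypothetical names — not the actual addendum) is to skip writing the region-event marker entirely when no WAL is present:

```java
// Hedged sketch of a guard for the hang described above: when the WAL is
// disabled, do not attempt the region-open marker append at all, since
// nothing would ever complete the WALKey write entry.
public class MarkerWriteDemo {
    interface WAL { long appendMarker(String marker); }

    // Returns a description of what happened, so the guard is observable.
    static String writeRegionOpenMarker(WAL wal, String region) {
        if (wal == null) {
            // WAL disabled: skip the marker instead of blocking forever.
            return "skipped";
        }
        return "appended@" + wal.appendMarker("OPEN " + region);
    }

    public static void main(String[] args) {
        System.out.println(writeRegionOpenMarker(null, "ns:table,,1"));  // skipped
    }
}
```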





[jira] [Updated] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18864:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.5.0, 1.2.7, 1.4.3, 1.3.2
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Sakthi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399511#comment-16399511
 ] 

Sakthi commented on HBASE-18864:


Thank you [~apurtell] .

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399509#comment-16399509
 ] 

Andrew Purtell commented on HBASE-18864:


Nope, false alarm. All good. Pushed to all live 1.x.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399500#comment-16399500
 ] 

Ted Yu commented on HBASE-20197:


Can you address the checkstyle warnings?

Please also include a trivial change in the hbase-server module to trigger tests 
in that module.

Thanks

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various benchmarking that showed optimal performance 
> at this level.
>  The re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  
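Proposal #2 can be sketched as follows. This is a hypothetical, simplified class (the real ByteBufferWriterOutputStream differs): one fixed 8K scratch array is allocated once, and the ByteBuffer is copied out to the stream in chunks, so the buffer never grows toward the 2GB worst case:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class ChunkedWriteDemo {
    // 8K, matching BufferedOutputStream's default, per proposal #1.
    private static final int BUF_SIZE = 8 * 1024;
    // One fixed scratch buffer, allocated once (never grown to 'len').
    private final byte[] scratch = new byte[BUF_SIZE];
    private final OutputStream out;

    ChunkedWriteDemo(OutputStream out) { this.out = out; }

    // Copies [off, off+len) from the ByteBuffer to the stream in 8K chunks.
    void write(ByteBuffer b, int off, int len) throws IOException {
        ByteBuffer dup = b.duplicate();  // leave the caller's position/limit untouched
        dup.limit(off + len);
        dup.position(off);
        while (dup.hasRemaining()) {
            int n = Math.min(dup.remaining(), BUF_SIZE);
            dup.get(scratch, 0, n);
            out.write(scratch, 0, n);
        }
    }

    // Helper so the demo is easy to exercise: writes 'totalBytes' zero bytes
    // through the chunked path and returns how many arrived at the sink.
    static int demoWrite(int totalBytes) {
        try {
            ByteArrayOutputStream sink = new ByteArrayOutputStream();
            new ChunkedWriteDemo(sink).write(ByteBuffer.wrap(new byte[totalBytes]), 0, totalBytes);
            return sink.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demoWrite(20000));  // 20000, written as chunks of 8192, 8192, 3616
    }
}
```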





[jira] [Updated] (HBASE-20178) [AMv2] Throw exception if hostile environment

2018-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20178:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed after fixing javac and checkstyle issues.

> [AMv2] Throw exception if hostile environment
> -
>
> Key: HBASE-20178
> URL: https://issues.apache.org/jira/browse/HBASE-20178
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: 
> 0001-HBASE-20178-AMv2-Throw-exception-if-hostile-environm.patch, 
> HBASE-20178.branch-2.001.patch, HBASE-20178.branch-2.002.patch, 
> HBASE-20178.branch-2.003.patch, HBASE-20178.branch-2.004.patch, 
> HBASE-20178.branch-2.005.patch, HBASE-20178.branch-2.006.patch, 
> HBASE-20178.branch-2.007.patch
>
>
> New pattern where we throw an exception on procedure construction if the cluster 
> is going down, the hosting master is stopping, the table is offline, or the table 
> is read-only. Fail fast rather than later, inside the Procedure, so we can flag 
> to the caller that there is a problem.
> Changed Move/Split/Merge Procedures.
> No point queuing a move region for a table that is offline and which may 
> never be re-enabled.





[jira] [Updated] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18864:
---
 Release Note: The only currently legal values for REPLICATION_SCOPE in 
column schema are 0 (local) and 1 (global). This change enforces that only one 
of these two values can be selected when setting the attribute using HColumnDescriptor.
Fix Version/s: 1.5.0
  Component/s: (was: hbase)
   Replication
   Client

There's another failure in TestFromClientSide in branch-1. Bisecting to find 
it. Still planning to commit the addendum here.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: Client, Replication
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.5.0, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399449#comment-16399449
 ] 

Hadoop QA commented on HBASE-20197:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
23s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} hbase-common: The patch generated 5 new + 0 unchanged 
- 1 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  6m  
1s{color} | {color:red} The patch causes 10 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  7m 
56s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
58s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
19s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20197 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914570/HBASE-20197.3.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 9bad19c55e3a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 67a304d39f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 

[jira] [Commented] (HBASE-20189) Typo in Required Java Version error message while building HBase.

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399415#comment-16399415
 ] 

Hudson commented on HBASE-20189:


Results for branch branch-2
[build #484 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Typo in Required Java Version error message while building HBase.
> -
>
> Key: HBASE-20189
> URL: https://issues.apache.org/jira/browse/HBASE-20189
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Trivial
>  Labels: beginner, beginners
> Fix For: 2.0.0, 2.1.0, 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-20189.master.001.patch
>
>
> Change 'requirs' to 'requires'. See below:
> {code:java}
> $ mvn clean install -DskipTests
> ...
> [WARNING] Rule 2: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
> with message:
> Java is out of date.
>   HBase requirs at least version 1.8 of the JDK to properly build from source.
>   You appear to be using an older version. You can use either "mvn -version" 
> or
>   "mvn enforcer:display-info" to verify what version is active.
>   See the reference guide on building for more information: 
> http://hbase.apache.org/book.html#build
> {code}





[jira] [Commented] (HBASE-19389) Limit concurrency of put with dense (hundreds) columns to prevent write handler exhausted

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399414#comment-16399414
 ] 

Hudson commented on HBASE-19389:


Results for branch branch-2
[build #484 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/484//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Limit concurrency of put with dense (hundreds) columns to prevent write 
> handler exhausted
> -
>
> Key: HBASE-19389
> URL: https://issues.apache.org/jira/browse/HBASE-19389
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 2.0.0
> Environment: 2000+ Region Servers
> PCI-E ssd
>Reporter: Chance Li
>Assignee: Chance Li
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: CSLM-concurrent-write.png, 
> HBASE-19389-branch-2-V10.patch, HBASE-19389-branch-2-V2.patch, 
> HBASE-19389-branch-2-V3.patch, HBASE-19389-branch-2-V4.patch, 
> HBASE-19389-branch-2-V5.patch, HBASE-19389-branch-2-V6.patch, 
> HBASE-19389-branch-2-V7.patch, HBASE-19389-branch-2-V8.patch, 
> HBASE-19389-branch-2-V9.patch, HBASE-19389-branch-2.patch, 
> HBASE-19389.master.patch, HBASE-19389.master.v2.patch, metrics-1.png, 
> ycsb-result.png
>
>
> In a large cluster with a large number of clients, we found that the RS's 
> handlers were sometimes all busy. After investigation we found the root 
> cause is the CSLM (ConcurrentSkipListMap), e.g. heavy load on its compare 
> function. We reviewed the related WALs and found that many columns (more 
> than 1000) were being written at that time.
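
A minimal sketch of why dense puts stress the CSLM. This is illustrative only: HBase's default MemStore is backed by a ConcurrentSkipListMap keyed by Cells under a CellComparator, and here a plain String key stands in for (row, family, qualifier). Each column of a put is a separate skip-list insert, each costing O(log n) comparator calls while the put holds a write handler.

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class CslmDenseColumnSketch {
    // Stand-in for a MemStore: String key instead of a Cell + CellComparator.
    static int insertDenseRow(int columns) {
        ConcurrentSkipListMap<String, byte[]> memstore = new ConcurrentSkipListMap<>();
        for (int c = 0; c < columns; c++) {
            // Each put() walks the skip list, invoking the comparator
            // O(log n) times; a single put with 1000 columns therefore does
            // 1000 comparator-heavy inserts before releasing the handler.
            memstore.put(String.format("row1/cf:q%04d", c), new byte[0]);
        }
        return memstore.size();
    }

    public static void main(String[] args) {
        if (insertDenseRow(1000) != 1000) throw new AssertionError();
        System.out.println("OK");
    }
}
```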



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399413#comment-16399413
 ] 

Hadoop QA commented on HBASE-20187:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
5s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} The patch generated 0 new + 44 unchanged - 1 fixed = 
44 total (was 45) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}211m 
26s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}284m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914513/HBASE-20187.branch-2.003.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
shadedjars  hadoopcheck  xml  compile  |
| uname | Linux 868bc558aa87 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / ad425e8603 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| shellcheck 

[jira] [Commented] (HBASE-20189) Typo in Required Java Version error message while building HBase.

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399412#comment-16399412
 ] 

Hudson commented on HBASE-20189:


Results for branch branch-1.4
[build #255 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Typo in Required Java Version error message while building HBase.
> -
>
> Key: HBASE-20189
> URL: https://issues.apache.org/jira/browse/HBASE-20189
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Trivial
>  Labels: beginner, beginners
> Fix For: 2.0.0, 2.1.0, 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-20189.master.001.patch
>
>
> Change 'requirs' to 'requires'. See below:
> {code:java}
> $ mvn clean install -DskipTests
> ...
> [WARNING] Rule 2: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
> with message:
> Java is out of date.
>   HBase requirs at least version 1.8 of the JDK to properly build from source.
>   You appear to be using an older version. You can use either "mvn -version" 
> or
>   "mvn enforcer:display-info" to verify what version is active.
>   See the reference guide on building for more information: 
> http://hbase.apache.org/book.html#build
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399410#comment-16399410
 ] 

Hudson commented on HBASE-18864:


Results for branch branch-1.4
[build #255 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 
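
The scenario above suggests code that assumed REPLICATION_SCOPE is exactly 0 or 1 and dereferenced a null when handed 5. A hypothetical sketch of the kind of guard that avoids the NPE (class and method names are illustrative, not the actual HBase patch; the two constants mirror HConstants.REPLICATION_SCOPE_LOCAL/GLOBAL): treat any non-zero scope as replicated rather than looking it up in a structure keyed only by 0 and 1.

```java
public class ReplicationScopeSketch {
    static final int SCOPE_LOCAL = 0;  // mirrors HConstants.REPLICATION_SCOPE_LOCAL
    static final int SCOPE_GLOBAL = 1; // mirrors HConstants.REPLICATION_SCOPE_GLOBAL

    // Hypothetical guard: normalize any non-zero scope to "replicated"
    // instead of indexing a map that only knows scopes 0 and 1.
    static boolean isReplicated(int scope) {
        return scope != SCOPE_LOCAL;
    }

    public static void main(String[] args) {
        if (!isReplicated(5) || !isReplicated(SCOPE_GLOBAL) || isReplicated(SCOPE_LOCAL)) {
            throw new AssertionError("scope normalization broken");
        }
        System.out.println("OK");
    }
}
```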



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20104) Fix infinite loop of RIT when creating table on a rsgroup that has no online servers

2018-03-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399411#comment-16399411
 ] 

Hudson commented on HBASE-20104:


Results for branch branch-1.4
[build #255 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/255//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Fix infinite loop of RIT when creating table on a rsgroup that has no online 
> servers
> 
>
> Key: HBASE-20104
> URL: https://issues.apache.org/jira/browse/HBASE-20104
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0-beta-2
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-20104.branch-1.001.patch, 
> HBASE-20104.branch-1.4.001.patch, HBASE-20104.branch-2.001.patch, 
> HBASE-20104.branch-2.002.patch
>
>
> This error has been reported in 
> https://builds.apache.org/job/PreCommit-HBASE-Build/11635/testReport/org.apache.hadoop.hbase.rsgroup/TestRSGroups/org_apache_hadoop_hbase_rsgroup_TestRSGroups/
> Creating a table on an rsgroup whose region servers have all been stopped 
> or decommissioned reproduces this error.
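
An infinite RIT loop of this shape typically comes from assignment retrying forever against an empty candidate set. A hypothetical sketch (names are illustrative, not the actual patch) of failing fast instead of looping when the rsgroup has no online servers:

```java
import java.util.Collections;
import java.util.List;

public class RsGroupAssignSketch {
    // Hypothetical: pick a target server for a region from the rsgroup's
    // online servers, failing fast rather than retrying forever when the
    // group currently has no server online.
    static String pickServer(List<String> onlineServersInGroup) {
        if (onlineServersInGroup.isEmpty()) {
            throw new IllegalStateException(
                "rsgroup has no online servers; refusing to queue an unassignable region");
        }
        return onlineServersInGroup.get(0);
    }

    public static void main(String[] args) {
        try {
            pickServer(Collections.emptyList());
            throw new AssertionError("expected fail-fast");
        } catch (IllegalStateException expected) {
            System.out.println("OK");
        }
    }
}
```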



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399400#comment-16399400
 ] 

Andrew Purtell commented on HBASE-18864:


lgtm, committing

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20180) Avoid Class::newInstance

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399367#comment-16399367
 ] 

Hadoop QA commented on HBASE-20180:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
53s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-build-configuration {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
8s{color} | {color:green} hbase-build-configuration in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 50s{color} 
| {color:red} hbase-server generated 12 new + 176 unchanged - 12 fixed = 188 
total (was 188) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} hbase-mapreduce generated 0 new + 159 unchanged - 2 
fixed = 159 total (was 161) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} hbase-backup generated 0 new + 62 unchanged - 1 
fixed = 62 total (was 63) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 6s{color} | {color:green} The patch hbase-build-configuration passed 
checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} hbase-server: The patch generated 0 new + 238 
unchanged - 7 fixed = 238 total (was 245) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch hbase-endpoint passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch hbase-backup passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 

[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399349#comment-16399349
 ] 

Umesh Agashe commented on HBASE-18864:
--

Thanks [~apurtell]! We also need help with reviewing and committing the 
addendum to the patch.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20178) [AMv2] Throw exception if hostile environment

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399231#comment-16399231
 ] 

Hadoop QA commented on HBASE-20178:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 10m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 27s{color} 
| {color:red} hbase-client generated 1 new + 103 unchanged - 1 fixed = 104 
total (was 104) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} hbase-client: The patch generated 0 new + 102 
unchanged - 1 fixed = 102 total (was 103) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
15s{color} | {color:red} hbase-server: The patch generated 1 new + 587 
unchanged - 1 fixed = 588 total (was 588) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
38s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
21m  2s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
11s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}149m 
59s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}229m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20178 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914514/HBASE-20178.branch-2.007.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e91a67c2c0ef 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 

[jira] [Commented] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399227#comment-16399227
 ] 

Hadoop QA commented on HBASE-20187:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
3s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
25s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} The patch generated 0 new + 44 unchanged - 1 fixed = 
44 total (was 45) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 26s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestShutdownWithNoRegionServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-20187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914529/HBASE-20187.branch-2.004.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  javac  javadoc  unit  
shadedjars  hadoopcheck  xml  compile  |
| uname | Linux fa0c960fd308 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / ad425e8603 |
| maven | version: Apache Maven 3.5.3 

[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399229#comment-16399229
 ] 

Andrew Purtell commented on HBASE-18864:


It's fine if you'd prefer just to commit the addendum. The change can be undone 
later should the meaning of replication scope need to widen to accommodate an 
enhancement. No strong opinion here.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20203) [AMv2] CODE-BUG: Uncaught runtime exception for pid=...., state=SUCCESS; AssignProcedure

2018-03-14 Thread stack (JIRA)
stack created HBASE-20203:
-

 Summary: [AMv2] CODE-BUG: Uncaught runtime exception for pid=, 
state=SUCCESS; AssignProcedure
 Key: HBASE-20203
 URL: https://issues.apache.org/jira/browse/HBASE-20203
 Project: HBase
  Issue Type: Bug
  Components: amv2
Affects Versions: 2.0.0-beta-2
Reporter: stack
Assignee: stack


This is an odd one. Causes ITBLL to fail because region is offline.

Two seconds after reporting Finished, successful assign, another thread tries 
to finish the Procedure. The second run messes us up.

{code}
2018-03-14 11:04:07,987 INFO  [PEWorker-1] procedure2.ProcedureExecutor: 
Finished pid=3600, ppid=3591, state=SUCCESS; AssignProcedure 
table=IntegrationTestBigLinkedList, region=b58e6e7c3b2e449f80533ea999707319 in 
4.4100sec

2018-03-14 11:04:10,600 INFO  [PEWorker-2] procedure.MasterProcedureScheduler: 
pid=3600, ppid=3591, state=SUCCESS; AssignProcedure 
table=IntegrationTestBigLinkedList, region=b58e6e7c3b2e449f80533ea999707319, 
IntegrationTestBigLinkedList,\x9Ey\xE7\x9Ey\xE7\x9Ep,1521050540660.b58e6e7c3b2e449f80533ea999707319.
2018-03-14 11:04:10,606 ERROR [PEWorker-2] procedure2.ProcedureExecutor: 
CODE-BUG: Uncaught runtime exception for pid=3600, ppid=3591, state=SUCCESS; 
AssignProcedure table=IntegrationTestBigLinkedList, 
region=b58e6e7c3b2e449f80533ea999707319
java.lang.UnsupportedOperationException: Unhandled state 
REGION_TRANSITION_FINISH; there is no rollback for assignment unless we cancel 
the operation by dropping/disabling the table
  at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.rollback(RegionTransitionProcedure.java:345)
  at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.rollback(RegionTransitionProcedure.java:86)
  at org.apache.hadoop.hbase.procedure2.Procedure.doRollback(Procedure.java:859)
  at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecutor.java:1353)
  at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeRollback(ProcedureExecutor.java:1309)
  at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1178)
  at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:75)
  at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1740)
{code}
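The double-run described above amounts to a second worker thread picking up a procedure that has already reached a terminal state. A generic guard against that race can be sketched as follows; this is an illustrative sketch only (class and method names are hypothetical), not the actual ProcedureExecutor fix:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative guard: only the thread that wins the RUNNABLE -> SUCCESS
// transition may finish the procedure; any later worker sees a terminal
// state and skips instead of attempting rollback.
public class SketchProcedure {
  enum State { RUNNABLE, SUCCESS, FAILED }

  private final AtomicReference<State> state =
      new AtomicReference<>(State.RUNNABLE);

  /** Returns true only for the single thread that finishes the procedure. */
  boolean tryFinish() {
    return state.compareAndSet(State.RUNNABLE, State.SUCCESS);
  }

  /** A worker checks for a terminal state before doing any work. */
  boolean execute() {
    if (state.get() != State.RUNNABLE) {
      return false; // already finished elsewhere: skip, do not roll back
    }
    // ... perform the assignment work here ...
    return tryFinish();
  }
}
```

With such a check in place, the second PEWorker above would observe state=SUCCESS and return without entering the rollback path.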





[jira] [Updated] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20187:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch as well as putting up with my feedback, Balazs!

> Shell startup fails with IncompatibleClassChangeError
> -
>
> Key: HBASE-20187
> URL: https://issues.apache.org/jira/browse/HBASE-20187
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-20187.branch-2.001.patch, 
> HBASE-20187.branch-2.002.patch, HBASE-20187.branch-2.003.patch, 
> HBASE-20187.branch-2.004.patch
>
>
> Starting shell fails with a jline exception.
> Before {{2402f1fd43 - HBASE-20108 Remove jline exclusion from ZooKeeper}} the 
> shell starts up.
> {noformat}
> $ ./bin/hbase shell
> 2018-03-13 13:56:58,975 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> HBase Shell
> Use "help" to get list of supported commands.
> Use "exit" to quit this interactive shell.
> Version 2.0.0-beta-2, rc998e8d5f9ca3013d175ed447116c0734192f36c, Tue Mar 13 
> 13:49:59 CET 2018
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but 
> interface was expected
>   at jline.TerminalFactory.create(TerminalFactory.java:101)
>   at jline.TerminalFactory.get(TerminalFactory.java:159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:438)
>   at 
> org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:360)
>   at 
> org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:40)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:328)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:141)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:145)
>   at org.jruby.RubyClass.newInstance(RubyClass.java:994)
>   at 
> org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   

[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-14 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Status: Open  (was: Patch Available)

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  
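The proposal above (a fixed buffer allocated once, with large writes chunked through it) could look roughly like the following. This is a minimal sketch with illustrative names and a deliberately portable per-byte copy, not HBase's actual ByteBufferWriterOutputStream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Random;

// Sketch: allocate one fixed-size buffer lazily and stream large writes
// through it in chunks, instead of allocating a fresh len-sized array.
public class ChunkedByteBufferWriter {
  private static final int BUFFER_LENGTH = 8 * 1024; // proposed 8K default
  private final OutputStream out;
  private byte[] buf; // created once, on first write

  public ChunkedByteBufferWriter(OutputStream out) {
    this.out = out;
  }

  public void write(ByteBuffer b, int off, int len) throws IOException {
    if (buf == null) {
      buf = new byte[BUFFER_LENGTH];
    }
    int pos = off;
    int remaining = len;
    while (remaining > 0) {
      int chunk = Math.min(remaining, BUFFER_LENGTH);
      // Absolute per-byte gets keep the sketch portable across JDKs and
      // leave the ByteBuffer's position untouched; a real implementation
      // would use a bulk copy instead.
      for (int i = 0; i < chunk; i++) {
        buf[i] = b.get(pos + i);
      }
      out.write(buf, 0, chunk);
      pos += chunk;
      remaining -= chunk;
    }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    ChunkedByteBufferWriter writer = new ChunkedByteBufferWriter(sink);
    byte[] data = new byte[20_000]; // spans three 8K chunks
    new Random(42).nextBytes(data);
    writer.write(ByteBuffer.wrap(data), 0, data.length);
    System.out.println(Arrays.equals(sink.toByteArray(), data)); // prints "true"
  }
}
```

The worst case per write() is now one bounded 8K allocation (once per instance) regardless of len, at the cost of multiple smaller writes to the wrapped stream.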





[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-14 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Attachment: HBASE-20197.3.patch

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  





[jira] [Commented] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399202#comment-16399202
 ] 

stack commented on HBASE-20187:
---

+1 for branch-2 and branch-2.0. Our shell is broken in dev mode w/o this fix.

> Shell startup fails with IncompatibleClassChangeError
> -
>
> Key: HBASE-20187
> URL: https://issues.apache.org/jira/browse/HBASE-20187
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Blocker
> Attachments: HBASE-20187.branch-2.001.patch, 
> HBASE-20187.branch-2.002.patch, HBASE-20187.branch-2.003.patch, 
> HBASE-20187.branch-2.004.patch
>
>
> Starting shell fails with a jline exception.
> Before {{2402f1fd43 - HBASE-20108 Remove jline exclusion from ZooKeeper}} the 
> shell starts up.
> {noformat}
> $ ./bin/hbase shell
> 2018-03-13 13:56:58,975 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> HBase Shell
> Use "help" to get list of supported commands.
> Use "exit" to quit this interactive shell.
> Version 2.0.0-beta-2, rc998e8d5f9ca3013d175ed447116c0734192f36c, Tue Mar 13 
> 13:49:59 CET 2018
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but 
> interface was expected
>   at jline.TerminalFactory.create(TerminalFactory.java:101)
>   at jline.TerminalFactory.get(TerminalFactory.java:159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:438)
>   at 
> org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:360)
>   at 
> org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:40)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:328)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:141)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:145)
>   at org.jruby.RubyClass.newInstance(RubyClass.java:994)
>   at 
> org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> 

[jira] [Updated] (HBASE-20197) Review of ByteBufferWriterOutputStream.java

2018-03-14 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HBASE-20197:

Status: Patch Available  (was: Open)

> Review of ByteBufferWriterOutputStream.java
> ---
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
>  Issue Type: Improvement
>  Components: hbase
>Affects Versions: 2.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch, 
> HBASE-20197.3.patch
>
>
> In looking at this class, two things caught my eye.
>  # Default buffer size of 4K
>  # Re-sizing of buffer on demand
>  
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern 
> JVMs.  This is due to various bench-marking that showed optimal performance 
> at this level.
>  The Re-sizing buffer looks a bit "unsafe":
>  
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
> buf = new byte[len];
>   } else {
> if (this.tempBuf == null) {
>   this.tempBuf = new byte[TEMP_BUF_LENGTH];
> }
> buf = this.tempBuf;
>   }
> ...
> }
> {code}
> If this method gets one call with a 'len' of 4000, then 4001, then 4002, then 
> 4003, etc. then the 'tempBuf' will be re-created many times.  Also, it seems 
> unsafe to create a buffer as large as the 'len' input.  This could 
> theoretically lead to an internal buffer of 2GB for each instance of this 
> class.
> I propose:
>  # Increase the default buffer size to 8K
>  # Create the buffer once and chunk the output instead of loading data into a 
> single array and writing it to the output stream.
>  





[jira] [Updated] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20202:
--
Description: 
Found this one running ITBLLs. We'd just finished splitting a region 
91655de06786f786b0ee9c51280e1ee6 and then a move for it comes in. The move 
fails in an interesting way. The location has been removed from the regionnode 
kept by the Master. HBASE-20178 adds macro checks on context. Need to add a few 
checks to the likes of MoveRegionProcedure so we don't try to move an 
offlined/split parent.

{code}
2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
2018-03-14 10:21:45,679 INFO  [PEWorker-15] procedure.MasterProcedureScheduler: 
pid=3194, ppid=3193, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=IntegrationTestBigLinkedList, 
region=af198ca64b196fb3d2f5b3e815b2dad0, 
server=ve0530.halxg.cloudera.com,16020,1521007509855, 
IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure.MasterProcedureScheduler: 
pid=3187, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
 source=ve0530.halxg.cloudera.com,16020,1521007509855, 
destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
pid=3194 updating hbase:meta 
row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
 regionState=CLOSING
2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=3195, ppid=3187, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
server=ve0530.halxg.cloudera.com,16020,1521007509855}]
2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
location=ve0530.halxg.cloudera.com,16020,1521007509855
2018-03-14 10:21:45,752 INFO  [PEWorker-15] procedure.MasterProcedureScheduler: 
pid=3195, ppid=3187, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=IntegrationTestBigLinkedList, 
region=91655de06786f786b0ee9c51280e1ee6, 
server=ve0530.halxg.cloudera.com,16020,1521007509855, 
IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
server=ve0530.halxg.cloudera.com,16020,1521007509855
java.lang.NullPointerException
  at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
  at org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
  at org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
  at org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
  at org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:197)
  at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:304)
  at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:86)
  at 

[jira] [Updated] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18864:
-
Labels:   (was: beginner)

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Comment Edited] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399186#comment-16399186
 ] 

Umesh Agashe edited comment on HBASE-18864 at 3/14/18 7:46 PM:
---

[~apurtell], I reopened the Jira because the patch was tested against branch-1.2 
but fails tests on branches 1.3 and 1.4. [~jatsakthi] has uploaded the addendum, 
which needs to be reviewed and, if okay, committed. If it is not okay, let me 
know whether the commit needs to be reverted or a separate Jira is needed.

Regarding the fix: the initial patches (attached to the JIRA) took the approach 
you suggested and tried to fix the NPE at the point where the value is used. 
After some discussion, the thought was to reject invalid values at the entry 
point on the client side instead. It looks like reviewers have differing 
opinions about which approach is okay. Let us know.


was (Author: uagashe):
[~apurtell], I reopened the Jira as patch was tested against branch-1.2 but 
fails test on branch 1.3, 1.4. [~jatsakthi] has uploaded the addendum which 
needs to be reviewed and if okay, committed. If this is now okay, let me know 
if commit needs to be reverted or separate Jira is needed.

Regarding fix, initial patches (attached to the JIRA) took an approach you have 
suggested and tried to fix NPE when value is being used. After some discussion, 
the thought was that reject invalid values at the entry point on client side 
itself. Looks like there are different opinions from reviews about which 
approach is okay. Let us know.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 
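The client-side rejection discussed in the comment above could be sketched as follows. The class name and the assumption that only 0 (local) and 1 (global) are valid scopes are for illustration; this is not HBase's actual validation API:

```java
// Hypothetical entry-point validation: reject out-of-range REPLICATION_SCOPE
// values when the table descriptor is built, rather than letting them surface
// later as a NullPointerException on the peer cluster.
public class ReplicationScopeValidator {
  static int validate(int scope) {
    if (scope != 0 && scope != 1) {
      throw new IllegalArgumentException("Invalid REPLICATION_SCOPE " + scope
          + "; expected 0 (local) or 1 (global)");
    }
    return scope;
  }
}
```

Under this approach, the `alter` with REPLICATION_SCOPE => '5' in the scenario above would fail fast on the client instead of producing an NPE during replication.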





[jira] [Commented] (HBASE-20131) NPE in MoveRegionProcedure via IntegrationTestLoadAndVerify with CM

2018-03-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399192#comment-16399192
 ] 

stack commented on HBASE-20131:
---

Of note, an interesting variant on this one, HBASE-20202

> NPE in MoveRegionProcedure via IntegrationTestLoadAndVerify with CM
> ---
>
> Key: HBASE-20131
> URL: https://issues.apache.org/jira/browse/HBASE-20131
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-20131.001.patch
>
>
> I believe the error is that a MoveRegionProcedure comes in via ChaosMonkey 
> for an unassigned region belonging to a disabled table (also due to CM), 
> which causes an NPE when we try to set the null original location into the 
> protobuf.
> {noformat}
> 2018-03-02 23:07:00,146 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=23,queue=2,port=2] ipc.RpcServer: 
> Unexpected throwable object 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos$MoveRegionStateData$Builder.setSourceServer(MasterProcedureProtos.java:26127)
>   at 
> org.apache.hadoop.hbase.master.assignment.MoveRegionProcedure.serializeStateData(MoveRegionProcedure.java:133)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureUtil.convertToProtoProcedure(ProcedureUtil.java:198)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.writeEntry(ProcedureWALFormat.java:211)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.writeInsert(ProcedureWALFormat.java:222)
>   at 
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:490)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.submitProcedure(ProcedureExecutor.java:863)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.submitProcedure(ProcedureExecutor.java:832)
>   at 
> org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.submitProcedure(ProcedureSyncWait.java:111)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.moveAsync(AssignmentManager.java:561)
>   at org.apache.hadoop.hbase.master.HMaster.move(HMaster.java:1707)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.moveRegion(MasterRpcServices.java:1324)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304){noformat}
> IntegrationTestLoadAndVerify also failed, but I'm not sure if it's related to 
> this, or just a problem with the test. The test failed because the table was 
> left offline after it was disabled and appears not to have been re-enabled. 
> Still debugging that side.
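The fix direction described above (reject the move up front rather than NPE during serialization) can be sketched as a precondition check. The names below are illustrative, not HBase's actual MasterRpcServices code:

```java
import java.util.Objects;

// Hypothetical precondition check: refuse to schedule a move for a region
// with no current location (e.g. unassigned because its table is disabled)
// instead of failing later with an NPE while serializing a null source
// server into the procedure's protobuf state.
public class RegionMoveValidator {
  static void checkMovable(Object sourceServer, boolean tableEnabled) {
    if (!tableEnabled) {
      throw new IllegalStateException("table is disabled; refusing to move region");
    }
    Objects.requireNonNull(sourceServer,
        "region has no current location; cannot build a move plan");
  }
}
```

Rejecting the request at submission time turns the CODE-BUG log above into an ordinary client-visible error.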





[jira] [Commented] (HBASE-20187) Shell startup fails with IncompatibleClassChangeError

2018-03-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399191#comment-16399191
 ] 

Josh Elser commented on HBASE-20187:


[~stack], this should hit 2.0. You OK?

> Shell startup fails with IncompatibleClassChangeError
> -
>
> Key: HBASE-20187
> URL: https://issues.apache.org/jira/browse/HBASE-20187
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 2.1.0
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Blocker
> Attachments: HBASE-20187.branch-2.001.patch, 
> HBASE-20187.branch-2.002.patch, HBASE-20187.branch-2.003.patch, 
> HBASE-20187.branch-2.004.patch
>
>
> Starting shell fails with a jline exception.
> Before {{2402f1fd43 - HBASE-20108 Remove jline exclusion from ZooKeeper}} the 
> shell starts up.
> {noformat}
> $ ./bin/hbase shell
> 2018-03-13 13:56:58,975 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> HBase Shell
> Use "help" to get list of supported commands.
> Use "exit" to quit this interactive shell.
> Version 2.0.0-beta-2, rc998e8d5f9ca3013d175ed447116c0734192f36c, Tue Mar 13 
> 13:49:59 CET 2018
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but 
> interface was expected
>   at jline.TerminalFactory.create(TerminalFactory.java:101)
>   at jline.TerminalFactory.get(TerminalFactory.java:159)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:438)
>   at 
> org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:360)
>   at 
> org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:40)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:130)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:328)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:141)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:145)
>   at org.jruby.RubyClass.newInstance(RubyClass.java:994)
>   at 
> org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)
>   at 
> org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:192)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:318)
>   at 
> org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:339)
>   at 
> org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:73)
>   at 
> org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:77)
>   at 
> org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:144)
>   at 
> 

[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399194#comment-16399194
 ] 

Umesh Agashe commented on HBASE-18864:
--

Also, IMO the 'beginner' label on this Jira is misleading, considering the
review comments here. Removing the label.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
>  Labels: beginner
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Created] (HBASE-20202) [AMv2] Don't move region if its a split parent or offlined

2018-03-14 Thread stack (JIRA)
stack created HBASE-20202:
-

 Summary: [AMv2] Don't move region if its a split parent or offlined
 Key: HBASE-20202
 URL: https://issues.apache.org/jira/browse/HBASE-20202
 Project: HBase
  Issue Type: Sub-task
  Components: amv2
Affects Versions: 2.0.0-beta-2
Reporter: stack
Assignee: stack


Found this one running ITBLLs. We'd just finished splitting a region when a
move came in. The move fails in an interesting way: the location has been
removed from the regionnode.
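As an aside, the NPE mechanism is easy to reproduce in isolation: {{ConcurrentHashMap}} rejects null keys, so a lookup keyed on a ServerName that has been cleared from the region node fails exactly like the {{ConcurrentHashMap.get}} frame in the trace below. This is a standalone illustration with hypothetical names, not HBase code:

```java
import java.util.concurrent.ConcurrentHashMap;

// Standalone illustration (hypothetical names, not HBase code): a null key
// passed to ConcurrentHashMap.get() throws NullPointerException immediately,
// which is how a region with no location surfaces as a CODE-BUG.
public class NullKeyLookup {
    static String lookup(ConcurrentHashMap<String, String> serversByName, String serverName) {
        try {
            // Throws NPE when serverName is null, before any map traversal.
            return serversByName.get(serverName);
        } catch (NullPointerException e) {
            return "NPE: region had no location";
        }
    }

    public static void main(String[] args) {
        // A cleared location behaves like a null ServerName.
        System.out.println(lookup(new ConcurrentHashMap<>(), null));
    }
}
```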

{code}
2018-03-14 10:21:45,678 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
Finished pid=3177, state=SUCCESS; SplitTableRegionProcedure 
table=IntegrationTestBigLinkedList, parent=91655de06786f786b0ee9c51280e1ee6, 
daughterA=b67bf6b79eaa83de788b0519f782ce8e, 
daughterB=99cf6ddb38cad08e3aa7635b6cac2e7b in 10.0210sec   
2018-03-14 10:21:45,679 INFO  [PEWorker-15] procedure.MasterProcedureScheduler: 
pid=3194, ppid=3193, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=IntegrationTestBigLinkedList, 
region=af198ca64b196fb3d2f5b3e815b2dad0, 
server=ve0530.halxg.cloudera.com,16020,1521007509855, 
IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.
2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure.MasterProcedureScheduler: 
pid=3187, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
hri=IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.,
 source=ve0530.halxg.cloudera.com,16020,1521007509855, 
destination=ve0528.halxg.cloudera.com,16020,1521047890874, 
IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
2018-03-14 10:21:45,680 INFO  [PEWorker-15] assignment.RegionStateStore: 
pid=3194 updating hbase:meta 
row=IntegrationTestBigLinkedList,\xAA\xAA\xAA\xAA\xAA\xAA\xAA\xA0,1521047891276.af198ca64b196fb3d2f5b3e815b2dad0.,
 regionState=CLOSING
2018-03-14 10:21:45,680 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=3195, ppid=3187, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
server=ve0530.halxg.cloudera.com,16020,1521007509855}]
2018-03-14 10:21:45,683 INFO  [PEWorker-15] 
assignment.RegionTransitionProcedure: Dispatch pid=3194, ppid=3193, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=IntegrationTestBigLinkedList, region=af198ca64b196fb3d2f5b3e815b2dad0, 
server=ve0530.halxg.cloudera.com,16020,1521007509855; rit=CLOSING, 
location=ve0530.halxg.cloudera.com,16020,1521007509855
2018-03-14 10:21:45,752 INFO  [PEWorker-15] procedure.MasterProcedureScheduler: 
pid=3195, ppid=3187, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=IntegrationTestBigLinkedList, 
region=91655de06786f786b0ee9c51280e1ee6, 
server=ve0530.halxg.cloudera.com,16020,1521007509855, 
IntegrationTestBigLinkedList,\x0C0\xC3\x0C0\xC3\x0C0,1521045713137.91655de06786f786b0ee9c51280e1ee6.
2018-03-14 10:21:45,753 ERROR [PEWorker-15] procedure2.ProcedureExecutor: 
CODE-BUG: Uncaught runtime exception: pid=3195, ppid=3187, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=IntegrationTestBigLinkedList, region=91655de06786f786b0ee9c51280e1ee6, 
server=ve0530.halxg.cloudera.com,16020,1521007509855
java.lang.NullPointerException
  at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
  at org.apache.hadoop.hbase.master.assignment.RegionStates.getOrCreateServer(RegionStates.java:934)
  at org.apache.hadoop.hbase.master.assignment.RegionStates.addRegionToServer(RegionStates.java:962)
  at org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1548)
  at org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:197)
  at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:304)
  at 

[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399186#comment-16399186
 ] 

Umesh Agashe commented on HBASE-18864:
--

[~apurtell], I reopened the Jira as the patch was tested against branch-1.2 but
fails tests on branches 1.3 and 1.4. [~jatsakthi] has uploaded the addendum,
which needs to be reviewed and, if okay, committed. If this is not okay, let me
know whether the commit needs to be reverted or a separate Jira is needed.

Regarding the fix, the initial patches (attached to the JIRA) took the approach
you suggested and tried to fix the NPE where the value is being used. After
some discussion, the thought was to reject invalid values at the entry point on
the client side itself. It looks like there are different opinions from
reviewers about which approach is okay. Let us know.
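For illustration, the two approaches discussed above can be sketched as follows; the class, method names, and constants (0/1 mirroring local/global scope) are hypothetical stand-ins, not the actual HColumnDescriptor code:

```java
// Hypothetical sketch of the two approaches: reject unknown replication
// scopes at the client entry point, or tolerate them where the value is
// consumed. Constants 0/1 mirror local/global; any other int stands for a
// possible future strategy.
public class ScopeCheck {
    static final int LOCAL = 0, GLOBAL = 1;

    // Approach 1: fail fast in the descriptor setter on the client side.
    static int validateAtEntry(int scope) {
        if (scope != LOCAL && scope != GLOBAL) {
            throw new IllegalArgumentException("unknown REPLICATION_SCOPE " + scope);
        }
        return scope;
    }

    // Approach 2: handle the value where it is used, treating any non-local
    // scope as replicated, which leaves room for new strategies.
    static boolean isReplicated(int scope) {
        return scope != LOCAL;
    }

    public static void main(String[] args) {
        System.out.println(isReplicated(5)); // an unknown scope still replicates
    }
}
```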

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
>  Labels: beginner
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-20201) HBase must provide commons-cli-1.4 for mapreduce jobs with H3

2018-03-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399176#comment-16399176
 ] 

Josh Elser commented on HBASE-20201:


Best as I can tell for H2, we'd be broken in the same manner as we are for H3. 
I can only guess that the reason we haven't seen this previously is due to jobs 
that don't use the AbstractHBaseTool (admittedly, I'm surprised we haven't seen 
it yet).

> HBase must provide commons-cli-1.4 for mapreduce jobs with H3
> -
>
> Key: HBASE-20201
> URL: https://issues.apache.org/jira/browse/HBASE-20201
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Been trying to get some pre-existing mapreduce tests working against HBase2.
> There's an inherent problem right now that hadoop-common depends on 
> commons-cli-1.2 and HBase depends on commons-cli-1.4. This means that if you 
> use {{$(hbase mapredcp)}} to submit a mapreduce job via {{hadoop jar}}, 
> you'll get an error like:
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/commons/cli/DefaultParser
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.isHelpCommand(AbstractHBaseTool.java:165)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:133)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:270)
>     at hbase_it.App.main(App.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.cli.DefaultParser
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 11 more{noformat}
> My guess is that in previous versions, we didn't have this conflict with 
> Hadoop (we were on the same version). Now, we're not.
> I see two routes:
>  # We just alter the mapredcp to include our "correct" commons-cli-1.4 on the 
> classpath and remind users to make use of the {{HADOOP_USER_CLASSPATH_FIRST}} 
> environment variable
>  # We put commons-cli into our hbase-thirdparty and stop using it directly.
> The former is definitely quicker, but I'm guessing the latter would insulate 
> us more nicely.
> Thoughts, [~stack], [~busbey], [~mdrob] (and others who have done H3 work?)
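One hedged way to see which side of this conflict a JVM lands on: {{DefaultParser}} only exists in commons-cli 1.3+, so probing for the class shows what a job classpath would resolve before the {{NoClassDefFoundError}} above surfaces at runtime. This probe is a hypothetical diagnostic, not part of HBase:

```java
// Hypothetical diagnostic (not part of HBase): DefaultParser was introduced
// in commons-cli 1.3, so its presence tells commons-cli 1.2 apart from the
// 1.4 that AbstractHBaseTool needs.
public class CliProbe {
    static boolean hasDefaultParser() {
        try {
            Class.forName("org.apache.commons.cli.DefaultParser");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasDefaultParser()
            ? "commons-cli >= 1.3 on classpath"
            : "commons-cli 1.2 (or none) on classpath");
    }
}
```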





[jira] [Commented] (HBASE-20201) HBase must provide commons-cli-1.4 for mapreduce jobs with H3

2018-03-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399173#comment-16399173
 ] 

Josh Elser commented on HBASE-20201:


Just an FYI (not a blame) [~appy]. Looks like your change to bump commons-cli 
to 1.3.1 was what got the ball rolling here.

> HBase must provide commons-cli-1.4 for mapreduce jobs with H3
> -
>
> Key: HBASE-20201
> URL: https://issues.apache.org/jira/browse/HBASE-20201
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Been trying to get some pre-existing mapreduce tests working against HBase2.
> There's an inherent problem right now that hadoop-common depends on 
> commons-cli-1.2 and HBase depends on commons-cli-1.4. This means that if you 
> use {{$(hbase mapredcp)}} to submit a mapreduce job via {{hadoop jar}}, 
> you'll get an error like:
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/commons/cli/DefaultParser
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.isHelpCommand(AbstractHBaseTool.java:165)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:133)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:270)
>     at hbase_it.App.main(App.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.cli.DefaultParser
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 11 more{noformat}
> My guess is that in previous versions, we didn't have this conflict with 
> Hadoop (we were on the same version). Now, we're not.
> I see two routes:
>  # We just alter the mapredcp to include our "correct" commons-cli-1.4 on the 
> classpath and remind users to make use of the {{HADOOP_USER_CLASSPATH_FIRST}} 
> environment variable
>  # We put commons-cli into our hbase-thirdparty and stop using it directly.
> The former is definitely quicker, but I'm guessing the latter would insulate 
> us more nicely.
> Thoughts, [~stack], [~busbey], [~mdrob] (and others who have done H3 work?)





[jira] [Comment Edited] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399168#comment-16399168
 ] 

Andrew Purtell edited comment on HBASE-18864 at 3/14/18 7:34 PM:
-

This is committed but the Jira is still in Patch Available state. Let's stop 
doing this!

I don't think this is the right fix. In theory someone can plug in a new 
strategy (perhaps quite theoretical at the moment, but possible) and so we made 
scope an integer to express more options, beyond just local/global, than a 
boolean could. If this ever happens the check in HCD would be incorrect. Better 
to fix the NPE at the source.


was (Author: apurtell):
This is committed but the Jira is still in Patch Available state. Let's stop 
doing this!

I don't think this necessarily the right fix. In theory someone can plug in a 
new strategy (perhaps quite theoretical at the moment, but possible) and so we 
made scope an integer to express more options, beyond just local/global, than a 
boolean could. If this ever happens the check in HCD would be incorrect. Better 
to fix the NPE at the source.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
>  Labels: beginner
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-18864) NullPointerException thrown when adding rows to a table from peer cluster, table with replication factor other than 0 or 1

2018-03-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399168#comment-16399168
 ] 

Andrew Purtell commented on HBASE-18864:


This is committed but the Jira is still in Patch Available state. Let's stop 
doing this!

I don't think this is necessarily the right fix. In theory someone can plug in a 
new strategy (perhaps quite theoretical at the moment, but possible) and so we 
made scope an integer to express more options, beyond just local/global, than a 
boolean could. If this ever happens the check in HCD would be incorrect. Better 
to fix the NPE at the source.

> NullPointerException thrown when adding rows to a table from peer cluster, 
> table with replication factor other than 0 or 1
> --
>
> Key: HBASE-18864
> URL: https://issues.apache.org/jira/browse/HBASE-18864
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.3.0
>Reporter: smita
>Assignee: Sakthi
>Priority: Major
>  Labels: beginner
> Fix For: 1.3.2, 1.2.7, 1.4.3
>
> Attachments: hbase-18864.branch-1.2.001.patch, 
> hbase-18864.branch-1.2.002.patch, hbase-18864.branch-1.2.003.patch, 
> hbase-18864.branch-1.2.004.patch, hbase-18864.branch-1.addendum.patch
>
>
> Scenario:
> =
> add_peer
> create a table
> alter table with REPLICATION_SCOPE => '5'
> enable table replication
> login to peer cluster and try putting data to the table 





[jira] [Commented] (HBASE-20196) Maintain all regions with same size in memstore flusher

2018-03-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399165#comment-16399165
 ] 

Ted Yu commented on HBASE-20196:


[~eshcar]:
Mind taking a look ?

Thanks

> Maintain all regions with same size in memstore flusher
> ---
>
> Key: HBASE-20196
> URL: https://issues.apache.org/jira/browse/HBASE-20196
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: 20196.v1.txt
>
>
> Here is the javadoc for getCopyOfOnlineRegionsSortedByOffHeapSize() :
> {code}
>*   the biggest.  If two regions are the same size, then the last one 
> found wins; i.e. this
>*   method may NOT return all regions.
> {code}
> Currently the value type is HRegion - we only store one region per size.
> I think we should change the value type to Collection so that we don't
> miss any region (potentially one with a big size).
> e.g. Suppose there are three regions (R1, R2 and R3) with sizes 100, 100 and
> 1, respectively.
> Using the current data structure, R2 would be stored in the Map, evicting R1
> from the Map.
> This means that the current code would choose to flush regions R2 and R3,
> releasing 101 from memory.
> If the value type is changed to Collection, we would flush both R1 and
> R2, achieving faster memory reclamation.
> Confirmed with [~eshcar] over in HBASE-20090
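The eviction described above can be sketched with plain collections; strings stand in for HRegion and the sizes come from the example, so this is an illustration rather than the actual flusher code:

```java
import java.util.*;

// Illustration of the proposed data-structure change (strings stand in for
// HRegion; sizes 100/100/1 come from the example above).
public class FlushCandidates {
    static Map<String, Long> sizes() {
        Map<String, Long> m = new LinkedHashMap<>();
        m.put("R1", 100L);
        m.put("R2", 100L);
        m.put("R3", 1L);
        return m;
    }

    // Current shape: one region per size, so R2 silently evicts R1.
    static SortedMap<Long, String> onePerSize() {
        SortedMap<Long, String> bySize = new TreeMap<>(Comparator.reverseOrder());
        sizes().forEach((region, size) -> bySize.put(size, region));
        return bySize;
    }

    // Proposed shape: every region sharing a size stays a flush candidate.
    static SortedMap<Long, List<String>> allPerSize() {
        SortedMap<Long, List<String>> bySize = new TreeMap<>(Comparator.reverseOrder());
        sizes().forEach((region, size) ->
            bySize.computeIfAbsent(size, k -> new ArrayList<>()).add(region));
        return bySize;
    }

    public static void main(String[] args) {
        System.out.println(onePerSize()); // {100=R2, 1=R3}: R1 is lost
        System.out.println(allPerSize()); // {100=[R1, R2], 1=[R3]}
    }
}
```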





[jira] [Commented] (HBASE-20090) Properly handle Preconditions check failure in MemStoreFlusher$FlushHandler.run

2018-03-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399149#comment-16399149
 ] 

Hadoop QA commented on HBASE-20090:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
7s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
36s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
19m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}135m 
24s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914503/20090.v9.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 51ea5f891167 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 84ee32c723 |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 

[jira] [Commented] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-14 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399143#comment-16399143
 ] 

Mike Drob commented on HBASE-20095:
---

Had an issue applying it, posting what I ended up with. I think the only change 
I dropped is the removal of the indentation fixes in HMaster, but will wait for 
precommit just in case.

> Redesign single instance pool in CleanerChore
> -
>
> Key: HBASE-20095
> URL: https://issues.apache.org/jira/browse/HBASE-20095
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20095.master.001.patch, 
> HBASE-20095.master.002.patch, HBASE-20095.master.003.patch, 
> HBASE-20095.master.004.patch, HBASE-20095.master.005.patch, 
> HBASE-20095.master.006.patch, HBASE-20095.master.007.patch, 
> HBASE-20095.master.008.patch, HBASE-20095.master.009.patch, 
> HBASE-20095.master.010.patch, HBASE-20095.master.011.patch, 
> HBASE-20095.master.012.patch
>
>






[jira] [Updated] (HBASE-20095) Redesign single instance pool in CleanerChore

2018-03-14 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-20095:
--
Attachment: HBASE-20095.master.012.patch

> Redesign single instance pool in CleanerChore
> -
>
> Key: HBASE-20095
> URL: https://issues.apache.org/jira/browse/HBASE-20095
> Project: HBase
>  Issue Type: Improvement
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20095.master.001.patch, 
> HBASE-20095.master.002.patch, HBASE-20095.master.003.patch, 
> HBASE-20095.master.004.patch, HBASE-20095.master.005.patch, 
> HBASE-20095.master.006.patch, HBASE-20095.master.007.patch, 
> HBASE-20095.master.008.patch, HBASE-20095.master.009.patch, 
> HBASE-20095.master.010.patch, HBASE-20095.master.011.patch, 
> HBASE-20095.master.012.patch
>
>






[jira] [Commented] (HBASE-20201) HBase must provide commons-cli-1.4 for mapreduce jobs with H3

2018-03-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399139#comment-16399139
 ] 

Josh Elser commented on HBASE-20201:


{quote}Can we do w/o commons-cli? If we do #1, are we broke on hadoop2
{quote}
I knew someone would ask that :P. Gotta look at that still.

> HBase must provide commons-cli-1.4 for mapreduce jobs with H3
> -
>
> Key: HBASE-20201
> URL: https://issues.apache.org/jira/browse/HBASE-20201
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Been trying to get some pre-existing mapreduce tests working against HBase2.
> There's an inherent problem right now that hadoop-common depends on 
> commons-cli-1.2 and HBase depends on commons-cli-1.4. This means that if you 
> use {{$(hbase mapredcp)}} to submit a mapreduce job via {{hadoop jar}}, 
> you'll get an error like:
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/commons/cli/DefaultParser
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.isHelpCommand(AbstractHBaseTool.java:165)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:133)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:270)
>     at hbase_it.App.main(App.java:85)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.cli.DefaultParser
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     ... 11 more{noformat}
> My guess is that in previous versions, we didn't have this conflict with 
> Hadoop (we were on the same version). Now, we're not.
> I see two routes:
>  # We just alter the mapredcp to include our "correct" commons-cli-1.4 on the 
> classpath and remind users to make use of the {{HADOOP_USER_CLASSPATH_FIRST}} 
> environment variable
>  # We put commons-cli into our hbase-thirdparty and stop using it directly.
> The former is definitely quicker, but I'm guessing the latter would insulate 
> us more nicely.
> Thoughts, [~stack], [~busbey], [~mdrob] (and others who have done H3 work?)





[jira] [Commented] (HBASE-20196) Maintain all regions with same size in memstore flusher

2018-03-14 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-20196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399123#comment-16399123 ]

Hadoop QA commented on HBASE-20196:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s{color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 5m 50s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 47s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 22s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 54s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 38s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-20196 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914515/20196.v1.txt |
| Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 9e51977b8be1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 84ee32c723 |
| maven | version: Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 
