[jira] [Updated] (HBASE-15583) Any HTD we give out should be immutable

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Status: Open  (was: Patch Available)

> Any HTD we give out should be immutable
> ---
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15583.v0.patch
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor()? It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo, which is not used at the moment, it 
> seems.
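
For reference, a minimal sketch of the wrapper idea (not the actual HBase class; it assumes the 1.x/2.0 HTableDescriptor method signatures, where setters return the descriptor for chaining):
{code}
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

/** Read-only view of a table descriptor: mutators throw. */
public class ImmutableHTableDescriptor extends HTableDescriptor {
  public ImmutableHTableDescriptor(HTableDescriptor desc) {
    super(desc); // copy-construct so later changes to desc are not visible here
  }

  @Override
  public HTableDescriptor setValue(byte[] key, byte[] value) {
    throw new UnsupportedOperationException("HTableDescriptor is read-only");
  }

  @Override
  public HTableDescriptor addFamily(HColumnDescriptor family) {
    throw new UnsupportedOperationException("HTableDescriptor is read-only");
  }
}
{code}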



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17821) The CompoundConfiguration#toString is wrong

2017-03-22 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-17821:
--

 Summary: The CompoundConfiguration#toString is wrong
 Key: HBASE-17821
 URL: https://issues.apache.org/jira/browse/HBASE-17821
 Project: HBase
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Priority: Trivial


Found this bug while reading the code. We don't use the API, so it is a trivial 
bug. The fix: sb.append(this.configs); -> sb.append(m);
{noformat}
  @Override
  public String toString() {
    StringBuffer sb = new StringBuffer();
    sb.append("CompoundConfiguration: " + this.configs.size() + " configs");
    for (ImmutableConfigMap m : this.configs) {
      sb.append(this.configs);
    }
    return sb.toString();
  }
{noformat}
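
Applying the suggested one-line fix (append each map {{m}} instead of the whole {{configs}} list on every iteration), the method would read:
{noformat}
  @Override
  public String toString() {
    StringBuffer sb = new StringBuffer();
    sb.append("CompoundConfiguration: " + this.configs.size() + " configs");
    for (ImmutableConfigMap m : this.configs) {
      sb.append(m); // was: sb.append(this.configs)
    }
    return sb.toString();
  }
{noformat}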



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-03-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937756#comment-15937756
 ] 

stack commented on HBASE-14614:
---

Other tests fail because the bit where we update RegionStates, the in-memory 
map of where regions are, is still to be done. Working on it...

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.011.patch, 
> HBASE-14614.master.012.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.016.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles assign operations
>  - UnassignProcedure handles unassign operations
>  - MoveRegionProcedure handles move/balance operations
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations ready to be sent to the RS are 
> batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, to be reached by tests, but the new AM 
> will not be integrated with the rest of the system. Only the new AM unit tests 
> will exercise the new assignment manager. The integration with the master code 
> is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same

2017-03-22 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937726#comment-15937726
 ] 

Duo Zhang commented on HBASE-16890:
---

{quote}
So can we increase the number of writers in AsyncWAL also? Will that help?
{quote}

It is designed to use only one thread. Multiwal can solve the problem, I think.

> Analyze the performance of AsyncWAL and fix the same
> 
>
> Key: HBASE-16890
> URL: https://issues.apache.org/jira/browse/HBASE-16890
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: async.svg, AsyncWAL_disruptor_1 (2).patch, 
> AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, 
> AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, 
> AsyncWAL_disruptor.patch, classic.svg, contention_defaultWAL.png, 
> contention.png, HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, 
> HBASE-16890-remove-contention.patch, HBASE-16890-remove-contention-v1.patch, 
> Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 
> PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 
> 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png
>
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower 
> than the default WAL. This task is to analyze and see if we could fix it.
> See some discussion in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937720#comment-15937720
 ] 

Hadoop QA commented on HBASE-17595:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 12s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 38s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestScannersFromClientSide2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860066/HBASE-17595-addendum-v1.patch
 |
| JIRA Issue | HBASE-17595 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux cb7ee9229b9d 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f2d1b8d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6200/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6200/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same

2017-03-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937704#comment-15937704
 ] 

Anoop Sam John commented on HBASE-16890:


bq. I think the problem is that AsyncFSWAL is single threaded but FSHLog has 5 
sync threads.
So can we increase the number of writers in AsyncWAL also? Will that help?

> Analyze the performance of AsyncWAL and fix the same
> 
>
> Key: HBASE-16890
> URL: https://issues.apache.org/jira/browse/HBASE-16890
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: async.svg, AsyncWAL_disruptor_1 (2).patch, 
> AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, 
> AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, 
> AsyncWAL_disruptor.patch, classic.svg, contention_defaultWAL.png, 
> contention.png, HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, 
> HBASE-16890-remove-contention.patch, HBASE-16890-remove-contention-v1.patch, 
> Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 
> PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 
> 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png
>
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower 
> than the default WAL. This task is to analyze and see if we could fix it.
> See some discussion in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17820) Fail build with hadoop-2.6.0

2017-03-22 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-17820:
--
Description: 
I used this command "mvn clean install -Dhadoop-two.version=2.6.0 -DskipTests" 
to build hbase-1.2.4 source code. 
Build failed at hbase-assembly module.

This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".

  was:
I used this command to build hbase-1.2.4 source code "mvn clean install 
-Dhadoop-two.version=2.6.0 -DskipTests". 
Build failed at hbase-assembly module.

This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".


> Fail build with hadoop-2.6.0
> 
>
> Key: HBASE-17820
> URL: https://issues.apache.org/jira/browse/HBASE-17820
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.4
> Environment: hadoop-2.6.0, java 8
>Reporter: Reid Chan
>
> I used this command "mvn clean install -Dhadoop-two.version=2.6.0 
> -DskipTests" to build hbase-1.2.4 source code. 
> Build failed at hbase-assembly module.
> This is the failure message: 
> "Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17820) Fail build with hadoop-2.6.0

2017-03-22 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-17820:
--
Description: 
I used this command to build hbase-1.2.4 source code "mvn clean install 
-Dhadoop-two.version=2.6.0 -DskipTests". 
Build failed at hbase-assembly module.

This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".

  was:
I used this command "mvn clean install -Dhadoop-two.version=2.6.0 -DskipTests". 
Build failed at hbase-assembly module.

This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".


> Fail build with hadoop-2.6.0
> 
>
> Key: HBASE-17820
> URL: https://issues.apache.org/jira/browse/HBASE-17820
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.4
> Environment: hadoop-2.6.0, java 8
>Reporter: Reid Chan
>
> I used this command to build hbase-1.2.4 source code "mvn clean install 
> -Dhadoop-two.version=2.6.0 -DskipTests". 
> Build failed at hbase-assembly module.
> This is the failure message: 
> "Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17595:
--
Attachment: HBASE-17595-addendum-v2.patch

> Add partial result support for small/limited scan
> -
>
> Key: HBASE-17595
> URL: https://issues.apache.org/jira/browse/HBASE-17595
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17595-addendum.patch, 
> HBASE-17595-addendum-v1.patch, HBASE-17595-addendum-v2.patch, 
> HBASE-17595-branch-1.patch, HBASE-17595.patch, HBASE-17595-v1.patch
>
>
> The partial result support was marked as a 'TODO' when implementing 
> HBASE-17045. And when implementing HBASE-17508, we found that if we make 
> small scan share the same logic with general scan, the scan requests other 
> than open scanner will not have the small flag, so the server may return 
> partial results to the client and cause some strange behavior. It is solved by 
> modifying the logic at the server side, but this means the 1.4.x client is not 
> safe to talk to earlier 1.x servers. So we'd better address the problem at the 
> client side. Marked as blocker as this issue should be finished before any 
> 2.x and 1.4.x releases.
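
For context, a hedged sketch of what opting in to partials looks like from the client (1.4/2.0-era client API; the table name and connection handling are illustrative):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class PartialScanExample {
  static void scanWithPartials(Connection conn) throws IOException {
    Scan scan = new Scan();
    scan.setAllowPartialResults(true); // a row may now arrive as several Results
    try (Table table = conn.getTable(TableName.valueOf("t1"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // Stitch consecutive partial Results of the same row back together
        // before treating them as a complete row.
        boolean moreOfThisRow = r.mayHaveMoreCellsInRow();
      }
    }
  }
}
{code}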



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17820) Fail build with hadoop-2.6.0

2017-03-22 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-17820:
--
Description: 
I used this command "mvn clean install -Dhadoop-two.version=2.6.0 -DskipTests". 
Build failed at hbase-assembly module.

This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".

  was:
This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".


> Fail build with hadoop-2.6.0
> 
>
> Key: HBASE-17820
> URL: https://issues.apache.org/jira/browse/HBASE-17820
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.4
> Environment: hadoop-2.6.0, java 8
>Reporter: Reid Chan
>
> I used this command "mvn clean install -Dhadoop-two.version=2.6.0 
> -DskipTests". 
> Build failed at hbase-assembly module.
> This is the failure message: 
> "Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same

2017-03-22 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16890:
--
Priority: Blocker  (was: Major)

I made this a blocker against 2.0. The offheap write-path wants it and we need 
to do this testing. I'm a little occupied at the moment -- trying to land AMv2 
-- but intend to get to this.

> Analyze the performance of AsyncWAL and fix the same
> 
>
> Key: HBASE-16890
> URL: https://issues.apache.org/jira/browse/HBASE-16890
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: async.svg, AsyncWAL_disruptor_1 (2).patch, 
> AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, 
> AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, 
> AsyncWAL_disruptor.patch, classic.svg, contention_defaultWAL.png, 
> contention.png, HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, 
> HBASE-16890-remove-contention.patch, HBASE-16890-remove-contention-v1.patch, 
> Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 
> PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 
> 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png
>
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower 
> than the default WAL. This task is to analyze and see if we could fix it.
> See some discussion in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17820) Fail build with hadoop-2.6.0

2017-03-22 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937603#comment-15937603
 ] 

Reid Chan commented on HBASE-17820:
---

ping [~busbey]

> Fail build with hadoop-2.6.0
> 
>
> Key: HBASE-17820
> URL: https://issues.apache.org/jira/browse/HBASE-17820
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.4
> Environment: hadoop-2.6.0, java 8
>Reporter: Reid Chan
>
> This is the failure message: 
> "Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17820) Fail build with hadoop-2.6.0

2017-03-22 Thread Reid Chan (JIRA)
Reid Chan created HBASE-17820:
-

 Summary: Fail build with hadoop-2.6.0
 Key: HBASE-17820
 URL: https://issues.apache.org/jira/browse/HBASE-17820
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 1.2.4
 Environment: hadoop-2.6.0, java 8
Reporter: Reid Chan


This is the failure message: 
"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16890) Analyze the performance of AsyncWAL and fix the same

2017-03-22 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937594#comment-15937594
 ] 

Duo Zhang commented on HBASE-16890:
---

[~stack] [~ram_krish] Let's pick this up again?

The latest consensus, from HBASE-17049, is that using multiwal can increase the 
performance of AsyncFSWAL more than that of FSHLog. [~stack] Could you please 
also verify it sir? I think the problem is that AsyncFSWAL is single threaded 
but FSHLog has 5 sync threads.

As for me, I will run the PE tool against a distributed cluster next. Maybe a 
single RS, but HDFS will be distributed. I think this is the common scenario in 
the real world.

Thanks.
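
For reference, a minimal hbase-site.xml sketch of enabling multiwal (property names as documented in the HBase ref guide; the group count is an illustrative value, the default is 2):
{code}
<property>
  <name>hbase.wal.provider</name>
  <value>multiwal</value>
</property>
<property>
  <name>hbase.wal.regiongrouping.numgroups</name>
  <!-- number of WAL pipelines per region server; pick per workload -->
  <value>4</value>
</property>
{code}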

> Analyze the performance of AsyncWAL and fix the same
> 
>
> Key: HBASE-16890
> URL: https://issues.apache.org/jira/browse/HBASE-16890
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: async.svg, AsyncWAL_disruptor_1 (2).patch, 
> AsyncWAL_disruptor_3.patch, AsyncWAL_disruptor_3.patch, 
> AsyncWAL_disruptor_4.patch, AsyncWAL_disruptor_6.patch, 
> AsyncWAL_disruptor.patch, classic.svg, contention_defaultWAL.png, 
> contention.png, HBASE-16890-rc-v2.patch, HBASE-16890-rc-v3.patch, 
> HBASE-16890-remove-contention.patch, HBASE-16890-remove-contention-v1.patch, 
> Screen Shot 2016-10-25 at 7.34.47 PM.png, Screen Shot 2016-10-25 at 7.39.07 
> PM.png, Screen Shot 2016-10-25 at 7.39.48 PM.png, Screen Shot 2016-11-04 at 
> 5.21.27 PM.png, Screen Shot 2016-11-04 at 5.30.18 PM.png
>
>
> Tests reveal that AsyncWAL under load in a single-node cluster performs slower 
> than the default WAL. This task is to analyze and see if we could fix it.
> See some discussion in the tail of JIRA HBASE-15536.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17595:
--
Attachment: HBASE-17595-addendum-v1.patch

There was unfinished code in AllowPartialScanResultCache... Fixed.

> Add partial result support for small/limited scan
> -
>
> Key: HBASE-17595
> URL: https://issues.apache.org/jira/browse/HBASE-17595
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17595-addendum.patch, 
> HBASE-17595-addendum-v1.patch, HBASE-17595-branch-1.patch, HBASE-17595.patch, 
> HBASE-17595-v1.patch
>
>
> The partial result support was marked as a 'TODO' when implementing 
> HBASE-17045. And when implementing HBASE-17508, we found that if we make 
> small scan share the same logic with general scan, the scan requests other 
> than open scanner will not have the small flag, so the server may return 
> partial results to the client and cause some strange behavior. It is solved by 
> modifying the logic at the server side, but this means the 1.4.x client is not 
> safe to talk to earlier 1.x servers. So we'd better address the problem at the 
> client side. Marked as blocker as this issue should be finished before any 
> 2.x and 1.4.x releases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937551#comment-15937551
 ] 

Anoop Sam John commented on HBASE-17739:


Agree.
BTW, by default we won't be keeping compressed blocks in cache; we keep only 
uncompressed data. Though when blocks are DBE'd (data block encoded), we keep 
the block in encoded format, and for data like time series the size will be 
much smaller.

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of heap 
> is given over to the bucketcache and, say, no allocations are made for a 
> particular bucket size, this means we have a bunch of the bucketcache that 
> just goes idle/unused.
> For example, say heap is 100G. We'll divide it up among the sizes. If say we 
> only ever do 5k records, then most of the cache will go unused while the 
> allocation for 5k objects will see churn.
> Here is an old note of [~anoop.hbase]'s from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. there will be many 513K sized buckets.  If we keep on
> writing only same sized blocks, we may lose all the in-between sized buckets.
> Say we write only 4K sized blocks. We will first fill the bucket in 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from the 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from the 513K size.. There are many of those...
> So we will eventually take all the buckets from 513K except the last
> one.. Ya, it has to keep at least one in every size.. So we will
> lose that much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2 K sized blocks..  When we write a block to 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (may be actual size a bit more than
> 2K ) we will take 5K with each of the block.  So u can see that we are
> loosing ~3K with every block. Means we are loosing more than half."
> He goes on: "If am 100% sure that all my table having 2K HFile block size, I 
> need to give this config a value 3 * 1024 (Exact 2 K if I give there may be
> again problem! That is another story we need to see how we can give
> more guarantee for the block size restriction HBASE-15248)..  So here also 
> ~1K loose for every 2K.. So some thing like a 30% loose !!! :-(“"
> So, we should figure the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, nvm inefficiences lost 
> because of how we serialize base types to cache.
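
A back-of-the-envelope sketch of the waste described above (plain Java, not HBase code; the slot sizes mirror the default 5K..513K list quoted in the description, each block size plus 1K):
{code}
public class BucketWasteEstimate {
  // Default-style slot sizes in KB: (4,8,...,512)K block sizes + 1K each.
  static final int[] SLOT_SIZES = {5, 9, 17, 33, 41, 49, 57, 65, 97, 129, 193,
      257, 385, 513};

  /** Smallest slot (in KB) that fits a block of the given size in KB. */
  static int slotFor(int blockKb) {
    for (int s : SLOT_SIZES) {
      if (blockKb <= s) return s;
    }
    throw new IllegalArgumentException("block too large: " + blockKb + "K");
  }

  public static void main(String[] args) {
    int blockKb = 2; // a table with 2K HFile blocks
    int slotKb = slotFor(blockKb);
    // 2K blocks land in 5K slots: ~60% of every slot is dead space.
    System.out.printf("block=%dK slot=%dK wasted=%.0f%%%n",
        blockKb, slotKb, 100.0 * (slotKb - blockKb) / slotKb);
  }
}
{code}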



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2017-03-22 Thread Manaswini (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937536#comment-15937536
 ] 

Manaswini edited comment on HBASE-13788 at 3/23/17 1:49 AM:


As per @stack's suggestion, I've added an ordinal option, i.e. FORMATTER can 
just list a conversion per column mentioned in COLUMN, e.g. FORMATTER => 
['toInt'].

Now the custom formatting can be specified in two ways:

 1. Specifying it for each column by column qualifier
 2. Without the column qualifier, in which case the formatters are matched to 
the columns in the COLUMNS specification and applied in the order they appear 
there.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:

 hbase> scan 't1', {COLUMN => ['cf:qualifier1','cf:qualifier2'],
   FORMATTER => {'cf:qualifier1' => 'toInt',
   'cf:qualifier2' => 'c(org.apache.hadoop.hbase.util.Bytes).toInt'}}

 or

 hbase> scan 't1', {COLUMN => ['cf:qualifier1','cf:qualifier2'],
   FORMATTER => ['toInt', 'c(org.apache.hadoop.hbase.util.Bytes).toInt']}

stack - I've attached the patch and the test cases I tested it with. Could you 
review and let me know if any improvements are needed? 


Thanks!
Mansi 




was (Author: mmaharana):
As per @stack's suggestion, I've added an ordinal option, i.e. FORMATTER can 
just list a conversion per column mentioned in COLUMN, e.g. FORMATTER => 
['toInt'].

Now the custom formatting can be specified in two ways:

 1. Specifying it for each column by column qualifier
 2. Without the column qualifier, in which case the formatters are matched to 
the columns in the COLUMNS specification and applied in the order they appear 
there.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:

 hbase> scan 't1', {COLUMN => ['cf:qualifier1','cf:qualifier2'],
   FORMATTER => {'cf:qualifier1' => 'toInt',
   'cf:qualifier2' => 'c(org.apache.hadoop.hbase.util.Bytes).toInt'}}

 or

 hbase> scan 't1', {COLUMN => ['cf:qualifier1','cf:qualifier2'],
   FORMATTER => ['toInt', 'c(org.apache.hadoop.hbase.util.Bytes).toInt']}

stack - I've attached the patch and the test cases I tested it with. Could you 
review and let me know if any improvements are needed? 


Thanks!
Mansi 



> Shell commands do not support column qualifiers containing colon (:)
> 
>
> Key: HBASE-13788
> URL: https://issues.apache.org/jira/browse/HBASE-13788
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
>Reporter: Dave Latham
>Assignee: Manaswini
> Attachments: Hbase-13788-testcases.docx, hbase-13788-v1.patch
>
>
> The shell interprets the colon within the qualifier as a delimiter to a 
> FORMATTER instead of part of the qualifier itself.
> Example from the mailing list:
> Hmph, I may have spoken too soon. I know I tested this at one point and
> it worked, but now I'm getting different results:
> On the new cluster, I created a duplicate test table:
> hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
> 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> Then I pull some data from the imported table:
> hbase(main):045:0> scan 'content', {LIMIT=>1,
> STARTROW=>'A:9223370612089311807:twtr:57013379'}
> ROW  COLUMN+CELL
> 
> A:9223370612089311807:twtr:570133798827921408
> column=x:twitter:username, timestamp=1424775595345, value=BERITA &
> INFORMASI!
> Then put it:
> hbase(main):046:0> put
> 'content3','A:9223370612089311807:twtr:570133798827921408',
> 'x:twitter:username', 'BERITA & INFORMASI!'
> But then when I query it, I see that I've lost the column qualifier
> ":username":
> hbase(main):046:0> scan 'content3'
> ROW  COLUMN+CELL
>  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
>  timestamp=1432745301788, value=BERITA & INFORMASI!
> Even though I'm missing one of the qualifiers, I can at least filter on
> columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2017-03-22 Thread Manaswini (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937536#comment-15937536
 ] 

Manaswini commented on HBASE-13788:
---

As per @stack's suggestion, I've added an ordinal option, i.e. FORMATTER can 
just list a conversion per column mentioned in COLUMN, e.g. FORMATTER => 
['toInt'].

Now the custom formatting can be specified in two ways:

 1. Specifying it for each column by column qualifier
 2. Without the column qualifier, in which case the formatters are matched to 
the columns in the COLUMNS specification and applied in the order they appear 
there.

Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:

 hbase> scan 't1', {COLUMN => ['cf:qualifier1','cf:qualifier2'],
   FORMATTER => {'cf:qualifier1' => 'toInt',
   'cf:qualifier2' => 'c(org.apache.hadoop.hbase.util.Bytes).toInt'}}

 or

 hbase> scan 't1', {COLUMN => ['cf:qualifier1','cf:qualifier2'],
   FORMATTER => ['toInt', 'c(org.apache.hadoop.hbase.util.Bytes).toInt']}

stack - I've attached the patch and the test cases I tested it with. Could you 
review and let me know if any improvements are needed? 


Thanks!
Mansi 



> Shell commands do not support column qualifiers containing colon (:)
> 
>
> Key: HBASE-13788
> URL: https://issues.apache.org/jira/browse/HBASE-13788
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
>Reporter: Dave Latham
>Assignee: Manaswini
> Attachments: Hbase-13788-testcases.docx, hbase-13788-v1.patch
>
>
> The shell interprets the colon within the qualifier as a delimiter to a 
> FORMATTER instead of part of the qualifier itself.
> Example from the mailing list:
> Hmph, I may have spoken too soon. I know I tested this at one point and
> it worked, but now I'm getting different results:
> On the new cluster, I created a duplicate test table:
> hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
> 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> Then I pull some data from the imported table:
> hbase(main):045:0> scan 'content', {LIMIT=>1,
> STARTROW=>'A:9223370612089311807:twtr:57013379'}
> ROW  COLUMN+CELL
> 
> A:9223370612089311807:twtr:570133798827921408
> column=x:twitter:username, timestamp=1424775595345, value=BERITA &
> INFORMASI!
> Then put it:
> hbase(main):046:0> put
> 'content3','A:9223370612089311807:twtr:570133798827921408',
> 'x:twitter:username', 'BERITA & INFORMASI!'
> But then when I query it, I see that I've lost the column qualifier
> ":username":
> hbase(main):046:0> scan 'content3'
> ROW  COLUMN+CELL
>  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
>  timestamp=1432745301788, value=BERITA & INFORMASI!
> Even though I'm missing one of the qualifiers, I can at least filter on
> columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13788) Shell commands do not support column qualifiers containing colon (:)

2017-03-22 Thread Manaswini (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manaswini updated HBASE-13788:
--
Attachment: hbase-13788-v1.patch
Hbase-13788-testcases.docx

> Shell commands do not support column qualifiers containing colon (:)
> 
>
> Key: HBASE-13788
> URL: https://issues.apache.org/jira/browse/HBASE-13788
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.98.0, 0.96.0, 1.0.0, 1.1.0
>Reporter: Dave Latham
>Assignee: Manaswini
> Attachments: Hbase-13788-testcases.docx, hbase-13788-v1.patch
>
>
> The shell interprets the colon within the qualifier as a delimiter to a 
> FORMATTER instead of part of the qualifier itself.
> Example from the mailing list:
> Hmph, I may have spoken too soon. I know I tested this at one point and
> it worked, but now I'm getting different results:
> On the new cluster, I created a duplicate test table:
> hbase(main):043:0> create 'content3', {NAME => 'x', BLOOMFILTER =>
> 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
> 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}
> Then I pull some data from the imported table:
> hbase(main):045:0> scan 'content', {LIMIT=>1,
> STARTROW=>'A:9223370612089311807:twtr:57013379'}
> ROW  COLUMN+CELL
> 
> A:9223370612089311807:twtr:570133798827921408
> column=x:twitter:username, timestamp=1424775595345, value=BERITA &
> INFORMASI!
> Then put it:
> hbase(main):046:0> put
> 'content3','A:9223370612089311807:twtr:570133798827921408',
> 'x:twitter:username', 'BERITA & INFORMASI!'
> But then when I query it, I see that I've lost the column qualifier
> ":username":
> hbase(main):046:0> scan 'content3'
> ROW  COLUMN+CELL
>  A:9223370612089311807:twtr:570133798827921408 column=x:twitter,
>  timestamp=1432745301788, value=BERITA & INFORMASI!
> Even though I'm missing one of the qualifiers, I can at least filter on
> columns in this sample table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937527#comment-15937527
 ] 

Duo Zhang commented on HBASE-17595:
---

Let me check the failed UTs. Seems something is wrong in dealing with allowPartial.

> Add partial result support for small/limited scan
> -
>
> Key: HBASE-17595
> URL: https://issues.apache.org/jira/browse/HBASE-17595
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17595-addendum.patch, HBASE-17595-branch-1.patch, 
> HBASE-17595.patch, HBASE-17595-v1.patch
>
>
> The partial result support was marked as a 'TODO' when implementing 
> HBASE-17045. And when implementing HBASE-17508, we found that if we make 
> small scan share the same logic with general scan, the scan requests other 
> than open scanner will not have the small flag, so the server may return 
> partial results to the client and cause some strange behavior. It is solved by 
> modifying the logic at the server side, but this means the 1.4.x client is not 
> safe to talk to earlier 1.x servers. So we'd better address the problem at the 
> client side. Marked as blocker as this issue should be finished before any 
> 2.x and 1.4.x releases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-03-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937510#comment-15937510
 ] 

stack commented on HBASE-14614:
---

Fix findbugs and a few tests. Will be back to fix more. Fixed whitespace, but 
the bulk of it is over in generated code.

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.011.patch, 
> HBASE-14614.master.012.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.016.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles assign operations
>  - UnassignProcedure handles unassign operations
>  - MoveRegionProcedure handles move/balance operations
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations ready to be sent to the RS are 
> batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, to be reached by tests, but the new AM 
> will not be integrated with the rest of the system. Only the new AM unit tests 
> will exercise the new assignment manager. The integration with the master code 
> is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17766) Generate Javadoc for hbase-spark module

2017-03-22 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937456#comment-15937456
 ] 

Yi Liang commented on HBASE-17766:
--

 [~busbey] [~stack][~jerryhe] [~te...@apache.org]
After spending quite some time doing research and experiments, I found there 
are some limitations in generating javadoc for the hbase-spark scala code.

From http://hbase.apache.org, we have 4 sets of javadoc for hbase:
(1) User API: http://hbase.apache.org/apidocs/index.html
(2) User API(TEST): http://hbase.apache.org/testapidocs/index.html
(3) Developer API: http://hbase.apache.org/devapidocs/index.html
(4) Developer API(TEST): http://hbase.apache.org/testdevapidocs/index.html

Each website above corresponds to a <reportSet> in the root pom. I am 
trying to add the hbase-spark javadoc into one of the reportSets above, but it 
does not seem to work. As I described in earlier comments, I use a plugin to 
generate java code from the scala code, output it in 
../hbase-spark/target/genjavadoc, and then use the maven-javadoc-plugin to 
generate javadoc for that java code. 

The limitations are:
I can not add ../hbase-spark/target/genjavadoc as an extra source path for one 
of the reportSets, for the following reasons: 
(1) There is no extra-sourcepath configuration parameter for javadoc. There is 
one parameter called sourcepath: if you do not specify it, the Javadoc tool 
searches for classes and corresponding source files from the current directory, 
and if you do specify it, you need to list all needed source paths. In our 
hbase root pom we do not specify it; if I specified it, I would need to list 
all the paths and also exclude all the unnecessary source files, which is too 
much trouble. See one error below caused by not excluding an unnecessary 
source file:
{noformat}
[ERROR] 
/root/git/os/new-hbase/./hbase-archetypes/hbase-shaded-client-project/target/build-archetype/target/generated-sources/archetype/target/classes/archetype-resources/src/test/java/TestHelloHBase.java:1:
 error: illegal character: '#'
[ERROR] #set( $symbol_pound = '#' )
{noformat}
(2) If I add ../hbase-spark/target/genjavadoc as a source path for one of the 
reportSets above, the mvn site command will report errors about the 
hbase-annotation doclet in javadoc. See the error below:
{noformat}
[ERROR] Exit code: 1 - 
/root/git/os/new-hbase/hbase-spark/target/genjavadoc/org/apache/spark/sql/datasources/hbase/Field.java:16:
 error: illegal combination of modifiers: abstract and static
{noformat}

Here I can only come up with one workaround:
1. Create a new reportSet called HBase-Spark API: 
http://hbase.apache.org/hbase-spark-api/index.html
2. In the current User API website, in the hbase-spark package, we can attach a 
link to http://hbase.apache.org/hbase-spark-api/index.html; see the screenshots 
attached. 

I want to see if this is OK, and I would also like to hear any other, better 
idea or solution.
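
A hypothetical sketch of what the workaround's reporting config might look like in the root pom (ids and paths are illustrative, not from an actual patch):
{code}
<reportSet>
  <id>hbase-spark-api</id>
  <reports>
    <report>javadoc</report>
  </reports>
  <configuration>
    <!-- publish under /hbase-spark-api/ on the site -->
    <destDir>hbase-spark-api</destDir>
    <!-- javadoc only the java sources generated from the scala code -->
    <sourcepath>${basedir}/hbase-spark/target/genjavadoc</sourcepath>
  </configuration>
</reportSet>
{code}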



> Generate Javadoc for hbase-spark module 
> 
>
> Key: HBASE-17766
> URL: https://issues.apache.org/jira/browse/HBASE-17766
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0
>
> Attachments: spark-api.jpg, user-api.jpg
>
>
>  Scala classes in the hbase-spark module aren't showing up in our API docs or 
> our internal API docs. See https://hbase.apache.org/apidocs/ 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17766) Generate Javadoc for hbase-spark module

2017-03-22 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17766:
-
Attachment: user-api.jpg
spark-api.jpg

> Generate Javadoc for hbase-spark module 
> 
>
> Key: HBASE-17766
> URL: https://issues.apache.org/jira/browse/HBASE-17766
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0
>
> Attachments: spark-api.jpg, user-api.jpg
>
>
>  Scala classes in the hbase-spark module aren't showing up in our API docs or 
> our internal API docs. See https://hbase.apache.org/apidocs/ 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937424#comment-15937424
 ] 

Hudson commented on HBASE-17815:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2721 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2721/])
HBASE-17815 Remove the unused field in PrefixTreeSeeker (chia7712: rev 
f2d1b8db89cee7dad675639a50dab9f3c08f219f)
* (edit) 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeSeeker.java


> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17815.v0.patch
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-17818) On branch-1 in module Server, TestRSKilledWhenInitializing and TestScannerHeartbeatMessages broken

2017-03-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved HBASE-17818.

Resolution: Cannot Reproduce

[~anuphal], again, see the following builds from Jenkins:

https://builds.apache.org/view/All/job/HBase-1.4/jdk=JDK_1_8,label=Hadoop&&!H13/676/testReport/org.apache.hadoop.hbase.regionserver/TestRSKilledWhenInitializing/
https://builds.apache.org/view/All/job/HBase-1.4/jdk=JDK_1_8,label=Hadoop&&!H13/676/testReport/org.apache.hadoop.hbase.regionserver/TestScannerHeartbeatMessages/
https://builds.apache.org/view/All/job/HBase-1.4/678/jdk=JDK_1_7,label=Hadoop&&!H13/testReport/org.apache.hadoop.hbase.regionserver/TestScannerHeartbeatMessages/
https://builds.apache.org/view/All/job/HBase-1.4/678/jdk=JDK_1_7,label=Hadoop&&!H13/testReport/org.apache.hadoop.hbase.regionserver/TestRSKilledWhenInitializing/

Sometimes tests are sensitive to hardware, but we try our best to avoid this. 
In the future, please do this digging on your own and leave JIRA for when you 
know the reason a test is failing (and ideally, have provided a patch which 
fixes it!). Thanks in advance.

> On branch-1 in module Server, TestRSKilledWhenInitializing and 
> TestScannerHeartbeatMessages broken
> --
>
> Key: HBASE-17818
> URL: https://issues.apache.org/jira/browse/HBASE-17818
> Project: HBase
>  Issue Type: Bug
> Environment: OS: Ubuntu 14.04
> Arch: x86_64
>Reporter: Anup Halarnkar
> Fix For: 1.4.0
>
>
> Branch: branch-1
> Command: mvn clean install -X -fn
> Output:
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 2.030 s
> [INFO] Finished at: 2017-03-22T19:17:16+05:30
> [INFO] Final Memory: 15M/304M
> [INFO] 
> 
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache HBase ... SUCCESS [ 24.315 
> s]
> [INFO] Apache HBase - Checkstyle .. SUCCESS [  7.797 
> s]
> [INFO] Apache HBase - Resource Bundle . SUCCESS [  0.263 
> s]
> [INFO] Apache HBase - Annotations . SUCCESS [  2.416 
> s]
> [INFO] Apache HBase - Protocol  SUCCESS [ 12.994 
> s]
> [INFO] Apache HBase - Common .. SUCCESS [02:31 
> min]
> [INFO] Apache HBase - Procedure ... SUCCESS [03:37 
> min]
> [INFO] Apache HBase - Client .. SUCCESS [01:53 
> min]
> [INFO] Apache HBase - Hadoop Compatibility  SUCCESS [  8.560 
> s]
> [INFO] Apache HBase - Hadoop Two Compatibility  SUCCESS [ 10.500 
> s]
> [INFO] Apache HBase - Prefix Tree . SUCCESS [  9.367 
> s]
> [INFO] Apache HBase - Server .. FAILURE [  02:04 
> h]
> [INFO] Apache HBase - Testing Util  SUCCESS [  5.107 
> s]
> [INFO] Apache HBase - Thrift .. SUCCESS [04:52 
> min]
> [INFO] Apache HBase - Rest  SUCCESS [13:56 
> min]
> [INFO] Apache HBase - Shell ... SUCCESS [  3.980 
> s]
> [INFO] Apache HBase - Integration Tests ... SUCCESS [  02:33 
> h]
> [INFO] Apache HBase - Examples  SUCCESS [ 11.129 
> s]
> [INFO] Apache HBase - External Block Cache  SUCCESS [  1.231 
> s]
> [INFO] Apache HBase - Assembly  FAILURE [  4.063 
> s]
> [INFO] Apache HBase - Shaded .. SUCCESS [  0.172 
> s]
> [INFO] Apache HBase - Shaded - Client . SUCCESS [  0.793 
> s]
> [INFO] Apache HBase - Shaded - Server . SUCCESS [  1.721 
> s]
> [INFO] Apache HBase - Archetypes .. SUCCESS [  0.094 
> s]
> [INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [01:24 
> min]
> [INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS 
> [01:15 min]
> [INFO] Apache HBase - Archetype builder ... SUCCESS [ 31.495 
> s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 05:10 h
> [INFO] Finished at: 2017-03-22T19:17:16+05:30
> ---
> Failed tests:
> 

[jira] [Updated] (HBASE-17818) On branch-1 in module Server, TestRSKilledWhenInitializing and TestScannerHeartbeatMessages broken

2017-03-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17818:
---
Fix Version/s: (was: 2.0.0)
   1.4.0

> On branch-1 in module Server, TestRSKilledWhenInitializing and 
> TestScannerHeartbeatMessages broken
> --
>
> Key: HBASE-17818
> URL: https://issues.apache.org/jira/browse/HBASE-17818
> Project: HBase
>  Issue Type: Bug
> Environment: OS: Ubuntu 14.04
> Arch: x86_64
>Reporter: Anup Halarnkar
> Fix For: 1.4.0
>
>
> Branch: branch-1
> Command: mvn clean install -X -fn
> Output:
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 2.030 s
> [INFO] Finished at: 2017-03-22T19:17:16+05:30
> [INFO] Final Memory: 15M/304M
> [INFO] 
> 
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache HBase ... SUCCESS [ 24.315 
> s]
> [INFO] Apache HBase - Checkstyle .. SUCCESS [  7.797 
> s]
> [INFO] Apache HBase - Resource Bundle . SUCCESS [  0.263 
> s]
> [INFO] Apache HBase - Annotations . SUCCESS [  2.416 
> s]
> [INFO] Apache HBase - Protocol  SUCCESS [ 12.994 
> s]
> [INFO] Apache HBase - Common .. SUCCESS [02:31 
> min]
> [INFO] Apache HBase - Procedure ... SUCCESS [03:37 
> min]
> [INFO] Apache HBase - Client .. SUCCESS [01:53 
> min]
> [INFO] Apache HBase - Hadoop Compatibility  SUCCESS [  8.560 
> s]
> [INFO] Apache HBase - Hadoop Two Compatibility  SUCCESS [ 10.500 
> s]
> [INFO] Apache HBase - Prefix Tree . SUCCESS [  9.367 
> s]
> [INFO] Apache HBase - Server .. FAILURE [  02:04 
> h]
> [INFO] Apache HBase - Testing Util  SUCCESS [  5.107 
> s]
> [INFO] Apache HBase - Thrift .. SUCCESS [04:52 
> min]
> [INFO] Apache HBase - Rest  SUCCESS [13:56 
> min]
> [INFO] Apache HBase - Shell ... SUCCESS [  3.980 
> s]
> [INFO] Apache HBase - Integration Tests ... SUCCESS [  02:33 
> h]
> [INFO] Apache HBase - Examples  SUCCESS [ 11.129 
> s]
> [INFO] Apache HBase - External Block Cache  SUCCESS [  1.231 
> s]
> [INFO] Apache HBase - Assembly  FAILURE [  4.063 
> s]
> [INFO] Apache HBase - Shaded .. SUCCESS [  0.172 
> s]
> [INFO] Apache HBase - Shaded - Client . SUCCESS [  0.793 
> s]
> [INFO] Apache HBase - Shaded - Server . SUCCESS [  1.721 
> s]
> [INFO] Apache HBase - Archetypes .. SUCCESS [  0.094 
> s]
> [INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [01:24 
> min]
> [INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS 
> [01:15 min]
> [INFO] Apache HBase - Archetype builder ... SUCCESS [ 31.495 
> s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 05:10 h
> [INFO] Finished at: 2017-03-22T19:17:16+05:30
> ---
> Failed tests:
> TestRSKilledWhenInitializing.testRSTerminationAfterRegisteringToMasterBeforeCreatingEphemeralNode:123
>  null
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:207->testImportanceOfHeartbeats:237
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1698, Failures: 2, Errors: 0, Skipped: 11
> --
> If these tests are passing on your end,then please let me know the possible 
> issue that I could have on my side.
> Thanks in advance,
> Anup



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17727) [C++] Make RespConverter work with RawAsyncTableImpl

2017-03-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17727:
--
Attachment: hbase-17727-v1.patch

[~xiaobingo] can you please take a look at this patch? The problem was that we 
were capturing the {{const &}} function arguments by reference in the inner 
lambda. 
I've changed the function arguments to be passed by value for the 
{{std::function}}s, which is compatible with copy semantics 
(http://en.cppreference.com/w/cpp/utility/functional/function). 

> [C++] Make RespConverter work with RawAsyncTableImpl
> 
>
> Key: HBASE-17727
> URL: https://issues.apache.org/jira/browse/HBASE-17727
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Enis Soztutar
> Attachments: hbase-17727-v1.patch
>
>
> This is a follow up work of HBASE-17465. 
> There's a problem to dereference instance of RpcCallback when it's passed as 
> function argument.
> {code}
> template <typename R, typename S>
> using RespConverter = std::function<std::unique_ptr<R>(const S&)>;
> {code}
> {code}
>   template <typename REQ, typename PREQ, typename PRESP, typename RESP>
>   folly::Future<RESP> Call(
>       std::shared_ptr<RpcClient> rpc_client,
>       std::shared_ptr<HBaseRpcController> controller,
>       std::shared_ptr<RegionLocation> loc,
>       const REQ& req,
>       const ReqConverter<std::unique_ptr<PREQ>, REQ, std::string>& req_converter,
>       const hbase::RpcCall<PREQ, PRESP>& rpc_call,
>       const RespConverter<RESP, PRESP>& resp_converter) {
>     rpc_call(rpc_client, loc, controller,
>              std::move(req_converter(req, loc->region_name())))
>         .then([&, this](std::unique_ptr<PRESP> presp) {
>           // std::unique_ptr<RESP> result = resp_converter(presp);
>           std::unique_ptr<RESP> result =
>               hbase::ResponseConverter::FromGetResponse(*presp);
>           promise_->setValue(std::move(*result));
>         })
>         .onError([this](const std::exception& e) { promise_->setException(e); });
>     return promise_->getFuture();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17727) [C++] Make RespConverter work with RawAsyncTableImpl

2017-03-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17727:
--
Summary: [C++] Make RespConverter work with RawAsyncTableImpl  (was: [C++] 
Make RespConverter work with MockRawAsyncTableImpl)

> [C++] Make RespConverter work with RawAsyncTableImpl
> 
>
> Key: HBASE-17727
> URL: https://issues.apache.org/jira/browse/HBASE-17727
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Enis Soztutar
>
> This is a follow up work of HBASE-17465. 
> There's a problem to dereference instance of RpcCallback when it's passed as 
> function argument.
> {code}
> template <typename R, typename S>
> using RespConverter = std::function<std::unique_ptr<R>(const S&)>;
> {code}
> {code}
>   template <typename REQ, typename PREQ, typename PRESP, typename RESP>
>   folly::Future<RESP> Call(
>       std::shared_ptr<RpcClient> rpc_client,
>       std::shared_ptr<HBaseRpcController> controller,
>       std::shared_ptr<RegionLocation> loc,
>       const REQ& req,
>       const ReqConverter<std::unique_ptr<PREQ>, REQ, std::string>& req_converter,
>       const hbase::RpcCall<PREQ, PRESP>& rpc_call,
>       const RespConverter<RESP, PRESP>& resp_converter) {
>     rpc_call(rpc_client, loc, controller,
>              std::move(req_converter(req, loc->region_name())))
>         .then([&, this](std::unique_ptr<PRESP> presp) {
>           // std::unique_ptr<RESP> result = resp_converter(presp);
>           std::unique_ptr<RESP> result =
>               hbase::ResponseConverter::FromGetResponse(*presp);
>           promise_->setValue(std::move(*result));
>         })
>         .onError([this](const std::exception& e) { promise_->setException(e); });
>     return promise_->getFuture();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17818) On branch-1 in module Server, TestRSKilledWhenInitializing and TestScannerHeartbeatMessages broken

2017-03-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17818:
---
Affects Version/s: (was: 2.0.0)

> On branch-1 in module Server, TestRSKilledWhenInitializing and 
> TestScannerHeartbeatMessages broken
> --
>
> Key: HBASE-17818
> URL: https://issues.apache.org/jira/browse/HBASE-17818
> Project: HBase
>  Issue Type: Bug
> Environment: OS: Ubuntu 14.04
> Arch: x86_64
>Reporter: Anup Halarnkar
> Fix For: 1.4.0
>
>
> Branch: branch-1
> Command: mvn clean install -X -fn
> Output:
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 2.030 s
> [INFO] Finished at: 2017-03-22T19:17:16+05:30
> [INFO] Final Memory: 15M/304M
> [INFO] 
> 
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache HBase ... SUCCESS [ 24.315 
> s]
> [INFO] Apache HBase - Checkstyle .. SUCCESS [  7.797 
> s]
> [INFO] Apache HBase - Resource Bundle . SUCCESS [  0.263 
> s]
> [INFO] Apache HBase - Annotations . SUCCESS [  2.416 
> s]
> [INFO] Apache HBase - Protocol  SUCCESS [ 12.994 
> s]
> [INFO] Apache HBase - Common .. SUCCESS [02:31 
> min]
> [INFO] Apache HBase - Procedure ... SUCCESS [03:37 
> min]
> [INFO] Apache HBase - Client .. SUCCESS [01:53 
> min]
> [INFO] Apache HBase - Hadoop Compatibility  SUCCESS [  8.560 
> s]
> [INFO] Apache HBase - Hadoop Two Compatibility  SUCCESS [ 10.500 
> s]
> [INFO] Apache HBase - Prefix Tree . SUCCESS [  9.367 
> s]
> [INFO] Apache HBase - Server .. FAILURE [  02:04 
> h]
> [INFO] Apache HBase - Testing Util  SUCCESS [  5.107 
> s]
> [INFO] Apache HBase - Thrift .. SUCCESS [04:52 
> min]
> [INFO] Apache HBase - Rest  SUCCESS [13:56 
> min]
> [INFO] Apache HBase - Shell ... SUCCESS [  3.980 
> s]
> [INFO] Apache HBase - Integration Tests ... SUCCESS [  02:33 
> h]
> [INFO] Apache HBase - Examples  SUCCESS [ 11.129 
> s]
> [INFO] Apache HBase - External Block Cache  SUCCESS [  1.231 
> s]
> [INFO] Apache HBase - Assembly  FAILURE [  4.063 
> s]
> [INFO] Apache HBase - Shaded .. SUCCESS [  0.172 
> s]
> [INFO] Apache HBase - Shaded - Client . SUCCESS [  0.793 
> s]
> [INFO] Apache HBase - Shaded - Server . SUCCESS [  1.721 
> s]
> [INFO] Apache HBase - Archetypes .. SUCCESS [  0.094 
> s]
> [INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [01:24 
> min]
> [INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS 
> [01:15 min]
> [INFO] Apache HBase - Archetype builder ... SUCCESS [ 31.495 
> s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 05:10 h
> [INFO] Finished at: 2017-03-22T19:17:16+05:30
> ---
> Failed tests:
> TestRSKilledWhenInitializing.testRSTerminationAfterRegisteringToMasterBeforeCreatingEphemeralNode:123
>  null
>   
> TestScannerHeartbeatMessages.testScannerHeartbeatMessages:207->testImportanceOfHeartbeats:237
>  Heartbeats messages are disabled, an exception should be thrown. If an 
> exception  is not thrown, the test case is not testing the importance of 
> heartbeat messages
> Tests run: 1698, Failures: 2, Errors: 0, Skipped: 11
> --
> If these tests are passing on your end, then please let me know the possible 
> issue that I could have on my side.
> Thanks in advance,
> Anup



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17727) [C++] Make RespConverter work with MockRawAsyncTableImpl

2017-03-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar reassigned HBASE-17727:
-

Assignee: Enis Soztutar  (was: Xiaobing Zhou)

> [C++] Make RespConverter work with MockRawAsyncTableImpl
> 
>
> Key: HBASE-17727
> URL: https://issues.apache.org/jira/browse/HBASE-17727
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Enis Soztutar
>
> This is a follow up work of HBASE-17465. 
> There's a problem to dereference instance of RpcCallback when it's passed as 
> function argument.
> {code}
> template <typename R, typename S>
> using RespConverter = std::function<std::unique_ptr<R>(const S&)>;
> {code}
> {code}
>   template <typename REQ, typename PREQ, typename PRESP, typename RESP>
>   folly::Future<RESP> Call(
>       std::shared_ptr<RpcClient> rpc_client,
>       std::shared_ptr<HBaseRpcController> controller,
>       std::shared_ptr<RegionLocation> loc,
>       const REQ& req,
>       const ReqConverter<std::unique_ptr<PREQ>, REQ, std::string>& req_converter,
>       const hbase::RpcCall<PREQ, PRESP>& rpc_call,
>       const RespConverter<RESP, PRESP>& resp_converter) {
>     rpc_call(rpc_client, loc, controller,
>              std::move(req_converter(req, loc->region_name())))
>         .then([&, this](std::unique_ptr<PRESP> presp) {
>           // std::unique_ptr<RESP> result = resp_converter(presp);
>           std::unique_ptr<RESP> result =
>               hbase::ResponseConverter::FromGetResponse(*presp);
>           promise_->setValue(std::move(*result));
>         })
>         .onError([this](const std::exception& e) { promise_->setException(e); });
>     return promise_->getFuture();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17753) Update QuotaObserverChore to include computed snapshot sizes

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937349#comment-15937349
 ] 

Hadoop QA commented on HBASE-17753:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} HBASE-17753 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860034/HBASE-17753.001.HBASE-17748.patch
 |
| JIRA Issue | HBASE-17753 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6199/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Update QuotaObserverChore to include computed snapshot sizes
> 
>
> Key: HBASE-17753
> URL: https://issues.apache.org/jira/browse/HBASE-17753
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17753.001.HBASE-17748.patch
>
>
> Need to update QuotaObserverChore to include the new snapshot size 
> computations that were implemented in HBASE-17749 so that the quota 
> utilizations are accurate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-03-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-17752:
--

Assignee: Josh Elser

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17753) Update QuotaObserverChore to include computed snapshot sizes

2017-03-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17753:
---
Attachment: HBASE-17753.001.HBASE-17748.patch

.001 Amends QuotaObserverChore to pull the serialized snapshot sizes placed in 
the quota table and use them in the calculation of the quota state.
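
Conceptually, the merge amounts to something like the sketch below. The helper 
and variable names are assumptions for illustration, not taken from the patch:

{code}
// Minimal sketch (assumed helpers): fold per-snapshot sizes read from the
// quota table into each table's usage before evaluating its space quota.
void updateQuotaStateWithSnapshots(Map<TableName, Long> tableUsage,
                                   Map<TableName, Long> snapshotUsage) {
  for (Map.Entry<TableName, Long> e : tableUsage.entrySet()) {
    long snapshotBytes = snapshotUsage.getOrDefault(e.getKey(), 0L);
    // The combined utilization is what gets compared against the table limit.
    updateQuotaState(e.getKey(), e.getValue() + snapshotBytes); // assumed helper
  }
}
{code}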

> Update QuotaObserverChore to include computed snapshot sizes
> 
>
> Key: HBASE-17753
> URL: https://issues.apache.org/jira/browse/HBASE-17753
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17753.001.HBASE-17748.patch
>
>
> Need to update QuotaObserverChore to include the new snapshot size 
> computations that were implemented in HBASE-17749 so that the quota 
> utilizations are accurate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17753) Update QuotaObserverChore to include computed snapshot sizes

2017-03-22 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17753:
---
Status: Patch Available  (was: Open)

> Update QuotaObserverChore to include computed snapshot sizes
> 
>
> Key: HBASE-17753
> URL: https://issues.apache.org/jira/browse/HBASE-17753
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17753.001.HBASE-17748.patch
>
>
> Need to update QuotaObserverChore to include the new snapshot size 
> computations that were implemented in HBASE-17749 so that the quota 
> utilizations are accurate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15583) Any HTD we give out should be immutable

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937307#comment-15937307
 ] 

Hadoop QA commented on HBASE-15583:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 7s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 7s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 53s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 20s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 112m 21s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 57s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 10s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 54s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 140m 50s 
{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
32s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (HBASE-17817) Make Regionservers log which tables it removed coprocessors from when aborting

2017-03-22 Thread Steen Manniche (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937292#comment-15937292
 ] 

Steen Manniche commented on HBASE-17817:


I don't have the relevant log lines with me right now. But the case is that, 
when looking in the hbase-regionserver log, the exception shows the qualified 
name of the coprocessor as well as the line where the exception occurred. There 
is no trace of which table the coprocessor was loaded on.
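
For instance, a log line along these lines at abort time would make the culprit 
identifiable from a single entry (a sketch only; the variable names are 
hypothetical, not the actual abort path):

{code}
// Illustrative sketch: carry the table, coprocessor class and jar into the
// abort message instead of only the exception site.
LOG.error("Aborting region server: coprocessor " + cpClassName
    + " loaded on table " + tableName + " from jar " + jarPath
    + " threw an exception", cause);
{code}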

> Make Regionservers log which tables it removed coprocessors from when aborting
> --
>
> Key: HBASE-17817
> URL: https://issues.apache.org/jira/browse/HBASE-17817
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, regionserver
>Affects Versions: 1.1.2
>Reporter: Steen Manniche
>  Labels: logging
>
> When a coprocessor throws a runtime exception (e.g. NPE), the regionserver 
> handles this according to {{hbase.coprocessor.abortonerror}}.
> The output in the logs gives no indication as to which table the coprocessor 
> was removed from (or which version or jarfile is the culprit). This causes 
> longer debugging and recovery times.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14417) Incremental backup and bulk loading

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937212#comment-15937212
 ] 

Hadoop QA commented on HBASE-14417:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 22s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 44s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 139m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859991/14417-tbl-ext.v22.txt 
|
| JIRA Issue | HBASE-14417 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 9e7da8794801 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f2d1b8d |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6198/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6198/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6198/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Incremental backup and bulk loading
> 

[jira] [Commented] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-22 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937161#comment-15937161
 ] 

Enis Soztutar commented on HBASE-17707:
---

bq. I made sure to be consistent with the other cost functions when creating 
this one.
Thanks for checking. 
bq. I have a separate issue open HBASE-17706 that fixes the behavior in the old 
TableSkewCostFunction if people would still like to use it.
We cannot maintain two different cost functions for table skew. Let's remove 
the old one from the code, and only have the new implementation in this patch. 
We cannot have dead code lying around and rotting. We can close HBASE-17706 as 
won't fix. 

Do you mind submitting a patch which also removes the old table skew function? 

The new candidate generator {{TableSkewCandidateGenerator}} is not added to the 
SLB::candidateGenerators field, which means that it is not used. I can only see 
the test using it. Is this intended? It has to be enabled by default. 
- Did you intend to use the raw variable here instead of calling scale again: 
{code}
return raw == 0 ? 0 : .1 + .9 * Math.sqrt(scale(0, maxCost, totalCost));
{code}

> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Minor
> Fix For: 2.0
>
> Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, 
> HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch, 
> HBASE-17707-05.patch, HBASE-17707-06.patch, HBASE-17707-07.patch, 
> HBASE-17707-08.patch, HBASE-17707-09.patch, HBASE-17707-11.patch, 
> HBASE-17707-11.patch, HBASE-17707-12.patch, test-balancer2-13617.out
>
>
> This patch includes new version of the TableSkewCostFunction and a new 
> TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal 
> number of region moves required for a given table to perfectly balance the 
> table across the cluster (i.e. as if the regions from that table had been 
> round-robin-ed across the cluster). This number of moves is computed for each 
> table, then normalized to a score between 0-1 by dividing by the number of 
> moves required in the absolute worst case (i.e. the entire table is stored on 
> one server), and stored in an array. The cost function then takes a weighted 
> average of the average and maximum value across all tables. The weights in 
> this average are configurable to allow for certain users to more strongly 
> penalize situations where one table is skewed versus where every table is a 
> little bit skewed. To better spread this value more evenly across the range 
> 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize 
> the above TableSkewCostFunction. It first simply tries to move regions until 
> each server has the right number of regions, then it swaps regions around 
> such that each region swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 
> 100s of TBs of data and 100s of tables across dozens of servers and found 
> both to be very performant and accurate.
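
For a rough illustration of the computation described above, here is a sketch; 
the method name and the exact normalization are simplified assumptions, not the 
patch itself:

{code}
// Per table: minimal moves to reach a round-robin placement, normalized by
// the worst case (whole table on one server); then a weighted average of the
// mean and max across tables, with a final sqrt to spread scores across 0-1.
double tableSkewCost(int[][] regionsPerServerByTable, double avgWeight, double maxWeight) {
  double sum = 0, max = 0;
  int numTables = regionsPerServerByTable.length;
  for (int[] counts : regionsPerServerByTable) {
    int total = 0;
    for (int c : counts) total += c;
    int balancedCeil = (total + counts.length - 1) / counts.length;
    int moves = 0;
    for (int c : counts) moves += Math.max(0, c - balancedCeil);
    int worstCase = total - balancedCeil;
    double normalized = worstCase == 0 ? 0 : (double) moves / worstCase;
    sum += normalized;
    max = Math.max(max, normalized);
  }
  double weighted =
      (avgWeight * (sum / numTables) + maxWeight * max) / (avgWeight + maxWeight);
  return Math.sqrt(weighted);
}
{code}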



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-12870) "Major compaction triggered" and "Skipping major compaction" messages lack the region information

2017-03-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-12870:
--

Assignee: Chinmay Kulkarni

> "Major compaction triggered" and "Skipping major compaction" messages lack 
> the region information
> -
>
> Key: HBASE-12870
> URL: https://issues.apache.org/jira/browse/HBASE-12870
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 0.98.9
>Reporter: Hari Krishna Dara
>Assignee: Chinmay Kulkarni
>
> The below messages coming from {{RatioBasedCompactionPolicy}} log the default 
> {{.toString()}} output of itself, which is not useful:
> {noformat}
> 2015-01-16 06:15:07,166 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy: 
> Major compaction triggered on store 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy@4acab1f5;
>  time since last major compaction 339994249ms
> {noformat}
> {noformat}
> 2015-01-16 09:49:12,872 DEBUG 
> org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy: 
> Skipping major compaction of 
> org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy@7b9f47bb
>  because one (major) compacted file only and oldestTime 6795587394ms is < 
> ttl=1555200
> {noformat}
> It should log the store/region information instead. Unfortunately, this 
> information seems to be unavailable at this point, so more context needs to 
> be supplied while calling {{RatioBasedCompactionPolicy.isMajorCompaction()}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15712) Tool for retiring empty regions

2017-03-22 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937144#comment-15937144
 ] 

Nick Dimiduk commented on HBASE-15712:
--

[~davelatham]: We're using a 1.1 derivative. I've not tested this with any 
other HBase versions.

[~churromorales]: I would absolutely hope so. I don't believe it has this 
feature at this time, but I haven't followed recent changes to that feature.

> Tool for retiring empty regions
> ---
>
> Key: HBASE-15712
> URL: https://issues.apache.org/jira/browse/HBASE-15712
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Priority: Minor
>
> For folks with rowkey design that includes timestamp, in combination with the 
> TTL feature, empty regions will accumulate. This includes folks making use of 
> Phoenix's [Row timestamps|https://phoenix.apache.org/rowtimestamp.html]. 
> Provide some scripts for cleaning up these empty regions.
> See conversation over on hbase-user: 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201604.mbox/%3CCANZa=gtzgnpqeemvj5p8rjfv-x93vnragoymd1flyc1ahjz...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17771) [C++] Classes required for implementation of BatchCallerBuilder

2017-03-22 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937128#comment-15937128
 ] 

Enis Soztutar commented on HBASE-17771:
---

Thanks for the updated patch. 
- Remove the commented-out code here:
{code}
+  std::shared_ptr stat_;  // = 
std::make_shared();
{code}
- Is this conditional coming from your custom testing? Need to remove it. 
{code}
+  if (action_num % 3 == 0)
+pb_action->set_allocated_get(nullptr);
+  else
+pb_action->set_allocated_get(pb_get.release());
{code}
- Use {{auto unique = std::make_unique<Request>(*pb_req)}} here:
{code}
+  auto unique = std::unique_ptr<Request>(new Request(*pb_req));
{code}
- And maybe replace 
{code}
+  auto region_specifier = new hbase::pb::RegionSpecifier();
+  RequestConverter::SetRegion(region_name, region_specifier);
+  auto pb_region_action = pb_msg->add_regionaction();
+  pb_region_action->set_allocated_region(region_specifier);
{code} 
with something like: 
{code}
+  auto pb_region_action = pb_msg->add_regionaction();
+  RequestConverter::SetRegion(region_name, pb_region_action->mutable_region());
{code}
- Are these statements even necessary? I don't think so: 
{code}
+  auto unique = std::unique_ptr<Request>(new Request(*pb_req));
+  VLOG(8) << "pb_req Addr:-" << pb_req.get() << "; unique Addr:-" << 
unique.get();
+  multi_req = std::move(unique);
+  VLOG(8) << "multi_req Addr:-" << multi_req.get();
{code} 
Seems coming from debugging. 
- Rename this var to be {{multi_response}}. 
+  auto multi_results = std::make_unique<MultiResponse>();
- Can we have some consistency in naming these (and related methods): 
{code}
MultiResponse::Add() 
RegionRequest::add_action()
RegionResult::add_result()
ServerRequest::AddAction() 
{code}
Either it should be add_action and add_result or AddAction and AddResult. 
- Are you gonna handle thread safety in this patch, or in a follow up? 





> [C++] Classes required for implementation of BatchCallerBuilder
> ---
>
> Key: HBASE-17771
> URL: https://issues.apache.org/jira/browse/HBASE-17771
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-17771.HBASE-14850.v1.patch, 
> HBASE-17771.HBASE-14850.v2.patch, HBASE-17771.HBASE-14850.v3.patch
>
>
> Separating depedencies of BatchCallerBuilder.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15712) Tool for retiring empty regions

2017-03-22 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937097#comment-15937097
 ] 

churro morales commented on HBASE-15712:


Wouldn't the RegionNormalizer take care of this? It's in 1.2+, I believe.

> Tool for retiring empty regions
> ---
>
> Key: HBASE-15712
> URL: https://issues.apache.org/jira/browse/HBASE-15712
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Priority: Minor
>
> For folks with rowkey design that includes timestamp, in combination with the 
> TTL feature, empty regions will accumulate. This includes folks making use of 
> Phoenix's [Row timestamps|https://phoenix.apache.org/rowtimestamp.html]. 
> Provide some scripts for cleaning up these empty regions.
> See conversation over on hbase-user: 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201604.mbox/%3CCANZa=gtzgnpqeemvj5p8rjfv-x93vnragoymd1flyc1ahjz...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937059#comment-15937059
 ] 

Hadoop QA commented on HBASE-14614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 75 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 48s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
37s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 141 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 33s 
{color} | {color:red} hbase-server generated 6 new + 0 unchanged - 0 fixed = 6 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hbase-hadoop2-compat generated 0 new + 1 unchanged - 1 
fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hbase-common in the patch passed. {color} 

[jira] [Commented] (HBASE-15712) Tool for retiring empty regions

2017-03-22 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15937049#comment-15937049
 ] 

Dave Latham commented on HBASE-15712:
-

Thanks, Nick.  Do you know what versions of HBase it is compatible with?

> Tool for retiring empty regions
> ---
>
> Key: HBASE-15712
> URL: https://issues.apache.org/jira/browse/HBASE-15712
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Priority: Minor
>
> For folks with rowkey design that includes timestamp, in combination with the 
> TTL feature, empty regions will accumulate. This includes folks making use of 
> Phoenix's [Row timestamps|https://phoenix.apache.org/rowtimestamp.html]. 
> Provide some scripts for cleaning up these empty regions.
> See conversation over on hbase-user: 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201604.mbox/%3CCANZa=gtzgnpqeemvj5p8rjfv-x93vnragoymd1flyc1ahjz...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936970#comment-15936970
 ] 

Vladimir Rodionov commented on HBASE-17739:
---

Do not forget 8-byte padding for small data types. Ref size is 4 bytes only if 
the heap is less than 28GB (or 32?), and the block size is much less if you 
consider compression, especially for time-series data, where the value is much 
smaller than the rowkey. Again, the overhead depends on the application data, 
compression and block size, and can usually be at least 1% of the cache size.  
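
To put rough numbers on that (the constants below are illustrative assumptions, 
not measurements):

{code}
// Back-of-the-envelope: per-block bookkeeping as a fraction of cached data.
long perBlockOverhead = 64;      // refs plus 8-byte-padded headers, assumed
long avgBlockSize = 4 * 1024;    // compressed blocks can be this small or smaller
double overheadPct = 100.0 * perBlockOverhead / avgBlockSize;
System.out.println(overheadPct + "% of the cache goes to bookkeeping"); // ~1.6%
{code}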

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of heap 
> is given over to bucketcache and, say, no allocations are made for a particular 
> bucket size, this means we have a bunch of the bucketcache that just goes 
> idle/unused.
> For example, say heap is 100G. We'll divide it up among the sizes. If say we 
> only ever do 5k records, then most of the cache will go unused while the 
> allocation for 5k objects will see churn.
> Here is an old note of [~anoop.hbase]'s from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. there will be many 513K sized buckets.  If we keep on
> writing only same sized blocks, we may lose all the in-between sized buckets.
> Say we write only 4K sized blocks. We will first fill the bucket of 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from the 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from the 513 size.. There are many of those...
> So we will eventually take all the buckets from 513 except the last
> one.. Ya, it has to keep at least one in every size.. So we will
> lose that much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2K sized blocks..  When we write a block to a 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (maybe the actual size is a bit more than
> 2K) we will take 5K with each of the blocks.  So you can see that we are
> losing ~3K with every block. That means we are losing more than half."
> He goes on: "If I am 100% sure that all my tables have a 2K HFile block size, I 
> need to give this config a value of 3 * 1024 (if I give exactly 2K there may
> again be a problem! That is another story; we need to see how we can give
> more guarantee for the block size restriction, HBASE-15248)..  So here also 
> ~1K lost for every 2K.. So something like a 30% loss !!! :-("
> So, we should figure the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, never mind the 
> inefficiencies caused by how we serialize base types to cache.
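
A toy model of the waste described above (the bucket sizes are a subset and the 
smallest-fitting-slot rule is an assumption for illustration):

{code}
// A block lands in the smallest bucket size that fits it, so a 2K block
// parked in a 5K slot wastes ~60% of that slot.
int[] bucketSizes = {5 * 1024, 9 * 1024, 17 * 1024, 33 * 1024};
int blockSize = 2 * 1024;
for (int slot : bucketSizes) {
  if (blockSize <= slot) {
    System.out.printf("2K block in %dK slot wastes %.0f%%%n",
        slot / 1024, 100.0 * (slot - blockSize) / slot);
    break;
  }
}
{code}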



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14417) Incremental backup and bulk loading

2017-03-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14417:
---
Attachment: 14417-tbl-ext.v22.txt

> Incremental backup and bulk loading
> ---
>
> Key: HBASE-14417
> URL: https://issues.apache.org/jira/browse/HBASE-14417
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>Priority: Blocker
>  Labels: backup
> Fix For: 2.0
>
> Attachments: 14417-tbl-ext.v10.txt, 14417-tbl-ext.v11.txt, 
> 14417-tbl-ext.v14.txt, 14417-tbl-ext.v18.txt, 14417-tbl-ext.v19.txt, 
> 14417-tbl-ext.v20.txt, 14417-tbl-ext.v21.txt, 14417-tbl-ext.v22.txt, 
> 14417-tbl-ext.v9.txt, 14417.v11.txt, 14417.v13.txt, 14417.v1.txt, 
> 14417.v21.txt, 14417.v23.txt, 14417.v24.txt, 14417.v25.txt, 14417.v2.txt, 
> 14417.v6.txt
>
>
> Currently, incremental backup is based on WAL files. Bulk data loading 
> bypasses WALs for obvious reasons, breaking incremental backups. The only way 
> to continue backups after bulk loading is to create new full backup of a 
> table. This may not be feasible for customers who do bulk loading regularly 
> (say, every day).
> Here is the review board (out of date):
> https://reviews.apache.org/r/54258/
> In order not to miss the hfiles which are loaded into region directories in a 
> situation where the postBulkLoadHFile() hook is not called (bulk load being 
> interrupted), we record hfile names through the preCommitStoreFile() hook.
> At time of incremental backup, we check the presence of such hfiles. If they 
> are present, they become part of the incremental backup image.
> Here is review board:
> https://reviews.apache.org/r/57790/
> Google doc for design:
> https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE
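
Conceptually, the hook-based bookkeeping looks like the sketch below. The hook 
name comes from the description above; its exact signature and the 
recordBulkLoadedFile() helper are assumptions, not the actual patch:

{code}
// Remember each hfile before it is committed, so an interrupted bulk load
// still leaves a record for the next incremental backup to pick up.
@Override
public void preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
    byte[] family, List<Pair<Path, Path>> pairs) throws IOException {
  for (Pair<Path, Path> p : pairs) {
    recordBulkLoadedFile(ctx.getEnvironment().getRegionInfo().getTable(),
        p.getSecond()); // assumed helper writing to the backup system table
  }
}
{code}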



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17815:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master.

> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17815.v0.patch
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.branch-1.v3.patch, HBASE-17623.branch-1.v3.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, HBASE-17623.v3.patch, memory 
> allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}
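
The reuse idea reduces to a pattern like the following sketch (simplified and 
assumed, not the actual patch): keep one growable array per writer, reset it 
per block, and copy out only when cache-on-write needs an independent array.

{code}
class ReusableBuffer {
  private byte[] buf = new byte[64 * 1024];  // assumed initial capacity
  private int len;

  void reset() { len = 0; }  // start the next block without reallocating

  void write(byte[] src, int off, int n) {
    if (len + n > buf.length) {
      buf = java.util.Arrays.copyOf(buf, Math.max(buf.length * 2, len + n));
    }
    System.arraycopy(src, off, buf, len, n);
    len += n;
  }

  byte[] backingArray() { return buf; }  // shared; valid only until reset()
  byte[] copyForCache() { return java.util.Arrays.copyOf(buf, len); }
}
{code}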



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Attachment: HBASE-17623.branch-1.v3.patch

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.branch-1.v3.patch, HBASE-17623.branch-1.v3.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, HBASE-17623.v3.patch, memory 
> allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Open  (was: Patch Available)

TestScannerHeartbeatMessages passes locally. Retrying.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.branch-1.v3.patch, HBASE-17623.branch-1.v3.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, HBASE-17623.v3.patch, memory 
> allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}
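
A minimal sketch of the reuse pattern behind improvement #1, with hypothetical 
names (this is not the actual patch): keep one growable buffer per writer, 
reset it per block, and copy out only when a block must be cached.
{code}
import java.util.Arrays;

final class ReusableBlockBuffer {
  private byte[] buf = new byte[64 * 1024];
  private int len;

  byte[] array() { return buf; }   // shared backing array, valid until reset()
  int length() { return len; }

  void reset() { len = 0; }        // reuse the same array for the next block

  void write(byte[] src, int off, int n) {
    if (len + n > buf.length) {
      buf = Arrays.copyOf(buf, Math.max(buf.length * 2, len + n)); // grow, keep data
    }
    System.arraycopy(src, off, buf, len, n);
    len += n;
  }

  // Copy out only for cache-on-write; the write path keeps using buf directly.
  byte[] copyForCache() { return Arrays.copyOf(buf, len); }
}
{code}
With a buffer like this, a writer that never caches blocks never makes a 
per-block copy at all, which matches the allocation savings the attached 
measurements aim to show.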



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936900#comment-15936900
 ] 

Anoop Sam John commented on HBASE-16438:


Not clearing/removing from the Q is not the real issue. Code-wise there may be 
no remove, but the MSLABImpl object keeps this Q, and the MSLAB object becomes 
dead once the segment referring to it gets flushed, so eventually it all goes 
away. The issue was specific to the use case where there are many updates to 
the same Cell, with MSLAB in use. So we have cells from one chunk that all 
become irrelevant after some time as new cells come in; adding new cells to the 
CSLM removes the old ones. If nothing other than those original Cells were 
referring to that chunk, a GC could collect it, but adding it to the Q was 
preventing that. That is why that jira changed it to keep only the chunks from 
the pool. Now we keep references to all chunks created out of ChunkCreator, so 
we are back to the old problem.

But as such there won't be an OOME for sure, because as part of the above jira 
[~carp84] fixed another issue so that we account for the cell size (when its 
data bytes are in a chunk); we will therefore end up breaching the size limit 
at the region/global level, and a normal/forced flush will happen. So I don't 
see any OOME issue. But to get better GC in this scenario, HBASE-16195 would 
have helped, and it will be broken by this jira.

Am I making things clear?

cc [~carp84]

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119
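
A minimal illustration of what such a cell needs to carry, with hypothetical 
names (not the actual patch): instead of an object reference, a flattened 
CellChunkMap entry can hold the id of the chunk plus an offset/length, and 
resolve the chunk by id later.
{code}
// Hypothetical sketch: a cell reference that embeds the id of the memstore
// chunk holding its serialized bytes.
final class ChunkBackedCellRef {
  final int chunkId; // id handed out by the chunk creator/pool
  final int offset;  // offset of the serialized cell inside the chunk
  final int length;  // serialized cell length

  ChunkBackedCellRef(int chunkId, int offset, int length) {
    this.chunkId = chunkId;
    this.offset = offset;
    this.length = length;
  }
}
{code}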



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936878#comment-15936878
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:


Ok, seeing this comment now. I think in HBASE-16195 there was a queue that was 
never getting cleared at all, and hence there was a problem?
{code}
 this.chunkQueue.add(c);
{code}
I can see the above line in the patch V4 attached to that JIRA, where the chunk 
is now added to this queue only when there is a pool. Previously there was no 
remove from this queue, I believe.
Now in these patches we hold on to the Chunk, but we still clear it on close, 
or when the scanner closes it, right? Anyway these chunks are needed until the 
flush completes, so somewhere a reference is still maintained. Or am I missing 
something?

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-15712) Tool for retiring empty regions

2017-03-22 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936873#comment-15936873
 ] 

Nick Dimiduk commented on HBASE-15712:
--

Sorry for the long delay.

Here's the script we've been running in Prod for the last year or so. The 
use-case it supports is storage of write-once event log data in tables using 
Phoenix's [Row Timestamp|https://phoenix.apache.org/rowtimestamp.html] feature 
in combination with HBase's [TTL|http://hbase.apache.org/book.html#ttl] 
functionality. Because the timestamp is included in the rowkey, we end up with 
regions that contain no data once their TTL expires. This script is run 
periodically on those tables to prune off the empty regions. It has issues 
similar to HBCK's, in that cluster topology can change between when it decides 
on an execution plan and the execution of that plan, but multiple runs will 
converge.

https://gist.github.com/ndimiduk/6594d55a7a282c5d3378e65b9582deaa
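
For illustration only (this is not the linked script, which should be treated 
as the reference): the detection half of the approach can be sketched against 
the HBase 1.x client API by looking for regions whose store files and memstore 
are both empty.
{code}
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;

public class EmptyRegionFinder {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterStatus status = admin.getClusterStatus();
      for (ServerName sn : status.getServers()) {
        Map<byte[], RegionLoad> loads = status.getLoad(sn).getRegionsLoad();
        for (RegionLoad rl : loads.values()) {
          // Zero store file size and an empty memstore mark a merge candidate.
          if (rl.getStorefileSizeMB() == 0 && rl.getMemStoreSizeMB() == 0) {
            System.out.println(rl.getNameAsString());
          }
        }
      }
    }
  }
}
{code}
As noted above, any plan built from a snapshot like this can be invalidated by 
splits, merges, or moves that happen before it executes, so the pruning itself 
has to tolerate stale topology.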

> Tool for retiring empty regions
> ---
>
> Key: HBASE-15712
> URL: https://issues.apache.org/jira/browse/HBASE-15712
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Priority: Minor
>
> For folks with rowkey design that includes timestamp, in combination with the 
> TTL feature, empty regions will accumulate. This includes folks making use of 
> Phoenix's [Row timestamps|https://phoenix.apache.org/rowtimestamp.html]. 
> Provide some scripts for cleaning up these empty regions.
> See conversation over on hbase-user: 
> http://mail-archives.apache.org/mod_mbox/hbase-user/201604.mbox/%3CCANZa=gtzgnpqeemvj5p8rjfv-x93vnragoymd1flyc1ahjz...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936859#comment-15936859
 ] 

Hadoop QA commented on HBASE-17707:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 56s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 113m 9s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859950/HBASE-17707-12.patch |
| JIRA Issue | HBASE-17707 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux ea463b4ff6a0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9410709 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6193/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6193/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no 

[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936839#comment-15936839
 ] 

Hadoop QA commented on HBASE-17623:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
54s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 55s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 53s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 39s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 4s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 125m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestScannerHeartbeatMessages |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936816#comment-15936816
 ] 

Hadoop QA commented on HBASE-17595:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 0s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
47m 16s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 44s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 164m 9s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
3s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 244m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.util.TestHBaseFsckTwoRS |
|   | hadoop.hbase.util.TestHBaseFsckReplicas |
|   | hadoop.hbase.client.TestScannersFromClientSide2 |
| Timed out junit tests | 
org.apache.hadoop.hbase.security.access.TestAccessController3 |
|   | org.apache.hadoop.hbase.security.access.TestCellACLWithMultipleVersions |
|   | 
org.apache.hadoop.hbase.security.access.TestCoprocessorWhitelistMasterObserver |
|   | org.apache.hadoop.hbase.security.token.TestZKSecretWatcher |
|   | org.apache.hadoop.hbase.util.TestHBaseFsckTwoRS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859936/HBASE-17595-addendum.patch
 |
| JIRA Issue | HBASE-17595 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 4be303e9c0d9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-03-22 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-17819:
--

 Summary: Reduce the heap overhead for BucketCache
 Key: HBASE-17819
 URL: https://issues.apache.org/jira/browse/HBASE-17819
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


We keep a bucket entry map in BucketCache. Below is the math for the heap size 
of the key and value in this map.
BlockCacheKey
---
String hfileName  -  Ref  - 4
long offset  - 8
BlockType blockType  - Ref  - 4
boolean isPrimaryReplicaBlock  - 1
Total  =  12 (Object) + 17 = 29

BucketEntry

int offsetBase  -  4
int length  - 4
byte offset1  -  1
byte deserialiserIndex  -  1
long accessCounter  -  8
BlockPriority priority  - Ref  - 4
volatile boolean markedForEvict  -  1
AtomicInteger refCount  -  16 + 4
long cachedTime  -  8
Total = 12 (Object) + 51 = 63

ConcurrentHashMap Map.Entry  -  40
blocksByHFile ConcurrentSkipListSet Entry  -  40

Total = 29 + 63 + 80 = 172

For 10 million blocks we will end up with ~1.6GB of heap overhead (172 bytes x 
10 million). This jira aims to reduce that as much as possible.
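
A quick back-of-the-envelope check of the totals above:
{code}
public class BucketCacheOverhead {
  public static void main(String[] args) {
    long blockCacheKey = 12 + 17;  // object header + fields = 29
    long bucketEntry   = 12 + 51;  // object header + fields = 63
    long mapEntries    = 40 + 40;  // CHM Map.Entry + CSLS entry = 80
    long perBlock = blockCacheKey + bucketEntry + mapEntries; // 172 bytes
    long blocks = 10_000_000L;
    System.out.printf("heap overhead ~= %.2f GB%n",
        perBlock * blocks / (1024.0 * 1024 * 1024)); // ~1.60 GB
  }
}
{code}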



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936813#comment-15936813
 ] 

Anoop Sam John commented on HBASE-17739:


Thanks. Ya, I missed some parts, like the Object size itself and the CSLM entry 
size. My math comes out this way:
BlockCacheKey
---
String hfileName  -  Ref  - 4
long offset  - 8
BlockType blockType  - Ref  - 4
boolean isPrimaryReplicaBlock  - 1
Total  =  12 (Object) + 17 = 29

BucketEntry

int offsetBase  -  4
int length  - 4
byte offset1  -  1
byte deserialiserIndex  -  1
long accessCounter  -  8
BlockPriority priority  - Ref  - 4
volatile boolean markedForEvict  -  1
AtomicInteger refCount  -  16 + 4
long cachedTime  -  8
Total = 12 (Object) + 51 = 63

ConcurrentHashMap Map.Entry  -  40
blocksByHFile ConcurrentSkipListSet Entry  -  40

Total = 29 + 63 + 80 = 172

Maybe you are considering 8 as the reference size. I am following what 
ClassSize returns; pls refer to UnsafeLayout.

Ya, it is big, I agree. For 10 million entries we will end up with ~1.6 GB of 
heap (172 bytes x 10 million entries). Considering a 64 KB block size, 10 
million blocks means about 600 GB of cache.

I went through this some time back as well, to see how far the heap grows when 
the BC gets really big, but some of my calculations were off and the number did 
not come out this big.
Let me raise an issue. I can see some possibility to reduce this. Any reduction 
here will help us for sure.

Thanks Vladimir for raising the concern.

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of heap 
> is given over to bucketcache and, say, no allocations are made for a 
> particular bucket size, this means we have a bunch of the bucketcache that 
> just goes idle/unused.
> For example, say heap is 100G. We'll divide it up among the sizes. If say we 
> only ever do 5k records, then most of the cache will go unused while the 
> allocation for 5k objects will see churn.
> Here is an old note of [~anoop.hbase]'s from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. there will be many 513K sized buckets.  If we keep on
> writing only same sized blocks, we may lose all the in-between sized buckets.
> Say we write only 4K sized blocks. We will 1st fill the bucket in 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from the 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from the 513 size.. There are many in that...
> So we will eventually take all the buckets from 513 except the last
> one.. Ya it has to keep at least one in every size.. So we will
> lose that much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2K sized blocks..  When we write a block to a 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (maybe the actual size is a bit more than
> 2K) we will take 5K with each of the blocks.  So u can see that we are
> losing ~3K with every block. Means we are losing more than half."
> He goes on: "If I am 100% sure that all my tables have a 2K HFile block size, 
> I need to give this config a value 3 * 1024 (if I give exactly 2K there may
> be a problem again! That is another story; we need to see how we can give
> more guarantee for the block size restriction, HBASE-15248)..  So here also 
> ~1K lost for every 2K.. So something like a 30% loss !!! :-("
> So, we should figure the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, nvm the inefficiencies 
> from how we serialize base types to cache.
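
The slot-waste arithmetic in those notes is easy to reproduce. A small sketch, 
assuming the default 14 bucket sizes (5K..513K) described above:
{code}
public class BucketWaste {
  // Default bucket sizes in KB, per the description above.
  static final int[] SIZES = {5, 9, 17, 33, 41, 49, 57, 65, 97, 129, 193, 257, 385, 513};

  // Smallest bucket size (in KB) that fits a block of blockKb.
  static int slotFor(int blockKb) {
    for (int s : SIZES) {
      if (blockKb <= s) return s;
    }
    throw new IllegalArgumentException("block too large: " + blockKb + "KB");
  }

  public static void main(String[] args) {
    int blockKb = 2;               // the ~2K block example from the notes
    int slotKb = slotFor(blockKb); // lands in the 5K bucket
    System.out.printf("%dK block -> %dK slot, %dK (%.0f%%) wasted%n",
        blockKb, slotKb, slotKb - blockKb, 100.0 * (slotKb - blockKb) / slotKb);
  }
}
{code}
A 2K block reserved in a 5K slot wastes 3K, i.e. 60% of the slot, which is the 
"losing more than half" figure quoted above.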



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936708#comment-15936708
 ] 

Anoop Sam John commented on HBASE-16438:


Pls see HBASE-16193... Now we will keep a ref to the Chunks as long as the 
MSLAB which created them is not closed. With that we will kind of break the fix 
in HBASE-16195. Thoughts?

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15583) Any HTD we give out should be immutable

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Release Note: 
# The HTD obtained from Admin, AsyncAdmin, and Table is immutable.
# DEFERRED_LOG_FLUSH is removed.
# Clean up the deprecated construction of HTD.
  Status: Patch Available  (was: Open)

> Any HTD we give out should be immutable
> ---
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-15583.v0.patch
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor(). It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo which is not used at the moment it 
> seems. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15583) Any HTD we give out should be immutable

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Attachment: HBASE-15583.v0.patch

> Any HTD we give out should be immutable
> ---
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-15583.v0.patch
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor(). It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo which is not used at the moment it 
> seems. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15583) Any HTD we give out should be immutable

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Summary: Any HTD we give out should be immutable  (was: Discuss mutable vs 
immutable HTableDescriptor)

> Any HTD we give out should be immutable
> ---
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor(). It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo which is not used at the moment it 
> seems. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-03-22 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.016.patch

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.011.patch, 
> HBASE-14614.master.012.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.016.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handle assignment operation
>  - UnassignProcedure handle unassign operation
>  - MoveRegionProcedure handle move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer
> Concurrent Assign and Unassign operation ready to be sent to the RS are 
> batched together
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, to be reached by tests. but the new AM 
> will not be integrated with the rest of the system. Only new am unit-tests 
> will exercise the new assigment manager. The integration with the master code 
> is part of HBASE-14616



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Attachment: HBASE-17623.branch-1.v3.patch

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.branch-1.v3.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, HBASE-17623.v3.patch, 
> HBASE-17623.v3.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.branch-1.v3.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v2.patch, HBASE-17623.v3.patch, 
> HBASE-17623.v3.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Open  (was: Patch Available)

Ran the timeout tests locally; all pass.
Submitting the patch for branch-1.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, HBASE-17623.v3.patch, memory 
> allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17816) HRegion#mutateRowWithLocks should update writeRequestCount metric

2017-03-22 Thread Weizhan Zeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936557#comment-15936557
 ] 

Weizhan Zeng commented on HBASE-17816:
--

[~ashu210890] I would like to upload a test, do you mind?

> HRegion#mutateRowWithLocks should update writeRequestCount metric
> -
>
> Key: HBASE-17816
> URL: https://issues.apache.org/jira/browse/HBASE-17816
> Project: HBase
>  Issue Type: Bug
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-17816.master.001.patch
>
>
> Currently, all the calls that use HRegion#mutateRowWithLocks miss the 
> writeRequestCount metric. The mutateRowWithLocks base method should update 
> the metric.
> Examples are checkAndMutate calls through RSRpcServices#multi, the 
> Region#mutateRow api, and the MultiRowMutationProcessor coprocessor endpoint.
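
A minimal sketch of the proposed fix, with simplified, hypothetical types (not 
the actual HRegion code): the shared mutateRowWithLocks path bumps the 
per-region write counter once, so every caller listed above gets counted.
{code}
import java.util.Collection;
import java.util.concurrent.atomic.LongAdder;

final class RegionWriteMetrics {
  private final LongAdder writeRequestsCount = new LongAdder();

  void mutateRowWithLocks(Collection<?> mutations) {
    writeRequestsCount.add(mutations.size()); // the previously missing update
    // ... acquire the row locks and apply the mutations as before ...
  }

  long getWriteRequestsCount() { return writeRequestsCount.sum(); }
}
{code}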



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-22 Thread Kahlil Oppenheimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kahlil Oppenheimer updated HBASE-17707:
---
Status: Patch Available  (was: Open)

> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Minor
> Fix For: 2.0
>
> Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, 
> HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch, 
> HBASE-17707-05.patch, HBASE-17707-06.patch, HBASE-17707-07.patch, 
> HBASE-17707-08.patch, HBASE-17707-09.patch, HBASE-17707-11.patch, 
> HBASE-17707-11.patch, HBASE-17707-12.patch, test-balancer2-13617.out
>
>
> This patch includes a new version of the TableSkewCostFunction and a new 
> TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal 
> number of region moves required for a given table to perfectly balance the 
> table across the cluster (i.e. as if the regions from that table had been 
> round-robin-ed across the cluster). This number of moves is computed for each 
> table, then normalized to a score between 0-1 by dividing by the number of 
> moves required in the absolute worst case (i.e. the entire table is stored on 
> one server), and stored in an array. The cost function then takes a weighted 
> average of the average and maximum value across all tables. The weights in 
> this average are configurable to allow for certain users to more strongly 
> penalize situations where one table is skewed versus where every table is a 
> little bit skewed. To better spread this value more evenly across the range 
> 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize 
> the above TableSkewCostFunction. It first simply tries to move regions until 
> each server has the right number of regions, then it swaps regions around 
> such that each region swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 
> 100s of TBs of data and 100s of tables across dozens of servers and found 
> both to be very performant and accurate.
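
A minimal sketch of the normalization described above, with hypothetical inputs 
(not the actual balancer code): movesPerTable[i] is the minimal number of moves 
needed to round-robin table i, and worstCasePerTable[i] is the number needed if 
the whole table sat on one server.
{code}
public class TableSkewCostSketch {
  static double cost(int[] movesPerTable, int[] worstCasePerTable,
                     double avgWeight, double maxWeight) {
    double sum = 0, max = 0;
    for (int i = 0; i < movesPerTable.length; i++) {
      // Normalize each table's move count to [0, 1] against its worst case.
      double scaled = worstCasePerTable[i] == 0
          ? 0 : (double) movesPerTable[i] / worstCasePerTable[i];
      sum += scaled;
      max = Math.max(max, scaled);
    }
    double avg = sum / movesPerTable.length;
    // Weighted average of mean and max skew, then a square root to spread
    // the result more evenly across [0, 1].
    double weighted = (avgWeight * avg + maxWeight * max) / (avgWeight + maxWeight);
    return Math.sqrt(weighted);
  }

  public static void main(String[] args) {
    // Two tables: one needs 2 of a worst-case 10 moves, the other is balanced.
    System.out.println(cost(new int[]{2, 0}, new int[]{10, 8}, 1.0, 1.0));
  }
}
{code}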



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-22 Thread Kahlil Oppenheimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kahlil Oppenheimer updated HBASE-17707:
---
Status: Open  (was: Patch Available)

> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Minor
> Fix For: 2.0
>
> Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, 
> HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch, 
> HBASE-17707-05.patch, HBASE-17707-06.patch, HBASE-17707-07.patch, 
> HBASE-17707-08.patch, HBASE-17707-09.patch, HBASE-17707-11.patch, 
> HBASE-17707-11.patch, HBASE-17707-12.patch, test-balancer2-13617.out
>
>
> This patch includes a new version of the TableSkewCostFunction and a new 
> TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal 
> number of region moves required for a given table to perfectly balance the 
> table across the cluster (i.e. as if the regions from that table had been 
> round-robin-ed across the cluster). This number of moves is computed for each 
> table, then normalized to a score between 0-1 by dividing by the number of 
> moves required in the absolute worst case (i.e. the entire table is stored on 
> one server), and stored in an array. The cost function then takes a weighted 
> average of the average and maximum value across all tables. The weights in 
> this average are configurable to allow for certain users to more strongly 
> penalize situations where one table is skewed versus where every table is a 
> little bit skewed. To better spread this value more evenly across the range 
> 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize 
> the above TableSkewCostFunction. It first simply tries to move regions until 
> each server has the right number of regions, then it swaps regions around 
> such that each region swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 
> 100s of TBs of data and 100s of tables across dozens of servers and found 
> both to be very performant and accurate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-22 Thread Kahlil Oppenheimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kahlil Oppenheimer updated HBASE-17707:
---
Attachment: HBASE-17707-12.patch

> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Minor
> Fix For: 2.0
>
> Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, 
> HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch, 
> HBASE-17707-05.patch, HBASE-17707-06.patch, HBASE-17707-07.patch, 
> HBASE-17707-08.patch, HBASE-17707-09.patch, HBASE-17707-11.patch, 
> HBASE-17707-11.patch, HBASE-17707-12.patch, test-balancer2-13617.out
>
>
> This patch includes a new version of the TableSkewCostFunction and a new 
> TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal 
> number of region moves required for a given table to perfectly balance the 
> table across the cluster (i.e. as if the regions from that table had been 
> round-robined across the cluster). This number of moves is computed for each 
> table, then normalized to a score between 0-1 by dividing by the number of 
> moves required in the absolute worst case (i.e. the entire table is stored on 
> one server), and stored in an array. The cost function then takes a weighted 
> average of the average and maximum value across all tables. The weights in 
> this average are configurable to allow for certain users to more strongly 
> penalize situations where one table is skewed versus where every table is a 
> little bit skewed. To better spread this value more evenly across the range 
> 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize 
> the above TableSkewCostFunction. It first simply tries to move regions until 
> each server has the right number of regions, then it swaps regions around 
> such that each region swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 
> 100s of TBs of data and 100s of tables across dozens of servers and found 
> both to be very performant and accurate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-22 Thread Kahlil Oppenheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936530#comment-15936530
 ] 

Kahlil Oppenheimer commented on HBASE-17707:


Sorry, I just realized this was unclear: the unit test pre-dates my patch (it 
covered the old table skew cost function) but now applies to the new one. It is 
found at TestStochasticLoadBalancer::testTableSkewCost. Also, it *is* a hard 
guarantee that {{numMovesPerTable <= pathologicalNumMoves}}. I made sure to be 
consistent with the other cost functions when creating this one.

The issue is that the old table skew cost function was fundamentally broken: it 
did not change its cost estimate as the balancer proposed region moves/swaps, 
so the table skew cost it estimated at the beginning of balancing was often 
the same as at the end, which meant it actually played no role in the 
balancing at all (see the sketch below). I have a separate issue open, 
HBASE-17706, that fixes this behavior in the old TableSkewCostFunction if 
people would still like to use it, but I can't merge that one until this one 
gets resolved. In any case, it does not surprise me that this new cost 
function alters behavior, because we are effectively having table skew 
considered for the first time in the balancing process.

I'll go ahead and rebase/resubmit a new patch that includes the new table skew 
stuff as well as the fix to the region replica host cost function.
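
To make the point about incremental updates concrete, here is a rough sketch of
the contract involved; the names are illustrative, not the exact HBase
signatures:

{code}
// A stochastic balancer proposes moves and re-reads cost() after each one,
// so a cost function that never updates its state in the move callback
// effectively "plays no role" in balancing.
abstract class SkewAwareCostFunction {
  /**
   * Called for every proposed move; a correct table-skew implementation
   * adjusts its per-table move counts here.
   */
  void regionMoved(int region, int oldServer, int newServer) {}

  /** Must reflect all moves proposed so far, not a one-time snapshot. */
  abstract double cost();
}
{code}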



> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Minor
> Fix For: 2.0
>
> Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, 
> HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch, 
> HBASE-17707-05.patch, HBASE-17707-06.patch, HBASE-17707-07.patch, 
> HBASE-17707-08.patch, HBASE-17707-09.patch, HBASE-17707-11.patch, 
> HBASE-17707-11.patch, test-balancer2-13617.out
>
>
> This patch includes a new version of the TableSkewCostFunction and a new 
> TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal 
> number of region moves required for a given table to perfectly balance the 
> table across the cluster (i.e. as if the regions from that table had been 
> round-robined across the cluster). This number of moves is computed for each 
> table, then normalized to a score between 0-1 by dividing by the number of 
> moves required in the absolute worst case (i.e. the entire table is stored on 
> one server), and stored in an array. The cost function then takes a weighted 
> average of the average and maximum value across all tables. The weights in 
> this average are configurable to allow for certain users to more strongly 
> penalize situations where one table is skewed versus where every table is a 
> little bit skewed. To better spread this value more evenly across the range 
> 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize 
> the above TableSkewCostFunction. It first simply tries to move regions until 
> each server has the right number of regions, then it swaps regions around 
> such that each region swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 
> 100s of TBs of data and 100s of tables across dozens of servers and found 
> both to be very performant and accurate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17810) TestFuzzyRowFilter and IntegrationTest* tests on "branch-1" are broken

2017-03-22 Thread Anup Halarnkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936523#comment-15936523
 ] 

Anup Halarnkar commented on HBASE-17810:


I have run the build/tests many times and am still getting the same failures. 
It doesn't look like a timing issue.
Please let me know if this might be related to an environment issue.

Thanks in advance,
Anup

> TestFuzzyRowFilter and IntegrationTest* tests on "branch-1" are broken
> --
>
> Key: HBASE-17810
> URL: https://issues.apache.org/jira/browse/HBASE-17810
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0
> Environment: OS: Ubuntu 14.04
> Arch: ppc64le
>Reporter: Anup Halarnkar
> Fix For: 2.0.0
>
>
> I saw that some fixes (HBASE-17746) were in branch-1, so I used this branch to 
> see if I am getting any failures.
> This is the result:
> Results :
> Failed tests:
>   TestFuzzyRowFilter.testSatisfiesForward:80 expected: but was:
>   TestFuzzyRowFilter.testSatisfiesReverse:120 expected: but 
> was:
> Tests in error:
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithDecreasingCommonPrefix:140->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithDiffFamilyAndQualifer:249->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithDiffQualifer:160->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithDiffQualiferOnSameRow:183->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithDiffQualiferOnSameRow1:206->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithDiffQualiferOnSameRowButDescendingInSize:229->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekToBlockWithNonMatchingSeekKey:65->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekingToBlockToANotAvailableKey:117->seekToTheKey:270
>  » NoClassDefFound
>   
> TestSeekToBlockWithEncoders.testSeekingToBlockWithBiggerNonLength1:91->seekToTheKey:270
>  » NoClassDefFound
>   TestHFileEncryption.testHFileEncryption:232 » NoClassDefFound Could not 
> initia...
>   
> TestSeekTo.testSeekBeforeWithReSeekTo:199->testSeekBeforeWithReSeekToInternals:216
>  » NoClassDefFound
>   TestSeekTo.testSeekBefore:145->testSeekBeforeInternals:161 » 
> NoClassDefFound C...
>   TestSeekTo.testSeekTo:292->testSeekToInternals:308 » NoClassDefFound Could 
> not...
> Tests run: 1106, Failures: 2, Errors: 13, Skipped: 22
> 
> Results :
> Failed tests:
>   IntegrationTestRegionReplicaReplication.testIngest:112->runIngestTest:214 
> Load failed with error code 1
>   
> IntegrationTestBulkLoad.testBulkLoad:213->runLoad:223->runLinkedListMRJob:296 
> expected: but was:
>   IntegrationTestImportTsv.testGenerateAndLoad:203 expected:<0> but was:<1>
>   IntegrationTestLoadAndVerify.testLoadAndVerify:544->doLoad:353 null
>   
> IntegrationTestWithCellVisibilityLoadAndVerify>IntegrationTestLoadAndVerify.testLoadAndVerify:544->doLoad:244->IntegrationTestLoadAndVerify.doLoad:353
>  null
> Tests in error:
>   
> IntegrationTestIngestWithACL>IntegrationTestBase.setUp:148->setUpCluster:64->IntegrationTestIngest.setUpCluster:84
>  » IO
>   IntegrationTestMTTR.testKillRsHoldingMeta:278->run:319 » Execution 
> java.io.IOE...
>   IntegrationTestMTTR.testRestartRsHoldingTable:273->run:319 » Execution 
> java.io...
>   IntegrationTestBigLinkedList.testContinuousIngest:1798 » Runtime Generator 
> fai...
>   IntegrationTestBigLinkedListWithVisibility.testContinuousIngest:637 » 
> Runtime ...
>   
> IntegrationTestReplication>IntegrationTestBigLinkedList.testContinuousIngest:1798
>  » Runtime
> Tests run: 32, Failures: 5, Errors: 6, Skipped: 2
> Any idea what could be the reason for the above failures?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17818) On branch-1 in module Server, TestRSKilledWhenInitializing and TestScannerHeartbeatMessages broken

2017-03-22 Thread Anup Halarnkar (JIRA)
Anup Halarnkar created HBASE-17818:
--

 Summary: On branch-1 in module Server, 
TestRSKilledWhenInitializing and TestScannerHeartbeatMessages broken
 Key: HBASE-17818
 URL: https://issues.apache.org/jira/browse/HBASE-17818
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
 Environment: OS: Ubuntu 14.04
Arch: x86_64
Reporter: Anup Halarnkar
 Fix For: 2.0.0


Branch: branch-1
Command: mvn clean install -X -fn

Output:
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 2.030 s
[INFO] Finished at: 2017-03-22T19:17:16+05:30
[INFO] Final Memory: 15M/304M
[INFO] 
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache HBase ... SUCCESS [ 24.315 s]
[INFO] Apache HBase - Checkstyle .. SUCCESS [  7.797 s]
[INFO] Apache HBase - Resource Bundle . SUCCESS [  0.263 s]
[INFO] Apache HBase - Annotations . SUCCESS [  2.416 s]
[INFO] Apache HBase - Protocol  SUCCESS [ 12.994 s]
[INFO] Apache HBase - Common .. SUCCESS [02:31 min]
[INFO] Apache HBase - Procedure ... SUCCESS [03:37 min]
[INFO] Apache HBase - Client .. SUCCESS [01:53 min]
[INFO] Apache HBase - Hadoop Compatibility  SUCCESS [  8.560 s]
[INFO] Apache HBase - Hadoop Two Compatibility  SUCCESS [ 10.500 s]
[INFO] Apache HBase - Prefix Tree . SUCCESS [  9.367 s]
[INFO] Apache HBase - Server .. FAILURE [  02:04 h]
[INFO] Apache HBase - Testing Util  SUCCESS [  5.107 s]
[INFO] Apache HBase - Thrift .. SUCCESS [04:52 min]
[INFO] Apache HBase - Rest  SUCCESS [13:56 min]
[INFO] Apache HBase - Shell ... SUCCESS [  3.980 s]
[INFO] Apache HBase - Integration Tests ... SUCCESS [  02:33 h]
[INFO] Apache HBase - Examples  SUCCESS [ 11.129 s]
[INFO] Apache HBase - External Block Cache  SUCCESS [  1.231 s]
[INFO] Apache HBase - Assembly  FAILURE [  4.063 s]
[INFO] Apache HBase - Shaded .. SUCCESS [  0.172 s]
[INFO] Apache HBase - Shaded - Client . SUCCESS [  0.793 s]
[INFO] Apache HBase - Shaded - Server . SUCCESS [  1.721 s]
[INFO] Apache HBase - Archetypes .. SUCCESS [  0.094 s]
[INFO] Apache HBase - Exemplar for hbase-client archetype . SUCCESS [01:24 min]
[INFO] Apache HBase - Exemplar for hbase-shaded-client archetype SUCCESS [01:15 
min]
[INFO] Apache HBase - Archetype builder ... SUCCESS [ 31.495 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:10 h
[INFO] Finished at: 2017-03-22T19:17:16+05:30
---
Failed tests:

TestRSKilledWhenInitializing.testRSTerminationAfterRegisteringToMasterBeforeCreatingEphemeralNode:123
 null
  
TestScannerHeartbeatMessages.testScannerHeartbeatMessages:207->testImportanceOfHeartbeats:237
 Heartbeats messages are disabled, an exception should be thrown. If an 
exception  is not thrown, the test case is not testing the importance of 
heartbeat messages

Tests run: 1698, Failures: 2, Errors: 0, Skipped: 11
--
If these tests are passing on your end, then please let me know what the 
possible issue could be on my side.

Thanks in advance,
Anup



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17817) Make Regionservers log which tables it removed coprocessors from when aborting

2017-03-22 Thread Weizhan Zeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936500#comment-15936500
 ] 

Weizhan Zeng commented on HBASE-17817:
--

Do you mean that when RegionCoprocessorHost#execOperation hits an exception, we 
should log something?

> Make Regionservers log which tables it removed coprocessors from when aborting
> --
>
> Key: HBASE-17817
> URL: https://issues.apache.org/jira/browse/HBASE-17817
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, regionserver
>Affects Versions: 1.1.2
>Reporter: Steen Manniche
>  Labels: logging
>
> When a coprocessor throws a runtime exception (e.g. NPE), the regionserver 
> handles this according to {{hbase.coprocessor.abortonerror}}.
> The output in the logs gives no indication as to which table the coprocessor 
> was removed from (or which version or jarfile is the culprit). This leads to 
> longer debugging and recovery times.
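
For illustration, a hypothetical sketch of the kind of log line being asked
for; the helper and its parameters are invented here, since the exact way to
obtain the coprocessor class, jar path, and table name from the coprocessor
environment varies by version:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Invented helper, not HBase code: emits the table/jar context the reporter
// wants when the hbase.coprocessor.abortonerror handling removes a
// coprocessor.
final class CoprocessorAbortLogging {
  private static final Log LOG = LogFactory.getLog(CoprocessorAbortLogging.class);

  static void logRemoval(String coprocessorClass, String jarPath,
                         String tableName, Throwable error) {
    LOG.error("Removing coprocessor '" + coprocessorClass + "' (loaded from "
        + jarPath + ") from table '" + tableName
        + "' after uncaught exception", error);
  }
}
{code}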



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936395#comment-15936395
 ] 

Hadoop QA commented on HBASE-17623:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 22s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 58s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
49s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestReplicasClient |
|   | org.apache.hadoop.hbase.client.TestFromClientSide3 |
|   | org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestAsyncTableScan |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859918/HBASE-17623.v3.patch |
| JIRA Issue | HBASE-17623 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 9d5880908a98 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9410709 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (HBASE-17669) Implement async mergeRegion/splitRegion methods.

2017-03-22 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936369#comment-15936369
 ] 

Zheng Hu commented on HBASE-17669:
--

The failed UT is unrelated. [~Apache9], could you help review the patch 
first?

> Implement async mergeRegion/splitRegion methods.
> 
>
> Key: HBASE-17669
> URL: https://issues.apache.org/jira/browse/HBASE-17669
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17669.v1.patch, HBASE-17669.v2.patch, 
> HBASE-17669.v3.patch, HBASE-17669.v3.patch, HBASE-17669.v4.patch, 
> HBASE-17669.v5.patch, HBASE-17669.v5.patch
>
>
> RT



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936347#comment-15936347
 ] 

Duo Zhang commented on HBASE-17595:
---

[~yangzhe1991] FYI.

> Add partial result support for small/limited scan
> -
>
> Key: HBASE-17595
> URL: https://issues.apache.org/jira/browse/HBASE-17595
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17595-addendum.patch, HBASE-17595-branch-1.patch, 
> HBASE-17595.patch, HBASE-17595-v1.patch
>
>
> Partial result support was marked as a 'TODO' when implementing 
> HBASE-17045, and when implementing HBASE-17508 we found that if we make 
> small scan share the same logic as general scan, scan requests other 
> than the open-scanner request will not carry the small flag, so the server 
> may return a partial result to the client and cause strange behavior. It is 
> solved by modifying the logic at the server side, but this means the 1.4.x 
> client is not safe to contact earlier 1.x servers, so we'd better address 
> the problem at the client side. Marked as blocker, as this issue should be 
> finished before any 2.x and 1.4.x releases.
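
For context, a minimal client-side sketch of what partial results imply. The
Scan/Result calls are the public client APIs; the wrapper class and method are
invented for this example:

{code}
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// With setAllowPartialResults(true), one row may arrive as several Result
// objects; isPartial() marks the ones the caller must stitch together.
final class PartialScanExample {
  static void scan(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setAllowPartialResults(true);
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        System.out.println(
            Arrays.toString(r.getRow()) + " partial=" + r.isPartial());
      }
    }
  }
}
{code}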



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16318) fail build if license isn't in whitelist

2017-03-22 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936350#comment-15936350
 ] 

Sean Busbey commented on HBASE-16318:
-

Hi [~reidchan]! please file a new issue if compiling with hadoop 2.6.0 doesn't 
work.

> fail build if license isn't in whitelist
> 
>
> Key: HBASE-16318
> URL: https://issues.apache.org/jira/browse/HBASE-16318
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3, 0.98.22
>
> Attachments: 16318-0.98-addendum2.txt, HBASE-16318.0.patch, 
> HBASE-16318.1.patch, HBASE-16318.2.patch, HBASE-16318.3.patch, 
> HBASE-16318.v3addendum.0.98.patch
>
>
> We use supplemental-models.xml to make sure we have consistent names and 
> descriptions for licenses. We also know what licenses we expect to see in our 
> build. If we see a different one:
> # fail the velocity template process
> # if possible, include some information about why this happened



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17595:
--
Hadoop Flags: Incompatible change  (was: Incompatible change,Reviewed)
  Status: Patch Available  (was: Reopened)

> Add partial result support for small/limited scan
> -
>
> Key: HBASE-17595
> URL: https://issues.apache.org/jira/browse/HBASE-17595
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17595-addendum.patch, HBASE-17595-branch-1.patch, 
> HBASE-17595.patch, HBASE-17595-v1.patch
>
>
> Partial result support was marked as a 'TODO' when implementing 
> HBASE-17045, and when implementing HBASE-17508 we found that if we make 
> small scan share the same logic as general scan, scan requests other 
> than the open-scanner request will not carry the small flag, so the server 
> may return a partial result to the client and cause strange behavior. It is 
> solved by modifying the logic at the server side, but this means the 1.4.x 
> client is not safe to contact earlier 1.x servers, so we'd better address 
> the problem at the client side. Marked as blocker, as this issue should be 
> finished before any 2.x and 1.4.x releases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17595) Add partial result support for small/limited scan

2017-03-22 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17595:
--
Attachment: HBASE-17595-addendum.patch

> Add partial result support for small/limited scan
> -
>
> Key: HBASE-17595
> URL: https://issues.apache.org/jira/browse/HBASE-17595
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, scan
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17595-addendum.patch, HBASE-17595-branch-1.patch, 
> HBASE-17595.patch, HBASE-17595-v1.patch
>
>
> Partial result support was marked as a 'TODO' when implementing 
> HBASE-17045, and when implementing HBASE-17508 we found that if we make 
> small scan share the same logic as general scan, scan requests other 
> than the open-scanner request will not carry the small flag, so the server 
> may return a partial result to the client and cause strange behavior. It is 
> solved by modifying the logic at the server side, but this means the 1.4.x 
> client is not safe to contact earlier 1.x servers, so we'd better address 
> the problem at the client side. Marked as blocker, as this issue should be 
> finished before any 2.x and 1.4.x releases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17669) Implement async mergeRegion/splitRegion methods.

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936269#comment-15936269
 ] 

Hadoop QA commented on HBASE-17669:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 37s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 144m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859905/HBASE-17669.v5.patch |
| JIRA Issue | HBASE-17669 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 0b31ff471ef6 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9410709 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6190/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6190/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6190/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |

[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Attachment: HBASE-17623.v3.patch

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, HBASE-17623.v3.patch, memory 
> allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a byte array that can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new byte array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}
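
For illustration, a minimal sketch of the reuse pattern that improvement #1
describes; the helper is invented, not the actual HFileBlock code:

{code}
// Instead of ByteArrayOutputStream#toByteArray() allocating a fresh copy per
// block, keep one growable array and track the valid length, growing only
// when a block exceeds the current capacity.
final class ReusableBuffer {
  private byte[] buf = new byte[64 * 1024];
  private int len;

  byte[] ensureCapacity(int needed) {
    if (buf.length < needed) {
      // Double to amortize growth across many blocks.
      buf = new byte[Math.max(needed, buf.length * 2)];
    }
    len = needed;
    return buf;
  }

  int length() { return len; }
}
{code}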



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, HBASE-17623.v3.patch, memory 
> allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a byte array that can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new byte array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936160#comment-15936160
 ] 

Chia-Ping Tsai commented on HBASE-17623:


[~anoop.hbase]
I will commit it if QA doesn't complain.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a byte array that can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new byte array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-22 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Open  (was: Patch Available)

retry v3

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a byte array that can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new byte array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-16318) fail build if license isn't in whitelist

2017-03-22 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936114#comment-15936114
 ] 

Reid Chan edited comment on HBASE-16318 at 3/22/17 11:06 AM:
-

Still got this error:

"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".

when I tried to compile the source code with -Dhadoop-two.version=2.6.0.
I am in a Java 8 environment; does it matter?



was (Author: reidchan):
Still got this error:

"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".

when i tried to compile source code with -Dhadoop-two.version=2.6.0.


> fail build if license isn't in whitelist
> 
>
> Key: HBASE-16318
> URL: https://issues.apache.org/jira/browse/HBASE-16318
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3, 0.98.22
>
> Attachments: 16318-0.98-addendum2.txt, HBASE-16318.0.patch, 
> HBASE-16318.1.patch, HBASE-16318.2.patch, HBASE-16318.3.patch, 
> HBASE-16318.v3addendum.0.98.patch
>
>
> We use supplemental-models.xml to make sure we have consistent names and 
> descriptions for licenses. We also know what licenses we expect to see in our 
> build. If we see a different one:
> # fail the velocity template process
> # if possible, include some information about why this happened



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16318) fail build if license isn't in whitelist

2017-03-22 Thread Reid Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15936114#comment-15936114
 ] 

Reid Chan commented on HBASE-16318:
---

Still got this error:

"Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hbase-assembly: Error rendering velocity resource. Error invoking 
method 'get(java.lang.Integer)' in java.util.ArrayList at 
META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 0, 
Size: 0 -> [Help 1]".

when I tried to compile the source code with -Dhadoop-two.version=2.6.0.


> fail build if license isn't in whitelist
> 
>
> Key: HBASE-16318
> URL: https://issues.apache.org/jira/browse/HBASE-16318
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, dependencies
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3, 0.98.22
>
> Attachments: 16318-0.98-addendum2.txt, HBASE-16318.0.patch, 
> HBASE-16318.1.patch, HBASE-16318.2.patch, HBASE-16318.3.patch, 
> HBASE-16318.v3addendum.0.98.patch
>
>
> We use supplemental-models.xml to make sure we have consistent names and 
> descriptions for licenses. We also know what licenses we expect to see in our 
> build. If we see a different one:
> # fail the velocity template process
> # if possible, include some information about why this happened



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables

2017-03-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935798#comment-15935798
 ] 

Hadoop QA commented on HBASE-14141:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hbase-server generated 5 new + 0 unchanged - 0 fixed = 5 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 101m 15s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 140m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859849/HBASE-14141.v2.patch |
| JIRA Issue | HBASE-14141 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2c9309046acf 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9410709 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6188/artifact/patchprocess/diff-javadoc-javadoc-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6188/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6188/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits 
> from backup tables
>