[jira] [Updated] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17836:
---
Status: Patch Available  (was: Open)

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch, HBASE-17836.v1.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is a ByteBufferCell, 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using KeyValueUtil#getSerializedSize instead.
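To illustrate the cost difference, here is a minimal, self-contained sketch of a KeyValue-style byte layout (illustrative only, not the real org.apache.hadoop.hbase API): reading the qualifier length requires chaining through the key-length, row-length, and family-length fields, while the total serialized size needs only the two fixed-position length fields, which is the essence of the KeyValueUtil#getSerializedSize shortcut.

```java
import java.nio.ByteBuffer;

// Minimal sketch of a KeyValue-style layout (illustrative only):
//   [keyLen:4][valLen:4][rowLen:2][row][famLen:1][family][qualifier][ts:8][type:1][value]
public class KeyValueSizeSketch {

    // Slow path: finding the qualifier length means parsing several other
    // length fields first, chaining offsets through the backing buffer.
    static int qualifierLength(ByteBuffer buf) {
        int keyLen = buf.getInt(0);
        short rowLen = buf.getShort(8);
        byte famLen = buf.get(8 + 2 + rowLen);
        // key = rowLen field + row + famLen field + family + qualifier + ts + type
        return keyLen - 2 - rowLen - 1 - famLen - 8 - 1;
    }

    // Fast path: the total serialized size only needs the two fixed-position
    // length fields at offsets 0 and 4.
    static int serializedSize(ByteBuffer buf) {
        return 4 + 4 + buf.getInt(0) + buf.getInt(4);
    }

    // Build a sample cell with one-byte row, family, qualifier, and value.
    static ByteBuffer sample() {
        ByteBuffer buf = ByteBuffer.allocate(24);
        buf.putInt(15);          // key length: 2 + 1 + 1 + 1 + 1 + 8 + 1
        buf.putInt(1);           // value length
        buf.putShort((short) 1); // row length
        buf.put((byte) 'r');     // row
        buf.put((byte) 1);       // family length
        buf.put((byte) 'f');     // family
        buf.put((byte) 'q');     // qualifier
        buf.putLong(0L);         // timestamp
        buf.put((byte) 4);       // type (Put)
        buf.put((byte) 'v');     // value
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = sample();
        System.out.println(qualifierLength(buf)); // 1
        System.out.println(serializedSize(buf));  // 24
    }
}
```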



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17836:
---
Attachment: HBASE-17836.v1.patch

v1 addresses [~anoop.hbase]'s comment and adds a trivial change to test the 
hbase-server.

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch, HBASE-17836.v1.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is a ByteBufferCell, 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using KeyValueUtil#getSerializedSize instead.





[jira] [Updated] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17836:
---
Status: Open  (was: Patch Available)

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is a ByteBufferCell, 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using KeyValueUtil#getSerializedSize instead.





[jira] [Updated] (HBASE-17859) ByteBufferUtils#compareTo is wrong

2017-03-31 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17859:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review, [~yuzhih...@gmail.com], [~anoop.hbase], [~ram_krish], 
and [~stack].

> ByteBufferUtils#compareTo is wrong
> --
>
> Key: HBASE-17859
> URL: https://issues.apache.org/jira/browse/HBASE-17859
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17859.v0.patch, HBASE-17859.v1.patch
>
>
> buf2.get(i) & 0xFF -> buf2.get(j) & 0xFF
> {noformat}
>   public static int compareTo(byte[] buf1, int o1, int l1, ByteBuffer buf2, int o2, int l2) {
>     int end1 = o1 + l1;
>     int end2 = o2 + l2;
>     for (int i = o1, j = o2; i < end1 && j < end2; i++, j++) {
>       int a = buf1[i] & 0xFF;
>       int b = buf2.get(i) & 0xFF;
>       if (a != b) {
>         return a - b;
>       }
>     }
>     return l1 - l2;
>   }
> {noformat}
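For reference, applying the one-character fix described above (index buf2 with its own cursor j, not i) gives the following standalone version; this is a sketch based on the snippet in the report, not the committed patch:

```java
import java.nio.ByteBuffer;

public class ByteBufferCompareSketch {

    // Lexicographically compare a byte-array slice with a ByteBuffer slice.
    public static int compareTo(byte[] buf1, int o1, int l1,
                                ByteBuffer buf2, int o2, int l2) {
        int end1 = o1 + l1;
        int end2 = o2 + l2;
        for (int i = o1, j = o2; i < end1 && j < end2; i++, j++) {
            int a = buf1[i] & 0xFF;     // treat bytes as unsigned
            int b = buf2.get(j) & 0xFF; // was buf2.get(i): wrong when o1 != o2
            if (a != b) {
                return a - b;
            }
        }
        // Equal prefix: the shorter slice sorts first.
        return l1 - l2;
    }

    public static void main(String[] args) {
        byte[] left = {1, 2, 3};
        ByteBuffer right = ByteBuffer.wrap(new byte[]{9, 9, 1, 2, 3});
        // Same bytes at different offsets compare equal only with the fix.
        System.out.println(compareTo(left, 0, 3, right, 2, 3)); // 0
    }
}
```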





[jira] [Commented] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952048#comment-15952048
 ] 

Hadoop QA commented on HBASE-17854:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 109m 42s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 150m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  org.apache.hadoop.hbase.master.cleaner.HFileCleaner$HFileDeleteTask 
defines compareTo(HFileCleaner$HFileDeleteTask) and uses Object.equals()  At 
HFileCleaner.java:Object.equals()  At HFileCleaner.java:[lines 310-313] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861204/HBASE-17854.patch |
| JIRA Issue | HBASE-17854 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux dcebdd991a67 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9facfa5 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6292/artifact/patchprocess/new-findbugs-hbase-server.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6292/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6292/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use StealJobQueue in 

[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952042#comment-15952042
 ] 

Jerry He commented on HBASE-17861:
--

We can just use these three: s3, wasb and swift, as [~zyork] suggested, and 
ignore case.

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861.branch-1.V1.patch, HBASE-17861-V1.patch
>
>
> Found some issues when setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when hbase enables securebulkload, hbase will create a 
> folder in s3 but cannot set the above permission, because in s3 all files 
> are listed as having full read/write permissions and all directories appear 
> to have full rwx permissions. See "Object stores have different 
> authorization models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html
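A scheme check along the lines discussed in the comment above could look like the following; the names isObjectStore and OBJECT_STORE_SCHEMES are hypothetical, for illustration only, and not the actual patch:

```java
import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class ObjectStoreCheckSketch {

    // Hypothetical list, per the comment above: s3, wasb, swift (the prefix
    // match also covers variants such as s3a, s3n, and wasbs).
    private static final List<String> OBJECT_STORE_SCHEMES =
            Arrays.asList("s3", "wasb", "swift");

    // Object stores report synthetic full permissions, so a strict check
    // like the '-rwx--x--x' one in the error above should be skipped.
    static boolean isObjectStore(URI fsUri) {
        String scheme = fsUri.getScheme();
        if (scheme == null) {
            return false;
        }
        String lower = scheme.toLowerCase();
        return OBJECT_STORE_SCHEMES.stream().anyMatch(lower::startsWith);
    }

    public static void main(String[] args) {
        System.out.println(isObjectStore(URI.create("s3a://bucket/staging"))); // true
        System.out.println(isObjectStore(URI.create("hdfs://nn/staging")));    // false
    }
}
```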





[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-03-31 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952026#comment-15952026
 ] 

Thiruvel Thirumoolan commented on HBASE-16942:
--

Unit test failures unrelated to patch.

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, 
> HBASE-16942.master.010.patch, HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer based enhancements to favored nodes patch as 
> discussed in HBASE-15532.





[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2017-03-31 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15952025#comment-15952025
 ] 

Jerry He commented on HBASE-16179:
--

bq. I'd be up for us changing gears a bit here. We could focus here just on 
what has to be done for correctness and do things that would be nice in 
follow-ons
Agree.

bq.  I do agree that I'd like Spark 2 support for HBase 2.0.0.
Agree too.

bq. Just make sure that however those classpaths are formed we don't include 
the various sparkXscala specific jars.
I'm confused here. Is there any reason why we don't want one to be in the 
classpath of the server? I think there is a custom filter in the jar that 
needs to be on the server.


> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16179.v0.txt, 16179.v10.txt, 16179.v11.txt, 
> 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 16179.v15.txt, 
> 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 16179.v1.txt, 
> 16179.v1.txt, 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 
> 16179.v25.txt, 16179.v26.txt, 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 
> 16179.v8.txt, 16179.v9.txt
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951999#comment-15951999
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #136 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/136/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
ea3907da7ba2fd8e9b86e0aee4c2198c8611faf2)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink and replication gets backed up for any sources that try that 
> sink
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side and 
> replication was stuck again.  
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException?  I can understand TableNotFound we don't have to choose new 
> sinks, but for all other cases this seems like the safest approach.  
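The policy proposed above can be sketched as a small predicate; the nested class below is an illustrative stand-in for Hadoop's RemoteException (which carries the class name of the exception thrown on the sink side), not the actual HBaseInterClusterReplicationEndpoint change:

```java
import java.net.ConnectException;

public class SinkPolicySketch {

    // Stand-in for org.apache.hadoop.ipc.RemoteException, which wraps the
    // class name of the exception thrown on the remote (sink) side.
    static class RemoteException extends Exception {
        final String remoteClassName;
        RemoteException(String remoteClassName) {
            this.remoteClassName = remoteClassName;
        }
    }

    // Old behavior: only ConnectException triggered re-picking sinks.
    // Proposed: any RemoteException does too, except TableNotFound, which
    // signals a peer-side schema problem rather than an unhealthy sink.
    static boolean shouldChooseNewSinks(Exception cause) {
        if (cause instanceof ConnectException) {
            return true;
        }
        if (cause instanceof RemoteException) {
            String name = ((RemoteException) cause).remoteClassName;
            return !"org.apache.hadoop.hbase.TableNotFoundException".equals(name);
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldChooseNewSinks(new ConnectException())); // true
        System.out.println(shouldChooseNewSinks(
                new RemoteException("org.apache.hadoop.hbase.TableNotFoundException"))); // false
    }
}
```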





[jira] [Commented] (HBASE-15176) maven archetype: add end-user-oriented documentation to book.html

2017-03-31 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951982#comment-15951982
 ] 

Daniel Vimont commented on HBASE-15176:
---

Completion of the task outlined here awaits resolution of this issue: 
HBASE-17598

Once that issue is dealt with (which would make the archetypes publicly 
available via the Maven Central Repository), we can add some straightforward 
documentation to the HBase Reference Guide. A good model to follow might be the 
"quickstart" guide in the Apache Beam project, which makes excellent use of an 
archetype to get Beam newbies up and running in a matter of minutes: 
https://beam.apache.org/get-started/quickstart-java/

> maven archetype: add end-user-oriented documentation to book.html
> -
>
> Key: HBASE-15176
> URL: https://issues.apache.org/jira/browse/HBASE-15176
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>
> [~misty] advises (from HBASE-14877):
> Please add a section to http://hbase.apache.org/book.html#hbase_apis, before 
> the Examples section, with a brief (one-paragraph?) explanation of Maven 
> archetypes and how the HBase archetypes can help developers get started 
> quickly... The source for that chapter is in 
> src/main/asciidoc/_chapters/hbase_apis.adoc.





[jira] [Commented] (HBASE-17857) Remove IS annotations from IA.Public classes

2017-03-31 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951965#comment-15951965
 ] 

Duo Zhang commented on HBASE-17857:
---

[~jerryhe] Thanks for the reminder. Will take a look.

> Remove IS annotations from IA.Public classes
> 
>
> Key: HBASE-17857
> URL: https://issues.apache.org/jira/browse/HBASE-17857
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17857.patch, HBASE-17857-v1.patch
>
>






[jira] [Updated] (HBASE-17854) Use StealJobQueue in HFileCleaner after HBASE-17215

2017-03-31 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-17854:
--
Status: Patch Available  (was: Open)

Back to this one since HBASE-17215 is closed. Submitting for HadoopQA to check.

> Use StealJobQueue in HFileCleaner after HBASE-17215
> ---
>
> Key: HBASE-17854
> URL: https://issues.apache.org/jira/browse/HBASE-17854
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-17854.patch
>
>
> In HBASE-17215 we use specific threads for deleting large/small (archived) 
> hfiles, and will improve it in the following aspects in this JIRA:
> 1. Using {{StealJobQueue}} to allow the large-file deletion thread to steal 
> jobs from the small queue, based on the experience that in the real world 
> there will be many more small hfiles
> 2. {{StealJobQueue}} is a kind of {{PriorityQueue}}, so we could also delete 
> starting from the larger files in the queues.
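The steal-queue idea in point 1 can be sketched as follows; this is a conceptual toy, not HBase's actual StealJobQueue implementation:

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

// Conceptual sketch: two priority queues ordered biggest-file-first; the
// large-file thread drains its own queue and, when that is empty, steals
// pending work from the small-file queue so it never sits idle while small
// hfiles pile up.
public class StealQueueSketch {

    final Queue<Long> largeFiles = new PriorityQueue<>(Comparator.reverseOrder());
    final Queue<Long> smallFiles = new PriorityQueue<>(Comparator.reverseOrder());

    // Called by the large-file deletion thread.
    Long takeForLargeThread() {
        Long task = largeFiles.poll();
        return task != null ? task : smallFiles.poll(); // steal when idle
    }

    public static void main(String[] args) {
        StealQueueSketch q = new StealQueueSketch();
        q.largeFiles.add(100L);
        q.smallFiles.add(5L);
        q.smallFiles.add(9L);
        System.out.println(q.takeForLargeThread()); // 100 (own queue first)
        System.out.println(q.takeForLargeThread()); // 9 (stolen, biggest first)
        System.out.println(q.takeForLargeThread()); // 5
    }
}
```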





[jira] [Commented] (HBASE-17857) Remove IS annotations from IA.Public classes

2017-03-31 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951959#comment-15951959
 ] 

Jerry He commented on HBASE-17857:
--

There are a few scala source files in hbase-spark you may want to include in 
your script.

> Remove IS annotations from IA.Public classes
> 
>
> Key: HBASE-17857
> URL: https://issues.apache.org/jira/browse/HBASE-17857
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17857.patch, HBASE-17857-v1.patch
>
>






[jira] [Updated] (HBASE-17215) Separate small/large file delete threads in HFileCleaner to accelerate archived hfile cleanup speed

2017-03-31 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-17215:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed into master branch and closing issue. Thanks all for review.

> Separate small/large file delete threads in HFileCleaner to accelerate 
> archived hfile cleanup speed
> ---
>
> Key: HBASE-17215
> URL: https://issues.apache.org/jira/browse/HBASE-17215
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-17215.patch, HBASE-17215.v2.patch, 
> HBASE-17215.v3.patch
>
>
> When using PCIe-SSD the flush speed will be really quick, and although we 
> have per-CF flush, we still have the 
> {{hbase.regionserver.optionalcacheflushinterval}} setting and some other 
> mechanisms that flush small hfiles to avoid keeping data in memory for too 
> long. In our online environment we found the single-thread cleaner kept 
> cleaning earlier-flushed small files while large files got no chance, which 
> caused the disk to fill up and then many other problems.
> Deleting hfiles in parallel with too many threads will also increase the 
> workload of the namenode, so here we propose to separate large/small hfile 
> cleaner threads just like we do for compaction, and it turned out to work 
> well in our cluster.





[jira] [Updated] (HBASE-17215) Separate small/large file delete threads in HFileCleaner to accelerate archived hfile cleanup speed

2017-03-31 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-17215:
--
Hadoop Flags: Reviewed
Release Note: After HBASE-17215 we changed to use two threads for (archived) 
hfile cleaning. The size threshold between large/small files can be set through 
"hbase.regionserver.thread.hfilecleaner.throttle" and defaults to 67108864 
(64MB). It supports online configuration change: find the active master 
address through the zookeeper dump and use it in the update_config command, e.g. 
update_config 'hbasem1.et2.tbsite.net,60100,1488038696741'

> Separate small/large file delete threads in HFileCleaner to accelerate 
> archived hfile cleanup speed
> ---
>
> Key: HBASE-17215
> URL: https://issues.apache.org/jira/browse/HBASE-17215
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-17215.patch, HBASE-17215.v2.patch, 
> HBASE-17215.v3.patch
>
>
> When using PCIe-SSD the flush speed will be really quick, and although we 
> have per-CF flush, we still have the 
> {{hbase.regionserver.optionalcacheflushinterval}} setting and some other 
> mechanisms that flush small hfiles to avoid keeping data in memory for too 
> long. In our online environment we found the single-thread cleaner kept 
> cleaning earlier-flushed small files while large files got no chance, which 
> caused the disk to fill up and then many other problems.
> Deleting hfiles in parallel with too many threads will also increase the 
> workload of the namenode, so here we propose to separate large/small hfile 
> cleaner threads just like we do for compaction, and it turned out to work 
> well in our cluster.





[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951947#comment-15951947
 ] 

Hadoop QA commented on HBASE-16942:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 141m 2s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
33s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 187m 8s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestBlockEvictionFromClient |
|   | hadoop.hbase.quotas.TestQuotaThrottle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.10.1 Server=1.10.1 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861541/HBASE-16942.master.010.patch
 |
| JIRA Issue | HBASE-16942 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6d768a80104d 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 
24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1c4d9c8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6291/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6291/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6291/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6291/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add FavoredStochasticLoadBalancer and FN Candidate generators
> 

[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951944#comment-15951944
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #145 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/145/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
ea3907da7ba2fd8e9b86e0aee4c2198c8611faf2)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink and replication gets backed up for any sources that try that 
> sink
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side and 
> replication was stuck again.  
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException?  I can understand TableNotFound we don't have to choose new 
> sinks, but for all other cases this seems like the safest approach.  





[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951942#comment-15951942
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.4 #686 (See 
[https://builds.apache.org/job/HBase-1.4/686/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
19e4e4d49a332aa653e7fa7a8267d0bc14788709)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17668) Implement async assgin/offline/move/unassign methods

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951940#comment-15951940
 ] 

Hudson commented on HBASE-17668:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2777 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2777/])
HBASE-17668: Implement async assgin/offline/move/unassign methods (zhangduo: 
rev 5f98ad2053ddc31e0abc6863478db594e4447cf8)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminBase.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java


> Implement async assgin/offline/move/unassign methods
> 
>
> Key: HBASE-17668
> URL: https://issues.apache.org/jira/browse/HBASE-17668
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17668.v1.patch, HBASE-17668.v1.patch, 
> HBASE-17668.v2.patch, HBASE-17668.v3.patch, HBASE-17668.v4.patch
>
>
> Implement the following methods for the async admin client: 
> 1.  assign region; 
> 2.  unassign region; 
> 3.  offline region; 
> 4.  move region;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951939#comment-15951939
 ] 

Hudson commented on HBASE-17698:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2777 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2777/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
80381f39446bab131f5b1f227c98bad97545c4c8)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951934#comment-15951934
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #120 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/120/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
cd583383bcdae7e658ca2ca308d96a3be48956aa)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951935#comment-15951935
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK8 #116 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/116/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
cd583383bcdae7e658ca2ca308d96a3be48956aa)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17860:
---
Attachment: 17860.v3.txt

Patch v3 removes hardcoding of KERBEROS_AUTH_TYPE by passing Conf to RpcSerde.

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 17860.v2.txt, 17860.v3.txt
>
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on the earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.
> Here is a high-level description of the design:
> * SaslHandler is declared as:
> {code}
> class SaslHandler
> : public wangle::HandlerAdapter std::unique_ptr>{
> {code}
> It would be inserted between EventBaseHandler and 
> LengthFieldBasedFrameDecoder in the pipeline (via 
> ConnectionFactory::Connect())
> * SaslHandler would intercept writes to the server by buffering the IOBufs 
> and start the handshake process (via sasl_client_XX calls provided by Cyrus)
> * after the handshake is complete, SaslHandler would send the buffered IOBufs 
> to the server and act as pass-thru from then on



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951917#comment-15951917
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1940 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1940/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
575a8c909be5a3e032a8fc757f7224b234ce7307)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951916#comment-15951916
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1856 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1856/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
575a8c909be5a3e032a8fc757f7224b234ce7307)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17862) Condition that always returns true

2017-03-31 Thread JC (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951903#comment-15951903
 ] 

JC edited comment on HBASE-17862 at 4/1/17 1:31 AM:


Thanks for the comment. I've created a pull request: 
https://github.com/apache/hbase/pull/47

Thanks!


was (Author: lifove):
Thanks for the comment. I've create a pull request: 
https://github.com/apache/hbase/pull/47

Thanks!

> Condition that always returns true
> --
>
> Key: HBASE-17862
> URL: https://issues.apache.org/jira/browse/HBASE-17862
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: JC
>Priority: Trivial
>
> Hi
> In a recent GitHub mirror of HBase, I've found the following code smell.
> Path: 
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
> {code}
> 209 
> 210 ColumnPaginationFilter other = (ColumnPaginationFilter)o;
> 211 if (this.columnOffset != null) {
> 212   return this.getLimit() == this.getLimit() &&
> 213   Bytes.equals(this.getColumnOffset(), other.getColumnOffset());
> 214 }
> {code}
> Should it be:
> {code}
> 212   return this.getLimit() == other.getLimit() &&
> {code}
> This might be just a code smell, as Bytes.equals can be enough for the return 
> value, but I wanted to report it just in case.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17862) Condition that always returns true

2017-03-31 Thread JC (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951903#comment-15951903
 ] 

JC commented on HBASE-17862:


Thanks for the comment. I've created a pull request: 
https://github.com/apache/hbase/pull/47

Thanks!

> Condition that always returns true
> --
>
> Key: HBASE-17862
> URL: https://issues.apache.org/jira/browse/HBASE-17862
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: JC
>Priority: Trivial
>
> Hi
> In a recent GitHub mirror of HBase, I've found the following code smell.
> Path: 
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
> {code}
> 209 
> 210 ColumnPaginationFilter other = (ColumnPaginationFilter)o;
> 211 if (this.columnOffset != null) {
> 212   return this.getLimit() == this.getLimit() &&
> 213   Bytes.equals(this.getColumnOffset(), other.getColumnOffset());
> 214 }
> {code}
> Should it be:
> {code}
> 212   return this.getLimit() == other.getLimit() &&
> {code}
> This might be just a code smell, as Bytes.equals can be enough for the return 
> value, but I wanted to report it just in case.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17576) [C++] Implement request retry mechanism over RPC for Multi calls.

2017-03-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951894#comment-15951894
 ] 

Enis Soztutar commented on HBASE-17576:
---

- Here, ActionsByServer is already a hash map, but you are iterating through 
the keys to find the element. You should construct the map with the equals and 
hash methods, no? 
{code}
for (auto itr = actions_by_server.begin(); itr != actions_by_server.end(); 
++itr) {
+  ServerNameEquals server_name_equals;
+  if 
(server_name_equals(std::make_shared<ServerName>(region_loc->server_name()),
+ itr->first)) {
{code}
You can take a look at location-cache.h for examples. 
- This will be replaced by the caller-level mutex? 
{code}
+  std::mutex action2errors_lock_;
{code}
- In raw-async-table.cc, you may need to do the same trick we do in Get(Get &) 
method:
{code}
  // Return the Future we obtain from the call(). However, we do not want the 
Caller to go out of
  // context and get deallocated since the caller injects a lot of closures 
which capture [this, &]
  // which is use-after-free. We are just passing an identity closure capturing 
caller by value to
  // ensure  that the lifecycle of the Caller object is longer than the retry 
lambdas.
  return caller->Call().then([caller](const auto r) { return r; });
{code} 
This is to make sure that the Caller object is not de-allocated when you return 
the Future from this method.
- For this comment:
{code}
+  // TODO we need to optimize this. No need to call ToMultiRequest twice;
+  // Last patch we were passing multi_req by reference to ToMultiRequest()
+  // It is failing sometimes so doing it this way for now
{code}
it is happening because you have a unique_ptr here whose ownership you are 
giving to the rpc-client when you call AsyncCall() with std::move(). Then you 
need to use it again in the ResponseConverter, but it is no longer there. Two 
possible solutions:
 -- you can change the whole rpc path end-to-end to be based on shared_ptr 
rather than unique_ptr 
 -- you can move the Request to be inside the Response when you are done at the 
RPC layer so that the ResponseConverter can access the Request object from 
there. To me this option sounds easier. The other one is best done in a 
different jira. 

> [C++] Implement request retry mechanism over RPC for Multi calls.
> -
>
> Key: HBASE-17576
> URL: https://issues.apache.org/jira/browse/HBASE-17576
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-17576.HBASE-14850.v1.patch, 
> HBASE-17576.HBASE-14850.v2.patch, HBASE-17576.HBASE-14850.v3.patch, 
> HBASE-17576.HBASE-14850.v4.patch, HBASE-17576.HBASE-14850.v5.patch, 
> HBASE-17576.HBASE-14850.v6.patch, HBASE-17576.HBASE-14850.v7.patch
>
>
> This work is based on top of HBASE-17465. Multi Calls will be based on this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17860:
---
Attachment: 17860.v2.txt

Patch v2 retrieves the principal by calling the krb5 API.
If the user hasn't done kinit (empty ccache), the login user would be used. 
However, calling the Cyrus library would fault out.

I am open to suggestions on where the unit test should be placed. Currently I 
latch onto ClientTest.PutGet to show that both put and get can succeed.

Some of the files, such as conf/hbase-site.xml, contain changes that enable my 
testing on a docker VM. They would be dropped before the commit.
hbase/23a039358...@example.com corresponds to the principal I generated for the 
server to run (23a03935850c being the hostname of the docker VM).

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Attachments: 17860.v2.txt
>
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on the earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.
> Here is a high-level description of the design:
> * SaslHandler is declared as:
> {code}
> class SaslHandler
> : public wangle::HandlerAdapter std::unique_ptr>{
> {code}
> It would be inserted between EventBaseHandler and 
> LengthFieldBasedFrameDecoder in the pipeline (via 
> ConnectionFactory::Connect())
> * SaslHandler would intercept writes to the server by buffering the IOBufs 
> and start the handshake process (via sasl_client_XX calls provided by Cyrus)
> * after the handshake is complete, SaslHandler would send the buffered IOBufs 
> to the server and act as pass-thru from then on



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951884#comment-15951884
 ] 

Hadoop QA commented on HBASE-17861:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 57s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
53s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 49s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 27s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 125m 6s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestReplicasClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861535/HBASE-17861.branch-1.V1.patch
 |
| JIRA Issue | HBASE-17861 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 93b9d3eddaec 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HBASE-17862) Condition that always returns true

2017-03-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951883#comment-15951883
 ] 

Sean Busbey commented on HBASE-17862:
-

looks like a legit bug. Are you interested in putting together a patch to fix 
it?

> Condition that always returns true
> --
>
> Key: HBASE-17862
> URL: https://issues.apache.org/jira/browse/HBASE-17862
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: JC
>Priority: Trivial
>
> Hi
> In a recent GitHub mirror of HBase, I've found the following code smell.
> Path: 
> hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
> {code}
> 209 
> 210 ColumnPaginationFilter other = (ColumnPaginationFilter)o;
> 211 if (this.columnOffset != null) {
> 212   return this.getLimit() == this.getLimit() &&
> 213   Bytes.equals(this.getColumnOffset(), other.getColumnOffset());
> 214 }
> {code}
> Should it be:
> {code}
> 212   return this.getLimit() == other.getLimit() &&
> {code}
> This might be just a code smell, as Bytes.equals can be enough for the return 
> value, but I wanted to report it just in case.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17857) Remove IS annotations from IA.Public classes

2017-03-31 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951859#comment-15951859
 ] 

Duo Zhang commented on HBASE-17857:
---

Thanks boss [~stack]. Let me reply on the mailing list about the progress to 
see if there are objections.

And also, we need to open another issue to track the IS annotations for 
IA.LimitedPrivate classes. There are several classes declared as 
IA.LimitedPrivate that do not have IS annotations. Yeah, this is less critical 
than the IA.Public API, so we should not let it block the main issue.

Thanks.

> Remove IS annotations from IA.Public classes
> 
>
> Key: HBASE-17857
> URL: https://issues.apache.org/jira/browse/HBASE-17857
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17857.patch, HBASE-17857-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17668) Implement async assgin/offline/move/unassign methods

2017-03-31 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17668:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master.

Thanks [~syuanjiang] for reviewing.
Thanks [~openinx] for contributing.

> Implement async assgin/offline/move/unassign methods
> 
>
> Key: HBASE-17668
> URL: https://issues.apache.org/jira/browse/HBASE-17668
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17668.v1.patch, HBASE-17668.v1.patch, 
> HBASE-17668.v2.patch, HBASE-17668.v3.patch, HBASE-17668.v4.patch
>
>
> Implement the following methods for the async admin client: 
> 1.  assign region; 
> 2.  unassign region; 
> 3.  offline region; 
> 4.  move region;



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17816) HRegion#mutateRowWithLocks should update writeRequestCount metric

2017-03-31 Thread Weizhan Zeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951856#comment-15951856
 ] 

Weizhan Zeng commented on HBASE-17816:
--

[~busbey] [~jerryhe] [~chia7712] [~ashu210890] thanks guys !

> HRegion#mutateRowWithLocks should update writeRequestCount metric
> -
>
> Key: HBASE-17816
> URL: https://issues.apache.org/jira/browse/HBASE-17816
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Ashu Pachauri
>Assignee: Weizhan Zeng
> Attachments: HBASE-17816.master.001.patch, 
> HBASE-17816.master.002.patch
>
>
> Currently, all the calls that use HRegion#mutateRowWithLocks miss the 
> writeRequestCount metric. The mutateRowWithLocks base method should update 
> the metric.
> Examples are checkAndMutate calls through RSRpcServices#multi, the 
> Region#mutateRow API, and the MultiRowMutationProcessor coprocessor endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951855#comment-15951855
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #854 (See 
[https://builds.apache.org/job/HBase-1.2-IT/854/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
cd583383bcdae7e658ca2ca308d96a3be48956aa)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17668) Implement async assgin/offline/move/unassign methods

2017-03-31 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951852#comment-15951852
 ] 

Duo Zhang commented on HBASE-17668:
---

+1. Will commit soon.

> Implement async assgin/offline/move/unassign methods
> 
>
> Key: HBASE-17668
> URL: https://issues.apache.org/jira/browse/HBASE-17668
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17668.v1.patch, HBASE-17668.v1.patch, 
> HBASE-17668.v2.patch, HBASE-17668.v3.patch, HBASE-17668.v4.patch
>
>
> Implement the following methods for the async admin client: 
> 1.  assign region; 
> 2.  unassign region; 
> 3.  offline region; 
> 4.  move region;
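The shape of these calls could look like the sketch below. The interface and method signatures are modeled on the issue text, not on the real AsyncAdmin API; the recording implementation is a toy stand-in for the client.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical shape of the async admin calls: each operation returns a
// CompletableFuture instead of blocking the caller.
interface AsyncRegionAdmin {
    CompletableFuture<Void> assign(byte[] regionName);
    CompletableFuture<Void> unassign(byte[] regionName);
    CompletableFuture<Void> offline(byte[] regionName);
    CompletableFuture<Void> move(byte[] regionName, byte[] destServerName);
}

// Toy in-memory implementation that records the operations it receives.
class RecordingRegionAdmin implements AsyncRegionAdmin {
    final List<String> ops = new ArrayList<>();

    private CompletableFuture<Void> record(String op) {
        ops.add(op);
        return CompletableFuture.completedFuture(null);
    }

    public CompletableFuture<Void> assign(byte[] r) { return record("assign"); }
    public CompletableFuture<Void> unassign(byte[] r) { return record("unassign"); }
    public CompletableFuture<Void> offline(byte[] r) { return record("offline"); }
    public CompletableFuture<Void> move(byte[] r, byte[] d) { return record("move"); }
}
```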





[jira] [Commented] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951844#comment-15951844
 ] 

Hudson commented on HBASE-16780:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2776 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2776/])
HBASE-16780 Since move to protobuf3.1, Cells are limited to 64MB where (stack: 
rev 7700a7fac1262934fe538a96b040793c6ff171ce)
* (edit) hbase-server/pom.xml
* (edit) hbase-protocol-shaded/pom.xml


> Since move to protobuf3.1, Cells are limited to 64MB where previous they had 
> no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16780.master.001.patch, 
> HBASE-16780.master.002.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping the size we write, 
> and he found that we are now bound at 64MB. Digging in, yeah, there is a check 
> in place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324, but making an issue here in the 
> meantime in case we have to note a change in behavior in hbase-2.0.0





[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951840#comment-15951840
 ] 

Hudson commented on HBASE-17698:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #18 (See 
[https://builds.apache.org/job/HBase-1.3-IT/18/])
HBASE-17698 ReplicationEndpoint choosing sinks (apurtell: rev 
ea3907da7ba2fd8e9b86e0aee4c2198c8611faf2)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException?  I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.





[jira] [Updated] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17698:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.2.6
   1.1.9
   1.3.1
   1.4.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.1 and up

> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Fix For: 2.0.0, 1.4.0, 1.3.1, 1.1.9, 1.2.6
>
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there is a problem contacting a 
> particular sink, and replication gets backed up for any sources that try that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks anytime we have a 
> RemoteException?  I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.





[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-03-31 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951792#comment-15951792
 ] 

Thiruvel Thirumoolan commented on HBASE-16942:
--

Rebased and uploaded the patch; the unit tests ran fine on my laptop. 
https://reviews.apache.org/r/54724/diff/6-8/ - changes since the patch was last 
approved.

Patch pending review from [~toffer]. If the precommit build raises any new 
issues, I will address them and upload.

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, 
> HBASE-16942.master.010.patch, HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer based enhancements to favored nodes patch as 
> discussed in HBASE-15532.





[jira] [Commented] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951789#comment-15951789
 ] 

Hadoop QA commented on HBASE-17863:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 23s {color} 
| {color:red} hbase-procedure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.procedure2.TestProcedureRecovery |
|   | hadoop.hbase.procedure2.TestStateMachineProcedure |
|   | hadoop.hbase.procedure2.TestProcedureEvents |
| Timed out junit tests | 
org.apache.hadoop.hbase.procedure2.TestYieldProcedures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.10.1 Server=1.10.1 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861525/HBASE-17863.v1.patch |
| JIRA Issue | HBASE-17863 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a275d987b50c 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 
24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1c4d9c8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6289/artifact/patchprocess/patch-unit-hbase-procedure.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6289/artifact/patchprocess/patch-unit-hbase-procedure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6289/testReport/ |
| modules | C: hbase-procedure U: hbase-procedure |

[jira] [Updated] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-03-31 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16942:
-
Attachment: HBASE-16942.master.010.patch

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, 
> HBASE-16942.master.010.patch, HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer based enhancements to favored nodes patch as 
> discussed in HBASE-15532.





[jira] [Updated] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-03-31 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16942:
-
Attachment: HBASE-16942.master.009.patch

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE-16942.master.009.patch, 
> HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer based enhancements to favored nodes patch as 
> discussed in HBASE-15532.





[jira] [Updated] (HBASE-17844) Subset of HBASE-14614, Procedure v2: Core Assignment Manager (non-critical related changes)

2017-03-31 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-17844:
---
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-14350

> Subset of HBASE-14614, Procedure v2: Core Assignment Manager (non-critical 
> related changes)
> ---
>
> Key: HBASE-17844
> URL: https://issues.apache.org/jira/browse/HBASE-17844
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-17844.master.001.patch, 
> HBASE-17844.master.002.patch, HBASE-17844.master.003.patch, 
> HBASE-17844.master.004.patch
>
>
> Here is a patch that breaks out non-pertinent changes that made it into 
> HBASE-14614 in an attempt at shrinking its overall size.





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951757#comment-15951757
 ] 

Zach York commented on HBASE-17861:
---

For example, it could be something like this:

instead of:
if (fs.getScheme().startsWith("hdfs"))
->
if (fs.getScheme().startsWith("s3"))

Also, are schemes always lowercase, or do you need to ensure the scheme is 
lowercase before the comparison?
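A hedged sketch of the check being suggested, with the case question handled by normalizing first (URI schemes are case-insensitive). The class and method names here are illustrative, not HBase's:

```java
import java.util.Locale;

// Skip the strict staging-directory permission check only for S3-style
// object stores (s3, s3a, s3n, ...), lower-casing the scheme before the
// comparison so mixed-case schemes are handled.
class SchemeCheck {
    static boolean skipPermissionCheck(String scheme) {
        if (scheme == null) {
            return false; // unknown scheme: keep the check
        }
        return scheme.toLowerCase(Locale.ROOT).startsWith("s3");
    }
}
```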

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861.branch-1.V1.patch, HBASE-17861-V1.patch
>
>
> Found some issues when setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a folder 
> in S3 but cannot set the above permission, because in S3 all files are listed 
> as having full read/write permissions and all directories appear to have full 
> rwx permissions. See "Object stores have different authorization models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951750#comment-15951750
 ] 

Zach York commented on HBASE-17861:
---

I'm not saying that you need to check every conceivable filesystem. I'm just 
saying perhaps the patch should skip the check if the scheme contains s3 (which 
would cover s3, s3a, s3n, etc.) rather than removing it for everything but hdfs 
(since there are likely other FS implementations besides hdfs that DO support 
rwx-style permissions).

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861.branch-1.V1.patch, HBASE-17861-V1.patch
>
>
> Found some issues when setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a folder 
> in S3 but cannot set the above permission, because in S3 all files are listed 
> as having full read/write permissions and all directories appear to have full 
> rwx permissions. See "Object stores have different authorization models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html





[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Attachment: HBASE-17861.branch-1.V1.patch

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861.branch-1.V1.patch, HBASE-17861-V1.patch
>
>
> Found some issues when setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a folder 
> in S3 but cannot set the above permission, because in S3 all files are listed 
> as having full read/write permissions and all directories appear to have full 
> rwx permissions. See "Object stores have different authorization models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951739#comment-15951739
 ] 

Yi Liang commented on HBASE-17861:
--

From Enis's comment in HBASE-11045:
[[HBase is supported (to the extent that corresponding vendors support it) on a 
couple of file systems other than HDFS: gpfs, maprfs, EMC Isilon, Microsoft 
WASB are the ones from the top of my head. You should contact the corresponding 
vendor if you want to learn more.]]

And I just checked Microsoft WASB; it seems it does not support permissions, see 
https://hadoop.apache.org/docs/stable/hadoop-azure/index.html. It says "File 
owner and group are persisted, but the permissions model is not enforced. 
Authorization occurs at the level of the entire Azure Blob Storage account."

I probably cannot cover all of the above filesystems.

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found some issues when setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a folder 
> in S3 but cannot set the above permission, because in S3 all files are listed 
> as having full read/write permissions and all directories appear to have full 
> rwx permissions. See "Object stores have different authorization models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html





[jira] [Updated] (HBASE-7590) Add a costless notifications mechanism from master to regionservers & clients

2017-03-31 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-7590:
---
Issue Type: New Feature  (was: Bug)

> Add a costless notifications mechanism from master to regionservers & clients
> -
>
> Key: HBASE-7590
> URL: https://issues.apache.org/jira/browse/HBASE-7590
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, master, regionserver
>Affects Versions: 0.95.2
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.95.0
>
> Attachments: 7590.inprogress.patch, 7590.v12.patch, 7590.v12.patch, 
> 7590.v13.patch, 7590.v1.patch, 7590.v1-rebased.patch, 7590.v2.patch, 
> 7590.v3.patch, 7590.v5.patch, 7590.v5.patch
>
>
> It would be very useful to add a mechanism to distribute some information to 
> the clients and regionservers. In particular, it would be useful to know 
> globally (regionservers + client apps) that some regionservers are dead. This 
> would allow:
> - lowering the load on the system, without clients using stale information 
> and going to dead machines
> - making recovery faster from a client point of view. It's common to use 
> large timeouts on the client side, so the client may need a lot of time 
> before declaring a region server dead and trying another one. If the client 
> receives separate information about a region server's state, it can take 
> the right decision and continue/stop waiting accordingly.
> We can also send more information, for example instructions like 'slow down' 
> to instruct the client to increase the retry delay and so on.
> Technically, the master could send this information. To lower the load on 
> the system, we should:
> - have multicast communication (i.e. the master does not have to connect to 
> all servers by tcp), with one packet every 10 seconds or so.
> - receivers should not depend on this: if the information is available, 
> great. If not, it should not break anything.
> - it should be optional.
> In the end, we would have a thread in the master sending a protobuf message 
> about the dead servers on a multicast socket. If the socket is not 
> configured, it does nothing. On the client side, when we receive information 
> that a node is dead, we refresh the cache for it.





[jira] [Updated] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2017-03-31 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-10375:

Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

> hbase-default.xml hbase.status.multicast.address.port does not match code
> -
>
> Key: HBASE-10375
> URL: https://issues.apache.org/jira/browse/HBASE-10375
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
>Reporter: Jonathan Hsieh
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 
> 10375.v2.96-98.patch, 10375.v2.trunk.patch
>
>
> In hbase-default.xml
> {code}
> +  
> +hbase.status.multicast.address.port
> +6100
> +
> +  Multicast port to use for the status publication by multicast.
> +
> +  
> {code}
> In HConstants it was 60100.
> {code}
>   public static final String STATUS_MULTICAST_PORT = 
> "hbase.status.multicast.port";
>   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
> {code}
> (it was 60100 in the code for 0.96 and 0.98.)
> I lean towards going with the code as opposed to the config file.
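Why the mismatch went unnoticed can be illustrated with the lookup pattern below. This uses plain java.util.Properties as a stand-in for Hadoop's Configuration; the constant values mirror the snippet above, and the behavior shown is the usual config-with-default lookup, not HBase's exact code.

```java
import java.util.Properties;

// The default baked into the code wins whenever the XML entry is absent, so a
// wrong value in hbase-default.xml only bites users who rely on the
// documented number.
class PortDefault {
    static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
    static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;

    static int resolvePort(Properties conf) {
        String value = conf.getProperty(STATUS_MULTICAST_PORT);
        return value == null ? DEFAULT_STATUS_MULTICAST_PORT
                             : Integer.parseInt(value);
    }
}
```

Note the description also shows the XML using a different key name (hbase.status.multicast.address.port) than the code's hbase.status.multicast.port, so an XML entry under the wrong key would be ignored entirely.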





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951728#comment-15951728
 ] 

Jerry He commented on HBASE-17861:
--

bq. It seems like this should be the exception instead of the rule.

Makes sense.

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found some issue, when set up HBASE-17437: Support specifying a WAL directory 
> outside of the root directory.
> The region server are  showdown when I add following config into 
> hbase-site.xml 
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when hbase enable securebulkload, hbase will create a 
> folder in s3, it can not set above permission, because in s3, all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See Object stores have differerent authorization 
> models in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html





[jira] [Comment Edited] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951593#comment-15951593
 ] 

Ted Yu edited comment on HBASE-17860 at 3/31/17 10:19 PM:
--

Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm and export CYRUS_SASL_PLUGINS_DIR 
pointing to the directory where sasl library resides

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user, 
if you don't want to type password)

* run kinit with the keytab for user "hbase", or by providing password to kinit

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache


was (Author: yuzhih...@gmail.com):
Here is brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm and export CYRUS_SASL_PLUGINS_DIR 
pointing to the directory where sasl library resides

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user, 
if you don't want to type password)

* run kinit with the keytab for user "hbase", or by providing password to kinit

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with:)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.
> Here is a high-level description of the design:
> * SaslHandler is declared as:
> {code}
> class SaslHandler
> : public wangle::HandlerAdapter<folly::IOBufQueue&, std::unique_ptr<folly::IOBuf>> {
> {code}
> It would be inserted between EventBaseHandler and 
> LengthFieldBasedFrameDecoder in the pipeline (via 
> ConnectionFactory::Connect())
> * SaslHandler would intercept writes to the server by buffering the IOBufs 
> and starting the handshake process (via the sasl_client_XX calls provided by 
> Cyrus)
> * after the handshake is complete, SaslHandler would send the buffered 
> IOBufs to the server and act as a pass-thru from then on
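The buffer-then-flush behavior described for SaslHandler above can be sketched in plain C++, with simple strings standing in for the wangle/folly types; the class and method names here are illustrative stand-ins, not the actual handler API:

```cpp
#include <string>
#include <vector>

// Simplified stand-in for the SaslHandler behavior sketched above:
// writes issued before the SASL handshake completes are buffered,
// then flushed downstream in order once the handshake finishes.
class BufferingHandler {
 public:
  // Queue the payload until the handshake is done; pass through afterwards.
  void Write(const std::string& payload) {
    if (!handshake_done_) {
      pending_.push_back(payload);
    } else {
      sent_.push_back(payload);
    }
  }

  // Called when the SASL exchange completes: flush buffered writes in order
  // and switch to pass-thru mode.
  void OnHandshakeComplete() {
    handshake_done_ = true;
    for (const auto& p : pending_) sent_.push_back(p);
    pending_.clear();
  }

  const std::vector<std::string>& sent() const { return sent_; }

 private:
  bool handshake_done_ = false;
  std::vector<std::string> pending_;  // stands in for the buffered IOBufs
  std::vector<std::string> sent_;     // what has reached the server
};
```

The key property is that callers can issue writes immediately; ordering is preserved because buffered payloads are flushed before any post-handshake write.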



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor

2017-03-31 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951719#comment-15951719
 ] 

stack commented on HBASE-17863:
---

+1 Nice cleanup.

> Procedure V2: Some cleanup around isFinished() and procedure executor
> -
>
> Key: HBASE-17863
> URL: https://issues.apache.org/jira/browse/HBASE-17863
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Attachments: HBASE-17863.v1.patch
>
>
> Clean up around isFinished() and procedure executor



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17576) [C++] Implement request retry mechanism over RPC for Multi calls.

2017-03-31 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951716#comment-15951716
 ] 

Enis Soztutar commented on HBASE-17576:
---

- This should be ordered in reverse? 
{code}
+  return server_name ? "" : server_name->ShortDebugString();
{code}
- RawAsyncTable should remain async. All of the methods should return Futures, 
rather than blocking on the results. So the following method will need to go to 
table.cc instead. 
{code}
+std::vector RawAsyncTable::Get(
{code}
- {{std::vector}} is already an instance of 
{{std::vector}}, no? You should not need to copy the 
vector. 
{code}
+std::vector RawAsyncTable::Get(
+const std::vector& gets) {
+  std::vector rows;
{code}
- In Get(), when you call {{collectAll}} and {{then()}}, you should also wait 
for the results before returning the response back to the caller. 
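The review point about collecting and waiting on futures can be sketched with std::future standing in for the folly Futures used by the native client; the function names and result type below are illustrative, not the actual RawAsyncTable/Table API:

```cpp
#include <future>
#include <string>
#include <vector>

// Illustrative batched Get: the async layer returns one future per row,
// and the synchronous wrapper collects and waits on all of them before
// returning results to the caller.
std::vector<std::future<std::string>> AsyncBatchGet(
    const std::vector<std::string>& rows) {
  std::vector<std::future<std::string>> futures;
  for (const auto& row : rows) {
    // std::async stands in for the RPC that resolves each Get.
    futures.push_back(std::async(std::launch::async,
                                 [row] { return "result-for-" + row; }));
  }
  return futures;  // async API: hand futures back, do not block here
}

std::vector<std::string> SyncBatchGet(const std::vector<std::string>& rows) {
  auto futures = AsyncBatchGet(rows);
  std::vector<std::string> results;
  for (auto& f : futures) {
    results.push_back(f.get());  // block until every per-row RPC completes
  }
  return results;
}
```

This mirrors the split the review asks for: the async table returns futures without blocking, while the synchronous table wraps it and waits.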

> [C++] Implement request retry mechanism over RPC for Multi calls.
> -
>
> Key: HBASE-17576
> URL: https://issues.apache.org/jira/browse/HBASE-17576
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-17576.HBASE-14850.v1.patch, 
> HBASE-17576.HBASE-14850.v2.patch, HBASE-17576.HBASE-14850.v3.patch, 
> HBASE-17576.HBASE-14850.v4.patch, HBASE-17576.HBASE-14850.v5.patch, 
> HBASE-17576.HBASE-14850.v6.patch, HBASE-17576.HBASE-14850.v7.patch
>
>
> This work is based on top of HBASE-17465. Multi Calls will be based on this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor

2017-03-31 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-17863:
-
Status: Patch Available  (was: In Progress)

> Procedure V2: Some cleanup around isFinished() and procedure executor
> -
>
> Key: HBASE-17863
> URL: https://issues.apache.org/jira/browse/HBASE-17863
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Attachments: HBASE-17863.v1.patch
>
>
> Clean up around isFinished() and procedure executor



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor

2017-03-31 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-17863:
-
Attachment: HBASE-17863.v1.patch

> Procedure V2: Some cleanup around isFinished() and procedure executor
> -
>
> Key: HBASE-17863
> URL: https://issues.apache.org/jira/browse/HBASE-17863
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Attachments: HBASE-17863.v1.patch
>
>
> Clean up around isFinished() and procedure executor



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor

2017-03-31 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-17863 started by Umesh Agashe.

> Procedure V2: Some cleanup around isFinished() and procedure executor
> -
>
> Key: HBASE-17863
> URL: https://issues.apache.org/jira/browse/HBASE-17863
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Attachments: HBASE-17863.v1.patch
>
>
> Clean up around isFinished() and procedure executor



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17863) Procedure V2: Some cleanup around isFinished() and procedure executor

2017-03-31 Thread Umesh Agashe (JIRA)
Umesh Agashe created HBASE-17863:


 Summary: Procedure V2: Some cleanup around isFinished() and 
procedure executor
 Key: HBASE-17863
 URL: https://issues.apache.org/jira/browse/HBASE-17863
 Project: HBase
  Issue Type: Bug
  Components: proc-v2
Reporter: Umesh Agashe
Assignee: Umesh Agashe


Clean up around isFinished() and procedure executor



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951593#comment-15951593
 ] 

Ted Yu edited comment on HBASE-17860 at 3/31/17 10:09 PM:
--

Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm and export CYRUS_SASL_PLUGINS_DIR 
pointing to the directory where sasl library resides

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user, 
if you don't want to type password)

* run kinit with the keytab for user "hbase", or by providing password to kinit

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with:)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache


was (Author: yuzhih...@gmail.com):
Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm and export CYRUS_SASL_PLUGINS_DIR 
pointing to the directory where sasl library resides

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user)

* run kinit with the keytab

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with:)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.
> Here is a high-level description of the design:
> * SaslHandler is declared as:
> {code}
> class SaslHandler
> : public wangle::HandlerAdapter<folly::IOBufQueue&, std::unique_ptr<folly::IOBuf>> {
> {code}
> It would be inserted between EventBaseHandler and 
> LengthFieldBasedFrameDecoder in the pipeline (via 
> ConnectionFactory::Connect())
> * SaslHandler would intercept writes to the server by buffering the IOBufs 
> and starting the handshake process (via the sasl_client_XX calls provided by 
> Cyrus)
> * after the handshake is complete, SaslHandler would send the buffered 
> IOBufs to the server and act as a pass-thru from then on



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951702#comment-15951702
 ] 

Zach York commented on HBASE-17861:
---

Are there FileSystems other than S3 that don't have permissions? It seems 
like this should be the exception rather than the rule.
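A scheme-based guard is one way to treat object stores as the exception, as the comment above suggests. This is a hypothetical sketch; the scheme list and function names are illustrative, not the patch's actual logic:

```cpp
#include <set>
#include <string>

// Object stores that only simulate POSIX permissions; chmod-style checks
// on these schemes are meaningless and should be skipped.
// (Hypothetical list for illustration; the real patch may differ.)
bool SupportsRealPermissions(const std::string& scheme) {
  static const std::set<std::string> kObjectStoreSchemes = {
      "s3", "s3a", "s3n", "wasb", "swift"};
  return kObjectStoreSchemes.count(scheme) == 0;
}

// Enforce the staging-dir permission only on filesystems that honor it.
bool ShouldFailOnPermissionMismatch(const std::string& scheme,
                                    bool permissionsMatch) {
  if (!SupportsRealPermissions(scheme)) return false;  // e.g. s3a: skip check
  return !permissionsMatch;
}
```

With a guard like this, HDFS keeps the strict '-rwx--x--x' check while S3A-backed directories no longer abort the region server.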

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a 
> folder in S3 but cannot set the above permission, because in S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See "Object stores have different authorization 
> models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951593#comment-15951593
 ] 

Ted Yu edited comment on HBASE-17860 at 3/31/17 9:56 PM:
-

Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm and export CYRUS_SASL_PLUGINS_DIR 
pointing to the directory where sasl library resides

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user)

* run kinit with the keytab

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with:)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache


was (Author: yuzhih...@gmail.com):
Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user)

* run kinit with the keytab

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with:)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.
> Here is a high-level description of the design:
> * SaslHandler is declared as:
> {code}
> class SaslHandler
> : public wangle::HandlerAdapter<folly::IOBufQueue&, std::unique_ptr<folly::IOBuf>> {
> {code}
> It would be inserted between EventBaseHandler and 
> LengthFieldBasedFrameDecoder in the pipeline (via 
> ConnectionFactory::Connect())
> * SaslHandler would intercept writes to the server by buffering the IOBufs 
> and starting the handshake process (via the sasl_client_XX calls provided by 
> Cyrus)
> * after the handshake is complete, SaslHandler would send the buffered 
> IOBufs to the server and act as a pass-thru from then on



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951696#comment-15951696
 ] 

Ted Yu commented on HBASE-17861:


Name your patch HBASE-17861.branch-1.V1.patch

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a 
> folder in S3 but cannot set the above permission, because in S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See "Object stores have different authorization 
> models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951695#comment-15951695
 ] 

Yi Liang edited comment on HBASE-17861 at 3/31/17 9:54 PM:
---

I mean branch-1; I will rename the patch as you suggested


was (Author: easyliangjob):
I mean branch-1

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a 
> folder in S3 but cannot set the above permission, because in S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See "Object stores have different authorization 
> models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951695#comment-15951695
 ] 

Yi Liang commented on HBASE-17861:
--

I mean branch-1

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a 
> folder in S3 but cannot set the above permission, because in S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See "Object stores have different authorization 
> models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17860:
---
Description: 
So far, the native client communicates with an insecure cluster.

This JIRA is to add secure connection support for the native client using the 
Cyrus library.
The work is based on an earlier implementation and is redone via the wangle 
and folly frameworks.

Thanks to [~devaraj] who started the initiative.

Here is a high-level description of the design:
* SaslHandler is declared as:
{code}
class SaslHandler
: public wangle::HandlerAdapter<folly::IOBufQueue&, std::unique_ptr<folly::IOBuf>> {
{code}
It would be inserted between EventBaseHandler and LengthFieldBasedFrameDecoder 
in the pipeline (via ConnectionFactory::Connect())

* SaslHandler would intercept writes to the server by buffering the IOBufs and 
starting the handshake process (via the sasl_client_XX calls provided by Cyrus)

* after the handshake is complete, SaslHandler would send the buffered IOBufs 
to the server and act as a pass-thru from then on

  was:
So far, the native client communicates with an insecure cluster.

This JIRA is to add secure connection support for the native client using the 
Cyrus library.
The work is based on an earlier implementation and is redone via the wangle 
and folly frameworks.

Thanks to [~devaraj] who started the initiative.


> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.
> Here is a high-level description of the design:
> * SaslHandler is declared as:
> {code}
> class SaslHandler
> : public wangle::HandlerAdapter<folly::IOBufQueue&, std::unique_ptr<folly::IOBuf>> {
> {code}
> It would be inserted between EventBaseHandler and 
> LengthFieldBasedFrameDecoder in the pipeline (via 
> ConnectionFactory::Connect())
> * SaslHandler would intercept writes to the server by buffering the IOBufs 
> and starting the handshake process (via the sasl_client_XX calls provided by 
> Cyrus)
> * after the handshake is complete, SaslHandler would send the buffered 
> IOBufs to the server and act as a pass-thru from then on



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951593#comment-15951593
 ] 

Ted Yu edited comment on HBASE-17860 at 3/31/17 9:30 PM:
-

Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user)

* run kinit with the keytab

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table (test would populate the table with:)
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache


was (Author: yuzhih...@gmail.com):
Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user)

* run kinit with the keytab

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use hbase shell to create table and populate with:
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951640#comment-15951640
 ] 

Ted Yu commented on HBASE-17861:


You can name your patch HBASE-17861.branch-1.0.V1.patch

But is there going to be any more releases for branch-1.0?

Did you mean branch-1.1?

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a 
> folder in S3 but cannot set the above permission, because in S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See "Object stores have different authorization 
> models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951607#comment-15951607
 ] 

Yi Liang commented on HBASE-17861:
--

Hi Ted, thanks for reviewing. I have tested it on S3 with HBase 1.2.4. 

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers shut down when I add the following config into 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> Error is below
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that, when HBase enables secure bulk load, it creates a 
> folder in S3 but cannot set the above permission, because in S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See "Object stores have different authorization 
> models" in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Description: 
Found an issue while setting up HBASE-17437: Support specifying a WAL 
directory outside of the root directory.

The region servers shut down when I add the following config into 
hbase-site.xml:
hbase.rootdir = hdfs://xx/xx
hbase.wal.dir = s3a://xx//xx
hbase.coprocessor.region.classes = 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
Error is below
{noformat}
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
java.lang.IllegalStateException: Directory already exists but permissions 
aren't set to '-rwx--x--x'
{noformat}

The reason is that, when HBase enables secure bulk load, it creates a 
folder in S3 but cannot set the above permission, because in S3 all files are 
listed as having full read/write permissions and all directories appear to 
have full rwx permissions. See "Object stores have different authorization 
models" in 
https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html



  was:
Found an issue while setting up HBASE-17437: Support specifying a WAL directory 
outside of the root directory.

The region servers are shut down when I add the following config to hbase-site.xml:
hbase.rootdir = hdfs://xx/xx
hbase.wal.dir = s3a://xx//xx
hbase.coprocessor.region.classes = 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
The error is below:
{noformat}
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
java.lang.IllegalStateException: Directory already exists but permissions 
aren't set to '-rwx--x--x'
{noformat}

The reason is that when HBase enables secure bulk load, it creates a folder on 
S3 but cannot set the permissions above, because on S3 all files are listed as 
having full read/write permissions and all directories appear to have full rwx 
permissions. See the simulated-permissions section in 
https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html




> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the "Object stores have different authorization 
> models" section in 
> https://hadoop.apache.org/docs/current3/hadoop-aws/tools/hadoop-aws/index.html





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951606#comment-15951606
 ] 

Hadoop QA commented on HBASE-17861:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s {color} 
| {color:red} HBASE-17861 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861500/HBASE-17861-V1.patch |
| JIRA Issue | HBASE-17861 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6288/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951605#comment-15951605
 ] 

Ted Yu commented on HBASE-17861:


lgtm

Have you tested with the s3a filesystem?

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Commented] (HBASE-16942) Add FavoredStochasticLoadBalancer and FN Candidate generators

2017-03-31 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951596#comment-15951596
 ] 

Thiruvel Thirumoolan commented on HBASE-16942:
--

Will rebase and upload.

> Add FavoredStochasticLoadBalancer and FN Candidate generators
> -
>
> Key: HBASE-16942
> URL: https://issues.apache.org/jira/browse/HBASE-16942
> Project: HBase
>  Issue Type: Sub-task
>  Components: FavoredNodes
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16942.master.001.patch, 
> HBASE-16942.master.002.patch, HBASE-16942.master.003.patch, 
> HBASE-16942.master.004.patch, HBASE-16942.master.005.patch, 
> HBASE-16942.master.006.patch, HBASE-16942.master.007.patch, 
> HBASE-16942.master.008.patch, HBASE_16942_rough_draft.patch
>
>
> This deals with the balancer-based enhancements to the favored nodes patch, 
> as discussed in HBASE-15532.





[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Status: Patch Available  (was: Open)

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Comment Edited] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951593#comment-15951593
 ] 

Ted Yu edited comment on HBASE-17860 at 3/31/17 8:28 PM:
-

Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* generate hbase-host.keytab for server (and optionally hbase.keytab for user)

* run kinit with the keytab

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use the hbase shell to create a table and populate it with:
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache


was (Author: yuzhih...@gmail.com):
Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use the hbase shell to create a table and populate it with:
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17862) Condition that always returns true

2017-03-31 Thread JC (JIRA)
JC created HBASE-17862:
--

 Summary: Condition that always returns true
 Key: HBASE-17862
 URL: https://issues.apache.org/jira/browse/HBASE-17862
 Project: HBase
  Issue Type: Bug
  Components: Client
Reporter: JC
Priority: Trivial


Hi

In a recent GitHub mirror of HBase, I found the following code smell.

Path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java

{code}
209 
210 ColumnPaginationFilter other = (ColumnPaginationFilter)o;
211 if (this.columnOffset != null) {
212   return this.getLimit() == this.getLimit() &&
213   Bytes.equals(this.getColumnOffset(), other.getColumnOffset());
214 }
{code}

It should probably be:
{code}
212   return this.getLimit() == other.getLimit() &&
{code}

This might be just a code smell, since Bytes.equals may be enough for the 
return value, but I wanted to report it just in case.

Thanks!
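A minimal, hypothetical reduction (not the actual filter class) showing why the self-comparison makes the limit check vacuous:

```java
// Hypothetical reduction of the ColumnPaginationFilter bug: comparing
// this.limit to itself is always true, so two filters with different
// limits (but equal offsets) would still compare equal.
public class SelfCompareBug {
    final int limit;

    SelfCompareBug(int limit) { this.limit = limit; }

    // Buggy version: the left-hand side compares the field to itself.
    boolean buggyEquals(SelfCompareBug other) {
        return this.limit == this.limit;    // bug: 'other.limit' never read
    }

    // Intended comparison against the other instance.
    boolean fixedEquals(SelfCompareBug other) {
        return this.limit == other.limit;
    }

    public static void main(String[] args) {
        SelfCompareBug a = new SelfCompareBug(1);
        SelfCompareBug b = new SelfCompareBug(2);
        System.out.println("buggy: " + a.buggyEquals(b));  // true
        System.out.println("fixed: " + a.fixedEquals(b));  // false
    }
}
```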






[jira] [Commented] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951593#comment-15951593
 ] 

Ted Yu commented on HBASE-17860:


Here is a brief procedure for testing:

* install cyrus-sasl-2.1.26 on docker vm

* follow this link to install kerberos packages: 
https://help.ubuntu.com/lts/serverguide/kerberos.html

* follow this link to configure KDC: 
https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/

* apply the patch which sets necessary config in conf/hbase-site.xml

* run bin/start-hbase.sh to start hbase server

* use the hbase shell to create a table and populate it with:
{code}
 test1  column=d:1, 
timestamp=1490984371943, value=value1
 test1  column=d:extra, 
timestamp=1490984371949, value=value for extra
 test2  column=d:2, 
timestamp=1490831145321, value=value2
 test2  column=d:extra, 
timestamp=1490831219721, value=value for extra
{code}
* run the following command and verify that ClientTest.PutGet passes:

buck test //core:client-test --no-results-cache

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.





[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Labels: filesystem s3 wal  (was: )

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>  Labels: filesystem, s3, wal
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Comment Edited] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951589#comment-15951589
 ] 

Yi Liang edited comment on HBASE-17861 at 3/31/17 8:21 PM:
---

Also, in the master branch the code in SecureBulkLoadEndpoint has been 
refactored and the statement that checks the staging folder permission has been 
removed, so the patch is only for branch-1.0.



was (Author: easyliangjob):
Also, in the master branch the code in SecureBulkLoadEndpoint has been 
refactored and the statement that checks the folder permission has been 
removed, so the patch is only for branch-1.0.


> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Attachment: (was: HBase-17821-V1.patch)

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Attachment: HBASE-17861-V1.patch
HBase-17821-V1.patch

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 1.4.0
>
> Attachments: HBASE-17861-V1.patch
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Commented] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951589#comment-15951589
 ] 

Yi Liang commented on HBASE-17861:
--

Also, in the master branch the code in SecureBulkLoadEndpoint has been 
refactored and the statement that checks the folder permission has been 
removed, so the patch is only for branch-1.0.


> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 1.4.0
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Created] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)
Yi Liang created HBASE-17861:


 Summary: Regionserver down when checking the permission of staging 
dir if hbase.rootdir is on S3
 Key: HBASE-17861
 URL: https://issues.apache.org/jira/browse/HBASE-17861
 Project: HBase
  Issue Type: Bug
Reporter: Yi Liang
Assignee: Yi Liang


Found an issue while setting up HBASE-17437: Support specifying a WAL directory 
outside of the root directory.

The region servers are shut down when I add the following config to hbase-site.xml:
hbase.rootdir = hdfs://xx/xx
hbase.wal.dir = s3a://xx//xx
hbase.coprocessor.region.classes = 
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
The error is below:
{noformat}
org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
java.lang.IllegalStateException: Directory already exists but permissions 
aren't set to '-rwx--x--x'
{noformat}

The reason is that when HBase enables secure bulk load, it creates a folder on 
S3 but cannot set the permissions above, because on S3 all files are listed as 
having full read/write permissions and all directories appear to have full rwx 
permissions. See the simulated-permissions section in 
https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html
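The failing check described above can be illustrated with a minimal, self-contained sketch. The class and method names here are hypothetical (the real check lives in SecureBulkLoadEndpoint and uses Hadoop's FsPermission); the point is that HDFS honors the permission HBase sets, while S3A reports every directory as full rwx, so the equality check always throws on object stores:

```java
// Hypothetical sketch of why the staging-dir permission check fails on S3.
public class StagingDirCheck {
    static final String EXPECTED = "rwx--x--x";

    // Stand-in for FileSystem.getFileStatus(...).getPermission(): object
    // stores ignore setPermission and report full access for directories.
    static String reportedPermission(String scheme) {
        return scheme.startsWith("s3") ? "rwxrwxrwx" : EXPECTED;
    }

    static void checkStagingDir(String scheme) {
        if (!EXPECTED.equals(reportedPermission(scheme))) {
            throw new IllegalStateException(
                "Directory already exists but permissions aren't set to '-rwx--x--x'");
        }
    }

    public static void main(String[] args) {
        checkStagingDir("hdfs");             // passes: HDFS honors the perms
        try {
            checkStagingDir("s3a");          // S3A reports rwxrwxrwx
        } catch (IllegalStateException e) {
            System.out.println("s3a: " + e.getMessage());
        }
    }
}
```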







[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Affects Version/s: 1.4.0

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 1.4.0
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Updated] (HBASE-17861) Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3

2017-03-31 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17861:
-
Fix Version/s: 1.4.0

> Regionserver down when checking the permission of staging dir if 
> hbase.rootdir is on S3
> ---
>
> Key: HBASE-17861
> URL: https://issues.apache.org/jira/browse/HBASE-17861
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 1.4.0
>
>
> Found an issue while setting up HBASE-17437: Support specifying a WAL 
> directory outside of the root directory.
> The region servers are shut down when I add the following config to 
> hbase-site.xml:
> hbase.rootdir = hdfs://xx/xx
> hbase.wal.dir = s3a://xx//xx
> hbase.coprocessor.region.classes = 
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint
> The error is below:
> {noformat}
> org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw 
> java.lang.IllegalStateException: Directory already exists but permissions 
> aren't set to '-rwx--x--x'
> {noformat}
> The reason is that when HBase enables secure bulk load, it creates a folder 
> on S3 but cannot set the permissions above, because on S3 all files are 
> listed as having full read/write permissions and all directories appear to 
> have full rwx permissions. See the simulated-permissions section in 
> https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/s3-s3aclient/index.html





[jira] [Commented] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951575#comment-15951575
 ] 

Ted Yu commented on HBASE-17860:


Currently cleaning up code and replacing the hardcoded principal with one 
retrieved from the Kerberos ccache.

Secure mode is detected when the config "hbase.security.authentication" is set 
to "kerberos".

The service name is retrieved from the config 
"hbase.regionserver.kerberos.principal".
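The detection logic described above can be sketched as follows, assuming a plain key/value view of the configuration (class and method names are hypothetical, not the actual native-client API):

```java
import java.util.Map;

// Hypothetical sketch: secure mode is keyed off "hbase.security.authentication",
// and the SASL service name is taken from the region server principal, which
// typically has the form "service/host@REALM".
public class SecureModeDetect {
    static boolean isSecure(Map<String, String> conf) {
        return "kerberos".equalsIgnoreCase(
            conf.getOrDefault("hbase.security.authentication", "simple"));
    }

    static String serviceName(Map<String, String> conf) {
        String principal =
            conf.getOrDefault("hbase.regionserver.kerberos.principal", "");
        int slash = principal.indexOf('/');
        // Service name is the part before the first '/', if any.
        return slash < 0 ? principal : principal.substring(0, slash);
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of(
            "hbase.security.authentication", "kerberos",
            "hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
        System.out.println(isSecure(conf));      // true
        System.out.println(serviceName(conf));   // hbase
    }
}
```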

> Implement secure native client connection
> -
>
> Key: HBASE-17860
> URL: https://issues.apache.org/jira/browse/HBASE-17860
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
>
> So far, the native client communicates with an insecure cluster.
> This JIRA is to add secure connection support for the native client using the 
> Cyrus library.
> The work is based on an earlier implementation and is redone via the wangle 
> and folly frameworks.
> Thanks to [~devaraj] who started the initiative.





[jira] [Commented] (HBASE-17698) ReplicationEndpoint choosing sinks

2017-03-31 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951573#comment-15951573
 ] 

Andrew Purtell commented on HBASE-17698:


+1 will commit shortly


> ReplicationEndpoint choosing sinks
> --
>
> Key: HBASE-17698
> URL: https://issues.apache.org/jira/browse/HBASE-17698
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0
>Reporter: churro morales
>Assignee: Karan Mehta
> Attachments: HBASE-17698.patch
>
>
> The only time we choose new sinks is when we have a ConnectException, but we 
> have encountered other exceptions where there was a problem contacting a 
> particular sink, and replication got backed up for any sources that tried that 
> sink.
> HBASE-17675 occurred when there was a bad keytab refresh and the source was 
> stuck.
> Another issue we recently had was a bad drive controller on the sink side, and 
> replication was stuck again.
> Is there any reason not to choose new sinks any time we have a 
> RemoteException? I can understand that for TableNotFound we don't have to 
> choose new sinks, but for all other cases this seems like the safest approach.
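
As a rough sketch of the policy proposed above: re-select sinks on any remote failure except a missing table, rather than only on ConnectException. The `shouldChooseNewSinks()` hook and the local exception stand-in are hypothetical; the real ReplicationEndpoint wiring is more involved.

```java
import java.net.ConnectException;

// Illustrative only: decide whether a failure should trigger choosing
// new replication sinks.
public class SinkPolicySketch {
    // Stand-in for org.apache.hadoop.hbase.TableNotFoundException.
    static class TableNotFoundException extends Exception {}

    static boolean shouldChooseNewSinks(Exception e) {
        // A missing table on the sink says nothing about sink health, so
        // keep the current sinks; anything else (connect errors, keytab
        // problems, or bad drives surfacing as RemoteException) triggers
        // re-selection.
        return !(e instanceof TableNotFoundException);
    }

    public static void main(String[] args) {
        System.out.println(shouldChooseNewSinks(new ConnectException()));
        System.out.println(shouldChooseNewSinks(new TableNotFoundException()));
    }
}
```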





[jira] [Created] (HBASE-17860) Implement secure native client connection

2017-03-31 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17860:
--

 Summary: Implement secure native client connection
 Key: HBASE-17860
 URL: https://issues.apache.org/jira/browse/HBASE-17860
 Project: HBase
  Issue Type: Sub-task
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Critical


So far, the native client communicates only with an insecure cluster.

This JIRA is to add secure connection support for the native client using the 
Cyrus library.
The work is based on an earlier implementation and is redone via the wangle and 
folly frameworks.

Thanks to [~devaraj] who started the initiative.





[jira] [Updated] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit

2017-03-31 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16780:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Resolving.

> Since move to protobuf3.1, Cells are limited to 64MB where previous they had 
> no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16780.master.001.patch, 
> HBASE-16780.master.002.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping the size we write, 
> and he found that we are now bound at 64MB. Digging in, yeah, there is a check 
> in place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324 but making an issue here in the 
> meantime in case we have to note a change in behavior in hbase-2.0.0.





[jira] [Commented] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951467#comment-15951467
 ] 

Hadoop QA commented on HBASE-17836:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 45s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 30s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861482/HBASE-17836.v0.patch |
| JIRA Issue | HBASE-17836 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 3c17d10c73d4 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / a9682ca |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6287/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6287/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>

[jira] [Commented] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951428#comment-15951428
 ] 

Chia-Ping Tsai commented on HBASE-17836:


bq. This check can be removed now as KV is ExtendedCell type anyway
Copy that. Thanks for the feedback, [~anoop.hbase].

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is ByteBufferCell, the 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using the KeyValueUtil#getSerializedSize.





[jira] [Commented] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951411#comment-15951411
 ] 

Anoop Sam John commented on HBASE-17836:


You will have to add Bytes.SIZEOF_INT. (See the KV case.)
{code}
if (cell instanceof KeyValue) {
  return ((KeyValue) cell).getLength() + Bytes.SIZEOF_INT;
}
{code}
This check can be removed now as KV is ExtendedCell type anyway.
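
To illustrate the arithmetic being discussed, a plain-Java sketch of the two estimation paths (not the actual CellUtil/KeyValueUtil code; component sizes follow the standard KeyValue layout):

```java
// Hypothetical sketch: a KeyValue already knows its total length, so the
// fast path is getLength() + SIZEOF_INT, while the slow path rebuilds the
// same total from individual component lengths -- which for a
// ByteBufferCell means several reads into the backing buffer.
public class SerializedSizeSketch {
    static final int SIZEOF_INT = 4;

    // Slow path: reconstruct the KeyValue layout from component lengths.
    static int estimateFromComponents(int rowLen, int famLen, int qualLen, int valueLen) {
        int keyLen = 2 /* row length short */ + rowLen
                + 1 /* family length byte */ + famLen
                + qualLen + 8 /* timestamp */ + 1 /* type */;
        int kvLen = SIZEOF_INT /* key length */ + SIZEOF_INT /* value length */
                + keyLen + valueLen;
        return kvLen + SIZEOF_INT; // plus the leading length prefix
    }

    // Fast path: the KeyValue's own length plus the length prefix.
    static int estimateFromKeyValueLength(int kvLength) {
        return kvLength + SIZEOF_INT;
    }

    public static void main(String[] args) {
        int kvLen = 4 + 4 + (2 + 3 + 1 + 2 + 1 + 8 + 1) + 4; // 30 bytes
        System.out.println(estimateFromComponents(3, 2, 1, 4)
            == estimateFromKeyValueLength(kvLen));
    }
}
```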

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is ByteBufferCell, the 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using the KeyValueUtil#getSerializedSize.





[jira] [Updated] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17836:
---
Assignee: Chia-Ping Tsai
  Status: Patch Available  (was: Open)

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is ByteBufferCell, the 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using the KeyValueUtil#getSerializedSize.





[jira] [Updated] (HBASE-17836) CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell

2017-03-31 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17836:
---
Attachment: HBASE-17836.v0.patch

> CellUtil#estimatedSerializedSizeOf is slow when input is ByteBufferCell
> ---
>
> Key: HBASE-17836
> URL: https://issues.apache.org/jira/browse/HBASE-17836
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17836.v0.patch
>
>
> We call CellUtil#estimatedSerializedSize to calculate the size of rows when 
> scanning. If the input is ByteBufferCell, the 
> CellUtil#estimatedSerializedSizeOf parses many length components to get the 
> qualifierLength stored in the backing buffer.
> We should consider using the KeyValueUtil#getSerializedSize.





[jira] [Comment Edited] (HBASE-17172) Optimize mob compaction with _del files

2017-03-31 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951360#comment-15951360
 ] 

huaxiang sun edited comment on HBASE-17172 at 3/31/17 5:35 PM:
---

[~jingcheng.du], a late follow-up on this. Grouping delete files by their 
first/last keys is meant to avoid including delete files in the set of 
files-to-be-compacted as much as possible. If only the start key is used, there 
is one case which I am not sure how to handle (maybe I am not following your 
idea correctly).

Let's say region 1 starts with key0 and ends at key2. It has one delete file, 
key0***_del. After that, the region may split into region1-0 and region1-1. For 
region1-1, key0***_del may be included for compaction, as it may contain keys 
for it. My understanding is that if we only use the startKey to group files, 
key0***_del will not be included in region1-1's mob compaction.

Maybe, as you said:
{quote}
Since now we have always retained the delete markers in hfiles, 
{quote}
it is OK not to include the delete file with region1-1: data for the deleted 
cells will still be kept and will be bulkloaded after mob compaction, and since 
the delete markers are still in the hfiles, the deleted cells will not show up.

Is my understanding correct? Thanks [~jingcheng.du]!





> Optimize mob compaction with _del files
> ---
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0
>
> Attachments: HBASE-17172-master-001.patch, 
> HBASE-17172.master.001.patch, HBASE-17172.master.002.patch, 
> HBASE-17172.master.003.patch
>
>
> Today, when there is a _del file in mobdir, every mob file will be recompacted 
> during major mob compaction. This causes lots of IO and slows down major mob 
> compaction (which may take months to finish), so it needs to be improved. A 
> few ideas are:
> 1) Do not compact all _del files into one; instead, compact them in groups 
> keyed by startKey, then use the firstKey/startKey of each mob file to see if 
> the _del file needs to be included for this partition.
> 2) Based on the timerange of the _del file, compaction for files after that 
> timerange does not need to include the _del file, as these are newer files.
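
Idea 1) above boils down to a key-range overlap test. A minimal sketch with String keys follows; the class and method names are illustrative only, and the real mob compaction partitions and file metadata are of course more involved.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of selecting _del files by key range: a _del file
// is included for a partition only when their key ranges overlap.
public class DelFileSelectionSketch {
    static class KeyRange {
        final String firstKey, lastKey; // inclusive bounds
        KeyRange(String firstKey, String lastKey) {
            this.firstKey = firstKey;
            this.lastKey = lastKey;
        }
        boolean overlaps(KeyRange other) {
            // Ranges overlap unless one ends strictly before the other begins.
            return firstKey.compareTo(other.lastKey) <= 0
                && other.firstKey.compareTo(lastKey) <= 0;
        }
    }

    // Keep only the _del files whose range intersects the partition's range;
    // the rest can be skipped for this partition's compaction.
    static List<KeyRange> selectDelFiles(KeyRange partition, List<KeyRange> delFiles) {
        List<KeyRange> selected = new ArrayList<>();
        for (KeyRange del : delFiles) {
            if (partition.overlaps(del)) {
                selected.add(del);
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        KeyRange partition = new KeyRange("key1", "key2");
        List<KeyRange> dels = new ArrayList<>();
        dels.add(new KeyRange("key0", "key1")); // overlaps -> included
        dels.add(new KeyRange("key3", "key4")); // disjoint -> skipped
        System.out.println(selectDelFiles(partition, dels).size());
    }
}
```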





[jira] [Commented] (HBASE-17172) Optimize mob compaction with _del files

2017-03-31 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951360#comment-15951360
 ] 

huaxiang sun commented on HBASE-17172:
--

[~jingcheng.du], a late follow-up on this. Grouping delete files by their 
first/last keys is meant to avoid including delete files in the set of 
files-to-be-compacted as much as possible. If only the start key is used, there 
is one case which I am not sure how to handle (maybe I am not following your 
idea correctly).

Let's say region 1 starts with key0 and ends at key2. It has one delete file, 
key0***_del. After that, the region may split into region1-0 and region1-1. For 
region1-1, key0***_del may be included for compaction, as it may contain keys 
for it. My understanding is that if we only use the startKey to group files, 
key0***_del will not be included in region1-1's mob compaction.

Maybe, as you said:
{quote}
Since now we have always retained the delete markers in hfiles, 
{quote}
it is OK not to include the delete file with region1-1: data for the deleted 
cells will still be kept and will be bulkloaded after mob compaction, and since 
the delete markers are still in the hfiles, the deleted cells will not show up.

Is my understanding correct? Thanks [~jingcheng.du]!


> Optimize mob compaction with _del files
> ---
>
> Key: HBASE-17172
> URL: https://issues.apache.org/jira/browse/HBASE-17172
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0
>
> Attachments: HBASE-17172-master-001.patch, 
> HBASE-17172.master.001.patch, HBASE-17172.master.002.patch, 
> HBASE-17172.master.003.patch
>
>
> Today, when there is a _del file in mobdir, every mob file will be recompacted 
> during major mob compaction. This causes lots of IO and slows down major mob 
> compaction (which may take months to finish), so it needs to be improved. A 
> few ideas are:
> 1) Do not compact all _del files into one; instead, compact them in groups 
> keyed by startKey, then use the firstKey/startKey of each mob file to see if 
> the _del file needs to be included for this partition.
> 2) Based on the timerange of the _del file, compaction for files after that 
> timerange does not need to include the _del file, as these are newer files.





[jira] [Commented] (HBASE-15143) Procedure v2 - Web UI displaying queues

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951291#comment-15951291
 ] 

Hadoop QA commented on HBASE-15143:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
8s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 7s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 25 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 13 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 52s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 106m 39s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 5m 3s {color} | 
{color:red} hbase-shell in the patch failed. {color} |
| {color:green}+1{color} | 

[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2017-03-31 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951249#comment-15951249
 ] 

Sean Busbey commented on HBASE-16179:
-

bq. How can we move this forward ?

I've been waiting on the dev@ discussion. You could start it.

{quote}
bq. just properly kept out of the classpath for general HBase uses
Any suggestion on how the above can be achieved ?
{quote}

We currently build a classpath for our services and utilities. Just make sure 
that however those classpaths are formed we don't include the various 
sparkXscala specific jars.

bq. Without support for Spark 2.x, it makes little sense to include hbase-spark 
module for 2.0.0

I don't agree with this statement; there are plenty of Spark 1.6 users still. I 
do agree that I'd like Spark 2 support for HBase 2.0.0.

If [~jerryhe] is also game, I'd be up for us changing gears a bit here. We 
could focus here just on what _has_ to be done for correctness and do things 
that would be nice in follow-ons.

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16179.v0.txt, 16179.v10.txt, 16179.v11.txt, 
> 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 16179.v15.txt, 
> 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 16179.v1.txt, 
> 16179.v1.txt, 16179.v20.txt, 16179.v22.txt, 16179.v23.txt, 16179.v24.txt, 
> 16179.v25.txt, 16179.v26.txt, 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 
> 16179.v8.txt, 16179.v9.txt
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Updated] (HBASE-16222) fix RegionServer Host Name

2017-03-31 Thread Weizhan Zeng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weizhan Zeng updated HBASE-16222:
-
Component/s: (was: Balancer)
 (was: Region Assignment)
 (was: Client)
 rsgroup

> fix RegionServer Host Name
> --
>
> Key: HBASE-16222
> URL: https://issues.apache.org/jira/browse/HBASE-16222
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0, 1.1.2, 1.2.2
>Reporter: weizhan Zeng
>Priority: Minor
> Attachments: HBASE-16222.patch
>
>
> The RegionServer host name is not equal to the region server hostname in the 
> meta table. For example, host BeiJIN.98.100.hbase.com appears in zk and on the 
> master as beijin.98.100.hbase.com, but in the meta table it is 
> BeiJIN.98.199.hbase.com. Because of this, when the rsgroup balancer compares 
> hosts, it causes problems. The reason is that when the regionserver reports 
> its host to the master, it lowercases the hostname.





[jira] [Commented] (HBASE-17857) Remove IS annotations from IA.Public classes

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951201#comment-15951201
 ] 

Hadoop QA commented on HBASE-17857:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 7s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hbase-annotations in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 110m 55s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 3s 
{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 37s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 189m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861445/HBASE-17857-v1.patch |
| JIRA Issue | HBASE-17857 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 5891a8d73df7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HBASE-17215) Separate small/large file delete threads in HFileCleaner to accelerate archived hfile cleanup speed

2017-03-31 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951178#comment-15951178
 ] 

huaxiang sun commented on HBASE-17215:
--

{quote}
Please carefully check and confirm whether this is caused by pressure on 
Namenode, and if so the change here might worsen it (more requests in parallel, 
although not that much). And good luck(smile).
{quote}

Thanks [~carp84] for the great advice. We checked the name node, its workload 
is not heavy. Still investigating why it takes 120 ms to delete one file as 
tracelog seems to tell us it should be much faster. 

Thanks for the patch! We will apply this patch and HBASE-17854 to see how much 
is improved.

> Separate small/large file delete threads in HFileCleaner to accelerate 
> archived hfile cleanup speed
> ---
>
> Key: HBASE-17215
> URL: https://issues.apache.org/jira/browse/HBASE-17215
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-17215.patch, HBASE-17215.v2.patch, 
> HBASE-17215.v3.patch
>
>
> When using PCIe-SSD the flush speed will be really quick, and although we 
> have per-CF flush, we still have the 
> {{hbase.regionserver.optionalcacheflushinterval}} setting and some other 
> mechanisms that avoid keeping data in memory for too long, which flush small 
> hfiles. In our online environment we found that the single-threaded cleaner 
> kept cleaning earlier-flushed small files while large files got no chance, 
> which filled the disk and then caused many other problems.
> Deleting hfiles in parallel with too many threads would also increase the 
> workload of the namenode, so here we propose to separate large/small hfile 
> cleaner threads, just as we do for compaction; this turned out to work well 
> in our cluster.
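
The separation described above can be sketched as two independent delete queues keyed on a size threshold. The threshold value and all names here are illustrative, not the actual HFileCleaner code.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: archived hfiles are routed to separate queues by
// size; each queue would be drained by its own deleter thread, so a flood
// of small flushed files cannot starve large-file deletion.
public class SplitCleanerSketch {
    static final long LARGE_FILE_THRESHOLD = 64L * 1024 * 1024; // assumed cutoff

    final BlockingQueue<String> smallFileQueue = new LinkedBlockingQueue<>();
    final BlockingQueue<String> largeFileQueue = new LinkedBlockingQueue<>();

    // Route a file to the queue matching its size.
    void submit(String path, long fileSize) {
        if (fileSize >= LARGE_FILE_THRESHOLD) {
            largeFileQueue.add(path);
        } else {
            smallFileQueue.add(path);
        }
    }

    public static void main(String[] args) {
        SplitCleanerSketch cleaner = new SplitCleanerSketch();
        cleaner.submit("archive/small-hfile", 4 * 1024);
        cleaner.submit("archive/large-hfile", 256L * 1024 * 1024);
        System.out.println(cleaner.smallFileQueue.size() + " " + cleaner.largeFileQueue.size());
    }
}
```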




