[jira] [Commented] (HADOOP-13827) Add reencryptEDEK interface for KMS

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704572#comment-15704572
 ] 

Hadoop QA commented on HADOOP-13827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-common-project: The patch generated 11 
new + 176 unchanged - 10 fixed = 187 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
36s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.crypto.key.KeyProvider$KeyVersion defines equals and 
uses Object.hashCode()  At KeyProvider.java:Object.hashCode()  At 
KeyProvider.java:[lines 113-129] |
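The FindBugs warning above is the classic equals-without-hashCode pattern: a class that overrides {{equals}} must override {{hashCode}} too, or equal objects can land in different hash buckets. A minimal, hypothetical sketch of the usual fix (field names loosely modeled on {{KeyVersion}}; not the actual Hadoop code):

```java
import java.util.Arrays;
import java.util.Objects;

// Illustrative only: field names mirror KeyVersion loosely and are
// assumptions, not the real org.apache.hadoop.crypto.key.KeyProvider code.
class KeyVersion {
  private final String name;
  private final String versionName;
  private final byte[] material;

  KeyVersion(String name, String versionName, byte[] material) {
    this.name = name;
    this.versionName = versionName;
    this.material = material;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof KeyVersion)) return false;
    KeyVersion other = (KeyVersion) o;
    return Objects.equals(name, other.name)
        && Objects.equals(versionName, other.versionName)
        && Arrays.equals(material, other.material);
  }

  // The piece FindBugs says is missing: keep hashCode consistent with equals.
  @Override
  public int hashCode() {
    return Objects.hash(name, versionName, Arrays.hashCode(material));
  }
}
```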
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13827 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840818/HADOOP-13827.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6f479e85f87a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 67d9f28 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11158/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt
 |
| 

[jira] [Updated] (HADOOP-13827) Add reencryptEDEK interface for KMS

2016-11-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13827:
---
Attachment: HADOOP-13827.02.patch

Thanks for the review and JIRA management, Andrew.

Great idea on future-proofing! Patch 2 handles that and the rest, except:
- ACL. Sorry, my bad on wrongly using DECRYPT. But I feel REENCRYPT can share 
the same ACL as GENERATE, since they behave very similarly: both ask for a 
(re)generated EDEK. {{KMSACLs#Type}} and {{KMS#KMSOp}} are not a 1-1 mapping, 
so in this patch I used the generate ACL for the reencrypt op. Please let me 
know if you feel otherwise.
- The doc update will come later, after things stabilize a bit. I added a line 
in the doc so it's not forgotten in later revs.
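The op-to-ACL routing described above can be sketched roughly as follows. The enum members and method are hypothetical, for illustration only; the real {{KMSACLs#Type}} and {{KMS#KMSOp}} sets differ:

```java
// Hypothetical sketch of routing the reencrypt op through the generate ACL,
// as proposed above. These enums are NOT Hadoop's actual KMSACLs/KMSOp.
enum AclType { GENERATE_EEK, DECRYPT_EEK }
enum KmsOp { GENERATE_EEK, DECRYPT_EEK, REENCRYPT_EEK }

class AclRouter {
  // Ops and ACL types are not a 1-1 mapping: reencrypt asks for a
  // (re)generated EDEK, so it reuses the generate ACL.
  static AclType aclFor(KmsOp op) {
    switch (op) {
      case DECRYPT_EEK:
        return AclType.DECRYPT_EEK;
      default: // GENERATE_EEK and REENCRYPT_EEK share the generate ACL
        return AclType.GENERATE_EEK;
    }
  }
}
```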

> Add reencryptEDEK interface for KMS
> ---
>
> Key: HADOOP-13827
> URL: https://issues.apache.org/jira/browse/HADOOP-13827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13827.02.patch, HDFS-11159.01.patch
>
>
> This is the KMS part. Please refer to HDFS-10899 for the design doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704366#comment-15704366
 ] 

Hadoop QA commented on HADOOP-13837:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840813/HADOOP-13837.04.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 8c4a1069b04a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 67d9f28 |
| shellcheck | v0.4.5 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11157/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11157/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> HADOOP-13837.03.patch, HADOOP-13837.04.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to 
> complete, so checking for process existence immediately afterwards will 
> usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}
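Since SIGKILL is delivered asynchronously, the usual remedy is to poll for process exit rather than check once immediately after the kill. A minimal sketch of such a helper (hypothetical; not the actual hadoop-functions.sh change):

```shell
#!/usr/bin/env bash
# Hypothetical helper, not the real hadoop-functions.sh fix: after "kill -9",
# poll with "kill -0" (existence check) until the process disappears or we
# give up, instead of testing existence immediately.
wait_for_pid_exit() {
  local pid="$1"
  local retries="${2:-10}"         # poll up to N times, 1 second apart
  while [ "${retries}" -gt 0 ]; do
    if ! kill -0 "${pid}" >/dev/null 2>&1; then
      return 0                     # process is gone
    fi
    sleep 1
    retries=$((retries - 1))
  done
  return 1                         # still alive after all retries
}
```

The caller would report {{ERROR: Unable to kill ...}} only when this helper returns non-zero, avoiding the false positive described above.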






[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2016-11-28 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704351#comment-15704351
 ] 

Sahil Takiar commented on HADOOP-13600:
---

Will take a look at HADOOP-13823.

Addressed a few of the comments:

* A {{BlockingQueue}} now tracks the keys that need to be deleted.
* A separate thread takes from the queue until it has taken 
{{MAX_ENTRIES_TO_DELETE}} keys, and then issues the DELETE request.
* An {{AtomicBoolean}} is passed into the {{ProgressListener}} of the COPY 
request; if a COPY fails, the boolean is set to false, and no further COPY 
requests are issued.
* [~ste...@apache.org] I took a look at your PR. Is it necessary to have a 
thread pool where each thread calls {{Copy.waitForCopyResult()}}? Would it be 
simpler to create a separate {{TransferManager}} just for COPY requests?
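The queue-draining batcher described above can be sketched as follows. This is a hedged illustration of the scheme, not the actual S3A patch: the names and the list-of-batches stand-in for the real bulk DELETE call are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Sketch of the batching idea: a consumer takes keys from a BlockingQueue
// and emits one batch per MAX_ENTRIES_TO_DELETE keys, flushing any
// remainder at the end. Each emitted batch stands in for one DELETE request.
class BatchDeleter {
  static final int MAX_ENTRIES_TO_DELETE = 3; // illustrative value

  static List<List<String>> drainInBatches(BlockingQueue<String> queue, int totalKeys) {
    List<List<String>> issued = new ArrayList<>();
    List<String> batch = new ArrayList<>();
    for (int i = 0; i < totalKeys; i++) {
      String key;
      try {
        key = queue.take();                  // blocks until a key arrives
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // preserve the interrupt flag
        break;
      }
      batch.add(key);
      if (batch.size() == MAX_ENTRIES_TO_DELETE) {
        issued.add(new ArrayList<>(batch));  // stand-in for one bulk DELETE
        batch.clear();
      }
    }
    if (!batch.isEmpty()) {
      issued.add(batch);                     // flush the final partial batch
    }
    return issued;
  }
}
```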

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.






[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704286#comment-15704286
 ] 

John Zhuge edited comment on HADOOP-13597 at 11/29/16 6:12 AM:
---

Map {{sbin/kms.sh}} to {{bin/hadoop kms}}:
| kms.sh run | hadoop kms |
| kms.sh start | hadoop kms --daemon start |
| kms.sh stop | hadoop kms --daemon stop |
| kms.sh status | hadoop kms --daemon status |




was (Author: jzhuge):
Map {{sbin/kms.sh}} to {{bin/hadoop kms}}:
| kms.sh run | hadoop kms |
| kms.sh start | hadoop kms --daemon start |
| kms.sh stop | hadoop kms --daemon stop |
|  | hadoop kms --daemon status |



> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704129#comment-15704129
 ] 

John Zhuge edited comment on HADOOP-13597 at 11/29/16 6:11 AM:
---

Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework
- Obsolete HTTP admin port for the Tomcat Manager GUI which does not seem to 
work anyway
- Obsolete {{kms.sh version}} that prints Tomcat version

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001


was (Author: jzhuge):
Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework
- Obsolete HTTP admin port for the Tomcat Manager GUI which does not seem to 
work anyway

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: HADOOP-13837.04.patch

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> HADOOP-13837.03.patch, HADOOP-13837.04.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to 
> complete, so checking for process existence immediately afterwards will 
> usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704315#comment-15704315
 ] 

Hadoop QA commented on HADOOP-13597:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} root: The patch generated 0 new + 57 unchanged - 5 
fixed = 57 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
16s{color} | {color:green} The patch generated 0 new + 566 unchanged - 1 fixed 
= 566 total (was 567) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
24s{color} | {color:green} The patch generated 0 new + 344 unchanged - 2 fixed 
= 344 total (was 346) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13597 |
| JIRA Patch 

[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704309#comment-15704309
 ] 

Hadoop QA commented on HADOOP-13837:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 2 new + 117 unchanged - 0 fixed = 
119 total (was 117) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840806/HADOOP-13837.03.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 83ed810a29af 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 67d9f28 |
| shellcheck | v0.4.5 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11156/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11156/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11156/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> HADOOP-13837.03.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to 
> complete, so checking for process existence immediately afterwards will 
> usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704286#comment-15704286
 ] 

John Zhuge commented on HADOOP-13597:
-

Map {{sbin/kms.sh}} to {{bin/hadoop kms}}:
| kms.sh run | hadoop kms |
| kms.sh start | hadoop kms --daemon start |
| kms.sh stop | hadoop kms --daemon stop |
|  | hadoop kms --daemon status |



> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: HADOOP-13837.03.patch

Hello [~aw]

OK, I respect that. I just uploaded a simpler v3 patch. This patch simply calls 
hadoop_status_daemon to check the process, and adds a 3-second sleep after kill 
-9 when the process fails to stop gracefully. Let me know if this looks better.

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> HADOOP-13837.03.patch, check_proc.sh
>
>
> We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; the bug can be 
> reproduced easily with the attached script. kill -9 needs some time to 
> complete, so checking process existence immediately afterwards will usually 
> fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}
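
The failure mode quoted above, and the general shape of a fix (waiting a bounded time before re-checking, instead of checking immediately after kill -9), can be illustrated with a small helper. This is a sketch for illustration only, not the actual hadoop-functions.sh code; the function name and default timeout are invented:

```shell
# Illustrative helper: give kill -9 time to take effect by polling for
# process exit up to a timeout, instead of checking liveness immediately.
wait_for_exit() {
  local pid=$1
  local timeout=${2:-5}
  local waited=0
  while ps -p "${pid}" > /dev/null 2>&1; do
    if [ "${waited}" -ge "${timeout}" ]; then
      return 1          # still alive after the timeout: report failure
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0              # process is gone
}
```

In hadoop_stop_daemon terms, the "ERROR: Unable to kill" message would then only be emitted when such a wait times out.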






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704255#comment-15704255
 ] 

John Zhuge commented on HADOOP-13597:
-

[~aw] I was about to ping you on the rewrite, thanks for the quick feedback!

It seems more natural to move it into a sub-command of bin/hadoop.

It is also fine to move sbin/kms.sh to bin/kms, though I find it a little 
awkward for this kind of script, which has a single implicit subcommand, 
unlike the hadoop/hdfs/yarn scripts.







[jira] [Commented] (HADOOP-13706) Update jackson from 1.9.13 to 2.x in hadoop-common-project

2016-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704248#comment-15704248
 ] 

Hudson commented on HADOOP-13706:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10903 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10903/])
HADOOP-13706. Update jackson from 1.9.13 to 2.x in (aajisaka: rev 
67d9f2808efb34b9a7b0b824cb4033b95ad33474)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/Log4Json.java
* (edit) hadoop-common-project/hadoop-kms/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLog4Json.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHttpExceptionUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/jmx/JMXJsonServlet.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONWriter.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java


> Update jackson from 1.9.13 to 2.x in hadoop-common-project
> --
>
> Key: HADOOP-13706
> URL: https://issues.apache.org/jira/browse/HADOOP-13706
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13706.01.patch, HADOOP-13706.02.patch, 
> HADOOP-13706.03.patch, HADOOP-13706.04.patch
>
>







[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704241#comment-15704241
 ] 

Kai Zheng commented on HADOOP-13836:


It's good to see this. Some quick questions for now:
* What are the scenarios, requirements, and use cases you have in mind for 
this support (other than Kerberos)?
* Which interfaces will be covered: RPC/commands, REST, web, JDBC, etc.?
* How will authentication be handled? Still simple, or some mechanism over 
SSL/TLS?
* How would you manage credentials (X.509 certificates) for Hadoop services 
and possibly clients?
* Which exact SSL/TLS versions will be supported, and how will they be 
configured along with the cipher suite options?

We may need a design doc to document these. Thanks.

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using the Simple 
> Authentication & Security Layer (SASL), with Kerberos ticket based or 
> DIGEST-MD5 checksum based authentication protocols. This proposal is about 
> enhancing this cipher suite with SSL/TLS based encryption and authentication. 
> SSL/TLS is a proposed Internet Engineering Task Force (IETF) standard that 
> provides data security and integrity between two endpoints in a network. The 
> protocol has made its way into a number of applications such as web browsing, 
> email, internet faxing, messaging, VoIP, etc. Supporting this cipher suite at 
> the core of Hadoop would give good synergy with the applications on top and 
> also bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication:
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL based encryption and authentication (X.509 certificate)






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704230#comment-15704230
 ] 

Allen Wittenauer commented on HADOOP-13597:
---

Hooray! This is really great.  

bq. Rewrite kms.sh to use Hadoop shell script framework

I didn't have any specific feedback about this bit (quick pass; didn't see 
anything obvious).

One of the big goals I had for the rewrite was to get sbin out of the direct 
path for administrators.  With that in mind, I wonder if this is the time to 
fix kms to be less of an outlier.

One choice would be to integrate it into bin/hadoop (probably via a shell 
profile, a la the bits in hadoop-tools). Another, less drastic, option would 
be just to move sbin/kms.sh to bin/kms. In either case, sbin/kms.sh just 
becomes a wrapper.

Anyway, food for thought.







[jira] [Updated] (HADOOP-13706) Update jackson from 1.9.13 to 2.x in hadoop-common-project

2016-11-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13706:
---
Fix Version/s: 3.0.0-alpha2








[jira] [Commented] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line

2016-11-28 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704224#comment-15704224
 ] 

Akira Ajisaka commented on HADOOP-13332:


Fixed most of the code base. We need to fix the hadoop-maven-plugins module 
next.

> Remove jackson 1.9.13 and switch all jackson code to 2.x code line
> --
>
> Key: HADOOP-13332
> URL: https://issues.apache.org/jira/browse/HADOOP-13332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: PJ Fanning
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, 
> HADOOP-13332.02.patch, HADOOP-13332.03.patch
>
>
> The jackson 1.9 code line is no longer maintained, so we should upgrade.
> Most changes from jackson 1.9 to 2.x just involve changing the package name.






[jira] [Commented] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704220#comment-15704220
 ] 

Manoj Govindassamy commented on HADOOP-13840:
-

TestViewFsTrash passes locally for me. I suspect some file operations are 
failing in the test and may not be related to the patch. Will dig more.

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on seeing 
> unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate the space used across 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and to throw NotInMountPointException until 
> LinkMergeSlash is supported.






[jira] [Updated] (HADOOP-13706) Update jackson from 1.9.13 to 2.x in hadoop-common-project

2016-11-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13706:
---
  Resolution: Fixed
Hadoop Flags: Incompatible change
Release Note: Removed Jackson 1.9.13 dependency from hadoop-common module.
  Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~ste...@apache.org] for the review!








[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704129#comment-15704129
 ] 

John Zhuge edited comment on HADOOP-13597 at 11/29/16 4:33 AM:
---

Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework
- Obsolete the HTTP admin port for the Tomcat Manager GUI, which does not seem 
to work anyway

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001


was (Author: jzhuge):
Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Still need admin port?
- Still need /index.html?
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001







[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704129#comment-15704129
 ] 

John Zhuge edited comment on HADOOP-13597 at 11/29/16 4:26 AM:
---

Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Still need admin port?
- Still need /index.html?
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001


was (Author: jzhuge):
Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Still need admin port?
- Still need /index.html?
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001







[jira] [Updated] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13597:

Status: Patch Available  (was: In Progress)







[jira] [Updated] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13597:

Attachment: HADOOP-13597.001.patch

Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use Hadoop shell script framework

TESTING DONE
- All hadoop-kms unit tests which exercise the full KMS instead of MiniKMS
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status

TODO
- Still need admin port?
- Still need /index.html?
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- JMX not working, existing issue
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001







[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704093#comment-15704093
 ] 

Allen Wittenauer commented on HADOOP-13837:
---

bq. The hadoop_status_daemon_wrapper was going to wait at maximum 5 secs, if 
process doesn't get to the expected state (started or stopped), it will 
terminate and return an error code 1. Won't be an infinite loop.

Ok, I misread this.  Instead, we've got a potentially longer sleep since the 
execution time will take longer than HADOOP_STOP_TIMEOUT on busy systems...

bq. Just sleep has the problem that you don't know how long you want to sleep.

I don't see this as an issue in *actual* usage. Yes, it's a pain for 
contributors doing development, continually bouncing services on tiny dev 
boxes, but that's pretty much it. Given the impact, I'd much rather have 
simpler code than introduce a complex loop here. Users who want to do 
something more complex can take advantage of user functions.







[jira] [Comment Edited] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704069#comment-15704069
 ] 

Weiwei Yang edited comment on HADOOP-13837 at 11/29/16 3:43 AM:


Hello [~aw]

bq. The proposed patch assumes that the process will actually end

The hadoop_status_daemon_wrapper waits at most 5 seconds; if the process 
doesn't reach the expected state (started or stopped), it terminates and 
returns error code 1. It won't be an infinite loop.

A plain sleep has the problem that you don't know how long to sleep. In some 
cases the process doesn't stop, and we should wait until the timeout; in 
other cases the process stops in 1 or 2 seconds, so we only need to wait 
that long.


was (Author: cheersyang):
Hello [~aw]

bq. The proposed patch assumes that the process will actually end

The hadoop_status_daemon_wrapper was going to wait at maximum 5 secs, if 
process doesn't get to the expected state (started or stopped), it will 
terminate and return an error code 1. Won't be an infinite loop.

Just sleep has the problem that you don't know how long you want to sleep. Some 
cases, process doesn't stop, then we should wait until times out, some cases, 
process was stopped in 1 or 2 secs, so we just wait for 1 or 2 secs.







[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704069#comment-15704069
 ] 

Weiwei Yang commented on HADOOP-13837:
--

Hello [~aw]

bq. The proposed patch assumes that the process will actually end

The hadoop_status_daemon_wrapper waits at most 5 seconds; if the process 
doesn't reach the expected state (started or stopped), it terminates and 
returns error code 1. It won't be an infinite loop.

A plain sleep has the problem that you don't know how long to sleep. In some 
cases the process doesn't stop, and we should wait until the timeout; in 
other cases the process stops in 1 or 2 seconds, so we only need to wait 
that long.

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to take 
> effect, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704058#comment-15704058
 ] 

Allen Wittenauer commented on HADOOP-13837:
---

bq. Does that make sense?

No, it doesn't.

The proposed patch assumes that the process will actually end.  In practice, 
that doesn't always happen (e.g., a process stuck in I/O wait).  The end 
result is an infinite loop, which makes the problem even worse.

This is exactly why the code is written the way it is.  Just add a sleep and a 
call to hadoop_status_daemon; that way, a call to stop will always exit, 
successful or not.
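A minimal standalone sketch of this approach follows. The function names and the one-second grace period are illustrative stand-ins, not the actual hadoop-functions.sh code:

```shell
#!/usr/bin/env bash
# Sketch of the suggested approach: a bounded sleep followed by a single
# liveness check, so stop always terminates whether the kill worked or not.
# All names and the 1-second grace period are illustrative.
hadoop_error() { echo "$*" >&2; }

stop_daemon() {
  local pid=$1
  kill -9 "${pid}" >/dev/null 2>&1
  sleep 1                           # fixed grace period; no polling loop
  if kill -0 "${pid}" >/dev/null 2>&1; then
    hadoop_error "ERROR: Unable to kill ${pid}"
    return 1
  fi
  return 0
}
```

The trade-off is that the caller always returns within roughly the grace period, at the cost of sometimes sleeping longer than strictly necessary; that trade-off is the point of contention in this thread.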



> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to take 
> effect, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704025#comment-15704025
 ] 

Hadoop QA commented on HADOOP-13840:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-common-project_hadoop-common generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 19s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFsTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13840 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840779/HADOOP-13840.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 32ceb9a486f4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 47ca9e2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11152/artifact/patchprocess/diff-javadoc-javadoc-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11152/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11152/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11152/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: 

[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704015#comment-15704015
 ] 

Hadoop QA commented on HADOOP-13837:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 1 new + 117 unchanged - 0 fixed = 
118 total (was 117) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840782/HADOOP-13837.02.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 716daa4ce4e7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 47ca9e2 |
| shellcheck | v0.4.5 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11154/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11154/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11154/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to take 
> effect, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: HADOOP-13837.02.patch

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, HADOOP-13837.02.patch, 
> check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to take 
> effect, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703963#comment-15703963
 ] 

Hadoop QA commented on HADOOP-13837:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 7 new + 117 unchanged - 0 fixed = 
124 total (was 117) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840778/HADOOP-13837.01.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 529d2066dccb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 47ca9e2 |
| shellcheck | v0.4.5 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11153/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11153/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11153/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to take 
> effect, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703958#comment-15703958
 ] 

Hudson commented on HADOOP-13838:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10902 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10902/])
HADOOP-13838. KMSTokenRenewer should close providers (xiaochen via rkanter: 
rev 47ca9e26fba4a639e43bee5bfc001ffc4b42330d)
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java


> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13838.01.patch, HADOOP-13838.02.patch
>
>
> KMSClientProvider needs to be closed to free up its internal {{SSLFactory}}. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Status: Patch Available  (was: Open)

Submitting a patch to resolve this issue. The patch adds a function named 
*hadoop_status_daemon_wrapper*, which internally calls hadoop_status_daemon to 
check the status at a given interval, up to a certain timeout. With this 
patch, hadoop_stop_daemon waits only as long as necessary after killing the 
process.
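The idea can be sketched as a small polling helper. The function name, parameters, and defaults below are illustrative and may differ from the actual patch:

```shell
#!/usr/bin/env bash
# Sketch of the hadoop_status_daemon_wrapper idea: poll for process exit at a
# fixed interval, up to a timeout, instead of sleeping a fixed amount or
# checking exactly once. Names and defaults are illustrative.
wait_for_exit() {
  local pid=$1 timeout=${2:-5} interval=${3:-1} waited=0
  while kill -0 "${pid}" >/dev/null 2>&1; do
    if (( waited >= timeout )); then
      return 1                      # still alive when the timeout expired
    fi
    sleep "${interval}"
    (( waited += interval ))
  done
  return 0                          # process exited; we waited only as needed
}
```

A fast-exiting process is detected after at most one interval, while a stuck process still makes the caller return with an error once the timeout expires, avoiding both the spurious "Unable to kill" error and an unbounded wait.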

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> reproduced easily with the attached script. kill -9 needs some time to take 
> effect, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13597:

Hadoop Flags: Incompatible change

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we wouldn't have to change client code much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703923#comment-15703923
 ] 

John Zhuge commented on HADOOP-13597:
-

Agreed, it is an incompatible change.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we wouldn't have to change client code much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Updated] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13838:
---
Fix Version/s: (was: 2.9.0)
   2.8.0

Thank you [~rkanter]!
Cherry-picked to branch-2.8 as well, to match HADOOP-13155.

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13838.01.patch, HADOOP-13838.02.patch
>
>
> KMSClientProvider needs to be closed to free up its internal {{SSLFactory}}. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Updated] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13840:

Status: Patch Available  (was: Open)

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate the space used across 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and to throw NotInMountPointException until 
> LinkMergeSlash is supported.






[jira] [Comment Edited] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703919#comment-15703919
 ] 

Manoj Govindassamy edited comment on HADOOP-13840 at 11/29/16 2:23 AM:
---

Attached v0 patch to address the following:
1. Made {{ViewFileSystem#getUsed()}} throw an exception when the slash root 
is not supported or not configured.
2. Added a test to validate that getUsed() returns the same space usage 
numbers when accessed via ViewFileSystem and via the target FileSystem.

[~andrew.wang], can you please review the patch?


was (Author: manojg):
Attached v0 patch to address the following:
1. Made {{ViewFileSystem#getUsed()}} throw an exception when the slash root 
is not supported or not configured.
2. Added a test to validate that getUsed() returns the same space usage 
numbers when accessed via ViewFileSystem and via the target FileSystem.

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate the space used across 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and to throw NotInMountPointException until 
> LinkMergeSlash is supported.






[jira] [Updated] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13840:

Attachment: HADOOP-13840.01.patch

Attached v0 patch to address the following:
1. Made {{ViewFileSystem#getUsed()}} throw an exception when the slash root 
is not supported or not configured.
2. Added a test to validate that getUsed() returns the same space usage 
numbers when accessed via ViewFileSystem and via the target FileSystem.

> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13840.01.patch
>
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate the space used across 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and to throw NotInMountPointException until 
> LinkMergeSlash is supported.






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: HADOOP-13837.01.patch

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can 
> easily be reproduced with the attached script. kill -9 needs some time to 
> complete, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}
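The failure mode described above can be avoided by polling for process exit with a short bounded retry instead of a single {{ps}} check right after {{kill -9}}. A minimal sketch (the {{wait_for_exit}} helper name and the retry limits are illustrative, not part of any attached patch):

```shell
# Hypothetical helper: poll until the pid is gone, with a bounded number of
# retries, instead of checking once immediately after kill -9.
wait_for_exit() {
  pid=$1
  tries=0
  while ps -p "${pid}" > /dev/null 2>&1; do
    tries=$((tries + 1))
    # Give up after ~1 second (10 * 0.1s) and report failure.
    if [ "${tries}" -ge 10 ]; then
      return 1
    fi
    sleep 0.1
  done
  return 0
}

# Usage sketch inside a stop routine:
#   kill -9 "${pid}" >/dev/null 2>&1
#   if wait_for_exit "${pid}"; then
#     : # stopped
#   else
#     hadoop_error "ERROR: Unable to kill ${pid}"
#   fi
```

With such a helper, the "Unable to kill" error is only reported when the process genuinely survives the signal, not merely because the check raced the kernel.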






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: (was: HADOOP-13837.01.patch)

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can 
> easily be reproduced with the attached script. kill -9 needs some time to 
> complete, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-13838:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaochen].  Committed to trunk and branch-2!

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13838.01.patch, HADOOP-13838.02.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Created] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HADOOP-13840:
---

 Summary: Implement getUsed() for ViewFileSystem
 Key: HADOOP-13840
 URL: https://issues.apache.org/jira/browse/HADOOP-13840
 Project: Hadoop Common
  Issue Type: Task
  Components: viewfs
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file system 
used space is queried for slash root "/" paths, the default implementation 
tries to run {{getContentSummary}}, which crashes on seeing unexpected mount 
points under slash. 

ViewFileSystem#getUsed() is not expected to collate the space used from all 
the mount points configured under "/". The proposal is to avoid invoking 
FileSystem#getUsed() and to throw NotInMountPointException until LinkMergeSlash 
is supported.






[jira] [Updated] (HADOOP-13840) Implement getUsed() for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13840:

Description: 
ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file system 
used space is queried for slash root "/" paths, the default implementation 
tries to run {{getContentSummary}}, which crashes on seeing unexpected mount 
points under slash. 

ViewFileSystem#getUsed() is not expected to collate the space used from all 
the mount points configured under "/". The proposal is to avoid invoking 
FileSystem#getUsed() and to throw NotInMountPointException until LinkMergeSlash 
is supported.

  was:
ViewFileSystem doesn't override {{FileSystem#getUSed()}. So, when file system 
used space is queried for slash root "/" paths, the default implementations 
tries to run the {{getContentSummary}} which crashes on seeing unexpected mount 
points under slash. 

ViewFileSystem#getUsed() is not expected to collate all the space used from all 
the mount points configured under "/". Proposal is to avoid invoking 
FileSystem#getUsed() and throw NotInMountPointException until LinkMergeSlash is 
supported.


> Implement getUsed() for ViewFileSystem
> --
>
> Key: HADOOP-13840
> URL: https://issues.apache.org/jira/browse/HADOOP-13840
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>
> ViewFileSystem doesn't override {{FileSystem#getUsed()}}. So, when file 
> system used space is queried for slash root "/" paths, the default 
> implementation tries to run {{getContentSummary}}, which crashes on 
> seeing unexpected mount points under slash. 
> ViewFileSystem#getUsed() is not expected to collate the space used from 
> all the mount points configured under "/". The proposal is to avoid invoking 
> FileSystem#getUsed() and to throw NotInMountPointException until LinkMergeSlash 
> is supported.






[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703879#comment-15703879
 ] 

Robert Kanter commented on HADOOP-13838:


+1

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch, HADOOP-13838.02.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703876#comment-15703876
 ] 

Xiao Chen commented on HADOOP-13838:


The test failure looks unrelated, and the checkstyle warning is about the test 
method being too long, which was already around the limit before this change.

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch, HADOOP-13838.02.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Updated] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable

2016-11-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-1381:

Fix Version/s: (was: 3.0.0-beta1)
   3.0.0-alpha2

> The distance between sync blocks in SequenceFiles should be configurable
> 
>
> Key: HADOOP-1381
> URL: https://issues.apache.org/jira/browse/HADOOP-1381
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.0.0-alpha
>Reporter: Owen O'Malley
>Assignee: Harsh J
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
> HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
> HADOOP-1381.r5.diff
>
>
> Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much 
> better if this were configurable, with a much higher default (1 MB or so?).






[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703865#comment-15703865
 ] 

Hadoop QA commented on HADOOP-13838:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 56s{color} | {color:orange} root: The patch generated 1 new + 309 unchanged 
- 0 fixed = 310 total (was 309) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m 
49s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13838 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840746/HADOOP-13838.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fbc18b9b543d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2b1ff0 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11151/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11151/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-13833) TestSymlinkHdfsFileSystem#testCreateLinkUsingPartQualPath2 fails after HADOOP13605

2016-11-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13833:
-
Fix Version/s: 3.0.0-alpha2

> TestSymlinkHdfsFileSystem#testCreateLinkUsingPartQualPath2 fails after 
> HADOOP13605
> --
>
> Key: HADOOP-13833
> URL: https://issues.apache.org/jira/browse/HADOOP-13833
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13833.patch
>
>
> {noformat}
> org.junit.ComparisonFailure: expected:<...ileSystem for scheme[: null]> but 
> was:<...ileSystem for scheme[ "null"]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.fs.SymlinkBaseTest.testCreateLinkUsingPartQualPath2(SymlinkBaseTest.java:574)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
>  *REF:*  
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/235/testReport/junit/org.apache.hadoop.fs/TestSymlinkHdfsFileSystem/testCreateLinkUsingPartQualPath2/






[jira] [Updated] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-11-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13018:
-
Fix Version/s: 3.0.0-alpha2

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13018.01.patch, HADOOP-13018.02.patch, 
> HADOOP-13018.03.patch, HADOOP-13018.04.patch
>
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect that.






[jira] [Updated] (HADOOP-10776) Open up already widely-used APIs for delegation-token fetching & renewal to ecosystem projects

2016-11-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10776:
-
Fix Version/s: 3.0.0-alpha2

> Open up already widely-used APIs for delegation-token fetching & renewal to 
> ecosystem projects
> --
>
> Key: HADOOP-10776
> URL: https://issues.apache.org/jira/browse/HADOOP-10776
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Robert Joseph Evans
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-10776-20160822.txt, 
> HADOOP-10776-branch-2-002.patch, HADOOP-10776-branch-2-003.patch
>
>
> Storm would like to be able to fetch delegation tokens and forward them on to 
> running topologies so that they can access HDFS (STORM-346).  But to do so we 
> need to open up access to some of these APIs. 
> Most notably FileSystem.addDelegationTokens(), Token.renew, 
> Credentials.getAllTokens, and UserGroupInformation but there may be others.
> At a minimum this means adding Storm to the list of allowed API users, but 
> ideally making them public. Restricting access to such important functionality 
> to just MR really makes secure HDFS inaccessible to anything except MR, or 
> tools that reuse MR input formats.






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: HADOOP-13837.01.patch

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can 
> easily be reproduced with the attached script. kill -9 needs some time to 
> complete, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: (was: HADOOP-13837.01.patch)

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can 
> easily be reproduced with the attached script. kill -9 needs some time to 
> complete, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-11552) Allow handoff on the server side for RPC requests

2016-11-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11552:
-
Fix Version/s: 3.0.0-alpha2

> Allow handoff on the server side for RPC requests
> -
>
> Key: HADOOP-11552
> URL: https://issues.apache.org/jira/browse/HADOOP-11552
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-11552.05.patch, HADOOP-11552.06.patch, 
> HADOOP-11552.07.patch, HADOOP-11552.08.patch, HADOOP-11552.1.wip.txt, 
> HADOOP-11552.2.txt, HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt
>
>
> An RPC server handler thread is tied up for each incoming RPC request. This 
> isn't ideal, since this essentially implies that RPC operations should be 
> short lived, and most operations which could take time end up falling back to 
> a polling mechanism.
> Some use cases where this is useful:
> - YARN submitApplication - which currently submits, followed by a poll to 
> check if the application is accepted while the submit operation is written 
> out to storage. This can be collapsed into a single call.
> - YARN allocate - requests and allocations use the same protocol. New 
> allocations are received via polling.
> The allocate protocol could be split into a request/heartbeat along with a 
> 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
> on a much longer heartbeat interval. awaitResponse is always left active with 
> the RM - and returns the moment something is available.
> MapReduce/Tez task to AM communication is another example of this pattern.
> The same pattern of splitting calls can be used for other protocols as well. 
> This should serve to improve latency, as well as reduce network traffic since 
> the keep-alive heartbeat can be sent less frequently.
> I believe there are some cases in HDFS as well, where the DN is told to 
> perform some operations when it heartbeats into the NN.






[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2016-11-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13055:

Summary: Implement linkMergeSlash for ViewFileSystem  (was: Implement 
linkMergeSlash for ViewFs)

> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: HADOOP-13837.01.patch

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HADOOP-13837.01.patch, check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can 
> easily be reproduced with the attached script. kill -9 needs some time to 
> complete, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Comment Edited] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703836#comment-15703836
 ] 

Weiwei Yang edited comment on HADOOP-13837 at 11/29/16 1:45 AM:


Hello [~aw]

Thanks for looking at this one. 
hadoop_status_daemon immediately reports whether the process exists based on the 
pid file, so it cannot resolve the problem here. I am proposing to remove the 
fixed-time sleep after the kill:

{code}
function hadoop_stop_daemon {
  ...
  kill "${pid}" >/dev/null 2>&1
  ## sleep for 5s after kill
  sleep "${HADOOP_STOP_TIMEOUT}"
}
{code}
so we don't always have to wait the full 5s (default) for the process to exit,

and to add a check after kill -9:

{code}
function hadoop_stop_daemon {
  kill -9 "${pid}" >/dev/null 2>&1
  ...
  ## check result .. this needs to wait a moment
  ...
}
{code}

because we need to wait a bit until kill -9 takes effect. I plan to add a check, 
something like {{hadoop_status_daemon_wrapper}}. Does that make sense?


was (Author: cheersyang):
Hello [~aw]

Thanks for looking at this one. 
hadoop_status_daemon immediately returns if process exists or not based on the 
pid file, it cannot resolve the problem here. I am proposing to remove the 
fixed time sleep after kill

{code}
function hadoop_stop_daemon {
  ...
  kill "${pid}" >/dev/null 2>&1
  ## sleep for 5s after kill
  sleep "${HADOOP_STOP_TIMEOUT}"
}
{code}
you don't need to always wait for 5s (default) until that happens

and add a check after kill -9

{code}
function hadoop_stop_daemon {
  kill -9 "${pid}" >/dev/null 2>&1
  ...
  ## check result .. this needs to wait a moment
{code}

you need to wait a bit until kill -9 works. Plan to add a check something like 
{{hadoop_status_daemon_wrapper}}. Does that make sense?

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can 
> easily be reproduced with the attached script. kill -9 needs some time to 
> complete, so checking for the process immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703836#comment-15703836
 ] 

Weiwei Yang commented on HADOOP-13837:
--

Hello [~aw]

Thanks for looking at this one. 
hadoop_status_daemon returns immediately, reporting whether the process exists 
based on the pid file, so it cannot resolve the problem here. I am proposing to 
remove the fixed-time sleep after kill:

{code}
function hadoop_stop_daemon {
  ...
  kill "${pid}" >/dev/null 2>&1
  ## sleep for 5s after kill
  sleep "${HADOOP_STOP_TIMEOUT}"
}
{code}
This way we don't always wait the full 5 seconds (the default) when the process has already exited,

and to add a check after kill -9:

{code}
function hadoop_stop_daemon {
  kill -9 "${pid}" >/dev/null 2>&1
  ...
  ## check result .. this needs to wait a moment
{code}

We need to wait a moment for kill -9 to take effect. I plan to add a check, 
something like {{hadoop_status_daemon_wrapper}}. Does that make sense?
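
One way such a check could look, sketched as a generic bash poll loop (the function name, one-second granularity, and defaults are illustrative only, not the actual patch):

```shell
# Sketch: poll until the process is gone instead of sleeping a fixed interval.
# Returns 0 once the pid has exited, 1 if it is still alive after "timeout" seconds.
wait_for_process_exit() {
  local pid=$1
  local timeout=${2:-5}        # default mirrors HADOOP_STOP_TIMEOUT's default of 5s
  local waited=0
  while kill -0 "${pid}" >/dev/null 2>&1; do
    if [[ "${waited}" -ge "${timeout}" ]]; then
      return 1                 # still running; the caller can escalate (e.g. kill -9)
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0                     # process has exited
}
```

Calling something like this after {{kill}} (and again after {{kill -9}}) would both avoid the unconditional sleep and avoid checking process existence before {{kill -9}} has finished.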

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; the bug can be 
> reproduced easily with the attached script. kill -9 needs some time to 
> complete, so checking for process existence immediately afterwards will 
> usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13823) s3a rename: fail if dest file exists

2016-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703753#comment-15703753
 ] 

Hudson commented on HADOOP-13823:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10901 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10901/])
HADOOP-13823. s3a rename: fail if dest file exists. Contributed by Steve 
(liuml07: rev d60a60be8aa450c44d3be69d26c88025e253ac0c)
* (edit) hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/RenameFailedException.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java


> s3a rename: fail if dest file exists
> 
>
> Key: HADOOP-13823
> URL: https://issues.apache.org/jira/browse/HADOOP-13823
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13813-branch-2-001.patch, 
> HADOOP-13823-branch-2-002.patch
>
>
> HIVE-15199 shows that s3a allows rename onto an existing file, which is 
> something HDFS, Azure and s3n do not permit (though file:// does). This 
> breaks bits of Hive, is inconsistent with HDFS, and is a regression 
> compared to s3n semantics.
> I propose rejecting a file -> file rename if the destination exists 
> (easy), and changing the s3a.xml contract file to declare the behavior 
> change; the latter is needed for 
> {{AbstractContractRenameTest.testRenameFileOverExistingFile}} to handle the 
> changed semantics.






[jira] [Updated] (HADOOP-13823) s3a rename: fail if dest file exists

2016-11-28 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13823:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

+1

Committed to {{trunk}} through {{branch-2.8}} branches. Thanks for your 
contribution, [~ste...@apache.org].

> s3a rename: fail if dest file exists
> 
>
> Key: HADOOP-13823
> URL: https://issues.apache.org/jira/browse/HADOOP-13823
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13813-branch-2-001.patch, 
> HADOOP-13823-branch-2-002.patch
>
>
> HIVE-15199 shows that s3a allows rename onto an existing file, which is 
> something HDFS, Azure and s3n do not permit (though file:// does). This 
> breaks bits of Hive, is inconsistent with HDFS, and is a regression 
> compared to s3n semantics.
> I propose rejecting a file -> file rename if the destination exists 
> (easy), and changing the s3a.xml contract file to declare the behavior 
> change; the latter is needed for 
> {{AbstractContractRenameTest.testRenameFileOverExistingFile}} to handle the 
> changed semantics.






[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2016-11-28 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703679#comment-15703679
 ] 

Sahil Takiar commented on HADOOP-13826:
---

Interesting, I did not know that about multi-part uploads. In that case, we may 
want to consider separating the configuration of multi-part uploads and 
multi-part copies. Right now S3AFileSystem uses the same threshold and part 
size for both copies and uploads; they are controlled by 
fs.s3a.multipart.size and fs.s3a.multipart.threshold.
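
If they were split, the configuration might look like the following sketch. Only the first two keys are real s3a options; the copy-specific keys are purely hypothetical names invented for illustration, and all values are placeholder byte counts:

```xml
<!-- Existing keys: today these govern both multi-part uploads and copies -->
<property>
  <name>fs.s3a.multipart.size</name>
  <value>104857600</value>
</property>
<property>
  <name>fs.s3a.multipart.threshold</name>
  <value>2147483647</value>
</property>

<!-- Hypothetical copy-only overrides (illustrative names, not actual Hadoop keys) -->
<property>
  <name>fs.s3a.multipart.copy.size</name>
  <value>536870912</value>
</property>
<property>
  <name>fs.s3a.multipart.copy.threshold</name>
  <value>1073741824</value>
</property>
```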

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}






[jira] [Updated] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-11-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-13742:
---
Fix Version/s: 2.7.4

Thanks Brahma for the work! I think this is a good improvement for branch-2.7 
as well. I just did the backport.

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13742-002.patch, HADOOP-13742-003.patch, 
> HADOOP-13742-004.patch, HADOOP-13742-005.patch, HADOOP-13742-006.patch, 
> HADOOP-13742.patch
>
>
> To track user-level connections (how many connections each user has) in a 
> busy cluster with many connections to the server.






[jira] [Created] (HADOOP-13839) Fix outdated tracing documentation

2016-11-28 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-13839:
-

 Summary: Fix outdated tracing documentation
 Key: HADOOP-13839
 URL: https://issues.apache.org/jira/browse/HADOOP-13839
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, tracing
Affects Versions: 2.7.3
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


The sample code in the tracing doc is based on an older version of 
SpanReceiverHost. The doc in branch-2 and trunk seems to be fine.






[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703471#comment-15703471
 ] 

Allen Wittenauer commented on HADOOP-13837:
---

bq. To fix this issue, propose to wrap up a function to check process liveness 
by pid

Unnecessary. The pid came from a file, so we just need to call 
hadoop_status_daemon and check its result.

See also HADOOP-13632.

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; the bug can be 
> reproduced easily with the attached script. kill -9 needs some time to 
> complete, so checking for process existence immediately afterwards will 
> usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-11-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703459#comment-15703459
 ] 

Allen Wittenauer commented on HADOOP-13632:
---

At the tail end of hadoop_start_secure_daemon_wrapper:

{code}
  hadoop_status_daemon "${pidfile}"
  return $?
{code}

pidfile is undefined.

return $? is superfluous. (I'm surprised that shellcheck doesn't have a 
warning for that.)
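
A tiny generic bash illustration (not Hadoop code) of why the trailing return $? adds nothing:

```shell
# A bash function's exit status is already the status of its last command,
# so "return $?" immediately after that command changes nothing.
with_return()    { false; return $?; }
without_return() { false; }

# Both functions exit with status 1 (the status of "false").
```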


> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13632.001.patch, HADOOP-13632.002.patch
>
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly if, instead of this renice error, we said that 
> the process couldn't be started.






[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703449#comment-15703449
 ] 

Hadoop QA commented on HADOOP-13578:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
29s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 46s{color} | {color:orange} root: The patch generated 73 new + 159 unchanged 
- 1 fixed = 232 total (was 160) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2383 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 59s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
44s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13578 |
| JIRA Patch URL | 

[jira] [Updated] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13838:
---
Attachment: HADOOP-13838.02.patch

Thanks a lot for the reviews, Robert and Andrew.

Patch adds the unit test for KMS, and also closes the instance in FSN.

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch, HADOOP-13838.02.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703314#comment-15703314
 ] 

Hadoop QA commented on HADOOP-13838:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13838 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840723/HADOOP-13838.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c4bbf32e888a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2b1ff0 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11150/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11150/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch
>
>
> KMSClientProvider need to 

[jira] [Comment Edited] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-11-28 Thread Luke Miner (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703276#comment-15703276
 ] 

Luke Miner edited comment on HADOOP-13811 at 11/28/16 10:07 PM:


That worked great. It is running!

However, there's a new error, a NumberFormatException that I was not getting 
before. It fails almost immediately, right when Spark is reading in a 20-line 
text file that doesn't even contain the string the error message is 
complaining about:

{code}
16/11/28 21:56:47 INFO SparkContext: Created broadcast 0 from textFile at 
json2pq.scala:130
Exception in thread "main" java.lang.NumberFormatException: For input string: 
"100M"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1320)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2904)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2941)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2923)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at 
org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:265)
at 
org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:236)
at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:322)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1957)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:928)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.collect(RDD.scala:927)
at Json2Pq$.main(json2pq.scala:130)
at Json2Pq.main(json2pq.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}


was (Author: lminer):
That worked great. It is running!

However, there's a new error, a NumberFormatException that I was not getting 
before:

{code}
Exception in thread "main" java.lang.NumberFormatException: For input string: 
"100M"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1320)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2904)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2941)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2923)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at 

[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-11-28 Thread Luke Miner (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703276#comment-15703276
 ] 

Luke Miner commented on HADOOP-13811:
-

That worked great. It is running!

However, there's a new error, a NumberFormatException that I was not getting 
before:

{code}
Exception in thread "main" java.lang.NumberFormatException: For input string: 
"100M"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1320)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2904)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:101)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2941)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2923)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at 
org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:265)
at 
org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:236)
at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:322)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1957)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:928)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.collect(RDD.scala:927)
at Json2Pq$.main(json2pq.scala:130)
at Json2Pq.main(json2pq.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}

> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703207#comment-15703207
 ] 

Robert Kanter commented on HADOOP-13838:


The fix seems good to me.  Can you add a unit test?

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Commented] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703208#comment-15703208
 ] 

Andrew Wang commented on HADOOP-13838:
--

Good catch here Xiao. Do you want to also close the key provider in 
FSNamesystem? I think this only really affects unit tests, but would be good 
code hygiene.

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Updated] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13838:
---
Description: 
KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
See HADOOP-11368 for details.

Credit to [~rkanter] for finding this.

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch
>
>
> KMSClientProvider needs to be closed to free up the {{SSLFactory}} internally. 
> See HADOOP-11368 for details.
> Credit to [~rkanter] for finding this.






[jira] [Updated] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13838:
---
Attachment: HADOOP-13838.01.patch

This only applies to the {{KMSTokenRenewer}} added by HADOOP-13155. Fixing in 
patch 1.

HDFS clients cache the provider in ClientContext, which closes the provider on 
cache removal.
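The shape of the fix can be sketched with a try-with-resources pattern (the names below are hypothetical, not the actual KMSTokenRenewer code): the provider is used for the renew call and closed unconditionally afterwards, so the {{SSLFactory}} it holds is released even if the call throws.

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch of the fix's shape: use the provider for the renew
// call and always close it, releasing the SSLFactory it holds internally.
public class RenewerSketch {
    interface KeyProvider extends Closeable {
        long renew(String token) throws IOException;
    }

    static long renewToken(KeyProvider provider, String token) throws IOException {
        // try-with-resources closes the provider even if renew() throws
        try (KeyProvider p = provider) {
            return p.renew(token);
        }
    }
}
```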

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch
>
>







[jira] [Updated] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13838:
---
Status: Patch Available  (was: Open)

> KMSTokenRenewer should close providers
> --
>
> Key: HADOOP-13838
> URL: https://issues.apache.org/jira/browse/HADOOP-13838
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
> Attachments: HADOOP-13838.01.patch
>
>







[jira] [Created] (HADOOP-13838) KMSTokenRenewer should close providers

2016-11-28 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13838:
--

 Summary: KMSTokenRenewer should close providers
 Key: HADOOP-13838
 URL: https://issues.apache.org/jira/browse/HADOOP-13838
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.8.0
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Critical









[jira] [Updated] (HADOOP-13578) Add Codec for ZStandard Compression

2016-11-28 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HADOOP-13578:

Attachment: HADOOP-13578.v4.patch

[~jlowe] hope you had a good Thanksgiving. Attached is the latest patch; 
thanks for taking the time to review.

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch, 
> HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, HADOOP-13578.v4.patch
>
>
> ZStandard (https://github.com/facebook/zstd) has been used in production at 
> Facebook for 6 months now, and v1.0 was recently released. Create a codec for 
> this library.






[jira] [Commented] (HADOOP-13828) Implement getFileChecksum(path, length) for ViewFileSystem

2016-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15703021#comment-15703021
 ] 

Hudson commented on HADOOP-13828:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10899 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10899/])
HADOOP-13828. Implement getFileChecksum(path, length) for (wang: rev 
a2b1ff0257bde26d1f64454e97bc1225294a30b9)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java


> Implement getFileChecksum(path, length) for ViewFileSystem
> --
>
> Key: HADOOP-13828
> URL: https://issues.apache.org/jira/browse/HADOOP-13828
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13828.01.patch
>
>
> {{ViewFileSystem}} inherits the default implementation of 
> {{getFileChecksum(final Path f, final long length)}} from FileSystem which 
> returns null. ViewFileSystem must override this to resolve the target 
> filesystem and file path from configured mount points and invoke the right 
> checksum method on the target filesystem.
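The resolution the description asks for can be illustrated with a small, self-contained sketch (hypothetical classes, not ViewFileSystem's actual implementation): pick the longest matching mount prefix, then delegate the call to the resolved target with the remainder of the path.

```java
import java.util.TreeMap;

// Hypothetical, self-contained sketch of the mount-resolution pattern:
// find the longest matching mount-point prefix, then hand the remaining
// path to the resolved target filesystem.
public class MountResolver {
    private final TreeMap<String, String> mounts = new TreeMap<>();

    public void addMount(String mountPoint, String targetScheme) {
        mounts.put(mountPoint, targetScheme);
    }

    /** Returns "targetScheme:remainder" for the longest-prefix mount match. */
    public String resolve(String path) {
        // descending lexical order visits nested (longer) mounts first
        for (String mount : mounts.descendingKeySet()) {
            if (path.startsWith(mount)) {
                return mounts.get(mount) + ":" + path.substring(mount.length());
            }
        }
        throw new IllegalArgumentException("no mount for " + path);
    }
}
```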






[jira] [Updated] (HADOOP-13832) Implement a file-based GroupMappingServiceProvider

2016-11-28 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HADOOP-13832:
---
Attachment: HADOOP-13832.branch-2.7.002.patch

Attaching an updated patch against branch-2.7, with a couple of changes:

* if any line in the mapping file is malformed, the entire file load fails and 
the last successfully loaded file is retained
* fixes javadoc on FileBasedGroupsMapping

> Implement a file-based GroupMappingServiceProvider
> --
>
> Key: HADOOP-13832
> URL: https://issues.apache.org/jira/browse/HADOOP-13832
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Gary Helmling
> Attachments: HADOOP-13832.branch-2.7.001.patch, 
> HADOOP-13832.branch-2.7.002.patch
>
>
> It can be useful to decouple Hadoop group membership resolution from OS-level 
> group memberships, without having to depend on an external system like LDAP.
> I'd like to propose a file-based group mapping implementation, which will 
> read group membership information from a configured file path on the local 
> filesystem, reloading periodically for changes.  For simplicity, it will use 
> the same file format as /etc/group.
> I'm aware of the option for static mappings in core-site.xml, but maintaining 
> these in an XML file is cumbersome, and they are not reloadable.  Having a 
> built-in file-based implementation will also make this more usable in other 
> systems relying on Hadoop security tooling, such as HBase.
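A minimal sketch of parsing /etc/group-format lines ("name:passwd:gid:member1,member2") into a user-to-groups map (hypothetical class, not the attached patch's code). It also fails the whole load on a malformed line, matching the behavior described in the updated patch above:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: parse /etc/group-style lines into user -> groups.
// A malformed line aborts the whole load, so the caller can keep the
// previously loaded mapping.
public class GroupFileParser {
    public static Map<String, List<String>> parse(List<String> lines) {
        Map<String, List<String>> userGroups = new HashMap<>();
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) continue;
            String[] fields = trimmed.split(":", -1);   // keep empty trailing fields
            if (fields.length != 4) {
                throw new IllegalArgumentException("malformed line: " + line);
            }
            String group = fields[0];
            if (fields[3].isEmpty()) continue;          // group with no members
            for (String user : fields[3].split(",")) {
                userGroups.computeIfAbsent(user, k -> new ArrayList<>()).add(group);
            }
        }
        return userGroups;
    }
}
```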






[jira] [Updated] (HADOOP-13828) Implement getFileChecksum(path, length) for ViewFileSystem

2016-11-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13828:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Thanks again for the patch Manoj! I had to do a small rebase for the test since 
the df patch was committed after the Jenkins run. Committed to trunk.

> Implement getFileChecksum(path, length) for ViewFileSystem
> --
>
> Key: HADOOP-13828
> URL: https://issues.apache.org/jira/browse/HADOOP-13828
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13828.01.patch
>
>
> {{ViewFileSystem}} inherits the default implementation of 
> {{getFileChecksum(final Path f, final long length)}} from FileSystem which 
> returns null. ViewFileSystem must override this to resolve the target 
> filesystem and file path from configured mount points and invoke the right 
> checksum method on the target filesystem.






[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-11-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702893#comment-15702893
 ] 

Andrew Wang commented on HADOOP-13597:
--

Hi John, could you respond to my earlier question about whether this change is 
incompatible? If so, it is a blocker for 3.0.0-beta1, and we should mark it 
as such.

bq. since MiniKMS is also based on embedded Jetty, it is possible to replace it 
with full KMS once KMS is switched to embedded Jetty?

Seems fine to me.

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Comment Edited] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread Antonios Kouzoupis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702593#comment-15702593
 ] 

Antonios Kouzoupis edited comment on HADOOP-13836 at 11/28/16 5:41 PM:
---

[~kartheek] I took a quick look at your patch. I think it's more reasonable to 
use the "hadoop.rpc.socket.factory.class.default" configuration key to load the 
desired socket factory. At the moment the StandardSocketFactory is being used, 
but you may provide your own factory with SSL/TLS support. Also, it might be 
better to reuse org.apache.hadoop.security.ssl.SSLFactory for the SSLEngine 
creation.
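The suggestion can be sketched with a stdlib-only socket factory that delegates to the JDK's default {{SSLSocketFactory}}. The class name here is hypothetical; a real implementation would be loaded via the configuration key above and would build its SSLContext from Hadoop's own SSLFactory:

```java
import javax.net.SocketFactory;
import javax.net.ssl.SSLSocketFactory;
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;

// Hypothetical sketch: an SSL-capable SocketFactory in the spirit of
// plugging a custom factory in via hadoop.rpc.socket.factory.class.default.
// Every connection it creates is a TLS socket from the JDK default factory.
public class SslClientSocketFactory extends SocketFactory {
    private final SSLSocketFactory delegate =
            (SSLSocketFactory) SSLSocketFactory.getDefault();

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }

    @Override
    public Socket createSocket(String host, int port,
            InetAddress localHost, int localPort) throws IOException {
        return delegate.createSocket(host, port, localHost, localPort);
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }

    @Override
    public Socket createSocket(InetAddress address, int port,
            InetAddress localAddress, int localPort) throws IOException {
        return delegate.createSocket(address, port, localAddress, localPort);
    }
}
```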


was (Author: antkou):
[~kartheek] I took a quick look at your patch. I think it's more reasonable to 
use the "hadoop.rpc.socket.factory.class.default" configuration key to load the 
desired socket factory. At the moment the StandardSocketFactory is being used, 
but you may provide your own factory with SSL/TLS support.

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using Simple Authentication & 
> Security Layer (SASL), with the Kerberos ticket based authentication or 
> Digest-md5 checksum based authentication protocols. This proposal is about 
> enhancing this cipher suite with SSL/TLS based encryption and authentication. 
> SSL/TLS is a proposed Internet Engineering Task Force (IETF) standard, that 
> provides data security and integrity across two different end points in a 
> network. This protocol has made its way to a number of applications such as 
> web browsing, email, internet faxing, messaging, VOIP etc. And supporting 
> this cipher suite at the core of Hadoop would give a good synergy with the 
> applications on top and also bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication
> 1.Plain 
> 2. SASL encryption with an underlying authentication
> 3. SSL based encryption and authentication (x509 certificate)






[jira] [Comment Edited] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702543#comment-15702543
 ] 

Weiwei Yang edited comment on HADOOP-13837 at 11/28/16 5:40 PM:


You can use [^check_proc.sh] to test, run

{code}
bash check_proc.sh 
{code}

it gives output like 

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}

To fix this issue, I propose wrapping this in a function that checks process 
liveness by pid, polling at a fixed interval with a timeout. This will fix both 
issues:
# it always waits for $HADOOP_STOP_TIMEOUT even when it doesn't need to
# it doesn't wait long enough for the process to be killed.
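The proposed helper might look like the following (a hypothetical sketch, not the attached patch): poll the pid at a fixed interval until it disappears or the timeout is reached.

```shell
# Hypothetical sketch of the proposed wait-for-process helper.
# Returns 0 once the pid is gone, 1 if it is still alive after the timeout.
hadoop_wait_for_proc_end() {
  local pid=$1
  local timeout=$2
  local waited=0
  while kill -0 "${pid}" >/dev/null 2>&1; do
    if [ "${waited}" -ge "${timeout}" ]; then
      return 1   # process still alive after ${timeout} seconds
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0       # process is gone; safe to report success
}
```

With this, hadoop_stop_daemon can report "Unable to kill" only after the grace period actually elapses, instead of checking the instant after kill -9 is issued.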





was (Author: cheersyang):
You can use [^check_proc.sh] to test, run

{code}
bash check_proc.sh 
{code}

it gives output like 

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}

To fix this issue, propose to wrap up a function to check process liveness by 
pid, it checks the process by fixed interval with timeout. This will fix the 
issue it always waits for $HADOOP_STOP_TIMEOUT even it doesn't necessarily to, 
and the issue it doesn't wait enough time until the process gets killed.




> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> easily reproduced by the script. kill -9 needs some time to complete, so 
> checking for process existence immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Comment Edited] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702543#comment-15702543
 ] 

Weiwei Yang edited comment on HADOOP-13837 at 11/28/16 5:39 PM:


You can use [^check_proc.sh] to test, run

{code}
bash check_proc.sh 
{code}

it gives output like 

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}

To fix this issue, I propose wrapping this in a function that checks process 
liveness by pid, polling at a fixed interval with a timeout. This will fix the 
issue that it always waits for ${HADOOP_STOP_TIMEOUT} even when it doesn't need 
to, and the issue that it doesn't wait long enough for the process to be killed.





was (Author: cheersyang):
You can use [^check_proc.sh] to test, run

{code}
bash check_proc.sh 
{code}

it gives output like 

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> easily reproduced by the script. kill -9 needs some time to complete, so 
> checking for process existence immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Comment Edited] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702543#comment-15702543
 ] 

Weiwei Yang edited comment on HADOOP-13837 at 11/28/16 5:39 PM:


You can use [^check_proc.sh] to test, run

{code}
bash check_proc.sh 
{code}

it gives output like 

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}

To fix this issue, I propose wrapping this in a function that checks process 
liveness by pid, polling at a fixed interval with a timeout. This will fix the 
issue that it always waits for $HADOOP_STOP_TIMEOUT even when it doesn't need 
to, and the issue that it doesn't wait long enough for the process to be killed.





was (Author: cheersyang):
You can use [^check_proc.sh] to test, run

{code}
bash check_proc.sh 
{code}

it gives output like 

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}

To fix this issue, propose to wrap up a function to check process liveness by 
pid, it checks the process by fixed interval with timeout. This will fix the 
issue it always waits for ${HADOOP_STOP_TIMEOUT} even it doesn't necessarily 
to, and the issue it doesn't wait enough time until the process gets killed.




> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> Always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}, see following output of stop-yarn.sh
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon doesn't check process liveness correctly; this bug can be 
> easily reproduced by the script. kill -9 needs some time to complete, so 
> checking for process existence immediately afterwards will usually fail.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread Antonios Kouzoupis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702593#comment-15702593
 ] 

Antonios Kouzoupis commented on HADOOP-13836:
-

[~kartheek] I took a quick look at your patch. I think it's more reasonable to 
use the "hadoop.rpc.socket.factory.class.default" configuration key to load the 
desired socket factory. At the moment the StandardSocketFactory is being used, 
but you may provide your own factory with SSL/TLS support.

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using Simple Authentication & 
> Security Layer (SASL), with the Kerberos ticket based authentication or 
> Digest-md5 checksum based authentication protocols. This proposal is about 
> enhancing this cipher suite with SSL/TLS based encryption and authentication. 
> SSL/TLS is a proposed Internet Engineering Task Force (IETF) standard, that 
> provides data security and integrity across two different end points in a 
> network. This protocol has made its way to a number of applications such as 
> web browsing, email, internet faxing, messaging, VOIP etc. And supporting 
> this cipher suite at the core of Hadoop would give a good synergy with the 
> applications on top and also bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication
> 1.Plain 
> 2. SASL encryption with an underlying authentication
> 3. SSL based encryption and authentication (x509 certificate)






[jira] [Commented] (HADOOP-13506) Redundant groupid warning in child projects

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702548#comment-15702548
 ] 

Hadoop QA commented on HADOOP-13506:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 44m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 18m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 24m 
18s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 42m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 17m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  1m 
19s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 21m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-annotations in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-auth-examples in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
3s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
54s{color} | {color:green} hadoop-common-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 31s{color} 
| 

[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon of hadoop-functions.sh

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Summary: Process check bug in hadoop_stop_daemon of hadoop-functions.sh  
(was: Process check bug in hadoop_stop_daemon)

> Process check bug in hadoop_stop_daemon of hadoop-functions.sh
> --
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon does not check process liveness correctly, and the bug is 
> easy to reproduce with the attached script: {{kill -9}} takes some time to 
> complete, so checking for the process immediately afterwards will usually 
> still find it.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}
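The usual remedy for the window described above is to poll for process exit for a bounded time after sending SIGKILL, instead of testing `ps -p` on the very next line. A minimal sketch of such a helper (hypothetical name, retry count, and interval; this is not the actual Hadoop fix):

```shell
# Hypothetical helper, not the actual Hadoop fix: after kill -9, poll
# for process exit for up to ~1 second instead of checking immediately.
wait_for_exit() {
  pid=$1
  tries=0
  # SIGKILL delivery is asynchronous; give the process time to disappear.
  while ps -p "${pid}" > /dev/null 2>&1 && [ "${tries}" -lt 10 ]; do
    sleep 0.1
    tries=$((tries + 1))
  done
  if ps -p "${pid}" > /dev/null 2>&1; then
    echo "ERROR: Unable to kill ${pid}" >&2
    return 1
  fi
  return 0
}
```

hadoop_stop_daemon could call a helper like this after `kill -9 "${pid}"` in place of the immediate `ps -p` check quoted above.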



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13837) Process check bug in hadoop_stop_daemon

2016-11-28 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702543#comment-15702543
 ] 

Weiwei Yang commented on HADOOP-13837:
--

You can use [^check_proc.sh] to test; run

{code}
bash check_proc.sh
{code}

It gives output like:

{code}
check proc
process 20377 is still running
check proc
process is killed
{code}
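The attachment itself is not inlined in this thread. A rough reconstruction of the idea (the real [^check_proc.sh] may differ): kill a background process with -9, check immediately, then check again once the process has actually been reaped:

```shell
# Assumed reconstruction of the idea behind check_proc.sh; the real
# attachment may differ. Shows that a process is still visible to ps
# immediately after kill -9 returns.
check_proc() {
  sleep 60 &
  pid=$!
  kill -9 "${pid}" >/dev/null 2>&1
  echo "check proc"
  if ps -p "${pid}" > /dev/null 2>&1; then
    echo "process ${pid} is still running"
  fi
  # Reap the child; in the real stop script the daemon is not a child,
  # so simply waiting a moment before re-checking is enough.
  wait "${pid}" 2>/dev/null || true
  echo "check proc"
  if ! ps -p "${pid}" > /dev/null 2>&1; then
    echo "process is killed"
  fi
}
check_proc
```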

> Process check bug in hadoop_stop_daemon
> ---
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon does not check process liveness correctly, and the bug is 
> easy to reproduce with the attached script: {{kill -9}} takes some time to 
> complete, so checking for the process immediately afterwards will usually 
> still find it.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Description: 
We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill -9}}; 
see the following output of stop-yarn.sh:

{code}
: WARNING: nodemanager did not stop gracefully after 5 seconds: Trying 
to kill with kill -9
: ERROR: Unable to kill 18097
{code}

hadoop_stop_daemon does not check process liveness correctly, and the bug is 
easy to reproduce with the attached script: {{kill -9}} takes some time to 
complete, so checking for the process immediately afterwards will usually 
still find it.

{code}
function hadoop_stop_daemon
{
...
  kill -9 "${pid}" >/dev/null 2>&1
fi
if ps -p "${pid}" > /dev/null 2>&1; then
  hadoop_error "ERROR: Unable to kill ${pid}"
else
  ...
}
{code}

  was:
We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill -9}}; 
see the following output of stop-yarn.sh:

{code}
: WARNING: nodemanager did not stop gracefully after 5 seconds: Trying 
to kill with kill -9
: ERROR: Unable to kill 18097
{code}

hadoop_stop_daemon does not check process liveness correctly, and the bug is 
easy to reproduce with the attached script: {{kill -9}} takes some time to 
complete, so checking for the process immediately afterwards will usually 
still find it.


> Process check bug in hadoop_stop_daemon
> ---
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon does not check process liveness correctly, and the bug is 
> easy to reproduce with the attached script: {{kill -9}} takes some time to 
> complete, so checking for the process immediately afterwards will usually 
> still find it.
> {code}
> function hadoop_stop_daemon
> {
> ...
>   kill -9 "${pid}" >/dev/null 2>&1
> fi
> if ps -p "${pid}" > /dev/null 2>&1; then
>   hadoop_error "ERROR: Unable to kill ${pid}"
> else
>   ...
> }
> {code}






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Description: 
We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill -9}}; 
see the following output of stop-yarn.sh:

{code}
: WARNING: nodemanager did not stop gracefully after 5 seconds: Trying 
to kill with kill -9
: ERROR: Unable to kill 18097
{code}

hadoop_stop_daemon does not check process liveness correctly, and the bug is 
easy to reproduce with the attached script: {{kill -9}} takes some time to 
complete, so checking for the process immediately afterwards will usually 
still find it.

  was:
We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill -9}}; 
see the following output of stop-yarn.sh:

{code}
: WARNING: nodemanager did not stop gracefully after 5 seconds: Trying 
to kill with kill -9
: ERROR: Unable to kill 18097
{code}

hadoop_stop_daemon does not check process liveness correctly; the bug is easy 
to reproduce with the attached script.


> Process check bug in hadoop_stop_daemon
> ---
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon does not check process liveness correctly, and the bug is 
> easy to reproduce with the attached script: {{kill -9}} takes some time to 
> complete, so checking for the process immediately afterwards will usually 
> still find it.






[jira] [Updated] (HADOOP-13837) Process check bug in hadoop_stop_daemon

2016-11-28 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-13837:
-
Attachment: check_proc.sh

> Process check bug in hadoop_stop_daemon
> ---
>
> Key: HADOOP-13837
> URL: https://issues.apache.org/jira/browse/HADOOP-13837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: check_proc.sh
>
>
> We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill 
> -9}}; see the following output of stop-yarn.sh:
> {code}
> : WARNING: nodemanager did not stop gracefully after 5 seconds: 
> Trying to kill with kill -9
> : ERROR: Unable to kill 18097
> {code}
> hadoop_stop_daemon does not check process liveness correctly; the bug is easy 
> to reproduce with the attached script.






[jira] [Created] (HADOOP-13837) Process check bug in hadoop_stop_daemon

2016-11-28 Thread Weiwei Yang (JIRA)
Weiwei Yang created HADOOP-13837:


 Summary: Process check bug in hadoop_stop_daemon
 Key: HADOOP-13837
 URL: https://issues.apache.org/jira/browse/HADOOP-13837
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Weiwei Yang
Assignee: Weiwei Yang


We always get {{ERROR: Unable to kill ...}} after {{Trying to kill with kill -9}}; 
see the following output of stop-yarn.sh:

{code}
: WARNING: nodemanager did not stop gracefully after 5 seconds: Trying 
to kill with kill -9
: ERROR: Unable to kill 18097
{code}

hadoop_stop_daemon does not check process liveness correctly; the bug is easy 
to reproduce with the attached script.






[jira] [Comment Edited] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread kartheek muthyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702498#comment-15702498
 ] 

kartheek muthyala edited comment on HADOOP-13836 at 11/28/16 5:03 PM:
--

Yes, [~asuresh], that is exactly what we are doing here. The proposal intends 
to implement an SSL layer on top of the existing Hadoop RPC. It introduces an 
SSLEngine in the Server to encode and decode messages, and uses Java's 
javax.net.ssl library to encode and decode on the Client side. We have relied 
on the niossl library for the server-side implementation of SSLEngine. Because 
this implementation sits on top of the SSLSocket channel implementation, we 
can still keep the channels open as before and just encode and decode messages 
using the existing cipher keys. But, as [~ste...@apache.org] pointed out, this 
introduces an overhead of additional handshakes between Server and Client for 
certificate exchange, validation, etc. We can trade this performance hit for 
the stronger security, which should improve the usage of secure IPC in large 
systems.

We have been running this patch internally with some long-running jobs, and 
the performance seems decent. I don't have the exact numbers right away, but 
I will post them soon.


was (Author: kartheek):
Yes, [~asuresh], that is exactly what we are doing here. The proposal intends 
to implement an SSL layer on top of the existing Hadoop RPC. It introduces an 
SSLEngine in the Server to encode and decode messages, and Java's 
javax.net.ssl library to encode and decode on the Client side. We have relied 
on the niossl library for the server-side implementation of SSLEngine. Because 
this implementation sits on top of the SSLSocket channel implementation, we 
can still keep the channels open as before and just encode and decode messages 
using the existing cipher keys. But, as [~ste...@apache.org] pointed out, this 
introduces an overhead of additional handshakes between Server and Client for 
certificate exchange, validation, etc. We can trade this performance hit for 
the stronger security, which should improve the usage of secure IPC in large 
systems.

We have been running this patch internally with some long-running jobs, and 
the performance seems decent. I don't have the exact numbers right away, but 
I will post them soon.

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using the Simple 
> Authentication and Security Layer (SASL), with Kerberos ticket-based or 
> DIGEST-MD5 checksum-based authentication. This proposal is about enhancing 
> this cipher suite with SSL/TLS-based encryption and authentication. SSL/TLS 
> is a proposed Internet Engineering Task Force (IETF) standard that provides 
> data security and integrity between two endpoints in a network. The protocol 
> has made its way into a number of applications such as web browsing, email, 
> internet faxing, messaging, VoIP, etc. Supporting this cipher suite at the 
> core of Hadoop would give good synergy with the applications on top and 
> bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication:
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL-based encryption and authentication (x509 certificates)






[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread kartheek muthyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702498#comment-15702498
 ] 

kartheek muthyala commented on HADOOP-13836:


Yes, [~asuresh], that is exactly what we are doing here. The proposal intends 
to implement an SSL layer on top of the existing Hadoop RPC. It introduces an 
SSLEngine in the Server to encode and decode messages, and Java's 
javax.net.ssl library to encode and decode on the Client side. We have relied 
on the niossl library for the server-side implementation of SSLEngine. Because 
this implementation sits on top of the SSLSocket channel implementation, we 
can still keep the channels open as before and just encode and decode messages 
using the existing cipher keys. But, as [~ste...@apache.org] pointed out, this 
introduces an overhead of additional handshakes between Server and Client for 
certificate exchange, validation, etc. We can trade this performance hit for 
the stronger security, which should improve the usage of secure IPC in large 
systems.

We have been running this patch internally with some long-running jobs, and 
the performance seems decent. I don't have the exact numbers right away, but 
I will post them soon.

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using the Simple 
> Authentication and Security Layer (SASL), with Kerberos ticket-based or 
> DIGEST-MD5 checksum-based authentication. This proposal is about enhancing 
> this cipher suite with SSL/TLS-based encryption and authentication. SSL/TLS 
> is a proposed Internet Engineering Task Force (IETF) standard that provides 
> data security and integrity between two endpoints in a network. The protocol 
> has made its way into a number of applications such as web browsing, email, 
> internet faxing, messaging, VoIP, etc. Supporting this cipher suite at the 
> core of Hadoop would give good synergy with the applications on top and 
> bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication:
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL-based encryption and authentication (x509 certificates)






[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702466#comment-15702466
 ] 

Arun Suresh commented on HADOOP-13836:
--

bq. wire encryption can only be good, though the cost of negotiating secure 
HTTPS connections can be high; I don't know if this proposal will have the same 
problem.
[~steve_l], from my initial glance at the patch, it looks like it replaces the 
socket used for the RPC with an SSL socket. In that case, it should be 
technically possible to replace the standard JSSE SSLEngine with OpenSSL's 
JNI-based codecs for improved performance (maybe as a later patch), like 
Tomcat does.

[~kartheek], do you have some numbers that quantify the performance 
degradation?

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using the Simple 
> Authentication and Security Layer (SASL), with Kerberos ticket-based or 
> DIGEST-MD5 checksum-based authentication. This proposal is about enhancing 
> this cipher suite with SSL/TLS-based encryption and authentication. SSL/TLS 
> is a proposed Internet Engineering Task Force (IETF) standard that provides 
> data security and integrity between two endpoints in a network. The protocol 
> has made its way into a number of applications such as web browsing, email, 
> internet faxing, messaging, VoIP, etc. Supporting this cipher suite at the 
> core of Hadoop would give good synergy with the applications on top and 
> bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication:
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL-based encryption and authentication (x509 certificates)






[jira] [Updated] (HADOOP-9363) AuthenticatedURL will NPE if server closes connection

2016-11-28 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9363:

Target Version/s: 2.9.0, 3.0.0-alpha2  (was: 2.8.0, 3.0.0-alpha2)

> AuthenticatedURL will NPE if server closes connection
> -
>
> Key: HADOOP-9363
> URL: https://issues.apache.org/jira/browse/HADOOP-9363
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> An NPE occurs if the server unexpectedly closes the connection for an 
> {{AuthenticatedURL}} without sending a response.






[jira] [Commented] (HADOOP-13518) backport HADOOP-9258 to branch-2

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702435#comment-15702435
 ] 

Hadoop QA commented on HADOOP-13518:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
47s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} root in branch-2 failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
51s{color} | {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 51s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_111. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 10 new + 49 unchanged 
- 2 fixed = 59 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_111 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13518 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702398#comment-15702398
 ] 

Hadoop QA commented on HADOOP-13836:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 50 new + 402 unchanged - 16 fixed = 452 total (was 418) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
26s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Uninitialized read of backlogLength in new 
org.apache.hadoop.ipc.AbstractListener(String, int, int, int, String, 
Configuration, Server$ConnectionManager)  At AbstractListener.java:new 
org.apache.hadoop.ipc.AbstractListener(String, int, int, int, String, 
Configuration, Server$ConnectionManager)  At AbstractListener.java:[line 71] |
| Failed junit tests | hadoop.ipc.TestSSLIPC |
|   | hadoop.ipc.TestRPC |
|   | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13836 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840656/HADOOP-13836.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux b1c57be4d725 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5d5614f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (HADOOP-13836) Securing Hadoop RPC using SSL

2016-11-28 Thread kartheek muthyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15702238#comment-15702238
 ] 

kartheek muthyala commented on HADOOP-13836:


Hey [~antkou], good to know that you are also working on a similar feature. We 
have submitted an initial version of the patch; kindly review it and let us 
know your feedback.

> Securing Hadoop RPC using SSL
> -
>
> Key: HADOOP-13836
> URL: https://issues.apache.org/jira/browse/HADOOP-13836
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: ipc
>Reporter: kartheek muthyala
> Attachments: HADOOP-13836.patch
>
>
> Today, RPC connections in Hadoop are encrypted using the Simple 
> Authentication and Security Layer (SASL), with Kerberos ticket-based or 
> DIGEST-MD5 checksum-based authentication. This proposal is about enhancing 
> this cipher suite with SSL/TLS-based encryption and authentication. SSL/TLS 
> is a proposed Internet Engineering Task Force (IETF) standard that provides 
> data security and integrity between two endpoints in a network. The protocol 
> has made its way into a number of applications such as web browsing, email, 
> internet faxing, messaging, VoIP, etc. Supporting this cipher suite at the 
> core of Hadoop would give good synergy with the applications on top and 
> bolster industry adoption of Hadoop.
> The Server and Client code in Hadoop IPC should support the following modes 
> of communication:
> 1. Plain
> 2. SASL encryption with an underlying authentication
> 3. SSL-based encryption and authentication (x509 certificates)





