[jira] [Updated] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13597:

Attachment: HADOOP-13597.002.patch

Patch 002
- Add "hadoop kms" sub-command; kms.sh is now just a wrapper.
- Read SSL configuration from ssl-server.xml (see the sketch after this list)
- Put common SSL config keys in SSLConfig
- Put common HTTP config keys in HttpConfig
- Support all deprecated environment variables and warn about their deprecation
- Enhance the web page at /static/index.html, not /index.html, due to an 
HttpServer2 limitation
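
A minimal sketch of reading SSL settings from ssl-server.xml, assuming the 
patch reuses the standard Hadoop server-side SSL key names (the exact wiring 
in the patch may differ):

{code}
// Load ssl-server.xml as a standalone resource (no core-site defaults).
Configuration sslConf = new Configuration(false);
sslConf.addResource("ssl-server.xml");

// Standard Hadoop server-side SSL keys, presumably handed to Jetty via the
// new SSLConfig/HttpServer2 plumbing.
String keystore = sslConf.get("ssl.server.keystore.location");
String keystorePass = sslConf.get("ssl.server.keystore.password");
String truststore = sslConf.get("ssl.server.truststore.location");
{code}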

TESTING DONE
- Run "hadoop key list/create/delete/roll" in non-secure and SSL setups
- All KMS unit tests that actually exercise the full-blown KMS
- Script: hadoop kms; hadoop --daemon start|status|stop kms
- Script: kms.sh run|start|status|stop
- /kms/jmx, /kms/logLevel, /kms/conf, /kms/stack, /logs, and /static

TODO
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Share web apps code in Common, HDFS, and YARN


> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13257) Improve Azure Data Lake contract tests.

2016-12-01 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13257:
---
Status: Open  (was: Patch Available)

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-13257.001.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> more closely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-13835:
---
Attachment: HADOOP-13835.006.patch

-006
1) Fix a typo in CMakeLists.txt
2) Fix path for adding gtest files to system libs in CMakeLists.txt

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713913#comment-15713913
 ] 

Varun Vasudev commented on HADOOP-13835:


[~ajisakaa] - I think the latest patch is ready for review. Can you take a 
look? Thanks!

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13827) Add reencryptEncryptedKey interface to KMS

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713786#comment-15713786
 ] 

Hadoop QA commented on HADOOP-13827:


| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| 0 | mvndep | 0m 13s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 12s | trunk passed |
| +1 | compile | 9m 58s | trunk passed |
| +1 | checkstyle | 0m 35s | trunk passed |
| +1 | mvnsite | 1m 30s | trunk passed |
| +1 | mvneclipse | 2m 12s | trunk passed |
| +1 | findbugs | 5m 30s | trunk passed |
| +1 | javadoc | 3m 28s | trunk passed |
| 0 | mvndep | 0m 30s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 55s | the patch passed |
| +1 | compile | 9m 10s | the patch passed |
| +1 | javac | 9m 10s | the patch passed |
| -0 | checkstyle | 0m 34s | hadoop-common-project: The patch generated 2 new + 164 unchanged - 23 fixed = 166 total (was 187) |
| +1 | mvnsite | 1m 23s | the patch passed |
| +1 | mvneclipse | 0m 36s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 6s | the patch passed |
| +1 | javadoc | 1m 4s | the patch passed |
| +1 | unit | 7m 27s | hadoop-common in the patch passed. |
| +1 | unit | 2m 13s | hadoop-kms in the patch passed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 64m 26s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13827 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841390/HADOOP-13827.03.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7297f80b49dc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d77dc7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/11179/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11179/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11179/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

[jira] [Updated] (HADOOP-13827) Add reencryptEncryptedKey interface to KMS

2016-12-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13827:
---
Summary: Add reencryptEncryptedKey interface to KMS  (was: Add 
reencryptEDEK interface for KMS)

> Add reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-13827
> URL: https://issues.apache.org/jira/browse/HADOOP-13827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13827.02.patch, HADOOP-13827.03.patch, 
> HDFS-11159.01.patch
>
>
> This is the KMS part. Please refer to HDFS-10899 for the design doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13827) Add reencryptEDEK interface for KMS

2016-12-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13827:
---
Attachment: HADOOP-13827.03.patch

Patch 3 addressed all comments and added docs. I didn't find any general 
description section, so I added the docs to the REST API part where 
re-encrypt is introduced.

Also crossed out the last TODO item on my list, which is to drain all 
providers after a {{rollNewVersion}} in LBKMSCP.

Thanks for reviewing, Andrew!

> Add reencryptEDEK interface for KMS
> ---
>
> Key: HADOOP-13827
> URL: https://issues.apache.org/jira/browse/HADOOP-13827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13827.02.patch, HADOOP-13827.03.patch, 
> HDFS-11159.01.patch
>
>
> This is the KMS part. Please refer to HDFS-10899 for the design doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12304) Applications using FileContext fail with the default file system configured to be wasb/s3/etc.

2016-12-01 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12304:

Fix Version/s: 2.8.0

> Applications using FileContext fail with the default file system configured 
> to be wasb/s3/etc.
> --
>
> Key: HADOOP-12304
> URL: https://issues.apache.org/jira/browse/HADOOP-12304
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>  Labels: 2.7.2-candidate
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HADOOP-12304.001.patch
>
>
> HADOOP-11618 fixed a bug with {{DelegateToFileSystem}} using the wrong 
> default port.  As a side effect of this patch, file path URLs that previously 
> had no port now insert :0 for the port, as per the default implementation of 
> {{FileSystem#getDefaultPort}}.  At runtime, this can cause an application to 
> erroneously try contacting port 0 for a remote blob store service.  The 
> connection fails.  Ultimately, this renders wasb, s3, and probably custom 
> file system implementations outside the Hadoop source tree completely 
> unusable as the default file system.
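
For background, the default implementation at issue is trivial; paraphrasing 
the {{FileSystem}} API (this is the base class behavior, not the patch):

{code}
// FileSystem's base implementation: file systems with no notion of a port
// return 0. After HADOOP-11618, DelegateToFileSystem picked this up and
// spliced an explicit ":0" into otherwise port-less URIs.
protected int getDefaultPort() {
  return 0;
}
{code}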



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713548#comment-15713548
 ] 

Andrew Wang commented on HADOOP-13852:
--

Hi Steve, there's also a reference in 
{{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties}}
 which should probably be addressed.

What makes this change more expedient than fixing the Hive/Spark shim?

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch
>
>
> Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) to be overridden 
> manually.
> This will not affect version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}
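
For context, a sketch of the kind of downstream version check that motivates 
this (illustrative only, not Hive's actual shim code):

{code}
// Shims switch on the advertised version string; overriding the build-time
// property changes only this value, not the artifact names.
String version = VersionInfo.getVersion();   // e.g. "3.0.0-alpha2-SNAPSHOT"
String major = version.split("\\.")[0];
if (!"2".equals(major)) {
  throw new IllegalArgumentException("Unrecognized Hadoop version: " + version);
}
{code}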



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713498#comment-15713498
 ] 

Hadoop QA commented on HADOOP-13709:


| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 6m 53s | trunk passed |
| +1 | compile | 9m 29s | trunk passed |
| +1 | checkstyle | 0m 28s | trunk passed |
| +1 | mvnsite | 1m 1s | trunk passed |
| +1 | mvneclipse | 0m 19s | trunk passed |
| +1 | findbugs | 1m 24s | trunk passed |
| +1 | javadoc | 0m 46s | trunk passed |
| +1 | mvninstall | 0m 36s | the patch passed |
| +1 | compile | 9m 7s | the patch passed |
| +1 | javac | 9m 7s | the patch passed |
| +1 | checkstyle | 0m 29s | the patch passed |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | mvneclipse | 0m 18s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 38s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
| +1 | unit | 7m 40s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 44m 25s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13709 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841368/HADOOP-13709.007.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux c88eb8f039db 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 19f373a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11178/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11178/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, 

[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-01 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Attachment: HADOOP-13709.007.patch

Thanks for the comments, [~jlowe]!

{quote}
The synchronized map needs to be locked explicitly when iterated otherwise we 
have concurrency issues if some other thread tries to update this map while 
we're walking it during the shutdown hook processing.
{quote}
Good catch; I put this back in with the latest patch.

{quote}
The unit test is racy because it assumes a 250ms sleep is enough to get the 
sleep processes started. It would be better to poll for getProcess() being 
non-null for the two executors. GenericTestUtils.waitFor would be useful here 
and can also replace the manually-coded poll loop.
{quote}
Changed all waits to use GenericTestUtils.waitFor in the latest patch.
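
For reference, a minimal sketch of the polling style adopted here, assuming 
two executors whose {{getProcess()}} turns non-null once the subprocess 
starts (variable names are illustrative):

{code}
// Poll every 10 ms, up to 10 s, instead of a fixed 250 ms sleep.
GenericTestUtils.waitFor(
    () -> shexc1.getProcess() != null && shexc2.getProcess() != null,
    10, 10000);
{code}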

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shutdown due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shutdown and all of the subprocesses will 
> be orphaned and not killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13854) KMS should log error details even if a request is malformed

2016-12-01 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13854:
---
Attachment: HADOOP-13854.01.patch

Attaching patch 1. This should be {{warn}} instead of {{debug}} because IMO we 
usually want to analyze and check the details behind those {{ERROR}} audit 
entries.
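
For illustration, a minimal sketch of the change being described, assuming it 
lands in {{KMSExceptionsProvider}}'s internal log helper (the method shape is 
an assumption, not the exact patch):

{code}
// Log at WARN rather than DEBUG so a malformed request leaves a trace in
// kms.log next to the ERROR audit entry.
private void log(Response.Status status, Throwable ex) {
  LOG.warn("Request failed with status {}:", status, ex);
}
{code}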

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13854.01.patch
>
>
> It appears that if a KMS HTTP request is malformed, it could be rejected by 
> Tomcat and a response sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
> , which overrides Jersey's ExceptionMapper.
> This behavior is okay, but in the logs we'll see an ERROR audit entry and 
> nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
> painful; let's improve it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-5323) Trash documentation should describe its directory structure and configurations

2016-12-01 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-5323:
---
Fix Version/s: 2.8.0

> Trash documentation should describe its directory structure and configurations
> --
>
> Key: HADOOP-5323
> URL: https://issues.apache.org/jira/browse/HADOOP-5323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.18.3
>Reporter: Suman Sehgal
>Assignee: Weiwei Yang
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: HDFS-5323.patch
>
>
> Trash documentation should mention the significance of the "Current" and 
> "" directories which get generated inside the Trash directory. The 
> documentation should also incorporate the modifications done in HADOOP-4970.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13854) KMS should log error details even if a request is malformed

2016-12-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713372#comment-15713372
 ] 

Xiao Chen commented on HADOOP-13854:


One example entry in the audit log is this intermittently:
{noformat}
2016-11-30 12:24:02,484 ERROR[user=impala] Method:'POST' Exception:'No content 
to map to Object due to end of input'
{noformat}

The exception message is hardly helpful.

> KMS should log error details even if a request is malformed
> ---
>
> Key: HADOOP-13854
> URL: https://issues.apache.org/jira/browse/HADOOP-13854
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.5
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> It appears that if a KMS HTTP request is malformed, it could be rejected by 
> Tomcat and a response sent by 
> [KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
> , which overrides Jersey's ExceptionMapper.
> This behavior is okay, but in the logs we'll see an ERROR audit entry and 
> nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
> painful; let's improve it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9242) Duplicate surefire plugin config in hadoop-common

2016-12-01 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9242:
---
Fix Version/s: 2.8.0

> Duplicate surefire plugin config in hadoop-common
> -
>
> Key: HADOOP-9242
> URL: https://issues.apache.org/jira/browse/HADOOP-9242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 0.23.6, 2.8.0, 2.7.2, 2.6.3, 3.0.0-alpha1
>
> Attachments: HADOOP-9242.patch
>
>
> Unfortunately in HADOOP-9217 a duplicated configuration of Surefire plugin 
> was introduced in hadoop-common/pom.xml, effectively discarding a part of 
> configuration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13854) KMS should log error details even if a request is malformed

2016-12-01 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13854:
--

 Summary: KMS should log error details even if a request is 
malformed
 Key: HADOOP-13854
 URL: https://issues.apache.org/jira/browse/HADOOP-13854
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.5
Reporter: Xiao Chen
Assignee: Xiao Chen


It appears that if a KMS HTTP request is malformed, it could be rejected by 
Tomcat and a response sent by 
[KMSExceptionsProvider|https://github.com/apache/hadoop/blob/branch-3.0.0-alpha1/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSExceptionsProvider.java#L99]
, which overrides Jersey's ExceptionMapper.

This behavior is okay, but in the logs we'll see an ERROR audit entry and 
nothing in kms.log (or anywhere else). This makes troubleshooting pretty 
painful; let's improve it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-12-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713310#comment-15713310
 ] 

Jason Lowe commented on HADOOP-13709:
-

The synchronized map needs to be locked explicitly when iterated otherwise we 
have concurrency issues if some other thread tries to update this map while 
we're walking it during the shutdown hook processing.

The unit test is racy because it assumes a 250ms sleep is enough to get the 
sleep processes started.  It would be better to poll for getProcess() being 
non-null for the two executors.  GenericTestUtils.waitFor would be useful here 
and can also replace the manually-coded poll loop.
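
For reference, a minimal sketch of the locking pattern being asked for (the 
map and field names are illustrative, not the actual {{Shell.java}} code):

{code}
// Collections.synchronizedMap only makes individual calls atomic; iterating
// it requires holding the map's monitor, or a concurrent put() from another
// thread can corrupt the walk during shutdown hook processing.
synchronized (childShells) {
  for (Shell shell : childShells.keySet()) {
    Process p = shell.getProcess();
    if (p != null) {
      p.destroy();  // kill the spawned subprocess as the JVM shuts down
    }
  }
}
{code}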

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shutdown due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shutdown and all of the subprocesses will 
> be orphaned and not killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713241#comment-15713241
 ] 

Mingliang Liu commented on HADOOP-13449:


Thanks for the tip on disabling the s3n integration tests. I find the command {{mvn 
-Dparallel-tests -DtestsThreadCount=8 -Dit.test='ITestS3A*' -Dtest=none clean 
verify}} is also helpful.

I'll review and/or commit [HADOOP-13793] today. You're right that we have to 
ensure the DDB metadata store is disabled for unit tests even if it's 
configured. As for the {{DynamoDBClientFactory}}, it sounds good to mock both 
the S3 client and the DDB client, except in {{TestDynamoDBMetadataStore}}, 
which will create its own client against the DynamoDBLocal.

I'm working on fixing the other failing tests. Per an offline discussion with 
Steve, he suggested we ignore the failing anonymous-auth tests for this patch 
for now. We do have to support that eventually, though.

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12927) Update netty-all to 4.0.34.Final

2016-12-01 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713203#comment-15713203
 ] 

Haibo Chen commented on HADOOP-12927:
-

[~ozawa], [~vinayrpet] Any specific reason why we are not updating netty-all 
from 4.1.0.Beta5 to a stable 4.1.0+ release?

> Update netty-all to 4.0.34.Final
> 
>
> Key: HADOOP-12927
> URL: https://issues.apache.org/jira/browse/HADOOP-12927
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Hendy Irawan
>
> Pull request: https://github.com/apache/hadoop/pull/84



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-12-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-13848.
-
   Resolution: Duplicate
Fix Version/s: 2.8.0

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
> Fix For: 2.8.0
>
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13848) Missing auth-keys.xml prevents detecting test code build problem

2016-12-01 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reopened HADOOP-13848:
-

> Missing auth-keys.xml prevents detecting test code build problem
> 
>
> Key: HADOOP-13848
> URL: https://issues.apache.org/jira/browse/HADOOP-13848
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>
> Both hadoop-aws and hadoop-openstack require the existence of file 
> {{src/test/resources/auth-keys.xml}} to run the tests. With the design of the 
> pom.xml, the non-existence of auth-keys.xml also prevents building the test 
> code. Unfortunately this leads to delayed detection of build problems in test 
> code, e.g., introduced by a mistake in backports.
> {code}
> <profile>
>   <id>tests-off</id>
>   <activation>
>     <file>
>       <missing>src/test/resources/auth-keys.xml</missing>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>true</maven.test.skip>
>   </properties>
> </profile>
> <profile>
>   <id>tests-on</id>
>   <activation>
>     <file>
>       <exists>src/test/resources/auth-keys.xml</exists>
>     </file>
>   </activation>
>   <properties>
>     <maven.test.skip>false</maven.test.skip>
>   </properties>
> </profile>
> {code}
> Section {{Skipping by Default}} in 
> http://maven.apache.org/surefire/maven-surefire-plugin/examples/skipping-test.html
>  proposes a solution. Any time you want to run tests, you must do 2 things 
> instead of 1:
> * Copy auth-keys.xml to src/test/resources
> * Run {{mvn install}} with the extra {{-DskipTests=false}}
> Would like the community to weigh in on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.

2016-12-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713157#comment-15713157
 ] 

Mingliang Liu commented on HADOOP-13257:


{quote}
6. When generating Parameterized.Parameters, can we use loops? They're clearer 
for covering different cases.
Sorry, I doubt I understood the comment. Could you please clarify?
{quote}
For example, when generating parameters in {{TestAdlPermissionLive}},
{code}
@Parameterized.Parameters(name = "{0}")
public static Collection adlCreateNonRecursiveTestData()
    throws UnsupportedEncodingException {
  /*
    Test Data
    File/Folder name, User permission, Group permission, Other Permission,
    Parent already exist
    shouldCreateSucceed, expectedExceptionIfFileCreateFails
  */
  return Arrays.asList(new Object[][] {
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.ALL)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.NONE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.EXECUTE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.READ_EXECUTE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.WRITE_EXECUTE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.WRITE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.READ)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.READ_WRITE)},

      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.ALL,
          FsAction.ALL)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL,
          FsAction.EXECUTE, FsAction.NONE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL,
          FsAction.READ_EXECUTE, FsAction.NONE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL,
          FsAction.WRITE_EXECUTE, FsAction.NONE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL,
          FsAction.WRITE, FsAction.NONE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL, FsAction.READ,
          FsAction.NONE)},
      {new TestData(UUID.randomUUID().toString(), FsAction.ALL,
          FsAction.READ_WRITE, FsAction.NONE)}});
}
{code}
can be some code like (may need to change it):
{code}
  public static Collection adlCreateNonRecursiveTestData()
      throws UnsupportedEncodingException {
    final Collection datas = new ArrayList<>();
    for (FsAction g : FsAction.values()) {
      for (FsAction o : FsAction.values()) {
        datas.add(
            new TestData(UUID.randomUUID().toString(), FsAction.ALL, g, o));
      }
    }
    return datas;
  }
{code}

{quote}
Initially I had the simplified version of the code you proposed. The issue I 
faced was that the output was flooded with logs, since 
TestAdlSupportedCharsetInPath has 470+ tests. Hence I added a check to dump 
the log only when the path is not found.
{quote}
You can still return early.
{code}
  private boolean contains(FileStatus[] statuses, String remotePath) {
for (FileStatus status : statuses) {
  if (status.getPath().toString().equals(remotePath)) {
return true;
  }
}
for (FileStatus status : statuses) {
  LOG.debug("Directory Content: {}", status.getPath());
}
return false;
  }
{code}

By the way, if you love lambda, you can use following code:

{code}
  private boolean contains(FileStatus[] statuses, String remotePath) {
for (FileStatus status : statuses) {
  if (status.getPath().toString().equals(remotePath)) {
return true;
  }
}
Arrays.stream(statuses).forEach(s -> LOG.info(s.getPath().toString()));
return false;
  }
{code}

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-13257.001.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> 

[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-01 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713155#comment-15713155
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

That use case for Mock S3 Client + Real DDB client (for DynamoDBLocal) makes 
sense.  We also need to be able to ensure that DDB metadatastore is disabled 
for unit tests, even if it is configured in the Hadoop configuration.  That 
could be solved as part of HADOOP-13589.

In my working tree I have a patch on top of your v10 that pulls the DynamoDB 
client creation out into a separate class, {{DynamoDBClientFactory}}.  That 
would allow us to use a mock S3 client without a real DDB client.  It is an 
easy change but depends on (or conflicts with) my outstanding patch for 
HADOOP-13793 (which we should get in soon).

As for disabling the s3n integration tests, you should be able to add a couple 
of lines to your pom to exclude them; see the Maven Failsafe options for 
details.  I personally run just the integration tests like so:

{{mvn clean test-compile failsafe:integration-test}}

and then find one failure that I want to debug, and run that one alone like 
this:

{{mvn clean test-compile failsafe:integration-test 
-Dit.test=ITestS3AFileSystemContract}}


> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-12-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713137#comment-15713137
 ] 

Jason Lowe commented on HADOOP-13578:
-

I forgot to mention it would also be good to address at least some of the 
checkstyle issues, such as the indentation levels and line-length.

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch, 
> HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, HADOOP-13578.v4.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-12-01 Thread Luke Miner (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713133#comment-15713133
 ] 

Luke Miner commented on HADOOP-13811:
-

So I printed out the classpath and it points to the snapshot build 
{{jar:file:/foo/hadoop-aws-2.9.0-SNAPSHOT.jar!/org/apache/hadoop/fs/s3a/S3AFileSystem.class}}

Could an earlier version of Hadoop somehow have crept in?

I built it from the PR that you'd indicated earlier: 
https://github.com/apache/spark/pull/12004

I used the following command on my Mac {{dev/make-distribution.sh 
-Pyarn,hadoop-2.7,hive,cloud -Pmesos -Dhadoop.version=2.9.0-SNAPSHOT}}

Is the problem with the {{hadoop-2.7}} bit?



> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Sometimes, occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2016-12-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713129#comment-15713129
 ] 

Jason Lowe commented on HADOOP-13578:
-

Thanks for updating the patch!

I'm torn on the buffer size, whether we should use the generic 
io.file.buffer.size or have a zst-specific config.  If the codec can 
significantly speed up with a larger default buffer size than the file buffer 
size then maybe we should consider having a separate, zst-specific buffer size 
config.  I'd recommend a zst-specific config key with a default value of 0 that 
means use whatever buffer size the zst library wants to use, but the user can 
override it to a custom size.  That way users can change the zst codec buffer 
size (again, useful for keeping memory usage reasonable for very wide sort 
factor merges) but not change the buffer sizes of anything else.  Similarly, 
someone changing the default buffer size to something relatively small (e.g.: 
4K) for some unrelated use case could unknowingly hamper the performance of the 
zst codec.
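
As a rough sketch of that suggestion (the key name and the native helper are 
hypothetical, not part of the patch):

{code}
// Hypothetical zstd-specific buffer size key; 0 means defer to the library.
public static final String ZSTD_BUFFER_SIZE_KEY =
    "io.compression.codec.zstd.buffersize";
public static final int ZSTD_BUFFER_SIZE_DEFAULT = 0;

static int getBufferSize(Configuration conf) {
  int size = conf.getInt(ZSTD_BUFFER_SIZE_KEY, ZSTD_BUFFER_SIZE_DEFAULT);
  // recommendedBufferSize() stands in for a JNI shim over ZSTD_CStreamOutSize().
  return size > 0 ? size : recommendedBufferSize();
}
{code}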

Regarding the test input file, is it copyrighted or do we otherwise have the 
redistribution rights?  Also the .zst binary test file is missing from the 
patch.

*ZStandardCodec.java*
Nit: should import the specific things needed rather than wildcard imports.

*ZStandardCompressor.java*
reinit calls reset then init, but reset already is calling init.

LOG should be private rather than public.

The LOG.isDebugEnabled check is unnecessary.

Nit: The VisibleForTesting annotation should be on a separate line like the 
other annotations.

*ZStandardCompressor.c:*
Shouldn't we throw if GetDirectBufferAddress returns null?

From the zstd documentation, ZSTD_compressStream may not always fully consume 
the input.  Therefore if the finish flag is set in deflateBytesDirect I think 
we need to avoid calling ZSTD_endStream if there are still bytes left in the 
input or we risk dropping some of the input.  We can just return without 
setting the finished flag and expect to get called again from the outer 
compression loop logic.

Similarly the documentation for ZSTD_endStream says that it may not be able to 
flush the full contents of the data in one call, and if it indicates there is 
more data left to flush then we should call it again.  So if we tried to end 
the stream and there's more data left then we shouldn't throw an error.  
Instead we simply leave the finished flag unset and return so the outer logic 
loop will empty the output buffer and call compress() again.  We will then 
again call ZSTD_endStream to continue attempting to finish the compression.  
Bonus points for adding a test case that uses a 1-byte output buffer to 
demonstrate the extreme case is working properly.
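
To make that control flow concrete, a Java-side sketch of the contract 
described above (field and method names are illustrative, patterned after 
Hadoop's other compressors rather than this patch):

{code}
// Outer-loop contract: compress() may be called repeatedly; "finished" must
// stay false until the native side has consumed all input AND ZSTD_endStream
// reports nothing left to flush.
public synchronized int compress(byte[] b, int off, int len)
    throws IOException {
  int n = deflateBytesDirect();      // may consume only part of the input
  if (finish && uncompressedDirectBufLen == 0 && endStreamFullyFlushed) {
    finished = true;                 // safe to report completion only now
  }
  return n;
}
{code}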

Is it necessary to call ZSTD_flushStream after every ZSTD_compressStream call?  
Seems to me we can skip it entirely, but I might be missing something.

*ZStandardDecompressor.java:*
 LOG should be private rather than public.

Should there be a checkStream call at the beginning of decompress()?  This 
would mirror the same check in the compressor for the corresponding compress() 
method.

In inflateDirect, the setting of preSliced = dst in the conditional block is 
redundant since it was just done before the block.  Actually I think it would 
be faster and create less garbage to avoid creating a temporary ByteBuffer and 
have the JNI code look up the output buffer's position and limit and adjust 
them accordingly.  That way we aren't creating an object each time.

The swapping of variables in inflateDirect is cumbersome.  It would be much 
cleaner if inflateBytesDirect took arguments so we can pass what the JNI needs 
directly to it rather than shuffling the parameters into the object's fields 
before we call it. That also cleans up the JNI code a bit since it doesn't need 
to do many of the field lookups.  For example:
{code}
  private native int inflateBytesDirect(ByteBuffer src, int srcOffset, int 
srcLen, ByteBuffer dst, int dstOffset, int dstLen);
{code}
We could simplify the variables even more if the JNI code called the ByteBuffer 
methods to update the positions and limits directly rather than poking the 
values into the Java object and having Java do it when the JNI returns.  I'll 
leave the details up to you, just pointing out that we can clean this up and 
avoid a lot of swapping.  We could do a similar thing on the compressor side.  

I'm a little confused about the endOfInput flag for the direct decompressor.  I 
assume it's for handling the case where there are multiple frames within the 
input buffer.  The JNI code will set finished = true once it hits the end of 
the frame, and it appears it will remain that way even if more input is sent 
(i.e.: finished will go from false to true after the first frame in the input 
buffer and remain true throughout all subsequent frames).  It makes me wonder 
if we should be 

[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.

2016-12-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15713109#comment-15713109
 ] 

Mingliang Liu commented on HADOOP-13257:


Hi [~vishwajeet.dusane],

I just committed the [HDFS-11132].

Thanks,




> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-13257.001.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> more closely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-12-01 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712862#comment-15712862
 ] 

Ravi Prakash commented on HADOOP-13849:
---

That makes sense Steve and Tao Li! Thanks for your efforts. Please keep us 
updated if you find any bottlenecks. 

> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compress speed is almost the same. (I think the system-native should have 
> better compress speed than java-builtin)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enable the bzip2 native, the output of command "hadoop 
> checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712627#comment-15712627
 ] 

Hadoop QA commented on HADOOP-13673:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 1m 39s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 22s | trunk passed |
| +1 | mvnsite | 7m 13s | trunk passed |
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvnsite | 6m 39s | the patch passed |
| -1 | shellcheck | 0m 11s | The patch generated 2 new + 117 unchanged - 0 fixed = 119 total (was 117) |
| +1 | shelldocs | 0m 9s | The patch generated 0 new + 112 unchanged - 12 fixed = 112 total (was 124) |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | unit | 1m 52s | hadoop-common in the patch passed. |
| +1 | unit | 0m 48s | hadoop-hdfs in the patch passed. |
| +1 | unit | 5m 6s | hadoop-yarn in the patch passed. |
| +1 | unit | 4m 51s | hadoop-mapreduce-project in the patch passed. |
| +1 | asflicense | 1m 48s | The patch does not generate ASF License warnings. |
| | | 38m 41s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13673 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841317/HADOOP-13673.01.patch |
| Optional Tests | asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 9cfd9e08c35e 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e0fa492 |
| shellcheck | v0.4.5 |
| shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/11177/artifact/patchprocess/diff-patch-shellcheck.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/11177/artifact/patchprocess/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11177/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11177/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users

[jira] [Comment Edited] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712531#comment-15712531
 ] 

Allen Wittenauer edited comment on HADOOP-13673 at 12/1/16 5:38 PM:


-01:
* some basic docs
* hdfs/yarn/hadoop now support account switching
* various bugs

Some things I've been doing for testing:

hadoop-env.sh:
{code}
HDFS_NAMENODE_USER=hdfs
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
YARN_RESOURCEMANAGER_USER=yarn
{code}

{code}
root# yarn --daemon start resourcemanager
yarn$ yarn --daemon start resourcemanager
root# hdfs --daemon start datanode
hdfs$ hdfs --daemon start namenode
root# sbin/start-all.sh
root# sbin/stop-all.sh
hdfs$ start-dfs.sh
root# start-dfs.sh
yarn$ start-yarn.sh
root# start-yarn.sh
{code}

TODO:
* verify that users can run daemons as root if they set _USER=root 




was (Author: aw):
-01:
* some basic docs
* hdfs/yarn/hadoop now support account switching
* various bugs

Some things I've been doing for testing:

hadoop-env.sh:
{code}
HDFS_NAMENODE_USER=hdfs
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
YARN_RESOURCEMANAGER_USER=yarn
{code}

{code}
root# yarn --daemon start resourcemanager
yarn$ yarn --daemon start resourcemanager
root# hdfs --daemon start datanode
hdfs$ hdfs --daemon start namenode
root# sbin/start-all.sh
root# sbin/stop-all.sh
hdfs$ start-dfs.sh
root# start-dfs.sh
yarn$ start-yarn.sh
root# start-yarn.sh
{code}



> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712531#comment-15712531
 ] 

Allen Wittenauer edited comment on HADOOP-13673 at 12/1/16 5:35 PM:


-01:
* some basic docs
* hdfs/yarn/hadoop now support account switching
* various bugs

Some things I've been doing for testing:

hadoop-env.sh:
{code}
HDFS_NAMENODE_USER=hdfs
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
YARN_RESOURCEMANAGER_USER=yarn
{code}

{code}
root# yarn --daemon start resourcemanager
yarn$ yarn --daemon start resourcemanager
root# hdfs --daemon start datanode
hdfs$ hdfs --daemon start namenode
root# sbin/start-all.sh
root# sbin/stop-all.sh
hdfs$ start-dfs.sh
root# start-dfs.sh
yarn$ start-yarn.sh
root# start-yarn.sh
{code}




was (Author: aw):
-01:
* some basic docs
* hdfs/yarn/hadoop now support account switching
* various bugs

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update scripts to be smarter when running with privilege

2016-12-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Summary: Update scripts to be smarter when running with privilege  (was: 
Update sbin/start-* and sbin/stop-* to be smarter)

> Update scripts to be smarter when running with privilege
> 
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-12-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Attachment: HADOOP-13673.01.patch

-01:
* some basic docs
* hdfs/yarn/hadoop now support account switching
* various bugs

> Update sbin/start-* and sbin/stop-* to be smarter
> -
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch, HADOOP-13673.01.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty

2016-12-01 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704129#comment-15704129
 ] 

John Zhuge edited comment on HADOOP-13597 at 12/1/16 4:37 PM:
--

Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer so that all KMS unit tests exercise 
KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use the Hadoop shell script framework
- Obsolete KMSJMXServlet. HttpServer2 uses the compatible 
org.apache.hadoop.jmx.JMXJsonServlet, which has more features.
- Obsolete the HTTP admin port for the Tomcat Manager GUI, which does not seem 
to work anyway
- Obsolete {{kms.sh version}}, which prints the Tomcat version

TESTING DONE
- All hadoop-kms unit tests. MiniKMS equals full KMS.
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status
- JMX works
- /logs works

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001
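For orientation, the HttpServer2 building block behind KMSHttpServer can be
stood up roughly as in this hedged sketch (not the patch itself; the port is
illustrative, and HttpServer2 expects a "webapps/<name>" resource on the
classpath):

{code}
// Hedged sketch of using HttpServer2 directly, the class the patch builds on.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer2;

public class KmsHttpSketch {
  public static void main(String[] args) throws Exception {
    HttpServer2 server = new HttpServer2.Builder()
        .setName("kms")                               // resolves webapps/kms
        .addEndpoint(URI.create("http://localhost:9600"))  // illustrative port
        .setConf(new Configuration())
        .build();
    server.start();   // default servlets include /jmx, /conf and /logLevel
    System.out.println("Listening on " + server.getConnectorAddress(0));
    server.stop();
  }
}
{code}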


was (Author: jzhuge):
Patch 001
- KMSHttpServer based on HttpServer2
- Redirect MiniKMS to KMSHttpServer so that all KMS unit tests exercise 
KMSHttpServer
- Add kms-default.xml
- Add Jetty properties including SSL properties
- Convert hadoop-kms from war to jar
- Rewrite kms.sh to use the Hadoop shell script framework
- Obsolete the HTTP admin port for the Tomcat Manager GUI, which does not seem 
to work anyway
- Obsolete {{kms.sh version}}, which prints the Tomcat version

TESTING DONE
- All hadoop-kms unit tests. MiniKMS equals full KMS.
- Non-secure REST APIs
- Non-secure “hadoop key” commands
- SSL REST APIs
- kms.sh run/start/stop/status
- JMX works
- /logs works

TODO
- Set HTTP request header size by env KMS_MAX_HTTP_HEADER_SIZE
- Add static web content /index.html
- More ad-hoc testing
- Integration testing
- Update docs: index.md.vm

TODO in new JIRAs:
- Integrate with Hadoop SSL server configuration
- Full SSL server configuration: 
includeProtocols/excludeProtocols/includeCipherSuites/excludeCipherSuites, etc.
- Design common Http server configuration. Common properties in 
“-site.xml” with config prefix, e.g., “hadoop.kms.”.
- Design HttpServer2 configuration-based builder
- Share web apps code in Common, HDFS, and YARN

My private branch: https://github.com/jzhuge/hadoop/tree/HADOOP-13597.001

> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13853) S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts

2016-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712269#comment-15712269
 ] 

Steve Loughran commented on HADOOP-13853:
-

note that given all the other overheads of a commit, this is not the bottleneck.

> S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts
> ---
>
> Key: HADOOP-13853
> URL: https://issues.apache.org/jira/browse/HADOOP-13853
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Looking at traces of work, there's invariably a PUT of a _SUCCESS at the end, 
> which, with disk output, adds the overhead of creating, writing to and then 
> reading a 0 byte file.
> With a lazy create, the creation could be postponed until the first write, 
> with special handling in the {{startUpload()}} operation to return a null 
> stream, rather than reopen the file. Saves on some disk IO: create, read, 
> delete



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.

2016-12-01 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712275#comment-15712275
 ] 

Vishwajeet Dusane commented on HADOOP-13257:


Thanks [~liuml07] for taking the time to review and for the feedback.

{quote}
1. I don't have an Azure subscription; did you finish a successful full test 
run integrated against the Azure Data Lake back-end with this patch?
{quote}
I do have an internal Jenkins setup that executes the contract tests 
periodically against the Azure Data Lake back-end. All tests are passing 
consistently. This patch does have a dependency on HDFS-11132, and 
[~ste...@apache.org] supports that change. Could [~ste...@apache.org], 
[~chris.douglas], [~cnauroth] or [~liuml07] please commit HDFS-11132 to 
unblock this patch, if there are no further comments to address on it?

{quote}
2. In TestAdlSupportedCharsetInPath, is failureReport ever reported? Naming 
private helper ...
{quote}

Good catch: {{failureReport}} is no longer needed. I had added it earlier to 
get a collective report after concurrent test execution; the same applies to 
{{assertTrue}} and {{assertFalse}}. I will incorporate the comment and update 
the patch.

{quote}
3. In the TestMetadata.java, we can make the parent a static variable as it's 
used in all test cases.
{quote}
Yes, I will make {{parent}} a member variable to limit its scope to 
{{TestMetadata.java}}.

{quote}
6. When generating Parameterized.Parameters, can we use loops? They're clearer 
for covering different cases.
{quote}
Sorry, I am not sure I understood the comment. Could you please clarify?
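For reference, generating {{Parameterized.Parameters}} with a loop typically
looks like the hypothetical illustration below (not taken from the patch):

{code}
// Hypothetical illustration of loop-generated JUnit parameters instead of
// listing each case by hand.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class CharsetPathExampleTest {
  @Parameterized.Parameters(name = "path-{index}")
  public static Collection<Object[]> data() {
    List<Object[]> cases = new ArrayList<>();
    for (char c = 'a'; c <= 'z'; c++) {   // one test case per character
      cases.add(new Object[]{"dir-" + c});
    }
    return cases;
  }

  private final String path;

  public CharsetPathExampleTest(String path) {
    this.path = path;
  }

  @Test
  public void testPathIsNonEmpty() {
    Assert.assertFalse(path.isEmpty());
  }
}
{code}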

{quote}
7. The follow methods can be simplified
{quote}
Initially I had the simplified version of the code you proposed. The issue I 
faced was that the output was flooded with logs, since 
{{TestAdlSupportedCharsetInPath}} has 470+ tests. Hence I added a check to 
dump the log only when a path is not found.
 
{quote}
Checkstyle warnings are related if you run ...
{quote}
Oh, I missed these warnings; I noticed the +1 from Jenkins and did not read 
through the report. Thanks [~liuml07] for highlighting them.



> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-13257.001.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> more closely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13853) S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts

2016-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712267#comment-15712267
 ] 

Steve Loughran commented on HADOOP-13853:
-

DEBUG-level logs of the work used to create a marker file.
{code}
2016-12-01 15:22:29,727 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ABlockOutputStream (S3ABlockOutputStream.java:(170)) - Initialized 
S3ABlockOutputStream for {bucket=hwdev-steve-new, 
key='spark-cloud/S3ANumbersSuite/numbers_rdd_tests/_SUCCESS'} output to 
FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Writing, dataSize=0, limit=8388608}
2016-12-01 15:22:29,728 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ABlockOutputStream (S3ABlockOutputStream.java:close(333)) - 
S3ABlockOutputStream{{bucket=hwdev-steve-new, 
key='spark-cloud/S3ANumbersSuite/numbers_rdd_tests/_SUCCESS'}, 
blockSize=8388608, 
activeBlock=FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Writing, dataSize=0, limit=8388608}}: Closing block #1: current block= 
FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Writing, dataSize=0, limit=8388608}
2016-12-01 15:22:29,728 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ABlockOutputStream (S3ABlockOutputStream.java:putObject(386)) - Executing 
regular upload for {bucket=hwdev-steve-new, 
key='spark-cloud/S3ANumbersSuite/numbers_rdd_tests/_SUCCESS'}
2016-12-01 15:22:29,728 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ADataBlocks (S3ADataBlocks.java:startUpload(247)) - Start datablock upload
2016-12-01 15:22:29,728 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ADataBlocks (S3ADataBlocks.java:enterState(154)) - 
FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Writing, dataSize=0, limit=8388608}: entering state Upload
2016-12-01 15:22:29,729 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ABlockOutputStream (S3ABlockOutputStream.java:clearActiveBlock(212)) - 
Clearing active block
2016-12-01 15:22:29,729 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:incrementPutStartStatistics(1169)) - PUT start 0 bytes
2016-12-01 15:22:29,730 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - object_put_requests += 1  -> 
 4
2016-12-01 15:22:29,908 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:incrementPutCompletedStatistics(1186)) - PUT completed 
success=true; 0 bytes
2016-12-01 15:22:29,908 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3AFileSystem 
(S3AStorageStatistics.java:incrementCounter(60)) - 
object_put_requests_completed += 1  ->  4
2016-12-01 15:22:29,908 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3ADataBlocks 
(S3ADataBlocks.java:enterState(154)) - 
FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Upload, dataSize=0, limit=8388608}: entering state Closed
2016-12-01 15:22:29,909 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3ADataBlocks 
(S3ADataBlocks.java:close(269)) - Closed 
FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Closed, dataSize=0, limit=8388608}
2016-12-01 15:22:29,909 [s3a-transfer-shared-pool1-t5] DEBUG s3a.S3ADataBlocks 
(S3ADataBlocks.java:innerClose(743)) - Closing 
FileBlock{destFile=/Users/stevel/Projects/Hortonworks/Projects/sparkwork/spark-cloud-examples/cloud-examples/target/tmp/s3ablock8507376768330281400.tmp,
 state=Closed, dataSize=0, limit=8388608}
2016-12-01 15:22:29,909 [ScalaTest-main-running-S3ANumbersSuite] DEBUG 
s3a.S3ABlockOutputStream (S3ABlockOutputStream.java:close(360)) - Upload 
complete for {bucket=hwdev-steve-new, 
key='spark-cloud/S3ANumbersSuite/numbers_rdd_tests/_SUCCESS'}
{code}

> S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts
> ---
>
> Key: HADOOP-13853
> URL: https://issues.apache.org/jira/browse/HADOOP-13853
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Looking at traces of work, there's invariably a PUT of a _SUCCESS at the end, 
> which, with disk output, adds the overhead of creating, writing to and then 
> reading a 0 byte file.
> With a lazy create, the creation could be postponed until the first write, 
> with special handling in the {{startUpload()}} operation to return a null 
> stream, rather than reopen the file. Saves on some disk IO: create, read, 
> delete

[jira] [Created] (HADOOP-13853) S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts

2016-12-01 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13853:
---

 Summary: S3ADataBlocks.DiskBlock to lazy create dest file for 
faster 0-byte puts
 Key: HADOOP-13853
 URL: https://issues.apache.org/jira/browse/HADOOP-13853
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


Looking at traces of work, there's invariably a PUT of a _SUCCESS at the end, 
which, with disk output, adds the overhead of creating, writing to and then 
reading a 0 byte file.

With a lazy create, the creation could be postponed until the first write, with 
special handling in the {{startUpload()}} operation to return a null stream, 
rather than reopen the file. Saves on some disk IO: create, read, delete
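The core of the idea can be sketched as follows; this is hypothetical,
simplified code, not the actual S3ADataBlocks implementation:

{code}
// Hypothetical, simplified sketch of the lazy-create idea: defer creating the
// destination file until the first byte is written, so a 0-byte _SUCCESS
// marker never touches the disk.
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class LazyDiskBlock {
  private final File destDir;
  private File destFile;          // created lazily on first write
  private OutputStream out;

  LazyDiskBlock(File destDir) {
    this.destDir = destDir;
  }

  void write(byte[] b, int off, int len) throws IOException {
    if (len == 0) {
      return;                     // nothing to persist yet
    }
    if (out == null) {            // first real write: create the file now
      destFile = File.createTempFile("s3ablock", ".tmp", destDir);
      out = new BufferedOutputStream(new FileOutputStream(destFile));
    }
    out.write(b, off, len);
  }

  // Mirrors the proposed special case: return null when nothing was written,
  // letting the caller PUT an empty body instead of reopening a file that
  // never needed to exist.
  InputStream startUpload() throws IOException {
    if (out == null) {
      return null;                // 0-byte block: no file was ever created
    }
    out.close();
    return new FileInputStream(destFile);
  }
}
{code}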



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13022) S3 MD5 check fails on Server Side Encryption-KMS with AWS and default key is used

2016-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13022:

Target Version/s: 2.9.0
Priority: Minor  (was: Major)

> S3 MD5 check fails on Server Side Encryption-KMS with AWS and default key is 
> used
> -
>
> Key: HADOOP-13022
> URL: https://issues.apache.org/jira/browse/HADOOP-13022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Leonardo Contreras
>Priority: Minor
>
> When server side encryption with "aws:kms" value and no custom key is used in 
> S3A Filesystem, the AWSClient fails when verifing Md5:
> {noformat}
> Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
> verify integrity of data upload.  Client calculated content hash (contentMD5: 
> 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 
> c29fcc646e17c348bce9cca8f9d205f5 in hex) calculated by Amazon S3.  You may 
> need to delete the data stored in Amazon S3. (metadata.contentMD5: null, 
> md5DigestStream: 
> com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@65d9e72a, 
> bucketName: abuse-messages-nonprod, key: 
> venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1492)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1295)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1272)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:969)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1888)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2077)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2074)
>   at scala.Option.map(Option.scala:145)
>   at 
> org.apache.spark.SparkContext.setCheckpointDir(SparkContext.scala:2074)
>   at 
> org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:237)
> {noformat}
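A minimal sketch of the reported setup (the algorithm property is the standard
S3A one; the bucket name is a placeholder):

{code}
// Reproduction sketch per the report: SSE-KMS with no custom key configured,
// so the account's default KMS key is used; mkdirs() then creates the fake
// directory object that fails MD5 verification.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SseKmsMkdirRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.server-side-encryption-algorithm", "aws:kms");
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    fs.mkdirs(new Path("/checkpoint"));  // fails per the stack trace above
  }
}
{code}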



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15712133#comment-15712133
 ] 

Hadoop QA commented on HADOOP-13835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
59s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 10m 33s{color} | 
{color:red} root generated 25 new + 7 unchanged - 0 fixed = 32 total (was 7) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 28s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
31s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841282/HADOOP-13835.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux d769bad1bead 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f7613b |
| Default Java | 1.8.0_111 |
| cc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11176/artifact/patchprocess/diff-compile-cc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11176/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11176/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11176/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask
 . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11176/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2016-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711975#comment-15711975
 ] 

Steve Loughran commented on HADOOP-13811:
-

I don't see it either. Why not try logging the URL returned by 
{{this.getClass().getClassLoader().getResource("org/apache/hadoop/fs/s3a/S3AFileSystem.class")}}?


FWIW, Hadoop 2.8 has a built-in entry point, org.apache.hadoop.util.FindClass, 
whose sole purpose is to track down where things are coming from and to assert 
that resources/classes are on the classpath.
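A sketch of the suggested diagnostic, where MyJob stands in for any class in
the client application (note {{ClassLoader.getResource()}} takes no leading
slash):

{code}
// MyJob is a placeholder for any class in the client application.
java.net.URL where = MyJob.class.getClassLoader()
    .getResource("org/apache/hadoop/fs/s3a/S3AFileSystem.class");
System.out.println("S3AFileSystem loaded from: " + where);
{code}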

> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Sometimes, occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-13835:
---
Attachment: HADOOP-13835.005.patch

-005
Fix the compile errors in mapreduce.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711838#comment-15711838
 ] 

Hadoop QA commented on HADOOP-13852:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841273/HADOOP-13852-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux f55528d8ffbe 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f7613b |
| Default Java | 1.8.0_111 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11175/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11175/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}

[jira] [Commented] (HADOOP-13849) Bzip2 java-builtin and system-native have almost the same compress speed

2016-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711831#comment-15711831
 ] 

Steve Loughran commented on HADOOP-13849:
-

Well, if you want to work on it, feel free. 

however, know that the native codec uses the standard {{libbz2}}; there's not 
much that can be done in the Hadoop code to speed that up, other than improving 
how data is moved between the Java memory structures and those of libbz2... if 
there are memory copies taking place, that could be hurting performance. 
Anything that can help there would be good.


bq. I think the "system native" should have better compress/decompress 
performance than "java builtin".

That's something to explore. The latest Java 8 compilers are fast, and if the 
algorithms aren't doing lots of object creation, then bit operations in Java 
should be on a par with C-language operations against general registers. Where 
you would expect differences is if the native code uses special CPU registers 
and instructions (for example, Intel SSE2) for a significant performance gain. 
I don't know if bzip2 does that.

The fun part in benchmarking is isolating things. For codec performance, maybe 
have some test data pre-generated and cached in RAM, in standard formats (Avro, 
ORC), then compress it with the different codecs to RAM rather than HDD, so 
that the compression code is isolated from disk IO, etc.

If the isolated native code is faster than the Java one, yet the end-to-end 
times stay the same, then the implication is that the bottleneck is elsewhere 
in the workflow, not the codec. Again: that's interesting information.

bq. My hardware CPU/Memory/Network bandwidth/Disk bandwidth are not the bottleneck

One of them always is, and it can be things like CPU cache latencies or excess 
synchronization in the code; even branch misprediction in the CPU can hurt 
efficiency. FWIW, flame graphs are currently the tool of choice for visualising 
performance during microbenchmarks.
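For reference, an isolated in-RAM comparison along those lines could look like
the sketch below. It is illustrative rather than a rigorous benchmark; it
reuses the {{io.compression.codec.bzip2.library}} switch from the report, and
"system-native" only takes effect when the native hadoop library and libbz2
are loadable.

{code}
// Compresses in-memory data to memory with both bzip2 library settings,
// taking disk IO out of the picture entirely.
import java.io.ByteArrayOutputStream;
import java.util.Random;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.CompressionOutputStream;

public class Bzip2IsolationBench {
  public static void main(String[] args) throws Exception {
    byte[] data = new byte[16 * 1024 * 1024];
    new Random(42).nextBytes(data);          // deterministic test input
    for (String lib : new String[]{"java-builtin", "system-native"}) {
      Configuration conf = new Configuration();
      conf.set("io.compression.codec.bzip2.library", lib);
      BZip2Codec codec = new BZip2Codec();
      codec.setConf(conf);
      ByteArrayOutputStream sink = new ByteArrayOutputStream();
      long t0 = System.nanoTime();
      CompressionOutputStream out = codec.createOutputStream(sink);
      out.write(data);
      out.finish();
      out.close();
      long ms = (System.nanoTime() - t0) / 1_000_000;
      System.out.println(lib + ": " + ms + " ms, " + sink.size() + " bytes");
    }
  }
}
{code}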





> Bzip2 java-builtin and system-native have almost the same compress speed
> 
>
> Key: HADOOP-13849
> URL: https://issues.apache.org/jira/browse/HADOOP-13849
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.0
> Environment: os version: redhat6
> hadoop version: 2.6.0
> native bzip2 version: bzip2-devel-1.0.5-7.el6_0.x86_64
>Reporter: Tao Li
>
> I tested bzip2 java-builtin and system-native compression, and I found the 
> compression speed is almost the same. (I think system-native should have 
> better compression speed than java-builtin.)
> My test case:
> 1. input file: 2.7GB text file without compression
> 2. after bzip2 java-builtin compress: 457MB, 12min 4sec
> 3. after bzip2 system-native compress: 457MB, 12min 19sec
> My MapReduce Config:
> conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false");
> conf.set("mapreduce.output.fileoutputformat.compress", "true");
> conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
> conf.set("mapreduce.output.fileoutputformat.compress.codec", 
> "org.apache.hadoop.io.compress.BZip2Codec");
> conf.set("io.compression.codec.bzip2.library", "java-builtin"); // for 
> java-builtin
> conf.set("io.compression.codec.bzip2.library", "system-native"); // for 
> system-native
> And I am sure I have enabled the bzip2 native library; the output of the 
> command "hadoop checknative -a" is as follows:
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4: true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-12-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711793#comment-15711793
 ] 

Steve Loughran commented on HADOOP-13449:
-

Some of these tests are failing because DynamoDB is trying to initialize, even 
when authenticating anonymously against an external read-only store.

We need to consider how to support deployments where some object stores are 
read-only with no DynamoDB, and perhaps fall back to no DB if auth fails. There 
is also the situation where one store has an authoritative DB and another has 
none... it'll have to be on a per-object-store basis.

{code}

testAnonymousProvider(org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider)  
Time elapsed: 0.91 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: initializing  on 
s3a://landsat-pds/scene_list.gz: 
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Request is 
missing Authentication Token (Service: AmazonDynamoDBv2; Status Code: 400; 
Error Code: MissingAuthenticationTokenException; Request ID: 
NS80UK0G6OKHI6IR7KCIV1VRONVV4KQNSO5AEMVJF66Q9ASUAAJG): Request is missing 
Authentication Token (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
MissingAuthenticationTokenException; Request ID: 
NS80UK0G6OKHI6IR7KCIV1VRONVV4KQNSO5AEMVJF66Q9ASUAAJG)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1529)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1167)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1722)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1698)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:743)
at 
com.amazonaws.services.dynamodbv2.document.DynamoDB.createTable(DynamoDB.java:96)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.createTable(DynamoDBMetadataStore.java:413)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:187)
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:85)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3246)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3295)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3269)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
at 
org.apache.hadoop.fs.s3a.ITestS3AAWSCredentialsProvider.testAnonymousProvider(ITestS3AAWSCredenti
{code}
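A hedged sketch of the fallback idea only; the {{initialize()}} signature is
assumed for illustration (per the stack trace, the real binding happens in
{{S3Guard.getMetadataStore()}}):

{code}
// Illustrative only, not the actual S3Guard API: on metadata store init
// failure (e.g. anonymous credentials), degrade to NullMetadataStore rather
// than failing the whole S3AFileSystem initialization.
static MetadataStore bindMetadataStore(FileSystem fs) throws IOException {
  MetadataStore store = new DynamoDBMetadataStore();
  try {
    store.initialize(fs);          // fails here with anonymous credentials
    return store;
  } catch (IOException | AmazonClientException e) {
    LOG.warn("DynamoDB metadata store unavailable; falling back to "
        + "NullMetadataStore for {}", fs.getUri(), e);
    MetadataStore fallback = new NullMetadataStore();
    fallback.initialize(fs);       // null store init is a no-op
    return fallback;
  }
}
{code}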

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, 
> HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, 
> HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, 
> HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, 
> HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711777#comment-15711777
 ] 

Hadoop QA commented on HADOOP-13835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  8m 
48s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  8m 48s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 48s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 44s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841247/HADOOP-13835.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 291d79ac56cf 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f7613b |
| Default Java | 1.8.0_111 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11174/artifact/patchprocess/patch-compile-root.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11174/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11174/artifact/patchprocess/patch-compile-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11174/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11174/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11174/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common 

[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13852:

Affects Version/s: 3.0.0-alpha1
   Status: Patch Available  (was: Open)

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x, as the shim layer 
> rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to 
> have the Hadoop version (currently set to pom.version) overridden manually.
> This will not affect version names of artifacts, merely the declared Hadoop 
> version visible in {{VersionInfo.getVersion()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13852:

Attachment: HADOOP-13852-001.patch

The patch allows a build run with an argument like 
{{-Ddeclared.hadoop.version=2.11}} to specify the version string returned by 
the {{VersionInfo.getVersion()}} API call.

When you do a build with that option, Spark DF test runs do appear to work 
again.
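
As a quick illustration of what the override affects, here is a hedged sketch 
(the property name comes from this patch; the demo class is invented, and the 
printed version strings are examples only):

{code:java}
import org.apache.hadoop.util.VersionInfo;

// Hypothetical demo class. In a stock trunk build this prints the pom
// version, e.g. "3.0.0-alpha2"; in a build run with
// -Ddeclared.hadoop.version=2.11 the same call would report "2.11",
// while artifact names remain unchanged.
public class DeclaredVersionDemo {
  public static void main(String[] args) {
    System.out.println("Declared Hadoop version: " + VersionInfo.getVersion());
  }
}
{code}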

> hadoop build to allow hadoop version property to be explicitly set
> --
>
> Key: HADOOP-13852
> URL: https://issues.apache.org/jira/browse/HADOOP-13852
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13852-001.patch
>
>
> Hive (and, transitively, Spark) won't start on Hadoop 3.x because the shim 
> layer rejects Hadoop v3. As a workaround pending a Hive fix, allow the build 
> to have the Hadoop version (currently set to pom.version) overridden 
> manually.
> This will not affect the version names of artifacts, merely the declared 
> Hadoop version visible in {{VersionInfo.getVersion()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set

2016-12-01 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13852:
---

 Summary: hadoop build to allow hadoop version property to be explicitly set
 Key: HADOOP-13852
 URL: https://issues.apache.org/jira/browse/HADOOP-13852
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


Hive (and, transitively, Spark) won't start on Hadoop 3.x because the shim layer 
rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to have 
the Hadoop version (currently set to pom.version) overridden manually.

This will not affect the version names of artifacts, merely the declared Hadoop 
version visible in {{VersionInfo.getVersion()}}.
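
To make the failure mode concrete, a hypothetical sketch of the kind of 
shim-layer major-version check that rejects Hadoop 3.x follows (Hive's actual 
shim code differs; the class name and message are invented for illustration):

{code:java}
import org.apache.hadoop.util.VersionInfo;

// Hypothetical shim check that accepts only Hadoop 2.x. Overriding the
// declared version at build time lets such a check pass until Hive is fixed.
public class ShimVersionCheckSketch {
  public static void main(String[] args) {
    String version = VersionInfo.getVersion();  // e.g. "3.0.0-alpha2"
    String major = version.split("\\.")[0];     // "3" for any 3.x build
    if (!"2".equals(major)) {
      throw new IllegalStateException(
          "Unrecognized Hadoop major version: " + version);
    }
  }
}
{code}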



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13850:

Assignee: Mingliang Liu

> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-13850-HADOOP-13345.000.patch
>
>
> People not using s3guard really don't need to know this on every single use 
> of the S3A client. 
> {code}
> INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
> NullMetadataStore for s3a filesystem
> {code}
> Downgrade to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13850) s3guard to log choice of metadata store at debug

2016-12-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13850:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to the HADOOP-13345 branch. Thanks!
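
For reference, a hedged sketch of the change (the message text is 
reconstructed from the INFO line quoted below; the class here is invented and 
SLF4J-style logging is assumed, so the patch's actual code may differ):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: the one-line fix is moving the message from info to
// debug, so it no longer prints on every S3A client instantiation.
public class S3GuardLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger("s3guard.S3Guard");

  static void logMetadataStoreChoice(Object store) {
    LOG.debug("Using {} for {} filesystem",
        store.getClass().getSimpleName(), "s3a");
  }
}
{code}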

> s3guard to log choice of metadata store at debug
> 
>
> Key: HADOOP-13850
> URL: https://issues.apache.org/jira/browse/HADOOP-13850
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-13850-HADOOP-13345.000.patch
>
>
> People not using s3guard really don't need to know this on every single use 
> of the S3A client. 
> {code}
> INFO  s3guard.S3Guard (S3Guard.java:getMetadataStore(77)) - Using 
> NullMetadataStore for s3a filesystem
> {code}
> Downgrade to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-13835:
---
Attachment: HADOOP-13835.004.patch

Thanks for the review [~ajisakaa]! I've addressed your feedback in the latest 
patch.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2016-12-01 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15711277#comment-15711277
 ] 

Akira Ajisaka commented on HADOOP-13835:


Would you remove the following setting in the rat plugin? (The {{<exclude>}} 
element tags appear to have been stripped in the mail rendering.)
{code:title=hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml}
<exclude>src/main/native/gtest/**/*</exclude>
{code}
I don't think the following changes are needed.
{code:title=CMakeLists.txt}
-include_directories(SYSTEM ${SRC}/gtest/include)
+# include_directories(SYSTEM ${SRC}/gtest/include)
 
-set(CMAKE_MACOSX_RPATH TRUE)
 set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
-set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
{code}
Adding {{set(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)}} back is needed for me to 
run {{mvn test -Pnative}} in the hadoop-mapreduce-client-nativetask module 
successfully on CentOS 7.2.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org