[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642834#comment-16642834
 ] 

Hadoop QA commented on HADOOP-15785:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 3937 unchanged - 46 fixed = 3937 total (was 3983) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942947/HADOOP-15785.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1ff100a50a1d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a30b1d1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15320/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15320/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [JDK10] Javadoc build fails on JDK 10 in hadoop-common

[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642826#comment-16642826
 ] 

Takanobu Asanuma commented on HADOOP-15785:
---

Thanks for the great work, [~dineshchitlangia]! I will review your patch this 
week.

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch, HADOOP-15785.002.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by  ticket>
> ...
> {noformat}
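As context for the error above: under JDK 10 the standard doclet runs stricter HTML checks, so bare angle brackets in a javadoc comment are rejected as malformed HTML. Below is a minimal, hypothetical illustration of the pattern and one common fix; the identifiers are invented, and the actual offending text lives in Client.java and is addressed by the attached patches.

{code}
// Hypothetical example only; not the actual Client.java comment.

/** Connections to servers are keyed by <address, protocol, user>. */
class Before {}   // the javadoc above is rejected by JDK 10 doclint as malformed HTML

/** Connections to servers are keyed by {@code <address, protocol, user>}. */
class After {}    // {@code ...} (or &lt;...&gt; escaping) satisfies doclint
{code}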



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642821#comment-16642821
 ] 

Hadoop QA commented on HADOOP-15828:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 1 unchanged - 6 fixed = 1 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestMachineList |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15828 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942945/HADOOP-15828.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5837689b6196 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 08bb6c49 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15318/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15318/testReport/ |
| Max. process+thread count | 1368 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642812#comment-16642812
 ] 

Hadoop QA commented on HADOOP-15830:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 172 unchanged - 30 fixed = 175 total (was 202) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15830 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942949/HADOOP-15830.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f875bd07863 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 08bb6c49 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15319/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15319/testReport/ |
| Max. process+thread count | 1428 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Commented] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642793#comment-16642793
 ] 

Hudson commented on HADOOP-15818:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15144 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15144/])
HADOOP-15818. Fix deprecated maven-surefire-plugin configuration in (aajisaka: 
rev a30b1d1824201df45535706462505f07bb9776eb)
* (edit) hadoop-common-project/hadoop-kms/pom.xml


> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
> Fix For: 2.10.0, 3.2.0
>
> Attachments: 425.patch
>
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}
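For reference, the usual replacement for the deprecated parameter looks roughly like the sketch below. The mapping follows the general Surefire guidance (forkMode=once corresponds to forkCount=1 with reuseForks=true; forkMode=always corresponds to forkCount=1 with reuseForks=false); the values actually chosen for hadoop-kms are the ones in 425.patch.

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- illustrative values; replaces the deprecated <forkMode> parameter -->
    <forkCount>1</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
{code}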



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642794#comment-16642794
 ] 

Hudson commented on HADOOP-15775:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15144 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15144/])
HADOOP-15775. [JDK9] Add missing javax.activation-api dependency. (tasanuma: 
rev 9bbeb5248640ec5df8420a5e359436375f73e0ce)
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) hadoop-project/pom.xml
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml


> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch, HADOOP-15775.05.patch, 
> HADOOP-15775.06.patch
>
>
> Many unit tests fail due to missing java.activation module. This failure can 
> be fixed by adding javax.activation-api as third-party dependency.
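A minimal sketch of the dependency described above; the version and scope shown are illustrative, and the exact coordinates used are in the attached patches.

{code:xml}
<dependency>
  <groupId>javax.activation</groupId>
  <artifactId>javax.activation-api</artifactId>
  <version>1.2.0</version>
</dependency>
{code}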



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15810) TLS1.3 support

2018-10-08 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642777#comment-16642777
 ] 

Akira Ajisaka commented on HADOOP-15810:


TLS 1.3 is supported in Java 11. Moved this from being a sub-task of the Java 9 support work to a standalone issue.

> TLS1.3 support
> --
>
> Key: HADOOP-15810
> URL: https://issues.apache.org/jira/browse/HADOOP-15810
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: t oo
>Priority: Major
>
> enable tls1.3 support in hadoop because...#security



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15810) TLS1.3 support

2018-10-08 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15810:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-11123)

> TLS1.3 support
> --
>
> Key: HADOOP-15810
> URL: https://issues.apache.org/jira/browse/HADOOP-15810
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: t oo
>Priority: Major
>
> enable tls1.3 support in hadoop because...#security



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-08 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15775:
---
Fix Version/s: (was: 3.2.0)
   3.3.0

Thank you, [~tasanuma0829]!

> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Fix For: 3.3.0
>
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch, HADOOP-15775.05.patch, 
> HADOOP-15775.06.patch
>
>
> Many unit tests fail due to missing java.activation module. This failure can 
> be fixed by adding javax.activation-api as third-party dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-08 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15775:
--
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch, HADOOP-15775.05.patch, 
> HADOOP-15775.06.patch
>
>
> Many unit tests fail due to missing java.activation module. This failure can 
> be fixed by adding javax.activation-api as third-party dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-08 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642772#comment-16642772
 ] 

Takanobu Asanuma commented on HADOOP-15775:
---

Committed to trunk. Thanks for the contribution, [~ajisakaa]! Thanks for your 
support, [~apurtell]!

> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch, HADOOP-15775.05.patch, 
> HADOOP-15775.06.patch
>
>
> Many unit tests fail due to missing java.activation module. This failure can 
> be fixed by adding javax.activation-api as third-party dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15818:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, and branch-2. Thanks [~vbmudalige] for the 
contribution.

> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
> Fix For: 2.10.0, 3.2.0
>
> Attachments: 425.patch
>
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642764#comment-16642764
 ] 

ASF GitHub Bot commented on HADOOP-15818:
-

Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/425


> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
> Attachments: 425.patch
>
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #425: HADOOP-15818. Fix deprecated maven-surefire-plugin...

2018-10-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/hadoop/pull/425


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642722#comment-16642722
 ] 

Hadoop QA commented on HADOOP-15818:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
42m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
58s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942937/425.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux d8f54822245e 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15316/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15316/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  

[jira] [Comment Edited] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642716#comment-16642716
 ] 

BELUGA BEHR edited comment on HADOOP-15830 at 10/9/18 2:42 AM:
---

I just submitted another patch that is a bit more aggressive. It has all of the previous changes, plus:

#  Removed some instances of "log and throw" error handling (try... catch... log... throw). This is an anti-pattern and should be avoided: log or throw, do not do both.
# Applied some code formatting to improve readability and checkstyle compliance in certain areas.
# Removed a lot of dead whitespace.
# Removed logging guards ({{LOG.isDebugEnabled()}}) in favor of SLF4J parameterized logging (see the sketch below).
# Removed many instances of logging that include {{Thread.currentThread().getName()}} to record the thread name performing the logging. Emitting the thread name can be configured in the logging framework and does not need to be done explicitly by the caller.

Pick a patch that works for you :)
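A minimal, hypothetical illustration of item 4 (the class and messages are invented, not taken from the patch): SLF4J parameterized logging defers message formatting until the level is enabled, so the explicit guard becomes unnecessary, and the thread name can come from the logging pattern (e.g. %t) rather than from the message itself.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {  // hypothetical class, not from the patch
  private static final Logger LOG = LoggerFactory.getLogger(LoggingExample.class);

  void handle(Object call) {
    // Before: explicit guard plus string concatenation, with the thread name
    // built into the message by the caller.
    if (LOG.isDebugEnabled()) {
      LOG.debug(Thread.currentThread().getName() + ": processing call " + call);
    }

    // After: parameterized logging; formatting only happens when DEBUG is enabled.
    LOG.debug("processing call {}", call);
  }
}
{code}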


was (Author: belugabehr):
I just submitted another patch which I was a bit more aggressive on.  It has 
all the changes before, plus:

#  Removed some instances of "log and throw" error handling.  This is an 
anti-pattern and should be avoided. (try.. catch... log... throw)  Log or 
throw; do not do both.
# Applied some code formatting to improve readability and check-style of 
certain areas
# Removed a lot of dead white space
# Remove logging guards {{LOG.isDebugEnabled()}} in favor of SLF4j parameter 
logging
# Removed many instances of logging containing 
{{Thread.currentThread().getName()}} to record the thread name performing the 
logging.  Emitting the thread name can be configured with the logging framework 
and does not need to be done explicitly by the caller.

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15830.2.patch, HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
>   LinkedList responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
> Iterator iter = responseQueue.listIterator(0);
> while (iter.hasNext()) {
>   call = iter.next();
>   if (now > call.timestamp + PURGE_INTERVAL) {
> closeConnection(call.connection);
> break;
>   }
> }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15830:
-
Status: Patch Available  (was: Open)

I just submitted another patch that is a bit more aggressive. It has all of the previous changes, plus:

#  Removed some instances of "log and throw" error handling (try... catch... log... throw). This is an anti-pattern and should be avoided: log or throw, do not do both.
# Applied some code formatting to improve readability and checkstyle compliance in certain areas.
# Removed a lot of dead whitespace.
# Removed logging guards ({{LOG.isDebugEnabled()}}) in favor of SLF4J parameterized logging.
# Removed many instances of logging that include {{Thread.currentThread().getName()}} to record the thread name performing the logging. Emitting the thread name can be configured in the logging framework and does not need to be done explicitly by the caller.

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15830.2.patch, HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
>   LinkedList responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
> Iterator iter = responseQueue.listIterator(0);
> while (iter.hasNext()) {
>   call = iter.next();
>   if (now > call.timestamp + PURGE_INTERVAL) {
> closeConnection(call.connection);
> break;
>   }
> }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15830:
-
Attachment: HADOOP-15830.2.patch

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15830.2.patch, HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
>   LinkedList responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
> Iterator iter = responseQueue.listIterator(0);
> while (iter.hasNext()) {
>   call = iter.next();
>   if (now > call.timestamp + PURGE_INTERVAL) {
> closeConnection(call.connection);
> break;
>   }
> }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15830:
-
Status: Open  (was: Patch Available)

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
>   LinkedList responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
> Iterator iter = responseQueue.listIterator(0);
> while (iter.hasNext()) {
>   call = iter.next();
>   if (now > call.timestamp + PURGE_INTERVAL) {
> closeConnection(call.connection);
> break;
>   }
> }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15785:
---
Attachment: HADOOP-15785.002.patch
Status: Patch Available  (was: Open)

[~tasanuma0829] - Attached patch 002, which addresses the checkstyle issues 
generated by the previous patch.

Test failure is unrelated to the patch.

Thank you.

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch, HADOOP-15785.002.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by  ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15785:
---
Status: Open  (was: Patch Available)

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by  ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15828:
-
Attachment: HADOOP-15828.2.patch

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.
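A minimal, hypothetical sketch of the "empty collections instead of 'null'" idea (invented names, not the attached patches): callers can use the result unconditionally instead of null-checking it.

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;

class HostListExample {  // hypothetical class, not MachineList itself
  private final Collection<String> entries;

  HostListExample(Collection<String> hostEntries) {
    // Empty collection instead of null, and ArrayList instead of LinkedList.
    this.entries = (hostEntries == null)
        ? Collections.emptyList()
        : new ArrayList<>(hostEntries);
  }

  boolean includes(String host) {
    return entries.contains(host);  // no null check needed by callers
  }
}
{code}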



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15828:
-
Status: Patch Available  (was: Open)

Fix compilation issue.

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15828:
-
Status: Open  (was: Patch Available)

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15775) [JDK9] Add missing javax.activation-api dependency

2018-10-08 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642692#comment-16642692
 ] 

Takanobu Asanuma commented on HADOOP-15775:
---

I ran the qbt build with the latest patch on JDK 11 and confirmed that it fixes 
the errors caused by the missing java.activation module. +1. Will commit it later.

> [JDK9] Add missing javax.activation-api dependency
> --
>
> Key: HADOOP-15775
> URL: https://issues.apache.org/jira/browse/HADOOP-15775
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15775.01.patch, HADOOP-15775.02.patch, 
> HADOOP-15775.03.patch, HADOOP-15775.04.patch, HADOOP-15775.05.patch, 
> HADOOP-15775.06.patch
>
>
> Many unit tests fail due to missing java.activation module. This failure can 
> be fixed by adding javax.activation-api as third-party dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642686#comment-16642686
 ] 

Hadoop QA commented on HADOOP-15785:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 45s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 3943 unchanged - 46 fixed = 3948 total (was 3989) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942933/HADOOP-15785.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 44df1c43755e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15314/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15314/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15314/testReport/ |
| Max. process+thread count | 1352 (vs. ulimit of 1) 

[jira] [Commented] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642679#comment-16642679
 ] 

BELUGA BEHR commented on HADOOP-15830:
--

[~elgoiri] Thanks for the look!

Ya, I think the assumption is that items in the queue are timestamped as they 
are placed into the queue, so in essence it is sorted. However, I'm not sure 
that is always the case.

{code}
// Item goes on the front of the list
call.connection.responseQueue.addFirst(call);

if (inHandler) {
  // timestamp is reset
  call.timestamp = Time.now();
...
{code}

So in this case, it is actually possible that the item at the front of the list 
has the newest timestamp in the queue. I'm not sure whether this happens in 
practice, or whether a purge can occur while this is the case, but it would cause 
the purge loop to bail out immediately and leave expired calls in the queue. 
Regardless, without a priority-queue implementation, it seems best not to 
assume order.
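A minimal sketch of that idea, reusing the names from the snippet quoted in the issue description (this is not the attached patch): scan the whole per-connection response queue without assuming timestamp order, drop every expired call, and close the connection once if anything expired.

{code}
private void doPurge(RpcCall call, long now) {
  Deque<RpcCall> responseQueue = call.connection.responseQueue;  // e.g. an ArrayDeque
  boolean expired = false;
  synchronized (responseQueue) {
    Iterator<RpcCall> iter = responseQueue.iterator();
    while (iter.hasNext()) {
      RpcCall c = iter.next();
      if (now > c.timestamp + PURGE_INTERVAL) {
        iter.remove();   // remove every expired call, not just the first one found
        expired = true;
      }
    }
  }
  if (expired) {
    closeConnection(call.connection);
  }
}
{code}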

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
>   LinkedList<RpcCall> responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
>     Iterator<RpcCall> iter = responseQueue.listIterator(0);
>     while (iter.hasNext()) {
>       call = iter.next();
>       if (now > call.timestamp + PURGE_INTERVAL) {
>         closeConnection(call.connection);
>         break;
>       }
>     }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642672#comment-16642672
 ] 

Akira Ajisaka commented on HADOOP-15832:


Thanks [~rkanter] for the reply! I'm +1 for the 001 patch pending Jenkins.

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642661#comment-16642661
 ] 

Robert Kanter commented on HADOOP-15832:


Thanks for taking a look [~ajisakaa].  That is intentional; it's going to be 
needed by YARN-6586/YARN-8448.  I figured I may as well introduce the changes 
needed for that now; but if you prefer, I can update the patch to keep this 
JIRA strictly about updating the artifact and version (and make the tweaks in 
YARN-8448).

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15818:
---
Attachment: 425.patch

> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
> Attachments: 425.patch
>
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642650#comment-16642650
 ] 

Akira Ajisaka commented on HADOOP-15832:


Mostly looks good to me.
* The scope of the dependency is compile (not test) in 
hadoop-yarn-server-web-proxy module. Is it intentional?

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642644#comment-16642644
 ] 

Akira Ajisaka commented on HADOOP-15818:


LGTM, +1 pending Jenkins.

> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15818:
---
Status: Patch Available  (was: Open)

> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642643#comment-16642643
 ] 

Robert Kanter commented on HADOOP-15832:


The 001 patch changes the {{bcprov-jdk16}} artifact to the {{bcprov-jdk15on}} 
artifact, and updates the version to 1.60.  It also adds the {{bcpkix-jdk15on}} 
artifact, which is needed for YARN-8448.  Finally, it excludes BouncyCastle 
from being shaded, because they are signed jars and shading them breaks that.
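
For readers following along, a rough sketch of what that dependency change looks 
like in a pom (the coordinates are the publicly documented org.bouncycastle ones; 
where the real patch declares them, and with what scope, may differ):

{code:xml}
<!-- Illustrative sketch only; the actual patch may manage these in
     hadoop-project's dependencyManagement with different scopes. -->
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
</dependency>
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <version>1.60</version>
</dependency>
{code}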

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15832:
---
Status: Patch Available  (was: Open)

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-15818:
--

Assignee: Vidura Bhathiya Mudalige

> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Vidura Bhathiya Mudalige
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-15832:
---
Attachment: HADOOP-15832.001.patch

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: HADOOP-15832.001.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15832) Upgrade BouncyCastle

2018-10-08 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter moved YARN-8857 to HADOOP-15832:
--

Affects Version/s: (was: 3.2.0)
   3.2.0
 Target Version/s: 3.2.0  (was: 3.2.0)
  Key: HADOOP-15832  (was: YARN-8857)
  Project: Hadoop Common  (was: Hadoop YARN)

> Upgrade BouncyCastle
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-15785:
---
Attachment: HADOOP-15785.001.patch
Status: Patch Available  (was: Open)

[~tasanuma0829] - Uploaded Patch 001 for your review. It might generate a few 
checkstyle issues, which I will fix after the Jenkins run.

I was able to build javadocs successfully using
{code:java}
mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common 
{code}
 

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-15785.001.patch
>
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR]  * to servers are uniquely identified by <remoteAddress, protocol, ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642611#comment-16642611
 ] 

Hadoop QA commented on HADOOP-15827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
34s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15827 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942900/HADOOP-15827.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 802752d68947 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15313/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15313/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-08 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642599#comment-16642599
 ] 

Sean Mackrory commented on HADOOP-15823:


Ok, thanks for clarifying. So I tested this on a machine with a single 
user-managed identity assigned, and it wasn't working until I specified the 
client ID and tenant ID. I didn't trace through what the code was doing, but 
maybe this is simply a requirement in the client that needs to be removed until 
we know whether or not we actually need those properties.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15717) TGT renewal thread does not log IOException

2018-10-08 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642558#comment-16642558
 ] 

Robert Kanter commented on HADOOP-15717:


I might be missing something here, but it looks like we already log the 
Exception at the WARN level on line 945:
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L945
However, before reaching there, it first checks a few special cases; and if 
those are satisfied, it won't get to this WARN log statement.  Those cases (TGT 
destroyed and TGT end time is null) are logged at ERROR level, so I 
think that's okay.  If we want to have the stack trace, then I think it's fine 
to just add the Exception to those log messages, which I believe are the ones 
that [~xiaochen] was referring to.
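
To illustrate the suggestion, here is a minimal sketch of passing the caught 
exception to the logger so the stack trace is printed; the class, message text 
and method names below are placeholders, not the actual UserGroupInformation code.

{code:java}
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Placeholder class -- illustrative only, not the real UGI renewal thread.
public class TgtRenewalLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(TgtRenewalLoggingSketch.class);

  void renewOnce() {
    try {
      renewTgt(); // hypothetical renewal call that may throw
    } catch (IOException ioe) {
      // Passing the exception as the last argument makes SLF4J emit the
      // full stack trace along with the message.
      LOG.error("TGT renewal failed; ticket end time may be null", ioe);
    }
  }

  private void renewTgt() throws IOException {
    // placeholder for the real renewal logic
  }
}
{code}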

> TGT renewal thread does not log IOException
> ---
>
> Key: HADOOP-15717
> URL: https://issues.apache.org/jira/browse/HADOOP-15717
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15717.001.patch
>
>
> I came across a case where tgt.getEndTime() was returned null and it resulted 
> in an NPE, this observation was popped out of a test suite execution on a 
> cluster. The reason for logging the {{IOException}} is that it helps to 
> troubleshoot what caused the exception, as it can come from two different 
> calls from the try-catch.
> I can see that [~gabor.bota] handled this with HADOOP-15593, but apart from 
> logging the fact that the ticket's {{endDate}} was null, we have not logged 
> the exception at all.
> With the current code, the exception is swallowed and the thread terminates 
> in case the ticket's {{endDate}} is null. 
> As this can happen with OpenJDK for example, it is required to print the 
> exception (stack trace, message) to the log.
> The code should be updated here: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L918



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15811) Optimizations for Java's TLS performance

2018-10-08 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642543#comment-16642543
 ] 

Xiao Chen commented on HADOOP-15811:


Catching up emails after PTO. Thanks [~daryn] for the ping, and great 
archeology here... Also thanks [~Steven Rand] for the pointer.

[~knanasi] or [~jojochuang], is this something you're interested to test out? 
I'm happy to review if so.

> Optimizations for Java's TLS performance
> 
>
> Key: HADOOP-15811
> URL: https://issues.apache.org/jira/browse/HADOOP-15811
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 1.0.0
>Reporter: Daryn Sharp
>Priority: Major
>
> Java defaults to using /dev/random and disables intrinsic methods used in hot 
> code paths.  Both cause highly synchronized impls to be used that 
> significantly degrade performance.
> * -Djava.security.egd=file:/dev/urandom
> * -XX:+UseMontgomerySquareIntrinsic
> * -XX:+UseMontgomeryMultiplyIntrinsic
> * -XX:+UseSquareToLenIntrinsic
> * -XX:+UseMultiplyToLenIntrinsic
> These settings significantly boost KMS server performance.  Under load, 
> threads are not jammed in the SSLEngine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15822) zstd compressor can fail with a small output buffer

2018-10-08 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642522#comment-16642522
 ] 

Jason Lowe commented on HADOOP-15822:
-

Looked into the unit test failures.
* TestNameNodeMetadataConsistency failure is an existing issue tracked by 
HDFS-11439
* TestBalancer test has been failing in other precommit builds, filed HDFS-13975
* TestStandbyCheckpoints does not look related and does not reproduce locally
* TestHAAppend is an inode create timeout that does not look related and does 
not reproduce locally
* TestDirectoryScanner is a timeout that does not look related and does not 
reproduce locally
* TestTimelineReaderWebServicesHBaseStorage has been failing in nightly builds, 
filed YARN-8856


> zstd compressor can fail with a small output buffer
> ---
>
> Key: HADOOP-15822
> URL: https://issues.apache.org/jira/browse/HADOOP-15822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Attachments: HADOOP-15822.001.patch, HADOOP-15822.002.patch
>
>
> TestZStandardCompressorDecompressor fails a couple of tests on my machine 
> with the latest zstd library (1.3.5).  Compression can fail to successfully 
> finalize the stream when a small output buffer is used resulting in a failed 
> to init error, and decompression with a direct buffer can fail with an 
> invalid src size error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642397#comment-16642397
 ] 

Gabor Bota edited comment on HADOOP-15827 at 10/8/18 9:32 PM:
--

Thanks, it makes sense now. I was able to reproduce the issue injecting 
{{dirPathMeta=null}} in readOp while debugging.
The solution will be just to change 
{code:java}
return (metas.isEmpty() && dirPathMeta == null)
{code}
to this 
{code:java}
return (metas.isEmpty() || dirPathMeta == null)
{code}
I'll upload a patch but without a test - it wouldn't be trivial to do the test, 
but I'll try if needed.



was (Author: gabor.bota):
Thanks, it makes sense now. I was able to reproduce the issue injecting 
{{dirPathMeta=null}} in readOp while debugging.
The solution will be just to change 
{code:java}
return (metas.isEmpty() && dirPathMeta == null)
{code}
to this 
{code:java}
return (metas.isEmpty() || dirPathMeta == null)
{code}
I'll upload a patch but without a test - it would be a pain to write 
integration test for this. I hope that's acceptable.


> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
> Attachments: HADOOP-15827.001.patch
>
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus

2018-10-08 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15831:
---

 Summary: Include modificationTime in the toString method of 
CopyListingFileStatus
 Key: HADOOP-15831
 URL: https://issues.apache.org/jira/browse/HADOOP-15831
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


I was looking at a DistCp error observed in hbase backup test:
{code}
2018-10-08 18:12:03,067 WARN  [Thread-933] mapred.LocalJobRunner$Job(590): 
job_local1175594345_0004
java.io.IOException: Inconsistent sequence file: current chunk file 
org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/
   
c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_
 length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-
   
57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_
 length = 5142 aclEntries = null, xAttrs = null}
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
2018-10-08 18:12:03,150 INFO  [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: 
1.0 mapProgress: 1.0
{code}
I noticed that modificationTime was not included in the toString of 
CopyListingFileStatus.

I propose including modificationTime so that it is easier to tell when the 
respective files last changed.
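
A rough sketch of the kind of change being proposed (the field names follow the 
log output above; the real CopyListingFileStatus.toString() may format things 
differently):

{code:java}
// Illustrative sketch only -- not the actual CopyListingFileStatus code.
@Override
public String toString() {
  StringBuilder sb = new StringBuilder(super.toString());
  sb.append('{');
  sb.append("aclEntries = ").append(aclEntries);
  sb.append(", xAttrs = ").append(xAttrs);
  // New: include the modification time inherited from FileStatus so the
  // log line shows when each file last changed.
  sb.append(", modificationTime = ").append(getModificationTime());
  sb.append('}');
  return sb.toString();
}
{code}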



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642400#comment-16642400
 ] 

Gabor Bota commented on HADOOP-15827:
-

I hope the issue will be solved with this. Thanks for noticing and creating the 
issue!

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
> Attachments: HADOOP-15827.001.patch
>
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15829) Review of NetgroupCache

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642399#comment-16642399
 ] 

Hadoop QA commented on HADOOP-15829:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 6 unchanged - 5 fixed = 6 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15829 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942852/HDFS-13971.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3fde313c8322 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15311/testReport/ |
| Max. process+thread count | 1377 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15311/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Updated] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15827:

Status: Patch Available  (was: Open)

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
> Attachments: HADOOP-15827.001.patch
>
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642398#comment-16642398
 ] 

Hadoop QA commented on HADOOP-15830:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15830 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942839/HDFS-13969.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 50f4813d31dd 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15310/testReport/ |
| Max. process+thread count | 1431 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15310/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Server.java Prefer ArrayList
> 

[jira] [Updated] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15827:

Attachment: HADOOP-15827.001.patch

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
> Attachments: HADOOP-15827.001.patch
>
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642397#comment-16642397
 ] 

Gabor Bota commented on HADOOP-15827:
-

Thanks, it makes sense now. I was able to reproduce the issue injecting 
{{dirPathMeta=null}} in readOp while debugging.
The solution will be just to change 
{code:java}
return (metas.isEmpty() && dirPathMeta == null)
{code}
to this 
{code:java}
return (metas.isEmpty() || dirPathMeta == null)
{code}
I'll upload a patch but without a test - it would be a pain to write 
integration test for this. I hope that's acceptable.


> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-08 Thread Thomas Marquardt (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642380#comment-16642380
 ] 

Thomas Marquardt commented on HADOOP-15823:
---

[~mackrorysd], [~DanielZhou] correct, the tenant ID and client ID are not 
required or even valid options for a system-assigned managed identity.  
However, the client ID is needed when you have multiple user-assigned managed 
identities.  This is discussed in the following links:

[https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview]
 

[https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token]
 

 

Looking at the ABFS implementation of AzureADAuthenticator.getTokenFromMsi, I 
see it is using a couple undocumented query parameters, specifically 
"authority" and "bypass_cache".  Those should be removed, unless the above 
documentation links are incorrect.  Furthermore, client_id is optional for the 
user-assigned managed identity case, when there are multiple user-assigned 
identities.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642372#comment-16642372
 ] 

Hadoop QA commented on HADOOP-15828:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
48s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 48s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 1 unchanged - 6 fixed = 1 total (was 7) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
43s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15828 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942874/HADOOP-15828.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 498ae8ea98f5 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15309/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15309/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15309/artifact/out/patch-compile-root.txt

[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642373#comment-16642373
 ] 

Steve Loughran commented on HADOOP-15827:
-

A key thing is that in the ? : expression, the : branch may be evaluated while 
{{dirPathMeta}} == null; I think that dirPathMeta.getLastUpdated() call has 
to be guarded itself.
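
A minimal sketch of such a guard (illustrative only, not the committed fix; the 
constructor call mirrors the code quoted elsewhere in this thread, and the 0 
default for a missing last-updated time is an assumption):

{code}
// Illustrative guard: never dereference dirPathMeta when it is null.
if (metas.isEmpty() && dirPathMeta == null) {
  return null;
}
long lastUpdated = (dirPathMeta == null) ? 0 : dirPathMeta.getLastUpdated();
return new DirListingMetadata(path, metas, isAuthoritative, lastUpdated);
{code}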

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642371#comment-16642371
 ] 

Steve Loughran commented on HADOOP-15827:
-

This is branch HADOOP-15446 so the line numbers will be different, sorry. There's 
an extra method for the store to build up the AWS access policy needed to access 
the data, which shifts the line numbers.

here's the file I've got
https://github.com/steveloughran/hadoop/blob/s3/HADOOP-14556-delegation-token/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java



> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642354#comment-16642354
 ] 

Hadoop QA commented on HADOOP-15826:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
22s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942858/HADOOP-15826-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c7009534a18a 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 347ea38 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15312/testReport/ |
| Max. process+thread count | 415 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15312/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> @Retries annotation of putObject() call & uses wrong
> 

[jira] [Comment Edited] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642283#comment-16642283
 ] 

Gabor Bota edited comment on HADOOP-15827 at 10/8/18 7:11 PM:
--

I'm not really sure if this is happening in the return. Based on your stack 
[~ste...@apache.org], 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L644
 and 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L616
 lines are mentioned, maybe the issue is in line 644.

But line 644 is just a comment. How can it be? Are you sure you are on latest 
trunk?

I've run all the tests with the mvn params you've provided and the only error 
was {{testDestroyNoBucket}} which is known.


was (Author: gabor.bota):
I'm not really sure if this is happening in the return. Based on your stack 
[~ste...@apache.org], 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L644
 and 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L616
 lines are mentioned, maybe the issue is in line 644.

But line 644 is just a comment. How can it be?

I've run all the tests with the mvn params you've provided and the only error 
was {{testDestroyNoBucket}} which is known.

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15827:

Comment: was deleted

(was: But line 644 is just a comment. How can it be?)

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642283#comment-16642283
 ] 

Gabor Bota edited comment on HADOOP-15827 at 10/8/18 6:36 PM:
--

I'm not really sure if this is happening in the return. Based on your stack 
[~ste...@apache.org], 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L644
 and 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L616
 lines are mentioned, maybe the issue is in line 644.

But line 644 is just a comment. How can it be?

I've run all the tests with the mvn params you've provided and the only error 
was {{testDestroyNoBucket}} which is known.


was (Author: gabor.bota):
I'm not really sure if this is happening in the return. Based on your stack 
[~ste...@apache.org], 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L644
 and 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L616
 lines are mentioned, maybe the issue is in line 644.

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642286#comment-16642286
 ] 

Gabor Bota commented on HADOOP-15827:
-

But line 644 is just a comment. How can it be?

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642283#comment-16642283
 ] 

Gabor Bota commented on HADOOP-15827:
-

I'm not really sure if this is happening in the return. Based on your stack 
[~ste...@apache.org], 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L644
 and 
https://github.com/apache/hadoop/blob/046b8768af8a07a9e10ce43f538d6ac16e7fa5f3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L616
 lines are mentioned, maybe the issue is in line 644.

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13936) S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642279#comment-16642279
 ] 

Steve Loughran commented on HADOOP-13936:
-

+ bulk operation needs to handle failure of the delete call, so that it updates 
S3Guard with only those changes which went through:

* filter the array passed to the bulk delete down to only those keys which worked
* on an immediate delete, make the complete call before throwing up the failure
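
A rough sketch of that flow, with hypothetical names throughout (bulkDelete, 
PartialDeleteFailure, store.deletePaths() and successfullyDeleted() are 
illustrative, not the actual S3A or S3Guard API):

{code}
try {
  bulkDelete(keysToDelete);             // S3 bulk delete request
  store.deletePaths(keysToDelete);      // everything went through: update S3Guard fully
} catch (PartialDeleteFailure e) {
  // only record the keys which were actually removed, then surface the failure
  store.deletePaths(e.successfullyDeleted());
  throw e;
}
{code}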

> S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation
> -
>
> Key: HADOOP-13936
> URL: https://issues.apache.org/jira/browse/HADOOP-13936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.1.0, 3.1.1
>Reporter: Rajesh Balamohan
>Assignee: Steve Loughran
>Priority: Blocker
>
> As a part of {{S3AFileSystem.delete}} operation {{innerDelete}} is invoked, 
> which deletes keys from S3 in batches (default is 1000). But DynamoDB is 
> updated only at the end of this operation. This can cause issues when 
> deleting large number of keys. 
> E.g, it is possible to get exception after deleting 1000 keys and in such 
> cases dynamoDB would not be updated. This can cause DynamoDB to go out of 
> sync. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642271#comment-16642271
 ] 

Íñigo Goiri commented on HADOOP-15830:
--

When going over the responses, we used to break out of the loop (I guess assuming 
that they were in order).
That functionality is going away; any thoughts there?
Everything else is pretty much the same.

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
> LinkedList<RpcCall> responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
> Iterator<RpcCall> iter = responseQueue.listIterator(0);
> while (iter.hasNext()) {
>   call = iter.next();
>   if (now > call.timestamp + PURGE_INTERVAL) {
> closeConnection(call.connection);
> break;
>   }
> }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.
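
For illustration only (not the attached patch), one way the loop could purge every 
expired call rather than stopping at the first hit, reusing the names from the 
quoted snippet and assuming responseQueue were switched to a Deque such as 
ArrayDeque, as the description suggests:

{code}
private void doPurge(RpcCall call, long now) {
  Deque<RpcCall> responseQueue = call.connection.responseQueue;
  synchronized (responseQueue) {
    Iterator<RpcCall> iter = responseQueue.iterator();
    while (iter.hasNext()) {
      RpcCall next = iter.next();
      if (now > next.timestamp + PURGE_INTERVAL) {
        closeConnection(next.connection);
        iter.remove();   // drop the stale call and keep scanning
      }
    }
  }
}
{code}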



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15830) Server.java Prefer ArrayList

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR moved HDFS-13969 to HADOOP-15830:
-

Affects Version/s: (was: 3.2.0)
   3.2.0
  Component/s: (was: ipc)
   ipc
  Key: HADOOP-15830  (was: HDFS-13969)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Server.java Prefer ArrayList
> 
>
> Key: HADOOP-15830
> URL: https://issues.apache.org/jira/browse/HADOOP-15830
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13969.1.patch
>
>
> *  Prefer ArrayDeque over LinkedList (faster, less memory overhead)
> * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue 
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
> LinkedList<RpcCall> responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
> Iterator<RpcCall> iter = responseQueue.listIterator(0);
> while (iter.hasNext()) {
>   call = iter.next();
>   if (now > call.timestamp + PURGE_INTERVAL) {
> closeConnection(call.connection);
> break;
>   }
> }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of 
> the 'break' statement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642265#comment-16642265
 ] 

Gabor Bota commented on HADOOP-15827:
-

I'm not able to reproduce the issue by running 
{{org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract}} alone with the params 
you've provided. I'll try to run all the integration tests with verify.

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15829) Review of NetgroupCache

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR moved HDFS-13971 to HADOOP-15829:
-

Affects Version/s: (was: 3.2.0)
   3.2.0
  Component/s: (was: security)
   (was: hdfs)
   security
  Key: HADOOP-15829  (was: HDFS-13971)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Review of NetgroupCache
> ---
>
> Key: HADOOP-15829
> URL: https://issues.apache.org/jira/browse/HADOOP-15829
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13971.1.patch
>
>
> * Simplify the code and improve performance by using a Guava Multimap
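
A minimal sketch of the Multimap idiom the description refers to (class and method 
names are made up for illustration, not the real NetgroupCache API; the real cache 
would also need to handle concurrent access):

{code}
import com.google.common.collect.MultimapBuilder;
import com.google.common.collect.SetMultimap;
import java.util.Set;

class NetgroupCacheSketch {
  // one multimap replaces a Map<String, Set<String>> plus its null/empty handling
  private final SetMultimap<String, String> groupToUsers =
      MultimapBuilder.hashKeys().hashSetValues().build();

  void add(String group, String user) {
    groupToUsers.put(group, user);
  }

  // get() never returns null; an unknown group yields an empty set
  Set<String> getUsersForGroup(String group) {
    return groupToUsers.get(group);
  }
}
{code}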



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HADOOP-15828:


Assignee: BELUGA BEHR

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15828:
-
Attachment: HADOOP-15828.1.patch

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HADOOP-15828:


 Summary: Review of MachineList class
 Key: HADOOP-15828
 URL: https://issues.apache.org/jira/browse/HADOOP-15828
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
 Attachments: HADOOP-15828.1.patch

Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
implementation and use empty collections instead of 'null' values, add logging.
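
For example, the kind of change the description points at (a generic sketch under 
assumed names, not the attached patch; MachineList itself parses a comma-separated 
list of hosts and ranges, so the shape is similar but not identical):

{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

class EntryListSketch {
  // Return an immutable empty collection rather than null when nothing is set,
  // so callers can iterate without a null check.
  Collection<String> parseEntries(String config) {
    if (config == null || config.isEmpty()) {
      return Collections.emptyList();
    }
    return Arrays.asList(config.split(","));
  }
}
{code}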



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15828) Review of MachineList class

2018-10-08 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15828:
-
Status: Patch Available  (was: Open)

> Review of MachineList class
> ---
>
> Key: HADOOP-15828
> URL: https://issues.apache.org/jira/browse/HADOOP-15828
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15828.1.patch
>
>
> Clean up and simplify class {{MachineList}}.  Primarily, remove LinkedList 
> implementation and use empty collections instead of 'null' values, add 
> logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15824) RawLocalFileSystem initialize() raises Null Pointer Exception

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15824:

Summary: RawLocalFileSystem initialize() raises Null Pointer Exception  
(was: RawLocalFileSystem initialized with Null Point Exception)

> RawLocalFileSystem initialize() raises Null Pointer Exception
> -
>
> Key: HADOOP-15824
> URL: https://issues.apache.org/jira/browse/HADOOP-15824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.3, 2.8.4, 3.1.1
> Environment: Hadoop 2.8.4 + Spark & yarn client launch
>Reporter: Tank Sui
>Priority: Minor
>
> {code:java}
> [ERROR]09:33:13.143 [main] org.apache.spark.SparkContext - Error initializing 
> SparkContext.
> 10/6/2018 5:33:13 PM java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:351)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:649)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.<init>(SparkContext.scala:126)
> 10/6/2018 5:33:13 PM  at services.SparkService.tryInit(SparkService.scala:49)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController.<init>(DataController.scala:38)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController$$FastClassByGuice$$9ed55d7d.newInstance()
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:111)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1015)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1054)
> 10/6/2018 

[jira] [Commented] (HADOOP-15824) RawLocalFileSystem initialized with Null Point Exception

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642244#comment-16642244
 ] 

Steve Loughran commented on HADOOP-15824:
-

that's a good start; like I said: anything you can do to help trace it down is 
good. I wonder if the system property is being set to something unparseable or 
null.

anyway, like I've said: you are the only person in a position to track down the 
root cause; we'll do our best to work out a fix once we get a better idea of 
what's going on. 

> RawLocalFileSystem initialized with Null Point Exception
> 
>
> Key: HADOOP-15824
> URL: https://issues.apache.org/jira/browse/HADOOP-15824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.3, 2.8.4, 3.1.1
> Environment: Hadoop 2.8.4 + Spark & yarn client launch
>Reporter: Tank Sui
>Priority: Minor
>
> {code:java}
> [ERROR]09:33:13.143 [main] org.apache.spark.SparkContext - Error initializing 
> SparkContext.
> 10/6/2018 5:33:13 PM java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:351)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:649)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.<init>(SparkContext.scala:126)
> 10/6/2018 5:33:13 PM  at services.SparkService.tryInit(SparkService.scala:49)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController.<init>(DataController.scala:38)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController$$FastClassByGuice$$9ed55d7d.newInstance()
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:111)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019)
> 10/6/2018 5:33:13 PM  at 
> 

[jira] [Commented] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread Vidura Bhathiya Mudalige (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642242#comment-16642242
 ] 

Vidura Bhathiya Mudalige commented on HADOOP-15818:
---

[~ajisakaa], Could you please review my pull request and assign this jira to me?

> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13936) S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642237#comment-16642237
 ] 

Steve Loughran commented on HADOOP-13936:
-

Reviewing this with a goal of fixing.

Options
# as innerDelete/innerRename delete objects, send the deletes down in batches 
*maybe in their own thread*
# make sure all that cleanup is done in a finally clause, and hope the actual 
execution never fails (which is really the problem we are trying to address)
# Have the metastore take a delete set of files, knowing that it is part of a 
larger bulk rename or delete operation, so giving it the option of being clever.

I'm thinking of option 3: the caller initiates some multi-object operation on the 
metastore (delete? rename?) and gets a context object back, which it updates as it 
goes along and then finally calls complete() on.

{code}
bulkDelete = s3guard.initiateBulkDelete(path)
// ..iterate through listings; with every batch of deletes:
bulkDelete.deleted(List<Path>)
// and then finally:
bulkDelete.complete()
{code}
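
One possible Java shape for that context object (the method names follow the 
pseudocode above; the interface name and everything else is an assumption, not a 
committed API):

{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;

/** Hypothetical context handed back by initiateBulkDelete(path). */
interface BulkDeleteOperation extends Closeable {

  /** Record a batch of paths whose objects have just been deleted from S3. */
  void deleted(List<Path> paths) throws IOException;

  /**
   * Finish the operation: a naive implementation deletes the subtree here,
   * a cleverer one only cleans up parent entries left behind by the batches.
   */
  void complete() throws IOException;
}
{code}
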
Naive implementation: ignore the deleted() ops and do what happens today in 
complete(): delete the tree.

Clever implementation: on each deleted() batch, kick off the deletion of those 
objects (wrapped in a duration log); in the complete() call, do a final cleanup 
treewalk to get rid of parent entries.

The move operation would be similar, only as it does updates in batches, it 
could also track which parent directories had already been created across 
batches, so there'd be no duplication of parent dir creation.

On the topic of batches, these updates could also be done in a (single) worker 
thread within S3AFileSystem, so that even throttled DDB operations wouldn't 
take up time which copy calls could use.

(Also while doing this: log duration of copies @ debug; print out duration & 
total effective bandwidth. These are things we need to know, and it'd give us a 
before/after benchmark of any changes.)



> S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation
> -
>
> Key: HADOOP-13936
> URL: https://issues.apache.org/jira/browse/HADOOP-13936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.1.0, 3.1.1
>Reporter: Rajesh Balamohan
>Assignee: Steve Loughran
>Priority: Blocker
>
> As a part of {{S3AFileSystem.delete}} operation {{innerDelete}} is invoked, 
> which deletes keys from S3 in batches (default is 1000). But DynamoDB is 
> updated only at the end of this operation. This can cause issues when 
> deleting large number of keys. 
> E.g, it is possible to get exception after deleting 1000 keys and in such 
> cases dynamoDB would not be updated. This can cause DynamoDB to go out of 
> sync. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #425: HADOOP-15818. Fix deprecated maven-surefire-plugin...

2018-10-08 Thread vbmudalige
GitHub user vbmudalige opened a pull request:

https://github.com/apache/hadoop/pull/425

HADOOP-15818. Fix deprecated maven-surefire-plugin configuration in 
hadoop-kms module



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vbmudalige/hadoop HADOOP-15818

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #425


commit 0b1d7aaffa61239dc1c94e222fe642813418a1a9
Author: Vidura Mudalige 
Date:   2018-10-08T17:58:30Z

HADOOP-15818. Fix deprecated maven-surefire-plugin configuration in 
hadoop-kms module




---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15818) Fix deprecated maven-surefire-plugin configuration in hadoop-kms module

2018-10-08 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642238#comment-16642238
 ] 

ASF GitHub Bot commented on HADOOP-15818:
-

GitHub user vbmudalige opened a pull request:

https://github.com/apache/hadoop/pull/425

HADOOP-15818. Fix deprecated maven-surefire-plugin configuration in 
hadoop-kms module



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vbmudalige/hadoop HADOOP-15818

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #425


commit 0b1d7aaffa61239dc1c94e222fe642813418a1a9
Author: Vidura Mudalige 
Date:   2018-10-08T17:58:30Z

HADOOP-15818. Fix deprecated maven-surefire-plugin configuration in 
hadoop-kms module




> Fix deprecated maven-surefire-plugin configuration in hadoop-kms module
> ---
>
> Key: HADOOP-15818
> URL: https://issues.apache.org/jira/browse/HADOOP-15818
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hadoop-kms ---
> [WARNING] The parameter forkMode is deprecated since version 2.14. Use 
> forkCount and reuseForks instead.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642218#comment-16642218
 ] 

Gabor Bota commented on HADOOP-15827:
-

I'll check it soon.

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15827:
---

Assignee: Gabor Bota

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642211#comment-16642211
 ] 

Steve Loughran commented on HADOOP-15827:
-

This is happening in new code from HADOOP-15621; there's a codepath where 
dirPathMeta == null but metas is not empty, which triggers the NPE.
{code}
  return (metas.isEmpty() && dirPathMeta == null)
  ? null
  : new DirListingMetadata(path, metas, isAuthoritative,
  dirPathMeta.getLastUpdated());
{code}
[~gabor.bota]: assigning this to you as it seems part of the recent patch

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15827:

Affects Version/s: (was: 3.2.0)
   3.3.0

> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15827:
---

 Summary: NPE in DynamoDBMetadataStore.lambda$listChildren for root 
+ auth s3guard
 Key: HADOOP-15827
 URL: https://issues.apache.org/jira/browse/HADOOP-15827
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
-Ddynamodb -Dauthoritative}}

{code}
[ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
Time elapsed: 42.822 s  <<< ERROR!
java.lang.NullPointerException
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15827) NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642206#comment-16642206
 ] 

Steve Loughran commented on HADOOP-15827:
-

{code}
[ERROR] Tests run: 43, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
387.198 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract
[ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
Time elapsed: 42.822 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.lambda$listChildren$4(DynamoDBMetadataStore.java:644)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.listChildren(DynamoDBMetadataStore.java:616)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreListFilesIterator.prefetch(MetadataStoreListFilesIterator.java:132)
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreListFilesIterator.<init>(MetadataStoreListFilesIterator.java:102)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:3303)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:3266)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.assertListFilesFinds(FileSystemContractBaseTest.java:850)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testLSRootDir(FileSystemContractBaseTest.java:835)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}


> NPE in DynamoDBMetadataStore.lambda$listChildren for root + auth s3guard
> 
>
> Key: HADOOP-15827
> URL: https://issues.apache.org/jira/browse/HADOOP-15827
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Blocker
>
> NPE in a test run of {{-Dparallel-tests -DtestsThreadCount=6  -Ds3guard 
> -Ddynamodb -Dauthoritative}}
> {code}
> [ERROR] testLSRootDir(org.apache.hadoop.fs.s3a.ITestS3AFileSystemContract)  
> Time elapsed: 42.822 s  <<< ERROR!
> java.lang.NullPointerException
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-08 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642205#comment-16642205
 ] 

Billie Rinaldi commented on HADOOP-15821:
-

That's right, we don't currently have a service to clean up the registry. 
Cleanup is performed by the client when an application is stopped and by the AM 
for individual containers.

The YarnRegistryAttributes are used by RegistryDNS to create DNS records. I 
agree that we should consider generalizing the concepts and that this can wait 
for a future ticket.

Here are some notes on how RegistryDNS is using the attributes:
 * yarn:persistence is used to select which ServiceRecordProcessor is used. 
Container and application both have ServiceRecordProcessors. Container is the 
main one being used by YARN services. The Slider AM had used the application 
processor.
 * For ServiceRecords with yarn:persistence = container, the following records are created (a rough sketch of how these names are composed follows the table below):
 $COMPONENT_INSTANCE_NAME.$APP_NAME.$USER.$DOMAIN -> $IP
 $COMPONENT_NAME.$APP_NAME.$USER.$DOMAIN -> $IP
 $ID.$DOMAIN -> $IP
 * Sources of the named variables (the "yarn-service" ZK node is used by YARN services, but any service name should work):
|$USER, $APP_NAME, $ID|ZK node /registry/users/$USER/services/yarn-service/$APP_NAME/components/$ID (e.g. container ID)|
|$DOMAIN|Configuration hadoop.registry.dns.domain-name|
|$COMPONENT_INSTANCE_NAME|ServiceRecord description (e.g. regionserver-2)|
|$COMPONENT_NAME|ServiceRecord yarn:component (e.g. regionserver)|
|$IP|ServiceRecord yarn:ip|
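
Purely as an illustration of the naming scheme above (not the actual RegistryDNS implementation; the helper class and field names below are made up):

{code:java}
// Hypothetical sketch of how the DNS names listed above could be composed from
// a container ServiceRecord; RecordInfo and its fields are illustrative only.
public final class DnsNameSketch {

  static final class RecordInfo {
    String user;        // $USER, from the ZK registry path
    String appName;     // $APP_NAME, from the ZK registry path
    String id;          // $ID, e.g. the container ID node name
    String description; // $COMPONENT_INSTANCE_NAME, the ServiceRecord description
    String component;   // $COMPONENT_NAME, the ServiceRecord yarn:component attribute
    String domain;      // $DOMAIN, hadoop.registry.dns.domain-name
  }

  /** Build the three record names created for yarn:persistence = container. */
  static String[] containerRecordNames(RecordInfo r) {
    return new String[] {
        r.description + "." + r.appName + "." + r.user + "." + r.domain,
        r.component + "." + r.appName + "." + r.user + "." + r.domain,
        r.id + "." + r.domain
    };
  }
}
{code}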

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15193) add bulk delete call to metastore API & DDB impl

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642189#comment-16642189
 ] 

Steve Loughran commented on HADOOP-15193:
-

I'm wrong.

* We already have a bulk directory delete, {{deleteSubtree(path)}}; there's no performance gain in moving to providing a list of operations to the DDB API.
* Yes, you do need all those markers.

Without any speedup to be had, closing as WONTFIX.

> add bulk delete call to metastore API & DDB impl
> 
>
> Key: HADOOP-15193
> URL: https://issues.apache.org/jira/browse/HADOOP-15193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> recursive dir delete (and any future bulk delete API like HADOOP-15191) 
> benefits from using the DDB bulk table delete call, which takes a list of 
> deletes and executes. Hopefully this will offer better perf. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15193) add bulk delete call to metastore API & DDB impl

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15193.
-
   Resolution: Won't Fix
Fix Version/s: 3.3.0

> add bulk delete call to metastore API & DDB impl
> 
>
> Key: HADOOP-15193
> URL: https://issues.apache.org/jira/browse/HADOOP-15193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> recursive dir delete (and any future bulk delete API like HADOOP-15191) 
> benefits from using the DDB bulk table delete call, which takes a list of 
> deletes and executes. Hopefully this will offer better perf. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15826:

Status: Patch Available  (was: Open)

No integration test rerun here; this change only touches the annotations and javadocs.
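
For reference, the annotations in question look roughly like this; a minimal sketch assuming the {{org.apache.hadoop.fs.s3a.Retries}} annotation set, with placeholder method names rather than the real S3A signatures:

{code:java}
import org.apache.hadoop.fs.s3a.Retries;

// Illustrative only: the point of this JIRA is that annotations like these
// must describe the retry behaviour the annotated method actually has.
class RetriesAnnotationSketch {

  // Claims: retries are performed here and exceptions are translated,
  // so callers do not need their own retry wrapper.
  @Retries.RetryTranslated
  void putObjectWithRetries(String key) { /* ... */ }

  // Claims: a single raw attempt, no retries, no exception translation.
  @Retries.OnceRaw
  void putObjectDirect(String key) { /* ... */ }
}
{code}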

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch
>
>
> The retry annotations of the S3AFilesystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15826:

Attachment: HADOOP-15826-001.patch

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch
>
>
> The retry annotations of the S3AFilesystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15826:
---

 Summary: @Retries annotation of putObject() call & uses wrong
 Key: HADOOP-15826
 URL: https://issues.apache.org/jira/browse/HADOOP-15826
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The retry annotations of the S3AFilesystem putObject call and its 
writeOperationsHelper use aren't in sync with what the code does.

Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15620) Über-jira: S3A phase VI: Hadoop 3.3 features

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15620:

Summary: Über-jira: S3A phase VI: Hadoop 3.3 features  (was: Über-jira: S3a 
phase VI: Hadoop 3.3 features)

> Über-jira: S3A phase VI: Hadoop 3.3 features
> 
>
> Key: HADOOP-15620
> URL: https://issues.apache.org/jira/browse/HADOOP-15620
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15785) [JDK10] Javadoc build fails on JDK 10 in hadoop-common

2018-10-08 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642149#comment-16642149
 ] 

Dinesh Chitlangia commented on HADOOP-15785:


[~tasanuma0829] - Thank you for reporting the issue. I am working on this and 
will post a patch soon.
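
For context, the failure quoted below is the usual JDK 9+/10 javadoc complaint about an unescaped {{<}} in a doc comment; a minimal illustration (hypothetical class, not the actual Client.java text):

{code:java}
// Illustrative only: newer javadoc treats a raw '<' in a doc comment as
// malformed HTML, e.g. "uniquely identified by <address, protocol, ticket>".
// The usual fix is to escape the brackets (&lt; and &gt;) or wrap the text
// in a {@code ...} tag, which is rendered literally:
public class JavadocEscapeSketch {

  /**
   * Connections are uniquely identified by
   * {@code <address, protocol, ticket>}.
   */
  public void connect() { }
}
{code}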

> [JDK10] Javadoc build fails on JDK 10 in hadoop-common
> --
>
> Key: HADOOP-15785
> URL: https://issues.apache.org/jira/browse/HADOOP-15785
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, documentation
>Reporter: Takanobu Asanuma
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> {noformat}
> $ mvn javadoc:javadoc --projects hadoop-common-project/hadoop-common
> ...
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 02:22 min
> [INFO] Finished at: 2018-09-25T02:23:06Z
> [INFO] Final Memory: 119M/467M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-common: MavenReportException: Error while generating Javadoc: 
> [ERROR] Exit code: 1 - javadoc: warning - You have not specified the version 
> of HTML to use.
> [ERROR] The default is currently HTML 4.01, but this will change to HTML5
> [ERROR] in a future release. To suppress this warning, please specify the
> [ERROR] version of HTML used in your documentation comments and to be
> [ERROR] generated by this doclet, using the -html4 or -html5 options.
> [ERROR] 
> /hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:1578:
>  error: malformed HTML
> [ERROR] * to servers are uniquely identified by  ticket>
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15822) zstd compressor can fail with a small output buffer

2018-10-08 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642106#comment-16642106
 ] 

Peter Bacsko commented on HADOOP-15822:
---

No, I still haven't had time to test with other codecs. But tomorrow I'll run a test with no compression, snappy, lz4, etc.

> zstd compressor can fail with a small output buffer
> ---
>
> Key: HADOOP-15822
> URL: https://issues.apache.org/jira/browse/HADOOP-15822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Attachments: HADOOP-15822.001.patch, HADOOP-15822.002.patch
>
>
> TestZStandardCompressorDecompressor fails a couple of tests on my machine 
> with the latest zstd library (1.3.5).  Compression can fail to successfully 
> finalize the stream when a small output buffer is used resulting in a failed 
> to init error, and decompression with a direct buffer can fail with an 
> invalid src size error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15822) zstd compressor can fail with a small output buffer

2018-10-08 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642104#comment-16642104
 ] 

Jason Lowe commented on HADOOP-15822:
-

bq. do you think it's related? Or is it something different, maybe MR-specific? 

I do not think it is related.  The MapOutput buffer code is miscalculating how 
much buffer space is remaining before it forces a spill.  In this failure case 
the buffer involved is not dealing with compressed data, so it should not 
matter what codec is being used.  Have you tried reproducing it with lz4 or no 
codec at all?

I'll dig a bit into the Jenkins test failures to see if they are somehow 
related. 
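
As an aside, and purely to illustrate the kind of bookkeeping that can go wrong with a sort buffer this close to 2 GiB (a deliberately simplified sketch; the formula below is hypothetical and not the actual MapOutputBuffer logic, though the numbers come from the log above):

{code:java}
// Simplified sketch: signed 32-bit arithmetic on a ~2047 MiB collect buffer
// can silently overflow, so a "space remaining" style calculation goes negative.
public class BufferSpaceSketch {
  public static void main(String[] args) {
    int bufvoid = 2146435072;     // buffer capacity from the log above
    int bufstart = 1267927860;
    int bufend = 2082571562;

    // A wrapped-region distance computed in int overflows past Integer.MAX_VALUE...
    int wrappedInt = bufend + (bufvoid - bufstart);
    // ...while the same expression widened to long stays correct.
    long wrappedLong = (long) bufend + (bufvoid - bufstart);

    System.out.println("int arithmetic:  " + wrappedInt);   // negative: overflowed
    System.out.println("long arithmetic: " + wrappedLong);  // 2961078774
  }
}
{code}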

> zstd compressor can fail with a small output buffer
> ---
>
> Key: HADOOP-15822
> URL: https://issues.apache.org/jira/browse/HADOOP-15822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Attachments: HADOOP-15822.001.patch, HADOOP-15822.002.patch
>
>
> TestZStandardCompressorDecompressor fails a couple of tests on my machine 
> with the latest zstd library (1.3.5).  Compression can fail to successfully 
> finalize the stream when a small output buffer is used resulting in a failed 
> to init error, and decompression with a direct buffer can fail with an 
> invalid src size error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-08 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642097#comment-16642097
 ] 

Íñigo Goiri commented on HADOOP-15821:
--

bq. If the service to clean up entries in the RM is still in yarn code, it can 
live there, maybe, or we just leave as is.

Take a look at YARN-8845. I think we can remove that.

For this, I'm fine with it as it is right now.
For the future, we may want to isolate some of the concepts (e.g., persistence) and make them more generic.

Any further comments?


> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15822) zstd compressor can fail with a small output buffer

2018-10-08 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16642000#comment-16642000
 ] 

Peter Bacsko edited comment on HADOOP-15822 at 10/8/18 3:28 PM:


I reproduced the problem. This is what happens if the sort buffer is 2047MiB.

{noformat}
...
2018-10-08 08:15:04,126 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2018-10-08 08:15:04,126 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 1267927860; bufend = 2082571562; bufvoid = 2146435072
2018-10-08 08:15:04,126 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 316981960(1267927840); kvend = 91355880(365423520); length = 225626081/134152192
2018-10-08 08:15:04,126 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) -1997752227 kvi 37170708(148682832)
2018-10-08 08:16:24,712 INFO [SpillThread] org.apache.hadoop.mapred.MapTask: Finished spill 20
2018-10-08 08:16:24,712 INFO [main] org.apache.hadoop.mapred.MapTask: (RESET) equator -1997752227 kv 37170708(148682832) kvi 37170708(148682832)
2018-10-08 08:16:24,713 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
2018-10-08 08:16:24,713 INFO [main] org.apache.hadoop.mapred.MapTask: (RESET) equator -1997752227 kv 37170708(148682832) kvi 37170708(148682832)
2018-10-08 08:16:24,727 INFO [main] org.apache.hadoop.mapred.Merger: Merging 21 sorted segments
2018-10-08 08:16:24,735 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,736 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,738 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,739 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,741 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,742 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,743 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,744 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,745 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,746 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,748 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,749 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,750 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,752 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,753 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,754 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,755 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,756 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,757 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,769 INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.zst]
2018-10-08 08:16:24,770 INFO [main] org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 21 segments left of total size: 35310116 bytes
2018-10-08 08:16:30,104 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1469)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$Buffer.write(MapTask.java:1365)
at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
at org.apache.hadoop.io.WritableUtils.writeVLong(WritableUtils.java:273)
at org.apache.hadoop.io.WritableUtils.writeVInt(WritableUtils.java:253)
at org.apache.hadoop.io.Text.write(Text.java:330)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:98)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:82)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1163)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:727)
at 

[jira] [Commented] (HADOOP-15193) add bulk delete call to metastore API & DDB impl

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16641997#comment-16641997
 ] 

Steve Loughran commented on HADOOP-15193:
-

DDB batch delete just takes the list of operations and runs through them in sequence, retrying if needed. There is no speedup compared to making individual requests.

We do need a call in the metastore API though, as it can be a bit cleverer about the operation.

In particular: if I delete a directory, do I need to explicitly add deleted markers to all the children, or would a delete marker on the dir be enough? If so, you could be very efficient and not create deleted file markers, just those for the directories.
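
A rough sketch of what such a metastore-level call might look like; the interface, method name and javadoc here are hypothetical, not the existing MetadataStore API:

{code:java}
import java.io.IOException;
import java.util.Collection;
import org.apache.hadoop.fs.Path;

// Hypothetical addition to the S3Guard metastore API; illustrative only.
public interface BulkDeleteMetadataStore {

  /**
   * Mark every path in the collection as deleted in one logical operation.
   * An implementation may be cleverer than issuing per-path deletes, e.g. by
   * writing a single tombstone for a directory rather than one per child.
   */
  void bulkDelete(Collection<Path> paths) throws IOException;
}
{code}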


> add bulk delete call to metastore API & DDB impl
> 
>
> Key: HADOOP-15193
> URL: https://issues.apache.org/jira/browse/HADOOP-15193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> recursive dir delete (and any future bulk delete API like HADOOP-15191) 
> benefits from using the DDB bulk table delete call, which takes a list of 
> deletes and executes. Hopefully this will offer better perf. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15824) RawLocalFileSystem initialized with Null Point Exception

2018-10-08 Thread Tank Sui (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16641888#comment-16641888
 ] 

Tank Sui commented on HADOOP-15824:
---

It is probably caused by sbt dist; it looks like the right FileSystem implementations are excluded from the sbt dist output, because the same error does not happen in development mode but is thrown in production mode, which runs sbt dist.
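
If the ServiceLoader metadata (the META-INF/services entries) for the FileSystem implementations really is being dropped by the packaging step, one common workaround is to name the implementation classes explicitly in the Configuration; a sketch, assuming the standard fs.<scheme>.impl keys:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch of a workaround when FileSystem service-loader metadata is lost from
// an assembled jar: point each scheme at its implementation class explicitly.
public class ExplicitFsImplSketch {
  public static Configuration withExplicitImpls(Configuration conf) {
    conf.set("fs.file.impl", "org.apache.hadoop.fs.LocalFileSystem");
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
    return conf;
  }
}
{code}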

> RawLocalFileSystem initialized with Null Point Exception
> 
>
> Key: HADOOP-15824
> URL: https://issues.apache.org/jira/browse/HADOOP-15824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.3, 2.8.4, 3.1.1
> Environment: Hadoop 2.8.4 + Spark & yarn client launch
>Reporter: Tank Sui
>Priority: Minor
>
> {code:java}
> [ERROR]09:33:13.143 [main] org.apache.spark.SparkContext - Error initializing 
> SparkContext.
> 10/6/2018 5:33:13 PM java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:351)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:649)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.(SparkContext.scala:500)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.(SparkContext.scala:126)
> 10/6/2018 5:33:13 PM  at services.SparkService.tryInit(SparkService.scala:49)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController.(DataController.scala:38)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController$$FastClassByGuice$$9ed55d7d.newInstance()
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:111)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1015)
> 10/6/2018 5:33:13 PM  at 
> 

[jira] [Commented] (HADOOP-15722) regression: Hadoop 2.7.7 release breaks spark submit

2018-10-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16641855#comment-16641855
 ] 

Steve Loughran commented on HADOOP-15722:
-

Looks related: HIVE-18858; that patch ended up creating HIVE-20521.



> regression: Hadoop 2.7.7 release breaks spark submit
> 
>
> Key: HADOOP-15722
> URL: https://issues.apache.org/jira/browse/HADOOP-15722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, conf, security
>Affects Versions: 2.7.7
>Reporter: Steve Loughran
>Priority: Major
>
> SPARK-25330 highlights that upgrading spark to hadoop 2.7.7 is causing a 
> regression in client setup, with things only working when 
> {{Configuration.getRestrictParserDefault(Object resource)}} = false.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15824) RawLocalFileSystem initialized with Null Point Exception

2018-10-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15824:

Component/s: (was: common)

> RawLocalFileSystem initialized with Null Point Exception
> 
>
> Key: HADOOP-15824
> URL: https://issues.apache.org/jira/browse/HADOOP-15824
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.3, 2.8.4, 3.1.1
> Environment: Hadoop 2.8.4 + Spark & yarn client launch
>Reporter: Tank Sui
>Priority: Minor
>
> {code:java}
> [ERROR]09:33:13.143 [main] org.apache.spark.SparkContext - Error initializing 
> SparkContext.
> 10/6/2018 5:33:13 PM java.lang.RuntimeException: 
> java.lang.reflect.InvocationTargetException
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
> 10/6/2018 5:33:13 PM  at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:351)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:649)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.(SparkContext.scala:500)
> 10/6/2018 5:33:13 PM  at 
> org.apache.spark.SparkContext.(SparkContext.scala:126)
> 10/6/2018 5:33:13 PM  at services.SparkService.tryInit(SparkService.scala:49)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController.(DataController.scala:38)
> 10/6/2018 5:33:13 PM  at 
> controllers.DataController$$FastClassByGuice$$9ed55d7d.newInstance()
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:111)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1015)
> 10/6/2018 5:33:13 PM  at 
> com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1054)
> 10/6/2018 5:33:13 PM  at 
> play.api.inject.guice.GuiceInjector.instanceOf(GuiceInjectorBuilder.scala:409)
> 10/6/2018 5:33:13 PM  

  1   2   >