[jira] [Commented] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475997#comment-16475997
 ] 

Hudson commented on HADOOP-15466:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14201 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14201/])
HADOOP-15466. Correct units in adl.http.timeout. Contributed by Sean (stevel: 
rev 07d8505f75ec401e5847fe158dad765ce5175fab)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15466.001.patch
>
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] to double check I'm not missing anything.
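
The fix is a one-line change to the property documentation in core-default.xml. A hedged sketch of the corrected entry (the default value shown is an assumption; only the millisecond interpretation comes from the SDK source linked above):

{code}
<property>
  <name>adl.http.timeout</name>
  <value>-1</value>
  <description>
    Base timeout for HTTP requests made by the ADL SDK. The SDK interprets
    this value as milliseconds, not seconds; a negative value falls back
    to the SDK default.
  </description>
</property>
{code}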



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475655#comment-16475655
 ] 

genericqa commented on HADOOP-15154:


| (/) *+1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 27m 3s | trunk passed |
| +1 | compile | 28m 30s | trunk passed |
| +1 | checkstyle | 0m 48s | trunk passed |
| +1 | mvnsite | 1m 8s | trunk passed |
| +1 | shadedclient | 12m 8s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 31s | trunk passed |
| +1 | javadoc | 0m 55s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 26m 51s | the patch passed |
| +1 | javac | 26m 51s | the patch passed |
| -0 | checkstyle | 0m 49s | hadoop-common-project/hadoop-common: The patch generated 3 new + 17 unchanged - 0 fixed = 20 total (was 17) |
| +1 | mvnsite | 1m 4s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 54s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 38s | the patch passed |
| +1 | javadoc | 0m 56s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 6s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 123m 46s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15154 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12923442/HADOOP-15154.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a23bfa6023e3 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 58b97c7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/14628/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14628/testReport/ |
| Max. process+thread count | 1717 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14628/console |
| Powered by | Apache Yetus

[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2018-05-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475677#comment-16475677
 ] 

genericqa commented on HADOOP-12549:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 29m 36s | trunk passed |
| +1 | compile | 32m 5s | trunk passed |
| +1 | checkstyle | 0m 52s | trunk passed |
| -1 | mvnsite | 2m 30s | hadoop-common in trunk failed. |
| +1 | shadedclient | 17m 5s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 31s | trunk passed |
| +1 | javadoc | 0m 56s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 27m 8s | the patch passed |
| +1 | javac | 27m 8s | the patch passed |
| +1 | checkstyle | 0m 51s | the patch passed |
| +1 | mvnsite | 1m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 24s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 40s | the patch passed |
| +1 | javadoc | 0m 56s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 7m 50s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 134m 30s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-12549 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12770469/HADOOP-12549.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux dc78324d510e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 58b97c7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/14629/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14629/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14629/testReport/ |
| Max. process+thread count | 1517

[jira] [Commented] (HADOOP-15354) hadoop-aliyun & hadoop-azure modules to mark hadoop-common as provided

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475784#comment-16475784
 ] 

Steve Loughran commented on HADOOP-15354:
-

thanks; I'd forgotten about this...

> hadoop-aliyun & hadoop-azure modules to mark hadoop-common as provided
> --
>
> Key: HADOOP-15354
> URL: https://issues.apache.org/jira/browse/HADOOP-15354
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/azure, fs/oss
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15351-branch-3.1.001.patch
>
>
> Although the aws/openstack and adl modules now declare hadoop-common as 
> "provided", the hadoop-aliyun and hadoop-azure modules don't, so it gets into 
> the set of dependencies passed on through hadoop-cloud-storage. It should be 
> switched to "provided" in the POMs of these modules.
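
A minimal sketch of the change being described: the standard Maven way to mark hadoop-common as provided in the hadoop-azure and hadoop-aliyun POMs (the version is assumed to come from dependencyManagement):

{code}
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <!-- provided: available at compile time, not propagated to consumers -->
  <scope>provided</scope>
</dependency>
{code}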






[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475796#comment-16475796
 ] 

Steve Loughran commented on HADOOP-15465:
-

It would be good to scan some of the major downstream apps (HBase, Hive, ...) to 
see if they use that getSymlinkCommand. 


> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.
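
For illustration, a minimal sketch of the Java 7+ replacement: java.nio.file.Files can create and read symlinks directly, with no shell call or winutils.exe involved (the paths here are hypothetical):

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

Path link = Paths.get("C:\\tmp\\link");
Path target = Paths.get("C:\\tmp\\target");
Files.createSymbolicLink(link, target);        // replaces "winutils symlink"
Path resolved = Files.readSymbolicLink(link);  // reads the link target back
{code}

One caveat: on Windows, creating symlinks still requires the SeCreateSymbolicLinkPrivilege (or Developer Mode on Windows 10), so the Java API can fail where an elevated winutils.exe did not.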






[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475940#comment-16475940
 ] 

genericqa commented on HADOOP-15250:


| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 15m 18s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || branch-3.1 Compile Tests ||
| +1 | mvninstall | 21m 43s | branch-3.1 passed |
| +1 | compile | 14m 4s | branch-3.1 passed |
| +1 | checkstyle | 0m 52s | branch-3.1 passed |
| +1 | mvnsite | 1m 7s | branch-3.1 passed |
| +1 | shadedclient | 12m 10s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 55s | branch-3.1 passed |
| +1 | javadoc | 1m 9s | branch-3.1 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 54s | the patch passed |
| +1 | compile | 17m 22s | the patch passed |
| +1 | javac | 17m 22s | the patch passed |
| +1 | checkstyle | 0m 55s | the patch passed |
| +1 | mvnsite | 1m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 1s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 37s | the patch passed |
| +1 | javadoc | 0m 55s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 22s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 35s | The patch does not generate ASF License warnings. |
| | | 109m 49s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.key.TestKeyProviderFactory |
| | hadoop.crypto.key.TestKeyShell |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HADOOP-15250 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12922782/HADOOP-15250-branch-3.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 9b4419c19bcc 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 2bb3933 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14630/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14630/testReport/ |
| Max. process+thread count | 1382 (vs. ulimit of

[jira] [Commented] (HADOOP-15460) S3A FS to add "s3a:no-existence-checks" to the builder file creation option set

2018-05-15 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476003#comment-16476003
 ] 

Stephan Ewen commented on HADOOP-15460:
---

The discussion was motivated by downstream consumers of the S3AFileSystem who 
like all the tooling around security, retries, multi-part uploads, etc.

We would like to opt out of the consistency implications of mimicking a 
directory structure, and instead use it more like a blob store: a path is 
simply a key, and there is no check that no parent exists as a file.


> S3A FS to add  "s3a:no-existence-checks" to the builder file creation option 
> set
> 
>
> Key: HADOOP-15460
> URL: https://issues.apache.org/jira/browse/HADOOP-15460
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> As promised to [~StephanEwen]: add an s3a-specific option to the builder API 
> for creating files, so that all existence checks are skipped.
> This
> # eliminates a few hundred milliseconds
> # avoids any caching of negative HEAD/GET responses in the S3 load balancers.
> Callers will be expected to know what they are doing.
> FWIW, we are doing some PUT calls in the committer which bypass this stuff, 
> for the same reason. If you've just created a directory, you know there's 
> nothing underneath, so there is no need to check.
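
A sketch of how the proposed option might look from a caller's point of view, using the existing FSDataOutputStreamBuilder API; the option key comes from this issue's title and is not implemented yet:

{code}
// Hypothetical use of the proposed option; "s3a:no-existence-checks" is
// the key suggested in this JIRA, not a shipping feature.
FSDataOutputStream out = fs.createFile(path)
    .overwrite(true)
    .must("s3a:no-existence-checks", true)
    .build();
{code}

Using must() rather than opt() would make filesystems that don't understand the option reject the request instead of silently ignoring it.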






[jira] [Updated] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15469:

Summary: S3A directory committer commit job fails if _temporary directory 
created under dest  (was: S3A directory committer commit job fails if 
_temporary directory created under set)

> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: spark test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() checks whether the dest dir exists, rather than 
> whether it contains non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail.
> Proposed fix: remove the check.
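
For context, a minimal sketch (an assumption, not the actual Hadoop source) of the kind of commit-time check being proposed for removal; compare the PathExistsException in the stack trace on this issue:

{code}
// Fails whenever the destination exists at all, even if it only holds
// hidden entries such as a _temporary dir created earlier in the same job.
if (fs.exists(outputPath)) {
  throw new PathExistsException(outputPath.toString());
}
{code}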






[jira] [Updated] (HADOOP-15442) ITestS3AMetrics.testMetricsRegister can't know metrics source's name

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15442:

Affects Version/s: 3.1.0
  Component/s: metrics
   fs/s3

> ITestS3AMetrics.testMetricsRegister can't know metrics source's name
> 
>
> Key: HADOOP-15442
> URL: https://issues.apache.org/jira/browse/HADOOP-15442
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, metrics
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15442.001.patch
>
>
> I've seen this test fail a bunch lately - mainly when the tests are all run 
> (i.e. not individually) but not in parallel, it seems. If you dump out the 
> sources when it fails, you see:
> * The sources are numbered in the hundreds, so it's very unlikely that this 
> actually gets the first one.
> * The sources are numbered twice. There was logic to have the first one not 
> be numbered, but that got messed up, and now all sources are numbered twice, 
> except the first one, which is only numbered once.
> We could just remove the bad assertion, but then we're only testing the 
> registry and not anything else about the way metrics flow all the way through 
> the whole system. Worth it to fix the failing test, I think - knowing the 
> source gets registered doesn't add a whole lot of value toward end-to-end 
> metrics testing.






[jira] [Updated] (HADOOP-15442) ITestS3AMetrics.testMetricsRegister can't know metrics source's name

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15442:

   Resolution: Fixed
Fix Version/s: 3.1.1
   Status: Resolved  (was: Patch Available)

Committed to branch-3.1 & trunk. Thanks for fixing this... flaky tests are always 
a pain, but it's hard to motivate yourself to fix them unless you are rigorous...

> ITestS3AMetrics.testMetricsRegister can't know metrics source's name
> 
>
> Key: HADOOP-15442
> URL: https://issues.apache.org/jira/browse/HADOOP-15442
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, metrics
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HADOOP-15442.001.patch
>
>
> I've seen this test fail a bunch lately - mainly when the tests are all run 
> (i.e. not individually) but not in parallel, it seems. If you dump out the 
> sources when it fails, you see:
> * The sources are numbered in the hundreds, so it's very unlikely that this 
> actually gets the first one.
> * The sources are numbered twice. There was logic to have the first one not 
> be numbered, but that got messed up, and now all sources are numbered twice, 
> except the first one, which is only numbered once.
> We could just remove the bad assertion, but then we're only testing the 
> registry and not anything else about the way metrics flow all the way through 
> the whole system. Worth it to fix the failing test, I think - knowing the 
> source gets registered doesn't add a whole lot of value toward end-to-end 
> metrics testing.






[jira] [Commented] (HADOOP-15442) ITestS3AMetrics.testMetricsRegister can't know metrics source's name

2018-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475996#comment-16475996
 ] 

Hudson commented on HADOOP-15442:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14200 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14200/])
HADOOP-15442. ITestS3AMetrics.testMetricsRegister can't know metrics (stevel: 
rev b6708374692e6c4d786e2f3f1f45cc7aa1e4e88f)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AMetrics.java


> ITestS3AMetrics.testMetricsRegister can't know metrics source's name
> 
>
> Key: HADOOP-15442
> URL: https://issues.apache.org/jira/browse/HADOOP-15442
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, metrics
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: HADOOP-15442.001.patch
>
>
> I've seen this test fail a bunch lately - mainly when the tests are all run 
> (i.e. not individually) but not in parallel, it seems. If you dump out the 
> sources when it fails, you see:
> * The sources are numbered in the hundreds, so it's very unlikely that this 
> actually gets the first one.
> * The sources are numbered twice. There was logic to have the first one not 
> be numbered, but that got messed up, and now all sources are numbered twice, 
> except the first one, which is only numbered once.
> We could just remove the bad assertion, but then we're only testing the 
> registry and not anything else about the way metrics flow all the way through 
> the whole system. Worth it to fix the failing test, I think - knowing the 
> source gets registered doesn't add a whole lot of value toward end-to-end 
> metrics testing.






[jira] [Updated] (HADOOP-15466) Correct units in adl.http.timeout

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15466:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

+1, committed. thanks

> Correct units in adl.http.timeout
> -
>
> Key: HADOOP-15466
> URL: https://issues.apache.org/jira/browse/HADOOP-15466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15466.001.patch
>
>
> Comment in core-default.xml says seconds, but according to the SDK docs it's 
> getting interpreted as milliseconds 
> ([https://github.com/Azure/azure-data-lake-store-java/blob/master/src/main/java/com/microsoft/azure/datalake/store/ADLStoreOptions.java#L139-L144]).
>  Pinging [~ASikaria] to double check I'm not missing anything.






[jira] [Commented] (HADOOP-15442) ITestS3AMetrics.testMetricsRegister can't know metrics source's name

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475979#comment-16475979
 ] 

Steve Loughran commented on HADOOP-15442:
-

+1

> ITestS3AMetrics.testMetricsRegister can't know metrics source's name
> 
>
> Key: HADOOP-15442
> URL: https://issues.apache.org/jira/browse/HADOOP-15442
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: HADOOP-15442.001.patch
>
>
> I've seen this test fail a bunch lately - mainly when the tests are all run 
> (i.e. not individually) but not in parallel, it seems. If you dump out the 
> sources when it fails, you see:
> * The sources are numbered in the hundreds, so it's very unlikely that this 
> actually gets the first one.
> * The sources are numbered twice. There was logic to have the first one not 
> be numbered, but that got messed up, and now all sources are numbered twice, 
> except the first one, which is only numbered once.
> We could just remove the bad assertion, but then we're only testing the 
> registry and not anything else about the way metrics flow all the way through 
> the whole system. Worth it to fix the failing test, I think - knowing the 
> source gets registered doesn't add a whole lot of value toward end-to-end 
> metrics testing.






[jira] [Created] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under set

2018-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15469:
---

 Summary: S3A directory committer commit job fails if _temporary 
directory created under set
 Key: HADOOP-15469
 URL: https://issues.apache.org/jira/browse/HADOOP-15469
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
 Environment: spark test runs
Reporter: Steve Loughran
Assignee: Steve Loughran


The directory staging committer fails in commit job if any temporary files/dirs 
have been created. Spark work can create such a dir for placement of absolute 
files.

This is because commitJob() checks whether the dest dir exists, rather than 
whether it contains non-hidden files.
As the comment says, "its kind of superfluous". More specifically, it means 
jobs which would commit with the classic committer & overwrite=false will fail.

Proposed fix: remove the check.






[jira] [Commented] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475949#comment-16475949
 ] 

Steve Loughran commented on HADOOP-15469:
-

Stack trace below.
* The job setup has already done the path-doesn't-exist check; this is job 
commit.
* I'm running S3Guard in debug, and it says the only path which exists 
underneath is _temporary, and even that is empty.

{code}
org.apache.hadoop.fs.PathExistsException: `s3a://hwdev-steve-new/spark_committer/orc': Destination path exists and committer conflict resolution mode is "fail"
	at org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter.preCommitJob(DirectoryStagingCommitter.java:99)
	at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.commitJob(AbstractS3ACommitter.java:576)
	at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:166)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:213)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:748)
{code}

> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: spark test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() checks whether the dest dir exists, rather than 
> whether it contains non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail.
> Proposed fix: remove the check.




[jira] [Resolved] (HADOOP-15468) The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.

2018-05-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-15468.
-
Resolution: Invalid

Please send your questions to common-...@hadoop.apache.org.

> The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.
> 
>
> Key: HADOOP-15468
> URL: https://issues.apache.org/jira/browse/HADOOP-15468
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.2
>Reporter: Wenming He
>Priority: Minor
> Fix For: 3.0.2
>
>
> When checking whether the overlay variable contains a deprecated key, why is it the value in the overlay that is checked directly, rather than whether the overlay contains the same key? What is the logic here?
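
For reference, a usage-level sketch of how deprecated keys behave in Configuration; this shows only the public API, not the overlay internals the question asks about:

{code}
// Assumed standalone example, not taken from the Hadoop sources.
Configuration.addDeprecation("old.key", "new.key");
Configuration conf = new Configuration(false);
conf.set("old.key", "value");      // logs a deprecation warning
String v = conf.get("new.key");    // "value": resolved via the new key
{code}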






[jira] [Commented] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475792#comment-16475792
 ] 

Steve Loughran commented on HADOOP-15458:
-

Could the stream be explicitly closed?

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458-branch-2.000.patch, HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  
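
A sketch of the remedy the comment above suggests: close the stream explicitly with try-with-resources so the test's cleanup can delete the directory on Windows (names here are illustrative, not the actual test code):

{code}
try (FSDataOutputStream out = fileSys.createFile(path).build()) {
  // the stream is closed when the block exits, releasing the Windows
  // file handle so TestLocalFileSystem#after can delete the folder
}
{code}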






[jira] [Updated] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15250:

Status: Patch Available  (was: Reopened)

reopening & submitting patch

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 3.0.0, 2.9.0, 2.7.3
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15250-branch-3.1.patch, HADOOP-15250.00.patch, 
> HADOOP-15250.01.patch, HADOOP-15250.02.patch, HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and things like Knox, HiveServer2, and 
> the HTTP YARN RM/ATS and MR History Server; and the cluster network, on the 
> second network interface, which uses jumbo frames, is open with no 
> restrictions, and allows all cluster traffic to flow between nodes.
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if the 
> traffic originates from nodes on the cluster network we return the 
> internal DNS record for the nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp can occur. 
> But we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster network 
> interface, which IS NOT routable. After researching the code we noticed the 
> following in NetUtils.java and Client.java. 
> Basically, Client.java takes the hostname and 
> attempts to bind to whatever the hostname resolves to. This is not valid 
> in a multi-homed network with one routable interface and one non-routable 
> interface. After reading through the java.net.Socket documentation, it is 
> valid to perform socket.bind(null), which lets the OS routing table and 
> DNS send the traffic out the correct interface. I will also attach the 
> network traces and a test patch for the 2.7.x and 3.x code bases. I have 
> this test fix running in my Hadoop test cluster.
> Client.java:
> {code}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  

[jira] [Reopened] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-15250:
-

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: HADOOP-15250-branch-3.1.patch, HADOOP-15250.00.patch, 
> HADOOP-15250.01.patch, HADOOP-15250.02.patch, HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and things like Knox, HiveServer2, and 
> the HTTP YARN RM/ATS and MR History Server; and the cluster network, on the 
> second network interface, which uses jumbo frames, is open with no 
> restrictions, and allows all cluster traffic to flow between nodes.
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if the 
> traffic originates from nodes on the cluster network we return the 
> internal DNS record for the nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp can occur. 
> But we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster network 
> interface, which IS NOT routable. After researching the code we noticed the 
> following in NetUtils.java and Client.java. 
> Basically, Client.java takes the hostname and 
> attempts to bind to whatever the hostname resolves to. This is not valid 
> in a multi-homed network with one routable interface and one non-routable 
> interface. After reading through the java.net.Socket documentation, it is 
> valid to perform socket.bind(null), which lets the OS routing table and 
> DNS send the traffic out the correct interface. I will also attach the 
> network traces and a test patch for the 2.7.x and 3.x code bases. I have 
> this test fix running in my Hadoop test cluster.
> Client.java:
> {code}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY 
> = 

[jira] [Commented] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16475799#comment-16475799
 ] 

Steve Loughran commented on HADOOP-15461:
-

+1 for anything which can move off winutils, though it will need to be matched 
by us actually releasing the JNI bits. Currently winutils only gets done 
intermittently and [not as an official ASF 
artifact|https://github.com/steveloughran/winutils]

> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort to improve the interaction between Hadoop and 
> Windows Server.
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks);
>  ** Replace by something like JNI or so;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Memory and handle leaks.






[jira] [Updated] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15469:

Attachment: HADOOP-15469-001.patch

> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: spark test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15469-001.patch
>
>
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() looks for the dest dir existing, not containing 
> non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail
> Proposed fix: remove the check






[jira] [Commented] (HADOOP-15470) S3A staging committers to not log FNFEs on job abort listings

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476059#comment-16476059
 ] 

Steve Loughran commented on HADOOP-15470:
-

Stack trace I don't want to see
{code}
18/05/15 14:34:56 INFO AbstractS3ACommitter: Task committer attempt_20180515143450__m_00_0: aborting job (no job ID) in state FAILED
18/05/15 14:34:56 INFO StagingCommitter: Starting: Task committer attempt_20180515143450__m_00_0: aborting job in state (no job ID)
18/05/15 14:34:56 INFO AbstractS3ACommitter: Listing pending uploads
java.io.FileNotFoundException: File hdfs://cluster/user/stevel/tmp/staging/stevel/application_1525120872005_0416/staging-uploads/_temporary/0 does not exist.
	at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:1222)
	at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:1200)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1145)
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1141)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:1159)
	at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2105)
	at org.apache.hadoop.fs.FileSystem$5.<init>(FileSystem.java:2234)
	at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:2231)
	at org.apache.hadoop.fs.s3a.S3AUtils.listAndFilter(S3AUtils.java:1148)
	at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.listPendingUploads(StagingCommitter.java:500)
	at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.listPendingUploadsToAbort(StagingCommitter.java:482)
	at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.abortJobInternal(StagingCommitter.java:554)
	at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.abortJob(AbstractS3ACommitter.java:503)
	at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:199)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:223)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at 

[jira] [Updated] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-15 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15456:

Attachment: HADOOP-15456-docker-hadoop-runner.002.patch

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15456-docker-hadoop-runner.001.patch, 
> HADOOP-15456-docker-hadoop-runner.002.patch, secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15470) S3A staging committers to not log FNFEs on job abort listings

2018-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15470:
---

 Summary: S3A staging committers to not log FNFEs on job abort 
listings
 Key: HADOOP-15470
 URL: https://issues.apache.org/jira/browse/HADOOP-15470
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


When aborting a job, the staging committers list staged files in the cluster FS 
to abort; all exceptions are caught & downgraded to log events.

We shouldn't even log FNFEs except at debug level, as all it means is "the job 
is aborting before things got that far". Printing the full stack simply creates 
confusion about what the problem is.
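
A minimal sketch of the intended handling (hypothetical names; the real listing 
call in StagingCommitter differs):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Collections;
import java.util.List;

import org.slf4j.Logger;

class AbortListing {
  // Hypothetical stand-in for the committer's pending-upload listing.
  static List<String> listPendingUploads() throws IOException {
    throw new FileNotFoundException("staging directory is already gone");
  }

  static List<String> listPendingQuietly(Logger log) {
    try {
      return listPendingUploads();
    } catch (FileNotFoundException e) {
      // Expected when abort runs before anything was staged: debug only.
      log.debug("No staged uploads to abort", e);
    } catch (IOException e) {
      // Other failures are still downgraded so the abort can continue.
      log.warn("Failed to list pending uploads during job abort", e);
    }
    return Collections.emptyList();
  }
}
{code}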



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2018-05-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476113#comment-16476113
 ] 

Íñigo Goiri commented on HADOOP-10075:
--

[My bad, my Chrome got stuck and I accidentally changed the assignee.]

HDFS-13561 is trying to fix {{TestTransferFsImage}} in branch-2 by updating the 
timeout. That was already done in this JIRA, so I will take that part to 
branch-2 as part of HDFS-13561.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch, HADOOP-10075_addendum.002.patch, 
> HADOOP-10075_addendum.003.patch, HADOOP-10075_addendum.004.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476094#comment-16476094
 ] 

Xiao Chen commented on HADOOP-15154:


Thanks Zsolt for working on this; it looks pretty good to me overall.
Some minor points:
- I think renaming the {{Object}} parameter (s/subject/stream/g) would be clearer.
- We should null-check the 2 arrays in {{assertCapabilities}}.
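
A minimal sketch of the helper under discussion (names are illustrative, 
assuming JUnit 4 and the {{StreamCapabilities}} interface):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.fs.StreamCapabilities;

public final class StreamCapabilitiesTestUtils {

  private StreamCapabilitiesTestUtils() {
  }

  /**
   * Assert that a stream declares, or does not declare, a capability,
   * failing with a message naming both the stream and the capability.
   */
  public static void assertCapability(String capability,
      StreamCapabilities stream, boolean outcome) {
    assertEquals("Wrong result for capability " + capability + " of " + stream,
        outcome, stream.hasCapability(capability));
  }
}
{code}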

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HADOOP-15154.01.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstract such a method to a test util class and use it for 
> all {{StreamCapabilities}} tests as needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476087#comment-16476087
 ] 

Wei-Chiu Chuang commented on HADOOP-15455:
--

I'll commit the patch, skipping precommit; I couldn't get precommit to run 
against a GitHub PR.
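
For reference, a minimal sketch of the corrected branch, assuming the fix is 
simply to swap the two messages in the snippet quoted below:

{code:java}
if (LOG.isDebugEnabled()) {
  if (blacklist == null) {
    LOG.debug("No blacklist for {}", type.toString());
  } else if (access) {
    // access == true here means the user is NOT on the blacklist.
    LOG.debug("user is not in {}", blacklist.getAclString());
  } else {
    LOG.debug("user is in {}", blacklist.getAclString());
  }
}
{code}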

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar".
> else, it prints "user is in foo bar"
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-10075) Update jetty dependency to version 9

2018-05-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HADOOP-10075:


Assignee: Robert Kanter  (was: Íñigo Goiri)

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch, HADOOP-10075_addendum.002.patch, 
> HADOOP-10075_addendum.003.patch, HADOOP-10075_addendum.004.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476201#comment-16476201
 ] 

Íñigo Goiri commented on HADOOP-15465:
--

To be safe, we could also mark {{getSymlinkCommand()}} as @Deprecated.
I'm not sure what the policy is for this.
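
For reference, a minimal sketch of the java.nio replacement (illustrative only; 
the mapping of errors back to Hadoop's exceptions is omitted):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

class JavaSymlink {
  /** Create a symlink via java.nio instead of shelling out to winutils. */
  static void createSymlink(String target, String link) throws IOException {
    Path linkPath = Paths.get(link);
    Path targetPath = Paths.get(target);
    // On Windows this requires SeCreateSymbolicLinkPrivilege (or Developer
    // Mode); otherwise Files.createSymbolicLink throws a FileSystemException.
    Files.createSymbolicLink(linkPath, targetPath);
  }
}
{code}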

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-15 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476018#comment-16476018
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

Here are my review comments; some of them were already mentioned previously. 
[~dapengsun], would you please update the patch when you have time available?

 
{quote}do you know why there's a ~50% degradation? That's concerning and may 
severely impede performance (to the point I can't use it :() 
{quote}
While that's the case, it is still 4-5x more performant than the existing 
implementation, and I guess I'm still happy about that. The amount of memory 
allocation is concerning; I'll try to find time to profile it today.

 
 * CryptoInputStream
 The new readFully() method is public but lacks javadoc.
 Is it different from the existing readFully() methods? Can you reuse the 
existing readFully() method?
 * Client
 Why are we catching exceptions in Client.shouldAuthenticateOverKrb()? That 
seems unnecessary. If not catching the exception causes a problem, please add a 
test case for it.
 * SaslUtil
 ** negotiateCipherOption()
 Can you please throw something more specific than IOException? I'm not in 
favor of making every method throw a generic IOException. Similarly, update the 
method signatures along the code path (getCipherOption, processSaslToken).
 This method is very similar to DataTransferSaslUtil#negotiateCipherOption() 
except for the configuration keys.
 I see no reason to duplicate the code, especially as it involves some 
encoding/decoding, which is not that easy to comprehend.
 ** Also, every time this method is called, it returns a new List<>. I feel 
like this is too much of a cost. Can we reduce the memory footprint?

 * SaslRpcClient
 ** saslConnect()
 LOG.debug(
 "Get SASL RPC CipherOption from Conf" + cipherOptions);
 -> missing a space after "Conf"; see the logging sketch after this list.
 The method is well over 100 lines long now; some refactoring would greatly 
improve readability.
 ** handleSaslCipherOptions()
 throw new SaslException(e.getMessage(), e);
 The exception message should be more descriptive. It could be something like 
"Unable to initialize SaslCryptoCodec".
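
On the logging nit flagged in saslConnect() above, a minimal sketch of the 
suggested form; parameterized SLF4J logging also avoids the concatenation cost 
when debug is disabled:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SaslLogging {
  private static final Logger LOG = LoggerFactory.getLogger(SaslLogging.class);

  static void logNegotiatedOptions(Object cipherOptions) {
    // The space lives in the template, and the argument is only
    // stringified if debug logging is actually enabled.
    LOG.debug("Got SASL RPC CipherOption from conf: {}", cipherOptions);
  }
}
{code}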

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilized SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even {{GSSAPI}} supports using 
> AES, but without AES-NI support by default, so the encryption is slow and 
> will become bottleneck.
> After discuss with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606. Use AES-NI with more than *20x* speedup.
> On the other hand, RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, we needs to setup benchmark to see real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-10075) Update jetty dependency to version 9

2018-05-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HADOOP-10075:


Assignee: Íñigo Goiri  (was: Robert Kanter)

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Íñigo Goiri
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch, HADOOP-10075_addendum.002.patch, 
> HADOOP-10075_addendum.003.patch, HADOOP-10075_addendum.004.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-05-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476198#comment-16476198
 ] 

Íñigo Goiri commented on HADOOP-15461:
--

Thanks [~ste...@apache.org], it would be nice if you could track this effort 
and provide feedback on the Windows build fixes.

> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
>
> This Jira tracks the effort to improve the interaction between Hadoop and 
> Windows Server.
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks);
>  ** Replace by something like JNI or so;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Memory and handle leaks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-15 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15455:
-
Status: Open  (was: Patch Available)

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Trivial
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar".
> else, it prints "user is in foo bar"
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15460) S3A FS to add "s3a:no-existence-checks" to the builder file creation option set

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476095#comment-16476095
 ] 

Steve Loughran commented on HADOOP-15460:
-

we'd skip both the checks at the beginning, and any DELETE calls put upstream 
at the end. For S3Guard we still want to update the DDB tables, as long as the 
cost is low. Stephan is really motivated by the problem of "writing small 
checkpoint files every few seconds"; there's too much overhead around the PUT 
for their code right now.
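
A minimal sketch of what the caller side could look like with the builder API; 
the option key below is purely illustrative, since the final name is what this 
JIRA is to decide:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class FastCreate {
  static FSDataOutputStream create(FileSystem fs, Path path) throws IOException {
    // must() fails on filesystems that don't understand the option;
    // opt() would make the request best-effort instead.
    return fs.createFile(path)
        .overwrite(true)
        .must("fs.s3a.create.no-existence-checks", true)  // hypothetical key
        .build();
  }
}
{code}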

> S3A FS to add  "s3a:no-existence-checks" to the builder file creation option 
> set
> 
>
> Key: HADOOP-15460
> URL: https://issues.apache.org/jira/browse/HADOOP-15460
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> As promised to [~StephanEwen]: add an s3a-specific option to the builder API 
> to create files with all existence checks skipped.
> This
> # eliminates a few hundred milliseconds
> # avoids any caching of negative HEAD/GET responses in the S3 load balancers.
> Callers will be expected to know what they are doing.
> FWIW, we are doing some PUT calls in the committer which bypass this stuff, 
> for the same reason. If you've just created a directory, you know there's 
> nothing underneath, so no need to check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15467) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476186#comment-16476186
 ] 

Íñigo Goiri commented on HADOOP-15467:
--

This does not fail in the daily Windows build, so it seems spurious.
Any insight into why Windows takes longer?

> TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
>  time out on Windows
> --
>
> Key: HADOOP-15467
> URL: https://issues.apache.org/jira/browse/HADOOP-15467
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HDFS-13549.000.patch
>
>
> [INFO] Running org.apache.hadoop.security.TestDoAsEffectiveUser
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 8.307 s <<< FAILURE! - in org.apache.hadoop.security.TestDoAsEffectiveUser
> [ERROR] testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time elapsed: 4.107 s <<< ERROR!
> java.lang.Exception: test timed out after 4000 milliseconds
> at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
> at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
> at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
> at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
> at org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103)
> at org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> [ERROR] testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser) Time elapsed: 4.002 s <<< ERROR!
> java.lang.Exception: test timed out after 4000 milliseconds
> at java.net.Inet4AddressImpl.getHostByAddr(Native Method)
> at java.net.InetAddress$2.getHostByAddr(InetAddress.java:932)
> at java.net.InetAddress.getHostFromNameService(InetAddress.java:617)
> at java.net.InetAddress.getCanonicalHostName(InetAddress.java:588)
> at org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103)
> at org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> [INFO]
> [INFO] Results:
> [INFO]
> 

[jira] [Commented] (HADOOP-14946) S3Guard testPruneCommandCLI can fail in parallel runs

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476014#comment-16476014
 ] 

Steve Loughran commented on HADOOP-14946:
-

Still recurring when the thread count is >= the core count. Maybe pull this 
specific test out and run it in the serial phase.

> S3Guard testPruneCommandCLI can fail in parallel runs
> -
>
> Key: HADOOP-14946
> URL: https://issues.apache.org/jira/browse/HADOOP-14946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> The test of the S3Guard CLI prune can sometimes fail on parallel test runs. 
> Assumption: it is the parallelism which is causing the problem
> {code}
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB
> testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 10.765 sec  <<< FAILURE!
> java.lang.AssertionError: Pruned children count [] expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15468) The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.

2018-05-15 Thread Wenming He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476028#comment-16476028
 ] 

Wenming He commented on HADOOP-15468:
-

Are you Chinese?

> The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.
> 
>
> Key: HADOOP-15468
> URL: https://issues.apache.org/jira/browse/HADOOP-15468
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.2
>Reporter: Wenming He
>Priority: Minor
> Fix For: 3.0.2
>
>
> When checking whether the overlay variable contains a deprecated key, why does it check the values in overlay directly instead of checking whether overlay contains the same key? What is the logic here?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476170#comment-16476170
 ] 

genericqa commented on HADOOP-15469:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
44s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15469 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923511/HADOOP-15469-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c4406e655832 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 07d8505 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14631/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14631/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
>

[jira] [Commented] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-15 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476297#comment-16476297
 ] 

Ajay Kumar commented on HADOOP-15456:
-

Patch v2 to merge changes with hadoop-docker-runner.

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15456-docker-hadoop-runner.001.patch, 
> HADOOP-15456-docker-hadoop-runner.002.patch, secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15465) Deprecate WinUtils#Symlinks by using native java code

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476322#comment-16476322
 ] 

Steve Loughran commented on HADOOP-15465:
-

Shell is tagged as Public + Evolving, making it part of our public APIs. People 
may be using it with an expectation of stability, something [we 
promise|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#Java_Binary_compatibility_for_end-user_applications_i.e._Apache_Hadoop_ABI]

But: I don't see anyone using it, as FileUtils.symlink is picked up directly. 

Deprecating is safest and must be done for at least one dot release; it's also 
good to do the due diligence first and find out if anyone has been using it. I've 
just checked Spark in my IDE; all is well there. Normally I'd suspect HBase and 
then Hive as the projects most prone to using internal stuff. 

> Deprecate WinUtils#Symlinks by using native java code
> -
>
> Key: HADOOP-15465
> URL: https://issues.apache.org/jira/browse/HADOOP-15465
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15465.v0.patch, HADOOP-15465.v0.proto.patch
>
>
> Hadoop uses the shell to create symbolic links. Now that Hadoop relies on 
> Java 7+, we can deprecate all the shell code and rely on the Java APIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476309#comment-16476309
 ] 

genericqa commented on HADOOP-15455:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 1 new + 7 unchanged - 1 fixed = 8 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
2s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923517/HADOOP-15455.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 08346aa31be3 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 07d8505 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14634/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14634/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console 

[jira] [Commented] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-15 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476318#comment-16476318
 ] 

genericqa commented on HADOOP-15455:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 1 new + 7 unchanged - 1 fixed = 8 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
56s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923517/HADOOP-15455.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4fbe877893eb 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 07d8505 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14635/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14635/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console 

[jira] [Commented] (HADOOP-15432) AzureBlobFS - Base package classes and configuration files

2018-05-15 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476402#comment-16476402
 ] 

Sean Mackrory commented on HADOOP-15432:


Regarding the autogenerated code, I'm not sure why we're leaving it as-is. If 
there's opposition to manually removing unused imports, what are we supposed 
to do if there's ever a bug in this code?

> AzureBlobFS - Base package classes and configuration files
> --
>
> Key: HADOOP-15432
> URL: https://issues.apache.org/jira/browse/HADOOP-15432
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15432-001.patch, HADOOP-15432-003.patch
>
>
> Patch contains:
> - AzureBlobFileSystem and SecureAzureBlobFileSystem classes which are the 
> main interfaces Hadoop interacts with.
> - Updated Azure pom.xml with updated dependencies, updated parallel tests 
> configurations and maven shader plugin.
> - Checkstyle suppression file. Since http layer is generated automatically by 
> another libraries, it will not follow hadoop coding guidelines. Therefore a 
> few rules for checkstyles have been disabled.
> - Added test configuration file template to be used by the consumers. Similar 
> to wasb, all the configurations will go into this file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15472) Fix NPE in DefaultUpgradeComponentsFinder

2018-05-15 Thread Suma Shivaprasad (JIRA)
Suma Shivaprasad created HADOOP-15472:
-

 Summary: Fix NPE in DefaultUpgradeComponentsFinder 
 Key: HADOOP-15472
 URL: https://issues.apache.org/jira/browse/HADOOP-15472
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Suma Shivaprasad
Assignee: Suma Shivaprasad


In current upgrades for YARN native services, we do not support 
addition/deletion of components during upgrade. On trying to upgrade with the 
same number of components in the target spec as in the current service spec, 
but with one of the components having a new target spec and name, we see the 
following NPE in the service AM logs:

{noformat}
2018-05-15 00:10:41,489 [IPC Server handler 0 on 37488] ERROR 
service.ClientAMService - Error while trying to upgrade service {} 
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.lambda$findTargetComponentSpecs$0(UpgradeComponentsFinder.java:103)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.findTargetComponentSpecs(UpgradeComponentsFinder.java:100)
at 
org.apache.hadoop.yarn.service.ServiceManager.processUpgradeRequest(ServiceManager.java:259)
at 
org.apache.hadoop.yarn.service.ClientAMService.upgrade(ClientAMService.java:163)
at 
org.apache.hadoop.yarn.service.impl.pb.service.ClientAMProtocolPBServiceImpl.upgradeService(ClientAMProtocolPBServiceImpl.java:81)
at 
org.apache.hadoop.yarn.proto.ClientAMProtocol$ClientAMProtocolService$2.callBlockingMethod(ClientAMProtocol.java:5972)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
{noformat}
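
A minimal sketch of the kind of guard that would replace the NPE with a clear 
error (hypothetical helper; the real fix belongs in the spec matching inside 
DefaultUpgradeComponentsFinder):

{code:java}
import java.util.List;
import java.util.Objects;
import java.util.function.Function;

class ComponentMatcher {
  /** Look up a component by name, failing loudly instead of with an NPE. */
  static <T> T findByName(List<T> components, String name,
      Function<T, String> nameOf) {
    return components.stream()
        .filter(c -> Objects.equals(nameOf.apply(c), name))
        .findFirst()
        .orElseThrow(() -> new IllegalArgumentException(
            "Component " + name + " not found in the current spec; "
                + "adding or renaming components during upgrade is unsupported"));
  }
}
{code}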



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15469:

Status: Patch Available  (was: Open)

> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: spark test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15469-001.patch
>
>
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() looks for the dest dir existing, not containing 
> non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail
> Proposed fix: remove the check



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15469) S3A directory committer commit job fails if _temporary directory created under dest

2018-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476055#comment-16476055
 ] 

Steve Loughran commented on HADOOP-15469:
-

Patch 001. Removes the check and fixes the mock tests to verify that the 
behaviour has now changed. There's still an existence check in job setup; we 
just don't overreact otherwise. The alternative would be a more complex & 
brittle scan for >1 non-temp entry, which I think is overkill.

Testing: S3 Ireland. The first run failed with HADOOP-14946 over a slow network 
& many workers; a rerun made it go away.

FYI, [~rdblue]

> S3A directory committer commit job fails if _temporary directory created 
> under dest
> ---
>
> Key: HADOOP-15469
> URL: https://issues.apache.org/jira/browse/HADOOP-15469
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: spark test runs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15469-001.patch
>
>
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() looks for the dest dir existing, not containing 
> non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail
> Proposed fix: remove the check



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-15 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15455:
-
Attachment: HADOOP-15455.001.patch

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Trivial
> Attachments: HADOOP-15455.001.patch
>
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar".
> else, it prints "user is in foo bar"
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15455) Incorrect debug message in KMSACL#hasAccess

2018-05-15 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15455:
-
Status: Patch Available  (was: Open)

> Incorrect debug message in KMSACL#hasAccess
> ---
>
> Key: HADOOP-15455
> URL: https://issues.apache.org/jira/browse/HADOOP-15455
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuen-Kuei Hsueh
>Priority: Trivial
> Attachments: HADOOP-15455.001.patch
>
>
> If the user is in the blacklist "foo bar", it prints "user is not in foo bar".
> else, it prints "user is in foo bar"
> {code:title=KMSACLs#hasAccess()}
> if (access) {
>   AccessControlList blacklist = blacklistedAcls.get(type);
>   access = (blacklist == null) || !blacklist.isUserInList(ugi);
>   if (LOG.isDebugEnabled()) {
> if (blacklist == null) {
>   LOG.debug("No blacklist for {}", type.toString());
> } else if (access) {
>   LOG.debug("user is in {}" , blacklist.getAclString());
> } else {
>   LOG.debug("user is not in {}" , blacklist.getAclString());
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15471) Hdfs recursive listing operation is very slow

2018-05-15 Thread Ajay Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Sachdev moved HDFS-13398 to HADOOP-15471:
--

Fix Version/s: (was: 2.7.1)
   2.7.1
Affects Version/s: (was: 2.7.1)
   2.7.1
 Target Version/s: 2.7.6  (was: 2.7.1)
  Component/s: (was: hdfs)
   fs
  Key: HADOOP-15471  (was: HDFS-13398)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Hdfs recursive listing operation is very slow
> -
>
> Key: HADOOP-15471
> URL: https://issues.apache.org/jira/browse/HADOOP-15471
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.1
> Environment: HCFS file system where HDP 2.6.1 is connected to ECS 
> (Object Store).
>Reporter: Ajay Sachdev
>Assignee: Ajay Sachdev
>Priority: Major
> Fix For: 2.7.1
>
> Attachments: HDFS-13398.001.patch, HDFS-13398.002.patch, 
> parallelfsPatch
>
>
> The hdfs dfs -ls -R command is sequential in nature and is very slow for an 
> HCFS system. We have seen around 6 minutes for a structure of 40K 
> directories/files.
> The proposal is to use a multithreading approach to speed up the recursive 
> list, du, and count operations.
> We have tried a ForkJoinPool implementation to improve the performance of the 
> recursive listing operation (see the sketch below):
> [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli]
> commit id : 
> 82387c8cd76c2e2761bd7f651122f83d45ae8876
> Another implementation uses the Java ExecutorService to run the listing 
> operation in multiple threads in parallel. This significantly reduced the 
> time from 6 minutes to 40 seconds.
>  
>  
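
For illustration, a minimal sketch of the ForkJoinPool approach described above 
(hypothetical class; this is not the attached patch):

{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Recursively lists a tree, forking one subtask per subdirectory. */
class ParallelListTask extends RecursiveTask<List<FileStatus>> {
  private final FileSystem fs;
  private final Path dir;

  ParallelListTask(FileSystem fs, Path dir) {
    this.fs = fs;
    this.dir = dir;
  }

  @Override
  protected List<FileStatus> compute() {
    List<FileStatus> results = new ArrayList<>();
    List<ParallelListTask> subtasks = new ArrayList<>();
    try {
      for (FileStatus st : fs.listStatus(dir)) {
        results.add(st);
        if (st.isDirectory()) {
          ParallelListTask task = new ParallelListTask(fs, st.getPath());
          task.fork();              // descend into the subdirectory in parallel
          subtasks.add(task);
        }
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    for (ParallelListTask task : subtasks) {
      results.addAll(task.join()); // gather the children's listings
    }
    return results;
  }
}
// Usage: new ForkJoinPool(16).invoke(new ParallelListTask(fs, new Path("/")))
{code}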



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15468) The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.

2018-05-15 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476266#comment-16476266
 ] 

Xiaoyu Yao commented on HADOOP-15468:
-

[~whe], you are welcome to discuss questions on the dev mailing list. However, 
Jira is used to track bugs. We have a wiki page explaining why this kind of 
JIRA is resolved as invalid: https://wiki.apache.org/hadoop/InvalidJiraIssues

> The setDeprecatedProperties method for the Configuration in Hadoop3.0.2.
> 
>
> Key: HADOOP-15468
> URL: https://issues.apache.org/jira/browse/HADOOP-15468
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.2
>Reporter: Wenming He
>Priority: Minor
> Fix For: 3.0.2
>
>
> When checking whether the overlay variable contains a deprecated key, why does it check the values in overlay directly instead of checking whether overlay contains the same key? What is the logic here?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15154:
---
Status: Patch Available  (was: Open)

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HADOOP-15154.01.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstracting such a method into a test util class and using it 
> for all {{StreamCapabilities}} tests as needed.
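
A hedged sketch of what such a utility could look like, assuming only the 
public StreamCapabilities#hasCapability API; the class name and the plain 
AssertionError (rather than a JUnit assertion) are illustrative:

{code}
import org.apache.hadoop.fs.StreamCapabilities;

public final class StreamCapabilitiesTestUtil {
  private StreamCapabilitiesTestUtil() {
  }

  /**
   * Assert that the subject reports the expected outcome for a capability,
   * failing with a message that names both.
   */
  public static void assertCapability(String capability,
      StreamCapabilities subject, boolean outcome) {
    if (subject.hasCapability(capability) != outcome) {
      throw new AssertionError("Expected hasCapability(\"" + capability
          + "\") to return " + outcome + " on " + subject);
    }
  }
}
{code}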



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15154:
---
Attachment: HADOOP-15154.01.patch

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-15154.01.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstracting such a method into a test util class and using it 
> for all {{StreamCapabilities}} tests as needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel reassigned HADOOP-15154:
--

Assignee: Zsolt Venczel

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HADOOP-15154.01.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstracting such a method into a test util class and using it 
> for all {{StreamCapabilities}} tests as needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15154:
---
Attachment: HADOOP-15154.01.patch

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HADOOP-15154.01.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstracting such a method into a test util class and using it 
> for all {{StreamCapabilities}} tests as needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15154) Abstract new method assertCapability for StreamCapabilities testing

2018-05-15 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15154:
---
Attachment: (was: HADOOP-15154.01.patch)

> Abstract new method assertCapability for StreamCapabilities testing
> ---
>
> Key: HADOOP-15154
> URL: https://issues.apache.org/jira/browse/HADOOP-15154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Xiao Chen
>Assignee: Zsolt Venczel
>Priority: Minor
> Attachments: HADOOP-15154.01.patch
>
>
> From Steve's 
> [comment|https://issues.apache.org/jira/browse/HADOOP-15149?focusedCommentId=16306806=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16306806]:
> bq.  it'd have been cleaner for the asserts to have been one in a 
> assertCapability(key, StreamCapabilities subject, bool outcome) and had it 
> throw meaningful exceptions on a failure
> We can consider abstracting such a method into a test util class and using it 
> for all {{StreamCapabilities}} tests as needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13694) Data transfer encryption with AES 192: Invalid key length.

2018-05-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HADOOP-13694:


Assignee: (was: Harsh J)

> Data transfer encryption with AES 192: Invalid key length.
> --
>
> Key: HADOOP-13694
> URL: https://issues.apache.org/jira/browse/HADOOP-13694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.2
> Environment: OS: Ubuntu 14.04
> /hadoop-2.7.2/bin$ uname -a
> Linux wkstn-kpalaniappan 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 
> 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> /hadoop-2.7.2/bin$ java -version
> java version "1.7.0_95"
> OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)
> OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
> Hadoop version: 2.7.2
>Reporter: Karthik Palaniappan
>Priority: Major
>
> Configuring AES 128 or AES 256 encryption 
> (dfs.encrypt.data.transfer.cipher.key.bitlength = [128, 256]) works perfectly 
> fine. Trying to use AES 192 generates this exception on the datanode:
> 16/02/29 17:34:10 ERROR datanode.DataNode: 
> wkstn-kpalaniappan:50010:DataXceiver error processing unknown operation  src: 
> /127.0.0.1:57237 dst: /127.0.0.1:50010
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:396)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getEncryptedStreams(SaslDataTransferServer.java:178)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:110)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:193)
>   at java.lang.Thread.run(Thread.java:745)
> And this exception on the client:
> /hadoop-2.7.2/bin$ ./hdfs dfs -copyFromLocal ~/.vimrc /vimrc
> 16/02/29 17:34:10 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1266)
>   at 
> 
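
As a hedged repro aid, a self-contained check of which AES key lengths the 
local JCE accepts; note the reported failure is in Hadoop's native 
OpensslCipher path, so JCE results may differ:

{code}
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AesKeyLengthCheck {
  public static void main(String[] args) throws Exception {
    byte[] iv = new byte[16]; // AES block size, as used in CTR mode
    for (int bits : new int[] {128, 192, 256}) {
      try {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
            new SecretKeySpec(new byte[bits / 8], "AES"),
            new IvParameterSpec(iv));
        System.out.println(bits + "-bit AES: accepted");
      } catch (Exception e) {
        System.out.println(bits + "-bit AES: " + e);
      }
    }
  }
}
{code}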

[jira] [Assigned] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2018-05-15 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HADOOP-12549:


Assignee: (was: Harsh J)

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added an hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users, and only those that used the 
> default-loading mechanism of the Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code as 
> well, so that the default affects all forms of clients equally.
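
A hedged illustration of the proposal; the property name is from HDFS-7546, the 
"*" default is assumed for illustration, and wiring it into the generic RPC 
client is the proposal, not existing behavior:

{code}
import org.apache.hadoop.conf.Configuration;

public class PrincipalPatternDefault {
  public static void main(String[] args) {
    // new Configuration(false) loads no *-default.xml, so the HDFS-7546
    // default is absent unless the client code supplies it itself:
    Configuration conf = new Configuration(false);
    String pattern =
        conf.get("dfs.namenode.kerberos.principal.pattern", "*");
    System.out.println("effective principal pattern: " + pattern); // "*"
  }
}
{code}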



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-05-15 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476482#comment-16476482
 ] 

Suma Shivaprasad commented on HADOOP-15418:
---

@lqjack Can you please add a unit test (UT) for the patch? If not, do you mind 
if I take it to completion?

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> The issue is similar to what was fixed in HADOOP-15411. Fixing this in 
> KMSAuthenticationFilter as well.
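
A hedged sketch of the fix pattern: take a point-in-time copy of the matching 
properties instead of iterating the live Configuration, which a concurrent 
writer can invalidate mid-iteration. It assumes Configuration#getPropsWithPrefix 
(the JIRA title calls the method getPropsByPrefix); the prefix constant is 
illustrative:

{code}
import java.util.Map;
import java.util.Properties;

import org.apache.hadoop.conf.Configuration;

public class KmsFilterConfigSketch {
  static final String CONFIG_PREFIX = "hadoop.kms.authentication."; // assumed

  static Properties getFilterConfig(Configuration conf) {
    Properties props = new Properties();
    // getPropsWithPrefix returns a snapshot map (keys have the prefix
    // stripped), so no iterator over the live Properties is held here.
    Map<String, String> kmsProps = conf.getPropsWithPrefix(CONFIG_PREFIX);
    for (Map.Entry<String, String> entry : kmsProps.entrySet()) {
      props.setProperty(entry.getKey(), entry.getValue());
    }
    return props;
  }
}
{code}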



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15472) Fix NPE in DefaultUpgradeComponentsFinder

2018-05-15 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476473#comment-16476473
 ] 

Suma Shivaprasad commented on HADOOP-15472:
---

Raised this in the HADOOP project by mistake instead of YARN. Closing it as 
Invalid here; I will raise an issue in YARN.

> Fix NPE in DefaultUpgradeComponentsFinder 
> --
>
> Key: HADOOP-15472
> URL: https://issues.apache.org/jira/browse/HADOOP-15472
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> In current upgrades for YARN native services, we do not support 
> addition/deletion of components during upgrade. On trying to upgrade with the 
> same number of components in the target spec as in the current service spec, 
> but with one of the components having a new target spec and name, we see the 
> following NPE in the service AM logs:
> {noformat}
> 2018-05-15 00:10:41,489 [IPC Server handler 0 on 37488] ERROR 
> service.ClientAMService - Error while trying to upgrade service {} 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.lambda$findTargetComponentSpecs$0(UpgradeComponentsFinder.java:103)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)
>   at 
> org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.findTargetComponentSpecs(UpgradeComponentsFinder.java:100)
>   at 
> org.apache.hadoop.yarn.service.ServiceManager.processUpgradeRequest(ServiceManager.java:259)
>   at 
> org.apache.hadoop.yarn.service.ClientAMService.upgrade(ClientAMService.java:163)
>   at 
> org.apache.hadoop.yarn.service.impl.pb.service.ClientAMProtocolPBServiceImpl.upgradeService(ClientAMProtocolPBServiceImpl.java:81)
>   at 
> org.apache.hadoop.yarn.proto.ClientAMProtocol$ClientAMProtocolService$2.callBlockingMethod(ClientAMProtocol.java:5972)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15472) Fix NPE in DefaultUpgradeComponentsFinder

2018-05-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad resolved HADOOP-15472.
---
Resolution: Invalid

> Fix NPE in DefaultUpgradeComponentsFinder 
> --
>
> Key: HADOOP-15472
> URL: https://issues.apache.org/jira/browse/HADOOP-15472
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
>
> In current upgrades for YARN native services, we do not support 
> addition/deletion of components during upgrade. On trying to upgrade with the 
> same number of components in the target spec as in the current service spec, 
> but with one of the components having a new target spec and name, we see the 
> following NPE in the service AM logs:
> {noformat}
> 2018-05-15 00:10:41,489 [IPC Server handler 0 on 37488] ERROR 
> service.ClientAMService - Error while trying to upgrade service {} 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.lambda$findTargetComponentSpecs$0(UpgradeComponentsFinder.java:103)
>   at java.util.ArrayList.forEach(ArrayList.java:1257)
>   at 
> org.apache.hadoop.yarn.service.UpgradeComponentsFinder$DefaultUpgradeComponentsFinder.findTargetComponentSpecs(UpgradeComponentsFinder.java:100)
>   at 
> org.apache.hadoop.yarn.service.ServiceManager.processUpgradeRequest(ServiceManager.java:259)
>   at 
> org.apache.hadoop.yarn.service.ClientAMService.upgrade(ClientAMService.java:163)
>   at 
> org.apache.hadoop.yarn.service.impl.pb.service.ClientAMProtocolPBServiceImpl.upgradeService(ClientAMProtocolPBServiceImpl.java:81)
>   at 
> org.apache.hadoop.yarn.proto.ClientAMProtocol$ClientAMProtocolService$2.callBlockingMethod(ClientAMProtocol.java:5972)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org