[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671153#comment-16671153
 ] 

Hadoop QA commented on HADOOP-15889:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m  3s{color} 
| {color:red} root generated 1 new + 1448 unchanged - 1 fixed = 1449 total (was 
1449) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 240 unchanged - 2 fixed = 241 total (was 242) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15889 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946479/HADOOP-15889.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 215e1d6b8666 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5eb237 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15441/artifact/out/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15441/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15441/testReport/ |
| Max. process+thread count | 1717 (vs. ulimit of 1) |
| modules | C: hadoop-c

[jira] [Commented] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671141#comment-16671141
 ] 

Hadoop QA commented on HADOOP-15082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-15082 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12900269/HADOOP-15082-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15445/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch, HADOOP-15082-002.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")): some of the Ranger parentage checks didn't 
> handle that specific case.
> # Add a new root FS contract test for this operation.
> # Have WASB implement the test suite as an integration test.
> # If the test shows a problem, fix it.
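A minimal sketch of such a root-directory check, assuming a plain JUnit test against whatever FileSystem the configuration resolves to; the class and method names are illustrative, not the actual AbstractContractRootDirectoryTest code:

{code:java}
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Before;
import org.junit.Test;

public class TestRootDirMkdirSketch {

  private FileSystem fs;

  @Before
  public void setup() throws Exception {
    // The real contract test binds this to the store under test (e.g. WASB);
    // here we just take whatever the default configuration points at.
    fs = FileSystem.get(new Configuration());
  }

  @Test
  public void testMkdirsOnRoot() throws Exception {
    Path root = new Path("/");
    // mkdirs("/") must not fail: the root directory always exists. This is the
    // case the parentage checks reportedly mishandled.
    fs.mkdirs(root);
    assertTrue("root must still be a directory",
        fs.getFileStatus(root).isDirectory());
  }
}
{code}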






[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread zhenzhao wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhenzhao wang updated HADOOP-15891:
---
Attachment: HDFS-13948.008.patch

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948.007.patch, HDFS-13948.008.patch, HDFS-13948_ 
> Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount 
> Table-v1.pdf
>
>
> This JIRA adds support for regex-based mount points in the Inode Tree. Mount 
> points currently only support a fixed target path, but there are use cases 
> where the target needs to refer to fields from the source. For example, for a 
> mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, we want to refer to 
> the `cluster` and `user` fields in the source to construct the target. That is 
> impossible to achieve with the current link types. Although we could configure 
> one-to-one mappings, the mount table would become bloated with thousands of 
> users, and a regex mapping gives us more flexibility. So we are going to build 
> a regex-based mount point whose target can refer to groups captured by the 
> source regex.
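A small illustration of the path rewriting described above, using plain java.util.regex rather than the ViewFs mount-table code; the rule syntax is hypothetical:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMountSketch {
  public static void main(String[] args) {
    // Hypothetical rule: capture the cluster and user fields of the source path
    // and substitute them into the target template.
    Pattern source = Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");
    String targetTemplate = "/${cluster}-dc1/user-nn-${user}";

    Matcher m = source.matcher("/cluster1/user1");
    if (m.matches()) {
      String target = targetTemplate
          .replace("${cluster}", m.group("cluster"))
          .replace("${user}", m.group("user"));
      System.out.println(target);   // prints /cluster1-dc1/user-nn-user1
    }
  }
}
{code}

One rule of this shape can stand in for thousands of one-to-one mount entries.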






[jira] [Commented] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-31 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671126#comment-16671126
 ] 

Akira Ajisaka commented on HADOOP-15124:


Removed released versions from the target versions.

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector), I saw that the FileSystem.Statistics code paths 
> accounted for 5.58% of wall time and 26.5% of CPU time of the total execution 
> time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and the CPU time to 0.104% of total 
> execution time.
> Total job runtime decreased from 66 minutes to 61 minutes.
> These results are not conclusive, because I didn't run the benchmark multiple 
> times to average the results, but regardless of the performance gains, 
> switching to LongAdder simplifies the code and reduces its complexity.
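For reference, a minimal sketch of why LongAdder scales better for hot, write-mostly counters than a single shared counter; this is generic JDK code, not the actual FileSystem.Statistics implementation:

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class CounterSketch {
  public static void main(String[] args) {
    // Single shared counter: every increment contends on one memory location.
    AtomicLong shared = new AtomicLong();
    IntStream.range(0, 1_000_000).parallel().forEach(i -> shared.incrementAndGet());

    // LongAdder: increments are striped across internal cells and only summed
    // on read, which suits statistics updated far more often than they are read.
    LongAdder adder = new LongAdder();
    IntStream.range(0, 1_000_000).parallel().forEach(i -> adder.increment());

    System.out.println(shared.get() + " " + adder.sum());   // 1000000 1000000
  }
}
{code}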






[jira] [Updated] (HADOOP-15124) Slow FileSystem.Statistics counters implementation

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15124:
---
Target Version/s: 2.10.0, 3.2.0, 3.0.4, 3.1.2, 2.8.6, 2.9.3  (was: 2.6.6, 
3.1.0, 2.10.0, 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.8, 3.0.4, 3.1.2)

> Slow FileSystem.Statistics counters implementation
> --
>
> Key: HADOOP-15124
> URL: https://issues.apache.org/jira/browse/HADOOP-15124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
>  Labels: common, filesystem, fs, statistics
> Attachments: HADOOP-15124.001.patch
>
>
> While profiling a 1TB TeraGen job on a Hadoop 2.8.2 cluster (Google Dataproc, 
> 2 workers, GCS connector), I saw that the FileSystem.Statistics code paths 
> accounted for 5.58% of wall time and 26.5% of CPU time of the total execution 
> time.
> After switching the FileSystem.Statistics implementation to LongAdder, the 
> consumed wall time decreased to 0.006% and the CPU time to 0.104% of total 
> execution time.
> Total job runtime decreased from 66 minutes to 61 minutes.
> These results are not conclusive, because I didn't run the benchmark multiple 
> times to average the results, but regardless of the performance gains, 
> switching to LongAdder simplifies the code and reduces its complexity.






[jira] [Updated] (HADOOP-15069) support git-secrets commit hook to keep AWS secrets out of git

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15069:
---
Target Version/s: 3.2.0, 3.0.4, 2.8.6, 2.9.3  (was: 2.8.3, 3.2.0, 3.0.2, 
2.9.2)

> support git-secrets commit hook to keep AWS secrets out of git
> --
>
> Key: HADOOP-15069
> URL: https://issues.apache.org/jira/browse/HADOOP-15069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15069-001.patch, HADOOP-15069-002.patch
>
>
> The latest Uber breach looks like it involved AWS keys in git repos.
> Nobody wants that, which is why Amazon provides 
> [git-secrets|https://github.com/awslabs/git-secrets]: a script you can use to 
> scan a repo and its history, *and* add as an automated commit hook.
> Anyone can set this up, but there are a few false positives in the scan, 
> mostly from longs and a few all-upper-case constants. These can all be added 
> to a .gitignore file.
> Also: mention git-secrets in the AWS testing docs and say "use it".






[jira] [Updated] (HADOOP-15082) add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement the test

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15082:
---
Target Version/s: 2.9.3  (was: 2.9.2)

> add AbstractContractRootDirectoryTest test for mkdir / ; wasb to implement 
> the test
> ---
>
> Key: HADOOP-15082
> URL: https://issues.apache.org/jira/browse/HADOOP-15082
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15082-001.patch, HADOOP-15082-002.patch
>
>
> I managed to get a stack trace on an older version of WASB with some code 
> doing a mkdir(new Path("/")): some of the Ranger parentage checks didn't 
> handle that specific case.
> # Add a new root FS contract test for this operation.
> # Have WASB implement the test suite as an integration test.
> # If the test shows a problem, fix it.






[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671125#comment-16671125
 ] 

Hadoop QA commented on HADOOP-15885:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 15 unchanged - 0 fixed = 23 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946475/HADOOP-15885.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fcf323fdf7cc 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5eb237 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15440/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15440/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15440/testReport/ |
| Max. process+thread count | 1359 (vs. ulimit

[jira] [Updated] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2018-10-31 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-10584:
---
Target Version/s: 3.2.0, 2.9.3  (was: 3.2.0, 2.9.2)

> ActiveStandbyElector goes down if ZK quorum become unavailable
> --
>
> Key: HADOOP-10584
> URL: https://issues.apache.org/jira/browse/HADOOP-10584
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Major
> Attachments: HADOOP-10584.prelim.patch, hadoop-10584-prelim.patch, 
> rm.log
>
>
> ActiveStandbyElector retries operations a few times. If the ZK quorum itself 
> is down, the elector goes down and the daemons have to be brought up again.
> Instead, it should log the fact that it is unable to talk to ZK, call 
> becomeStandby on its client, and continue attempting to connect to ZK.
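A rough sketch of the requested behaviour only, with hypothetical names; this is not the real ActiveStandbyElector code. On a ZK failure the elector drops to standby, logs, and keeps retrying instead of terminating:

{code:java}
public class ElectorRetrySketch {

  /** Hypothetical callback mirroring the becomeStandby() notification. */
  interface ElectorCallback {
    void becomeStandby();
  }

  private static int attempts = 0;

  static void runElection(ElectorCallback cb) throws InterruptedException {
    while (true) {
      try {
        joinElection();                // one attempt to talk to ZK (placeholder)
        return;                        // success: election joined
      } catch (Exception zkDown) {
        // Requested behaviour: log, relinquish active state, and retry forever
        // rather than taking the daemon down.
        System.err.println("Unable to talk to ZooKeeper, staying standby: " + zkDown);
        cb.becomeStandby();
        Thread.sleep(5_000);
      }
    }
  }

  private static void joinElection() throws Exception {
    // Placeholder for the real ZooKeeper interaction: fail twice, then succeed.
    if (++attempts < 3) {
      throw new Exception("ZK quorum unavailable");
    }
  }
}
{code}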






[jira] [Updated] (HADOOP-15895) [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi

2018-10-31 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15895:
--
Attachment: HADOOP-15895.1.patch

> [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi
> --
>
> Key: HADOOP-15895
> URL: https://issues.apache.org/jira/browse/HADOOP-15895
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15895.1.patch
>
>
> Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.
> {noformat}
> $ mvn javadoc:javadoc --projects 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
> [ERROR] Exit code: 1 - 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR]
> [ERROR] Command line was: /usr/java/jdk-9.0.4/bin/javadoc @options @packages
> {noformat}






[jira] [Updated] (HADOOP-15895) [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi

2018-10-31 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15895:
--
Status: Patch Available  (was: Open)

> [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi
> --
>
> Key: HADOOP-15895
> URL: https://issues.apache.org/jira/browse/HADOOP-15895
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-15895.1.patch
>
>
> Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.
> {noformat}
> $ mvn javadoc:javadoc --projects 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
> [ERROR] Exit code: 1 - 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR]
> [ERROR] Command line was: /usr/java/jdk-9.0.4/bin/javadoc @options @packages
> {noformat}






[jira] [Updated] (HADOOP-15895) [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi

2018-10-31 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15895:
--
Summary: [JDK9+] Add missing javax.annotation-api dependency to 
hadoop-yarn-csi  (was: [JDK9+] Add missing javax.activation-api dependency to 
hadoop-yarn-csi)

> [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi
> --
>
> Key: HADOOP-15895
> URL: https://issues.apache.org/jira/browse/HADOOP-15895
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.
> {noformat}
> $ mvn javadoc:javadoc --projects 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
> [ERROR] Exit code: 1 - 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR]
> [ERROR] Command line was: /usr/java/jdk-9.0.4/bin/javadoc @options @packages
> {noformat}






[jira] [Created] (HADOOP-15895) [JDK9+] Add missing javax.activation-api dependency to hadoop-yarn-csi

2018-10-31 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HADOOP-15895:
-

 Summary: [JDK9+] Add missing javax.activation-api dependency to 
hadoop-yarn-csi
 Key: HADOOP-15895
 URL: https://issues.apache.org/jira/browse/HADOOP-15895
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


[JDK9+] Add missing javax.activation-api dependency to hadoop-yarn-csi

Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.

{noformat}
$ mvn javadoc:javadoc --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
[ERROR] Exit code: 1 - 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR] 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR] 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR]
[ERROR] Command line was: /usr/java/jdk-9.0.4/bin/javadoc @options @packages
{noformat}







[jira] [Updated] (HADOOP-15895) [JDK9+] Add missing javax.activation-api dependency to hadoop-yarn-csi

2018-10-31 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-15895:
--
Description: 
Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.
{noformat}
$ mvn javadoc:javadoc --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
[ERROR] Exit code: 1 - 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR] 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR] 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR]
[ERROR] Command line was: /usr/java/jdk-9.0.4/bin/javadoc @options @packages
{noformat}

  was:
[JDK9+] Add missing javax.activation-api dependency to hadoop-yarn-csi

Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.

{noformat}
$ mvn javadoc:javadoc --projects hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
[ERROR] Exit code: 1 - 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR] 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR] 
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
 error: cannot find symbol
[ERROR] @javax.annotation.Generated(
[ERROR]  ^
[ERROR]   symbol:   class Generated
[ERROR]   location: package javax.annotation
[ERROR]
[ERROR] Command line was: /usr/java/jdk-9.0.4/bin/javadoc @options @packages
{noformat}



> [JDK9+] Add missing javax.activation-api dependency to hadoop-yarn-csi
> --
>
> Key: HADOOP-15895
> URL: https://issues.apache.org/jira/browse/HADOOP-15895
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> Javadoc build fails in hadoop-yarn-csi due to missing {{javax.annotation}}.
> {noformat}
> $ mvn javadoc:javadoc --projects 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:jar (module-javadocs) on 
> project hadoop-yarn-csi: MavenReportException: Error while generating Javadoc:
> [ERROR] Exit code: 1 - 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/IdentityGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/ControllerGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR] 
> /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/generated-sources/protobuf/grpc-java/csi/v0/NodeGrpc.java:20:
>  error: cannot find symbol
> [ERROR] @javax.annotation.Generated(
> [ERROR]  ^
> [ERROR]   symbol:   class Generated
> [ERROR]   location: package javax.annotation
> [ERROR]
> [ERROR] Command line was: /usr/java/jdk-9.0.4/

[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread zhenzhao wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhenzhao wang updated HADOOP-15891:
---
Attachment: HDFS-13948.007.patch

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948.007.patch, HDFS-13948_ Regex Link Type In 
> Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This JIRA adds support for regex-based mount points in the Inode Tree. Mount 
> points currently only support a fixed target path, but there are use cases 
> where the target needs to refer to fields from the source. For example, for a 
> mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, we want to refer to 
> the `cluster` and `user` fields in the source to construct the target. That is 
> impossible to achieve with the current link types. Although we could configure 
> one-to-one mappings, the mount table would become bloated with thousands of 
> users, and a regex mapping gives us more flexibility. So we are going to build 
> a regex-based mount point whose target can refer to groups captured by the 
> source regex.






[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread zhenzhao wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhenzhao wang updated HADOOP-15891:
---
Attachment: (was: HDFS-13948.007.patch)

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948_ Regex Link Type In Mont Table-V0.pdf, 
> HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This JIRA adds support for regex-based mount points in the Inode Tree. Mount 
> points currently only support a fixed target path, but there are use cases 
> where the target needs to refer to fields from the source. For example, for a 
> mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, we want to refer to 
> the `cluster` and `user` fields in the source to construct the target. That is 
> impossible to achieve with the current link types. Although we could configure 
> one-to-one mappings, the mount table would become bloated with thousands of 
> users, and a regex mapping gives us more flexibility. So we are going to build 
> a regex-based mount point whose target can refer to groups captured by the 
> source regex.






[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671085#comment-16671085
 ] 

Hadoop QA commented on HADOOP-15891:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 31s{color} | {color:orange} root: The patch generated 4 new + 188 unchanged 
- 5 fixed = 192 total (was 193) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m  
3s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}250m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.util.TestBasicDiskValidator |
|   | hadoop.util.TestDiskCheckerWithDiskIo |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15891 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946437/HDFS-13948.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0b391d2f2cfb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| m

[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread zhenzhao wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhenzhao wang updated HADOOP-15891:
---
Attachment: HDFS-13948.007.patch

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948.007.patch, HDFS-13948_ Regex Link Type In 
> Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This JIRA adds support for regex-based mount points in the Inode Tree. Mount 
> points currently only support a fixed target path, but there are use cases 
> where the target needs to refer to fields from the source. For example, for a 
> mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, we want to refer to 
> the `cluster` and `user` fields in the source to construct the target. That is 
> impossible to achieve with the current link types. Although we could configure 
> one-to-one mappings, the mount table would become bloated with thousands of 
> users, and a regex mapping gives us more flexibility. So we are going to build 
> a regex-based mount point whose target can refer to groups captured by the 
> source regex.






[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2018-10-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671055#comment-16671055
 ] 

Íñigo Goiri commented on HADOOP-15889:
--

Thanks [~ajayydv] for the comments. In [^HADOOP-15889.001.patch] I added the 
following:
* Logged the number of tokens added through a file and through base64 
(separately).
* Checked the identifiers now, so this should verify what you proposed.
* Added a new part that loads the same token through both a file and base64.
* Added a test for properties. I couldn't figure out how to manage the 
environment cleanly, as I would have to start a separate process and so on.

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch, HADOOP-15889.001.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.
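A hedged sketch of the idea: the base64 (URL-safe) form returned by WebHDFS can already be turned back into a Token object with the existing Token API, and the proposal is to feed that string in through configuration. The hadoop.token property name is taken from the issue title; the final semantics may differ from this snippet.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.Token;

public class TokenConfigSketch {
  public static void main(String[] args) throws Exception {
    // A base64 delegation token string, e.g. as returned by WebHDFS.
    String base64Token = args[0];

    // Existing Token API: rebuild the token from its URL-safe base64 form.
    Token<?> token = new Token<>();
    token.decodeFromUrlString(base64Token);
    System.out.println("kind=" + token.getKind() + " service=" + token.getService());

    // Proposed: pass the same string via configuration instead of a token file
    // (property name taken from the issue title; illustrative only).
    Configuration conf = new Configuration();
    conf.set("hadoop.token", base64Token);
  }
}
{code}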






[jira] [Updated] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2018-10-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15889:
-
Attachment: HADOOP-15889.001.patch

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch, HADOOP-15889.001.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.






[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671010#comment-16671010
 ] 

Íñigo Goiri commented on HADOOP-15885:
--

I made the test stricter: it was checking for a substring of the service name, 
and now it checks for the exact string.
Let's see what Yetus thinks about it, but it should be ready to go.

> Add base64 (urlString) support to DTUtil
> 
>
> Key: HADOOP-15885
> URL: https://issues.apache.org/jira/browse/HADOOP-15885
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, 
> HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch, 
> HADOOP-15885.005.patch
>
>
> HADOOP-12563 added a utility to manage Delegation Token files. Currently, it 
> supports the Java and Protobuf formats. However, when interacting with 
> WebHDFS, we use base64. In addition, when printing a token, we also print the 
> base64 value. We should be able to import base64 tokens into the utility.
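A rough sketch of the import step, under the assumption that it only needs to decode WebHDFS's base64 form back into a Token and store it in a credentials file; the helper name and alias are illustrative, not the actual dtutil changes:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class Base64TokenImportSketch {

  /** Decode a URL-safe base64 token string and save it into a token file. */
  static void importBase64(String base64, String alias, Path tokenFile,
                           Configuration conf) throws Exception {
    Token<?> token = new Token<>();
    token.decodeFromUrlString(base64);      // inverse of Token#encodeToUrlString()

    Credentials creds = new Credentials();
    creds.addToken(new Text(alias), token);
    // Written in the Java serialization format the existing utility already reads.
    creds.writeTokenStorageFile(tokenFile, conf);
  }
}
{code}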






[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15885:
-
Attachment: HADOOP-15885.005.patch

> Add base64 (urlString) support to DTUtil
> 
>
> Key: HADOOP-15885
> URL: https://issues.apache.org/jira/browse/HADOOP-15885
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, 
> HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch, 
> HADOOP-15885.005.patch
>
>
> HADOOP-12563 added a utility to manage Delegation Token files. Currently, it 
> supports the Java and Protobuf formats. However, when interacting with 
> WebHDFS, we use base64. In addition, when printing a token, we also print the 
> base64 value. We should be able to import base64 tokens into the utility.






[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15885:
-
Attachment: (was: HADOOP-15885.005.patch)

> Add base64 (urlString) support to DTUtil
> 
>
> Key: HADOOP-15885
> URL: https://issues.apache.org/jira/browse/HADOOP-15885
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, 
> HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch
>
>
> HADOOP-12563 added a utility to manage Delegation Token files. Currently, it 
> supports the Java and Protobuf formats. However, when interacting with 
> WebHDFS, we use base64. In addition, when printing a token, we also print the 
> base64 value. We should be able to import base64 tokens into the utility.






[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-15885:
-
Attachment: HADOOP-15885.005.patch

> Add base64 (urlString) support to DTUtil
> 
>
> Key: HADOOP-15885
> URL: https://issues.apache.org/jira/browse/HADOOP-15885
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, 
> HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch, 
> HADOOP-15885.005.patch
>
>
> HADOOP-12563 added a utility to manage Delegation Token files. Currently, it 
> supports the Java and Protobuf formats. However, when interacting with 
> WebHDFS, we use base64. In addition, when printing a token, we also print the 
> base64 value. We should be able to import base64 tokens into the utility.






[jira] [Commented] (HADOOP-15846) ABFS: fix mask related bugs in setAcl, modifyAclEntries and removeAclEntries.

2018-10-31 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670985#comment-16670985
 ] 

Da Zhou commented on HADOOP-15846:
--

+1.
I tested it locally; here are the full test results:

namespace enabled account: OAuth
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 315, Failures: 0, Errors: 0, Skipped: 20
Tests run: 165, Failures: 0, Errors: 0, Skipped: 21

namespace enabled account: SharedKey
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0
Tests run: 315, Failures: 0, Errors: 0, Skipped: 19
Tests run: 165, Failures: 0, Errors: 0, Skipped: 15

> ABFS: fix mask related bugs in setAcl, modifyAclEntries and removeAclEntries.
> -
>
> Key: HADOOP-15846
> URL: https://issues.apache.org/jira/browse/HADOOP-15846
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15763-HADOOP-15846-001.patch
>
>
> # setAcl, modifyAclEntries and removeAclEntries should not re-calculate the 
> default mask if it was not touched.
> # Duplicate ACL entries are not allowed.
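For context, a sketch of the client-side call whose behaviour is being fixed; the entries are illustrative, and the mask handling itself lives inside the ABFS driver rather than in this snippet:

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class AbfsAclSketch {
  static void addUserEntry(FileSystem fs, Path path) throws Exception {
    // Modify only an ACCESS entry: per this fix, the store's existing DEFAULT
    // mask must not be recomputed, and submitting the same entry twice should
    // be rejected rather than silently duplicated.
    List<AclEntry> entries = Arrays.asList(
        new AclEntry.Builder()
            .setScope(AclEntryScope.ACCESS)
            .setType(AclEntryType.USER)
            .setName("alice")                    // illustrative principal
            .setPermission(FsAction.READ_WRITE)
            .build());
    fs.modifyAclEntries(path, entries);
  }
}
{code}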






[jira] [Assigned] (HADOOP-15846) ABFS: fix mask related bugs in setAcl, modifyAclEntries and removeAclEntries.

2018-10-31 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou reassigned HADOOP-15846:


Assignee: junhua gu  (was: Da Zhou)

> ABFS: fix mask related bugs in setAcl, modifyAclEntries and removeAclEntries.
> -
>
> Key: HADOOP-15846
> URL: https://issues.apache.org/jira/browse/HADOOP-15846
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Thomas Marquardt
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15763-HADOOP-15846-001.patch
>
>
> # setAcl, modifyAclEntries and removeAclEntries should not re-calculate 
> default mask if not touched.
>  # Duplicate Acl Entries are not allowed.






[jira] [Commented] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions

2018-10-31 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670860#comment-16670860
 ] 

Mingliang Liu commented on HADOOP-15781:


+1

The checkstyle warning seems related and can be addressed at commit time.

> S3A assumed role tests failing due to changed error text in AWS exceptions
> --
>
> Key: HADOOP-15781
> URL: https://issues.apache.org/jira/browse/HADOOP-15781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.1.0, 3.2.0
> Environment: Some of the fault-catching tests in {{ITestAssumeRole}} 
> are failing because the SDK update of HADOOP-15642 changed the error 
> text. Fix the tests, perhaps by removing the text check entirely; it's 
> clearly too brittle.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15781-001.patch, HADOOP-15781-branch-3.1-002.patch
>
>
> This is caused by HADOOP-15642 but I'd missed it because I'd been playing 
> with assumed roles locally (restricting their rights) and mistook the 
> failures for "steve's misconfigured the test role", not "the SDK
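
Regarding the suggestion in the quoted summary to remove the text check entirely, a hedged sketch of a message-independent assertion could look like the following; the class and method names are illustrative, not the actual ITestAssumeRole code:

{code:java}
import java.nio.file.AccessDeniedException;
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

public class ResilientFaultTest {

  @Test
  public void testRestrictedRoleIsRejected() throws Exception {
    // Assert only the exception class; no message substring is checked,
    // so future SDK wording changes cannot break the test.
    LambdaTestUtils.intercept(AccessDeniedException.class,
        () -> readWithRestrictedRole());
  }

  private String readWithRestrictedRole() throws Exception {
    // placeholder for an operation that should fail under the restricted role
    throw new AccessDeniedException("simulated failure for this sketch");
  }
}
{code}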



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670850#comment-16670850
 ] 

Hadoop QA commented on HADOOP-15781:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
19s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-3.1 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
15s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
32s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:080e9d0 |
| JIRA Issue | HADOOP-15781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946209/HADOOP-15781-branch-3.1-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d1c76100691f 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / dd70b1f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15438/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15438/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15438/testReport/ |
| Max. process+thread count | 86 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Conso

[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670840#comment-16670840
 ] 

Hadoop QA commented on HADOOP-15885:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.token.TestDtUtilShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946233/HADOOP-15885.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7f05c7026588 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15436/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15436/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15436/testReport/ |
| Max. process+thread

[jira] [Commented] (HADOOP-15887) Add an option to avoid writing data locally in Distcp

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670833#comment-16670833
 ] 

Hadoop QA commented on HADOOP-15887:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 93 unchanged - 0 fixed = 94 total (was 93) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
51s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15887 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946171/HADOOP-15887.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e43e07f33477 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15437/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15437/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15437/console |
| Powered by | Apache Yetus 0.8.0   http://

[jira] [Commented] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670761#comment-16670761
 ] 

Hadoop QA commented on HADOOP-15893:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 19s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}223m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestHDFSTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15893 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946380/HADOOP-15893.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ee2cef765ec4 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 478b2cb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle

[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670783#comment-16670783
 ] 

Hadoop QA commented on HADOOP-15891:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 32s{color} | {color:orange} root: The patch generated 6 new + 188 unchanged 
- 4 fixed = 194 total (was 192) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}207m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15891 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946326/HDFS-13948.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9506314c7ddd 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revisi

[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2018-10-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670778#comment-16670778
 ] 

Ajay Kumar commented on HADOOP-15889:
-

[~elgoiri] thanks for working on this. Looks good to me. A few minor comments:

UGI L80:1 NIT: Shall we log the total number of tokens added?

TestUGI
 * Shall we assert that the token created from the base64 string is the same 
as the original one?
 * Shall we also test a redundant token being added via file, base64 string, 
and environment variable?
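
A hedged sketch of the kind of round-trip assertion suggested above (illustrative test class, not the actual TestUGI code):

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.junit.Test;

public class TokenRoundTripTest {

  @Test
  public void testBase64RoundTrip() throws Exception {
    // Build a token with made-up identifier/password bytes.
    Token<?> original = new Token<>("id".getBytes(), "pass".getBytes(),
        new Text("TEST_KIND"), new Text("test-service"));

    // Round-trip through the base64 (URL string) representation.
    String base64 = original.encodeToUrlString();
    Token<?> decoded = new Token<>();
    decoded.decodeFromUrlString(base64);

    // Token.equals compares identifier, password, kind and service.
    assertEquals(original, decoded);
  }
}
{code}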

> Add hadoop.token configuration parameter to load tokens
> ---
>
> Key: HADOOP-15889
> URL: https://issues.apache.org/jira/browse/HADOOP-15889
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15889.000.patch
>
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.
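
As a purely illustrative sketch (the value format of hadoop.token, e.g. a comma-separated list, is an assumption here and not stated in the issue), loading base64 tokens from configuration instead of a token file could look roughly like:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class TokenFromConf {

  // Decode base64 tokens found under the hadoop.token property and attach
  // them to the current user. The comma-separated format is an assumption
  // made for this sketch.
  public static void loadTokens(Configuration conf) throws Exception {
    String value = conf.get("hadoop.token");
    if (value == null || value.isEmpty()) {
      return;
    }
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    for (String encoded : value.split(",")) {
      Token<?> token = new Token<>();
      token.decodeFromUrlString(encoded.trim());
      ugi.addToken(token);
    }
  }
}
{code}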



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670745#comment-16670745
 ] 

Hudson commented on HADOOP-15886:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HADOOP-15886. Fix findbugs warnings in RegistryDNS.java. (aajisaka: rev 
f747f5b06cb0da59c7c20b9f0e46d3eec9622eed)
* (add) hadoop-common-project/hadoop-registry/dev-support/findbugs-exclude.xml
* (edit) hadoop-common-project/hadoop-registry/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml


> Fix findbugs warnings in RegistryDNS.java
> -
>
> Key: HADOOP-15886
> URL: https://issues.apache.org/jira/browse/HADOOP-15886
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8956.01.patch
>
>
> {noformat}
>   FindBugs :
>module:hadoop-common-project/hadoop-registry
>Exceptional return value of 
> java.util.concurrent.ExecutorService.submit(Callable) ignored in 
> org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) 
> At RegistryDNS.java:ignored in 
> org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) 
> At RegistryDNS.java:[line 900]
>Exceptional return value of 
> java.util.concurrent.ExecutorService.submit(Callable) ignored in 
> org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) 
> At RegistryDNS.java:ignored in 
> org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) 
> At RegistryDNS.java:[line 926]
>Exceptional return value of 
> java.util.concurrent.ExecutorService.submit(Callable) ignored in 
> org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel,
>  InetAddress, int) At RegistryDNS.java:ignored in 
> org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel,
>  InetAddress, int) At RegistryDNS.java:[line 850]
> {noformat}
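
The committed fix suppresses these warnings via a findbugs-exclude entry (see the changed files listed above). As a hedged illustration of the code-level alternative the warning points at, the returned Future can be retained and checked instead of being dropped; the names below are illustrative, not the RegistryDNS code:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitAndCheck {
  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Callable<Void> serverLoop = () -> {
      // placeholder for a listener loop
      return null;
    };

    // Keep the Future instead of discarding it, so task failures surface.
    Future<Void> result = executor.submit(serverLoop);
    try {
      result.get();   // rethrows (wrapped) any exception from the task
    } finally {
      executor.shutdown();
    }
  }
}
{code}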



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details

2018-10-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670742#comment-16670742
 ] 

Hudson commented on HADOOP-15855:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15340/])
HADOOP-15855. Review hadoop credential doc, including object store (stevel: rev 
62d98ca92aee15d1790d169bfdf0043b05b748ce)
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProvider.java


> Review hadoop credential doc, including object store details
> 
>
> Key: HADOOP-15855
> URL: https://issues.apache.org/jira/browse/HADOOP-15855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.2, 3.2.1
>
> Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch
>
>
> I've got some changes to make to the Hadoop credentials API doc: some minor 
> editing and examples of credential paths in object stores, with some extra 
> details (i.e. how you can't refer to a store from the same store URI). 
> These examples need to come with unit tests to verify that they are 
> correct, obviously.
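
A hedged example of the pattern the doc change describes, with invented paths and aliases: the credentials for an object store are kept in a provider on a different filesystem, since a store cannot resolve its own secrets from a provider stored inside itself:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CredentialLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Provider stored in HDFS, holding the secret for s3a:// access;
    // the store's own secrets are not kept inside the store itself.
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@namenode:8020/user/alice/s3.jceks");

    char[] secret = conf.getPassword("fs.s3a.secret.key");
    if (secret != null) {
      System.out.println("resolved a secret of length " + secret.length);
    }
  }
}
{code}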



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14128) ChecksumFs should override rename with overwrite flag

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670733#comment-16670733
 ] 

Hadoop QA commented on HADOOP-14128:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 102 unchanged - 0 fixed = 104 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-14128 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896927/HADOOP-14128.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b752ee4a6ead 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6668c19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15435/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15435/testReport/ |
| Max. process+thread count | 1579 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15435/console |
| 

[jira] [Commented] (HADOOP-11391) Enabling HVE/node awareness does not rebalance replicas on data that existed prior to topology changes.

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670691#comment-16670691
 ] 

Hadoop QA commented on HADOOP-11391:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 12 new + 256 unchanged - 0 fixed = 268 total (was 256) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-11391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946289/HADOOP-11391-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d755bc75102a 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 478b2cb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15433/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15433/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Bu

[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670679#comment-16670679
 ] 

Hadoop QA commented on HADOOP-15889:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
11s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 10m 
53s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 42s{color} 
| {color:red} root generated 198 new + 1251 unchanged - 1 fixed = 1449 total 
(was 1252) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 240 unchanged - 2 fixed = 242 total (was 242) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
7s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15889 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946279/HADOOP-15889.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b28080a24e92 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 08bb036 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15434/artifact/out/branch-compile-root.txt
 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15434/artifact/out/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15434/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/Pre

[jira] [Commented] (HADOOP-15687) Credentials class should allow access to aliases

2018-10-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670650#comment-16670650
 ] 

Hadoop QA commented on HADOOP-15687:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 8 new + 17 unchanged - 4 fixed = 25 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15687 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12946311/HADOOP-15687.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 643cc366980a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 478b2cb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15431/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15431/testReport/ |
| Max. process+thread count | 1324 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15431/console |
| Powere

[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670625#comment-16670625
 ] 

ASF GitHub Bot commented on HADOOP-15891:
-

Github user JohnZZGithub commented on the issue:

https://github.com/apache/hadoop/pull/424
  
Closed by mistake. Reopening it now.


> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948_ Regex Link Type In Mont Table-V0.pdf, 
> HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This JIRA is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to fields from the source, e.g. 
> we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> referring to the `cluster` and `user` fields in the source to construct the 
> target. It's impossible to achieve this with the current link type. Though we 
> could set up one-to-one mappings, the mount table would become bloated if we 
> have thousands of users. Besides, a regex mapping gives us more flexibility. So 
> we are going to build a regex-based mount point whose target can refer to 
> groups from the source regex mapping. 
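
A hedged illustration of the mapping idea with plain java.util.regex (not the proposed mount-table link type): capture the cluster and user fields from the source path and substitute them into the target pattern.

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMountSketch {
  public static void main(String[] args) {
    // Source pattern with named groups for the fields we want to reuse.
    Pattern src = Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");
    String target = "/${cluster}-dc1/user-nn-${user}";

    Matcher m = src.matcher("/cluster1/user1");
    if (m.matches()) {
      String resolved = target
          .replace("${cluster}", m.group("cluster"))
          .replace("${user}", m.group("user"));
      System.out.println(resolved);   // prints /cluster1-dc1/user-nn-user1
    }
  }
}
{code}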



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670626#comment-16670626
 ] 

ASF GitHub Bot commented on HADOOP-15891:
-

GitHub user JohnZZGithub reopened a pull request:

https://github.com/apache/hadoop/pull/424

HDFS-13948: provide Regex Based Mount Point In Inode Tree



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JohnZZGithub/hadoop HDFS-13948

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/424.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #424


commit 0f35b915e2b2d3462538cd6635323dc07bd41af5
Author: JohnZZGithub 
Date:   2018-10-31T19:31:07Z

HDFS-13948: provide Regex Based Mount Point In Inode Tree




> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948_ Regex Link Type In Mont Table-V0.pdf, 
> HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This JIRA is created to support regex-based mount points in the Inode Tree. We 
> noticed that mount points only support fixed target paths. However, we might 
> have use cases where the target needs to refer to fields from the source, e.g. 
> we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
> referring to the `cluster` and `user` fields in the source to construct the 
> target. It's impossible to achieve this with the current link type. Though we 
> could set up one-to-one mappings, the mount table would become bloated if we 
> have thousands of users. Besides, a regex mapping gives us more flexibility. So 
> we are going to build a regex-based mount point whose target can refer to 
> groups from the source regex mapping. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop issue #424: HDFS-13948: provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread JohnZZGithub
Github user JohnZZGithub commented on the issue:

https://github.com/apache/hadoop/pull/424
  
Closed by mistake. Reopening it now.


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #424: HDFS-13948: provide Regex Based Mount Point In Ino...

2018-10-31 Thread JohnZZGithub
GitHub user JohnZZGithub reopened a pull request:

https://github.com/apache/hadoop/pull/424

HDFS-13948: provide Regex Based Mount Point In Inode Tree



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/JohnZZGithub/hadoop HDFS-13948

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/424.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #424


commit 0f35b915e2b2d3462538cd6635323dc07bd41af5
Author: JohnZZGithub 
Date:   2018-10-31T19:31:07Z

HDFS-13948: provide Regex Based Mount Point In Inode Tree




---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670572#comment-16670572
 ] 

ASF GitHub Bot commented on HADOOP-15891:
-

Github user JohnZZGithub closed the pull request at:

https://github.com/apache/hadoop/pull/424


> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948_ Regex Link Type In Mont Table-V0.pdf, 
> HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This jira is created to support regex based mount points in the Inode Tree. We 
> noticed that mount points only support a fixed target path. However, we might 
> have use cases where the target needs to refer to fields from the source. For 
> example, we might want a mapping of /cluster1/user1 => 
> /cluster1-dc1/user-nn-user1, where we refer to the `cluster` and `user` fields 
> in the source to construct the target. It is impossible to achieve this with 
> the current link type. Though we could set up a one-to-one mapping, the mount 
> table would become bloated if we have thousands of users. Besides, a regex 
> mapping gives us more flexibility. So we are going to build a regex based 
> mount point whose target can refer to groups from the source regex. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #424: HDFS-13948: provide Regex Based Mount Point In Ino...

2018-10-31 Thread JohnZZGithub
Github user JohnZZGithub closed the pull request at:

https://github.com/apache/hadoop/pull/424


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree

2018-10-31 Thread zhenzhao wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhenzhao wang updated HADOOP-15891:
---
Attachment: HDFS-13948.006.patch

> Provide Regex Based Mount Point In Inode Tree
> -
>
> Key: HADOOP-15891
> URL: https://issues.apache.org/jira/browse/HADOOP-15891
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: zhenzhao wang
>Assignee: zhenzhao wang
>Priority: Major
> Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, 
> HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, 
> HDFS-13948.006.patch, HDFS-13948_ Regex Link Type In Mont Table-V0.pdf, 
> HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
>
> This jira is created to support regex based mount points in the Inode Tree. We 
> noticed that mount points only support a fixed target path. However, we might 
> have use cases where the target needs to refer to fields from the source. For 
> example, we might want a mapping of /cluster1/user1 => 
> /cluster1-dc1/user-nn-user1, where we refer to the `cluster` and `user` fields 
> in the source to construct the target. It is impossible to achieve this with 
> the current link type. Though we could set up a one-to-one mapping, the mount 
> table would become bloated if we have thousands of users. Besides, a regex 
> mapping gives us more flexibility. So we are going to build a regex based 
> mount point whose target can refer to groups from the source regex. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop issue #436: Use print() function in both Python 2 and Python 3

2018-10-31 Thread cclauss
Github user cclauss commented on the issue:

https://github.com/apache/hadoop/pull/436
  
[flake8](http://flake8.pycqa.org) testing of 
https://github.com/apache/hadoop on Python 3.7.1

$ __flake8 . --count --select=E901,E999,F821,F822,F823 --show-source 
--statistics__
```

./hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/job_history_summary.py:73:61:
 E999 SyntaxError: invalid syntax
print "Name reduce-output-bytes shuffle-finish reduce-finish"
^
./dev-support/determine-flaky-tests-hadoop.py:125:5: F823 local variable 
'error_count' (defined in enclosing scope on line 76) referenced before 
assignment
error_count += 1
^
./dev-support/bin/checkcompatibility.py:195:10: F821 undefined name 'file'
with file(annotations_path, "w") as f:
 ^
1 E999 SyntaxError: invalid syntax
1 F821 undefined name 'file'
1 F823 local variable 'error_count' (defined in enclosing scope on line 
76) referenced before assignment
3
```


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #436: Use print() function in both Python 2 and Python 3

2018-10-31 Thread cclauss
GitHub user cclauss opened a pull request:

https://github.com/apache/hadoop/pull/436

Use print() function in both Python 2 and Python 3

__print()__ is a function in Python 3.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cclauss/hadoop modernize-Python-2-codes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/436.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #436


commit d94224777a05c3373f92349d98e566507df4981a
Author: cclauss 
Date:   2018-10-31T17:55:06Z

Use print() function in both Python 2 and Python 3

__print()__ is a function in Python 3.




---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2018-10-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670416#comment-16670416
 ] 

Steve Loughran commented on HADOOP-15894:
-

+ update S3Guard if a file is found

> getFileChecksum() needs to adopt S3Guard
> 
>
> Key: HADOOP-15894
> URL: https://issues.apache.org/jira/browse/HADOOP-15894
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Encountered a 404 failure in 
> {{ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted}}; newly 
> created file wasn't seen. Even with S3Guard enabled, that method isn't doing 
> anything to query the store for its existence.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2018-10-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670413#comment-16670413
 ] 

Steve Loughran commented on HADOOP-15894:
-

Getting a 404 just after a PUT means that the negative 404 response from the 
HEAD call made before creation is still in the AWS load-balancer cache. That 
is a fairly small window, and the longer the gap between open() and the 
getFileChecksum() call, the less likely this is to happen.

What could be done here?

Check the S3Guard state:
* file deleted -> FNFE
* file exists -> remember this
* issue a HEAD request
* if the HEAD returns 404, conclude a brief inconsistency, then log and retry 
both the S3Guard check and the HEAD (policy: number of attempts?). A sketch 
follows below.
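
For illustration, a minimal, self-contained sketch of that retry loop, with the 
S3Guard lookup and the HEAD call abstracted behind placeholders; none of the 
names below are real S3AFileSystem or S3Guard APIs.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Optional;
import java.util.function.Supplier;

/** Hedged sketch of the outline above; not the real S3AFileSystem code. */
public class ChecksumRetrySketch {

  /** What the (hypothetical) metadata-store lookup says about the path. */
  enum GuardState { DELETED, EXISTS, UNKNOWN }

  static <T> T headWithGuard(GuardState guard, Supplier<Optional<T>> head, int maxAttempts)
      throws IOException {
    if (guard == GuardState.DELETED) {
      throw new FileNotFoundException("tombstoned in the metadata store"); // deleted -> FNFE
    }
    boolean knownToExist = guard == GuardState.EXISTS;                     // remember this
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      Optional<T> result = head.get();                                     // issue HEAD request
      if (result.isPresent()) {
        return result.get();
      }
      if (!knownToExist) {
        throw new FileNotFoundException("not found, and not in the metadata store");
      }
      // HEAD said 404 but S3Guard says the file exists: brief inconsistency, retry.
      System.out.println("inconsistency on attempt " + attempt + ", retrying");
    }
    throw new FileNotFoundException("still inconsistent after " + maxAttempts + " attempts");
  }
}
{code}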


> getFileChecksum() needs to adopt S3Guard
> 
>
> Key: HADOOP-15894
> URL: https://issues.apache.org/jira/browse/HADOOP-15894
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Encountered a 404 failure in 
> {{ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted}}; newly 
> created file wasn't seen. Even with S3Guard enabled, that method isn't doing 
> anything to query the store for its existence.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2018-10-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670402#comment-16670402
 ] 

Steve Loughran commented on HADOOP-15894:
-

Stack
{code}
ERROR] 
testNonEmptyFileChecksumsUnencrypted(org.apache.hadoop.fs.s3a.ITestS3AMiscOperations)
  Time elapsed: 1.404 s  <<< ERROR!
java.io.FileNotFoundException: getFileChecksum on 
s3a://hwdev-steve-ireland-new/fork-0003/test/file6: 
com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon 
S3; Status Code: 404; Error Code: 404 Not Found; Request ID: 97936E9B2C01578F; 
S3 Extended Request ID: 
cZkFdLvI88LbW+MOCvUFIN0lwXQBvk2cfv78G50MoItFXb3b4LBNL5MPpPJK2VmDGkYf2EnwSF8=), 
S3 Extended Request ID: 
cZkFdLvI88LbW+MOCvUFIN0lwXQBvk2cfv78G50MoItFXb3b4LBNL5MPpPJK2VmDGkYf2EnwSF8=:404
 Not Found
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:225)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileChecksum(S3AFileSystem.java:3025)
at 
org.apache.hadoop.fs.s3a.ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted(ITestS3AMiscOperations.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Not Found 
(Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: 
97936E9B2C01578F; S3 Extended Request ID: 
cZkFdLvI88LbW+MOCvUFIN0lwXQBvk2cfv78G50MoItFXb3b4LBNL5MPpPJK2VmDGkYf2EnwSF8=), 
S3 Extended Request ID: 
cZkFdLvI88LbW+MOCvUFIN0lwXQBvk2cfv78G50MoItFXb3b4LBNL5MPpPJK2VmDGkYf2EnwSF8=
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
at 
com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1264)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$4(S3AFileSystem.java:1235)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:280)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1232)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1089)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileChecksum$14(S3AFileSystem.java:3028)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 14 more
{code}

Went away on replay.


> getFileChecksum() needs to adopt S3Guard
> 
>
> Key: HADOOP-15894
> URL: https://issues.apache.org/jira/browse/HADOOP-15894
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Encountered a 404 failure in 
> {

[jira] [Created] (HADOOP-15894) getFileChecksum() needs to adopt S3Guard

2018-10-31 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15894:
---

 Summary: getFileChecksum() needs to adopt S3Guard
 Key: HADOOP-15894
 URL: https://issues.apache.org/jira/browse/HADOOP-15894
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


Encountered a 404 failure in 
{{ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted}}; newly created 
file wasn't seen. Even with S3Guard enabled, that method isn't doing anything 
to query the store for its existence.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-10-31 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15229:

Summary: Add FileSystem builder-based openFile() API to match createFile()  
(was: Add FileSystem builder-based open API to match create())

> Add FileSystem builder-based openFile() API to match createFile()
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> The key requirement here is not HDFS; it is to pass in the fadvise policy for 
> working with object stores, where the decision between a full GET with a TCP 
> abort on seek vs. smaller GETs is fundamentally different: the wrong option 
> can cost you minutes. S3A and Azure both have adaptive policies now (adapting 
> on the first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to pass "fs.input.fadvise" = 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method, 
> ideally with as much code reuse as possible.
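
As a rough illustration of the builder shape being asked for (mirroring 
createFile() method for method), here is a hypothetical interface; the method 
names and the fadvise option key are assumptions, not a committed Hadoop API.

{code:java}
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical shape of a builder-based openFile(); not Hadoop's actual API. */
interface OpenFileBuilder {
  OpenFileBuilder opt(String key, String value);   // optional hint, e.g. an fadvise policy
  OpenFileBuilder must(String key, String value);  // mandatory option: fail if unsupported
  InputStream build() throws IOException;          // open the stream with the collected options
}

// Intended usage (also hypothetical):
//   InputStream in = fs.openFile(path).opt("fs.input.fadvise", "random").build();
{code}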



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15687) Credentials class should allow access to aliases

2018-10-31 Thread Lars Francke (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670082#comment-16670082
 ] 

Lars Francke commented on HADOOP-15687:
---

And thanks for taking a look [~ste...@apache.org]

Am I doing something wrong that Jenkins does not get triggered?

> Credentials class should allow access to aliases
> 
>
> Key: HADOOP-15687
> URL: https://issues.apache.org/jira/browse/HADOOP-15687
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HADOOP-15687.2.patch, HADOOP-15687.patch, 
> HADOOP-15687.patch
>
>
> The Credentials class can read token files from disk, which are keyed by an 
> alias. It also allows retrieving tokens by alias and listing all tokens.
> It does not, however, allow getting the full map of all tokens including 
> their aliases (or at least a list of all aliases).
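
A minimal sketch of the kind of accessor being asked for, written against a 
stand-in class rather than org.apache.hadoop.security.Credentials itself; 
getTokenMap() is the hypothetical addition.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

/** Stand-in for the Credentials class, showing an alias-preserving accessor. */
public class CredentialsSketch<T> {
  private final Map<String, T> tokenMap = new HashMap<>();  // alias -> token

  public void addToken(String alias, T token) {
    tokenMap.put(alias, token);
  }

  public T getToken(String alias) {            // existing style: look up a token by alias
    return tokenMap.get(alias);
  }

  /** Hypothetical addition: expose the full alias -> token map, read-only. */
  public Map<String, T> getTokenMap() {
    return Collections.unmodifiableMap(tokenMap);
  }
}
{code}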



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: (was: HADOOP-15893.001.patch)

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: HADOOP-15893.001.patch
Status: Patch Available  (was: Open)

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Status: Open  (was: Patch Available)

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: (was: HADOOP-15893.001.patch)

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: HADOOP-15893.001.patch

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670050#comment-16670050
 ] 

Steve Loughran commented on HADOOP-14556:
-

I'm thinking: if we do want a more independent upload of fs data in job 
submission, should we include all bucket-specific options in the store?
 * signing type
 * endpoint
 * etc

Pro: lets me submit work to a cluster which can include a whole new endpoint, 
auth mechanism, etc.

Con: it gets complicated fast.

What I might do is add to the s3a token identifier the map of k->v options for 
this, but not collect or use them yet, just read and write them; a sketch of 
that read/write side follows below.

I know, I could just give up and embrace protobuf rather than try to do 
versioning in my own code, but, well, it's not like protoc likes maps either.
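
A sketch of the "just read and write the k->v map" idea, using a count prefix 
over plain DataOutput/DataInput so the format stays easy to version by hand; 
this is an illustration, not the actual S3A token identifier code.

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative only: marshal a map of bucket-specific options in a token identifier. */
final class OptionMapIO {
  static void write(DataOutput out, Map<String, String> opts) throws IOException {
    out.writeInt(opts.size());                // count prefix keeps the format self-describing
    for (Map.Entry<String, String> e : opts.entrySet()) {
      out.writeUTF(e.getKey());               // e.g. endpoint, signing type, ...
      out.writeUTF(e.getValue());
    }
  }

  static Map<String, String> read(DataInput in) throws IOException {
    int n = in.readInt();
    Map<String, String> opts = new LinkedHashMap<>();
    for (int i = 0; i < n; i++) {
      String key = in.readUTF();
      opts.put(key, in.readUTF());
    }
    return opts;                              // read and written, but not yet acted on
  }
}
{code}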

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556.oath-002.patch, 
> HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15890) Some S3A committer tests don't match ITest* pattern; don't run in maven

2018-10-31 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16670039#comment-16670039
 ] 

Steve Loughran commented on HADOOP-15890:
-

Exploring a bit further: if you run from maven with S3Guard off, some of the 
tests fail, as they always use the inconsistent S3 client.

Before they can be set to run every time, the inconsistent client should only 
be used if S3Guard is enabled for the destination FS.

> Some S3A committer tests don't match ITest* pattern; don't run in maven
> ---
>
> Key: HADOOP-15890
> URL: https://issues.apache.org/jira/browse/HADOOP-15890
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Some of the S3A committer tests don't have the right prefix for the maven IT 
> test runs to pick them up:
> {code}
> ITMagicCommitMRJob.java
> ITStagingCommitMRJobBad
> ITDirectoryCommitMRJob
> ITStagingCommitMRJob
> {code}
> They all work when run by name or in the IDE (which is where I developed 
> them), but they don't run in maven builds.
> Fix: rename. There are some new tests in branch-3.2 from HADOOP-15107 which 
> aren't in 3.1; need patches for both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Summary: fs.TrashPolicyDefault: can't create trash directory and race 
condition  (was: fs.TrashPolicyDefault: Can't create trash directory and race 
condition)

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15892) hadoop distcp command fail without reminder

2018-10-31 Thread Chang Zhichao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Zhichao updated HADOOP-15892:
---
Description: 
I am trying to use the hadoop distcp command to copy a file (79_0) to a 
directory (target_directory/part_date=2018-10-28/) that does not exist, like 
this:
{code:java}
$ hadoop fs -ls /user/hive/warehouse/migration_chang.db/target_directory/
$
$ hadoop distcp 
hdfs://sdg/user/hive/warehouse/migration_chang.db/source_directory/part_date=2018-10-28/79_0
   
hdfs://sdg/user/hive/warehouse/migration_chang.db/target_directory/part_date=2018-10-28/
{code}
It will copy the source file '79_0' to a file called "part_date=2018-10-28".
{code:java}
$ hadoop fs -ls /user/hive/warehouse/migration_chang.db/target_directory/
Found 1 items
-rw-r--r-- 3 hadoop supergroup 353024605 2018-10-31 19:51 
/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28{code}
I think this is confusing; a better way would be to report an error such as 
"No such directory" (the 'hadoop fs -cp' command and the Linux 'cp' command 
behave this way).

 

 

  was:
I am trying to use the hadoop distcp command to copy a file to a directory, 
like this:
{code:java}
hadoop distcp 
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_3/part_date=2018-10-28/79_0
   
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28/
{code}
It will copy the source file '79_0' to a file called "part_date=2018-10-28".

I think this is confusing; a better way would be to report an error such as 
"No such file or directory" (the 'hadoop fs -cp' command and the Linux 'cp' 
command behave this way).

 

 


>  hadoop distcp command fail without reminder
> 
>
> Key: HADOOP-15892
> URL: https://issues.apache.org/jira/browse/HADOOP-15892
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Chang Zhichao
>Priority: Minor
>
> I am trying to use the hadoop distcp command to copy a file (79_0) to a 
> directory (target_directory/part_date=2018-10-28/) that does not exist, like 
> this:
> {code:java}
> $ hadoop fs -ls /user/hive/warehouse/migration_chang.db/target_directory/
> $
> $ hadoop distcp 
> hdfs://sdg/user/hive/warehouse/migration_chang.db/source_directory/part_date=2018-10-28/79_0
>    
> hdfs://sdg/user/hive/warehouse/migration_chang.db/target_directory/part_date=2018-10-28/
> {code}
> It will copy the source file '79_0' to a file called 
> "part_date=2018-10-28".
> {code:java}
> $ hadoop fs -ls /user/hive/warehouse/migration_chang.db/target_directory/
> Found 1 items
> -rw-r--r-- 3 hadoop supergroup 353024605 2018-10-31 19:51 
> /user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28{code}
> I think this is confusing; a better way would be to report an error such as 
> "No such directory" (the 'hadoop fs -cp' command and the Linux 'cp' command 
> behave this way).
>  
>  
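
As an illustration of the requested behaviour, a hedged sketch of a pre-flight 
check using the public FileSystem API; the helper and its message are made up, 
and this is not distcp's actual code.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative pre-flight check: fail fast when the target directory is missing. */
final class TargetDirCheck {
  static void requireTargetDir(String uri) throws IOException {
    Path target = new Path(uri);
    FileSystem fs = target.getFileSystem(new Configuration());
    if (!fs.exists(target) || !fs.getFileStatus(target).isDirectory()) {
      // Report a 'cp'-style error instead of silently copying to a file of that name.
      throw new FileNotFoundException(uri + ": No such directory");
    }
  }
}
{code}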



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15892) hadoop distcp command fail without reminder

2018-10-31 Thread Chang Zhichao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Zhichao updated HADOOP-15892:
---
Description: 
I am trying to use the hadoop distcp command to copy a file to a directory, 
like this:
{code:java}
hadoop distcp 
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_3/part_date=2018-10-28/79_0
   
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28/
{code}
It will copy the source file '79_0' to a file called "part_date=2018-10-28".

I think this is confusing; a better way would be to report an error such as 
"No such file or directory" (the 'hadoop fs -cp' command and the Linux 'cp' 
command behave this way).

 

 

  was:
Using the hadoop distcp command to copy a file to a directory, like this:
{code:java}
hadoop distcp  
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_3/part_date=2018-10-28/79_0
   
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28/
{code}
It will copy the source file '79_0' to a file called "part_date=2018-10-28".

I think this is confusing; a better way would be to report an error like the 
'hadoop fs -cp' command does: "No such file or directory".

 

 


>  hadoop distcp command fail without reminder
> 
>
> Key: HADOOP-15892
> URL: https://issues.apache.org/jira/browse/HADOOP-15892
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Chang Zhichao
>Priority: Minor
>
> I am trying to use the hadoop distcp command to copy a file to a directory, 
> like this:
> {code:java}
> hadoop distcp 
> hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_3/part_date=2018-10-28/79_0
>    
> hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28/
> {code}
> It will copy the source file '79_0' to a file called 
> "part_date=2018-10-28".
> I think this is confusing; a better way would be to report an error such as 
> "No such file or directory" (the 'hadoop fs -cp' command and the Linux 'cp' 
> command behave this way).
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: HADOOP-15893.001.patch

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: (was: HADOOP-15893.001.patch)

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: HADOOP-15893.001.patch

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: (was: HADOOP-15893.001.patch)

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Description: 
After catching a FileAlreadyExistsException, the file with the same name is 
deleted.

So don't modify baseTrashPath when existsFilePath has been deleted.

But this case hardly ever shows up in a unit test.
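
One possible reading of the description above, as a hedged sketch; this is not 
the attached patch, and the method and retry policy here are placeholders.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hedged sketch of handling the race described above; not the attached patch. */
final class TrashMoveSketch {
  static Path moveIntoTrash(FileSystem fs, Path src, Path baseTrashPath) throws IOException {
    Path target = new Path(baseTrashPath, src.getName());
    for (int attempt = 0; attempt < 2; attempt++) {
      try {
        fs.mkdirs(baseTrashPath);             // may collide with a same-named plain file
        if (fs.rename(src, target)) {
          return target;
        }
      } catch (FileAlreadyExistsException e) {
        // A plain file sits where the trash directory should be: delete it and retry,
        // keeping baseTrashPath itself unchanged (the point made in the description).
        fs.delete(baseTrashPath, false);
      }
    }
    throw new IOException("Could not move " + src + " into trash under " + baseTrashPath);
  }
}
{code}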

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>
> After catching a FileAlreadyExistsException, the file with the same name is 
> deleted.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case hardly ever shows up in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: HADOOP-15893.001.patch
Status: Patch Available  (was: Open)

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: (was: HADOOP-15893.001.patch)

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Attachment: HADOOP-15893.001.patch

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory and race condition

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Summary: fs.TrashPolicyDefault: Can't create trash directory and race 
condition  (was: fs.TrashPolicyDefault: Can't create trash directory)

> fs.TrashPolicyDefault: Can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory

2018-10-31 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

External issue ID:   (was: HADOOP-15633)

> fs.TrashPolicyDefault: Can't create trash directory
> ---
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15893) fs.TrashPolicyDefault: Can't create trash directory

2018-10-31 Thread sunlisheng (JIRA)
sunlisheng created HADOOP-15893:
---

 Summary: fs.TrashPolicyDefault: Can't create trash directory
 Key: HADOOP-15893
 URL: https://issues.apache.org/jira/browse/HADOOP-15893
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: sunlisheng






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15892) hadoop distcp command fail without reminder

2018-10-31 Thread Chang Zhichao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Zhichao updated HADOOP-15892:
---
Issue Type: Wish  (was: Bug)

>  hadoop distcp command fail without reminder
> 
>
> Key: HADOOP-15892
> URL: https://issues.apache.org/jira/browse/HADOOP-15892
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Chang Zhichao
>Priority: Minor
>
> Using the hadoop distcp command to copy a file to a directory, like this:
> {code:java}
> hadoop distcp  
> hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_3/part_date=2018-10-28/79_0
>    
> hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28/
> {code}
> It will copy the source file '79_0' to a file called 
> "part_date=2018-10-28".
> I think this is confusing; a better way would be to report an error like the 
> 'hadoop fs -cp' command does: "No such file or directory".
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15892) hadoop distcp command fail without reminder

2018-10-31 Thread Chang Zhichao (JIRA)
Chang Zhichao created HADOOP-15892:
--

 Summary:  hadoop distcp command fail without reminder
 Key: HADOOP-15892
 URL: https://issues.apache.org/jira/browse/HADOOP-15892
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.0
Reporter: Chang Zhichao


Using the hadoop distcp command to copy a file to a directory, like this:
{code:java}
hadoop distcp  
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_3/part_date=2018-10-28/79_0
   
hdfs://sdg/user/hive/warehouse/migration_chang.db/snda_game_user_profile_mid_5/part_date=2018-10-28/
{code}
It will copy the source file '79_0' to a file called "part_date=2018-10-28".

I think this is confusing; a better way would be to report an error like the 
'hadoop fs -cp' command does: "No such file or directory".

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org