[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154782#comment-16154782
 ] 

Hadoop QA commented on HADOOP-14835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
14s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-project-dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 15s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 38s{color} 
| {color:red} hadoop-mapreduce-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.mapreduce.v2.hs.webapp.TestHSWebApp |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885492/HADOOP-14835.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux f1bdf7dbc7ab 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d4035d4 |
| Default Java | 1.8.0_144 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13173/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13173/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13173/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 

[jira] [Commented] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154760#comment-16154760
 ] 

Hadoop QA commented on HADOOP-14839:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
42s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-extras in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14839 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885504/HADOOP-14839.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b06f25456418 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d4035d4 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13175/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp hadoop-tools/hadoop-extras U: 
hadoop-tools |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13175/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839

[jira] [Commented] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154759#comment-16154759
 ] 

Hadoop QA commented on HADOOP-13421:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 8 new + 8 unchanged - 
0 fixed = 16 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-13421 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885499/HADOOP-13421.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux d20e05fdc706 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d4035d4 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13174/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13174/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (HADOOP-14804) correct wrong parameters format order in core-default.xml

2017-09-05 Thread Chen Hongfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154745#comment-16154745
 ] 

Chen Hongfei commented on HADOOP-14804:
---

Now this patch should correct all such places in core-default.xml.

> correct wrong parameters format order in core-default.xml
> -
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Chen Hongfei
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch, 
> HADOOP-14804.003.patch
>
>
> The descriptions of the "HTTP CORS" parameters appear before the names:
> 
> <property>
>   <description>Comma separated list of headers that are allowed for web
>     services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> 
> ...but the description should follow the value, as in the other properties.
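For reference, a sketch of the corrected layout: name first, then value, with the description last, matching the other properties in core-default.xml:

{code:xml}
<property>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
  <description>Comma separated list of headers that are allowed for web
    services needing cross-origin (CORS) support.</description>
</property>
{code}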






[jira] [Updated] (HADOOP-14804) correct wrong parameters format order in core-default.xml

2017-09-05 Thread Chen Hongfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Hongfei updated HADOOP-14804:
--
Attachment: HADOOP-14804.003.patch

> correct wrong parameters format order in core-default.xml
> -
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Chen Hongfei
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch, 
> HADOOP-14804.003.patch
>
>
> The descriptions of the "HTTP CORS" parameters appear before the names:
> 
> <property>
>   <description>Comma separated list of headers that are allowed for web
>     services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> 
> ...but the description should follow the value, as in the other properties.






[jira] [Commented] (HADOOP-14804) correct wrong parameters format order in core-default.xml

2017-09-05 Thread Chen Hongfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154730#comment-16154730
 ] 

Chen Hongfei commented on HADOOP-14804:
---

Oh my God, I found some more places again:

hadoop.registry.zk.session.timeout.ms
hadoop.registry.zk.connection.timeout.ms
hadoop.registry.zk.retry.times
hadoop.registry.zk.retry.interval.ms
hadoop.registry.zk.retry.ceiling.ms
hadoop.registry.zk.quorum
hadoop.registry.secure
hadoop.registry.system.acls
hadoop.registry.kerberos.realm
hadoop.registry.jaas.context
hadoop.shell.missing.defaultFs.warning

hadoop.http.logs.enabled

hadoop.zk.address
hadoop.zk.num-retries
hadoop.zk.retry-interval-ms
hadoop.zk.timeout-ms
hadoop.zk.acl
hadoop.zk.auth

> correct wrong parameters format order in core-default.xml
> -
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Chen Hongfei
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch
>
>
> The descriptions of the "HTTP CORS" parameters appear before the names:
> 
> <property>
>   <description>Comma separated list of headers that are allowed for web
>     services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> 
> ...but the description should follow the value, as in the other properties.






[jira] [Updated] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HADOOP-14839:
---
Attachment: HADOOP-14839.006.patch

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HADOOP-14839.006.patch, HDFS-10234.001.patch, 
> HDFS-10234.002.patch, HDFS-10234.003.patch, HDFS-10234.004.patch, 
> HDFS-10234.005.patch
>
>
> DistCp log output (specified via the {{-log}} command line option) currently
> contains only skipped files and, when failures are ignored via {{-i}}, failed
> files. It would be more useful if it also contained copied and deleted files
> and created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
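For context, an illustrative DistCp invocation with the copy log enabled (the paths here are hypothetical):

{noformat}
hadoop distcp -log hdfs:///tmp/distcp-logs -i hdfs://nn1/src hdfs://nn2/dst
{noformat}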






[jira] [Commented] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154722#comment-16154722
 ] 

Yiqun Lin commented on HADOOP-14839:


Thanks [~xyao] for catching this. Attached the updated patch.

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via the {{-log}} command line option) currently
> contains only skipped files and, when failures are ignored via {{-i}}, failed
> files. It would be more useful if it also contained copied and deleted files
> and created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java






[jira] [Commented] (HADOOP-14804) correct wrong parameters format order in core-default.xml

2017-09-05 Thread Chen Hongfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154718#comment-16154718
 ] 

Chen Hongfei commented on HADOOP-14804:
---

Thanks Chen Liang!
I have added the three params to this patch.

> correct wrong parameters format order in core-default.xml
> -
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Chen Hongfei
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch
>
>
> The descriptions of the "HTTP CORS" parameters appear before the names:
> 
> <property>
>   <description>Comma separated list of headers that are allowed for web
>     services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> 
> ...but the description should follow the value, as in the other properties.






[jira] [Updated] (HADOOP-14804) correct wrong parameters format order in core-default.xml

2017-09-05 Thread Chen Hongfei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Hongfei updated HADOOP-14804:
--
Attachment: HADOOP-14804.002.patch

> correct wrong parameters format order in core-default.xml
> -
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: Chen Hongfei
>Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch
>
>
> The descriptions of the "HTTP CORS" parameters appear before the names:
> 
> <property>
>   <description>Comma separated list of headers that are allowed for web
>     services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> 
> ...but the description should follow the value, as in the other properties.






[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154699#comment-16154699
 ] 

Allen Wittenauer commented on HADOOP-14835:
---

Oh, you're likely bombing out due to YARN-6877 being committed despite it 
clearly failing the build.  Revert 91cc070d67533ebb3325b982eba2135e0d175a82 and 
javadoc will work.
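That is, roughly:

{noformat}
git revert 91cc070d67533ebb3325b982eba2135e0d175a82
{noformat}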

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-14835.001.patch
>
>
> Running {{mvn install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs}}
> results in a stack trace when run on a fresh .m2 directory. It appears to be
> coming from the jdiff doclets in the annotations code.






[jira] [Comment Edited] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A

2017-09-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154691#comment-16154691
 ] 

Aaron Fabbri edited comment on HADOOP-13421 at 9/6/17 1:51 AM:
---

Attaching v2 patch.

I ended up using a separate class that can hold either version for the list 
objects request and response.  Always translating to the SDK's v2 objects saved 
a little garbage but ended up being error-prone.

All integration tests passed in us-west-2.  Rerunning right now with dynamodb.


was (Author: fabbri):
Attaching v2 patch.

I ended up using a separate class that for the list objects request and 
response.  Always translating to the SDK's v2 objects saved a little garbage 
but ended up being error-prone.

All integration tests passed in us-west-2.  Rerunning right now with dynamodb.

> Switch to v2 of the S3 List Objects API in S3A
> --
>
> Key: HADOOP-13421
> URL: https://issues.apache.org/jira/browse/HADOOP-13421
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13421.002.patch, 
> HADOOP-13421-HADOOP-13345.001.patch
>
>
> Unlike [version 
> 1|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html] of the 
> S3 List Objects API, [version 
> 2|http://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html] by 
> default does not fetch object owner information, which S3A doesn't need 
> anyway. By switching to v2, there will be less data to transfer/process. 
> Also, it should be more robust when listing a versioned bucket with "a large 
> number of delete markers" ([according to 
> AWS|https://aws.amazon.com/releasenotes/Java/0735652458007581]).
> Methods in S3AFileSystem that use this API include:
> * getFileStatus(Path)
> * innerDelete(Path, boolean)
> * innerListStatus(Path)
> * innerRename(Path, Path)
> Requires AWS SDK 1.10.75 or later.






[jira] [Updated] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A

2017-09-05 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13421:
--
Attachment: HADOOP-13421.002.patch

Attaching v2 patch.

I ended up using a separate class that can hold either version for the list 
objects request and response.  Always translating to the SDK's v2 objects saved 
a little garbage but ended up being error-prone.

All integration tests passed in us-west-2.  Rerunning right now with dynamodb.
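The "separate class" approach could look roughly like this: a minimal sketch against the AWS SDK for Java 1.x (the class and method names here are illustrative, not necessarily the actual patch):

{code:java}
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ListObjectsV2Request;

/** Holder for either version of the S3 list-objects request. */
final class S3ListRequest {
  private final ListObjectsRequest v1;
  private final ListObjectsV2Request v2;

  private S3ListRequest(ListObjectsRequest v1, ListObjectsV2Request v2) {
    this.v1 = v1;
    this.v2 = v2;
  }

  static S3ListRequest v1(ListObjectsRequest request) {
    return new S3ListRequest(request, null);
  }

  static S3ListRequest v2(ListObjectsV2Request request) {
    return new S3ListRequest(null, request);
  }

  /** True iff this wraps a v1 request. */
  boolean isV1() { return v1 != null; }
  ListObjectsRequest getV1() { return v1; }
  ListObjectsV2Request getV2() { return v2; }
}
{code}

Callers would then branch on {{isV1()}} only at the single point where the request is issued, keeping the rest of the listing code version-agnostic.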

> Switch to v2 of the S3 List Objects API in S3A
> --
>
> Key: HADOOP-13421
> URL: https://issues.apache.org/jira/browse/HADOOP-13421
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13421.002.patch, 
> HADOOP-13421-HADOOP-13345.001.patch
>
>
> Unlike [version 
> 1|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html] of the 
> S3 List Objects API, [version 
> 2|http://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html] by 
> default does not fetch object owner information, which S3A doesn't need 
> anyway. By switching to v2, there will be less data to transfer/process. 
> Also, it should be more robust when listing a versioned bucket with "a large 
> number of delete markers" ([according to 
> AWS|https://aws.amazon.com/releasenotes/Java/0735652458007581]).
> Methods in S3AFileSystem that use this API include:
> * getFileStatus(Path)
> * innerDelete(Path, boolean)
> * innerListStatus(Path)
> * innerRename(Path, Path)
> Requires AWS SDK 1.10.75 or later.






[jira] [Commented] (HADOOP-13421) Switch to v2 of the S3 List Objects API in S3A

2017-09-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154687#comment-16154687
 ] 

Aaron Fabbri commented on HADOOP-13421:
---

[~ste...@apache.org] thanks, I was thinking the same thing for regression 
testing.  I used ITestS3AContractGetFileStatus as it seems to exercise a lot of 
different cases for the list objects API.


> Switch to v2 of the S3 List Objects API in S3A
> --
>
> Key: HADOOP-13421
> URL: https://issues.apache.org/jira/browse/HADOOP-13421
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steven K. Wong
>Assignee: Aaron Fabbri
>Priority: Minor
> Attachments: HADOOP-13421-HADOOP-13345.001.patch
>
>
> Unlike [version 
> 1|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html] of the 
> S3 List Objects API, [version 
> 2|http://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html] by 
> default does not fetch object owner information, which S3A doesn't need 
> anyway. By switching to v2, there will be less data to transfer/process. 
> Also, it should be more robust when listing a versioned bucket with "a large 
> number of delete markers" ([according to 
> AWS|https://aws.amazon.com/releasenotes/Java/0735652458007581]).
> Methods in S3AFileSystem that use this API include:
> * getFileStatus(Path)
> * innerDelete(Path, boolean)
> * innerListStatus(Path)
> * innerRename(Path, Path)
> Requires AWS SDK 1.10.75 or later.






[jira] [Updated] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14835:
-
Status: Patch Available  (was: Open)

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-14835.001.patch
>
>
> Running {{mvn install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs}}
> results in a stack trace when run on a fresh .m2 directory. It appears to be
> coming from the jdiff doclets in the annotations code.






[jira] [Updated] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14835:
-
Attachment: HADOOP-14835.001.patch

Here's a patch which pulls in jdiff's xerces when the docs profile is activated.

My build gets farther but fails with MBs of javadoc error output; I didn't look 
deeply enough to know what's actually wrong. I'd appreciate anyone's help if 
they have any quick ideas.
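Not the actual patch, but the kind of change described might look like this in a {{docs}} profile (the xerces coordinates and version here are illustrative assumptions):

{code:xml}
<profile>
  <id>docs</id>
  <dependencies>
    <!-- Assumption: re-add xerces for the jdiff doclet (it was purged in HDFS-12221) -->
    <dependency>
      <groupId>xerces</groupId>
      <artifactId>xercesImpl</artifactId>
      <version>2.11.0</version>
    </dependency>
  </dependencies>
</profile>
{code}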

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-14835.001.patch
>
>
> Running {{mvn install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs}}
> results in a stack trace when run on a fresh .m2 directory. It appears to be
> coming from the jdiff doclets in the annotations code.






[jira] [Assigned] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HADOOP-14835:


Assignee: Andrew Wang

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Andrew Wang
>Priority: Critical
>
> Running {{mvn install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs}}
> results in a stack trace when run on a fresh .m2 directory. It appears to be
> coming from the jdiff doclets in the annotations code.






[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154660#comment-16154660
 ] 

Allen Wittenauer commented on HADOOP-14835:
---

That would make sense.  I think I'm more surprised that it's not a fatal error 
though.  I'm guessing that's a bug in the doclet?

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Priority: Critical
>
> Running {{mvn install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs}}
> results in a stack trace when run on a fresh .m2 directory. It appears to be
> coming from the jdiff doclets in the annotations code.






[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154642#comment-16154642
 ] 

Andrew Wang commented on HADOOP-14835:
--

This is possibly because we purged xerces in HDFS-12221.

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Priority: Critical
>
> Running {{mvn install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs}}
> results in a stack trace when run on a fresh .m2 directory. It appears to be
> coming from the jdiff doclets in the annotations code.






[jira] [Work started] (HADOOP-13948) Create automated scripts to update LICENSE/NOTICE

2017-09-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13948 started by Xiao Chen.
--
> Create automated scripts to update LICENSE/NOTICE
> -
>
> Key: HADOOP-13948
> URL: https://issues.apache.org/jira/browse/HADOOP-13948
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Critical
>







[jira] [Updated] (HADOOP-14841) Let KMS Client retry 'No content to map' EOFExceptions

2017-09-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14841:
---
Summary: Let KMS Client retry 'No content to map' EOFExceptions  (was: Let 
KMS Client to retry 'No content to map' EOFExceptions)

> Let KMS Client retry 'No content to map' EOFExceptions
> --
>
> Key: HADOOP-14841
> URL: https://issues.apache.org/jira/browse/HADOOP-14841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14841.01.patch
>
>
> We have seen quite a few occurrences where, when the KMS server is stressed,
> some of the requests end up getting a 500 return code, with this in the
> server log:
> {noformat}
> 2017-08-31 06:45:33,021 WARN org.apache.hadoop.crypto.key.kms.server.KMS: 
> User impala/HOSTNAME@REALM (auth:KERBEROS) request POST 
> https://HOSTNAME:16000/kms/v1/keyversion/MNHDKEdWtZWM4vPb0p2bw544vdSRB2gy7APAQURcZns/_eek?eek_op=decrypt
>  caused exception.
> java.io.EOFException: No content to map to Object due to end of input
> at 
> org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2444)
> at 
> org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396)
> at 
> org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1648)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:54)
> at 
> com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
> at 
> com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
> at 
> 

[jira] [Updated] (HADOOP-14841) Let KMS Client to retry 'No content to map' EOFExceptions

2017-09-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14841:
---
Summary: Let KMS Client to retry 'No content to map' EOFExceptions  (was: 
Add KMS Client retry to handle 'No content to map' EOFExceptions)

> Let KMS Client to retry 'No content to map' EOFExceptions
> -
>
> Key: HADOOP-14841
> URL: https://issues.apache.org/jira/browse/HADOOP-14841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14841.01.patch
>
>
> We have seen quite a few occurrences where, when the KMS server is stressed,
> some of the requests end up getting a 500 return code, with this in the
> server log:
> {noformat}
> 2017-08-31 06:45:33,021 WARN org.apache.hadoop.crypto.key.kms.server.KMS: 
> User impala/HOSTNAME@REALM (auth:KERBEROS) request POST 
> https://HOSTNAME:16000/kms/v1/keyversion/MNHDKEdWtZWM4vPb0p2bw544vdSRB2gy7APAQURcZns/_eek?eek_op=decrypt
>  caused exception.
> java.io.EOFException: No content to map to Object due to end of input
> at 
> org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2444)
> at 
> org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396)
> at 
> org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1648)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:54)
> at 
> com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
> at 
> com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
> at 
> 

[jira] [Updated] (HADOOP-14841) Add KMS Client retry to handle 'No content to map' EOFExceptions

2017-09-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14841:
---
Attachment: HADOOP-14841.01.patch

Attached a patch to express the idea and a unit test.

Intentionally reproducing this on a KMS is pretty difficult, but I was able to 
reproduce it by setting the KMS max threads to 1 and generating some load with 
a script (on the old Tomcat version).

I have not tried with the Jetty KMS yet, but since the {{KMSJSONReader}} code 
is still the same, IMO the issue is unlikely to go away by itself.
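As a rough illustration of the retry idea (not the actual patch; the names here are hypothetical), the client-side handling could look like:

{code:java}
import java.io.EOFException;
import java.io.IOException;

final class KmsRetrySketch {
  /** A KMS call that can be safely re-issued. */
  interface KmsCall<T> {
    T run() throws IOException;
  }

  /**
   * Retry a KMS call when the server fails with the transient
   * "No content to map" EOFException seen under load.
   */
  static <T> T callWithRetry(KmsCall<T> call, int maxRetries)
      throws IOException {
    EOFException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return call.run();
      } catch (EOFException e) {
        String msg = e.getMessage();
        if (msg == null || !msg.contains("No content to map")) {
          throw e;  // not the failure mode being targeted
        }
        last = e;   // transient server-side hiccup: retry
      }
    }
    throw last;
  }
}
{code}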

> Add KMS Client retry to handle 'No content to map' EOFExceptions
> 
>
> Key: HADOOP-14841
> URL: https://issues.apache.org/jira/browse/HADOOP-14841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14841.01.patch
>
>
> We have seen quite a few occurrences where, when the KMS server is stressed,
> some of the requests end up getting a 500 return code, with this in the
> server log:
> {noformat}
> 2017-08-31 06:45:33,021 WARN org.apache.hadoop.crypto.key.kms.server.KMS: 
> User impala/HOSTNAME@REALM (auth:KERBEROS) request POST 
> https://HOSTNAME:16000/kms/v1/keyversion/MNHDKEdWtZWM4vPb0p2bw544vdSRB2gy7APAQURcZns/_eek?eek_op=decrypt
>  caused exception.
> java.io.EOFException: No content to map to Object due to end of input
> at 
> org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2444)
> at 
> org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396)
> at 
> org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1648)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:54)
> at 
> com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
> at 
> com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
> at 
> 

[jira] [Updated] (HADOOP-14841) Add KMS Client retry to handle 'No content to map' EOFExceptions

2017-09-05 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14841:
---
Status: Patch Available  (was: Open)

> Add KMS Client retry to handle 'No content to map' EOFExceptions
> 
>
> Key: HADOOP-14841
> URL: https://issues.apache.org/jira/browse/HADOOP-14841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14841.01.patch
>
>
> We have seen quite a few occurrences where, when the KMS server is stressed,
> some of the requests end up getting a 500 return code, with this in the
> server log:
> {noformat}
> 2017-08-31 06:45:33,021 WARN org.apache.hadoop.crypto.key.kms.server.KMS: User impala/HOSTNAME@REALM (auth:KERBEROS) request POST https://HOSTNAME:16000/kms/v1/keyversion/MNHDKEdWtZWM4vPb0p2bw544vdSRB2gy7APAQURcZns/_eek?eek_op=decrypt caused exception.
> java.io.EOFException: No content to map to Object due to end of input
> at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2444)
> at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396)
> at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1648)
> at org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:54)
> at com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
> at com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
> at com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
> at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
> at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
> at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
> at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
> at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
> at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
> at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
> at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
> at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
> at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
> at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
> at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
> at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:130)
> at

[jira] [Commented] (HADOOP-14840) Tool to estimate resource requirements of an application pipeline based on prior executions

2017-09-05 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154532#comment-16154532
 ] 

Vrushali C commented on HADOOP-14840:
-

Thanks Subru for the response! 

No, we don't need to break it up into smaller patches. My question was more out 
of curiosity; I was wondering whether this was still at an initial stage and, if 
so, whether there would be sub-tasks. It's good to hear that you already have 
something running internally and want to contribute it back to the community. 
I will look forward to the design doc, with an eye to understanding it for 
future TSv2 use cases. I will also look through the paper. 

thanks!

> Tool to estimate resource requirements of an application pipeline based on 
> prior executions
> ---
>
> Key: HADOOP-14840
> URL: https://issues.apache.org/jira/browse/HADOOP-14840
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Subru Krishnan
>Assignee: Rui Li
>
> We have been working on providing SLAs for job execution on Hadoop. At a high 
> level this involves two parts: deriving the resource requirements of a job and 
> guaranteeing the estimated resources at runtime. The {{YARN 
> ReservationSystem}} (YARN-1051/YARN-2572/YARN-5326) enables the latter, and in 
> this JIRA we propose to add a tool to Hadoop to predict the resource 
> requirements of a job based on past executions of the job. The system (aka 
> *Morpheus*) deep dive can be found in our OSDI'16 paper 
> [here|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14840) Tool to estimate resource requirements of an application pipeline based on prior executions

2017-09-05 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154517#comment-16154517
 ] 

Subru Krishnan commented on HADOOP-14840:
-

[~vrushalic], thanks for your comments; please find my responses inline.

bq. This sounds interesting. Will there be subtasks under this? 

We have developed this tool and are using it internally, so it has already been 
through rigorous reviews; we were therefore planning on a single uber patch to 
save time. We can break it up if really required.

bq. Similar estimations for MR jobs, such as reducer estimation and memory 
estimation, are done based on past runs using a tool called hRaven 
(https://github.com/twitter/hraven)

We are aware of hRaven. In fact, we have similar work for Hive and Tez called 
[PerfOrator|http://dl.acm.org/citation.cfm?id=2987566]. With the ever-growing 
number of frameworks (and optimizations/updates to existing ones), we decided 
to follow a different approach here:
* Be framework agnostic, i.e. we model only resources over time, so we are not 
limited to map or reduce (or Tez or REEF). We are essentially emulating the 
YARN model natively.
* Work purely based on history (it can hook directly into hRaven). We have 
developed a Linear Programming model to estimate the resources required for the 
job's future executions; a simplified sketch of the idea appears after the 
paper link below.
* Optionally reserve the resources using YARN's ReservationSystem.

 The gory details are available in the [Morpheus 
paper|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi].
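
To make that history-based estimation concrete, here is a deliberately 
simplified sketch (illustrative only: this is not the Morpheus code, and it 
uses a per-interval percentile shortcut rather than the actual Linear 
Programming formulation; all class and method names below are made up):

{code}
// Sketch: derive a provisioning skyline for the next run of a job from the
// container usage its past runs showed in each aligned time interval, by
// taking a high percentile across runs. A real LP formulation would instead
// optimize allocation cost against the risk of under-provisioning.
import java.util.Arrays;

public class SkylineEstimatorSketch {

  /**
   * @param history  history[run][interval] = containers used by that run in
   *                 that interval (runs aligned and of equal length)
   * @param quantile e.g. 0.95 to cover 95% of observed runs per interval
   * @return predicted containers needed per interval for the next run
   */
  public static int[] estimate(int[][] history, double quantile) {
    int intervals = history[0].length;
    int[] skyline = new int[intervals];
    int[] column = new int[history.length];
    for (int t = 0; t < intervals; t++) {
      for (int run = 0; run < history.length; run++) {
        column[run] = history[run][t];
      }
      Arrays.sort(column);
      int idx = (int) Math.ceil(quantile * column.length) - 1;
      skyline[t] = column[Math.max(idx, 0)];
    }
    return skyline;  // could seed a YARN ReservationRequest per interval
  }

  public static void main(String[] args) {
    int[][] pastRuns = {
        {10, 40, 40, 5},   // containers per interval, run 1
        {12, 38, 45, 6},   // run 2
        {11, 42, 41, 5},   // run 3
    };
    System.out.println(Arrays.toString(estimate(pastRuns, 0.95)));
  }
}
{code}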

bq. I would think of this as a natural extension to Timeline Service v2. Do you 
think you might want to make use of TSv2 in any way for this?

We couldn't agree more :). The initial version we have works on YARN RM logs 
only, because that's what is widely available in production deployments. We 
have designed it with the clear intent of integrating with TSv2 (or even 
hRaven) in the future, with the community's help, of course. 

We will upload a design doc shortly.

> Tool to estimate resource requirements of an application pipeline based on 
> prior executions
> ---
>
> Key: HADOOP-14840
> URL: https://issues.apache.org/jira/browse/HADOOP-14840
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Subru Krishnan
>Assignee: Rui Li
>
> We have been working on providing SLAs for job execution on Hadoop. At a high 
> level this involves two parts: deriving the resource requirements of a job and 
> guaranteeing the estimated resources at runtime. The {{YARN 
> ReservationSystem}} (YARN-1051/YARN-2572/YARN-5326) enables the latter, and in 
> this JIRA we propose to add a tool to Hadoop to predict the resource 
> requirements of a job based on past executions of the job. The system (aka 
> *Morpheus*) deep dive can be found in our OSDI'16 paper 
> [here|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14652) Update metrics-core version to 3.2.3

2017-09-05 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14652:

Summary: Update metrics-core version to 3.2.3  (was: Update metrics-core 
version)

> Update metrics-core version to 3.2.3
> 
>
> Key: HADOOP-14652
> URL: https://issues.apache.org/jira/browse/HADOOP-14652
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch
>
>
> The current artifact is:
> com.codahale.metrics:metrics-core:3.0.1
> That version could either be bumped to 3.0.2 (the latest of that line), or we 
> could use the latest artifact:
> io.dropwizard.metrics:metrics-core:3.2.3
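
For reference, moving to the new coordinates would look like the following 
dependency declaration (a sketch; exactly where it lands in 
hadoop-project/pom.xml, and any version property it should use, are 
assumptions):

{code}
<!-- Sketch: replace com.codahale.metrics:metrics-core:3.0.1 with the
     io.dropwizard.metrics coordinates proposed above. -->
<dependency>
  <groupId>io.dropwizard.metrics</groupId>
  <artifactId>metrics-core</artifactId>
  <version>3.2.3</version>
</dependency>
{code}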



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-09-05 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14688:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed the patch to branch-2, branch-3.0 and trunk. Thanks [~xiaochen] for 
the patch, and [~daryn] [~mi...@cloudera.com] for review and thoughts!

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: GC root of the String.png, HADOOP-14688.01.patch, 
> heapdump analysis.png, jxray.report
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}} objects. We should not create 
> duplicate objects for them.
> This is even more important for HDFS-10899, where we try to re-encrypt all 
> files' EDEKs in a given EZ. Those EDEKs all have the same key name, and mostly 
> use no more than a couple of key version names.
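
As a rough illustration of the interning idea (a sketch only, not the committed 
patch; the real classes live under org.apache.hadoop.crypto.key and carry more 
state than shown here):

{code}
// Sketch: intern the names so the many KeyVersion objects created during bulk
// operations share one String instance per distinct name, instead of each
// holding its own duplicate copy. Field layout is simplified for illustration.
public class KeyVersion {
  private final String name;         // few distinct key names in practice
  private final String versionName;  // few distinct version names in practice
  private final byte[] material;

  protected KeyVersion(String name, String versionName, byte[] material) {
    this.name = (name == null) ? null : name.intern();
    this.versionName = (versionName == null) ? null : versionName.intern();
    this.material = material;
  }
}
{code}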



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-09-05 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-14688:
-
Fix Version/s: 3.0.0-beta1
   2.9.0

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: GC root of the String.png, HADOOP-14688.01.patch, 
> heapdump analysis.png, jxray.report
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}} objects. We should not create 
> duplicate objects for them.
> This is even more important for HDFS-10899, where we try to re-encrypt all 
> files' EDEKs in a given EZ. Those EDEKs all have the same key name, and mostly 
> use no more than a couple of key version names.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2,9, target removal in Hadoop 3.1; rework docs

2017-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154466#comment-16154466
 ] 

Andrew Wang commented on HADOOP-14738:
--

Hi Steve, thanks for the patch,

I'd rather we keep the jets3t scope at "compile" if we're going to keep it at 
all. Anything else punts the classpath problems to end users, who are less 
equipped to deal with them. IMO features should be fully supported or not 
supported at all.

Removing in 3.0 also seems better than removing in 3.1 since at least 3.0 is a 
major release.

Whether to remove or not depends on how often S3N is used, and how hard it is 
to use S3A instead. I'll defer to subject matter experts like yourself and 
Aaron on this question. If it's not used and easy to migrate, then I'm +1 on 
removing in 3.0.

> Deprecate S3N in hadoop 3.0/2,9, target removal in Hadoop 3.1; rework docs
> --
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14739-001.patch
>
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and 
> high-performance since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the 
> transitive dependencies.
> I propose that in the Hadoop 3.0 beta we steer people away from using it, and 
> link to a doc page (wiki?) about how to migrate (change URLs, update config 
> options).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14842) Hadoop 2.8.2 release build process get stuck due to java issue

2017-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154458#comment-16154458
 ] 

Allen Wittenauer commented on HADOOP-14842:
---

Just in case it isn't obvious: you'll need to pull it apart for 2.8.2.  It 
also needs some documentation, etc.



> Hadoop 2.8.2 release build process get stuck due to java issue
> --
>
> Key: HADOOP-14842
> URL: https://issues.apache.org/jira/browse/HADOOP-14842
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Junping Du
>Priority: Blocker
>
> My latest 2.8.2 release build (via Docker) failed, and I received the 
> following errors:
>  
> {noformat}
> "/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean 
> install
> Error: JAVA_HOME is not defined correctly. We cannot execute 
> /usr/lib/jvm/java-7-oracle/bin/java"
> {noformat}
> This looks related to HADOOP-14474. However, reverting that patch doesn't 
> help here, because the build then fails earlier in the Java 
> download/installation - maybe, as mentioned in HADOOP-14474, some Java 7 
> download addresses were changed by Oracle. 
> Hard-coding my local JAVA_HOME in create-release or the Dockerfile doesn't 
> work either, although it shows the correct Java home. My suspicion so far is 
> that we still need to download Java 7 from somewhere for the Docker build to 
> proceed, but I haven't found a way through this yet.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14842) Hadoop 2.8.2 release build process get stuck due to java issue

2017-09-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154447#comment-16154447
 ] 

Junping Du commented on HADOOP-14842:
-

Thanks for the update, [~aw]. I will try your patch from HADOOP-14816 and 
report back.

> Hadoop 2.8.2 release build process get stuck due to java issue
> --
>
> Key: HADOOP-14842
> URL: https://issues.apache.org/jira/browse/HADOOP-14842
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Junping Du
>Priority: Blocker
>
> My latest 2.8.2 release build (via Docker) failed, and I received the 
> following errors:
>  
> {noformat}
> "/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean 
> install
> Error: JAVA_HOME is not defined correctly. We cannot execute 
> /usr/lib/jvm/java-7-oracle/bin/java"
> {noformat}
> This looks related to HADOOP-14474. However, reverting that patch doesn't 
> help here, because the build then fails earlier in the Java 
> download/installation - maybe, as mentioned in HADOOP-14474, some Java 7 
> download addresses were changed by Oracle. 
> Hard-coding my local JAVA_HOME in create-release or the Dockerfile doesn't 
> work either, although it shows the correct Java home. My suspicion so far is 
> that we still need to download Java 7 from somewhere for the Docker build to 
> proceed, but I haven't found a way through this yet.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14841) Add KMS Client retry to handle 'No content to map' EOFExceptions

2017-09-05 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154442#comment-16154442
 ] 

Xiao Chen commented on HADOOP-14841:


Adding some context from my investigation, and a proposed solution.
- The exception comes from {{KMSJSONReader}} when it tries to deserialize the 
JSON object; the writer seems fine. This is most likely due to the differing 
underlying Jackson requirements of 
[MessageBodyReader|http://grepcode.com/file/repo1.maven.org/maven2/javax.ws.rs/javax.ws.rs-api/2.0.1/javax/ws/rs/ext/MessageBodyReader.java#140]
 and 
[MessageBodyWriter|http://grepcode.com/file/repo1.maven.org/maven2/javax.ws.rs/javax.ws.rs-api/2.0.1/javax/ws/rs/ext/MessageBodyWriter.java#128]:
 

{quote}In case the entity input stream is empty, the reader is expected to 
either return a Java representation of a zero-length entity or throw a 
javax.ws.rs.core.NoContentException in case no zero-length entity 
representation is defined for the supported Java type. A NoContentException, if 
thrown by a message body reader while reading a server request entity, is 
automatically translated by JAX-RS server runtime into a 
javax.ws.rs.BadRequestException wrapping the original NoContentException and 
rethrown for a standard processing by the registered exception mappers.
{quote}

So it looks like the KMS reader code should special-case the 'entity input 
stream is empty' situation; a sketch of what that could look like follows.
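
A minimal sketch of that special-casing, for illustration only (this is not the 
actual KMSJSONReader nor necessarily the eventual fix; returning an empty map 
for a zero-length entity is an assumption, per the contract quoted below):

{code}
// Sketch: peek at the entity stream and return an empty representation for a
// zero-length entity, instead of letting Jackson throw EOFException.
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;
import java.util.Collections;
import java.util.Map;

import org.codehaus.jackson.map.ObjectMapper;

public class EmptyTolerantReaderSketch {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static Map readFrom(InputStream entityStream) throws IOException {
    PushbackInputStream in = new PushbackInputStream(entityStream);
    int first = in.read();
    if (first == -1) {
      return Collections.emptyMap(); // zero-length entity -> empty object
    }
    in.unread(first);                // put the byte back for Jackson
    return MAPPER.readValue(in, Map.class);
  }
}
{code}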


- Jackson 2.4.0+ [added a 
flag|https://github.com/fasterxml/jackson-jaxrs-providers/issues/49] to allow 
an empty input stream. If we were ever to upgrade to that, we may be able to 
get away with its IOE - which means the KMS client will retry.
- Why is the input stream empty? Not 100% sure in our case, but very likely 
third-party code - I checked the Hadoop code through the entire call stack 
(below) into KMSJSONReader and didn't find any violation that could result in 
the stream being read prematurely. The closest [googling 
result|https://stackoverflow.com/questions/8522568/why-is-httpservletrequest-inputstream-empty]
 I found seems to point their issues to GAE/Catalina.

{noformat}
"561010393@qtp-1495634188-2@4730" prio=5 tid=0x22 nid=NA runnable
  java.lang.Thread.State: RUNNABLE
  at org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:57)
  at org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:38)
  at com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
  at com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
  at com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
  at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
  at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
  at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
  at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
  at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
  at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
  at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
  at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
  at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
  at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
  at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
  at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
  at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
  at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
  at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
  at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
  at org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)

[jira] [Commented] (HADOOP-14840) Tool to estimate resource requirements of an application pipeline based on prior executions

2017-09-05 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154435#comment-16154435
 ] 

Vrushali C commented on HADOOP-14840:
-

This sounds interesting. Will there be subtasks under this? 

Similar estimations for MR jobs, such as reducer estimation and memory 
estimation, are done based on past runs using a tool called hRaven 
(https://github.com/twitter/hraven). I would think of this as a natural 
extension to Timeline Service v2. Do you think you might want to make use of 
TSv2 in any way for this?


> Tool to estimate resource requirements of an application pipeline based on 
> prior executions
> ---
>
> Key: HADOOP-14840
> URL: https://issues.apache.org/jira/browse/HADOOP-14840
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Subru Krishnan
>Assignee: Rui Li
>
> We have been working on providing SLAs for job execution on Hadoop. At a high 
> level this involves two parts: deriving the resource requirements of a job and 
> guaranteeing the estimated resources at runtime. The {{YARN 
> ReservationSystem}} (YARN-1051/YARN-2572/YARN-5326) enables the latter, and in 
> this JIRA we propose to add a tool to Hadoop to predict the resource 
> requirements of a job based on past executions of the job. The system (aka 
> *Morpheus*) deep dive can be found in our OSDI'16 paper 
> [here|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14842) Hadoop 2.8.2 release build process get stuck due to java issue

2017-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154433#comment-16154433
 ] 

Allen Wittenauer commented on HADOOP-14842:
---

create-release only supports the Oracle JDK:

https://github.com/apache/hadoop/blob/ccd2ac60ecc5fccce56debf21a068e663c1d5f11/dev-support/bin/create-release#L492

I started to undo that in HADOOP-14816.

> Hadoop 2.8.2 release build process get stuck due to java issue
> --
>
> Key: HADOOP-14842
> URL: https://issues.apache.org/jira/browse/HADOOP-14842
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Junping Du
>Priority: Blocker
>
> My latest 2.8.2 release build (via Docker) failed, and I received the 
> following errors:
>  
> {noformat}
> "/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean 
> install
> Error: JAVA_HOME is not defined correctly. We cannot execute 
> /usr/lib/jvm/java-7-oracle/bin/java"
> {noformat}
> This looks related to HADOOP-14474. However, reverting that patch doesn't 
> help here, because the build then fails earlier in the Java 
> download/installation - maybe, as mentioned in HADOOP-14474, some Java 7 
> download addresses were changed by Oracle. 
> Hard-coding my local JAVA_HOME in create-release or the Dockerfile doesn't 
> work either, although it shows the correct Java home. My suspicion so far is 
> that we still need to download Java 7 from somewhere for the Docker build to 
> proceed, but I haven't found a way through this yet.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14474) Use OpenJDK 7 instead of Oracle JDK 7 to avoid oracle-java7-installer failures

2017-09-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154365#comment-16154365
 ] 

Junping Du commented on HADOOP-14474:
-

HADOOP-14842 has been filed for further discussion.

> Use OpenJDK 7 instead of Oracle JDK 7 to avoid oracle-java7-installer failures
> --
>
> Key: HADOOP-14474
> URL: https://issues.apache.org/jira/browse/HADOOP-14474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 2.9.0, 2.7.4, 2.6.6, 2.8.2
>
> Attachments: HADOOP-14474-branch-2.01.patch
>
>
> Oracle recently changed the download link for Oracle JDK 7, which is why 
> oracle-java7-installer fails. Precommit jobs for branch-2* are failing 
> because of this.
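
For context, the kind of change the title describes would look roughly like 
this in a build Dockerfile (a sketch assuming a Debian/Ubuntu base image; the 
package name and JVM path are assumptions, not the actual dev-support 
Dockerfile):

{code}
# Sketch: install OpenJDK 7 from the distro archives instead of the
# oracle-java7-installer package, so the image no longer depends on Oracle's
# (changeable) download URLs. Package/path names are assumptions.
RUN apt-get update && apt-get install -y openjdk-7-jdk
ENV JAVA_HOME /usr/lib/jvm/java-7-openjdk-amd64
{code}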



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14842) Hadoop 2.8.2 release build process get stuck due to java issue

2017-09-05 Thread Junping Du (JIRA)
Junping Du created HADOOP-14842:
---

 Summary: Hadoop 2.8.2 release build process get stuck due to java 
issue
 Key: HADOOP-14842
 URL: https://issues.apache.org/jira/browse/HADOOP-14842
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Junping Du
Priority: Blocker


My latest 2.8.2 release build (via Docker) failed, and I received the following 
errors:
 
{noformat}
"/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean 
install
Error: JAVA_HOME is not defined correctly. We cannot execute 
/usr/lib/jvm/java-7-oracle/bin/java"
{noformat}

This looks related to HADOOP-14474. However, reverting that patch doesn't help 
here, because the build then fails earlier in the Java download/installation - 
maybe, as mentioned in HADOOP-14474, some Java 7 download addresses were 
changed by Oracle. 
Hard-coding my local JAVA_HOME in create-release or the Dockerfile doesn't work 
either, although it shows the correct Java home. My suspicion so far is that we 
still need to download Java 7 from somewhere for the Docker build to proceed, 
but I haven't found a way through this yet.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14583) wasb throws an exception if you try to create a file and there's no parent directory

2017-09-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14583:
-
Fix Version/s: 3.0.0-beta1

> wasb throws an exception if you try to create a file and there's no parent 
> directory
> 
>
> Key: HADOOP-14583
> URL: https://issues.apache.org/jira/browse/HADOOP-14583
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14583-001.patch, HADOOP-14583-002.patch, 
> HADOOP-14583-003.patch
>
>
> It's a known defect of the Hadoop FS API (and one we don't explicitly test 
> for enough), but you can create a file on a path whose parent doesn't exist. 
> In that situation, the create() logic is expected to create the missing 
> entries.
> Wasb appears to raise an exception if you try to call {{create(filepath)}} 
> without calling {{mkdirs(filepath.getParent())}} first. Those are the 
> semantics expected of {{createNonRecursive()}}.
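
To illustrate the contract difference being described (a sketch only; the paths 
are made up, and the behavior shown is what a compliant FileSystem is expected 
to do):

{code}
// Sketch: FileSystem.create() is expected to create missing parents, while
// createNonRecursive() must fail when the parent directory is absent.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateContractDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/no/such/parent/file.txt");

    // Expected to succeed, creating /no/such/parent along the way.
    FSDataOutputStream out = fs.create(file, true);
    out.close();

    // This variant is the one defined to throw when the parent is missing.
    fs.createNonRecursive(new Path("/another/missing/parent/f.txt"),
        true, 4096, (short) 1, 64L * 1024 * 1024, null); // expect IOException
  }
}
{code}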



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13998) Merge initial S3guard release into trunk

2017-09-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-13998:
--

Re-opening to resolve as "Complete" or something, since this code change was 
attributed to the parent JIRA HADOOP-13345 in the commit message.

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch, HADOOP-13998-005.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13998) Merge initial S3guard release into trunk

2017-09-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-13998.
--
Resolution: Done

Re-resolving per above.

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch, HADOOP-13998-005.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14474) Use OpenJDK 7 instead of Oracle JDK 7 to avoid oracle-java7-installer failures

2017-09-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154335#comment-16154335
 ] 

Junping Du commented on HADOOP-14474:
-

Hi, my latest 2.8.2 release Docker build failed, and I get the following errors:
{noformat} 
"/usr/bin/mvn -Dmaven.repo.local=/maven -pl hadoop-maven-plugins -am clean 
install
Error: JAVA_HOME is not defined correctly. We cannot execute 
/usr/lib/jvm/java-7-oracle/bin/java"
{noformat}
Is that related to this patch? I tried to revert this patch, but the build then 
fails earlier in the Java download - maybe, as mentioned above, some download 
addresses were changed by Oracle. Also, hard-coding my local JAVA_HOME in 
create-release or the Dockerfile doesn't work here. Any workaround/fix for 
resolving this issue?

> Use OpenJDK 7 instead of Oracle JDK 7 to avoid oracle-java7-installer failures
> --
>
> Key: HADOOP-14474
> URL: https://issues.apache.org/jira/browse/HADOOP-14474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 2.9.0, 2.7.4, 2.6.6, 2.8.2
>
> Attachments: HADOOP-14474-branch-2.01.patch
>
>
> Oracle recently changed the download link for Oracle JDK 7, which is why 
> oracle-java7-installer fails. Precommit jobs for branch-2* are failing 
> because of this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14820) Wasb mkdirs security checks inconsistent with HDFS

2017-09-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154332#comment-16154332
 ] 

Andrew Wang commented on HADOOP-14820:
--

Cherry-picked this back to branch-3.0 for beta1 as well, thanks folks.

> Wasb mkdirs security checks inconsistent with HDFS
> --
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14820.001.patch, HADOOP-14820.002.patch, 
> HADOOP-14820.003.patch, HADOOP-14820.004.patch, HADOOP-14820.005.patch, 
> HADOOP-14820-006.patch, HADOOP-14820-007.patch, 
> HADOOP-14820-branch-2-001.patch.txt
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 rootroot  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
> {noformat}
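
A sketch of the check ordering this implies, in illustrative Java only (not the 
WASB patch; the store/authorizer helpers below are hypothetical stand-ins, and 
the path is assumed to be absolute and below an existing root):

{code}
// Sketch: authorize only against the deepest existing ancestor, and skip the
// check entirely when the requested hierarchy already exists (mkdirs -p
// semantics). Helper methods are hypothetical.
import java.io.IOException;
import org.apache.hadoop.fs.Path;

abstract class MkdirsSemanticsSketch {
  abstract boolean exists(Path p) throws IOException;
  abstract void authorizeWrite(Path p) throws IOException; // throws if denied
  abstract boolean createHierarchy(Path p) throws IOException;

  boolean mkdirs(Path path) throws IOException {
    if (exists(path)) {
      return true;                   // hierarchy already there: no auth check
    }
    Path deepestExisting = path.getParent();
    while (!exists(deepestExisting)) {
      deepestExisting = deepestExisting.getParent();
    }
    authorizeWrite(deepestExisting); // check where creation actually begins
    return createHierarchy(path);    // create all missing directories
  }
}
{code}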



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14841) Add KMS Client retry to handle 'No content to map' EOFExceptions

2017-09-05 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-14841:
--

 Summary: Add KMS Client retry to handle 'No content to map' 
EOFExceptions
 Key: HADOOP-14841
 URL: https://issues.apache.org/jira/browse/HADOOP-14841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: Xiao Chen
Assignee: Xiao Chen


We have seen quite a few occurrences where, when the KMS server is stressed, 
some of the requests end up getting a 500 return code, with this in the server 
log:
{noformat}
2017-08-31 06:45:33,021 WARN org.apache.hadoop.crypto.key.kms.server.KMS: User impala/HOSTNAME@REALM (auth:KERBEROS) request POST https://HOSTNAME:16000/kms/v1/keyversion/MNHDKEdWtZWM4vPb0p2bw544vdSRB2gy7APAQURcZns/_eek?eek_op=decrypt caused exception.
java.io.EOFException: No content to map to Object due to end of input
at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2444)
at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2396)
at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1648)
at org.apache.hadoop.crypto.key.kms.server.KMSJSONReader.readFrom(KMSJSONReader.java:54)
at com.sun.jersey.spi.container.ContainerRequest.getEntity(ContainerRequest.java:474)
at com.sun.jersey.server.impl.model.method.dispatch.EntityParamDispatchProvider$EntityInjectable.getValue(EntityParamDispatchProvider.java:123)
at com.sun.jersey.server.impl.inject.InjectableValuesProvider.getInjectableValues(InjectableValuesProvider.java:46)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$EntityParamInInvoker.getParams(AbstractResourceMethodDispatchProvider.java:153)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:203)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:631)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:301)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:579)
at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:130)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 

[jira] [Commented] (HADOOP-14738) Deprecate S3N in hadoop 3.0/2,9, target removal in Hadoop 3.1; rework docs

2017-09-05 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154294#comment-16154294
 ] 

Aaron Fabbri commented on HADOOP-14738:
---

This looks great. +1 (non-binding). The docs are really coming together; 
thanks for the improvements.

> Deprecate S3N in hadoop 3.0/2,9, target removal in Hadoop 3.1; rework docs
> --
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14739-001.patch
>
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and 
> high-performance since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, and the 
> transitive dependencies.
> I propose that in the Hadoop 3.0 beta we steer people away from using it, and 
> link to a doc page (wiki?) about how to migrate (change URLs, update config 
> options).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14840) Tool to estimate resource requirements of an application pipeline based on prior executions

2017-09-05 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reassigned HADOOP-14840:
---

Assignee: Rui Li

> Tool to estimate resource requirements of an application pipeline based on 
> prior executions
> ---
>
> Key: HADOOP-14840
> URL: https://issues.apache.org/jira/browse/HADOOP-14840
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Subru Krishnan
>Assignee: Rui Li
>
> We have been working on providing SLAs for job execution on Hadoop. At a high 
> level this involves two parts: deriving the resource requirements of a job and 
> guaranteeing the estimated resources at runtime. The {{YARN 
> ReservationSystem}} (YARN-1051/YARN-2572/YARN-5326) enables the latter, and in 
> this JIRA we propose to add a tool to Hadoop to predict the resource 
> requirements of a job based on past executions of the job. The system (aka 
> *Morpheus*) deep dive can be found in our OSDI'16 paper 
> [here|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14840) Tool to estimate resource requirements of an application pipeline based on prior executions

2017-09-05 Thread Subru Krishnan (JIRA)
Subru Krishnan created HADOOP-14840:
---

 Summary: Tool to estimate resource requirements of an application 
pipeline based on prior executions
 Key: HADOOP-14840
 URL: https://issues.apache.org/jira/browse/HADOOP-14840
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: Subru Krishnan


We have been working on providing SLAs for job execution on Hadoop. At a high 
level this involves two parts: deriving the resource requirements of a job and 
guaranteeing the estimated resources at runtime. The {{YARN ReservationSystem}} 
(YARN-1051/YARN-2572/YARN-5326) enables the latter, and in this JIRA we propose 
to add a tool to Hadoop to predict the resource requirements of a job based on 
past executions of the job. The system (aka *Morpheus*) deep dive can be found 
in our OSDI'16 paper 
[here|https://www.usenix.org/conference/osdi16/technical-sessions/presentation/jyothi].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154215#comment-16154215
 ] 

Xiaoyu Yao commented on HADOOP-14839:
-

Also, I moved the JIRA from the hadoop-hdfs project to hadoop-common. Please 
change the patch name accordingly when attaching a new patch. 

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via the {{-log}} command-line option) currently 
> contains only skipped and failed files (the latter when failures are ignored 
> via {{-i}}).
> It would be more useful if it also contained copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14217) Object Storage: support colon in object path

2017-09-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154214#comment-16154214
 ] 

ASF GitHub Bot commented on HADOOP-14217:
-

GitHub user yufeldman opened a pull request:

https://github.com/apache/hadoop/pull/269

HADOOP-14217. Support colon in Hadoop Path



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yufeldman/hadoop HADOOP-14217

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/269.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #269


commit c3c7225bd1f324ae3a9e6f6875e3d23d5725eb68
Author: Yuliya Feldman 
Date:   2017-09-02T23:53:16Z

HADOOP-14217. Support colon in Hadoop Path




> Object Storage: support colon in object path
> 
>
> Key: HADOOP-14217
> URL: https://issues.apache.org/jira/browse/HADOOP-14217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/oss
>Affects Versions: 2.8.1
>Reporter: Genmao Yu
>Assignee: Yuliya Feldman
> Attachments: Colon handling in hadoop Path.pdf
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154213#comment-16154213
 ] 

Xiaoyu Yao commented on HADOOP-14839:
-

[~linyiqun], can you update DistCp_Counter.properties with the new DIR_COPY 
counter? Without it, the DistCp Counters output will only show the raw name, as 
below, which is not very user-friendly.

{code}
DistCp Counters
...
Files Copied=6
DIR_COPY=3
{code}
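
Presumably something along these lines is what's needed (a sketch following the 
usual Hadoop counter resource-bundle convention; the display-name wording, and 
the file's existing contents, are assumptions):

{code}
# Sketch: map the new enum constant to a friendly display name.
CounterGroupName=DistCp Counters
DIR_COPY.name=Directories Copied
{code}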

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via the {{-log}} command-line option) currently 
> contains only skipped and failed files (the latter when failures are ignored 
> via {{-i}}).
> It would be more useful if it also contained copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154208#comment-16154208
 ] 

Hadoop QA commented on HADOOP-14839:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
0s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14839 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884459/HDFS-10234.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7a10693c26d6 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0ba8ff4 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13171/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13171/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> 

[jira] [Commented] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-09-05 Thread Ping Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154192#comment-16154192
 ] 

Ping Liu commented on HADOOP-14600:
---

I couldn't successfully set up a local environment to run test-patch, so I 
went to the test results at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13153/testReport/ in the 
above table from [~hadoopqa].

I manually tested each of the following:

* TestSFTPFileSystem.testStatFile
* TestDNS.testDefaultDnsServer
* TestRaceWhenRelogin.test
* TestKDiag.testKeytabAndPrincipal
* TestKDiag.testFileOutput
* TestKDiag.testLoadResource

But none of the tests hits the new method *loadPermissionInfoByNativeIO()* 
in *RawLocalFileSystem* -- *loadPermissionInfoByNativeIO()* is the new code 
that replaces the original *_loadPermissionInfo()_* and is the only change from 
the previous version.

Additionally, I ran "mvn test -Pnative -Dtest=allNative" in my local 
environment and found 3 failures and 5 errors.

These were mainly timeouts; after allowing more time, the majority of the tests 
passed. TestRPCWaitForProxy.testInterruptedWaitForProxy is the only one still 
producing an error after the timeout was increased. However, manually testing 
it didn't hit the breakpoint in *loadPermissionInfoByNativeIO()* either.

In summary, I didn't find any failing test case for the target new method, 
*loadPermissionInfoByNativeIO()*. Please let me know whether this is enough 
verification, or whether there are more tests to run, and how.

CC: [~hadoopqa]



> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, 
> TestRawLocalFileSystemContract.java
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermissions}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what is a field lookup or even a no-op on every other FS is, on the 
> local FS, a process exec/spawn, with all the costs. This gets expensive 
> if you have many files.
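
For contrast, here is a hedged illustration of reading POSIX permissions 
in-process via java.nio (this is not the patch's NativeIO approach, just a way 
to see the idea of avoiding a fork/exec per file):

{code}
// Sketch: stat a file's permission bits in-process instead of exec-ing a
// shell command like `ls -ld` per file, which is the cost described above.
// (Owner/group would similarly come from Files.getOwner / readAttributes.)
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PermProbe {
  public static void main(String[] args) throws Exception {
    Set<PosixFilePermission> perms =
        Files.getPosixFilePermissions(Paths.get(args[0]));
    System.out.println(PosixFilePermissions.toString(perms)); // e.g. rwxr-xr-x
  }
}
{code}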



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14838) backport S3guard to branch-2

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154168#comment-16154168
 ] 

Hadoop QA commented on HADOOP-14838:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 57 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m  
7s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
50s{color} | {color:red} root in branch-2 failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 16s{color} 
| {color:red} root-jdk1.8.0_144 with JDK v1.8.0_144 generated 18 new + 1327 
unchanged - 4 fixed = 1345 total (was 1331) {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
54s{color} | {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 54s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 39s{color} | {color:orange} root: The patch generated 4 new + 140 unchanged 
- 0 fixed = 144 total (was 140) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | 

[jira] [Moved] (HADOOP-14839) DistCp log output should contain copied and deleted files and directories

2017-09-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao moved HDFS-10234 to HADOOP-14839:


Affects Version/s: (was: 2.7.1)
   2.7.1
  Component/s: (was: distcp)
   tools/distcp
  Key: HADOOP-14839  (was: HDFS-10234)
  Project: Hadoop Common  (was: Hadoop HDFS)

> DistCp log output should contain copied and deleted files and directories
> -
>
> Key: HADOOP-14839
> URL: https://issues.apache.org/jira/browse/HADOOP-14839
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Konstantin Shaposhnikov
>Assignee: Yiqun Lin
> Attachments: HDFS-10234.001.patch, HDFS-10234.002.patch, 
> HDFS-10234.003.patch, HDFS-10234.004.patch, HDFS-10234.005.patch
>
>
> DistCp log output (specified via {{-log}} command line option) currently 
> contains only skipped and failed (when failures are ignored via {{-i}}) files.
> It will be more useful if it also contains copied and deleted files and 
> created directories.
> This should be fixed in 
> https://github.com/apache/hadoop/blob/branch-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14838) backport S3guard to branch-2

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14838:

Status: Patch Available  (was: Open)

+credit for the (painful) POM work to [~liuml07]

> backport S3guard to branch-2
> 
>
> Key: HADOOP-14838
> URL: https://issues.apache.org/jira/browse/HADOOP-14838
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14838-branch-2-001.patch
>
>
> Backport S3Guard to branch-2
> this consists of
> * classpath updates (AWS SDK, ...)
> * hadoop bin classpath and command setup
> * java 7 compatibility
> * testing
> The last patch of HADOOP-13998 brought the java code down to java 7 & has 
> already been tested/merged with branch-2; all that's left is the packaging, 
> bin/hadoop and review



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14838) backport S3guard to branch-2

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14838:

Attachment: HADOOP-14838-branch-2-001.patch

Patch 001
tested against s3 ireland with -Dscale, and all of localdynamo and dynamodb

> backport S3guard to branch-2
> 
>
> Key: HADOOP-14838
> URL: https://issues.apache.org/jira/browse/HADOOP-14838
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14838-branch-2-001.patch
>
>
> Backport S3Guard to branch-2
> this consists of
> * classpath updates (AWS SDK, ...)
> * hadoop bin classpath and command setup
> * java 7 compatibility
> * testing
> The last patch of HADOOP-13998 brought the java code down to java 7 & has 
> already been tested/merged with branch-2; all that's left is the packaging, 
> bin/hadoop and review



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13998) Merge initial S3guard release into trunk

2017-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154053#comment-16154053
 ] 

Steve Loughran edited comment on HADOOP-13998 at 9/5/17 5:49 PM:
-

-Patch 001-

-tested against s3 ireland with  -Dscale, and all of localdynamo and dynamodb-

(wrong JIRA)


was (Author: ste...@apache.org):
Patch 001

tested against s3 ireland with  -Dscale, and all of localdynamo and dynamodb

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch, HADOOP-13998-005.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial S3guard release into trunk

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Attachment: (was: HADOOP-14838-branch-2-001.patch)

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch, HADOOP-13998-005.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13998) Merge initial S3guard release into trunk

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13998:

Attachment: HADOOP-14838-branch-2-001.patch

Patch 001

tested against s3 ireland with  -Dscale, and all of localdynamo and dynamodb

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch, HADOOP-13998-005.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13998) Merge initial S3guard release into trunk

2017-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154050#comment-16154050
 ] 

Steve Loughran commented on HADOOP-13998:
-

see HADOOP-14838 for branch-2 backport

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch, HADOOP-13998-005.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14838) backport S3guard to branch-2

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14838:

Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-13345

> backport S3guard to branch-2
> 
>
> Key: HADOOP-14838
> URL: https://issues.apache.org/jira/browse/HADOOP-14838
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Backport S3Guard to branch-2
> this consists of
> * classpath updates (AWS SDK, ...)
> * hadoop bin classpath and command setup
> * java 7 compatibility
> * testing
> The last patch of HADOOP-13998 brought the java code down to java 7 & has 
> already been tested/merged with branch-2; all that's left is the packaging, 
> bin/hadoop and review



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14838) backport S3guard to branch-2

2017-09-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14838:
---

 Summary: backport S3guard to branch-2
 Key: HADOOP-14838
 URL: https://issues.apache.org/jira/browse/HADOOP-14838
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Backport S3Guard to branch-2

this consists of
* classpath updates (AWS SDK, ...)
* hadoop bin classpath and command setup
* java 7 compatibility
* testing

The last patch of HADOOP-13998 brought the java code down to java 7 & has 
already been tested/merged with branch-2; all that's left is the packaging, 
bin/hadoop and review



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3

2017-09-05 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-13714:
--
Attachment: Compatibility.pdf
InterfaceClassification.pdf

> Tighten up our compatibility guidelines for Hadoop 3
> 
>
> Key: HADOOP-13714
> URL: https://issues.apache.org/jira/browse/HADOOP-13714
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: Compatibility.pdf, HADOOP-13714.001.patch, 
> HADOOP-13714.002.patch, HADOOP-13714.003.patch, HADOOP-13714.WIP-001.patch, 
> InterfaceClassification.pdf
>
>
> Our current compatibility guidelines are incomplete and loose. For many 
> categories, we do not have a policy. It would be nice to actually define 
> those policies so our users know what to expect and the developers know what 
> releases to target their changes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3

2017-09-05 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154038#comment-16154038
 ] 

Daniel Templeton commented on HADOOP-13714:
---

Maybe I'm missing your point.  Hadoop Common is absolutely the common bits for 
use by Hadoop, and hence Limited Private (HDFS, MapReduce, YARN, Common) seems 
quite reasonable.  Is there a reason it isn't?

The point I'm trying to make is that the reason we want to set and uphold rules 
around audience and stability is so that we can have a clear and sustainable 
contract with the consumers of our interfaces.  If we have things that are 
labeled Limited Private (MapReduce) that everybody just knows are really 
Public, then there's nothing to stop someone who doesn't just know from 
breaking those APIs and all the downstream consumers.  If we make sure the 
Public interfaces are actually labeled as Public, then we can catch the 
breakage before it happens.
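
For context, the annotations under discussion are applied like this 
(hypothetical class, real annotation types from 
{{org.apache.hadoop.classification}}):

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Only the named projects should depend on this; anything meant for
// downstream consumers would instead carry @InterfaceAudience.Public.
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "YARN", "Common"})
@InterfaceStability.Evolving
public class SomeCommonHelper {
}
{code}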

> Tighten up our compatibility guidelines for Hadoop 3
> 
>
> Key: HADOOP-13714
> URL: https://issues.apache.org/jira/browse/HADOOP-13714
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-13714.001.patch, HADOOP-13714.002.patch, 
> HADOOP-13714.003.patch, HADOOP-13714.WIP-001.patch
>
>
> Our current compatibility guidelines are incomplete and loose. For many 
> categories, we do not have a policy. It would be nice to actually define 
> those policies so our users know what to expect and the developers know what 
> releases to target their changes. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-09-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154008#comment-16154008
 ] 

Wei-Chiu Chuang commented on HADOOP-14688:
--

+1. will commit today.

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: GC root of the String.png, HADOOP-14688.01.patch, 
> heapdump analysis.png, jxray.report
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}}. We should not create duplicate 
> objects for them.
> This is more important for HDFS-10899, where we try to re-encrypt all files' 
> EDEKs in a given EZ. Those EDEKs all have the same key name, and mostly use 
> no more than a couple of key version names.
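
A minimal sketch of the interning idea (field names assumed, not taken from 
the attached patch):

{code:java}
public class KeyVersionSketch {
  private final String name;
  private final String versionName;

  KeyVersionSketch(String name, String versionName) {
    // String.intern() returns the canonical copy, so thousands of
    // KeyVersion objects sharing a key name hold one String instance.
    this.name = (name == null) ? null : name.intern();
    this.versionName = (versionName == null) ? null : versionName.intern();
  }
}
{code}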



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12219) Fix typos in hadoop-common-project module

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16154001#comment-16154001
 ] 

Hadoop QA commented on HADOOP-12219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-12219 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12744755/HADOOP-12219.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13168/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typos in hadoop-common-project module
> -
>
> Key: HADOOP-12219
> URL: https://issues.apache.org/jira/browse/HADOOP-12219
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Anthony Rojas
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12219.001.patch
>
>
> Fix a bunch of typos in comments, strings, variable names, and method names 
> in the hadoop-common-project module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153999#comment-16153999
 ] 

Hadoop QA commented on HADOOP-11398:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-11398 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-11398 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12745345/HADOOP-11398.003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13167/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398.002.patch, HADOOP-11398.003.patch, 
> HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 
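
A sketch of that timer-based approach (an assumption about the eventual fix, 
not the attached patch): bound retries with a deadline, so maxTime holds no 
matter how long each attempt takes:

{code:java}
import java.util.concurrent.TimeUnit;

public class RetryUpToMaximumTimeSketch {
  private final long deadlineNanos;
  private final long sleepNanos;

  public RetryUpToMaximumTimeSketch(long maxTime, long sleepTime,
      TimeUnit unit) {
    this.deadlineNanos = System.nanoTime() + unit.toNanos(maxTime);
    this.sleepNanos = unit.toNanos(sleepTime);
  }

  /** @return true if the caller should sleep and retry. */
  public boolean shouldRetry() throws InterruptedException {
    if (System.nanoTime() >= deadlineNanos) {
      // maxTime now bounds total elapsed time, attempt cost included.
      return false;
    }
    TimeUnit.NANOSECONDS.sleep(sleepNanos);
    return true;
  }
}
{code}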



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12219) Fix typos in hadoop-common-project module

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153998#comment-16153998
 ] 

Hadoop QA commented on HADOOP-12219:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-12219 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12219 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12744755/HADOOP-12219.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13166/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix typos in hadoop-common-project module
> -
>
> Key: HADOOP-12219
> URL: https://issues.apache.org/jira/browse/HADOOP-12219
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Anthony Rojas
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12219.001.patch
>
>
> Fix a bunch of typos in comments, strings, variable names, and method names 
> in the hadoop-common-project module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14828) RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time

2017-09-05 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153996#comment-16153996
 ] 

Jonathan Hung commented on HADOOP-14828:


Yes, seems so. Will close this as duplicate, thanks.

> RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time
> -
>
> Key: HADOOP-14828
> URL: https://issues.apache.org/jira/browse/HADOOP-14828
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>
> In RetryPolicies.java, RetryUpToMaximumTimeWithFixedSleep is converted to a 
> RetryUpToMaximumCountWithFixedSleep, whose count is the maxTime / sleepTime: 
> {noformat}public RetryUpToMaximumTimeWithFixedSleep(long maxTime, long 
> sleepTime,
> TimeUnit timeUnit) {
>   super((int) (maxTime / sleepTime), sleepTime, timeUnit);
>   this.maxTime = maxTime;
>   this.timeUnit = timeUnit;
> }
> {noformat}
> But if retries take a long time, then the maxTime passed to the 
> RetryUpToMaximumTimeWithFixedSleep is exceeded.
> As an example, while doing NM restarts, we saw an issue where the NMProxy 
> creates a retry policy which specifies a maximum wait time of 15 minutes and 
> a 10 sec interval (which is converted to a MaximumCount policy with 15 min / 
> 10 sec = 90 tries). But each NMProxy retry policy invokes o.a.h.ipc.Client's 
> retry policy: {noformat}  if (connectionRetryPolicy == null) {
> final int max = conf.getInt(
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
> 
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT);
> final int retryInterval = conf.getInt(
> 
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_KEY,
> CommonConfigurationKeysPublic
> .IPC_CLIENT_CONNECT_RETRY_INTERVAL_DEFAULT);
> connectionRetryPolicy = 
> RetryPolicies.retryUpToMaximumCountWithFixedSleep(
> max, retryInterval, TimeUnit.MILLISECONDS);
>   }{noformat}
> So the time it takes the NMProxy to fail is actually (90 retries) * (10 sec 
> NMProxy interval + o.a.h.ipc.Client retry time). In the default case, ipc 
> client retries 10 times with a 1 sec interval, meaning the time it takes for 
> NMProxy to fail is (90)(10 sec + 10 sec) = 30 min instead of the 15 min 
> specified by NMProxy configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14828) RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time

2017-09-05 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung resolved HADOOP-14828.

Resolution: Duplicate

> RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time
> -
>
> Key: HADOOP-14828
> URL: https://issues.apache.org/jira/browse/HADOOP-14828
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>
> In RetryPolicies.java, RetryUpToMaximumTimeWithFixedSleep is converted to a 
> RetryUpToMaximumCountWithFixedSleep, whose count is the maxTime / sleepTime: 
> {noformat}public RetryUpToMaximumTimeWithFixedSleep(long maxTime, long 
> sleepTime,
> TimeUnit timeUnit) {
>   super((int) (maxTime / sleepTime), sleepTime, timeUnit);
>   this.maxTime = maxTime;
>   this.timeUnit = timeUnit;
> }
> {noformat}
> But if retries take a long time, then the maxTime passed to the 
> RetryUpToMaximumTimeWithFixedSleep is exceeded.
> As an example, while doing NM restarts, we saw an issue where the NMProxy 
> creates a retry policy which specifies a maximum wait time of 15 minutes and 
> a 10 sec interval (which is converted to a MaximumCount policy with 15 min / 
> 10 sec = 90 tries). But each NMProxy retry policy invokes o.a.h.ipc.Client's 
> retry policy: {noformat}  if (connectionRetryPolicy == null) {
> final int max = conf.getInt(
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
> 
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT);
> final int retryInterval = conf.getInt(
> 
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_KEY,
> CommonConfigurationKeysPublic
> .IPC_CLIENT_CONNECT_RETRY_INTERVAL_DEFAULT);
> connectionRetryPolicy = 
> RetryPolicies.retryUpToMaximumCountWithFixedSleep(
> max, retryInterval, TimeUnit.MILLISECONDS);
>   }{noformat}
> So the time it takes the NMProxy to fail is actually (90 retries) * (10 sec 
> NMProxy interval + o.a.h.ipc.Client retry time). In the default case, ipc 
> client retries 10 times with a 1 sec interval, meaning the time it takes for 
> NMProxy to fail is (90)(10 sec + 10 sec) = 30 min instead of the 15 min 
> specified by NMProxy configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14837) Handle S3A "glacier" data

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14837:

Priority: Minor  (was: Major)

> Handle S3A "glacier" data
> -
>
> Key: HADOOP-14837
> URL: https://issues.apache.org/jira/browse/HADOOP-14837
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Minor
>
> SPARK-21797 covers how if you have AWS S3 set to copy some files to glacier, 
> they appear in the listing but GETs fail, and so does everything else
> We should think about how best to handle this.
> # report better
> # if listings can identify files which are glaciated then maybe we could have 
> an option to filter them out
> # test & see what happens



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14837) Handle S3A "glacier" data

2017-09-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14837:
---

 Summary: Handle S3A "glacier" data
 Key: HADOOP-14837
 URL: https://issues.apache.org/jira/browse/HADOOP-14837
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0-beta1
Reporter: Steve Loughran


SPARK-21797 covers how, if you have AWS S3 set to copy some files to Glacier, 
they appear in the listing but GETs fail, and so does everything else.

We should think about how best to handle this.

# report this better
# if listings can identify which files are glaciated, maybe add an option to 
filter them out (see the sketch below)
# test & see what happens
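
A sketch of option 2 (an assumption; no such filter exists in S3A today), 
using the AWS SDK listing types S3A already depends on:

{code:java}
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.util.ArrayList;
import java.util.List;

public class GlacierFilterSketch {
  /**
   * Drop entries whose storage class is GLACIER, since GETs on them
   * fail until the object has been restored.
   */
  static List<S3ObjectSummary> filterReadable(ObjectListing listing) {
    List<S3ObjectSummary> readable = new ArrayList<>();
    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
      if (!"GLACIER".equals(summary.getStorageClass())) {
        readable.add(summary);
      }
    }
    return readable;
  }
}
{code}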




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14836) multiple versions of maven-clean-plugin in use

2017-09-05 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14836:
-

 Summary: multiple versions of maven-clean-plugin in use
 Key: HADOOP-14836
 URL: https://issues.apache.org/jira/browse/HADOOP-14836
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer


hadoop-yarn-ui re-declares maven-clean-plugin with 3.0 while the rest of the 
source tree uses 2.5.  This should get synced up.
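
One way to get it synced up (a sketch, not the committed change): pin a single 
version in {{hadoop-project/pom.xml}}'s pluginManagement and delete the 
re-declaration in hadoop-yarn-ui:

{code:xml}
<!-- hadoop-project/pom.xml: children inherit this version and should not
     re-declare their own. 2.5 is assumed here; 3.0 would work equally. -->
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-clean-plugin</artifactId>
      <version>2.5</version>
    </plugin>
  </plugins>
</pluginManagement>
{code}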



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14707) AbstractContractDistCpTest to test attr preservation with -p, verify blobstores downgrade

2017-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153894#comment-16153894
 ] 

Steve Loughran commented on HADOOP-14707:
-

Issue is that even if you don't set -p, you can't use distcp to upload the .raw 
stuff from an encrypted HDFS zone, as that force-sets the -preserve flag, which 
of course fails against: s3a, wasb, adl, ...



> AbstractContractDistCpTest to test attr preservation with -p, verify 
> blobstores downgrade
> -
>
> Key: HADOOP-14707
> URL: https://issues.apache.org/jira/browse/HADOOP-14707
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> It *may* be that trying to use {{distcp -p}} with S3a triggers a stack trace 
> {code}
> java.lang.UnsupportedOperationException: S3AFileSystem doesn't support 
> getXAttrs 
> at org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559) 
> at 
> org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
>  
> {code}
> Add a test to {{AbstractContractDistCpTest}} to verify that this is handled 
> better. What is "handle better" here? Either ignore the option or fail with 
> "don't do that" text



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-14707) AbstractContractDistCpTest to test attr preservation with -p, verify blobstores downgrade

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14707 stopped by Steve Loughran.
---
> AbstractContractDistCpTest to test attr preservation with -p, verify 
> blobstores downgrade
> -
>
> Key: HADOOP-14707
> URL: https://issues.apache.org/jira/browse/HADOOP-14707
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> It *may* be that trying to use {{distcp -p}} with S3a triggers a stack trace 
> {code}
> java.lang.UnsupportedOperationException: S3AFileSystem doesn't support 
> getXAttrs 
> at org.apache.hadoop.fs.FileSystem.getXAttrs(FileSystem.java:2559) 
> at 
> org.apache.hadoop.tools.util.DistCpUtils.toCopyListingFileStatus(DistCpUtils.java:322)
>  
> {code}
> Add a test to {{AbstractContractDistCpTest}} to verify that this is handled 
> better. What is "handle better" here? Either ignore the option or fail with 
> "don't do that" text



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14835:
-

 Summary: mvn site build throws SAX errors
 Key: HADOOP-14835
 URL: https://issues.apache.org/jira/browse/HADOOP-14835
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, site
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Priority: Critical




Running mvn  install site site:stage -DskipTests -Pdist,src -Preleasedocs,docs 
results in a stack trace when run on a fresh .m2 directory.  It appears to be 
coming from the jdiff doclets in the annotations code.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors

2017-09-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153890#comment-16153890
 ] 

Allen Wittenauer commented on HADOOP-14835:
---

Stack trace:

{code}
 [javadoc] Constructing Javadoc information...
  [javadoc] IncludePublicAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] org.xml.sax.SAXException: SAX2 driver class 
org.apache.xerces.parsers.SAXParser not found
  [javadoc] JDiff: reading the old API in from file 
'/Users/aw/shared-vmware/hadoop/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.8.0.xml'...SAXException:
 org.xml.sax.SAXException: SAX2 driver class 
org.apache.xerces.parsers.SAXParser not found
  [javadoc] java.lang.ClassNotFoundException: 
org.apache.xerces.parsers.SAXParser
  [javadoc] java.lang.ClassNotFoundException: 
org.apache.xerces.parsers.SAXParser
  [javadoc] at 
org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:230)
  [javadoc] at 
org.xml.sax.helpers.XMLReaderFactory.createXMLReader(XMLReaderFactory.java:221)
  [javadoc] at jdiff.XMLToAPI.readFile(XMLToAPI.java:51)
  [javadoc] at jdiff.JDiff.startGeneration(JDiff.java:83)
  [javadoc] at jdiff.JDiff.start(JDiff.java:29)
  [javadoc] at 
org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet.start(IncludePublicAnnotationsJDiffDoclet.java:47)
  [javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  [javadoc] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  [javadoc] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  [javadoc] at java.lang.reflect.Method.invoke(Method.java:498)
  [javadoc] at 
com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
  [javadoc] at 
com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
  [javadoc] at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
  [javadoc] at com.sun.tools.javadoc.Start.begin(Start.java:219)
  [javadoc] at com.sun.tools.javadoc.Start.begin(Start.java:205)
  [javadoc] at com.sun.tools.javadoc.Main.execute(Main.java:64)
  [javadoc] at com.sun.tools.javadoc.Main.main(Main.java:54)
  [javadoc] Caused by: java.lang.ClassNotFoundException: 
org.apache.xerces.parsers.SAXParser
  [javadoc] at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  [javadoc] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  [javadoc] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  [javadoc] at 
org.xml.sax.helpers.NewInstance.newInstance(NewInstance.java:82)
  [javadoc] at 
org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:228)
  [javadoc] ... 16 more
{code}
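
The trace boils down to {{org.apache.xerces.parsers.SAXParser}} being absent 
from the doclet's classpath.  One plausible remedy, an assumption rather than 
a confirmed fix, is to put the SAX2 driver jdiff asks for onto that classpath:

{code:xml}
<!-- Sketch: supply the SAX2 driver that jdiff's XMLToAPI.readFile() looks
     up via XMLReaderFactory. Where exactly this dependency belongs
     (javadoc plugin vs. annotations module) is part of the investigation. -->
<dependency>
  <groupId>xerces</groupId>
  <artifactId>xercesImpl</artifactId>
  <version>2.11.0</version>
</dependency>
{code}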

> mvn site build throws SAX errors
> 
>
> Key: HADOOP-14835
> URL: https://issues.apache.org/jira/browse/HADOOP-14835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Priority: Critical
>
> Running mvn  install site site:stage -DskipTests -Pdist,src 
> -Preleasedocs,docs results in a stack trace when run on a fresh .m2 
> directory.  It appears to be coming from the jdiff doclets in the annotations 
> code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14827) Allow StopWatch to accept a Timer parameter for tests

2017-09-05 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153864#comment-16153864
 ] 

Erik Krogen edited comment on HADOOP-14827 at 9/5/17 3:44 PM:
--

TestRaceWhenRelogin tracked at HADOOP-14078, TestLambdaTestUtils tracked at 
HADOOP-13882, TestDNS tracked at HADOOP-13101, TestKDiag tracked at 
HADOOP-14030. TestZKFailoverController and TestShellBasedUnixGroupsMapping I 
was unable to reproduce locally and do not look related.


was (Author: xkrogen):
TestRaceWhenRelogin tracked at HADOOP-14078, TestLambdaTestUtils tracked at 
HADOOP-13882, TestDNS tracked at HADOOP-13101, TestKDiag tracked at 
HADOOP-14030. TestZKFailoverController and TestShellBasedUnixGroupsMapping I 
was unable to reproduce locally.

> Allow StopWatch to accept a Timer parameter for tests
> -
>
> Key: HADOOP-14827
> URL: https://issues.apache.org/jira/browse/HADOOP-14827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-14827.000.patch
>
>
> {{StopWatch}} should optionally accept a {{Timer}} parameter rather than 
> directly using {{Time}} so that its behavior can be controlled during tests.
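
A sketch of the proposed shape; the constructor signature is an assumption, 
though {{org.apache.hadoop.util.Timer}} and its {{monotonicNow()}} are 
existing hadoop-common utilities:

{code:java}
import org.apache.hadoop.util.Timer;

public class StopWatchSketch {
  private final Timer timer;   // real clock in production, FakeTimer in tests
  private long startMillis;

  public StopWatchSketch() {
    this(new Timer());         // default keeps current behaviour
  }

  public StopWatchSketch(Timer timer) {
    this.timer = timer;        // tests inject a controllable clock
  }

  public StopWatchSketch start() {
    startMillis = timer.monotonicNow();
    return this;
  }

  /** Elapsed milliseconds since start(), as seen by the injected Timer. */
  public long now() {
    return timer.monotonicNow() - startMillis;
  }
}
{code}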



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14827) Allow StopWatch to accept a Timer parameter for tests

2017-09-05 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153864#comment-16153864
 ] 

Erik Krogen commented on HADOOP-14827:
--

TestRaceWhenRelogin tracked at HADOOP-14078, TestLambdaTestUtils tracked at 
HADOOP-13882, TestDNS tracked at HADOOP-13101, TestKDiag tracked at 
HADOOP-14030. TestZKFailoverController and TestShellBasedUnixGroupsMapping I 
was unable to reproduce locally.

> Allow StopWatch to accept a Timer parameter for tests
> -
>
> Key: HADOOP-14827
> URL: https://issues.apache.org/jira/browse/HADOOP-14827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HADOOP-14827.000.patch
>
>
> {{StopWatch}} should optionally accept a {{Timer}} parameter rather than 
> directly using {{Time}} so that its behavior can be controlled during tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14828) RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time

2017-09-05 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153829#comment-16153829
 ] 

Jason Lowe commented on HADOOP-14828:
-

This looks like a duplicate of HADOOP-11398.

> RetryUpToMaximumTimeWithFixedSleep is not bounded by maximum time
> -
>
> Key: HADOOP-14828
> URL: https://issues.apache.org/jira/browse/HADOOP-14828
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>
> In RetryPolicies.java, RetryUpToMaximumTimeWithFixedSleep is converted to a 
> RetryUpToMaximumCountWithFixedSleep, whose count is the maxTime / sleepTime: 
> {noformat}public RetryUpToMaximumTimeWithFixedSleep(long maxTime, long 
> sleepTime,
> TimeUnit timeUnit) {
>   super((int) (maxTime / sleepTime), sleepTime, timeUnit);
>   this.maxTime = maxTime;
>   this.timeUnit = timeUnit;
> }
> {noformat}
> But if retries take a long time, then the maxTime passed to the 
> RetryUpToMaximumTimeWithFixedSleep is exceeded.
> As an example, while doing NM restarts, we saw an issue where the NMProxy 
> creates a retry policy which specifies a maximum wait time of 15 minutes and 
> a 10 sec interval (which is converted to a MaximumCount policy with 15 min / 
> 10 sec = 90 tries). But each NMProxy retry policy invokes o.a.h.ipc.Client's 
> retry policy: {noformat}  if (connectionRetryPolicy == null) {
> final int max = conf.getInt(
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
> 
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_DEFAULT);
> final int retryInterval = conf.getInt(
> 
> CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_RETRY_INTERVAL_KEY,
> CommonConfigurationKeysPublic
> .IPC_CLIENT_CONNECT_RETRY_INTERVAL_DEFAULT);
> connectionRetryPolicy = 
> RetryPolicies.retryUpToMaximumCountWithFixedSleep(
> max, retryInterval, TimeUnit.MILLISECONDS);
>   }{noformat}
> So the time it takes the NMProxy to fail is actually (90 retries) * (10 sec 
> NMProxy interval + o.a.h.ipc.Client retry time). In the default case, ipc 
> client retries 10 times with a 1 sec interval, meaning the time it takes for 
> NMProxy to fail is (90)(10 sec + 10 sec) = 30 min instead of the 15 min 
> specified by NMProxy configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14103) Sort out hadoop-aws contract-test-options.xml

2017-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153710#comment-16153710
 ] 

Steve Loughran commented on HADOOP-14103:
-

LGTM

+1

thanks for doing this bit of housekeeping

> Sort out hadoop-aws contract-test-options.xml
> -
>
> Key: HADOOP-14103
> URL: https://issues.apache.org/jira/browse/HADOOP-14103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14103.001.patch, HADOOP-14103.002.patch, 
> HADOOP-14103.003.patch, HADOOP-14103.004.patch
>
>
> The doc update of HADOOP-14099 has shown that there's confusion about whether 
> we need a src/test/resources/contract-test-options.xml file.
> It's documented as needed; branch-2 has it in .gitignore, trunk doesn't.
> I think it's needed for the contract tests, which the S3A test base extends 
> (and therefore needs). However, we can just put in an SCM-managed one and 
> have it just XInclude auth-keys.xml.
> I propose: do that, and fix up the testing docs to match.
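
The SCM-managed file could be as small as this sketch (the usual 
XInclude-with-fallback pattern; treat the exact contents as an assumption 
until the patch lands):

{code:xml}
<configuration>
  <!-- Pulls in the developer's git-ignored credentials when present. -->
  <include xmlns="http://www.w3.org/2001/XInclude" href="auth-keys.xml">
    <fallback/>
  </include>
</configuration>
{code}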



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153702#comment-16153702
 ] 

Steve Loughran commented on HADOOP-14439:
-

Oh, and I'm thinking of only doing this for branch-2, not trunk. Trunk I'd like 
to pull support for this entirely...it's just too leaky

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the returned path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.
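
A minimal demonstration of that mismatch with plain {{java.net.URI}}, outside 
the S3A code:

{code:java}
import java.net.URI;

public class SecretStrippingSketch {
  public static void main(String[] args) throws Exception {
    URI withSecret = new URI("s3a://key:secret@bucket/path");
    // Rebuild the URI without the userinfo, as the stripping does.
    URI stripped = new URI(withSecret.getScheme(), null,
        withSecret.getHost(), withSecret.getPort(),
        withSecret.getPath(), null, null);
    System.out.println(stripped);                     // s3a://bucket/path
    System.out.println(withSecret.equals(stripped));  // false: lookups keyed
                                                      // on the full URI miss
  }
}
{code}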



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14439:

Release Note: We have restored the retention of a username:secret in an 
s3a: URL if you authenticate with S3 using embedded URI secrets. This is 
because some applications using strings to marshall the URLs were breaking. 
However, using secrets in this way is dangerous as it will end up in logs. It 
will be unsupported in Hadoop 3. To use different credentials in different 
buckets, move to per-bucket configuration

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the returned path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-09-05 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153695#comment-16153695
 ] 

Steve Loughran commented on HADOOP-14439:
-

OK, so this is reverting the URI fixup. LGTM, though we need to call it out in 
the release notes.

One thing I'm thinking of here: should we patch s3n to tell off users too?  
All we'd need is to call {{extractLoginDetailsWithWarnings()}} in the S3N init 
code?

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Assignee: Vinayakumar B
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the returned path value to look up the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153693#comment-16153693
 ] 

Hadoop QA commented on HADOOP-14774:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-14774 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885386/HADOOP-14774.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13165/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
> Attachments: failsafe-report.html, HADOOP-14774.001.patch, 
> HADOOP-14774.002.patch
>
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] 
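
The wire log above shows the off-by-one: HTTP Range ends are inclusive, so 
{{Range: bytes=0-8192}} asks for 8193 bytes, matching the 
expected:<8192> but was:<8193> assertion.  A sketch of the arithmetic (where 
the fix lands in S3A is for the patch to decide):

{code:java}
public class RangeHeaderSketch {
  public static void main(String[] args) {
    long start = 0, bytesWanted = 8192;
    // Inclusive end: bytes=0-8192 covers 8193 bytes.
    String offByOne = "Range: bytes=" + start + "-" + (start + bytesWanted);
    // Correct: request start..start+N-1 to read exactly N bytes.
    String correct = "Range: bytes=" + start + "-" + (start + bytesWanted - 1);
    System.out.println(offByOne + " -> " + (bytesWanted + 1) + " bytes");
    System.out.println(correct + " -> " + bytesWanted + " bytes");
  }
}
{code}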

[jira] [Updated] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-09-05 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14774:

Attachment: HADOOP-14774.002.patch

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
> Attachments: failsafe-report.html, HADOOP-14774.001.patch, 
> HADOOP-14774.002.patch
>
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> GMT[\r][\n]"
> 2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
> 2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "x-amz-request-id: 
> tx0001e-005992b67e-27a45-default[\r][\n]"
> 2017-08-15 16:53:16,481 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Type: 
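> // Note: the wire log above is consistent with an inclusive-range off-by-one.
> // Per RFC 7233, "Range: bytes=0-8192" names 8193 bytes, so a store that
> // honours the request exactly replies with "Content-Length: 8193".
> // Minimal sketch of the fix, assuming the request path derives the header
> // from an exclusive end offset:
> long start = 0;
> long endExclusive = 8192;  // we want bytes [0, 8192), i.e. 8192 bytes
> String offByOne = "bytes=" + start + "-" + endExclusive;        // asks for 8193 bytes
> String correct  = "bytes=" + start + "-" + (endExclusive - 1);  // asks for 8192 bytes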

[jira] [Commented] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153669#comment-16153669
 ] 

Hadoop QA commented on HADOOP-14774:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-14774 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884069/HADOOP-14774.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13164/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
> Attachments: failsafe-report.html, HADOOP-14774.001.patch
>
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> 

[jira] [Updated] (HADOOP-14820) Wasb mkdirs security checks inconsistent with HDFS

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14820:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

tested locally & applied to branch-2 & trunk

+1

thanks!

> Wasb mkdirs security checks inconsistent with HDFS
> --
>
> Key: HADOOP-14820
> URL: https://issues.apache.org/jira/browse/HADOOP-14820
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14820.001.patch, HADOOP-14820.002.patch, 
> HADOOP-14820.003.patch, HADOOP-14820.004.patch, HADOOP-14820.005.patch, 
> HADOOP-14820-006.patch, HADOOP-14820-007.patch, 
> HADOOP-14820-branch-2-001.patch.txt
>
>
> No authorization checks should be made when a user tries to create (mkdirs 
> -p) an existing folder hierarchy.
> For example, if we start with _/home/hdiuser/prefix_ pre-created, and do the 
> following operations, the results should be as shown below.
> {noformat}
> hdiuser@hn0-0d2f67:~$ sudo chown root:root prefix
> hdiuser@hn0-0d2f67:~$ sudo chmod 555 prefix
> hdiuser@hn0-0d2f67:~$ ls -l
> dr-xr-xr-x 3 rootroot  4096 Aug 29 08:25 prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix
> hdiuser@hn0-0d2f67:~$ mkdir -p /home/hdiuser/prefix/1
> mkdir: cannot create directory '/home/hdiuser/prefix/1': Permission denied
> {noformat}
> The first three mkdirs succeed, because the ancestor is already present. The 
> fourth one fails because of a permission check against the (shorter) ancestor 
> (as compared to the path being created).
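> A sketch of the intended semantics, assuming hypothetical helpers 
> ({{pathExists}}, {{checkWriteAccess}}) rather than the actual WASB authorizer 
> API:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.fs.Path;
> 
> // Sketch only: authorize mkdirs -p the way POSIX does. pathExists() and
> // checkWriteAccess() are illustrative stand-ins, not WASB methods.
> void authorizeMkdirs(Path path) throws IOException {
>   if (pathExists(path)) {
>     return;                            // hierarchy exists: nothing to authorize
>   }
>   Path ancestor = path.getParent();
>   while (!pathExists(ancestor)) {
>     ancestor = ancestor.getParent();   // walk up to the deepest existing dir
>   }
>   checkWriteAccess(ancestor);          // only this directory gains a new child
> }
> {code}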



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-09-05 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14774:

Attachment: failsafe-report.html

Tested against a Ceph object store over s3a.

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
> Attachments: failsafe-report.html, HADOOP-14774.001.patch
>
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> GMT[\r][\n]"
> 2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
> 2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "x-amz-request-id: 
> tx0001e-005992b67e-27a45-default[\r][\n]"
> 2017-08-15 16:53:16,481 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << 

[jira] [Commented] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153515#comment-16153515
 ] 

Hadoop QA commented on HADOOP-14774:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-14774 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884069/HADOOP-14774.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13163/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
> Attachments: HADOOP-14774.001.patch
>
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << 

[jira] [Updated] (HADOOP-14603) S3A input stream to support ByteBufferReadable

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14603:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> S3A input stream to support ByteBufferReadable
> --
>
> Key: HADOOP-14603
> URL: https://issues.apache.org/jira/browse/HADOOP-14603
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> S3AInputStream should support {{ByteBufferReadable, 
> HasEnhancedByteBufferAccess}} and the operations to read into byte buffers.
> This should only be done if we can see a clear performance benefit from it, or 
> if the API is being more broadly used.
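> A rough sketch of the shape such support could take, assuming the existing 
> byte[] read path is delegated to (this is not an actual patch):
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import org.apache.hadoop.fs.ByteBufferReadable;
> 
> // Sketch only: adapt the existing byte[] read path to ByteBuffer reads.
> class ByteBufferReadSketch implements ByteBufferReadable {
>   @Override
>   public int read(ByteBuffer buf) throws IOException {
>     byte[] tmp = new byte[buf.remaining()];
>     int n = readBytes(tmp, 0, tmp.length);  // stand-in for the stream's read()
>     if (n > 0) {
>       buf.put(tmp, 0, n);
>     }
>     return n;
>   }
> 
>   private int readBytes(byte[] b, int off, int len) throws IOException {
>     return -1;  // placeholder for the real S3AInputStream read path
>   }
> }
> {code}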



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14736) S3AInputStream to implement an efficient skip() call through seeking

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-14736:
-

> S3AInputStream to implement an efficient skip() call through seeking
> 
>
> Key: HADOOP-14736
> URL: https://issues.apache.org/jira/browse/HADOOP-14736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AInputStream}} implements skip() naively through the base class: reading 
> and discarding all data. That is efficient on classic "sequential" reads, 
> provided the forward skip is <1MB. For larger skip values or on random IO, 
> seek() should be used.
> After some range checks (handling past-EOF skips by seeking to EOF-1), a 
> seek() should handle the skip fine.
> *there are no FS contract tests for skip semantics*
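> A hedged sketch of the proposal, assuming {{getPos()}}, {{seek()}} and a known 
> {{contentLength}} from the surrounding stream:
> {code:java}
> // Sketch only: forward skip() via seek(), clamping past-EOF skips to EOF-1.
> @Override
> public long skip(long n) throws IOException {
>   if (n <= 0) {
>     return 0;                                  // no backward skip
>   }
>   long pos = getPos();
>   long target = Math.min(pos + n, contentLength - 1);
>   seek(target);                                // lazy seek, nothing discarded
>   return target - pos;
> }
> {code}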



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14606) S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward seek

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14606:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward 
> seek
> ---
>
> Key: HADOOP-14606
> URL: https://issues.apache.org/jira/browse/HADOOP-14606
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>
> There are some hints in the InputStream docs that {{skip(n)}} may skip < n 
> bytes. Codepaths only seem to do this if read() returns -1, meaning end of 
> stream is reached.
> If that happens when doing a forward seek via skip, then we have got our 
> numbers wrong and are in trouble. Look for a negative response, log @ ERROR 
> and revert to a close/reopen seek to an absolute position.
> *I have no evidence of this actually occurring*
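> For illustration, a defensive loop of the kind described, where 
> {{wrappedStream}} is the underlying HTTP stream and {{reopen()}} is a 
> hypothetical close/reopen at an absolute position:
> {code:java}
> // Sketch only: tolerate skip(n) returning < n during a forward seek.
> long remaining = bytesToSkip;
> while (remaining > 0) {
>   long skipped = wrappedStream.skip(remaining);
>   if (skipped <= 0) {
>     LOG.error("skip() stalled with {} bytes left; reopening at {}",
>         remaining, targetPos);
>     reopen(targetPos);
>     break;
>   }
>   remaining -= skipped;
> }
> {code}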



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14291) S3a "Bad Request" message to include diagnostics

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14291:

Parent Issue: HADOOP-14531  (was: HADOOP-13204)

> S3a "Bad Request" message to include diagnostics
> 
>
> Key: HADOOP-14291
> URL: https://issues.apache.org/jira/browse/HADOOP-14291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There's a whole section in s3a troubleshooting because requests can get auth 
> failures for many reasons, including
> * no credentials
> * wrong credentials
> * right credentials, wrong bucket
> * wrong endpoint for v4 auth
> * trying to use private S3 server without specifying endpoint, so AWS being 
> hit
> * clock out of sync
> * Joda Time version issues
> 
> We can aid with debugging this by including as much as we can in the 
> message, plus a URL to a new S3A bad-auth wiki page.
> Info we could include
> * bucket
> * fs.s3a.endpoint
> * nslookup of endpoint
> * Anything else relevant but not a security risk
> Goal: people stand a chance of working out what is failing within a bounded 
> time period.
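> As a sketch of the kind of message this could produce (field names and the 
> wiki URL are placeholders, not an eventual patch):
> {code:java}
> // Sketch only: wrap the SDK failure with request context before rethrowing.
> String diagnostics = String.format(
>     "Bad request on bucket %s (fs.s3a.endpoint=%s): %s",
>     bucket,
>     conf.getTrimmed("fs.s3a.endpoint", "(unset)"),
>     awsException.getMessage());
> throw new IOException(diagnostics
>     + " -- see the S3A bad-auth wiki page (URL TBD)", awsException);
> {code}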



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14291) S3a "Bad Request" message to include diagnostics

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14291:

Target Version/s:   (was: 2.9.0)

> S3a "Bad Request" message to include diagnostics
> 
>
> Key: HADOOP-14291
> URL: https://issues.apache.org/jira/browse/HADOOP-14291
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> There's a whole section in s3a troubleshooting because requests can get auth 
> failures for many reasons, including
> * no credentials
> * wrong credentials
> * right credentials, wrong bucket
> * wrong endpoint for v4 auth
> * trying to use private S3 server without specifying endpoint, so AWS being 
> hit
> * clock out of sync
> * Joda Time version issues
> 
> We can aid with debugging this by including as much as we can in the 
> message, plus a URL to a new S3A bad-auth wiki page.
> Info we could include
> * bucket
> * fs.s3a.endpoint
> * nslookup of endpoint
> * Anything else relevant but not a security risk
> Goal: people stand a chance of working out what is failing within a bounded 
> time period.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13507) export s3a BlockingThreadPoolExecutorService pool info (size, load) as metrics

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13507:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> export s3a BlockingThreadPoolExecutorService pool info (size, load) as metrics
> --
>
> Key: HADOOP-13507
> URL: https://issues.apache.org/jira/browse/HADOOP-13507
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> We should publish load info on {{BlockingThreadPoolExecutorService}} as s3a 
> metrics: size, available, maybe even some timer info on load (at least: rate 
> of recent semaphore acquire/release)
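> One way this could be wired up, sketched against the metrics2 API (gauge names 
> and the executor accessors are illustrative, not an actual design):
> {code:java}
> import org.apache.hadoop.metrics2.lib.MetricsRegistry;
> import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
> 
> // Sketch only: publish pool state as gauges; accessor names are hypothetical.
> MetricsRegistry registry = new MetricsRegistry("S3AThreadPool");
> MutableGaugeInt poolSize =
>     registry.newGauge("threadPoolSize", "configured pool size", 0);
> MutableGaugeInt available =
>     registry.newGauge("threadPoolAvailable", "semaphore permits available", 0);
> 
> poolSize.set(executor.getMaximumPoolSize());    // hypothetical accessor
> available.set(executor.getAvailablePermits());  // hypothetical accessor
> {code}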



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13371) S3A globber to use bulk listObject call over recursive directory scan

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13371:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> S3A globber to use bulk listObject call over recursive directory scan
> -
>
> Key: HADOOP-13371
> URL: https://issues.apache.org/jira/browse/HADOOP-13371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-13208 produces O(1) listing of directory trees in 
> {{FileSystem.listStatus}} calls, but doesn't do anything for 
> {{FileSystem.globStatus()}}, which uses a completely different codepath, one 
> which does a selective recursive scan by pattern matching as it goes down, 
> filtering out those patterns which don't match. Cost is 
> O(matching-directories) + cost of examining the files.
> It should be possible to do the glob status listing in S3A not through the 
> filtered treewalk, but through a list + filter operation. This would be an 
> O(files) lookup *before any filtering took place*.
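> A sketch of the list-then-filter approach, using hadoop-common's 
> {{GlobPattern}} matcher (the example pattern is illustrative):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.GlobPattern;
> import org.apache.hadoop.fs.LocatedFileStatus;
> import org.apache.hadoop.fs.RemoteIterator;
> 
> // Sketch only: one bulk recursive listing, then client-side filtering.
> RemoteIterator<LocatedFileStatus> it = fs.listFiles(rootPath, true);
> GlobPattern pattern = new GlobPattern("/data/2017-*/part-*");
> List<FileStatus> matches = new ArrayList<>();
> while (it.hasNext()) {
>   LocatedFileStatus status = it.next();
>   if (pattern.matches(status.getPath().toUri().getPath())) {
>     matches.add(status);
>   }
> }
> {code}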



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13330) s3a large subdir delete to do listObjects() and delete() calls in parallel

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13330:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> s3a large subdir delete to do listObjects() and delete() calls in parallel
> --
>
> Key: HADOOP-13330
> URL: https://issues.apache.org/jira/browse/HADOOP-13330
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> When doing deletes of large directories/directory trees, paged list+delete 
> calls will be made. It would be faster if the delete and list calls were done 
> in parallel; the next listing batch obtained while the current batch was 
> being deleted.
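> As a sketch, with {{listNextBatch()}}/{{deleteBatch()}} standing in for the 
> paged listObjects() and bulk delete calls (checked exceptions elided):
> {code:java}
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> 
> // Sketch only: prefetch the next listing page while this batch is deleted.
> ExecutorService pool = Executors.newSingleThreadExecutor();
> List<String> batch = listNextBatch();
> while (!batch.isEmpty()) {
>   Future<List<String>> next = pool.submit(() -> listNextBatch());
>   deleteBatch(batch);      // bulk delete of the current page
>   batch = next.get();      // the following page was listed concurrently
> }
> pool.shutdown();
> {code}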



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13371) S3A globber to use bulk listObject call over recursive directory scan

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13371:

Target Version/s:   (was: 2.9.0)

> S3A globber to use bulk listObject call over recursive directory scan
> -
>
> Key: HADOOP-13371
> URL: https://issues.apache.org/jira/browse/HADOOP-13371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-13208 produces O(1) listing of directory trees in 
> {{FileSystem.listStatus}} calls, but doesn't do anything for 
> {{FileSystem.globStatus()}}, which uses a completely different codepath, one 
> which does a selective recursive scan by pattern matching as it goes down, 
> filtering out those patterns which don't match. Cost is 
> O(matching-directories) + cost of examining the files.
> It should be possible to do the glob status listing in S3A not through the 
> filtered treewalk, but through a list + filter operation. This would be an 
> O(files) lookup *before any filtering took place*.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14774) S3A case "testRandomReadOverBuffer" failed due to improper range parameter

2017-09-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14774:

Status: Patch Available  (was: Open)

have you tested this locally? What endpoint/s3 implementation was it?

> S3A case "testRandomReadOverBuffer" failed due to improper range parameter
> --
>
> Key: HADOOP-14774
> URL: https://issues.apache.org/jira/browse/HADOOP-14774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Hadoop 2.8.0  
> s3-compatible storage 
>Reporter: Yonger
>Assignee: Yonger
>Priority: Minor
> Attachments: HADOOP-14774.001.patch
>
>
> {code:java}
> Running org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.821 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance
> testRandomReadOverBuffer(org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance)
>   Time elapsed: 2.605 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<8192> but was:<8193>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3AInputStreamPerformance.testRandomReadOverBuffer(ITestS3AInputStreamPerformance.java:533)
> {code}
> From the log, the content length exceeds what we expect:
> {code:java}
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(280)) - >> GET 
> /test-aws-s3a/test/testReadOverBuffer.bin HTTP/1.1
> 2017-08-15 16:53:16,464 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Host: 10.0.2.254
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> 
> x-amz-content-sha256: 
> e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> 2017-08-15 16:53:16,465 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Authorization: 
> AWS4-HMAC-SHA256 
> Credential=JFDAM9KF9IY8S5P0JIV6/20170815/us-east-1/s3/aws4_request, 
> SignedHeaders=content-type;host;range;user-agent;x-amz-content-sha256;x-amz-date,
>  Signature=42bce4a43d2b1bf6e6d599613c60812e6716514da4ef5b3839ef0566c31279ee
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> X-Amz-Date: 
> 20170815T085316Z
> 2017-08-15 16:53:16,466 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> User-Agent: Hadoop 
> 2.8.0, aws-sdk-java/1.10.6 Linux/3.10.0-514.21.2.el7.x86_64 
> Java_HotSpot(TM)_64-Bit_Server_VM/25.131-b11/1.8.0_131
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Range: bytes=0-8192
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Content-Type: 
> application/x-www-form-urlencoded; charset=utf-8
> 2017-08-15 16:53:16,467 [JUnit-testRandomReadOverBuffer] DEBUG http.headers 
> (DefaultClientConnection.java:sendRequestHeader(283)) - >> Connection: 
> Keep-Alive
> 2017-08-15 16:53:16,473 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "HTTP/1.1 206 Partial Content[\r][\n]"
> 2017-08-15 16:53:16,475 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Range: bytes 0-8192/32768[\r][\n]"
> 2017-08-15 16:53:16,476 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Content-Length: 8193[\r][\n]"
> 2017-08-15 16:53:16,477 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Accept-Ranges: bytes[\r][\n]"
> 2017-08-15 16:53:16,478 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "Last-Modified: Tue, 15 Aug 2017 08:51:39 
> GMT[\r][\n]"
> 2017-08-15 16:53:16,479 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "ETag: "e7191764798ba504d6671d4c434d2f4d"[\r][\n]"
> 2017-08-15 16:53:16,480 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> (Wire.java:wire(72)) -  << "x-amz-request-id: 
> tx0001e-005992b67e-27a45-default[\r][\n]"
> 2017-08-15 16:53:16,481 [JUnit-testRandomReadOverBuffer] DEBUG http.wire 
> 

[jira] [Created] (HADOOP-14834) Make default output stream of S3a the block output stream

2017-09-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14834:
---

 Summary: Make default output stream of S3a the block output stream
 Key: HADOOP-14834
 URL: https://issues.apache.org/jira/browse/HADOOP-14834
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0-beta1
Reporter: Steve Loughran
Priority: Minor


The S3A block output stream is working well, and is much better than the original 
stream in terms of scale, performance, instrumentation and robustness.

Proposed: switch it to be the default, as a precursor to removing the original 
stream later (HADOOP-14746).
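
Until the default changes, the block stream can already be selected explicitly; a 
minimal sketch using the existing 2.8+ properties:
{code:java}
import org.apache.hadoop.conf.Configuration;

// Opt in to the block output stream ahead of the default flip.
Configuration conf = new Configuration();
conf.setBoolean("fs.s3a.fast.upload", true);      // enable the block stream
conf.set("fs.s3a.fast.upload.buffer", "disk");    // disk | array | bytebuffer
{code}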



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


