[jira] [Updated] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-20 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13715:

Attachment: HADOOP-13715.02.patch

Thanks for the review [~ste...@apache.org]. Please take a look at the attached 
v02 patch.

bq. FileStatus.toString() needs to include the EC status.
Included the erasure coded details in FileStatus.toString().

bq. There's enough assertFalse(fs.getFileStatus(dir).isErasureCoded()) and 
assertTrue that they could be pulled out into a method with better diags
Created ContractTestUtils#assertErasureCoded and #assertNotErasureCoded and 
made the tests invoke these helpers.

bq. The filesystem specification doesn't have any coverage of erasure coding or 
this bit...
Updated {{filesystem.md}} to include erasure coding details in the 
{{getFileStatus(p)}} specification, following the encryption model in extending 
the result structure, plus a short description of the new bit. Any suggestions 
on how to further improve this spec are most welcome. 
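
For reviewers without the patch open, a minimal sketch of what those helpers 
could look like (signatures are illustrative; the v02 patch is authoritative, 
and JUnit's assertTrue/assertFalse are assumed to be statically imported):
{code:java}
// Illustrative only -- the real helpers live in ContractTestUtils in the patch.
public static void assertErasureCoded(FileSystem fs, Path path)
    throws IOException {
  FileStatus status = fs.getFileStatus(path);
  // Including the status in the message gives the better diagnostics asked
  // for above; FileStatus.toString() now prints the erasure-coded flag too.
  assertTrue(path + " should be erasure coded: " + status,
      status.isErasureCoded());
}

public static void assertNotErasureCoded(FileSystem fs, Path path)
    throws IOException {
  FileStatus status = fs.getFileStatus(path);
  assertFalse(path + " should not be erasure coded: " + status,
      status.isErasureCoded());
}
{code}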

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108],
> I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell whether they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include distcp and 
> teragen/terasort.
> Downstream applications such as Flume or HBase may also benefit from it.
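
As a concrete illustration of the intended use, a hypothetical caller could 
branch on the new bit like this (only {{FileStatus#isErasureCoded()}} comes 
from this patch; the rest is made-up caller code):
{code:java}
FileStatus status = fs.getFileStatus(path);
if (status.isErasureCoded()) {
  // e.g. a copy tool could skip preserving the replication factor,
  // which is not meaningful for erasure-coded files.
} else {
  // replicated file: existing behaviour applies
}
{code}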



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-20 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HADOOP-14195:
-
Attachment: HADOOP-14195.001.patch

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, TestCredentialProvider.java
>
>
> {{CredentialProviderFactory}} is not safe for multi-threaded access because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Javadoc). 
> Thanks to [~jzhuge], I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread() + ": no CredentialProviderFactory for " + uri);
>       } else {
>         System.out.println(Thread.currentThread().getName() + " found credentialProvider for " + path);
>       }
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
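
For readers without the attached TestCredentialProvider.java: the snippet 
above relies on some surrounding harness state ({{executor}}, {{futures}}, 
{{serviceLoader}}, {{uri}}, {{conf}}, {{result}}, {{ITEMS}}, {{path}}). A 
minimal sketch of such a harness, with assumed names and sizes rather than the 
attached test's actual code:
{code:java}
// Assumed harness, for illustration only; see TestCredentialProvider.java
// for the real reproducer.
final int ITEMS = 100;
final URI uri = URI.create("jceks://file/tmp/test.jceks");
final String path = uri.toString();
final Configuration conf = new Configuration();
final ServiceLoader<CredentialProviderFactory> serviceLoader =
    ServiceLoader.load(CredentialProviderFactory.class);
final List<CredentialProvider> result =
    Collections.synchronizedList(new ArrayList<CredentialProvider>());
ExecutorService executor = Executors.newFixedThreadPool(16);
List<Future<Void>> futures = new ArrayList<>();

// ... submit the Callables shown above, then wait for them:
for (Future<Void> f : futures) {
  f.get();  // the races surface here as ExecutionException
}
executor.shutdown();
{code}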



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-20 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HADOOP-14195:
-
Status: Patch Available  (was: Open)

Attaching a patch which synchronizes access to {{serviceLoader}} in 
{{CredentialProviderFactory}}.
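
In outline, the change is of this shape (a minimal sketch, assuming the 
factory iterates a shared static {{ServiceLoader}}; the attached patch is 
authoritative):
{code:java}
// Sketch only -- not the literal patch.
private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
    ServiceLoader.load(CredentialProviderFactory.class);

public static List<CredentialProvider> getProviders(URI uri, Configuration conf)
    throws IOException {
  List<CredentialProvider> providers = new ArrayList<>();
  // ServiceLoader's lazy iterator mutates shared parser state, so funnel
  // all iteration through a single lock.
  synchronized (serviceLoader) {
    for (CredentialProviderFactory factory : serviceLoader) {
      CredentialProvider provider = factory.createProvider(uri, conf);
      if (provider != null) {
        providers.add(provider);
      }
    }
  }
  return providers;
}
{code}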

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, TestCredentialProvider.java
>
>
> {{CredentialProviderFactory}} is not safe for multi-threaded access because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Javadoc). 
> Thanks to [~jzhuge], I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread() + ": no CredentialProviderFactory for " + uri);
>       } else {
>         System.out.println(Thread.currentThread().getName() + " found credentialProvider for " + path);
>       }
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933569#comment-15933569
 ] 

Hadoop QA commented on HADOOP-14202:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 new + 100 unchanged - 0 fixed = 
106 total (was 100) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
48s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hadoop-mapreduce-project in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed TAP tests | hadoop_verify_user_resolves.bats.tap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14202 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859628/HADOOP-14202.00.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 71a654949d9e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6c399a8 |
| shellcheck | v0.4.5 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11859/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| TAP logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/11859/artifact/patchprocess/patch-hadoop-common-project_hadoop-common.tap
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11859/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11859/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn 
hadoop-mapreduce-project U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11859/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14202.00.patch
>
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the 

[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-03-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933777#comment-15933777
 ] 

Mingliang Liu commented on HADOOP-13345:


{code}
$ mvn -Dit.test='ITestS3A*, ITestS3Guard*' -Dtest=none -Dscale -Ds3guard -Ddynamo -q clean verify

Results :

Tests run: 348, Failures: 0, Errors: 0, Skipped: 16
{code}
Merge happened.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13945:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

+1 on the v11 patch.

Let's get this in first and make it better in follow-up JIRAs. I've committed 
this to the {{branch-2}} and {{trunk}} branches. Thanks for your contribution, 
[~snayak], and thanks for your review, [~ste...@apache.org].

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13945.10.patch, HADOOP-13945.11.patch, 
> HADOOP-13945.1.patch, HADOOP-13945.2.patch, HADOOP-13945.3.patch, 
> HADOOP-13945.4.patch, HADOOP-13945.5.patch, HADOOP-13945.6.patch, 
> HADOOP-13945.7.patch, HADOOP-13945.8.patch, HADOOP-13945.9.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) 
> does not support Kerberos authentication or FileSystem authorization, which 
> makes it unusable in secure, multi-user environments. 
> To make the {{WASB}} client more suitable for secure environments, two 
> initiatives are under way to provide authorization (HADOOP-13930) and 
> fine-grained access control (HADOOP-13863) support.
> This JIRA adds Kerberos and delegation token support to the {{WASB}} client 
> so it can fetch Azure Storage SAS keys (from a remote service, as discussed in 
> HADOOP-13863), which provide fine-grained, time-limited access to containers 
> and blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is used to generate the SAS keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933736#comment-15933736
 ] 

Manoj Govindassamy edited comment on HADOOP-13715 at 3/20/17 10:35 PM:
---

Thanks for the review [~ste...@apache.org]. Please take a look at the attached 
v02 patch.

bq. FileStatus.toString() needs to include the EC status.
Included the erasure coded details in FileStatus.toString().

bq. There's enough assertFalse(fs.getFileStatus(dir).isErasureCoded()) and 
assertTrue that they could be pulled out into a method with better diags
Created ContractTestUtils#assertErasureCoded and #assertNotErasureCoded and 
made the tests invoke these helpers.

bq. The filesystem specification doesn't have any coverage of erasure coding or 
this bit...
Updated {{filesystem.md}} to include erasure coding details in the 
{{getFileStatus(p)}} specification, following the encryption model in extending 
the result structure, plus a short description of the new bit. Any suggestions 
on how to further improve this spec are most welcome. 


was (Author: manojg):
Thanks for the review [~ste...@apache.org]. Please take a look at the attached 
v02 patch.

bq. FileStatus.toString() needs to include the EC status.
Included erasure coded details in FileStatus.toString()

bq. There's enough assertFalse(fs.getFileStatus(dir).isErasureCoded()) and 
assertTrue that they could be pulled out into a method with better diags
Created ContractTestUtils #assertErasureCoded and #assertNotErasureCoded and 
made the tests to invoke this helper test util. 

b2. The filesystem specification doesn't have any coverage of erasure coding or 
this bit...
Updated {{filesystem.md}} to include Erasure Coding details in 
{{getFileStatus(p)}} specification. Followed the encryption model in extending 
the result structure and a short description on the same. Any suggestions on 
how to further improve this spec is highly welcome. 

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108],
> I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell whether they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include distcp and 
> teragen/terasort.
> Downstream applications such as Flume or HBase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Attachment: HADOOP-13945.10.patch

Thanks [~liuml07]. Patch #9 looks good to me. I have added similar exception 
handling to {{RemoteWasbAuthorizerImpl.init()}} in patch #10. 
[~ste...@apache.org] Could you please review?


> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.10.patch, HADOOP-13945.10.patch, 
> HADOOP-13945.1.patch, HADOOP-13945.2.patch, HADOOP-13945.3.patch, 
> HADOOP-13945.4.patch, HADOOP-13945.5.patch, HADOOP-13945.6.patch, 
> HADOOP-13945.7.patch, HADOOP-13945.8.patch, HADOOP-13945.9.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) 
> does not support Kerberos authentication or FileSystem authorization, which 
> makes it unusable in secure, multi-user environments. 
> To make the {{WASB}} client more suitable for secure environments, two 
> initiatives are under way to provide authorization (HADOOP-13930) and 
> fine-grained access control (HADOOP-13863) support.
> This JIRA adds Kerberos and delegation token support to the {{WASB}} client 
> so it can fetch Azure Storage SAS keys (from a remote service, as discussed in 
> HADOOP-13863), which provide fine-grained, time-limited access to containers 
> and blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is used to generate the SAS keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932216#comment-15932216
 ] 

Hadoop QA commented on HADOOP-13945:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 74 unchanged - 2 fixed = 75 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859497/HADOOP-13945.11.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f7cc0ac32745 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34a931c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11854/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11854/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11854/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G 

[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Attachment: (was: HADOOP-13945.10.patch)

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.10.patch, HADOOP-13945.1.patch, 
> HADOOP-13945.2.patch, HADOOP-13945.3.patch, HADOOP-13945.4.patch, 
> HADOOP-13945.5.patch, HADOOP-13945.6.patch, HADOOP-13945.7.patch, 
> HADOOP-13945.8.patch, HADOOP-13945.9.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) 
> does not support Kerberos authentication or FileSystem authorization, which 
> makes it unusable in secure, multi-user environments. 
> To make the {{WASB}} client more suitable for secure environments, two 
> initiatives are under way to provide authorization (HADOOP-13930) and 
> fine-grained access control (HADOOP-13863) support.
> This JIRA adds Kerberos and delegation token support to the {{WASB}} client 
> so it can fetch Azure Storage SAS keys (from a remote service, as discussed in 
> HADOOP-13863), which provide fine-grained, time-limited access to containers 
> and blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is used to generate the SAS keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Comment: was deleted

(was: Thanks [~liuml07]. Patch #9 looks good to me. I have added similar 
exception handling to {{RemoteWasbAuthorizerImpl.init()}} in patch #10. 
[~ste...@apache.org] Could you please review?
)

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.10.patch, HADOOP-13945.1.patch, 
> HADOOP-13945.2.patch, HADOOP-13945.3.patch, HADOOP-13945.4.patch, 
> HADOOP-13945.5.patch, HADOOP-13945.6.patch, HADOOP-13945.7.patch, 
> HADOOP-13945.8.patch, HADOOP-13945.9.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) 
> does not support Kerberos authentication or FileSystem authorization, which 
> makes it unusable in secure, multi-user environments. 
> To make the {{WASB}} client more suitable for secure environments, two 
> initiatives are under way to provide authorization (HADOOP-13930) and 
> fine-grained access control (HADOOP-13863) support.
> This JIRA adds Kerberos and delegation token support to the {{WASB}} client 
> so it can fetch Azure Storage SAS keys (from a remote service, as discussed in 
> HADOOP-13863), which provide fine-grained, time-limited access to containers 
> and blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is used to generate the SAS keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.001

Patch branch-2.001
- Add properties {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}} to 
core-default.xml
- Copy the ADLS jars for hadoop-dist

Testing done
- Manual tests in a single-node setup
- Live unit tests

The following live unit tests failed:
{noformat}
Failed tests: 
  TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257 expected:<1> but was:<10>

Tests in error: 
  TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
  TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
{noformat}

The two testMkdirsFailsForSubdirectoryOfExistingFile errors are fixed by 
HDFS-11132.
testListStatus passes when the file system is empty, so this is a test code 
problem rather than a product issue.
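
For reference, the two new entries have this shape (the {{AdlFileSystem}} class 
name comes from the stack trace below; the {{AbstractFileSystem}} binding class 
is my assumption of the usual ADL one, so check the patch for the exact text):
{code:xml}
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
{code}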

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Status: Patch Available  (was: Open)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933401#comment-15933401
 ] 

Hadoop QA commented on HADOOP-14204:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 5 unchanged - 1 fixed = 6 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | HADOOP-14204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859617/HADOOP-14204-branch-2.8-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b61ea52345a 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933691#comment-15933691
 ] 

Steve Loughran commented on HADOOP-14205:
-

LGTM

+1

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933572#comment-15933572
 ] 

Hadoop QA commented on HADOOP-14205:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
4s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 12s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools-dist in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | hadoop.conf.TestCommonConfigurationFields 
|
| JDK v1.7.0_121 Failed junit tests | hadoop.conf.TestCommonConfigurationFields 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-14205 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933663#comment-15933663
 ] 

Mingliang Liu commented on HADOOP-14204:


+1 on this. Thanks, Steve.

> S3A multipart commit failing, "UnsupportedOperationException at 
> java.util.Collections$UnmodifiableList.sort"
> 
>
> Key: HADOOP-14204
> URL: https://issues.apache.org/jira/browse/HADOOP-14204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14204-branch-2.8-001.patch
>
>
> Stack trace seen when trying to commit a multipart upload: the EMR code 
> (which takes a {{List}} of etags) tries to sort that list directly, which it 
> can't do if the list is unmodifiable.
> Later versions of the SDK clone the list before sorting.
> We need to make sure that the list passed in can be sorted.
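
A minimal, self-contained illustration of the failure mode and the defensive 
copy that avoids it (plain strings stand in for the SDK's etag type):
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortUnmodifiable {
  public static void main(String[] args) {
    List<String> etags = Collections.unmodifiableList(Arrays.asList("b", "a"));

    // Collections.sort(etags) would throw UnsupportedOperationException here,
    // because sort() writes the sorted order back into the wrapped list.

    // Defensive copy: sort a mutable clone instead, as later SDKs do.
    List<String> sortable = new ArrayList<>(etags);
    Collections.sort(sortable);
    System.out.println(sortable);  // prints [a, b]
  }
}
{code}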



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14196) Azure Data Lake doc is missing required config entry

2017-03-20 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14196:
-
Target Version/s: 2.8.1, 3.0.0-alpha3  (was: 3.0.0-alpha2, 2.8.1)

> Azure Data Lake doc is missing required config entry
> 
>
> Key: HADOOP-14196
> URL: https://issues.apache.org/jira/browse/HADOOP-14196
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14196-001.patch
>
>
> The index.md for the ADL file system is missing one of the config entries 
> needed for setting up OAuth with client credentials. Users need to set 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}, but 
> the instructions do not say that. 
> This has led to people being unable to connect to the backend after setting 
> up a cluster with ADL.
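
Until the doc fix lands, the client-credential setup looks roughly like this 
in core-site.xml. Only the {{dfs.adls.oauth2.access.token.provider.type}} key 
is quoted from this issue; the companion keys are my recollection of the 
standard ADL OAuth2 setup and should be verified against the corrected 
index.md:
{code:xml}
<property>
  <name>dfs.adls.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>
<property>
  <name>dfs.adls.oauth2.client.id</name>
  <value>YOUR_CLIENT_ID</value>
</property>
<property>
  <name>dfs.adls.oauth2.credential</name>
  <value>YOUR_CLIENT_SECRET</value>
</property>
<property>
  <name>dfs.adls.oauth2.refresh.url</name>
  <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
</property>
{code}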



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933652#comment-15933652
 ] 

Hadoop QA commented on HADOOP-14195:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14195 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859633/HADOOP-14195.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 929c614cbb97 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6c399a8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11860/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11860/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11860/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11860/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11860/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT 

[jira] [Comment Edited] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933465#comment-15933465
 ] 

Allen Wittenauer edited comment on HADOOP-14202 at 3/20/17 8:22 PM:


* Make everything use secure_user
* rename hadoop_verify_user to hadoop_verify_user_perm to better reflect reality
* add a new var creator function to condense code further
* remove vast amounts of boilerplate from the base bin commands; this required 
moving the mapred.jobsummary.logger definition in mapred:

{code}
 13 files changed, 285 insertions(+), 322 deletions(-)
{code}


was (Author: aw):
* Make everything use secure_user
* remove vast amounts of boilerplate from the base bin commands; this required 
moving the mapred.jobsummary.logger definition in mapred.
* rename hadoop_verify_user to hadoop_verify_user_perm to better reflect reality
* add a new var creator function to condense code further



> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14202.00.patch
>
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the projects. The vast majority of vars now look like 
> (command)_(subcommand)_(etc). Two hold outs are HADOOP_SECURE_DN_USER  and 
> HADOOP_PRIVILEGED_NFS_USER.
> Additionally, there is
> * no generic handling
> * no documentation for anyone
> * no safety checks to make sure things are defined
> In order to fix all of this, we should:
> * deprecate the previous vars using the deprecation function, updating the 
> HDFS documentation that references them
> * add generic (command)_(subcommand)_SECURE_USER support
> * add some verification for the previously mentioned var
> * add some docs to UnixShellGuide.md



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933438#comment-15933438
 ] 

John Zhuge commented on HADOOP-12875:
-

Thanks [~vishwajeet.dusane].

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933396#comment-15933396
 ] 

Steve Loughran commented on HADOOP-13887:
-

* javac warnings are about deprecation; don't worry about those.
* checkstyle: yes, lines are too long. If you want to know why Hadoop is 
(stuck) @ 80, it's an old issue; a key argument is ease of side-by-side patch 
comparison via the Chrome plugin & other tools. FWIW, it's not a hard veto 
limit, especially if things "look nicer", but do try to come in below 80 where 
it's easy to do so.
* is bouncycastle mandatory? If so, that's trouble. I don't see it being used 
anywhere except test scope right now. Not only would making it a compile-scope 
dependency add yet another dependency conflict, bouncycastle is the JAR which 
adds export restrictions. We really don't want to go there. If it's just for 
testing, set the scope accordingly.

Test-wise, it'd be nice if we could use JUnit parameterized tests. I say nice, 
but not really necessary...just helps debug problems so that if one size fails, 
the successors can still get by. Otherwise, tests look good (though the 
{{rm()}} calls in {{validateEncryptionForFilesize()}} may want to be in a 
finally clause).
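For reference, a minimal JUnit 4 parameterized skeleton of that idea (class 
name and sizes are hypothetical):

{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestEncryptionBySize {
  @Parameters(name = "size={0}")
  public static Collection<Object[]> sizes() {
    // Hypothetical file sizes; pick whatever the suite already covers.
    return Arrays.asList(new Object[][] {{0}, {1}, {1024}, {1024 * 1024}});
  }

  private final int size;

  public TestEncryptionBySize(int size) {
    this.size = size;
  }

  @Test
  public void testEncryptionForFilesize() {
    // Each size runs as its own test case, so a failure at one size
    // doesn't stop the remaining sizes from running.
    // validateEncryptionForFilesize(size) would go here.
  }
}
{code}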

Now, if we look at the s3a roadmap, the big topic on the way is "merge 
HADOOP-13345 back into trunk". That's going to get priority over this...just 
because it's a big change that people have been working on. As this patch is 
now changing the decision as to what a directory is, it could have consequences 
there, or at least merge problems. I think we'll have to start thinking about 
how best to apply this patch. FWIW, HADOOP-13345 has trunk merged into it every 
week; I've been doing my work in a branch off it, rebasing on the main '13345 
branch as needed, dealing with merge pain as it arises. This might be something 
to do here too.

> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-branch-2-003.patch, 
> HADOOP-13897-branch-2-004.patch, HADOOP-13897-branch-2-005.patch, 
> HADOOP-14171-001.patch
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-03-20 Thread Igor Mazur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933560#comment-15933560
 ] 

Igor Mazur commented on HADOOP-13887:
-

About bouncycastle: according to 
https://aws.amazon.com/blogs/developer/amazon-s3-client-side-authenticated-encryption/
 - yes - it's required for client-side encryption. 
I suppose the solution might be to set the scope to "provided" and document the 
dependency.

> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-branch-2-003.patch, 
> HADOOP-13897-branch-2-004.patch, HADOOP-13897-branch-2-005.patch, 
> HADOOP-14171-001.patch
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933578#comment-15933578
 ] 

Ravi Prakash commented on HADOOP-10738:
---

I'm not sure Siqi Li is active anymore, Arpit. I suspect the way most people use 
distcp is in an [oozie 
action|https://oozie.apache.org/docs/4.0.0/DG_DistCpActionExtension.html]. I 
suspect they want to specify a single XML file where they can keep all the 
configuration for a source-destination pair. The alternative is to have the 
exact set of parameters copy-pasted into multiple workflows (which, if changed, 
then have to be updated in all workflows). Please correct me if I'm wrong, 
[~ferhui]. Other users?

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-10738:
-
Attachment: HADOOP-10738-branch-2.001.patch

Updated based on the latest branch-2.


> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-20 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HADOOP-14195:
-
Attachment: (was: TestCredentialProvider.java)

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-20 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HADOOP-14195:
-
Attachment: TestCredentialProvider.java

Attaching the Java application which reproduces the race condition. The 
exception is not thrown when the patch is applied.
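For illustration, one common way to make {{ServiceLoader}} iteration 
thread-safe (a sketch only, not necessarily the approach taken in the attached 
patch):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.ServiceLoader;

import org.apache.hadoop.security.alias.CredentialProviderFactory;

class ProviderRegistrySketch {
  // Materialize the ServiceLoader once, under class initialization, so later
  // lookups iterate an immutable snapshot instead of sharing the lazy
  // (non-thread-safe) ServiceLoader iterator across threads.
  private static final List<CredentialProviderFactory> FACTORIES;
  static {
    List<CredentialProviderFactory> loaded = new ArrayList<>();
    for (CredentialProviderFactory factory
        : ServiceLoader.load(CredentialProviderFactory.class)) {
      loaded.add(factory);
    }
    FACTORIES = Collections.unmodifiableList(loaded);
  }
}
{code}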

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933877#comment-15933877
 ] 

Hadoop QA commented on HADOOP-13715:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} root: The patch generated 0 new + 545 unchanged - 4 
fixed = 545 total (was 549) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 34s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13715 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859660/HADOOP-13715.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1414f3aad384 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-20 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933950#comment-15933950
 ] 

Fei Hui commented on HADOOP-14176:
--

CC [~jrottinghuis] [~cnauroth]

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get errors like the following:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this is because the distcp configuration 
> overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml, and the values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.
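As background, a small sketch of why a bundled resource can shadow cluster 
settings; the resource name matches DistCp's, everything else is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ResourcePrecedenceSketch {
  public static void main(String[] args) {
    // Resources added to a Configuration are loaded in order, and a
    // later-added resource overrides earlier ones for any non-final key.
    // DistCp adds distcp-default.xml to its job Configuration, so values
    // there can shadow the cluster's mapred-site.xml settings.
    Configuration conf = new Configuration();
    conf.addResource("distcp-default.xml");
    System.out.println(conf.get("mapred.job.map.memory.mb"));
  }
}
{code}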



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: (was: HADOOP-14205.branch-2.001)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.
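For anyone hitting this before the patch lands, a minimal programmatic 
workaround sketch: the property values follow the description above, and the 
{{Adl}} AbstractFileSystem class name is an assumption.

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class AdlSchemeWorkaround {
  private AdlSchemeWorkaround() {}

  // Set the two bindings the description says are missing from
  // core-default.xml (requires the ADLS jars on the classpath).
  public static Configuration withAdlBindings(Configuration conf) {
    conf.set("fs.adl.impl", "org.apache.hadoop.fs.adl.AdlFileSystem");
    // Assumed class name for the AbstractFileSystem binding.
    conf.set("fs.AbstractFileSystem.adl.impl", "org.apache.hadoop.fs.adl.Adl");
    return conf;
  }
}
{code}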



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: (was: HADOOP-14205.branch-2.002)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Status: Patch Available  (was: Open)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.001.patch

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Status: Open  (was: Patch Available)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.002.patch

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
> and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933971#comment-15933971
 ] 

Hadoop QA commented on HADOOP-14205:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-tools/hadoop-tools-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 53s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-tools-dist in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | 

[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934040#comment-15934040
 ] 

John Zhuge commented on HADOOP-14205:
-

TestSFTPFileSystem#testFileExists has been failing often lately; filed 
HADOOP-14206. The test passed locally for me.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-03-20 Thread Omkar Aradhya K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934045#comment-15934045
 ] 

Omkar Aradhya K S commented on HADOOP-11794:


I was trying to evaluate your patch with ADLS:
Tried the bits on an HDInsight 3.5 cluster (which comes with Hadoop 2.7) and 
observed the following compatibility issues:
 a. You check for an instance of {code}DistributedFileSystem{code} in many 
places, but other {code}FileSystem{code} implementations don't extend 
{code}DistributedFileSystem{code}.
  i. Could this be changed to something more compatible with other 
{code}FileSystem{code} implementations?
 b. You use the new {code}DFSUtilClient{code}, which makes DistCp 
incompatible with older versions of Hadoop.
  i. Can this be changed to be backward compatible?
If the compatibility issues are addressed, DistCp with your feature would be 
available for other {code}FileSystem{code} implementations and would also be 
backward compatible.

> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a long, long time or finally fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222)
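
For reference, a minimal sketch of the stitch step via the HDFS concat API 
mentioned above (paths and class name are illustrative; assumes the chunks 
were already copied to the destination HDFS cluster):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ConcatStitchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // concat() is HDFS-specific, which is also why DistributedFileSystem
    // keeps appearing in the patch. Assumes fs.defaultFS points at HDFS.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    Path target = new Path("/dst/file.part0");  // first chunk, already copied
    Path[] rest = {
        new Path("/dst/file.part1"),
        new Path("/dst/file.part2")
    };
    dfs.concat(target, rest);                   // stitch remaining chunks on
    dfs.rename(target, new Path("/dst/file"));  // give it its final name
  }
}
{code}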



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934116#comment-15934116
 ] 

Fei Hui commented on HADOOP-10738:
--

distcp uses parameters
* from distcp-default.xml, which overrides hdfs-site.xml, yarn-site.xml, and 
mapred-site.xml
* from -D on the command line.

I think we should add another way to set parameters that overrides 
distcp-default.xml. Passing -D every time distcp runs is tedious, so adding 
distcp-site.xml would be useful.
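
A hypothetical distcp-site.xml of the kind being proposed (the property and 
value are examples only, not part of any patch):

{code:xml}
<configuration>
  <!-- Example site-wide override for distcp jobs; loaded after
       distcp-default.xml so it wins without any -D flags. -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
</configuration>
{code}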

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934116#comment-15934116
 ] 

Fei Hui edited comment on HADOOP-10738 at 3/21/17 5:25 AM:
---

distcp uses parameters
* from distcp-default.xml, which overrides hdfs-site.xml, yarn-site.xml, and 
mapred-site.xml
* from -D on the command line.

I think we should add another way to set parameters that overrides 
distcp-default.xml. Passing -D every time distcp runs is tedious, so adding 
distcp-site.xml would be useful.


was (Author: ferhui):
distcp uses parameters
* from distcp-default.xml, which overrides 
hdfs-site.xml,yarn-site.xml,mapred-site.xml
* from -D in commandline.
I think we should add another way to set parameters to override 
distcp-default.xml. If distcp runs with -D every time, it is boring. it is 
useful for adding distcp-site.xml 

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13966) add ability to start DDB local server in every test

2017-03-20 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13966:
---
Attachment: HADOOP-13966-HADOOP-13345.000.patch

V0 patch is a starting point. The open questions are how to reuse the 
singleton instance and when to stop it.
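
For discussion, a hedged sketch of one shape the shared server might take, 
using DynamoDBLocal's ServerRunner/DynamoDBProxyServer classes; the lazy-init 
and shutdown-hook choices here are assumptions, not what the v0 patch does:

{code:java}
import com.amazonaws.services.dynamodbv2.local.main.ServerRunner;
import com.amazonaws.services.dynamodbv2.local.server.DynamoDBProxyServer;

public final class SharedLocalDDB {
  private static DynamoDBProxyServer server;

  // Reuse a single in-memory server across every test in the JVM.
  public static synchronized DynamoDBProxyServer get() throws Exception {
    if (server == null) {
      server = ServerRunner.createServerFromCommandLineArgs(
          new String[] {"-inMemory", "-port", "8000"});
      server.start();
      // Stop it once, when the test JVM exits, rather than per test.
      Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        try {
          server.stop();
        } catch (Exception ignored) {
          // best-effort shutdown
        }
      }));
    }
    return server;
  }
}
{code}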

> add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
> Attachments: HADOOP-13966-HADOOP-13345.000.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14187) Update ZooKeeper dependency to 3.4.9 and Curator dependency to 2.12.0

2017-03-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934124#comment-15934124
 ] 

Akira Ajisaka commented on HADOOP-14187:


LGTM, +1. Thanks [~ozawa].

> Update ZooKeeper dependency to 3.4.9 and Curator dependency to 2.12.0
> -
>
> Key: HADOOP-14187
> URL: https://issues.apache.org/jira/browse/HADOOP-14187
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-14187.001.patch
>
>
> This is an update for using Apache Curator, which shades Guava.
> Why is Curator updated to 2.12.0 instead of 3.3.0?
> It's because Curator 3.x only supports ZooKeeper 3.5.x, and ZooKeeper 3.5.x is 
> still an alpha release. Hence, I think we should make the conservative choice here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14204:

Attachment: HADOOP-14204-branch-2.8-001.patch

Patch 001: create a new, sortable list. This is what the later AWS SDK does 
internally; it is mostly harmless on those SDKs, and should prevent the 
problem on the SDK versions shipped with Hadoop 2.7-2.8.

Testing: s3a frankfurt, also rebuilt spark & ran the tests downstream, as that 
was where I saw it. No occurrences in repeated test runs.
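
A minimal illustration of the failure mode and the copy (stand-in strings, 
not the actual PartETag objects):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortableCopyDemo {
  public static void main(String[] args) {
    // Stand-in for the etag list handed to the SDK: an unmodifiable view.
    List<String> etags =
        Collections.unmodifiableList(Arrays.asList("etag-2", "etag-1"));

    // Collections.sort(etags) here would throw
    // UnsupportedOperationException -- the failure in the title.

    // The fix: hand over a freshly allocated, sortable copy instead.
    List<String> sortable = new ArrayList<>(etags);
    Collections.sort(sortable);
    System.out.println(sortable);  // [etag-1, etag-2]
  }
}
{code}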

> S3A multipart commit failing, "UnsupportedOperationException at 
> java.util.Collections$UnmodifiableList.sort"
> 
>
> Key: HADOOP-14204
> URL: https://issues.apache.org/jira/browse/HADOOP-14204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14204-branch-2.8-001.patch
>
>
> Stack trace seen trying to commit a multipart upload, as the EMR code (which 
> takes a {{List<PartETag> etags}}) is trying to sort that list directly, which it 
> can't do if the list doesn't want to be sorted.
> Later versions of the SDK clone the list before sorting.
> We need to make sure that the list passed in can be sorted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14204:

Target Version/s: 2.8.1
  Status: Patch Available  (was: Open)

> S3A multipart commit failing, "UnsupportedOperationException at 
> java.util.Collections$UnmodifiableList.sort"
> 
>
> Key: HADOOP-14204
> URL: https://issues.apache.org/jira/browse/HADOOP-14204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14204-branch-2.8-001.patch
>
>
> Stack trace seen trying to commit a multipart upload, as the EMR code (which 
> takes a {{List<PartETag> etags}}) is trying to sort that list directly, which it 
> can't do if the list doesn't want to be sorted.
> Later versions of the SDK clone the list before sorting.
> We need to make sure that the list passed in can be sorted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14202:
--
Attachment: HADOOP-14202.00.patch

* Make everything use secure_user
* remove vast amounts of boilerplate from the base bin commands; this required 
moving the mapred.jobsummary.logger definition in mapred
* rename hadoop_verify_user to hadoop_verify_user_perm to better reflect reality
* add a new var creator function to condense code further



> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14202.00.patch
>
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the projects. The vast majority of vars now look like 
> (command)_(subcommand)_(etc). Two holdouts are HADOOP_SECURE_DN_USER and 
> HADOOP_PRIVILEGED_NFS_USER.
> Additionally, there is
> * no generic handling
> * no documentation for anyone
> * no safety checks to make sure things are defined
> In order to fix all of this, we should:
> * deprecate the previous vars using the deprecation function, updating the 
> HDFS documentation that references them
> * add generic (command)_(subcommand)_SECURE_USER support
> * add some verification for the previously mentioned var
> * add some docs to UnixShellGuide.md



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14202:
--
Status: Patch Available  (was: Open)

> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14202.00.patch
>
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the projects. The vast majority of vars now look like 
> (command)_(subcommand)_(etc). Two holdouts are HADOOP_SECURE_DN_USER and 
> HADOOP_PRIVILEGED_NFS_USER.
> Additionally, there is
> * no generic handling
> * no documentation for anyone
> * no safety checks to make sure things are defined
> In order to fix all of this, we should:
> * deprecate the previous vars using the deprecation function, updating the 
> HDFS documentation that references them
> * add generic (command)_(subcommand)_SECURE_USER support
> * add some verification for the previously mentioned var
> * add some docs to UnixShellGuide.md



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2017-03-20 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933552#comment-15933552
 ] 

Ravi Prakash commented on HADOOP-14176:
---

Hi Fei Hui!

What I meant was that I'd lean towards setting {{mapreduce.map.memory.mb}} to 
1280 and {{mapreduce.map.java.opts}} to 1024. That way no jobs that used to 
work would suddenly fail (if {{mapred.job.map.memory.mb}} was 1024 in the 
past). I would like to hear other people's opinions, though, since you want 
this in branch-2.
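
For anyone needing a workaround meanwhile, those values can also be applied 
per-run via generic options (paths illustrative):

{noformat}
hadoop distcp -Dmapreduce.map.memory.mb=1280 \
    -Dmapreduce.map.java.opts=-Xmx1024m \
    hdfs://nn1/src/path hdfs://nn2/dst/path
{noformat}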

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When i run distcp,  i get some errors as follow
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this happens because the distcp 
> configuration overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml and the values are larger than those set in 
> distcp-default.xml, the error may occur.
> We should remove those two configurations from distcp-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Attachment: HADOOP-14205.branch-2.002

Patch branch-2.002
- Fix TestCommonConfigurationFields unit test failure about fs.adl.impl
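
For anyone hitting this before the patch lands, the two properties would look 
like this in core-site.xml (class names taken from the hadoop-azure-datalake 
module; treat them as assumptions until the patch is committed):

{code:xml}
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
{code}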


> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001, HADOOP-14205.branch-2.002
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2017-03-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933884#comment-15933884
 ] 

Kai Zheng commented on HADOOP-13200:


Roughly, the problem is how to configure a raw coder impl for a codec. A raw 
coder impl includes both an encoder and a decoder. Currently a raw coder 
factory combines an encoder and a decoder to represent one raw coder impl for 
a codec. Available codecs are rs-default, rs-legacy, xor, hh-xor, etc.; the 
raw coder impls for the rs-default codec are RSRawErasureCoderFactory and 
NativeRSRawErasureCoderFactory, and more impls for a codec could be added in 
future. The issue originated from the discussion with Colin, who disliked the 
current factory-based approach; this JIRA was meant to figure out a way to 
get rid of the raw coder factories.

I don't have a perfect solution for this in mind yet. Some related ideas so far:
1. Dynamically derive the encoder/decoder names from the codec name and other 
info, roughly as suggested by Colin, though I may not have caught his meaning 
exactly. This doesn't look very attractive to me because there is no easy or 
intuitive way to derive a raw coder name directly from configuration 
properties. Once the raw coder name is derived, the corresponding 
encoder/decoder class names can be resolved and the needed encoder/decoder 
instances created directly.

2. Combine the encoder and decoder together, as suggested by ATM somewhere. 
If we combine them, we can avoid the factory entirely. That sounds good for 
some raw coder impls but not for others: some encoder/decoder impls are 
pretty complex, and combining them would make the resultant class large and 
hard to maintain. Generally, decoding logic is much more complex than 
encoding; as an extreme example, for an LRC codec both the encoding and 
decoding logic would be quite complicated, so they are better kept separate.

Note that in the current approach, to configure something for a codec or for 
a raw coder impl of that codec, the configuration property starts with 
something like {{io.erasurecode.codec.rs.rawcoder.*}}

[~andrew.wang], what's your thought? Thanks!
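
To make the shape being debated concrete, a simplified sketch of the factory 
approach (names abbreviated and method signatures elided; not the exact 
Hadoop interfaces):

{code:java}
// A factory pairs an encoder with a decoder to form one raw coder impl
// for a codec.
interface RawErasureEncoder { /* encode(inputs, outputs) */ }
interface RawErasureDecoder { /* decode(inputs, erasedIndexes, outputs) */ }

interface RawErasureCoderFactory {
  RawErasureEncoder createEncoder();
  RawErasureDecoder createDecoder();
}

// Selection is then config-driven, e.g. a per-codec key such as
// io.erasurecode.codec.rs.rawcoder naming a factory class; adding a new
// impl (a native coder, say) means registering another factory.
{code}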

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> This is a follow-on task for HADOOP-13010 as discussed over there. There may 
> be some better approach allowing to customize and configure erasure coders 
> than the current having raw coder factory, as [~cmccabe] suggested. Will copy 
> the relevant comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933813#comment-15933813
 ] 

Hudson commented on HADOOP-13945:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11430 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11430/])
HADOOP-13945. Azure: Add Kerberos and Delegation token support to WASB 
(liuml07: rev 8e15e240597f821968e14893eabfea39815de207)
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockWasbAuthorizerImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteSASKeyGeneratorImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/WasbDelegationTokenIdentifier.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/package-info.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/WasbAuthorizerInterface.java
* (add) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/RemoteWasbAuthorizerImpl.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/SecurityUtils.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SecureStorageInterfaceImpl.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/Constants.java
* (add) 
hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/security/WasbTokenRenewer.java


> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13945.10.patch, HADOOP-13945.11.patch, 
> HADOOP-13945.1.patch, HADOOP-13945.2.patch, HADOOP-13945.3.patch, 
> HADOOP-13945.4.patch, HADOOP-13945.5.patch, HADOOP-13945.6.patch, 
> HADOOP-13945.7.patch, HADOOP-13945.8.patch, HADOOP-13945.9.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) does not 
> support Kerberos authentication and FileSystem authorization, which makes it 
> unusable in secure environments with a multi-user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is being used to generate the SAS keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-11794) distcp can copy blocks in parallel

2017-03-20 Thread Omkar Aradhya K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934045#comment-15934045
 ] 

Omkar Aradhya K S edited comment on HADOOP-11794 at 3/21/17 3:46 AM:
-

I was trying to evaluate your patch with ADLS:
Tried the bits on an HDInsight 3.5 cluster (which comes with Hadoop 2.7) and 
observed the following compatibility issues:
 a. You check for an instance of *DistributedFileSystem* in many places, but 
other *FileSystem* implementations don't extend *DistributedFileSystem*.
  i. Could this be changed to something more compatible with other 
*FileSystem* implementations?
 b. You use the new *DFSUtilClient*, which makes DistCp incompatible with 
older versions of Hadoop.
  i. Can this be changed to be backward compatible?
If the compatibility issues are addressed, DistCp with your feature would be 
available for other *FileSystem* implementations and would also be backward 
compatible.


was (Author: omkarksa):
I was trying to evaluate your patch with ADLS:
Tried the bits on a HDInsight 3.5 cluster (this comes with hadoop 2.7)
Observed following compatibility issues:
 a. You are checking for instance of {code}DistributedFileSystem{code} in 
many places and all other {code}FileSystem{code} implementations don’t 
implement {code}DistributedFileSystem{code}
  i.Could this be changed to something more compatible with other 
{code}FileSystem{code} implementations?
 b. You are using the new {code}DFSUtilClient{code}, which makes DistCp 
incompatible with older versions of Hadoop
 i. Can this be changed to be backward compatible?
If the compatibility issues are addressed, the DistCp with your feature would 
be available for other {code}FileSystem{code} implementations and also would be 
backward compatible.

> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a long, long time or finally fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-20 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933810#comment-15933810
 ] 

Andrew Wang commented on HADOOP-13715:
--

Hi Manoj, thanks for revving, and thanks also to Steve for reviewing. Overall 
this looks really good; a few small comments:

* FileStatus#toString, why do we hide the EC bit behind an if statement? I'm 
also wondering if we should touch this at all for compatibility reasons. We 
didn't modify toString for the ACL bit or encrypted bit. [~ste...@apache.org], 
thoughts? I remember there was some discussion of public vs. private toString 
methods. Maybe we handle all these bits in a new JIRA?
* In the new WebHDFS test, it'd be good to assert not just that the statuses 
are equal, but also that the "normal" ones are true or false as appropriate.
* Could we add HTTPFS tests too?
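
For the second point, the intent is roughly this (a sketch only; variable 
names are illustrative, not from the patch):

{code:java}
FileStatus ecStatus = webHdfs.getFileStatus(ecFile);
FileStatus replStatus = webHdfs.getFileStatus(replicatedFile);

// Beyond equality with the DFS-side statuses, pin the expected values:
assertTrue("EC file should report isErasureCoded() == true",
    ecStatus.isErasureCoded());
assertFalse("Replicated file should report isErasureCoded() == false",
    replStatus.isErasureCoded());
{code}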

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-20 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HADOOP-14195:
-
Attachment: HADOOP-14195.002.patch

Attaching the second version of the patch, which fixes the whitespace issues. 
This patch fixes a race condition in java.util.ServiceLoader when it is 
iterated in parallel by multiple threads. It is very hard to write a test 
case that consistently reproduces the race condition. I tried adding a test 
case which creates a thread pool and calls 
{{CredentialProviderFactory.getProviders(conf)}} from multiple threads. It 
works when run individually but creates false positives when run along with 
other tests. This could be due to the way JUnit schedules the threads for 
each test case.

I tested the patch manually with the attached test Java application, which 
reproduces the race condition consistently.

The JUnit test {{TestKDiag}} is working for me locally. [~jzhuge] Do you know 
how I can reproduce this test failure?
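
For reference, a sketch of the general shape of such a fix: serialize 
iteration over the shared {{ServiceLoader}}, since its lazy iterator is what 
the threads race on. This is an illustration only, not necessarily what the 
attached patch does:

{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.ServiceLoader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

public final class SynchronizedProviderLookup {
  // One shared loader per JVM; its lazy iterator mutates internal state,
  // so unsynchronized concurrent iteration can throw
  // NoSuchElementException or NullPointerException.
  private static final ServiceLoader<CredentialProviderFactory> LOADER =
      ServiceLoader.load(CredentialProviderFactory.class);

  public static CredentialProvider lookup(URI uri, Configuration conf)
      throws IOException {
    synchronized (LOADER) {
      for (CredentialProviderFactory factory : LOADER) {
        CredentialProvider provider = factory.createProvider(uri, conf);
        if (provider != null) {
          return provider;  // first factory that recognizes the scheme wins
        }
      }
    }
    return null;
  }
}
{code}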

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes: 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at 

[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-20 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933890#comment-15933890
 ] 

Manoj Govindassamy commented on HADOOP-13715:
-

Thanks for the review [~andrew.wang]. Will add an HTTPFS test, fix WebHDFS, 
and also take care of the above Jenkins QA errors. 

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14206) TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934037#comment-15934037
 ] 

John Zhuge commented on HADOOP-14206:
-

5 failures in {{Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86}} from Feb 6 
to Mar 9.

> TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature
> -
>
> Key: HADOOP-14206
> URL: https://issues.apache.org/jira/browse/HADOOP-14206
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, test
>Affects Versions: 2.9.0
>Reporter: John Zhuge
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11862/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_121.txt:
> {noformat}
> Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.454 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.sftp.TestSFTPFileSystem
> testFileExists(org.apache.hadoop.fs.sftp.TestSFTPFileSystem)  Time elapsed: 
> 0.19 sec  <<< ERROR!
> java.io.IOException: com.jcraft.jsch.JSchException: Session.connect: 
> java.security.SignatureException: Invalid encoding for signature
>   at com.jcraft.jsch.Session.connect(Session.java:565)
>   at com.jcraft.jsch.Session.connect(Session.java:183)
>   at 
> org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:168)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
>   at 
> org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>   at 
> org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:180)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
>   at 
> org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
>   at 
> org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14206) TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature

2017-03-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14206:
---

 Summary: TestSFTPFileSystem#testFileExists failure: Invalid 
encoding for signature
 Key: HADOOP-14206
 URL: https://issues.apache.org/jira/browse/HADOOP-14206
 Project: Hadoop Common
  Issue Type: Test
  Components: fs, test
Affects Versions: 2.9.0
Reporter: John Zhuge


https://builds.apache.org/job/PreCommit-HADOOP-Build/11862/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_121.txt:
{noformat}
Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.454 sec <<< 
FAILURE! - in org.apache.hadoop.fs.sftp.TestSFTPFileSystem
testFileExists(org.apache.hadoop.fs.sftp.TestSFTPFileSystem)  Time elapsed: 
0.19 sec  <<< ERROR!
java.io.IOException: com.jcraft.jsch.JSchException: Session.connect: 
java.security.SignatureException: Invalid encoding for signature
at com.jcraft.jsch.Session.connect(Session.java:565)
at com.jcraft.jsch.Session.connect(Session.java:183)
at 
org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:168)
at 
org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
at 
org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
at 
org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

at 
org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:180)
at 
org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
at 
org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
at 
org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933295#comment-15933295
 ] 

Vishwajeet Dusane commented on HADOOP-12875:


I think HADOOP-13257 ported to branch 2.8 should already cover the changes from 
this patch.

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14059) typo in s3a rename(self, subdir) error message

2017-03-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933302#comment-15933302
 ] 

Hudson commented on HADOOP-14059:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11428 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11428/])
HADOOP-14059. typo in s3a rename(self, subdir) error message. (arp: rev 
6c399a88e9b5ef8f822a9bd469dbf9fdb3141e38)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java


> typo in s3a rename(self, subdir) error message
> --
>
> Key: HADOOP-14059
> URL: https://issues.apache.org/jira/browse/HADOOP-14059
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14059-001.patch
>
>
> HADOOP-13823 added clearer error messages on renames, except for one, where 
> it introduced a typo:
>  "cannot rename a directory to a subdirectory o fitself ");



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14202) fix jsvc/secure user var inconsistencies

2017-03-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933301#comment-15933301
 ] 

Allen Wittenauer commented on HADOOP-14202:
---

Now I remember why I pushed this off: the user vars are used to determine if 
something is running with privilege. This makes cleaning this up significantly 
more complicated. :(

> fix jsvc/secure user var inconsistencies
> 
>
> Key: HADOOP-14202
> URL: https://issues.apache.org/jira/browse/HADOOP-14202
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> Post-HADOOP-13341 and (more importantly) HADOOP-13673, there has been a major 
> effort on making the configuration environment variables consistent among all 
> the projects. The vast majority of vars now look like 
> (command)_(subcommand)_(etc). Two holdouts are HADOOP_SECURE_DN_USER and 
> HADOOP_PRIVILEGED_NFS_USER.
> Additionally, there is
> * no generic handling
> * no documentation for anyone
> * no safety checks to make sure things are defined
> In order to fix all of this, we should:
> * deprecate the previous vars using the deprecation function, updating the 
> HDFS documentation that references them
> * add generic (command)_(subcommand)_SECURE_USER support
> * add some verification for the previously mentioned var
> * add some docs to UnixShellGuide.md



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14167) UserIdentityProvider should use short user name in DecayRpcScheduler

2017-03-20 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933307#comment-15933307
 ] 

Surendra Singh Lilhore commented on HADOOP-14167:
-

[~xyao], can you please review?

> UserIdentityProvider should use short user name in DecayRpcScheduler
> 
>
> Key: HADOOP-14167
> URL: https://issues.apache.org/jira/browse/HADOOP-14167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-14167.001.patch
>
>
> In a secure cluster, {{UserIdentityProvider}} uses the principal name for the user; it 
> should use the short name of the principal.
> {noformat}
>   {
> "name" : 
> "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
>  .
>  .
>  .
> "Caller(hdfs/had...@hadoop.com).Volume" : 436,
> "Caller(hdfs/had...@hadoop.com).Priority" : 3,
> .
> .
>   }
> {noformat}
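
For context, the distinction being requested, shown with illustrative values 
(both methods exist on UserGroupInformation):

{code:java}
UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
System.out.println(ugi.getUserName());       // full principal, e.g. "hdfs/host@HADOOP.COM"
System.out.println(ugi.getShortUserName());  // short name, e.g. "hdfs"
{code}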



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933310#comment-15933310
 ] 

Steve Loughran commented on HADOOP-14204:
-

This is one of those schroedinbugs: it doesn't exist until it surfaces, but now 
you've seen it, it's obvious that the code never worked. Except it does, doesn't it?
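
A minimal sketch of the defensive-copy fix, assuming the AWS SDK's {{PartETag}} 
type and a caller-supplied {{partETags}} list (names are illustrative):

{code}
// Sketch only: never sort the caller's list in place; an unmodifiable
// List throws UnsupportedOperationException from sort().
List<PartETag> sortable = new ArrayList<>(partETags);
Collections.sort(sortable, new Comparator<PartETag>() {
  @Override
  public int compare(PartETag a, PartETag b) {
    return Integer.compare(a.getPartNumber(), b.getPartNumber());
  }
});
{code}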

> S3A multipart commit failing, "UnsupportedOperationException at 
> java.util.Collections$UnmodifiableList.sort"
> 
>
> Key: HADOOP-14204
> URL: https://issues.apache.org/jira/browse/HADOOP-14204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> Stack trace seen trying to commit a multipart upload, as the EMR code (which 
> takes a {{List<PartETag> etags}}) is trying to sort that list directly, which it 
> can't do if the list doesn't want to be sorted.
> Later versions of the SDK clone the list before sorting.
> We need to make sure that the list passed in can be sorted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14203) performAuthCheck fails with wasbs scheme

2017-03-20 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari updated HADOOP-14203:
--
Affects Version/s: (was: 2.6.5)
   2.7.3

> performAuthCheck fails with wasbs scheme
> 
>
> Key: HADOOP-14203
> URL: https://issues.apache.org/jira/browse/HADOOP-14203
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Varada Hemeswari
>Assignee: Sivaguru Sankaridurg
>Priority: Critical
>  Labels: azure, fs, secure;, wasb
>
> Accessing Azure file system with 'wasbs' scheme fails on enabling wasb 
> authorization.
> Stack trace :
> {code}
> adminuser1@hn0-f6adaa:/etc/hadoop/conf$ yarn jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount "/examplefile" "/output"
> 17/03/20 07:58:48 INFO client.AHSProxy: Connecting to Application History 
> server at hn0-f6adaa.team2testdomain.onmicrosoft.com/10.45.0.190:10200
> 17/03/20 07:58:48 INFO security.TokenCache: Got dt for 
> wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net; 
> Kind: WASB delegation, Service: 10.45.0.190:50911, Ident: (owner=adminuser1, 
> renewer=yarn, realUser=, issueDate=1489996728687, maxDate=1490601528687, 
> sequenceNumber=15, masterKeyId=11)
> org.apache.hadoop.fs.azure.WasbAuthorizationException: getFileStatus 
> operation for Path : 
> wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net/output
>  not allowed
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.performAuthCheck(NativeAzureFileSystem.java:1425)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2058)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at 
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}
> In the above fs.defaultFS is set to 
> "wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net"
> If fs.defaultFS is changed to 
> "wasb://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net", the 
> job runs fine



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14203) performAuthCheck fails with wasbs scheme

2017-03-20 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari updated HADOOP-14203:
--
Labels: azure fs secure; wasb  (was: azure fs wasb)

> performAuthCheck fails with wasbs scheme
> 
>
> Key: HADOOP-14203
> URL: https://issues.apache.org/jira/browse/HADOOP-14203
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Varada Hemeswari
>Assignee: Sivaguru Sankaridurg
>Priority: Critical
>  Labels: azure, fs, secure;, wasb
>
> Accessing Azure file system with 'wasbs' scheme fails on enabling wasb 
> authorization.
> Stack trace :
> {code}
> adminuser1@hn0-f6adaa:/etc/hadoop/conf$ yarn jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount "/examplefile" "/output"
> 17/03/20 07:58:48 INFO client.AHSProxy: Connecting to Application History 
> server at hn0-f6adaa.team2testdomain.onmicrosoft.com/10.45.0.190:10200
> 17/03/20 07:58:48 INFO security.TokenCache: Got dt for 
> wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net; 
> Kind: WASB delegation, Service: 10.45.0.190:50911, Ident: (owner=adminuser1, 
> renewer=yarn, realUser=, issueDate=1489996728687, maxDate=1490601528687, 
> sequenceNumber=15, masterKeyId=11)
> org.apache.hadoop.fs.azure.WasbAuthorizationException: getFileStatus 
> operation for Path : 
> wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net/output
>  not allowed
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.performAuthCheck(NativeAzureFileSystem.java:1425)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2058)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at 
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}
> In the above fs.defaultFS is set to 
> "wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net"
> If fs.defaultFS is changed to 
> "wasb://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net", the 
> job runs fine



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14203) performAuthCheck fails with wasbs scheme

2017-03-20 Thread Varada Hemeswari (JIRA)
Varada Hemeswari created HADOOP-14203:
-

 Summary: performAuthCheck fails with wasbs scheme
 Key: HADOOP-14203
 URL: https://issues.apache.org/jira/browse/HADOOP-14203
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.5
Reporter: Varada Hemeswari
Assignee: Sivaguru Sankaridurg
Priority: Critical


Accessing Azure file system with 'wasbs' scheme fails on enabling wasb 
authorization.

Stack trace :
{code}
adminuser1@hn0-f6adaa:/etc/hadoop/conf$ yarn jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount "/examplefile" "/output"
17/03/20 07:58:48 INFO client.AHSProxy: Connecting to Application History 
server at hn0-f6adaa.team2testdomain.onmicrosoft.com/10.45.0.190:10200
17/03/20 07:58:48 INFO security.TokenCache: Got dt for 
wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net; Kind: 
WASB delegation, Service: 10.45.0.190:50911, Ident: (owner=adminuser1, 
renewer=yarn, realUser=, issueDate=1489996728687, maxDate=1490601528687, 
sequenceNumber=15, masterKeyId=11)
org.apache.hadoop.fs.azure.WasbAuthorizationException: getFileStatus operation 
for Path : 
wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net/output 
not allowed
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.performAuthCheck(NativeAzureFileSystem.java:1425)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2058)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
at 
org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
{code}

In the above fs.defaultFS is set to 
"wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net"

If fs.defaultFS is changed to 
"wasb://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net", the 
job runs fine



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-20 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Attachment: HADOOP-13945.11.patch

Thanks [~liuml07]. Patch #10 looks good. I have added similar exception 
handling to {{RemoteWasbAuthorizerImpl.init()}} in patch #11. 
[~ste...@apache.org] Could you please review?

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.10.patch, HADOOP-13945.11.patch, 
> HADOOP-13945.1.patch, HADOOP-13945.2.patch, HADOOP-13945.3.patch, 
> HADOOP-13945.4.patch, HADOOP-13945.5.patch, HADOOP-13945.6.patch, 
> HADOOP-13945.7.patch, HADOOP-13945.8.patch, HADOOP-13945.9.patch
>
>
> Current implementation of Azure storage client for Hadoop ({{WASB}}) does not 
> support Kerberos Authentication and FileSystem authorization, which makes it 
> unusable in secure environments with multi user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is it use the same REST service 
> which being used to generate the SAS Keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932488#comment-15932488
 ] 

Steve Loughran commented on HADOOP-14199:
-

It's an OS that doesn't let you create files called "COM2", which was notorious 
for causing problems with Microsoft's own COM2 project, hence COM+ and OLE2.

One of the windows funnies is about asking for unicode vs non-unicode names: 
there's this trick of having paths beginning with \\?\, as mentioned in 
[::CreateFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx].
 I think Java switched to that way, way back; without it you couldn't have 
paths > 256 chars long (see [http://bugs.java.com/view_bug.do?bug_id=4403166]).

So: I don't trust Windows filename logic to make any sense whatsoever.

But it could be as you say, escaping. Except the "\b" and "\t" conversion is 
happening in the Java string, isn't it? By the time it goes down the stack, 
it'll be an array of unicode characters. If it is being turned back into \, 
then that's a bug. Actually, there's a way to test that, isn't there: have a 
string declaring a unicode char code which is a normal ASCII character, 
\u0061 = "a", then assert that the returned filename has an "a" in it.

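A minimal sketch of that assertion, assuming {{ContractTestUtils.touch}} as the 
file-creation helper and {{fs}}/{{testDir}} as the test filesystem and working 
directory:

{code}
// Sketch only: build the filename from a unicode escape, then check the
// listed name still contains the literal character.
String name = "file-\u0061";        // \u0061 is 'a', so this is "file-a"
Path file = new Path(testDir, name);
ContractTestUtils.touch(fs, file);
String listed = fs.listStatus(testDir)[0].getPath().getName();
assertTrue("expected an 'a' in " + listed, listed.contains("a"));
{code}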

Now, one thing to consider here is: how much do we care about NTFS path listing 
when there are invalid bits in there? Really the test case is using the 
local FS to verify that odd chars are handled when listing HDFS files. If you 
want to do an ls of a local directory, use "DIR" over the hadoop CLI.

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Minor
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14203) performAuthCheck fails with wasbs scheme

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932490#comment-15932490
 ] 

Steve Loughran commented on HADOOP-14203:
-

What happens with the latest 2.8.0 RC?

> performAuthCheck fails with wasbs scheme
> 
>
> Key: HADOOP-14203
> URL: https://issues.apache.org/jira/browse/HADOOP-14203
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Varada Hemeswari
>Assignee: Sivaguru Sankaridurg
>Priority: Critical
>  Labels: azure, fs, secure;, wasb
>
> Accessing Azure file system with 'wasbs' scheme fails on enabling wasb 
> authorization.
> Stack trace :
> {code}
> adminuser1@hn0-f6adaa:/etc/hadoop/conf$ yarn jar 
> /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
> wordcount "/examplefile" "/output"
> 17/03/20 07:58:48 INFO client.AHSProxy: Connecting to Application History 
> server at hn0-f6adaa.team2testdomain.onmicrosoft.com/10.45.0.190:10200
> 17/03/20 07:58:48 INFO security.TokenCache: Got dt for 
> wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net; 
> Kind: WASB delegation, Service: 10.45.0.190:50911, Ident: (owner=adminuser1, 
> renewer=yarn, realUser=, issueDate=1489996728687, maxDate=1490601528687, 
> sequenceNumber=15, masterKeyId=11)
> org.apache.hadoop.fs.azure.WasbAuthorizationException: getFileStatus 
> operation for Path : 
> wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net/output
>  not allowed
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.performAuthCheck(NativeAzureFileSystem.java:1425)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2058)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
> at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at 
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}
> In the above fs.defaultFS is set to 
> "wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net"
> If fs.defaultFS is changed to 
> "wasb://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net", the 
> job runs fine



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14030) PreCommit TestKDiag failure

2017-03-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932421#comment-15932421
 ] 

Duo Zhang commented on HADOOP-14030:


OK, seems we can get the logs from this page:

https://builds.apache.org/job/PreCommit-HADOOP-Build/11847/testReport/junit/org.apache.hadoop.security/TestKDiag/testKeytabAndPrincipal/

So let me see how we can add '-Dsun.security.krb5.debug=true' when running 
tests.
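
One hedged way to do that from the command line, relying on surefire's 
{{argLine}} user property (this assumes the Hadoop pom doesn't already pin 
{{argLine}} in its surefire configuration):

{noformat}
mvn test -Dtest=TestKDiag -DargLine="-Dsun.security.krb5.debug=true"
{noformat}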

> PreCommit TestKDiag failure
> ---
>
> Key: HADOOP-14030
> URL: https://issues.apache.org/jira/browse/HADOOP-14030
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11523/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
> {noformat}
> Tests run: 13, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 2.175 sec 
> <<< FAILURE! - in org.apache.hadoop.security.TestKDiag
> testKeytabAndPrincipal(org.apache.hadoop.security.TestKDiag)  Time elapsed: 
> 0.05 sec  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
>   at 
> com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
>   at 
> com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
>   at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
>   at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1355)
>   at org.apache.hadoop.security.KDiag.loginFromKeytab(KDiag.java:630)
>   at org.apache.hadoop.security.KDiag.execute(KDiag.java:396)
>   at org.apache.hadoop.security.KDiag.run(KDiag.java:236)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.security.KDiag.exec(KDiag.java:1047)
>   at org.apache.hadoop.security.TestKDiag.kdiag(TestKDiag.java:119)
>   at 
> org.apache.hadoop.security.TestKDiag.testKeytabAndPrincipal(TestKDiag.java:162)
> testFileOutput(org.apache.hadoop.security.TestKDiag)  Time elapsed: 0.033 sec 
>  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: Login failure for user: 
> f...@example.com from keytab 
> /testptch/hadoop/hadoop-common-project/hadoop-common/target/keytab 
> javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:897)
>   at 
> com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
>   at 
> com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
>   at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
>   at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
>   at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1355)
>   at 

[jira] [Commented] (HADOOP-14189) add distcp-site.xml for distcp on branch-2

2017-03-20 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933140#comment-15933140
 ] 

Ravi Prakash commented on HADOOP-14189:
---

bq. You should be able to set them in core-site.xml.
Do we really want core-site.xml to be the catch-all for all configuration for 
all tools? See how that atomic blaster shoots both ways ;-)

Let's continue on HADOOP-10738

> add distcp-site.xml for distcp on branch-2
> --
>
> Key: HADOOP-14189
> URL: https://issues.apache.org/jira/browse/HADOOP-14189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-14189-branch-2.001.patch
>
>
> On hadoop 2.x, we cannot configure hadoop parameters for distcp. It only 
> uses distcp-default.xml.
> We should add distcp-site.xml to override hadoop parameters.
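
A hedged sketch of the mechanism, using {{Configuration}}'s default-resource 
chain (the distcp-site.xml line is the proposal, not shipped code):

{code}
// Sketch only: resources registered later win, so a site file added after
// the bundled defaults lets users override distcp's packaged settings.
Configuration.addDefaultResource("distcp-default.xml"); // bundled in hadoop-distcp.jar
Configuration.addDefaultResource("distcp-site.xml");    // proposed user override
Configuration conf = new Configuration();               // reads both resources
{code}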



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933162#comment-15933162
 ] 

Hadoop QA commented on HADOOP-13887:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
23s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 59s{color} 
| {color:red} root-jdk1.8.0_121 with JDK v1.8.0_121 generated 4 new + 884 
unchanged - 3 fixed = 888 total (was 887) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m  9s{color} 
| {color:red} root-jdk1.7.0_121 with JDK v1.7.0_121 generated 2 new + 982 
unchanged - 1 fixed = 984 total (was 983) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 47s{color} | {color:orange} root: The patch generated 104 new + 42 unchanged 
- 5 fixed = 146 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
20s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 

[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-03-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933192#comment-15933192
 ] 

Hadoop QA commented on HADOOP-13887:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
34s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
38s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 46s{color} 
| {color:red} root-jdk1.8.0_121 with JDK v1.8.0_121 generated 4 new + 884 
unchanged - 3 fixed = 888 total (was 887) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 47s{color} 
| {color:red} root-jdk1.7.0_121 with JDK v1.7.0_121 generated 2 new + 982 
unchanged - 1 fixed = 984 total (was 983) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} root: The patch generated 21 new + 42 unchanged 
- 5 fixed = 63 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 

[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933215#comment-15933215
 ] 

Allen Wittenauer commented on HADOOP-10738:
---

From HADOOP-14189:

bq. Do we really want core-site.xml to be the catch-all for all configuration 
for all tools?

a) It already is.  That ship sailed long, long ago.
b) Yes.  FWIW: I really don't see much purpose to hdfs-site.xml, yarn-site.xml, 
or mapred-site.xml.  Splitting the configs made things harder for people who 
don't look at this stuff every day.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-14205:

Target Version/s: 2.9.0, 2.8.1  (was: 2.8.0, 2.9.0)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932893#comment-15932893
 ] 

Steve Loughran commented on HADOOP-13715:
-


* {{FileStatus.toString()}} needs to include the EC status. It'll be invaluable 
for assertions and diagnostics
* The filesystem specification doesn't have any coverage of erasure coding or 
this bit. At the very least it needs a mention in the FileStatus structure, 
which doesn't seem to have any explicit coverage except in the getFileStatus() 
call and in invariants regarding consistency. Now would seem to be the time to 
add more on the structure.
* There's enough {{assertFalse(fs.getFileStatus(dir).isErasureCoded())}} and 
assertTrue that they could be pulled out into a method with better diags
 
{code}
public static void assertErasureCoded(FileSystem fs, Path path) throws IOException {
  FileStatus s = fs.getFileStatus(path);
  assertTrue("Not erasure coded: " + s, s.isErasureCoded());
}
{code}
 
+ equivalent for assertNotErasureCoded. ContractTestUtils would be the obvious 
place for them.


> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if it needs to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-03-20 Thread Igor Mazur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Mazur updated HADOOP-13887:

Attachment: HADOOP-13897-branch-2-005.patch

Style fixes

> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-branch-2-003.patch, 
> HADOOP-13897-branch-2-004.patch, HADOOP-13897-branch-2-005.patch, 
> HADOOP-14171-001.patch
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java
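
For context, a hedged sketch of what the SDK-level option looks like (the key 
id and credential wiring are illustrative, not this patch):

{code}
// Sketch only: S3A would construct an encrypting client in place of the
// plain AmazonS3Client it builds today.
EncryptionMaterialsProvider materials =
    new KMSEncryptionMaterialsProvider("my-kms-key-id");   // hypothetical key id
AmazonS3 s3 = new AmazonS3EncryptionClient(credentialsProvider, materials);
{code}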



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14201) Some 2.8.0 unit tests are failing on windows

2017-03-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14201:

Summary: Some 2.8.0 unit tests are failing on windows  (was: Fix some 
failing tests on windows)

> Some 2.8.0 unit tests are failing on windows
> 
>
> Key: HADOOP-14201
> URL: https://issues.apache.org/jira/browse/HADOOP-14201
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Affects Versions: 2.8.0
> Environment: Windows Server 2012.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14201-001.patch
>
>
> Some of the 2.8.0 tests are failing locally, without much in the way of 
> diagnostics. They may be false alarms related to system, VM setup, 
> performance, or they may be a sign of a problem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14205:
---

 Summary: No FileSystem for scheme: adl
 Key: HADOOP-14205
 URL: https://issues.apache.org/jira/browse/HADOOP-14205
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


{noformat}
$ bin/hadoop fs -ls /
ls: No FileSystem for scheme: adl
{noformat}

The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
{{fs.AbstractFileSystem.adl.impl}}.
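
For reference, the two entries would look something like this (the 
{{fs.adl.impl}} value is confirmed by the stack trace below; the 
AbstractFileSystem class name is an assumption):

{code:xml}
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <!-- assumed class name -->
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
{code}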

After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
error:
{noformat}
$ bin/hadoop fs -ls /
-ls: Fatal internal error
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.adl.AdlFileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
Caused by: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.adl.AdlFileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
... 18 more
{noformat}

The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932911#comment-15932911
 ] 

Steve Loughran commented on HADOOP-14204:
-

Stack trace. This is from my github cloud examples, running with Spark master 
built against the hadoop-2.8.0 RC3:

{code}
   org.apache.spark.SparkException: Job aborted.
  at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:196)
  at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:161)
  at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:161)
  at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
  at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:161)
  at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:137)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at 
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
  at 
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:93)
  at 
org.apache.spark.sql.execution.datasources.DataSource.writeInFileFormat(DataSource.scala:442)
  at 
org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:478)
  at 
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at 
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:93)
  at 
org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:93)
  at 
org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:606)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:217)
  at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:509)
  at 
com.hortonworks.spark.cloud.examples.S3DataFrameExample.action(S3DataFrameExample.scala:160)
  at 
com.hortonworks.spark.cloud.ObjectStoreExample$class.action(ObjectStoreExample.scala:67)
  at 
com.hortonworks.spark.cloud.examples.S3DataFrameExample.action(S3DataFrameExample.scala:56)
  at 
com.hortonworks.spark.cloud.examples.S3DataFrameExampleSuite$$anonfun$2.apply$mcV$sp(S3DataFrameExampleSuite.scala:47)
  at 
com.hortonworks.spark.cloud.CloudSuite$$anonfun$ctest$1.apply$mcV$sp(CloudSuite.scala:133)
  at 
com.hortonworks.spark.cloud.CloudSuite$$anonfun$ctest$1.apply(CloudSuite.scala:131)
  at 
com.hortonworks.spark.cloud.CloudSuite$$anonfun$ctest$1.apply(CloudSuite.scala:131)
  at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
  at org.scalatest.Transformer.apply(Transformer.scala:22)
  at org.scalatest.Transformer.apply(Transformer.scala:20)
  at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)

[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

Target Version/s: 2.8.0, 2.9.0  (was: 2.8.0)

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.
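
For reference, a minimal sketch of the two missing bindings as they would appear in 
{{core-site.xml}}; the {{fs.adl.impl}} value comes from the stack trace above, while 
the {{fs.AbstractFileSystem.adl.impl}} value is assumed from the ADL module's 
FileContext binding class:

{code:xml}
<!-- Bind the adl:// scheme for the FileSystem API. -->
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<!-- Bind the adl:// scheme for the AbstractFileSystem (FileContext) API; value assumed. -->
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
{code}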



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932914#comment-15932914
 ] 

John Zhuge commented on HADOOP-14205:
-

The issue was caused by backporting HADOOP-13037 to branch-2 and earlier branches 
while HADOOP-12666 was not backported. Unfortunately, some changes from HADOOP-12666 
are needed.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932929#comment-15932929
 ] 

John Zhuge commented on HADOOP-14205:
-

Not a problem in trunk.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-03-20 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932993#comment-15932993
 ] 

Mingliang Liu commented on HADOOP-13345:


Hi all, I'll merge from trunk again in 24 hours to pick up the latest conflicting 
changes in {{FileSystemContractBaseTest}}. I will commit if no tests fail. Thanks,


> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932806#comment-15932806
 ] 

Steve Loughran commented on HADOOP-13811:
-

+ stack trace from Spark dataframes, this time on Hadoop 2.8.0 RC3. This is the same 
situation as before: an error arising during stream interrupt/teardown, where I 
think an interrupted exception is being converted to an abort.
{code}
2017-03-20 15:09:31,440 [JobGenerator] WARN  dstream.FileInputDStream 
(Logging.scala:logWarning(87)) - Error finding new files
org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on 
spark-cloud/S3AStreamingSuite/streaming/streaming/: 
com.amazonaws.AmazonClientException: Failed to sanitize XML document destined 
for handler class 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler:
 Failed to sanitize XML document destined for handler class 
com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1638)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1393)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1369)
at org.apache.hadoop.fs.Globber.listStatus(Globber.java:76)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:234)
at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1704)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:2030)
at 
org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:205)
at 
org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:149)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at 
org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
at scala.Option.orElse(Option.scala:289)
at 
org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
at 
org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at 
org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
at scala.Option.orElse(Option.scala:289)
at 
org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331)
at 
org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:36)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at 
org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
at 

[jira] [Updated] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-03-20 Thread Igor Mazur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Mazur updated HADOOP-13887:

Attachment: HADOOP-13897-branch-2-004.patch

Fix license headers

> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-branch-2-003.patch, 
> HADOOP-13897-branch-2-004.patch, HADOOP-14171-001.patch
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop, but it is exposed as an option in the AWS 
> Java SDK, which Hadoop already includes. It should be trivial to propagate 
> this as a parameter passed to the S3 client used in S3AFileSystem.java.
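
For illustration, a minimal sketch of what propagating that option could look like with 
the SDK 1.x encryption client; the key generation here is a stand-in for real key 
management, and none of this is the attached patch's actual wiring:

{code}
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;

// The encryption client is a drop-in AmazonS3 implementation that encrypts
// on upload and decrypts on download, so it could be substituted for the
// plain client that S3AFileSystem constructs.
public static AmazonS3 createEncryptedClient() throws Exception {
  KeyGenerator keyGen = KeyGenerator.getInstance("AES");
  keyGen.init(256);
  SecretKey key = keyGen.generateKey(); // demo key; real code would load managed key material
  AWSCredentialsProvider creds = new DefaultAWSCredentialsProviderChain();
  return new AmazonS3EncryptionClient(creds,
      new StaticEncryptionMaterialsProvider(new EncryptionMaterials(key)));
}
{code}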



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14204:
---

 Summary: S3A multipart commit failing, 
"UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"
 Key: HADOOP-14204
 URL: https://issues.apache.org/jira/browse/HADOOP-14204
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Critical


Stack trace seen trying to commit a multipart upload: the EMR code (which 
takes a {{List etags}}) tries to sort that list directly, which it 
can't do if the list doesn't support sorting.

Later versions of the SDK clone the list before sorting.

We need to make sure that the list passed in can be sorted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932838#comment-15932838
 ] 

Steve Loughran commented on HADOOP-13811:
-

Looking at the AWS SDK:

1. {{AbortedException}} is only ever raised on a thread interrupt; it could be 
translated.
2. The log that [~fabbri] saw, "Unable to close response InputStream ...", is 
just a log @ error of the exception raised when the XML parser closes the input 
stream: it's not the actual point where something was thrown, but just the 
errors in the close() call. The stuff we'd log @ debug in our own code.

I propose that translateException gets a special handler for an aborted exception at 
the base of the call chain; if one is found, it raises an InterruptedIOException. Or 
should we actually set the interrupted bit on the thread again? That'd be purer, but 
potentially more of a change in the system's operation.
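
A minimal sketch of that first option, assuming a hypothetical helper alongside the 
existing translation code in {{S3AUtils}}; the name {{translateAbort}} and the wiring 
are illustrative, not a committed fix:

{code}
import java.io.IOException;
import java.io.InterruptedIOException;

import com.amazonaws.AbortedException;
import com.amazonaws.AmazonClientException;

// Special case at the base of translateException(): AbortedException is
// only raised on a thread interrupt, so surface it as an
// InterruptedIOException instead of a generic AWSClientIOException.
public static IOException translateAbort(String operation, String path,
    AmazonClientException exception) {
  if (exception instanceof AbortedException) {
    InterruptedIOException ioe = new InterruptedIOException(
        operation + " on " + path + ": " + exception);
    ioe.initCause(exception);
    return ioe;
  }
  return null; // caller falls through to the existing translation logic
}
{code}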



> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>
> Occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13371) S3A globber to use bulk listObject call over recursive directory scan

2017-03-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933065#comment-15933065
 ] 

ASF GitHub Bot commented on HADOOP-13371:
-

Github user kazuyukitanimura commented on the issue:

https://github.com/apache/hadoop/pull/204
  
Hi @steveloughran 
Thank you for sharing this S3A globber. I started reading the code; at a 
high level, what has already been done, and what still needs to be done?


> S3A globber to use bulk listObject call over recursive directory scan
> -
>
> Key: HADOOP-13371
> URL: https://issues.apache.org/jira/browse/HADOOP-13371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-13208 produces O(1) listing of directory trees in 
> {{FileSystem.listStatus}} calls, but doesn't do anything for 
> {{FileSystem.globStatus()}}, which uses a completely different codepath, one 
> which does a selective recursive scan by pattern matching as it goes down, 
> filtering out those patterns which don't match. Cost is 
> O(matching-directories) + cost of examining the files.
> It should be possible to do the glob status listing in S3A not through the 
> filtered treewalk, but through a list + filter operation. This would be an 
> O(files) lookup *before any filtering took place*.
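
A minimal sketch of the list + filter idea using only public {{FileSystem}} APIs; the 
use of {{GlobFilter}} for the matching is an assumption about how the filtering could 
be done, not the actual patch:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.GlobFilter;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// One flat recursive listing (the bulk listObjects path from HADOOP-13208),
// then client-side glob filtering, instead of a treewalk that issues one
// listing call per matching directory.
public static List<FileStatus> globByListing(FileSystem fs, Path root,
    String pattern) throws IOException {
  GlobFilter filter = new GlobFilter(pattern); // e.g. "/data/2017/*/logs/*.gz"
  List<FileStatus> matches = new ArrayList<>();
  RemoteIterator<LocatedFileStatus> files = fs.listFiles(root, true);
  while (files.hasNext()) {
    LocatedFileStatus status = files.next();
    if (filter.accept(status.getPath())) {
      matches.add(status);
    }
  }
  return matches;
}
{code}

This is the O(files) lookup the description mentions: every file under {{root}} is 
examined once, trading many small LIST calls for a few large ones.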



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933072#comment-15933072
 ] 

Steve Loughran commented on HADOOP-14204:
-

The issue is:

# The AWS SDK assumes the passed-in {{List}} can be sorted.
# We generate it with {{Futures.allAsList(partETagsFutures).get();}}, which 
internally does {{return new ListFuture(ImmutableList.copyOf(futures), 
true, MoreExecutors.sameThreadExecutor());}}. That is: it returns an immutable 
list.

The fix is what the later SDKs do internally: copy the list elements into a new 
ArrayList.
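
A minimal sketch of that fix, assuming the {{partETagsFutures}} name from the comment 
above and the SDK 1.x completion call:

{code}
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
import com.amazonaws.services.s3.model.PartETag;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

// Futures.allAsList() hands back an immutable list, which throws
// UnsupportedOperationException when the SDK sorts it in place; copy the
// elements into a plain ArrayList before building the completion request.
static void completeUpload(AmazonS3 s3, String bucket, String key,
    String uploadId, List<ListenableFuture<PartETag>> partETagsFutures)
    throws Exception {
  List<PartETag> partETags =
      new ArrayList<>(Futures.allAsList(partETagsFutures).get());
  s3.completeMultipartUpload(
      new CompleteMultipartUploadRequest(bucket, key, uploadId, partETags));
}
{code}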


> S3A multipart commit failing, "UnsupportedOperationException at 
> java.util.Collections$UnmodifiableList.sort"
> 
>
> Key: HADOOP-14204
> URL: https://issues.apache.org/jira/browse/HADOOP-14204
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>
> Stack trace seen trying to commit a multipart upload: the EMR code (which 
> takes a {{List etags}}) tries to sort that list directly, which it 
> can't do if the list doesn't support sorting.
> Later versions of the SDK clone the list before sorting.
> We need to make sure that the list passed in can be sorted.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13371) S3A globber to use bulk listObject call over recursive directory scan

2017-03-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933107#comment-15933107
 ] 

ASF GitHub Bot commented on HADOOP-13371:
-

Github user kazuyukitanimura commented on the issue:

https://github.com/apache/hadoop/pull/203
  
Thanks @steveloughran 

I understand your point that getting sign-off for the core class 
changes is not easy. At the same time, #204 seems to be a big change. I was 
wondering if there is a way to meet somewhere in the middle. It is important 
to give end users a way to glob things on S3; the current code easily hits 
OOM.

Meanwhile, I will keep trying to contribute to #204, which seems to be the 
right long-term solution.

Also, I have made a few other fixes related to S3A. My current employer just 
allowed me to spend 20% of my time contributing back to the community. I hope 
you don't mind my mentioning your name in the pull requests that I am going to 
file.


> S3A globber to use bulk listObject call over recursive directory scan
> -
>
> Key: HADOOP-13371
> URL: https://issues.apache.org/jira/browse/HADOOP-13371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-13208 produces O(1) listing of directory trees in 
> {{FileSystem.listStatus}} calls, but doesn't do anything for 
> {{FileSystem.globStatus()}}, which uses a completely different codepath, one 
> which does a selective recursive scan by pattern matching as it goes down, 
> filtering out those patterns which don't match. Cost is 
> O(matching-directories) + cost of examining the files.
> It should be possible to do the glob status listing in S3A not through the 
> filtered treewalk, but through a list + filter operation. This would be an 
> O(files) lookup *before any filtering took place*.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14162) Improve release scripts to automate missing steps

2017-03-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933248#comment-15933248
 ] 

Allen Wittenauer commented on HADOOP-14162:
---

I don't think adding Yet Another Build Tool is a good idea. We should be able 
to do everything we can do in ant with maven.  It's fine to steal ideas though.

That said.

The only relevant thing I'm seeing that really needs to change in the core 
create-release script is being able to specify a different location for PGP 
keys file verification.  Everything else belongs elsewhere.

Let me clarify:

My modifications to create-release were based on the idea that it does one 
thing: it creates a release artifact.  That's it.  Taking bits on disk and 
turning them into verifiable tar balls is already complicated (and time 
consuming!) enough.   A lot of the stuff being talked about in this JIRA issue 
should really be getting done in other components before and after 
create-release executes.  This allows for those components to be easily 
replaced by local ones (if need be) as well as cuts down on the amount of 
program logic required to "do everything".  It greatly simplifies the testing.  
If we want a "one step", then we just need a driver to run those different 
components.  

> Improve release scripts to automate missing steps
> -
>
> Key: HADOOP-14162
> URL: https://issues.apache.org/jira/browse/HADOOP-14162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> According to the conversation on the dev mailing list, one pain point of 
> release making is that even with the latest create-release script, a lot of 
> steps are not automated.
> This Jira is about creating a script which guides the release manager through 
> the process:
> Goals:
>   * It would work even without the Apache infrastructure: with custom 
> configuration (forked repositories/alternative nexus), it would be possible 
> to test the scripts even as a non-committer.
>   * Every step which can be automated should be scripted (create git 
> branches, build, ...); if something cannot be automated, an explanation 
> could be printed out and the script would wait for confirmation.
>   * Before dangerous steps (e.g. bulk JIRA updates) we can ask for confirmation 
> and explain the 
>   * The run should be idempotent (and there should be an option to continue 
> the release from any step).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14059) typo in s3a rename(self, subdir) error message

2017-03-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14059:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

> typo in s3a rename(self, subdir) error message
> --
>
> Key: HADOOP-14059
> URL: https://issues.apache.org/jira/browse/HADOOP-14059
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14059-001.patch
>
>
> HADOOP-13823 added clearer error messages on renames, except for one, where 
> it introduced a typo:
>  "cannot rename a directory to a subdirectory o fitself ");



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12875:

Component/s: test

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2017-03-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933261#comment-15933261
 ] 

John Zhuge commented on HADOOP-12875:
-

[~chris.douglas], [~vishwajeet.dusane] Should we backport this to branch-2.8.0 
as well? Would love to be able to pass live ADLS unit tests in all supported branches.

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/adl, test, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha1
>
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch, Hadoop-12875-005.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14162) Improve release scripts to automate missing steps

2017-03-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933248#comment-15933248
 ] 

Allen Wittenauer edited comment on HADOOP-14162 at 3/20/17 6:30 PM:


I don't think adding Yet Another Build Tool is a good idea. We should be able 
to do everything we can do in ant with maven.  It's fine to steal ideas though.

That said.

The only relevant thing I'm seeing that really needs to change in the core 
create-release script is being able to specify a different location for PGP 
keys file verification.  Everything else belongs elsewhere.

Let me clarify:

My modifications to create-release were based on the idea that it does one 
thing: it creates a release artifact.  That's it.  Taking bits on disk and 
turning them into verifiable tar balls is already complicated (and time 
consuming!) enough.   A lot of the stuff being talked about in this JIRA issue 
should really be getting done in other components before and after 
create-release executes.  This allows for those components to be easily 
replaced by local ones (if need be) as well as cuts down on the amount of 
program logic required to "do everything".  It greatly simplifies the testing.  
If we want a "one step", then we just need a driver to run those different 
components.  


was (Author: aw):
I don't think adding Yet Another Build Tool is a good idea. We should be able 
to do everything we can do in ant with maven.  It's fine to steal ideas though.

That said.

The only relevant thing I'm seeing that really needs to change in the core 
create-release script is being able to specify a different location for PGP 
keys file verification.  Everything else belongs elsewhere.

Let me clarify:

My modifications to create-release were based on the idea that it does one 
thing: it creates a release artifact.  That's it.  Taking bits on disk and 
turning them into verifiable tar balls is already complicated (and time 
consume!) enough.   A lot of the stuff being talked about in this JIRA issue 
should really be getting done in other components before and after 
create-release executes.  This allows for those components to be easily 
replaced by local ones (if need be) as well as cuts down on the amount of 
program logic required to "do everything".  It greatly simplifies the testing.  
If we want a "one step", then we just need a driver to run those different 
components.  

> Improve release scripts to automate missing steps
> -
>
> Key: HADOOP-14162
> URL: https://issues.apache.org/jira/browse/HADOOP-14162
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>
> According to the conversation on the dev mailing list, one pain point of 
> release making is that even with the latest create-release script, a lot of 
> steps are not automated.
> This Jira is about creating a script which guides the release manager through 
> the process:
> Goals:
>   * It would work even without the Apache infrastructure: with custom 
> configuration (forked repositories/alternative nexus), it would be possible 
> to test the scripts even as a non-committer.
>   * Every step which can be automated should be scripted (create git 
> branches, build, ...); if something cannot be automated, an explanation 
> could be printed out and the script would wait for confirmation.
>   * Before dangerous steps (e.g. bulk JIRA updates) we can ask for confirmation 
> and explain the 
>   * The run should be idempotent (and there should be an option to continue 
> the release from any step).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14059) typo in s3a rename(self, subdir) error message

2017-03-20 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14059:
---
Hadoop Flags: Reviewed

+1 I will commit it shortly.

> typo in s3a rename(self, subdir) error message
> --
>
> Key: HADOOP-14059
> URL: https://issues.apache.org/jira/browse/HADOOP-14059
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14059-001.patch
>
>
> HADOOP-13823 added clearer error messages on renames, except for one, where 
> it introduced a typo:
>  "cannot rename a directory to a subdirectory o fitself ");



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org