[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-03-21 Thread Omkar Aradhya K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935743#comment-15935743
 ] 

Omkar Aradhya K S commented on HADOOP-11794:


{quote}
The main reason for checking DistributedFileSystem is its support for 
getBlockLocations and the concat feature. I'm not sure whether we can assume 
other file systems support that.
{quote}
*getFileBlockLocations* and *concat* are APIs that have been part of 
*FileSystem.java* since [Hadoop 
v1.2.1|https://hadoop.apache.org/docs/r1.2.1/api/index.html].
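
For context, a minimal sketch of the block-level copy skeleton using only those 
generic FileSystem APIs (illustrative only: the method and its parameters are 
made up, the parallel range copy is elided, and {{FileSystem#concat}} may throw 
UnsupportedOperationException on file systems that don't implement it):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: enumerate source block ranges through the generic FileSystem API,
// copy each range to a chunk file in parallel (elided), then stitch the
// chunks back together at the destination.
static void copyByBlocks(FileSystem srcFs, Path src,
    FileSystem dstFs, Path target, Path[] chunks) throws IOException {
  FileStatus st = srcFs.getFileStatus(src);
  BlockLocation[] blocks = srcFs.getFileBlockLocations(st, 0, st.getLen());
  // ... copy each block's [offset, length) range to chunks[i] in parallel ...
  // concat is an optional FileSystem operation; implementations that don't
  // support it throw UnsupportedOperationException.
  dstFs.concat(target, chunks);
}
{code}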
{quote}
The current patch is for trunk, where client and server code are separated. When 
we backport this change to other versions of Hadoop, we can make the change 
accordingly, for example, to use DFSUtil.
{quote}
You could just use the default constructor, which internally resolves the 
NameNode address:
{code}
final DFSClient dfs = new DFSClient(conf);
{code}
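
Alternatively, a sketch with the NameNode URI made explicit (assuming 
{{fs.defaultFS}} points at the target cluster; this avoids referencing 
trunk-only helper classes):
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSClient;

// Sketch: resolve the NameNode URI from fs.defaultFS rather than through
// DFSUtilClient, which only exists on trunk.
Configuration conf = new Configuration();
URI nnUri = FileSystem.getDefaultUri(conf);
DFSClient dfs = new DFSClient(nnUri, conf);
{code}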

> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a long, long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222).






[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935741#comment-15935741
 ] 

John Zhuge commented on HADOOP-14195:
-

Sure, I will review the patch in the next few days. Thanks for the hard work; 
it is tough to reliably reproduce race conditions.

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Javadoc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread() + ": No CredentialProviderFactory for " + uri);
>       } else {
>         System.out.println(Thread.currentThread().getName() + " found credentialProvider for " + path);
>       }
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-21 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935706#comment-15935706
 ] 

Allen Wittenauer commented on HADOOP-10738:
---

In other words, MAPREDUCE-5653.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Updated] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-21 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13715:

Attachment: HADOOP-13715.03.patch

Thanks [~andrew.wang], [~ste...@apache.org] for the review. Attached v03 patch 
with the following changes. Please take a look.

I see FileStatus#toString() extended by S3 and Swift, but not displayed as-is 
on the shell. The Stat command uses FileStatus to construct its own customized 
string, and a few more commands use FileStatus to get the path or other 
attributes. Removed the enhancement to FileStatus#toString() based on the 
discussion.

Added TestHttpFSServerECFileStatus to verify HttpFS FileStatus with EC files.

PS: The previous precheckin run showed compilation and symbol-not-found errors. 
My clean build does not show any of these problems. Not sure what went wrong 
with the module dependencies in the previous Jenkins run. Will monitor the new 
precheckin run.
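
For reference, downstream usage of the new API is just a flag check on the 
status object; a rough sketch (the path and the distcp behavior are 
illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: tools ask the FileStatus whether a file is erasure coded and
// branch accordingly (e.g. distcp skipping replication preservation).
FileSystem fs = FileSystem.get(new Configuration());
FileStatus st = fs.getFileStatus(new Path("/data/file"));
if (st.isErasureCoded()) {
  // handle the EC file differently, e.g. don't preserve replication factor
}
{code}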

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch, 
> HADOOP-13715.03.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if it needs to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.






[jira] [Commented] (HADOOP-14211) ChRootedFs is too aggressive about enforcing "authorityNeeded"

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935662#comment-15935662
 ] 

Hadoop QA commented on HADOOP-14211:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14211 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859824/HADOOP-14211.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 452e5a3cf6b5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f462e1f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11867/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11867/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11867/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ChRootedFs is too aggressive about enforcing "authorityNeeded"
> --
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
> 

[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2017-03-21 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935657#comment-15935657
 ] 

Tsuyoshi Ozawa commented on HADOOP-10101:
-

[~hitesh] thanks for sharing the information. IIUC, is the problematic method 
AMRMClient#waitFor? It takes a com.google.common.base.Supplier instance as an 
argument.
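
For reference, a sketch of a caller hitting that signature (the surrounding 
method is made up for illustration):
{code:java}
import com.google.common.base.Supplier;
import org.apache.hadoop.yarn.client.api.AMRMClient;

// Sketch: com.google.common.base.Supplier appears in the public waitFor
// signature, so a Guava upgrade can break downstream apps compiled against
// the older Supplier class.
void waitUntilDone(AMRMClient<?> client, final boolean[] done)
    throws InterruptedException {
  client.waitFor(new Supplier<Boolean>() {
    @Override
    public Boolean get() {
      return done[0];
    }
  });
}
{code}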

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.patch, 
> HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This 






[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-21 Thread Fei Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935653#comment-15935653
 ] 

Fei Hui commented on HADOOP-10738:
--

[~arpitagarwal] Thanks for the reply. This was discussed in HADOOP-14176.
I need to reconfigure {{mapreduce.map.memory.mb}} and {{mapreduce.map.java.opts}} 
to make distcp succeed.
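
For example, something along these lines on the command line (the values and 
paths are illustrative):
{code}
hadoop distcp \
  -Dmapreduce.map.memory.mb=4096 \
  -Dmapreduce.map.java.opts=-Xmx3276m \
  hdfs://src-cluster/src/path hdfs://dst-cluster/dst/path
{code}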

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version

2017-03-21 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935643#comment-15935643
 ] 

Tsuyoshi Ozawa commented on HADOOP-10101:
-

Since HADOOP-14187 has been merged, I'm resubmitting to Jenkins CI.

> Update guava dependency to the latest version
> -
>
> Key: HADOOP-10101
> URL: https://issues.apache.org/jira/browse/HADOOP-10101
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Rakesh R
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10101-002.patch, HADOOP-10101-004.patch, 
> HADOOP-10101-005.patch, HADOOP-10101-006.patch, HADOOP-10101-007.patch, 
> HADOOP-10101-008.patch, HADOOP-10101-009.patch, HADOOP-10101-009.patch, 
> HADOOP-10101-010.patch, HADOOP-10101-010.patch, HADOOP-10101-011.patch, 
> HADOOP-10101.012.patch, HADOOP-10101.013.patch, HADOOP-10101.patch, 
> HADOOP-10101.patch
>
>
> The existing guava version is 11.0.2, which is quite old. This 






[jira] [Updated] (HADOOP-14211) ChRootedFs is too aggressive about enforcing "needsAuthority"

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Status: Patch Available  (was: Open)

> ChRootedFs is too aggressive about enforcing "needsAuthority"
> -
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch
>
>
> Right now {{ChRootedFs}} passes the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS itself is an example 
> of this. In fact you will encounter this issue if you try to nest one ViewFS 
> within another - I can't think of any reason why you would want to do that 
> but there's no reason why you shouldn't be able to and in general ViewFS is 
> making an assumption that it then proves invalid by its own behavior. The 
> {{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
> already an instantiated {{AbstractFileSystem}} which means it has already 
> used the same constructor with the value of {{authorityNeeded}} (and 
> corresponding validation) that it actually requires.






[jira] [Updated] (HADOOP-14211) ChRootedFs is too aggressive about enforcing "authorityNeeded"

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Summary: ChRootedFs is too aggressive about enforcing "authorityNeeded"  
(was: ChRootedFs is too aggressive about enforcing "needsAuthority")

> ChRootedFs is too aggressive about enforcing "authorityNeeded"
> --
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch
>
>
> Right now {{ChRootedFs}} passes the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS itself is an example 
> of this. In fact you will encounter this issue if you try to nest one ViewFS 
> within another - I can't think of any reason why you would want to do that 
> but there's no reason why you shouldn't be able to and in general ViewFS is 
> making an assumption that it then proves invalid by its own behavior. The 
> {{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
> already an instantiated {{AbstractFileSystem}} which means it has already 
> used the same constructor with the value of {{authorityNeeded}} (and 
> corresponding validation) that it actually requires.






[jira] [Updated] (HADOOP-14211) ChRootedFs is too aggressive about enforcing "needsAuthority"

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Attachment: HADOOP-14211.000.patch

Attaching a patch which simply passes {{false}} for {{authorityNeeded}}. It 
includes a unit test which demonstrates the issue by nesting ViewFS, though I'm 
not attached to that if it doesn't seem necessary...
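
In other words, the superconstructor call quoted in the description would 
become (a sketch of the one-line change):
{code}
super(fs.getUri(), fs.getUri().getScheme(),
    false /* authorityNeeded */, fs.getUriDefaultPort());
{code}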

Pinging [~sanjay.radia] based on original authorship; [~manojg] / 
[~andrew.wang] based on recent involvement in other ViewFS-related work.

> ChRootedFs is too aggressive about enforcing "needsAuthority"
> -
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HADOOP-14211.000.patch
>
>
> Right now {{ChRootedFs}} passes the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS itself is an example 
> of this. In fact you will encounter this issue if you try to nest one ViewFS 
> within another - I can't think of any reason why you would want to do that 
> but there's no reason why you shouldn't be able to and in general ViewFS is 
> making an assumption that it then proves invalid by its own behavior. The 
> {{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
> already an instantiated {{AbstractFileSystem}} which means it has already 
> used the same constructor with the value of {{authorityNeeded}} (and 
> corresponding validation) that it actually requires.






[jira] [Updated] (HADOOP-14211) ChRootedFs is too aggressive about enforcing "needsAuthority"

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14211:
-
Description: 
Right now {{ChRootedFs}} passes the following up to the {{AbstractFileSystem}} 
superconstructor:
{code}
super(fs.getUri(), fs.getUri().getScheme(),
fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
{code}
This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
authority, but this isn't necessarily the case - ViewFS itself is an example of 
this. In fact you will encounter this issue if you try to nest one ViewFS 
within another - I can't think of any reason why you would want to do that but 
there's no reason why you shouldn't be able to and in general ViewFS is making 
an assumption that it then proves invalid by its own behavior. The 
{{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
already an instantiated {{AbstractFileSystem}} which means it has already used 
the same constructor with the value of {{authorityNeeded}} (and corresponding 
validation) that it actually requires.

  was:
Right now {{ChRootedFs}} passes the following up to the {{AbstractFileSystem}} 
superconstructor:
{code}
super(fs.getUri(), fs.getUri().getScheme(),
fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
{code}
This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
authority, but this isn't necessarily the case--ViewFS itself is an example of 
this. In fact you will encounter this issue if you try to nest one ViewFS 
within another--I can't think of any reason why you would want to do that but 
there's no reason why you shouldn't be able to and in general ViewFS is making 
an assumption that it then proves invalid by its own behavior. The 
{{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
already an instantiated {{AbstractFileSystem}} which means it has already used 
the same constructor with the value of {{authorityNeeded}} (and corresponding 
validation) that it actually requires.


> ChRootedFs is too aggressive about enforcing "needsAuthority"
> -
>
> Key: HADOOP-14211
> URL: https://issues.apache.org/jira/browse/HADOOP-14211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> Right now {{ChRootedFs}} passes the following up to the 
> {{AbstractFileSystem}} superconstructor:
> {code}
> super(fs.getUri(), fs.getUri().getScheme(),
> fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
> {code}
> This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
> authority, but this isn't necessarily the case - ViewFS itself is an example 
> of this. In fact you will encounter this issue if you try to nest one ViewFS 
> within another - I can't think of any reason why you would want to do that 
> but there's no reason why you shouldn't be able to and in general ViewFS is 
> making an assumption that it then proves invalid by its own behavior. The 
> {{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
> already an instantiated {{AbstractFileSystem}} which means it has already 
> used the same constructor with the value of {{authorityNeeded}} (and 
> corresponding validation) that it actually requires.






[jira] [Created] (HADOOP-14211) ChRootedFs is too aggressive about enforcing "needsAuthority"

2017-03-21 Thread Erik Krogen (JIRA)
Erik Krogen created HADOOP-14211:


 Summary: ChRootedFs is too aggressive about enforcing 
"needsAuthority"
 Key: HADOOP-14211
 URL: https://issues.apache.org/jira/browse/HADOOP-14211
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.6.0
Reporter: Erik Krogen
Assignee: Erik Krogen


Right now {{ChRootedFs}} passes the following up to the {{AbstractFileSystem}} 
superconstructor:
{code}
super(fs.getUri(), fs.getUri().getScheme(),
fs.getUri().getAuthority() != null, fs.getUriDefaultPort());
{code}
This passes a value of {{authorityNeeded==true}} for any {{fs}} which has an 
authority, but this isn't necessarily the case--ViewFS itself is an example of 
this. In fact you will encounter this issue if you try to nest one ViewFS 
within another--I can't think of any reason why you would want to do that but 
there's no reason why you shouldn't be able to and in general ViewFS is making 
an assumption that it then proves invalid by its own behavior. The 
{{authorityNeeded}} check isn't necessary in this case anyway; {{fs}} is 
already an instantiated {{AbstractFileSystem}} which means it has already used 
the same constructor with the value of {{authorityNeeded}} (and corresponding 
validation) that it actually requires.






[jira] [Updated] (HADOOP-14210) Directories are not listed recursively when fs.defaultFs is viewFs

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-14210:
-
Component/s: viewfs

> Directories are not listed recursively when fs.defaultFs is viewFs
> --
>
> Key: HADOOP-14210
> URL: https://issues.apache.org/jira/browse/HADOOP-14210
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.0
>Reporter: Ajith S
>  Labels: viewfs
> Attachments: HDFS-8413.patch
>
>
> Mount a cluster on the client through the viewFs mount table.
> Example:
> {quote}
> <property>
>   <name>fs.defaultFS</name>
>   <value>viewfs:///</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./nn1</name>
>   <value>hdfs://ns1/</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./user</name>
>   <value>hdfs://host-72:8020/</value>
> </property>
> {quote}
> Try to list the files recursively *(hdfs dfs -ls -R / or hadoop fs -ls -R /)*; 
> only the parent folders are listed.






[jira] [Assigned] (HADOOP-14210) Directories are not listed recursively when fs.defaultFs is viewFs

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen reassigned HADOOP-14210:


 Assignee: (was: Ajith S)
Affects Version/s: (was: 2.7.0)
   2.7.0
  Key: HADOOP-14210  (was: HDFS-8413)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Directories are not listed recursively when fs.defaultFs is viewFs
> --
>
> Key: HADOOP-14210
> URL: https://issues.apache.org/jira/browse/HADOOP-14210
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Ajith S
>  Labels: viewfs
> Attachments: HDFS-8413.patch
>
>
> Mount a cluster on the client through the viewFs mount table.
> Example:
> {quote}
> <property>
>   <name>fs.defaultFS</name>
>   <value>viewfs:///</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./nn1</name>
>   <value>hdfs://ns1/</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./user</name>
>   <value>hdfs://host-72:8020/</value>
> </property>
> {quote}
> Try to list the files recursively *(hdfs dfs -ls -R / or hadoop fs -ls -R /)*; 
> only the parent folders are listed.






[jira] [Updated] (HADOOP-11034) ViewFileSystem is missing getStatus(Path)

2017-03-21 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HADOOP-11034:
-
   Resolution: Duplicate
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

This got fixed as part of HDFS-11058.

> ViewFileSystem is missing getStatus(Path)
> -
>
> Key: HADOOP-11034
> URL: https://issues.apache.org/jira/browse/HADOOP-11034
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Reporter: Gary Steelman
>Assignee: Gary Steelman
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-11034.2.patch, HADOOP-11034-trunk-1.patch
>
>
> This patch implements ViewFileSystem#getStatus(Path), which is currently 
> unimplemented.
> getStatus(Path) should return the FsStatus of the FileSystem backing the 
> path. Currently it returns the same as getStatus(), which is a default 
> Long.MAX_VALUE for capacity, 0 used, and Long.MAX_VALUE for remaining space. 






[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-03-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935473#comment-15935473
 ] 

Yongjun Zhang commented on HADOOP-11794:


Thanks much for reviewing and trying it out, [~ste...@apache.org] and [~omkarksa]!

{quote}
this is an opportunity to switch distcp over to using the slf4j logger class; 
existing logging can be left alone, but all new logs can switch to the inline 
logging
{quote}
Since this jira has been going on for a long time, I hope we can address the 
logger issue in a separate follow-up jira.

{quote}
What does "YJD ls before distcp" in tests mean?
{quote}
Good catch, I forgot to drop some debugging stuff from the test code. Will fix 
in the next rev.

{quote}
Does it still cleanup on a failure? If not, what is the final state of the call 
& does it matter
{quote}
It does not really matter since the test failed, but cleaning it up would be 
OK too.

{quote}
in the s3a tests we now have a -Pscale profile for scalable tests, and can set 
file sizes. It might be nice to have here, but it's a complex piece of work: 
not really justifiable except as a bigger set of scale tests
{quote}
Scale testing is a good thing to do; the unit tests in this patch mostly focus 
on functionality.

{quote}
5. Observed following compatibility issues:
a. You are checking for instance of DistributedFileSystem in many places and 
all other FileSystem implementations don't implement DistributedFileSystem.
i. Could this be changed to something more compatible with other 
implementations of FileSystem?
{quote}
The main reason for checking DistributedFileSystem is its support for 
getBlockLocations and the concat feature. I'm not sure whether we can assume 
other file systems support that.
{quote}
b. You are using the new DFSUtilClient, which makes DistCp incompatible with 
older versions of Hadoop.
i. Can this be changed to be backward compatible?
{quote}
The current patch is for trunk, where client and server code are separated. When 
we backport this change to other versions of Hadoop, we can make the change 
accordingly, for example, to use DFSUtil.

{quote}
6. If the compatibility issues are addressed, the new DistCp with your feature 
would be available for other FileSystem implementations as well as backward 
compatible.
a. I was able to make little modifications to your patch and got it working 
with ADLS.
{quote}
Good work there! Glad to hear that it works for you with only small 
modifications. I think we can probably commit this patch first, and then do 
other work as improvement jiras.

Thanks again!
 



  



> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a long, long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222).






[jira] [Commented] (HADOOP-14207) "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler

2017-03-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935390#comment-15935390
 ] 

Xiaoyu Yao commented on HADOOP-14207:
-

Thanks [~surendrasingh] for reporting the issue and proposing the fix. The 
patch looks good to me overall. Just a few comments:

*DecayRpcScheduler.java*
Line 236: we can remove this line and consolidate the registration inside the 
MetricsProxy constructor.

Line 704: we should also unregister the MBeans similarly.

Can you add a test case in TestRefreshCallQueue with DecayRpcScheduler?
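
To illustrate the register/unregister pairing being suggested, a hypothetical 
sketch (not the actual patch; the constructor shape and field names here are 
assumptions):
{code:java}
// Hypothetical sketch: register the metrics source exactly once, in the
// MetricsProxy constructor, and unregister it on teardown so that a later
// "dfsadmin -refreshCallQueue" can construct a fresh DecayRpcScheduler
// without hitting "Metrics source ... already exists!".
private MetricsProxy(String namespace) {
  this.name = "DecayRpcSchedulerMetrics2." + namespace;
  DefaultMetricsSystem.instance().register(name, "DecayRpcScheduler", this);
}

void unregisterSource() {
  DefaultMetricsSystem.instance().unregisterSource(name);
}
{code}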

> "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler
> --
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: HADOOP-14207.001.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}






[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935332#comment-15935332
 ] 

Andrew Wang commented on HADOOP-13715:
--

I found the toStringStable JIRA I was thinking about: HADOOP-13150.

I don't think FileStatus#toString is dumped to the shell anywhere, but Manoj, 
do you mind verifying? 3.0 is a new major release, but we have other ways of 
exposing this besides toString, and there's no point needlessly changing stable 
interfaces.

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if it needs to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.






[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Attachment: HADOOP-13786-HADOOP-13345-017.patch

Patch 017:
* More tests of what happens in various abort sequences; I'll next have the 
staging task committers provide a way to locate their local dirs, so I can 
verify they get deleted.
* Minor doc tuning... turning off the TOC macro gets it to render.
* New ITest case testAbortJobNotTask shows that the Magic committer's 
jobAbort() doesn't reliably abort pending requests from tasks. While I'm 
focusing on the staging committers, I do want this to at least be passing the 
basic tests.

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Open  (was: Patch Available)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935109#comment-15935109
 ] 

Vihang Karajgaonkar commented on HADOOP-14195:
--

Thanks [~jzhuge] for letting me know. Can you please review the patch? I could 
not add a test case which reproduces the race condition consistently. If you 
have any ideas, please let me know and I would be happy to add it as a 
test case. I have tried the following approaches, but I am not getting the 
expected results consistently.

1. Introduce a test case in {{TestCredentialProviderFactory}} which spawns a 
thread pool that calls {{CredentialProviderFactory.getProviders(conf)}} from 
each thread. --> This test case works consistently when run individually but 
produces false positives when run along with other tests (I think JUnit 
schedules the threads along with other test cases such that the race condition 
is not reproduced).
2. Call {{CredentialProviderFactory.getProviders(conf)}} in a loop from each 
thread 100 times to improve the chances of hitting the race condition. But it 
didn't help either.
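
For what it's worth, a minimal sketch of one standard way to serialize the 
lookup, assuming all callers share one ServiceLoader (this is only an 
illustration, not the attached patch):
{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

// Sketch: java.util.ServiceLoader is documented as not thread-safe, so all
// racing threads must iterate it under a common lock.
static List<CredentialProvider> getProvidersSafely(
    ServiceLoader<CredentialProviderFactory> serviceLoader,
    URI uri, Configuration conf) throws IOException {
  List<CredentialProvider> result = new ArrayList<>();
  synchronized (serviceLoader) {
    for (CredentialProviderFactory factory : serviceLoader) {
      CredentialProvider kp = factory.createProvider(uri, conf);
      if (kp != null) {
        result.add(kp);
        break;
      }
    }
  }
  return result;
}
{code}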

Thanks!

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Javadoc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       boolean found = false;
>       for (CredentialProviderFactory factory : serviceLoader) {
>         CredentialProvider kp = factory.createProvider(uri, conf);
>         if (kp != null) {
>           result.add(kp);
>           found = true;
>           break;
>         }
>       }
>       if (!found) {
>         throw new IOException(Thread.currentThread() + ": No CredentialProviderFactory for " + uri);
>       } else {
>         System.out.println(Thread.currentThread().getName() + " found credentialProvider for " + path);
>       }
>       return null;
>     }
>   }));
> }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see an NPE sometimes:
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at 

[jira] [Resolved] (HADOOP-7847) branch-1 has findbugs warnings

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HADOOP-7847.
--
Resolution: Won't Fix
  Assignee: Daniel Templeton

Closing since branch-1 is no longer actively maintained.

> branch-1 has findbugs warnings
> --
>
> Key: HADOOP-7847
> URL: https://issues.apache.org/jira/browse/HADOOP-7847
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Eli Collins
>Assignee: Daniel Templeton
>  Labels: newbie
>
> A month or so ago test-patch used to run cleanly. There are now two findbugs 
> warnings.






[jira] [Resolved] (HADOOP-8939) Backport HADOOP-7457: remove cn-docs from branch-1

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HADOOP-8939.
--
Resolution: Won't Fix
  Assignee: Daniel Templeton

Gonna say we're not doing backports to branch-1 anymore.


> Backport HADOOP-7457: remove cn-docs from branch-1
> --
>
> Key: HADOOP-8939
> URL: https://issues.apache.org/jira/browse/HADOOP-8939
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Daniel Templeton
>  Labels: newbie
>
> The cn-docs in branch-1 are also outdated.






[jira] [Updated] (HADOOP-9849) License information is missing

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-9849:
-
Priority: Critical  (was: Major)

> License information is missing
> --
>
> Key: HADOOP-9849
> URL: https://issues.apache.org/jira/browse/HADOOP-9849
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Timothy St. Clair
>Priority: Critical
>  Labels: newbie
>
> The following files are licensed under the BSD license but the BSD
> license is not part of the distribution:
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
> I believe this file is BSD as well:
> hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h






[jira] [Updated] (HADOOP-9849) License information is missing

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-9849:
-
Target Version/s: 3.0.0-beta1  (was: 2.1.1-beta)

> License information is missing
> --
>
> Key: HADOOP-9849
> URL: https://issues.apache.org/jira/browse/HADOOP-9849
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Timothy St. Clair
>Priority: Critical
>  Labels: newbie
>
> The following files are licensed under the BSD license but the BSD
> license is not part of the distribution:
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
> I believe this file is BSD as well:
> hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h






[jira] [Updated] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-03-21 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-14209:
-
Affects Version/s: 2.9.0

> Remove @Ignore from valid S3a test.
> ---
>
> Key: HADOOP-14209
> URL: https://issues.apache.org/jira/browse/HADOOP-14209
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Steve Moist
>Assignee: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> The class org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation is 
> ignored through the @Ignore annotation, this should be removed as it is a 
> valid test class.  This was a minor mistake introduced during development of 
> HADOOP-13075.






[jira] [Commented] (HADOOP-9849) License information is missing

2017-03-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935083#comment-15935083
 ] 

Daniel Templeton commented on HADOOP-9849:
--

This issue seems largely addressed by the current LICENSE.txt file, except for 
the BSD license on the CRC code.  From LICENSE.txt:

{noformat}
For portions of the native implementation of slicing-by-8 CRC calculation
in src/main/native/src/org/apache/hadoop/util:

/**
 *   Copyright 2008,2009,2010 Massachusetts Institute of Technology.
 *   All rights reserved. Use of this source code is governed by a
 *   BSD-style license that can be found in the LICENSE file.
 */
{noformat}

There is no "LICENSE file" included that contains the appropriate BSD license.  
That segment of text looks like it was just copied from the header of the CRC 
files themselves.  Bumping the priority, because licensing is important.

> License information is missing
> --
>
> Key: HADOOP-9849
> URL: https://issues.apache.org/jira/browse/HADOOP-9849
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Timothy St. Clair
>  Labels: newbie
>
> The following files are licensed under the BSD license but the BSD
> license is not part of the distribution:
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4.c
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
> I believe this file is BSD as well:
> hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935078#comment-15935078
 ] 

John Zhuge commented on HADOOP-14195:
-

Both unit test failures are known. TestSFTPFileSystem#testFileExists has failed 
often lately; it is tracked by HADOOP-14206.

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
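
One way to make the lookup safe (a sketch only, not necessarily what the attached patches do) is to serialize access to the shared loader, since ServiceLoader's lazy iterator mutates internal state:

{code:java}
import java.io.IOException;
import java.net.URI;
import java.util.ServiceLoader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

public final class SafeProviderLookup {
  private static final ServiceLoader<CredentialProviderFactory> LOADER =
      ServiceLoader.load(CredentialProviderFactory.class);

  public static CredentialProvider find(URI uri, Configuration conf)
      throws IOException {
    // Guard both iteration and lazy service loading with one lock.
    synchronized (LOADER) {
      for (CredentialProviderFactory factory : LOADER) {
        CredentialProvider provider = factory.createProvider(uri, conf);
        if (provider != null) {
          return provider;
        }
      }
    }
    throw new IOException("No CredentialProviderFactory for " + uri);
  }
}
{code}

An alternative is a fresh ServiceLoader per call, which trades repeated classpath scanning for lock-free reads.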



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935076#comment-15935076
 ] 

Hadoop QA commented on HADOOP-14195:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 15s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14195 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859759/HADOOP-14195.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 29ad58d90a44 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2841666 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11866/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11866/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11866/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common

[jira] [Assigned] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned HADOOP-14208:
-

Assignee: Daniel Templeton

> Fix typo in the top page in branch-2.8
> --
>
> Key: HADOOP-14208
> URL: https://issues.apache.org/jira/browse/HADOOP-14208
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Akira Ajisaka
>Assignee: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> There is a typo in the summary of the release.
> {noformat:title=index.md.vm}
> *   Allow node labels get specificed in submitting MR jobs
> {noformat}
> specificed should be specified.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14209:
--
Labels: newbie  (was: )

> Remove @Ignore from valid S3a test.
> ---
>
> Key: HADOOP-14209
> URL: https://issues.apache.org/jira/browse/HADOOP-14209
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Moist
>Priority: Trivial
>  Labels: newbie
>
> The class org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation is 
> ignored through the @Ignore annotation; this should be removed, as it is a 
> valid test class.  This was a minor mistake introduced during development of 
> HADOOP-13075.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-03-21 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned HADOOP-14209:
-

Assignee: Daniel Templeton

> Remove @Ignore from valid S3a test.
> ---
>
> Key: HADOOP-14209
> URL: https://issues.apache.org/jira/browse/HADOOP-14209
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Moist
>Assignee: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
>
> The class org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation is 
> ignored through the @Ignore annotation; this should be removed, as it is a 
> valid test class.  This was a minor mistake introduced during development of 
> HADOOP-13075.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-03-21 Thread Steve Moist (JIRA)
Steve Moist created HADOOP-14209:


 Summary: Remove @Ignore from valid S3a test.
 Key: HADOOP-14209
 URL: https://issues.apache.org/jira/browse/HADOOP-14209
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/s3
Affects Versions: 3.0.0-alpha2
Reporter: Steve Moist
Priority: Trivial


The class org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation is 
ignored through the @Ignore annotation; this should be removed, as it is a valid 
test class.  This was a minor mistake introduced during development of 
HADOOP-13075.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-03-21 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935012#comment-15935012
 ] 

Steve Moist commented on HADOOP-14209:
--

This is in trunk and branch-2 and should be fixed in both places.

> Remove @Ignore from valid S3a test.
> ---
>
> Key: HADOOP-14209
> URL: https://issues.apache.org/jira/browse/HADOOP-14209
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Moist
>Priority: Trivial
>
> The class org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation is 
> ignored through the @Ignore annotation; this should be removed, as it is a 
> valid test class.  This was a minor mistake introduced during development of 
> HADOOP-13075.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14187) Update ZooKeeper dependency to 3.4.9 and Curator dependency to 2.12.0

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935008#comment-15935008
 ] 

Hudson commented on HADOOP-14187:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11436 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11436/])
HADOOP-14187. Update ZooKeeper dependency to 3.4.9 and Curator (aajisaka: rev 
74af0bdf68d56f02e92dad2411b26d1cde3dc703)
* (edit) hadoop-project/pom.xml
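
The committed change is confined to that one pom; presumably it bumps the dependency version properties along these lines (the property names here are an assumption, not copied from the patch):

{noformat}
<!-- hadoop-project/pom.xml, sketch -->
<zookeeper.version>3.4.9</zookeeper.version>
<curator.version>2.12.0</curator.version>
{noformat}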


> Update ZooKeeper dependency to 3.4.9 and Curator dependency to 2.12.0
> -
>
> Key: HADOOP-14187
> URL: https://issues.apache.org/jira/browse/HADOOP-14187
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14187.001.patch
>
>
> This is an update for using Apache Curator, which shades guava.
> Why is Curator updated to 2.12.0 instead of 3.3.0?
> It's because Curator 3.x only supports ZooKeeper 3.5.x. ZooKeeper 3.5.x is 
> still an alpha release. Hence, I think we should make a conservative choice here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2017-03-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935001#comment-15935001
 ] 

Arpit Agarwal commented on HADOOP-10738:


Thanks for the clarification [~raviprak].

[~ferhui] do you have any examples of config parameters that are specific to a 
source-destination pair?

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.
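
To make the proposal concrete: the idea is to let DistCp overlay a site file on its bundled defaults, the way core-site.xml overlays core-default.xml. A hypothetical sketch (DistCp does not necessarily load such a resource today):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DistCpConfExample {
  public static Configuration loadDistCpConf() {
    Configuration conf = new Configuration();
    conf.addResource("distcp-default.xml"); // bundled defaults
    conf.addResource("distcp-site.xml");    // proposed per-site override
    return conf;
  }
}
{code}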



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-21 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934990#comment-15934990
 ] 

John Zhuge commented on HADOOP-14205:
-

The following unit test failures in {{branch-2.8.0}} are also caused by this 
JIRA:
{noformat}
  TestAzureADTokenProvider.testExcludeAllProviderTypesFromConfig:274->excludeAndTestExpectations:283
 expected: but was:
  TestAzureADTokenProvider.testCredentialProviderPathExclusions:261->excludeAndTestExpectations:283
 expected: but was:
{noformat}

Create {{hadoop-tools/hadoop-azure-datalake/src/test/resources/core-site.xml}} 
with these 2 properties, and the tests will pass:
{noformat}
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>

<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
{noformat}
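
Equivalently, just as a sketch, a test could inject the same two bindings programmatically instead of shipping a test core-site.xml:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class AdlTestConf {
  public static Configuration withAdlBindings() {
    Configuration conf = new Configuration();
    conf.set("fs.adl.impl", "org.apache.hadoop.fs.adl.AdlFileSystem");
    conf.set("fs.AbstractFileSystem.adl.impl", "org.apache.hadoop.fs.adl.Adl");
    return conf;
  }
}
{code}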

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 2.9.0, 2.8.1
>
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} misses property {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14187) Update ZooKeeper dependency to 3.4.9 and Curator dependency to 2.12.0

2017-03-21 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14187:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~ozawa] for the contribution.

> Update ZooKeeper dependency to 3.4.9 and Curator dependency to 2.12.0
> -
>
> Key: HADOOP-14187
> URL: https://issues.apache.org/jira/browse/HADOOP-14187
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14187.001.patch
>
>
> This is an update for using Apache Curator, which shades guava.
> Why is Curator updated to 2.12.0 instead of 3.3.0?
> It's because Curator 3.x only supports ZooKeeper 3.5.x. ZooKeeper 3.5.x is 
> still an alpha release. Hence, I think we should make a conservative choice here.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13966) Add ability to start DDB local server in every test

2017-03-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13966:
--

   Assignee: Mingliang Liu
Component/s: fs/s3
Summary: Add ability to start DDB local server in every test  (was: add 
ability to start DDB local server in every test)

> Add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HADOOP-13966-HADOOP-13345.000.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934937#comment-15934937
 ] 

Steve Loughran commented on HADOOP-13715:
-

LGTM. I worry that the webhdfs test will have the delay of an extra cluster 
launch in the method, but looking at the existing code and the many different 
cluster configs they want, moving to a static one wouldn't work. What could be 
done in future would be to identify those tests which could all share the same 
instance. Minor detail, not something I'd expect for this patch.
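
The shared-instance idea would look roughly like this; the sketch assumes the participating tests can all tolerate one cluster config:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class SharedClusterExample {
  private static MiniDFSCluster cluster;

  @BeforeClass
  public static void startCluster() throws IOException {
    // One cluster for the whole class instead of one per test method.
    cluster = new MiniDFSCluster.Builder(new Configuration()).build();
  }

  @AfterClass
  public static void stopCluster() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}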




> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.
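
Usage-wise, a downstream caller would query the new predicate off a FileStatus. A hypothetical sketch (the branch taken is illustrative, not from the patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcAwareTool {
  static boolean needsSpecialHandling(FileSystem fs, Path path)
      throws IOException {
    FileStatus status = fs.getFileStatus(path);
    // e.g. distcp could branch on the erasure-coding bit here
    return status.isErasureCoded();
  }
}
{code}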



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13715) Add isErasureCoded() API to FileStatus class

2017-03-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934916#comment-15934916
 ] 

Steve Loughran commented on HADOOP-13715:
-

toString here is something that surfaces in diagnostics logs more than anything 
else, especially test failures. I don't think it's one of the "Sacred" paths. 
It's extended by the swift and s3 subclasses, and nobody has complained about 
that.

> Add isErasureCoded() API to FileStatus class
> 
>
> Key: HADOOP-13715
> URL: https://issues.apache.org/jira/browse/HADOOP-13715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13715.01.patch, HADOOP-13715.02.patch
>
>
> Per the discussion in 
> [HDFS-10971|https://issues.apache.org/jira/browse/HDFS-10971?focusedCommentId=15567108=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15567108]
>  I would like to add a new API {{isErasureCoded()}} to {{FileStatus}} so that 
> tools and downstream applications can tell if they need to treat a file 
> differently.
> Hadoop tools that can benefit from this effort include: distcp and 
> teragen/terasort.
> Downstream applications such as flume or hbase may also benefit from it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HADOOP-14195:
-
Attachment: HADOOP-14195.003.patch

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14195) CredentialProviderFactory is not thread-safe

2017-03-21 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934899#comment-15934899
 ] 

Vihang Karajgaonkar commented on HADOOP-14195:
--

From the console logs at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11863/console it looks like 
pre-commit tried to apply the test Java application instead of the patch. 
Re-attaching the patch.

> CredentialProviderFactory is not thread-safe
> 
>
> Key: HADOOP-14195
> URL: https://issues.apache.org/jira/browse/HADOOP-14195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HADOOP-14195.001.patch, HADOOP-14195.002.patch, 
> HADOOP-14195.003.patch, TestCredentialProvider.java
>
>
> Multi-threaded access to CredentialProviderFactory is not thread-safe because 
> {{java.util.ServiceLoader}} is not thread-safe (as noted in its Java doc). 
> Thanks to [~jzhuge] I was able to reproduce this issue by creating a simple 
> multi-threaded application which executes the following code in parallel.
> {code:java}
> for (int i = 0; i < ITEMS; i++) {
>   futures.add(executor.submit(new Callable() {
>   @Override
>   public Void call() throws Exception {
>   boolean found = false;
>   for (CredentialProviderFactory factory : serviceLoader) {
>   CredentialProvider kp = factory.createProvider(uri, 
> conf);
>   if (kp != null) {
>   result.add(kp);
>   found = true;
>   break;
>   }
>   }
>   if (!found) {
>   throw new IOException(Thread.currentThread() + "No 
> CredentialProviderFactory for " + uri);
>   } else {
>   System.out.println(Thread.currentThread().getName() + " 
> found credentialProvider for " + path);
>   }
>   return null;
>   }
>   }));
>   }
> {code}
> I see the following exception trace when I execute the above code.
> {code:java}
> java.util.concurrent.ExecutionException: java.util.NoSuchElementException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.util.NoSuchElementException
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:615)
>   at java.net.URLClassLoader$3.nextElement(URLClassLoader.java:590)
>   at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:61)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I also see a NPE sometimes 
> {code:java}
> java.util.concurrent.ExecutionException: java.lang.NullPointerException
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at TestCredentialProvider.main(TestCredentialProvider.java:58)
> Caused by: java.lang.NullPointerException
>   at java.util.ServiceLoader.parse(ServiceLoader.java:304)
>   at java.util.ServiceLoader.access$200(ServiceLoader.java:185)
>   at 
> java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357)
>   at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
>   at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:38)
>   at TestCredentialProvider$1.call(TestCredentialProvider.java:1)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HADOOP-14207) "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934897#comment-15934897
 ] 

Hadoop QA commented on HADOOP-14207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 29s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14207 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859750/HADOOP-14207.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f9843513ab78 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2841666 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11865/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11865/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11865/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11865/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler
> 

[jira] [Comment Edited] (HADOOP-13966) add ability to start DDB local server in every test

2017-03-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934842#comment-15934842
 ] 

Steve Loughran edited comment on HADOOP-13966 at 3/21/17 4:41 PM:
--

# don't worry about the logout, JVM termination will do that
# do skip setting the sqlite sysprop if already set
# we need to make sure that the sqlite property is unique for multiple JVMs 
running in parallel. That could be done by using the system property 
test.build.data, which is customised in the maven parallel test runner, 
falling back to "target" if unset.
# {{startSingletonServer}} to throw exceptions
# oh, and use the OS specific "/" path separator character
# maybe mention in test docs "how to keep costs down"




was (Author: ste...@apache.org):
# don't worry about the logout, JVM termination will do that
# do skip setting the sqlite sysprop if already set
# we need to make sure that the sqlite property is unique for multiple JVMs 
running in parallel. That could be done by using the system property 
test.build.data, which is customised in the maven parallel test runner, 
falling back to "target" if unset.
# {{startSingletonServer}} to throw exceptions
# oh, and use the OS specific "/" path separator character


> add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
> Attachments: HADOOP-13966-HADOOP-13345.000.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13966) add ability to start DDB local server in every test

2017-03-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934842#comment-15934842
 ] 

Steve Loughran edited comment on HADOOP-13966 at 3/21/17 4:34 PM:
--

# don't worry about the logout, JVM termination will do that
# do skip setting the sqlite sysprop if already set
# we need to make sure that the sqlite property is unique for multiple JVMs 
running in parallel. That could be done by using the system property 
test.build.data, which is customised in the maven parallel test runner, 
falling back to "target" if unset.
# {{startSingletonServer}} to throw exceptions
# oh, and use the OS specific "/" path separator character



was (Author: ste...@apache.org):
# don't worry about the logout, JVM termination will do that
# do skip setting the sqlite sysprop if already set
# we need to make sure that the sqlite property is unique for multiple JVMs 
running in parallel. That could be done by using the system property 
test.build.data, which is customised in the maven parallel test runner, 
falling back to "target" if unset.
# oh, and use the OS specific "/" path separator character


> add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
> Attachments: HADOOP-13966-HADOOP-13345.000.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13966) add ability to start DDB local server in every test

2017-03-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934842#comment-15934842
 ] 

Steve Loughran commented on HADOOP-13966:
-

# don't worry about the logout, JVM termination will do that
# do skip setting the sqlite sysprop if already set
# we need to make sure that the sqlite property is unique for multiple JVMs 
running in parallel. That could be done by using the system property 
test.build.data, which is customised in the maven parallel test runner, 
falling back to "target" if unset (see the sketch after this list)
# oh, and use the OS specific "/" path separator character
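
Point 3 might look like the following sketch, which assumes the sysprop in question is sqlite4java's native library path, as required by DynamoDB Local:

{code:java}
import java.io.File;

public class DdbLocalProps {
  static void setSqlitePathIfUnset() {
    final String prop = "sqlite4java.library.path"; // assumed sysprop name
    if (System.getProperty(prop) == null) {         // skip if already set
      // test.build.data is customised per fork by the parallel test runner
      String base = System.getProperty("test.build.data", "target");
      System.setProperty(prop, base + File.separator + "native-libs");
    }
  }
}
{code}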


> add ability to start DDB local server in every test
> ---
>
> Key: HADOOP-13966
> URL: https://issues.apache.org/jira/browse/HADOOP-13966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
> Attachments: HADOOP-13966-HADOOP-13345.000.patch
>
>
> the local in-memory DDB starts up in only 2+ seconds, so we have no reason 
> not to use it in all our integration tests, if we add a switch to do this



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14205:

   Resolution: Fixed
Fix Version/s: 2.8.1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and branch-2.8.

All ADLS live unit tests passed except for hitting HADOOP-13928, which I just 
backported to branch-2.8 as well.

Thanks [~ste...@apache.org] and [~vishwajeet.dusane] for the review.

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 2.9.0, 2.8.1
>
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The problem is {{core-default.xml}} misses property {{fs.adl.impl}} and 
> {{fs.AbstractFileSystem.adl.impl}}.
> After adding these 2 properties to {{etc/hadoop/core-site.xml}}, got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The problem is ADLS jars are not copied to {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14208) Fix typo in the top page in branch-2.8

2017-03-21 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14208:
--

 Summary: Fix typo in the top page in branch-2.8
 Key: HADOOP-14208
 URL: https://issues.apache.org/jira/browse/HADOOP-14208
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Akira Ajisaka
Priority: Trivial


There is a typo in the summary of the release.
{noformat:title=index.md.vm}
*   Allow node labels get specificed in submitting MR jobs
{noformat}
specificed should be specified.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13928) TestAdlFileContextMainOperationsLive.testGetFileContext1 runtime error

2017-03-21 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13928:

Fix Version/s: 2.8.1
   2.9.0

> TestAdlFileContextMainOperationsLive.testGetFileContext1 runtime error
> --
>
> Key: HADOOP-13928
> URL: https://issues.apache.org/jira/browse/HADOOP-13928
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
> Environment: Mac OS X Sierra 10.12.1
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 2.9.0, 3.0.0-alpha2, 2.8.1
>
> Attachments: HADOOP-13928.001.patch
>
>
> {noformat}
> Tests run: 61, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 102.532 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> testGetFileContext1(org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive)
>   Time elapsed: 0.445 sec  <<< ERROR!

[jira] [Commented] (HADOOP-13928) TestAdlFileContextMainOperationsLive.testGetFileContext1 runtime error

2017-03-21 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934837#comment-15934837
 ] 

John Zhuge commented on HADOOP-13928:
-

Backport to 2.8 as well.

> TestAdlFileContextMainOperationsLive.testGetFileContext1 runtime error
> --
>
> Key: HADOOP-13928
> URL: https://issues.apache.org/jira/browse/HADOOP-13928
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
> Environment: Mac OS X Sierra 10.12.1
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13928.001.patch
>
>
> {noformat}
> Tests run: 61, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 102.532 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive
> testGetFileContext1(org.apache.hadoop.fs.adl.live.TestAdlFileContextMainOperationsLive)
>   Time elapsed: 0.445 sec  <<< ERROR!
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
>   at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:328)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1857)
>   at 
> org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:328)
>   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:320)
>   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:85)
>   at org.apache.hadoop.fs.FileContext.create(FileContext.java:685)
>   at 
> org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testGetFileContext1(FileContextMainOperationsBaseTest.java:1350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.reflect.InvocationTargetException: null
>   at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 

[jira] [Commented] (HADOOP-14167) UserIdentityProvider should use short user name in DecayRpcScheduler

2017-03-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934799#comment-15934799
 ] 

Xiaoyu Yao commented on HADOOP-14167:
-

Thanks [~surendrasingh] for reporting the issue and posting the patch. My only 
concern with this change is the potential for duplicate short user names across 
different host/domain combinations; a simple example is hive@domainA vs. 
hive@domainB. Collapsing these into one identity may skew the 
DecayRpcScheduler#getPriorityLevel calculation for those users.
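
To illustrate the concern (the real mapping goes through the {{auth_to_local}} 
rules via {{KerberosName}}, but the effect is the same): dropping the realm 
collapses distinct principals into a single identity.
{code}
// Hedged illustration only, not the patch: stripping the realm from two
// distinct principals yields the same short name, so their RPC volumes
// would be aggregated into one DecayRpcScheduler bucket.
String a = "hive@DOMAINA";
String b = "hive@DOMAINB";
String shortA = a.substring(0, a.indexOf('@'));  // "hive"
String shortB = b.substring(0, b.indexOf('@'));  // "hive"
assert shortA.equals(shortB);                    // the two identities merge
{code}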

> UserIdentityProvider should use short user name in DecayRpcScheduler
> 
>
> Key: HADOOP-14167
> URL: https://issues.apache.org/jira/browse/HADOOP-14167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-14167.001.patch
>
>
> In a secure cluster, {{UserIdentityProvider}} uses the full principal name for 
> the user; it should use the short name of the principal instead.
> {noformat}
>   {
> "name" : 
> "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
>  .
>  .
>  .
> "Caller(hdfs/had...@hadoop.com).Volume" : 436,
> "Caller(hdfs/had...@hadoop.com).Priority" : 3,
> .
> .
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14207) "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler

2017-03-21 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14207:

Status: Patch Available  (was: Open)

Attached initial patch ...
Please review... 
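
The {{MetricsException}} in the stack trace suggests the outgoing scheduler's 
metrics source stays registered across the queue swap. A minimal sketch of one 
possible fix shape (assuming {{MetricsSystem#unregisterSource}} is reachable 
from the swap path; not necessarily what the attached patch does):
{code}
// Hedged sketch: unregister the outgoing scheduler's metrics source so the
// replacement scheduler can register under the same name.
org.apache.hadoop.metrics2.MetricsSystem ms =
    org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.instance();
// Source name as it appears in the stack trace: DecayRpcSchedulerMetrics2.ipc.<port>
ms.unregisterSource("DecayRpcSchedulerMetrics2.ipc.65110");
{code}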

> "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler
> --
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: HADOOP-14207.001.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14207) "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler

2017-03-21 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14207:

Attachment: HADOOP-14207.001.patch

> "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler
> --
>
> Key: HADOOP-14207
> URL: https://issues.apache.org/jira/browse/HADOOP-14207
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: HADOOP-14207.001.patch
>
>
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
> be constructed.
> at 
> org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
> at 
> org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
> at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
> at 
> org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
> at 
> org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
> Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
> DecayRpcSchedulerMetrics2.ipc.65110 already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-21 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934763#comment-15934763
 ] 

Vishwajeet Dusane commented on HADOOP-14205:


+1 LGTM. 
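
For anyone hitting this before the fix lands, the workaround described below 
amounts to two properties in {{etc/hadoop/core-site.xml}}. A sketch 
({{org.apache.hadoop.fs.adl.AdlFileSystem}} is named in the stack trace; 
{{org.apache.hadoop.fs.adl.Adl}} is assumed to be the matching 
AbstractFileSystem binding):
{code}
<property>
  <name>fs.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.adl.impl</name>
  <value>org.apache.hadoop.fs.adl.Adl</value>
</property>
{code}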

> No FileSystem for scheme: adl
> -
>
> Key: HADOOP-14205
> URL: https://issues.apache.org/jira/browse/HADOOP-14205
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14205.branch-2.001.patch, 
> HADOOP-14205.branch-2.002.patch
>
>
> {noformat}
> $ bin/hadoop fs -ls /
> ls: No FileSystem for scheme: adl
> {noformat}
> The first problem is that {{core-default.xml}} is missing the properties 
> {{fs.adl.impl}} and {{fs.AbstractFileSystem.adl.impl}}.
> After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
> error:
> {noformat}
> $ bin/hadoop fs -ls /
> -ls: Fatal internal error
> java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
> Caused by: java.lang.ClassNotFoundException: Class 
> org.apache.hadoop.fs.adl.AdlFileSystem not found
>   at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
>   at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
>   ... 18 more
> {noformat}
> The second problem is that the ADLS jars are not copied to 
> {{share/hadoop/tools/lib}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14207) "dfsadmin -refreshCallQueue" command is failing with DecayRpcScheduler

2017-03-21 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HADOOP-14207:
---

 Summary: "dfsadmin -refreshCallQueue" command is failing with 
DecayRpcScheduler
 Key: HADOOP-14207
 URL: https://issues.apache.org/jira/browse/HADOOP-14207
 Project: Hadoop Common
  Issue Type: Bug
  Components: rpc-server
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore
Priority: Blocker


{noformat}
java.lang.RuntimeException: org.apache.hadoop.ipc.DecayRpcScheduler could not 
be constructed.
at 
org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:89)
at 
org.apache.hadoop.ipc.CallQueueManager.swapQueue(CallQueueManager.java:260)
at org.apache.hadoop.ipc.Server.refreshCallQueue(Server.java:650)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.refreshCallQueue(NameNodeRpcServer.java:1582)
at 
org.apache.hadoop.ipc.protocolPB.RefreshCallQueueProtocolServerSideTranslatorPB.refreshCallQueue(RefreshCallQueueProtocolServerSideTranslatorPB.java:49)
at 
org.apache.hadoop.ipc.proto.RefreshCallQueueProtocolProtos$RefreshCallQueueProtocolService$2.callBlockingMethod(RefreshCallQueueProtocolProtos.java:769)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
DecayRpcSchedulerMetrics2.ipc.65110 already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:144)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:117)

{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org