[jira] [Updated] (HADOOP-13190) Mention LoadBalancingKMSClientProvider in KMS HA documentation

2016-10-27 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13190:
---
Fix Version/s: 3.0.0-alpha1
   2.8.0

> Mention LoadBalancingKMSClientProvider in KMS HA documentation
> --
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch, 
> HADOOP-13190.003.patch, HADOOP-13190.004.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was 
> added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.
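For context, a brief illustration of how a client ends up using 
LoadBalancingKMSClientProvider: listing multiple KMS hosts, separated by 
semicolons, in the provider URI. This is a hedged sketch - the host names and 
port are hypothetical, and the exact client-side property name is an 
assumption here:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class KmsHaExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Multiple semicolon-separated hosts in the kms:// URI cause the KMS
    // client factory to instantiate LoadBalancingKMSClientProvider, which
    // spreads requests across kms01 and kms02 (hypothetical hosts).
    conf.set("hadoop.security.key.provider.path",
        "kms://http@kms01.example.com;kms02.example.com:16000/kms");
    System.out.println(conf.get("hadoop.security.key.provider.path"));
  }
}
{code}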



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614372#comment-15614372
 ] 

Hadoop QA commented on HADOOP-10075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-maven-plugins in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 14m 
38s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-10075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835744/HADOOP-10075_addendum.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83571c421430 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 57187fd |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10913/artifact/patchprocess/branch-findbugs-hadoop-maven-plugins-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10913/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10913/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-maven-plugins U: hadoop-maven-plugins |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10913/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
>   

[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614361#comment-15614361
 ] 

Rakesh R commented on HADOOP-10075:
---

I faced the same problem in my Windows environment. I can confirm the issue is 
resolved with this addendum patch and I am able to continue building. Thanks 
[~rkanter].

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
Status: Patch Available  (was: Reopened)

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0, 2.2.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reopened HADOOP-10075:


> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
Attachment: HADOOP-10075_addendum.001.patch

Sorry about that [~brahmareddy].  I believe I've figured out the problem.  It 
has to do with Windows file paths and this code in {{ResourceGzMojo}}:
{code:java}
File outFile = new File(outputDir, path.toFile().getCanonicalPath()
    .replaceFirst(inputDir.getCanonicalPath(), "") + ".gz");
{code}
The first argument in {{replaceFirst}} is actually a regex, so with a Windows 
path, you end up with an unescaped "\" and it fails.

I've attached an addendum patch that I think should fix the problem, assuming 
my diagnosis is correct.  Can you verify that it solves the problem?  I don't 
have a Windows setup handy at the moment.
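
For reference, a minimal sketch of the kind of fix described above, using the 
same variables as the snippet - quoting the input directory with 
{{Pattern.quote}} so backslashes in Windows paths are matched literally rather 
than parsed as regex escapes. Whether the attached addendum patch does exactly 
this is an assumption:

{code:java}
import java.io.File;
import java.util.regex.Pattern;

// Pattern.quote() wraps the string in \Q...\E, so Windows separators
// ("\") are matched literally instead of being treated as regex escapes.
File outFile = new File(outputDir,
    path.toFile().getCanonicalPath()
        .replaceFirst(Pattern.quote(inputDir.getCanonicalPath()), "")
        + ".gz");
{code}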

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch, 
> HADOOP-10075_addendum.001.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2016-10-27 Thread Cole Ferrier (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614240#comment-15614240
 ] 

Cole Ferrier commented on HADOOP-13759:
---

Is there no other dependency on the lib? I only ask because I didn't see jsch 
being added as a dependency in the patches for HADOOP-5732, yet it is a 
dependency.

In fact, I played around with a Maven project and rolled the version of 
hadoop-common I depended on all the way back to 2.2.0, and it still pulls in 
jsch.

I'm only posting because I've been troubleshooting some issues with the code 
in git and working through some odd behavior, so I was looking at SFTP jira 
items.


> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem

2016-10-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614188#comment-15614188
 ] 

Larry McCay commented on HADOOP-12804:
--

[~ste...@apache.org] - can I bother you for a review of this?
Thanks!

> Read Proxy Password from Credential Providers in S3 FileSystem
> --
>
> Key: HADOOP-12804
> URL: https://issues.apache.org/jira/browse/HADOOP-12804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Larry McCay
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12804-001.patch, HADOOP-12804-003.patch, 
> HADOOP-12804-004.patch, HADOOP-12804-005.patch, 
> HADOOP-12804-branch-2-002.patch, HADOOP-12804-branch-2-003.patch
>
>
> HADOOP-12548 added credential provider support for the AWS credentials to 
> S3FileSystem. This JIRA is for considering the use of the credential 
> providers for the proxy password as well.
> Instead of adding the proxy password to the config file directly and in clear 
> text, we could provision it in addition to the AWS credentials into a 
> credential provider and keep it out of clear text.
> In terms of usage, it could be added to the same credential store as the AWS 
> credentials or potentially to a more universally available path - since it is 
> the same for everyone. This would however require multiple providers to be 
> configured in the provider.path property and more open file permissions on 
> the store itself.
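
As an illustration of the proposed usage, provisioning the proxy password 
could look like the following. The alias matches the existing S3A proxy 
password property; the jceks path is hypothetical:

{noformat}
# Store the proxy password in a credential store instead of clear-text config:
hadoop credential create fs.s3a.proxy.password \
  -provider jceks://hdfs/user/alice/s3a.jceks

# Then reference the store from configuration:
#   hadoop.security.credential.provider.path = jceks://hdfs/user/alice/s3a.jceks
{noformat}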



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614079#comment-15614079
 ] 

Brahma Reddy Battula edited comment on HADOOP-10075 at 10/28/16 2:46 AM:
-

Compilation fails with the following error after this was checked in. I am not 
going to revert it, as the changes are extensive.

 {noformat}
[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-alpha2-SNAPSHOT:resource-gz 
(resource-gz) on
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
java.util.regex.PatternSyntaxException: Illegal/
unsupported escape sequence near index 3
[ERROR] 
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
[ERROR] ^
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.
0.0-alpha2-SNAPSHOT:resource-gz (resource-gz) on project hadoop-common: 
org.apache.maven.plugin.MojoExecutionException:
java.util.regex.PatternSyntaxException: Illegal/unsupported escape sequence 
near index 3
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
   ^
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: 
org.apache.maven.plugin.MojoExecutionException: java.util.reg
ex.PatternSyntaxException: Illegal/unsupported escape sequence near index 3
D:\OSCode\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
   ^
at 
org.apache.hadoop.maven.plugin.resourcegz.ResourceGzMojo.execute(ResourceGzMojo.java:82)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: org.apache.maven.plugin.MojoExecutionException: 
java.util.regex.PatternSyntaxException: Illegal/unsupported e
scape sequence near index 3
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
{noformat}

The following was added as part of this jira:

{code:xml}
<execution>
  <id>resource-gz</id>
  <phase>generate-resources</phase>
  <goals>
    <goal>resource-gz</goal>
  </goals>
  <configuration>
    <inputDirectory>${basedir}/src/main/webapps/static</inputDirectory>
    <outputDirectory>${basedir}/target/webapps/static</outputDirectory>
    <extensions>js,css</extensions>
  </configuration>
</execution>
{code}


was (Author: brahmareddy):
Compilation fails with the following error after this was checked in.

 {noformat}
[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-alpha2-SNAPSHOT:resource-gz 
(resource-gz) on
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
java.util.regex.PatternSyntaxException: Illegal/
unsupported escape sequence near index 3
[ERROR] 
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
[ERROR] ^
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.
0.0-alpha2-SNAPSHOT:resource-gz (resource-gz) on project hadoop-common: 
org.apache.maven.plugin.MojoExecutionException:
java.util.regex.PatternSyntaxException: 

[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614079#comment-15614079
 ] 

Brahma Reddy Battula commented on HADOOP-10075:
---

Compilation fails with the following error after this was checked in.

 {noformat}
[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-alpha2-SNAPSHOT:resource-gz 
(resource-gz) on
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
java.util.regex.PatternSyntaxException: Illegal/
unsupported escape sequence near index 3
[ERROR] 
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
[ERROR] ^
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.
0.0-alpha2-SNAPSHOT:resource-gz (resource-gz) on project hadoop-common: 
org.apache.maven.plugin.MojoExecutionException:
java.util.regex.PatternSyntaxException: Illegal/unsupported escape sequence 
near index 3
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
   ^
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: 
org.apache.maven.plugin.MojoExecutionException: java.util.reg
ex.PatternSyntaxException: Illegal/unsupported escape sequence near index 3
D:\OSCode\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
   ^
at 
org.apache.hadoop.maven.plugin.resourcegz.ResourceGzMojo.execute(ResourceGzMojo.java:82)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: org.apache.maven.plugin.MojoExecutionException: 
java.util.regex.PatternSyntaxException: Illegal/unsupported e
scape sequence near index 3
D:\hadoop-trunk\hadoop\hadoop-common-project\hadoop-common\src\main\webapps\static
{noformat}

The following was added as part of this jira:

{code:xml}
<execution>
  <id>resource-gz</id>
  <phase>generate-resources</phase>
  <goals>
    <goal>resource-gz</goal>
  </goals>
  <configuration>
    <inputDirectory>${basedir}/src/main/webapps/static</inputDirectory>
    <outputDirectory>${basedir}/target/webapps/static</outputDirectory>
    <extensions>js,css</extensions>
  </configuration>
</execution>
{code}

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15614071#comment-15614071
 ] 

Yuanbo Liu commented on HADOOP-13765:
-

LGTM, [~byh0831]. Thanks for filing this jira.
[~ste...@apache.org] I also checked the behavior of {{FTPFileSystem}}; it 
throws a runtime exception when an error occurs.
I'm not sure which behavior is more reasonable; looking forward to your 
thoughts.

> Return HomeDirectory if possible in SFTPFileSystem
> --
>
> Key: HADOOP-13765
> URL: https://issues.apache.org/jira/browse/HADOOP-13765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Yuhao Bi
> Attachments: HADOOP-13765.001.patch
>
>
> In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
> finally block.
> If we get the homeDir Path successfully but hit an IOException in the finally 
> block, we return a null result.
> Maybe we can simply ignore this IOException and return the result we already 
> have.
> The related code is shown below.
> {code:title=SFTPFileSystem.java|borderStyle=solid}
>   public Path getHomeDirectory() {
> ChannelSftp channel = null;
> try {
>   channel = connect();
>   Path homeDir = new Path(channel.pwd());
>   return homeDir;
> } catch (Exception ioe) {
>   return null;
> } finally {
>   try {
> disconnect(channel);
>   } catch (IOException ioe) {
> //Maybe we can just ignore this IOE and do not return null here.
> return null;
>   }
> }
>   }
> {code}
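
A minimal sketch of the change suggested in the description - return the 
result from the try block and do not let a disconnect failure override it 
(whether this matches HADOOP-13765.001.patch exactly is an assumption):

{code:java}
public Path getHomeDirectory() {
  ChannelSftp channel = null;
  try {
    channel = connect();
    // The return value is computed here; an exception thrown later in
    // the finally block no longer replaces it with null.
    return new Path(channel.pwd());
  } catch (Exception e) {
    return null;
  } finally {
    try {
      disconnect(channel);
    } catch (IOException ioe) {
      // Ignore: the home directory was already resolved, so a failure
      // while tearing down the channel should not discard the result.
    }
  }
}
{code}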



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613932#comment-15613932
 ] 

Hudson commented on HADOOP-13763:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10716 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10716/])
HADOOP-13763. KMS REST API Documentation Decrypt URL typo. Contributed (xiao: 
rev b62bc2bbd80bb751348f0c1f655d5e456624663e)
* (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm


> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The Decrypt Encrypted Key URL REST definition has a typo.
> It reads as:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?ee_op=decrypt
> It should be:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13763:
---
Component/s: documentation

> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The Decrypt Encrypted Key URL REST definition has a typo.
> It reads as:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?ee_op=decrypt
> It should be:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13763:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 3.0.0-alpha1)
   3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thank you for the contribution, [~jeffreyr97].

> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The Decrypt Encrypted Key URL REST definition has a typo.
> It reads as:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?ee_op=decrypt
> It should be:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613872#comment-15613872
 ] 

Xiao Chen commented on HADOOP-13763:


Thanks for reporting and fixing this, [~jeffreyr97] - good catch. +1, committing 
this shortly.

A comment about the jira: please leave the Fix Version field empty - that is 
what committers set when checking in the change. Also, 3.0.0-alpha1 is already 
released, so this will go into alpha2.

> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The Decrypt Encrypted Key URL REST definition has a typo.
> It reads as:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?ee_op=decrypt
> It should be:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13763:
---
Assignee: Jeffrey E  Rodriguez

> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Assignee: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The Decrypt Encrypted Key URL REST definition has a typo.
> It reads as:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?ee_op=decrypt
> It should be:
> POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-10-27 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613780#comment-15613780
 ] 

Aaron Fabbri commented on HADOOP-13651:
---

Thanks [~eddyxu].  I will post a new patch here shortly.

This patch depends on HADOOP-13631, so we need to commit that one first.  Once 
that happens I can submit this latest patch so we can get a jenkins run on it.



> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.
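
A simplified, hypothetical sketch of the "minimal overhead when unconfigured" 
pattern the description calls for - all names below are illustrative, not the 
actual S3AFileSystem code:

{code:java}
// Hypothetical, simplified sketch of optional MetadataStore wiring.
interface MetadataStore {
  void put(String path);
}

class S3AFileSystemSketch {
  // null when no store is configured; every hook becomes a cheap no-op
  private final MetadataStore metadataStore;

  S3AFileSystemSketch(MetadataStore store) {
    this.metadataStore = store;
  }

  void finishedWrite(String key) {
    // ... normal S3 write bookkeeping ...
    if (metadataStore != null) {
      metadataStore.put(key);  // record the write for consistent listings
    }
  }
}
{code}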



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613701#comment-15613701
 ] 

Hudson commented on HADOOP-10075:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10713 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10713/])
HADOOP-10075. Update jetty dependency to version 9 (rkanter) (rkanter: rev 
5877f20f9c3f6f0afa505715e9a2ee312475af17)
* (edit) hadoop-common-project/hadoop-nfs/pom.xml
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/MockResourceManagerFacade.java
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/MiniKMS.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestAlignedPlanner.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServices.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodeLabels.java
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSWithKerberos.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemTestSetup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoXAttrs.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java
* (add) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/resourcegz/ResourceGzMojo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java
* (edit) hadoop-client/pom.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokens.java
* (edit) 

[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613686#comment-15613686
 ] 

Hadoop QA commented on HADOOP-13709:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 32s{color} 
| {color:red} root generated 1 new + 700 unchanged - 1 fixed = 701 total (was 
701) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 55 unchanged - 0 fixed = 57 total (was 55) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13709 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835684/HADOOP-13709.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a8aa9ea6906e 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9e03ee5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10912/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10912/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10912/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10912/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: 

[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Thanks everyone for reviews and comments.  Committed to trunk!

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613501#comment-15613501
 ] 

Robert Kanter commented on HADOOP-10075:


Thanks [~raviprak].  I know reviewing this and looking at the tests also took a 
lot of time.

I can take care of committing it now.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13766) Fix a typo in the comments of RPC.getProtocolVersion

2016-10-27 Thread Ethan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613478#comment-15613478
 ] 

Ethan Li commented on HADOOP-13766:
---

This just fixes typos in comments, so no new tests are needed.

> Fix a typo in the comments of RPC.getProtocolVersion
> 
>
> Key: HADOOP-13766
> URL: https://issues.apache.org/jira/browse/HADOOP-13766
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ethan Li
>Priority: Trivial
> Attachments: HADOOP-13766.001.patch
>
>
> Typo in the comments. Protocol name should be versionID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613462#comment-15613462
 ] 

Ravi Prakash commented on HADOOP-10075:
---

All the changes in the Java and pom files look good to me. I ran all the unit 
tests on trunk with and without the patch, and the same unit tests fail, so I'm 
crossing my fingers that the patch doesn't introduce any new unit test failures. 
I also started all daemons, clicked around, and saw nothing unusual. I checked 
that the /conf, /jmx and REST URIs still work.

I can't submit jobs on unpatched trunk right now (it complains {{Could not find 
or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster}}), but that's 
an orthogonal issue and not affected by your patch.

Thanks for the massive amount of effort. The 011 patch looks good to me. +1.

Please feel free to commit it yourself. Otherwise I'm happy to do it by the end 
of the day.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-27 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613440#comment-15613440
 ] 

Eric Badger edited comment on HADOOP-13709 at 10/27/16 10:19 PM:
-

Taking [~jlowe]'s advice and attaching a new patch that makes 
{{destroyChildProcesses()}} public. That way it can be called outside of the 
shutdown hook. This will be useful so that the localizer can kill its 
subprocesses and clean up after itself before the shutdown hook is called (this 
would be a follow-up change in YARN-5641).


was (Author: ebadger):
Taking [~jlowe]'s advice and attaching a new patch that makes 
{{destroyChildProcesses()}} public. That way it can be called outside of the 
shutdown hook. This will be useful so that the localizer can kill its 
subprocesses and clean up after itself before the shutdown hook is called. 

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shutdown due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shutdown and all of the subprocesses will 
> be orphaned and not killed.
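
For context, a rough sketch of the shutdown-hook approach discussed in this 
jira. The method name {{destroyChildProcesses()}} comes from the comments 
above; the registry class itself is a simplified stand-in, not the actual 
Shell.java code:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class ChildProcessReaper {
  // Simplified stand-in for the child-process tracking added to Shell.
  private static final Map<Process, Boolean> CHILDREN =
      new ConcurrentHashMap<>();

  static {
    // Destroy any still-running children when the JVM exits, so they
    // are not orphaned when the shell process is killed.
    Runtime.getRuntime().addShutdownHook(
        new Thread(ChildProcessReaper::destroyChildProcesses));
  }

  public static void register(Process p)   { CHILDREN.put(p, Boolean.TRUE); }
  public static void unregister(Process p) { CHILDREN.remove(p); }

  // Public so callers (e.g. the localizer in YARN-5641) can clean up
  // before the shutdown hook runs.
  public static void destroyChildProcesses() {
    for (Process p : CHILDREN.keySet()) {
      p.destroy();
    }
    CHILDREN.clear();
  }
}
{code}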



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits

2016-10-27 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13709:
-
Attachment: HADOOP-13709.004.patch

Taking [~jlowe]'s advice and attaching a new patch that makes 
{{destroyChildProcesses()}} public. That way it can be called outside of the 
shutdown hook. This will be useful so that the localizer can kill its 
subprocesses and clean up after itself before the shutdown hook is called. 

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> ---
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch
>
>
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shutdown due to being in I/O 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow for the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shutdown and all of the subprocesses will 
> be orphaned and not killed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613426#comment-15613426
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


Good discussion, [~liuml07] and [~fabbri]

bq. The contract assumes we create the direct parent directory (other ancestors 
should be taken care of by the clients/callers) when putting a new file item. I 
checked the in-memory local metadata store and it implements this idea. This 
may be not efficient to DDB. Basically for putting X items, we have to issue 
2X~3X DDB requests (X for putting file, X for checking its parent directories, 
and possible X for updating its parent directories). I'm wondering if we can 
also let the client/caller pre-create the direct parent directory as other 
ancestors.

I suggest considering this in two aspects: 
* Checking parent directories in normal {{S3AFileSystem}} operations (i.e., 
create / mkdirs). In that case, S3AFileSystem should already ensure the 
invariant of the contract (the parent directories existed before S3AFileSystem 
started to create files on S3). 
* Loading files and directories outside of normal {{S3AFileSystem}} operations, 
e.g., loading a *non-cached* directory or loading from the CLI tool. In such 
cases, would a small local "dentry_cache" type of data structure be sufficient 
for a batch operation? These operations can assume that the namespace structure 
already exists on S3. 

As a last resort, if {{S3AFileSystem}} considers it safe to {{create / mkdir}} 
on a path, it can always create all of the parent directories in a single batch 
write to DynamoDB. In short, I'd suggest letting {{S3AFileSystem}} ensure the 
contract. 

bq. We store the is_empty for directory in the DynamoDB (DDB) metadata store 
now. We have to update this information in a consistent and efficient way. We 
don't want to check the parent directory every time we delete/put a file item. 
At least we can optimize this when deleting a subtree.

Another way to do it is to set the {{isEmpty()}} flag by issuing a small 
_additional_ query on the directory with 
[Limit=1|http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#ScanQueryLimit].
 If the query returns any result, the {{isEmpty}} flag is false; otherwise it is 
true. This value can be cached for the lifetime of the {{S3AFileStatus}}, as it 
cannot reliably reflect changes in S3 anyway. So the query cost is only incurred 
the first time you call {{isEmpty()}}, and you don't need to update the flag on 
any S3 write.

Hope that works.
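
To make the Limit=1 idea concrete, a rough sketch using the AWS SDK for Java 
v1. The table name and key schema below are assumptions for illustration; the 
real DynamoDBMetadataStore schema may differ:

{code:java}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

import java.util.Collections;

public class IsEmptyProbe {
  // Returns true iff no child entry exists under dirPath.
  // Assumes a table whose hash key "parent" is the directory path.
  static boolean isEmptyDirectory(AmazonDynamoDB ddb, String table,
      String dirPath) {
    QueryRequest req = new QueryRequest()
        .withTableName(table)
        .withKeyConditionExpression("parent = :p")
        .withExpressionAttributeValues(
            Collections.singletonMap(":p", new AttributeValue(dirPath)))
        .withLimit(1);                  // one item is enough to decide
    QueryResult result = ddb.query(req);
    return result.getCount() == 0;      // any child => not empty
  }
}
{code}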

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-27 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613313#comment-15613313
 ] 

Aaron Fabbri edited comment on HADOOP-13449 at 10/27/16 9:43 PM:
-

Exciting stuff, thanks for the update.

{quote}
I changed the base unit test as the owner, group and permission etc. are not 
part of the metadata we're interested in for now.
{quote}

Good. We could have a helper function that all tests could use, e.g. 
doesMetadataStorePersistOwnerGroupPermission(), which returns false if 
MetadataStore instanceof DynamoDBMetadataStore.  This is also another spot where 
it might be nice to add a function {{getProperty()}} for MetadataStore, so we 
could call {{getProperty(PERSISTS_PERMISSIONS)}} etc.; a sketch of that idea 
follows below.  We could do that later on.
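
A sketch of what that capability probe could look like; the constant and method 
names here are hypothetical, not part of the current interface:

{code}
interface MetadataStore {
  // Hypothetical capability key.
  String PERSISTS_PERMISSIONS = "metadatastore.persists.permissions";

  boolean getProperty(String capability);
}

// In a shared test base class, something like:
//   org.junit.Assume.assumeTrue(
//       "store does not persist owner/group/permission",
//       store.getProperty(MetadataStore.PERSISTS_PERMISSIONS));
{code}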

{quote}
We store the is_empty for directory in the DynamoDB (DDB) metadata store now. 
We have to update this information in a consistent and efficient way. We don't 
want to check the parent directory every time we delete/put a file item. At 
least we can optimize this when deleting a subtree.
{quote}
This part is a pain.  We should revisit the whole 
{{S3AFileStatus#isEmptyDirectory}} idea in the future. 

In case it helps, my algorithm is here:

In put(PathMetadata meta):
{code}
  if we have PathMetadata for meta's parent path:
      parentMeta.setIsEmpty(false)
{code}

The harder case, when we are removing an entry:

{code}
  // If we have cached a FileStatus for the parent...
  DirListingMetadata dir = dirHash.get(parent);
  if (dir != null) {
    LOG.debug("removing parent's entry for {} ", path);

    // Remove our path from the parent dir
    dir.remove(path);

    // S3A-specific logic dealing with S3AFileStatus#isEmptyDirectory()
    if (isS3A) {
      if (dir.isAuthoritative() && dir.numEntries() == 0) {
        setS3AIsEmpty(parent, true);
      } else if (dir.numEntries() == 0) {
        // We do not know of any remaining entries in the parent directory.
        // However, we do not have an authoritative listing, so there may
        // still be some entries in the dir.  Since we cannot know the
        // proper state of the parent S3AFileStatus#isEmptyDirectory, we
        // will invalidate our entries for it.
        // Better than deleting entries would be marking them as "missing
        // metadata".  Deleting them means we lose consistent listing and
        // the ability to retry for eventual consistency on the parent path.

        // TODO implement missing metadata feature
        invalidateFileStatus(parent);
      }
      // else the parent directory still has entries in it, so
      // isEmptyDirectory does not change
    }
  } // end if (dir != null)
{code}

Fixing the loss of consistency on the parent could be achieved by leaving an 
empty PathMetadata for the parent that does not contain a FileStatus in it.  
That "missing metadata" PathMetadata would indicate to future getFileStatus() 
or listStatus() calls that the file does exist (so retry if S3 is eventually 
consistent), but the FileStatus needs to be recreated (the regular 
getFileStatus() logic), since we cannot know the value of its 
isEmptyDirectory().

I added a TODO because we can tackle this later if we want.
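
A rough illustration of that marker; the field and method names here are 
assumptions, not the S3Guard branch API:

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

class PathMetadataSketch {
  private final Path path;
  private final FileStatus status;  // null means "exists, details unknown"

  PathMetadataSketch(Path path, FileStatus status) {
    this.path = path;
    this.status = status;
  }

  Path getPath() {
    return path;
  }

  // True when a future getFileStatus()/listStatus() should rebuild the
  // FileStatus (and retry against S3) instead of trusting the store.
  boolean isMissingMetadata() {
    return status == null;
  }
}
{code}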

{quote}The contract assumes we create the direct parent directory (other 
ancestors should be taken care of by the clients/callers) when putting a new 
file item{quote}

Yeah this is for consistent listing on the parent after the child is created.  
I'm wondering if we can relax this or make it configurable?  When 
{{fs.s3a.metadatastore.authoritative}} is true, the performance hit on create 
could be offset by a performance gain on subsequent listing of the parent 
directory. 

Looks like good progress! Please shout if I can help at all.



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-27 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613313#comment-15613313
 ] 

Aaron Fabbri commented on HADOOP-13449:
---

Exciting stuff, thanks for the update.

{quote}
I changed the base unit test as the owner, group and permission etc. are not 
part of the metadata we're interested in for now.
{quote}

Good. We could have a helper function that all tests could use, e.g. 
doesMetadataStorePersistOwnerGroupPermission(), which returns false if 
MetadataStore instanceof DynamoDBMetadataStore.  This is also another spot where 
it might be nice to add a function {{getProperty()}} for MetadataStore, so we 
could call {{getProperty(PERSISTS_PERMISSIONS)}} etc.  We could do that later on.

{quote}
We store the is_empty for directory in the DynamoDB (DDB) metadata store now. 
We have to update this information in a consistent and efficient way. We don't 
want to check the parent directory every time we delete/put a file item. At 
least we can optimize this when deleting a subtree.
{quote}
This part is a pain.  We should revisit the whole 
{{S3AFileStatus#isEmptyDirectory}} idea in the future. 

In case it helps, my algorithm is here:

In put(PathMetadata meta):
{code}
  if we have PathMetadata for meta's parent path:
      parentMeta.setIsEmpty(false)
{code}

The harder case, when we are removing an entry:

{code}
  // If we have cached a FileStatus for the parent...
  DirListingMetadata dir = dirHash.get(parent);
  if (dir != null) {
    LOG.debug("removing parent's entry for {} ", path);

    // Remove our path from the parent dir
    dir.remove(path);

    // S3A-specific logic dealing with S3AFileStatus#isEmptyDirectory()
    if (isS3A) {
      if (dir.isAuthoritative() && dir.numEntries() == 0) {
        setS3AIsEmpty(parent, true);
      } else if (dir.numEntries() == 0) {
        // We do not know of any remaining entries in the parent directory.
        // However, we do not have an authoritative listing, so there may
        // still be some entries in the dir.  Since we cannot know the
        // proper state of the parent S3AFileStatus#isEmptyDirectory, we
        // will invalidate our entries for it.
        // Better than deleting entries would be marking them as "missing
        // metadata".  Deleting them means we lose consistent listing and
        // the ability to retry for eventual consistency on the parent path.

        // TODO implement missing metadata feature
        invalidateFileStatus(parent);
      }
      // else the parent directory still has entries in it, so
      // isEmptyDirectory does not change
    }
  } // end if (dir != null)
{code}

Fixing the loss of consistency on the parent could be achieved by leaving an 
empty PathMetadata for the parent that does not contain a FileStatus in it.  
That "missing metadata" PathMetadata would indicate to future getFileStatus() 
or listStatus() calls that the file does exist (so retry if S3 is eventually 
consistent), but the FileStatus needs to be fetched from S3, since we cannot 
know the value of its isEmptyDirectory().

I added a TODO because we can tackle this later if we want.

{quote}The contract assumes we create the direct parent directory (other 
ancestors should be taken care of by the clients/callers) when putting a new 
file item{quote}

Yeah this is for consistent listing on the parent after the child is created.  
I'm wondering if we can relax this or make it configurable?  When 
{{fs.s3a.metadatastore.authoritative}} is true, the performance hit on create 
could be offset by a performance gain on subsequent listing of the parent 
directory. 

Looks like good progress! Please shout if I can help at all.


> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-10-27 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613288#comment-15613288
 ] 

Kihwal Lee commented on HADOOP-13742:
-

At first glance, the following seems racy. Since 
{{processConnectionContext()}} is done by the reader threads, unless you 
configure only one reader thread, one thread can step over another while adding 
a new count for the same user. Also, the decrement method could remove the 
hash-map entry after this method has verified it is not null; the update will 
be lost in that case too.
{code}
void incrUserConnections(String user) {
  AtomicInteger count = userVsConnectionsMap.get(user);
  if (count == null) {
    count = new AtomicInteger(1);
    userVsConnectionsMap.put(user, count);
  } else {
    count.getAndIncrement();
  }
}
{code}

Slapping big synchronization on it would make it safe, but that's not 
desirable for performance. There is probably a more fine-grained way; one 
possibility is sketched below.
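
For example, making the map a {{ConcurrentHashMap}} and doing both updates 
through per-key-atomic {{compute()}} (a sketch, not the actual patch):

{code}
import java.util.concurrent.ConcurrentHashMap;

class ConnectionCounters {
  private final ConcurrentHashMap<String, Integer> userVsConnectionsMap =
      new ConcurrentHashMap<>();

  void incrUserConnections(String user) {
    // compute() runs atomically per key, so two reader threads adding a
    // count for the same user can no longer step over each other.
    userVsConnectionsMap.compute(user, (k, v) -> v == null ? 1 : v + 1);
  }

  void decrUserConnections(String user) {
    // Returning null removes the mapping atomically, closing the window
    // between the null check and the removal described above.
    userVsConnectionsMap.compute(user,
        (k, v) -> (v == null || v <= 1) ? null : v - 1);
  }
}
{code}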

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742.patch
>
>
> To track user-level connections (how many connections for each user) in a 
> busy cluster where there are many connections to the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-27 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13449:
---
Attachment: HADOOP-13449-HADOOP-13345.001.patch

Thanks for asking, [~eddyxu]. I'm attaching the v1 patch for quick feedback.

# I changed the base unit test as the {{owner}}, {{group}} and {{permission}} 
etc. are not part of the metadata we're interested in for now.
# We store the {{is_empty}} for directory in the DynamoDB (DDB) metadata store 
now. We have to update this information in a consistent and efficient way. We 
don't want to check the parent directory every time we delete/put a file item. 
At least we can optimize this when deleting a subtree.
# The contract assumes we create the direct parent directory (other ancestors 
should be taken care of by the clients/callers) when putting a new file item. I 
checked the in-memory local metadata store and it implements this idea. This 
may not be efficient for DDB: basically, for putting X items, we have to issue 
2X~3X DDB requests (X for putting the files, X for checking their parent 
directories, and possibly X for updating their parent directories). I'm 
wondering if we can also let the client/caller pre-create the direct parent 
directory, as with the other ancestors.
This is the root cause of the only 2 of the 16 unit tests still failing, i.e. 
{{testPutDirListing}} and {{testPutNew}}.
# As to replacing FileStatus with S3AFileStatus in {{PathMetadata}}, I'm +0 on 
the idea. If we do agree on the switch, [HADOOP-13736] is basically good to me. 
If not, I can live with something similar to the {{S3AFileSystem}} vs. 
{{FileSystem}} approach in {{MetadataStore#initialize()}}.
# I need to review [HADOOP-13651] and revisit the patch after catching up on 
the current discussion. Will post a v2 patch in one week. I will also handle 
{{isAuthoritative}} in the next patch. Storing an extra field is a good and 
simple idea. Any idea how the client sets/gets this value?

Thanks,

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, 
> HADOOP-13449-HADOOP-13345.001.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-10-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Attachment: (was: HADOOP-12847.010.branch-2.8.patch)

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.patch, HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15613082#comment-15613082
 ] 

Hudson commented on HADOOP-12847:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10709 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10709/])
YARN-5172. Update yarn daemonlog documentation due to HADOOP-12847. (jlowe: rev 
b4a8fbcbbc5ea4ab3087ecf913839a53f32be113)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md


> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.8.patch, HADOOP-12847.010.branch-2.patch, 
> HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-27 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612868#comment-15612868
 ] 

Ravi Prakash commented on HADOOP-10075:
---

Still working on it, Robert! I'll try to finish by today.

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13766) Fix a typo in the comments of RPC.getProtocolVersion

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612851#comment-15612851
 ] 

Hadoop QA commented on HADOOP-13766:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 189 unchanged - 0 fixed = 190 total (was 189) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13766 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835630/HADOOP-13766.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 68cf80d4baeb 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac35ee9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10911/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10911/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10911/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10911/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix a typo in 

[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-10-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612736#comment-15612736
 ] 

Sean Busbey commented on HADOOP-11804:
--

{quote}
The Jersey shading is causing problems for WebHDFS. In unit testing, my mini 
DFS cluster cannot start its web server. If I use the hadoop-client-runtime jar 
to talk to a WebHDFS server, the response cannot be correctly parsed. Seems 
related to ServiceLoader
{quote}

I still have to chase this down. [~zhz], can you help me out with step-by-step 
instructions for reproducing it?

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-10-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Target Version/s: 3.0.0-alpha2  (was: 2.8.0)
  Status: Patch Available  (was: In Progress)

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-10-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.4.patch

-04

* update for current Hadoop 3 branch
* clean up for dependencies that are already in Java 7+ SE

limitations:

* logging libraries still relocated
* htrace libraries still relocated
* timeline server excluded from shaded minicluster and marked as optional

I'm vetting this against HBase now, but figured I'd post an update for some 
initial review.

I *think* the answer for the logging libraries and htrace is to leave them 
unshaded, since it's common to want to modify logging settings and to want to 
trace through e.g. the hdfs client. Would like some feedback here.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-10-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612642#comment-15612642
 ] 

Lei (Eddy) Xu commented on HADOOP-13449:


Ping [~liuml07].  Just wondering, is there any update on this?

Thanks!

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch
>
>
> Provide an implementation of the metadata store backed by DynamoDB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13766) Fix a typo in the comments of RPC.getProtocolVersion

2016-10-27 Thread Ethan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Li updated HADOOP-13766:
--
Attachment: HADOOP-13766.001.patch

> Fix a typo in the comments of RPC.getProtocolVersion
> 
>
> Key: HADOOP-13766
> URL: https://issues.apache.org/jira/browse/HADOOP-13766
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ethan Li
>Priority: Trivial
> Attachments: HADOOP-13766.001.patch
>
>
> Typo in the comments. Protocol name should be versionID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13766) Fix a typo in the comments of RPC.getProtocolVersion

2016-10-27 Thread Ethan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Li updated HADOOP-13766:
--
Release Note: simple fix of a typo in the comments
  Status: Patch Available  (was: Open)

> Fix a typo in the comments of RPC.getProtocolVersion
> 
>
> Key: HADOOP-13766
> URL: https://issues.apache.org/jira/browse/HADOOP-13766
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ethan Li
>Priority: Trivial
>
> Typo in the comments. Protocol name should be versionID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13766) Fix a typo in the comments of RPC.getProtocolVersion

2016-10-27 Thread Ethan Li (JIRA)
Ethan Li created HADOOP-13766:
-

 Summary: Fix a typo in the comments of RPC.getProtocolVersion
 Key: HADOOP-13766
 URL: https://issues.apache.org/jira/browse/HADOOP-13766
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Ethan Li
Priority: Trivial


Typo in the comments. Protocol name should be versionID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-10-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608723#comment-15608723
 ] 

Brahma Reddy Battula edited comment on HADOOP-13742 at 10/27/16 5:07 PM:
-

[~kihwal] could please review this..?


was (Author: brahmareddy):
can somebody review this..?

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742.patch
>
>
> To track user-level connections (how many connections for each user) in a 
> busy cluster where there are many connections to the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13762) S3A: Set thread names with more specific information about the call.

2016-10-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612440#comment-15612440
 ] 

Chris Nauroth commented on HADOOP-13762:


Yes, you can change the thread name at any time.  The thread name acts somewhat 
like a mutable thread-local variable, accessed via 
[{{Thread#getName()}}|http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#getName--]
 and 
[{{Thread#setName(String)}}|http://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#setName-java.lang.String-].
  We have some existing precedent for this in the DataNode, where we name the 
{{DataXceiver}} threads with information about the specific data transfer 
protocol method call and the block ID:

{code}
  @Override
  public void readBlock(final ExtendedBlock block,
      final Token<BlockTokenIdentifier> blockToken,
      final String clientName,
      final long blockOffset,
      final long length,
      final boolean sendChecksum,
      final CachingStrategy cachingStrategy) throws IOException {
    previousOpClientName = clientName;
    long read = 0;
    updateCurrentThreadName("Sending block " + block);
    ...
{code}

{code}
  /**
   * Update the current thread's name to contain the current status.
   * Use this only after this receiver has started on its thread, i.e.,
   * outside the constructor.
   */
  private void updateCurrentThreadName(String status) {
    StringBuilder sb = new StringBuilder();
    sb.append("DataXceiver for client ");
    if (previousOpClientName != null) {
      sb.append(previousOpClientName).append(" at ");
    }
    sb.append(remoteAddress);
    if (status != null) {
      sb.append(" [").append(status).append("]");
    }
    Thread.currentThread().setName(sb.toString());
  }
{code}

I've never observed changing the thread name to cause any significant cost.  It 
would be good to watch out for the same pitfalls as logging, such as avoiding 
calls to expensive {{toString}} implementations with a lot of string 
concatenation in a tight loop.
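
For S3A, the same idea could be applied around a filesystem call, e.g. the 
following fragment; the names are illustrative and {{src}}/{{dst}} are assumed 
path strings, so this is not the actual S3AFileSystem code:

{code}
// Temporarily tag the current thread with the operation, restoring the
// original name afterwards so pooled threads are not mislabeled.
String original = Thread.currentThread().getName();
Thread.currentThread().setName(
    original + " [S3A rename " + src + " -> " + dst + "]");
try {
  // ... issue the S3 copy/delete calls ...
} finally {
  Thread.currentThread().setName(original);
}
{code}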

> S3A: Set thread names with more specific information about the call.
> 
>
> Key: HADOOP-13762
> URL: https://issues.apache.org/jira/browse/HADOOP-13762
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>
> Running {{jstack}} on a hung process and reading the stack traces is a 
> helpful way to determine exactly what code in the process is stuck.  This 
> would be even more helpful if we included more descriptive information about 
> the specific file system method call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612425#comment-15612425
 ] 

Hadoop QA commented on HADOOP-13037:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
44s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835602/HADOOP-13037-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 7823266a2f09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ac35ee9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10910/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10910/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 

[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612423#comment-15612423
 ] 

Akira Ajisaka commented on HADOOP-13514:


Thanks [~ste...@apache.org] and [~vinayrpet] for the work.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612212#comment-15612212
 ] 

Hadoop QA commented on HADOOP-13765:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  9s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.fs.sftp.SFTPFileSystem.getHomeDirectory() might ignore 
java.io.IOException  At SFTPFileSystem.java:At SFTPFileSystem.java:[line 644] |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835523/HADOOP-13765.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 827e39c9f245 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0c837db |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10909/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10909/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10909/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console 

[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: HADOOP-13037-003.patch

Fixed Findbugs, Checkstyle and JUnit issues.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://<account>.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS REST 
> interface. The client will access the ADLS store using WebHDFS REST APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Patch Available  (was: Open)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch, HADOOP-13037-003.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://<account>.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS REST 
> interface. The client will access the ADLS store using WebHDFS REST APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Open  (was: Patch Available)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://<account>.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS REST 
> interface. The client will access the ADLS store using WebHDFS REST APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15612105#comment-15612105
 ] 

Hudson commented on HADOOP-13201:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10706 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10706/])
HADOOP-13201. Print the directory paths when ViewFs denies the rename (brahma: 
rev 0c837db8a874079dd5db83a7eef9c4d2b9d0e9ff)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>  InodeTree.ResolveResult resSrc = 
>fsState.resolve(getUriPath(src), false); 
>  if (resSrc.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>  }
>
>  InodeTree.ResolveResult resDst = 
>  fsState.resolve(getUriPath(dst), false);
>  if (resDst.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13201:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.7.4
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, branch-2.8 and branch-2.7. [~tianyin], thanks for 
reporting and contributing, and thanks to [~rakeshr] for rebasing the patch.

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>  InodeTree.ResolveResult resSrc = 
>fsState.resolve(getUriPath(src), false); 
>  if (resSrc.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>  }
>
>  InodeTree.ResolveResult resDst = 
>  fsState.resolve(getUriPath(dst), false);
>  if (resDst.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HADOOP-13765:
--
Status: Patch Available  (was: Open)

> Return HomeDirectory if possible in SFTPFileSystem
> --
>
> Key: HADOOP-13765
> URL: https://issues.apache.org/jira/browse/HADOOP-13765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Yuhao Bi
> Attachments: HADOOP-13765.001.patch
>
>
> In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
> finally block.
> If we get the homeDir Path successfully but then get an IOE in the finally 
> block, we will return a null result.
> Maybe we can simply ignore this IOE and just return the result we have got.
> The related code is shown below.
> {code:title=SFTPFileSystem.java|borderStyle=solid}
>   public Path getHomeDirectory() {
>     ChannelSftp channel = null;
>     try {
>       channel = connect();
>       Path homeDir = new Path(channel.pwd());
>       return homeDir;
>     } catch (Exception ioe) {
>       return null;
>     } finally {
>       try {
>         disconnect(channel);
>       } catch (IOException ioe) {
>         //Maybe we can just ignore this IOE and do not return null here.
>         return null;
>       }
>     }
>   }
> {code}
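
A minimal sketch of the proposed behaviour, assuming the existing connect() and disconnect() helpers (this illustrates the idea, it is not the attached patch). Because a finally block that completes normally preserves the try block's return value, swallowing the disconnect failure lets the already-resolved home directory be returned:

{code}
  public Path getHomeDirectory() {
    ChannelSftp channel = null;
    try {
      channel = connect();
      // Return the remote working directory as the home directory.
      return new Path(channel.pwd());
    } catch (Exception e) {
      return null;
    } finally {
      try {
        disconnect(channel);
      } catch (IOException ignored) {
        // Swallow the disconnect failure: the home directory was already
        // resolved, so returning null from here would needlessly discard it.
      }
    }
  }
{code}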



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611960#comment-15611960
 ] 

Wei-Chiu Chuang commented on HADOOP-13514:
--

I am not sure why I didn't see the OOM in my local env, but thanks [~vinayrpet] and 
[~steve_l] for taking action. I agree we should have enabled all hdfs/yarn 
tests for any change in pom.xml.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611929#comment-15611929
 ] 

Hudson commented on HADOOP-13514:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10705 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10705/])
Revert "Addendum patch for HADOOP-13514 Upgrade maven surefire plugin to 
(stevel: rev 94e77e9115167e41cd9897472159b1eda24230ab)
* (edit) hadoop-project/pom.xml
Revert "HADOOP-13514. Upgrade maven surefire plugin to 2.19.1. (stevel: rev 
b43951750254290b0aaec3641cff3061a3927991)
* (edit) hadoop-project/pom.xml


> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-13514:
-

Reverted all the changes. 

# Let's get a patch which combines all the changes we need: property passdown, 
surefire version and whatever Maven opts we think are needed. 
# Have this submitted as a Jenkins build for HDFS, which may or may not pick it 
up.
# If it does tune MAVEN_OPTS, then the different recommendations for Yetus, the 
dev dockerfile and BUILDING.txt need to be consistent.
# Apply the patch to trunk first, before cherry-picking, and make sure all is 
well there for a day or two.


> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611831#comment-15611831
 ] 

Steve Loughran commented on HADOOP-13514:
-

Now, spawned process memory requirements are set in {{hadoop-project/pom.xml}}:

{code}
-Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError
{code}

That is: 2GB and a heap dump. 

* If the OOM is happening in the spawned process, then that is the line to 
update; we'll need separate patches for branch-2 & maven
* if the OOM is happening in the maven process itself, then we'd need to change 
the MAVEN_OPTS.

Which process do we believe is at fault here? 

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611810#comment-15611810
 ] 

Steve Loughran commented on HADOOP-13514:
-

We're using Yetus for the run, though; I'm trying to work out where that 
MAVEN_OPTS variable is set.

Assuming it is 
patchprocess/yetus-0.3.0/lib/precommit/test-patch-docker/launch-test-patch.sh 
(which is where I can find that env var being set in my IDE),

I'll file a patch which bumps things to 1.25 GB (1280M) and hope the problem goes away.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611805#comment-15611805
 ] 

Rakesh R commented on HADOOP-13201:
---

Thank you [~brahmareddy]

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>   InodeTree.ResolveResult<AbstractFileSystem> resSrc =
>       fsState.resolve(getUriPath(src), false);
>   if (resSrc.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>   }
>
>   InodeTree.ResolveResult<AbstractFileSystem> resDst =
>       fsState.resolve(getUriPath(dst), false);
>   if (resDst.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611777#comment-15611777
 ] 

Brahma Reddy Battula commented on HADOOP-13201:
---

[~rakeshr], thanks for rebasing the patch. +1, will commit soon.

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>   InodeTree.ResolveResult<AbstractFileSystem> resSrc =
>       fsState.resolve(getUriPath(src), false);
>   if (resSrc.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>   }
>
>   InodeTree.ResolveResult<AbstractFileSystem> resDst =
>       fsState.resolve(getUriPath(dst), false);
>   if (resDst.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611750#comment-15611750
 ] 

Rakesh R commented on HADOOP-10829:
---

Thanks [~lmccay],
bq. I am inclined to prefer the approach used in KeyProviderFactory
IIUC, you are suggesting to use the following approach:
{code}
+  // Iterate through the serviceLoader to avoid lazy loading.
+  // Lazy loading would require synchronization in concurrent use cases.
+  static {
+    Iterator<CredentialProviderFactory> iterServices = serviceLoader.iterator();
+    while (iterServices.hasNext()) {
+      iterServices.next();
+    }
+  }
+
{code}
I had uploaded a similar approach in the recent patch 
[HADOOP-10829.003.patch|https://issues.apache.org/jira/secure/attachment/12835546/HADOOP-10829.003.patch],
 is that ok?

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework does lazy initialization of services, which 
> makes it thread-unsafe. If it is accessed from multiple threads, it is better 
> to synchronize the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 
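
For comparison, the synchronization route mentioned in the description would look roughly like this (a sketch only; the method name is mine, and the attached patches take the eager-loading route instead):

{code}
  // Sketch: serialize iteration over the lazy ServiceLoader so that
  // concurrent callers cannot corrupt its internal state.
  private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
      ServiceLoader.load(CredentialProviderFactory.class);

  private static List<CredentialProviderFactory> loadedFactories() {
    List<CredentialProviderFactory> factories =
        new ArrayList<CredentialProviderFactory>();
    synchronized (serviceLoader) {
      for (CredentialProviderFactory factory : serviceLoader) {
        factories.add(factory);
      }
    }
    return factories;
  }
{code}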



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611744#comment-15611744
 ] 

Ewan Higgs edited comment on HADOOP-13514 at 10/27/16 12:33 PM:


[A stack overflow 
question|http://stackoverflow.com/questions/13382421/running-cucumber-tests-fails-with-outofmemoryerror-java-heap-space-exception#13400465]
 that [~ste...@apache.org] is talking about has the same question about 
surefire memory issues.

It was fixed using: 

{quote} Go into the Jenkins settings and add the environment variable 
MAVEN_OPTS to -Xmx512m -XX:MaxPermSize=256m. It looks like after your tests are 
finished it's trying to parse the results but the XML file is too large.{quote}


was (Author: ehiggs):
[A stack overflow question that [~ste...@apache.org] is talking 
about|http://stackoverflow.com/questions/13382421/running-cucumber-tests-fails-with-outofmemoryerror-java-heap-space-exception#13400465]
 has the same question about surefire memory issues.

It was fixed using: 

{quote} Go into the Jenkins settings and add the environment variable 
MAVEN_OPTS to -Xmx512m -XX:MaxPermSize=256m. It looks like after your tests are 
finished it's trying to parse the results but the XML file is too large.{quote}

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611744#comment-15611744
 ] 

Ewan Higgs commented on HADOOP-13514:
-

[A stack overflow question that [~ste...@apache.org] is talking 
about|http://stackoverflow.com/questions/13382421/running-cucumber-tests-fails-with-outofmemoryerror-java-heap-space-exception#13400465]
 has the same question about surefire memory issues.

It was fixed using: 

{quote} Go into the Jenkins settings and add the environment variable 
MAVEN_OPTS to -Xmx512m -XX:MaxPermSize=256m. It looks like after your tests are 
finished it's trying to parse the results but the XML file is too large.{quote}

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611719#comment-15611719
 ] 

Steve Loughran commented on HADOOP-13514:
-

OK, I see it in 
[https://builds.apache.org/job/PreCommit-HDFS-Build/17313/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]

{code}
Exception in thread "Thread-1246" Exception in thread "Thread-1267" 
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at java.io.BufferedReader.readLine(BufferedReader.java:356)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at 
org.apache.maven.surefire.shade.org.apache.maven.shared.utils.cli.StreamPumper.run(StreamPumper.java:76)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Thread-1249" java.lang.OutOfMemoryError: GC overhead limit 
exceeded
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at java.io.BufferedReader.readLine(BufferedReader.java:356)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at 
org.apache.maven.surefire.shade.org.apache.maven.shared.utils.cli.StreamPumper.run(StreamPumper.java:76)
Exception in thread "Thread-1264" java.lang.OutOfMemoryError: GC overhead limit 
exceeded
Exception in thread "ping-timer-10sec" java.lang.OutOfMemoryError: GC overhead 
limit exceeded
{code}

Maybe it's asking for even more memory than usual. A search for the error 
online brings up a StackOverflow topic placing this near the Xerces code; 
remember, to generate the XML result, surefire has to build up the entire DOM 
of the output. If we've got a test which generates lots of console output, it 
may already be close to the edge in memory use, and the surefire update may be 
tipping it over the edge.

# I'm going to see if I can tweak the memory consumption of the HDFS preruns 
... increase that, rerun the failed build. If the problem goes away, then roll 
out the change to the (many) other builds. If it doesn't, roll back.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611684#comment-15611684
 ] 

Steve Loughran commented on HADOOP-13514:
-

I'm looking @ the precommit trace now. The most recent one, HDFS-11061, went 
through. If they do start failing again, then we'll have to look @ rolling this 
back. 

Can you link to the JIRAs of the failed builds so I don't have to chase this 
down? I'm seeing https://builds.apache.org/job/PreCommit-HDFS-Build/17318/console 
timing out, not OOM-ing.


> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2016-10-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611587#comment-15611587
 ] 

Larry McCay commented on HADOOP-10829:
--

[~rakeshr] - I am inclined to prefer the approach used in KeyProviderFactory 
instead, which avoids the lazy loading by iterating over the providers in a 
static initialization block. This avoids the need for synchronization during 
each call to getProviders for every client in the JVM.

{code}
  public abstract KeyProvider createProvider(URI providerName,
                                             Configuration conf
                                             ) throws IOException;

  private static final ServiceLoader<KeyProviderFactory> serviceLoader =
      ServiceLoader.load(KeyProviderFactory.class,
          KeyProviderFactory.class.getClassLoader());

  // Iterate through the serviceLoader to avoid lazy loading.
  // Lazy loading would require synchronization in concurrent use cases.
  static {
    Iterator<KeyProviderFactory> iterServices = serviceLoader.iterator();
    while (iterServices.hasNext()) {
      iterServices.next();
    }
  }

  public static List<KeyProvider> getProviders(Configuration conf
                                               ) throws IOException {
    List<KeyProvider> result = new ArrayList<KeyProvider>();
    for(String path: conf.getStringCollection(KEY_PROVIDER_PATH)) {
      try {
        URI uri = new URI(path);
        KeyProvider kp = get(uri, conf);
        if (kp != null) {
          result.add(kp);
        } else {
          throw new IOException("No KeyProviderFactory for " + uri + " in " +
              KEY_PROVIDER_PATH);
        }
      } catch (URISyntaxException error) {
        throw new IOException("Bad configuration of " + KEY_PROVIDER_PATH +
            " at " + path, error);
      }
    }
    return result;
  }
{code}

What do you think?
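
A side note on why the static block works: class initialization is serialized by the JVM, so by the time any caller can touch {{serviceLoader}} the iteration has already instantiated and cached every factory, and later read-only iteration needs no extra locking. A sketch of that idiom (comments mine):

{code}
  // Force eager loading: the JVM guarantees this block runs exactly once,
  // before any other thread can use the class, so the ServiceLoader's
  // internal cache is fully populated up front.
  static {
    for (KeyProviderFactory factory : serviceLoader) {
      // Touching each element is enough; ServiceLoader caches the instances.
    }
  }
{code}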

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework does lazy initialization of services, which 
> makes it thread-unsafe. If it is accessed from multiple threads, it is better 
> to synchronize the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611578#comment-15611578
 ] 

Hadoop QA commented on HADOOP-10829:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-10829 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835546/HADOOP-10829.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0097c104cba3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4e403de |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10908/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10908/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> 

[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611568#comment-15611568
 ] 

Rakesh R commented on HADOOP-13201:
---

cc/[~vinayrpet]

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>   InodeTree.ResolveResult<AbstractFileSystem> resSrc =
>       fsState.resolve(getUriPath(src), false);
>   if (resSrc.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>   }
>
>   InodeTree.ResolveResult<AbstractFileSystem> resDst =
>       fsState.resolve(getUriPath(dst), false);
>   if (resDst.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611551#comment-15611551
 ] 

Vinayakumar B commented on HADOOP-13514:


Many recent HDFS precommit builds are getting OOMs. Does this have anything to 
do with this update?

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611496#comment-15611496
 ] 

Rakesh R commented on HADOOP-13201:
---

Kindly review the patch. Thanks!

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>   InodeTree.ResolveResult<AbstractFileSystem> resSrc =
>       fsState.resolve(getUriPath(src), false);
>   if (resSrc.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>   }
>
>   InodeTree.ResolveResult<AbstractFileSystem> resDst =
>       fsState.resolve(getUriPath(dst), false);
>   if (resDst.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HADOOP-13201:
-

Assignee: Rakesh R

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Rakesh R
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike {{delete()}}, which reports the internal dir path, rename does not. 
> The attached patch appends the directory path to the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>   InodeTree.ResolveResult<AbstractFileSystem> resSrc =
>       fsState.resolve(getUriPath(src), false);
>   if (resSrc.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>   }
>
>   InodeTree.ResolveResult<AbstractFileSystem> resDst =
>       fsState.resolve(getUriPath(dst), false);
>   if (resDst.isInternalDir()) {
>     throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611480#comment-15611480
 ] 

Rakesh R commented on HADOOP-10829:
---

Thanks [~benoyantony] for the fix. 

+1 (non-binding). I've rebased the patch on the latest trunk code; could 
someone please help review the changes? Thanks!

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework does lazy initialization of services, which 
> makes it thread-unsafe. If it is accessed from multiple threads, it is better 
> to synchronize the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2016-10-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HADOOP-10829:
--
Attachment: HADOOP-10829.003.patch

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.003.patch, HADOOP-10829.patch, 
> HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework does lazy initialization of services, which 
> makes it thread-unsafe. If it is accessed from multiple threads, it is better 
> to synchronize the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611387#comment-15611387
 ] 

Ewan Higgs commented on HADOOP-13514:
-

Thanks [~ajisakaa], [~jojochuang].

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests, bringing the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line, but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HADOOP-13765:
--
Description: 
In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
finally block.
If we get the homeDir Path successfully but then get an IOE in the finally 
block, we will return a null result.
Maybe we can simply ignore this IOE and just return the result we have got.
The related code is shown below.
{code:title=SFTPFileSystem.java|borderStyle=solid}
  public Path getHomeDirectory() {
    ChannelSftp channel = null;
    try {
      channel = connect();
      Path homeDir = new Path(channel.pwd());
      return homeDir;
    } catch (Exception ioe) {
      return null;
    } finally {
      try {
        disconnect(channel);
      } catch (IOException ioe) {
        //Maybe we can just ignore this IOE and do not return null here.
        return null;
      }
    }
  }
{code}

  was:
In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
finally block.
If we get the homeDir Path successfully but then get an IOE in the finally 
block, we will return a null result.
Maybe we can simply ignore this IOE and just return the result we have got.
The related code is shown below.
{code:title=SFTPFileSystem.java|borderStyle=solid}
  public Path getHomeDirectory() {
    ChannelSftp channel = null;
    try {
      channel = connect();
      Path homeDir = new Path(channel.pwd());
      return homeDir;
    } catch (Exception ioe) {
      return null;
    } finally {
      try {
        disconnect(channel);
      } catch (IOException ioe) {
        //May be we can just ignore this IOE and do not return null here.
        return null;
      }
    }
  }
{code}


> Return HomeDirectory if possible in SFTPFileSystem
> --
>
> Key: HADOOP-13765
> URL: https://issues.apache.org/jira/browse/HADOOP-13765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Yuhao Bi
> Attachments: HADOOP-13765.001.patch
>
>
> In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
> finally block.
> If we get the homeDir Path successfully but then get an IOE in the finally 
> block, we will return a null result.
> Maybe we can simply ignore this IOE and just return the result we have got.
> The related code is shown below.
> {code:title=SFTPFileSystem.java|borderStyle=solid}
>   public Path getHomeDirectory() {
>     ChannelSftp channel = null;
>     try {
>       channel = connect();
>       Path homeDir = new Path(channel.pwd());
>       return homeDir;
>     } catch (Exception ioe) {
>       return null;
>     } finally {
>       try {
>         disconnect(channel);
>       } catch (IOException ioe) {
>         //Maybe we can just ignore this IOE and do not return null here.
>         return null;
>       }
>     }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611275#comment-15611275
 ] 

Hadoop QA commented on HADOOP-13201:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
48s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835517/HADOOP-13201-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42664ce607db 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6cc7c43 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10907/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10907/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HADOOP-13201-001.patch, 

[jira] [Updated] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HADOOP-13765:
--
Attachment: HADOOP-13765.001.patch

> Return HomeDirectory if possible in SFTPFileSystem
> --
>
> Key: HADOOP-13765
> URL: https://issues.apache.org/jira/browse/HADOOP-13765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Yuhao Bi
> Attachments: HADOOP-13765.001.patch
>
>
> In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
> finally block.
> If we get the homeDir Path successfully but then hit an IOE in the finally 
> block, we will return a null result.
> Maybe we can simply ignore that IOE and just return the result we already 
> have. The related code is shown below.
> {code:title=SFTPFileSystem.java|borderStyle=solid}
>   public Path getHomeDirectory() {
> ChannelSftp channel = null;
> try {
>   channel = connect();
>   Path homeDir = new Path(channel.pwd());
>   return homeDir;
> } catch (Exception ioe) {
>   return null;
> } finally {
>   try {
> disconnect(channel);
>   } catch (IOException ioe) {
> // Maybe we can just ignore this IOE and not return null here.
> return null;
>   }
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611155#comment-15611155
 ] 

Hadoop QA commented on HADOOP-13037:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 23s{color} 
| {color:red} root generated 1 new + 703 unchanged - 0 fixed = 704 total (was 
703) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 4 new + 0 unchanged - 
0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-tools/hadoop-azure-datalake generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 25s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 57s{color} 
| {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure-datalake |
|  |  org.apache.hadoop.fs.adl.AdlPermission doesn't override 
org.apache.hadoop.fs.permission.FsPermission.equals(Object)  At 
AdlPermission.java:At AdlPermission.java:[line 1] |
| Failed junit tests | hadoop.fs.adl.live.TestAdlSupportedCharsetInPath |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835494/HADOOP-13037-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 68dfa0aebd7a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611154#comment-15611154
 ] 

Rakesh R commented on HADOOP-13201:
---

Thanks [~tianyin] for reporting this. I've modified the exception message and 
placed {{src}} and {{dest}} properly; please review.

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike the {{delete()}} which notify the internal dir path, rename does not. 
> The attached patch appends the directory path on the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>  InodeTree.ResolveResult resSrc = 
>fsState.resolve(getUriPath(src), false); 
>  if (resSrc.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>  }
>
>  InodeTree.ResolveResult resDst = 
>  fsState.resolve(getUriPath(dst), false);
>  if (resDst.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>  }
> {code}
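
As an editorial aside, here is a sketch of the source-side guard after the 
change; a separator is added before the path for readability (the diff above 
concatenates it directly), and the {{dst}} check is analogous:

{code:title=ViewFs.java (sketch)|borderStyle=solid}
    InodeTree.ResolveResult resSrc =
        fsState.resolve(getUriPath(src), false);
    if (resSrc.isInternalDir()) {
      // Name the offending path so the caller can locate the mount point.
      throw new AccessControlException(
          "Cannot Rename within internal dirs of mount table: it is readOnly: "
              + src);
    }
{code}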



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs

2016-10-27 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HADOOP-13201:
--
Attachment: HADOOP-13201-001.patch

> Print the directory paths when ViewFs denies the rename operation on internal 
> dirs
> --
>
> Key: HADOOP-13201
> URL: https://issues.apache.org/jira/browse/HADOOP-13201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HADOOP-13201-001.patch, HADOOP-13201.000.patch
>
>
> With ViewFs, the delete and rename operations on internal dirs are denied by 
> throwing {{AccessControlException}}. 
> Unlike the {{delete()}} which notify the internal dir path, rename does not. 
> The attached patch appends the directory path on the logged exception.
> {code:title=ViewFs.java|borderStyle=solid}
>  InodeTree.ResolveResult resSrc = 
>fsState.resolve(getUriPath(src), false); 
>  if (resSrc.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + src);
>  }
>
>  InodeTree.ResolveResult resDst = 
>  fsState.resolve(getUriPath(dst), false);
>  if (resDst.isInternalDir()) {
>throw new AccessControlException(
> -  "Cannot Rename within internal dirs of mount table: it is 
> readOnly");
> +  "Cannot Rename within internal dirs of mount table: it is readOnly"
> +  + dst);
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1567#comment-1567
 ] 

Steve Loughran commented on HADOOP-13017:
-

thanks for the review

> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.
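
As a sketch of the fast exit under discussion (a standalone wrapper for 
illustration only; the actual patch touches each stream implementation 
directly):

{code:title=ZeroLengthReadGuard.java (sketch)|borderStyle=solid}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Illustrative wrapper enforcing the "len == 0 returns 0" contract. */
public class ZeroLengthReadGuard extends FilterInputStream {
  public ZeroLengthReadGuard(InputStream in) {
    super(in);
  }

  @Override
  public int read(byte[] buffer, int offset, int length) throws IOException {
    if (length == 0) {
      // java.io.InputStream: if len is zero, no bytes are read and 0 is
      // returned, even when the underlying stream is at EOF.
      return 0;
    }
    return in.read(buffer, offset, length);
  }
}
{code}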



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuhao Bi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuhao Bi updated HADOOP-13765:
--
Description: 
In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
finally block.
If we get the homeDir Path successfully but then hit an IOE in the finally 
block, we will return a null result.
Maybe we can simply ignore that IOE and just return the result we already 
have. The related code is shown below.
{code:title=SFTPFileSystem.java|borderStyle=solid}
  public Path getHomeDirectory() {
ChannelSftp channel = null;
try {
  channel = connect();
  Path homeDir = new Path(channel.pwd());
  return homeDir;
} catch (Exception ioe) {
  return null;
} finally {
  try {
disconnect(channel);
  } catch (IOException ioe) {
// Maybe we can just ignore this IOE and not return null here.
return null;
  }
}
  }
{code}

  was:
In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in finally 
block.
If we get the homeDir Path successfully but got an IOE in the finally block we 
will return the null result.
Maybe we can simply ignore this IOE and just return the result we have 
got.Related codes are shown below.
{code:title=SFTPFileSystem.java|borderStyle=solid}
  public Path getHomeDirectory() {
ChannelSftp channel = null;
try {
  channel = connect();
  Path homeDir = new Path(channel.pwd());
  return homeDir;
} catch (Exception ioe) {
  return null;
} finally {
  try {
disconnect(channel);
  } catch (IOException ioe) {
//May be we can just ignore this IOE
return null;
  }
}
  }
{code}


> Return HomeDirectory if possible in SFTPFileSystem
> --
>
> Key: HADOOP-13765
> URL: https://issues.apache.org/jira/browse/HADOOP-13765
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Yuhao Bi
>
> In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
> finally block.
> If we get the homeDir Path successfully but then hit an IOE in the finally 
> block, we will return a null result.
> Maybe we can simply ignore that IOE and just return the result we already 
> have. The related code is shown below.
> {code:title=SFTPFileSystem.java|borderStyle=solid}
>   public Path getHomeDirectory() {
> ChannelSftp channel = null;
> try {
>   channel = connect();
>   Path homeDir = new Path(channel.pwd());
>   return homeDir;
> } catch (Exception ioe) {
>   return null;
> } finally {
>   try {
> disconnect(channel);
>   } catch (IOException ioe) {
> // Maybe we can just ignore this IOE and not return null here.
> return null;
>   }
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611130#comment-15611130
 ] 

Steve Loughran commented on HADOOP-13514:
-

This has highlighted a problem, and something we need to do more of when 
playing with maven: test all the hdfs and yarn builds before an upgrade. I 
propose that from now on anything going near the maven plugins also needs to 
be submitted to the hdfs and yarn builds.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}
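
For illustration, updating the default would amount to a one-line version bump 
in the root pom. A sketch; the property name below is inferred from the 
{{-Dmaven-surefire-plugin.version}} override quoted above and may not match 
the real build exactly:

{code:title=pom.xml (sketch)|borderStyle=solid}
<properties>
  <!-- Referenced by the surefire plugin declaration; pinning it here makes
       the command-line override unnecessary. -->
  <maven-surefire-plugin.version>2.19.1</maven-surefire-plugin.version>
</properties>
{code}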



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13765) Return HomeDirectory if possible in SFTPFileSystem

2016-10-27 Thread Yuhao Bi (JIRA)
Yuhao Bi created HADOOP-13765:
-

 Summary: Return HomeDirectory if possible in SFTPFileSystem
 Key: HADOOP-13765
 URL: https://issues.apache.org/jira/browse/HADOOP-13765
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Yuhao Bi


In SFTPFileSystem#getHomeDirectory(), we disconnect the ChannelSftp in the 
finally block.
If we get the homeDir Path successfully but then hit an IOE in the finally 
block, we will return a null result.
Maybe we can simply ignore that IOE and just return the result we already 
have. The related code is shown below.
{code:title=SFTPFileSystem.java|borderStyle=solid}
  public Path getHomeDirectory() {
ChannelSftp channel = null;
try {
  channel = connect();
  Path homeDir = new Path(channel.pwd());
  return homeDir;
} catch (Exception ioe) {
  return null;
} finally {
  try {
disconnect(channel);
  } catch (IOException ioe) {
// Maybe we can just ignore this IOE.
return null;
  }
}
  }
{code}
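
For clarity, here is a minimal sketch of the proposed behaviour (names as in 
the snippet above; an illustration, not the final patch). Once 
{{channel.pwd()}} has succeeded, a failure while disconnecting no longer 
discards the result:

{code:title=SFTPFileSystem.java (proposed, sketch)|borderStyle=solid}
  public Path getHomeDirectory() {
    ChannelSftp channel = null;
    try {
      channel = connect();
      // The result is captured here; the return value survives the finally
      // block as long as the finally block itself does not return or throw.
      return new Path(channel.pwd());
    } catch (Exception e) {
      return null;
    } finally {
      try {
        disconnect(channel);
      } catch (IOException ioe) {
        // Ignored: the home directory was already resolved successfully.
      }
    }
  }
{code}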



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13762) S3A: Set thread names with more specific information about the call.

2016-10-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611106#comment-15611106
 ] 

Steve Loughran commented on HADOOP-13762:
-

Can you change thread names on the fly? What's the cost?

The username comes with the FS instance; we can include both at construction 
time. I hadn't thought about dynamic ops.
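
For reference, a thread name can be changed at any time with 
{{Thread.setName()}}, and the call itself is relatively cheap, so the usual 
pattern is save/restore around the operation. A sketch under those 
assumptions, not the S3A code; all names are illustrative:

{code:title=NamedOperation.java (sketch)|borderStyle=solid}
public final class NamedOperation {
  /** Runs op with a descriptive thread name, restoring the old name after. */
  public static void run(String description, Runnable op) {
    Thread t = Thread.currentThread();
    String oldName = t.getName();
    t.setName(oldName + ": " + description);
    try {
      op.run();
    } finally {
      t.setName(oldName);  // restore so pooled threads keep stable base names
    }
  }

  public static void main(String[] args) {
    run("rename(/a, /b)", () -> System.out.println(
        "working as " + Thread.currentThread().getName()));
  }
}
{code}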

> S3A: Set thread names with more specific information about the call.
> 
>
> Key: HADOOP-13762
> URL: https://issues.apache.org/jira/browse/HADOOP-13762
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>
> Running {{jstack}} on a hung process and reading the stack traces is a 
> helpful way to determine exactly what code in the process is stuck.  This 
> would be even more helpful if we included more descriptive information about 
> the specific file system method call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13234) Get random port by new ServerSocket(0).getLocalPort() in ServerSocketUtil#getPort

2016-10-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610986#comment-15610986
 ] 

Brahma Reddy Battula commented on HADOOP-13234:
---

OK. The only difference might be that {{new ServerSocket(0).getLocalPort()}} 
returns a free ephemeral port (32768-61000 on Linux, 49152-65535 on Windows), 
while {{getPort(p, retries)}} retries the range between p and 65535 (so the 
difference is the retry behaviour).

So can we close this issue? And, to address the following failure mentioned by 
[~xyao], should we increase the retry count?
{noformat}
java.io.IOException: Port is already in use; giving up after 10 times.
at 
org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
at 
org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:809)
{noformat}
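
For reference, a sketch of the idiom being compared against (try-with-resources; 
note that the port can be grabbed by another process between close and reuse, 
which is where the retry question comes from):

{code:title=FreePort.java (sketch)|borderStyle=solid}
import java.io.IOException;
import java.net.ServerSocket;

public final class FreePort {
  /** Asks the OS for an ephemeral port; racy if reused after close. */
  public static int get() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println("free port: " + FreePort.get());
  }
}
{code}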

> Get random port by new ServerSocket(0).getLocalPort() in 
> ServerSocketUtil#getPort
> -
>
> Key: HADOOP-13234
> URL: https://issues.apache.org/jira/browse/HADOOP-13234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13234-002.patch, HADOOP-13234.patch
>
>
> As per [~iwasakims] comment from 
> [here|https://issues.apache.org/jira/browse/HDFS-10367?focusedCommentId=15275604=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15275604]
> we can get available random port by {{new ServerSocket(0).getLocalPort()}} 
> and it's more portable. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610947#comment-15610947
 ] 

Hudson commented on HADOOP-13017:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10696 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10696/])
HADOOP-13017. Implementations of InputStream.read(buffer, offset, bytes) 
(iwasakims: rev 0bdd263d82a4510f16df49238d57c9f78ac28ae7)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (edit) 
hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/HttpInputStreamWithRelease.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LimitInputStream.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java
* (edit) 
hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeInputStream.java


> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-13017:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed. Thanks, [~ste...@apache.org].

> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: HADOOP-13037-002.patch

Patch compilation is failing to resolve the SDK dependency with the error below:

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hadoop-tools-dist: Failed to resolve dependencies for one or more 
projects in the reactor. Reason: No versions are present in the repository for 
the artifact with a range [2.0,)
[ERROR] com.microsoft.azure:azure-data-lake-store-sdk:jar:null
[ERROR] 
[ERROR] from the specified remote repositories:
[ERROR] apache.snapshots.https 
(https://repository.apache.org/content/repositories/snapshots, releases=true, 
snapshots=true),
[ERROR] repository.jboss.org 
(http://repository.jboss.org/nexus/content/groups/public/, releases=true, 
snapshots=false),
[ERROR] central (http://repo.maven.apache.org/maven2, releases=true, 
snapshots=false),
[ERROR] snapshots-repo 
(https://oss.sonatype.org/content/repositories/snapshots, releases=false, 
snapshots=true)
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hadoop:hadoop-tools-dist:jar:3.0.0-alpha2-SNAPSHOT
[ERROR] 2) org.apache.hadoop:hadoop-azure-datalake:jar:3.0.0-alpha2-SNAPSHOT
[ERROR] -> [Help 1]
[ERROR] 
{code}

However, with the test-patch script the patch does compile.

Instead of specifying a version range, I added a fixed version for the SDK. 
Resubmitting the patch.
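
For illustration, the fix amounts to replacing the open-ended range with a 
pinned release in the module's pom.xml; the coordinates come from the error 
output above, but the version number below is hypothetical:

{code:title=hadoop-tools/hadoop-azure-datalake/pom.xml (sketch)|borderStyle=solid}
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-data-lake-store-sdk</artifactId>
  <!-- was <version>[2.0,)</version>, which the reactor build could not
       resolve; 2.0.4 below is a hypothetical pinned version -->
  <version>2.0.4</version>
</dependency>
{code}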

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Patch Available  (was: Open)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: (was: HADOOP-13037-002.patch)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-27 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Open  (was: Patch Available)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13017) Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-10-27 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610911#comment-15610911
 ] 

Masatake Iwasaki commented on HADOOP-13017:
---

+1, committing this.

> Implementations of InputStream.read(buffer, offset, bytes) to exit 0 if 
> bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13017-002.patch, HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{InputStream.read(buffer, offset, bytes)}} 
> and, where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2016-10-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610899#comment-15610899
 ] 

Hadoop QA commented on HADOOP-1381:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 51s{color} 
| {color:red} root generated 2 new + 700 unchanged - 3 fixed = 702 total (was 
703) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 5 new + 266 unchanged - 22 fixed = 271 total (was 288) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-1381 |
| GITHUB PR | https://github.com/apache/hadoop/pull/147 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 68fda58a8023 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9f32364 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10905/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10905/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10905/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10905/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> The distance between sync blocks in SequenceFiles should be configurable 
> rather than hard coded to 2000 bytes
> -
>
>